
Digital Foundry: [4K] InFamous First Light - PS4 Pro Upgrade Analysed!

Fliesen

Member
Your numbers are right, but I'd argue with how they're presented. By giving different values for 1800c and "what it's checkerboarded to" you're falling into the semantic trap that some rendered pixels are "more real" than others.

I think it's more accurate to say that 1800c has exactly the same number of pixels as 1800p. (That's why it's called that!) The difference is that half the 1800c pixels have the potential to differ from the ideal render. (In practice, not all of them will; and for most, the difference will be imperceptible.)

Well, while you're fighting the holy semantic crusade (and I'm with you, don't get me wrong) over whether or not to call it "scaling", there are still people who believe 1800p checkerboarded means it takes an 1800p image and 'checkerboards' it up to full 4K.
That's the far more common misconception, and the more technical one ;)
"In any specific mode, the PS4 renders (via checkerboarding) one, and only one, internal resolution."

While this isn't - strictly speaking - wrong, the problem with those numbers is that the absolute pixel counts don't translate to actual graphics-compute costs when using checkerboard.
Which isn't an issue in and of itself - but it promotes a (sadly false) narrative that CB exactly halves the cost of the native resolution you're targeting. One of the reasons quality is/can be much better than upscaling is that a fair portion of the pipeline operates at the native target resolution (after or during reconstruction) - anti-aliasing, for instance.

Well, I didn't make any implications about "GPU costs to render a frame". The poster I quoted simply wanted to know the pixel count, and I thought one of the most meaningful pieces of information would be the fact that EACH of the 'checkerboarded' frames (1600x1800 pixels) already contains more 'unique' pixels, and thereby image information, than a full frame of 1920x1080.
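For anyone who wants to check the arithmetic, here is a quick back-of-the-envelope comparison using the resolutions discussed above (purely illustrative):

```python
# Quick illustrative pixel-count check for the resolutions discussed above.
full_1800p    = 3200 * 1800          # 5,760,000 pixels in the full 1800p grid
cb_half_frame = 1600 * 1800          # 2,880,000 freshly shaded pixels per checkerboard frame
full_1080p    = 1920 * 1080          # 2,073,600 pixels in a native 1080p frame

print(cb_half_frame, full_1080p)     # 2880000 2073600
print(cb_half_frame > full_1080p)    # True: each checkerboard frame holds more unique pixels than 1080p
```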
 

MaLDo

Member
Upscaling doesn't add more data to the image. Checkerboard rendering does.

I do think it's important to distinguish between checkerboard and native resolutions but I also think it's important to distinguish between upscaling and checkerboard rendering.

I found this quote

"upscaling technology used to interpolate 1080p content to 2160p is positively Asgardian in its brilliance. Rather than rely on linear scaling, top chips dynamically address image databases to interpolate data. The Panasonic TX-L65WT600, for example, employs a database of 120,000 textures used to guessitmate detail."

Is this an upscale method or not?
 

onQ123

Member
If only there was a simple word that could be used to explain upping a rendering resolution to a higher number than what is being sampled in the traditional way.
 
Actually it does. Although it doesn't use a target resolution the way we're used to, since the resolution used has "holes" in it. A checkerboard implementation will shade only half of the pixels of the final image. The other pixels are "made up" by the checkerboard implementation.

Everything in graphics processing is made up via some algorithmic shortcut. The only thing that differs is the method. In any case, "made-up pixels" isn't really involved in how scaling is defined.

Transforming a 1920x2160 pixel grid into a 3840x2160 pixel grid is an upscale. How you calculate those new 4 million pixels is what determines whether it's one kind of upscale or another, right?

No, because you aren't transforming the original array of pixels into anything. They stay where they are, and you add an equal number of pixels to complete the final image, rendered with a method that does not involve sampling the opaque geometry of the scene.
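For anyone trying to picture the "holes", here is a minimal toy sketch of a checkerboard coverage mask (purely illustrative; real implementations often work on 2x2 pixel quads rather than single pixels and differ in layout):

```python
import numpy as np

# Toy checkerboard coverage mask for a 3840x2160 target:
# True = pixel freshly shaded this frame, False = "hole" to be reconstructed.
h, w = 2160, 3840
ys, xs = np.mgrid[0:h, 0:w]
shaded_this_frame = (xs + ys) % 2 == 0   # even-parity pixels this frame
shaded_next_frame = ~shaded_this_frame   # the alternate set the following frame

print(shaded_this_frame.sum())           # 4147200, exactly half of 8294400
```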

I'd argue against this. From what I've gathered, checkerboarding starts with half the pixels of the final framebuffer, albeit in a checkerboard format, and then uses maths and magic to calculate what the final frame is meant to be. Even though that process is different from typical upscaling, it still fits the criteria of upscaling one frame to a final frame. Just because it's done prior to when upscaling would normally take place doesn't mean it's not using a form of upscaling to calculate that image.

By the same definition, Ryse wouldn't be upscaled, as it uses a custom technique prior to the final framebuffer, although we all agreed it was upscaled.

Words have definitions. The word scaling involves a specific act that never occurs in checkerboard rendering.

Using your definition of the term upscale, it still fits. It grows from ½N pixels to N pixels. It just doesn't grow outward but inward, to fill the holes. You're too fixated on the width and height of a frame.
It starts at 3840x2160 as much as an interlaced image starts at 3840x2160. It creates a picture with holes in it. But unlike interlacing, it actually fills those holes, not too dissimilar to how other upscaling techniques fill empty spaces in an image; it just does it in a more advanced way, by reusing older data.

Nothing grows anywhere. The holes in the image are filled algorithmically using a variety of high-confidence data. The original rasterized pixels never scale at all.

I can't see your logic, sorry.

In my opinion, the native half-resolution frame rendered in the old-fashioned way is upscaled to a final doubled-resolution frame. What you are describing is how the holes in the Gruyère-like natively rendered frame are filled. Filled, completed, complemented, upscaled....

If we both agree that the old-style rendered image has half the resolution, I can't see why you'd deny that this image later grows in size, so it's upscaled.

It's not a question of logic or opinion. Words have meaning and many of you just insist on misusing this one for some reason... Not sure what cheese has to do with it, either!
 

00ich

Member
Words have definitions. The word scaling involves a specific act that never occurs in checkerboard rendering.
I'd be careful with that statement.

Checkerboard rendering as presented by the Rainbow 6 Siege Team is a lot more advanced than a fixed rule like bi-linear filtering (aka "scaling").
To interpolate between a blank checkerboard pixel and its neighbors, it looks at the direction the camera moves, the color values, and the depth difference to its neighbors. It also looks at the previous frame's values for all three measures.
It then weights these inputs and determines the final color.

For comparison, bi-linear filtering looks only at the neighboring pixel colors in the current frame. So checkerboard rendering contains a bit of something that is technically at least very close to scaling.
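As a very rough sketch of that idea (the weights, thresholds and names below are invented for illustration; the actual Siege implementation is far more involved):

```python
def reconstruct_missing_pixel(neighbors, reprojected_prev, depth_diff, motion_len):
    """Toy sketch of a checkerboard fill for one missing pixel.

    neighbors        - list of (r, g, b) colors of the shaded neighbors this frame
    reprojected_prev - (r, g, b) color fetched from the previous frame via the motion vector
    depth_diff       - how much the neighbors' depths disagree with this pixel's depth
    motion_len       - length of the motion vector (how fast things moved)

    The weighting below is arbitrary; a real implementation derives it from the
    camera motion, color and depth comparisons described in the presentation.
    """
    spatial = tuple(sum(c[i] for c in neighbors) / len(neighbors) for i in range(3))

    # Trust the previous frame more when motion is small and depth agrees,
    # trust the spatial average of the current-frame neighbors otherwise.
    temporal_weight = max(0.0, 1.0 - motion_len * 0.5 - depth_diff * 2.0)
    temporal_weight = min(temporal_weight, 0.9)

    return tuple(temporal_weight * reprojected_prev[i] +
                 (1.0 - temporal_weight) * spatial[i] for i in range(3))
```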
 

00ich

Member
I found this quote

"upscaling technology used to interpolate 1080p content to 2160p is positively Asgardian in its brilliance. Rather than rely on linear scaling, top chips dynamically address image databases to interpolate data. The Panasonic TX-L65WT600, for example, employs a database of 120,000 textures used to guessitmate detail."

Is this an upscale method or not?

Actually, the question should be "How can I generate additional picture information?".
Bi-linear filtering doesn't add anything.
The algorithm outlined above does at least add information. Whether it can guess that information in a meaningful way may be a different story, though. That's a bit like the job of a restorer.
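For contrast, here is roughly what plain bi-linear filtering does: every output value is just a blend of existing input values, so no new information can appear (minimal illustrative grayscale sketch):

```python
def bilinear_sample(img, x, y):
    """Sample a grayscale image (list of rows) at a fractional coordinate.

    Illustrative only: every output value is a weighted average of the four
    surrounding input pixels, so nothing new can appear.
    """
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, len(img[0]) - 1), min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def upscale_2x(img):
    """Double a grayscale image's width and height with bi-linear filtering."""
    h, w = len(img), len(img[0])
    return [[bilinear_sample(img, x / 2, y / 2) for x in range(2 * w)]
            for y in range(2 * h)]

print(upscale_2x([[0, 10], [20, 30]]))   # only blends of the four input values
```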
 
Does anyone know if Second Son is at the same level of performance as First Light? I seem to remember First Light having better performance when it first came out due to some optimizations that they said would be too difficult to patch back into Second Son. Just wondering if they went back and did those while they were doing the pro patch.
 
It starts at 3840*2160 as much as a interlaced image starts at 3840*2160. It creates a picture with holes in it. But unlike interlaced it actually fills those holes not too dissimilar to how other upscaling techniques fill empty spaces in a image, it just does it in a more advanced way by reusing older data.
Saying "not too dissimilar" is handwaving vagueness exactly where we should be most precise. It's the equivalent of "Step 2. ???". The fact is, the method is very dissimilar, because it relies primarily on the properties of 3d objects within the scene. The 2D color environment is used only to test the confidence of the estimate of where last frame's polygons have moved.

The fact that checkerboard results are far superior to upscaling must also be acknowledged. Upscaling methods are very mature, and hit the wall of asymptotically approaching sinc some time ago (Lanczos has been standard for decades). Any method, like checkerboard, that suddenly produces much better results definitely warrants distinction from the class of techniques it far surpasses.
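(For reference, the Lanczos kernel mentioned here is just a windowed sinc; something like the sketch below, with a typically set to 2 or 3:)

```python
import math

def lanczos_kernel(x, a=3):
    """Lanczos-a reconstruction kernel: a sinc windowed by a wider sinc.

    Illustrative reference only; image scalers evaluate this per output sample
    against the surrounding input pixels.
    """
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)
```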

And finally, multiple developers on GAF have supported the idea of checkerboard being fundamentally different from upscaling. I don't want to lean on authority, because I think there's a strong case without it, but it'd also be perverse to ignore their expertise.

Checkerboard rendering as presented by the Rainbow 6 Siege Team is a lot more advanced than a fixed rule like bi-linear filtering (aka "scaling"). ...[But] checkerboard rendering contains a bit of something that is technically at least very close to scaling.
Note that the presentation ends with a plan to increase the quality of the checkerboard pixels by reconsidering how to weight the input of motion vectors plus Z and color values. Sony's approach shows just such a revision of the method.

Ubisoft Montreal had to use Z values (historical and current) to help estimate pixel values. But even with stored motion vectors, that can be a rough approximation. Sony's addition of a hardware ID buffer on the Pro dramatically improves the estimates. Being able to accurately track objects and triangles means reprojection will have higher confidence values, and will need clamping and input from neighbor pixels less often.

Checkerboard does still include looking at neighboring pixel values, because it'd be foolish to ignore that data as a check on your results. And in cases where confidence in reprojection is very low, individual pixels may be derived wholly from that data. But with hardware ID support, this situation should arise very infrequently. That checkerboard and upscaling may share some algorithms doesn't make them the same thing. Other aspects of rendering, such as antialiasing, also use these algorithms. But no one wants to call them "upscaling".
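A compact sketch of that priority order, to make it concrete (the confidence score, threshold and clamping rule are my own placeholders, not Sony's actual method):

```python
def fill_missing_pixel(prev_color, ids_match, reproj_confidence, neighbors,
                       low_confidence=0.3):
    """Toy priority order for filling one missing checkerboard pixel.

    prev_color        - color reprojected from the previous frame via motion vectors
    ids_match         - True when the ID buffer says the same object/triangle covers
                        this pixel in both frames
    reproj_confidence - 0..1 trust score for the reprojection (placeholder)
    neighbors         - fully shaded (r, g, b) neighbors from the current frame
    """
    lo = [min(c[i] for c in neighbors) for i in range(3)]
    hi = [max(c[i] for c in neighbors) for i in range(3)]

    if ids_match and reproj_confidence >= low_confidence:
        # High confidence: reuse history, clamped to the current neighborhood so a
        # stale sample can't produce a wild outlier.
        return tuple(min(max(prev_color[i], lo[i]), hi[i]) for i in range(3))

    # Low confidence: derive the pixel purely from current-frame neighbors.
    return tuple(sum(c[i] for c in neighbors) / len(neighbors) for i in range(3))
```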
 

MaLDo

Member
Saying "not too dissimilar" is handwaving vagueness exactly where we should be most precise. It's the equivalent of "Step 2. ???". The fact is, the method is very dissimilar, because it relies primarily on the properties of 3d objects within the scene. The 2D color environment is used only to test the confidence of the estimate of where last frame's polygons have moved.

The fact that checkerboard results are far superior to upscaling must also be acknowledged. Upscaling methods are very mature, and hit the wall of asymptotically approaching sinc some time ago (Lanczos has been standard for decades). Any method, like checkerboard, that suddenly produces much better results definitely warrants distinction from the class of techniques it far surpasses.

And finally, multiple developers on GAF have supported the idea of checkerboard being fundamentally different from upscaling. I don't want to lean on authority, because I think there's a strong case without it, but it'd also be perverse to ignore their expertise.


Note that the presentation ends with a plan to increase the quality of the checkerboard pixels by reconsidering how to weight the input of motion vectors plus Z and color values. Sony's approach shows just such a revision of the method.

Ubisoft Montreal had to use Z values (historical and current) to help estimate pixel values. But even with stored motion vectors, that can be a rough approximation. Sony's addition of a hardware ID buffer on the Pro dramatically improves the estimates. Being able to accurately track objects and triangles means reprojection will have higher confidence values, and will need clamping and input from neighbor pixels less often.

Checkerboard does still include looking at neighboring pixel values, because it'd be foolish to ignore that data as a check on your results. And in cases where confidence in reprojection is very low, individual pixels may be derived wholly from that data. But with hardware ID support, this situation should arise very infrequently. That checkerboard and upscaling may share some algorithms doesn't make them the same thing. Other aspects of rendering, such as antialiasing, also use these algorithms. But no one wants to call them "upscaling".

Killzone Shadow Fall's multiplayer rendering solution also uses adjacent pixels, fully rendered in the current frame, to calculate the final value of the "guessed" pixels. They had to confirm in a statement that the game did not render at native resolution.

We use a technique called “temporal reprojection,” which combines pixels and motion vectors from multiple lower-resolution frames to reconstruct a full 1080p image. If native means that every part of the pipeline is 1080p then this technique is not native. [...] The technique used in KILLZONE SHADOW FALL goes further and reconstructs half of the pixels from past frames.[..]

We keep track of three images of “history pixels” sized 960x1080

- The current frame
- The past frame
- And the past-past frame

and

  • For each pixel we store its color and its motion vector – i.e. the direction of the pixel on-screen
  • We also store a full 1080p, “previous frame” which we use to improve anti-aliasing
  • Then we have to reconstruct every odd pixel in the frame:
  • We track every pixel back to the previous frame and two frames ago, by using its motion vectors
  • By looking at how this pixel moved in the past, we determine its “predictability”
  • Most pixels are very predictable, so we use reconstruction from a past frame to serve as the odd pixel
  • If the pixel is not very predictable, we pick the best value from neighbors in the current frame
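Read literally, the quoted steps amount to something like this toy sketch (the predictability test and the neighbor fallback are paraphrased; this is not Guerrilla's code):

```python
def reconstruct_odd_pixel(history, motion_vec, neighbors, x, y, threshold=0.05):
    """Toy version of the quoted reconstruction (no bounds checks; illustrative only).

    history    - [past, past_past] half-width frames; each pixel stores
                 (color, motion_vector)
    motion_vec - this pixel's on-screen motion vector
    neighbors  - fully rendered neighboring pixels from the current frame
    """
    # Track the pixel back to the previous frame using its motion vector...
    px, py = int(x - motion_vec[0]), int(y - motion_vec[1])
    prev_color, prev_motion = history[0][py][px]

    # ...and back once more, two frames ago.
    ppx, ppy = int(px - prev_motion[0]), int(py - prev_motion[1])
    pprev_color, _ = history[1][ppy][ppx]

    # "Predictability": did the pixel behave consistently across the past frames?
    drift = sum(abs(prev_color[i] - pprev_color[i]) for i in range(3))

    if drift < threshold:
        # Most pixels are predictable: serve the reprojected sample from the past frame.
        return prev_color
    # Otherwise pick the best value from current-frame neighbors (here: the average).
    return tuple(sum(c[i] for c in neighbors) / len(neighbors) for i in range(3))
```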


Even Guerrilla make sure to distance themselves from the upscaling term...

Q: So how does “temporal reprojection” work and what’s the difference with up-scaling?
Up-scaling is a spatial interpolation filter. When up-scaling an image from one resolution to another, new pixels are added by stretching the image in X/Y dimension. The values of the new pixels are picked to lie in between the current values of the pixels. This gives a bigger, but slightly blurrier picture.


but I think the upscaling concept is wider now than it used to be. Nowhere is it written that upscaling must use ONLY existing pixels to do its job, as seen in the Panasonic upscaling description. If Panasonic can use a library of textures to add detail predictively in its upscaling method, how is that different from using a "library" of past frames?
 
Killzone Shadow Fall's multiplayer rendering solution also uses adjacent pixels, fully rendered in the current frame, to calculate the final value of the "guessed" pixels. They had to confirm in a statement that the game did not render at native resolution.
I do agree that checkerboard rendering is not native resolution. It's just not upscaling either; these are three distinct methods.

Even Guerrilla make sure to distance themselves from the upscaling term.
Developers like Guerrilla and Ubisoft Montreal know the science of rendering intimately, and actually use these techniques in their jobs. And they don't think it should be called "upscaling". To be blunt, why should we ignore them and listen to you instead?

Nowhere is it written that upscaling must use ONLY existing pixels to do its job, as seen in the Panasonic upscaling description. If Panasonic can use a library of textures to add detail predictively in its upscaling method, how is that different from using a "library" of past frames?
Panasonic is not adding detail, they're restricting errors. To my knowledge, saying the TV "employs a database of 120,000 textures" is very misleading. What they've done is compare 120,000+ pairs of 2K and 4K images in a computer lab. They use regularities in this data to categorize typical classes of upscale. The number of classes the data set reduces to is unknown, but let's say it's 100 pairs of input/result. That's what actually gets stored in the upscaling silicon in the TV.

My understanding is that when it receives an image, the silicon grabs a block of pixels and looks to see which of the 100 "input" patterns it matches. It then does a typical upscale using the surrounding pixel values, and compares that real result to the stored "result" that matches the input. If there's a difference, the final values are adjusted to the stored result. Basically, this is a way to allow outlier values and stop overblending.
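Sketched out, that reading looks something like the following (the pattern matching, the "100 classes" and the blend factor are all guesses for illustration, not Panasonic's actual design):

```python
def database_assisted_upscale(block, stored_pairs, plain_upscale):
    """Toy sketch of the database-assisted upscaling described above.

    block         - small grid (list of rows) of input pixel values
    stored_pairs  - list of (input_pattern, expected_result) pairs distilled from
                    the offline 2K/4K comparisons (the "100 classes" guess above)
    plain_upscale - an ordinary interpolation function used as the starting point
    """
    result = plain_upscale(block)  # start from a conventional upscale

    # Find the stored input pattern this block most resembles.
    def distance(a, b):
        return sum(abs(p - q) for ra, rb in zip(a, b) for p, q in zip(ra, rb))
    _, expected = min(stored_pairs, key=lambda pair: distance(block, pair[0]))

    # Nudge the interpolated block toward the stored result for that class, which
    # lets sharp outlier values survive instead of being blended away.
    return [[0.5 * r + 0.5 * e for r, e in zip(rrow, erow)]
            for rrow, erow in zip(result, expected)]
```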

All of this is a far cry from checkerboard. That uses true data about the movement of actual objects within a 3D scene to make the pixel value prediction. I suppose there's an analogy given that both methods have an error-correction step. But that's true of very many types of calculation, and it'd be useless to call everything in the world "upscaling" because of it.
 

thelastword

Banned
It's a shame that KZ:SF won't get a PS4 Pro patch. The MP (weird resolution, fps all over the place) really needs it.
It can still be argued that this is the best looking PS4 game. The subtle details and effects are what blew me away, combined with lots of geometry detail and high-quality character models.....

This was early days for Guerrilla, and I imagine a touched-up Shadow Fall would really be something to behold. Hell, a 2160p checkerboarded SP locked at 30fps with AF and good AA, and a geometry-rendered MP or 1800p checkerboard render at a solid 60fps, would be more than enough to amaze here.....They have enough GPU power, the CPU boost and the extra RAM to make it happen.

This is one of my most anticipated Pro patches, tbh. I hope they consider it when Zero Dawn ships, or that we get a surprise at the PSX meeting.....fingers crossed (tm)
 
It can still be argued that this is the best PS4 game. The subtle details and effects are what blew me away, combined with lots of geometry detail and high-quality character models.....

This was early days for Guerrilla, and I imagine a touched-up Shadow Fall would really be something to behold. Hell, a 2160p checkerboarded SP locked at 30fps with AF and good AA, and a geometry-rendered MP or 1800p checkerboard render at a solid 60fps, would be more than enough to amaze here.....They have enough GPU power, the CPU boost and the extra RAM to make it happen.

This is one of my most anticipated Pro patches, tbh. I hope they consider it when Zero Dawn ships, or that we get a surprise at the PSX meeting.....fingers crossed (tm)
What do you mean the best PS4 game? Uncharted 4 has surpassed it by far, unless you're talking about launch window/day 1 games.

Either way, all 1st party/exclusive games should get a PS4 Pro patch. Even non-AAA games like Resogun...
 

MaLDo

Member
I do agree that checkerboard rendering is not native resolution. It's just not upscaling either; these are three distinct methods.


Developers like Guerrilla and Ubisoft Montreal know the science of rendering intimately, and actually use these techniques in their jobs. And they don't think it should be called "upscaling". To be blunt, why should we ignore them and listen to you instead?


Panasonic is not adding detail, they're restricting errors. To my knowledge, saying the TV "employs a database of 120,000 textures" is very misleading. What they've done is compare 120,000+ pairs of 2K and 4K images in a computer lab. They use regularities in this data to categorize typical classes of upscale. The number of classes the data set reduces to is unknown, but let's say it's 100 pairs of input/result. That's what actually gets stored in the upscaling silicon in the TV.

My understanding is that when it receives an image, the silicon grabs a block of pixels and looks to see which of the 100 "input" patterns it matches. It then does a typical upscale using the surrounding pixel values, and compares that real result to the stored "result" that matches the input. If there's a difference, the final values are adjusted to the stored result. Basically, this is a way to allow outlier values and stop overblending.

All of this is a far cry from checkerboard. That uses true data about the movement of actual objects within a 3D scene to make the pixel value prediction. I suppose there's an analogy given that both methods have an error-correction step. But that's true of very many types of calculation, and it'd be useless to call everything in the world "upscaling" because of it.


Ok, let's say that checkerboard is not a native render, and we both agree it's not a real render in the usual way. What I'm saying is: if it's almost a render because it calculates the pixel gap in an advanced way, then it's almost an upscaling because the method fills the pixel gap to achieve a bigger resolution (denser would be more precise in this case).

I know we need to expand our old concept of rendering a frame, and I think we need to expand our old concept of upscaling a set of rendered pixels too. That's all. The concept of upscaling is not something negative; it is something that evolves, just like the concept of rendering.

Post-process AA methods were not considered real antialiasing years ago, and nowadays they generate better results than MSAA in scenes with little subpixel aliasing.
 
What I'm saying is: if it's almost a render because it calculates the pixel gap in an advanced way, then it's almost an upscaling because the method fills the pixel gap to achieve a bigger resolution (denser would be more precise in this case).
Let me strive for maximum brevity and see if that's clearer. Typical rendering uses knowledge of the position and motion of objects in 3D space, augmented by luma and chroma inputs from various sources including adjacent pixels, to determine pixel values.

Checkerboard uses knowledge of the position and motion of objects in 3D space, augmented by luma and chroma inputs from adjacent pixels, to determine pixel values.

Upscaling uses luma and chroma inputs from adjacent pixels to determine pixel values.

I hope that makes it obvious why checkerboard is a type of rendering, and not a type of upscaling.

Post-process AA methods were not considered real antialiasing years ago...
Yes they were, that's why they were called "post-process AA"! The quality of the results doesn't change the type of technique. In the same way, checkerboard has been called "checkerboard rendering" since it was first revealed to the public. (Similarly, Guerrilla called their approach "temporal reprojection".)

The inventors and practitioners of this process say that it's a type of rendering. So I ask again, why should we ignore them and listen to your feeling that it's a type of upscaling?
 

thelastword

Banned
What do you mean the best PS4 game? Uncharted 4 has surpassed it by far, unless you're talking about launch window/day 1 games.

Either way, all 1st party/exclusive games should get a PS4 Pro patch. Even non-AAA games like Resogun...
I meant best looking, ha ha....It can be argued. Just edited.....
 