Lister
So here's my poor man's understanding of checkerboard rendering:
You set a target resolution for a particular set of frames. Let's say 4K (3840x2160, 16:9) @ ~8.3 megapixels.
You break this frame down into small pixel blocks laid out in a "checkerboard" pattern (the pattern I'm picturing is 2x2 quads, but you get my meaning).
Each frame you render only the red or only the green pixels, and use data from the previous frame, the current frame, and the geometry (because the geometry IS rendered at full 4K, right?) to interpolate the rest of the pixels.
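To make the pattern concrete, here's a toy sketch of alternating 2x2-quad checkerboard masks. This is just my reading of the idea, not Sony's actual implementation; `checkerboard_mask` is a made-up helper name:

```python
# Toy 2x2-quad checkerboard: even frames shade the "A" quads, odd frames
# the "B" quads, and the gaps would be filled from the previous frame
# plus current-frame geometry data.

def checkerboard_mask(width, height, frame_index):
    """True where this frame actually shades pixels (2x2-quad checkerboard)."""
    return [
        [((x // 2 + y // 2) % 2) == (frame_index % 2) for x in range(width)]
        for y in range(height)
    ]

mask = checkerboard_mask(8, 8, frame_index=0)
for row in mask:
    print("".join("R" if shaded else "." for shaded in row))
shaded = sum(row.count(True) for row in mask)
print("shaded:", shaded, "of", 8 * 8)  # exactly half the pixels
```

The two frames' masks are exact complements, which is why consecutive frames together cover every pixel of the target resolution.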
Boom, you cut your per-pixel GPU overhead roughly in half, and you are technically shading a ~4-megapixel image instead of an ~8-megapixel one (even though things like geometry and probably some buffers still operate at full 4K).
That's like rendering a resolution somewhere between 2560x1440 (1440p 16:9, ~3.7 megapixels) and my own 3440x1440 (1440p 21:9, ~5 megapixels).
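The pixel-count arithmetic behind that comparison checks out:

```python
# Raw pixel counts from the post, just to verify the "in between" claim.
full_4k = 3840 * 2160              # 8,294,400  -> ~8.3 MP
checkerboarded = full_4k // 2      # 4,147,200  -> ~4.1 MP shaded per frame
qhd_16_9 = 2560 * 1440             # 3,686,400  -> ~3.7 MP
uwqhd_21_9 = 3440 * 1440           # 4,953,600  -> ~5.0 MP
print(qhd_16_9 < checkerboarded < uwqhd_21_9)  # True
```

So a checkerboarded 4K frame shades slightly more pixels than 1440p 16:9 and slightly fewer than 1440p 21:9.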
Am I correct in my current understanding?
Now this is what I'm not clear on:
When someone like Digital Foundry says this or that game on the PS4 Pro is being rendered at 1800p, does that mean the number of pixels actually being shaded is equivalent to 3840x1800? In other words, the pattern being used to render at 4K shades this many pixels (~6.9 megapixels)? OR is the target frame size itself 3840x1800, with the console checkerboarding to get there (essentially shading a ~3.5-megapixel image, fewer pixels than 1440p 16:9) and then upscaling the rest of the way to full 4K using traditional upscale methods?
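The two readings differ by a factor of two in shaded pixels, which is why the distinction matters. A quick check of the numbers under each interpretation:

```python
# Interpretation A: 3840x1800 pixels are actually shaded each frame.
shaded_a = 3840 * 1800        # 6,912,000 -> ~6.9 MP

# Interpretation B: 3840x1800 is the reconstructed target; only half is shaded.
shaded_b = shaded_a // 2      # 3,456,000 -> ~3.5 MP

qhd_16_9 = 2560 * 1440        # 3,686,400 -> ~3.7 MP
print(shaded_b < qhd_16_9)    # True: under B, fewer shaded pixels than 1440p
```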
Could someone clear that up for me?
Also, why doesn't the base PS4 use checkerboarding to target a frame of, say, 1440p or even 1080p? The per-pixel/shader overhead would again be cut by a lot, making performance at least somewhat better (assuming no CPU bottleneck).