
The Art of Pixel Counting: How can we tell what resolution a game is rendering?

pawel86ck

Banned
This is a much better thread compared to your previous one, because now you are just challenging DF rather than attacking them.

NXGamer is probably one of the few people here with experience counting pixels, so I hope he will offer his perspective on the DS remake's resolution.

From my perspective the Demon's Souls and Halo pixels look the same on both screenshots, and that's not surprising because we are looking at 4K screenshots in both cases, so they should be the same. The Halo pixelation pattern, however, looks different, because 4x smaller individual pixels form a much bigger one (there is still subtle color variation between the individual pixels, but without zooming they look like one really big pixel). I believe red chroma bleeding can explain why these small pixels form a much bigger one. I'm guessing a crop from another area (without a dominant red color) would show a different pixel pattern (without such big pixelation as here on the red color).

When it comes to the DS remake's resolution, from the YouTube-compressed material I can't tell if the latest 60fps trailer is running at 1440p or higher, because on my 4K TV everything looks sharp thanks to the excellent Bravia sharpening engine (I also can't tell if the game is using dynamic 4K or not). However, the most recent official screenshot from this game, viewed on my PC monitor, shows 1440p for sure.

42vRMqL.jpg


I have checked it by downsampling this 4K screenshot to 1440p and then upscaling it again. The screenshot still looked the same, and that would not be the case if the game were using a higher resolution than 1440p (details would be reduced after such a big downsample).
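For anyone who wants to repeat this kind of round-trip test, here is a minimal Python sketch using Pillow and NumPy (the filename is just a placeholder, not the actual screenshot). A near-zero difference after the 1440p round trip only suggests there is no detail above 1440p; it doesn't pin down the exact internal resolution.

```python
# Hedged sketch of the round-trip test described above.
# "screenshot_4k.png" is a placeholder filename.
from PIL import Image
import numpy as np

original = Image.open("screenshot_4k.png").convert("RGB")   # 3840x2160 source
down = original.resize((2560, 1440), Image.LANCZOS)         # downsample to 1440p
roundtrip = down.resize(original.size, Image.LANCZOS)       # back up to 4K

# Mean absolute per-pixel difference; near zero suggests no detail above 1440p.
diff = np.abs(np.asarray(original, dtype=np.int16) -
              np.asarray(roundtrip, dtype=np.int16))
print("mean abs difference per channel:", diff.mean())
```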

Maybe this game is using dynamic resolution and this particular screenshot only shows the lowest resolution dip, but even if the DS remake runs at a locked 1440p in performance mode the game still looks superb, so I don't think people should make such a big deal about it. No XSX game so far can match Demon's Souls' graphics fidelity (especially not The Falconeer, despite it running at 8K 60fps), and there will still be a 4K 30fps mode in the DS remake for those who want native 4K no matter what.
 
Last edited:

Bo_Hazem

Banned
Given the nature of the image reconstruction techniques we've seen at play this generation - from advanced temporal reconstruction, AI upscaling and checkerboard rendering to good old-fashioned even-axis multipliers - in combination with dynamic internal render-target resolutions operating in sometimes interesting ways (altering only the horizontal axis, for example), most pixel counting techniques begin to break down in isolation because, frankly, that's exactly what the upscaling techniques alter. And that's before we add in anti-aliasing. If this is the case, how does one determine the range of the dynamic internal render resolution targets on these types of reconstructed render outputs?
To stay on topic: dynamic resolutions are employed in virtually every major video game, because they ensure frame budgets are kept by sacrificing a small degree of momentary visual clarity when needed. Given that Halo 5 runs at dynamic 4K 60fps, it's logical that Halo Infinite also runs at dynamic 4K, as it is also targeting 60fps. Because we therefore expect Halo Infinite to use dynamic 4K, what methods can we apply to the gameplay demo to determine the range of the dynamic render resolution scaling?
How do these techniques apply to the Demon's Souls gameplay demo, and what are the results?

Wonderful input. Dynamic 4K will be the norm for modes targeting a stable 60fps at all times, so you either have VRS apply a partial resolution downgrade, or the whole image goes up and down in resolution to keep that 60fps steady, compensating with resolution when needed.

I think it's already muddied by VRS, or even lower-res assets. That doesn't make it native 4K, nor lower than 4K. Going by the cutscenes, it's pretty obvious that it's native 4K during those:

Taken from the bottom chest armor of Master Chief:

vlcsnap-2020-10-04-14h04m14s838.png


Here you find a true indication of 4K: around 3 pixels of gradation between the edge and the background. (1600% zoom)

halo-1600zoom.png


So the projected image usually stays 4K, and VRS comes in with a partial resolution downgrade. The most accurate description could therefore be "Dynamic 4K".
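To make the "pixels of gradation" idea a bit more reproducible, here is a rough sketch of one way to measure it (my own illustration, not the exact workflow used above; the filename and row index are hypothetical): sample a scanline across a high-contrast edge and count how many pixels sit between the two flat intensity levels.

```python
# Rough sketch: count the width of the transition band across one edge.
# "halo_crop.png" and the row index are hypothetical placeholders.
from PIL import Image
import numpy as np

crop = np.asarray(Image.open("halo_crop.png").convert("L"), dtype=np.float32)
row = crop[20, :]                      # pick a scanline that crosses the edge

lo, hi = row.min(), row.max()
# Pixels that are neither close to the dark side nor close to the bright side.
band = (row > lo + 0.15 * (hi - lo)) & (row < hi - 0.15 * (hi - lo))
print("pixels of gradation along this row:", int(band.sum()))
```

A narrow band of a few pixels is consistent with edges resolved at the output resolution; a consistently wider band hints at upscaling or heavier reconstruction.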

Hazem doing the Shazam.


Keep up doing your great work man. :messenger_beaming: (y)


Thanks mate! I will. :messenger_winking_tongue: (y)

Why not? The shading resolution being variable does not mean it's not resolving the same number of individual pixels.

You might choose to shade 4k in 2x2 blocks but that doesn't make it 1080p. For all we know, all the pipeline could still be working with 4K buffers.

Does having a 512x512 texture occupying 1024x1024 pixels on your screen also not count as native? Because it's the same thing

Can you explain to me how a lower LOD of a flat object introduces more aliasing? Near LODs are usually shape-preserving, reducing polygons only in the details, not in the overall shape. Assuming that, even if it's a lower LOD it will exhibit the same boundaries as the higher one, and thus show the same aliasing artifacts.

The thing is that as reconstruction techniques improve, it will be harder to know the native resolution. Only these kinds of artifacts, which you are quick to disregard as bugs and also use to criticize DF's work, will be the ones reliably telling us which resolution a game is natively rendered at.

Native 4K does not have VRS. VRS works like a type of real-time LOD that can drop parts of the image below the targeted resolution. So if you happen to pick an area that got the VRS treatment and call that the native resolution, that becomes a real problem. The better the developer, the harder it is to spot VRS. Halo most likely wasn't even in an alpha phase when shown, and was probably rushed; the next showing should not have such visible flaws this close to launch.

And a lower-res LOD will have blocks of 4 to 8 pixels or more standing in for a single pixel, which introduces jagginess, just like those VRS samples shown in the OP.

I personally don't know if DS is native 4K or not; most likely it's "Dynamic 4K". If you check all the screens I've posted links to, you'll find it mostly native 4K, but that one single patch of jagginess raises questions. So it's safe to call it "Dynamic 4K" until it's confirmed otherwise. But 1440p? Nope, that's pretty extreme. I can easily tell 1440p from 4K, and even 1800p still has some "softness" if you have a trained eye.

DS is much sharper than the UE5 demo, for example, but the UE5 demo is still ahead graphically even with its voxel artifacts and the slight softness of 1440p.
 
Last edited:

Bo_Hazem

Banned
Nice thread, mate! I'm quoting only the part I'd like to refer to.


The jaggies on that beam can be a result of another layer of image, not the object one. It seems that this area has a light source (from above) or it could be a dust cloud. Those are added on top of the geometry/texture/shading layer and are very often lower resolution than the frame (just like VRS but not exactly the same). When you put them together, artefacts may appear, especially if the result is then upscaled (or downscaled). I'm sure guys at Bluepoint are looking for errors like this and, if they have enough time and find a solution, this will get eliminated in the final version (or later with a patch).

Generally speaking, pixel counting will be quite useless next gen for advanced titles. It seems like a lot of them will be using advanced upscaling or image-enhancing techniques, like TAA. This also makes information about resolution on the box quite irrelevant. There will be games that look much better at 1440p than at 4K, just like the UE presentation versus many 4K games on Pro or X1X, because their object density, effects and general IQ will be miles ahead thanks to more powerful GPUs.

Edit: This also means that FPS will become even more relevant. Go tackle the tool that I pointed out to you, especially after you get your own footage.

Analyses of titles will still be relevant but with a different approach: we need to look at the overall image quality in games rather than just count pixels. It's a more difficult job but I think you're well cut out for it. Remember to update the OT when you can replace the videos with raw images captured from your own console. Those vids are still compressed, and even if they aim for lossless, it's all just maths and statistics, so some pixels will be distorted anyway.

Just one more thing, which I got from the excellent Spider-Man remaster analysis by NXGamer: for a game releasing in about a month, it is still missing a lot of things in the rendering pipeline. I imagine that the folks at Insomniac are really making the name of their company literal these days so we can enjoy our toys in November. Working from home makes many things more difficult, and I think it's a miracle that we're getting our consoles without delays in the current reality of the pandemic. I'm going to support devs with as many launch titles bought as I can... Publishers raising prices didn't help with that, though.

That's very interesting input! I know you have first-hand experience with game development, and I'm glad to have you here, sir! And yes, there is a light source coming from that foggy door, timestamped:





Indeed, that minor flaw could be many things; using it as an indication of lower resolution as a whole is a "stunt" conclusion. We should expect a more refined build by the day one patch as well! So far I couldn't see anything else suspicious in that video; I would love it if someone could contribute by finding any more.

The OP should be updated as well, and yes, pixel counting next-gen games is not as clean-cut as it was before, with those advanced techniques found in the upcoming consoles and PC GPUs.

You’re one smart lad Bo!

Thanks, mate. :messenger_winking_tongue:(y)

Thanks Bo! Cool read, had no idea how pixels were counted.

Glad you like it! Well, it's probably too late, according to DF. New methods should be leveraged going forward.

If we're counting, this is already your 2nd post in this thread alone crying about Bo.

This is a quality thread that would be even better without your low-quality whining.

Thanks mate, would love to have constructive posts instead and would happily change the OP accordingly as well.

Honestly, if the jaggies are micro, the framerate is high, and the image is clean and crisp, I'm good to go.

Of course, but personally I can see them, and it's the only instance so far. Overall, it's a razor-sharp 4K image, whether it's native or AI-reconstructed from a lower resolution.
 
Last edited:

Bo_Hazem

Banned
This is a much better thread compared to your previous one, because now you are just challenging DF rather than attacking them.

NXGamer is probably one of the few people here with experience counting pixels, so I hope he will offer his perspective on the DS remake's resolution.

From my perspective the Demon's Souls and Halo pixels look the same on both screenshots, and that's not surprising because we are looking at 4K screenshots in both cases, so they should be the same. The Halo pixelation pattern, however, looks different, because 4x smaller individual pixels form a much bigger one (there is still subtle color variation between the individual pixels, but without zooming they look like one really big pixel). I believe red chroma bleeding can explain why these small pixels form a much bigger one. I'm guessing a crop from another area (without a dominant red color) would show a different pixel pattern (without such big pixelation as here on the red color).

When it comes to the DS remake's resolution, from the YouTube-compressed material I can't tell if the latest 60fps trailer is running at 1440p or higher, because on my 4K TV everything looks sharp thanks to the excellent Bravia sharpening engine (I also can't tell if the game is using dynamic 4K or not). However, the most recent official screenshot from this game, viewed on my PC monitor, shows 1440p for sure.

42vRMqL.jpg


I have checked it by downsampling this 4K screenshot to 1440p and then upscaling it again. The screenshot still looked the same, and that would not be the case if the game were using a higher resolution than 1440p (details would be reduced after such a big downsample).

Maybe this game is using dynamic resolution and this particular screenshot only shows the lowest resolution dip, but even if the DS remake runs at a locked 1440p in performance mode the game still looks superb, so I don't think people should make such a big deal about it. No XSX game so far can match Demon's Souls' graphics fidelity (especially not The Falconeer, despite it running at 8K 60fps), and there will still be a 4K 30fps mode in the DS remake for those who want native 4K no matter what.

I turn sharpening and noise reduction off on my Bravia XD70 when using it with a PC, so it doesn't mask noise in photos/videos while editing.

That screenshot isn't 4K. I usually use official 4K screenshots from the PlayStation.Blog Flickr account or produce them from the official YouTube channels and trailers. They are in PNG format.

ds-s.png


Here:



Then you go to "view all sizes", then choose "original" as it comes in PNG:



Try this one:

49996607922_97bd14bc8b_o.png


It could also be that some assets vary in resolution/quality, as the grass above is low-res compared to this:

(4K, but JPG; not from the official Flickr account but from GameSpot)

3740450-ejgxuuwwsaa8gpj.jpeg


Even the overall image quality suggests that this is the visual mode, and the previous one is the performance mode.
 
Last edited:

FeiRR

Banned
That's a very interesting input! I know you have first hand experience with game development, and glad to have you here, sir! And yes, there is a light source coming from that foggy door, timestamped:


Now I see clearly that the white jaggy artefacts appear only after the dust cloud, which originates from the NPC being hit, passes in front of that beam. That makes me think it's not a lighting issue but an alpha error. I would never have noticed such a tiny thing; you're a hawk and should be a game tester!
 

onesvenus

Member
Native 4K does not have VRS
That's not true at all. The output resolution does not have anything to do with the shading resolution. You can have a 4K output, effectively outputting 3840 x 2160 pixels but shade those at half/third resolution.
If you want to simplify how VRS works it's better to think about using a 512x512 texture on a surface that fills 1024x1024 pixels on the screen. The geometry that uses that texture is not less detailed per se.
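As a tiny toy sketch of that analogy (sizes and values are arbitrary, nothing console-specific): each texel gets reused across a 2x2 block of output pixels, yet the framebuffer itself still contains 1024x1024 distinct pixels.

```python
# Toy sketch of the texture analogy: a 512x512 texture sampled (nearest
# neighbour) onto a surface covering 1024x1024 framebuffer pixels.
import numpy as np

texture = np.random.rand(512, 512)        # stand-in for a 512x512 texture
ys, xs = np.mgrid[0:1024, 0:1024]         # coordinates of every output pixel
framebuffer = texture[ys // 2, xs // 2]   # each texel covers a 2x2 pixel block

print(framebuffer.shape)                  # (1024, 1024): output res unchanged
```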

If you really want to challenge DF on a technical discussion, without just coming across as a butthurt fanboy, you should learn how computer graphics works.
 

pawel86ck

Banned
I turn sharpening and noise reduction off on my Bravia XD70 when using it with a PC, so it doesn't mask noise in photos/videos while editing.

That screenshot isn't 4K. I usually use official 4K screenshots from the PlayStation.Blog Flickr account or produce them from the official YouTube channels and trailers. They are in PNG format.

ds-s.png


Here:



Then you go to "view all sizes", then choose "original" as it comes in PNG:



Try this one:

49996607922_97bd14bc8b_o.png


It could also be that some assets vary in resolution/quality, as the grass above is low-res compared to this:

(4K, but JPG; not from the official Flickr account but from GameSpot)

3740450-ejgxuuwwsaa8gpj.jpeg


Even the overall image quality suggests that this is the visual mode, and the previous one is the performance mode.

You are correct; I have found the original source and the resolution is indeed different.


Ydu19F5.jpg


It's 2400x1350, while the screenshot from my link had already been upscaled to 4K.
 
Last edited:

Bo_Hazem

Banned
Now I see clearly that the white jaggy artefacts appear only after the dust cloud, which originates from the NPC being hit, passes in front of that beam. That makes me think it's not a lighting issue but an alpha error. I would never have noticed such a tiny thing; you're a hawk and should be a game tester!

Thanks, mate! Well, it happens so quickly though, and we know that the game hasn't reached gold state yet. It's the only thing I could nitpick so far from that crisp trailer!

That's not true at all. The output resolution does not have anything to do with the shading resolution. You can have a 4K output, effectively outputting 3840 x 2160 pixels but shade those at half/third resolution.
If you want to simplify how VRS works it's better to think about using a 512x512 texture on a surface that fills 1024x1024 pixels on the screen. The geometry that uses that texture is not less detailed per se.

If you really want to challenge DF on a technical discussion, without just coming across as a butthurt fanboy, you should learn how computer graphics works.

It's been 4K output all this time with PS4 Pro + X1X. But throwing "butthurt" and "fanboy" won't make you any smarter. MS officials have already confirmed it's "up to 4K", as claiming native 4K doesn't mix well with partially downgrading the resolution. You can always use VRS smartly without it being visible.

More about VRS:



You are correct; I have found the original source and the resolution is indeed different.


Ydu19F5.jpg


It's 2400x1350, while the screenshot from my link had already been upscaled to 4K.

Yup, it gets tricky sometimes. Not sure why the official website doesn't show the highest-resolution 4K screenshots.
 
Last edited:

onesvenus

Member
But throwing "butthurt" and "fanboy" won't make you any smarter.
See? It's this attitude of knowing everything better than anyone who opposes your way of thinking that makes you look bad.

I have worked for more than 10 years doing computer graphics research. I don't need you to validate my view on things, nor to tell me whether I'm smarter than someone else.

I thought this was a thread where everyone, including you, could learn how computer graphics works, but it seems I may have had too high expectations. Good luck learning how everything really works with that attitude.
 

Bo_Hazem

Banned
See? It's this attitude of knowing everything better than anyone who opposes your way of thinking that makes you look bad.

I have worked for more than 10 years doing computer graphics research. I don't need you to validate my view on things, nor to tell me whether I'm smarter than someone else.

I thought this was a thread where everyone, including you, could learn how computer graphics works, but it seems I may have had too high expectations. Good luck learning how everything really works with that attitude.

I'm willing to learn, but it would be better without the personal attacks. I hope after this post you will contribute your knowledge and expertise. I'm in no position of "knowing everything". You're always welcome.
 
Last edited:
...

vlcsnap-2020-09-28-00h18m25s787.png


You can see that upscaling from a lower resolution has to merge neighboring pixels, whereas in the native form you can trace much more straightforward lines.

dgu2bJE.jpg
2LkR6n2.jpg






In Demon's Souls you find many more unique, independent pixels and finer gradation, even for an asset that is much further from the camera compared to the one above. (1600% zoom)

ds-1600.png


Picked from the right side, the wall.




You can't find aliasing that easily, so it's near impossible to pixel count in the traditional DF way, which relies on hard-edged lines. Without further investigation of how the pixels form the image, you can't really get a sharp indication of the true resolution anymore, especially with VRS, which makes things extremely hard as well, since it can mean partial resolution like the 720p parts in Halo.
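For reference, the "traditional way" mentioned above works roughly like this (a simplified sketch of the idea, not DF's actual tooling; the filename, threshold and 4K output width are assumptions): take a straight, hard-aliased edge, measure how many output pixels each stair step spans, and divide the output width by the average step length to estimate the internal render width.

```python
# Simplified sketch of classic stair-step pixel counting on a binarized edge.
# "edge_crop.png", the threshold and the 3840 output width are placeholders.
from PIL import Image
import numpy as np

edge = np.asarray(Image.open("edge_crop.png").convert("L")) > 128  # binarize
edge_rows = edge.argmax(axis=0)          # row of the edge in each column

# Column indices where the edge jumps to a new row, and the step lengths
# (in output pixels) between consecutive jumps.
jumps = np.flatnonzero(np.diff(edge_rows) != 0)
steps = np.diff(jumps)
if steps.size:
    avg_step = steps.mean()
    print(f"average step length: {avg_step:.2f} px")
    print(f"estimated internal width: {3840 / avg_step:.0f} (assuming 4K output)")
```

On a reconstructed or heavily anti-aliased image there are no clean steps to measure, which is exactly why the method breaks down here.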


...


It's open for constructive discussion, so we can all get some good knowledge out of it. The OP will be updated as well.
The fact that there are more independent pixels in Demon's Souls doesn't prove that it runs at a higher resolution. It only means it's sharper and looks higher resolution (which is what matters most to players).

In Halo Infinite, in order to reach the marketing holy grail of 4K 60fps + open world, they had to sacrifice everything to get there: low-res textures, low-res shadows, low level of detail, low-quality lighting and software VRS (720p) in some parts. They probably only have the geometry running at 4K (probably dynamic 4K) in order to label the game as 4K. Since the XB1, MS has had a resolution inferiority complex, and the execs still think their games need to reach 4K in order to get the "next-gen" label.

They are wrong, of course. In the current world of dynamic resolutions, TAA and reconstruction techniques, reaching the highest resolution is not the proper way to make the best-looking game (it never was, of course). The best-looking next-gen demos are the ones with the best texture resolution, the best lighting and the highest density (number of different objects + number of polygons).

Currently (with the next-gen gameplay shown), the best balance for the best-looking game is 1440p + an insane level of detail, like the UE5 demo or the Demon's Souls remake running at 60fps (because you get twice the detail in motion; it's called motion resolution).
 
Last edited:
See? It's this attitude of knowing everything better than anyone who opposes your way of thinking that makes you look bad.

I have worked for more than 10 years doing computer graphics research. I don't need you to validate my view on things, nor to tell me whether I'm smarter than someone else.

I thought this was a thread where everyone, including you, could learn how computer graphics works, but it seems I may have had too high expectations. Good luck learning how everything really works with that attitude.

If you can't take it, don't dish it out.
 

Bo_Hazem

Banned
The fact that there are more independent pixels in Demon's Souls doesn't prove that it runs at a higher resolution. It only means it's sharper and looks higher resolution (which is what matters most to players).

In Halo Infinite, in order to reach the marketing holy grail of 4K 60fps + open world, they had to sacrifice everything to get there: low-res textures, low-res shadows, low level of detail, low-quality lighting and software VRS (720p) in some parts. They probably only have the geometry running at 4K (probably dynamic 4K) in order to label the game as 4K. Since the XB1, MS has had a resolution inferiority complex, and the execs still think their games need to reach 4K in order to get the "next-gen" label.

They are wrong, of course. In the current world of dynamic resolutions, TAA and reconstruction techniques, reaching the highest resolution is not the proper way to make the best-looking game (it never was, of course). The best-looking next-gen demos are the ones with the best texture resolution, the best lighting and the highest density (number of different objects + number of polygons).

Currently, the best balance for the best-looking game is 1440p + an insane level of detail, like the UE5 demo or the Demon's Souls remake running at 60fps (because you get twice the detail in motion; it's called motion resolution).

The problem with Halo is probably "not being ready". If they had kept it in UE4 (with an upgrade to UE5 later), it would have kept or exceeded the previous graphics shown at E3 2018. I think it's a build that isn't ready or polished yet; it's been rushed.





The difference between DS and the UE5 demo is that the former is much sharper and cleaner, while the latter has an insane polygon count and uses uncompressed Hollywood 8K assets with 16K shadows. I think when UE5 is ready, that very same demo could become 1440p@60fps or 4K@30fps. Overall, the UE5 demo is much more beautiful and photorealistic, while still sporting minor softness in the final image that is clearly not native 4K.
 

Bo_Hazem

Banned
And here's the same screenshot upscaled to 4K thanks to AI. If the PS5 were to AI-upscale all games to 4K with quality like that, no one would care about native resolution anymore.

1444demons-souls-screenshot-03-disclaimer-en-30sept20-gigapixel-height-2160px.jpg

I think Sony is holding back some details until the RDNA2 reveal is done; the NDAs are probably still very much in place. We can't confirm whether it's native or AI-reconstructed, but the final result is what matters. Same with DLSS 2.0 on PC. (y)
 
Last edited:

FeiRR

Banned
The problem with Halo is probably "not being ready". If they had kept it in UE4 (with an upgrade to UE5 later), it would have kept or exceeded the previous graphics shown at E3 2018. I think it's a build that isn't ready or polished yet; it's been rushed.





The difference between DS and the UE5 demo is that the former is much sharper and cleaner, while the latter has an insane polygon count and uses uncompressed Hollywood 8K assets with 16K shadows. I think when UE5 is ready, that very same demo could become 1440p@60fps or 4K@30fps. Overall, the UE5 demo is much more beautiful and photorealistic, while still sporting minor softness in the final image that is clearly not native 4K.

Halo isn't UE. It's made in Slipspace, which is a descendant of the old Bungie engine that 343i took over and spent 5 years trying to get to work, but now we know they failed. I think Turn 10 had the only up-to-date engine in the whole MSS group. Now they have id Tech, which is one of the best in the industry and certainly excels at delivering a constant 60 FPS across scalable platforms. That is the part of the acquisition which matters more than some franchises, I think.
 

Bo_Hazem

Banned
Halo isn't UE. It's made in Slipspace, which is a descendant of the old Bungie engine that 343i took over and spent 5 years trying to get to work, but now we know they failed. I think Turn 10 had the only up-to-date engine in the whole MSS group. Now they have id Tech, which is one of the best in the industry and certainly excels at delivering a constant 60 FPS across scalable platforms. That is the part of the acquisition which matters more than some franchises, I think.

Yup, I read somewhere that Halo Infinite started on UE4 and then moved to Slipspace. That Doom engine should help a lot indeed! Pretty mature and advanced.
 

PaintTinJr

Member
That's not true at all. The output resolution does not have anything to do with the shading resolution. You can have a 4K output, effectively outputting 3840 x 2160 pixels but shade those at half/third resolution.
If you want to simplify how VRS works it's better to think about using a 512x512 texture on a surface that fills 1024x1024 pixels on the screen. The geometry that uses that texture is not less detailed per se.

If you really want to challenge DF on a technical discussion, without just coming across as a butthurt fanboy, you should learn how computer graphics works.
I'm not sure if I'm understanding this correctly about VRS, but I was under the impression it lightens the fragment processing load on the GPU, so it might resolve to fewer than the usual one (or more) fragment shader calculations per pixel (like Gouraud shading did in ancient times with interpolated shading), whereas IMHO (as I understand it), by comparison, the texturing you describe will still occupy the fragment shader pipeline with one or more sampler lookups per native framebuffer pixel - for each fragment of the oriented geometry that is visible.

If that is how it works, then I would assert that VRS operates below native framebuffer resolution, whereas texture mapping operates at the framebuffer resolution (or higher if object or screenspace AA techniques are being used).
 

onesvenus

Member
I was under the impression it lightens the fragment processing load on the GPU, so it might resolve to fewer than the usual one (or more) fragment shader calculations per pixel (like Gouraud shading did in ancient times with interpolated shading), whereas IMHO (as I understand it), by comparison, the texturing you describe will still occupy the fragment shader pipeline with one or more sampler lookups per native framebuffer pixel - for each fragment of the oriented geometry that is visible.
You are absolutely right about how the fragment shader works in both VRS and texture mapping. I was just comparing the end results as a means of saying that even though you will see 2x2 blocks when a 512x512 texture is rendered into a 1024x1024 framebuffer, that does not make it a 512x512 framebuffer.

If that is how it works, then I would assert that VRS operates below native framebuffer resolution, whereas texture mapping operates at the framebuffer resolution (or higher if object or screenspace AA techniques are being used).
You are totally right that the shading is operating below native FB resolution, but that does not mean it's not native. What I was trying to explain is that the shading resolution and the framebuffer resolution are not, and don't need to be, a 1:1 mapping. In the same way that when using 2xMSAA on a 1080p framebuffer we are doing two samples per pixel and we are not saying that we are rendering at 4K (although we are running the same number of fragment shader computations as if we were doing so), the inverse is also true: Even if we are using VRS and rendering a 1080p framebuffer with 2x2 VRS, we are not rendering at 720p.

Let's see if I can be clearer with an example.
Let's say we render a single full-screen triangle with a white background on a 4K framebuffer. When that triangle goes by the rasterization step of the graphics pipeline, it will be rasterized using a 4K framebuffer. Later, in the pixel-shading stage of the graphics pipeline, the color of each fragment will be computed:
- If we are not using MSAA or VRS, a fragment shader will be run once for each pixel. A single fragment shader result will be assigned to a single pixel. There's a 1:1 mapping between PS being run and pixels on the framebuffer.
- If we are using MSAA, the fragment shader will be run multiple times for each pixel. Multiple fragment shader results will be assigned to the same pixel. There's an N:1 mapping between PS being run and pixels on the framebuffer.
- If we are using VRS, the fragment shader will be run less than once for each pixel. A single fragment shader result will be assigned to more than one pixel. There's a 1:N mapping between PS being run and pixels on the framebuffer.

The thing that needs to be clear here is that the framebuffer resolution is the same in the three examples.
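As a toy illustration of that 1:N case (my own sketch, nothing resembling a real GPU pipeline): simulate 2x2 VRS by producing one shading result per 2x2 tile and broadcasting it, while the framebuffer keeps its full resolution.

```python
# Toy 2x2 VRS simulation: one shading result is broadcast to a 2x2 pixel
# block, but the framebuffer itself stays at full resolution.
import numpy as np

H, W = 2160, 3840                           # full-resolution framebuffer
coarse = np.random.rand(H // 2, W // 2, 3)  # one "shade" per 2x2 tile
framebuffer = coarse.repeat(2, axis=0).repeat(2, axis=1)

print(framebuffer.shape)                            # (2160, 3840, 3): still 4K
print("shader invocations:", coarse.shape[0] * coarse.shape[1])
print("pixels written:    ", H * W)                 # 4x more pixels than shades
```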

Now, going back to the OP and the infamous Halo images. I've taken a couple images from this one and magnified them at 800%
YgNn6n2.jpg


One from the rock labeled with 720p

Dvijoe6.png


And another one from the rock labeled 4K

ryMdg4C.png



If we look at the boundaries of those rocks on both images we can see that they have the same resolution. It's the inside of each rock, the shading, that's been reduced in the case of the first one, the one with VRS. Can you better understand the point I was trying to make with the texture mapping comparison, now? Does that mean that it's not a native 4K framebuffer? No, it doesn't, it's still rendering the whole framebuffer, even if some pixels are not shaded individually.

Notice that's different than upscaling or checkerboarding, where we are using a lower internal framebuffer that's then merged into a 4K one.
 

cormack12

Gold Member
You are absolutely right about how the fragment shader works in both VRS and texture mapping. I was just comparing the end results as a means of saying that even though you will see 2x2 blocks when a 512x512 texture is rendered into a 1024x1024 framebuffer, that does not make it a 512x512 framebuffer.

I hope you stick around with posts like this. This thread is actually quite informative tbh, hope it stays this way and gets updated throughout the generation.

One question: is this all legit? All I can see on these zoom-ins of videos is a shitty bitrate. Doesn't this affect all this analysis? Shouldn't we be pulling directly from the framebuffer to get true comparisons, or is this adequate? Seems a bit like Victorian quackery.
 

Bo_Hazem

Banned
I hope you stick around with posts like this. This thread is actually quite informative tbh, hope it stays this way and gets updated throughout the generation.

One question: is this all legit? All I can see on these zoom-ins of videos is a shitty bitrate. Doesn't this affect all this analysis? Shouldn't we be pulling directly from the framebuffer to get true comparisons, or is this adequate? Seems a bit like Victorian quackery.

Impressive discussion between PaintTinJr and onesvenus; I'm enjoying it a lot. As written in the OP, it should get updated frequently.

This opens my eyes to the fact that VRS still involves some computational waste, and Matt suggests that the Geometry Engines cut that out even before such a process.



Would love further details on what exactly he means, and any suggestions for editing the OP are more than welcome.
 
Last edited:

cormack12

Gold Member
Impressive discussion between PaintTinJr and onesvenus; I'm enjoying it a lot. As written in the OP, it should get updated frequently.

This opens my eyes to the fact that VRS still involves some computational waste, and Matt suggests that the Geometry Engines cut that out even before such a process.



Would love further details on what exactly he means, and any suggestions for editing the OP are more than welcome.


Yeah for sure, I'd like to avoid going to B3D for this type of stuff. Good thread you started so far (y)
 

VFXVeteran

Banned
You are absolutely right about how the fragment shader works in both VRS and texture mapping. I was just comparing the end results as a means of saying that even though you will see 2x2 blocks when a 512x512 texture is rendered into a 1024x1024 framebuffer, that does not make it a 512x512 framebuffer.


You are totally right that the shading is operating below native FB resolution, but that does not mean it's not native. What I was trying to explain is that the shading resolution and the framebuffer resolution are not, and don't need to be, a 1:1 mapping. In the same way that when using 2xMSAA on a 1080p framebuffer we are doing two samples per pixel and we are not saying that we are rendering at 4K (although we are running the same number of fragment shader computations as if we were doing so), the inverse is also true: Even if we are using VRS and rendering a 1080p framebuffer with 2x2 VRS, we are not rendering at 720p.

Let's see if I can be clearer with an example.
Let's say we render a single full-screen triangle with a white background on a 4K framebuffer. When that triangle goes by the rasterization step of the graphics pipeline, it will be rasterized using a 4K framebuffer. Later, in the pixel-shading stage of the graphics pipeline, the color of each fragment will be computed:
- If we are not using MSAA or VRS, a fragment shader will be run once for each pixel. A single fragment shader result will be assigned to a single pixel. There's a 1:1 mapping between PS being run and pixels on the framebuffer.
- If we are using MSAA, the fragment shader will be run multiple times for each pixel. Multiple fragment shader results will be assigned to the same pixel. There's an N:1 mapping between PS being run and pixels on the framebuffer.
- If we are using VRS, the fragment shader will be run less than once for each pixel. A single fragment shader result will be assigned to more than one pixel. There's a 1:N mapping between PS being run and pixels on the framebuffer.

The thing that needs to be clear here is that the framebuffer resolution is the same in the three examples.

Now, going back to the OP and the infamous Halo images. I've taken a couple images from this one and magnified them at 800%
YgNn6n2.jpg


One from the rock labeled with 720p

Dvijoe6.png


And another one from the rock labeled 4K

ryMdg4C.png



If we look at the boundaries of those rocks on both images we can see that they have the same resolution. It's the inside of each rock, the shading, that's been reduced in the case of the first one, the one with VRS. Can you better understand the point I was trying to make with the texture mapping comparison, now? Does that mean that it's not a native 4K framebuffer? No, it doesn't, it's still rendering the whole framebuffer, even if some pixels are not shaded individually.

Notice that's different than upscaling or checkerboarding, where we are using a lower internal framebuffer that's then merged into a 4K one.

Very good explanation.

I'd like to add that resolution AND the number of pixels being shaded are both crucial to the overall graphics fidelity. I would try to steer away from VRS as much as I can UNLESS the developer knows what they are doing. A non-linear shading algorithm that isn't used "smartly" can add unnecessary artifacts when rendering, similar to AA techniques, except now they'll be on the surfaces of objects instead of on their edges.

While everyone wants 60FPS with every option the developer wants to implement (i.e. ALL the RT features enabled), this is unrealistic right now with this hardware. I'd settle for 1440p, BUT I'd have to have something along the lines of Tensor Cores for ML upscaling of the images - typical shading units on the GPU aren't going to cut it. Not only is this going to be crucial for upsampling final rendered images as discussed here, it is also VERY crucial for RT. All the hardware out now is still nowhere near powerful enough to sample a reasonable number of rays for Monte Carlo integration. We haven't even gone to the next stage in PBR shading with RT yet - multiple importance sampling, the holy grail of path tracing. You can see that type of technique in the Marbles demo from Nvidia.
 
Last edited:

PaintTinJr

Member
You are absolutely right about how the fragment shader works in both VRS and texture mapping. I was just comparing the end results as a means of saying that even though you will see 2x2 blocks when a 512x512 texture is rendered into a 1024x1024 framebuffer, that does not make it a 512x512 framebuffer.


You are totally right that the shading is operating below native FB resolution, but that does not mean it's not native. What I was trying to explain is that the shading resolution and the framebuffer resolution are not, and don't need to be, a 1:1 mapping. In the same way that when using 2xMSAA on a 1080p framebuffer we are doing two samples per pixel and we are not saying that we are rendering at 4K (although we are running the same number of fragment shader computations as if we were doing so), the inverse is also true: Even if we are using VRS and rendering a 1080p framebuffer with 2x2 VRS, we are not rendering at 720p.

Let's see if I can be clearer with an example.
Let's say we render a single full-screen triangle with a white background on a 4K framebuffer. When that triangle goes by the rasterization step of the graphics pipeline, it will be rasterized using a 4K framebuffer. Later, in the pixel-shading stage of the graphics pipeline, the color of each fragment will be computed:
- If we are not using MSAA or VRS, a fragment shader will be run once for each pixel. A single fragment shader result will be assigned to a single pixel. There's a 1:1 mapping between PS being run and pixels on the framebuffer.
- If we are using MSAA, the fragment shader will be run multiple times for each pixel. Multiple fragment shader results will be assigned to the same pixel. There's an N:1 mapping between PS being run and pixels on the framebuffer.
- If we are using VRS, the fragment shader will be run less than once for each pixel. A single fragment shader result will be assigned to more than one pixel. There's a 1:N mapping between PS being run and pixels on the framebuffer.

The thing that needs to be clear here is that the framebuffer resolution is the same in the three examples.

Now, going back to the OP and the infamous Halo images. I've taken a couple images from this one and magnified them at 800%
YgNn6n2.jpg


One from the rock labeled with 720p

Dvijoe6.png


And another one from the rock labeled 4K

ryMdg4C.png



If we look at the boundaries of those rocks on both images we can see that they have the same resolution. It's the inside of each rock, the shading, that's been reduced in the case of the first one, the one with VRS. Can you better understand the point I was trying to make with the texture mapping comparison, now? Does that mean that it's not a native 4K framebuffer? No, it doesn't, it's still rendering the whole framebuffer, even if some pixels are not shaded individually.

Notice that's different than upscaling or checkerboarding, where we are using a lower internal framebuffer that's then merged into a 4K one.
I haven't read the whole of this thoroughly yet - there was one thing I wanted to query at the midpoint - so I will likely reply to this twice :)

You use the term pixel shader, which I presume is because you are talking directly in DirectX parlance(?), whereas I intentionally used the terms fragment and pixel to keep a clear distinction between a rasterization fragment and a viewport pixel - as I believe is the normally accepted parlance/terminology that predates hardware acceleration and was coined by Pixar back in the day, IIRC (from reading Computer Graphics: Principles and Practice?).

The reason I bring this up is that a fragment shader runs once per fragment (AFAIK), not once per pixel, as there will typically be multiple rasterized geometry fragments resolving to one pixel. So in texture mapping (especially if using anisotropic filtering) a texture-mapped triangle, transformed and projected in the viewing frustum, may trigger far more rasterization operations than the texture map's resolution, so much so that the rasterization pipeline doesn't distinguish between textures/LODs when doing its dumb brute-force calculations. So at the very minimum it does 1 fragment shader op per pixel, whereas VRS, as you explained, doesn't guarantee to do even that bare minimum of work, and therefore IMHO still falls massively below the threshold for using the word 'native' - it is a shortcut to do less computation than was needed but is perceptually argued to be not needed - the outline of the VRS-coloured geometry being something it cannot cheat on, going by the excellent images you provided.
 

onesvenus

Member
One question: is this all legit? All I can see on these zoom-ins of videos is a shitty bitrate. Doesn't this affect all this analysis? Shouldn't we be pulling directly from the framebuffer to get true comparisons, or is this adequate?
You are right. We are assuming that all those videos and images are taken from direct-feed videos, straight from the output of the console. As far as I remember, and Bo_Hazem should clarify that in the OP, all these images come from direct-feed videos, so it's the best thing we'll ever have for running this kind of analysis.

You use the term pixel shader, which I presume is because you are talking directly in DirectX parlance(?), whereas I intentionally used the terms fragment and pixel to keep a clear distinction between a rasterization fragment and a viewport pixel
Sorry about that. I should be more clear using both terms as I have a bad habit of using both indistinctively.

So at the very minimum it does 1 fragment shader op per pixel, whereas VRS, as you explained, doesn't guarantee to do even that bare minimum of work, and therefore IMHO still falls massively below the threshold for using the word 'native'
For me being native means using the full framebuffer throughout the pipeline, not doing at least 1 fragment shader op per pixel.
 

Bo_Hazem

Banned
You are right. We are assuming that all those videos and images are taken from direct-feed videos, straight from the output of the console. As far as I remember, and Bo_Hazem should clarify that in the OP, all these images come from direct-feed videos, so it's the best thing we'll ever have for running this kind of analysis.

There are no direct feeds available, but those are the best possible versions out there. I've already put it inside a spoiler tag in the OP:

You can use the extension on Firefox:

YouTube Video and Audio Downloader (Dev Edt.)

And install additional files on Windows to merge the video WebM with the audio WebM file. Then use VLC Media Player to extract high-quality, uncompressed PNG screenshots:

gfdhfghgh.jpg


adsa.jpg


rdr2trailer.jpg

A direct feed should be at least 750MB/min for 8-bit, and it might be 10-bit as well. YouTube gimps 4K@60fps down to only a 68Mbps bitrate, but these are the only videos we can work with, and they are surprisingly clean when downloaded as described above.

youtubedownlod.jpg
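As an alternative to the extension-plus-VLC route described above, the same merge-and-grab can be done with ffmpeg driven from a short Python wrapper (a sketch only; the filenames and timestamp are placeholders, and ffmpeg has to be installed separately):

```python
# Sketch: merge separately downloaded video/audio WebM streams, then grab one
# PNG frame with ffmpeg. Filenames and the timestamp are placeholders.
import subprocess

# Stream-copy the video and audio tracks into a single WebM container.
subprocess.run(["ffmpeg", "-i", "video.webm", "-i", "audio.webm",
                "-c", "copy", "merged.webm"], check=True)

# Seek to a timestamp and dump a single frame as a PNG screenshot.
subprocess.run(["ffmpeg", "-ss", "00:01:23", "-i", "merged.webm",
                "-frames:v", "1", "frame.png"], check=True)
```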
 
Last edited:

GymWolf

Member
Almost as cool as Bo Duke.

It's gonna be fun when this technique stops working and we have to trust what Sony and MS say about the resolution of their games.
 
Last edited:

Bo_Hazem

Banned
Then we can only keep speculating 😜

These 2 aren't mine though:

HaloVRS.jpg


HaloVRS2.jpg



Taken from official Halo screenshots, direct feed, PNG:

Halo-Infinite-2020_Ascension_Demo_Campaign_08_4k.png


Halo-Infinite-2020_Ascension_Demo_Campaign_10_4k.png


For official PS5 direct-feed PNG screenshots, you can find them in the PlayStation.Blog Flickr account:


You get inside "all sizes" and pick the original file.



49996357581_16757c3081_o.png


The difference is that we don't know whether it's the performance or the visual mode.
 

PaintTinJr

Member
.....


Sorry about that. I should be more clear using both terms as I have a bad habit of using both indistinctively.


For me being native means using the full framebuffer throughout the pipeline, not doing at least 1 fragment shader op per pixel.

But how is that a "full" framebuffer? If we say the pipeline is - going back to the earliest pixel counting stuff by DF, so the end of the PS2, GameCube, original Xbox gen IIRC - vertex shading plus fragment shading, then with VRS, portions of the full framebuffer are silhouetted out and filled internally by portions of the framebuffer (fragment shader pipeline) that are significantly below the "native" framebuffer resolution.

Would you consider UE5's demo - which uses 4 polys per pixel, so 4x3 verts gives a minimum of 12 potential fragments per pixel - to be native 4K if they stencil-masked the border and rendered a static image in the stencil-masked area to make up for the resolution difference between 1440p and 4K? Thinking back through a lot of face-offs, the times when largely static HUDs were used to that effect didn't qualify as native, IIRC.

Having a quick search for more info about VRS, the initial presentation AFAIK says it helps save memory bandwidth and computation, and then explains the artefacts it produces - which left me with the impression it is a technology to help hardware that isn't quite capable enough for a rendering problem (say a laptop trying to do 1080p60 in a game), providing a nearly linear method of dropping workload to hit the technical objective but with compromised IQ. I didn't get the impression it was a technology that could match the brute-force native renderer's PSNR at a lower computational load and then use the saved computation to exceed the IQ and PSNR of the brute-force (dumb) native solution.

In the hypothetical situation where a brute-force 1440p60 (non-VRS) Halo reveal using the same assets and shaders was then upscaled to 4K60 and had superior IQ to the "native" 4K60 VRS version, what would you think about that? Would you still consider the VRS solution to be native?
 

J_Gamer.exe

Member
Interesting thread.

So we can't confirm with any degree of certainty the demon souls render resolution?

If not, then how did Digital Foundry?

Were DF right or wrong? Or was that their best guess?

They said there's evidence of 1440p; is that the case?

Same with Halo.

Are there any signs of upscaling techniques on show in demon souls?

I know he's been in the thread, but maybe NXGamer can throw his hat into the ring here?

Been following his analysis for a long time now and interested to have another perspective, if he wants to contribute.

Sorry if I have missed information already covering some of the points I made.
 

Bo_Hazem

Banned
Interesting thread.

So we can't confirm with any degree of certainty the demon souls render resolution?

If not, then how did Digital Foundry?

Were DF right or wrong? Or was that their best guess?

They said there's evidence of 1440p; is that the case?

Same with Halo.

Are there any signs of upscaling techniques on show in demon souls?

I know he's been in the thread, but maybe NXGamer can throw his hat into the ring here?

Been following his analysis for a long time now and interested to have another perspective, if he wants to contribute.

Sorry if I have missed information already covering some of the points I made.

They guessed 1440p for DS with no evidence on their side, while it shows a 100% 4K image, whether it's AI-reconstructed or native.

With Halo they just said native 4K, while MS said "up to 4K".

Final image quality speaks for itself anyway, and we shall investigate other games as well.
 
Last edited:

PaintTinJr

Member
They guessed 1440p for DS with no evidence on their side, while it shows a 100% 4K image, whether it's AI-reconstructed or native.

With Halo they just said native 4K, while MS said "up to 4K".

Final image quality speaks for itself anyway, and we shall investigate other games as well.
Even though it seemed like they guessed, I'm pretty sure they've used pixel counting as a smokescreen at times to cover for them being the messenger for Sony, without us knowing it is official info.

IMHO Sony don't want us thinking that it is native 4K if it isn't, and they'll know themselves that the IQ is through the roof, even in performance mode, to the extent that the resolution isn't an issue; however I also suspect they don't want to release that info directly and own it when it is still potentially a negative and so might prefer to let DF make a claim - that is probably true, going by the many times they've claimed a resolution and later been correct.

The other thing that leans towards that thinking is that Epic told DF what the UE5 Demo was running at, as DF would never have known without the direct info.
 

J_Gamer.exe

Member
They guessed 1440p for DS with no evidence on their side, while it shows a 100% 4K image, whether it's AI-reconstructed or native.

With Halo they just said native 4K, while MS said "up to 4K".

Final image quality speaks for itself anyway, and we shall investigate other games as well.

Seems an odd thing to do; is there nothing at all to suggest that res? What about that edge showing aliasing?

As for Halo surely pixel counts must have been done....

Insomniac have used temporal injection in the past, which is apparently very good; maybe they have improved on that method?

UE5 defied pixel counting, so it's getting to the point now where it's unnoticeable, and I think that was temporal...

Really need more experts in here as not something I know much about.

But yes more analysis of more games to back up methods and compare.

Even DLSS has some artifacts that give it away; there was one Alex showed in a DF video of Death Stranding where it turned dots into a line or something... I know, I know, dots and a line, let me know if I'm getting too technical for some 😁
 
Last edited:

Bo_Hazem

Banned
Even though it seemed like they guessed, I'm pretty sure they've used pixel counting as a smokescreen at times to cover for them being the messenger for Sony, without us knowing it is official info.

IMHO Sony don't want us thinking that it is native 4K if it isn't, and they'll know themselves that the IQ is through the roof, even in performance mode, to the extent that the resolution isn't an issue; however I also suspect they don't want to release that info directly and own it when it is still potentially a negative and so might prefer to let DF make a claim - that is probably true, going by the many times they've claimed a resolution and later been correct.

The other thing that leans towards that thinking is that Epic told DF what the UE5 Demo was running at, as DF would never have known without the direct info.

I personally think it's a "stunt" from DF, and I doubt Sony reached out to them for such a thing. If it is reconstructed from a lower resolution, then they don't want to talk about it until RDNA2 has been officially revealed, so they can talk about their other NDA-tied stuff. The patented AI image reconstruction from Sony sounds exclusive to Sony, but there might be some hardware acceleration behind it that's tied to RDNA2, so they can't talk about it yet; the same goes for MS if they have something similar.

Seems an odd thing to do; is there nothing at all to suggest that res? What about that edge showing aliasing?

As for Halo surely pixel counts must have been done....

Insomniac have used temporal injection in the past, which is apparently very good; maybe they have improved on that method?

UE5 defied pixel counting, so it's getting to the point now where it's unnoticeable, and I think that was temporal...

Really need more experts in here as not something I know much about.

But yes more analysis of more games to back up methods and compare.

Even DLSS has some artifacts that give it away; there was one Alex showed in a DF video of Death Stranding where it turned dots into a line or something... I know, I know, dots and a line, let me know if I'm getting too technical for some 😁

That jagginess could've been a result of VRS; we also need to remember that the game hasn't reached its "gold" build yet, so we can give them a pass for that one little flaw that happened so quickly. If DF had spotted it, I assure you they would have talked about it, but I'm confident they didn't. Also, the motion blur seems to be perfect, and motion blur can easily expose DLSS or checkerboarding.

With the UE5 demo it's pretty hard to really spot soft edges, but the assets themselves appear slightly soft. DS is much sharper than the UE5 demo, but of course the UE5 demo is just a level above with its photorealism.

Having 20 million polygons per frame, out of billions of source polygons, is just insane compared to only about 3.7 million pixels (1440p). More polygons should produce a near-perfect color blend per pixel, just like real video footage looks.

Many think this is exclusive to UE5, yet Sony's Atom View came up with it around 2017:

Vimeo 4K version of the video:




One asset, an advanced GPU-based filling algorithm, no LODs. It's apparent that Epic Games is grateful to Sony for lending them such tech for UE5. And its asset data is streamable either from local storage or via the cloud. It could be implemented in other engines, like a plugin for UE, and probably in Sony WWS engines as well. They work with Sony Pictures Imageworks on the movie side of things.

 
Last edited:

J_Gamer.exe

Member
I personally think it's a "stunt" from DF, and I doubt Sony reached out to them for such a thing. If it is reconstructed from a lower resolution, then they don't want to talk about it until RDNA2 has been officially revealed, so they can talk about their other NDA-tied stuff. The patented AI image reconstruction from Sony sounds exclusive to Sony, but there might be some hardware acceleration behind it that's tied to RDNA2, so they can't talk about it yet; the same goes for MS if they have something similar.



That jagginess could've been a result of VRS; we also need to remember that the game hasn't reached its "gold" build yet, so we can give them a pass for that one little flaw that happened so quickly. If DF had spotted it, I assure you they would have talked about it, but I'm confident they didn't. Also, the motion blur seems to be perfect, and motion blur can easily expose DLSS or checkerboarding.

With the UE5 demo it's pretty hard to really spot soft edges, but the assets themselves appear slightly soft. DS is much sharper than the UE5 demo, but of course the UE5 demo is just a level above with its photorealism.

Having 20 million polygons per frame, out of billions of source polygons, is just insane compared to only about 3.7 million pixels (1440p). More polygons should produce a near-perfect color blend per pixel, just like real video footage looks.

Many think this is exclusive to UE5, yet Sony's Atom View came up with it around 2017:

Vimeo 4K version of the video:




One asset, an advanced GPU-based filling algorithm, no LODs. It's apparent that Epic Games is grateful to Sony for lending them such tech for UE5. And its asset data is streamable either from local storage or via the cloud. It could be implemented in other engines, like a plugin for UE, and probably in Sony WWS engines as well. They work with Sony Pictures Imageworks on the movie side of things.



Maybe someone with Twitter or another way to contact DF could tweet them asking how they got that figure.

If they are seeing something others aren't we'd love to know to see how it was determined and give us more clues to the rendering tech used....
 

NXGamer

Member
Interesting thread.

So we can't confirm with any degree of certainty the Demon's Souls render resolution?

If not, then how did Digital Foundry?

Were DF right or wrong, or was that their best guess?

They said there's evidence of 1440p; is that the case?

Same with Halo.

Are there any signs of upscaling techniques on show in Demon's Souls?

I know he's been in the thread, but maybe NXGamer NXGamer can throw his hat into the ring here?

I've been following his analysis for a long time now and I'm interested in another perspective, if he wants to contribute.

Sorry if I have missed information already covering some of the points I made.
Thanks for all the comments in here with my name tagged; I will be taking a look into DS this week because of it.

Does anyone have a link to the 4K screens in here? Of the ones on the blog, only ONE is 4K and counts out as 4K, but it may be from the RT/Quality mode.

Thanks
 

PaintTinJr

Member


The two video clips do a great job of showing how UE5 is going to be so effective at moving game visuals beyond where they are now, and how it will stay performant because of the PS5 I/O and async compute: partially streaming AtomView atoms (probably 4 polygons each :) ) from the GlobeScan (I should trademark that name for their 360-degree by 180-degree 8K image-layer scanning turned into a data file) and then independently rendering unordered atoms by the millions per frame with async compute, all without the draw-call overhead of a graphics API.

The softness in the UE5 demo is probably from where they have removed overlapping atoms at the intersections of different image layers, which would otherwise result in asset noise, as explained in the Vimeo video.

I wouldn't be surprised if DS is higher res than 1440p. Historically, IIRC, Richard never commits to a resolution he isn't 100% sure about once a generation is under way, but at the pre-launch start of a generation, with so much in play for DF's relevance, who knows? IMO nothing keeps someone part of the gaming conversation like a controversial claim.

If they didn't get their info from Sony, they might have taken one of the Dark Souls games on PC, adjusted its render resolution, and compared 1:1 using an asset shared between the Demon's and Dark Souls games until they got the same aliasing, then worked the resolution out from that.
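
For what it's worth, here's a minimal sketch in Python of the crudest form of that kind of check. It assumes the capture is lossless and was upscaled with plain nearest-neighbour sampling, with no anti-aliasing, sharpening or reconstruction on top, which is almost never true of a real game, so treat it as an illustration of the principle rather than a working pixel counter. The file name is just a placeholder.

```python
import numpy as np
from PIL import Image  # pip install pillow numpy

def estimate_internal_height(path: str) -> int:
    """Crude internal-resolution estimate for a lossless screenshot,
    assuming a plain nearest-neighbour upscale with no AA, sharpening
    or reconstruction on top (rarely true of a real game capture)."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.int16)
    # Under nearest-neighbour scaling, some output rows are exact repeats
    # of the row above; the rows that differ approximate the number of
    # distinct source rows, i.e. the internal vertical resolution.
    changed = np.any(img[1:] != img[:-1], axis=1)
    return 1 + int(np.count_nonzero(changed))

# A 2160-row capture returning ~1440 distinct rows would point at a 1440p
# internal resolution. Video compression or any AA breaks this immediately,
# which is why real pixel counting is done by eye on clean, hard edges.
# print(estimate_internal_height("demons_souls_capture.png"))  # placeholder file
```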
 

Bo_Hazem

Banned

Hello Michael! Well, here are 4K screens I've made from the official trailer, Performance mode:


And my other albums, in case they help you out with any other game:


And for official screenshots, go to the PlayStation.Blog official Flickr account, open a photo's "All sizes" page and choose "Original" to get the original PNG file.


Hope this helps!
 

Bo_Hazem

Banned

Thanks a lot for the great input! Personally, I'll wait for Bluepoint to clear it up, as I think they might be using that patented AI image reconstruction and it's under NDA until the RDNA2 reveal or something. If it is AI, then that's a bigger step than checkerboarding and even DLSS 2.0!
 

NXGamer

Member
Thanks Bo.
 

Bo_Hazem

Banned
BTW, guys, the 4K quality mode of DIRT 5 on XSX (4K@60fps) seems like it might not even be native 4K. You don't even need to zoom; you can easily see the aliasing on the car. It could be VRS as well:

vlcsnap-2020-10-12-16h33m13s703.png




All are PNG files, extracted from the WebM files of this video:




And a screenshot taken from YouTube:

vlcsnap-2020-10-12-16h33m13s703-2.png
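
For anyone who wants to pull their own frames the same way, here's a rough sketch of the extraction step in Python, assuming ffmpeg is installed and the WebM stream has already been downloaded; the file names and timestamp are placeholders, not the exact ones used above:

```python
import subprocess

# Grab a single frame from a downloaded WebM at a chosen timestamp and save
# it as PNG. PNG adds no further compression, though the frame can only be
# as clean as the VP9 stream it came from.
subprocess.run([
    "ffmpeg",
    "-ss", "00:01:23.400",        # seek to the moment you want to inspect
    "-i", "dirt5_gameplay.webm",  # placeholder input file
    "-frames:v", "1",             # export just one frame
    "frame_0123.png",             # placeholder output name
], check=True)
```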
 
Last edited:

PaintTinJr

Member


To be fair, I would leave DIRT out of comparisons with other driving games unless they also support full vehicle damage. That's a huge trade-off in the visuals department, and it has been ever since the original Colin McRae Rally could be compared with Ridge Racer and the like on PS1. This does look like it has some nice reflections in water and so on, though, and in all likelihood it will still be the best-handling rally game on the market.

If it has to look as good as everything else, it won't be able to do deformation as a multiplatform title (maybe on PS5 as an exclusive, because of the I/O complex, maybe), so being harsh about it and having it do badly would just cost us gaming variety, especially for a launch title IMO.
 
Last edited:

Bo_Hazem

Banned

As I investigated further, in a later frame the car seems to come back to full 4K. It doesn't have RT yet; maybe they got the dev kits pretty late. DF said it drops to 1800p to help keep it at 60fps, with a few dips below but staying near 60fps most of the time in the 4K quality mode. It's referred to as "dynamic" 4K, which seems to be the case with most demanding games.
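
None of us know how Codemasters' scaler actually decides when to drop, but the general idea behind a frame-time-driven dynamic resolution controller is simple enough; here's a toy sketch with every constant invented purely for illustration:

```python
def next_render_height(frame_time_ms: float, current_h: int,
                       target_ms: float = 16.7,
                       min_h: int = 1800, max_h: int = 2160) -> int:
    """Toy dynamic-resolution heuristic: nudge the internal render height
    down when the last frame came close to the 60 fps budget and back up
    when there is headroom. Real engines use smoothed GPU timings, scale
    per axis and so on; all the constants here are made up."""
    if frame_time_ms > target_ms * 0.95:      # close to blowing the budget
        step = -72                            # drop resolution quickly
    elif frame_time_ms < target_ms * 0.80:    # comfortable headroom
        step = +36                            # climb back up slowly
    else:
        step = 0
    return max(min_h, min(max_h, current_h + step))

# A run of ~17 ms frames at 2160 rows would step the height down toward the
# 1800 floor, the sort of "dynamic 4K" behaviour DF described for DIRT 5.
```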
 