
Digital Foundry - Playstation 5 Pro specs analysis, also new information

Newari

Member
We are starting to see a recurring theme where Sony 1st party ports that flex PS5 memory management capabilities get labeled as "bad PC ports". Funny because I never heard anyone complain about Nixxes quality or work ethic until this gen.
The ports are decent and even good when the GPU doesn't run out of VRAM, but these engines tweaked for the PS5 memory subsystem don't seem to translate well to PC. This VRAM-related stuff happened last gen too, but back then it was 2 and 3GB graphics cards and nobody cared because we had new, well-priced cards with more VRAM (AMD RX 480 8GB, $229 MSRP). This time it's happening to 8 (10 with RT) GB cards, and VRAM has been stalling at that capacity since Pascal (2016), so people complain (RTX 4060 Ti 8GB, $399 MSRP).

Friendly bet with absolutely nothing on the line - there will be zero games where a 3080 can match PS5 Pro settings.

Edit: zero RT games.
Yup. 10GB won't be enough to match PS5 Pro settings. The 3080 is such a stupid card with its gimped VRAM pool.
 

Zathalus

Member
even with something like 20%, in the end it will eclipse the Series X's lifetime sales. niche indeed.
I doubt it will reach 20%; the PS4 Pro couldn't manage that in 4 years. And niche is in relation to total PS5 sales, so bringing up other console sales is irrelevant.
 

SmokSmog

Member
My 10GB 3080 was running out of VRAM in Forbidden West with max textures ON.

Max = 12GB VRAM needed
High = 10GB was fine
Medium for 8GB VRAM GPUs.

High already looked inferior, medium looks terrible.

The 3070 with DLSS has the power to blow the PS5 out of the water in this game, but it doesn't have enough VRAM, so it looks like crap or runs like crap if you go for high/max textures.
 
Last edited:

yamaci17

Member
The port is fine for the most part. The PS5 does outperform a 3070 but not a 2080 Ti and that's because the latter has an extra 3GB of VRAM to play with.

Also, even at 1080p, all those RT effects add a substantial amount of pressure on the VRAM. The PS5 only uses RT reflections at around the High setting in its Performance Mode.


Because you're running the game at much higher settings but a lower resolution. You'd get better frame time stability running at PS5 settings at 1440p with RT reflections than at 1080p max settings with all RT effects; the former uses less VRAM.
This is what this port amounts to; it is heavily PCIe limited even at low textures/low settings/no ray tracing:


Just look at the PCIe bandwidth usage, no other game does this. This is at 1080p/DLSS with low settings + low textures + no ray tracing. There's 2.6 GB worth of empty, usable DXGI budget on the GPU, but the game still uses PCIe and shared VRAM anyway. It is just unavoidable. Exact same thing with TLOU. Notice how the framerate tanks with maxed-out GPU usage when PCIe usage goes to 11-12 GB/s. Performance comes back to normal at 7-8 GB/s, but even 7-8 GB/s is excessive and limits the GPU anyway.
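For reference, the "DXGI budget" and shared-memory numbers being quoted here are exposed through IDXGIAdapter3::QueryVideoMemoryInfo. A minimal standalone sketch of how an overlay (or the game itself) can read the local VRAM budget versus non-local (shared, over-PCIe) usage; this is illustrative, not Nixxes' actual code:

```cpp
// Minimal sketch: read the "DXGI budget" and shared-memory figures via
// IDXGIAdapter3::QueryVideoMemoryInfo. Local = on-board VRAM, non-local =
// shared system memory reached over PCIe.
#include <windows.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter))) return 1;

    ComPtr<IDXGIAdapter3> adapter3;
    if (FAILED(adapter.As(&adapter3))) return 1;

    DXGI_QUERY_VIDEO_MEMORY_INFO local = {}, nonLocal = {};
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &local);
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_NON_LOCAL, &nonLocal);

    // Budget = what the OS currently lets this process use; CurrentUsage = what
    // it actually uses. "2.6 GB of empty usable budget" is Budget - CurrentUsage
    // on the local segment.
    printf("Local  (VRAM) : usage %.2f GB / budget %.2f GB\n",
           local.CurrentUsage / 1e9, local.Budget / 1e9);
    printf("Shared (PCIe) : usage %.2f GB / budget %.2f GB\n",
           nonLocal.CurrentUsage / 1e9, nonLocal.Budget / 1e9);
    return 0;
}
```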





Idiotic port. There's no excuse that it still transfers data over PCIe with 2.6 GB of empty VRAM available. It is literally hard-coded to do this; there really is no avoiding it. So even if you lower settings extremely and reduce VRAM requirements immensely, the game refuses to let go of the PCIe/shared VRAM usage.

The reason it is inexcusable is that Avatar: Frontiers of Pandora and Alan Wake 2 proved you can have texture streaming without relying on shared VRAM and causing PCIe bandwidth stalls. If those games can do it, so should Ratchet and The Last of Us.

It simply makes no sense to rely on shared VRAM so much when other engines can use a buffer space and stream textures without causing GPU performance stalls. Best case scenario, you get texture downgrades in the far distance that you will most likely not notice; worst case scenario, you get a noticeable texture quality reduction, but without a big hit to GPU performance, and you can at least retain texture quality by changing other settings or reducing resolution.

With Nixxes ports, there simply is no winning. By design, they always use a fair amount of shared VRAM. So you can dial in settings that use 4.5 GB of VRAM (1080p, no ray tracing, low textures, DLSS) and still get a GPU-bound, VRAM-related performance hit on top of subpar textures. In most other engines, you get better texture quality without such a hit to the GPU. This is especially apparent in games like Jedi Survivor (I know, funny), Avatar and Alan Wake 2. I played Jedi Survivor at 4K/DLSS Quality and I was quite VRAM limited; the game was streaming textures all the time, but I never got a GPU-bound performance hit, and the only noticeable texture quality downgrade was in the very far distance, especially mountains etc. Even then, it looked very subtle, like a LOD change. Alan Wake 2 and Avatar prove that you can manage VRAM much better than whatever Nixxes is doing.
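To illustrate the alternative being described here (keep the texture working set inside the local VRAM budget and drop distant mips instead of spilling into shared memory), a hedged sketch of that kind of heuristic. Every name in it is hypothetical, not any engine's real API; it just shows the shape of the policy:

```cpp
// Sketch of a residency policy: degrade distant textures rather than overflow
// into non-local (shared) memory over PCIe. All names here are made up.
#include <cstdint>
#include <cstdio>

struct TextureAsset {
    uint64_t fullMipChainBytes; // size with every mip resident
    float    screenCoverage;    // 0..1, how large the asset appears on screen
};

// How many top mips to drop so the asset still fits the local budget.
// Each +1 bias roughly quarters the memory cost of the mip chain.
int selectMipBias(const TextureAsset& tex,
                  uint64_t residentBytes,     // bytes already resident this frame
                  uint64_t localBudgetBytes)  // local VRAM budget for textures
{
    int bias = 0;
    // Distant / tiny-on-screen textures give up quality first.
    if (tex.screenCoverage < 0.05f) bias += 1;
    // Keep dropping mips while this asset would push us past the local budget,
    // rather than overflowing into shared system memory.
    while (bias < 4 &&
           residentBytes + (tex.fullMipChainBytes >> (2 * bias)) > localBudgetBytes) {
        ++bias;
    }
    return bias;
}

int main() {
    TextureAsset rock{256ull << 20, 0.02f};                 // 256 MB texture, far away
    int bias = selectMipBias(rock, 7ull << 30, 8ull << 30); // 7 GB resident, 8 GB budget
    printf("mip bias for distant asset: %d\n", bias);       // quality drops, no PCIe stall
    return 0;
}
```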
 
Last edited:

yamaci17

Member
and this is what happens if you subject it to pcie 1

1080p, dlss, low settings, low textures, no ray tracing, 3.4 gb dxgi vram usage (3.8 gb usable budget is available for game to use). instead uses 2.8 gb of shared mem, tries to transfer data over pcie UNNECESSARILY despite there being plenty of free local VRAM


tanks to 10 fps. only uses 10-20 gb/s of GPU bandwidth

1080p, dlss, high/ultra settings, ray traced global illumination, 6.2 gb dxgi vram usage (1.1 gb usable budget is available for game to use). only uses 1 gb of shared mem but does not actually transfer data over pcie, because it simply does not make sense to do so, since the game has actual free VRAM that it can put data into



keeps pushing 45-50 fps no problem. uses 150-200 gb/s of GPU bandwidth as it should

So yes, it is a bad port from the perspective of VRAM management, that much is for certain, especially when the game refuses to use free, available VRAM and tanks performance as a result.

You never want to rely on PCIe; it will slow down any GPU out there. A GPU can push 150-800 GB/s of local bandwidth, while PCIe 4.0 x16 is roughly 32 GB/s at best.
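Putting rough numbers on that gap (theoretical peaks only, with the 3080 used purely as an example card):

```cpp
// Back-of-the-envelope numbers behind "you never want to rely on PCIe".
// Theoretical peaks only; sustained throughput is lower in practice.
#include <cstdio>

int main() {
    // PCIe 4.0: 16 GT/s per lane, 128b/130b encoding, 16 lanes, per direction.
    const double pcie4_x16_gbs = 16.0 * (128.0 / 130.0) / 8.0 * 16.0; // ~31.5 GB/s
    // RTX 3080 10GB: 320-bit GDDR6X at 19 Gbps per pin.
    const double vram_3080_gbs = 320.0 / 8.0 * 19.0;                  // 760 GB/s

    printf("PCIe 4.0 x16 : ~%.1f GB/s per direction\n", pcie4_x16_gbs);
    printf("3080 GDDR6X  : ~%.1f GB/s\n", vram_3080_gbs);
    printf("Ratio        : ~%.0fx\n", vram_3080_gbs / pcie4_x16_gbs);
    // Anything the GPU must pull from shared system memory every frame is read
    // roughly 20-25x slower than the same data sitting in local VRAM.
    return 0;
}
```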
 
Last edited:

IDWhite

Member
This is what this port amounts to; it is heavily PCIe limited even at low textures/low settings/no ray tracing:

Just look at the PCIe bandwidth usage, no other game does this. This is at 1080p/DLSS with low settings + low textures + no ray tracing. There's 2.6 GB worth of empty, usable DXGI budget on the GPU, but the game still uses PCIe and shared VRAM anyway. It is just unavoidable. Exact same thing with TLOU. Notice how the framerate tanks with maxed-out GPU usage when PCIe usage goes to 11-12 GB/s. Performance comes back to normal at 7-8 GB/s, but even 7-8 GB/s is excessive and limits the GPU anyway.

Idiotic port. There's no excuse that it still transfers data over PCIe with 2.6 GB of empty VRAM available. It is literally hard-coded to do this; there really is no avoiding it. So even if you lower settings extremely and reduce VRAM requirements immensely, the game refuses to let go of the PCIe/shared VRAM usage.

The reason it is inexcusable is that Avatar: Frontiers of Pandora and Alan Wake 2 proved you can have texture streaming without relying on shared VRAM and causing PCIe bandwidth stalls. If those games can do it, so should Ratchet and The Last of Us.

It simply makes no sense to rely on shared VRAM so much when other engines can use a buffer space and stream textures without causing GPU performance stalls. Best case scenario, you get texture downgrades in the far distance that you will most likely not notice; worst case scenario, you get a noticeable texture quality reduction, but without a big hit to GPU performance, and you can at least retain texture quality by changing other settings or reducing resolution.

With Nixxes ports, there simply is no winning. By design, they always use a fair amount of shared VRAM. So you can dial in settings that use 4.5 GB of VRAM (1080p, no ray tracing, low textures, DLSS) and still get a GPU-bound, VRAM-related performance hit on top of subpar textures. In most other engines, you get better texture quality without such a hit to the GPU. This is especially apparent in games like Jedi Survivor (I know, funny), Avatar and Alan Wake 2. I played Jedi Survivor at 4K/DLSS Quality and I was quite VRAM limited; the game was streaming textures all the time, but I never got a GPU-bound performance hit, and the only noticeable texture quality downgrade was in the very far distance, especially mountains etc. Even then, it looked very subtle, like a LOD change. Alan Wake 2 and Avatar prove that you can manage VRAM much better than whatever Nixxes is doing.


You are comparing dedicated console game engines with PC game engines, and unified system memory with a split memory architecture. You can't simply carry one approach over from one system to the other.

I'm not going to go into small details and long explanations; all I can say is that this PCIe usage is normal for these types of games because they need to emulate the PS5's I/O and decompression capabilities on a different memory architecture.
 
My 10GB 3080 was running out of VRAM in Forbidden West with max textures ON.

Max = 12GB VRAM needed
High = 10GB was fine
Medium for 8GB VRAM GPUs.

High already looked inferior, medium looks terrible.

The 3070 with DLSS has the power to blow the PS5 out of the water in this game, but it doesn't have enough VRAM, so it looks like crap or runs like crap if you go for high/max textures.

That's what I was suggesting yesterday...

As the PS5 Pro will have almost 14 GB, the max PC VRAM requirement could increase, especially on Nixxes ports.
 

Bojji

Member
That's what I was suggesting yesterday...

As the PS5 Pro will have almost 14 GB, the max PC VRAM requirement could increase, especially on Nixxes ports.

No game uses 12.5GB of memory on PS5 as just video memory, the same way no game on the Pro will use 13.7GB as just VRAM.

People are also forgetting that the majority of Sony games released on PC are ports of games created solely for PS5, without any knowledge that they would be ported to machines with different memory architectures.

But games created now should be made with the PC version (released a few months later or so) in mind, so maybe this stupid VRAM and PCIe bandwidth usage will get better?
 
Last edited:
Keep seeing people forgetting that the VRAM in these consoles needs to act like VRAM and system RAM at the same time. It's not really a lot of memory when you put that into perspective. Neither the PS5 nor the Series X uses its VRAM pool just for video memory, the CPU eats its chunk too. SlimySnake touched on this a few posts back. Bojji :messenger_ok:
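As a rough illustration of that split (the ~12.5 GB game-available figure is the commonly cited one; the CPU-side share below is an assumption for illustration and varies per game):

```cpp
// Illustrative split of the PS5's unified 16 GB GDDR6 pool.
#include <cstdio>

int main() {
    const double total_gb      = 16.0; // unified GDDR6
    const double os_reserve_gb = 3.5;  // approx. system software reservation
    const double game_pool_gb  = total_gb - os_reserve_gb;  // ~12.5 GB
    const double cpu_side_gb   = 3.5;  // game code, sim state, audio, etc. (assumed)
    const double gpu_side_gb   = game_pool_gb - cpu_side_gb;

    printf("Game-available pool : %.1f GB\n", game_pool_gb);
    printf("CPU-side data (est.): %.1f GB\n", cpu_side_gb);
    printf("Left for GPU assets : %.1f GB\n", gpu_side_gb);
    // i.e. the effective "VRAM" is closer to 8-10 GB than 16 GB, which is why
    // 8-12 GB desktop cards end up in the same ballpark as the console.
    return 0;
}
```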
 
Last edited:

IDWhite

Member
Keep seeing people forgetting that the VRAM in these consoles needs to act like VRAM and system RAM at the same time. It's not really a lot of memory when you put that into perspective. Neither the PS5 nor the Series X uses its VRAM pool just for video memory, the CPU eats its chunk too. SlimySnake touched on this a few posts back. Bojji :messenger_ok:

That's right. The PS5 and Series X are short on memory for certain things, but system memory budgets have not increased as much as video memory ones. And on top of that you now have super fast SSDs and dedicated I/O hardware to make better use of that unified memory.
 
The ports are decent and even good when the GPU doesn't run out of VRAM, but these engines tweaked for the PS5 memory subsystem don't seem to translate well to PC. This VRAM-related stuff happened last gen too, but back then it was 2 and 3GB graphics cards and nobody cared because we had new, well-priced cards with more VRAM (AMD RX 480 8GB, $229 MSRP). This time it's happening to 8 (10 with RT) GB cards, and VRAM has been stalling at that capacity since Pascal (2016), so people complain (RTX 4060 Ti 8GB, $399 MSRP).


Yup. 10GB won't be enough to match PS5 Pro settings. The 3080 is such a stupid card with its gimped VRAM pool.
These "old" PS5 first party games weren't made with a PC port in mind.
 

SlimySnake

Flashless at the Golden Globes
We are starting to see a recurring theme where Sony 1st party ports that flex PS5 memory management capabilities get labeled as "bad PC ports". Funny because I never heard anyone complain about Nixxes quality or work ethic until this gen.
If they are fixed post launch then yes they are bad ports.

GOW 2018 and Horizon Zero Dawn had the same issues, and they are PS4 games with no secret Cerny I/O memory management sauce. Besides, whatever memory management the PS5 is doing with its 5.5 GB/s SSD, a PC can do faster with its DDR RAM at around 100 GB/s for DDR4 and well above that for DDR5. If the bottleneck is what you say it is, then surely 100 GB/s is faster than 5.5 GB/s.

If the PS5 Pro outperforms the 3080 in RT games then I'd be very impressed, but I'd keep my expectations low if I were you. The 2x increase Sony themselves quoted gets them to around a 6800 XT, which is way worse than the 3080 in RT games. You want the 4x increase to get close to the 3080, and we just don't know how many games will hit that 4x mark.
 

Gaiff

SBI’s Resident Gaslighter
If they are fixed post launch then yes they are bad ports.

GOW 2018 and Horizon Zero Dawn had the same issues, and they are PS4 games with no secret Cerny I/O memory management sauce. Besides, whatever memory management the PS5 is doing with its 5.5 GB/s SSD, a PC can do faster with its DDR RAM at around 100 GB/s for DDR4 and well above that for DDR5. If the bottleneck is what you say it is, then surely 100 GB/s is faster than 5.5 GB/s.
Pretty sure GOW 2018 had almost no issues and HZD's issues were completely different with stuff like broken AF, bugs, and random crashes.
 

Bojji

Member
Pretty sure GOW 2018 had almost no issues and HZD's issues were completely different with stuff like broken AF, bugs, and random crashes.

HZD had poor performance on Nvidia GPUs at launch and very poor performance on Pascal GPUs or older. A few months later Pascal GPUs got a massive improvement, so the game was clearly unoptimized at launch.

GOW was very solid.
 

SlimySnake

Flashless at the Golden Globes
Pretty sure GOW 2018 had almost no issues and HZD's issues were completely different with stuff like broken AF, bugs, and random crashes.
GOW couldn't even run at 60 fps on a 1060 and a 6 TFLOPS RX 580 at 1080p using the PS4 preset. It offered roughly the same performance as the 4.2 TFLOPS PS4 Pro with a shitty Jaguar CPU holding it back, while the 580 and 1060 were paired with far better CPUs and still couldn't hit 60 fps during combat.

It actually performed worse than the HZD port, but because it didn't have many bugs and came out when most people had 20 and 30 series cards brute forcing it, it didn't make the news.
 

Bojji

Member
GOW couldn't even run at 60 fps on a 1060 and a 6 TFLOPS RX 580 at 1080p using the PS4 preset. It offered roughly the same performance as the 4.2 TFLOPS PS4 Pro with a shitty Jaguar CPU holding it back, while the 580 and 1060 were paired with far better CPUs and still couldn't hit 60 fps during combat.

It actually performed worse than the HZD port, but because it didn't have many bugs and came out when most people had 20 and 30 series cards brute forcing it, it didn't make the news.

It was very CPU limited, probably thanks to the DX11 API. It wasn't very well optimized, but it was quite a good port otherwise.
 

SlimySnake

Flashless at the Golden Globes
It was very CPU limited, probably thanks to the DX11 API. It wasn't very well optimized, but it was quite a good port otherwise.
Sure, but I was replying to a post that was excusing these unoptimized ports as a result of PCs being unable to handle the PS5's I/O and SSD. PS first-party ports have always performed worse, with some very rare exceptions. In this case, it might be the CPU. In Horizon 1's case it was just a shitty port by Iron Galaxy. Spider-Man is also very CPU limited, with a Zen 2 3600 bottlenecking the 4090 as Alex showed; his fps jumped from 69 to 129 fps in one test I posted a few weeks ago. Which, btw, is something they finally resolved with their Horizon FW port by optimizing their decompression logic. FW still underperforms on a 3080 despite being way more polished than the original.

There is this big misconception on this board that consoles have some super secret sauce that helps them perform above their TFLOPS just because of I/O or SSD efficiencies. And while that might be true to the tune of 10%, we are looking at the PS5 performing equivalent to cards 40-50% more powerful, and that's where I draw the line and say something is not optimized for PC, even if it releases in a polished state like Ratchet, Horizon FW and GoW Ragnarok have.

In the graphics fidelity thread, a poster was playing HFW on the same PC he played Avatar on and said Avatar performed way better despite having RTGI, shadows and reflections HFW lacks, not to mention the insane upgrade in foliage density, draw distance and other next-gen effects HFW is missing due to it being a last-gen game. Polished != Optimized.
 
@HeisenbergFX4

Season 4 Tea GIF by Outlander

I think he's just talking about Xbox starting next gen early. I'm guessing an announcement is coming soon. Which is weird considering their sales are already struggling so now they are going to do even worse after the announcement.
 

ChiefDada

Gold Member
If the PS5 Pro outperforms the 3080 in RT games then I'd be very impressed, but I'd keep my expectations low if I were you. The 2x increase Sony themselves quoted gets them to around a 6800 XT, which is way worse than the 3080 in RT games. You want the 4x increase to get close to the 3080, and we just don't know how many games will hit that 4x mark.

You're way off.

 

Mr.Phoenix

Member
It's amazing that Sony had the PS5 specs locked up in Fort Knox, with a few Github leaks here and there and we had to wait for Road to PS5 specs.

But the PS5 Pro specs are just randomly leaking. One would think Sony would learn from the past and tighten up leaks. Which could be true, because so far the PS5 Pro hasn't been spotted on Github.

Maybe this is really one big controlled leak, which started with MLiD. Anything before the MLiD leak could probably be legit leaks, though.

BTW, this is just my speculation in regards to the person I'm replying to and not to be taken seriously.
I think it should be considered that with the PS5, third-party devs didn't get PS5 dev kits until after the Road to PS5 conference in March 2020. So I can see why, outside engineering leaks, it was pretty much kept quiet.

But the PS5 Pro is different; the official announcement of this thing is coming months after third-party devs got final dev kits. There are bound to be leaks.

You're way off.


Unfortunately, that doesn't account for the BVH acceleration which all those RDNA cards lacked. It remains to be seen just how accelerated the BVH is in RDNA4 and in turn, the PS5pro.
 
Last edited:
You're way off.


I think it's a mistake comparing the Pro to PC counterparts - the performance will likely vary drastically across titles, but I suspect we'll see some very impressive RT implementations across first party titles, implementations which outdo the Pro's alleged PC counterparts by a decent stretch. One of the reasons being PlayStation's superior RT API and libraries, which have drastically evolved since the PS5's launch.

I'm just basing this off a hunch but time will tell.
 
Last edited:

Mr.Phoenix

Member
You are using a twitter dude who ends his claims with a ? to refute Sony’s own claims about their RT performance? lol
While I agree with what you are saying, I don't really understand Sony's RT claims either. It doesn't make sense.

Even if we ignore the BVH stuff and assume it works the same way the PS5's RT does: Sony says 2-4x, and the thing is, there is no way you get only a 2x RT improvement.

36 CUs vs 60 CUs.

PS5 RT: 4 box / 1 tri per RTU vs PS5 Pro: 8 box / 2 tri per RTU.

The PS5 Pro would always have roughly a 3.3x box and triangle throughput advantage over the OG PS5 from a raw hardware perspective if running at the same clock.

And this is not even taking into account that the PS5 Pro is rumored to have proper BVH acceleration, and we are not factoring in PSSR here either.

Some things don't add up. This comes off as such a gross underselling of the hardware that it's almost laughable.
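Taking the per-RTU figures above at face value (the Pro numbers are rumored, and equal clocks are assumed for simplicity), the raw arithmetic looks like this:

```cpp
// Raw intersection-throughput arithmetic from the figures quoted in the post.
#include <cstdio>

int main() {
    const double ps5_box = 36 * 4;  // 36 CUs x 4 box tests per RT unit per cycle
    const double ps5_tri = 36 * 1;  // 36 CUs x 1 triangle test per cycle
    const double pro_box = 60 * 8;  // 60 CUs x 8 box tests per cycle (rumored)
    const double pro_tri = 60 * 2;  // 60 CUs x 2 triangle tests per cycle (rumored)

    printf("Box-test ratio     : %.2fx\n", pro_box / ps5_box); // ~3.33x
    printf("Triangle-test ratio: %.2fx\n", pro_tri / ps5_tri); // ~3.33x
    // Either way, raw intersection throughput lands well above the 2x floor
    // Sony quoted, before BVH changes or PSSR are even considered.
    return 0;
}
```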
 
No. Lowering resolution usually does absolutely nothing to the CPU.
Well, most games now do ray tracing, and in several of them ray tracing taxes the CPU heavily. So lowering res should lower the ray tracing workload, which should lower CPU usage.

Lumen, I think, might also affect the CPU somewhat.

Also, IIRC, some aspects of rendering might have a slight CPU contribution as well.
 

vkbest

Member
Keep seeing people forgetting that the VRAM in these consoles needs to act like VRAM and system RAM at the same time. It's not really a lot of memory when you put that into perspective. Neither the PS5 nor the Series X uses its VRAM pool just for video memory, the CPU eats its chunk too. SlimySnake touched on this a few posts back. Bojji :messenger_ok:
Sure, but on PC a big portion of data is duplicated in both RAM and VRAM.
 
Last edited:
While I agree with what you are saying, I don't really understand Sony's RT claims either. It doesn't make sense.

Even if we ignore the BVH stuff and assume it works the same way the PS5's RT does: Sony says 2-4x, and the thing is, there is no way you get only a 2x RT improvement.

36 CUs vs 60 CUs.

PS5 RT: 4 box / 1 tri per RTU vs PS5 Pro: 8 box / 2 tri per RTU.

The PS5 Pro would always have roughly a 3.3x box and triangle throughput advantage over the OG PS5 from a raw hardware perspective if running at the same clock.

And this is not even taking into account that the PS5 Pro is rumored to have proper BVH acceleration, and we are not factoring in PSSR here either.

Some things don't add up. This comes off as such a gross underselling of the hardware that it's almost laughable.

Well it's always better to undersell and overdeliver rather than the opposite... :D

Benchmarks will be out sooner or later
 
Last edited:

IDWhite

Member
Sure, but I was replying to a post that was excusing these unoptimized ports as a result of PCs being unable to handle the PS5's I/O and SSD. PS first-party ports have always performed worse, with some very rare exceptions. In this case, it might be the CPU. In Horizon 1's case it was just a shitty port by Iron Galaxy. Spider-Man is also very CPU limited, with a Zen 2 3600 bottlenecking the 4090 as Alex showed; his fps jumped from 69 to 129 fps in one test I posted a few weeks ago. Which, btw, is something they finally resolved with their Horizon FW port by optimizing their decompression logic. FW still underperforms on a 3080 despite being way more polished than the original.

There is this big misconception on this board that consoles have some super secret sauce that helps them perform above their TFLOPS just because of I/O or SSD efficiencies. And while that might be true to the tune of 10%, we are looking at the PS5 performing equivalent to cards 40-50% more powerful, and that's where I draw the line and say something is not optimized for PC, even if it releases in a polished state like Ratchet, Horizon FW and GoW Ragnarok have.

In the graphics fidelity thread, a poster was playing HFW on the same PC he played Avatar on and said Avatar performed way better despite having RTGI, shadows and reflections HFW lacks, not to mention the insane upgrade in foliage density, draw distance and other next-gen effects HFW is missing due to it being a last-gen game. Polished != Optimized.

PCs right now can't emulate all of the PS5's dedicated memory management hardware with the same performance. Using system RAM as VRAM, or as a cache for streaming from the SSD, isn't a solution because it severely hurts performance, as the whole system needs to be involved to make sure the right data ends up in the right place. But that's only one problem, because we are also talking about different memory architectures and different APIs and tools.

Even PS4 titles, which are highly optimized to run on a single pool of memory and with exclusive low-level software functions, require completely new approaches on PC. And not all cases are going to obtain the same performance gains as they do on the console.

Third-party game engines like Snowdrop or REDengine are designed primarily for PC from the ground up, focusing more on CPU and GPU performance and scalability, and then they make some changes or optimizations to accommodate consoles, but they usually don't go to the same level as first-party engines do on consoles.
 
Oh shit.....John has seen a couple crazy things running on PS5 hardware.

He's also likely going to get fired talking positive about PS5😅




When I read Kepler's tweet I didn't agree with him. I think he generalizes too much; for 3rd party / Xbox games overall, yes. But I think on PS5 some developers still optimize very specifically for that hardware in many ways (GPU and/or I/O). Otherwise we would not have had cases like The Touryst, the Decima engine, or Insomniac games on PS5 significantly outperforming similar desktop GPUs (6700).
 
Last edited:

Gaiff

SBI’s Resident Gaslighter
2) Alan Wake 2 - 1080p gives 65 fps maxed out in a gpu limited scenario. 4k fsr2 performance. 47 fps. thats 35% hit. Dont remember the exact numbers for the 1440p native vs 4k fsr quality test, but it was much lower around 15-20% like callisto below.
65fps (15.4ms) down to 47fps (21.3ms). 5.9ms of added rendering time. Much more than 2ms.
3) callisto protocol - 82 fps at native 1080p. 68 fsr2 performance. 20%. native 1440p 58 fps. fsr2 quality 52 fps. 11%. Only ray traced tests because in non-RT mode i was hitting my monitors 120 fps cap at 1080p.
82fps (12.2ms) down to 68fps (14.7ms) so 2.5ms of added rendering time. Not too crazy but still 25% more.
it seems to be variable by games. but even if we go by best case scenario, 4k fsr performance is going to be a 20% hit compared to you doing native 1080p. Its definitely not free but if a game is already using fsr2, PSSR will have the same cost.
Not much to go on. Also not a big fan of Alan Wake 2 since you benchmarked it outside of a controlled environment because it has no built-in benchmarking tool. Still, it should be easy enough to find a zone without too many variables and repeatedly test it.

If you don't mind, whenever you have a moment, could you test a few of the following games that have built-in benchmarks? Avatar: Frontiers of Pandora, Cyberpunk 2077, Horizon: Zero Dawn, Assassin's Creed Valhalla or Mirage, Dying Light 2, Forza Horizon 5, Guardians of the Galaxy (I think you have that one), Returnal.

Or any of those recent games that have a benchmarking tool and DLSS/FSR.

https://www.pcgamingwiki.com/wiki/List_of_games_with_built-in_benchmarks

Hopefully, you have 2-3 of those installed. I'd do it myself but my 2080 Ti-equipped PC is at my mom's house and since my TV is busted, I can't even use 4K on my ultrawide 1440 monitor.

Thanks in advance broseph.
 
Last edited:

SlimySnake

Flashless at the Golden Globes
If you don't mind, whenever you have a moment, could you test a few of the following games that have built-in benchmarks? Avatar: Frontiers of Pandora, Cyberpunk 2077, Horizon: Zero Dawn, Assassin's Creed Valhalla or Mirage, Dying Light 2, Forza Horizon 5, Guardians of the Galaxy (I think you have that one), Returnal.
Sorry, have none of those installed. Having a 1TB SSD means deleting most games after I play them.
 

Gaiff

SBI’s Resident Gaslighter
Sorry, have none of those installed. Having a 1TB SSD means deleting most games after I play them.
I linked the page of the PC gaming wiki that has games with built-in benchmarks. If you have one of them, that'd be cool. If not, no big deal. I'm just trying to get some data to see how far above or below 2ms we are most of the time and if the average, median, or mean come anywhere near.

It's a bit of a bitch to get this data. Think I'll look at computerbase.
 

SlimySnake

Flashless at the Golden Globes
I linked the page of the PC gaming wiki that has games with built-in benchmarks. If you have one of them, that'd be cool. If not, no big deal. I'm just trying to get some data to see how far above or below 2ms we are most of the time and if the average, median, or mean come anywhere near.

It's a bit of a bitch to get this data. Think I'll look at computerbase.
Remember, Sony's leaked slides said 2 ms when upscaling from 1080p, so they were talking about the equivalent of DLSS 4K Performance. The 2-3 ms numbers you see are from 1440p, i.e. DLSS 4K Quality. AMD said the same thing for FSR2, and it's probably only true in the best case scenario.
 

Gaiff

SBI’s Resident Gaslighter
Remember, Sony's leaked slides said 2 ms when upscaling from 1080p, so they were talking about the equivalent of DLSS 4K Performance. The 2-3 ms numbers you see are from 1440p, i.e. DLSS 4K Quality. AMD said the same thing for FSR2, and it's probably only true in the best case scenario.
Yes, that's what I used for Alan Wake 2. The 7900 XTX's rendering time increased by 2.23ms from 1080p to 4K FSR Performance mode. Close enough to 2ms. Your 3080 added a whopping 5.9ms or so though, which is a lot more. The 4090 only added 1.66ms.

Seems the additional rendering time scales with the base frame time at the input resolution: the faster the card, the smaller the added cost.
 
Last edited:

SlimySnake

Flashless at the Golden Globes
Ran the RDR2 benchmark at several resolutions. Same result, there is definitely a cost to it. Native 1080p maxes out at my LG CX's 120 fps cap in several of the benchmark areas, so the results are probably worse than we are seeing here.

  • Native 4K 58 fps avg
  • Native 1440p 85 fps avg
  • DLSS 4K Quality (1440p) 69 fps avg
  • Native 1080p 102 fps avg
  • DLSS 4K Performance (1080p) 83 fps avg
And there is a substantial hit to image quality when using DLSS Performance. It looks better than native 1080p, but I don't like playing anything lower than DLSS Quality on a big 4K screen; it becomes too soft even if the jaggies are cleared out by DLSS. I want that pristine, crisp and clear native 4K image quality of the Horizon, Spider-Man 2 and Ratchet 30 fps modes that you can only get at 4K DLSS Quality. Let's hope Sony studios offer both PSSR quality and performance modes even if quality can't do 60 fps.
 
Last edited:

Gaiff

SBI’s Resident Gaslighter
Ran the RDR2 benchmark at several resolutions. Same result, there is definitely a cost to it. Native 1080p maxes out at my LG CX's 120 fps cap in several of the benchmark areas, so the results are probably worse than we are seeing here.

  • Native 4K 58 fps avg
  • Native 1440p 85 fps avg
  • DLSS 4K Quality (1440p) 69 fps avg
  • Native 1080p 102 fps avg
  • DLSS 4K Performance (1080p) 83 fps avg
And there is a substantial hit to image quality when using DLSS Performance. It looks better than native 1080p, but I don't like playing anything lower than DLSS Quality on a big 4K screen; it becomes too soft even if the jaggies are cleared out by DLSS. I want that pristine, crisp and clear native 4K image quality of the Horizon, Spider-Man 2 and Ratchet 30 fps modes that you can only get at 4K DLSS Quality. Let's hope Sony studios offer both PSSR quality and performance modes even if quality can't do 60 fps.
So 83fps (12ms) with 4K DLSS Performance vs 102fps (9.8ms) at native 1080p. 2.2ms of added rendering time. Pretty good. Probably a bit more due to hitting the fps cap a few times, but it shouldn’t be a whole lot.
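For anyone following along, the millisecond figures being traded in this exchange are just 1000/fps conversions; a tiny helper reproduces the RDR2 and Alan Wake 2 deltas above, both from SlimySnake's 3080 runs:

```cpp
// Frame-time delta from a pair of average frame rates (native vs upscaled-to-4K).
#include <cstdio>

double upscaler_cost_ms(double fps_native, double fps_upscaled) {
    // Added render time per frame when upscaling to 4K from that native res.
    return 1000.0 / fps_upscaled - 1000.0 / fps_native;
}

int main() {
    printf("RDR2 (102 -> 83 fps)      : %.1f ms\n", upscaler_cost_ms(102.0, 83.0)); // ~2.2 ms
    printf("Alan Wake 2 (65 -> 47 fps): %.1f ms\n", upscaler_cost_ms(65.0, 47.0));  // ~5.9 ms
    // A roughly fixed per-frame upscaling cost eats a bigger share of the frame
    // budget the higher the base frame rate, so the fps drop looks larger even
    // when the millisecond cost is similar.
    return 0;
}
```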
 
Last edited: