
Digital Foundry - Playstation 5 Pro specs analysis, also new information

Panajev2001a

GAF's Pleasant Genius
If the 2-4x RT upgrade helps them run Spider-Man 2's native 4K 30 fps RT mode at native 4K 60 fps, then great. I wanted to play Horizon FW at native 4K 60 fps. If this 45% GPU and 10% CPU increase helps them double the PS5's performance, then yay. Let's go, Cerny! I will bend the knee and call him king.

But we all know that's not going to happen and they will use PSSR to upscale to 4K. Likely the equivalent of DLSS Performance, and I'm just not a fan of that; I stick with Quality. Good for everyone suffering through 720p FSR2 60 fps modes, though.
Come on, they have a great-looking 40 FPS mode with full RT and they have RT in their 60 FPS mode, but sure, you're having a mega negative outburst over the PS5 Pro, so I guess at some point we can only agree to disagree. You just keep banging the same 45% number drum, and everything else gets read as negatively as possible.
 

Panajev2001a

GAF's Pleasant Genius
There is something I used to say, that if you gave devs a 10Ghz 16 core CPU and a GPU equivalent to two 4090s.... they would still find a way to make a 30fps game.
We will have to deal with this very soon. The free lunch of HW easily making your code run better, without you having to spend a lot of time optimising and restructuring it to fit the new HW, is pretty much gone.

To keep performance improvements going strong from generation to generation, HW and SW models will have to get more complex, or lots of performance gets left on the table. The chiplet approach is not a free lunch. Multi-GPU rendering will be harder.
 

FireFly

Member
It's not that hard.

The PS5 Pro's rumored clock speed is 2.18GHz, using 60 CUs.

60CU × 4 SIMD32 × 32 × 2 × 2.18GHz = 33.5 TFLOPS.

  • 16-bit floating point (FP16) = FLOPs
  • 8-bit floating point (FP8) = FLOPs
  • 8-bit integer (INT8) = TOPs
  • 4-bit integer (INT4) = TOPs
The leak from Tom Henderson states AI Accelerators.
  • AI Accelerator, supporting 300 TOPS of 8 bit computation / 67 TFLOPS of 16-bit floating point

60CU × 2 AI Accelerators × 256 × 2.18GHz = 67 TFLOPS (FP16)

60CU × 2 AI Accelerators × 512 × 2.18GHz = 134 TOPs (INT8)

RDNA4 now supports Sparsity, which doubles performance.
Examining AMD’s RDNA 4 Changes in LLVM
RDNA 4 introduces new SWMMAC (Sparse Wave Matrix Multiply Accumulate) instructions to take advantage of sparsity.


60CU × 2 AI Accelerators × 1024 × 2.18GHz = 268 TOPs. But this number is still not the 300 TOPs number, which is where the problem with clock speed starts.

Kepler uses a clock speed of 2.45GHz to get to that 300 TOPs number.
60CU × 2 AI Accelerators × 1024 × 2.45GHz = 301 TOPs


But if that's the clock, the TFLOPS would be too high.
60CU × 4 SIMD32 × 32 × 2 × 2.45GHz = 37.6 TFLOPS


The only way it all makes sense is something like this, with the leaks tweaking numbers to protect their sources.

Normal Mode
  • CPU = 3.5GHz
  • GPU = 2.23GHz

High CPU Frequency Mode / Performance Mode
  • CPU = 3.85GHz
  • GPU = 2.18GHz

High GPU Frequency Mode / Fidelity Mode
  • CPU = 3.43GHz
  • GPU = 2.45GHz

Obviously, this is just me speculating but I find it strange that no one noticed this.
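If anyone wants to sanity-check that arithmetic, here is a throwaway Python sketch of the same math (all the inputs are the leaked/speculated figures from this post, nothing confirmed):

```python
# Sanity-checking the speculated numbers above. Everything here (60 CUs, the
# clocks, 256 FP16 / 512 INT8 ops per AI accelerator per clock) comes from the
# leaks/speculation in this post, not from confirmed specs.
CUS = 60

def tflops_fp32(clock_ghz):
    # 4 SIMD32 per CU x 32 lanes x 2 ops (FMA) per clock
    return CUS * 4 * 32 * 2 * clock_ghz / 1000

def tops_int8(clock_ghz, sparsity=False):
    # 2 AI accelerators per CU, 512 INT8 ops per clock (1024 with sparsity)
    per_clock = 1024 if sparsity else 512
    return CUS * 2 * per_clock * clock_ghz / 1000

print(tflops_fp32(2.18))               # ~33.5 TFLOPS
print(tops_int8(2.18, sparsity=True))  # ~268 TOPS, short of the leaked 300
print(tops_int8(2.45, sparsity=True))  # ~301 TOPS, matches the leak
print(tflops_fp32(2.45))               # ~37.6 TFLOPS, the "too high" case
```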

That makes sense but I thought you were suggesting a 3.010GHz clock speed.
 

Bojji

Member
I am sure I am sounding like a broken record now. But we really need to change where we point the blame when it comes to these things. Gonna give you an example of two games here. Both are on PC.

Jedi survivor



and Horizon forbidden west


For Jedi Survivor, the Owen guy was trying to explain how it's CPU bottlenecked, in response to people like me who were making the point that what he describes as a CPU bottleneck isn't a bottleneck but rather underutilization. You can take a look at the video yourself and see what I mean. In JS, on the 8C/16T CPU he was running the game with, the highest-loaded thread was at ~60%, the lowest was at 21%, and the rest were anywhere in between, with most under 40%. And this was on a 7800X3D with a 4090. The GPU was idling at 40%.

That is not a bottleneck. It's what we, for whatever reason, call a CPU bottleneck, but it's not. It's bad game design. It's poor optimization. And should be critiqued the same way we would critique devs for using bad textures or having obvious bugs.

And then we have HFW. I used it because it does two things, as you will see in the video. First, it blows Owen's argument out of the water about how difficult it is to properly parallelize CPU tasks, such that everything has to run on one thread at the end of the day. Second, it shows what a properly CPU-optimized game does. In that test, they ran it on a Ryzen 5 3600, 6C/12T, clocked to around 3.7GHz to mimic the PS5 CPU, and paired with a 4070. But what is more important is that in this test, the lowest-utilized thread was at 50% and the highest was at 89%, with everything in between in the high 70s. They even commended the game for this.

Then they ran it at 1080p DLSS so they could push the GPU as far as they can, and they averaged 85fps. That is what proper optimization and CPU utilization look like. And we should be calling devs out more often for their lazy approach to this, instead of them just choosing to put all their code on one or two threads and expecting people to just have fast CPUs.

I don't know when or why people just accepted this poor CPU utilization as the norm and instead chose to brute-force our way through it by getting the fastest CPU we can afford, but it's not normal.

Oh one more....


That's Alan Wake 2, tested across a myriad of GPUs. Guess what its overall CPU utilization was averaging at all times? Across all the GPUs, from a 1070 all the way to a 4090... under 24%. At some points, it was even as low as 11%.

That right there is the problem. Imagine having a 16-thread CPU, and your overall CPU utilization is under 20% because the game engine is running on practically just one of your 16 threads. Then we come and say it's CPU bottlenecked.


It doesn't matter if a game is using 10% of the CPU or 100% of the CPU if that CPU is the bottleneck. You can have a game that is single-core limited, where a 3600 will struggle to keep 60FPS but a 7600 will run it above 90, and in both cases the game is using at most 2 cores. At the end of the day, what matters is the GPU utilization number: if that's below 95%, then you are for sure CPU limited, no matter how much of the CPU is used.

It's obviously an optimization problem and some developers really don't give a fuck; PC gaming sees poorly optimized games like that all the time. But what can we do about it? The only way to fight it is to throw a better CPU into your system, which doesn't work on consoles, and when a game has shitty CPU code it will just run poorly until the devs fix it (or not). That's why a CPU uplift would be welcome on the Pro to make the console run CPU-limited games better; a Starfield port with a 60FPS mode would be possible, because I don't believe Bethesda will fix their code hahaha.
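To put that rule of thumb in concrete terms, here is a tiny illustrative sketch; the 95% cutoff is just the heuristic from this post, and the utilization numbers roughly mirror the Jedi Survivor figures quoted above, nothing official:

```python
def likely_bottleneck(gpu_util_pct, cpu_thread_utils_pct):
    """Rough heuristic from the discussion above, not a real profiler.

    If the GPU isn't near-fully busy, the frame is being held up on the CPU
    side - even if most CPU threads look idle, because one critical thread
    (or lock) can stall the whole frame.
    """
    if gpu_util_pct >= 95:
        return "GPU limited"
    busiest = max(cpu_thread_utils_pct)
    if busiest < 60:
        return "CPU limited, but the CPU is underutilized (poor threading)"
    return "CPU limited"

# Example resembling the Jedi Survivor numbers quoted above (7800X3D + 4090):
print(likely_bottleneck(40, [60, 21, 35, 30, 40, 25, 33, 38]))
```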
 

Loxus

Member
That makes sense but I thought you were suggesting a 3.010GHz clock speed.
No, I was suggesting that the PS5 Pro GPU clock has to be higher than the PS5's 2.23GHz, not the 2.18GHz based on Tom Henderson.

Audio

The ACV in the PlayStation 5 Pro runs at a higher clock speed than in the standard PlayStation 5, resulting in the ACM library having 35% more performance.
  • More convolution reverbs can be processed
  • More FFT or IFFT can be processed

The 35% gives 3.010GHz, but that's too high, even for Mark Cerny. I wasn't saying it's 3.010GHz.
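For reference, the arithmetic behind that figure, assuming the 35% were to come purely from clock on top of the base PS5's 2.23GHz GPU clock:

```python
# Hypothetical: if the 35% ACM gain came purely from clock, starting from the
# base PS5's 2.23GHz GPU clock, you'd need:
print(2.23 * 1.35)   # ~3.01 GHz, implausibly high, hence the point above
```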
 
I am sure I am sounding like a broken record now. But we really need to change where we point the blame when it comes to these things. Gonna give you an example of two games here. Both are on PC.

Jedi survivor



and Horizon forbidden west


For Jedi Survivor, the Owen guy was trying to explain how it's CPU bottlenecked, in response to people like me who were making the point that what he describes as a CPU bottleneck isn't a bottleneck but rather underutilization. You can take a look at the video yourself and see what I mean. In JS, on the 8C/16T CPU he was running the game with, the highest-loaded thread was at ~60%, the lowest was at 21%, and the rest were anywhere in between, with most under 40%. And this was on a 7800X3D with a 4090. The GPU was idling at 40%.

That is not a bottleneck. It's what we, for whatever reason, call a CPU bottleneck, but it's not. It's bad game design. It's poor optimization. And should be critiqued the same way we would critique devs for using bad textures or having obvious bugs.

And then we have HFW. I used it because it does two things, as you will see in the video. First, it blows Owen's argument out of the water about how difficult it is to properly parallelize CPU tasks, such that everything has to run on one thread at the end of the day. Second, it shows what a properly CPU-optimized game does. In that test, they ran it on a Ryzen 5 3600, 6C/12T, clocked to around 3.7GHz to mimic the PS5 CPU, and paired with a 4070. But what is more important is that in this test, the lowest-utilized thread was at 50% and the highest was at 89%, with everything in between in the high 70s. They even commended the game for this.

Then they ran it at 1080p DLSS so they could push the GPU as far as they can, and they averaged 85fps. That is what proper optimization and CPU utilization look like. And we should be calling devs out more often for their lazy approach to this, instead of them just choosing to put all their code on one or two threads and expecting people to just have fast CPUs.

I don't know when or why people just accepted this poor CPU utilization as the norm and instead chose to brute-force our way through it by getting the fastest CPU we can afford, but it's not normal.

Oh one more....


That's Alan Wake 2, tested across a myriad of GPUs. Guess what its overall CPU utilization was averaging at all times? Across all the GPUs, from a 1070 all the way to a 4090... under 24%. At some points, it was even as low as 11%.

That right there is the problem. Imagine having a 16-thread CPU, and your overall CPU utilization is under 20% because the game engine is running on practically just one of your 16 threads. Then we come and say it's CPU bottlenecked.

Great post. It shows that Sony games are usually very well optimized and actually really CPU limited on PC (so it's even more surprising they run so well on PS5 vs PC). Those other games (including DD2, Jedi Survivor, Alan Wake 2 and the like) are badly optimized, not CPU limited. But DF thinks those Sony games are badly optimized while those badly optimized games are... CPU limited. An inversion of reality, as they are just there to push their pro-PCMR / Xbox agenda.

"Those Sony games run so badly on PC vs PS5 because they are bad ports", which is their main narrative, is BS. The reality is that the ports are usually exceptionally good and that the PS5 just punches above its weight in many cases thanks to smart and efficient hardware (like the I/O) and better APIs all around.
 

Loxus

Member
There is something I used to say, that if you gave devs a 10Ghz 16 core CPU and a GPU equivalent to two 4090s.... they would still find a way to make a 30fps game.
Exactly this. 100%

Game 1

Target – image quality close to Fidelity Mode (1800p) with Performance Mode FPS (60 FPS)

Standard PlayStation 5 –

  • Performance Mode – 1080p at 60FPS
  • Fidelity Mode – 1800p at 30FPS
PlayStation 5 Pro –
  • 1440p at 60FPS (PSSR used)

Game 1 shows that the CPU is actually not going to be a bottleneck. I used to think that with the same CPU, 60fps would be a problem.

But the Game 1 target runs Fidelity Mode image quality at 60fps. The fact that they had to drop the resolution to 1440p suggests to me the GPU may be more of a bottleneck than the CPU.

This isn't an issue, as PSSR can then take that 1440p to 4K. So we can now get a 4K fidelity mode at performance-mode framerates.


Game 2

Target – Add Raytracing to gameplay

Standard PlayStation 5 achieved 60FPS without raytracing, and PlayStation 5 Pro achieved 60FPS with Raytracing.


Game 2 further suggests the CPU isn't going to be a problem, as the PS5 Pro can run the game at 60fps with RT enabled, which would most likely run at 30fps on PS5 because of the weaker GPU.
 

HeisenbergFX4

Gold Member
Because they use different architectures, from different GPU makers.
TFLOPs are only one among many metrics in a GPU that influence performance, and it varies with different architectures.

Seriously, this nonsense of focusing only on TFLOPs is as moronic as the "bits" of the 90's console wars.
Because this gen was all about how TFs make all the difference, the Digital Foundries of the world made sure those numbers were brought up every chance they got.

It was the playbook they were given

You can very likely go back maybe a year and see where I said that won't be the buzzword this time for the PS5 Pro; it will be ray tracing.

Now off for what used to be my 5-mile run, which slowly turned into a jog and recently turned into an old-man swift walk ;)
 

Bojji

Member
Because this gen was all about how TFs make all the difference, the Digital Foundries of the world made sure those numbers were brought up every chance they got.

It was the playbook they were given

You can very likely go back maybe a year and see where I said that won't be the buzzword this time for the PS5 Pro; it will be ray tracing.

Now off for what used to be my 5-mile run, which slowly turned into a jog and recently turned into an old-man swift walk ;)

You can compare RDNA2 teraflops on PC GPUs and you will see that cards usually perform as they should, so 10TF GPU < 12TF GPU < 16TF GPU, etc. You can even compare them to RDNA1 and get the same results.

The problem is with RDNA3, because with that dual issue added we now have a 35TF GPU that performs comparably to 16TF RDNA2 cards (6800 vs 7700 XT).

Between PS5 and Xbox there are other factors. The Xbox has a very low clock for an RDNA2 part, while the PS5, thanks to its higher clock, has a performance advantage in some aspects on paper. Add to that (apparently) better developer tools, and some games perform better on this console, but most of the time both consoles are very, very close to each other.
 

sncvsrtoip

Member
If the 2-4x RT upgrade helps them run Spider-Man 2's native 4K 30 fps RT mode at native 4K 60 fps, then great. I wanted to play Horizon FW at native 4K 60 fps. If this 45% GPU and 10% CPU increase helps them double the PS5's performance, then yay. Let's go, Cerny! I will bend the knee and call him king.

But we all know that's not going to happen and they will use PSSR to upscale to 4K. Likely the equivalent of DLSS Performance, and I'm just not a fan of that; I stick with Quality. Good for everyone suffering through 720p FSR2 60 fps modes, though.
On the other hand, if this PSSR is really good, we get a monster machine for 1080p internal res. Wasn't your demand that we want next-gen graphics and that devs should target lower res and not waste resources? ;)
 

winjer

Gold Member
Because this gen was all about how TFs make all the difference, the Digital Foundries of the world made sure those numbers were brought up every chance they got.

It was the playbook they were given

You can very likely go back maybe a year and see where I said that won't be the buzzword this time for the PS5 Pro; it will be ray tracing.

Now off for what used to be my 5-mile run, which slowly turned into a jog and recently turned into an old-man swift walk ;)

DF knows better than that.
Though most gaming journalists are complete idiots who have no idea what any technical term means.
But for the next generation, you can be sure that the buzzword will be GigaRays or something similar.
 

Mr.Phoenix

Member
I get that you are generally agreeing with me, with the only caveat being your stance of "What can we do?", but...

It doesn't matter if a game is using 10% of the CPU or 100% of the CPU if that CPU is the bottleneck. You can have a game that is single-core limited, where a 3600 will struggle to keep 60FPS but a 7600 will run it above 90, and in both cases the game is using at most 2 cores. At the end of the day, what matters is the GPU utilization number: if that's below 95%, then you are for sure CPU limited, no matter how much of the CPU is used.
Of course, it matters. It's the other half of the game rendering equation. GPU utilization is a by-product of CPU utilization. The CPU does its thing first, then hands over to the GPU. We literally cannot say that what the CPU does... doesn't matter.
It's obviously an optimization problem and some developers really don't give a fuck; PC gaming sees poorly optimized games like that all the time. But what can we do about it? The only way to fight it is to throw a better CPU into your system, which doesn't work on consoles, and when a game has shitty CPU code it will just run poorly until the devs fix it (or not). That's why a CPU uplift would be welcome on the Pro to make the console run CPU-limited games better; a Starfield port with a 60FPS mode would be possible, because I don't believe Bethesda will fix their code hahaha.
And this part is kind of a contradiction. We can't acknowledge that it's a problem, but then say it doesn't matter because we accept that we can't do anything about it and that devs do not care, so all we can do is just buy faster CPUs. We can't say it's an optimization problem, which ultimately presents as very poor CPU utilization, but then dismiss that and decide to call the CPU underutilization a CPU bottleneck instead. Because the second you do that, you are saying "it's not the devs' fault, it's the CPU's fault"; we are saying, "hey devs, don't bother fixing your code, we will just spend more money and get a better CPU".

But most importantly (at least in this case), we end up in situations like this, where there is this group or hive-mind type gross mistake or assumption being made that an 8-core, 16-thread Zen 2 CPU is not a good enough CPU to handle 60fps. When this is a literal case, with literal proof, of what is possible from some devs on that CPU versus what most devs choose to do with their CPU code in general... which affects PCs too. But for some reason that I simply cannot understand, we give the devs a pass and call the CPU bottlenecked when their code is not even pushing the CPU (that is supposedly bottlenecked) past 25% utilization.

This doesn't sound crazy to you?
 

SlimySnake

Flashless at the Golden Globes
On the other hand if this pssr is realy good, we get monster machine for 1080p internal res, wasnt your demand that we want next gen graphics and devs should target lower res and not wasting resources ? ;)
That's not how this works. The base PS5 has already rendered those games at native 4K, and the games are already held back in fidelity. Unless Sony goes back and adds Nanite, Lumen, next-gen textures and asset quality along with higher-quality character models for PS5 Pro games, they will be native 4K games up-ported to the PS5 Pro. The only difference is that they won't be native 4K but will be upscaled using PSSR. And that's OK if that's what you want. But I was replying to people who said the PS5 Pro will overperform its 45% GPU power and give devs 100% more power in actual games. Which is why I brought up HFW and Spider-Man 2 running at native 4K 30 fps on the PS5 and native 4K 60 fps on the PS5 Pro. It seems no one here believes we will get that, so you all agree with me. Good to know.
 

HeisenbergFX4

Gold Member
DF knows better than that.
Though most gaming journalists are complete idiots who have no idea what any technical term means.
But for the next generation, you can be sure that the buzzword will be GigaRays or something similar.
Of course DF knows better, but how many times did we hear about the TF advantage the Series X had, because it was the playbook.

People focusing on TFs for this machine need to look at the 45% faster rendering more than the TFs.
 

winjer

Gold Member
Of course DF knows better, but how many times did we hear about the TF advantage the Series X had, because it was the playbook.

People focusing on TFs for this machine need to look at the 45% faster rendering more than the TFs.

But the Series X does have a compute advantage, and that shows in some games.
Other games will have their performance scale better with other metrics, such as rasterization, latency, etc.
And in those cases, the PS5 will come out on top.

The reality is that TFLOPs are neither the only spec that matters, nor a useless spec.
They are just one among many, each contributing in different ways.
 

sncvsrtoip

Member
That's not how this works. The base PS5 has already rendered those games at native 4K, and the games are already held back in fidelity. Unless Sony goes back and adds Nanite, Lumen, next-gen textures and asset quality along with higher-quality character models for PS5 Pro games, they will be native 4K games up-ported to the PS5 Pro. The only difference is that they won't be native 4K but will be upscaled using PSSR. And that's OK if that's what you want. But I was replying to people who said the PS5 Pro will overperform its 45% GPU power and give devs 100% more power in actual games. Which is why I brought up HFW and Spider-Man 2 running at native 4K 30 fps on the PS5 and native 4K 60 fps on the PS5 Pro. It seems no one here believes we will get that, so you all agree with me. Good to know.
But I mean future games: when the internal res is 1080p, as Sony wants, devs really have the opportunity to create great-looking games, as the consoles are definitely powerful enough for 1080p.
 

yamaci17

Member
4K DLSS Performance beats the "native" 1440p that most people "adore" most of the time. If PSSR can come close to DLSS, it is a big win for Sony/PS5 Pro. You not "liking" 4K DLSS Performance does not make it any less worthwhile.

If PSSR ends up good, it will mean a game that runs at 60 fps with 4K PSSR Performance will look better than the same game running at native 1440p/30 FPS on PS5, despite the lower internal resolution (though of course people will have to be educated, since they would psychologically assume 1440p looks better).


 

SlimySnake

Flashless at the Golden Globes
Why doesn't RTX4060 (15TF) run 45% faster than PS5 (10TF)?
Because Nvidia has been inflating tflops since the 30 series. The 4060 is not a 14.5 tflops GPU, just like how the PS5 Pro is not a 33.5 tflops GPU.


The PS5 compares almost 1:1 with the 2070 Super, which is around 10 tflops, and the 2080, which is 11.5 tflops: +/- 10% in most games and virtually identical in others.

You don't even need to go to Nvidia. Just compare other RDNA2 cards like the 6600 XT and 6800: 10.7 vs 16.2 tflops. They scale 1:1 with tflops. 16.17/10.7 = 1.51, so ~51% more on paper, and the actual performance gain is 54%.



You can see how the 13.1 tflops 6700 XT similarly scales linearly with tflops. If the PS5 Pro, which is 16.75 tflops and roughly 63% more powerful than the 10.23 tflops PS5, can only give 45% more performance, then we can safely assume that the PS5 Pro GPU is being bottlenecked. It could be bottlenecked by the meager 28% increase in VRAM bandwidth. It could be bottlenecked by the wide-and-slow design Cerny himself rejected with the base PS5 and that MS is still suffering from. It could be something else entirely.
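Spelling out the ratios being compared here (the TFLOPS values are the commonly cited figures used in this post):

```python
# TFLOPS ratios referenced above; board TFLOPS are the commonly cited figures.
def pct_more(a, b):
    return (a / b - 1) * 100

print(pct_more(16.17, 10.7))   # 6800 vs 6600 XT: ~51% more TFLOPS (measured gain ~54%)
print(pct_more(16.75, 10.23))  # PS5 Pro (leaked) vs PS5: ~63-64% more TFLOPS on paper,
                               # yet the quoted figure is "45% faster rendering",
                               # hence the suggestion that something else bottlenecks it
```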

Point is that we have the numbers on the PC side. You guys might be new to this, but we know how these GPUs are supposed to perform. We know what bottlenecks them. I know a guy bottlenecking his 3070 Ti with a Zen+ CPU, forced to run games at 30-40 fps. We have seen the 3070 get bottlenecked by its 8GB VRAM buffer, leading a 3060 to outperform it. We have seen CPUs cripple RT performance in even 4090s. If Cerny had said that the PS5 Pro GPU was performing like a 100% faster GPU despite the 63% increase in tflops, I would've believed him, because he has the numbers. But I'm simply taking him at his word here, and he himself said 45% for 63% more tflops. No need to fight me on this; take it up with Cerny.
 

Audiophile

Member
No, I was suggesting that the PS5 Pro GPU clock has to be higher than the PS5's 2.23GHz, not the 2.18GHz based on Tom Henderson.

Audio

The ACV in the PlayStation 5 Pro runs at a higher clock speed than in the standard PlayStation 5, resulting in the ACM library having 35% more performance.
  • More convolution reverbs can be processed
  • More FFT or IFFT can be processed

The 35% gives 3.010GHz, but that's too high, even for Mark Cerny. I wasn't saying it's 3.010GHz.

I can't recall ever finding out, but I always wondered if the "modified CU" that made up Tempest was one of the existing 36CUs in the WGPs or an additional piece of logic elsewhere on the APU.

Had a look around and I don't think anyone's ever identified it on the die:

[PS5 die shot]


I suspect if it is separate from the main GPU logic that it may just be able to run decoupled in the Pro(?)
 

Mr.Phoenix

Member
But the Series X does have a compute advantage, and that shows in some games.
Other games will have their performance scale better with other metrics, such as rasterization, latency, etc.
And in those cases, the PS5 will come out on top.

The reality is that TFLOPs are neither the only spec that matters, nor a useless spec.
They are just one among many, each contributing in different ways.
You are right, but his point is that this thing you just said was never focused on by DF. And it's the ONLY thing that should have been focused on. Instead, they focused on the TF number as if it was the ONLY metric that separated the two consoles. And even when the games started coming and the PS5 was racking up win after win, they refused to acknowledge that this could be all those areas where the PS5 was better than the XSX coming to light. They basically made themselves look like idiots and doubled down on their hubris. All of that could have been avoided if they had just done their actual jobs and not peddled the one thing that shone the XSX in a better light.

Anyone with half a brain could have said (and a lot of us did say) that TF is just one part of the whole, and that there are areas where the PS5 will have an advantage and others where the XSX would. Not DF though.
But I mean future games: when the internal res is 1080p, as Sony wants, devs really have the opportunity to create great-looking games, as the consoles are definitely powerful enough for 1080p.
He's right. It doesn't work this way. No matter what, the base PS5 is still the lead platform. The best you can expect from the Pro is something akin to the base PS5's 30fps fidelity mode running at 60fps on the PS5 Pro with the use of PSSR. In some odd cases here and there you can have slightly better features, like maybe slightly better RT, higher-quality reflections, or adding RT reflections/RT AO to a game that only had RT shadows. But outside that, there will not be much difference between a game on the two SKUs beyond the framerate and render method of the respective performance and fidelity modes.

Think of it this way,

PS5 1800p-4K@30fps quality mode becomes PS5 Pro 1296p-1440p + PSSR > 4K@60fps quality mode (PSSR Quality, 1296p-1440p internal res).

PS5 720p-1080p@60fps performance mode becomes PS5 Pro 1080p + PSSR > 1440p@80+fps (also PSSR Quality, but for 1440p), or PS5 Pro 720p + RT + PSSR > 1440p@80+fps.
4K DLSS Performance beats the "native" 1440p that most people "adore" most of the time. If PSSR can come close to DLSS, it is a big win for Sony/PS5 Pro. You not "liking" 4K DLSS Performance does not make it any less worthwhile.

If PSSR ends up good, it will mean a game that runs at 60 fps with 4K PSSR Performance will look better than the same game running at native 1440p/30 FPS on PS5, despite the lower internal resolution (though of course people will have to be educated, since they would psychologically assume 1440p looks better).


Thank you... it's high time someone pointed these out.
 

Loxus

Member
I can't recall ever finding out, but I always wondered if the "modified CU" that made up Tempest was one of the existing 36CUs in the WGPs or an additional piece of logic elsewhere on the APU.

Had a look around and I don't think anyone's ever identified it on the die:

[PS5 die shot]


I suspect if it is separate from the main GPU logic that it may just be able to run decoupled in the Pro(?)
Imo, it would have to be located near the Media Engine, in the area that's not colored, while the I/O Complex is located around the PCIe ×4.

 

Bojji

Member
I get that you are generally agreeing with me, with the only caveat being your stance of "What can we do?", but...


Of course, it matters. It's the other half of the game rendering equation. GPU utilization is a by-product of CPU utilization. The CPU does its thing first, then hands over to the GPU. We literally cannot say that what the CPU does... doesn't matter.

And this part is kind of a contradiction. We can't acknowledge that it's a problem, but then say it doesn't matter because we accept that we can't do anything about it and that devs do not care, so all we can do is just buy faster CPUs. We can't say it's an optimization problem, which ultimately presents as very poor CPU utilization, but then dismiss that and decide to call the CPU underutilization a CPU bottleneck instead. Because the second you do that, you are saying "it's not the devs' fault, it's the CPU's fault"; we are saying, "hey devs, don't bother fixing your code, we will just spend more money and get a better CPU".

But most importantly (at least in this case), we end up in situations like this, where there is this group or hive-mind type gross mistake or assumption being made that an 8-core, 16-thread Zen 2 CPU is not a good enough CPU to handle 60fps. When this is a literal case, with literal proof, of what is possible from some devs on that CPU versus what most devs choose to do with their CPU code in general... which affects PCs too. But for some reason that I simply cannot understand, we give the devs a pass and call the CPU bottlenecked when their code is not even pushing the CPU (that is supposedly bottlenecked) past 25% utilization.

This doesn't sound crazy to you?

I completely agree with you for the most part. I was using a 2600K CPU for many, many years with my 1070, and I started to become CPU limited in many games in 2018 and 2019 (to get a stable 60FPS), but that was a super old CPU, so no wonder. I changed it to a 3600 and guess what? There were some new games that were bottlenecked by this 2019 mid-range CPU as well (the 3600 is much faster than the 2600K, not to mention it has 2 more cores and 4 more threads). Devs have become super lazy, or it's just a byproduct of ballooned budgets; they don't have time to optimize and make their games actually use the available hardware.

We have had CPUs with many cores for years now, yet most games still max out around 4 or 6 cores; even hyper-threading/SMT doesn't change much.

Here you have 6-core vs 8-core CPUs and different variants with different cache sizes:



2 more cores don't change shit, but cache? That's important.

Here is a 16-core/32-thread CPU vs a 6-core/12-thread CPU:



The majority of that 7950X is basically unused.

Here Daniel Owen explains how a game that is not maxing out the CPU can still be bottlenecked by the CPU (timestamped):


It's TRAGIC that games are still written like that. Single-core performance is still the dominant force, and consoles that have weak single cores but decent multi-threaded performance are not driving developers to write their games with that in mind. That's why there are some CPU-limited games on consoles and we are not able to do anything about it; Sony isn't upgrading the CPU in the Pro, but at the same time they aren't persuading devs to write better code.

That's why we have games like DD2 that are unable to run above ~35-40FPS on consoles; even if they dropped the resolution to 720p, it wouldn't change much.
 

SlimySnake

Flashless at the Golden Globes
We already see internal res around 1080p or even lower in performance modes on PS5 in some new games; I think it will happen more often in the future.
Here is a question for both you and Mr.Phoenix. Note that this is not a trick question and there is no wrong answer per se. I'm genuinely curious.

Spider-Man 2 runs at native 4K 30 fps with RT reflections on.
But then it drops to 1080p to run at 60 fps.

Why drop the resolution by 4x just to get 2x more frames? Is there something in the PS5's RT hardware that renders the RT 2x faster at native 4K and 2x slower at lower resolutions? What's the bottleneck here? Not talking about the PS5 Pro here, just the base PS5.

I was playing around with Callisto the other day to measure the performance hit DLSS and FSR add, and I was getting 22 fps at native 4K maxed out and exactly 44 fps at 1440p. It seems my GPU scales down perfectly while the PS5 struggles to do the same. What's the bottleneck here? The VRAM bandwidth? The CPU? Poor optimization by Insomniac, literally the best Sony studio when it comes to RT? Or is there something in the PS5's RDNA2 architecture that's causing these games to scale down poorly as we reduce resolution and thus the GPU load?
 

Perrott

Gold Member

Game 2

Target – Add Raytracing to gameplay

Standard PlayStation 5 achieved 60FPS without raytracing, and PlayStation 5 Pro achieved 60FPS with Raytracing.


Game 2 further suggests the CPU isn't going to be a problem, as the PS5 Pro can run the game at 60fps with RT enabled, which would most likely run at 30fps on PS5 because of the weaker GPU.
What could be some actual, real-world examples of that second scenario for PS5 Pro utilization that Sony suggests?
  • Gran Turismo 7
    PS5: Native 2160p at 60FPS with drops
    PS5 Pro: PSSR Quality (1440p) 2160p at 60FPS with RT reflections
  • Demon's Souls
    PS5: Native 1440p at 60FPS
    PS5 Pro: PSSR Quality (1440p) 2160p at 60FPS with RTGI and/or RT shadows
  • Horizon: Forbidden West
    PS5: Checkerboard 1800p at 60FPS
    PS5 Pro: PSSR Quality (1440p) 2160p at 60FPS with RTGI (as seemingly utilized in DS2)
Is any of this unreasonable to expect given the improvements PS5 Pro is said to be bringing to the table? And if so, in which ways exactly?
 

sncvsrtoip

Member
Here is a question for both you and Mr.Phoenix. Note that this is not a trick question and there is no wrong answer per se. I'm genuinely curious.

Spider-Man 2 runs at native 4K 30 fps with RT reflections on.
But then it drops to 1080p to run at 60 fps.

Why drop the resolution by 4x just to get 2x more frames? Is there something in the PS5's RT hardware that renders the RT 2x faster at native 4K and 2x slower at lower resolutions? What's the bottleneck here? Not talking about the PS5 Pro here, just the base PS5.

I was playing around with Callisto the other day to measure the performance hit DLSS and FSR add, and I was getting 22 fps at native 4K maxed out and exactly 44 fps at 1440p. It seems my GPU scales down perfectly while the PS5 struggles to do the same. What's the bottleneck here? The VRAM bandwidth? The CPU? Poor optimization by Insomniac, literally the best Sony studio when it comes to RT? Or is there something in the PS5's RDNA2 architecture that's causing these games to scale down poorly as we reduce resolution and thus the GPU load?
Decreasing resolution by, say, 2x doesn't mean the GPU has 2x less work to do, as the geometry cost is still the same (and of course CPU usage is also the same).
 

LordOfChaos

Member
Because Nvidia has been inflating tflops since the 30 series. The 4060 is not a 14.5 tflops GPU, just like how the PS5 Pro is not a 33.5 tflops GPU.

I think "inflating it" is putting a conspiratorial spin on what is really a performance boosting mechanism that both AMD and Nvidia do now, but the "problem" is that they double the flops as you calculate them on paper with the dual issue pipelines, but can only dual issue for a small fraction of game data, leading to a smaller bump in performance.

It's still more efficient and more performant to do this, but people see TFLOPs going up by 2x and performance going up 10% and get disappointed. To me that's as much a symptom of gamers' long overreliance on flops to understand performance, which has always been on shaky footing anyway.
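A toy illustration of that point; the single-issue figure and the 25% dual-issue hit rate below are made-up numbers purely for the example, not measurements:

```python
# Dual issue doubles the paper FLOPS, but only a fraction of real shader work
# can actually be paired into dual-issue slots. All numbers here are invented
# to illustrate the idea.
def effective_tflops(single_issue_tflops, dual_issue_hit_rate):
    # hit_rate: fraction of instructions that actually dual-issue
    return single_issue_tflops * (1 + dual_issue_hit_rate)

single_issue = 17.5                          # assumed single-issue figure for a mid-range RDNA3 card
paper = 2 * single_issue                     # the marketing "dual issue" number: 35 TFLOPS
real = effective_tflops(single_issue, 0.25)  # assume only 25% of work dual-issues
print(paper, real)                           # 35.0 "paper" TFLOPS vs ~21.9 effective
```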
 

Loxus

Member
What could be some actual, real-world examples of that second scenario for PS5 Pro utilization that Sony suggests?
  • Gran Turismo 7
    PS5: Native 2160p at 60FPS with drops
    PS5 Pro: PSSR Quality (1440p) 2160p at 60FPS with RT reflections
  • Demon's Souls
    PS5: Native 1440p at 60FPS
    PS5 Pro: PSSR Quality (1440p) 2160p at 60FPS with RTGI and/or RT shadows
  • Horizon: Forbidden West
    PS5: Checkerboard 1800p at 60FPS
    PS5 Pro: PSSR Quality (1440p) 2160p at 60FPS with RTGI (as seemingly utilized in DS2)
Is any of this unreasonable to expect given the improvements PS5 Pro is said to be bringing to the table? And if so, in which ways exactly?
This is exactly what I think will happen.

Before we didn't know about PSSR.
With PSSR, we'll get Fidelity Performance Modes with RT enabled.
 

yamaci17

Member
Decreasing resolution by, say, 2x doesn't mean the GPU has 2x less work to do, as the geometry cost is still the same (and of course CPU usage is also the same).
I've proven this many times with explanations, but he does not seem to care. He is still obsessed with pixel counts lol.

Just mere days ago I showed in this thread how some games do not scale well with resolution and have a heavy geometry cost on the GPU that does not scale with resolution.

rdr 2
Imgsli

native 4k 26 FPS
native 1440p 41 FPS
native 1080p 54 FPS
1080p dlss quality 65 FPS

4k dlss quality (internal 1440p) 33 FPS
4k dlss performance (internal 1080p) 40 FPS
4k dlss ultra performance (internal 720p) 46 FPS
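Putting those RDR 2 numbers next to the pixel counts makes the point explicit (the fps figures are the ones quoted above; the pixel counts are just arithmetic):

```python
# RDR 2 figures quoted above: fps does not scale linearly with pixel count.
results = {
    "native 4K":    ((3840, 2160), 26),
    "native 1440p": ((2560, 1440), 41),
    "native 1080p": ((1920, 1080), 54),
}
base_px, base_fps = 3840 * 2160, 26
for name, ((w, h), fps) in results.items():
    px_ratio = base_px / (w * h)
    fps_ratio = fps / base_fps
    print(f"{name}: {px_ratio:.2f}x fewer pixels -> {fps_ratio:.2f}x the fps")
# 4x fewer pixels (1080p) only buys ~2.1x the frame rate: geometry and other
# per-frame costs don't shrink with resolution.
```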


You can ask him to explain the performance scaling above. He will probably tell you it is a CPU bottleneck :)



960p 1.7m pixels
1440p 3.6m pixels

But only a mere 25-30% performance bump.

but 2x pixel reduction... but my 3080 in callisto.. but...

This is direct proof that Spider-Man is geometry heavy and does not benefit that much from lowering resolution, EVEN LESS SO with upscaling, due to the native-resolution buffers in play.

This is direct proof of why Spider-Man needs to drop to 4K IGTI Performance to hit 60 fps while being able to hit native 4K at 30 fps. If it actually output at 1080p, it would have performed better. But it is upscaling, which means it still pays the 4K native buffer cost for a lot of elements.
 

SlimySnake

Flashless at the Golden Globes
Decreasing resolution by, say, 2x doesn't mean the GPU has 2x less work to do, as the geometry cost is still the same (and of course CPU usage is also the same).
But I don't see that on PC. Never have. It scales down 1:1. Just take Demon's Souls: native 4K 30 fps, 1440p 60 fps, on the same GPU. Why didn't geometry come into play there?

Why can Demon's Souls run at 1440p 60 fps alongside a 30 fps native 4K mode, while Spider-Man 2 has to drop all the way down to 1080p?
 

ChiefDada

Gold Member
Luckily for you, neither DF nor I ignored the PS5 Pro ML or RT hardware. Including in the very post you quoted.

Sure, but you are downplaying it by virtue of your emphasis on the TF spec.

Honestly, what did the PS4 Pro offer other than a 1080p-to-1440p resolution bump? Because games certainly weren't going from 30 to 60fps with the PS4 Pro. It's very simple: last gen, we were CPU limited. This gen, with the transition to RT, Nvidia proving DLSS upscaling to be a gamechanger, and console decompression taking the burden off the CPU, we are FAR more GPU limited than CPU limited. Which is why you will see many more games reaching 60fps with the PS5 Pro than with last gen's Pro upgrade.
 

sncvsrtoip

Member
But I don't see that on PC. Never have. It scales down 1:1. Just take Demon's Souls: native 4K 30 fps, 1440p 60 fps, on the same GPU. Why didn't geometry come into play there?

Why can Demon's Souls run at 1440p 60 fps alongside a 30 fps native 4K mode, while Spider-Man 2 has to drop all the way down to 1080p?
You can test Spider-Man 2. Are you sure it scales ideally linearly? Also, when a game targets 60fps it usually has to be able to run above it to have a consistent experience; Demon's Souls in its 30 fps mode possibly runs above 40 uncapped? Just a theory.
 

yamaci17

Member
You can test Spider-Man 2. Are you sure it scales ideally linearly? Also, when a game targets 60fps it usually has to be able to run above it to have a consistent experience
No game will scale linearly with upscaling. Most games don't even scale linearly without upscaling.

native 4k 26 FPS
native 1440p 41 FPS
native 1080p 54 FPS
1080p dlss quality 65 FPS

4k dlss quality (internal 1440p) 33 FPS
4k dlss performance (internal 1080p) 40 FPS
4k dlss ultra performance (internal 720p) 46 FPS

RDR 2's performance stats completely dismantle his logic.

RDR 2 needs to go from 4K to 1080p to get 2x performance.
Some other game will need less, some other game will need more. I guess this is a concept too hard for him to understand.

Also, just because Demon's Souls runs at 4K 30 fps does not mean it cannot run at a higher framerate. It is entirely possible Demon's Souls can run at 40+ fps at 4K, while Spider-Man 2 is probably at the limit of hitting that 4K/30 fps. As a result, trying to make sense of resolution drops in games that have locked performance caps is nonsensical altogether.
 

SlimySnake

Flashless at the Golden Globes
You can test Spider-Man 2. Are you sure it scales ideally linearly?
How do I test Spider-Man 2? It's not on PC yet. The other day when I tested a bunch of PC games, I was able to get 1:1 scaling when going from native 4K to 1440p in several games. This has been my experience over the last 15 years or so of PC gaming, unless the game is CPU bound.
Also, when a game targets 60fps it usually has to be able to run above it to have a consistent experience
The same is true for 30 fps.
 

Mr.Phoenix

Member
I completely agree with you for the most part. I was using a 2600K CPU for many, many years with my 1070, and I started to become CPU limited in many games in 2018 and 2019 (to get a stable 60FPS), but that was a super old CPU, so no wonder. I changed it to a 3600 and guess what? There were some new games that were bottlenecked by this 2019 mid-range CPU as well (the 3600 is much faster than the 2600K, not to mention it has 2 more cores and 4 more threads). Devs have become super lazy, or it's just a byproduct of ballooned budgets; they don't have time to optimize and make their games actually use the available hardware.

We have had CPUs with many cores for years now, yet most games still max out around 4 or 6 cores; even hyper-threading/SMT doesn't change much.

Here you have 6-core vs 8-core CPUs and different variants with different cache sizes:



2 more cores don't change shit, but cache? That's important.

Here is a 16-core/32-thread CPU vs a 6-core/12-thread CPU:



The majority of that 7950X is basically unused.

Here Daniel Owen explains how a game that is not maxing out the CPU can still be bottlenecked by the CPU (timestamped):


It's TRAGIC that games are still written like that. Single-core performance is still the dominant force, and consoles that have weak single cores but decent multi-threaded performance are not driving developers to write their games with that in mind. That's why there are some CPU-limited games on consoles and we are not able to do anything about it; Sony isn't upgrading the CPU in the Pro, but at the same time they aren't persuading devs to write better code.

That's why we have games like DD2 that are unable to run above ~35-40FPS on consoles; even if they dropped the resolution to 720p, it wouldn't change much.

Well said. It's truly tragic.
Here is a question for both you and Mr.Phoenix. Note that this is not a trick question and there is no wrong answer per se. I'm genuinely curious.

Spider-Man 2 runs at native 4K 30 fps with RT reflections on.
But then it drops to 1080p to run at 60 fps.

Why drop the resolution by 4x just to get 2x more frames? Is there something in the PS5's RT hardware that renders the RT 2x faster at native 4K and 2x slower at lower resolutions? What's the bottleneck here? Not talking about the PS5 Pro here, just the base PS5.

I was playing around with Callisto the other day to measure the performance hit DLSS and FSR add, and I was getting 22 fps at native 4K maxed out and exactly 44 fps at 1440p. It seems my GPU scales down perfectly while the PS5 struggles to do the same. What's the bottleneck here? The VRAM bandwidth? The CPU? Poor optimization by Insomniac, literally the best Sony studio when it comes to RT? Or is there something in the PS5's RDNA2 architecture that's causing these games to scale down poorly as we reduce resolution and thus the GPU load?
This one is easy...

First off, this is not a 1080p vs 2160p thing the way you are making it sound. It's far more nuanced than that.

Performance mode runs with DRS from 1008p to 1440p and is then reconstructed to 1440p. Fidelity mode runs with DRS from 1440p to 2160p and is then reconstructed to 4K.

So no, this is not indicative of a 4x drop in res to accommodate a 2x jump in framerate. It's actually indicative of only a ~2x drop in res to accommodate a 2x boost in framerate. It is also indicative of their render ethos: the engine is trying to run at native 1440p in performance mode and native 4K in fidelity mode. Obviously, it can't do that at all times, so it uses DRS and reconstruction to let it adjust resolutions.

Basically, in situations where fidelity mode is running at 2160p internally, performance mode is running at 1440p. In situations where fidelity crashes down to 1440p internally, performance mode is probably running at 1008p internally. And you needn't worry about the cost of the reconstruction, because it applies to both the fidelity and performance modes, and its cost is likely similar, as they have different target resolutions. You can see this for yourself in the timestamped video, or if the timestamp doesn't work, just go to 6:40.
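A quick pixel-count comparison of those DRS windows backs this up (resolutions as stated above; 1008p assumed to be 16:9):

```python
# Pixel counts of the DRS windows described above (1008p taken as 16:9, i.e. 1792x1008).
perf_low, perf_high = 1792 * 1008, 2560 * 1440   # performance mode: 1008p - 1440p
fid_low,  fid_high  = 2560 * 1440, 3840 * 2160   # fidelity mode:    1440p - 2160p

print(fid_high / perf_high)   # ~2.25x: top of fidelity vs top of performance
print(fid_low / perf_low)     # ~2.04x: bottom vs bottom
# So the internal-resolution gap between the two modes is roughly 2x, not 4x.
```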

 

yamaci17

Member
Well said. It's truly tragic.

This one is easy...

First off, this is not a 1080p vs 2160p thing the way you are making it sound. It's far more nuanced than that.

Performance mode runs with DRS from 1008p to 1440p and is then reconstructed to 1440p. Fidelity mode runs with DRS from 1440p to 2160p and is then reconstructed to 4K.

So no, this is not indicative of a 4x drop in res to accommodate a 2x jump in framerate. It's actually indicative of only a ~2x drop in res to accommodate a 2x boost in framerate. It is also indicative of their render ethos: the engine is trying to run at native 1440p in performance mode and native 4K in fidelity mode. Obviously, it can't do that at all times, so it uses DRS and reconstruction to let it adjust resolutions.

Basically, in situations where fidelity mode is running at 2160p internally, performance mode is running at 1440p. In situations where fidelity crashes down to 1440p internally, performance mode is probably running at 1008p internally. And you needn't worry about the cost of the reconstruction, because it applies to both the fidelity and performance modes, and its cost is likely similar, as they have different target resolutions. You can see this for yourself in the timestamped video, or if the timestamp doesn't work, just go to 6:40.


His mistake is assuming that upscaling will have a linear performance cost. Even the statement "15 years" is funny as heck; we didn't have this much upscaling just 6 years ago on PC.

Here's an RTX 4080 in Fallen Order:


It needs 4K DLSS Performance to get 2x performance.
It is not a CPU limitation in this review either, because we can see the game can go up to 96 fps with 1440p DLSS Quality.

Not all games scale linearly. He's cherry-picking games that bottleneck the GPU heavily at 4K for other reasons. I can do the reverse. This argument will have no winner till he admits some games scale differently.
 

sncvsrtoip

Member
How do I test Spider-Man 2? It's not on PC yet. The other day when I tested a bunch of PC games, I was able to get 1:1 scaling when going from native 4K to 1440p in several games. This has been my experience over the last 15 years or so of PC gaming, unless the game is CPU bound.

The same is true for 30 fps.
You can try Miles Morales.
 

SlimySnake

Flashless at the Golden Globes
I completely agree with you for the most part. I was using a 2600K CPU for many, many years with my 1070, and I started to become CPU limited in many games in 2018 and 2019 (to get a stable 60FPS), but that was a super old CPU, so no wonder. I changed it to a 3600 and guess what? There were some new games that were bottlenecked by this 2019 mid-range CPU as well (the 3600 is much faster than the 2600K, not to mention it has 2 more cores and 4 more threads). Devs have become super lazy, or it's just a byproduct of ballooned budgets; they don't have time to optimize and make their games actually use the available hardware.

We have had CPUs with many cores for years now, yet most games still max out around 4 or 6 cores; even hyper-threading/SMT doesn't change much.

Here you have 6-core vs 8-core CPUs and different variants with different cache sizes:



2 more cores don't change shit, but cache? That's important.

Here is a 16-core/32-thread CPU vs a 6-core/12-thread CPU:



The majority of that 7950X is basically unused.

Here Daniel Owen explains how a game that is not maxing out the CPU can still be bottlenecked by the CPU (timestamped):


It's TRAGIC that games are still written like that. Single-core performance is still the dominant force, and consoles that have weak single cores but decent multi-threaded performance are not driving developers to write their games with that in mind. That's why there are some CPU-limited games on consoles and we are not able to do anything about it; Sony isn't upgrading the CPU in the Pro, but at the same time they aren't persuading devs to write better code.

That's why we have games like DD2 that are unable to run above ~35-40FPS on consoles; even if they dropped the resolution to 720p, it wouldn't change much.

Yeah, we can always blame developers for being lazy, and everyone here knows I do it more than most, but at the end of the day the vast majority will not rewrite their games to be properly multithreaded; they will do what they have been doing all gen, which is to remove RT from 60 fps modes on consoles and leave PC bros to upgrade their CPUs. IIRC, the Remnant devs had to remove Lumen because Nanite was already too expensive. The Lords of the Fallen devs also only used Nanite while using Lumen to bake in the lighting. And Robocop and Fortnite use software Lumen to get 60 fps on consoles.

That said, a Rockstar engineer just yesterday said they can do 60 fps in GTA6, presumably on the base console, before the tweet was deleted. So if GTA6 can run at 60 fps on PS5 and XSX, the Pro should run it with no problems. Maybe they have figured out how to multithread their CPU load, or they are doing what the Kingmakers engineers are doing and running all the NPC calculations on the GPU. That lets them run 4,000 NPCs with their own physics and animations at 60 fps. No room for Lumen or Nanite, but they do have a realtime GI solution, so maybe Rockstar can switch off RTGI for the 60 fps mode while keeping GI dynamic instead of baked.




If a bunch of indie devs can do this at 60 fps, then surely Rockstar with their 3,000 devs have probably figured out how to do it at 60 fps.


I also heard that Epic is finally improving multithreading in UE5's latest build. Not sure when it comes out, but my guess is that CPU utilization might improve over the next few years for UE5 games. But the first few years of this gen have not given me any reason to trust devs to optimize their engines to support the 16-thread console CPUs. DD2 and Rise of the Ronin being CPU limited in towns is simply the norm this gen.
 

Mr.Phoenix

Member
Yeah, we can always blame developers for being lazy, and everyone here knows I do it more than most, but at the end of the day the vast majority will not rewrite their games to be properly multithreaded; they will do what they have been doing all gen, which is to remove RT from 60 fps modes on consoles and leave PC bros to upgrade their CPUs. IIRC, the Remnant devs had to remove Lumen because Nanite was already too expensive. The Lords of the Fallen devs also only used Nanite while using Lumen to bake in the lighting. And Robocop and Fortnite use software Lumen to get 60 fps on consoles.
Here is the sad reality of the situation. This is just history repeating itself, as always.

I was also expecting at least a 30% bump in CPU clocks on the PS5 Pro, just from looking at the PS4 Pro. But when I saw they only gave it a 10% "special use case scenario" bump, it just made sense to me. When you consider everything else they did (more CUs, better RT, AI, PSSR, a better audio chip, more usable game RAM, more bandwidth, etc.), a CPU clock bump of 30%+ would have been the easiest thing to do. So it got me asking: why didn't they? And the simple, obvious answer was that it was not needed.

At least as far as Sony is concerned, and how they do their performance analytics, Sony would no doubt have concluded that there practically wasn't any tested game on the PS5 that was using more than 40-50% CPU utilization. You can't make an argument for more CPU power when you are faced with that kind of statistic.

But what I mean by history repeating itself: in every gen, we always have these situations where first-party IQ and performance is ahead of most third-party stuff. And this gen is showing to be no different. Just look at HFW and FF16. HFW looks better and has a significantly bigger scale, but somehow manages a locked 60fps at 1440p. FF16, even at 720p, cannot manage 60fps. And they are both on the same hardware.

This is why I have been arguing tooth and nail about this notion of CPU bottlenecks. It unfortunately may present that way, but it's just not the case.
I also heard that Epic is finally improving multithreading in UE5's latest build. Not sure when it comes out, but my guess is that CPU utilization might improve over the next few years for UE5 games. But the first few years of this gen have not given me any reason to trust devs to optimize their engines to support the 16-thread console CPUs. DD2 and Rise of the Ronin being CPU limited in towns is simply the norm this gen.
About time too. UE5 is one of the most lopsided, single-thread-bound engines on the planet. One would think that when they were designing the engine, that would have been something they prioritized. All their innovation and good work on the GPU side of things is undermined by their poor CPU utilization. I gave it as an example earlier... imagine Alan Wake 2 with an overall average CPU utilization of 11%. 11 fucking percent. How can anyone in their right mind see that kind of evidence and still come and say the "problem" is that such and such has a weak CPU?

We should strive to keep the same energy. We are quick to point out a CPU bottleneck when looking at GPU utilization sitting at under 60-ish%. We see that and say there is a CPU bottleneck because the GPU isn't really doing anything. Shouldn't we also then look at the CPU utilization sitting at under 25%, which precedes the GPU sitting at under 50%, and instead of calling it a CPU bottleneck call it a dev bottleneck? Because that's an underutilized CPU.
 

Jesb

Member
I'm so confused about how you make a Pro console and still have very little improvement in the CPU. I never bought a PS5 and was thinking that at this point it wouldn't make sense to get a PS5, but a Pro instead. But now I'm not sure if that's even going to be worth waiting for.
 

SonGoku

Member
Why doesn't RTX4060 (15TF) run 45% faster than PS5 (10TF)?
Because Lovelace, like RDNA3, is doing some dual-issue shader shenanigans for its theoretical flops.
If you divide RDNA3's theoretical flops by 2, you get a pretty close comparison with RDNA2. Not sure how Nvidia's math works, but their theoretical flops are inflated as well.
In regards to expectations, you and I both had very similar expectations based on leaks. We both expected 17 tflops, but I did not expect that to translate to 45%, or 14.5 tflops, in reality. I also expected RDNA4 IPC gains that would help it perform like a 20 tflops, or 100% more powerful, console.
I am still puzzled by the whole 1.45x GPU gain though; it's the one thing in all of this that makes no sense to me, given that we have a 1.67x raw TF increase and even a decent bandwidth bump.
This might be a cope, but maybe "PS5 + 45% rendering performance" is the target Sony set for the Pro and recommends to developers, not that the GPU compute is only 45% faster.
 