
Digital Foundry - Playstation 5 Pro specs analysis, also new information

Gaiff

SBI’s Resident Gaslighter
(Horizon Forbidden West) PS5 = 3070 / 2080 Ti

[benchmark chart: Horizon Forbidden West, PS5 vs RTX 3070 / RTX 2080 Ti]
Those are with Very High settings and a 28% higher pixel count than 1600x1800. The PS5 also uses DRS while this is a fixed resolution. Not exactly an apples-to-apples comparison there.
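For reference, that pixel-count gap is quick to verify (a minimal sketch in Python; the resolutions are the ones quoted in this exchange):

```python
# Quick check of the pixel-count gap (resolutions quoted in this exchange).
fixed_1440p = 2560 * 1440      # fixed res used in the PC benchmark chart
ps5_raw = 1600 * 1800          # PS5's 3200x1800 checkerboard = 1600x1800 raw

print(f"fixed 1440p: {fixed_1440p:,} px")
print(f"PS5 raw max: {ps5_raw:,} px")
print(f"gap: {fixed_1440p / ps5_raw - 1:.0%}")   # ~28%, before DRS drops it further
```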
 

Mr.Phoenix

Member
Zen 5, specifically the 8700X but cut down, and add 3D cache if possible
I don't know if it's just stubbornness on your part... or maybe just delusion. But you are refusing to try and understand not just how consoles work, but also why Sony left the PS5's CPU mostly untouched. Gonna borrow the post below to help make the point (again). Feel free to watch the video.

In that benchmark, the 4070 is paired with an 8c/16t CPU running at 4.7GHz. In 4K DLSS Quality, the game is averaging 60fps+ using very high settings across the board. So basically at settings slightly higher than the PS5 fidelity mode.

Now I am going to point out something very important, and the very thing Sony engineers take note of when the hardware is being profiled and they are trying to address bottlenecks. Please, try to set aside what you would like things to be for the remainder of this post.

Take a look at CPU and GPU utilization %. First, bear in mind that HFW is representative of one of the few games that actually have above-average hardware utilization. The GPU is sitting at 99% utilization, and the CPU? It's sitting at 50% utilization. This indicates that if there is a bottleneck here, it's the GPU. Not the CPU.

Again, 8c/16t@4.7GHz. 50% CPU utilization. That literally means you should be able to drop that CPU clock to 2.8GHz and STILL get GPU utilization of 99%. In a perfect world... it doesn't work exactly this way, because utilization is indicative of how much work the component is tasked to do at any given time. Basically, in this case, at any given time, the CPU is only doing work 50% of the time, at 4.7GHz. And mind you, this is an example of really good CPU utilization. A game like Alan Wake 2, for instance, has CPU utilization sitting at around 25%, sometimes even as low as 11%.

And this is the real issue, and why Sony left the PS5 CPU unchanged. If you are looking to upgrade components in your hardware, and you have a limited budget, and you profile all the games on your platform and notice that none of them are recording more than 50% CPU utilization, would you honestly come to the conclusion that what you need to do is increase your CPU clock speed?
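To make that arithmetic concrete, here's a minimal sketch of the headroom estimate being described. It's deliberately naive: average utilization overstates real headroom because game main threads are bursty, which is presumably why the post hedges at 2.8GHz rather than the idealized floor this model gives:

```python
# Naive clock-headroom estimate from average CPU utilization.
# Illustrative only: bursty main threads mean the real safe floor is
# higher than this idealized one.
def min_clock_ghz(clock_ghz: float, avg_utilization: float) -> float:
    """Clock at which the same average work would fully occupy the CPU."""
    return clock_ghz * avg_utilization

print(f"{min_clock_ghz(4.7, 0.50):.2f} GHz")   # 2.35 GHz idealized floor
```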
 
Last edited:
Assuming Sony would continue tweaking Zen2 density and PS5 Pro is RDNA3.5:

Using Zen4c and RDNA3 die shots, I made an 8-core, 60CU, 256-bit bus APU.
[mock-up die shot: 8-core, 60CU, 256-bit bus APU]

This APU measures ~280mm² on 5nm.
PS5 7nm = 308 mm²
PS5 6nm = ~260mm²

This one, which is what I would like from a console, has 12 cores, 60CUs and a 320-bit bus for 20GB of GDDR6.
[mock-up die shot: 12-core, 60CU, 320-bit bus APU]

Which measures ~316mm² on 5nm.

64CUs may be around 300mm² on 5nm.
The Pro on 4nm seems unlikely.
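For anyone wanting to sanity-check area figures like these, here's a toy shrink model; the 70% N7-to-N5 logic-density gain and the 60% logic fraction are assumptions for illustration, not known numbers for this APU:

```python
# Idealized die-shrink estimate: only logic scales with the density gain,
# while SRAM/analog are assumed (pessimistically) not to shrink at all.
# Both parameters below are illustrative assumptions.
def shrunk_area(area_mm2: float, logic_frac: float, density_gain: float) -> float:
    logic = area_mm2 * logic_frac / (1 + density_gain)
    rest = area_mm2 * (1 - logic_frac)
    return logic + rest

print(f"{shrunk_area(308, 0.60, 0.70):.0f} mm^2")  # PS5's 308mm^2 N7 die, rescaled
```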
Off topic, Loxus, but as nice as 12 cores would be, shouldn't we save that for the PS6? I think the focus should be on updating the CPU arch.
 
[quoting Mr.Phoenix's CPU-utilization post above]
Zen 5 wouldn't need a clock increase because of the IPC.
 

Panajev2001a

GAF's Pleasant Genius
What makes you so sure? Have you seen secret info or just a guess?
Just a guess based on the Slim being 6nm and what Sony actually announced. It could be an optimised layout (saving some space) and Sony allowing a bit bigger chip than before.
The "confirmed" leak does not mention 6nm afaik
They do not mention anything, yes… but they do not deny it either. I feel the desire for N4/N5 comes from a desire to be outraged if they do not see Zen4c or something… or much bigger changes, which are in and of themselves out of place for what a mid-generation console upgrade targets and what 3.5-4 years buy you.
But PS5 Pro is not RDNA3...
It is a mix of RDNA3 and RDNA4, like PS4 Pro was a mix of Polaris and Vega. So it needs a base. If RDNA3 had only been designed for N5 and below it would have made it harder to justify N6 for the Pro.

At this point I would be happy with 4070 level and fear we might get 4060 Ti/3070 :(
I think the GPU upgrade is very solid :).
 

Panajev2001a

GAF's Pleasant Genius
There is no reason they have to make the Pro cheap; for one, if anything, the average opinion would prefer something more expensive.
Average opinion being a few random people here who talk the talk but might not all walk the walk and give it great word of mouth either? A much more expensive and higher-specced PS5 Pro, without any exclusive software that could truly be optimised for it btw (you should see how this is a problem already, right?), to be succeeded in a few years by a much cheaper PS6, would be a very hard proposition.

Also, a mid-generation upgrade without any exclusive software, released a year or so before MS's next-generation HW, will not have the best multiplatform games, but it can still be designed to hold its own well (it has been), and the MS console will have software whose development has mostly been led on the base PS5 anyway… anything beyond that I would save and put towards PS6.

PS5 Pro is both the best PS5 it can be and a test of tech for PS6.

I also think you're greatly exaggerating the price.
Fine, but there is evidence on one side (even reported in this thread, so I am not going to retread it), while on the other side we seem to be upset that they short-changed us or that something much more should have been on the cards, which is wrong IMHO.
 
Last edited:

Imtjnotu

Member
Just watched some videos on FSR 3.1. Particularly, frame generation.

All I can say is that we are sleeping on this with PSSR. If a non-AI-accelerated FSR can do FG as well as it does, then I think it's safe to say PSSR doing frame gen is a no-brainer. Would they use it to get games hovering around 50fps to a locked 60? Or take a 40fps game to 70fps?

Could be interesting.
I don't think PSSR will use frame gen. But then again, with the claims of 120fps, I could be wrong.
 

Bojji

Member


?

From the raw numbers, it's on par with a 4070 Super. Or that's its closest match on the Nvidia side of things.

4070Super = 35.5TF (17.75TF), 12GB, 504GB/s.
4070 = 29.15TF (14.57TF), 12GB, 504GB/s.
PS5pro = 33.5TF (16.75TF), 13.7GB, 576GB/s.

I think it's safe to say that if the RT is anywhere close to what RT is like on the Nvidia cards, or at least for non-RT workloads, the PS5 Pro would land somewhere in the middle of those two cards performance-wise. That's within spitting distance of a $600 GPU.

Flops are calculated from the "boost clock" figure on Nvidia cards; usually GPUs run higher than that. 4070 boost clock on paper:

[spec sheet: RTX 4070 official boost clock]


Vs actual clock in games when not power limited:

[screenshot: RTX 4070 actual in-game clock]


So with that in mind:

5888 x 2475 x 2 = 29.1TF
5888 x 2800 x 2 = 32.9TF

Most GPUs will boost to 2700-2800MHz for end users.

For comparison, AMD (with the 7800XT as an example):

[spec sheet: RX 7800 XT official boost clock]


[screenshot: RX 7800 XT actual in-game clock]


That's why Nvidia GPUs perform better in real life than the pure TF number would suggest: TF is calculated from a very conservative boost clock, whereas AMD uses a boost clock much closer to the real clock end users get.
4070S: 7168 x 2800 x 2 = 40.1TF
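All the TF figures in this exchange come from the same formula, shader cores × clock × 2 ops per cycle (one FMA counts as two flops); a small helper reproduces them:

```python
# FP32 TFLOPS = shader cores x clock (MHz) x 2 ops/cycle (FMA).
def tflops(shaders: int, clock_mhz: int) -> float:
    return shaders * clock_mhz * 2 / 1e6

print(f"4070  @ paper boost: {tflops(5888, 2475):.1f} TF")  # ~29.1
print(f"4070  @ real clocks: {tflops(5888, 2800):.1f} TF")  # ~33.0
print(f"4070S @ real clocks: {tflops(7168, 2800):.1f} TF")  # ~40.1
```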

(Horizon Forbidden West) PS5 = 3070 / 2080 Ti

[benchmark chart: Horizon Forbidden West, PS5 vs RTX 3070 / RTX 2080 Ti]

PS5 doesn't run at very high settings and it doesn't run at a fixed 2560x1440. PS5 runs with a dynamic resolution whose MAXIMUM value is 3200x1800CB (1600x1800 in raw pixels), and it's BELOW that most of the time.

That's why we don't see DF comparing some games where a fair comparison can't be made: you can't even set CB on the PC version, and dynamic resolution prevents a performance comparison anyway.
 
Last edited:

Fafalada

Fafracer forever
I don't think PSSR will use frame gen. But then again, with the claims of 120fps, I could be wrong.
Frame gen is orthogonal to upscaler use. I know Nvidia and AMD marketing bundle it like it's all one thing, but it really isn't.
And Sony's been doing frame gen on consoles since 2016, so yes, I fully expect it going forward. As I've said in other threads, the hope on consoles is that we move past interpolation and get latency improvements as well.
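The latency point is easy to see with a toy timing model: an interpolated frame can only be presented after the next real frame exists, so the presented framerate doubles while roughly one rendered frame of input latency is added (numbers below are illustrative):

```python
# Toy model of frame interpolation: presentation rate doubles, but the
# in-between frame can't be shown until the *next* real frame is rendered,
# adding roughly one rendered frame of latency.
render_fps = 50
frame_ms = 1000 / render_fps

presented_fps = render_fps * 2
added_latency_ms = frame_ms   # wait for frame N+1 before showing the N/N+1 blend

print(f"{presented_fps} fps presented, ~{added_latency_ms:.0f} ms extra latency")
```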
 

shamoomoo

Member
[quoting Bojji's clock/TF and Horizon comparison post above]
On paper, the RTX 4070S vs the Pro is a 20% difference with regard to texture rate and 7% in its various fill rates.
 
Last edited:

Bojji

Member
On paper, the RTX 4070S vs the Pro is a 20% difference with regard to texture rate and 7% in its various fill rates.

We only know the flops numbers and memory bandwidth for the Pro; we don't know detailed specs beyond that (you can calculate some things based on clocks, but we are not sure about clocks anyway).

You should compare it with the 7800XT, as that is an RDNA3 GPU with 60CUs; you can't get closer than that with a PC GPU.

The 7800XT is on paper faster than the 4070S but loses in games (and that TF number I calculated using actual 4070S in-game clocks somewhat answers that).

7800 on the left and 4070S on the right:

[benchmark chart: RX 7800 XT]



[benchmark chart: RTX 4070 Super]
 

Gaiff

SBI’s Resident Gaslighter
On paper, the RTX 4070S vs the Pro is a 20% difference with regard to texture rate and 7% in its various fill rates.
We'd need the number of ROPs and the clock speed of the Pro to calculate that. Assuming 96 ROPs at the reported 2.18GHz, this puts its fill rate at 209.3 GPixels/s vs around 220 GPixels/s for the 4070S at a clock of 2750MHz. It would, however, have a higher pixel fill rate than the regular 4070, which is at around 176 GPixels/s with clocks of 2750MHz.
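Those fill-rate figures all come from ROPs × clock; a quick sketch (the Pro's 96 ROPs at 2.18GHz are the thread's assumptions, not confirmed specs):

```python
# Pixel fill rate (GPixels/s) = ROPs x clock (GHz).
# The Pro's ROP count and clock below are the thread's assumptions.
def fill_rate(rops: int, clock_ghz: float) -> float:
    return rops * clock_ghz

print(f"Pro (assumed): {fill_rate(96, 2.18):.1f} GPix/s")  # ~209.3
print(f"4070S:         {fill_rate(80, 2.75):.1f} GPix/s")  # ~220.0
print(f"4070:          {fill_rate(64, 2.75):.1f} GPix/s")  # ~176.0
```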
 
Last edited:

shamoomoo

Member
[quoting Bojji's 7800XT comparison post above]
Yeah. There weren't any leaks with regard to frequency, but the Pro should minimally have 96 ROPs and 240 TMUs with a clock rate of 2.18GHz, though I believe it might be 2.2GHz.
 

Gaiff

SBI’s Resident Gaslighter
[quoting Bojji's clock/TF breakdown above]
That's the most frequent mistake people make when reading NVIDIA GPU spec sheets. Even the boost clocks are much lower than the actual in-game clocks. I recall having to correct several people who thought the 2080 Ti's 13.45 TFLOPs figure was accurate when it isn't. Virtually every card will boost to 1850MHz and above, with good aftermarket models boosting to 2000MHz+. It's actually much closer to a 16-17 TFLOPs GPU. This also extends to its pixel fill rate, which according to the spec sheets is beaten by around 5% by the PS5, but in the real world it's actually at 162.8 to 176 GPixels/s, so it's really 14-23% better there too.
 

Gaiff

SBI’s Resident Gaslighter
Also, curious, but how do we arrive at a bandwidth of 576 GB/s with a 256-bit bus and clocks of 2.18GHz? This would put it at 558 GB/s. 576 GB/s means the clocks are at 2.25GHz and subsequently 17.28 TFLOPs or 34.56 dual-issue.

Edit: Never mind, I'm an idiot. Was looking at GPU clock and not memory clock. So 2.25GHz for the memory clock for a bandwidth of 576 GB/s.
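For anyone following along, the bandwidth arithmetic is bus width (in bytes) × effective data rate, where GDDR6's effective rate is 8× the memory clock:

```python
# GDDR6 bandwidth (GB/s) = bus width in bytes x effective data rate (Gbps),
# where the effective rate is 8x the memory clock.
def bandwidth_gbs(bus_bits: int, mem_clock_ghz: float) -> float:
    return (bus_bits / 8) * (mem_clock_ghz * 8)

print(bandwidth_gbs(256, 2.25))  # 576.0 -> the leaked figure
print(bandwidth_gbs(256, 2.18))  # ~558  -> the mistaken GPU-clock version
```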
 
Last edited:

64bitmodels

Reverse groomer.
They should have spent that budget on half decent gunplay mechanics instead
You right-click, point the cursor towards the enemy, click, and the enemy dies. How is it flawed?? You act like the gunplay in it is irredeemable dog shit or something. Play Starfield or New Vegas if you want actual bottom-of-the-barrel gunplay.
 

Loxus

Member
[table: RDNA Shader Engine / Render Backend configurations]


The number of ROPs the PS5 Pro will have depends on the number of Shader Engines and the type/number of Render Backends used.

Render Backend = 4 Color + 16 Depth ROPs
Render Backend+ = 8 Color + 16 Depth ROPs

RDNA1/PS5 has 8 Render Backends per Shader Engine.
RDNA2/3/XBSX has 4 Render Backend+ per Shader Engine.

Normally,
1 Shader Engine = 32 ROPs
2 Shader Engines = 64 ROPs
3 Shader Engines = 96 ROPs

The PS5 has a total of 16 Render Backends.
If the PS5 Pro keeps that number of Render Backends but upgrades them to Render Backend+, it will have:
1 Shader Engine = 64 ROPs
2 Shader Engines = 128 ROPs
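Loxus' accounting as a quick sketch (the RB-versus-RB+ makeup per Shader Engine is that post's assumption, not a confirmed spec):

```python
# Color-ROP count from the Render Backend accounting in the post above.
RB_COLOR, RB_PLUS_COLOR = 4, 8   # color ROPs per RB vs per RB+

def color_rops(shader_engines: int, rbs_per_se: int, plus: bool) -> int:
    return shader_engines * rbs_per_se * (RB_PLUS_COLOR if plus else RB_COLOR)

print(color_rops(2, 8, plus=False))  # PS5: 16 RBs          -> 64 ROPs
print(color_rops(2, 8, plus=True))   # same 16, but as RB+  -> 128 ROPs
```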
 
Last edited:

Puscifer

Member
Yes, without a doubt.

GTA V on PS360 looked amazing for its time. GTA IV looked great and was widely praised for its "living and breathing" city. To this day, no other game has NPC interactions like Rockstar games, a robust police system like theirs, and a vibrant open world like they have. Their games control horribly and are dull as hell, but what they do is almost unmatched. The game that came the closest is probably Sleeping Dogs.

I have almost no doubt GTA VI will be the most advanced game when it hits the market, but it'll still be more effective than sleeping pills at making me fall asleep.

Also, how good a game looks doesn't necessarily have much to do with CPU demands. HFW looks great mainly because of the nearly unmatched texture work, granular details, and art direction, none of which are CPU-intensive. In terms of how interactive the world is, it doesn't even come close to BOTW.
This is 100% why I'm okay waiting for a PC version. RDR left a real bad taste in my mouth for what it attempted and I didn't like it at all. If this is the direction they're headed in, I'm okay admiring it from a distance, just saying.
 

Thanati

Member
I never got a PS5, and by the time I was thinking of getting one, they were talking about the Pro. I'll most likely pick one up.
 

SonGoku

Member
How is this even remotely possible even if you take the 45% figure at face value? Some of y'all are taking FUD to unprecedented heights with these gloom and doom takes.
I'm just assuming the worst-case scenario, to be pleasantly surprised with 4070 levels of performance.
Your comparison is off btw, you should be using the 4060 as the baseline, not the Ti.
Just a guess based on the Slim being 6nm and what Sony actually announced. It could be an optimised layout (saving some space) and Sony allowing a bit bigger chip than before.
I think it's different from the PS4 Slim situation because back then they jumped straight from 28nm to 16nm; there was no in-between node with a "free" upgrade path like 6nm is to 7nm.
They do it mention anything yes… they do not deny it yet. I feel the desire for N4-N5 comes from the desire to be outraged they do not see Zen4c or something… or much bigger changes which is in and of itself out of place for what a mid-generation console upgrade targets and what 3.5-4 years buy you.
Why would they deny something that was not previously mentioned? DF is the one that made the 6nm conjecture as part of their own subjective speculation AFTER the leak came out.
We expected 4070 performance all along on 5nm, which is what we are (hopefully) getting. There's no need for Zen 4 to justify using 5nm.
It's more expensive, yes, but it's also way more dense and power efficient, allowing for cost cuts that reduce the relative cost compared to a bigger, more power-hungry APU on 6nm.

I read the other argument about 5nm being in hot demand, but is that really still the case in 2024? Didn't Apple move to 3nm now?
It is a mix of RDNA3 and RDNA4, like PS4 Pro was a mix of Polaris and Vega. So it needs a base. If RDNA3 had only been designed for N5 and below it would have made it harder to justify N6 for the Pro.
It would be more accurate to compare with the PS5, which had elements of RDNA1 but was mostly based on RDNA2. That is a very similar situation now, considering the RT/AI improvements.
At the time of the PS4 Pro's release, Polaris was a brand-new architecture that released just a few months before the PS4 Pro, and Vega came a year later; likewise, PS5 Pro and RDNA4 will release close to each other. So I don't see why there's a need to assume the Pro would have to be based on RDNA3 instead of following the same approach they took with the PS4 Pro and OG PS5, which used a custom design based on the latest architecture and picked and chose features.
I think the GPU upgrade is very solid :).
Likewise, if it's on par with or close to a 4070 it will be great; the 4070 handles path tracing at max settings with AI image reconstruction, so it would make for a great RT mode in single-player games.
 
Last edited:
I just feel like they could've delayed a Pro to even 2025 and done a little better on the CPU aspect, even if it was a Zen 3 or 4 clocked higher.

Just an all-around letdown on the CPU aspect, but the other features seem nice. I guess, in a way, we are going to see our first real look at just how powerful or important AI will be in the future of games and their performance.
 

Panajev2001a

GAF's Pleasant Genius
I think its different from the PS4 Slim situation because back then they jumped straight from 28nm to 16nm, there was no in between node with a "free" upgrade path like 6nm is to 7nm
Free… we will see, “free” does not seem as free as free :p.

It's more expensive, yes, but it's also way more dense and power efficient, allowing for cost cuts that reduce the relative cost compared to a bigger, more power-hungry APU on 6nm.
It depends on whether the cost to design and manufacture, when you may not need the logic density and frequency advantages it brings, is greater than the savings in power dissipation tech and power delivery.

I read the other argument about 5nm being in hot demand, but is that really still the case in 2024? Didn't Apple move to 3nm now?
Yes, but they are still manufacturing chips on 5nm too, and the competition, which could not get many 5nm chips when Apple bought most of the supply, is now taking its place (for designs that do need it).

It would be more accurate to compare with the PS5, which had elements of RDNA1 but was mostly based on RDNA2. That is a very similar situation now, considering the RT/AI improvements.
At the time of the PS4 Pro's release, Polaris was a brand-new architecture that released just a few months before the PS4 Pro, and Vega came a year later; likewise, PS5 Pro and RDNA4 will release close to each other. So I don't see why there's a need to assume the Pro would have to be based on RDNA3 instead of following the same approach they took with the PS4 Pro and OG PS5, which used a custom design based on the latest architecture and picked and chose features.
Wasn't the rumor that PS5 Pro had been further along than expected but then put on hold or revised? It could explain and justify the Polaris-with-Vega-features comparison applying here.
 

Taycan77

Member
That was a bigger improvement though over the base model.
PS4 Pro offered a nice resolution bump for those who recently bought a 4K TV - let's ignore the fact most 4K TVs at the time were garbage. Sceptics couldn't wait to tell us they could not tell the difference between 1080p and 'fake' 4K - or how they'd much rather see additional power put towards greater visual fidelity.

PS5 Pro is going to offer very significant real-world improvements to visual fidelity - not just base resolutions - but significant headroom for improved RT, effects, and all manner of visual & performance improvements PC gamers have enjoyed in recent years. There's concern trolling on three fronts - Xbox gamers sulking that MS aren't going to deliver a mid-gen upgrade - PlayStation gamers upset their base PS5 will be overshadowed - PC gamers who will face the indignity of a $500/$600 console showing a clean pair of heels to the latest and greatest PCs.

OK, the last part may be a slight exaggeration - but you really are going to need cutting-edge PC hardware to deliver a significant advantage over PS5 Pro in real-world scenarios. That's with 3rd-party titles; we'll have to see what Sony offers with their 1st-party.

BTW, I did enjoy seeing Alex sulking in the DF Direct while trying to underplay the tech in PS5 Pro. DF spend so much time talking up the most insignificant PC tech developments - while hyping up Nvidia's software solutions to get more out of less. When Sony does the same with PS5 Pro - they revert to talking about old school hardware specs - rather than the significance of delivering an integrated package that delivers all these developments & solutions - in one plug & play package.

Oh, and while I do very much enjoy some of DF's content - it took them 3 years to admit PS5 is on par with Series X - and that Series S is a commercial and technical failure. Even now they keep trotting out the MS PR FUD about their next console being 'the biggest technical leap yet' - when in the real world people have very real concerns about whether MS will even continue in the console space beyond their immediate obligations.
 
[quoting Mr.Phoenix's CPU-utilization post above]
In general, AAA games have been GPU-bound since the PS4/XB1 generation, in part because developers could lean on GPGPU calculations for particles and some physics while balancing a solid 30fps framerate with dynamic resolution scaling as the generation went on and visual boundaries were pushed.

This generation there's been very little of the predicted improvement to physics simulation, outside of games like Dragon's Dogma 2, which (you guessed it) can drop to 20fps out in the world when multiple characters combine said physics simulations.

Ray tracing is also another easy win for the PS5 Pro, with more dedicated RT cores taking some of the strain off the current CPUs when RTGI, RTAO, or RT reflections are enabled. That, combined with the 10% CPU clock boost, should see games that hovered around 40-45fps reach the VRR window and feel much smoother for those of us lucky enough to have VRR displays (which is also the Pro market).
 

Mr.Phoenix

Member
[quoting the earlier post wishing the Pro had been delayed for a better CPU]
They don't need to have delayed the PS5 Pro to some time in the future to allow for a clock bump to the CPU. I am still repeating myself: bumping CPU clocks to 4.2GHz or 4.4GHz would probably have been the easiest thing they could have done considering all else they already did. That they didn't simply means one thing: they didn't think it was needed.
On RT performance.
  • 2-3x Ray-tracing (x4 in some cases)
Anyone wondering where that puts the PS5 Pro?
I'm thinking around a 3080, or a 4070 for the 4× 6700 XT cases.
Radeon RX 7700 XT review:
[RT benchmark charts: RX 7700 XT vs competing GPUs]



On top of that, RDNA4 may include a Traversal Engine.

I just wish there was an easier-to-digest solid number or unit to measure RT performance in these GPUs. Kinda like how we have teraflops and we mostly have an idea what to expect just by looking at TFs, ROPs, TMUs... etc.

Or maybe it's me that doesn't quite understand what the metrics mean. Like what the fuck is a box and triangle intersection? Is a box a collection of objects in a BVH tree? Is a triangle a hit on a literal triangle making up a model? Right now I am at the place where, to me, it's just "the more box and triangle intersections the better." Idk, it just doesn't feel like the documentation on this thing is clear or straightforward enough...

To me, whenever stuff like this is described so ambiguously, xx times better than xyz, 50% faster than x... alarm bells go off in my head.
 

Loxus

Member
[quoting Mr.Phoenix's posts above on CPU clocks and RT metrics]
This should help you with Ray Tracing performance.


Ray tracing performance

This is based on the Hot Chips XSX info. I assume the RT used in XSX and PS5 is the same RDNA2 RT.

The XSX specs are:

Either 4 texture or 4 ray ops per CU per clock. Ray intersection unit is in the texture sampler.

4 × 52 × 1.825GHz = 380G/sec ray-box theoretical peak performance.

1 × 52 × 1.825GHz = 95G/sec ray-triangle ops peak, meaning 1 ray-triangle intersection per CU per clock.

Applying that to PS5:

4 × 36 × 2.23GHz = 321G/s ray-box

1 × 36 × 2.23GHz = 80G/s ray-triangle


Edit:
Assuming RDNA4 is now 8 Ray/Box and 2 Ray/Triangle.

PS5 Pro Ray Tracing performance should look like this.
For example, if using 54CUs and +10% on PS5 clocks (2.23GHz + 10% ≈ 2.45GHz):

8 Ray/Box × 54CU × 2.45GHz = 1,058.4G/sec ray-box
2 Ray/Tri × 54CU × 2.45GHz = 264.6G/sec ray-triangle
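These are all the same per-CU rate × CU count × clock calculation; a small helper reproduces the numbers (the 8 ray-box / 2 ray-triangle RDNA4 rates are assumed, as noted above):

```python
# Peak intersection rate (G/sec) = intersections per CU per clock x CUs x GHz.
# The RDNA4 rates (8 ray-box, 2 ray-triangle) are the post's assumption.
def g_per_sec(rate_per_cu: int, cus: int, clock_ghz: float) -> float:
    return rate_per_cu * cus * clock_ghz

print(f"XSX ray-box: {g_per_sec(4, 52, 1.825):.0f} G/s")  # ~380
print(f"PS5 ray-box: {g_per_sec(4, 36, 2.23):.0f} G/s")   # ~321
print(f"Pro ray-box: {g_per_sec(8, 54, 2.45):.0f} G/s")   # ~1058
print(f"Pro ray-tri: {g_per_sec(2, 54, 2.45):.0f} G/s")   # ~265
```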
 
Last edited:
[quoting the earlier post wishing the Pro had been delayed for a better CPU]

Releasing a mid-gen after 5 years would mean a 10-year generation; 4 years is already the latest possible timeframe.

Beyond that, you just wait for the PS6; that is clearly what you want...

This is not PS6
 
Last edited:
[quoting Taycan77's post above]
Very well put. They have been praising Nvidia GPUs for years with a focus on AI upscaling and dedicated RT. But now that Sony is about to literally copy Nvidia's solutions and release both techs in a cheap package using AMD hardware (that will be used by Xbox afterwards!), suddenly they are most concerned about the lack of CPU grunt and have launched a whole "PS5 Pro sucks because of CPU" campaign that has been successfully echoed by the usual (and soon-to-be-dead) legacy media.

Who pays those shills spouting their MS-fed propaganda? Oops, did I answer my own question?
 
Last edited:

sachos

Member
I really can't wait to see PSSR in action; if it reaches DLSS levels of quality, then console gamers not that well versed in tech will be surprised. DLSS is awesome. I also hope the RT improvements are big enough to push more devs to implement RTGI in their games.
 

SonGoku

Member
Free… we will see, “free” does not seem as free as free :p.
I know what you mean :p but still, most of the chip design cost is avoided because it's design-compatible, and the main benefit your reasoning for the PS4 Slim & Pro rests on is not there: they made sense because shared R&D alleviated design cost on 16nm, but when the 7nm->6nm design cost is already negligible, this isn't much of a benefit. At best Sony would get a discount for compounded orders.
It depends if the cost to design and manufacture, when you may not need the logic density and frequency advantage it brings is greater than the savings in power dissipation tech and power delivery or not.
I agree this is a calculus Sony would have made to decide what's more economic short term and long term. What I don't agree with is that we have to speculate from the position that 6nm is the main possibility (short of you having some insider info you are not sharing).
Short term and long term, the question is which one is cheaper to produce:
5nm: a 41% smaller APU (70% density improvement) means the wafer cost will be offset as each wafer produces more viable APUs, and 50% less power consumption means cheaper cooling, a cheaper PSU, and a smaller console
vs
6nm: 41% cheaper wafer cost, 45% cheaper chip design cost

Even if, short term, 6nm ends up winning by some margin because of 5nm demand, you also have to consider the long term: as demand for 5nm lowers, so will the wafer cost. Whatever they decide to go with, they have to stick with until the end of the generation; chip design costs are too high to justify shrinks of the same APU design.
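To show the shape of that trade-off, here's a toy per-die cost model; every number in it (wafer prices, yield, edge loss) is a made-up placeholder, not an actual TSMC quote:

```python
import math

# Toy cost-per-die model. ALL numbers below (wafer prices, yield, edge loss)
# are illustrative placeholders, purely to show the shape of the trade-off.
def cost_per_die(wafer_usd: float, die_mm2: float, yield_frac: float) -> float:
    wafer_area = math.pi * 150**2            # 300mm wafer
    dies = (wafer_area / die_mm2) * 0.9      # crude ~10% edge loss
    return wafer_usd / (dies * yield_frac)

n6_die = 280 / 0.59   # same design is ~41% bigger on N6 than a ~280mm^2 N5 die
print(f"N6: ${cost_per_die(10_000, n6_die, 0.8):.0f} per die")
print(f"N5: ${cost_per_die(17_000, 280, 0.8):.0f} per die")
```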

There's also the possibility of going with chiplets for the CPU and IO on 6nm and the GCD on 5nm, if that makes things cheaper.

Yes, but they are still manufacturing chips on 5nm too, and the competition, which could not get many 5nm chips when Apple bought most of the supply, is now taking its place (for designs that do need it).
But even then, it's not like the PS5 Pro needs massive quantities like a base console; wasn't the PS4 Pro like 10-20% of total PS4 sales?
So even if there's less production this year, things will improve next year in time for GTA6.

It's not like supply of 5nm has just opened up; AMD & Nvidia have been using it since their current cards released.
Wasn't the rumor that PS5 Pro had been further along than expected but then put on hold or revised? It could explain and justify the Polaris-with-Vega-features comparison applying here.
Wasn't there a similar rumor about the PS5 releasing in 2019, but they decided to hold? Applying the same logic here, the PS5 is RDNA1 with some RDNA2 features.
Also, didn't PS4 Pro rumors and detailed specs leak much earlier, yet it ended up using the latest GPU arch at the time of release?

The other reason why I'm not sold on RDNA3, besides power consumption, is the RT improvements, which basically make it RDNA4. That's likely the main difference between the two, besides hopefully better power efficiency.
 
Last edited:

sendit

Member
I really can't wait to see PSSR in action; if it reaches DLSS levels of quality, then console gamers not that well versed in tech will be surprised. DLSS is awesome. I also hope the RT improvements are big enough to push more devs to implement RTGI in their games.

It's a step in the right direction. However, PSSR needs to include frame generation for me to be impressed. I can only imagine what DLSS 4 will bring to the table.
 

Mr.Phoenix

Member
[quoting Loxus' ray tracing performance breakdown above]

I know how to do the calculation. Not much different from doing the calc. for TF.

What I don't know is what exactly a ray-box or a ray-triangle is. How is BVH traversal measured?
 

Radical_3d

Member
BTW, I did enjoy seeing Alex sulking in the DF Direct while trying to underplay the tech in PS5 Pro
I don't watch Alex at all. I don't judge DF as harshly as you guys do. I have my concerns, but not big enough to complain in a forum. But the nanosecond Battaglia opens his pie hole I just hit fast forward until anybody or anything else is on the screen. I have had enough patience with that one.
 

Loxus

Member
[quoting Mr.Phoenix's question above about ray-box/ray-triangle metrics]
I don't know much about ray tracing, it's complicated. Hopefully someone else will chime in, but here's my understanding.

A box or triangle is what geometry (objects) in games are made of.
[diagram: scene geometry as bounding boxes and triangles]



Within an RT unit is an intersection engine, which can calculate the intersection of rays (light) with boxes and triangles.
[slides: RDNA Ray Accelerator intersection engine]


The numbers earlier are basically how fast the GPU can calculate these intersections.


BVH is more of a technique.
Ray Tracing
Bounding Volume Hierarchy (BVH) is a popular ray tracing acceleration technique that uses a tree-based “acceleration structure” that contains multiple hierarchically-arranged bounding boxes (bounding volumes) that encompass or surround different amounts of scene geometry or primitives. Testing each ray against every primitive intersection in the scene is inefficient and computationally expensive, and BVH is one of many techniques and optimizations that can be used to accelerate it.

The BVH can be organized in different types of tree structures and each ray only needs to be tested against the BVH using a depth-first tree traversal process instead of against every primitive in the scene. Prior to rendering a scene for the first time, a BVH structure must be created (called BVH building) from source geometry. The next frame will require either a new BVH build operation or a BVH refitting based on scene changes.


[diagram: BVH tree structure and traversal]
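To make "depth-first tree traversal" less abstract, here's a bare-bones sketch of what testing a ray against a BVH looks like; real GPUs use optimized node layouts and the hardware intersection units discussed above, so treat this as conceptual only:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    lo: tuple                                       # AABB min corner (x, y, z)
    hi: tuple                                       # AABB max corner
    children: list = field(default_factory=list)    # inner node: sub-boxes
    triangles: list = field(default_factory=list)   # leaf node: geometry

def ray_hits_box(origin, inv_dir, lo, hi) -> bool:
    """Slab test: this is the 'ray-box intersection' the spec sheets count."""
    tmin, tmax = 0.0, float("inf")
    for o, inv, l, h in zip(origin, inv_dir, lo, hi):
        t1, t2 = (l - o) * inv, (h - o) * inv
        tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
    return tmin <= tmax

def traverse(node, origin, inv_dir, on_triangle):
    """Descend only into boxes the ray touches; test triangles at leaves."""
    if not ray_hits_box(origin, inv_dir, node.lo, node.hi):
        return                         # whole subtree skipped: the BVH win
    for tri in node.triangles:
        on_triangle(tri)               # the 'ray-triangle intersection' step
    for child in node.children:
        traverse(child, origin, inv_dir, on_triangle)
```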
 