
PS5 Pro devkits arrive at third-party studios, Sony expects Pro specs to leak

Bojji

Member
The real-world RT numbers for PS5 Pro that have been floating around are >2x base PS5 RT performance. That lines up with the compounded RT performance improvement from RDNA 2 through prospective RDNA 4.

RDNA 2 to RDNA 3 RT +50% = 100 ==> 150

RDNA 3 to RDNA 4 RT +50% = 150 ==> 225

225/100 = 2.25
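That compounding can be sketched in a few lines (the +50% per generation figures are speculative, not confirmed specs):

```python
# Hypothetical compounding of two +50% generational RT uplifts.
# Uplifts stack multiplicatively, not additively: 1.5 * 1.5 = 2.25.
base = 100           # normalized RDNA 2 RT performance
rdna3 = base * 1.5   # RDNA 2 -> RDNA 3: 150
rdna4 = rdna3 * 1.5  # RDNA 3 -> RDNA 4: 225
print(rdna4 / base)  # 2.25
```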

But raster performance will also be ~2x so where is the improvement in RT?
 

ChiefDada

Gold Member
But raster performance will also be ~2x so where is the improvement in RT?

Should be on a "per CU" basis, otherwise the spec is meaningless and wouldn't gel with the idea that the PS5 Pro is specifically focusing on accelerated RT.

[image: jTgb3jQ.jpg]
 

NeonGhost

uses 'M$' - What year is it? Not 2002.
So what games will you first play on the Pro that fix current games' frame rate problems? I think I'm gonna play Resident Evil 2, 3, and 4 in their ray tracing modes.
 
In a radically inflationary environment, where Sony has increased the price of the base PS5 console revision three years in, there is no way in hell they're selling the Pro at a loss to people who are replacing their current consoles.

Expect $600+ for a middling increase, and $750+ for a meaningful increase in power.
 

Mr.Phoenix

Member
In a radically inflationary environment, where Sony has increased the price of the base PS5 console revision three years in, there is no way in hell they're selling the Pro at a loss to people who are replacing their current consoles.

Expect $600+ for a middling increase, and $750+ for a meaningful increase in power.
It being an inflationary environment is why it would be sold for $499-$599. It won't be anything over $599, and definitely not $750. The PS5 Pro will likely have a BOM of around $450-$500. None of the leaks making the rounds, nor any informed guessing, suggests it would cost more than that to make.
 

Perrott

Member
So what games will you first play on the Pro that fix current games' frame rate problems? I think I'm gonna play Resident Evil 2, 3, and 4 in their ray tracing modes.
Marvel's Spider-Man 2 should achieve both native 4K and a locked 60fps in its quality mode, thanks to the dynamic resolution topping out at 2160p and the unlocked framerate option.

Death Stranding: Director's Cut should also achieve a locked 60fps in its native 4K quality mode, same as Gran Turismo 7, which only drops a frame every once in a while.

Both Uncharted: Legacy of Thieves Collection (A Thief's End and The Lost Legacy) plus The Last Of Us Part I and probably The Last Of Us Part II Remastered would also achieve 60fps in their native 4K quality modes due to the ability to unlock the framerate across all those titles.

There are a handful of other titles (DMC5's RT performance mode comes to mind) that might benefit from the PS5 Pro's power by default, among which is Hogwarts Legacy. Now this one is really interesting: the game does have an unlocked framerate variant of its quality mode, which does feature raytracing effects, but on a base PS5 it barely performs above the 30fps mark in the most demanding sequences. Would the PS5 Pro be able to turn Hogwarts' quality mode into a locked 60fps through its boost mode, without the aid of a Pro patch's optimizations? We'll see.
 
It being an inflationary environment is why it would be sold for $499-$599. It won't be anything over $599, and definitely not $750. The PS5 Pro will likely have a BOM of around $450-$500. None of the leaks making the rounds, nor any informed guessing, suggests it would cost more than that to make.
$750 is an exaggeration, but it's gonna be $599 minimum, not maximum; the maximum is $699.
 
Marvel's Spider-Man 2 should achieve both native 4K and a locked 60fps in its quality mode, thanks to the dynamic resolution topping out at 2160p and the unlocked framerate option.

Death Stranding: Director's Cut should also achieve a locked 60fps in its native 4K quality mode, same as Gran Turismo 7, which only drops a frame every once in a while.

Both Uncharted: Legacy of Thieves Collection (A Thief's End and The Lost Legacy) plus The Last Of Us Part I and probably The Last Of Us Part II Remastered would also achieve 60fps in their native 4K quality modes due to the ability to unlock the framerate across all those titles.

There are a handful of other titles (DMC5's RT performance mode comes to mind) that might benefit from the PS5 Pro's power by default, among which is Hogwarts Legacy. Now this one is really interesting: the game does have an unlocked framerate variant of its quality mode, which does feature raytracing effects, but on a base PS5 it barely performs above the 30fps mark in the most demanding sequences. Would the PS5 Pro be able to turn Hogwarts' quality mode into a locked 60fps through its boost mode, without the aid of a Pro patch's optimizations? We'll see.
Aren't Uncharted and The Last of Us unlocked up to 120, not just 60? I think they would do 80-90fps on the Pro.
 

PeteBull

Member
You guys are all wrong… Sony will unlock the true potential of all PS5s by unlocking the "stacked" extra chipset. Just like misterxmedia always said, only Sony stole his idea.
Lol, I remember that FUD back in the early Xbox One/Kinect times, that guy was full-on delulu ;D
 

DenchDeckard

Moderated wildly
RAM prices increasing by 15 to 20 percent in Q1 2024 kinda sucks. But it was crazy low in 2023. Maybe it's just a short-term thing.
 
Last edited:

Zathalus

Member
No, that side note of the discussion was entirely about AMD RDNA2; the Nvidia hardware comparison was a different context and part of the discussion.

I was making the side-point that unlike all other RDNA2 based cards the PS5 doesn't block texture unit access when using the BVH accelerators. Cerny's words I quoted convey that shaders would run in parallel to BVH tests. A shader without access to texture samplers would be a very limited use case for shaders running in parallel, so it is clear the PS5 doesn't block shader execution while BVH tests are running.
I would assume it would be the same as any other RDNA2 card. I don't recall the PS5 having any meaningful RT advantages over XSX or PC RDNA2 cards.
 

winjer

Gold Member
Should be on a "per CU" basis, otherwise the spec is meaningless and wouldn't gel with the idea that the PS5 Pro is specifically focusing on accelerated RT.

[image: jTgb3jQ.jpg]

That presentation is very misleading on the RT part.
The performance difference in games between RDNA2 and RDNA3 is almost negligible.

Just look at the 6650XT and the 7600. Both have 32 CUs and similar clocks.
But in RT there is only a 2% performance improvement.

[chart: relative-performance-rt-1920-1080.png]
 

Mr.Phoenix

Member
That presentation is very misleading on the RT part.
The performance difference in games between RDNA2 and RDNA3 is almost negligible.

Just look at the 6650XT and the 7600. Both have 32 CUs and similar clocks.
But in RT there is only a 2% performance improvement.

[chart: relative-performance-rt-1920-1080.png]
I think we are overcomplicating this one. You are right that that table is misleading.

Alan Wake - Non RT and (RT) @1440p
7900XTX - 94fps (44fps)
7900XT - 82fps (39fps)
4070 - 62fps (50fps)
7800XT - 62fps (29fps)
6800XT - 57fps (26fps)
7700XT - 53fps (26fps)

The simple fact of the matter is that AMD RT (be it RDNA2 or RDNA3) significantly sucks compared to Nvidia. This has been attributed to the lack of BVH acceleration on AMD GPUs. This is something RDNA4 has been primarily earmarked to fix.

As it stands right now, in RT-supported engines, we can see a consistent drop of up to 50% in fps on the AMD cards when comparing RT off vs. RT on, but only a 20% drop in fps on Nvidia.
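A quick way to sanity-check that claim against the Alan Wake numbers quoted above: the RT cost is the share of raster fps lost once RT is enabled.

```python
# RT cost = fraction of raster fps lost when RT is switched on.
# Figures are the 1440p Alan Wake numbers quoted above.
def rt_cost(raster_fps: float, rt_fps: float) -> float:
    return 1 - rt_fps / raster_fps

print(f"7900XTX: {rt_cost(94, 44):.0%} drop")  # AMD flagship, ~53%
print(f"4070:    {rt_cost(62, 50):.0%} drop")  # Nvidia, ~19%
```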
 

PaintTinJr

Member
I would assume it would be the same as any other RDNA2 card. I don't recall the PS5 having any meaningful RT advantages over XSX or PC RDNA2 cards.
Which is exactly why, and for no other reason, Cerny dropped in that rather specific technical aspect about BVH testing and shaders working in parallel. It didn't need to be in the presentation other than to communicate, to technical viewers who aren't privy to the SDK, that this isn't just RDNA2, but a custom geometry engine with an RDNA2 basis.

Your two responses have us going around in circles, because you didn't follow the original side-note comment, didn't understand the comment by Cerny, and then just restated the same wrong claim as the other poster.

The absence of meaningful differences you talk of - in largely cross-gen games - tells us nothing about the hardware, and doesn't even tell us anything about the software. In a typical scenario where RDNA2 BVH tests block shaders - via texture unit access being blocked - developers will optimise around those issues. On a GPU with significantly more CUs, they could offset the texture access onto other groups, effectively lowering their effective CU count through those bottlenecks, or just not fully texture and have sections look like a texturing bug, or do less coherent lighting through bottlenecked sections by dropping the BVH workload temporarily.

DF spent 2 years feigning surprise at PS5 results and making excuses for all sorts of texture LoD issues, hitching and lighting errors on the XSX, IIRC. So maybe these issues are all there and just hand-waved away as something else. And on the PC side, maybe the way PS5 RT is in line with expected Nvidia performance, while AMD RDNA2 GPUs IMO have felt like they underperform the minute RT is used, is a sign of this too. But even if there is no multi-platform cross-gen game proof, that doesn't contradict Cerny's words about the engineered hardware capabilities.
 

Little Mac

Member
[GIF: Pretend Season 2, Outlander]


No way it's over $600. Also digital slim probably drops to $400 when Pro launches. Disc Slim vanishes once COD/SM2 bundles stock dry up. Disc Drive Add-on stays the same price.
 

PeteBull

Member
[GIF: Pretend Season 2, Outlander]


No way it's over $600. Also digital slim probably drops to $400 when Pro launches. Disc Slim vanishes once COD/SM2 bundles stock dry up. Disc Drive Add-on stays the same price.
Yup, that sounds like a realistic prediction. That way Sony has it covered on both sides, vs. XSS and XSX; both machines are a bit more expensive but offer a whole lot more, and obviously even a casual will recognise it's an amazing deal.
They won't price this machine - an upclocked Zen 2 plus a downvolted/downclocked 7800 XT combo - above a $600 MSRP; it will already be profitable, or close to profitable (with one game), at that price point.


We got leaked data on the number of PS4 Pros in the wild / their sales: barely over 14 million total, which makes it around 20% of all PS4s sold since its launch.
Makes sense, too, that about 1/5th of players are hardcore/tech-savvy enough to care for a bit more expensive but much beefier machine, spec-wise. It also means Sony has to prepare the PS5 Pro to sell around 4 million units yearly after its 2024 launch until the start of next gen - aka the PS6 launch, holidays 2028. Four years of roughly 4M units each = a 16M unit cap; that's probably what we can expect from the PS5 Pro if everything goes smoothly and we get plenty of good games / high-quality exclusives.
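That back-of-the-envelope projection is just arithmetic on the poster's assumptions (a 4M/year run rate from a 2024 launch to a holiday-2028 PS6):

```python
# Poster's assumptions, not confirmed figures.
ps4_pro_sales = 14_000_000  # leaked lifetime PS4 Pro units
yearly_rate = 4_000_000     # assumed PS5 Pro run rate
years = 4                   # 2024 launch -> holiday 2028 (PS6)
print(yearly_rate * years)  # 16000000 lifetime estimate
```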
 
All the info we have about RT on RDNA4 (8700 XT, 64 CUs): from 10% up to 50% faster than RDNA3, depending on the game/application, but RDNA4 has different WGPs compared to RDNA 3/3.5.
RDNA4 WGPs should be about the same as RDNA3.5's (ignoring the RT traversal stuff). The 64 CU base architecture (with 2 shader engines) of RDNA4 will be featured first in RDNA3.5. The RDNA3.5 shader engine layout will be a sizeable break from RDNA3, in order to work properly with 32 CUs per SE (instead of 20 CUs per SE in RDNA3).
 
Last edited:

Zathalus

Member
Which is exactly why, and for no other reason, Cerny dropped in that rather specific technical aspect about BVH testing and shaders working in parallel. It didn't need to be in the presentation other than to communicate, to technical viewers who aren't privy to the SDK, that this isn't just RDNA2, but a custom geometry engine with an RDNA2 basis.

Your two responses have us going around in circles, because you didn't follow the original side-note comment, didn't understand the comment by Cerny, and then just restated the same wrong claim as the other poster.

The absence of meaningful differences you talk of - in largely cross-gen games - tells us nothing about the hardware, and doesn't even tell us anything about the software. In a typical scenario where RDNA2 BVH tests block shaders - via texture unit access being blocked - developers will optimise around those issues. On a GPU with significantly more CUs, they could offset the texture access onto other groups, effectively lowering their effective CU count through those bottlenecks, or just not fully texture and have sections look like a texturing bug, or do less coherent lighting through bottlenecked sections by dropping the BVH workload temporarily.

DF spent 2 years feigning surprise at PS5 results and making excuses for all sorts of texture LoD issues, hitching and lighting errors on the XSX, IIRC. So maybe these issues are all there and just hand-waved away as something else. And on the PC side, maybe the way PS5 RT is in line with expected Nvidia performance, while AMD RDNA2 GPUs IMO have felt like they underperform the minute RT is used, is a sign of this too. But even if there is no multi-platform cross-gen game proof, that doesn't contradict Cerny's words about the engineered hardware capabilities.
Even non-cross-gen RT games offer no meaningful performance difference on the PS5 compared to the XSX or PC RDNA2 cards. The latest (and heaviest) RT game on console (Avatar) has a slightly higher resolution on XSX, for example.

Look you can speculate and bring up Mark Cerny all you want, but unless there are any meaningful results of RT games performing much better on the PS5 it amounts to basically nothing. I'm not claiming that the PS5 RT setup is or is not different compared to regular RDNA 2, just that it doesn't amount to any meaningful performance difference which either means it's the exact same or the hardware difference has no real impact on performance.

The majority of the PS5's wins against the XSX don't involve RT at all, and come down to either API differences or the game favouring the hardware advantages (fill rate) that the PS5 has over the XSX.
 

PaintTinJr

Member
Even non-cross-gen RT games offer no meaningful performance difference on the PS5 compared to the XSX or PC RDNA2 cards. The latest (and heaviest) RT game on console (Avatar) has a slightly higher resolution on XSX, for example.

Look you can speculate and bring up Mark Cerny all you want, but unless there are any meaningful results of RT games performing much better on the PS5 it amounts to basically nothing. I'm not claiming that the PS5 RT setup is or is not different compared to regular RDNA 2, just that it doesn't amount to any meaningful performance difference which either means it's the exact same or the hardware difference has no real impact on performance.

The majority of the PS5's wins against the XSX don't involve RT at all, and come down to either API differences or the game favouring the hardware advantages (fill rate) that the PS5 has over the XSX.
Why continue to move the goalposts, when your first response was to defend your beloved Nvidia, which wasn't even part of the conversation I was having with the other poster, nor under threat in it?

Performance in actual multiplatform games (or any games) was never part of the side-note point I was making in a wider theoretical discussion about how AMD's silicon compares by design to Nvidia's (and Intel's) for raster and RT in FP16/FP32 specs, and you using multi-platform games as some proof argument is just a pivot. IMO, the main reason people take such interest in PlayStation hardware capabilities is the new exclusives that will use the hardware and won't necessarily have a PC port. So your opinion about comparing benchmarks, whether I think it's a garbage take or not, still isn't relevant to the topic you interjected into, clearly without bothering to read the discussion.
 

Zathalus

Member
Why continue to move the goalposts, when your first response was to defend your beloved Nvidia, which wasn't even part of the conversation I was having with the other poster, nor under threat in it?

Performance in actual multiplatform games (or any games) was never part of the side-note point I was making in a wider theoretical discussion about how AMD's silicon compares by design to Nvidia's (and Intel's) for raster and RT in FP16/FP32 specs, and you using multi-platform games as some proof argument is just a pivot. IMO, the main reason people take such interest in PlayStation hardware capabilities is the new exclusives that will use the hardware and won't necessarily have a PC port. So your opinion about comparing benchmarks, whether I think it's a garbage take or not, still isn't relevant to the topic you interjected into, clearly without bothering to read the discussion.
Dude, no need to get so worked up. I clearly stated that I thought you were talking about the RT capabilities of the RT cores and not the texture issue in the intersection engine. Once you corrected me on what you meant, I shared my opinion on that matter: that the RT capabilities of the PS5 appear to be in line with regular RDNA2, which is a 100% factual statement.

Maybe the PS5 has some super-secret RT capabilities that regular RDNA2 doesn't have and that only PS developers are capable of using; it's just that nothing so far indicates this is the case, even looking at PS5 exclusives with RT (Returnal, Ratchet & Clank, Miles Morales).
 
Last edited:

DJ12

Member
That presentation is very misleading on the RT part.
The performance difference in games between RDNA2 and RDNA3 is almost negligible.

Just look at the 6650XT and the 7600. Both have 32 CUs and similar clocks.
But in RT there is only a 2% performance improvement.

[chart: relative-performance-rt-1920-1080.png]
Which is probably why Sony aren't leaving RT up to AMD and will have a custom solution, if the Pro ever releases.
 

Mr.Phoenix

Member
I think they are more involved with the actual chip design than you give them credit for in these semi-custom solutions.
They are only as involved as it takes for them to pick and choose what they want from the technologies AMD has. Out of everything AMD has, Sony can literally pick any of it. They want more AI units? They can. Do they want an Infinity Cache? 3D cache, a 320-bit bus, two RT cores in each CU... etc. Whatever they want, as long as it's based on a component AMD has already made, they can use it. They can even choose what those components do.
 

winjer

Gold Member
They are only as involved as it takes for them to pick and choose what they want from the technologies AMD has. Out of everything AMD has, Sony can literally pick any of it. They want more AI units? They can. Do they want an Infinity Cache? 3D cache, a 320-bit bus, two RT cores in each CU... etc. Whatever they want, as long as it's based on a component AMD has already made, they can use it. They can even choose what those components do.

Sony can also choose to have new features.
Although there isn't concrete evidence, there is a wide belief that AMD's async compute, introduced with GCN, was a request from Sony.
So although Sony didn't create the tech itself, they were probably instrumental in pushing AMD to implement this feature.
 

Panajev2001a

GAF's Pleasant Genius
They are only as involved as it takes for them to pick and choose what they want from the technologies AMD has. Out of everything AMD has, Sony can literally pick any of it. They want more AI units? They can. Do they want an Infinity Cache? 3D cache, a 320-bit bus, two RT cores in each CU... etc. Whatever they want, as long as it's based on a component AMD has already made, they can use it. They can even choose what those components do.
I think more than that, see the FPU of the Zen 2 cores.

They have bona fide chip designers at Sony (CPU and graphics chip designers) and plenty of people who understand graphics, plus low-level computing experts. I think you are underestimating them a tad.
Larrabee - the overall solution and ISA (Intel AVX-512 and the new AVX instruction sets are a direct child of that design) - was led by Tom Forsyth and Michael Abrash, who do not come from a pure silicon chip background either.
 
Last edited:

Mr.Phoenix

Member
I think more than that, see the FPU of the Zen 2 cores.

They have bona fide chip designers at Sony (CPU and graphics chip designers) and plenty of people who understand graphics, plus low-level computing experts. I think you are underestimating them a tad.
Larrabee - the overall solution and ISA (Intel AVX-512 and the new AVX instruction sets are a direct child of that design) - was led by Tom Forsyth and Michael Abrash, who do not come from a pure silicon chip background either.
Fair enough... I know Sony has engineers in the chip design field. I mean, they did make the Cell, Bravia XR, and their imaging processors. And while the FPU in the PS5 is smaller than those in off-the-shelf Zen 2 CPUs, we can't say that AMD didn't have a spec of Zen 2 with smaller FPUs, to be used in some capacity, that was just never publicized.

So while I will concede that I can't just flat-out say Sony will not or does not do this or that, I will stick to my guns and say 95%+ of what goes into any APU they get from AMD comprises off-the-shelf components from AMD, and maybe 5% can be the result of some Sony-AMD customization.

Funny enough, I don't think it's a coincidence that this whole "Pro" console concept started last gen, which was the first time Sony was using an APU wholly designed by AMD. Being able to pick and choose features to put in an APU, when the overall design of those architectures is already done by someone else, probably makes it easy enough for Sony to iterate on their hardware like they do with the Pro consoles.
 

FireFly

Member
That presentation is very misleading on the RT part.
The performance difference in games between RDNA2 and RDNA3 is almost negligible.

Just look at the 6650XT and the 7600. Both have 32 CUs and similar clocks.
But in RT there is only a 2% performance improvement.
Navi 33 has a cut down register file which may be affecting performance. We do see some modest gains with Navi 31 with RT set to max.

 
Yup, that sounds like a realistic prediction. That way Sony has it covered on both sides, vs. XSS and XSX; both machines are a bit more expensive but offer a whole lot more, and obviously even a casual will recognise it's an amazing deal.
They won't price this machine - an upclocked Zen 2 plus a downvolted/downclocked 7800 XT combo - above a $600 MSRP; it will already be profitable, or close to profitable (with one game), at that price point.


We got leaked data on the number of PS4 Pros in the wild / their sales: barely over 14 million total, which makes it around 20% of all PS4s sold since its launch.
Makes sense, too, that about 1/5th of players are hardcore/tech-savvy enough to care for a bit more expensive but much beefier machine, spec-wise. It also means Sony has to prepare the PS5 Pro to sell around 4 million units yearly after its 2024 launch until the start of next gen - aka the PS6 launch, holidays 2028. Four years of roughly 4M units each = a 16M unit cap; that's probably what we can expect from the PS5 Pro if everything goes smoothly and we get plenty of good games / high-quality exclusives.
I wanted our first $700 PlayStation with this, and for them to use Zen 5. I think the PS5 Pro will sell worse than the 4 Pro if the specs aren't good enough (I think that matters more than the price this time).
 

winjer

Gold Member
Navi 33 has a cut down register file which may be affecting performance. We do see some modest gains with Navi 31 with RT set to max.


Yes, that is probably the reason, since a 7900XTX can keep a lot more work waves in execution than a 6900XT.
But it's still far from the 50% performance improvement AMD claimed. Maybe there is some synthetic RT workload that scales like that, but most games are very far from such an improvement.

RDNA 3's SIMDs have a 192 KB vector register file, compared to 128 KB on RDNA 2. That potentially lets RDNA 3 keep more waves in flight, especially if each shader wants to use a lot of registers. Raytracing kernels apparently use a lot of vector registers, so RDNA 3 gains a clear occupancy advantage. Cyberpunk 2077's path tracing call is especially hungry for vector registers. For comparison, the normal raytracing mode uses far fewer registers and enjoys higher occupancy. It's also invoked over a grid with half of the screen's vertical resolution, probably to make the load lighter.

Scenario | Call | VGPR usage | Occupancy (per SIMD)
RDNA 2, Path Tracing | DispatchRays<Unified>(1920, 1080, 1) | 253 used, 256 allocated | 32 KB per wave, 4 waves
RDNA 2, RT Ultra | DispatchRays<Unified>(960, 1080, 1) | 101 used, 112 allocated | 14 KB per wave, 9 waves
RDNA 3, Path Tracing | DispatchRays<Unified>(1920, 1080, 1) | 254 used, 264 allocated | 33.7 KB per wave, 5 waves
RDNA 3, RT Psycho | DispatchRays<Unified>(960, 1080, 1) | 99 used, 120 allocated | 15.3 KB per wave, 12 waves
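The wave counts in that table follow directly from the register file sizes mentioned above. A minimal sketch, assuming occupancy is limited only by VGPRs (real schedulers also cap it on LDS and wave slots):

```python
# Waves in flight per SIMD, limited only by the vector register file (VRF).
# VRF sizes: 128 KB on RDNA 2 vs 192 KB on RDNA 3, per the text above.
VRF_KB = {"RDNA2": 128, "RDNA3": 192}

def waves_per_simd(arch: str, kb_per_wave: float) -> int:
    # Floor division: a partial wave's registers don't fit.
    return int(VRF_KB[arch] // kb_per_wave)

print(waves_per_simd("RDNA2", 32.0))   # 4  (path tracing)
print(waves_per_simd("RDNA2", 14.0))   # 9  (RT Ultra)
print(waves_per_simd("RDNA3", 33.7))   # 5  (path tracing)
print(waves_per_simd("RDNA3", 15.3))   # 12 (RT Psycho)
```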
 

FireFly

Member
Yes, that is probably the reason, since a 7900XTX can keep a lot more work waves in execution than a 6900XT.
But it's still far from the 50% performance improvement AMD claimed. Maybe there is some synthetic RT workload that scales like that, but most games are very far from such an improvement.



Scenario | Call | VGPR usage | Occupancy (per SIMD)
RDNA 2, Path Tracing | DispatchRays<Unified>(1920, 1080, 1) | 253 used, 256 allocated | 32 KB per wave, 4 waves
RDNA 2, RT Ultra | DispatchRays<Unified>(960, 1080, 1) | 101 used, 112 allocated | 14 KB per wave, 9 waves
RDNA 3, Path Tracing | DispatchRays<Unified>(1920, 1080, 1) | 254 used, 264 allocated | 33.7 KB per wave, 5 waves
RDNA 3, RT Psycho | DispatchRays<Unified>(960, 1080, 1) | 99 used, 120 allocated | 15.3 KB per wave, 12 waves
In the footnotes to the slides, they admit that the 50% is in a synthetic benchmark.

"Based on a November 2022 AMD internal performance lab measurement of rays with indirect calls on RX 7900 XTX GPU vs. RX 6900 XT GPU. RX-808"

 

Mr.Phoenix

Member
I wanted our first 700 PlayStation with this and for them to use zen 5. I think the ps5 pro will sell worse than the 4 pro if the specs aren’t good enough (I think that matters more than the price this time)
Naaa man, I think you are delusional if you think specs have EVER been the primary driving factor in why consoles sell. By that kind of reasoning, the XSX should be handily outselling the PS5. The people who care about specs the way you're insinuating build PCs. In the console space, the vast majority of buyers of a Pro console would be content with 60fps where it was 30fps, and slightly better visuals.

And let's look at what a PS5pro BOM could look like.

APU - 300mm², 5nm - $150
RAM - 16 GB @ 18 Gbps - $70
SSD - 1 TB - $35
PCB + other ICs + misc - $60 (+$40)
Cooling - $40
PSU - $20
Controller - $30
Assembly + packaging - $25

That comes up to $430; that's the ballpark we see the PS5 Pro BOM in. It can be $470 if you add the $40 extra misc, in case they decide to do things like use more RAM, a bigger SSD, or a slightly bigger APU. The point, though, is that at no point is Sony going to sit down in their offices and build a console that costs them anything more than $499 to make. And if you look at those prices, it's hard to make a case for them even needing to do that.
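Totalling the line items above (all of them the poster's estimates, not confirmed figures) gives $430 for the base case:

```python
# BOM line items as listed above - guesses, not confirmed figures.
bom = {
    "APU (300mm2, 5nm)": 150,
    "RAM (16 GB @ 18 Gbps)": 70,
    "SSD (1 TB)": 35,
    "PCB + other ICs + misc": 60,
    "Cooling": 40,
    "PSU": 20,
    "Controller": 30,
    "Assembly + packaging": 25,
}
base = sum(bom.values())
print(base)       # 430
print(base + 40)  # 470 with the extra misc headroom
```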
 

shamoomoo

Member
In the footnotes to the slides, they admit that the 50% is in a synthetic benchmark.

"Based on a November 2022 AMD internal performance lab measurement of rays with indirect calls on RX 7900 XTX GPU vs. RX 6900 XT GPU. RX-808"

Unless I'm blind, where exactly does it state that in the footnotes? Because I didn't see it.


Here are the footnotes; unless I overlooked something, I'm not seeing what you are saying:

  1. Based on AMD labs testing in November 2022, on a system configured with a Radeon RX 7900 XTX GPU, driver 31.0.14000.24040, AMD Ryzen 9 5900X CPU, 32GB DDR4-3200MHz, ROG CROSSHAIR VIII HERO (WI-FI) motherboard, set to 300W TBP, on Win10 Pro, versus a similarly configured test system with a 300W Radeon 6900 XT GPU and driver 31.0.12019.16007, measuring FPS performance in select titles. Performance per watt is calculated using the manufacturers’ stated total board power (TBP) of the AMD GPUs listed herein. System manufacturers may vary configurations, yielding different results. RX-816.
  2. Based on AMD internal measurements, November 2022, comparing the Radeon RX 7900 XTX at 2.505 GHz boost clock with 96 CUs issuing 2X the Bfloat16 math operations per clocks vs. the RX 6900 XT GPU at 2.25 GHz boost clock and 80 CUs issue 1X the Bfloat16 math operations per clock. RX-821.
  3. Based on a November 2022 AMD internal performance lab measurement of rays with indirect calls on RX 7900 XTX GPU vs. RX 6900 XT GPU. RX-808
  4. Video codec acceleration (including at least the HEVC (H.265), H.264, VP9, and AV1 codecs) is subject to and not operable without inclusion/installation of compatible media players. GD-176.
 
Last edited:

FireFly

Member
Unless I'm blind, where exactly does it state that in the footnotes? Because I didn't see it.


Here are the footnotes; unless I overlooked something, I'm not seeing what you are saying:

  1. Based on AMD labs testing in November 2022, on a system configured with a Radeon RX 7900 XTX GPU, driver 31.0.14000.24040, AMD Ryzen 9 5900X CPU, 32GB DDR4-3200MHz, ROG CROSSHAIR VIII HERO (WI-FI) motherboard, set to 300W TBP, on Win10 Pro, versus a similarly configured test system with a 300W Radeon 6900 XT GPU and driver 31.0.12019.16007, measuring FPS performance in select titles. Performance per watt is calculated using the manufacturers’ stated total board power (TBP) of the AMD GPUs listed herein. System manufacturers may vary configurations, yielding different results. RX-816.
  2. Based on AMD internal measurements, November 2022, comparing the Radeon RX 7900 XTX at 2.505 GHz boost clock with 96 CUs issuing 2X the Bfloat16 math operations per clocks vs. the RX 6900 XT GPU at 2.25 GHz boost clock and 80 CUs issue 1X the Bfloat16 math operations per clock. RX-821.
  3. Based on a November 2022 AMD internal performance lab measurement of rays with indirect calls on RX 7900 XTX GPU vs. RX 6900 XT GPU. RX-808
  4. Video codec acceleration (including at least the HEVC (H.265), H.264, VP9, and AV1 codecs) is subject to and not operable without inclusion/installation of compatible media players. GD-176.
It's point 3.) in your list. The "Up to 50% more raytracing performance per CU" claim on the website has a superscript 3 next to it.
 
Naaa man, I think you are delusional if you think specs have EVER been the primary driving factor in why consoles sell. By that kind of reasoning, the XSX should be handily outselling the PS5. The people who care about specs the way you're insinuating build PCs. In the console space, the vast majority of buyers of a Pro console would be content with 60fps where it was 30fps, and slightly better visuals.

And let's look at what a PS5pro BOM could look like.

APU - 300mm², 5nm - $150
RAM - 16 GB @ 18 Gbps - $70
SSD - 1 TB - $35
PCB + other ICs + misc - $60 (+$40)
Cooling - $40
PSU - $20
Controller - $30
Assembly + packaging - $25

That comes up to $430; that's the ballpark we see the PS5 Pro BOM in. It can be $470 if you add the $40 extra misc, in case they decide to do things like use more RAM, a bigger SSD, or a slightly bigger APU. The point, though, is that at no point is Sony going to sit down in their offices and build a console that costs them anything more than $499 to make. And if you look at those prices, it's hard to make a case for them even needing to do that.
The APU is gonna be on 4nm. I also just don't buy the Zen 2 leaks, but I won't go into this again. $700 may be a stretch, but I don't think it's completely impossible as an upper limit.
 

Mr.Phoenix

Member
The APU is gonna be on 4nm. I also just don't buy the Zen 2 leaks, but I won't go into this again. $700 may be a stretch, but I don't think it's completely impossible as an upper limit.
Sony is not going to make something that may not even cost $500 to make, and then sell it for $700. Even if you went 4nm instead of 5nm, that may just take that BOM from $480 to maybe $500.

I think the real issue with you guys talking about $600+ and $700 consoles is twofold: 1) you haven't ever taken a cursory look into component pricing at the OEM scale, and 2) you don't realize that when you see AMD/Nvidia price a GPU at $1,000, that GPU costs them less than or around half that amount to make.
 

Go_Ly_Dow

Member
Sony is not going to make something that may not even cost $500 to make, and then sell it for $700. Even if you went 4nm instead of 5nm, that may just take that BOM from $480 to maybe $500.

I think the real issue with you guys talking about $600+ and $700 consoles is twofold: 1) you haven't ever taken a cursory look into component pricing at the OEM scale, and 2) you don't realize that when you see AMD/Nvidia price a GPU at $1,000, that GPU costs them less than or around half that amount to make.

I take it the BOM estimate doesn't factor in manufacturing, packaging and distribution costs? Or does it?
 
Last edited:

Little Mac

Member
Sorry if this is off-topic, but I figured you spec specialists would know. I just bought a Western Digital 2TB SN850X drive for my new Slim, and I'm wondering if it's better to install games to the console's internal storage or to the new drive. Which is fastest in regards to booting up games?

Also what are the odds that Sony keeps the expandable storage bay in the new rumored Pro?
 
Last edited:

Forth

Neophyte
Sorry if this is off-topic, but I figured you spec specialists would know. I just bought a Western Digital 2TB SN850X drive for my new Slim, and I'm wondering if it's better to install games to the console's internal storage or to the new drive. Which is fastest in regards to booting up games?

Also what are the odds that Sony keeps the expandable storage bay in the new rumored Pro?
I don't know if this is true, but I store my games on the exact same drive in my Slim, in the hope that it saves wear and tear on the internal storage, simply because that is much harder to replace. Also, with a heatsink fitted on my WD drive, I can keep its temperature down.
 
Sorry if this is off-topic, but I figured you spec specialists would know. I just bought a Western Digital 2TB SN850X drive for my new Slim, and I'm wondering if it's better to install games to the console's internal storage or to the new drive. Which is fastest in regards to booting up games?

Also what are the odds that Sony keeps the expandable storage bay in the new rumored Pro?
I'd leave it on the M.2; you can transfer it to the Pro as well without having to redownload everything.
 