
PS5 Die Shot has been revealed

Does anyone know what these are?

ztZD4hO.png
 

MonarchJT

Banned
This is really not true - the equation is very complex.

The performance gain from a frequency increase is linear for the individual component. Going wider has increasingly diminishing returns (each additional CU adds less than the one before it).

However, at some point a frequency increase makes power and thermals rise exponentially and manufacturing yields go down. In addition, unless the other components of the system (CPU, memory etc.) match the frequency increase of the individual component, you get synchronisation problems (which hurt performance even more).

Wider is not always a good thing. You can see this in SLI data: in games that supported it, adding +100% CUs gave roughly a 30% increase in performance. Clear diminishing returns.

In a controlled environment such as a console, frequency is probably your most powerful tool as long as you can keep power and thermals in check.
Yes, we know all about clocks and what advantages they give. I'm from a generation where we overclocked everything. You should go to AMD and Nvidia and explain all of this, because their past, current and future projects are based on ever bigger parallelism (same goes for CPUs). Probably after the 36-CU PS5 and this post they will change their plans.
Clocks have their own advantages... being "wide" just brings a lot more of them. There's nothing to be upset about.
 

TheContact

Member
Honestly, I don't really know. At first I thought they could be some very big placeholder structures, but they're clearly too big and far fewer in number than usual. Big thermal vias? (big connections from the substrate, using all metal layers, up to the package)

almost looks like small processors, maybe for shaders or other commands
 

Elog

Member
Yes, we know all about clocks and what advantages they give. I'm from a generation where we overclocked everything. You should go to AMD and Nvidia and explain all of this, because their past, current and future projects are based on ever bigger parallelism (same goes for CPUs). Probably after the 36-CU PS5 and this post they will change their plans.
Clocks have their own advantages... being "wide" just brings a lot more of them. There's nothing to be upset about.
Why do you even write this? If you look at GPUs over the last 10 years there has been a clear increase in both frequency and parallelism - they have gone hand in hand. Frequency, however, only goes so far before yields become abysmal. If you look at cards with similar frequencies but different core counts you also see a clear drop in benefit - compare the 3080 and 3090 as an example: +20% cores/transistors vs. a +10% FPS increase - clear diminishing returns, and at a massive cost.
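For what it's worth, here is a minimal sketch of that arithmetic (purely illustrative Python, using only the rough figures quoted in this thread: ~+20% cores for ~+10% FPS on the 3090 vs the 3080, and the ~30% gain from doubling compute under SLI):

```python
# Rough scaling arithmetic, using the approximate figures quoted above.
# "Efficiency" here is simply: how much of the extra hardware showed up as FPS.

def scaling_efficiency(hw_gain, perf_gain):
    """Fraction of a relative hardware increase that turned into performance."""
    return perf_gain / hw_gain

# 3090 vs 3080 (per the post): ~+20% cores/transistors for ~+10% FPS
print(f"3090 vs 3080: {scaling_efficiency(0.20, 0.10):.0%} efficient")   # ~50%

# SLI (per the earlier post): +100% compute for ~+30% performance
print(f"SLI doubling: {scaling_efficiency(1.00, 0.30):.0%} efficient")   # ~30%

# A clock bump, by contrast, scales GPU-bound performance roughly 1:1
# (power and thermals permitting), i.e. ~100% "efficiency" in these terms.
```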
 

MonarchJT

Banned
Why do you even write this? If you look at GPUs over the last 10 years there has been a clear increase in both frequency and parallelism - they have gone hand in hand. Frequency, however, only goes so far before yields become abysmal. If you look at cards with similar frequencies but different core counts you also see a clear drop in benefit - compare the 3080 and 3090 as an example: +20% cores/transistors vs. a +10% FPS increase - clear diminishing returns, and at a massive cost.
The main discussion you felt compelled to answer started with a user giggling at the XSX's teraflop advantage in comparison to the advantages brought by the clock.
What you say is correct: clock and parallelism should in fact go hand in hand. That's why you have a 2017 GCN Xbox One X at 1172 MHz with 40 CUs and an RDNA 2 Series X at 1825 MHz with 52 CUs, versus a 2016 GCN PS4 Pro at 911 MHz with 36 CUs and an RDNA 2 PS5 at 2233 MHz (variable) with the same 36 CUs.
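As an aside, here is a quick sketch of how those clock/CU combinations turn into the familiar FP32 TFLOPS numbers (standard AMD math - 64 shaders per CU, 2 FLOPs per shader per clock - so a rough, peak-rate comparison only):

```python
# Peak FP32 throughput: CUs x 64 shaders x 2 FLOPs/clock x clock speed.
def fp32_tflops(cus, clock_mhz):
    return cus * 64 * 2 * clock_mhz / 1e6

consoles = {
    "Xbox One X (GCN)":   (40, 1172),
    "PS4 Pro (GCN)":      (36, 911),
    "Series X (RDNA 2)":  (52, 1825),
    "PS5 (RDNA 2, peak)": (36, 2233),  # variable clock, peak value
}

for name, (cus, mhz) in consoles.items():
    print(f"{name}: {fp32_tflops(cus, mhz):.2f} TFLOPS")
# ~6.0, ~4.2, ~12.1 and ~10.3 TFLOPS respectively
```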
 

Lysandros

Member
So where are the ACE/HWS blocks located, within the central GCP+GE cluster? Maybe they contribute to the size difference, especially if PS5 has more ACEs.
 

ethomaz

Banned
So where are the ACE/HWS blocks located, within the central GCP+GE cluster? Maybe they contribute to the size difference, especially if PS5 has more ACEs.
AMD changed how the ACEs work, so now you have only one that does more than the 8 in the past.
And yes, they are together with the Command Processor.
 

ToTTenTranz

Banned
Yeah did a little more thinking and looking at a Navi 21 GPU reference again...99% sure PS5 doesn't have Infinity Cache.

Agreed. There needs to be some form of glue / Infinity Fabric. After taking that into account, the area available for substantial amounts of LLC for the GPU would be almost null.
The space between the GDDR6 PHYs seems thicker than usual, but it could be just Infinity Fabric that AMD seems to put "wherever there's some space left".



As for the space between the CPU and GPU, I wonder if there's enough space for AMD/Sony to have implemented some sort of ring bus between the CCXs and that would justify the performance advantage at high framerates.
If they did and it has a similar behavior to what Intel has on their Low-Core-Count server CPUs (and most recently Comet Lake 8-core client), then it wouldn't be as good as Zen3's truly unified cache but it would be substantially better than Zen2.


Here are the access times on Zen2 with separate L3 between CCXs:

2WK8uQP.png



On Zen3 with unified L3:
DLljKjr.png



On the 8-core Comet Lake with separate L3 on each core, connected by a ring bus:
TkDYDrP.png






This isn't a change that is very important if the goal is to make games that run great at 30 to 60FPS, but it makes a substantial difference when/if the CPU becomes a bottleneck at >100 FPS.
Microsoft wouldn't be very interested in this optimization but Sony could be, due to PSVR2.
It would also give some credence to the leaksters claiming "unified L3" on the PS5 CPU, since developers could assume the L3 is unified after looking at Comet Lake-like access times.
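To make the topology point concrete, here is a toy sketch (the latency numbers are assumed purely for illustration, not taken from those charts) of why a split L3 produces the two-tier core-to-core pattern in the Zen 2 plot, while a unified L3 flattens it (a Comet Lake-style ring bus would sit somewhere between the two, as noted above):

```python
# Toy model: on Zen 2, an 8-core part is two 4-core CCXs, each with its own L3.
# Core pairs inside a CCX communicate through the shared L3; pairs in different
# CCXs have to cross Infinity Fabric, which is much slower.
CCX_SIZE     = 4    # cores per CCX on Zen 2
SAME_CCX_NS  = 30   # assumed, illustrative intra-CCX latency
CROSS_CCX_NS = 85   # assumed, illustrative cross-CCX (fabric) latency

def zen2_like_latency(core_a, core_b):
    """Two-tier latency of a split-L3 design (Zen 2 style)."""
    if core_a == core_b:
        return 0
    same_ccx = (core_a // CCX_SIZE) == (core_b // CCX_SIZE)
    return SAME_CCX_NS if same_ccx else CROSS_CCX_NS

def unified_like_latency(core_a, core_b):
    """Single-tier latency of a unified-L3 design (Zen 3 style, illustrative)."""
    return 0 if core_a == core_b else SAME_CCX_NS

for a in range(8):
    print([zen2_like_latency(a, b) for b in range(8)])
```

And the frame-budget angle from the paragraph above: at 120 FPS the whole frame is ~8.3 ms versus ~33 ms at 30 FPS, so tens of extra nanoseconds on every cross-CCX hop eat a much bigger share of the budget.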
 
Honestly, I don't really know. At first I thought they could be some very big placeholder structures, but they're clearly too big and far fewer in number than usual. Big thermal vias? (big connections from the substrate, using all metal layers, up to the package)
Thought about sensors for frequencies, too

We have it also on the XsX and XsS dies; it seems to be used in the same cells. We have one on XsS, two on XsX, and... three on PS5. More redundancy? :D
8cPFXpo.jpg
That's a good catch. So one per Shader Array, but the PS5 also has one in the GPU frontend.
On the Series S it's only in the frontend, tho.

So something we should also find on regular RDNA GPUs!?
 

ToTTenTranz

Banned
Agreed. There needs to be some form of glue / Infinity Fabric. After taking that into account, the area available for substantial amounts of LLC for the GPU would be almost null.
The space between the GDDR6 PHYs seems thicker than usual, but it could be just Infinity Fabric that AMD seems to put "wherever there's some space left".



As for the space between the CPU and GPU, I wonder if there's enough space for AMD/Sony to have implemented some sort of ring bus between the CCXs and that would justify the performance advantage at high framerates.
If they did and it has a similar behavior to what Intel has on their Low-Core-Count server CPUs (and most recently Comet Lake 8-core client), then it wouldn't be as good as Zen3's truly unified cache but it would be substantially better than Zen2.


Here are the access times on Zen2 with separate L3 between CCXs:

2WK8uQP.png



On Zen3 with unified L3:
DLljKjr.png



On the 8-core Comet Lake with separate L3 on each core, connected by a ring bus:
TkDYDrP.png






This isn't a change that is very important if the goal is to make games that run great at 30 to 60FPS, but it makes a substantial difference when/if the CPU becomes a bottleneck at >100 FPS.
Microsoft wouldn't be very interested in this optimization but Sony could be, due to PSVR2.
It would also give some credence to the leaksters claiming "unified L3" on the PS5 CPU, since developers could assume the L3 is unified after looking at Comet Lake-like access times.
Please note: this is just speculation.
 

Lysandros

Member
AMD changed how the ACEs work, so now you have only one that does more than the 8 in the past.
And yes, they are together with the Command Processor.
So 4 ACEs + 1 HWS as in RDNA 1? I was thinking maybe PS5 could have more for BC compatibility reasons; I guess that's not needed then.
 
Xbox has RDNA2 because they waited until the end, because AMD finished this cutting-edge technology in May. That's why the Xbox developer kits were not ready until the last day, and because of that the first games were poorly optimised for Xbox Series.


---"In our quest to put gamers and developers first we chose to wait for the most advanced technology from our partners at AMD before finalizing our architecture."

Microsoft waited for AMD to finish RDNA2; Sony didn't.
Great, now wait for your frames. It seems like it didn't help; all they got was a console about the same as their main competition, at about the same retail price.

Which is fine by the way.
 

DJ12

Member
That's atrocious.

Edit: Sorry that was a mosquito flying past, my bad. :messenger_beaming: Seriously, my X1 VCR can be a lot louder than that. Some GPUs can really scream, if you are going to make a video make sure the mic picks the sound up.
Isn't it confirmed that the actual coil whine is coming from the power supply anyway, and has nothing to do with the APU?

Surprised the original post was allowed to stand, to be honest; I had a factual post removed yesterday as the mods didn't want to incite more console warring in here.

Fair play, I say, but the video is not even relevant to this topic.
 
Exactly. He literally says there that "RDNA2 includes hardware support for DIRECT X ray tracing"

So... PS5 by definition can't be this so-called "full RDNA2" because PlayStation would never ever use fucking DirectX.

Wow the PS5 is not a PC. Shocker.

It's dishonest marketing from Microsoft. "Full RDNA2 must include DirectX" ... sure.

PS5 has a custom RDNA2 based GPU. Sony said that. AMD said that. Everyone knows this.
Complete nonsense. Just because DirectX is the API Microsoft uses to expose their chip's ray tracing hardware to developers, that's no reason to claim PS5 by definition technically can't be full RDNA 2 on that sole basis alone.

The reason it can't be considered full RDNA 2 has absolutely nothing to do with DirectX. It's because it's missing multiple major new hardware features of RDNA 2: Mesh Shaders, Variable Rate Shading (Tier 2) and Sampler Feedback.

Did Microsoft building DX12 right into Xbox One X stop PS4 Pro from being full Polaris?

Microsoft says it the way they do because their ray tracing API is the most widely known and marketed API for ray tracing, so why not take advantage of their brand's awareness for Xbox?

PS5 not being DirectX has nothing to do with why they're missing those hardware features, unless one or more of those features are somehow totally Microsoft-owned intellectual property, in which case even an AMD semi-custom partner couldn't have them. I recall Microsoft continually stating that for Xbox Series X variable rate shading was their patented technique, but I never did get the reasoning behind them saying so. That said, this DirectX argument is a joke.
 

ethomaz

Banned
Complete nonsense. Just because DirectX is the API Microsoft uses to expose their chip's ray tracing hardware to developers, that's no reason to claim PS5 by definition technically can't be full RDNA 2 on that sole basis alone.

The reason it can't be considered full RDNA 2 has absolutely nothing to do with DirectX. It's because it's missing multiple major new hardware features of RDNA 2: Mesh Shaders, Variable Rate Shading (Tier 2) and Sampler Feedback.

Did Microsoft building DX12 right into Xbox One X stop PS4 Pro from being full Polaris?

Microsoft says it the way they do because their ray tracing API is the most widely known and marketed API for ray tracing, so why not take advantage of their brand's awareness for Xbox?

PS5 not being DirectX has nothing to do with why they're missing those hardware features, unless one or more of those features are somehow totally Microsoft-owned intellectual property, in which case even an AMD semi-custom partner couldn't have them. I recall Microsoft continually stating that for Xbox Series X variable rate shading was their patented technique, but I never did get the reasoning behind them saying so. That said, this DirectX argument is a joke.
Where is the proof that PS5 is lacking hardware to do Mesh Shaders, VRS and Sampler Feedback?
 
This is for you DJ12.
I highlighted the areas to see it better.
LhpXQDN.jpg


P.S. Before anyone comes at me for adding in Infinity Cache, this is all just speculation.
It might not be in the PS5 at all.

If infinity cache were in ANY console it would definitely be cut down. Microsoft told DF there was 76MB of SRAM in Series X. We have never identified it all, but since it hasn't actually been confirmed I'm just going to assume there is no infinity cache in Series X, but there's a lot of cache unaccounted for on the SoC.

YTycXTJ.jpg
 

sinnergy

Member
Where is the proof that PS5 is lacking hardware to do Mesh Shaders, VRS and Sampler Feedback?

Works both ways.

Where is the proof it has something equal? That's the mist Sony didn't clear...

At least MS fully disclosed almost everything.
 

ethomaz

Banned
Next question is how much CPU and GPU time it takes away... from other rendering budgets...

Where is the proof it has something equal? That's the mist Sony didn't clear...
Not just Sony... MS and AMD too.
People keep saying PS5 doesn't have the hardware to do these tasks, but it is still RDNA 2.
So what is the difference in silicon that makes these tasks possible on Series and RDNA 2 but not on PS5?

I mean, it should be easy to explain if we know what makes it a hardware feature.

What I see is a lot of people backing it because PS5 doesn't have DX12U, but that is not something that stops a hardware feature from working... you can do your own software implementation.

Second, people use that tweet where the guy says PS5 lacks one hardware feature from RDNA 2.
Well, which feature? People say 3 (Mesh Shaders, VRS and Sampler Feedback... BTW Sampler Feedback is fully software)... so which of these 3 is not in the PS5 APU?

To complicate things further... PS5's APU lacks hardware Infinity Cache... could that be the hardware feature the tweet hinted at? That would mean it can do all the other three that Xbox fans keep throwing around here.
 

sinnergy

Member
Not just Sony... MS and AMD too.
People keep saying PS5 doesn't have the hardware to do these tasks, but it is still RDNA 2.
So what is the difference in silicon that makes these tasks possible on Series and RDNA 2 but not on PS5?

I mean it should be easy to explain if we know what makes it a hardware feature.
That’s still on Sony... they need to clear it up, otherwise we have threads for years about this stuff 🤣
 

John Wick

Member
If infinity cache were in ANY console it would definitely be cut down. Microsoft told DF there was 76MB of SRAM in Series X. We have never identified it all, but since it hasn't actually been confirmed I'm just going to assume there is no infinity cache in Series X, but there's a lot of cache unaccounted for on the SoC.

YTycXTJ.jpg
Isn't that what Geordieimp has been saying?
 
Where is the proof that PS5 is lacking hardware to do Mesh Shaders, VRS and Sampler Feedback?

For me it's that none of it has been confirmed to date by Sony. Mark Cerny is a details-heavy guy. If VRS, Mesh Shaders and Sampler Feedback were there he would have said it. The moment he said Primitive Shaders I had a suspicion.

Metro's developers confirmed they are doing VRS for PS5 in software, i.e. in the shaders, like they did for PS4. DF's Alex has said PS5 doesn't have VRS, among other things DF has said regarding feature differences between the consoles, and the same goes for Mesh Shaders and Sampler Feedback (say what you will about DF, but their access to dev sources is legit and second to none).

Then it's Microsoft, backed up by AMD, saying that Xbox Series X|S is the only console with full hardware support for all RDNA 2 features AMD showcased at the RX 6000 reveal event. We've caught Microsoft being cute before, but I don't think this is one of those cases. AMD wouldn't have allowed such a statement were it not accurate. Microsoft has always known what their advantages are over PS5, which is even why they pointed out to DF that their GPU and CPU clocks are stable/fixed.

Not really making a judgment on what's better between fixed and variable, but clearly Microsoft saw it as an advantage over PS5. I feel each side from the get-go was pushing what they felt was their big advantage. Sony pushed their SSD I/O, Microsoft pushed their GPU and CPU power.
 

John Wick

Member
the response to Xbox Series X and PS5 being the same quality at the same price: Xbox Series has better quality, period, so it's normal for it to have the better architecture. Xbox Series has RDNA2 and PS5 doesn't.
SX has 12 teraflops but we still ain't seen them.
So we should believe you over Sony and AMD?
I mean, who are you exactly? Besides being a fangirl/warrior.
 

Garani

Member
the response to Xbox Series X and PS5 being the same quality at the same price: Xbox Series has better quality, period, so it's normal for it to have the better architecture. Xbox Series has RDNA2 and PS5 doesn't.

Drop it, it's the usual FUD and by now it's pathetic.

If the "PS5 is not RDNA 2", neither is the Series X.

They just don't get it, it's hilarious.

It's actually worse: if "PS5 is not RDNA2", then the XSX gets beaten by an old technology. It's simply a stupid argument.
 