
Microsoft Xbox Series X's AMD Architecture Deep Dive at Hot Chips 2020

Allandor

Member
RDNA can already exceed 2.23 GHz in the 5700 XT with the right cooling and voltages.

What I do know is that for the 5700 XT to see gains from that clockspeed it needs more memory bandwidth. With the RDNA2 colour compression enhancements I would not be surprised if 2.23 GHz is the point where the memory bandwidth gets saturated, so clocking higher uses more power and creates more heat for practically no performance uplift.
1st: it's not stable (at least not on air or conventional liquid cooling)
2nd: only with really high power draw

Sony chose 2.23 GHz with SmartShift in action so they can always keep the chip cool enough. The chip should be quite small, so the extra cooling solution is needed to get the heat off the small die area. If the cooling were good enough (without getting too loud), they wouldn't need SmartShift; they would just use a slightly better power supply and give the chip the power it needs to reach that frequency under all conditions. But since SmartShift is needed, the GPU will clock down when it is under really heavy load, because it would otherwise consume too much power. So only expect the GPU to reach its full potential when it isn't fully utilized.
Those "boost" modes should be optimal for use with dynamic resolution and VRR. But the 1% lows could get really bad if developers push it a bit too hard.

SmartShift was developed for laptop APUs, so they can redirect power in a "smart" way and the APU doesn't draw too much power or produce too much heat. I would bet that if Sony could just give the GPU the power it needs to really hold 2.23 GHz stable, they would do it. The power supply shouldn't be the problem here: in such a big case, a power supply that delivers a bit more power shouldn't be an issue (and shouldn't cost much more). Thermals must be the "problem" that makes SmartShift necessary. 7nm has its price here: the chips get smaller, and the area that can spread the heat gets smaller, too. We have just reached a point where the die area is simply too tiny to transfer the heat effectively. It could get much harder in the generation after next, when <5nm is available and the cost per unit of chip area gets even higher. Small chips drawing >200W could become a real problem. Maybe we'll have to artificially increase the die size by then so we can still cool the chip (even "dead" transistors transfer heat) :)
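A minimal sketch of the budget-shifting idea described above, assuming a fixed total SoC power budget; every name and number here is illustrative, not AMD's actual algorithm:

```python
# Minimal sketch of SmartShift-style power budgeting.
# All names and numbers are illustrative assumptions, not PS5 firmware.

TOTAL_BUDGET_W = 200.0  # assumed fixed SoC power budget

def split_budget(cpu_demand_w: float, gpu_demand_w: float) -> tuple[float, float]:
    """Grant both units their demand if the budget allows; otherwise scale
    both back, which in hardware means dropping frequency/voltage."""
    demand = cpu_demand_w + gpu_demand_w
    if demand <= TOTAL_BUDGET_W:
        return cpu_demand_w, gpu_demand_w
    scale = TOTAL_BUDGET_W / demand
    return cpu_demand_w * scale, gpu_demand_w * scale

# CPU nearly idle, GPU hungry: the GPU inherits the CPU's unused headroom.
print(split_budget(30.0, 160.0))   # under budget -> (30.0, 160.0)
print(split_budget(60.0, 180.0))   # over budget  -> (50.0, 150.0)
```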
 
Last edited:

Dampf

Member

Interesting: Moore's Law Is Dead suggests, based on the XSX Hot Chips info, that RDNA2 RT performance will be superior to Turing's. I knew it had to be the case, because the RT results on PS5 were too impressive. An RTX 2060 can't run RT at 4K 30fps like the PS5 does, and it looks like the PS5 will run some RT games at 4K 60fps as well.


Nothing from the Hot Chips presentation suggests RDNA2 performs better than Turing in ray tracing, as we have different metrics here. All MLID did was compare the 3-10x factor using Crytek's Neon Noir demo, which wasn't even using the Turing RT cores, so that comparison is pretty pointless.
 
Last edited:

jimbojim

Banned
I would bet, if Sony could just give the GPU the power it needs to really hold 2.23 GHz stable, they would do it.

And that would make the PS5 a weaker console than it is now. LOL. Variable frequencies are there to squeeze more power out of the GPU. There is no boost mode in the PS5; 2.23 GHz is just the max clock. What the CPU and GPU in the PS5 are doing is, so to speak, "load balancing". That's why SmartShift is there: to distribute power WHERE IT IS NEEDED. People are really struggling to realize how a variable clock/power system gets more performance out of a given piece of hardware than one with fixed clocks/power balancing.
The PS5's variable clocks are there to balance the load between the CPU and the GPU in order to eliminate per-frame bottlenecks; they're not there because "the system can't handle both at max clocks". Whenever there's a bottleneck in either the CPU or GPU, power can be channeled from one to the other without compromising performance, making the system more resilient to bottlenecks.
With a system like the Series X, you just throw fixed power at both, and if there's a bottleneck in either the CPU or the GPU at a given time, the bottleneck simply occurs. With locked clocks, some power will be left on the table; it won't be used 100%.
Now, of course the Series X still has a more powerful CPU/GPU, that's not up for debate. What I'm trying to say here is that if the system had also been designed as a variable clock system with APU load balancing like the PS5, it would have ended up even more powerful than it currently is, precisely because of that.
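As a toy illustration of the per-frame balancing argument (all work amounts and clocks below are made-up numbers, and time units are arbitrary):

```python
# Toy model of per-frame load balancing: the frame finishes when the
# slower of the two units finishes, so shifting power toward the
# bottleneck shortens the frame. All figures are assumptions.

def frame_time(cpu_work: float, gpu_work: float,
               cpu_ghz: float, gpu_ghz: float) -> float:
    return max(cpu_work / cpu_ghz, gpu_work / gpu_ghz)

# Fixed clocks: a GPU-bound frame is limited by the GPU alone.
print(frame_time(20, 50, cpu_ghz=3.5, gpu_ghz=2.0))    # 25.0

# Variable clocks: downclock the underused CPU and spend the freed power
# on the GPU; the same frame now completes faster.
print(frame_time(20, 50, cpu_ghz=3.0, gpu_ghz=2.23))   # ~22.4
```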
 
Last edited:

pawel86ck

Banned
Nothing from the Hot Chips presentation suggests RDNA2 performs better than Turing in ray tracing, as we have different metrics here. All MLID did was compare the 3-10x factor using Crytek's Neon Noir demo, which wasn't even using the Turing RT cores, so that comparison is pretty pointless.
Turing has up to a 6x RT performance improvement over software RT according to NVIDIA, while the XSX has up to 10x according to MS.

After two years, people can expect AMD to not only match Turing's RT performance but surpass it. Otherwise RDNA2 GPUs will look very bad next to Ampere (rumors suggest Ampere will offer much better RT performance than Turing).
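One caveat worth making explicit: the two multipliers are measured against different software baselines, so they can't be compared directly. A made-up example of the pitfall:

```python
# The "6x" and "10x" claims are multipliers over *different* software
# baselines. Baseline values below are invented purely to show the pitfall.

turing_sw = 1.0            # NVIDIA's software-RT baseline (arbitrary units)
xsx_sw    = 0.5            # MS's shader-based baseline might well be slower

turing_hw = 6 * turing_sw  # "up to 6x" over NVIDIA's own baseline
xsx_hw    = 10 * xsx_sw    # "up to 10x" over MS's own baseline

# A bigger multiplier over a slower baseline can still mean less absolute
# throughput: here 10x yields 5.0 vs 6.0 for Turing's 6x.
print(turing_hw, xsx_hw)
```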
 
Last edited:

M1chl

Currently Gif and Meme Champion
OK watch this


It does really cost A LOT to R&D and make those machines!

This is something I had already seen; my question was quite different. I know how lithography works, but my question was about the layers and how you make connections between those layers. I found a handy article about it : )

Also this picture:
[Image: EUV lithography beginner's diagram]
 
Last edited:

Allandor

Member
And that would make the PS5 a weaker console than it is now. LOL. Variable frequencies are there to squeeze more power out of the GPU. There is no boost mode in the PS5; 2.23 GHz is just the max clock. What the CPU and GPU in the PS5 are doing is, so to speak, "load balancing". That's why SmartShift is there: to distribute power WHERE IT IS NEEDED. People are really struggling to realize how a variable clock/power system gets more performance out of a given piece of hardware than one with fixed clocks/power balancing.
The PS5's variable clocks are there to balance the load between the CPU and the GPU in order to eliminate per-frame bottlenecks; they're not there because "the system can't handle both at max clocks". Whenever there's a bottleneck in either the CPU or GPU, power can be channeled from one to the other without compromising performance, making the system more resilient to bottlenecks.
With a system like the Series X, you just throw fixed power at both, and if there's a bottleneck in either the CPU or the GPU at a given time, the bottleneck simply occurs. With locked clocks, some power will be left on the table; it won't be used 100%.
Now, of course the Series X still has a more powerful CPU/GPU, that's not up for debate. What I'm trying to say here is that if the system had also been designed as a variable clock system with APU load balancing like the PS5, it would have ended up even more powerful than it currently is, precisely because of that.
I think you should just leave the fanboy at home and try to understand what I wrote.

You know, the PS5 would be even stronger if they could fix those 2.23 GHz (and also the max frequency of the CPU). That is what I wrote. But as a fanboy you seem to just read past such things ("oh, it seems he's trying to paint a negative picture of our holy Sony artifact").
Why should a variable frequency make the console stronger versus a fixed frequency for the CPU and GPU at their current maximums? That doesn't make sense at all. SmartShift is just there to get stable power usage. But that is nothing a "bigger" PSU couldn't handle (well, I guess bigger is relative, because even 700W PSUs can be really, really small, so a console PSU shouldn't be too big, and it isn't even that much more expensive). So the only reason I see for them using SmartShift is the heat. Otherwise they would have gone the route of a stronger PSU.
I don't think the use of dynamic frequencies was a reaction to the Xbox console. SmartShift is a technology AMD had built into their mobile APUs, and it was more or less planned to go into the PS5 SoC long before the Xbox reveal. The only thing they might have done is stretch the frequencies a bit higher than first planned. But the fact that they planned to use that feature from the start shows that the chip is just too small to move heat to the cooling solution fast enough. Smaller chip -> lower production cost -> better yields.
MS took the other route. Bigger chip -> worse yields -> but easier to cool (because of the bigger die area).

That is all I wrote before. So before you let your inner fanboy out again, just read. There was nothing negative about your holy PS5, just an explanation of their design choices. There are still some people here who are interested in the tech of the new consoles, not in a stupid "console war".

And before you read that wrong again: yes, on the same chip, dynamic frequencies are better than fixed lower frequencies, but that was never the point of my post.
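To put the yield argument in rough numbers, a standard first-order model is Poisson yield, Y = exp(-A*D0). The defect density below is an assumed figure purely for illustration; only the Series X die size (360.4 mm²) is public, and the PS5 die area is approximated:

```python
import math

def poisson_yield(area_mm2: float, d0_per_mm2: float) -> float:
    """First-order Poisson yield model: Y = exp(-A * D0)."""
    return math.exp(-area_mm2 * d0_per_mm2)

D0 = 0.001  # assumed defects per mm^2 (~0.1 per cm^2); real N7 figures vary

print(poisson_yield(300.0, D0))  # ~0.74 for a smaller PS5-class die (approx.)
print(poisson_yield(360.4, D0))  # ~0.70 for the bigger Series X die
```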
 
Last edited:

jimbojim

Banned
Why should a variable frequency make the console stronger versus a fixed frequency for the CPU and GPU at their current maximums? That doesn't make sense at all. SmartShift is just there to get stable power usage. But that is nothing a "bigger" PSU couldn't handle (well, I guess bigger is relative, because even 700W PSUs can be really, really small, so a console PSU shouldn't be too big, and it isn't even that much more expensive). So the only reason I see for them using SmartShift is the heat. Otherwise they would have gone the route of a stronger PSU.
I don't think the use of dynamic frequencies was a reaction to the Xbox console. SmartShift is a technology AMD had built into their mobile APUs, and it was more or less planned to go into the PS5 SoC long before the Xbox reveal. The only thing they might have done is stretch the frequencies a bit higher than first planned. But the fact that they planned to use that feature from the start shows that the chip is just too small to move heat to the cooling solution fast enough. Smaller chip -> lower production cost -> better yields.
MS took the other route. Bigger chip -> worse yields -> but easier to cool (because of the bigger die area).

That is all I wrote before. So before you let your inner fanboy out again, just read. There was nothing negative about your holy PS5, just an explanation of their design choices. There are still some people here who are interested in the tech of the new consoles, not in a stupid "console war".

And before you read that wrong again: yes, on the same chip, dynamic frequencies are better than fixed lower frequencies, but that was never the point of my post.


As Cerny said, both the CPU and GPU can be at max clock most of the time. Cerny also said that with locked clocks the PS5 GPU wouldn't go above 2 GHz. Calling others fanboys with no good reason... you called me a fanboy three times. WTF are you, then?

Sure, bud.




EDIT :

In addition, instead of calling me a fanboy a few times in a row, maybe you should read this:

Sony did tell us how their design works. The thing you're missing is that the PS5 approach is not just letting clocks be variable, like uncapping a framerate. That would indeed have no effect on the lowest dips in frequency. But they've also changed the trigger for throttling from temperature to silicon activity. And that actually changes how much power can be supplied to the chip without issues. This is because the patterns of GPU power needs aren't straightforward.

Here's a depiction of the change. (This is not real data, just for illustrative purposes of the general principle.) The blue line represents power draw over time, for profiled game code. The solid orange line represents the minimum power supply that would need to be used for this profile. Indeed, actual power draw must stay well below the rated capacity. Power supplies function best when actually drawing ~80% of their rating. And when designing a console the architects, working solely from current code, will build in a buffer zone to accommodate ever more demanding scenarios projected for years down the line.
[Image: power draw over time (blue) against the minimum power-supply rating (orange)]


You'd think the tallest peaks, highlighted in yellow, would be when the craziest visuals are happening onscreen in the game: many characters, destruction, smoke, lights, etc. But in fact, that's often not the case. Such impressive scenes are so complicated, the calculations necessary to render them bump into each other and stall briefly. Every transistor on the GPU may need to be in action, but some have to wait on work, so they don't all flip with every tick of the clock. So those scenes, highlighted in pink, don't contain the greatest spikes. (Though note that their sustained need is indeed higher.)

Instead, the yellow peaks are when there's work that's complex enough to spread over the whole chip, but just simple enough that it can flow smoothly without tripping over itself. Unbounded framerates can skyrocket, or background processes cycle over and over without meaningful effect. The useful work could be done with a lot less energy, but because clockspeed is fixed, the scenes blitz as fast as possible, spiking power draw.

Sony's approach is to sense for these abnormal spikes in activity, when utilization explodes, and preemptively reduce clockspeed. As mentioned, even at the lower speed, these blitz events are still capable of doing the necessary work. The user sees no quality loss. But now behind the scenes, the events are no longer overworking the GPU for no visible advantage.
[Image: the same profile with the yellow activity spikes clipped by preemptive downclocking]



But now we have lots of new headroom between our highest spikes and the power supply buffer zone. How can we easily use that? Simply by raising the clockspeed until the highest peaks are back at the limit. Since total power draw is a function of number of transistors flipped, times how fast they're flipping, the power drawn rises across the board. But now, the non-peak parts of your code have more oomph. There's literally more computing power to throw at the useful work. You can increase visible quality for the user in all the non-blitz scenes, which is the vast majority of the game.

[Image: the clipped profile with clockspeed raised so the tallest peaks again reach the limit]


Look what that's done. The heaviest, most impressive scenarios are now closer to the ceiling, meaning these most crucial events are leaving fewer resources untapped. The variability of power draw has gone down, meaning it's easier to predictively design a cooling solution that remains quiet more often. You're probably even able to reduce the future proofing buffer zone, and raise speed even more (though I haven't shown that here). Whatever unexpected spikes do occur, they won't endanger power stability (and fear of them won't push the efficiency of all work down in the design phase, only reduce the spikes themselves). All this without any need to change the power supply, GPU silicon, or spend time optimizing the game code.

Keep in mind that these pictures are for clarity, and specifics about exactly how much extra power is made available, how often and far clockspeed may dip, etc. aren't derivable from them. But I think the general idea comes through strongly. It shows why, though PS5's GPU couldn't be set to 2 GHz with fixed clocks, that doesn't necessarily mean it must still fall below 2 GHz sometimes. Sony's approach changes the power profile's shape, making different goals achievable.

I'll end with this (slowly) animated version of the above.

[Animated image: the fixed-clock power profile morphing into the variable-clock profile]
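For the technically inclined, here is a small numerical sketch of that mechanism. The linear power model and every constant are illustrative assumptions (real silicon also scales with voltage squared), not measured PS5 behavior:

```python
# Sketch: clamp the clock when activity spikes, then raise the base clock
# until the clipped peaks meet the old power ceiling.

K = 100.0          # assumed proportionality (W per GHz at activity 1.0)
CEILING_W = 200.0  # assumed power-supply limit

def power(activity: float, freq_ghz: float) -> float:
    return K * activity * freq_ghz

trace = [0.4, 0.5, 1.0, 0.6, 0.95, 0.5]  # per-scene activity; 1.0 = blitz spike

# Fixed-clock design: the worst-case spike pins the clock at 2.0 GHz.
fixed = 2.0
assert max(power(a, fixed) for a in trace) <= CEILING_W

# Variable-clock design: cap each scene's frequency so power never exceeds
# the ceiling; only the rare spikes fall below the new 2.23 GHz maximum.
boost = 2.23
for a in trace:
    f = min(boost, CEILING_W / (K * a))
    print(f"activity={a:.2f}  clock={f:.2f} GHz  power={power(a, f):.0f} W")
```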




 
Last edited:

Allandor

Member
As Cerny said, both the CPU and GPU can be at max clock most of the time. Cerny also said that with locked clocks the PS5 GPU wouldn't go above 2 GHz. Calling others fanboys with no good reason... you called me a fanboy three times. WTF are you, then?
If a chip can reach a certain frequency, it can reach it all the time; it is just a question of power and heat. Otherwise the max frequencies would lead to an unstable chip, and that is not in Sony's interest.

PS:
the fanboy thing -> you reacted like a fanboy. Just read what you wrote.
e.g. "And that would make the PS5 a weaker console than it is now. LOL." -> I never wrote such a thing
 
Last edited:
Time-stamped: yes, games can use Tempest if computationally expensive workloads are better suited to SPU-like logic and lots of FFTs.



That's cool, and I remember Cerny saying this. But I'm just curious how much devs will actually use Tempest in this regard. I still expect the vast majority of graphics duty to be handled by the GPU and AI/physics by the CPU (and maybe the GPU, depending on how devs see fit to use it).

I think any use of Tempest for selective non-audio workloads will come from 1st-party later in the generation when they are pushing the design to its limit.

Also, a quick aside: I had to do a bit of quick research, and the SPFP MS were referring to in the audio component of their system is actually FP32, not FP16. Just in case there was any further confusion about it (one poster in particular, Physiognomonics, was claiming it was FP16 earlier).

Single-precision floating-point format is a computer number format, usually occupying 32 bits in computer memory; it represents a wide dynamic range of numeric values by using a floating radix point.

EDIT: Never mind, Panajev2001a beat me to it.

EDIT EDIT: Also beaten by TBiddy. Damn I'm slow x3

I would love to know where this Tempest engine actually resides. It would be crazy if it's on the APU. A dual CU on the 5700 XT is around 5-8 mm²; on the enhanced node it's probably 3-4 mm², meaning this 1-CU Tempest engine might actually be just a 1-2 mm² piece of the die offering insane amounts of audio processing power.

For the time being we should probably assume it's its own processor utilizing a single modified CU from a typical dual compute unit. They have what they call an "Onion bus" for it as well; that's where the claim that it could consume up to 20 GB/s of memory bandwidth comes from.

I would be pretty surprised if it's on the GPU die, and while I'd say the probability of that is pretty low, it isn't zero until we get more of a look, or until someone at Sony talks about Tempest to such a degree that they pretty much just say it's its own little chip.
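Putting the quoted estimate into numbers (the dual-CU area and shrink factor are the poster's guesses, not measured die data):

```python
# Back-of-envelope version of the area estimate quoted above.

dual_cu_mm2 = (5.0, 8.0)   # estimated dual-CU (WGP) area on the 5700 XT
shrink = 0.5               # assumed density gain on the "enhanced" N7 node

single_cu = tuple(a * shrink / 2 for a in dual_cu_mm2)
print(single_cu)           # -> (1.25, 2.0) mm² for a lone Tempest-style CU
```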
 
Last edited:

jimbojim

Banned
If a chip can reach a certain frequency, it can reach it all the time; it is just a question of power and heat. Otherwise the max frequencies would lead to an unstable chip, and that is not in Sony's interest.

PS:
the fanboy thing -> you reacted like a fanboy. Just read what you wrote.
e.g. "And that would make the PS5 a weaker console than it is now. LOL." -> I never wrote such a thing

You said PS5 with fixed 2.23 GHz would be stronger. It wouldn't, that's for sure. Mentioned in posts above.
Also, the PS5 would be weaker because the GPU couldn't go above 2 GHz. Why? Maybe you should rewatch "Road to PS5" and listen to what Cerny said.
 

Dodkrake

Banned
I would love to know where this Tempest engine actually resides. It would be crazy if it's on the APU. A dual CU on the 5700 XT is around 5-8 mm²; on the enhanced node it's probably 3-4 mm², meaning this 1-CU Tempest engine might actually be just a 1-2 mm² piece of the die offering insane amounts of audio processing power.
I know where it doesn't: in the GPU.
 
Did they address any of the API overheads?

Hardware event, not software. You can already find info on DX12 API overhead compared to Vulkan, for example, if you look around. It's honestly very, very good, and DX12U should lower the overhead even further.

I don't think the overhead gap between MS and Sony is going to be anything significant going into next gen. This isn't an XBO situation.

Dev talent is what ultimately matters. Give an ape the finest paint brushes, he still won't become the next Rembrandt.

It's a good thing MS have a great amount of talent already there: Ninja Theory, Coalition, Obsidian, Rare, Playground, Turn 10. If they manage to bring Asobo into the fold that will make seven. Those studios have already made some incredibly good games, and in some cases (Obsidian, Ninja Theory) they'll finally have budgets up there with larger AAA devs... if they so choose.

Also FWIW FS2020 more or less cements MS as king of the sim genre between it and the Forza games. I'd love to see them do a modern-day 4X space action/adventure/resource management simulator.
 

Deleted member 471617

Unconfirmed Member
Both of them are.

If both means XSX and PS5, not XSS...

Yeah, PlayStation 5 will be great as well. I just have no hype for it. Will still buy it day one for Miles Morales unless it gets delayed, then I will wait until the game releases.
 
Hardware event, not software. You can already find info on DX12 API overhead compared to Vulkan, for example, if you look around. It's honestly very, very good, and DX12U should lower the overhead even further.

I don't think the overhead gap between MS and Sony is going to be anything significant going into next gen. This isn't an XBO situation.



It's a good thing MS have a great amount of talent already there: Ninja Theory, Coalition, Obsidian, Rare, Playground, Turn 10. If they manage to bring Asobo into the fold that will make seven. Those studios have already made some incredibly good games, and in some cases (Obsidian, Ninja Theory) they'll finally have budgets up there with larger AAA devs... if they so choose.

Also FWIW FS2020 more or less cements MS as king of the sim genre between it and the Forza games. I'd love to see them do a modern-day 4X space action/adventure/resource management simulator.

Ninja Theory always have my attention.
 

Derktron

Banned
I think I read more about the PS5 than I did the Xbox Hot Chips breakdown over the last few pages. Some of you guys have got to give it a rest. Every Xbox thread shouldn't be buried in bullshit by PS5 defenders.
The hate will always be there, and even though I'm not an Xbox player, I'm tired of it. This is so 2013; it needs to stop, and it makes the gaming community very toxic.
 

Shmunter

Member
If the Series X multiplats run better, there's going to be a giganourmous shitshow around here.

We are not ready.
Isn’t the unanimous consensus among Sony & Xbox fans alike that the XSX will indeed perform higher? The numbers back it up.

The only discussion point is how the difference will materialise: higher res, less aggressive scaling, framerate consistency, more RT?

At the same time, the PS5 has a memory system that is beyond the XSX’s specs. How will that materialise: faster loading, faster streaming, less pop-in?

Nothing controversial in the above. Only when people cannot accept some part of that reality do things get childish.
 

MrFunSocks

Banned
You are acting like a battered spouse. Who hurt you?
Nice attempt but move along.


Isn’t the unanimous consensus among Sony & Xbox fans alike that the XSX will indeed perform higher? The numbers back it up.

The only discussion point is how the difference will materialise: higher res, less aggressive scaling, framerate consistency, more RT?

At the same time, the PS5 has a memory system that is beyond the XSX’s specs. How will that materialise: faster loading, faster streaming, less pop-in?

Nothing controversial in the above. Only when people cannot accept some part of that reality do things get childish.
How is the PS5's memory system beyond the XSX's specs? The XSX has significantly faster RAM speeds and can use the SSD essentially as RAM. Are you simply talking about the SSD?

The hate is fucking retarded. They are video game consoles.
People, especially children and young adults, want confirmation that they’ve chosen “right” (or that their parents did) because they don’t want the “loser” console, so they’ll feverishly defend it and attack the other one. Some people never grow out of it.
 
Last edited:

Shmunter

Member
Nice attempt but move along.



How is the PS5's memory system beyond the XSX's specs? The XSX has significantly faster RAM speeds and can use the SSD essentially as RAM. Are you simply talking about the SSD?
What do you think I’m talking about? The memory subsystem is cache <-> RAM <-> storage.

Every single developer agrees secondary storage has been the bottleneck for game design, and all modern games & engines rely on streaming.

This is unrelated to RAM performance for GPU rendering; it's about freely feeding the GPU so it can do its thing.
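Some back-of-envelope numbers on why storage is the axis being compared here, using the published raw SSD rates (compressed rates vary with the data, so they're left out):

```python
# Time to refill each console's full RAM from SSD at the raw rate.

specs = {"PS5": 5.5, "Series X": 2.4}   # raw SSD throughput, GB/s
ram_gb = 16                              # total GDDR6 on both consoles

for name, gbps in specs.items():
    print(f"{name}: ~{ram_gb / gbps:.1f} s to stream in a full {ram_gb} GB")
# PS5: ~2.9 s; Series X: ~6.7 s. Storage, not RAM bandwidth, is the
# bottleneck being discussed.
```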
 
Last edited:

MrFunSocks

Banned
What do you think I’m talking about? The memory subsystem is cache <-> RAM <-> storage.

Every single developer agrees secondary storage has been the bottleneck for game design, and all modern games & engines rely on streaming.

This is unrelated to RAM performance for GPU rendering; it's about freely feeding the GPU so it can do its thing.
So a single part is better, but other parts are worse.
 
Last edited:

Shmunter

Member
Can you not read?

The memory subsystem isn’t “beyond the XSX’s” just because it has a faster SSD, especially after what we found out from MS's Hot Chips talk.
The RAM is one part of a “system”. In fact, 6 of the 16 GB on the XSX are slower than the PS5's RAM, if you want to compare just one aspect of the whole thing. But thanks for proving the point of my original post.
 
Last edited:

MrFunSocks

Banned
The RAM is one part of a “system”. In fact, 6 of the 16 GB on the XSX are slower than the PS5's RAM, if you want to compare just one aspect of the whole thing. But thanks for proving the point of my original post.
And the other 10 GB is significantly faster lol. Games don't need super-fast RAM for everything, btw.

Your original post said the PS5's memory system is better, which is just conjecture. They both have their pros and cons. Neither is proven better or worse yet.
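For reference, the published figures, with a capacity-weighted average that is only a crude illustration since real traffic isn't spread evenly across the pools:

```python
# Published RAM configurations for both consoles.

xsx_pools = [(10, 560.0), (6, 336.0)]   # (GB, GB/s) Series X split pools
ps5_pools = [(16, 448.0)]               # PS5 uniform pool

def weighted_bw(pools):
    total_gb = sum(gb for gb, _ in pools)
    return sum(gb * bw for gb, bw in pools) / total_gb

print(weighted_bw(xsx_pools))  # 476.0 GB/s if traffic followed capacity
print(weighted_bw(ps5_pools))  # 448.0 GB/s
```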
 
Last edited:

GODbody

Member
At the same time, the PS5 has a memory system that is beyond the XSX’s specs. How will that materialise: faster loading, faster streaming, less pop-in?

Nothing controversial in the above. Only when people cannot accept some part of that reality do things get childish.

Well, maybe beyond in terms of raw specs, yes, but SFS (if devs choose to implement it) multiplies the effective bandwidth and GPU-optimal memory by 2.5x (effectively able to transmit 6 GB/s and hold the equivalent of 25 GB of GPU-optimal memory). The compression on the Xbox is also better, at a 2:1 ratio (the PS5's compression is in the range of 1.46:1-1.64:1 based on its stated compressed I/O throughput). What the PS5 has done with hardware, it seems the Series X has solved and moved beyond with software.
 

Shmunter

Member
Well, maybe beyond in terms of raw specs, yes, but SFS (if devs choose to implement it) multiplies the effective bandwidth and GPU-optimal memory by 2.5x (effectively able to transmit 6 GB/s and hold the equivalent of 25 GB of GPU-optimal memory). The compression on the Xbox is also better, at a 2:1 ratio (the PS5's compression is in the range of 1.46:1-1.64:1 based on its stated compressed I/O throughput). What the PS5 has done with hardware, it seems the Series X has solved and moved beyond with software.
I’m not optimistic about software bridging sizable hardware disparities, but ruling it out wholesale would also be silly. We'll see how it plays out.
 

ZywyPL

Banned
Well, maybe beyond in terms of raw specs, yes, but SFS (if devs choose to implement it) multiplies the effective bandwidth and GPU-optimal memory by 2.5x (effectively able to transmit 6 GB/s and hold the equivalent of 25 GB of GPU-optimal memory). The compression on the Xbox is also better, at a 2:1 ratio (the PS5's compression is in the range of 1.46:1-1.64:1 based on its stated compressed I/O throughput). What the PS5 has done with hardware, it seems the Series X has solved and moved beyond with software.

That's not exactly how it works. No software feature will ever allow you to store 25 GB of data inside 13 GB of RAM; it's the opposite. Instead of, let's say, 10 GB used normally, thanks to SFS only 4 GB will be needed, requiring less bandwidth and leaving space for additional data to be stored.
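The arithmetic behind both sets of claims, applying the vendors' own multipliers to the published raw rates (note the SFS factor is an average for texture data, not a guaranteed floor, and it reduces how much needs to be resident rather than growing RAM):

```python
# Vendor-claimed I/O multipliers applied to published raw SSD rates.

xsx_raw = 2.4                       # GB/s raw SSD
print(xsx_raw * 2.0)                # 4.8 GB/s with the claimed 2:1 BCPack ratio
print(xsx_raw * 2.5)                # 6.0 GB/s "effective" with SFS (MS's figure)

ps5_raw = 5.5                       # GB/s raw SSD
print(ps5_raw * 1.46, ps5_raw * 1.64)  # ~8.0-9.0 GB/s typical with Kraken
```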
 
Last edited:

Vognerful

Member
You are mixing up CPU frequencies and GPU frequencies. Just take a look at the die shot. CPU logic is quite small, therefore higher frequencies are much easier to reach. GPUs are much more complicated because you have many, many small cores, and the result is that you reach lower frequencies. GPUs therefore also have much higher power requirements, and higher frequencies mean much higher power requirements still. E.g., a GPU can easily draw a constant 300W (if the chip is big enough to spread the heat); a CPU would melt at that constant power draw, because CPUs are much smaller nowadays.


Sony has no factories for those chips. At best they have assembly plants where they stick the boards into a box, nothing more. Like everyone else, they still rely on buying all the components inside the box.
Someone pointed out that at the rate this factory is working, it will only make 1 million units per year.
 

Allandor

Member
You said PS5 with fixed 2.23 GHz would be stronger. It wouldn't, that's for sure. Mentioned in posts above.
Also, the PS5 would be weaker because the GPU couldn't go above 2 GHz. Why? Maybe you should rewatch "Road to PS5" and listen to what Cerny said.
Well, we know the GPU can hold a steady 2.23 GHz, because you can fix that frequency on the dev kit (same chip) to test how your code behaves at different frequencies. ;)
So the chip is capable of the fixed frequency; something else must be the reason the console can't hold it. And there we have the fixed power budget (via SmartShift), which is needed because of the cooling solution (at least this seems to be the only reason they won't fix the frequency at its maximum).
 
Last edited:

M1chl

Currently Gif and Meme Champion
That's not exactly how it works. No software feature will ever allow you to store 25 GB of data inside 13 GB of RAM; it's the opposite. Instead of, let's say, 10 GB used normally, thanks to SFS only 4 GB will be needed, requiring less bandwidth and leaving space for additional data to be stored.
Probably not that many, but zram on Linux exists and it's great. For gaming, though, I'm not so sure about that one.

You said PS5 with fixed 2.23 GHz would be stronger. It wouldn't, that's for sure. Mentioned in posts above.
Also, the PS5 would be weaker because the GPU couldn't go above 2 GHz. Why? Maybe you should rewatch "Road to PS5" and listen to what Cerny said.
I still somewhat doubt that's the case; we already have the same technology from NVIDIA and performance is not better:

I saw Matt and his somewhat vague reasoning; I will wait and see, but everything points to it being about power delivery, not about gaining performance (which would hardly make sense).

These technologies are used in notebooks which is all you need to know about them...

[Image: AMD Ryzen Mobile Tech Day breakout-session slide on performance optimization]
 
Last edited:

Marlenus

Member
Probably not that many, but zram on Linux exists and it's great. For gaming, though, I'm not so sure about that one.


I still somewhat doubt that's the case; we already have the same technology from NVIDIA and performance is not better:

I saw Matt and his somewhat vague reasoning; I will wait and see, but everything points to it being about power delivery, not about gaining performance (which would hardly make sense).

These technologies are used in notebooks which is all you need to know about them...

[Image: AMD Ryzen Mobile Tech Day breakout-session slide on performance optimization]

The PS5 is essentially fixed power usage but variable clocks, and the Series X is fixed clocks but variable power usage.

For the former you can have less overhead in your thermal solution and you can hit higher peak clockspeeds when other parts of the system are underutilized.

For the latter you have consistent performance but your peak clockspeed is constrained by the max power draw you want to allow and you have to account for outliers that may be rare in real world scenarios.

When Matt says variable clocks are more performant, I believe he is essentially saying that if Sony were to run fixed clocks, they would need to account for certain outliers, which would mean a clockspeed below 2.23 GHz, possibly by a significant margin. With variable clocks they can maintain a higher average clockspeed, because the majority of the time the outlier scenario is not in play, and when it is, they can lower clocks to keep power and heat in line.
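A toy version of that outlier argument, with assumed per-scene figures:

```python
# One rare worst-case scene forces a fixed-clock design down to 2.0 GHz,
# while a variable design only drops for that scene. Figures are assumed.

import statistics

# Maximum safe clock (GHz) per scene under the same power/thermal limit.
safe_clock = [2.23, 2.23, 2.23, 2.23, 2.23, 2.23, 2.23, 2.0]

fixed_design    = min(safe_clock)              # worst case rules: 2.0 GHz
variable_design = statistics.mean(safe_clock)  # average ~2.2 GHz

print(f"fixed: {fixed_design:.2f} GHz, variable avg: {variable_design:.2f} GHz")
```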
 

Marlenus

Member
Cerny did say specifically



So I am interpreting it as propagation/logic delay, but it could be multiple factors... as it is quite a vague statement.

But agreed, we don't have much to go on. It won't be long before PC chips on RDNA2 come out, and then we will know a shit ton more.

Also, that Cerny ND patent about removing the bottlenecks in the GPU and compressing the vertices probably helps get up to the 2.23 GHz as well. The patent does mention all the bottlenecks solved.

I interpret his claim about 2.23 GHz as two things: 1) performance gains above this are marginal with the amount of memory bandwidth they have, and 2) it is possible the voltage required to go above this for their "baseline"* APU is higher than they want, as it creates too much silicon degradation.

*Since they want all PS5s to behave the same regardless of silicon quality, they need to find the worst passable examples and use those to create the SmartShift profiles all consoles will use.
 

M1chl

Currently Gif and Meme Champion
The PS5 is essentially fixed power usage but variable clocks, and the Series X is fixed clocks but variable power usage.

For the former you can have less overhead in your thermal solution and you can hit higher peak clockspeeds when other parts of the system are underutilized.

For the latter you have consistent performance but your peak clockspeed is constrained by the max power draw you want to allow and you have to account for outliers that may be rare in real world scenarios.

When Matt says variable clocks are more performant, I believe he is essentially saying that if Sony were to run fixed clocks, they would need to account for certain outliers, which would mean a clockspeed below 2.23 GHz, possibly by a significant margin. With variable clocks they can maintain a higher average clockspeed, because the majority of the time the outlier scenario is not in play, and when it is, they can lower clocks to keep power and heat in line.
But that rests on the presumption that the CPU/GPU has no power throttling at all and always runs at full frequency under full load. I don't think that's the situation here. Basically, if you lock your CPU and GPU to some frequency, they aren't really consuming that much more power; the load factor is also a factor.

However, we don't know how much headroom the XSX has with its PSU, which is rated at only about 300W, while draw at the plug is up to 250W, so in that case it could prove beneficial. Something like the Turing architecture is bound by power, not temps (the die's surface area is so big that the cooler easily carries away all the heat).

But if that SoC is powered well within the spec of the PSU, which is the smart thing to do (because otherwise 🔥), or power delivery is monitored inside the APU (which seems unrealistic based on the common methods used in these cases, i.e. not SmartShift), then I don't see a reason for a chip with sustained clocks to lose performance. BUT we don't know whether the XSX throttles those sustained clocks down, because that's not some glamorous technology; it's a fail-safe mechanism.

But obviously it's hard to draw conclusions about the console itself as of now. I am just discussing the concept of how this would work.
 