
Next-Gen PS5 & XSX |OT| Console tEch threaD

This dude has no clue WTF he is talking about.
That’s not how this works. That’s not how any of this works.
I simply don’t get this way of thinking. The XSX can run at its max clocks all the time. On a level playing field, variable clocks can’t match fixed max clocks.
What he said is exactly what Matt said over at Era. I'd trust Matt more than these other 'insiders' since Matt got all the stuff about PS4 x XBO right (wayyy before launch)
 

Elog

Member
I simply don’t get this way of thinking. The XSX can run at its max clocks all the time. On a level playing field, variable clocks can’t match fixed max clocks.

That is not how it works either - it will downclock when the power draw exceeds the budget, which it easily can - but then in an uncontrolled way.

Edit: This whole discussion is filled with quite some ignorance. By making both power and frequency variable, the system can avoid bottlenecks more often. It is really that simple - and yes, it is a clear benefit to be able to control both as on the PS5. Only upside - no downside.
 
Last edited:
Honestly looks ugly asf
 

Entroyp

Member
That is not how it works either - it will downclock when the power draw exceeds the budget, which it easily can - but then in an uncontrolled way.

Edit: This whole discussion is filled with quite some ignorance. By making both power and frequency variable, the system can avoid bottlenecks more often. It is really that simple - and yes, it is a clear benefit to be able to control both as on the PS5. Only upside - no downside.

Let’s say that the CPU is under heavy load due to AVX usage. As far as I know the XsX CPU will handle the load and the PS5 would have to underclock to stay under the power budget. No?
 

kyliethicc

Member
I ask you this question: Do you think that Cerny would still say that TF and CU counts don't tell the whole story if the XSX had less TFs and CUs than the PS5?

He wouldn't. There's no way in hell that Cerny would downplay the importance of CUs and TFs if they were one more bullet point that the PS5 could one-up the XSX with. So I think it's safe to say that on the CU/TF subject, he was damage controlling.

Don't get me wrong, I think the PS5 will blow the XSX out of the water, but him handwaving TFs and CUs as a true measure of power was damage control.

He was not saying what you think he was. He was pointing out that the PS4 Pro and PS5 both have 8 core CPUs and 36 CUs, but we know the performance of both will be different.

He never said TF aren’t important. He didn’t hand wave anything.

He said they don’t give you a guaranteed measure of performance. Other factors matter too, like architecture, clocks, caches, bandwidth, etc. Lots of things also could make any nominal amount of CPUs or TFs perform higher or lower than another.

He's clearly quite smart, and so he probably didn't feel the need to spell it out that much. It does seem pretty obvious if you just listen, because everything he said IS TRUE.
 

IntentionalPun

Ask me about my wife's perfect butthole
What he said is exactly what Matt said over at Era. I'd trust Matt more than these other 'insiders' since Matt got all the stuff about PS4 x XBO right (wayyy before launch)
Link?

Maybe he's badly paraphrasing things that are correct.. but what he said is absolutely not correct.

You can't magically not lower performance when lowering clocks.. like who would ever make that claim?

You can get higher average performance than the same design at fixed clocks though. But... the XSS clocks are fixed, higher than the max clocks of the PS5... so when comparing them, the XSS has the clear advantage at least overall theoretical TF.

PS5 has advantage for maximum clock speed per CU on the GPU side though.
 
Last edited:
That is not how it works either - it will downclock when the power draw exceeds the budget, which it easily can - but then in an uncontrolled way.

Edit: This whole discussion is filled with quite some ignorance. By making both power and frequency variable, the system can avoid bottlenecks more often. It is really that simple - and yes, it is a clear benefit to be able to control both as on the PS5. Only upside - no downside.

I'm not saying the PS5 is 9.2 TFs.

I think they did this because it's better to have a 10.28TF system with variable clocks than a 9.2TF system with fixed clocks.

Like you said it helps them avoid bottlenecks and Mark said its important for the cooling as well. It should help with noise if it's done properly. Plus it allows them to squeeze more power out of the GPU.

Am I understanding this correctly?
 
Last edited:

Brudda26

Member
Let’s say that the CPU is under heavy load due to AVX usage. As far as I know the XsX CPU will handle the load and the PS5 would have to underclock to stay under the power budget. No?
The XSX has thermal boundaries, whereas the PS5 has power boundaries. Different workloads will cause different problems for each console. Some tasks will cause the XSX to generate unneeded noise and excess heat, because its fixed clocks keep it running at max power even when the task doesn't need it. The PS5 is designed to avoid that: it gives power to what the task needs.
 

IntentionalPun

Ask me about my wife's perfect butthole
The XSX's CPU clocks are a little higher, I believe. But in terms of GPU, the PS5's is clocked quite a bit higher.

Not saying the PS5 has more TFs, but the GPU's frequency is higher.

Yeah read the last sentence in my post.

Tidux says some accurate things, but it's also clear he doesn't get a few things.

Are people really still confused by this stuff?
 

kyliethicc

Member
GEDAFUCK ATTA HERE!

Seriously though, I'm talking about turning on the PS5, waiting for the system to boot, running Spider-Man/Witcher 3/some AAA title, loading a saved game, waiting for the game world to load, and playing.

For most of those games, the procedure takes like, 3-5 minutes.

Heck, the X1X needs about a minute to boot.
Good news is supposedly on PS5 you can load your saved game right from the system OS.

So you could just turn the PS5 on, select “continue story” from, say, Spider-Man on the home screen, and within 1-2 seconds, you’re playing.

Again, hopefully this is how fast it is.
 

TrippleA345

Member
To counter the confusion, the concept of AMD SmartShift is as follows:
A control unit measures workloads. Based on the workloads, SmartShift decides to shift energy between CPU and GPU. When this happens, the frequencies of the CPU/GPU go down - but not too much. In total, more transistors are used more often without using more energy, yielding more performance.

This year, two devices will get AMD SmartShift: the PS5 and a laptop from Dell. AMD claims up to 14% more performance in games on the laptop (they still have to prove it). We'll have to wait and see how much this will be for the PS5.
Besides, AMD SmartShift is free. Free not in the sense of dollars, but in the sense that it takes up very little space on the device, and it's an automatic process handled by the system, so no software developer has to worry about it.
The only problem: it must be implemented in advance, i.e. when the device is developed, because it sits deep in the system.
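The shifting idea can be sketched as a toy model. To be clear, this is purely illustrative: the budget numbers, the floor, and the proportional split are my assumptions, not AMD's actual algorithm.

```python
# Toy model of SmartShift-style power shifting (purely illustrative;
# the budget, the floor, and the proportional split are assumptions,
# not AMD's actual algorithm).

TOTAL_BUDGET_W = 200.0  # hypothetical total power budget in watts
MIN_SHARE_W = 60.0      # hypothetical floor so neither side starves

def split_budget(cpu_demand, gpu_demand):
    """Split the fixed budget proportionally to demand, with a floor."""
    cpu_w = TOTAL_BUDGET_W * cpu_demand / (cpu_demand + gpu_demand)
    # Clamp so each side keeps at least MIN_SHARE_W.
    cpu_w = max(MIN_SHARE_W, min(TOTAL_BUDGET_W - MIN_SHARE_W, cpu_w))
    return cpu_w, TOTAL_BUDGET_W - cpu_w

# GPU-heavy frame: most of the budget flows to the GPU.
print(split_budget(cpu_demand=2.0, gpu_demand=8.0))  # (60.0, 140.0)
```

The key property is the last line of the function: the two shares always sum to the fixed budget, so nothing is wasted, which is the "free performance" point made above.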
 
Last edited:
Link?

Maybe he's badly paraphrasing things that are correct.. but what he said is absolutely not correct.

You can't magically not lower performance when lowering clocks.. like who would ever make that claim?

You can get higher average performance than the same design at fixed clocks though. But... the XSS clocks are fixed, higher than the max clocks of the PS5... so when comparing them, the XSS has the clear advantage at least overall theoretical TF.

PS5 has advantage for maximum clock speed per CU on the GPU side though.
He said so in the last tweet. CPU/GPU are stronger on XSX. It's more of a design limitation on workloads or something like that (hence why this is used in laptops). It won't make the PS5 stronger, just more efficient, since it doesn't have a 500W PSU to power it.
 
Last edited:
Yeah read the last sentence in my post.

Tidux says some accurate things, but it's also clear he doesn't get a few things.

Are people really still confused by this stuff?
What part exactly do you think he was wrong about? I read his tweets and it looks pretty much in line with my understanding. Maybe I missed something.
 

IntentionalPun

Ask me about my wife's perfect butthole
I didn't see that last part sorry. And when you said XSS I thought you meant the Series S for a moment.
lol.. I meant X, sorry.. fuck that branding.

What Tidux gets wrong: the variable clocks are ABSOLUTELY there because it couldn't maintain full clock speed for both at all times. Like the rest of his post contradicts that entire claim.. it's THE reason to have variable clocks in any setup; in the Sony setup it's about power budget.. others do it by measuring heat (which really are related pretty closely).

The rest I guess is accurate.. but that statement really throws the whole thing off, and he's just over-confusing it with the bottleneck talk and acting like variable clocks are some magic fix for bottlenecks.

They just let a processor go beyond what it could do "at all times" otherwise.
 

IntentionalPun

Ask me about my wife's perfect butthole
What part exactly do you think he was wrong about? I read his tweets and it looks pretty much in line with my understanding. Maybe I missed something.
The entire point of making clocks variable is because that same chip/setup/power budget/etc. could NOT handle max clocks at all times. He makes the statement that it isn't why they exist, but that isn't true at all. It's literally the reason you make clocks variable, to push beyond what would be possible by picking a static number. That was flat out wrong, and his stuff about bottlenecks is just sort of overly confusing too.

Fixed clocks: Pick a static number that you believe all code can achieve without causing power/heat issues.
Variable clocks: Let the clocks go higher than they could "at all times" because there are specific times when that is feasible without causing issues.

Typical method: React to thermals, give more power to the processor if the thermals are good, lower it when it heats up.
Sony method: Do this based on workload, basically PREDICTING ahead of time that a piece of code would be likely to cause thermal issues and do it within a specific power budget (another way to avoid thermal issues.)
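Those two methods could be sketched roughly like this. It's a toy model: the clock numbers, the temperature threshold, the power budget, and the linear scaling are all made-up assumptions for illustration, not either console's actual firmware behavior.

```python
# Toy contrast of the two boost strategies (illustrative assumptions:
# the clock numbers, the 80 C threshold, the 180 W budget, and the
# linear frequency/power scaling are all made up, not real firmware).

def thermal_boost(temp_c, max_clock=2230, base_clock=2000):
    """Typical method: react to measured temperature."""
    return max_clock if temp_c < 80 else base_clock

def workload_boost(predicted_power_w, budget_w=180, max_clock=2230):
    """Sony-style method: predict power draw from the workload and
    scale the clock down just enough to fit the fixed power budget."""
    if predicted_power_w <= budget_w:
        return max_clock
    # Simplified linear downclock; real power scales super-linearly
    # with frequency, so the real clock cut would be smaller.
    return int(max_clock * budget_w / predicted_power_w)

print(thermal_boost(70))    # 2230 - cool chip, full boost
print(workload_boost(200))  # 2007 - heavy workload, modest downclock
```

The difference to notice: the first function only knows what already happened to the silicon (heat), while the second acts before the heat exists, because a fixed power budget caps the worst case by construction.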
 
Last edited:

Elog

Member
Let’s say that the CPU is under heavy load due to AVX usage. As far as I know the XSX CPU will handle the load and the PS5 would have to underclock to stay under the power budget. No?

Firstly - which I think you know - at a given frequency a bit of silicon can consume very different amounts of power depending on the instructions you are running.

You can hit the power budget on both the XSX and the PS5.

When that happens on the XSX, it means you will have uncontrolled downclocks on the silicon that tapped out (CPU or GPU).

On the PS5, you can then choose to divert power from either the GPU to the CPU or vice-versa depending on where the bottle-neck is.

If you have tapped out on both the CPU and the GPU at the same time the systems will more or less behave the same even though you can choose a bit more freely on the PS5 which part you want to sacrifice.

So assuming that both systems have roughly the same power source etc., the PS5 will give better results when either the CPU or the GPU taps out in terms of power consumption.

As was shown in a thread here earlier - normal PC GPUs at fixed frequencies tap out more often than people think, with uncontrolled downclocks as a result.
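The first point - that the same clock can draw very different power depending on the instruction mix - can be sketched with made-up numbers. Every wattage below is hypothetical, chosen only to illustrate the shape of the problem.

```python
# Made-up numbers illustrating that, at the same fixed clock, power
# draw depends on the instruction mix (all wattages are hypothetical).

# Hypothetical power draw of one CPU at a fixed clock, per workload type.
CPU_POWER_W = {"scalar": 35, "sse": 45, "avx2": 60, "avx512": 90}

CPU_BUDGET_W = 65  # hypothetical CPU share of the total power budget

def fits_budget(mix):
    """True if this instruction mix stays inside the CPU's power share."""
    return CPU_POWER_W[mix] <= CPU_BUDGET_W

for mix in CPU_POWER_W:
    verdict = "within budget" if fits_budget(mix) else "must downclock"
    print(f"{mix:7s} {CPU_POWER_W[mix]:3d} W -> {verdict}")
```

In this toy picture, a fixed-clock design has to size its cooling and PSU for the worst row of that table, while a power-budget design only has to decide what to do when that row actually runs.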
 
lol.. I meant X, sorry.. fuck that branding.

What Tidux gets wrong: the variable clocks are ABSOLUTELY there because it couldn't maintain full clock speed for both at all times. Like the rest of his post contradicts that entire claim.. it's THE reason to have variable clocks in any setup; in the Sony setup it's about power budget.. others do it by measuring heat (which really are related pretty closely).

The rest I guess is accurate.. but that statement really throws the whole thing off, and he's just over-confusing it with the bottleneck talk and acting like variable clocks are some magic fix for bottlenecks.

They just let a processor go beyond what it could do "at all times" otherwise.
How is it possible that after months of this discussion you are still confusing clock with utilization? You can have two CPUs with exactly the same specs running at the same clock but with very different power consumption, because the operations they are running are different and the utilization is different on each of those CPUs.

I know it's hard to believe Tidux is right about something. Even I can't believe it, and it's making me rethink what is actually true and what is not (in life in general). :lollipop_neutral:
 
lol.. I meant X, sorry.. fuck that branding.

What Tidux gets wrong: the variable clocks are ABSOLUTELY there because it couldn't maintain full clock speed for both at all times. Like the rest of his post contradicts that entire claim.. it's THE reason to have variable clocks in any setup; in the Sony setup it's about power budget.. others do it by measuring heat (which really are related pretty closely).

The rest I guess is accurate.. but that statement really throws the whole thing off, and he's just over-confusing it with the bottleneck talk and acting like variable clocks are some magic fix for bottlenecks.

They just let a processor go beyond what it could do "at all times" otherwise.
If you mean tweet no. 3, I think you misread him.

He says: "... they're not there because 'the system can't handle both at max clocks'...". He never mentioned "at all times" - or maybe you're referring to a different tweet. As far as we know, the GPU and CPU can stay at max clocks for as long as needed, which does not mean at all times.
 

Pedro Motta

Member
This dude has no clue WTF he is talking about.
Actually, he is quite right about the PS5.
The entire point of making clocks variable is because that same chip/setup/power budget/etc. could NOT handle max clocks at all times. He makes the statement that it isn't why they exist, but that isn't true at all. It's literally the reason you make clocks variable, to push beyond what would be possible by picking a static number.
Why do you people always ignore what the lead architect explained about boost clocks? How YOU THINK it works is not HOW IT WORKS. What Tidux wrote is completely right.
 

jose4gg

Member
You can't magically not lower performance when lowering clocks.. like who would ever make that claim?

If the CPU is holding power that isn't needed in a specific frame and you redistribute that power to the GPU, exactly how do you lose performance? You are basically taking all the performance you can get and putting it where it's needed, multiple times per frame.

You are not letting the CPU or the GPU waste resources they don't need at a specific moment.
 

IntentionalPun

Ask me about my wife's perfect butthole
How is it possible that after months of this discussion you are still confusing clock with utilization? You can have two CPUs with exactly the same specs running at the same clock but with very different power consumption, because the operations they are running are different and the utilization is different on each of those CPUs.

Literally nothing in my post indicates I do not understand this.

I know it's hard to believe Tidux is right about something. Even I can't believe it, and it's making me rethink what is actually true and what is not (in life in general). :lollipop_neutral:

I know it's hard to believe, but nothing you just said addresses my post.
 
The entire point of making clocks variable is because that same chip/setup/power budget/etc. could NOT handle max clocks at all times. He makes the statement that it isn't why they exist, but that isn't true at all. It's literally the reason you make clocks variable, to push beyond what would be possible by picking a static number. That was flat out wrong, and his stuff about bottlenecks is just sort of overly confusing too.

Fixed clocks: Pick a static number that you believe all code can achieve without causing power/heat issues.
Variable clocks: Let the clocks go higher than they could "at all times" because there are specific times when that is feasible without causing issues.

Typical method: React to thermals, give more power to the processor if the thermals are good, lower it when it heats up.
Sony method: Do this based on workload, basically PREDICTING ahead of time that a piece of code would be likely to cause thermal issues and do it within a specific power budget (another way to avoid thermal issues.)

Ok, saw your reply after posting. I think you're hanging on the "at all times" here, which is kind of a straw man. Nobody mentioned all times (or at least I didn't see it). Depending on the workload, the CPU and GPU will run at max clocks, but for specific workloads there will be a power shift, in order to avoid certain bottlenecks.

There's no need to be at max clocks "at all times", because that would go against the logic of using SmartShift.
 

IntentionalPun

Ask me about my wife's perfect butthole
If the CPU is using the power that isn't needed in a specific frame and you redistribute that power to The GPU, exactly how you lose performance?, you basically taking all the performance you can get and putting it where is needed multiple times per frame.

You are not letting the CPU or the GPU waste resources that they don't need at a specific moment.

Whatever code is running on the CPU will lose performance if you downclock the CPU...

The PS5 decided which would be better to give increased power to, favoring the GPU, as it is more likely to have intense workloads.

But when it does that, whatever is running on the CPU will run slower.. AKA lose performance.
 