
Next-Gen PS5 & XSX |OT| Console tEch threaD


FunkMiller

Member
I seriously don't understand Ubisoft teams. Water is something they have almost perfected in Assassin's Creed; WTH is this shit?

- Aren't they using the same engine?
- Don't they share their tech across teams to deliver in less time?

They are a supremely greedy factory line that cares little for producing high-quality games, and more for churning out as many games as possible to fulfil a quota. They know people will continue to be dumb enough to buy their games, so they see no reason to improve standards or quality. Devs are not allowed to spend time on anything, because they have to shit out another annual game to keep the margins rolling in.
 

Elog

Member
What people want to know is how often the PS5 will drop the clock speeds of the CPU or GPU. It will absolutely, 100%, do it; otherwise they would have just gone with constant clocks instead of variable ones.

Apologies for my rough response before. I simply felt 'not this topic again'.

And I rushed my response - the point is that every piece of silicon has limitations based on power, voltage and thermals. This is especially true in a cost-conscious console environment. The APU will throttle when hitting those limits, and with the right type of instruction mix thrown at the silicon over time, you can force it to throttle. Even a fixed-frequency environment will see fluctuations under load because of this.

The fact that the AMD solution allows smarter allocation of frequency (and hence power) means the APU can handle these peaks better. It is a benefit, not a disadvantage. That is why every advanced GPU on the market has something similar - you get more out of your silicon.
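A toy sketch of why shaving a little frequency buys back a lot of power budget, assuming the textbook dynamic-power model (every constant below is invented):

```python
# Toy model: dynamic power P ~ C * V^2 * f, and since voltage has to rise
# roughly with frequency, power grows closer to f^3. All constants invented.

def dynamic_power(freq_ghz: float, base_freq: float = 2.23, base_power: float = 180.0) -> float:
    """Estimated power in watts, assuming V tracks f so P scales with f^3."""
    return base_power * (freq_ghz / base_freq) ** 3

for f in (2.23, 2.18, 2.00):
    saving = 100 * (1 - dynamic_power(f) / 180.0)
    print(f"{f:.2f} GHz -> ~{dynamic_power(f):5.1f} W ({saving:4.1f}% power saved)")

# 2.23 GHz -> ~180.0 W ( 0.0% power saved)
# 2.18 GHz -> ~168.2 W ( 6.6% power saved)
# 2.00 GHz -> ~129.9 W (27.9% power saved)
```

That points the same direction as the Road to PS5 claim that a couple percent of frequency buys back roughly 10% of power; real voltage-frequency curves are even steeper near the top than this simple cube law.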
 

bitbydeath

Member
[Image: output06d.png]

What video is that from?
I take it that is the graphics mode.
 
The DX12U API is more unified with PC, but that doesn't mean it is less low-level. Actually, the inverse is true:

Source: https://arstechnica.com/gaming/2020...e-brings-xbox-series-x-features-to-pc-gaming/

A Fiat Punto is actually faster than a bicycle.

Saying that DX12 is "more low level" means nothing if they compared it to another DirectX version.

Compared to PS5 APIs, I mean. Unless there is hard data, it could be that:

Old DX = 100% base level

New DX = 80% base level

PS5 API = 60% base level

A lower number means "closer to the hardware"; the numbers are pulled out of thin air. The point is that if they compare DX to DX, yes, they can claim it is lower level now, but it tells nothing about DX vs. PS5.

It is more likely that APIs made for a limited amount of hardware (PS5 only, or PS4+PS5) are more specialized than APIs that have to work across tens of thousands of PC hardware combinations plus 5-6 Xbox versions.



But Xbox were able to cool a more powerful GPU in a smaller form factor with near-silent acoustics? If the PS5 were more powerful and smaller I could understand your argument that this was the only way to keep it quiet, but clearly this is not the case.



Do you have proof of this? I highly doubt the Series X never adjusts its clocks. What about just sitting on the home screen or watching Netflix? Is this going to be 12 TF? Digital Foundry have already measured different power draw for different games, indicating different clocks for the CPU and GPU.

Look guys, I have a PS5 coming on pre-order. I'm not getting a Series X because I have a PC. I just think we should be wary of what Sony is saying about variable clocks, as to me it doesn't seem great.

The Series X and PS5 have almost the same internal volume:

7,936 cubic centimeters for the Xbox Series X

10,000 cubic cm for the PS5 (but it is not a straight box, so it is actually less than that, and if we count only the black part of the console, it is far less)
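For what it's worth, here is the bounding-box arithmetic using the commonly listed external dimensions; they give a somewhat smaller Series X figure than the one quoted above, and neither console is a true rectangular box, so real volumes are lower still:

```python
# Bounding-box volumes from the commonly listed external dimensions, in cm.
consoles = {
    "Xbox Series X": (15.1, 15.1, 30.1),
    "PS5 (disc)":    (39.0, 10.4, 26.0),
}
for name, (w, d, h) in consoles.items():
    print(f"{name}: {w * d * h:,.0f} cubic cm")

# Xbox Series X: 6,863 cubic cm
# PS5 (disc): 10,546 cubic cm
```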


Until there are professional noise measurements made in a sound laboratory, neither of those is "near silent"; people talk out of their ass, and even noisy systems are nearly silent to some people.

And less power draw during Netflix doesn't mean it downclocks.

You can force your PC to run at full clocks on the desktop, but the power draw isn't the same as running those full clocks in benchmark software; power draw depends on the load too.
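A minimal sketch of that last point, using the activity-factor term from the standard dynamic-power formula (all constants invented): the clock stays fixed while the draw varies with how much of the silicon is actually switching.

```python
# Fixed clock, variable draw: dynamic power P ~ alpha * C * V^2 * f, where
# alpha is the activity factor (fraction of the chip switching each cycle).
# All constants below are invented for illustration.
C_EFF = 50.0      # effective capacitance term, arbitrary units
VOLTAGE = 1.1     # volts
FREQ_GHZ = 1.825  # clock held constant

def power_w(activity: float) -> float:
    return activity * C_EFF * VOLTAGE ** 2 * FREQ_GHZ

print(f"idle desktop, full clock:     ~{power_w(0.05):.0f} W")
print(f"typical game, full clock:     ~{power_w(0.60):.0f} W")
print(f"power-virus load, full clock: ~{power_w(0.95):.0f} W")
# Same frequency every time -- so a lower wall reading during Netflix does
# not, by itself, prove the clocks dropped.
```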
 

buenoblue

Member
It seems you are either unfamiliar with the DF interview Cerny gave - after the Road to PS5 - or are wantonly peddling the "it isn't really 10.23TF" line just by a different MO.

The downclocking you are mentioning happens when the silicon is idle. This is not the same as constantly boost-clocking active silicon at sky-high clocks while staying within a constant power budget - by varying boost clocks deterministically based on CU utilisation.

Cerny's comments regarding how to optimise for this paradigm shift will only benefit setups that can boost like the PS5 GPU does. On older designs it will save power draw via the fixed clock, whereas on the new design power remains constant and those workloads get boost-clocked in the same way lighter workloads did.

As expected, no real comment on my points or concerns. Do you honestly believe Sony's decision to run a GPU at up to 10.23TF* is somehow better than running a GPU at a constant 12TF? Like I said, if the PS5 were smaller and quieter than the Series X I could understand, but it's bigger and less powerful; surely this points to design mistakes.
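As an aside, here is a toy sketch of the "constant power budget, deterministic boost" idea described in the quoted post. The power model and every number are invented; this is not Sony's actual algorithm, just the concept of picking the highest clock whose predicted power fits a fixed budget.

```python
# Toy sketch of a deterministic power-budget governor: given a model of
# workload activity, pick the highest clock whose predicted power fits a
# fixed budget. NOT Sony's real algorithm; every number here is invented.

POWER_BUDGET_W = 180.0
MAX_FREQ_GHZ = 2.23

def predicted_power(freq_ghz: float, activity: float) -> float:
    # Invented model: power scales with activity and roughly f^3 (V tracks f).
    return 200.0 * activity * (freq_ghz / MAX_FREQ_GHZ) ** 3

def pick_clock(activity: float) -> float:
    """Highest frequency (10 MHz steps) whose predicted power fits the budget."""
    freq = MAX_FREQ_GHZ
    while freq > 0 and predicted_power(freq, activity) > POWER_BUDGET_W:
        freq -= 0.01
    return freq

for activity in (0.5, 0.9, 1.0):  # fraction of the GPU kept busy
    print(f"activity {activity:.0%} -> {pick_clock(activity):.2f} GHz")

# Light and typical loads hold the 2.23 GHz cap; only the worst-case load
# forces a small, repeatable drop (2.15 GHz here). Same inputs, same clock,
# regardless of room temperature -- that is the determinism being described.
```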
 

MrFunSocks

Banned
Exactly, this is how every GPU and CPU works. The Series X will up- and downclock constantly. They up- and downclock whether needed or not. The fact that Sony have needed to state variable clocks is a worry, because it must be more than how it normally works. Like I said, PC GPUs state the minimum clock and then variably boost up. Sony have stated the top end of the boost and not stated the lower end. At any given time my PC GPU can rely on a certain clock; the PS5, it seems, cannot guarantee this.
Yeah, I'm not sure why people find this so hard to understand. There is a difference between the PS5's variable clock rates and the clock rates of every other GPU before it. If it were the same, and the Xbox Series X were the same, then they wouldn't have mentioned it, would they? No. I don't think it's crazy to want to know the drawbacks of this new method, yet this thread's inhabitants just want to shut down any and all questioning.

Like I said, if the PS5 could run at max clocks all the time, it wouldn't be using this new variable clock speed tech. It just wouldn't. Microsoft aren't using SmartShift. If it had no disadvantages they would, don't you think? Cerny himself even said that it will downclock at times.

So to all the people fanboying out and calling me an idiot - if SmartShift offered only benefits and no downsides, why aren't MS using it too?


I was under the impression people want games, not to know clock speeds.
Lol, you're in a thread where there are literally thousands of pages of people arguing about RDNA 1/1.5/2/3, rasterisation speeds, infinity caches, etc., yet for some reason now it's silly to ask about the clock speeds and how often they will drop? Come on lol
 

Jemm

Member
A lower number means "closer to the hardware"; the numbers are pulled out of thin air. The point is that if they compare DX to DX, yes, they can claim it is lower level now, but it tells nothing about DX vs. PS5.
This was about DX vs. DX. Neither I nor the article I linked said anything about the PS5, and like you said, there is no point comparing them across consoles.
 

geordiemp

Member
Yeh, I’m not sure why people find this so hard to understand. There is a difference between the PS5s variable clock rates and the clock rates of every other GPU before it. If it was the same, and the Xbox series x was the same, then they wouldn’t have mentioned it, would they? No. I don’t think it’s crazy to want to know the drawbacks of this new method, yet this threads inhabitants just want to shut down any and all questioning.

Like I said, if the PS5 could run at max clocks all the time it wouldn’t be using this new variable clock speed tech. It just wouldn’t. Microsoft aren’t using smart shift. If it had no disadvantages they would, don’t you think?

Microsoft are not using RDNA2 CUs with fine-grained clock gating. Everybody else is.
 

MrFunSocks

Banned
Apologies for my rough response before. I simply felt 'not this topic again'.

And I rushed my response - the point is that every piece of silicon has limitations based on power, voltage and thermals. This is especially true in a cost-conscious console environment. The APU will throttle when hitting those limits, and with the right type of instruction mix thrown at the silicon over time, you can force it to throttle. Even a fixed-frequency environment will see fluctuations under load because of this.

The fact that the AMD solution allows smarter allocation of frequency (and hence power) means the APU can handle these peaks better. It is a benefit, not a disadvantage. That is why every advanced GPU on the market has something similar - you get more out of your silicon.
No worries at all. So if it has zero disadvantages, why didn't Microsoft implement it too? They have access to it as well. They worked closely with AMD too. Wouldn't it stand to reason that the Series S/X would use it if there were literally no drawbacks, only benefits? Honest to god serious question. Why would MS not use it if it is an advantage in literally every scenario?

In a situation where both the CPU and GPU are being stressed to the max - full utilisation trying to run a big online open-world multiplayer game at 4K60FPS, for example - would the guaranteed constant clock speeds not give more power than the variable ones, where Cerny himself said there will be drops?
 
But Xbox were able to cool a more powerful GPU in a smaller form factor with near-silent acoustics? If the PS5 were more powerful and smaller I could understand your argument that this was the only way to keep it quiet, but clearly this is not the case.



Do you have proof of this? I highly doubt the Series X never adjusts its clocks. What about just sitting on the home screen or watching Netflix? Is this going to be 12 TF? Digital Foundry have already measured different power draw for different games, indicating different clocks for the CPU and GPU.

Look guys, I have a PS5 coming on pre-order. I'm not getting a Series X because I have a PC. I just think we should be wary of what Sony is saying about variable clocks, as to me it doesn't seem great.
Why can't the XSX even reach a 2.0 GHz frequency on the GPU while being a "full RDNA2" architecture? Being silent is only OK if it also runs cool (the silicon, not the box, before some idiot shares the temperature of the plastics as counter-proof).
Different approaches, but the end of it is: if the XSX had SmartShift it would be better, not worse. They would be able to keep the system closer to peak performance than they can now.
 
Ah yes, the variable clocks concern again. Let's put it this way: if the XSX had variable clocks it would be able to push the hardware further to its limit more often, something that only happens in perfect conditions with fixed clocks and no SmartShift.

No, they could easily implement it. But they won't. Why should they? They already have the most powerful console. And they want to make it as easy for the developer as possible.
And having sustained and predictable clocks makes it easier for devs than variable clocks.
 

Yoboman

Member
No worries at all. So if it has zero disadvantages, why didn't Microsoft implement it too? They have access to it as well. They worked closely with AMD too. Wouldn't it stand to reason that the Series S/X would use it if there were literally no drawbacks, only benefits? Honest to god serious question. Why would MS not use it if it is an advantage in literally every scenario?

In a situation where both the CPU and GPU are being stressed to the max - full utilisation trying to run a big online open-world multiplayer game at 4K60FPS, for example - would the guaranteed constant clock speeds not give more power than the variable ones, where Cerny himself said there will be drops?
What makes you think 100% utilisation of the CPU and GPU - a highly unlikely scenario - would not run into thermal throttling anyway?
 

Gudji

Member
No, they could easily implement it. But they won't. Why should they? They already have the most powerful console. And they want to make it as easy for the developer as possible.
And having sustained and predictable clocks makes it easier for devs than variable clocks.

That's not the point; the point is that variable clocks aren't a bad thing. Developing for the PS5 is also easy, they just have to optimize the game in a different way... they already do that on the Nintendo Switch.
 

PaintTinJr

Member
As expected, no real comment on my points or concerns. Do you honestly believe Sony's decision to run a GPU at up to 10.23TF* is somehow better than running a GPU at a constant 12TF? Like I said, if the PS5 were smaller and quieter than the Series X I could understand, but it's bigger and less powerful; surely this points to design mistakes.
Most powerful? Define that for me in the context of the comparative technologies - traditional fixed clock versus continuous boosting + SmartShift.

Which console does more work per watt - measured over a second or longer?
Which console's power supply delivers more power from the wall?
Of the power delivered from the wall, which console doesn't need to reserve PS4/Pro/X1/X1X-style (previous-gen) power headroom?

If you are just using the theoretical-TF PR marketing measure of power, then I would say you are being disingenuous in your claim.
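For reference, the theoretical figures being argued over are straight arithmetic: CU count x 64 shader lanes x 2 FP32 ops per lane per cycle x clock. (The PS5's peak is usually quoted as 10.28 TF at the 2.23 GHz cap.)

```python
# Where the headline TFLOPS marketing numbers come from.
def tflops(cus: int, clock_ghz: float) -> float:
    # CUs * 64 lanes * 2 FP32 ops per lane per cycle, in teraflops
    return cus * 64 * 2 * clock_ghz / 1000

print(f"PS5 (36 CUs @ 2.23 GHz peak):   {tflops(36, 2.23):.2f} TF")
print(f"XSX (52 CUs @ 1.825 GHz fixed): {tflops(52, 1.825):.2f} TF")
# PS5: 10.28 TF, XSX: 12.15 TF. Neither number says anything about
# sustained work per watt, which is what the questions above are asking.
```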
 

Elog

Member
No worries at all. So if it has zero disadvantages, why didn't Microsoft implement it too? They have access to it as well. They worked closely with AMD too. Wouldn't it stand to reason that the Series S/X would use it if there were literally no drawbacks, only benefits? Honest to god serious question. Why would MS not use it if it is an advantage in literally every scenario?

In a situation where both the CPU and GPU are being stressed to the max - full utilisation trying to run a big online open-world multiplayer game at 4K60FPS, for example - would the guaranteed constant clock speeds not give more power than the variable ones, where Cerny himself said there will be drops?

Every solution on a console costs mm² on the die - many parameters go into deciding what you want to spend that silicon budget on. They obviously prioritised other features. In addition, I believe SmartShift requires Infinity Fabric for the shared power pool across the APU. While we do not know yet, I believe the PS5 has Infinity Fabric/cache while the XSX/S does not.
 

bitbydeath

Member
No worries at all. So if it has zero disadvantages, why didn't Microsoft implement it too? They have access to it as well. They worked closely with AMD too. Wouldn't it stand to reason that the Series S/X would use it if there were literally no drawbacks, only benefits? Honest to god serious question. Why would MS not use it if it is an advantage in literally every scenario?

In a situation where both the CPU and GPU are being stressed to the max - full utilisation trying to run a big online open-world multiplayer game at 4K60FPS, for example - would the guaranteed constant clock speeds not give more power than the variable ones, where Cerny himself said there will be drops?

Because it’s not a bog standard feature.
 

MrFunSocks

Banned
Ah yes, the variable clocks concern again. Let's put it this way: if the XSX had variable clocks it would be able to push the hardware further to its limit more often, something that only happens in perfect conditions with fixed clocks and no SmartShift.
So why did MS choose not to implement it, since it's an AMD feature that is available to them? They worked closely with AMD on these new chipsets, so if it only offers advantages and would mean an even more powerful console, then why do you think they didn't choose to use it?
 

Gudji

Member
So why did MS choose not to implement it, since it's an AMD feature that is available to them? They worked closely with AMD on these new chipsets, so if it only offers advantages and would mean an even more powerful console, then why do you think they didn't choose to use it?

Because they went with a traditional design = fixed clocks. Every console has been like that, with the exception of the Nintendo Switch and now the PS5. Sony is just doing a new thing.
 

geordiemp

Member
No worries at all. So if it has zero disadvantages, why didn't Microsoft implement it too? They have access to it as well. They worked closely with AMD too. Wouldn't it stand to reason that the Series S/X would use it if there were literally no drawbacks, only benefits? Honest to god serious question. Why would MS not use it if it is an advantage in literally every scenario?

In a situation where both the CPU and GPU are being stressed to the max - full utilisation trying to run a big online open-world multiplayer game at 4K60FPS, for example - would the guaranteed constant clock speeds not give more power than the variable ones, where Cerny himself said there will be drops?

But again, if AMD showed them this feature that literally has no drawbacks and makes the APU better, why wouldn't they do it? It doesn't make sense.

The reason is quite simple: CUs having differing gated frequencies makes sense if you're running one game, as there are times in a frame when the GPU is quiet and the CPU is very busy. It saves power and gives the GPU a rest for a few hundred nanoseconds...

If you design a system to run four games at once, then frequency-gating control and everything else is less interesting.

The XSX CPU is also server class, per Hot Chips; it's designed to run four games.

The XSX GPU is designed around those four games running at once, hence the four shader arrays, which are larger, as you would likely want the L1 cache private to each running game.

It is clever how MS have designed a dual-purpose APU for console and server applications, and once you get that, you will understand the design choices.
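A rough sketch of the frequency-gating argument above, with all numbers invented: within a single frame the GPU has quiet stretches, and gating the clock during them saves power, but with four independent games on one APU the quiet stretches of one game overlap the busy stretches of another, so there is less idle time to reclaim.

```python
# Toy frame timeline: the GPU sits idle for part of each 16.7 ms frame.
# Gating its clock during those stretches cuts the average draw. Numbers
# are invented for illustration.
GPU_BUSY_FRACTION = 0.75   # GPU actually working 75% of the frame
GPU_ACTIVE_W = 150.0       # draw while working
IDLE_W_FREE = 40.0         # idle draw, clocks free-running
IDLE_W_GATED = 5.0         # idle draw, clocks gated

def avg_power(idle_w: float) -> float:
    return GPU_BUSY_FRACTION * GPU_ACTIVE_W + (1 - GPU_BUSY_FRACTION) * idle_w

print(f"free-running idle: {avg_power(IDLE_W_FREE):.1f} W average")
print(f"clock-gated idle:  {avg_power(IDLE_W_GATED):.1f} W average")
# With four games sharing the APU, busy periods interleave and the idle
# fraction shrinks, so fine-grained gating has less to win.
```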
 

MrFunSocks

Banned
Because they went with a traditional design = fixed clocks. Every console has been like that, with the exception of the Nintendo Switch and now the PS5. Sony is just doing a new thing.
But again, if AMD showed them this feature that literally has no drawbacks and makes the APU better, why wouldn't they do it? It doesn't make sense.
 

Rea

Member
No worries at all. So if it has zero disadvantages, why didn't Microsoft implement it too? They have access to it as well. They worked closely with AMD too. Wouldn't it stand to reason that the Series S/X would use it if there were literally no drawbacks, only benefits? Honest to god serious question. Why would MS not use it if it is an advantage in literally every scenario?

In a situation where both the CPU and GPU are being stressed to the max - full utilisation trying to run a big online open-world multiplayer game at 4K60FPS, for example - would the guaranteed constant clock speeds not give more power than the variable ones, where Cerny himself said there will be drops?
I'm sorry, what??? Microsoft works closely with AMD, yes, but the variable frequency tech in the PS5 is not AMD's; Microsoft can't use it. Only SmartShift is AMD's tech. The variable frequency technology used in the PS5 is brand new technology that has never been used before; Cerny said it's a totally new paradigm. You think Microsoft can use it too?? :messenger_tears_of_joy: :messenger_tears_of_joy:
 
No, they could easily implement it. But they won't. Why should they? They already have the most powerful console. And they want to make it as easy for the developer as possible.
And having sustained and predictable clocks makes it easier for devs than variable clocks.
It really shows how easy it is to develop for, compared to the PS5, when the games shown are in that miserable state. "Ray tracing will come later guys, don't worry" :messenger_tears_of_joy:
 

MrFunSocks

Banned
I'm sorry, what??? Microsoft works closely with AMD, yes, but the variable frequency tech in the PS5 is not AMD's; Microsoft can't use it. Only SmartShift is AMD's tech. The variable frequency technology used in the PS5 is brand new technology that has never been used before; Cerny said it's a totally new paradigm. You think Microsoft can use it too?? :messenger_tears_of_joy: :messenger_tears_of_joy:
Do we have confirmation that it's not SmartShift? What is the difference between it and SmartShift then, since they both appear to do the same thing?

here’s Cerny saying it’s using AMD smartshift......


Cerny said in his presentation: "While we're at it, we also use AMD's SmartShift technology and send any unused power from the CPU to the GPU, so it can squeeze out a few more pixels."
 

M1chl

Currently Gif and Meme Champion
People still think variable clock rates are inferior to fixed clock rates 😂


I mean, all CPUs today use variable frequency: when they are idle, the internal CPU multiplier drops to save power. However, if full power is needed, the CPU sustains its frequency until it hits a high temperature, at which point it thermal-throttles. SmartShift, on the other hand, balances CPU and GPU power against a power target. So in the case of the XSX it is supposed to sustain both components at full power under load, while the PS5 shifts the frequency of both components to hit a thermal/power target. You kind of have to do it, because if your GPU runs at 2.23GHz it's going to draw a lot of power. Now, XSX power consumption showed that RDNA 2 is really power-efficient, but maybe that's due to the lower clocks. SmartShift was not meant for desktop PCs but for laptops, where power management is a really important thing.
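A minimal sketch of the SmartShift idea as described above: one shared power pool, with whatever the CPU doesn't need in a given interval handed to the GPU. All numbers are invented.

```python
# Minimal sketch of SmartShift-style power sharing: CPU and GPU draw from
# one shared budget, and whatever the CPU does not need in this interval
# is handed to the GPU. All numbers are invented for illustration.

TOTAL_BUDGET_W = 200.0
CPU_MAX_W = 60.0

def allocate(cpu_demand_w: float) -> tuple[float, float]:
    """Return (cpu_watts, gpu_watts) for one management interval."""
    cpu = min(cpu_demand_w, CPU_MAX_W)
    gpu = TOTAL_BUDGET_W - cpu  # the GPU gets the rest of the pool
    return cpu, gpu

for demand in (20.0, 45.0, 60.0):  # light, typical, fully loaded CPU
    cpu, gpu = allocate(demand)
    print(f"CPU wants {demand:>4.0f} W -> CPU {cpu:.0f} W, GPU {gpu:.0f} W")

# The sum never exceeds the budget, so the PSU and cooling can be sized for
# a known worst case instead of the sum of both components' individual peaks.
```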

With that said, we are never going to know how this technology influences performance or how well the XSX sustains its clocks, because we have no tool to measure it.
 

PaintTinJr

Member
The reason is quite simple: CUs having differing gated frequencies makes sense if you're running one game, as there are times in a frame when the GPU is quiet and the CPU is very busy. It saves power and gives the GPU a rest for a few hundred nanoseconds...

If you design a system to run four games at once, then frequency-gating control and everything else is less interesting.

The XSX CPU is also server class, per Hot Chips; it's designed to run four games.

The XSX GPU is designed around those four games running at once, hence the four shader arrays, which are larger, as you would likely want the L1 cache private to each running game.

It is clever how MS have designed a dual-purpose APU for console and server applications, and once you get that, you will understand the design choices.
I agree with your overall view that the XSX design decisions were guided by the server side, but I really don't think they had access to the continuous boosting technology in the PS5 APU.

The reason I think that is, IIRC, power draw (of air-con/servers) is the biggest issue for a server farm, so even if it cost die space and meant they needed more room for more servers, the continuous boost-clocking technology at the lower clock would have yielded a lower cost per unit of performance than what they have in the XSX. And they'd still have had the ability to run it at a fixed clock - as their fallback - if it wasn't working out for reliability, etc.

Even a partially utilised server (running fewer than four copies) would have given them great gains using the PS5 boost tech IMHO, and it would have worked better (less power draw, less heat) running different demanding games, say when running a BC game in the cloud.
 

sircaw

Banned
So to all the people fanboying out and calling me an idiot - if SmartShift offered only benefits and no downsides, why aren't MS using it too?

Why don't they have a better SSD like Sony?
Why don't they have a better sound engine like Sony's?
Why don't they have the latest versions of Wi-Fi and USB-C?
Why don't they have any exclusive launch games?
Why didn't they invest in a better controller like Sony?
Why don't they have cache scrubbers like Sony?
Why don't they have coherency cores like Sony?

Because the only two things Microsoft cared about were marketing and packaging.


Microsoft headquarters:
"Don't worry, people, that 12 TFLOP number and the pretty box will do the business; consumers are thick as hell."


"Phil Phil Phil"
 