
XSX vs PS5 TFlop delta is WAY overblown

CJY

Banned
Sorry CJY. You spent all that time writing an actually good OP with relevant data and numbers, and the usual fanboys run in and shit all over it. I can't wait until we finally get to see games and real-world examples, because this whole "iTs BeTtEr On PaPeR!!?" nonsense is played out.
Thanks man. I'm OK though, just hoping to contribute to healthy discussion in these trying times.
 
But nobody sweats about it. It's how PC GPUs have always worked.
What's the problem?
Somehow, after MSFT said that they run "sustained", it became the standard all of a sudden?
Nobody sweats about it because on PC you can account for this in a number of ways that mitigate it: undervolting, better cooling, adjusting the fan curve, changing settings, resolution, etc.

None of those are an option for a console user; the system and the code manage all of that with little to no involvement from you. On PC, the way I mitigate it is by undervolting the GPU and pushing up my fan curve. The fan spins up and it gets quite loud, but it's a PC, so noise isn't a gigantic concern in what is generally a headset-oriented environment.

I mean, just one post ago you said this doesn't happen on AMD PC cards; what changed in the last 10 minutes?
 

SonGoku

Member
That's exactly what AMD cards do: when you reach the thermal envelope, the card scales back frequency and voltage, which in turn lessens its performance to bring the temperature under control.
Boost frequency is fundamentally different on PS5: the system is given a fixed, constant power budget, and the frequency at any given time varies with the power allocated to the GPU or CPU.
"It's a completely different paradigm," says Cerny. "Rather than running at constant frequency and letting the power vary based on the workload, we run at essentially constant power and let the frequency vary based on the workload."
It's up to the devs to decide workloads and frequency.
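To make the paradigm concrete, here's a minimal sketch of a constant-power governor. Everything in it is assumed for illustration (the wattages, the power model, the function names); it is not Sony's actual algorithm, just the shape of the idea: the budget is fixed, a SmartShift-style split decides the CPU/GPU shares, and the clock is whatever fits under the GPU's share for the current workload.

```python
# Illustrative sketch of "constant power, variable frequency".
# All figures and the power model are made up; this is NOT Sony's implementation.

TOTAL_POWER_W = 200.0   # fixed SoC budget (assumed number)
GPU_MAX_GHZ = 2.23      # advertised GPU frequency cap
GPU_MIN_GHZ = 1.80      # arbitrary floor for the sketch

def gpu_power_draw(freq_ghz: float, activity: float) -> float:
    """Crude model: draw grows with workload activity and roughly with frequency cubed."""
    return 40.0 + 60.0 * activity * (freq_ghz / GPU_MAX_GHZ) ** 3

def pick_gpu_frequency(activity: float, cpu_draw_w: float) -> float:
    """Highest clock whose modelled draw fits what's left after the CPU's share."""
    gpu_budget = TOTAL_POWER_W - cpu_draw_w   # SmartShift-style split of one fixed budget
    freq = GPU_MAX_GHZ
    while freq > GPU_MIN_GHZ and gpu_power_draw(freq, activity) > gpu_budget:
        freq -= 0.01
    return round(freq, 2)

print(pick_gpu_frequency(activity=0.5, cpu_draw_w=105.0))  # typical load -> 2.23 (cap held)
print(pick_gpu_frequency(activity=1.0, cpu_draw_w=105.0))  # pathological load -> ~2.16 (small dip)
```

The point of the toy model: a light workload never touches the cap, and even the worst case only costs a few percent of clock, which is the claim Cerny made.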
 
Boost frequency is fundamentally different on PS5: the system is given a fixed, constant power budget, and the frequency at any given time varies with the power allocated to the GPU or CPU.

It's up to the devs to decide workloads and frequency.
And I can do the same with my Vega 64; this is all tunable. The PS5 isn't doing anything I couldn't have been doing for years.

Constant power also seems terribly inefficient because you're pushing the same profile regardless of the workload. A 2D indie game shouldn't have the same power profile as a bleeding-edge, graphically intense 3D game.
 

psorcerer

Banned
Nobody sweats about it because on PC you can account for this in a number of ways that mitigate it: undervolting, better cooling, adjusting the fan curve, changing settings, resolution, etc.

Most people never do it.
Yet these Chinese graphics card manufacturers are pretty good at selling these cards.
So is Sony now so incompetent that it cannot do what they do?
Get over it, it's nothing.
The only thing that can happen is that the PS5 gets as loud as the PS4 Pro.
That's the maximum amount of trouble I would expect.
 
It won't. The developers will choose their power profiles.
They shouldn't have to; the system should manage all of that based on the demands of the render cycle. This just seems like convolution without real purpose when the traditional methods of workload management literally handled this for you with no intervention required.
 

psorcerer

Banned
They shouldn't have to; the system should manage all of that based on the demands of the render cycle. This just seems like convolution without real purpose when the traditional methods of workload management literally handled this for you with no intervention required.

They do have a custom ASIC for that kind of thing. It uses AMD SmartShift, but it's a custom ASIC anyway; see Cerny's presentation.
I would be surprised if you don't get all the options: let the system decide for you, or force it to use your own tailored profile.
 

VFXVeteran

Banned
I'm not sure the memory is faster. If you combine access to the slow and fast pools, it's exactly the same on average.
So either the XSeX RAM is "faster but smaller" or it's "almost the same".

Disagree; I can find ways to use just the 10GB. All texture operations would go there. The OS is going to take some of the lower-speed memory. I'd use the rest for cache. It's essentially like the PC's memory structure with separate GPU RAM.
 

Foxbat

Banned
It seems as though there are simply some people who just can't accept what appears before them.

Aside from the SSD aspect, the XSX seems far more impressive across the board. The more you dig into the details and research things, the more impressive it becomes.

What I can't understand is this. Why would anyone expect the PS5 to be superior to the XSX anyway? The PS4 was a great piece of tech when it was revealed. Sony's gamble on GDDR5 paid off big time, but outside of that, there wasn't anything marvelous about it. It was hands down more impressive than the XB1 though.

That's been the only time that Sony provided hardware that outshone MS's. Both before and since the PS4/XB1 reveal, MS has always trumped Sony. The 360 provided better results vs the PS3. The X was better than the Pro. Look at the controllers, and it's the same.

So why all the mental gymnastics? Why are some so surprised at the XSX's apparent performance gap over the PS5? Why are so many others trying everything possible to dispute it?

Anyone who's paid attention knew this was going to be the result.
 
Source?

You seem to know more than Cerny, the lead architect... tell us more, please?

We need to learn from the smart and knowledgeable... /s
You can't run both the CPU and GPU at max, as they share power. You also can't run the GPU at 2.23GHz consistently for an extended amount of time; it will melt.


It seems as though there are simply some people who just can't accept what appears before them.

Aside from the SSD aspect, the XSX seems far more impressive across the board. The more you dig into the details and research things, the more impressive it becomes.

What I can't understand is this. Why would anyone expect the PS5 to be superior to the XSX anyway? The PS4 was a great piece of tech when it was revealed. Sony's gamble on GDDR5 paid off big time, but outside of that, there wasn't anything marvelous about it. It was hands down more impressive than the XB1 though.

That's been the only time that Sony provided hardware that outshone MS's. Both before and since the PS4/XB1 reveal, MS has always trumped Sony. The 360 provided better results vs the PS3. The X was better than the Pro. Look at the controllers, and it's the same.

So why all the mental gymnastics? Why are some so surprised at the XSX's apparent performance gap over the PS5? Why are so many others trying everything possible to dispute it?

Anyone who's paid attention knew this was going to be the result.

The majority of people on this forum are Sony fans and, like 2013's Xbox fans, are in disbelief. When that happens, they will try to find reasons their console of choice is more powerful, regardless of how much less power it has.
Does this mean Sony will have shitty games? No.
Does this mean PS5 won't be a generational leap? No.
Does this mean PS5 will consistently hit 4K? Depending on fidelity, maybe not.

But a lot of that doesn't really mean PS5 will suck. Games will look great, load phenomenally, and since the studios are fantastic, the games will be top notch. But on Xbox the games will simply look better and hit 60fps and 4K consistently.
 
Disagree; I can find ways to use just the 10GB. All texture operations would go there. The OS is going to take some of the lower-speed memory. I'd use the rest for cache. It's essentially like the PC's memory structure with separate GPU RAM.
People easily forget that the CPU doesn't need an insane amount of bandwidth; neither do the OS, audio, or certain background tasks. You can manage bandwidth with priority structures to ensure that what needs high bandwidth gets it and what doesn't, doesn't.
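To picture that kind of split on Series X: the pool sizes below are the published ones (10GB at 560GB/s, 6GB at 336GB/s, with roughly 2.5GB of the slow pool reserved for the OS), but the per-client placement is purely a hypothetical layout, not how any actual game divides things.

```python
# Hypothetical mapping of memory clients to the Series X pools.
# Pool sizes are the published figures; the placements are just one possible layout.

FAST_POOL_GB = 10.0   # 560 GB/s, GPU-optimal
SLOW_POOL_GB = 6.0    # 336 GB/s, includes the OS reservation

clients = [
    # (name, size in GB, needs high bandwidth?)
    ("render targets / textures", 7.5, True),
    ("geometry & GPU buffers",    2.5, True),
    ("OS reservation",            2.5, False),
    ("CPU game data / audio",     2.0, False),
    ("streaming / misc caches",   1.5, False),
]

fast = [(n, s) for n, s, hot in clients if hot]
slow = [(n, s) for n, s, hot in clients if not hot]

print("fast pool:", sum(s for _, s in fast), "of", FAST_POOL_GB, "GB ->", [n for n, _ in fast])
print("slow pool:", sum(s for _, s in slow), "of", SLOW_POOL_GB, "GB ->", [n for n, _ in slow])
```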
 

psorcerer

Banned
Disagree; I can find ways to use just the 10GB. All texture operations would go there. The OS is going to take some of the lower-speed memory. I'd use the rest for cache. It's essentially like the PC's memory structure with separate GPU RAM.

Nope, it's not. If that were the case, it would be much better.
Each request that goes to the "slower" pool makes all of the memory behave as a "slow" pool while it's being serviced.
I.e. you will need to ration CPU access pretty heavily.
Obviously "slow" here is not really slow, as 336GB/s is a good speed.
And PS5 will have the same problem if the CPU accesses memory too frequently (as it did last generation), but the drop in speed will be less pronounced.
BTW, I think that's why MSFT went with a 320-bit bus instead of 256-bit (like Sony): they have more CUs and they need to feed them, yet RAM bandwidth becomes the bottleneck, and on top of that the L2 cache in RDNA is sized per 64-bit dual-channel block.
If they could have pulled off 20GB@320-bit that would have been nice, but the cost was probably too high and they needed to compromise.
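A rough way to put numbers on the "each slow request drags the average down" point. This is a deliberately simplified time-sharing model that ignores channel-level interleaving; the two peak rates are the published Series X figures, and 448GB/s is PS5's uniform bandwidth.

```python
# Simplified model: the bus serves one region at a time, so effective bandwidth
# is a time-weighted blend of the 560 GB/s and 336 GB/s regions.

FAST = 560.0   # GB/s over the 10 GB region
SLOW = 336.0   # GB/s over the 6 GB region

def effective_bandwidth(slow_time_fraction: float) -> float:
    """Throughput when a given fraction of bus time goes to the slow region."""
    return (1 - slow_time_fraction) * FAST + slow_time_fraction * SLOW

for frac in (0.0, 0.1, 0.25, 0.5):
    print(f"{frac:.0%} of bus time on the slow pool -> {effective_bandwidth(frac):.0f} GB/s")
# 0% -> 560, 10% -> 538, 25% -> 504, 50% -> 448 (i.e. the same as PS5's uniform 448 GB/s)
```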
 

VFXVeteran

Banned
Nope, it's not. If that were the case, it would be much better.
Each request that goes to the "slower" pool makes all of the memory behave as a "slow" pool while it's being serviced.
I.e. you will need to ration CPU access pretty heavily.
Obviously "slow" here is not really slow, as 336GB/s is a good speed.
And PS5 will have the same problem if the CPU accesses memory too frequently (as it did last generation), but the drop in speed will be less pronounced.
BTW, I think that's why MSFT went with a 320-bit bus instead of 256-bit (like Sony): they have more CUs and they need to feed them, yet RAM bandwidth becomes the bottleneck, and on top of that the L2 cache in RDNA is sized per 64-bit dual-channel block.
If they could have pulled off 20GB@320-bit that would have been nice, but the cost was probably too high and they needed to compromise.

If you are forced to downclock all the memory so that the voltage is the same, then I'd just stick with the 10GB for the entire game. There is no way I would force all of it down to 336GB/s for every game; the higher-bandwidth pool would be a waste in that case.
 
If you are forced to downclock all the memory so that the voltage is the same, then I'd just stick with the 10GB for the entire game. There is no way I would force all of it down to 336GB/s for every game; the higher-bandwidth pool would be a waste in that case.

Then the Series X would be limited to 10GB for games; is that enough for next-gen at 4K60?
 

synce

Member
No footnote about how Sony's numbers focus on boosted speeds? Until shown an actual spec sheet proving otherwise, I think it's fair to assume the non-boost clocks are no better than Xbox's, and if anything the performance gap is even greater than the current numbers show. The PS5 could be 9 or even 8 TF under certain conditions. Cerny's claim of performance decreasing only a couple of percent when not boosted sounds unrealistic, especially if it's up to developers to code their game around this "feature". I like the MS approach much better: no weird variables, as a console should be. That said, I'm still sticking with PC next gen, as the hardware in these consoles is more similar to PC than ever before.
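For anyone wanting to sanity-check those "it could be 9 or 8 TF" scenarios, the TFLOPS figure is just CUs × 64 shaders × 2 FP32 ops per clock × clock speed, so the what-ifs are easy to work out:

```python
# TFLOPS = CUs * 64 shaders per CU * 2 FP32 ops per clock (FMA) * clock in GHz / 1000
def tflops(cus: int, ghz: float) -> float:
    return cus * 64 * 2 * ghz / 1000

print(tflops(52, 1.825))  # Series X (fixed clock):        ~12.15 TF
print(tflops(36, 2.23))   # PS5 at its frequency cap:      ~10.28 TF
print(tflops(36, 2.0))    # PS5 if it dropped to 2.0 GHz:  ~9.2 TF
print(tflops(36, 1.825))  # PS5 at Series X's clock:       ~8.4 TF
```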
 

vpance

Member
I'm not sure the memory is faster. If you combine access to the slow and fast pools, it's exactly the same on average.
So either the XSeX RAM is "faster but smaller" or it's "almost the same".

So when do they use the slow pool? I'm thinking a lot of early games won't even dip into that 3.5GB.
 

psorcerer

Banned
So when do they use the slow pool? I'm thinking a lot of early on games won't even dip into that 3.5GB.

I wouldn't be so dismissive; it can work.
Last gen, AFAIK, MSFT had pretty bad profiling tools for the XOne, so it would have been very tricky.
Hopefully with the new, simpler arch that gets better and developers can hunt down CPU -> RAM overuse.
 

SonGoku

Member
Too bad it's 9.2 without boost, but please spin more lol
The only one spinning is you... there's no traditional base frequency; frequency varies with the allocation of power.
Cerny specifically said the GPU would spend most of its time at 10.27TF or close to it. The worst-case game would only require a minor downclock.

But no, he's lying and you know better, right?
 

Forsete

Member
Basically, I think we can all agree that due to the PS5's superior SSD performance it will not suffer from stuttering.
Let's say that a PS5 title, due to its inferior graphical performance, runs at a rock-solid 30 fps without stuttering, while an Xbox Series X title will run at 120 fps but with stuttering when its inferior SSD needs to buffer and catch up (basically like in the old YouTube days, when the buffering circle appears).

I think this is fair and something we can all agree on.
Let's agree.
Please.
Agree.
 
Zzzzzz. I will believe it when we see some benchmarks showing how the PS5's GPU and CPU perform under a lot of strain. We also have no clue how much RAM Sony has allocated to its OS. We also have no clue what kind of cooling Sony will be using to sustain clock speeds at least near the peak numbers they announced.

Everything is crystal clear on the XSX front: this is a beastly console, balanced in everything, with MS having basically explained everything. On the other hand, we have seen vagueness ("most of the time", "close to maximum clock speeds", etc.) along with ridiculousness ("a big number of CUs is bad", "I don't measure TF by their number").
 

vpance

Member
I wouldn't be so dismissive; it can work.
Last gen, AFAIK, MSFT had pretty bad profiling tools for the XOne, so it would have been very tricky.
Hopefully with the new, simpler arch that gets better and developers can hunt down CPU -> RAM overuse.

Ofc it will work. But the easiest solution may be not to use it at all if you're heavily bandwidth-dependent.
 
Basically, I think we can all agree that due to the PS5's superior SSD performance it will not suffer from stuttering.
Let's say that a PS5 title, due to its inferior graphical performance, runs at a rock-solid 30 fps without stuttering, while an Xbox Series X title will run at 120 fps but with stuttering when its inferior SSD needs to buffer and catch up (basically like in the old YouTube days, when the buffering circle appears).

I think this is fair and something we can all agree on.
Let's agree.
Please.
Agree.

I just really have a hard time imagining the XSX having stuttering issues. I'm just being honest. It's a slower drive than the PS5's, but it's still an NVMe drive. It's ultra-fast storage. It's considerably faster than the SATA SSDs hooked up in my PC.
 
Basically, I think we can all agree that due to the PS5's superior SSD performance it will not suffer from stuttering.
Let's say that a PS5 title, due to its inferior graphical performance, runs at a rock-solid 30 fps without stuttering, while an Xbox Series X title will run at 120 fps but with stuttering when its inferior SSD needs to buffer and catch up (basically like in the old YouTube days, when the buffering circle appears).

I think this is fair and something we can all agree on.
Let's agree.
Please.
Agree.
I can imagine a game designed around and dependent on the PS5's SSD being flat-out broken or impossible to port to XSX/PC as-is. It would need a redesign and a downgrade of its data streaming to accommodate a 100%+ disparity in speed.
 

LostDonkey

Member
Let's say you have a 1.0L engine that produces 50bhp vs a 2.0L engine that produces 100bhp, so 50bhp per litre.

Then they release a 1.0L that does 100bhp and a 2.0L that does 200bhp, at 100bhp per litre.

That's a 100bhp gap between the two engines now instead of a 50bhp gap. It's still the same percentage, but the performance per litre has doubled.

I've never been great at maths, so I'll take a slapdown if I've fucked it up, but wouldn't this be the same if the efficiency per TF has increased?
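The maths holds up. In console terms (announced TF figures, with PS4 vs Xbox One as the last-gen pair) it actually runs the other way: the absolute gap has grown while the relative gap has shrunk.

```python
def gap(a: float, b: float) -> dict:
    return {"absolute_tf": round(a - b, 2), "relative_pct": round((a - b) / b * 100, 1)}

print(gap(1.84, 1.31))    # PS4 vs Xbox One:   0.53 TF absolute, ~40% relative
print(gap(12.15, 10.28))  # Series X vs PS5:   1.87 TF absolute, ~18% relative
```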
 
I can imagine a game designed around and dependent on the PS5's SSD being flat-out broken or impossible to port to XSX/PC as-is. It would need a redesign and a downgrade of its data streaming to accommodate a 100%+ disparity in speed.
It might be greater than a 100% disparity. PC SSDs are only like 3-4 times faster than HDDs in practice, as bottlenecks limit SSD performance. But the custom I/O on PS5 is said to remove those bottlenecks and allow it to be 100 times faster than an HDD. If true, that is 20+ times faster than a PC SSD, or 2000+% faster.

edit:


Here we seem to see a SATA SSD vs a PCIe 4.0 NVMe SSD: despite being substantially faster on paper, the bottlenecks keep the NVMe drive no faster than the SATA SSD, and the SATA SSD is only a few times faster than an HDD.
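One way to see why raw drive speed doesn't translate one-for-one into load times. The asset size and the overhead figure below are assumptions picked to illustrate the shape of the effect, not measurements:

```python
# Why a "10x faster" drive doesn't load 10x faster: fixed per-load overhead
# (decompression, file-system work, engine check-in) dominates once raw transfer
# is fast. All numbers here are invented for illustration.

def load_time(asset_gb: float, drive_gb_per_s: float, overhead_s: float) -> float:
    return asset_gb / drive_gb_per_s + overhead_s

ASSET_GB = 8.0
OVERHEAD_S = 6.0   # assumed CPU-side overhead on a traditional I/O stack

print(load_time(ASSET_GB, 0.1, OVERHEAD_S))   # HDD:       ~86 s
print(load_time(ASSET_GB, 0.5, OVERHEAD_S))   # SATA SSD:  ~22 s  (only ~4x the HDD)
print(load_time(ASSET_GB, 5.0, OVERHEAD_S))   # NVMe SSD:  ~7.6 s (barely 3x the SATA SSD)
print(load_time(ASSET_GB, 5.0, 0.5))          # NVMe with the overhead removed: ~2.1 s
```

Strip the overhead out, which is roughly what Sony's custom I/O block and hardware decompression are claimed to do, and the raw drive speed finally shows up in the result.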
 

SonGoku

Member
And I can do the same with my Vega 64; this is all tunable.
No, you can't; there aren't any consumer electronics out there with this design.
Your Vega 64's frequency is regulated by temperature.
The PS5's GPU frequency is regulated by the amount of power allocated to it at any given time and the type of load. This is all under the control of the developers.
Constant power also seems terribly inefficient because you're pushing the same profile regardless of the workload. A 2D indie game shouldn't have the same power profile as a bleeding-edge, graphically intense 3D game.
PS5 has a fixed power budget; that doesn't mean it will run at full throttle all the time. Power draw varies with the workload until it reaches the cap.
 

Lort

Banned
This is clickbait and conjecture. No real evidence here, just speculation and opinion...

A 20% difference = 60 fps instead of 50,
or 30 fps instead of 25.

The power to play a whole PS4 game... is not insignificant ;)
 
It might be greater than a 100% disparity. PC SSDs are only like 3-4 times faster than HDDs in practice, as bottlenecks limit SSD performance. But the custom I/O on PS5 is said to remove those bottlenecks and allow it to be 100 times faster than an HDD. If true, that is 20+ times faster than a PC SSD, or 2000+% faster.
The PS5's raw SSD throughput is about 129% higher than the XSX's (5.5GB/s vs 2.4GB/s), to be precise. I read somewhere that the console could load an entire level in the time it takes your character to do a 180° turn. A game made to exploit that capability is pretty much tied to that system; anything slower breaks it. I imagine God of War 2 having instant realm travel, for example.
 

Truespeed

Member
Can we all just agree that multiplats are going to perform better on the PS5? The inevitable meltdowns when multiplat after multiplat performs better on the PS5 are all but a given. And in the unlikely event that some titles ported to the PS5 don't, the couple of FPS lost will be more than made up for by the PS5's superior sound coprocessor, bus speeds, and load times. I keep coming back to the word "creativity" when thinking about the PS5. All I see with the Xbox Series X is a grey PC box.

 
The PS5's raw SSD throughput is about 129% higher than the XSX's (5.5GB/s vs 2.4GB/s), to be precise. I read somewhere that the console could load an entire level in the time it takes your character to do a 180° turn. A game made to exploit that capability is pretty much tied to that system; anything slower breaks it. I imagine God of War 2 having instant realm travel, for example.
Could be. But remember, Sony removed the bottlenecks. An NVMe PCIe 4.0 drive can be 10x faster than a SATA SSD on paper, and yet in practice it's often no faster. A SATA SSD can be 10x faster than an HDD, yet end up no more than 3-4 times faster. Why does that happen? Bottlenecks. Sony removed the bottlenecks, so their NVMe PCIe 4.0 drive is said to be 100x faster than a standard HDD.
 
No, you can't; there aren't any systems out there with this design.
Your Vega 64's frequency is regulated by temperature.
The PS5's GPU frequency is regulated by the amount of power allocated to it at any given time and the type of load. This is all under the control of the developers.

PS5 has a fixed power budget; that doesn't mean it will run at full throttle all the time. Power draw varies with the workload until it reaches the cap.
Speaking from second-hand experience won't do you too many favors.

I hope you realize I can manipulate the entire voltage and frequency curve so that the only determining factor for a frequency shift is load, not temperature...
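For anyone who hasn't played with this, the gist of the tuning is below. The curve values and the power model are invented, not real Vega 64 numbers; it's only meant to show why, after an undervolt, load rather than temperature ends up deciding the clock (as long as you stay under the thermal trip point):

```python
# Simplified picture of a user-tuned voltage/frequency curve: pick the highest
# state whose estimated power fits the card's power limit for the current load.
# The values and the P ~ f * V^2 model are invented for illustration.

vf_curve = [
    # (frequency in MHz, millivolts)
    (1200,  900),
    (1400,  975),
    (1550, 1050),
    (1630, 1100),   # undervolted top state; stock might want a higher voltage here
]

def pick_state(load: float, power_limit_w: float = 220.0):
    """Return the highest V/F state whose rough power estimate fits the limit."""
    for freq_mhz, mv in reversed(vf_curve):
        est_power = load * freq_mhz * (mv / 1000) ** 2 * 0.12   # crude P ~ f * V^2
        if est_power <= power_limit_w:
            return freq_mhz, mv
    return vf_curve[0]

print(pick_state(load=0.8))   # lighter load -> top state holds
print(pick_state(load=1.0))   # heavy load   -> steps down a notch if it exceeds the limit
```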

 

SonGoku

Member
Speaking from second-hand experience won't do you too many favors.

I hope you realize I can manipulate the entire voltage and frequency curve so that the only determining factor for a frequency shift is load, not temperature...

Your OC is bound by temperature; the moment a load or game becomes too much to handle, it will throttle. That can't happen on a console, as it's too unpredictable.
PS5 frequencies won't ever be throttled by temperature.
 

VFXVeteran

Banned
You guys are going down this fantasy rabbit hole again.

Stuttering isn't going to come from I/O bottlenecks. At that point everything is in super-fast RAM, and what's happening is frame pacing (or any number of other things) at speeds much, much faster than the SSD.

Please stop trying to equate fast I/O with higher FPS. If a developer makes a game like that, they need to rewrite their code.
 
Your OC is bound by temperature; the moment a load or game becomes too much to handle, it will throttle. That can't happen on a console, as it's too unpredictable.
PS5 frequencies won't ever be throttled by temperature.
I think if the PS5 is put in something like an oven-like environment it will probably shut down for safety rather than throttle. But something similar would likely happen to the Series X in a hot enough environment. Assuming adequate cooling, there shouldn't be an issue with sustaining clocks.
 

SonGoku

Member
I think if the PS5 is put in something like an oven-like environment it will probably shut down for safety rather than throttle. But something similar would likely happen to the Series X in a hot enough environment. Assuming adequate cooling, there shouldn't be an issue with sustaining clocks.
Right, developers will be in full control of power allocation and frequency.
 