
Ali Salehi, a rendering engineer at Crytek, contrasts the next-gen consoles in an interview (Update: tweets/article removed)

Kenpachii

Member
A good example of this is the Xbox Series X hardware. Microsoft went with two separate pools of RAM, the same mistake they made with the Xbox One: one pool of RAM has high bandwidth and the other has lower bandwidth. As a result, coding for the console is sometimes problematic, because the amount of data we have to fit into the faster pool is so large that it will be annoying again, and to add insult to injury, 4K output needs even more bandwidth. So there will be some factors that bottleneck the XSX's GPU.

They went with it for more bandwidth. By his own logic he should like it. And frankly, memory limitations, with or without a shared pool, are going to be a thing anyway. If they wanted to get rid of memory constraints they should have opted for 32 GB of RAM, but they didn't. 10 GB is what you get, and maybe 11-12 GB on the PS5; both are limited. Bandwidth is king at higher resolutions, and Microsoft understood this.

He should honestly praise Xbox for this, but he won't, and you see him jump through hoops of twisted logic to push Sony into a favorable position over and over again.

The main difference is that the PlayStation 5's GPU works at a much higher frequency. That's why, despite the difference in CU count, the two consoles' performance is almost the same. An interesting analogy from an IGN reporter was that the Xbox Series X GPU is like an 8-cylinder engine, and the PlayStation 5's is like a turbocharged 6-cylinder engine. Raising the clock speed on the PlayStation 5 seems to me to have a number of benefits for memory management, rasterization, and the other elements of the GPU whose performance is tied to frequency rather than CU count. So in some scenarios the PlayStation 5's GPU works faster than the Series X's. That's what lets the console's GPU sit at its announced peak of 10.28 teraflops more of the time. The Series X, on the other hand, because the rest of its elements are slower, will probably not reach its 12 teraflops most of the time, and will only reach them under highly ideal conditions.

Aware of this, Sony has used a faster GPU instead of a larger GPU to reduce allocation costs. A more striking example of this is in CPUs. AMD has had high-core-count CPUs for a long time; Intel, on the other hand, has used fewer but faster cores, and Intel CPUs with fewer, faster cores perform better in gaming. Clearly, a 16- or 32-core CPU has a higher number of teraflops, but a CPU with faster cores will definitely do a better job, because it's hard for games and programmers to use all the cores all the time, so they prefer fewer but faster cores.
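For reference, the teraflop figures being argued over here fall out of simple arithmetic: each RDNA CU has 64 FP32 lanes, each doing a fused multiply-add (2 FLOPs) per clock. A quick sketch using the publicly stated CU counts and clocks:

```python
# Peak FP32 = CUs * 64 lanes * 2 FLOPs (FMA) * clock (GHz) -> TFLOPs.
def peak_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

xsx = peak_tflops(52, 1.825)  # fixed clock
ps5 = peak_tflops(36, 2.23)   # variable clock, up to 2.23 GHz
print(f"XSX {xsx:.2f} TF, PS5 {ps5:.2f} TF")        # ~12.15 vs ~10.28
print(f"XSX compute advantage: {xsx/ps5 - 1:.0%}")  # ~18%
print(f"PS5 clock advantage: {2.23/1.825 - 1:.0%}") # ~22%
```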

Let's put some real talk forward then.
The lowest RDNA 2 GPU announced for PC is a 60 CU part; the Xbox has fewer than that, and the PS5 far fewer still. Do you honestly think games are not going to use those CUs (or "cores", as he calls them) when the whole market is shifting toward them and that's what gains performance? Of course they will.

So his whole story about 36 compute units at a higher clock is laughable, really, and frankly it will only help the PS5 in current-gen titles that don't address many CUs; for next-gen titles things are going to be completely different. That's also why Microsoft pushed bandwidth upwards.

And no, you don't need a high clock speed to get use out of that 12 TFLOP solution, for the simple fact that the 6800 XT with 72 CUs (twice the PS5's CU count) doesn't have to run its clocks at twice the frequency to keep them busy. There is absolutely no need for that.

And as for his idea that the 12 TFLOP number is hard to reach: by that logic, any TFLOP number on any card is hard to reach.

Look, if he had just said that the PS5 could see an advantage in current-gen titles that use fewer CUs, sure; but next-gen titles? Lol, nope.
 
Last edited:

Thirty7ven

Banned
*trash*

Look, if he had just said that the PS5 could see an advantage in current-gen titles that use fewer CUs, sure; but next-gen titles? Lol, nope.

Gotta love the internet. What do you do for a living, my brother? How can you speak with such authority on the subject while opposing someone who works on this stuff for a living, at a studio known for being a tech studio, and with the results being what they are?

This ain't politics breh, you can't just drop hot trash like that and classify it as "opinion", "freedom of thought". I mean, you're free to look dumb, but is that what you want?
 
Last edited:

Ev1L AuRoN

Member
Kenpachii said: [quoted above]
On PC, big Navi will have dedicated memory and a much higher clock. What he said makes sense: if your data pipeline has bottlenecks, you won't be able to perform efficiently. Mark Cerny made a big deal of how the PS5 is optimized for data throughput, with increased cache sizes and additional hardware to handle memory access. The rationale is: use less silicon, but make sure it can be fully utilized by keeping it fed constantly.
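The "keeping it fed" point can be put in rough numbers as DRAM bytes available per FLOP of compute. This is a crude yardstick that ignores caches and compression, using the public bandwidth and peak-compute figures:

```python
# DRAM bytes per FLOP: a crude measure of how well the CUs can be fed.
# Ignores caches, compression and access patterns.
systems = {
    "XSX (560 GB/s fast pool)": (560e9, 12.15e12),  # bytes/s, FLOP/s
    "PS5 (448 GB/s unified)":   (448e9, 10.28e12),
}
for name, (bw, flops) in systems.items():
    print(f"{name}: {bw/flops:.3f} bytes/FLOP")
# Both land near 0.044-0.046 bytes/FLOP, i.e. similarly balanced on paper;
# the difference is the XSX only gets its full rate from the 10 GB pool.
```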
 

Kenpachii

Member
Thirty7ven said: [quoted above]

Oh look, another PS5 fanboy crawled out of the dumpster called the PS5 next-gen thread, with no arguments, just shitposting, because I dared to criticize his plastic-box prophets.

Maybe next time start posting with some arguments, because the market is clearly moving in the Xbox direction, not the PS5 direction, when it comes to GPU solutions, as I explained, and the same goes for bandwidth. Or maybe the entire market, AMD's RDNA 2 and Nvidia's 3000 series included, got it all wrong.

What a joke.

Ev1L AuRoN said: [quoted above]

And the same goes for software being designed for high CU counts, which is where the entire market is moving. But he forgot to mention that. The same goes for higher bandwidth on the VRAM modules, yet none of that is supposed to matter, even while everywhere else it's the most important aspect, lol. The CU counts and memory clocks are so important that AMD and Nvidia charge a bucketload more money for exactly those in their top parts. And with RT, those CUs will age poorly. I won't be shocked if a PS5 Pro with a massive leap in CUs is already on the menu for 2022.
 
Last edited:

Ev1L AuRoN

Member
Kenpachii said: [quoted above]

Dude, more CUs are definitely better, don't get me wrong. If the Xbox is really efficient it will be stronger, of course. The reality of it? They are not far apart.
The Series X has the wide design, with roughly 18% more compute.
The PS5 has the fast design, with roughly 22% higher clocks.
The PS5 is faster in some ways and the Xbox in others, but so far at least, the PS5 is proving to be more efficient. Maybe that changes, maybe not; all we hear from developers is praise for the PS5's design, and Microsoft still hasn't proven us wrong. So far, you are the one assuming without substance.
 
Someone obviously never told Nvidia or AMD that when they designed their new monster GPUs...

What MS did with the XSX was win the TF war on paper, for marketing. But the actual results, as we can see now, favor the PS5 in performance.

AMD's new RDNA 2 GPUs have 8-10 CUs per shader array. That's the number of CUs they deemed optimal for each shader array. So AMD's big GPUs have 8 shader arrays with 8-10 CUs each.

The problem with the XSX GPU is that MS crammed 13-14 CUs into each shader array, and it only has 4 shader arrays, like the PS5. So the XSX is not actually wider or bigger by AMD's definition: AMD's big, wide GPUs scale the number of shader arrays with the number of CUs (8-10 CUs per shader array).


[image: GPU block diagram (AIQZ6Se.jpg)]



With the PS5 you have 4 shader arrays with 9-10 CUs each, but the GPU is clocked a lot higher, so its caches are a lot faster. If the XSX had 5 or 6 shader arrays, that would count as bigger and wider than the PS5, and then surely it would have performed faster than the PS5. But that's not the case.

There may still be advantages to the XSX configuration, I would say. It's a wash: there may be operations where the XSX is faster and rendering where the PS5 is faster.
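The CU-per-shader-array tally in this post can be checked against the known die configurations (totals include the CUs disabled for yield):

```python
# CUs per shader array from the known die configurations.
# (total CUs on die, active CUs, shader arrays)
dies = {
    "Navi 21 (6900 XT)": (80, 80, 8),
    "XSX":               (56, 52, 4),
    "PS5":               (40, 36, 4),
}
for name, (total, active, arrays) in dies.items():
    print(f"{name}: {total/arrays:.0f} CUs/array ({active/arrays:.0f} active)")
# Navi 21: 10 (10 active); XSX: 14 (13 active); PS5: 10 (9 active).
# The XSX does pack more CUs per array than AMD's discrete RDNA 2 parts.
```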
 
Last edited:

Md Ray

Member
[quotes the shader-array analysis above]
A bit unrelated. I see a 5MB L2 cache in there. What's the L2 cache size of Big Navi?
 

ethomaz

Banned
A bit unrelated. I see a 5MB L2 cache in there. What's the L2 cache size of Big Navi?
Nobody knows, because the cache is different in each chip, but Big Navi is supposed to have 128MB of L2 or L3 cache.

We can only post what exists in the market... The RX 5700 and 5700 XT have 4MB of L2 cache, which is proportionally more cache than the Xbox has.
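The "proportionally more" claim can be checked per CU, assuming the widely reported 5 MB of GPU L2 on the XSX:

```python
# GPU L2 per active CU (MB -> KB). Crude, but it's the claim being made.
chips = {
    "RX 5700 XT": (4.0, 40),  # MB of L2, active CUs
    "XSX":        (5.0, 52),
}
for name, (l2_mb, cus) in chips.items():
    print(f"{name}: {l2_mb/cus*1024:.0f} KB of L2 per CU")
# 5700 XT ~102 KB/CU vs XSX ~98 KB/CU -- slightly more per CU, as he says.
```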
 
Last edited:

assurdum

Banned
Kenpachii said: [quoted above]
Do you know what the CUs are even for? Calculating data. Let me give a simple example: do you really think the Series X can push an advantage out of roughly 40% more CU power when it can only fill those CUs with roughly 20% more bandwidth (which is split, on top of that)? No: the advantage will stay around the 20% more data it can actually feed, and that's it.
More CUs are almost pointless without a proportionate increase in the data feeding them. On PC there is Infinity Cache, which constantly feeds the massive CU counts with a massive amount of data. The Series X can only count on its raw bandwidth; it doesn't even have a cache customization as robust as the PS5's to help push the CUs harder. So a higher CU count can be good, but it is not that extreme a differentiation in real power without a proper flow of input.
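A toy model of the split-pool point: if some fraction of GPU traffic has to come from the slower 6 GB pool, the blended rate drops below the headline 560 GB/s. Real behaviour depends on how the memory controller schedules the bus; this only illustrates the direction of the effect, using Microsoft's published pool speeds:

```python
# Blended bandwidth when a fraction of traffic hits the slow 6 GB pool.
# Time-weighted (harmonic) mix: each byte costs 1/rate seconds of bus time.
FAST, SLOW = 560e9, 336e9  # B/s for the 10 GB and 6 GB pools

def blended(slow_fraction: float) -> float:
    return 1.0 / ((1 - slow_fraction) / FAST + slow_fraction / SLOW)

for f in (0.0, 0.10, 0.25):
    print(f"{f:.0%} slow-pool traffic -> {blended(f)/1e9:.0f} GB/s")
# 0% -> 560, 10% -> 525, 25% -> 480 GB/s (vs the PS5's uniform 448).
```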
 
Last edited:

Ev1L AuRoN

Member
So now it's frequencies that are the secret sauce, MS's DirectX is a hindrance to the console, and raw power is meaningless.

This reminds me of the devs in 2013 who were downplaying the PS4's power advantage over the XB1.

There is also no doubt that Crytek is not happy with MS after they almost went bankrupt, a big reason for which was the poor sales of Ryse, and the feud MS and Crytek had over the IP.

Meltdowns? No. Some devs will have different views and preferences. But the XSX power advantage is known and common knowledge. And I'd be willing to bet many devs would disagree with this unknown dev's opinion, to say the least.

Yet another attempt by desperate Sony fanboys to downplay the power-advantage narrative to make themselves feel a bit better. In the end, you guys are just setting yourselves up for disappointment when the DF head-to-heads come in and the XSX wins the majority with significantly better RT and higher resolutions at higher settings, etc.

And it's even sadder that you have a few hardcore Sony fanboys doing the translations.
The difference back then was that the PS4 performed better in all games; it had not only a better GPU but faster RAM.
And speaking for myself, I don't think the PS5 is more powerful; to me they will perform very close. I'm surprised like everyone else at how the PS5 is doing against the XSX. But I think it's fair to say the PS5 is really very efficient and easy to code for, and Xbox won't have the landslide victory in multiplats that the One X had.
 

assurdum

Banned
Kenpachii said: [quoted above]
The only similarity with the Series X is the high CU count, and it ends there. If you look at how I/O is handled in the GPU, it's closer to what the PS5 does than to the Series X approach. Even in frequency.
 
Last edited:

geordiemp

Member
assurdum said: [quoted above]

You're missing so much I don't know where to start. Look at this patent from Sony, where they compact data before the local data store / cache ahead of the pixel shaders. It's a known bottleneck, so compacting the data helps, as does having fewer CUs in the shader array.

The old way, made worse by a bigger shader array (Cerny and Naughty Dog patent):

[image: patent diagram, old arrangement (LSg0ezo.png)]


The Sony patent's different CU arrangement / workflow:


[image: patent diagram, new arrangement (IFYlpNc.png)]


The CU arrangement in the PS5 will be very different from the XSX's, and I bet the PC parts are the same as the PS5, as it's more performant, but we need to wait for the RDNA 2 white paper to see.

There is a reason you don't just build very large shader arrays...

Summary: a different CU arrangement that allows faster processing between vertex and pixel shading and cheaper post-process effects.
 
Last edited:

Sejanus

Member
ethomaz said: [quoted above]
For the full Navi 21 die:
Cache Hierarchy
  • 128MB AMD Infinity Cache
  • 4MB L2
  • 1MB Distributed L1
 

assurdum

Banned
geordiemp said: [quoted above]
In my defence, mine was just a very approximate sketch of what the CUs are for; I personally didn't know about the stuff you posted, but I already suspected the limited CU count was balanced to improve performance elsewhere.
 
Last edited:

Greggy

Member
Kenpachii said: [quoted above]

Amen to that. I think deep down, the people who shout vindication in this thread know it. They aren't wasting a second before celebrating, because they know very well that with equal development time, level of familiarity with the tools, and developer skill, there is no rational reason why a game should perform better on PS5. Not saying that it can never happen, but the hardware wouldn't be to blame. These consoles have the same hardware supplier, and the architectural differences outside of the I/O system are marginal. The theoretical power difference is, quite frankly, the best tool we have for predicting future performance comparisons in multiplat games.
And yes, we might as well throw away all we know about multicore programming if more CUs are somehow worse. By that logic, all hardware manufacturers, including Nvidia, would be racing to the bottom, trying to engineer ways to run a single CU at the highest possible frequency like in the early 90s. We all know that computer science runs in the opposite direction.
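The wide-versus-fast trade-off invoked here is essentially Amdahl's law: extra parallel units only speed up the portion of the work that actually parallelizes. A sketch with invented parallel fractions, purely to show the shape of the trade:

```python
# Amdahl-style sketch: "wide and slow" (52 units @ 1.825) vs
# "narrow and fast" (36 units @ 2.23). Parallel fractions are invented.
def rate(units: int, clock: float, parallel: float) -> float:
    return clock / ((1 - parallel) + parallel / units)

for p in (0.90, 0.99):
    ratio = rate(52, 1.825, p) / rate(36, 2.23, p)
    print(f"parallel = {p:.0%}: wide/fast throughput ratio = {ratio:.2f}")
# ~0.87 at 90% parallel, ~1.06 at 99%: the wider chip only pulls ahead
# as the workload approaches fully parallel; clocks close the gap otherwise.
```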
 

assurdum

Banned
Greggy said: [quoted above]
I mean, everyone is free to believe whatever they want. But it has been explained why the PS5 outperforms the Series X despite having fewer CUs, and still it's "more CUs are always better, I'm not listening, blah blah blah," and so on, the same generic stuff with tons of approximations and nothing else to add to the discussion. If some people prefer to live in their bubble and only hear that the Series X is more powerful, fine, rejoice in it. But there's no need to engage the same narrative loop every single time, if I may say so.
 
Last edited:

geordiemp

Member
Greggy said: [quoted above]

More CUs are better, but not tacked onto the end of a shader array the way the XSX and Navi 14 do it; that's not ideal. MS likely did this because they wanted 4 shader arrays, as the design most likely has to run 4 game instances as a server. The XSX also has a server-type CPU arrangement.

Hence the 6800 has 10 CUs per shader array and 8 shader arrays. There is a reason for that; go read my post above on this page.

Did you not wonder how a 6800 at 20 TF performs the same as a 3080, which is 30 TF? No, it's not just Infinity Cache; there are a lot of things to consider.
 
Last edited:

RaySoft

Member
Greggy said: [quoted above]
The Crytek dev didn't say that more CUs are bad, as long as you can keep feeding them with relevant data. His angle was the Series X's and the PS5's GPUs: with their respective bandwidths and clocks, he felt that Sony chose the "right" number of CUs. Like Cerny said, it's a balancing act. A GPU won't be faster simply because you throw more CUs at it; they also need to be supported by the rest of the chip, otherwise it's a waste.
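The "balancing act" can be phrased as a simple roofline model: attainable throughput is the lesser of peak compute and bandwidth times the workload's arithmetic intensity. A rough sketch; the intensity values are made up for illustration:

```python
# Roofline sketch: attainable TFLOPs = min(peak TF, BW * FLOPs-per-byte).
def attainable_tf(peak_tf: float, bw_gbs: float, flops_per_byte: float) -> float:
    return min(peak_tf, bw_gbs * flops_per_byte / 1000.0)

for fpb in (10, 20, 30):  # illustrative arithmetic intensities
    xsx = attainable_tf(12.15, 560, fpb)
    ps5 = attainable_tf(10.28, 448, fpb)
    print(f"{fpb} FLOPs/byte: XSX {xsx:.1f} TF, PS5 {ps5:.1f} TF")
# At low intensity both are bandwidth-bound and neither hits its peak:
# the paper TFLOPs only materialize when the rest of the chip keeps up.
```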
 
Last edited:

Neo_game

Member
Kenpachii said: [quoted above]

What is the logic behind current-gen and next-gen games? I don't think that makes any sense. Overall, the SX on paper should have an 18% advantage, but we don't know about the bottlenecks and APIs. Also, games using more than 10 GB of RAM on the SX have to be managed by the programmer because of the split bus, unlike on the PS5. The faster SSD on the PS5 should also help fill the RAM twice as fast as the SX. So the way I see it, the SX has a 25% bandwidth advantage only for games using 10 GB or less, and next-gen games using more RAM should benefit the PS5 instead.
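The "filling RAM twice as fast" point is simple division using the raw (uncompressed) SSD rates both companies have stated, and assuming roughly 13.5 GB of game-usable RAM:

```python
# Time to refill game-usable RAM from SSD at raw (uncompressed) rates.
# Compression roughly doubles both in practice.
GAME_RAM_GB = 13.5  # approximate game-visible RAM on both consoles
for name, ssd_gbs in {"PS5": 5.5, "XSX": 2.4}.items():
    print(f"{name}: ~{GAME_RAM_GB/ssd_gbs:.1f} s to refill {GAME_RAM_GB} GB")
# PS5 ~2.5 s vs XSX ~5.6 s: a bit more than 2x faster, as the post says.
```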
 

CobraXT

Banned
RDNA parts do not scale well at higher clock speeds. Overclocking tests on the RX 5700 XT (a close analogue of the PS5's GPU) indicate that a massive 18 percent overclock from stock, up to 2.1 GHz, resulted in just a 5-7 percent improvement in frame rates.


After 1.8 GHz, GPU performance will not scale linearly with clock speed.
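Turning the quoted overclocking result into a scaling-efficiency number makes the article's point explicit (the figures are the ones in the snippet above):

```python
# Scaling efficiency of the quoted RX 5700 XT overclock:
# an 18% clock increase yielded only a 5-7% frame-rate increase.
clock_gain = 0.18
for fps_gain in (0.05, 0.07):
    eff = fps_gain / clock_gain
    print(f"+{fps_gain:.0%} fps / +{clock_gain:.0%} clock = {eff:.0%} efficiency")
# ~28-39% of linear scaling on RDNA 1; whether RDNA 2 (and the PS5's
# customizations) behave the same is exactly what is being argued here.
```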
 

assurdum

Banned
Neo_game said: [quoted above]
There is a caveat, too. Running two different speeds on the same bus decreases the effectiveness of the extra bandwidth, because traffic to the slow pool eats into it. So it isn't that straightforward a victory, especially considering the PS5 can count on a robust cache system to support its bandwidth. I'm not entirely sure the Series X got the better deal compared to the PS5. But that's just my opinion.
 
Last edited:

assurdum

Banned

After 1.8 GHz, GPU performance will not scale linearly with clock speed.
Isn't that based on RDNA 1? Oh, an article from 3/20/2020. Lol.
 
Last edited:

ethomaz

Banned

After 1.8 GHz, GPU performance will not scale linearly with clock speed.
That is why I said it was bullshit to use RDNA clocks to make claims about RDNA 2 clocks, lol.
DF jumped the gun and I called them out.
People thought I was just doing damage control, lol.
 
Last edited:

killatopak

Gold Member

After 1.8 GHz, GPU performance will not scale linearly with clock speed.
Off the shelf RDNA1.
 

geordiemp

Member

After 1.8 GHz, RDNA1 GPU performance will not scale linearly with clock speed.

Fixed that for you. RDNA 2 goes all the way to 2.5 GHz; it's designed for that. RDNA 1 is not.
 
Last edited:
RaySoft said: [quoted above]
Yes, and the rest is basically what's inside the shader arrays. They both have 4, and the PS5's shader arrays are clocked 22% faster. That would explain both consoles being close in GPU-heavy scenes.

But that wouldn't explain why the PS5 has a 10% advantage in CPU-heavy scenes (in AC, at least). There is something else behind that.
 

assurdum

Banned
[quotes the shader-array reply above]
Split bandwidth on the same bus. Two fast, two furious. Just my guess. The cache system could also help the CPU a lot on the PS5. Or a combination of both. I don't find anything inexplicable here once we start considering the differences between the two sets of hardware.
 
Last edited:

Greggy

Member
RaySoft said: [quoted above]

I vaguely suspect MS knew that it's a balancing act even before Cerny said it.

I'd chill and wait for more games to come out before awarding vindication medals to supporters of either platform. The XSX does 4K 120 just fine in Halo MCC and many other games. Hell, the XSS even does 4K 60 in a game. Clearly third parties are currently struggling to pull the most out of the machine; time will tell if that's temporary.
 

killatopak

Gold Member
[quotes the shader-array reply above]
It's OS overhead. At least I'm assuming it to be that.

The advantage is that it allows more of the system to be used for BC and Series S compatibility, similar to how easy it is on PC to play old games across machines with different components.

The disadvantage is that it means a lot more abstraction and less fine-tuning for a specific system.
 

Handy Fake

Member
killatopak said: [quoted above]
Could well be that too. Plus I suspect the Quick Resume feature might affect it as well...
 
RaySoft said: [quoted above]

[image: thankyou.gif]

Nobody seemed to be picking up on that. Anyone who has worked in IT for a while understands this: it's the same as having a super-fast CPU and 4GB of RAM; you're going to be limited by the lowest common denominator.
 

fast_taker

Member
Thirty7ven said: [quoted above]
OK, let's say that people who work at tech companies are not biased. Let's assume they have invested a huge amount of their time becoming experts at developing games on both platforms (you are taking their opinion for granted; you want them to be experts).
We should also assume they are not human, since mistaken opinions are not a thing according to your thinking.
Can someone explain, providing video evidence (now that we have plenty), the advantages of the PS5 architecture?
I can only find material indicating that the biggest hype of the PS5, the disk speed, is outperformed when it comes to loading times. Not GPU-related, I'm just sayin'...
 

RaySoft

Member
Greggy said: [quoted above]
MS had an alternative agenda, since they also wanted a chip they could use for xCloud, where compute matters more.
 

sinnergy

Member
[quotes the post above about the CPU/RAM analogy]
That's why MS upped the bandwidth... but you guys seem to have missed that point.
 

Rayderism

Member
What I get from the interview is that yes, the Xbox is more powerful, but it has more bottlenecks and technical hurdles to overcome to realize its 12TF of power, whereas the PS5 has fewer bottlenecks and technical hurdles, so it can more easily achieve its 10.28TF.
 

benjohn

Member
RaySoft said: [quoted above]
He did not even say the XSX GPU is weaker than the PS5's. He even mentioned that he believes these consoles perform mostly the same: the PS5 is weaker on paper, but because of its smarter architecture it can punch above its weight.
 

Leyasu

Banned
Those have big advantages: a higher clock, dedicated memory, more CPU power available to them, and better thermal design and power delivery.

They are GPUs, right? People are trying to say that fewer CUs with higher clocks = better, yet the manufacturers are obviously not listening to the forum experts. In any case, both consoles dedicate memory to games, even if it is not solely dedicated to the GPU, and they have fairly decent CPUs this time. I don't yet see your point here.


[quotes the shader-array analysis above]

I seriously doubt Microsoft invested heavily just so that Xbox fanboys can wave their dicks on forums. Although you make some good points about the shader arrays; I will not pretend to know what I am talking about and would defer to experts on here (if there are any) on that. But surely this was widely discussed with AMD during the conception phase. It would be incredible if it turned out that more was less, and AMD stood by and did nothing whilst they designed something that was obviously going to be handicapped. Still, we need to wait and see now. I have looked at a few tech sites, and I haven't seen any apprehension from people far more knowledgeable than me about the design, or about the split L3 cache that was being speculated about yesterday.

It is almost surreal, though, to see people assuming that a few cross-gen ports are an indication of anything. In 12 to 18 months, both platform holders will start putting out games that will annihilate these launch titles. Even the cross-gen Horizon Forbidden West (or whatever it is called) will stomp them. I would bet money that unlocking the frame rate and resolution of RDR 2 on these new consoles would look and play better than the launch games in question.

Everyone was lamenting not long ago that cross-gen games would ruin everything, but now they are proof of something, and apparently the 52 CUs will never be possible to get working, yet we haven't had one game made especially for the machine! Same with the PS5: the really impressive-looking Demon's Souls will soon enough get totally outclassed, and I would be willing to bet that the cross-gen Horizon will be the game to do it. It is far too early to claim anything.
 