
Dirt 5 devs: Next gen GPU differences don't impact development, but have efficiencies when optimising and tuning

Kumomeme

Member
I should probably not even respond to this thread but here we go. Some posts here will not age well.

The focus on theoretical peak TFLOPs is misguided at best to measure how good graphics you will get with these two consoles.

The XSX and the PS5 are ridiculously close to each other in terms of hardware rendering specifications.

Having said that, the I/O of the PS5 has a significant advantage over the XSX - and in games utilising that I/O, this will result in more and higher-resolution textures.

As to the GPU itself, the XSX has a theoretical peak TFLOP advantage over the PS5. However, the PS5 has a pixel fill rate advantage and a frequency advantage. If the PS5 has a significant cache advantage (which I believe it does) - and we should know within the next few weeks - I even think the PS5 has a real-world TFLOP advantage due to higher CU utilisation and practical memory bandwidth (due to the size of the cache and its bandwidth to the CUs).

Net-net - the consoles are really close in power. And if there is a hardware advantage to be had - without current benchmarks - I am willing to bet the PS5 will come out slightly ahead. That relative difference though is close to meaningless when it comes to rendering power. The real difference between the two is in I/O to the advantage of the PS5. And the last point will show itself in some titles.
Also worth adding that the PS5 might have advantages with its API. Its console-specific API can surely squeeze more from the hardware than the XSX's GDK, since the GDK is designed to support more than one SKU (the XSS and the wide range of Windows 10 PC specs out there).

So don't expect big differences in performance between the two consoles. On paper the GPU TF difference is around 18%, not to mention the other gaps in CPU clockspeed, memory bandwidth etc., but in practice it will probably be smaller than expected.
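For reference, the paper TFLOP figures everyone keeps quoting fall out of a simple formula. A quick sketch (the CU counts and clocks are the officially announced specs; everything said above about real-world utilisation still applies):

```python
def peak_tflops(cus: int, clock_ghz: float, shaders_per_cu: int = 64) -> float:
    """Peak FP32 TFLOPs = CUs * shaders/CU * 2 ops (FMA) * clock in GHz / 1000."""
    return cus * shaders_per_cu * 2 * clock_ghz / 1000

ps5 = peak_tflops(36, 2.23)    # variable clock, up to 2.23 GHz -> ~10.28 TF
xsx = peak_tflops(52, 1.825)   # fixed clock -> ~12.15 TF

print(f"PS5: {ps5:.2f} TF, XSX: {xsx:.2f} TF")
print(f"XSX paper advantage: {(xsx / ps5 - 1) * 100:.0f}%")  # ~18%
```

Note the formula only says how fast the shader array could run if every CU were fed every cycle; it says nothing about cache hit rates or bandwidth, which is exactly the point being argued.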
 

Bo_Hazem

Banned
I should probably not even respond to this thread but here we go. Some posts here will not age well.

The focus on theoretical peak TFLOPs is misguided at best to measure how good graphics you will get with these two consoles.

The XSX and the PS5 are ridiculously close to each other in terms of hardware rendering specifications.

Having said that, the I/O of the PS5 has a significant advantage over the XSX - and in games utilising that I/O, this will result in more and higher-resolution textures.

As to the GPU itself, the XSX has a theoretical peak TFLOP advantage over the PS5. However, the PS5 has a pixel fill rate advantage and a frequency advantage. If the PS5 has a significant cache advantage (which I believe it does) - and we should know within the next few weeks - I even think the PS5 has a real-world TFLOP advantage due to higher CU utilisation and practical memory bandwidth (due to the size of the cache and its bandwidth to the CUs).

Net-net - the consoles are really close in power. And if there is a hardware advantage to be had - without current benchmarks - I am willing to bet the PS5 will come out slightly ahead. That relative difference though is close to meaningless when it comes to rendering power. The real difference between the two is in I/O to the advantage of the PS5. And the last point will show itself in some titles.

The graph here is quite intriguing as well:

[attached graph]


How could 36 RDNA2 CUs at 7nm be equivalent to 58 GCN CUs at 28nm (14nm if referring to the PS4 Slim/Pro)? Listen to what he says here: (timestamped)



And here:



The CUs must be dense with features to be that much larger than 28-14nm CUs!
 

DinoD

Member
As long as the PS5 is in the same league as the Xbox SX, i.e. there isn't any significant difference in quality, I'm happy. I have been on the PS platform since the PS1 and it will continue to be my main one, as long as Sony and other 3rd party vendors continue to produce content I like to play. I did buy a GameCube (RE4), an Xbox 360 (exclusives and some 3rd party games that played much better on 360) and a One. I only really regret buying the One. After its launch lineup, I only played "Sunset Overdrive". 3rd party games played better on PS4, and the Xbone's UI was slow and felt very kludgy.
 

Pimpbaa

Member
I think DF comparison vids are going to be a lot more boring next gen. The games that would show a real difference would be exclusives, but those are a lot harder to compare.
 

Neo_game

Member
Racing games are not going to show any difference. Even this current gen, where the gap was twice what it will be next gen, they were both doing the same shit.
 
I'm just waiting for those who will call this guy out for A. being a shill or B. not knowing what he's talking about... because that's what happens when someone, anyone, says one is greater than the other or close to equal.
 

pixelation

Member
I mean... have you seen Dirt 5 and Yakuza LaD running on XBSX? I am pretty sure that the PS5 versions of those same games won't look much (if any) worse. Sony first party devs are wizards; they are sure to make some of the most beautiful games ever yet again... this time on PS5.
 

-Arcadia-

Banned
I’d love if he went into more detail. That comment is really bare-bones, and intriguing at the same time.

Personally, I have a tough time believing that as everything gets settled in, the console with an on-paper spec advantage won’t take a slight lead, but who knows. There’s certainly a lot of interesting engineering going on in the PS5.
 

longdi

Banned
Albert Penello said that in order to clock the PS5 GPU up to 10.2 TF you can't get 60fps in your games, which means the CPU will downclock at that number.



Yap! I can't see why GAF posters are dismissing Albert.
He has his sources; heck, he worked on Amazon Luna!
I like that fanboys would rather cover their ears to keep their hopes alive. 🤷‍♀️

I mean, the rumored game clocks of the 300W TGP Big Navi are only about 2GHz.
Add to that the fact that in an SoC, the GPU has to compete with the CPU for heat and power.
 

Ar¢tos

Member
I don't see why people are acting so surprised; it's the typical 3rd party MO:
reach the target on the more powerful platform, then spend the rest of the time optimizing the weaker one to reach the same target.
Anyone who expects 3rd parties to squeeze the max out of the 1.8TF difference is going to be hugely disappointed, especially when 1.8TF means little in the 4K/RT realm.
If you want to see the max any console can do, you have to rely on 1st party devs.
 

longdi

Banned
From what we've seen of the Series X CU, it is more off-the-shelf than Big Navi.
This is the same as with the One X, whose SoC isn't in any of AMD's lineup.
You can thank Jason Ronald for that; he seems a great engineering wizard. Remember his name, or his beard. :messenger_bicep: :messenger_open_mouth:
 

mykedo0909

Member
... then why add additional power if it has no real impact? Makes no sense to me, therefore I'm sceptical. How much additional TF would be needed to see a difference, then?
 


Yap! I can't see why GAF posters are dismissing Albert.
He has his sources; heck, he worked on Amazon Luna!
I like that fanboys would rather cover their ears to keep their hopes alive. 🤷‍♀️

I mean, the rumored game clocks of the 300W TGP Big Navi are only about 2GHz.
Add to that the fact that in an SoC, the GPU has to compete with the CPU for heat and power.

I'll bookmark this post. In 3 weeks it'll be fun to call it back up to see if the PS5 was struggling. We will see which one is struggling 👀😅😅
 

geordiemp

Member
The graph here is quite intriguing as well:

[attached graph]


How could 36 RDNA2 CUs at 7nm be equivalent to 58 GCN CUs at 28nm (14nm if referring to the PS4 Slim/Pro)? Listen to what he says here: (timestamped)



And here:



The CUs must be dense with features to be that much larger than 28-14nm CUs!


It's amazing, the reading comprehension on display. He said there was a GPU performance difference, and one console needed more optimisation/tweaking.

He did not say WHICH console needed more work, or why, for his engine and game.

I'll bookmark this post. In 3 weeks it'll be fun to call it back up to see if the PS5 was struggling. We will see which one is struggling 👀😅😅

Another smart poster who can read like me.
 
It's amazing, the reading comprehension on display. He said there was a GPU performance difference, and one console needed more optimisation/tweaking.

He did not say WHICH console needed more work, or why, for his engine and game.



Another smart poster who can read like me.
I'm not talking about Dirt per se, rather in general 👀👀
 

Mr Moose

Member
Albert Penello said that in order to clock the PS5 GPU up to 10.2 TF you can't get 60fps in your games, which means the CPU will downclock at that number.
Albert Penello says a lot of things.


Edit: Beaten
 

Ar¢tos

Member
... then why add additional power if it has no real impact? Makes no sense to me, therefore I'm sceptical. How much additional TF would be needed to see a difference, then?
They didn't add additional power; it's not like Sony and MS designed their consoles together.
The power difference just turned out to be small after each designed their console. Also, it's not something you can measure with a single number; both have strengths and weaknesses.
In the 4K space, with extra detail/effects affecting pixel cost, I would say it takes a 4TF+ difference to see visual differences in gameplay (outside of zoomed screenshots).
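A rough way to see why a ~1.8TF gap is hard to spot at 4K: if rendering cost scaled linearly with pixel count (a big simplification, but a common rule of thumb), the compute ratio only moves resolution by its square root per axis. A hypothetical sketch:

```python
import math

def equivalent_height(base_height: int, compute_ratio: float) -> int:
    """Vertical resolution reachable when compute scales by `compute_ratio`,
    assuming cost is proportional to pixel count (illustrative only)."""
    return round(base_height * math.sqrt(compute_ratio))

# If one box renders native 2160p, the same (simplified) budget at
# 10.28/12.15 of the compute lands around:
print(equivalent_height(2160, 10.28 / 12.15))  # ~1987p, under 10% per axis
```

Which is the kind of delta that only shows up in zoomed screenshots, per the point above.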
 

Rea

Member
You'll be satisfied; as next gen engines get more demanding, the XSX will hold up a lot better. Better GPU, faster CPU, faster memory bandwidth and no downclocking when you hit a power limit.

You got a better console for the same price👍
Yes! No downclocking, but it will shut down when hitting max power.
 

geordiemp

Member
Can’t imagine the Xbox 10/6 memory setup is set and forget

Memory bandwidth is less important in RDNA2; heck, the 40 CU Navi 22 has a 192-bit bus. It won't affect either console.

Likely the main reason why the XSX's is larger is because they use 20 MB or 40 MB in the server application.

Most RDNA2 benefits are around caches, which we will see in 4 days.
 

Sejanus

Member


Yap! I can't see why GAF posters are dismissing Albert.
He has his sources; heck, he worked on Amazon Luna!
I like that fanboys would rather cover their ears to keep their hopes alive. 🤷‍♀️

I mean, the rumored game clocks of the 300W TGP Big Navi are only about 2GHz.
Add to that the fact that in an SoC, the GPU has to compete with the CPU for heat and power.
Albert doesn't know shit about electronics/architecture.
The difference is that the XSX GPU = RTX 2080S and the PS5 = RTX 2070S.
But in the memory subsystem (I/O, RAM, SSD) the PS5 is a beast and likely more similar to Big Navi, for two reasons:
1) Compression: it has built-in decompression engines for working on highly compressed textures directly.

2) The cache scrubbers.
The focus of UE5's new features on compute shaders supports this move.
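As a back-of-the-envelope on the decompression point (a sketch: the raw figures are the spec-sheet numbers, while the compression ratios are the vendors' "typical" claims and vary a lot by asset type):

```python
def effective_gbps(raw_gbps: float, compression_ratio: float) -> float:
    """Effective read speed when the hardware decompressor keeps pace with the drive."""
    return raw_gbps * compression_ratio

ps5 = effective_gbps(5.5, 1.45)  # Kraken: Sony quotes roughly 8-9 GB/s "typical"
xsx = effective_gbps(2.4, 2.0)   # BCPack: MS quotes ~4.8 GB/s
print(f"PS5 ~{ps5:.1f} GB/s vs XSX ~{xsx:.1f} GB/s effective")
```

The point of doing it in hardware is that neither CPU spends cores on the decompression step; the ratios themselves are workload-dependent claims, not guarantees.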
 

mykedo0909

Member
They didn't add additional power; it's not like Sony and MS designed their consoles together.
The power difference just turned out to be small after each designed their console. Also, it's not something you can measure with a single number; both have strengths and weaknesses.
In the 4K space, with extra detail/effects affecting pixel cost, I would say it takes a 4TF+ difference to see visual differences in gameplay (outside of zoomed screenshots).
Hey, thanks for the reply. Yes, that makes sense of course.
Though I believe that MS and Sony each know, to some degree, what the other party is trying to achieve in terms of performance. And if we can believe MS and Sony, they both gathered feedback from 3rd party dev studios during R&D and development. It would just be hard for me to believe that the 1.3TF is pure overhead (and therefore increased production cost). Even if, let's say, it would account for 5% better framerate... I'm really looking forward to seeing real-life results for both consoles.

Some developers state that it's harder to develop for PC due to the many possible configurations. I imagine there are many visual options in PC games which can be enabled/disabled depending on your hardware. I think the 1.3TF would enable the XSX to tick at least one of those additional options, compared to PC...

But yeah, I'm not an expert, just my impression.
 

longdi

Banned
Albert doesn't know shit about electronics/architecture.
The difference is that the XSX GPU = RTX 2080S and the PS5 = RTX 2070S.
But in the memory subsystem (I/O, RAM, SSD) the PS5 is a beast and likely more similar to Big Navi, for two reasons:
1) Compression: it has built-in decompression engines for working on highly compressed textures directly.

2) The cache scrubbers.
The focus of UE5's new features on compute shaders supports this move.

Albert may or may not be an architecture engineer, but he has contacts who are. He works in the industry; he recently worked on Amazon Luna.

What is your contribution to society, to judge Albert this way? 🤷‍♀️

Meaning: please share more if you are, similarly, in the industry.
 
I don't see why people are acting so surprised; it's the typical 3rd party MO:
reach the target on the more powerful platform, then spend the rest of the time optimizing the weaker one to reach the same target.
Anyone who expects 3rd parties to squeeze the max out of the 1.8TF difference is going to be hugely disappointed, especially when 1.8TF means little in the 4K/RT realm.
If you want to see the max any console can do, you have to rely on 1st party devs.
We know that. We've known that for multiple gens.
But this time, one console platform is pretending they can depend on third party output to showcase their hardware. We know they are lying, but they have no choice but to keep it up.

Whatever Xbox users say to defend the Series S would at the same time sabotage the power advantage the Series X is supposed to have. The two SKUs have contradictory goals, and as such you can't defend one without accidentally attacking the other.

Right now, the main justification to buy a Series X is "to play the best version of third party games". But if third party games don't look much different between hardware, then that argument fails.
 
Whatever Xbox users say to defend the Series S would at the same time sabotage the power advantage the Series X is supposed to have. The two SKUs have contradictory goals, and as such you can't defend one without accidentally attacking the other.

What? They both have the same goal, play next-gen games. One does it at a higher resolution and higher details than the other, that's it.
 

Sejanus

Member
Albert may or may not be an architecture engineer, but he has contacts who are. He works in the industry; he recently worked on Amazon Luna.

What is your contribution to society, to judge Albert this way? 🤷‍♀️

Meaning: please share more if you are, similarly, in the industry.
He is not an architecture engineer; he works in the marketing department.
Do you think his opinion is unbiased?
 

waxer

Member
Albert may or may not be an architecture engineer, but he has contacts who are. He works in the industry; he recently worked on Amazon Luna.

What is your contribution to society, to judge Albert this way? 🤷‍♀️

Meaning: please share more if you are, similarly, in the industry.
If a pornstar bites a dick off, does my lack of also sucking dick mean I'm wrong that she gives shit blowjobs?
 

Bitmap Frogs

Mr. Community
You guys need to stop fellating your plastic boxes.

There's no stock available for either, so your warrioring won't impact any purchasing decisions and we'll have real comparisons coming before stock is being made available and when that happens people will be talking about the real comparisons, not your hypotheticals.
 

Md Ray

Member
Expect the same difference as between the PS4 Pro and One X. That big Series X APU die is there for a reason.
That's not how it works.

The One X had a 43% compute and a 50% memory bandwidth advantage over the Pro.

The SX's advantage over the PS5 is 18% and 25% in those same regards.

You can't compare PS5 and SX and say the difference will be like PS4 Pro vs One X. There are parts of the PS5 GPU that are actually better than the XSX GPU: e.g. pixel fillrate, rasterization perf, all the caches and their bandwidth, etc. are 20%+ faster than the XSX's. The TFLOPS metric alone isn't the be-all and end-all. PS5 and XSX are a lot closer, as the devs have been saying.
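Those percentages check out against the public spec-sheet numbers (6.0 vs 4.2 TF and 326 vs 218 GB/s for the mid-gen pair; 12.15 vs 10.28 TF and 560 vs 448 GB/s for the new one):

```python
def advantage_pct(a: float, b: float) -> int:
    """Percentage advantage of a over b, rounded to the nearest whole percent."""
    return round((a / b - 1) * 100)

# One X over PS4 Pro
print(advantage_pct(6.0, 4.2), advantage_pct(326, 218))      # 43 50
# Series X over PS5
print(advantage_pct(12.15, 10.28), advantage_pct(560, 448))  # 18 25
```

(The 560 GB/s figure is the XSX's fast 10 GB pool; its remaining 6 GB runs at 336 GB/s, which is part of why a single-number comparison is slippery.)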
 

Ar¢tos

Member
Hey, thanks for the reply. Yes, that makes sense of course.
Though I believe that MS and Sony each know, to some degree, what the other party is trying to achieve in terms of performance. And if we can believe MS and Sony, they both gathered feedback from 3rd party dev studios during R&D and development. It would just be hard for me to believe that the 1.3TF is pure overhead (and therefore increased production cost). Even if, let's say, it would account for 5% better framerate... I'm really looking forward to seeing real-life results for both consoles.

Some developers state that it's harder to develop for PC due to the many possible configurations. I imagine there are many visual options in PC games which can be enabled/disabled depending on your hardware. I think the 1.3TF would enable the XSX to tick at least one of those additional options, compared to PC...

But yeah, I'm not an expert, just my impression.
Near-final spec and design must have been decided a good 3 years ago.
A complex 7nm chip takes 5 months to manufacture. After the initial design they need to go into a cycle of making chips, testing, making changes and making chips again, which lasts probably a good 2 years; then you have mass production.
They didn't end up with similar specs because of industrial espionage; they ended up with similar specs because both were targeting a release in the same year at a similar retail price, and that was the tech available.
We only have a difference because what Sony spent on a faster SSD, MS spent on a bigger GPU.
Then there is the factor that the power difference not only isn't down to a single spec, but ends up being variable in real usage.
Example: a game locked at 4K@30 on both doesn't mean a full, constant 1.8TF extra free on the XSX. It will probably vary depending on how good the optimization work was - say a 1.4 to 2.2TF difference depending on scene complexity. In that case devs would only target the lowest number (1.4TF) for improvements, and only make improvements they are confident won't impact framerate, to avoid further optimization passes given limited time and budget (better shadows? A few more particles? Slightly higher RT res?).
(This is just a theoretical example; in reality devs don't target parity first and then increase things on the more powerful console.)

If MS really wanted to make a difference, instead of releasing a 4TF XSS for $300, they should have also considered a 16-17TF XSZ for $700 as an option for players who want the most powerful console, where they could actually see a considerable difference versus the competition.
 

Rikkori

Member
As expected.

Cerny deserves a lot of credit. The real difference this generation will be the SSD.

Nope. The SSD difference will be LESS relevant, because to properly make use of that it requires more fundamental changes than just lowering resolution a bit, and when you ship to multiple platforms then you want to reduce the amount of work & fine-tuning required, especially if it would add extra QA (which a resolution change wouldn't).

The real differences will be on games & prices, the rest is just nerd-squabble stuff.
 

Tajaz2426

Psychology PhD from Wikipedia University
The differences don’t matter to me as a PC gamer, but they don’t seem to be that much yet. I say that as maybe developers don’t know how to leverage the power of XBox yet, but the same could be said for PS5, also.

PC for games and PS5 for exclusives is the way to go.
 

Elios83

Member
There was already plenty of evidence of this if you noticed that all the games announced so far are targeting the same resolution/fps profiles on both PS5 and XSX.
If one system were factually stronger than the other, we would have seen many cases of the PS5 lacking a 4K mode where the XSX has it, or the PS5 lacking 120fps modes where the XSX has them.
It seems like that's clearly not the case, and differences will basically be in the "who holds the peak resolution for longer when the resolution is dynamic" or "who has fewer frame rate drops here and there in the context of the same peak frame rate and general performance" category. Basically meaningless stuff for most users.
It might end up that differences in loading times will be the most perceivable aspect without using a pixel zoom and a performance analyzer.
 

Ar¢tos

Member
Nope. The SSD difference will be LESS relevant, because to properly make use of that it requires more fundamental changes than just lowering resolution a bit, and when you ship to multiple platforms then you want to reduce the amount of work & fine-tuning required, especially if it would add extra QA (which a resolution change wouldn't).

The real differences will be on games & prices, the rest is just nerd-squabble stuff.
You don't have to do ABSOLUTELY ANYTHING to take advantage of the raw speed difference between the PS5 and XSX SSDs, and with Kraken + Oodle Texture being part of the SDK, you'd only not use it on PS5 if you wanted to purposely make the PS5 look bad (loading speed parity to not piss off MS?).
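For scale, a purely I/O-bound load (which real loads rarely are: decompression, CPU setup and shader compilation all add time) with a hypothetical 12 GB of level data would look roughly like:

```python
def load_seconds(data_gb: float, effective_gbps: float) -> float:
    """Seconds to stream `data_gb` at a sustained effective rate (I/O-bound only)."""
    return data_gb / effective_gbps

data = 12  # GB of level assets, hypothetical
print(f"PS5 ~{load_seconds(data, 8.0):.1f}s, "        # ~8 GB/s effective (Kraken)
      f"XSX ~{load_seconds(data, 4.8):.1f}s, "        # ~4.8 GB/s effective
      f"last-gen HDD ~{load_seconds(data, 0.1):.0f}s")  # ~100 MB/s
```

Both next-gen numbers are a different universe from the HDD baseline, which is why the absolute difference between them may matter less than either side admits.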
 

Rikkori

Member
You don't have to do ABSOLUTELY ANYTHING to take advantage of the raw speed difference between the PS5 and XSX SSDs, and with Kraken + Oodle Texture being part of the SDK, you'd only not use it on PS5 if you wanted to purposely make the PS5 look bad (loading speed parity to not piss off MS?).
You do, because you need to make the game actually need that speed first; it's not about utilisation, it's about what's needed. Same way I can run AC:O off fucking Optane or a ramdisk, but it still has fuck-all LOD model 0 distance, because I can't actually modify the core engine parameters.
 