
[Digital Foundry] Watch Dogs Legion: PlayStation 5 vs Xbox Series X|S - Graphics, Performance, Ray Tracing!

assurdum

Banned
Why is he a douchebag? Going by the official numbers, a 12.1 TF console should show better performance than a 10.2 TF console. It's only natural that a person in his position, doing that job, might wonder why that is the case; it's just natural curiosity.
People need to stop being so touchy about these things. It's not life or death, or an insult to your family, if someone makes a comment like that.
Again, RDNA 2 is all about speed. Why is it shocking to see a faster-clocked GPU outperform a slower one? What's the surprise? A person in his position should already know that.
 
Last edited:
You pea-brained insect. I have witnessed you turn every thread into a shit show with your petulant attitude. You don't need to be triggered by someone saying the PS5 version is superior. Yes, the lower texture filtering makes the Series X version slightly inferior, because textures get muddier at a closer distance than on the PS5 version, and you can see that easily, as it affects most textures at a given distance, especially when viewed at an angle, which is every gameplay moment.

Texture Filtering Quality >>>>>>>> Missing puddles in RT reflections on glass.

Both appear to be bugs that can easily be patched to bring them on par.

For example, in case you don't know how important texture filtering is:

[Dirt 5 screenshots comparing texture filtering quality]

Dirt 5 is such an ugly game, and it's being forced onto next-gen by its marketing team as a poster child for "look, this is next-gen gaming!"

It isn't, it's pretty darn underwhelming.
 

CthulhuPL

Member
Great results but I really hope they patch in a 60fps mode even if they disable RT to do so. Playing this at 30fps after playing Spider-Man at 60fps is too jarring.
Yet London looks much better than New York in those games, in my honest opinion.
 

ethomaz

Banned
If PS5 really performs as well as the Series X in the long run, it will be the first time I've seen a smaller chip with fewer CUs outperform a bigger one, if confirmed. What a colossal screw-up from Microsoft that would be; the Series X must have some terrible bottleneck in its design.
The bigger, more expensive SoC, the vapor chamber cooling... the Series X should be quite a bit more expensive to manufacture.
First case?
There are a lot of cases, even within AMD's lineup.

The 5700 XT is smaller with fewer CUs and outperforms the big Vega with way more CUs... just to give one example.
 
Last edited:

assurdum

Banned
If PS5 really performs as well as the Series X in the long run, it will be the first time I've seen a smaller chip with fewer CUs outperform a bigger one, if confirmed. What a colossal screw-up from Microsoft that would be; the Series X must have some terrible bottleneck in its design.
The bigger, more expensive SoC, the vapor chamber cooling... the Series X should be quite a bit more expensive to manufacture.
The Series X is just 18% more powerful. 18%. And the PS5's clock is faster by a similar margin. Too many people look at the CU and TF counts of the Series X but forget to look at the overall spec delta against the PS5.
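(For what it's worth, that 18% checks out from the public spec sheets alone. A minimal sketch, assuming the standard RDNA figure of 64 shader lanes per CU at 2 FLOPs per clock:)

```python
# Theoretical FP32 throughput for an RDNA-class GPU:
# CUs * 64 lanes per CU * 2 FLOPs per lane per clock * clock (GHz)
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

xsx = tflops(52, 1.825)  # Series X: 52 CUs at a fixed 1.825 GHz
ps5 = tflops(36, 2.23)   # PS5: 36 CUs at up to 2.23 GHz (variable)

print(f"XSX: {xsx:.2f} TF, PS5: {ps5:.2f} TF, delta: {xsx / ps5 - 1:+.1%}")
# XSX: 12.15 TF, PS5: 10.28 TF, delta: +18.2%
```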
 
Last edited:

eNT1TY

Member
If PS5 really performs as well as the Series X in the long run, it will be the first time I've seen a smaller chip with fewer CUs outperform a bigger one, if confirmed. What a colossal screw-up from Microsoft that would be; the Series X must have some terrible bottleneck in its design.
The bigger, more expensive SoC, the vapor chamber cooling... the Series X should be quite a bit more expensive to manufacture.
You will see the Series X hardware flex when projects from MS's studios, purpose-built from the ground up for the X, start dropping. The only thing holding the hardware back is third parties lacking the incentive to take the time to optimize for the X specifically, rather than taking a one-size-fits-all approach and hoping the X can just brute-force through the sometimes clumsy concessions made to accommodate as many platforms as possible.
 

Concern

Member
so PS5 wins again. Performance is the same, but PS5 loads faster and has a far superior controller.

PS5 THE Best Place to play!


After that whole SSD campaign about double the speed, they only gain 8 seconds on the XSX, and are even slower in some games. Not really much to celebrate there.


Edit: it doesn't take much, just some logic, to upset the Sony cult lol
 
Last edited:

Elios83

Member
Another example of, guess what... parity?
It's just becoming repetitive at this point, but the two systems are really close in real-world performance. Those who expected big differences in multiplatform games were really wrong.

If PS5 really performs as well as the Series X in the long run, it will be the first time I've seen a smaller chip with fewer CUs outperform a bigger one, if confirmed. What a colossal screw-up from Microsoft that would be; the Series X must have some terrible bottleneck in its design.
The bigger, more expensive SoC, the vapor chamber cooling... the Series X should be quite a bit more expensive to manufacture.

More CUs are just part of the design. You need to feed those CUs efficiently to extract real-world performance, and you need to avoid bottlenecks elsewhere in the pipeline to transform a peak on-paper figure into something real. Sony and MS simply chose two different approaches (smaller die at a high frequency vs. bigger die at a slower frequency), each with its own advantages, and ended up with something really similar in real-world performance.
 

Ev1L AuRoN

Member
First case?
There are a lot of cases, even within AMD's lineup.

The 5700 XT is smaller with fewer CUs and outperforms the big Vega with way more CUs... just to give one example.

They aren't the same architecture. TFLOPS are meaningless when comparing two different designs.

The Series X is just 18% more powerful. 18%. And the PS5's clock is faster by a similar margin. Too many people look at the CU and TF counts of the Series X but forget to look at the overall spec delta against the PS5.

Yes, I understand that, but I'm waiting to see if it's enough. I'm not an engineer; maybe there really is something odd about Microsoft's approach with the Series X. Time will tell.

You will see the Series X hardware flex when projects from MS's studios, purpose-built from the ground up for the X, start dropping. The only thing holding the hardware back is third parties lacking the incentive to take the time to optimize for the X specifically, rather than taking a one-size-fits-all approach and hoping the X can just brute-force through the sometimes clumsy concessions made to accommodate as many platforms as possible.

That's true, but even without the incentive, more powerful hardware always shows through in the results, and that's what is so strange about it. As for the exclusives, Microsoft needs to step up its game in this regard; it's not only about buying studios, they also need time and money to shine. Microsoft has a lot to prove here.
 
After that whole SSD campaign about double the speed, they only gain 8 seconds on the XSX, and are even slower in some games. Not really much to celebrate there.

After this whole,

- It eats monsters for breakfast,
- 12 TF, world's most powerful console,
- More CUs than PS5,
- Only console with full RDNA2,
- Best place to play multiplatform games

campaign, only to get whacked by an overclocked 10 TF, RDNA1 system with no VRR, no hardware ray tracing, and variable clocks.

We know it hurts so we try to be gentle


Edit: it doesn't take much, just simple facts, to upset the Xbox cult lol
 
Last edited:

ethomaz

Banned
They aren't the same architecture. TFLOPS are meaningless when comparing two different designs.
The same can be said for the Series X and PS5.
Big Vega had bottlenecks that didn't allow it to use all of its TF power... AMD improved that a lot in the RX 5700 XT.

PS5 has custom silicon exactly to remove, as much as possible, those bottlenecks found in AMD GPUs.

So it is not crazy to think PS5's APU can perform better than Series X's APU even while being smaller with fewer CUs/TFs... plus PS5 has a lot of advantages in its silicon not found on the Series X.

Anybody that watched Road to PS5 is not surprised by the actual results in games... in fact, a lot of people expected PS5 to perform better, and that includes devs, journos, etc. whom Xbox fans called bullshit on, due to the "more TFs" view that was never an absolute truth even on PC.

To be fair, some posters here still call me delusional lol, even when the games are showing the reality.

After that whole SSD campaign about double the speed, they only gain 8 seconds on the XSX, and are even slower in some games. Not really much to celebrate there.


Edit: it doesn't take much, just some logic, to upset the Sony cult lol

Logic.
The SSD campaign is being shown in games, and it is real.

 
Last edited:

AnotherOne

Member
So both are pretty much the same minus a few bugs, but again Sony fanboys find a way to make this somehow a win? Lol, how pathetic do you have to be... play the fuckin' game on whatever system you bought it for. Too many keyboard devs on here pretending they know what they're talking about, it's hilarious 😆
 

Ev1L AuRoN

Member
The same can be said for the Series X and PS5.
Big Vega had bottlenecks that didn't allow it to use all of its TF power... AMD improved that a lot in the RX 5700 XT.

PS5 has custom silicon exactly to remove, as much as possible, those bottlenecks found in AMD GPUs.

So it is not crazy to think PS5's APU can perform better than Series X's APU even while being smaller with fewer CUs/TFs... plus PS5 has a lot of advantages in its silicon not found on the Series X.

Anybody that watched Road to PS5 is not surprised by the actual results in games.

So, Sony was able to improve the RDNA 2 IPC by 18% with customizations? Or Microsoft didn't take full advantage of the design by limiting the clock to 1.8 GHz?
I really don't know; I just think it's too early to call which is best. But yes, I believe that Sony can match the XSX; I'd rather wait a bit more, though.
 

ethomaz

Banned
So, Sony was able to improve the RDNA 2 IPC by 18% with customizations? Or Microsoft didn't take full advantage of the design by limiting the clock to 1.8 GHz?
I really don't know; I just think it's too early to call which is best. But yes, I believe that Sony can match the XSX; I'd rather wait a bit more, though.
Actually... if you want to talk about IPC:

RDNA... 25% improvement over GCN.
Series X... 25% improvement over Xbox One X.
RDNA 2... 15%? (it is not clear how much it improved, but it is double digits) improvement over RDNA.
PS5? We don't have that data.

Series X doesn't have the RDNA 2 IPC improvement... PS5? Not sure, but it probably does; plus there is custom silicon to remove overhead from the CPU and GPU... and additional cache scrubbers.

Even on paper specs, the Series X doesn't have the advantage in all areas... it does indeed have a CU/TF advantage.
 
Last edited:

Elios83

Member
After this whole,

- It eats monsters for breakfast,
- 12 TF, world's most powerful console,
- More CUs than PS5,
- Only console with full RDNA2,
- Best place to play multiplatform games

campaign, only to get whacked by an overclocked 10 TF, RDNA1 system with no VRR.

We know it hurts so we try to be gentle


Edit: it doesn't take much, just simple facts, to upset the Xbox cult lol

You forgot no real ray tracing, or PowerVR tacked-on ray tracing :messenger_tears_of_joy:
And it will be a nightmare for developers to work with because of the variable clocks! They will only be able to overclock the GPU if they choose to turn off the CPU.
 
Last edited:
You forgot no real ray tracing, or PowerVR tacked-on ray tracing :messenger_tears_of_joy:
And it will be a nightmare for developers to work with because of the variable clocks! They will only be able to overclock the GPU if they choose to turn off the CPU.
You are 100% correct. How could I leave out the variable clocks and the lack of hardware ray tracing support? Tsk tsk. I'm dropping the ball today.
 

Ev1L AuRoN

Member
So both are pretty much the same minus a few bugs, but again Sony fanboys find a way to make this somehow a win? Lol, how pathetic do you have to be... play the fuckin' game on whatever system you bought it for. Too many keyboard devs on here pretending they know what they're talking about, it's hilarious 😆
When Microsoft markets their console as a powerhouse and the most powerful console, then yes, the PS5 matching it is amazing, and beating it is hilarious.
 
Last edited:
This must have been a very frustrating watch for fanboys. Just when you wanted to cheer, you get shat on a little. Xbox has worse AF, haa!! 😝 PS5 is missing reflective puddles. 😐💩
 
Maybe the devs aren't wrong and teraflops aren't everything; maybe more factors are involved in what causes better or worse performance.
Maybe they are a theoretical peak indicator for certain tasks and not a good indicator of overall real-world performance.


CUs aren't the only difference between the consoles. The PS5's GPU is clocked faster. Cerny explained in his PS5 talk why they took the approach of fewer CUs but a higher GPU frequency, and why their memory management and I/O optimizations are important too.


Having extra CUs helps in some areas but adds zero in others, where it's better to have the PS5's perks, like the higher GPU frequency.
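(To put a rough number on the clock-bound side: pixel fill rate scales with ROPs times clock, and both consoles are widely reported, though not officially confirmed, to have 64 ROPs. Under that assumption:)

```python
# Pixel fill rate = ROPs * clock. The TF gap favors XSX, but this
# clock-bound metric flips to PS5 (64 ROPs on both is an assumption,
# widely reported rather than officially confirmed).
ROPS = 64
ps5_fill = ROPS * 2.23    # GHz -> Gpixels/s
xsx_fill = ROPS * 1.825

print(f"PS5: {ps5_fill:.1f} Gpix/s, XSX: {xsx_fill:.1f} Gpix/s")
# PS5: 142.7 Gpix/s, XSX: 116.8 Gpix/s -> PS5 ~22% ahead here
```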

We saw from the PC version settings that the settings for these consoles are pretty much the same, so it isn't a case of the game capping performance or something like that to artificially achieve an unneeded parity. We're also seeing pretty similar, basically tied results in most multiplatform games, so it sounds like the real-world performance of both consoles is basically the same.


Don't forget #puddlegate2.0!


Now seriously, I assume they'll patch that reflections issue; same goes for the texture filtering looking worse on XSX.

I agree. I mean, take a look at the NVIDIA 3080 cards and the 6800 XT cards. When you compare them, the 3080 has double the cores, but when the cards were tested in games, yes, ray tracing is much slower on the 6800 XT due to the cores, but in real-world gaming the 6800 XT is actually able to produce better framerates due to the speed they're running at.
 

Gediminas

Banned
After that whole SSD campaign about double the speed, they only gain 8 seconds on the XSX, and are even slower in some games. Not really much to celebrate there.


Edit: it doesn't take much, just some logic, to upset the Sony cult lol

You didn't get to pre-order a PS5. Poor boy, try next year. For now, play with the inferior console. Piling on the far superior console won't help your cause :) you're just going to look like this 🤡
 
Last edited:

Great Hair

Banned


Carbon copies, but the PS5 has a better AF value than the XSX. I'm guessing at least double: 8x vs. 16x AF?


TOP PS5, BOTTOM XSX (better AF on PS5) (cylindrical object, floor textures)



TOP PS5, BOTTOM XSX (better AF on PS5) (textures, brick wall left)



TOP PS5, BOTTOM XSX (better AF on PS5) (green plastic cover)



TOP PS5, BOTTOM XSX (better AF on PS5)



PS5 has to render more stuff on screen! :p

 
Why are people pretending that poorly optimized cross-gen ports are benchmarks for system performance?
Xbox 360 vs. PS3 (the Xbox had the better development environment and maintained the performance crown throughout the generation, from the start, including in multiplatform games)
Xbox One vs. PS4 (the PS4 had the better development environment and maintained the performance crown throughout the generation, from the start, including in multiplatform games)

See a pattern here?
 

priba76br

Neo Member
Again, RDNA 2 is all about speed. Why is it shocking to see a faster-clocked GPU outperform a slower one? What's the surprise? A person in his position should already know that.
By your logic, an RTX 2080 would be faster than a 2080 Ti due to its faster clocks. This is obviously not the case. I also remember the Xbox One's higher clocks vs. the PS4, and everybody knows how that turned out... LOL
 

Elios83

Member
Why are people pretending that poorly optimized cross-gen ports are benchmarks for system performance?

Because they are poorly optimized on both, so it's equal ground, and in the past they have always been benchmarks?
The 900p vs. 1080p difference we saw last gen at launch on XB1 vs. PS4 stayed there for the rest of the gen.
The struggles developers had with multiplatform games on PS3 vs. 360 stayed there for almost the whole generation (except for the last two years).
The huge superiority of the original Xbox vs. PS2 was already fully showcased by a launch title like DOA3, even though the PS2 already had second-gen software when the Xbox launched.
 
Last edited:

Handy Fake

Member
By your logic, an RTX 2080 would be faster than a 2080 Ti due to its faster clocks. This is obviously not the case. I also remember the Xbox One's higher clocks vs. the PS4, and everybody knows how that turned out... LOL
RDNA2 gets increased performance gains from faster clocks. RDNA1 tended to have diminishing returns on performance increases as you clocked higher.
 
Last edited:

priba76br

Neo Member
First case?
There are a lot of cases, even within AMD's lineup.

The 5700 XT is smaller with fewer CUs and outperforms the big Vega with way more CUs... just to give one example.
Different architectures, LOL. Just google comparisons of RDNA GPUs, where even at a matched TF number, more CUs still beat a higher clock.
 

ethomaz

Banned
Different architectures, LOL. Just google comparisons of RDNA GPUs, where even at a matched TF number, more CUs still beat a higher clock.
A 20-CU GPU at 2000 MHz will perform better than a 40-CU GPU at 1000 MHz.
That is true for any GPU.
The only exceptions are when there are bandwidth differences between the two cases.

I have no idea where you found that claim lol

BTW, the Series X and PS5 are different architectures.
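(A toy illustration of that claim, with hypothetical configs: identical theoretical TFLOPS, but everything tied to clock, like the rasterizer, ROPs, and caches, runs twice as fast on the narrow part, bandwidth permitting:)

```python
# Two hypothetical GPUs with identical theoretical FP32 TFLOPS.
configs = {
    "narrow/fast (20 CUs @ 2.0 GHz)": (20, 2.0),
    "wide/slow  (40 CUs @ 1.0 GHz)": (40, 1.0),
}
for name, (cus, ghz) in configs.items():
    tf = cus * 64 * 2 * ghz / 1000.0
    # Fixed-function hardware (rasterizer, ROPs, caches) ticks at the
    # core clock, so it runs proportionally faster on the fast part.
    print(f"{name}: {tf:.2f} TF, fixed-function clock {ghz:.1f} GHz")
# narrow/fast (20 CUs @ 2.0 GHz): 5.12 TF, fixed-function clock 2.0 GHz
# wide/slow  (40 CUs @ 1.0 GHz): 5.12 TF, fixed-function clock 1.0 GHz
```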
 
Last edited:

geordiemp

Member
Why is he a douchebag? Going by the official numbers, a 12.1 TF console should show better performance than a 10.2 TF console. It's only natural that a person in his position, doing that job, might wonder why that is the case; it's just natural curiosity.
People need to stop being so touchy about these things. It's not life or death, or an insult to your family, if someone makes a comment like that.

Why does a 20 TF card perform the same as, and often better than, a 30 TF card in non-RT applications?

Why are people obsessed with one number? It really is odd.

Different architectures, LOL. Just google comparisons of RDNA GPUs, where even at a matched TF number, more CUs still beat a higher clock.

Now google how many PC parts have 14 CUs per shader array. Or, even easier, find any good PC gaming GPU, from anyone, with more than 10 CUs per shader array.

Or, even easier, find any PC GPU that intends to use 14 CUs per shader array. How does it run?

If you can't find any, then come back and tell us.
 
Last edited:

priba76br

Neo Member
A 20-CU GPU at 2000 MHz will perform better than a 40-CU GPU at 1000 MHz.
That is true for any GPU.
The only exceptions are when there are bandwidth differences between the two cases.

I have no idea where you found that claim lol

BTW, the Series X and PS5 are different architectures.
Google it and you may be surprised. Try the 5700 vs. an overclocked 5600 XT, for example.
 

cebri.one

Member
Actually... if you want to talk about IPC:

RDNA... 25% improvement over GCN.
Series X... 25% improvement over Xbox One X.
RDNA 2... 15%? (it is not clear how much it improved, but it is double digits) improvement over RDNA.
PS5? We don't have that data.

Series X doesn't have the RDNA 2 IPC improvement... PS5? Not sure, but it probably does; plus there is custom silicon to remove overhead from the CPU and GPU... and additional cache scrubbers.

Even on paper specs, the Series X doesn't have the advantage in all areas... it does indeed have a CU/TF advantage.

Nah, RDNA1 CUs do not have RT-accelerated hardware; both consoles have RDNA2 CUs. They are missing the Infinity Cache, though.

The IPC gain of RDNA2 over RDNA1 is around 30%; the rest is clocks and cache config.
 
So even if we call this one a draw, that's still a big L for the "monster eater." Can't wait to get my PS5 in Dec. Multiplats on par with, and sometimes better than, the 12 TFLOPS beast, plus high-quality exclusives and the haptic feedback and adaptive triggers. PS5 truly is the place to play. No wonder Spencer showed concern about apathy towards his product.
 

ethomaz

Banned
By your logic, an RTX 2080 would be faster than a 2080 Ti due to its faster clocks. This is obviously not the case. I also remember the Xbox One's higher clocks vs. the PS4, and everybody knows how that turned out... LOL
RTX 2080: 2944 SPs, 184 TMUs, 64 ROPs, 1710 MHz, 448 GB/s
RTX 2080 Ti: 4352 SPs, 272 TMUs, 88 ROPs, 1545 MHz, 616 GB/s

RTX 2080 Ti: +48% SPs, +48% TMUs, +38% ROPs, +38% bandwidth
RTX 2080: +11% clock

You can see why the RTX 2080 Ti performs better: it is better in everything except a slightly slower clock.
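(Those percentages are easy to verify from the spec lines above; a quick sketch:)

```python
# Per-spec advantage of the RTX 2080 Ti over the RTX 2080.
rtx_2080    = {"SPs": 2944, "TMUs": 184, "ROPs": 64, "MHz": 1710, "GB/s": 448}
rtx_2080_ti = {"SPs": 4352, "TMUs": 272, "ROPs": 88, "MHz": 1545, "GB/s": 616}

for key in rtx_2080:
    delta = rtx_2080_ti[key] / rtx_2080[key] - 1
    print(f"{key}: {delta:+.0%}")
# SPs: +48%, TMUs: +48%, ROPs: +38%, MHz: -10%, GB/s: +38%
```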

Google it and you may be surprised. Try the 5700 vs. an overclocked 5600 XT, for example.
Yeah, that example...

RX 5600 XT: 2304 SPs, 144 TMUs, 64 ROPs, 192-bit bus, 6 GB
RX 5700: 2304 SPs, 144 TMUs, 64 ROPs, 256-bit bus, 8 GB

What am I supposed to see here lol
A comparison where the memory bandwidth becomes the bottleneck?

You can overclock all you want... it won't perform better than the 5700 if the game requires more bandwidth.

Don't they both have 36 CUs?
The biggest differences are memory size and bandwidth... everything else is equal.
That makes me think he did not understand what I wrote :pie_thinking:
 
Last edited:

cebri.one

Member
Why does a 20 TF card perform the same as, and often better than, a 30 TF card in non-RT applications?

To be completely fair, the new NVIDIA TF numbers for Ampere are complete BS. Even the A100 is only ~20 TF. Taking into account the number of shader cores, ops per clock, and the clock, the number is closer to 18-19 TF, which is more realistic. An NVIDIA TF is still worth more than an AMD TF, though.

Edit: memory bandwidth is another story; there, AMD's L3 cache allows a 512 GB/s card to compete with a 760 GB/s card.
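(For context on why the Ampere figure is inflated: each Ampere SM has 128 FP32 lanes, but 64 of them share a datapath with INT32, so the headline TF number is a best case. A rough sketch with the RTX 3080's public numbers:)

```python
# RTX 3080: 68 SMs, 128 FP32 lanes per SM, ~1.71 GHz boost clock.
# Half of each SM's FP32 lanes can issue FP32 only when they are not
# busy with INT32 work, so real throughput sits between two bounds.
sms, lanes, clock_ghz = 68, 128, 1.71

peak  = sms * lanes * 2 * clock_ghz / 1000.0         # all 128 lanes on FP32
floor = sms * (lanes // 2) * 2 * clock_ghz / 1000.0  # shared lanes all on INT32

print(f"peak: {peak:.1f} TF, floor: {floor:.1f} TF")
# peak: 29.8 TF, floor: 14.9 TF -> games land somewhere in between
```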
 
Last edited:

geordiemp

Member
To be completely fair, the new NVIDIA TF numbers for Ampere are complete BS. Even the A100 is only ~20 TF. Taking into account the number of shader cores, ops per clock, and the clock, the number is closer to 18-19 TF, which is more realistic. An NVIDIA TF is still worth more than an AMD TF, though.

And Microsoft's extended 14-CU shader arrays in the XSX, with 56 CUs across only 4 arrays, are normal for GPU gaming cards?

Same discussion. If you could just linearly extend shader arrays and not lose performance, then why not a 20-CU shader array for the 6800? You'd only need 2 shader engines and 4 arrays, saving a whole load of die space at the same TF.

Why bother with 8 shader engines? What a waste. /s
 
Last edited:

ethomaz

Banned
Nah, RDNA1 CUs do not have RT-accelerated hardware; both consoles have RDNA2 CUs. They are missing the Infinity Cache, though.

The IPC gain of RDNA2 over RDNA1 is around 30%; the rest is clocks and cache config.
RT is a separate module in RDNA 2... you can have the TMU/RT units at a different version from the CUs.

Can you share where you read about a 30% IPC increase from RDNA to RDNA 2? The only slide I found said it was double digits (more like >10%).

What we know as fact is that the Series X doesn't have the CU IPC increase from RDNA to RDNA 2.
 
Last edited:

reinking

Gold Member
Why are people pretending that poorly optimized cross-gen ports are benchmarks for system performance?
Because that is all there is to go on right now? Why are others pretending they do not matter at all? If things were reversed, we would be seeing the exact same posts, just with the usernames swapped.

I find it funny that both sides are having to scrutinize DF videos to prove an insignificant point. I mean, when I see people say "I will wait to see what DF says," as if they need that info before they can enjoy a game, I find it silly.
 
Last edited:

cebri.one

Member
RT is a separate module in RDNA... you can have the TMU/RT units at a different version from the CUs.

Mmm... not sure I follow. RDNA1 doesn't have RT-accelerated hardware. You can dedicate the shader cores to computing intersections and so on, but the performance is horrible; NVIDIA enabled ray tracing on the 1000 series, and I believe not even the 1080 Ti could produce acceptable results. On RDNA2, RT is part of the CUs. The TMUs are part of the CUs as well.

[RDNA 2 CU diagram]


And Microsoft's extended 14-CU shader arrays in the XSX, with 56 CUs across only 4 arrays, are normal for GPU gaming cards?

Same discussion. If you could just linearly extend shader arrays and not lose performance, then why not a 20-CU shader array for the 6800? You'd only need 2 shader engines and 4 arrays, saving a whole load of die space at the same TF.

Why bother with 8 shader engines? What a waste. /s

I do agree that 4 additional CUs per shader array is odd and can be detrimental. In the same way, RDNA2 cards as well as the PS5 clock very high and the XSX doesn't, which could be another hint that it is not running in its most optimal config.

In addition, there are two shader arrays with all their CUs active, but the number of rasterizers, RBs, and the L1 cache are the same. So there could be bottlenecking there; the GPU may have issues distributing loads efficiently as well. We'll see. I guess XSX first-party titles will be optimized to overcome the issues, and it will be mostly equal on multiplats.
 