
Ok, We need to know this: Is the XSX more powerful than the PS5 or not?

John Wick

Member
Correct. DF delisted their fixed video because the supposed reloading fix doesn't work every time, probably because these scenes are playable and dynamic: there's actually more than one of them, and the load can differ depending on player input.

Also, you only have to look at the first comparison DF did of DMC5: Series X outperformed PS5 at identical settings and resolutions in both ray tracing modes. Then there's Watch Dogs, where PS5 doesn't reflect the puddles but Xbox does, even Series S.
Remember, you're the clown that said "Series S would be pushing the PS5 hard". Your opinion is worth Jack and Shit, and Jack just left town.
DF already explained reflecting the puddles was a bug.
 

John Wick

Member
NXGamer did the testing on CoD too; he said it's a bug.

Regarding DMC5, DF said both consoles are like for like in the 1080p RT mode. They also said that in the RT Quality mode (4K) XSX has a very small advantage in some scenes, with only a few frames' difference, and PS5 manages to match XSX most of the time.

Watchdogs (DF):





Are you really implying XSS can do RT in puddles but PS5 can't? Lmao
You can't take this fangirl seriously. He thinks once MS sort out the magical GDK Series S will push the PS5 hard😂😂😂
That is Timdog and Dealer level of stupidity and fanboyism.
 

John Wick

Member
Don't let these comparisons bring you down mate.
Xbox still has more teraflops, nobody can take that away from you!

EXACTLY! It can still do the theoretical 12tf
 

truth411

Member
Don't know why this thread is still going. Some folks just need to rewatch Cerny's Road to PS5 presentation and pay attention.
Theoretical teraflops alone aren't an absolute indicator of system performance or capabilities. He even explained why in this slide and commentary.

[Slide from Cerny's Road to PS5 presentation]


Clock speed matters, not just for the theoretical teraflop number.
The GPU caches are directly tied to the clock speed: the higher the clock, the more bandwidth the caches have to import/export data. That matters especially if you have a ludicrously fast SSD/IO with no bottlenecks.
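To put rough numbers on that, here's a back-of-the-envelope sketch. The bytes-per-clock figure is a made-up placeholder, not a published spec for either console; the only real inputs are the two GPU clocks.

```python
# Rough sketch: on-chip cache bandwidth scales linearly with GPU clock.
# BYTES_PER_CLOCK is a hypothetical aggregate cache-path width, NOT an official spec.

def cache_bandwidth_gbps(clock_ghz: float, bytes_per_clock: int) -> float:
    """Peak cache bandwidth in GB/s = bytes moved per cycle * cycles per second."""
    return clock_ghz * bytes_per_clock

BYTES_PER_CLOCK = 1024  # placeholder width, shared by both examples

ps5 = cache_bandwidth_gbps(2.23, BYTES_PER_CLOCK)   # PS5 GPU: variable clock, up to 2.23 GHz
xsx = cache_bandwidth_gbps(1.825, BYTES_PER_CLOCK)  # Series X GPU: fixed 1.825 GHz

print(f"PS5 cache bandwidth: ~{ps5:.0f} GB/s")
print(f"XSX cache bandwidth: ~{xsx:.0f} GB/s")
print(f"PS5 advantage: {ps5 / xsx - 1:.0%}")  # ~22%, i.e. exactly the clock ratio
```

Whatever the real cache widths are, the point stands: the per-clock width cancels out, so the cache bandwidth gap is just the ~22% clock gap.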

I still say it's probably only exclusives that can really take advantage of this. Also, a lot of the PS5's capability is wrapped up in its Geometry Engine, which no one has really touched yet.

At the end of the day PS5 exclusives will look the best but multiplats will look similar on both consoles imo.
 

assurdum

Banned
To MS's credit, one area where Series X should see a notable advantage is texel fillrate. They have more TMUs on their GPU, which gives them a texel advantage even if their clocks are lower.

The texel advantage should let them do more with leveraging texels for texture and non-texture tasks, beyond simply some ray-tracing performance benefits. But it's also as you mention; whether an advantage like that shows up readily comes down to whether the hardware is specifically targeted.

Microsoft focusing on (hopefully) optimizing the high-level APIs is a step in the right direction because, truth be told, you don't want to be messing with hand-written assembly unless you HAVE to, either because the high-level tools aren't that good or compatible with the hardware, or because you want every single last bit of performance out of the hardware. The latter is generally not something devs do until the later years of a console's lifecycle.

That said, MS's issue could be that their high-level tools are standardizing themselves across a suite of platforms. We don't know how generalized the GDK tools are, but I'd like to think they are taking the approach of high-level API tools that are still tailored for the specific platform hardware, even if they share root commonalities. If it's just a flat standard with little tailoring for the specific platforms the GDK covers, that could be a problem, especially if low-level/hand-written assembly access is not provided easily due to obfuscation through abstraction (or at all).
I think another major problem for properly exploiting the CUs' parallelism is the split RAM/bandwidth. Everyone says it's perfectly fine, but I'd like to know what happens when the CPU needs more than about 2 GB (I don't recall whether it's split into 2 GB chunks?). Let's imagine the CPU needs to use more memory: how is the GPU bandwidth not bottlenecked, or vice versa? The bandwidth is nominally separate, but not really separate within the full bandwidth available; the pools can't be "fully" utilised independently at the same time because both processors will obviously fight over the same bus. It's something I don't get, and it seems extremely problematic to me to feed data uniformly to so many CUs without wrecking the parallelism, given these continuous "leaps" in bandwidth usage between the two processors. I don't know if what I mean is clear, English obviously isn't my first language, but I think MS has been extremely shallow and "naive" in taking this approach without thinking about how it could negatively impact performance.
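If it helps, here's a toy model of the worry. The pool bandwidths are the public spec numbers, but the straight subtraction is my own oversimplification; a real memory controller interleaves and arbitrates traffic far more cleverly than this.

```python
# Toy model of shared-bus contention on Series X. Pool bandwidths are public specs;
# the subtraction model is a deliberate oversimplification to show the shape of the problem.

FAST_POOL_BW = 560.0  # GB/s when the GPU hits the 10 GB "GPU-optimal" pool
SLOW_POOL_BW = 336.0  # GB/s when hitting the remaining 6 GB pool

def gpu_bandwidth_left(pool_bw: float, cpu_traffic_gbps: float) -> float:
    """Whatever the CPU pulls over the shared bus is bandwidth the GPU can't use."""
    return max(pool_bw - cpu_traffic_gbps, 0.0)

for cpu_traffic in (20, 50, 80):  # hypothetical CPU demand levels in GB/s
    fast = gpu_bandwidth_left(FAST_POOL_BW, cpu_traffic)
    slow = gpu_bandwidth_left(SLOW_POOL_BW, cpu_traffic)
    print(f"CPU using {cpu_traffic:>2} GB/s -> GPU ceiling ~{fast:.0f} GB/s (fast pool), "
          f"~{slow:.0f} GB/s (slow pool)")
```

In practice contention overheads can cost more than a clean subtraction, which is exactly the concern.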
 

Fredrik

Member
Clock speed matters, not just for the theoretical teraflop number.
The GPU caches are directly tied to the clock speed: the higher the clock, the more bandwidth the caches have to import/export data. That matters especially if you have a ludicrously fast SSD/IO with no bottlenecks.
Yeah I think this is it. People tried to downplay the higher clockspeed prelaunch but they’re extremely high for a reason and it certainly seems like Cerny knows his stuff, the only concern was how they would cool it but they seem to have managed that well enough.

As PC tech advances I can see some titles which take advantage of a higher CU count do better on XSX, but for the first years I think the norm will be PS5>XSX on the majority of multiplats.

(speculation)
 

assurdum

Banned
Yeah I think this is it. People tried to downplay the higher clockspeed prelaunch but they’re extremely high for a reason and it certainly seems like Cerny knows his stuff, the only concern was how they would cool it but they seem to have managed that well enough.

As PC tech advances I can see some titles which take advantage of a higher CU count do better on XSX, but for the first years I think the norm will be PS5>XSX on the majority of multiplats.

(speculation)
It all remains to be seen whether they can fully utilise the CUs' potential with such a bandwidth setup.
 

Fredrik

Member
It all remains to be seen whether they can fully utilise the CUs' potential with such a bandwidth setup.
If it happens it’ll come when it comes on PC I think. At the moment you can still run a game well enough on an older GPU with fewer CUs but as graphic tech advances and really push new GPUs to their limit it’s harder to keep up by brute forcing with overclocking. I’m still gaming on my clocked 1080ti lightning z, ultra on Cyberpunk, it works great, for now.
 

assurdum

Banned
If it happens it’ll come when it comes on PC I think. At the moment you can still run a game well enough on an older GPU with fewer CUs but as graphic tech advances and really push new GPUs to their limit it’s harder to keep up by brute forcing with overclocking. I’m still gaming on my clocked 1080ti lightning z, ultra on Cyberpunk, it works great, for now.
The problem is the Series X doesn't have Infinity Cache. So I really don't understand how it can be pushed properly with only a split unified bandwidth.
 
If it happens it’ll come when it comes on PC I think. At the moment you can still run a game well enough on an older GPU with fewer CUs but as graphic tech advances and really push new GPUs to their limit it’s harder to keep up by brute forcing with overclocking. I’m still gaming on my clocked 1080ti lightning z, ultra on Cyberpunk, it works great, for now.
You have to keep in mind the PS5's cache scrubbers since nothing else on the market has this. Remember, the compute unit/tflop count is misleading. PS5's GPU has significantly less overhead compared to any other GPU on the market. When you have a fine-grain eviction of data, you save on cache misses and the need to constantly flush the entirety of caches. Clock speeds matter much more when the GPU doesn't suffer from the typical hang-ups all GPUs have. Not to mention the savings on memory bandwidth thanks to less re-fetching of data. I'm not saying PS5's GPU is the most powerful out there, but considering its performance with 36CU it's probably the most efficient.
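For anyone unfamiliar with the scrubber point, here's a tiny illustration of why fine-grained eviction saves memory traffic. The sizes are invented purely for the example; they are not measured PS5 figures.

```python
# Illustration: full cache flush vs. fine-grained "scrubbing" when a small asset
# is replaced in RAM. All sizes are made up for the example.

CACHE_SIZE_MB = 4.0    # hypothetical GPU L2 size
STALE_DATA_MB = 0.25   # portion of the cache actually invalidated by the new asset

full_flush_refetch = CACHE_SIZE_MB   # flush everything, then re-fetch whatever is still needed
scrubbed_refetch   = STALE_DATA_MB   # evict only the stale lines, re-fetch only those

print(f"Full flush re-fetches ~{full_flush_refetch:.2f} MB from GDDR6")
print(f"Scrubbers re-fetch    ~{scrubbed_refetch:.2f} MB from GDDR6")
print(f"Memory traffic saved:  {1 - scrubbed_refetch / full_flush_refetch:.0%}")
```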
 

BuffNTuff

Banned
Don't know why this thread is still going. Some folks just need to rewatch Cerny's Road to PS5 presentation and pay attention.
Theoretical teraflops alone aren't an absolute indicator of system performance or capabilities. He even explained why in this slide and commentary.

[Slide from Cerny's Road to PS5 presentation]


Clock speed matters, not just for the theoretical teraflop number.
The GPU caches are directly tied to the clock speed: the higher the clock, the more bandwidth the caches have to import/export data. That matters especially if you have a ludicrously fast SSD/IO with no bottlenecks.

I still say it's probably only exclusives that can really take advantage of this. Also, a lot of the PS5's capability is wrapped up in its Geometry Engine, which no one has really touched yet.

At the end of the day PS5 exclusives will look the best but multiplats will look similar on both consoles imo.

Multiplats will probably be much the same, with slightly lower dynamic res on XSX and also a less stable framerate in the form of jitters, hitching, and stuttering.

“Marginal” quantitative difference, huge qualitative difference.
 

Boss Man

Member
There might be some wiggle room with the wording here. Is a car with 1000 horsepower and square wheels more powerful than a car with 700 horsepower and round wheels? I think the Series X is objectively more powerful, but it's looking less and less like it will end up with better looking games. I'd still be surprised if the PS5's apparent advantage on multiplatform games continued, but that's a much different goalpost than what I expected before launch.
 

S73v3

Banned
I suppose the 10 GB VRAM cap might be a problem for Series X.

It's kinda like the GTX 970(?) 3.5 GB situation, where a chunk of RAM was forced through a lower-bandwidth part of the bus, or clogged into a channel, or whatever.

Not as extreme here for sure, but a bottleneck it is. I've been told that in the tech world you cannot just add bandwidths together, and mixing two bandwidths (if the Xbox needed 10.5 GB of VRAM) can be messy.

I've also read somewhere that PS5's "primitive shaders" are not strictly just proto mesh shaders either. But whatever lol.

I've been iffy on buying into PS console hardware and the ecosystem since Sony has been randomly porting games to PC. (Most of their games aren't for me anyways.)
 
I think another major problem for properly exploiting the CUs' parallelism is the split RAM/bandwidth. Everyone says it's perfectly fine, but I'd like to know what happens when the CPU needs more than about 2 GB (I don't recall whether it's split into 2 GB chunks?). Let's imagine the CPU needs to use more memory: how is the GPU bandwidth not bottlenecked, or vice versa? The bandwidth is nominally separate, but not really separate within the full bandwidth available; the pools can't be "fully" utilised independently at the same time because both processors will obviously fight over the same bus. It's something I don't get, and it seems extremely problematic to me to feed data uniformly to so many CUs without wrecking the parallelism, given these continuous "leaps" in bandwidth usage between the two processors. I don't know if what I mean is clear, English obviously isn't my first language, but I think MS has been extremely shallow and "naive" in taking this approach without thinking about how it could negatively impact performance.

Don't worry, your English is more or less fine. Even the screwy bits, I think I was able to see what you meant :p

On the issue of bandwidth contention, both MS's and Sony's systems have to deal with it, since the CPU, GPU, audio etc. all have to eat into the available bandwidth. The SSD I/O sub-system in each console also contends with main system bandwidth. However, it is true that Sony's approach with fully unified GDDR6 memory is the preferable way to do it. Some of these 3P multiplats on Series X, in addition to possibly underutilizing the GPU (sticking with a similar saturation as on PS5, which hurts Series X due to lower clocks), might also be using more than 10 GB of memory for graphics-oriented data.

That also hurts Series X performance because, while I don't know exactly how much the bandwidth really drops (for example there was some blog from early this year saying effective bandwidth was lower than the PS4's; I honestly don't think that is anywhere near the case 😆 ), I'm guessing there's at least enough of a hit on bandwidth when a game tries dipping into memory outside of the fast 10 GB for graphics that it could be causing issues.

Combine that with what I was mentioning earlier about the same multiplats on Series X needing at least 44 CUs in order to match performance on PS5 (at least in things like the cache bandwidths, texture fillrate etc.), and I think at least the bigger issues outside of CU saturation can be fixed relatively soon. MS are probably working on ways to help manage data assets across the fast and slow memory pools, so that if games happen to be accessing data in the slower pool, that should hopefully cease. But stuff like the CU saturation will come down to the devs themselves and whether they tailor their games for it.

I don't think that should be too hard, though it does require some time and resources they could've been in a crunch for, especially if certain features of the GDK weren't ready until late June (and keep in mind devs would've still needed to integrate the updates into their whole workflow pipeline, create backups of various files, test out compatibility, etc.).
 

Tschumi

Member
They're playing different games:

PS5 is a holistic, low-cost solution which redefines where power comes from.

XSX is just a powerful machine.

The proof of the pudding thus far shows that PS5 is performing more consistently, and that XSX is mysteriously gimped in certain isolated instances.

But yeah, as most have said, we won't really know for a few years. I think XSX will be better used in future, but I also think the PS5's speed isn't being totally used yet, either.

I think XSX might turn out to be Xbone Mk II: overpriced and misguided and in dire need of a "Pro" edition to redefine what they're trying to do.

The latest lesson, then: if you wanna be Goliath (hugely strong), the other guy is gonna be David (speedy and nimble, packing a punch), and that's what's happened.
 

Heisenberg007

Gold Journalism
They're playing different games:

PS5 is a holistic, low-cost solution which redefines where power comes from.

XSX is just a powerful machine.

The proof of the pudding thus far shows that PS5 is performing more consistently, and that XSX is mysteriously gimped in certain isolated instances.

But yeah, as most have said, we won't really know for a few years. I think XSX will be better used in future, but I also think the PS5's speed isn't being totally used yet, either.

I think XSX might turn out to be Xbone Mk II: overpriced and misguided and in dire need of a "Pro" edition to redefine what they're trying to do.

The latest lesson, then: if you wanna be Goliath (hugely strong), the other guy is gonna be David (speedy and nimble, packing a punch), and that's what's happened.

Yeah, that's a more level-headed response. Both consoles are pretty good, tbh, especially considering the price. One just happens to be more performant, and it is easy to see why.

Going forward, if anything, I think PS5 has a better chance to widen the gap. It has more exotic (read: new) things that devs can tap into. The Geometry Engine, the use of cache scrubbers, and the complete off-load to DC units aren't something that devs are entirely familiar with yet. More importantly, that I/O setup is completely new. Once devs start utilizing it, things may change rather drastically. That's the "special sauce" Sony has put the most focus on, banking on it for the true next-gen leap.

My pre-launch assumption was that XSX would perform slightly better in the first year or two, because of how it builds on the last-gen foundation with a faster CPU and GPU in familiar veins, and that PS5 would sneak ahead when games become more I/O dependent.
 

Fredrik

Member
The problem is the Series X doesn't have Infinity Cache. So I really don't understand how it can be pushed properly with only a split unified bandwidth.
I have no concern that the hardware itself has issues; it'll take optimization wizardry on both to max them out. I don't trust the tools explanation, I just think it'll take some time before we see the benefits of having 56 CUs, longer than it'll take to see the benefit of a higher clockspeed. But AMD is up at 80 CUs now on PC, so it's not XSX that is doing things strangely here. I absolutely think the low-CU-count, high-clock strategy will show its limitations as things evolve. Only time will tell if that happens before a Pro happens, though. I'm not blindly trusting Cerny, but I think they went with the right strategy at the right time here.
 

assurdum

Banned
Don't worry, your English is more or less fine. Even the screwy bits, I think I was able to see what you meant :p

On the issue of bandwidth contention, both MS's and Sony's systems have to deal with it, since the CPU, GPU, audio etc. all have to eat into the available bandwidth. The SSD I/O sub-system in each console also contends with main system bandwidth. However, it is true that Sony's approach with fully unified GDDR6 memory is the preferable way to do it. Some of these 3P multiplats on Series X, in addition to possibly underutilizing the GPU (sticking with a similar saturation as on PS5, which hurts Series X due to lower clocks), might also be using more than 10 GB of memory for graphics-oriented data.

That also hurts Series X performance because, while I don't know exactly how much the bandwidth really drops (for example there was some blog from early this year saying effective bandwidth was lower than the PS4's; I honestly don't think that is anywhere near the case 😆 ), I'm guessing there's at least enough of a hit on bandwidth when a game tries dipping into memory outside of the fast 10 GB for graphics that it could be causing issues.

Combine that with what I was mentioning earlier about the same multiplats on Series X needing at least 44 CUs in order to match performance on PS5 (at least in things like the cache bandwidths, texture fillrate etc.), and I think at least the bigger issues outside of CU saturation can be fixed relatively soon. MS are probably working on ways to help manage data assets across the fast and slow memory pools, so that if games happen to be accessing data in the slower pool, that should hopefully cease. But stuff like the CU saturation will come down to the devs themselves and whether they tailor their games for it.

I don't think that should be too hard, though it does require some time and resources they could've been in a crunch for, especially if certain features of the GDK weren't ready until late June (and keep in mind devs would've still needed to integrate the updates into their whole workflow pipeline, create backups of various files, test out compatibility, etc.).
No offence, but I'm not entirely convinced. I'm not saying the Series X can't offer proper performance, but I don't believe at all that what we're seeing is just a matter of early development. In particular, I don't think the contention only happens when the GPU needs more than 10 GB of RAM, but also whenever the CPU uses more memory channels, because inevitably that will affect the effective bandwidth on both sides. I don't recall the exact number, but isn't it around 52 GB/s per chip? Now let's say the CPU needs more RAM than a single chip offers, say 4 GB (2 GB per chip; again, I could be totally wrong on these numbers): how can the GPU reach 560 GB/s if 52+52 GB/s is occupied by the CPU at the same time? The more RAM the CPU requires, the less bandwidth the GPU will have, and vice versa. That compromises performance terribly on the bandwidth side, and I don't know what magical software solution could solve it without compromise, and that's before we even talk about full CU utilisation with such continuous contention on the bandwidth. I'm more than sure AC Valhalla has exactly this problem. I'm not saying the Series X won't offer better performance, but an issue like this could stop the hardware from hitting its maximum and prevent it from outpacing the PS5. IMO.
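For reference, my understanding of the Series X memory layout with the arithmetic spelled out. The per-chip and pool numbers are from the public specs; the "CPU camping on chips" scenario at the end is a deliberately crude illustration, not how the memory controller actually schedules requests.

```python
# Series X memory arithmetic from the public specs; the contention scenario is illustrative.

CHIP_BW_GBPS = 56.0   # 14 Gbps GDDR6 on a 32-bit channel = 56 GB/s per chip
TOTAL_CHIPS  = 10     # six 2 GB chips + four 1 GB chips = 16 GB total

fast_pool_bw = CHIP_BW_GBPS * 10  # the 10 GB pool is striped across all ten chips
slow_pool_bw = CHIP_BW_GBPS * 6   # the extra 6 GB lives only on the six 2 GB chips

print(f"Fast 10 GB pool: {fast_pool_bw:.0f} GB/s")   # 560 GB/s
print(f"Slow 6 GB pool:  {slow_pool_bw:.0f} GB/s")   # 336 GB/s

# Crude illustration: if CPU traffic kept two chips busy for a moment, the GPU's
# ceiling in that moment drops by those chips' share of the bus.
busy_chips = 2
print(f"GPU ceiling with {busy_chips} chips serving the CPU: "
      f"{CHIP_BW_GBPS * (TOTAL_CHIPS - busy_chips):.0f} GB/s")  # 448 GB/s
```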
 

Lysandros

Member
I have no concern that the hardware itself has issues; it'll take optimization wizardry on both to max them out. I don't trust the tools explanation, I just think it'll take some time before we see the benefits of having 56 CUs, longer than it'll take to see the benefit of a higher clockspeed. But AMD is up at 80 CUs now on PC, so it's not XSX that is doing things strangely here. I absolutely think the low-CU-count, high-clock strategy will show its limitations as things evolve. Only time will tell if that happens before a Pro happens, though. I'm not blindly trusting Cerny, but I think they went with the right strategy at the right time here.
AMD's 80 CU GPUs have four shader engines compared to XSX's two for 52 CUs. So yes, 'XSX is doing things a bit strangely'..
 

Heisenberg007

Gold Journalism
No offence, but I'm not entirely convinced. I'm not saying the Series X can't offer proper performance, but I don't believe at all that what we're seeing is just a matter of early development. In particular, I don't think the contention only happens when the GPU needs more than 10 GB of RAM, but also whenever the CPU uses more memory channels, because inevitably that will affect the effective bandwidth on both sides. I don't recall the exact number, but isn't it around 52 GB/s per chip? Now let's say the CPU needs more RAM than a single chip offers, say 4 GB (2 GB per chip; again, I could be totally wrong on these numbers): how can the GPU reach 560 GB/s if 52+52 GB/s is occupied by the CPU at the same time? The more RAM the CPU requires, the less bandwidth the GPU will have, and vice versa. That compromises performance terribly on the bandwidth side, and I don't know what magical software solution could solve it without compromise, and that's before we even talk about full CU utilisation with such continuous contention on the bandwidth. I'm more than sure AC Valhalla has exactly this problem. I'm not saying the Series X won't offer better performance, but an issue like this could stop the hardware from hitting its maximum and prevent it from outpacing the PS5. IMO.

You are not far off. This article explains very well the disadvantages of a split memory pool of varying speeds.
 
I have no concern that the hardware itself has issues; it'll take optimization wizardry on both to max them out. I don't trust the tools explanation, I just think it'll take some time before we see the benefits of having 56 CUs, longer than it'll take to see the benefit of a higher clockspeed. But AMD is up at 80 CUs now on PC, so it's not XSX that is doing things strangely here. I absolutely think the low-CU-count, high-clock strategy will show its limitations as things evolve. Only time will tell if that happens before a Pro happens, though. I'm not blindly trusting Cerny, but I think they went with the right strategy at the right time here.
With double the number of shader arrays on those cards. The problem is not the total number of CUs, it's the number of CUs per shader array. On AMD's 5700 and (high-end) 6000 cards, and on PS5, there are 10 CUs per shader array (and the arrays are mostly identical across those GPUs). The problem is XSX has 14 CUs per shader array.

XSX is doing things differently from all the high-end AMD GPUs, including PS5. This was a big part of Cerny's presentation: to keep the CUs busy you need available resources, meaning mostly everything that is not CUs.

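Putting numbers on that, using the die-level layouts as I understand them (treat the array counts as my assumption rather than gospel):

```python
# Physical CUs per shader array. PS5 and Series X each disable some CUs for yield
# (36 and 52 active respectively), but the per-array layout is what matters here.

gpus = {
    "RX 5700 XT (Navi 10)": (40, 4),  # 40 CUs, 4 shader arrays
    "RX 6900 XT (Navi 21)": (80, 8),
    "PS5":                  (40, 4),  # 36 active
    "Series X":             (56, 4),  # 52 active
}

for name, (cus, arrays) in gpus.items():
    print(f"{name:>22}: {cus // arrays} CUs per shader array")
```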
 

JackMcGunns

Member
When a game runs at 120fps it typically means the GPU is much less stressed and the game is CPU-bound (XSX and PS5 have roughly the same CPU). So I think we might see a difference with games pushing the GPU harder. I'm no developer, but that makes sense to me.

Is Control Ultimate Edition the first game to run 30fps on both Series X and PS5? If the PS5 runs better with this game, then I see the trend continuing, unless the so called "Tools" issue is real.
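The frame-time arithmetic behind the 120fps point, for what it's worth (plain math, nothing console-specific):

```python
# At 120 fps the whole frame (CPU + GPU work) has to fit into ~8.3 ms, so the CPU's
# share of the budget is far more likely to become the limiting factor than at 30 fps.

for fps in (30, 60, 120):
    print(f"{fps:>3} fps -> {1000 / fps:.2f} ms per frame")
```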

 
Is Control Ultimate Edition the first game to run 30fps on both Series X and PS5?

I believe there's a 30 FPS Quality mode for Assassin's Creed Valhalla on both platforms.

unless the so called "Tools" issue is real.

Still trying to decide if that issue was greatly exaggerated. I have doubts that tools can limit the power of the system by ~3-4 TFs. That's assuming the PS5 is outputting around 8-9 TFs this early in the cycle. Over time developers will get closer to the theoretical maximums.

To avoid any confusion I know the theoretical maximums are 10.28TFs for the PS5 and 12.2TFs for the Series X.
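And for reference, how those theoretical maximums are derived (standard FP32 math: CUs x 64 shader ALUs x 2 ops per clock x clock speed):

```python
# Theoretical FP32 throughput: CUs * 64 ALUs * 2 ops/clock * clock (GHz) / 1000 = TFLOPS

def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(f"PS5:      {tflops(36, 2.23):.2f} TF")   # ~10.28 TF at the 2.23 GHz cap
print(f"Series X: {tflops(52, 1.825):.2f} TF")  # ~12.15 TF at a fixed 1.825 GHz
```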
 
When a game runs at 120fps it typically means the GPU is much less stressed and the game is CPU-bound (XSX and PS5 have roughly the same CPU). So I think we might see a difference with games pushing the GPU harder. I'm no developer, but that makes sense to me.

Is Control Ultimate Edition the first game to run 30fps on both Series X and PS5? If the PS5 runs better with this game, then I see the trend continuing, unless the so called "Tools" issue is real.


Watchdogs Legion also runs at 30 FPS. I'm pretty sure the Sony fans here declared it was superior on PS5.
 

Invalid GR

Member
Don't know if I am wrong, but how can we say developers cannot utilize all 52 CUs on XSX and the tools are not ready, when the 6800/6900 XT on PC are performing as well as they are using almost the same "tools" and CUs?
 
Watchdogs Legion also runs at 30 FPS. I'm pretty sure the Sony fans here declared it was superior on PS5.

I thought it was the tech analysts that determine the superior version. People like Digital Foundry and NXGamer, for example.

As for Watch Dogs Legion, based off what Digital Foundry said, it's basically a tie.


Reading through the review I found this part pretty interesting.

Interestingly, dynamic resolution scaling is also a close match between PS5 and Series X, despite a wide gulf in terms of overall compute power and memory bandwidth between the two consoles. Ubisoft has a very fine grain DRS system here, seemingly capable of adjusting resolution on the fly quickly and with tiny adjustment steps. Minimum resolution is close to 1440p (confirmed again by the PC config files) and I noted that general gameplay shifts between 80 to 100 per cent of full 4K, with nigh-on identical shifts in resolution on both consoles in the same situations. It's only really at night, or in areas heavy in foliage, that the game drops closer to the lower bounds

Pretty interesting how that massive gulf in compute and memory bandwidth led to Watch Dogs being identical on both platforms. I guess the PS5's other advantages are really helping it out in this case.
 

HoofHearted

Member
I thought it was the tech analysts that determine the superior version. People like Digital Foundry and NXGamer, for example.

As for Watch Dogs Legion, based off what Digital Foundry said, it's basically a tie.


Pretty certain almost all of these initial cross-gen game comparisons are now ending up as virtual "ties" with the latest patch(es).
 

HoofHearted

Member
So even with improved tools it's still a tie?

Not sure if the patch(es) include "tools" or not. But that doesn't really matter anyway.

My point is - you can't really take any of these current cross-gen games and use them for a full/actual comparison, because none of these games are compiled, ported, upgraded, etc., to take full advantage of these next-gen "true" native capabilities.

The only real takeaway here (IMHO) is that one console may run a game slightly better than the other - either by its ability to run BC mode better and/or by nitpicking a game apart and looking at something as minuscule as an individual frame's resolution and/or FPS counts in certain scenes that last a few fleeting seconds.

End result of all of these comparisons? I'd wager that if you display any of these multi-plat games running same areas/scenes on these next-gen consoles side-by-side and execute a basic survey to see who could tell the difference - majority of people wouldn't be able to see the difference.

Until we have a true/actual multi-plat game - targeted (compiled) solely and specifically for these next-gen consoles in order to take advantage of the available hardware/SDK/GDK ("tools") - we won't truly know which console is "moar powaful".

What game will that be? Not sure yet - I don't think we'll see a game that provides a true native comparison for another year or so.

Until then all we have is basic updates/ports to enable higher resolution + higher framerate + enable/disable RT (that are, in reality, being fast tracked to get something out), with additional typical performance patches.
 
Not sure if the patch(es) include "tools" or not. But that doesn't really matter anyway.

My point is - you can't really take any of these current cross-gen games and use them for a full/actual comparison, because none of these games are compiled, ported, upgraded, etc., to take full advantage of these next-gen "true" native capabilities.

The only real takeaway here (IMHO) is that one console may run a game slightly better than the other - either by its ability to run BC mode better and/or by nitpicking a game apart and looking at something as minuscule as an individual frame's resolution and/or FPS counts in certain scenes that last a few fleeting seconds.

End result of all of these comparisons? I'd wager that if you display any of these multi-plat games running same areas/scenes on these next-gen consoles side-by-side and execute a basic survey to see who could tell the difference - majority of people wouldn't be able to see the difference.

Until we have a true/actual multi-plat game - targeted (compiled) solely and specifically for these next-gen consoles in order to take advantage of the available hardware/SDK/GDK ("tools") - we won't truly know which console is "moar powaful".

What game will that be? Not sure yet - I don't think we'll see a game that provides a true native comparison for another year or so.

Until then all we have is basic updates/ports to enable higher resolution + higher framerate + enable/disable RT (that are, in reality, being fast tracked to get something out), with additional typical performance patches.

Even if that's the case the more powerful system should still have advantages. Like a higher average framerate or resolution for example.
 

HoofHearted

Member
Even if that's the case the more powerful system should still have advantages. Like a higher average framerate or resolution for example.

Not necessarily. Just because a new console comes out with higher-spec'd hardware doesn't necessarily mean that a game automatically detects and adjusts itself to take advantage of the new hardware.

I think many people today assume console development is very similar to PC development, in that you can simply "upgrade" your GPU and see an immediate increase in framerates/resolution with a simple "re-compile" of the code. That's simply not the case - these games are designed from the ground up to be compiled and targeted specifically for the hardware format (console) they were designed for, and to ultimately glean every last minuscule ounce of performance out of that console.

It all depends on how well the game is optimized for an upgrade/compatibility target (assuming there is an "upgrade" path) in order to take advantage of the available increases in hardware. And that's typically only if the new hardware is simply a natural evolution of the previous "generation" of hardware.

Microsoft is attempting to bridge this gap by introducing the GDK, etc., and we may (or may not) be seeing "tools" impacts with respect to XSX games. But to unilaterally say that we're NOW seeing the most optimal output of either of these consoles on cross-gen games is suspect at best.

Heck - just look at the differences in launch games from XBO/PS4 era to these latest cross-gen games everyone is now comparing under a microscope to argue console dominance. Looking back - some of those original launch games make my eyes water - especially when compared to the latest released games (GoT).

Finally - one additional thought - we all "know" that the PS5 has a much faster SSD than XSX (per the spec difference alone). Yet we've seen instances of games loading slower on the PS5 versus XSX.

Does that call into question the spec? - is the XSX actually faster?

Or is the game un-optimized? Does it need new drivers? New firmware? New SDK/GDK to wrap/call the new low-level operation(s) to increase load performance?

All of the above with a full game patch?

Now - think about all the touchpoints from a software development and delivery perspective that a single simple "patch" would require in order to implement new firmware with the required low-level library/SDK API changes to fix a simple load time issue.

Even worse - now think about a time not too long ago where game developers didn't have these options because they had to create games that couldn't be updated as they are today (cartridges/ROMs, etc.). During that time - people argued about the GAMES - not framerate and resolution. :)

TL;DR - We don't (and won't, for some time) know the true top end capabilities of these consoles based on comparing current cross-gen games.

It's way too early to even have these types of discussions about how one console runs a game better than another console - especially when the only comparison to be had right now is something as miniscule and meaningless as very minor differences in framerate and resolution.
 

JackMcGunns

Member
I think people need to look up Microsoft's research into how CUs are underutilized in games and often sit idle.


But is this considered a design flaw? or is it just a matter of developers starting on PS5 and focusing on the lower CU count first? The future is definitely higher CU count focused based on high end GPUs
 
But is this considered a design flaw? or is it just a matter of developers starting on PS5 and focusing on the lower CU count first? The future is definitely higher CU count focused based on high end GPUs

But higher CU counts with more shader arrays, though. At least that's what AMD is doing with RDNA 2.

Edit: I would also like to add that high clocks seem to be what AMD is aiming for.

Again, I don't believe the future is just increasing CU counts; there are other things that need to be increased to achieve better performance.
 

geordiemp

Member
I believe there's a 30 FPS Quality mode for Assassin's Creed Valhalla on both platforms.



Still trying to decide if that issue was greatly exaggerated. I have doubts that tools can limit the power of the system by ~3-4 TFs. That's assuming the PS5 is outputting around 8-9 TFs this early in the cycle. Over time developers will get closer to the theoretical maximums.

To avoid any confusion I know the theoretical maximums are 10.28TFs for the PS5 and 12.2TFs for the Series X.

CU utilisation is normally around 40%, so maybe PS5 uses about 5 TF on average over a frame if it's efficient, and XSX uses about 4.5 TF as it's not. I am just being TF-humorous, or am I?

We are not running mining workloads here.
 
CU utilisation is normally around 40%, so maybe PS5 uses about 5 TF on average over a frame if it's efficient, and XSX uses about 4.5 TF as it's not. I am just being TF-humorous, or am I?

We are not running mining workloads here.

Whatever the case may be, it's pretty obvious that performance is almost identical between the two platforms.
 

JackMcGunns

Member
But higher CU counts with more shader arrays, though. At least that's what AMD is doing with RDNA 2.

Edit: I would also like to add that high clocks seem to be what AMD is aiming for.

Again, I don't believe the future is just increasing CU counts; there are other things that need to be increased to achieve better performance.


Does the PS5 have more? Because if not, then the higher CU count reflects a delta. The real question is: are CUs in Series X underutilized because there aren't enough arrays, or due to poor optimization?
 