
Next-Gen PS5 & XSX |OT| Console tEch threaD

The PS5 is using the old Render Backend design, which has 4 Color ROPs and 16 Z/Stencil ROPs.
XSX has the new RB+ design which has 8 Color ROPs but still 16 Z/Stencil ROPs.
Per Shader Engine this means the PS5 has 64 Color ROPs and 256 Z/Stencil ROPs, while the XSX has 64 Color ROPs and 128 Z/Stencil ROPs.
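For anyone who wants to sanity-check those totals, here is a quick sketch, taking the quoted numbers as whole-GPU counts. The Render Backend counts (16 old-style RBs on the PS5, 8 RB+ units on the XSX) are inferred from the quoted totals, not officially confirmed:

```python
# A sketch checking the quoted ROP totals. The RB counts (16 and 8)
# are inferred from the quoted numbers, not officially confirmed.
def rop_totals(num_rbs, color_per_rb, z_per_rb):
    return num_rbs * color_per_rb, num_rbs * z_per_rb

print(rop_totals(16, color_per_rb=4, z_per_rb=16))  # PS5, old RB:  (64, 256)
print(rop_totals(8, color_per_rb=8, z_per_rb=16))   # XSX, RB+:     (64, 128)
```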

Thanks for your response but I really don't understand this part.

What does having more Z/Stencil ROPs do for the system?
 
But why? Why do you think more RDNA2 features are that important? Just out of curiosity.

It basically revolves around the argument that Mesh Shaders and VRS will give a significant performance advantage to the XSX.

But then again, I've seen people say that the PS5 can match the performance of mesh shaders with primitive shaders, and that VRS is certainly possible on the platform.
 

Elog

Member
It basically revolves around the argument that Mesh Shaders and VRS will give a significant performance advantage to the XSX.

But then again, I've seen people say that the PS5 can match the performance of mesh shaders with primitive shaders, and that VRS is certainly possible on the platform.
Unless we get detailed information regarding the software-API/hardware customizations to the PS5 GE (which I highly doubt we will - do we even have the PS4 development kit specifications/manuals?), the only thing we can look at is what has been stated by people working on the projects. And they are very positive regarding the GE customizations and how they will allow for geometry culling and shader prioritization (which is what mesh shaders and VRS are all about).

The other argument is that Sony employs some of the world's leading 3D graphics developers, and they had full visibility on mesh shaders and VRS during the PS5 project but chose to take their own path. It takes some serious guts to assume that they did not know what they were doing.

How will these systems perform with regard to culling and shader prioritization? We have to wait for the games, but expecting any major differences between the systems along these dimensions is unrealistic. The biggest deltas between the two are hardware-based BC (XSX advantage), I/O (PS5) and cache management (PS5).
 

Locuza

Member
Thanks for your response but I really don't understand this part.

What does having more Z/Stencil ROPs do for the system?
I think they don't do much, but they may add a little bit of performance if there are cases where Z/Stencil throughput is a limiting factor on the new RB+ design.
Computerbase did a nice test comparing Navi22 and Navi10 at the same clock speed of 1GHz:
https://www.computerbase.de/2021-03/amd-radeon-rdna2-rdna-gcn-ipc-cu-vergleich/

At 1080p, Navi10/RDNA1 is on average 4% faster per clock than Navi22/RDNA2.
There are some games which even show 10% difference in favor of Navi10/RDNA1.

Now, contrary to the PS5 and Xbox Series X, many RDNA2 GPUs also have a new rendering frontend configuration.

Instead of two primitive units (triangle output) and two rasterizers (which output pixel fragments based on the received triangles), most RDNA2 GPUs only have one primitive unit and one (larger) rasterizer.
Triangle throughput is in theory halved; there are some reasons why this might not be that terrible, but it could explain some performance regressions on Navi22 vs. Navi10.
For the high clock rates AMD did a lot of re-pipelining work, adding more pipeline stages, which increases latency and may lead to some performance penalty.
And as stated, the Render Backends now have fewer Z/Stencil ROPs per SE.
Still, the performance differences per clock are 4% on average, or 10% in bad cases, and I think the smaller number of Z/Stencil ROPs has little influence.

Unfortunately I can't quantify how much this really matters in practice; I'm just pointing out the difference.
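To put rough numbers on that, here is a back-of-the-envelope sketch using the public maximum GPU clocks. These are theoretical peaks only; real throughput depends on many other limits:

```python
# Theoretical peak Z/stencil throughput = Z/stencil ROPs x GPU clock.
# Public clocks: PS5 up to 2.23 GHz (variable), XSX fixed at 1.825 GHz.
for name, z_rops, clk_ghz in [("PS5", 256, 2.23), ("XSX", 128, 1.825)]:
    print(f"{name}: {z_rops * clk_ghz:.0f} Gsamples/s peak Z/stencil")
# PS5: ~571, XSX: ~234 -- a big gap on paper, but it only matters in the
# rare cases where Z/stencil throughput is actually the bottleneck.
```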
It basically revolves around the argument that Mesh Shaders and VRS will give a significant performance advantage to the XSX.

But then again, I've seen people say that the PS5 can match the performance of mesh shaders with primitive shaders, and that VRS is certainly possible on the platform.
It's not clear if there is even a real difference between "Mesh Shaders" and "Primitive Shaders"; those are just names.
You would need the specifications and performance numbers to tell.
And yes, it's possible to implement VRS in software, which is already used on the PS4 and Xbox One platforms, even with some advantages vs. the HW implementation.
However, HW VRS also has its own set of pros, and on the Xbox Series you can simply use both depending on the use case, something which is likely not possible on the PS5.
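For illustration, here is a toy sketch of the idea behind software VRS: pick a coarser shading rate for low-detail screen tiles and replicate one shaded sample across a 2x2 pixel block there. This is purely conceptual and not any console's actual implementation; the tile size, detail metric and shader are made up:

```python
import numpy as np

# Toy software-VRS sketch: per-tile shading rate, coarse tiles reuse samples.
H, W, TILE = 64, 64, 8
rng = np.random.default_rng(0)
detail = rng.random((H // TILE, W // TILE))      # stand-in detail metric per tile

def shade(y, x):                                 # stand-in pixel shader
    return (y * W + x) % 256

image = np.zeros((H, W))
for ty in range(H // TILE):
    for tx in range(W // TILE):
        step = 1 if detail[ty, tx] > 0.5 else 2  # 1x1 vs. 2x2 shading rate
        for y in range(ty * TILE, (ty + 1) * TILE, step):
            for x in range(tx * TILE, (tx + 1) * TILE, step):
                image[y:y + step, x:x + step] = shade(y, x)  # replicate sample
```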
 
Very interesting. It seems like if there was some huge delta between the two, we would have seen it already. I guess while things will get better for both, the results will remain largely the same. I'm not saying the gap can't increase, but it might not increase as much as some people believe.
 
I see the topic of Mesh and Primitive Shaders has come up again. TheThreadsThatBindUs and I have covered this several times now. Primitive and Mesh Shaders have the same goal: they are basically compute shaders which are injected into the graphics pipeline. This allows the GPU to be utilised more fully and gives developers much better control and flexibility over the graphics pipeline.

As for the actual difference between Mesh and Primitive Shaders: they are different API implementations, and the underlying hardware is exactly the same across the PC RDNA 2 cards, the Series X and the PS5. This functionality was introduced in AMD's Vega cards with the "NGG fast path", and since then AMD have not listed any changes to the command and geometry processors, up to and including RDNA 2. AMD have still been filing patents for Primitive Shaders within the past year. The AMD RDNA 2 cards have also been converting Mesh Shaders into Primitive Shaders in code (even after the driver updates), and the Series X is very likely doing the same.

Now, allegedly Sony have made customisations to their geometry engine (according to RGT and a few others, and alluded to by Cerny himself) which differ from what is found on the stock RDNA 1/2 cards as well as the Series X, but they have not disclosed any details on what exactly those are, so it's very difficult to say.

As for the performance differences between the two: allegedly Mesh Shaders are slightly easier to use when it comes to coding for things like tessellation, but that's about it. The raw performance gains will be identical when properly coded and will usually depend on developer time and effort.

I'm sure TheThreadsThatBindUs can give a better answer than me.
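To make the culling idea behind both shader types concrete, here is a hypothetical CPU-side sketch of the kind of per-cluster ("meshlet") culling that mesh/primitive shaders enable on the GPU. The data layout and plane convention are illustrative assumptions, not any vendor's API:

```python
import numpy as np

# Conceptual sketch: cull whole clusters ("meshlets") of triangles before
# rasterization, the core trick mesh/primitive shaders make cheap on-GPU.
def cull_meshlets(centers, radii, frustum_planes):
    """Keep a meshlet only if its bounding sphere touches the frustum."""
    keep = np.ones(len(centers), dtype=bool)
    for n, d in frustum_planes:          # plane: dot(n, p) + d >= 0 means inside
        dist = centers @ n + d
        keep &= dist >= -radii           # sphere not entirely outside this plane
    return keep

centers = np.array([[0.0, 0.0, -5.0], [100.0, 0.0, -5.0]])
radii = np.array([1.0, 1.0])
planes = [(np.array([0.0, 0.0, -1.0]), 0.0),   # in front of the camera
          (np.array([-1.0, 0.0, 0.0]), 10.0)]  # x <= 10
print(cull_meshlets(centers, radii, planes))   # [ True False ]
```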
 

SlimySnake

Flashless at the Golden Globes
I made a die shot analysis video for the Xbox Series X and S chips, though the first part is just limited to the I/O:


It may be interesting for some, especially since all sources are in the description box and I lay out how you can figure out the different hardware blocks.
So to some extent it's like a tutorial and you can learn how to analyse die shots yourself.
However, you need some background knowledge, and not everything will be easy to follow if you are just a casual hardware enthusiast.


That said some may not like to watch videos or can't, so here is a small summary:

1.) When companies share die sizes it's not always clear what they mean by that.
Sometimes they mean the whole chip size, sometimes just the chip design itself, without the scribe line.
In some cases the official die size doesn't fit either way, so the first thing you should do when you want to measure structure sizes is to check the die size yourself.
https://abload.de/img/mega-leaks-part-5-xboyyjr3.png

2.) First, a short CPU analysis of the Zen2 Matisse (CPU) and Renoir (APU) die shots.
They both share the same CPU core size of 2.83mm², and the structures look basically identical, so AMD uses the same design libraries and physical implementation.
https://abload.de/img/mega-leaks-part-5-xbok9jvo.png

3.) AMD only changed the L3$ design, reducing the capacity from 16MiB (CPU) to 4MiB (APU). The CCX size goes down from 31.37mm² to 20.21mm², making a Renoir CCX 11.16mm² smaller.
You could get 3x 4MB CCXes (3x 20.21mm² = 60.63mm²) for the price of 2x 16MB CCXes (2x 31.37mm² = 62.74mm²).
It's more than obvious why there is a strong motivation to cut down the L3$ if area is important.
https://abload.de/img/mega-leaks-part-5-xbo5ojye.png
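A quick check of that arithmetic:

```python
# Re-checking the CCX area numbers from the die-shot comparison.
ccx_16mb = 31.37   # mm², Matisse CCX with 16 MiB L3
ccx_4mb = 20.21    # mm², Renoir CCX with 4 MiB L3

print(f"{ccx_16mb - ccx_4mb:.2f} mm2 saved per CCX")   # 11.16
print(f"{3 * ccx_4mb:.2f} vs {2 * ccx_16mb:.2f} mm2")  # 60.63 vs 62.74:
# three small CCXes cost less area than two big ones.
```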

4.) In the past I made a hypothetical CCX mockup with 8MB per CCX, resulting in ~24.23mm².
It was an interesting question what Renoir, the Xbox Series and the PS5 would use (it was still an open question back then).
Now it's known that Renoir, the Xbox Series and the PS5 all use 4MB of L3$ per CCX.
https://abload.de/img/mega-leaks-part-5-xboeckzt.png

5.) The assumption now is of course that the Xbox Series and PS5 use the same Renoir design and implementation (which, as we know, is not true for the PS5).
Comparing the Xbox Series CCXes to AMD's Zen2 CCXes shows basically the same picture; at least the size should be identical.
https://abload.de/img/mega-leaks-part-5-xboh9j0c.png
https://abload.de/img/mega-leaks-part-5-xborkk9d.png

Because of the low resolution it's hard to tell whether some digital logic is laid down differently; on the FPU and L3$ side there appear to be some differences, but I wouldn't make that call right now.

6.) The core size on the Xbox Series should be identical at 2.83mm², meaning the die shot MS shared at Hot Chips works out to 372.16mm², larger than the final chips.
https://abload.de/img/mega-leaks-part-5-xbo07jqp.png

7.) Thanks to the XSX teardown on iFixit we can say that the final chips are ~360mm² in size, matching MS's officially stated 360.45mm².
The chip design itself is ~354.71mm² on the XSX and 192.18mm² on the XSS.
https://pbs.twimg.com/media/Ew2ziieWgAsA8Z8?format=jpg&name=large
https://abload.de/img/mega-leaks-part-5-xboeakmw.png
https://abload.de/img/mega-leaks-part-5-xbonrjsg.png
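As an aside, this is roughly how such measurements work in practice. The pixel values below are made up for illustration; only the 2.83mm² core size comes from the analysis above:

```python
# Calibrate mm-per-pixel from one known dimension of the die photo,
# then measure any block in the same image. Pixel values are hypothetical.
die_width_mm = 23.8                 # example: a known/derived physical width
die_width_px = 4760                 # the same edge measured in the photo
mm_per_px = die_width_mm / die_width_px

block_w_px, block_h_px = 340, 332   # e.g. one CPU core, measured in pixels
area_mm2 = (block_w_px * mm_per_px) * (block_h_px * mm_per_px)
print(f"{area_mm2:.2f} mm2")        # ~2.82 mm², close to the 2.83 mm² Zen2 core
```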

8.) For time reasons I only managed to include the I/O sections, where MS's annotation shows 5 I/O blocks, three on the left and two on the right side.
We obviously want to know what they are for.
Looking at them makes it clear that we have two different types of I/O.
Based on official information we only have three major I/O connections: display, 8x PCIe4 lanes and the GDDR6 links.
2x PCIe4 lanes are used for the internal SSD and 2x for the external SSD.
The Southbridge is a dedicated chip, also attached via the PCIe standard, which contains the USB3 interfaces, a system controller, SPI, etc.

So one I/O type should be for display and the other for PCIe.
The left I/O blocks don't fit 4 display links or 8 PCIe4 lanes.
But the right I/O blocks have 4 yellow stripes, and if those are PCIe lanes, that fits perfectly with 8 PCIe4 lanes.
https://abload.de/img/mega-leaks-part-5-xboz6khw.png

9.) We can check that simple assumption thanks to Fritzchens Fritz, who has die shots of Navi10 and Navi14 on his Flickr account.
Comparing the I/O sections and other analog elements shows basically no differences, apart from two kinds of larger blocks, colored here in red and cyan.
N10 has 6 red blocks, N14 just 5.
N10 has 4 cyan blocks, N14 just 2.
https://abload.de/img/mega-leaks-part-5-xboickrc.png

10.) The cyan blocks look identical to the right I/O blocks on the Xbox Series X and they share the exact same size.
There is only one major I/O specification which could come into question here: N10 has 16x PCIe4 lanes, while N14 only has 8x.
We can conclude that the cyan blocks are PHYs for PCIe4 and that each yellow stripe is a single lane:
https://abload.de/img/mega-leaks-part-5-xbomrjd7.png

11.) Finding the display specifications per GPU is not so easy; I think you really have to look at the open source drivers to find out how many display controllers are present per GPU.
N10 has 6 and N14 has 5, which fits the red blocks, which look very similar to the left I/O blocks on the Xbox Series.
https://abload.de/img/mega-leaks-part-5-xbo9sjo5.png

12.) Why the Xbox Series X has 3 of those (display) I/O PHYs is still a mystery to me; the Xbox Series S has just 1.
https://pbs.twimg.com/media/Ew25RAQWQAAtdwX?format=jpg&name=large

13.) One could also look at the traces on the PCB to figure out the I/O situation, but for a layperson this doesn't always present a clear picture.
https://abload.de/img/mega-leaks-part-5-xboeckfu.png

14.) Finally the GDDR6 I/O, which MS officially annotated, though it would have been easy to make out anyway.
DRAM PHYs are usually the largest interfaces on a chip, and in most cases there are no other options which fit the memory specifications.
For example, the Xbox Series X has a 320-bit wide interface and each GDDR6 memory chip is connected via 32 bits, meaning that the PHY count needs to relate to the number 10 in some way.
We should see either 5 larger blocks, 10, or 20, but not a count outside that series.
On the Xbox Series X we see exactly 10 larger I/O PHYs around the chip edges.

In addition I made a visual representation of how the 10GB of "GPU Optimal Memory" should in principle be mapped over the full 320-bit interface, and how the 6GB of "Standard Memory" is addressed over 192 bits/6 memory chips.
https://abload.de/img/mega-leaks-part-5-xbodekll.png
https://abload.de/img/mega-leaks-part-5-xboy7jki.png
https://abload.de/img/mega-leaks-part-5-xbo83kyk.png
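Putting the public XSX memory specs into numbers (ten 32-bit GDDR6 chips at 14Gbps, six of them 2GB and four 1GB):

```python
# XSX memory layout in numbers (public specs).
chips = [2] * 6 + [1] * 4      # GB per chip: six 2GB chips, four 1GB chips
print(sum(chips))              # 16 GB total

# The first GB of every chip is interleaved over all ten chips (320-bit);
# the second GB only exists on the six 2GB chips (192-bit).
gbps = 14                      # per-pin data rate
print(320 * gbps / 8)          # 560 GB/s for the 10 GB "GPU optimal" pool
print(192 * gbps / 8)          # 336 GB/s for the 6 GB "standard" pool
```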

------

In the end we should roughly come up with this picture (the yellow TMU blocks are part of the WGPs, and the green/cyan blocks to the left and right of the Parameter Cache should relate to the Primitive Units and Rasterizers):
[annotated Xbox Series X die shot]


Full res and with other information here:


One interesting point may relate to the CCXes:
[CCX comparison image]


The PS5 CPU has some cut-downs on the FPU side (at least the FP register file was reduced), making the CCX shorter than on the Xbox Series and AMD's Zen2 CPUs.

____

Any feedback, questions or answers are welcome. :)

So not only does the PS5 CPU not have Zen 3 features, it's actually smaller. Can we ban Red Gaming Tech from this forum?

P.S. Thanks for the detailed info, very interesting stuff. Do we have a final die size for the PS5?
 
"Severe" is relative but from my perspective I wouldn't describe it that way and also don't see it happening in the future.
HW VRS is nice and you may get a 10% perf boost but that's not "severe" from my perspective.
On the other side the PS5 has some advantages which probably are the reasons behind the good performance numbers we already see vs. XSX.

* Higher clock rates lead to higher frontend and backend throughput --> faster Command Processor, higher geometry throughput, higher pixel fillrate (see the sketch after this list).

* Further, the PS5 has 4MiB of L2$ for 36 CUs, while the XSX has "only" 5MiB for 52 CUs.

* It's also easier to utilize 36 CUs than 52.

* The PS5 is using the old Render Backend design, which has 4 Color ROPs and 16 Z/Stencil ROPs.
XSX has the new RB+ design which has 8 Color ROPs but still 16 Z/Stencil ROPs.
Per Shader Engine this means the PS5 has 64 Color ROPs and 256 Z/Stencil ROPs, while the XSX has 64 Color ROPs and 128 Z/Stencil ROPs.

* Moreover the PS5 has GPU Cache Scrubbers making data evictions more efficient.

* Two address spaces on the Xbox Series may lead to worse bandwidth utilization and memory access times.

* The software stack could be more efficient on the PS5 or the developers optimized better for the PS5 platform.
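To attach rough numbers to the first bullet (theoretical peaks only, using the public clocks):

```python
ps5_clk, xsx_clk = 2.23, 1.825   # GHz; the PS5 clock is the variable maximum

# Pixel fillrate: both GPUs have 64 color ROPs, so the clock decides.
print(f"{64 * ps5_clk:.1f} vs {64 * xsx_clk:.1f} Gpix/s")   # 142.7 vs 116.8

# Anything that scales with clock rather than CU count (command processor,
# geometry front end) gets the same ratio:
print(f"{ps5_clk / xsx_clk:.2f}x")   # ~1.22x in the PS5's favor, at peak clock
```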

Without profiling data and a precise breakdown it's not possible to say what really matters for a variety of games, but on average I think the results we are seeing now will more or less stay that way.
Unless some game version is borked on one platform or the other, I don't think people could distinguish between the platforms, in terms of resolution and performance, without a direct comparison in front of them.

Software ecosystem or something like VR-Support is much more important from my perspective.
Very interesting. While my knowledge is extremely limited, going off this as well as many other comments, my takeaway is...

Xbox has more TFs (which is an utterly useless metric that means nothing).
Xbox may benefit from HW VRS.

Absolutely every other aspect of both the hardware as well as software significantly favors the PS5.

My question then is this. Why hasn't the PS5 fared significantly better than the XSX in comparisons? Based on the info being posted here, there should be quite a gap in favor of the PS5. While a couple of the first comparisons might have supported that trend occurring, almost every single comparison since has shown the opposite, with the results not only favoring the XSX, but with a widening difference over time.

Are developers perhaps being paid money in secret alleyways somewhere to gimp the PS5 versions?
 

I've never seen people really claim that there would be a significant delta in favor of the PS5. Most of the speculation was that the XSX would destroy the PS5 in multiplat comparisons. People are just questioning why that hasn't happened.

I have no idea why you'd interpret this as the PS5 having some massive power advantage. What people should question is why the XSX isn't performing better, not why the PS5 isn't performing worse.

Locuza gave me some pretty solid answers. Maybe they will help you understand the situation better?
 

demigod

Member
God of War was never coming in 2021; you're a fucking idiot if you believed Sony. If anything it will be Horizon that's coming this year.
 

Locuza

Member
So not only does the PS5 CPU not have Zen 3 features, it's actually smaller. Can we ban Red Gaming Tech from this forum?

P.S. Thanks for the detailed info, very interesting stuff. Do we have a final die size for the PS5?
I don't have a lot of time to watch other people's content, but in multiple videos RGT likes to put a disclaimer around their rumors, saying "take it with a pinch of salt" or "a truckload of salt".
The wording is usually defensive and not outright stating something as fact.
You may obviously still dislike this style of sharing wonky rumors, but I wouldn't categorize it as a terrible offense worthy of a ban.
From my perspective people are investing too much personal emotion and accepting many statements as fact too quickly, even when they are not presented as such.
I've noticed myself that I have to be much more careful about my wording, because being casual with claims and wording can come back as a hurtful boomerang.


The die size of the PS5 is 298.48mm² (23.147mm x 12.895mm):



Xbox has more TFs (which is an utterly useless metric that means nothing).
Xbox may benefit from HW VRS.

My question then is this. Why hasn't the PS5 fared significantly better than the XSX in comparisons? Based on the info being posted here, there should be quite a gap in favor of the PS5.
The TeraFLOPs numbers are not utterly useless, and they do mean something, but it's good that people keep a certain distance from them and don't take them as absolute truth.
In the past, GHz was the metric for CPU speed, which also excluded a lot of important attributes that contribute to the final performance of a chip. That's a reason why AMD back then tried to market their CPU models with a number similar to the GHz speed of Intel CPUs, although their own CPUs ran at much lower clocks.

In the same vein you may argue that the PS5 has higher geometry throughput and a higher pixel fillrate, but does this matter in practice? Or rather, is the theoretical advantage reflected anywhere near as strongly in practice?
The answer is no, at least in most cases.
But I think it's fair to say that the throughput of the Shader Units is the most important performance aspect in most cases, and in that regard the XSX has a theoretical advantage of 18-19%, at worst.
Now, because performance scaling with more Shader Units is rarely optimal and other factors exist where the PS5 may have an advantage, this number could be quite a bit smaller in practice.
But either way, there is no "killer reason" from the hardware side why the PS5 should be even faster than the Xbox Series X in most games, or why the XSX should "stomp" the PS5.
By default the difference is just not that large.
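For reference, this is where the 18-19% figure comes from (FP32 TFLOPs = CUs x 64 lanes x 2 ops per FMA x clock):

```python
# Deriving the oft-quoted shader-throughput gap from the public specs.
xsx_tf = 52 * 64 * 2 * 1.825e9 / 1e12   # ~12.15 TF, fixed clock
ps5_tf = 36 * 64 * 2 * 2.23e9 / 1e12    # ~10.28 TF, maximum (variable) clock

print(f"{xsx_tf:.2f} vs {ps5_tf:.2f} TF -> +{(xsx_tf / ps5_tf - 1) * 100:.1f}%")
# ~18.2% at the PS5's peak clock; slightly more whenever the PS5 downclocks,
# which is why the post calls 18-19% the XSX advantage "at worst".
```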
 

RespawnX

Member
So? What the hell is 40x brighter lights and 10x deeper blacks? Of course there are small differences between HDR10/10+ and Dolby Vision, but those metrics are WAAAAY OFF.
I purposely only referred to the Dolby Vision vs. HDR10 difference. XboxDynasty is a pure clickbait website; they have been spreading every rumor about every console from day 1. Since the operator has at times had a dislike for the Xbox, they also used every opportunity to spread wild rumors about the PS5 and Xbox for console wars. You won't find many other pages on this scale which spread so much FUD and hate. It's a shame that anyone would even link the site, let alone credit it as a source. The metrics are probably just copied together from somewhere, or simply derived from the color depth: 1 billion colors versus 68 billion.
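For what it's worth, those color counts do check out as plain bit-depth math (3 channels at 10 bits for HDR10 vs. 12 bits for Dolby Vision):

```python
# 10-bit vs. 12-bit color, 3 channels each.
print(2 ** (3 * 10))   # 1073741824  -> ~1.07 billion colors (HDR10)
print(2 ** (3 * 12))   # 68719476736 -> ~68.7 billion colors (Dolby Vision)
```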
 
But either way, there is no "killer reason" why the PS5 should be even faster than the Xbox Series X or why the XSX should "stomp" the PS5.
By default the difference is just not that large.

I honestly don't see how either is possible with a similar BOM and no major hardware screw-up.

The results are showing that the two are close and that each has their own set of advantages.

When it comes to power, I do find it curious how Microsoft changed their marketing.
 

Lysandros

Member
Some people were saying an 1800p vs 4K kind of difference would be standard.
Yeah, I remember those half-blind geniuses who expected a consistently 30%+ higher resolution from a GPU with only 18% higher teraflops, while it is at a deficit in other relevant GPU metrics, without taking into account trivial details such as speed, architecture and efficiency. They work at NASA as engineers now.
 
Yeah, I remember those half-blind geniuses who expected a consistently 30%+ higher resolution from a GPU with only 18% higher teraflops, while it is at a deficit in other relevant GPU metrics, without taking into account trivial details such as speed, architecture and efficiency. They work at NASA as engineers now.

It could have been true with a major hardware screw-up, but unfortunately for those people that didn't happen.

What did they expect was going to happen with two systems with similar BOMs?

It's not like the PS5 was force-bundled with PSVR or anything.
 
I've never seen people really claim that there would be a significant delta in favor of the PS5. Most of the speculation was that the XSX would destroy the PS5 in multiplat comparisons. People are just questioning why that hasn't happened.

I have no idea why you'd interpret this as the PS5 having some massive power advantage. What people should question is why the XSX isn't performing better, not why the PS5 isn't performing worse.

Locuza gave me some pretty solid answers. Maybe they will help you understand the situation better?
People were maybe claiming that because of the TF difference between the two consoles. I don't know why people speculated the way they did prior to knowing what components were actually in the consoles.

I'm trying to understand the situation as it is now. The consoles have been torn apart and dissected. There has been software released for them that has been compared. I'm not interested in what console people speculated would destroy the other before they even were available. I'm interested in the actual hardware that's been confirmed, along with the actual comparisons that have been done.

So with that being said, I was simply wondering why the comparisons seem to be trending the way they are. The first couple of game comparisons showed the PS5 having the advantage, but since then the comparisons have been favoring the XSX more and more. At this point the difference between the two usually ends up being a dropped frame or two here or there at most, with the XSX running at close to twice the resolution. So while I won't say the XSX "stomps" or "blows the PS5 out of the water", that is certainly a significant difference between the two. With devs on record stating that results should get better as they become more familiar with the XSX tools, compared to them already utilizing the PS5 tools to their fullest, it would seem that the gap between the two could potentially widen further going forward.

Which leads me to ask the question I did when the hardware layout was explained. While the XSX may benefit from HW VRS, the rest was a laundry list of specs and features favoring the PS5. Based on that, why is the PS5 not distancing itself from the XSX in comparisons, rather than the opposite happening?
 
It could have been true with a major hardware screw-up, but unfortunately for those people that didn't happen.

What did they expect was going to happen with two systems with similar BOMs?

It's not like the PS5 was force-bundled with PSVR or anything.
I can't recall every game's comparative resolution off the top of my head. Could you point me to an example of the PS5 running at a higher resolution than the XSX? There are several where the XSX runs consistently higher than the PS5.

Am I missing some comparison where the PS5 has a higher resolution than the XSX? Your comment here, as well as the one it was replying to and your previous one he was referring to, is confusing. It would make sense if the PS5 were consistently running at the same res as the XSX, but I have yet to see that being the case.
 

Mr Moose

Member
I can't recall every game's comparative resolution off the top of my head. Could you point me to an example of the PS5 running at a higher resolution than the XSX? There are several where the XSX runs consistently higher than the PS5.

Am I missing some comparison where the PS5 has a higher resolution than the XSX? Your comment here, as well as the one it was replying to and your previous one he was referring to, is confusing. It would make sense if the PS5 were consistently running at the same res as the XSX, but I have yet to see that being the case.
Valhalla.
 

No, but you're missing all the comparisons where they are on par with each other. There's no proof that there's a massive difference between the two, which is what you're suggesting.

I'm not the one saying there's a big delta between the two. I think you're confusing me with somebody else.
 
Last edited:
No, but you're missing all the comparisons where they are on par with each other. There's no proof that there's a massive difference between the two, which is what you're suggesting.

I'm not the one saying there's a big delta between the two. I think you're confusing me with somebody else.
I'm not saying there's a "big delta" or a "massive difference" either. But there does tend to be an obvious resolution difference between the two, is there not?
 

skit_data

Member
Lol, absolutely the best correction possible. My mistake, and I apologize; I was thinking one thing while typing out another. I was also thinking about the Series S and how it compares to the other two, which at 1080p (as MS is only now advertising it) was what ended up coming out.

My mistake.
There are some BC games that the Series X definitely has the ability to run at higher resolutions than the PS5, due to BC being handled differently, but if we are to count those then we might as well bring in the whole 900p squad as well, and we don't want to do that because everyone everywhere would become very sad.
 
Valhalla.

The average resolution in Dirt 5's quality/60 FPS mode is 1620p on the PS5, with higher settings on top of that, compared to the XSX version's 1440p, according to NXgamer's latest comparison video about the game.
In my posts I clearly alluded to a couple of early comparisons that favored the PS5. It was specifically the two that you guys mentioned.

Now compare those two early comparisons to almost every one since, and you're left with the exact same question I asked.
 

Mr_Potato

Banned
The first couple of game comparisons showed the PS5 having the advantage, but since then the comparisons have been favoring the XSX more and more.

No, I don't know how you reached that conclusion. The Crash 4 comparison was released today and the PS5 is slightly on top again (slightly higher framerate, no stutter vs. stutter on the Series X like in Valhalla, same resolution). Even Cyberpunk runs better on the PS5 since the latest patch. I don't think there's anything new.

Hitman 3 is the exception (but even that game has parts where the PS5 runs better, notably when there's a lot of grass; the XSX has some trouble with that, as we already saw with Dirt 5's grass).

Some older BC games run better on the Series X, but that's mostly due to the way BC is done (the PS5 version is often just the PS4 Pro version uncapped).
 
There are some BC games that the Series X definitely has the ability to run at higher resolutions than the PS5, due to BC being handled differently, but if we are to count those then we might as well bring in the whole 900p squad as well, and we don't want to do that because everyone everywhere would become very sad.
Comparing games that run on the Xbox Series and PS5 consoles. BC or not, they are games that can be played today on those consoles.

Why you believe that equates to games from 5-7 years ago running at 900p is beyond me.
 

Mr_Potato

Banned

Valhalla runs at a higher res on the PS5, and most games run at the same resolution with a slightly higher framerate on the PS5, so I'm not sure where your information about resolution comes from.

Today's Crash 4 comparison showed the PS5 running slightly more frames at the same res too.
 
But I think it's fair to say that the throughput of the Shader Units is the most important performance aspect in most cases, and in that regard the XSX has a theoretical advantage of 18-19%, at worst.

Don't you mean 18% at best? I'm not sure how the difference could possibly be higher than the theoretical maximum.

Now, because performance scaling with more Shader Units is rarely optimal and other factors exist where the PS5 may have an advantage, this number could be quite a bit smaller in practice.

Hence why it's an 18% difference at best, not at worst.
 

SlimySnake

Flashless at the Golden Globes
The die size of the PS5 is 298.48mm² (23.147mm x 12.895mm):
Jesus, that thing is tiny. That would make it 20% smaller and, more importantly, 20% cheaper than the XSX. That explains how they were able to hit the $399 price point by just removing the disc drive. The BOM on the chip is probably $30-40 lower.


God, he's such a prick lmao.
 

Locuza

Member
Don't you mean 18% at best? I'm not sure how the difference could possibly be higher than the theoretical maximum.

Hence why it's an 18% difference at best, not at worst.
Because it's only the theoretical maximum if the PS5 GPU isn't downclocking, which it can.

Jesus, that thing is tiny. That would make it 20% smaller and, more importantly, 20% cheaper than the XSX. That explains how they were able to hit the $399 price point by just removing the disc drive. The BOM on the chip is probably $30-40 lower.
However, the thermal density is much higher, leading to more challenging cooling requirements.
I don't know what turns out more expensive in the end: the double-PCB cooling construction with a vapor chamber on the XSX, or the larger plastic box with liquid metal and more complex packaging on the PS5?
In addition, MS has to buy more GDDR6 chips but has a cheaper SSD solution.
And at least the Digital Edition of the PS5 is sold at a lower price than it costs Sony to manufacture.

A precise BOM analysis would be very interesting.
 
Because it's only the theoretical maximum if the PS5 GPU isn't downclocking, which it can.

Given the explanation for when this actually occurs, and for how long (i.e. millisecond inter-frame periodicity), we can realistically dismiss the variable frequency having any negative impact on PS5 performance vs XSX.

Given the performance in real games so far, I'd argue that's a pretty safe assumption.
 
Because it's only the theoretical maximum if the PS5 GPU isn't downclocking

But if it only happens with one frame out of 30, will anyone really notice it?

From Cerny's talk, he made it seem like any downclocking would be extremely rare. Also, from what I understand, the PS5 can shift clocks extremely quickly, down to the millisecond, due to how RDNA2 clocks work. It's not like the clocks are going to drop 30% for 5 minutes or anything like that.
 
"Severe" is relative but from my perspective I wouldn't describe it that way and also don't see it happening in the future.
HW VRS is nice and you may get a 10% perf boost but that's not "severe" from my perspective.
On the other side the PS5 has some advantages which probably are the reasons behind the good performance numbers we already see vs. XSX.

* Higher clock rates are leading to higher Frontend and Backend throughput --> faster Command Processor, higher geometry throughput, higher pixel fillrate.

* Further the PS5 has 4MiB L2$ for 36 CUs, while the XSX has "only" 5MiB for 52 CUs.

* It's also easier to utilize 36 CUs than 52.

* The PS5 is using the old Render Backend design, which has 4 Color ROPs and 16 Z/Stencil ROPs.
XSX has the new RB+ design which has 8 Color ROPs but still 16 Z/Stencil ROPs.
Per Shader Engine this means the PS5 has 64 Color ROPs and 256 Z/Stencil ROPs, while the XSX has 64 Color ROPs and 128 Z/Stencil ROPs.

* Moreover the PS5 has GPU Cache Scrubbers making data evictions more efficient.

* Two address spaces on the Xbox Series may lead to worse bandwidth utilization and memory access times.

* The software stack could be more efficient on the PS5 or the developers optimized better for the PS5 platform.

Without profiling data and a precise breakdown it's not possible to say what really matters for a variety of games but on average I think the results we are seeing now, will more or less stay that way.
Unless some game version on borked on one platform or the either, I don't think people could distinguish between the platforms, in terms of resolution and performance, without a direct comparison in front of them.

Software ecosystem or something like VR-Support is much more important from my perspective.
I remember Cerny saying in the Road to PS5 talk that despite working with AMD on RDNA2 and the PS5, they didn't just copy and paste the whole architecture into the PS5; they intricately chose the RDNA2 features that were important for their design and left out the ones they didn't need. He also said some features will remain exclusive to the PS5, for instance the cache scrubbers, which aren't there on the Series consoles or PC.

Basically, if anything I've learnt here, it's that not every new feature is better for every design. It reminds me of when I was copying my friend's homework at school and wondered why my teachers kept finding me out; it was simply because I copied everything, including the handwriting and the words, and I wasn't wise enough to mix in my own 🤣🤣🤣.
 
Given the explanation for when this actually occurs, and for how long (i.e. millisecond inter-frame periodicity), we can realistically dismiss the variable frequency having any negative impact on PS5 performance vs XSX.

Given the performance in real games so far, I'd argue that's a pretty safe assumption.
Hasn't variable frequency, or SmartShift, actually been stated by AMD to increase performance? This could be the reason why the PS5 is good in variable situations like dynamic res scaling, when there's more unexpected stuff on screen, and keeps solid framerates most of the time. Because let's face it, dynamic or variable effects are the future: dynamic res, VRS, mesh shading/procedural/adaptive tessellation, CBR, DLSS, VRR. I kind of have a hunch on that.
 