
AMD Oberon PlayStation 5 SoC Die Delidded and Pictured

Dream-Knife

Banned
That would be impossible.

But why did MS give the option to run the Series X CPU at a higher clock speed (3.8 GHz) with fewer threads if higher clocks with fewer cores don't have any real advantages?

You probably don't remember, but when Xbox One launched MS engineers explained to Digital Foundry that higher clocks provided better performance than more CUs (which they knew because Xbox One devkits had 14 CUs instead of the 12 present in the retail console, and according to them 12 CUs clocked higher performed better than the full-fat version of the APU with 14 CUs). The difference in clock speed was not enough to overcome PS4's advantage in raw Tflops, but One S did beat PS4 on some occasions.

https://www.eurogamer.net/articles/digitalfoundry-vs-the-xbox-one-architects




So according to MS engineers a 6.6% clock upgrade provided more performance than 16.6% more CUs. 🤷‍♂️
And we know how that turned out: Xbox One was weaker than PS4. Maybe they said that as marketing fluff? Were the PS4 and Xbox One on the same architecture?
 

Lysandros

Member
This is what could be interesting in the long term: when developers start using Tempest to offload some CPU or GPU tasks.
I also find this possibility interesting, if there is enough juice left after 3D sound processing; bandwidth consumption is also a concern, as Cerny alluded to.
 

Arioco

Member
And we know how that turned out: Xbox One was weaker than PS4. Maybe they said that as marketing fluff? Were the PS4 and Xbox One on the same architecture?


I heard conflicting reports (just like I hear now about RDNA 1.5 😂), but I'd say they were the same architecture. Both are custom designs, of course, and PS4 had some extra features (like more ACEs). Anyway I don't think the situation is similar to PS5 vs Series X at all. To see that kind of difference Series X would have to have around 50% more Tflops and a much better memory system, along with more capabilities for GPGPU (which was crucial last gen due to the lack of CPU power available to those machines). The same for Pro vs One X, MS's console was better in almost every possible way (except for ROPs and the ID buffer) and the difference in raw power was again almost 50%. In fact in that case I think it was even worse, since One X had both more CUs and higher clockspeed, so caches and fixed function units were more performant too.

Anyway it was just an example of MS saying that higher clocks can in fact perform better than more CUs. Marketing? Well, maybe, but as I said they've done it again this gen by allowing devs to choose between using the CPU at 3.6 GHz with 16 threads and 3.8 GHz with 8 threads. Why would they do something like that if they are 100% sure that higher clocks with fewer cores/threads do NOT provide better results under any circumstance at all? That would be very dumb of them, wouldn't it?
 

MonarchJT

Banned
So it's better not to be honest, and to bully randoms on Twitter just to make their fanboys happy?

😂😂😂
No, it's exactly the opposite... it's just to cut off people like you who have latched onto inaccuracies and the thousand silences from Cerny and Sony regarding the PS5 SoC a million times to put the two consoles on the same level (and I'm talking specifically about the GPU). And even after years you don't miss the opportunity to question who developed the console, even when you are told directly that the XSX has all the main features that make up RDNA 2.0 GPUs.
 
Last edited:

Md Ray

Member
This would depend on the graphics engine and the kind of graphics workloads it issues to the GPU.

The PS5 will have an advantage in workloads which scale well with high clock speeds; such workloads would be dependent on latency and rasterisation (FPS, geometry throughput, elements of culling and such).

The Xbox Series X will have an advantage in compute-heavy workloads which scale better with a higher CU count, such as resolution and elements of ray-tracing.

This will carry into fully fledged next-gen engines like UE5 as well.

This does not even factor in the efficiency of the APIs, which are another important factor in performance: we have PSSL on the PlayStation side and DX12U on the Xbox side, and we've already had developers discuss how easy it is to develop and optimise games on the PS5 compared to Series X. So there's also that.

If you think either console will show significant advantages over the other in 3rd party titles then you're in for an unpleasant surprise.
Couldn't have said it any better myself.
 

Dream-Knife

Banned
I heard conflicting reports (just like I hear now about RDNA 1.5 😂), but I'd say they were the same architecture. Both are custom designs, of course, and PS4 had some extra features (like more ACEs). Anyway I don't think the situation is similar to PS5 vs Series X at all. To see that kind of difference Series X would have to have around 50% more Tflops and a much better memory system, along with more capabilities for GPGPU (which was crucial last gen due to the lack of CPU power available to those machines). The same for Pro vs One X, MS's console was better in almost every possible way (except for ROPs and the ID buffer) and the difference in raw power was again almost 50%. In fact in that case I think it was even worse, since One X had both more CUs and higher clockspeed, so caches and fixed function units were more performant too.

Anyway it was just an example of MS saying that higher clocks can in fact perform better than more CUs. Marketing? Well, maybe, but as I said they've done it again this gen by allowing devs to choose between using the CPU at 3.6 GHz with 16 threads and 3.8 GHz with 8 threads. Why would they do something like that if they are 100% sure that higher clocks with fewer cores/threads do NOT provide better results under any circumstance at all? That would be very dumb of them, wouldn't it?
Perhaps scalability? Pretty much everything is gpu bound over 1080p. Maybe for future 1080p 120 games? I don't know how many 120hz mode games will come out in the future though.
 

Md Ray

Member
No, I think performance will be similar to ps4 pro vs one x.

I'm just debating that the PS5 is somehow more powerful, or at an advantage from simply clock speeds on the same architecture. Like the analogy I made earlier in this thread: 6600xt has a higher clock speed than a 6900xt, but no one argues the 6600xt is better than the 6900xt.
Eh, please stop. It's not even close.

One X had a 43% advantage in TF and a massive 50% more BW than PS4 Pro. The difference between PS5 and XSX isn't even anything remotely like that or comparable for the perf to be similar to Pro vs One X.
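For reference, a quick back-of-the-envelope check of those deltas. The One X / Pro figures (6.0 vs 4.2 TF, 326 vs 218 GB/s) and the ~12.15 vs ~10.28 TF next-gen numbers are the commonly quoted specs, assumed here rather than taken from this thread:

```python
# Percentage gaps between the commonly quoted specs (assumed, not from this thread):
# One X 6.0 TF / 326 GB/s, PS4 Pro 4.2 TF / 218 GB/s, XSX ~12.15 TF, PS5 ~10.28 TF.
def pct_delta(a: float, b: float) -> float:
    """Return how much larger a is than b, in percent."""
    return (a / b - 1.0) * 100.0

print(round(pct_delta(6.0, 4.2)))      # ~43% compute advantage, One X over Pro
print(round(pct_delta(326.0, 218.0)))  # ~50% bandwidth advantage, One X over Pro
print(round(pct_delta(12.15, 10.28)))  # ~18% compute gap, XSX over PS5, for contrast
```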
 
Last edited:

Lysandros

Member
Eh, please stop. It's not even close.

One X had a 43% advantage in TF and a massive 50% more BW than PS4 Pro. The difference between PS5 and XSX isn't even anything remotely like that or comparable for the perf to be similar to Pro vs One X.
Not to mention ~30% higher rasterization, cache bandwidth etc. All this due to you know.. ~30% higher clock frequency.. But this seems like a lost cause, bon courage..
 

Dream-Knife

Banned
Eh, please stop. It's not even close.

One X had a 43% advantage in TF and a massive 50% more BW than PS4 Pro. The difference between PS5 and XSX isn't even anything remotely like that or comparable for the perf to be similar to Pro vs One X.

I wasn't aware there was that large of a performance gap between the two. I just remember in some games One X would have a more stable frame rate, but the two were fairly even.

I am still waiting on info on how a faster core clock will make the PS5 better despite its lower CU count.

I couldn't care less which is more powerful, as I won't be owning either system. I want these answers for the technical knowledge.

How is the PS5 more powerful than the Series X despite being 10.2 TF vs 12 TF on the same architecture?

How does a higher clock speed with a lower CU count that adds up to 10.2 TF outweigh a slower clock with a higher CU count?

That's all I'm trying to find out here. All this dancing around these questions makes it sound like the Sony defense force is making things up now.
 
Last edited:

ethomaz

Banned
No, it's exactly the opposite... it's just to cut off people like you who have latched onto inaccuracies and the thousand silences from Cerny and Sony regarding the PS5 SoC a million times to put the two consoles on the same level (and I'm talking specifically about the GPU). And even after years you don't miss the opportunity to question who developed the console, even when you are told directly that the XSX has all the main features that make up RDNA 2.0 GPUs.
And I was still right, and he was lying to his own fan base while trying to put on an "I'm an expert" poker face 😂😂😂
 
Last edited:

Md Ray

Member

I wasn't aware there was that large of a performance gap between the two. I just remember in some games One X would have a more stable frame rate, but the two were fairly even.

I am still waiting on info on how a faster core clock will make the PS5 better despite its lower CU count.

I couldn't care less which is more powerful, as I won't be owning either system. I want these answers for the technical knowledge.

How is the PS5 more powerful than the Series X despite being 10.2 TF vs 12 TF on the same architecture?

How does a higher clock speed with a lower CU count that adds up to 10.2 TF outweigh a slower clock with a higher CU count?

That's all I'm trying to find out here. All this dancing around these questions makes it sound like the Sony defense force is making things up now.

Here's one example where PS5's higher clock speed brings it on par with Series X in a scene where visuals and resolution are identical between both. If higher CU count and BW were everything, this shouldn't be happening.
JOqj3Wi.png

There are plenty of examples like this, just go through VGTech's channel and check out the last couple of vids. And re-read these excellent posts again to better understand what we mean:
Not entirely true.

Higher clock speeds allow for faster rasterisation, higher pixel fill rate, higher cache bandwidth and so on. All of these are important factors in overall performance. Even elements of the ray-tracing pipeline scale well with high clock speeds. It also explains why PS5 is trading blows with the Series X on a game-by-game basis, and this will very likely remain the case throughout the gen.

This would depend on the graphics engine and the kind of graphics workloads it issues to the GPU.

The PS5 will have an advantage in workloads which scale well with high clock speeds; such workloads would be dependent on latency and rasterisation (FPS, geometry throughput, elements of culling and such).

The Xbox Series X will have an advantage in compute-heavy workloads which scale better with a higher CU count, such as resolution and elements of ray-tracing.

This will carry into fully fledged next-gen engines like UE5 as well.

This does not even factor in the efficiency of the APIs, which are another important factor in performance: we have PSSL on the PlayStation side and DX12U on the Xbox side, and we've already had developers discuss how easy it is to develop and optimise games on the PS5 compared to Series X. So there's also that.

If you think either console will show significant advantages over the other in 3rd party titles then you're in for an unpleasant surprise.
 

Dream-Knife

Banned
Here's one example where PS5's higher clock speed brings it on par with Series X in a scene where visuals and resolution are identical between both. If higher CU count and BW were everything, this shouldn't be happening.
JOqj3Wi.png

There are plenty of examples like this, just go through VGTech's channel and check out the last couple of vids. And re-read these excellent posts again to better understand what we mean:
Yes, and I've already asked him for more information on that. I got a link multiplying the speed x 4.

I just want actual technical info on this.

I'm not debating early third party games won't perform the same.
 

Md Ray

Member
Yes, and I've already asked him for more information on that. I got a link multiplying the speed x 4.

I just want actual technical info on this.

I'm not debating early third party games won't perform the same.
I might have given that info to you a couple of days ago as well, the technical info. The PS5 GPU has higher rasterization throughput/culling rate and pixel fillrate than the XSX GPU, what more do you want? There's even a screenshot showing perf being on par in that scene. Depending on the scene in the same game, XSX can be ahead sometimes, as the scene is likely compute or BW bound or both, and scenes like the one above where PS5 is on par mean it's favouring PS5's higher clock speeds.
 
Last edited:

Dream-Knife

Banned
I might have given that info to you a couple of days ago as well, the technical info. The PS5 GPU has higher rasterization throughput/culling rate and pixel fillrate than the XSX GPU, what more do you want? There's even a screenshot showing perf being on par in that scene. Depending on the scene in the same game, XSX can be ahead sometimes, as the scene is likely compute or BW bound or both, and scenes like the one above where PS5 is on par mean it's favouring PS5's higher clock speeds.
I want technical information/sources as to why a GPU with a lower CU count and a higher clock performs better on the same architecture.

Do you have sources for your info where I can find out more? Every time I search using these terms I end up getting Series X vs PS5 links (which also state that the Series X has a more powerful GPU). I would prefer sources that are factual and outside the scope of these two boxes.
 

Md Ray

Member
I want technical information/sources as to why a GPU with a lower CU count and a higher clock performs better on the same architecture.

Do you have sources for your info where I can find out more? Every time I search using these terms I end up getting Series X vs PS5 links (which also state that the Series X has a more powerful GPU). I would prefer sources that are factual and outside the scope of these two boxes.
Search Series X and PS5 techpowerup and it has all the GPU details. Both have 64 ROPs. 64 x clk speed gives you pixel fillrate throughput. For PS5 it's 142.7 Gpix/sec vs 116 Gpix/sec on XSX. 22% higher.

Here's a Hot Chips slide from Microsoft themselves confirming the same:
sUET9U8.jpeg

7.3 Gtri/sec is the triangle rasterization rate of XSX. This is calculated by multiplying the 4 primitive units by the clock speed (1825 MHz). PS5 has the same number of primitive units, so 4 × 2230 MHz works out to 8.9 Gtri/sec on PS5 vs 7.3 Gtri/sec on XSX. 22% higher throughput.

For proof, or to verify, you can download the official AMD-published RDNA whitepaper from their website here, go to the "Radeon RX 5700 family" chapter and look at the "Triangles rasterized" entry. The number there for the 5700 XT will be "7620" (7.6 Gtri/sec). The 5700 XT also has 4 primitive units, so take the frequency mentioned right there, multiply it by 4, and you'll arrive at that 7.6 figure. There's a wealth of info in that whitepaper, including cache bandwidth, etc.
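If you want to sanity-check the arithmetic, here's the same calculation in a few lines of Python (the 64 ROPs, 4 primitive units and the two clocks are just the figures quoted above, nothing else is assumed):

```python
# Fixed-function throughput = number of units * clock. Figures as quoted above:
# 64 ROPs and 4 primitive units on both GPUs, clocks in MHz.
def pixel_fillrate_gpix(rops: int, clock_mhz: float) -> float:
    return rops * clock_mhz / 1000.0        # Gpixels/s

def triangle_rate_gtri(prim_units: int, clock_mhz: float) -> float:
    return prim_units * clock_mhz / 1000.0  # Gtriangles/s

for name, clock_mhz in [("PS5", 2230.0), ("XSX", 1825.0)]:
    print(name,
          round(pixel_fillrate_gpix(64, clock_mhz), 1), "Gpix/s,",
          round(triangle_rate_gtri(4, clock_mhz), 1), "Gtri/s")
# PS5 142.7 Gpix/s, 8.9 Gtri/s
# XSX 116.8 Gpix/s, 7.3 Gtri/s  -> both deltas are just the ~22% clock ratio
```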

EDIT: here are the differences in tl;dr form; this is legit technical info that you chose to ignore when Three Jackdaws posted it
s0n39Hi.png
 
Last edited:

I wasn't aware there was that large of a performance gap between the two. I just remember in some games One X would have a more stable frame rate, but the two were fairly even.

I am still waiting on info on how a faster core clock will make the PS5 better despite its lower CU count.

I couldn't care less which is more powerful, as I won't be owning either system. I want these answers for the technical knowledge.

How is the PS5 more powerful than the Series X despite being 10.2 TF vs 12 TF on the same architecture?

How does a higher clock speed with a lower CU count that adds up to 10.2 TF outweigh a slower clock with a higher CU count?

That's all I'm trying to find out here. All this dancing around these questions makes it sound like the Sony defense force is making things up now.

jgxkaam96z701.jpg


This is typically what you saw with Pro vs One X. We all know the XSX is more powerful than the PS5 but it certainly isn't as bad as One X vs Pro.

There's your answer.
 

rnlval

Member
Here's one example where PS5's higher clock speed brings it on par with Series X in a scene where visuals and resolution are identical between both. If higher CU count and BW were everything, this shouldn't be happening.
JOqj3Wi.png

There are plenty of examples like this, just go through VGTech's channel and check out the last couple of vids. And re-read these excellent posts again to better understand what we mean:
Again, your example screenshot has very little raytracing.

NbKH7Bi.jpg
 

rnlval

Member
Search Series X and PS5 techpowerup and it has all the GPU details. Both have 64 ROPs. 64 x clk speed gives you pixel fillrate throughput. For PS5 it's 142.7 Gpix/sec vs 116 Gpix/sec on XSX. 22% higher.

Here's a Hot Chips slide from Microsoft themselves confirming the same:
sUET9U8.jpeg

7.3 Gtri/sec is the triangle rasterization rate of XSX. This is calculated by multiplying the 4 primitive units by the clock speed (1825 MHz). PS5 has the same number of primitive units, so 4 × 2230 MHz works out to 8.9 Gtri/sec on PS5 vs 7.3 Gtri/sec on XSX. 22% higher throughput.

For proof, or to verify, you can download the official AMD-published RDNA whitepaper from their website here, go to the "Radeon RX 5700 family" chapter and look at the "Triangles rasterized" entry. The number there for the 5700 XT will be "7620" (7.6 Gtri/sec). The 5700 XT also has 4 primitive units, so take the frequency mentioned right there, multiply it by 4, and you'll arrive at that 7.6 figure. There's a wealth of info in that whitepaper, including cache bandwidth, etc.

EDIT: here are the differences in tl;dr form; this is legit technical info that you chose to ignore when Three Jackdaws posted it
s0n39Hi.png
That's a flawed argument when pixel fill rates are memory bandwidth bound. You failed the GDC 2014 lecture.

5Re3aMm.png


ZocplTb.jpg
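To put a rough number on it, here's a back-of-the-envelope sketch, assuming a plain 32-bit colour write (4 bytes per pixel) with no blending, compression or cache reuse, and the commonly quoted 448 GB/s (PS5) and 560 GB/s fast-pool (XSX) bandwidth figures; real workloads will obviously behave differently:

```python
# Bandwidth needed to sustain the peak ROP rate under the simple assumption of
# 4 bytes written per pixel (no blending, no compression, no cache reuse).
def bandwidth_needed_gbs(gpix_per_s: float, bytes_per_pixel: int = 4) -> float:
    return gpix_per_s * bytes_per_pixel

print(bandwidth_needed_gbs(142.7))  # ~571 GB/s needed vs ~448 GB/s available on PS5
print(bandwidth_needed_gbs(116.8))  # ~467 GB/s needed vs ~560 GB/s (fast pool) on XSX
# The peak pixel rate only holds while memory bandwidth (or cache hits) can feed it.
```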

Games like Doom Eternal have heavy Async Compute path usage.

BVH Raytracing
XSX: 13.1 TFLOPS equivalent
PS5: 11 TFLOPS equivalent
XSX has about 19% advantage over PS5. BVH Raytracing + raytracing denoise workloads are treated like Compute Shader path but with hardware RT acceleration.

NGGP (Next-Generation Geometry Pipeline), i.e. the Mesh Shader and Amplification Shader path, is the solution to fixed-function hardware geometry bottlenecks, and it's similar to the Compute Shader path.

RX 5700 XT doesn't support DX12U's NGGP.
PS5 has its own Primitive Shaders as its NGGP.

The RX 5700 XT has Primitive Shaders as its NGGP, but the Wintel PC world doesn't support them; Microsoft selected NVIDIA's RTX NGGP over AMD's first NGGP offering.


pqjvM5z.jpg


AMD Vega/NAVI GPUs have two main render paths, i.e. the compute engine path and the pixel engine path.

Given similar Pixel Engines for both XSX and PS5, the higher clock speed is an advantage for PS5. XSX has the Compute Engine advantage over PS5. XSX's VRS feature is for the Pixel Engine path.


PS: Compared to PC NAVI 2's Infinity Cache (L3 cache), Vega's HBC is located in the wrong place.
 
Last edited:

Arioco

Member
~30% how? That'd be 22%, no? 🤔

How is the PS5 more powerful than the series X despite being 10.2 tf vs 12 of the same architecture?

Oh, but I don't think PS5 is more powerful than Series X. It just has some advantages (and disadvantages too). Higher fill rate and higher cache bandwidth could help in some scenarios. That doesn't make PS5 "more powerful", but it can lead to better performance in some areas, even if Series X has more raw power. I think we've already seen many examples of that. PS5's extremely fast I/O system should be taken into account too. It should allow devs to improve RAM usage, load super high resolution textures (as seen in the Unreal Engine 5 demo), and load highly detailed models in a split second without needing to keep them in RAM... And who knows what else devs will come up with?

Tflops measure the computational power of the vector ALUs, and that's just ONE part of the GPU. At a higher frequency all the other parts are faster too.
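For anyone wondering where the headline Tflops numbers come from, here's a small sketch, assuming the commonly cited configurations (36 CUs @ 2.23 GHz for PS5, 52 CUs @ 1.825 GHz for XSX, 64 FP32 lanes per CU, 2 FLOPs per lane per clock):

```python
# Vector ALU throughput only: TFLOPS = CUs * lanes per CU * FLOPs per clock * clock (GHz) / 1000.
# Assumed configs: PS5 36 CUs @ 2.23 GHz, XSX 52 CUs @ 1.825 GHz.
def tflops(cus: int, clock_ghz: float, lanes_per_cu: int = 64, flops_per_clock: int = 2) -> float:
    return cus * lanes_per_cu * flops_per_clock * clock_ghz / 1000.0

print(round(tflops(36, 2.23), 2))   # ~10.28 TF (PS5)
print(round(tflops(52, 1.825), 2))  # ~12.15 TF (XSX)
# Only the ALUs scale with CU count; ROPs, rasterizers and caches scale with the clock.
```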


 

rnlval

Member
Oh, but I don't think PS5 is more powerful than Series X. It just has some advantages (and disadvantages too). Higher fill rate and higher cache bandwidth could help in some scenarios. That doesn't make PS5 "more powerful", but it can lead to better performance in some areas, even if Series X has more raw power. I think we've already seen many examples of that. PS5's extremely fast I/O system should be taken into account too. It should allow devs to improve RAM usage, load super high resolution textures (as seen in the Unreal Engine 5 demo), and load highly detailed models in a split second without needing to keep them in RAM... And who knows what else devs will come up with?

Tflops measure the computational power of the vector ALUs, and that's just ONE part of the GPU. At a higher frequency all the other parts are faster too.
FYI, GPU L2 cache bandwidth is related to the memory controller units, i.e. the XSX GPU has 5 MB of L2 cache with wider pathways compared to the RX 5700 XT's 4 MB of L2 cache.
 

rnlval

Member
Eh, please stop. It's not even close.

One X had a 43% advantage in TF and a massive 50% more BW than PS4 Pro. The difference between PS5 and XSX isn't even anything remotely like that or comparable for the perf to be similar to Pro vs One X.
Also, X1X's 32 ROPs have a 2 MB render cache, instead of the baseline Polaris 32 ROPs being directly connected to the memory controllers.

Both XSX and PS5 ROPs are connected to the L2 cache.
 
Last edited:

Md Ray

Member
That's a flawed argument when pixel fill rates are memory bandwidth bound. You failed the GDC 2014 lecture.

5Re3aMm.png


ZocplTb.jpg

Games like Doom Eternal have heavy Async Compute path usage.

BVH Raytracing
XSX: 13.1 TFLOPS equivalent
PS5: 11 TFLOPS equivalent
XSX has about 19% advantage over PS5. BVH Raytracing + raytracing denoise workloads are treated like Compute Shader path but with hardware RT acceleration.

NGGP (Next-Generation Geometry Pipeline), i.e. the Mesh Shader and Amplification Shader path, is the solution to fixed-function hardware geometry bottlenecks, and it's similar to the Compute Shader path.

RX 5700 XT doesn't support DX12U's NGGP.
PS5 has its own Primitive Shaders as its NGGP.

The RX 5700 XT has Primitive Shaders as its NGGP, but the Wintel PC world doesn't support them; Microsoft selected NVIDIA's RTX NGGP over AMD's first NGGP offering.


pqjvM5z.jpg


AMD Vega/NAVI GPUs have two main render paths, i.e. the compute engine path and the pixel engine path.

Given similar Pixel Engines for both XSX and PS5, the higher clock speed is an advantage for PS5. XSX has the Compute Engine advantage over PS5. XSX's VRS feature is for the Pixel Engine path.


PS: Compared to PC NAVI 2's Infinity Cache (L3 cache), Vega's HBC is located in the wrong place.
That's all well and good, but you fail to explain why the PS5 can be on par with XSX in scenes like the one I posted from RE8. Of course XSX has higher compute power and BW, so it will have the advantage, and in fact it does in other locations of RE8; no one's saying that XSX is weaker.

I'm only saying that depending on scenes/engines PS5 can get close or fully close the gap in frames where it's not compute or BW heavy. DOOM Eternal is definitely BW heavy so I agree in scenarios like that PS5 might never come close or close the gap entirely.

But not every engine is constructed/behaves like IdTech 7. And you'll see variances throughout the gen.
 

Md Ray

Member
Oh, but I don't think PS5 is more powerful than Series X. It just has some advantages (and disadvantages too). Higher fill rate and higher cache bandwidth could help in some scenarios. That doesn't make PS5 "more powerful", but it can lead to better performance in some areas, even if Series X has more raw power. I think we've already seen many examples of that. PS5's extremely fast I/O system should be taken into account too. It should allow devs to improve RAM usage, load super high resolution textures (as seen in the Unreal Engine 5 demo), and load highly detailed models in a split second without needing to keep them in RAM... And who knows what else devs will come up with?

Tflops measure the computational power of the vector ALUs, and that's just ONE part of the GPU. At a higher frequency all the other parts are faster too.
I fully agree with this.
 

Panajev2001a

GAF's Pleasant Genius
Also, X1X's 32 ROPs have a 2 MB render cache, instead of the baseline Polaris 32 ROPs being directly connected to the memory controllers.
That would certainly widen the bandwidth gap between the two, but then again, even when doing the PR rounds I think they knew they had over-engineered the ROPs fillrate (maybe there were some edge cases where they could flex it, but rare ones). PS4 Pro had some post-Polaris enhancements, but I do not recall a wide render cache.
 
Eh, please stop. It's not even close.

One X had a 43% advantage in TF and a massive 50% more BW than PS4 Pro. The difference between PS5 and XSX isn't even anything remotely like that or comparable for the perf to be similar to Pro vs One X.
Most importantly, One X had a massive GPU L2 cache advantage over Pro (plus the 2MB render cache): 2MB of L2 GPU cache, allegedly twice as much as (and 25% faster than) PS5's. This could have helped as much as bandwidth in some cases.

This is also why the 25% bandwidth advantage of XSX is not that significant in most cases: the GPU cache setup on PS5 is very efficient, and PS5 has the advantage in that area.
 
Last edited:

onesvenus

Member
Imagine when an MS engineer came to tell me the opposite, and I asked him why the Series X GPU has silicon parts from AMD IP, and he kept up the PR dodge even after the 3rd question lol

And I was still right, and he was lying to his own fan base while trying to put on an "I'm an expert" poker face 😂😂😂

You believing you know more about hardware design than one of the Microsoft engineers who designed the Xbox is kind of sad. Stop repeating it because you are only embarrassing yourself.
 

MonarchJT

Banned
You believing you know more about hardware design than one of the Microsoft engineers who designed the Xbox is kind of sad. Stop repeating it because you are only embarrassing yourself.
If someone were to question Cerny's words and imply that he is possibly lying, given that Sony, for whatever reason, have never done a real hardware deep dive, never released real technical specifications, never clarified what's inside this blessed Geometry Engine and how it ranks with respect to the newest mesh shaders, or the VRS situation, etc. etc... well, anyone who allowed themselves to say that Cerny lied would probably get an instant ban.
 
It's a MINIMUM of 18%, and we still don't know how much the PS5 will downclock later in the gen when workloads become heavier and heavier. We will also see the true differences when next-gen engines arrive... I mean engines that use mesh shaders or high parallelization, like UE5.
Anyway, already in this first year we're seeing the XSX pushing a noticeably higher pixel count (noticeable in relation to the power of the GPU, I'm not talking about visual perception).

This is not entirely accurate. Last-generation engines, and the ones currently being used in the majority of current PS5 and Xbox Series X/S games (both third and first party), favour a wider but slower design (more CUs clocked at a relatively lower frequency). So the fact that PS5 is still trading blows with the Series X in games which favour wider designs should already tell you enough about the "minimum 18% compute advantage" and the supposed disparity in performance between the two consoles.

As of now, we've only had a glimpse at some "true" next-generation engines such as Unreal Engine 5, and it is true that UE5 itself favours compute, but how that advantage manifests in real-time game performance is still yet to be seen. Keep in mind that UE5 does not support Mesh Shaders, although it does support Primitive Shaders and its own version of what Epic calls "hyper optimised compute shaders". Judging from what we've seen of the Unreal Engine 5 demos, especially on the PS5, I'm not concerned about any disparity in performance between either console.

UE5 is one engine that favours compute; however, it's very likely that more and more next-gen engines will favour a narrower design (fewer CUs, relatively speaking, but clocked at a higher frequency). As mentioned in my previous posts, this would favour the cache hierarchy, so things like alpha effects, pixel fill-rate, geometry throughput, latency, rasterisation and elements of culling, amongst other things, will be critical to performance, especially in next-gen games.

You also mentioned Mesh Shaders. This is another complicated subject, but the hardware required for Mesh Shaders has been present in all AMD GPUs since Vega; both Primitive and Mesh Shaders offer the same performance gains when optimised, and Mesh Shaders are simply an API implementation. All RDNA 2 cards, very likely including the Series X/S, convert Mesh Shaders into Primitive Shaders in code.

My point is that this whole narrative of the PS5 "getting pushed by next-generation engines" is complete nonsense; it will in fact run more efficiently than it does now, including on engines which favour compute, like UE5. I'm not trying to convince anyone that the PS5 is more powerful, but it does have advantages over the Series X in elements of GPU performance which I have already discussed. To reiterate my original point, people who are expecting some sort of major performance advantage of one console over the other are in for an unpleasant surprise.

I would also like to add that the advantages each console has don't mean it's bad at other things: PS5 having relatively less compute power than Series X doesn't mean it's "bad" at compute, and the same goes for ray-tracing; even though Series X has more RT cores than PS5, that doesn't mean PS5 is bad at RT. In fact we've seen some of the best RT implementations on PS5 compared to Series X (mostly in first party titles).

Likewise, the PS5 having higher frequencies doesn't mean the Series X will be bad in scenarios which favour things like alpha effects, rasterisation, geometry throughput and so on.
 
Last edited:

ethomaz

Banned
You believing you know more about hardware design than one of the Microsoft engineers who designed the Xbox is kind of sad. Stop repeating it because you are only embarrassing yourself.
No lol.

But what I said and asked him, after he came to talk to me, was and is right… whether I know more or less about hardware design is unimportant (in fact I know very little about it from university and have never worked with it).

He embarrassed himself, first by dodging the question and not answering it, and then by putting on the arrogant "I'm an expert" poker face.

Let's be honest here. Only Xbox fans really supported the false PR argument he tried to create.

I guess you know that, like most Xbox fans, but you'd never accept that the MS engineer wasn't honest, because I'm the damn Sony fanboy.
 
Last edited:
No lol.

But what I said and asked him, after he came to talk to me, was and is right… whether I know more or less about hardware design is unimportant (in fact I know very little about it from university and have never worked with it).

He embarrassed himself, first by dodging the question and not answering it, and then by putting on the arrogant "I'm an expert" poker face.

Let's be honest here. Only Xbox fans really supported the false PR argument he tried to create.
I know a few other tech-savvy people on Twitter asked him some technical questions (in good faith) and he was giving some really ambiguous responses. Although I don't want to say much, because it's not something I've looked into.
 

Zathalus

Member
No lol.

But what I said and asked him, after he came to talk to me, was and is right… whether I know more or less about hardware design is unimportant (in fact I know very little about it from university and have never worked with it).

He embarrassed himself, first by dodging the question and not answering it, and then by putting on the arrogant "I'm an expert" poker face.

Let's be honest here. Only Xbox fans really supported the false PR argument he tried to create.

I guess you know that, like most Xbox fans, but you'd never accept that the MS engineer wasn't honest, because I'm the damn Sony fanboy.
You don't find it odd that you always believe what Sony has to say, but doubt everything Microsoft has to say?
 

onesvenus

Member
No lol.

But what I said and asked him, after he came to talk to me, was and is right… whether I know more or less about hardware design is unimportant (in fact I know very little about it from university and have never worked with it).

He embarrassed himself, first by dodging the question and not answering it, and then by putting on the arrogant "I'm an expert" poker face.

Let's be honest here. Only Xbox fans really supported the false PR argument he tried to create.

I guess you know that, like most Xbox fans, but you'd never accept that the MS engineer wasn't honest, because I'm the damn Sony fanboy.
Well, your questions in that Twitter thread were eyeroll-worthy. I know that in some more technically minded forums people laughed at you. I was only saying it in good faith.

When you double down on something like this without having experience in the subject matter, you are being a fanboy and nothing else. It's not that he didn't answer, or that the answers were ambiguous; it's that you weren't receiving the answer you wanted to hear, so you automatically dismissed it.

If I were you I'd stop bringing up that thread as if it were some kind of revelation only you can see. Anyway, enough off-topic.
 

ethomaz

Banned
Well, your questions in that Twitter thread were eyeroll-worthy. I know that in some more technically minded forums people laughed at you. I was only saying it in good faith.

When you double down on something like this without having experience in the subject matter, you are being a fanboy and nothing else. It's not that he didn't answer, or that the answers were ambiguous; it's that you weren't receiving the answer you wanted to hear, so you automatically dismissed it.

If I were you I'd stop bringing up that thread as if it were some kind of revelation only you can see. Anyway, enough off-topic.
I’m not sure what you are trying to say.
I explained the question to him 3 times.

People laughing at me is of no importance at all, imo.
The minimum you'd expect from an expert in the subject is to honestly answer a question when he starts a conversation with somebody random like me.

I mean, I never put him in a bad position with his community… quite the opposite… I did not even know he existed until I received a notification from him... if he takes the time to engage with me, then why not answer what was asked?

There is no revelation, btw... just the facts of what happened.
 
Last edited:

ethomaz

Banned
I gave a full read of what Locuza wrote on Twitter this time... 28 tweets :messenger_face_screaming:
It is basically what was already known before.
In fact the only new part is about the PS5's FPU.

Locuza changed his stance on the cut-down FPU because, in fact, it doesn't seem to be cut down but compressed.
In simple terms, AMD reworked Zen's FPU to use fewer transistors while maintaining all the functionality.

Why? Nobody knows yet.

BTW GPUsAreMagic agrees with him... it looks like the same FPU with different transistor density.

 
Last edited:

onesvenus

Member
I gave a full read of what Locuza wrote on Twitter this time... 28 tweets :messenger_face_screaming:
It is basically what was already known before.
In fact the only new part is about the PS5's FPU.

Locuza changed his stance on the cut-down FPU because, in fact, it doesn't seem to be cut down but compressed.
In simple terms, AMD reworked Zen's FPU to use fewer transistors while maintaining all the functionality.

Why? Nobody knows yet.

BTW GPUsAreMagic agrees with him... it looks like the same FPU with different transistor density.


Have you noticed the tweet where he says there's no unified cache, as you claimed in the first pages of this thread? Or how he calls the ones who were shouting the RDNA1 bullshit clowns? Or how the PS5 is using the old render backend (unlike the one introduced in PC RDNA2, which is found in the XSX) and has no VRS-specific hardware? Or how he says there's no L3$ on the GPU whatsoever?

I hope you have, so you can stop going into hardware threads claiming it does.
 

Lysandros

Member
I gave a full read of what Locuza wrote on Twitter this time... 28 tweets :messenger_face_screaming:
It is basically what was already known before.
In fact the only new part is about the PS5's FPU.

Locuza changed his stance on the cut-down FPU because, in fact, it doesn't seem to be cut down but compressed.
In simple terms, AMD reworked Zen's FPU to use fewer transistors while maintaining all the functionality.

Why? Nobody knows yet.

BTW GPUsAreMagic agrees with him... it looks like the same FPU with different transistor density.


Yet his eagerness and insistence on the 'cut-down 128-bit FPU' narrative created quite a lot of confusion and misinformation back in the day, despite Mark Cerny's very clear statement that 'PS5's CPU supports native 256-bit instructions'. I must say that I am not a big fan of the trend of putting him above the main system architects on such matters.
 

ToTTenTranz

Banned
In simple terms, AMD reworked Zen's FPU to use fewer transistors while maintaining all the functionality.
If it has the same functionality at the cost of increased heat density, then it's not using fewer transistors. It's using area-optimized transistor libraries.


The initial assessment that the FPU had been cut down to 128-bit was a pretty bad one, to be honest, considering Cerny had named 256-bit floating-point ops as something that would cause a decrease in CPU clocks.
 
Last edited: