
Next-Gen PS5 & XSX |OT| Console tEch threaD

Status
Not open for further replies.

ethomaz

Banned
What makes you say that? I'm very curious to know how you saw that.

Edit: Ok I think I see it as well.

EuNU-3oXMAIiJ6a


Xbox_Series_X_slide-2.jpg
If this is correct each CCX has 8MB of L3 Cache. Since there are two, that makes 16MB of L3 Cache.

Very interesting if true.
I believe the person who made the colorized diagram got it wrong.
They even put 4 blocks of 512KB, which gives us 2MB, not 4MB.

But we can compare the sizes with Series X to confirm it.
 
Last edited:

Lysandros

Member
So the day of the die shot has finally come!

Looks like the unified L3 CPU cache was squashed (not terribly surprising). The Geometry Engine looks rather large, right in the middle of the CUs, and the I/O complex is also taking up a nice chunk of silicon, as expected. Curious to see if someone is able to identify all the components in that area. Interested to see how much SRAM was added and, overall, how much relative space was spent on I/O.

Not too many surprises tbh, except: does the CPU actually have a total of 16MB of L3 cache? That's 2x what was expected, is it not? Looking at the XsX diagram it only seems to have 8MB of L3 between all 8 CPU cores.
That would be pretty huge 'if' true, but I will remain sceptical for now.
 

IntentionalPun

Ask me about my wife's perfect butthole
What makes you say that? I'm very curious to know how you saw that.

Edit: Ok I think I see it as well.

EuNU-3oXMAIiJ6a


Xbox_Series_X_slide-2.jpg
If this is correct each CCX has 8MB of L3 Cache. Since there are two, that makes 16MB of L3 Cache.

Very interesting if true.
Looks like a mistake.

Notice how the "broken down" version below shows 4 × 512KB blocks; so that'd be 2MB × 2 for each CCX. I think they meant to put 2MB above in the non-broken-down half. So 8MB total.

Otherwise, counting them all up actually gives 6MB per side.
 
Last edited:

LucidFlux

Member
I did not notice that, where is that highlighted?

What makes you say that? I'm very curious to know how you saw that.

I was just looking at the colorized and labeled diagram
GLCJs95.jpg



Now this could just be labeled improperly or I'm just interpreting it incorrectly.

JuDU0yr.jpg


If this is what the person meant to convey then it totals to 16MB otherwise....

ImfrBs3.jpg


If this is what they meant then it's the same 8MB as 512KB x 16 = 8MB

I could totally just be nitpicking the labeling lol...

In fact, I just looked at the relative sizes of the L3 on both chips and they are very similar, so I highly doubt the 16MB is correct.
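The two readings of the labels differ by exactly a factor of two, which is easy to sanity-check with pure arithmetic (no hardware assumptions beyond the labels in the diagrams):

```python
KB = 1024           # bytes
MB = 1024 * KB

# Reading A: each of the two CCXs carries a unified 8 MB L3 block.
total_a = 2 * 8 * MB

# Reading B: sixteen 512 KB slices, as the XSX slide labels them.
total_b = 16 * 512 * KB

print(total_a // MB)  # 16
print(total_b // MB)  # 8
```

Which is why comparing the physical area of the L3 regions on the two die shots, as done above, is the quickest way to pick between the two readings.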
 

M1chl

Currently Gif and Meme Champion
What makes you say that? I'm very curious to know how you saw that.

Edit: Ok I think I see it as well.

EuNU-3oXMAIiJ6a


Xbox_Series_X_slide-2.jpg
If this is correct each CCX has 8MB of L3 Cache. Since there are two, that makes 16MB of L3 Cache.

Very interesting if true.
Well, if you look at the bottom L3 cluster, it says 4x 512KB, so I don't think it's different from the XSX...
 

raul3d

Member
Yields are 73%?! Yikes.
Na, that yield is for the full 28 WGP chip without disabling WGPs. If the defect is in a WGP, the chip can still be used. The article states this: "by absorbing that defect and disabling that WGP, that SoC can be used in a console and the effective yield is higher."

I find that 306.4 mm2 die size more shocking. I thought all the estimates were substantially higher.
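The salvage logic can be made concrete with a standard Poisson yield model. Everything here beyond the article's 73% figure and the die size is an assumption for illustration; in particular, the fraction of the die covered by WGPs is made up:

```python
import math

die_area_cm2 = 3.604   # 360.4 mm^2
raw_yield = 0.73       # dies with all 28 WGPs working (from the article)

# Poisson yield model: raw_yield = exp(-D * A); back out defect density D.
D = -math.log(raw_yield) / die_area_cm2   # defects per cm^2

# Hypothetical: WGPs cover ~40% of the die. A die whose single defect lands
# in a WGP can be salvaged by disabling that WGP.
f_wgp = 0.40
p_one_defect = (D * die_area_cm2) * math.exp(-D * die_area_cm2)  # Poisson P(k=1)
effective_yield = raw_yield + p_one_defect * f_wgp
print(round(effective_yield, 3))  # ~0.822 under these assumptions
```

The exact number depends entirely on the assumed WGP area fraction, but the shape of the argument is the article's: single defects that land in redundant blocks don't kill the die, so the usable yield sits well above the fully-enabled yield.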
 

kyliethicc

Member
Na, that yield is for the full 28 WGP chip without disabling WGPs. If the defect is in a WGP, the chip can still be used. The article states this: "by absorbing that defect and disabling that WGP, that SoC can be used in a console and the effective yield is higher."

I find that 306.4 mm2 die size more shocking. I thought all the estimates were substantially higher.
Plus they only need 24 WGPs to use that chip in xCloud servers, which improves yields.

The entire die is 360.4 mm^2, so I'm not sure where that article got 306.4 ... must be a typo. They even say it's 360.4 earlier in the article.
 

GreyHand23

Member
https://wccftech.com/4a-games-tech-...ue-for-rt-but-amds-approach-is-more-flexible/

Great interview about ray tracing. The whole interview is interesting, but people on this forum should take this part to heart.

With regards to ray tracing specifically, how would you characterize the different capabilities of PlayStation 5 from Xbox Series X and both consoles from the newly released RTX 3000 Series PC graphics cards? Overall, should we expect a lot of next-gen games using ray tracing in your opinion?

What I can say for sure now is PlayStation 5 and Xbox Series X currently run our code at about the same performance and resolution.
As for the NV 3000-series, they are not comparable, they are in different leagues in regards to RT performance. AMD’s hybrid raytracing approach is inherently different in capability, particularly for divergent rays. On the plus side, it is more flexible, and there are myriad (probably not discovered yet) approaches to tailor it to specific needs, which is always a good thing for consoles and ultimately console gamers. At 4A Games, we already do custom traversal, ray-caching, and use direct access to BLAS leaf triangles which would not be possible on PC.

As for future games: the short answer would be yes. And not only for graphics, by the way. Why not
path-trace sound for example? Or AI vision? Or some explosion propagation? We are already working on some of that.
 

ethomaz

Banned
Na, that yield is for the full 28 WGP chip without disabling WGPs. If the defect is in a WGP, the chip can still be used. The article states this: "by absorbing that defect and disabling that WGP, that SoC can be used in a console and the effective yield is higher."

I find that 306.4 mm2 die size more shocking. I thought that all the estimates were substantial higher.
It is 360.4mm2.

All estimates (later confirmed by MS) were around 360mm2.
 
Last edited:

IntentionalPun

Ask me about my wife's perfect butthole



Listen to the voice of reason. 👂🏻👆🏻


AMD CPUs supported AVX2 before the Zen 2 FPU was widened to improve its performance... as early as 2017 for their consumer CPUs.

Cerny specifically talked about their support of AVX2... he brought it up as a segue into mentioning their variable frequency setup, since those ops are expensive and might be one reason the PS5 has to drop GPU clocks...

So... lacking the wider FPU just means worse performance for AVX2, not that they don't support it... or that it "should not be on the CPU." That's my read from googling, at least lol

edit: Granted, it's possible PS5's co-processors make the use of AVX2 even more minimal on PS5... a quick google suggests data compression, for instance, is one of the more common CPU uses, as well as stuff like video encoding (which is basically compression)... Sony has dedicated hardware for both.
 
Last edited:

LiquidRex

Member
Lmao he skipped over a lot of stuff, a little awkward since he was the one to leak the unified L3 cache. I believe he passed off that information in sincerity but I guess his sources were wrong on that.

Although he is right in saying the die shot raises more questions than it answers.
I believe Paul will address everything in due course; he never does things for clicks. Note: there was a patent by Mark Cerny that insinuated a unified cache, but Paul & Amata on the channel have always said a patent is just a patent and doesn't necessarily mean it's going to be part of the end product. :messenger_grinning:

I think there's still more to come; for example GE could still be part of RDNA 3, and there is a lot we don't know about the GE either. :messenger_winking:
 

IntentionalPun

Ask me about my wife's perfect butthole
Well the developer did say they have their own VRS solution for the PS5. But then Alex says the PS5 isn't capable of having it.

I'm confused.

Since they had this on PS4, that means they had a software-based solution.

Presumably for Xbox Series X, they'd be using the VRS hardware support.

For PS5? They wouldn't re-use the PS4 approach if PS5 had VRS hardware support, is what I guess people are assuming there.
 
Last edited:
Well the developer did say they have their own VRS solution for the PS5. But then Alex says the PS5 isn't capable of having it.

I'm confused.

Anyways the comparison should be interesting because the developers are using RDNA2 features.
To me it sounds like a software solution, like on PS4.

But is VRS hardware-accelerated on RDNA 2? Mesh Shaders we know are not. SFS I don't know.

Anyway, Dirt 5 uses VRS on Xbox Series X and we didn't see any better performance against PS5.

But Gears 5 and Gears Tactics have good tier-2 VRS.
 
Last edited:

IntentionalPun

Ask me about my wife's perfect butthole
Some guys like LeviathanGamer are saying Mesh Shaders are a software layer, and at the hardware level it's the same primitive shaders as RDNA 1.


If it was done in software, why would it be RDNA 2 only?

It's a hardware feature nVidia brought out with Turing.. I don't think AMD would advertise a new "platform" feature if it was a software thing.
 

IntentionalPun

Ask me about my wife's perfect butthole
So what? You could probably use Mesh Shaders on the 5700XT using Vulkan.

No... you can't... until RDNA 2 they were an nVidia-only feature, because nVidia had the hardware to support them.

UE5 will use Primitive Shaders.

We have no idea what UE5 will and won't support. We just know for the PS5 demo they used primitive shaders.
 
Last edited:

Neo_game

Member
As far as I know, only RT needs HW acceleration and is the only DX12U feature that actually improves IQ. The rest of the features are optimization techniques at the API level, for ease of game development: programmers don't have to make or use their own libraries now, though it's up to them. So it's not surprising that something like VRS or sampler feedback etc. has probably already been used for years by some.
 
Well the developer did say they have their own VRS solution for the PS5. But then Alex says the PS5 isn't capable of having it.

I'm confused.

Anyways the comparison should be interesting because the developers are using RDNA2 features.
I think he means it has no hardware acceleration silicon like the Series X or desktop RDNA 2 cards do.

The silicon cost for VRS is very small according to Microsoft, so it's very likely it was removed on purpose by Sony because they either didn't feel it was useful or wanted to implement their own solution. Rumour is that PS5 has a hybrid VRS solution which involves the GE, but again those are just rumours.

Call of Duty actually has its own software-based VRS solution which works great, but software solutions usually require more resources from the GPU, so they are not as efficient as the fixed-function hardware.

I think we should look back to Matt Hargett's comments on VRS and the GE; he implied that culling via the GE has a massive performance gain which VRS doesn't "hold a handle on". This is only going to get better with Primitive Shaders as well. LeviathanGamer2 has stated a few times now that one of the single biggest gains the next-gen consoles have is Primitive/Mesh Shaders. To my knowledge they have not been incorporated into any games yet and it will take a year or two for developers to implement them, but I think the results will be ridiculous.
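For what a "software VRS solution" means in practice: roughly, the shader itself decides per screen tile whether it can get away with shading at a coarser rate, instead of fixed-function hardware applying the rate. A toy sketch of the idea; the tile size, threshold, and function name are all made up for illustration:

```python
def shading_rate(tile_luma, threshold=0.1):
    """Pick a rate per tile: (1, 1) = full rate, (2, 2) = one shade per 2x2 pixels."""
    contrast = max(tile_luma) - min(tile_luma)
    return (1, 1) if contrast > threshold else (2, 2)

sky = [0.80, 0.81, 0.80, 0.82]      # flat tile: safe to shade coarsely
foliage = [0.10, 0.55, 0.30, 0.90]  # detailed tile: keep full rate

print(shading_rate(sky))      # (2, 2)
print(shading_rate(foliage))  # (1, 1)
```

The efficiency point in the post is that doing this analysis and the coarse shading in shader code costs GPU time itself, whereas tier-2 VRS hardware applies the requested rate essentially for free once told what to do.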
 
I think he means it has no hardware acceleration silicon like the Series X or desktop RDNA 2 cards do.

The silicon cost for VRS is very small according to Microsoft, so it's very likely it was removed on purpose by Sony because they either didn't feel it was useful or wanted to implement their own solution. Rumour is that PS5 has a hybrid VRS solution which involves the GE, but again those are just rumours.

Call of Duty actually has its own software-based VRS solution which works great, but software solutions usually require more resources from the GPU, so they are not as efficient as the fixed-function hardware.

I think we should look back to Matt Hargett's comments on VRS and the GE; he implied that culling via the GE has a massive performance gain which VRS doesn't "hold a handle on". This is only going to get better with Primitive Shaders as well. LeviathanGamer2 has stated a few times now that one of the single biggest gains the next-gen consoles have is Primitive/Mesh Shaders. To my knowledge they have not been incorporated into any games yet and it will take a year or two for developers to implement them, but I think the results will be ridiculous.
Funnily enough, LeviathanGamer2 has been stating for a few weeks now that he has figured out how PS5 handles VRS and that he was going to do a write-up on it, and then he tweeted this:



For some context, Primitive/Mesh Shaders are compute shaders incorporated into the graphics pipeline, which means they avoid the cache/RAM/CPU bottlenecks that standalone compute shaders rely on heavily.
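The practical win of primitive/mesh-style geometry processing is rejecting whole clusters of triangles ("meshlets") before they ever reach the rasterizer. A toy per-cluster backface test, with made-up data:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Average facing direction per meshlet (illustrative values only).
meshlets = {
    "front": (0.0, 0.0, 1.0),   # points toward the camera
    "back":  (0.0, 0.0, -1.0),  # points away from the camera
}
view_dir = (0.0, 0.0, -1.0)     # camera looks along -z

# Keep only clusters that can face the camera; culled clusters never
# generate any rasterization or pixel-shading work at all.
kept = [name for name, n in meshlets.items() if dot(n, view_dir) < 0.0]
print(kept)  # ['front']
```

Real implementations test cone angles and frustum/occlusion per cluster rather than a single normal, but the principle is the same: one cheap test throws away hundreds of triangles at once.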
 
Last edited: