
Microsoft Xbox Series X's AMD Architecture Deep Dive at Hot Chips 2020

To understand it better, see XSX memory bandwidth explained in 3 tweets



That was debunked some time ago, if I am not mistaken. It does not read "in parallel": the first GB of each 2 GB chip is read at 560 GB/s.

When more RAM is needed and the other 6 GB have to be addressed, the remaining data is read at 336 GB/s.

I am not even convinced that "other less prioritized" stuff such as audio, IO, CPU or OS instructions can be mapped to the "336 GB/s of slower RAM", because it is all read sequentially.

As I said, we will find out in 3 to 4 months how this RAM setup actually works.
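For what it's worth, both headline figures fall straight out of the chip layout MS has described (ten GDDR6 chips at 14 Gbps per pin on a 320-bit bus, six 2 GB chips plus four 1 GB chips). Napkin math, not anything from the slides:

```python
# Napkin math for the Series X memory pools, from the published layout:
# ten GDDR6 chips at 14 Gbps per pin, 32 bits of bus width per chip,
# six 2 GB chips + four 1 GB chips = 16 GB total.
PIN_RATE_GBPS = 14
BITS_PER_CHIP = 32

per_chip_gbs = PIN_RATE_GBPS * BITS_PER_CHIP / 8  # 56 GB/s per chip

# The first GB of every chip is interleaved, so all ten chips serve it:
fast_pool_gbs = 10 * per_chip_gbs  # 560 GB/s across 10 GB
# The second GB exists only on the six 2 GB chips:
slow_pool_gbs = 6 * per_chip_gbs   # 336 GB/s across 6 GB

print(fast_pool_gbs, slow_pool_gbs)  # 560.0 336.0
```

The 336 GB/s figure is just six chips' worth of bandwidth, which is why touching the slow pool can't use the full bus.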
 
Last edited:
I think the PS5 audio solution is 100 GFLOPs, not 200+.

Yeah you're right, M1chl pointed that out earlier. CUs as AMD describes them are dual compute units. I need to re-read how Cerny described the Tempest Engine in that regard to make sure, though.

This is what MS said.

For me, the original idea was 20 GB of RAM, all on a 320-bit bus at 560 GB/s. RAM prices went up, and they had to cut back to keep the BOM below 600 USD.
The operating system sits in the slowest pool, and every time that pool is accessed the overall bandwidth drops well below the PS5's 448 GB/s.

I said that a long time ago, and recent rumors of the SX staying at 600 USD have only reinforced my theory.

Likewise, I think Sony wanted more than 448 GB/s of bandwidth and had to settle for 448 GB/s because the price of RAM went up.
The difference is that Sony's original design allowed more flexible and efficient cost cutting than MS's, which had to resort to this workaround.

"Well below" could be anything, but I doubt it will be anywhere near that much lower than the PS5's bandwidth figure. There have been ridiculous blog posts and such written to imply the effective bandwidth is even lower than the PS4's, which is complete lunacy.

If it takes a few cycles (say, fewer than 10) for full bandwidth saturation to recover when switching from one pool to the other, you still end up with an effective average bandwidth much higher than the PS5's. Anyone calculating the two bandwidth pools as a simple average is doing it wrong: you can't guarantee that every game (or any game) will use the faster and slower memory pools in exactly a 1:1 ratio of cycles per frame, let alone per second, let alone across an application's entire run time.

So while you can theoretically say the average is 448 GB/s, in practice it will be much higher, even if it's not 560 GB/s. The same goes for people, like a blog post I've read (and some posters elsewhere such as Lady Gaia, though her assumed effective bandwidth is somewhat more reasonable, if still wrong), assuming effective Series X bandwidth is lower than 448 GB/s: the bandwidth "drops" when switching from one pool to the other won't last more than a few cycles, and we're talking about processors that perform billions of cycles per second.
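To make that averaging point concrete, here's a toy model; the traffic fractions are made up purely for illustration, since nobody outside dev teams has real numbers:

```python
FAST_GBS, SLOW_GBS = 560.0, 336.0  # Series X pool bandwidths

def effective_bandwidth_gbs(frac_fast):
    """Time-weighted average bandwidth when frac_fast of a frame's
    memory traffic hits the fast pool (illustrative, not measured)."""
    return frac_fast * FAST_GBS + (1 - frac_fast) * SLOW_GBS

# A naive 50/50 split gives the oft-quoted 448 GB/s average:
print(effective_bandwidth_gbs(0.5))   # 448.0
# But a GPU-heavy frame skews heavily toward the fast pool:
print(effective_bandwidth_gbs(0.75))  # 504.0
```

The whole argument is about what `frac_fast` actually is per frame: assuming 0.5 for every game is the mistake being called out above.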
 
Last edited:

Deto

Banned
To understand it better, see XSX memory bandwidth explained in 3 tweets



If a random astroturfer on Twitter said it, it must be true.

He reminds me of misterxmedia, only with a stock-image avatar.

I think this MS marketing is bizarre, creating these astroturfers to deceive its captive consumers. It resembles a sect.


Guess we will know soon enough.

I would not be too worried about the RAM setup though. The GTX 970 (the faux 4 GB card) had a similar setup (3.5 GB of fast RAM vs. a slow-ass 0.5 GB that would bring performance to a halt when reached), and many times users forced the card to use the whole memory without the games stuttering.

It was only when users went nuts and cranked the games' settings up to eleven (4K, ultra settings, uncompressed textures) that the GPU shat itself and the performance went to shit.

In reality we won't really see what happens when games hit that limit, because developers will build their games in a "split-memory"-friendly environment that monitors RAM usage at all times.

Thank you, I had forgotten that example.


That was debunked some time ago, if I am not mistaken. It does not read "in parallel": the first GB of each 2 GB chip is read at 560 GB/s.

When more RAM is needed and the other 6 GB have to be addressed, the remaining data is read at 336 GB/s.

I am not even convinced that "other less prioritized" stuff such as audio, IO, CPU or OS instructions can be mapped to the "336 GB/s of slower RAM", because it is all read sequentially.

As I said, we will find out in 3 to 4 months how this RAM setup actually works.


Exactly.
 
Last edited:

Dolomite

Member
PS5 should have the advantage if both consoles are using the same number of rays because PS5 will be able to have more bounces during the same time. XSX should be able to display more rays (with less bounces). But there is also the available bandwidth to be taken into the equation.
Not at all that simple. Let's not do mental gymnastics to even the playing field.
 

MrFunSocks

Banned
Can anyone explain in layman's terms why one would go with 52 CUs at a lower clock speed rather than 36 CUs at higher clocks? Die size, yields and cost are the first topics of discussion in the slides, signifying their importance. What are the tradeoffs in both cases?
Lower clocks = less heat, better yields, and less cooling needed.
Higher clocks = more heat, worse yields, more cooling needed.
More CUs at a lower frequency = work is spread wider, so more can be done in parallel, but each unit isn't working as fast.
Fewer CUs at a higher frequency = each CU gets through more work, but not as much can be done at once.
Fewer CUs = less space, so the die can be smaller/cheaper, or the same size with the extra room used for something else. For example, IIRC the XB1's die was bigger than the PS4's despite having a significantly smaller GPU portion, because the XB1 had the eSRAM.
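For anyone who wants the napkin math behind that tradeoff: peak FP32 throughput is just CUs x shaders per CU x 2 FLOPs per clock (FMA) x clock. The clocks below are the publicly stated ones, and 64 shaders per CU is the RDNA figure:

```python
def gpu_tflops(cus, clock_ghz, shaders_per_cu=64):
    """Peak FP32 throughput: CUs x 64 shaders x 2 FLOPs/clock (FMA)."""
    return cus * shaders_per_cu * 2 * clock_ghz / 1000

# Wide-and-slow vs narrow-and-fast, using the publicly stated clocks:
print(gpu_tflops(52, 1.825))  # ~12.15 TFLOPs (Series X)
print(gpu_tflops(36, 2.23))   # ~10.28 TFLOPs (PS5 at max boost)
```

Same formula, two different points on the width/frequency curve; everything else (heat, yields, die area) is the cost of picking one point over the other.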

On the deep dive, unsurprisingly the PS5 "secret sauce" Geometry engine also exists here. Shocking! It's almost like they are both made by the same company and that company makes their tech available to everyone and isn't in the business of hiding things.

Would love to see how this isn't a "balanced" design though, since apparently the PS5 is the better designed overall system somehow.
 
Last edited:

Secret sauce incoming

A true carnivore would cringe at that image. But I do love me lots of sauce on my steak!
 

Entroyp

Member
Lower clocks = less heat, better yields, and less cooling needed.
Higher clocks = more heat, worse yields, more cooling needed.
More CUs at a lower frequency = work is spread wider, so more can be done in parallel, but each unit isn't working as fast.
Fewer CUs at a higher frequency = each CU gets through more work, but not as much can be done at once.
Fewer CUs = less space, so the die can be smaller/cheaper, or the same size with the extra room used for something else. For example, IIRC the XB1's die was bigger than the PS4's despite having a significantly smaller GPU portion, because the XB1 had the eSRAM.

Why the correlation between frequency and yields? Never heard of this before, even back when I worked at a semiconductor factory.
 

Redlight

Member
Yup, that's the magic of it: any basic stereo headphones/headset and you're good to go! What a creepy clip! I listened to it and others; I won't play horror games with that stuff. :messenger_fearful:
It's a good effect and strangely, I can experience it right now.

Bo, I don't understand, how is this audio magic possible without Sony's world changing Tempest Engine?
 
Lower clocks = less heat, better yields, and less cooling needed.
Higher clocks = more heat, worse yields, more cooling needed.
More CUs at a lower frequency = work is spread wider, so more can be done in parallel, but each unit isn't working as fast.
Fewer CUs at a higher frequency = each CU gets through more work, but not as much can be done at once.
Fewer CUs = less space, so the die can be smaller/cheaper, or the same size with the extra room used for something else. For example, IIRC the XB1's die was bigger than the PS4's despite having a significantly smaller GPU portion, because the XB1 had the eSRAM.

On the deep dive, unsurprisingly the PS5 "secret sauce" Geometry engine also exists here. Shocking! It's almost like they are both made by the same company and that company makes their tech available to everyone and isn't in the business of hiding things.

Would love to see how this isn't a "balanced" design though, since apparently the PS5 is the better designed overall system somehow.

FWIW Sony could've been referring to some customizations to their implementation of GE. We'll find out soon(ish) if they do another deep dive.

I always thought it was weird people assumed Series X had no GE whatsoever though, considering even RDNA1 has it and the GE is pretty damn important to the architecture as a whole.

That being the case, it's a good thing it's confirmed Series X has one, otherwise if it was under a completely different name it would've been fodder to back up claims "PS5 is RDNA 1.5" again 🤷‍♂️
 

Ascend

Member
CUs have 25% better perf/clock compared to last gen
Dang. If that's true, that GPU is a beast. As a reference, that would mean the XSX GPU is about 35-40% faster than a 2080 Ti (bar RT). That is assuming similar scaling to a 5700 XT, and no bandwidth bottleneck.

Some other interesting stuff...;
Q: Can you stream into the GPU cache? A: Lots of programmable cache modes. Streaming modes, bypass modes, coherence modes.
I take that as a yes... What was being discussed in the Xbox Velocity architecture thread was whether you could stream directly from the SSD to the GPU cache. Seems like that is still a possibility.

Q: Coherency CPU and GPU? A: GPU can snoop CPU, reverse requires software
That is interesting. This means that whatever you would have needed to load on the 'slow' portion of the RAM for the CPU, the GPU has access to as well. Doesn't make the operation of the RAM any less complicated though.
 

MrFunSocks

Banned
Why the correlation between frequency and yields? Never heard of this before, even back when I worked at a semiconductor factory.
I'd read/heard that since they're clocked higher, they have to have better QA to make sure they can run at that higher frequency, though I could very well be wrong. What did you do at the semiconductor factory?
 
I'd read/heard that since they're clocked higher, they have to have better QA to make sure they can run at that higher frequency, though I could very well be wrong. What did you do at the semiconductor factory?
They increased production so I imagine it's not that much of an issue for them. Plus take into account that they can get more APUs from the wafers since they are smaller.
 

Ascend

Member
Why the correlation between frequency and yields? Never heard of this before, even back when I worked at a semiconductor factory.
It's twofold... Either there are chips that cannot reach said frequency at all (that happens if you're close to the clock limit of the design), or they can reach said frequency but exceed the allowed power budget to do so.

Generally, the frequency is chosen based on some sort of optimal yield curve rather than vice versa. Maybe that's why you never hear about it.
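That binning idea is easy to sketch: given some distribution of maximum stable clocks per die, yield at a target clock is just the fraction of dies that meet it. The clock distribution below is completely made up, just to show the mechanism:

```python
import random

random.seed(0)

# Toy model: each die's maximum stable clock in GHz (made-up spread).
dies = [random.gauss(2.0, 0.15) for _ in range(10_000)]

def yield_at(target_ghz):
    """Fraction of dies that can run at or above the target clock."""
    return sum(d >= target_ghz for d in dies) / len(dies)

# Pushing the shipping clock higher shrinks the usable fraction,
# which is the frequency/yield correlation in a nutshell:
print(yield_at(1.825), yield_at(2.1))
```

In practice binning also folds in the power limit mentioned above (a die that hits the clock but blows the power budget is still a reject), but the shape of the tradeoff is the same.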
 
Hot Chips live....blog (lol)

Better than not getting something from the stream at all I suppose. Gonna give it a read over the night.

EDIT: Damnit, beaten by Greeno.

CUs have 25% better perf/clock compared to last gen
Dang. If that's true, that GPU is a beast. As a reference, that would mean the XSX GPU is about 35-40% faster than a 2080 Ti (bar RT). That is assuming similar scaling to a 5700 XT, and no bandwidth bottleneck.

Some other interesting stuff...;
Q: Can you stream into the GPU cache? A: Lots of programmable cache modes. Streaming modes, bypass modes, coherence modes.
I take that as a yes... What was being discussed in the Xbox Velocity architecture thread was whether you could stream directly from the SSD to the GPU cache. Seems like that is still a possibility.

Q: Coherency CPU and GPU? A: GPU can snoop CPU, reverse requires software
That is interesting. This means that whatever you would have needed to load on the 'slow' portion of the RAM for the CPU, the GPU has access to as well. Doesn't make the operation of the RAM any less complicated though.

Nice insights. The stuff about the GPU cache sounds a lot like what we were talking about a while ago WRT the 100 GB of "instantly accessible" data on the SSD. It'll most likely work the way we and others concluded it would, which is pretty cool. Makes me wonder how good the SSD's latency is for facilitating this.

I don't think the RAM setup will be that hard to come to grips with; older systems have had far more complicated RAM configurations and devs got used to them sooner or later. Still, a full 20 GB pool would've been preferred if DRAM prices weren't such a pain in the ass.

Also something else: I guess the coherency hardware other posters were talking about earlier is those two SoC coherency blocks in the GPU that can snoop the CPU (the reverse apparently requires a software solution; I imagine a good few already exist, and/or MS has the tools for devs to implement it in software).
 
Last edited:

MrFunSocks

Banned
It sounds good (Dolby Atmos) but it's fake and pretty limited, that's the truth of it. You don't need fancy headsets, only good quality headsets that support basic 2.0 Channel. That's why Dolby went bitching around after the PS5 GDC as they're not needed anymore for PS5.

It'll support all of them anyway, but it's more accurate with headsets.

And yes, every fucking droplet will produce its own sound, not a baked soundtrack put in some location.

That's all well and good, but every droplet of rain making its own sound will make literally zero difference to 99.99% of people in 99.99% of games. I can't hear every drop of rain making its own individual sound and pinpoint it in real life, so why would I need to in a game?

Most people are going to be playing these using TV speakers or a sound bar. It's not going to make any difference to any of them. Audiophiles will love it, sure. I've got surround sound and I might even notice a difference. It's not anything game changing though. It is something that 99% of people won't even notice.

I'm extremely happy that they've made the tempest, which only needs half of its power for audio and will assist the CPU/GPU in other calculations to boost performance. And it's pretty critical for VR gaming.
Oh boy, ANOTHER secret sauce!!! Now devs are going to be using the audio processor as a second CPU/GPU for graphics and game calculations! lol. No-one is doing that lol.
 
Last edited:

MrFunSocks

Banned
They increased production so I imagine it's not that much of an issue for them. Plus take into account that they can get more APUs from the wafers since they are smaller.
Increasing production doesn't mean that at all though? When you have lower yield you have to make more to get the amount required. You can't infer anything from them increasing production.
 
Increasing production doesn't mean that at all though? When you have lower yield you have to make more to get the amount required. You can't infer anything from them increasing production.

Yes you can. If the yields were absolutely horrible, they would have issues producing these systems. And there's a limit to the number of chips they can produce on a wafer, and another limit on producing them overall.

That's why when it comes to microprocessors with really poor yields supply can be an issue.
 

Aladin

Member
I do not think the intention of the audio chip is to support higher-quality headphones or speakers. There are various games which could do with a bit of audio path tracing.
Racing/driving games will make the best use of this chip; for most games it would give diminishing returns in value.
 

Redlight

Member
That's all well and good, but every droplet of rain making its own sound will make literally zero difference to 99.99% of people in 99.99% of games. I can't hear every drop of rain making its own individual sound and pinpoint it in real life, so why would I need to in a game?

Most people are going to be playing these using TV speakers or a sound bar. It's not going to make any difference to any of them. Audiophiles will love it, sure. I've got surround sound and I might even notice a difference. It's not anything game changing though. It is something that 99% of people won't even notice.
The sound will improve in these next-gen consoles, no doubt, but only to the extent that developers want to leverage the tech. Sound isn't something that often gets prioritised.

You're absolutely right that people with TV speakers/soundbars won't get a lot out of it (apart from the placebo effect). Headphones will help, but anyone with experience of a true surround system will tell you about the limitations of stereo sources trying to mimic surround.

Of course I haven't taken advantage of Cerny's offer to send in pictures of my ears yet. Could be a game changer. :)
 

Secret sauce incoming


I've been doing the keto diet combined with going vegan (yes, I just eat leafy greens and drink water with MCT oil in it) since January 11th. I've lost 87 lbs and my rib cage is showing, but I'm keeping it up until I fucking eat a whole cow on September 8th.

I wanted a lifestyle change and now I'm skinny as shit. But I say all that to say: fxck you for this pic, got damn that looks good :messenger_tears_of_joy:
 
T

Three Jackdaws

Unconfirmed Member
Lower clocks = less heat, better yields, and less cooling needed.
Higher clocks = more heat, worse yields, more cooling needed.
More CUs at a lower frequency = work is spread wider, so more can be done in parallel, but each unit isn't working as fast.
Fewer CUs at a higher frequency = each CU gets through more work, but not as much can be done at once.
Fewer CUs = less space, so the die can be smaller/cheaper, or the same size with the extra room used for something else. For example, IIRC the XB1's die was bigger than the PS4's despite having a significantly smaller GPU portion, because the XB1 had the eSRAM.

On the deep dive, unsurprisingly the PS5 "secret sauce" Geometry engine also exists here. Shocking! It's almost like they are both made by the same company and that company makes their tech available to everyone and isn't in the business of hiding things.

Would love to see how this isn't a "balanced" design though, since apparently the PS5 is the better designed overall system somehow.
Do we know the details of the Geometry Engines featured in the Series X? Even Polaris and Vega featured GEs.

Mark Cerny, in his Road to PS5 talk, mentioned that the Geometry Engine on the PS5 allows fully programmable control of the graphics rendering pipeline. This is important for performance and optimisation, since the GPU won't need to render off-screen, back-facing or out-of-view triangles, which can allow significant gains in performance as well as efficiency depending on how developers utilise it.

The Geometry Engines in previous architectures didn't allow full control of the rendering pipeline as far as I am aware, though they did allow primitives to be discarded early in the pipeline. According to RedGamingTech (take his info as you will), developers were not happy with the PS4 and Pro because the lack of control over the rendering pipeline meant they couldn't implement effective performance optimisations (this was actually alluded to by Mark Cerny as well), and it was also one of the reasons why games such as God of War ran so hot on the PS4 and Pro.

Also according to RGT, we can look at patents covering the PS5's Geometry Unit and how it is able to adjust pixel density. I believe it has many more features, but I'll save that for the speculation thread; I think Sony has held back on revealing all the information.

So how does the Geometry Engine of the PS5 differ from the one in the Series X, and from the one featured in RDNA 2? I don't know; we'll have to wait for more technical deep dives as well as official information from Sony, AMD and Microsoft.
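The back-face part of that culling is easy to illustrate. This is just the textbook screen-space winding test, not how any actual Geometry Engine implements it in hardware:

```python
def is_back_facing(v0, v1, v2):
    """Screen-space back-face test: with counter-clockwise winding for
    front faces, a negative signed area means the triangle faces away."""
    ax, ay = v1[0] - v0[0], v1[1] - v0[1]
    bx, by = v2[0] - v0[0], v2[1] - v0[1]
    return ax * by - ay * bx < 0  # z of the 2D cross product

print(is_back_facing((0, 0), (1, 0), (0, 1)))  # False (front-facing)
print(is_back_facing((0, 0), (0, 1), (1, 0)))  # True  (safe to cull)
```

The "programmable" part being debated is who gets to run tests like this (and fancier occlusion/frustum ones) and how early in the pipeline, not whether the test itself exists.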
 
Last edited by a moderator:

DavidGzz

Member
$599 sounds fine. I'll get a beast of a console while the casuals will keep MS happy enough buying the XSS to keep Game Pass popping off. Feelsgoodman.
 

MrFunSocks

Banned
No chance.
Why not? I'd say it's highly likely it will come with at least 3-6 months free. They don't profit from the Game Pass subscription; they're using Game Pass as a way to get people into their ecosystem so they can spend money on games and digital purchases.
 
$599 sounds fine. I'll get a beast of a console while the casuals will keep MS happy enough buying the XSS to keep Game Pass popping off. Feelsgoodman.

Right

I'm expecting both of them to come in at $549-$599. Shit, I bought a PS3 day one at $599 as a broke college student; I can definitely afford a PS5 or Xbox Series X without breaking the bank, just like I did the 2080 Super in my machine, and the one downstairs in my wife's machine lol.

They built a behemoth of a console. Now they just have to show us what it can do. Same with Sony. Let the games fly, it's time.
 

K.N.W.

Member
I don't think it's safe at all to compare XSX audio power to Tempest... the PS5 has a regular DSP besides Tempest! Tempest is for 3D audio only, and we still don't know how much power the two bring in total.
 

jonesxlv

Neo Member
Was there this much confusion over which console was more powerful when the PS4 and Xbox One launched? I remember before launch the PS4 was considered more powerful and that the Xbox One was underpowered and focused too much on media player and Kinect crap, which is why I chose the PS4 after being an Xbox 360 guy the previous generation (only got a PS3 during Black Friday 2011).

It seems to me that the Xbox Series X is slated to be more powerful than the PS5, but I guess we won't really know until Digital Foundry has analyzed a bunch of multi-platform games and compared resolution, performance and overall image quality.

This generation has followed the same general trend in terms of graphical quality and performance: Xbox One -> PS4 -> PS4 Pro -> Xbox One X

The above is not controversial, but PS5 vs. Xbox Series X really is.

It is pretty clear. XSX is the more powerful console. The only metric where the PS5 will trump the XSX is storage IO.

The vagueness and uncertainty are coming from a very select group of warriors looking for secret sauce, similar to what happened with the XBOX ONE at launch. Ignore it... and ignore all warriors.
 

jonesxlv

Neo Member
PS5 should have the advantage if both consoles are using the same number of rays because PS5 will be able to have more bounces during the same time. XSX should be able to display more rays (with less bounces). But there is also the available bandwidth to be taken into the equation.

Are we seriously doing this, again? You have grossly oversimplified how this works from a computer perspective. Stop moving goalposts and stop spreading misinformation.
 
Was there this much confusion over which console was more powerful when the PS4 and Xbox One launched? I remember before launch the PS4 was considered more powerful and that the Xbox One was underpowered and focused too much on media player and Kinect crap, which is why I chose the PS4 after being an Xbox 360 guy the previous generation (only got a PS3 during Black Friday 2011).

It seems to me that the Xbox Series X is slated to be more powerful than the PS5, but I guess we won't really know until Digital Foundry has analyzed a bunch of multi-platform games and compared resolution, performance and overall image quality.

This generation has followed the same general trend in terms of graphical quality and performance: Xbox One -> PS4 -> PS4 Pro -> Xbox One X

The above is not controversial, but PS5 vs. Xbox Series X really is.

Man


I remember being a youngin and watching the Sony PS3 show, wondering what the hell I was about to buy lol. I said it in another thread and I'll say it again... I never look at these console launches with a focus on what's coming out game-wise day one, 'cause the first year, in my opinion/as far as my taste goes, is typically a little lackluster lol.
 

MrFunSocks

Banned
Was there this much confusion over which console was more powerful when the PS4 and Xbox One launched? I remember before launch the PS4 was considered more powerful and that the Xbox One was underpowered and focused too much on media player and Kinect crap, which is why I chose the PS4 after being an Xbox 360 guy the previous generation (only got a PS3 during Black Friday 2011).

It seems to me that the Xbox Series X is slated to be more powerful than the PS5, but I guess we won't really know until Digital Foundry has analyzed a bunch of multi-platform games and compared resolution, performance and overall image quality.

This generation has followed the same general trend in terms of graphical quality and performance: Xbox One -> PS4 -> PS4 Pro -> Xbox One X

The above is not controversial, but PS5 vs. Xbox Series X really is.
There is literally zero confusion though, just fanboys trying to conjure up the secret sauce. The PS5 has a faster SSD and that's pretty much it. Same CPU, Xbox has a better GPU, Xbox has better RAM. The SSD doesn't make the console more powerful. The Series X is the more powerful console without any shadow of a doubt.
 

longdi

Banned
This is the information we have about the Tempest capability; how can we translate that to GFLOPs, for example?

"Where we ended up is a unit with roughly the same SIMD power and bandwidth as all eight Jaguar cores in the PS4 combined"

'Roughly' means not as good as the PS4's 8 cores clocked at 1.6 GHz.
The Series X's audio is better than all 8 cores of the Xbox One X clocked at 2.3 GHz.

Damn son!
 
Do we know the details of the Geometry Engines featured in the Series X? Even Polaris and Vega featured GEs.

Mark Cerny, in his Road to PS5 talk, mentioned that the Geometry Engine on the PS5 allows fully programmable control of the graphics rendering pipeline. This is important for performance and optimisation, since the GPU won't need to render off-screen, back-facing or out-of-view triangles, which can allow significant gains in performance as well as efficiency depending on how developers utilise it.

The Geometry Engines in previous architectures didn't allow full control of the rendering pipeline as far as I am aware, though they did allow primitives to be discarded early in the pipeline. According to RedGamingTech (take his info as you will), developers were not happy with the PS4 and Pro because the lack of control over the rendering pipeline meant they couldn't implement effective performance optimisations (this was actually alluded to by Mark Cerny as well), and it was also one of the reasons why God of War ran so hot on the PS4 and Pro.

Also according to RGT, we can look at patents covering the PS5's Geometry Unit and how it is able to adjust pixel density. I believe it has many more features, but I'll save that for the speculation thread; I think Sony has held back on revealing all the information.

So how does the Geometry Engine of the PS5 differ from the one in the Series X, and from the one featured in RDNA 2? I don't know; we'll have to wait for more technical deep dives as well as official information from Sony, AMD and Microsoft.

Series X has a multi-core command processor unit, maybe that provides the "full programmability" of the GE similar to what Sony mentioned? The RGT rumor you're referring to probably points to some additional foveated rendering features on PS5 which would be beneficial for PSVR 2.0 for example.

Aside from that, I also have to readjust the audio calculations. Sony actually compared the Tempest Engine to the PS4 CPU cluster, which was 102.4 GFLOPs IIRC, so the Tempest Engine provides around that much (maybe slightly more) in raw power. But that would also mean Series X's audio is actually the more capable of the two in raw performance, at around 147.2 GFLOPs (actually somewhat more). And it seems to offload more or less all of the audio tasks to its audio block, similar in idea to Sony's PS5. Finding that out surprised me.

So it'd seem MS's audio solution is about 50% more powerful than Sony's in raw numbers, anyway. It's obviously not designed to mimic an SPE though, and there are no figures on how much bus bandwidth it would need at peak (i.e. is it the same as Tempest's 20 GB/s, greater, or less?).

This is the information we have about the Tempest capability; how can we translate that to GFLOPs, for example?

"Where we ended up is a unit with roughly the same SIMD power and bandwidth as all eight Jaguar cores in the PS4 combined"

Yes, about 102.4 GFLOPs, maybe slightly more (could be 110 GFLOPs for example), maybe slightly less (could be 100 GFLOPs). The One X's CPU was 147.2 GFLOPs, and MS says theirs is more than that, so a conservative estimate could be 150+ GFLOPs, for example.
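That Jaguar reference point is straightforward napkin math: cores x clock x FLOPs per cycle, with 8 FP32 ops per cycle from the 128-bit SIMD units:

```python
def cpu_gflops(cores, clock_ghz, flops_per_cycle=8):
    """Peak FP32: cores x clock x FLOPs/cycle (8 for Jaguar's 128-bit SIMD)."""
    return cores * clock_ghz * flops_per_cycle

print(cpu_gflops(8, 1.6))  # 102.4 -> the PS4 cluster Tempest is compared to
print(cpu_gflops(8, 2.3))  # 147.2 -> One X CPU, MS's stated audio baseline
```

Peak numbers only, of course; sustained SIMD throughput on real audio workloads is a different question.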

So in raw power MS's is the more capable of the two, but Sony did a much better job talking about their audio setup, plus they have a history in audio electronics, so most people were going to assume theirs was the more powerful anyway (and MS didn't even give clear reference numbers for theirs until literally today xD).

Regardless tho, both systems have really strong audio capabilities and did a lot of customization to hit their targets.
 
Last edited:
T

Three Jackdaws

Unconfirmed Member
Series X has a multi-core command processor unit, maybe that provides the "full programmability" of the GE similar to what Sony mentioned? The RGT rumor you're referring to probably points to some additional foveated rendering features on PS5 which would be beneficial for PSVR 2.0 for example.

Aside from that, I also have to readjust the audio calculations. Sony actually compared the Tempest Engine to the PS4 CPU cluster, which was 102.4 GFLOPs IIRC, so the Tempest Engine provides around that much (maybe slightly more) in raw power. But that would also mean Series X's audio is actually the more capable of the two in raw performance, at around 147.2 GFLOPs (actually somewhat more). And it seems to offload more or less all of the audio tasks to its audio block, similar in idea to Sony's PS5. Finding that out surprised me.

So it'd seem MS's audio solution is about 50% more powerful than Sony's in raw numbers, anyway. It's obviously not designed to mimic an SPE though, and there are no figures on how much bus bandwidth it would need at peak (i.e. is it the same as Tempest's 20 GB/s, greater, or less?).
Apologies, I'm a little confused: are the second and third paragraphs in response to me? I wasn't discussing the audio stuff lol

EDIT: my bad, didn't see your edit.
 
Last edited by a moderator:

longdi

Banned
Sony needs to price the PS5 at least $50-100 cheaper than the Series X. That's my expectation, and it hasn't moved as new hardware information has flowed out from the MS team....

I wish Sony were as forthcoming with PS5 tech-spec talks. Be open!
 
Last edited:
Sony needs to price the PS5 at least $50-100 cheaper than the Series X. That's my expectation, and it hasn't moved as new hardware information has flowed out from the MS team....

I wish Sony were as forthcoming with PS5 tech-spec talks. Be open!

1. Phil said they would be flexible with XSX pricing and were not going to lose the price war. It's not going to be higher than the PS5.
2. The power difference isn't large enough for a $100 price difference.
 

MrFunSocks

Banned
Do we know the details of the Geometry Engines featured in the Series X? Even Polaris and Vega featured GEs.

Mark Cerny, in his Road to PS5 talk, mentioned that the Geometry Engine on the PS5 allows fully programmable control of the graphics rendering pipeline. This is important for performance and optimisation, since the GPU won't need to render off-screen, back-facing or out-of-view triangles, which can allow significant gains in performance as well as efficiency depending on how developers utilise it.

The Geometry Engines in previous architectures didn't allow full control of the rendering pipeline as far as I am aware, though they did allow primitives to be discarded early in the pipeline. According to RedGamingTech (take his info as you will), developers were not happy with the PS4 and Pro because the lack of control over the rendering pipeline meant they couldn't implement effective performance optimisations (this was actually alluded to by Mark Cerny as well), and it was also one of the reasons why games such as God of War ran so hot on the PS4 and Pro.

Also according to RGT, we can look at patents covering the PS5's Geometry Unit and how it is able to adjust pixel density. I believe it has many more features, but I'll save that for the speculation thread; I think Sony has held back on revealing all the information.

So how does the Geometry Engine of the PS5 differ from the one in the Series X, and from the one featured in RDNA 2? I don't know; we'll have to wait for more technical deep dives as well as official information from Sony, AMD and Microsoft.
Microsoft has already demoed upcoming things like the GPU not needing to render off-screen/out-of-view polygons, FYI. I can't remember what the demo was called, but it had a few rooms with different statues in them, and it showed exactly what was being rendered and when; as soon as something obstructed even a tiny piece of a statue, the GPU stopped rendering that part. There were various levels of accuracy for it as well IIRC, based on how many triangles were used.

Not related to your post at all now: it's funny how all of a sudden the Tempest audio thing is the big secret sauce in here hahaha. Not the Geometry Engine anymore; that's old news. Now that we know the Xbox has one too, it's all about that Tempest Engine, which can now also act as a secret GPU lol.
 