
Microsoft Xbox Series X's AMD Architecture Deep Dive at Hot Chips 2020

jimbojim

Banned
Dude, it's the same GPU/CPU combo. Only PS5 has less.

So it's like one car on the racetrack revving up to its factory rev limit, while the other, similar car with the smaller engine went to the neighborhood tuner and had its rev limit raised 4,050 rpm higher, because it wants to compete with the car that has 44% more displacement.

Go on, ask Mr. Yamauchi the race driver to tell you what the outcome will be.

Now please stop talking about PS5 in a Hot Chips Xbox thread; there might be some useful info about Xbox to be shared here.

My post was a reaction to the 100 mph comment. Every modern car today can go that "fast".
 

Deto

Banned
OK, PS5 has 18 TF and a dual GPU, now settle down.

Of course, variable clocks are bad.

That's why AMD, Intel, Nvidia and Sony all use them.

Right, unlike the software company with its fixed clock.

It's amazing how, for some people, everything Sony does is garbage. If it's different from the 5700 XT it's bad because it's different and runs at 2.23 GHz; if the variable clocking is the same as the 5700 XT's it's also garbage because the SX is locked.

If variable clocking like the 5700 XT's is garbage because it's not locked like the SX, and a clock higher than the 5700 XT's is rubbish because the 5700 XT doesn't gain performance above 2 GHz, then you can't win either way.
 

GODbody

Member
Right, we know it's 10 GB at 560 GB/s and 6 GB at 336 GB/s, but we don't know the details of the arrangement.
We do know the details of the arrangement. It was posted earlier in the thread.

h22uypU.jpg


It's unified memory with asymmetrical chips. There are 10 memory chips in total: four 1 GB chips and six 2 GB chips, on twenty 16-bit channels.

Data in the GPU-optimal space can span the first gigabyte of all ten chips, giving a speed of 560 GB/s (560 ÷ 10 = 56 GB/s per chip).

Data in the standard memory space lives only in the 2 GB chips and can only span the second gigabyte of those six chips, giving a speed of 336 GB/s (336 ÷ 6 = 56 GB/s per chip).

uCQsgut.png


All of the chips have the same underlying GDDR6 speed (56 GB/s).
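Just to make the arithmetic explicit, here's a quick back-of-the-envelope sketch. The 14 Gbps data rate and 32-bit interface per chip are the commonly cited GDDR6 figures, not something from the slide itself:

```python
# Back-of-the-envelope check of the Series X memory figures above.
# Assumption (mine, not from the slide): 14 Gbps GDDR6 on a 32-bit interface per chip.

GBPS_PER_PIN = 14                                   # GDDR6 data rate per pin, Gbit/s
BITS_PER_CHIP = 32                                  # each chip spans two 16-bit channels
PER_CHIP_GBS = GBPS_PER_PIN * BITS_PER_CHIP / 8     # 56 GB/s per chip

chips_gb = [1] * 4 + [2] * 6                        # four 1 GB + six 2 GB chips = 16 GB

# "GPU optimal" pool: the first GB of every chip, so all ten chips in parallel.
gpu_optimal_bw = PER_CHIP_GBS * len(chips_gb)                        # 560 GB/s over 10 GB

# "Standard" pool: the second GB, which only the six 2 GB chips have.
standard_bw = PER_CHIP_GBS * sum(1 for c in chips_gb if c == 2)      # 336 GB/s over 6 GB

print(f"GPU-optimal: {gpu_optimal_bw:.0f} GB/s, standard: {standard_bw:.0f} GB/s")
```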

I've mentioned this on a previous page and will mention it here, since some are deliberately ignoring it: why are PS5's variable clocks better than the claim that "PS5 would be stronger with fixed clocks"?

Sony did tell us how their design works. The thing you're missing is that the PS5 approach is not just letting clocks be variable, like uncapping a framerate. That would indeed have no effect on the lowest dips in frequency. But they've also changed the trigger for throttling from temperature to silicon activity. And that actually changes how much power can be supplied to the chip without issues. This is because the patterns of GPU power needs aren't straightforward.

Here's a depiction of the change. (This is not real data, just for illustrative purposes of the general principle.) The blue line represents power draw over time, for profiled game code. The solid orange line represents the minimum power supply that would need to be used for this profile. Indeed, actual power draw must stay well below the rated capacity. Power supplies function best when actually drawing ~80% of their rating. And when designing a console the architects, working solely from current code, will build in a buffer zone to accommodate ever more demanding scenarios projected for years down the line.
standardpowersokvg.png


You'd think the tallest peaks, highlighted in yellow, would be when the craziest visuals are happening onscreen in the game: many characters, destruction, smoke, lights, etc. But in fact, that's often not the case. Such impressive scenes are so complicated, the calculations necessary to render them bump into each other and stall briefly. Every transistor on the GPU may need to be in action, but some have to wait on work, so they don't all flip with every tick of the clock. So those scenes, highlighted in pink, don't contain the greatest spikes. (Though note that their sustained need is indeed higher.)

Instead, the yellow peaks are when there's work that's complex enough to spread over the whole chip, but just simple enough that it can flow smoothly without tripping over itself. Unbounded framerates can skyrocket, or background processes cycle over and over without meaningful effect. The useful work could be done with a lot less energy, but because clockspeed is fixed, the scenes blitz as fast as possible, spiking power draw.

Sony's approach is to sense for these abnormal spikes in activity, when utilization explodes, and preemptively reduce clockspeed. As mentioned, even at the lower speed, these blitz events are still capable of doing the necessary work. The user sees no quality loss. But now behind the scenes, the events are no longer overworking the GPU for no visible advantage.
choppedpower2hjss.png
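To make that mechanism concrete, here's the idea as a toy control loop. Everything in it, the power model, the budget, the activity numbers, is invented for illustration; it is not Sony's actual algorithm:

```python
# Toy version of the mechanism described above: throttle on *activity*, not temperature.
# All numbers and the power model are made up; this is not Sony's algorithm.

def clock_for_tick(activity, f_max=2.23, power_budget=2.0):
    """Pick a clock so the estimated power for this tick stays inside the budget.

    activity: fraction of the chip actually switching this tick (0..1).
    Simplified model: power ~ activity * frequency (arbitrary units, voltage folded in).
    """
    if activity * f_max <= power_budget:
        return f_max                      # normal scenes: run at the top clock
    return power_budget / activity        # "blitz" spikes: dip just enough to fit

# Typical and even heavy scenes keep the full 2.23 GHz; only the rare
# everything-switching spike gets clocked down slightly.
for activity in (0.40, 0.70, 0.95):
    print(f"activity {activity:.2f} -> {clock_for_tick(activity):.2f} GHz")
```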



But now we have lots of new headroom between our highest spikes and the power supply buffer zone. How can we easily use that? Simply by raising the clockspeed until the highest peaks are back at the limit. Since total power draw is a function of number of transistors flipped, times how fast they're flipping, the power drawn rises across the board. But now, the non-peak parts of your code have more oomph. There's literally more computing power to throw at the useful work. You can increase visible quality for the user in all the non-blitz scenes, which is the vast majority of the game.
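(For reference, and this is the standard textbook relation rather than anything from the post or the slide deck, dynamic power goes roughly as

```latex
P_{\text{dyn}} \approx \alpha \, C \, V^{2} \, f
```

where α is the switching activity, the fraction of transistors actually flipping each clock, C is the switched capacitance, V the supply voltage, and f the clock frequency. Activity and frequency multiply, which is why a "simple but smooth" workload that keeps nearly every unit busy can out-draw a visually heavier scene whose work keeps stalling.)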

raisedpowerc3keg.png


Look what that's done. The heaviest, most impressive scenarios are now closer to the ceiling, meaning these most crucial events are leaving fewer resources untapped. The variability of power draw has gone down, meaning it's easier to predictively design a cooling solution that remains quiet more often. You're probably even able to reduce the future proofing buffer zone, and raise speed even more (though I haven't shown that here). Whatever unexpected spikes do occur, they won't endanger power stability (and fear of them won't push the efficiency of all work down in the design phase, only reduce the spikes themselves). All this without any need to change the power supply, GPU silicon, or spend time optimizing the game code.

Keep in mind that these pictures are for clarity, and specifics about exactly how much extra power is made available, how often and how far clockspeed may dip, etc. aren't derivable from them. But I think the general idea comes through strongly. It shows why, even though PS5's GPU couldn't have been set to 2 GHz with fixed clocks, that doesn't necessarily mean it must still fall below 2 GHz some of the time with variable clocks. Sony's approach changes the power profile's shape, making different goals achievable.

I'll end with this (slowly) animated version of the above.

variablepowerudkmp.gif





SmartShift is an intelligent way to gain 14% more performance and make your device easier to cool. But we have zero benchmarks to base any claims on. The question is whether that 14% gain in performance results in a loss somewhere else.
 

Bo_Hazem

Banned
Why do you think XSX will have better resolution? You realise TF is just the computational capacity of the vector ALUs; there is more to it. You can crow when XSX runs something much better.

And why can't you guys just talk about XSX and Hot Chips without trying to beat Sony in every damn post...

He didn't see VRS kicking in on Halo Infinite to drop the resolution down to 720p.

HaloVRS.jpg


HaloVRS2.jpg
 

JLB

Banned
We do know the details of the arrangement. It was posted earlier in the thread.

h22uypU.jpg


It's unified memory with asymmetrical chips. There are 10 memory chips in total: four 1 GB chips and six 2 GB chips, on twenty 16-bit channels.

Data in the GPU-optimal space can span the first gigabyte of all ten chips, giving a speed of 560 GB/s (560 ÷ 10 = 56 GB/s per chip).

Data in the standard memory space lives only in the 2 GB chips and can only span the second gigabyte of those six chips, giving a speed of 336 GB/s (336 ÷ 6 = 56 GB/s per chip).

uCQsgut.png


All of the chips have the same underlying GDDR6 speed (56 GB/s).



SmartShift is an intelligent way to gain 14% more performance and make your device easier to cool. But we have zero benchmarks to base any claims on. The question is whether that 14% gain in performance results in a loss somewhere else.

Interesting. Many colors in the first image, and some crazy tubes in the image below. Also, the red end is throwing 2 GB of chocolate chips to the yellow side. Intriguing.
 
This GPU is going to be great. When the ML stuff is taken advantage of, Nvidia may lose some market share, if AMD and MS put it into Vulkan and DX respectively.
 
Last edited:

geordiemp

Member
I know, secret sauce power.
The other day I speculated it might be hidden in the enhanced PS5 joypads, but some people didn't like it.
So, where do you think it's hidden?

Because the PS5 chip is smaller for sure, you know.
Where did master Cerny hide it?

The PS5 SSD has a second, much more powerful GPU.

Why can't you guys just discuss XSX and Hot Chips? But since you asked and are mocking me, you asked for it.

In the GPU, the PS5 compresses the vector shader data and decompresses it in the pixel shaders for more rendering performance. It's patented by Cerny and Naughty Dog and was released to the public last week, just before the teardown... Strange, that...

If you're interested in the benefits of compressing GPU data on performance, I can post more.
pp91hgl.png
 
Last edited:
Since you asked...

In the GPU, it compresses the vector shader data and decompresses it in the pixel shaders for more rendering performance. It's patented by Cerny and Naughty Dog and was released to the public last week, just before the teardown... Strange, that...

If you're interested in the benefits of compressing GPU data on performance, I can post more.
pp91hgl.png
see my two cars example above.
btw, has anybody told you that you sound like a broken patent record?
 

geordiemp

Member
see my two cars example above.
btw, has anybody told you that you sound like a broken patent record?

You asked. For the record, I still think XSX will just edge it, but there won't be much in it IMO; you guys just want to crow about power over PS5 all the damn time.

If you were genuinely interested in technology we could discuss the XSX and PS5 designs, but you're not, so bye.
 
Last edited:
You asked. For the record, I still think XSX will just edge it, but there won't be much in it IMO; you guys just want to crow about power over PS5 all the damn time.

If you were genuinely interested in technology we could discuss the XSX and PS5 designs, but you're not, so bye.
Dude, it's going to get stomped out, give it a rest. There's no computational or design comparison to be made. They both heavily customized their systems and they're both incredibly efficient, but flat out, Microsoft has better hardware in their console on every single front beyond the SSD.

It's over.
 
You asked. For the record, I still think XSX will just edge it, but there won't be much in it IMO; you guys just want to crow about power over PS5 all the damn time.
I don't.
But practically EVERY thread is ruined by a bunch of folks (the same folks) rushing in to damage control.
Here are the facts:

Xbox Series X is going to be the better machine,
exactly like the Xbox One X was the better machine compared to the PS4 Pro,
exactly like the PS4 was the better machine compared to the Xbox One,
exactly like the 360 was the better machine compared to the PS3, etc.

And the absolute difference between them will be the biggest power difference we've ever seen between consoles.

That doesn't mean the PlayStation 5 sucks, just that the Xbox is the better machine.


As I said, since I already know I will be using the Xbox as my main console,
the only thing I care about is seeing a PS5 stripdown and hands-on,
because I don't want it to be a frying pan like my PS4s were (and are),
and the choices Sony made to appear competitive or "equal" are worrisome.
 
Last edited:

geordiemp

Member
Dude, it's going to get stomped out, give it a rest. There's no computational or design comparison to be made. They both heavily customized their systems and they're both incredibly efficient, but flat out, Microsoft has better hardware in their console on every single front beyond the SSD.

It's over.

I am not one of your dudes with a hoodie, thanks.

9juS6pR.jpg


No point in discussion, see ya.

This dude geordiemp will have a mental breakdown when he sees the face-off of multiplats on Digital Foundry.

No I won't, I will be here. Smiling. They will be similar, bookmark it.

Oh, I always have both PS4 and Xbox in the house, so it will be the same next gen. I like technology.
 
Last edited:

Marlenus

Member
So XSX is now 15% + 20% = 35% more powerful.




You're living in fantasy land. Why can't people just discuss the information and data we have without going la-la?

Speak about Hot Chips and XSX; why have you got to FUD PS5 in every post? It's like timdog.

So the idiots are back on variable clocks, after trying, what was it again, oh yeah, degrading silicon in PS5 and it melts things and kills your mum.

Cerny must be a liar and we believe Marlenus, er, Timdog. Jesus.

And the post got a "thoughtful" reaction. Embarrassing.


I was giving an example of how variable clocks can give higher performance on average by sacrificing performance in rare edge case scenarios.

The 80% was a number used for illustration purposes; it has no bearing on reality.
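The "higher on average" point is easy to see with toy numbers of my own (these are not the 80% figures from that earlier example, and not PS5 measurements):

```python
# Toy numbers only: a fixed clock has to be chosen for the worst-case power spike,
# while a variable clock only dips during that rare spike.

spike_fraction = 0.02    # how often the worst-case "blitz" workload occurs
top_clock      = 2.23    # GHz, sustainable except during spikes
spike_clock    = 2.00    # GHz, what the power budget allows during a spike

fixed_clock  = spike_clock   # fixed design: set for the worst case, all the time
variable_avg = (1 - spike_fraction) * top_clock + spike_fraction * spike_clock

print(f"fixed:    {fixed_clock:.2f} GHz")
print(f"variable: {variable_avg:.3f} GHz on average "
      f"({(variable_avg / fixed_clock - 1) * 100:.1f}% higher)")
```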

Did you hit your head before reading the post or have some sort of seizure that blocked your reading comprehension?
 

MrFunSocks

Banned
The thing many don't understand is that the clock rate depends on the amount of work. So when there is no work to be done, the Xbox still clocks at 1825 MHz, but the PS5 doesn't have to, because that's a waste of energy.
But the thing you’re assuming is that it makes a difference or that people care. It running at that clock speed all the time isn’t a bad thing. In fact it’s a good thing because it’s one less thing the developers have to worry about.
 
Last edited:
Dude, it's going to get stomped out, give it a rest. There's no computational or design comparison to be made. They both heavily customized their systems and they're both incredibly efficient, but flat out, Microsoft has better hardware in their console on every single front beyond the SSD.

It's over.
Totally off topic and a side note, you seem to have recovered nicely lol!
 
Last edited:
We do know the details of the arrangement. It was posted earlier in the thread.

h22uypU.jpg


It's unified memory with asymmetrical chips. There are 10 memory chips in total: four 1 GB chips and six 2 GB chips, on twenty 16-bit channels.

Data in the GPU-optimal space can span the first gigabyte of all ten chips, giving a speed of 560 GB/s (560 ÷ 10 = 56 GB/s per chip).

Data in the standard memory space lives only in the 2 GB chips and can only span the second gigabyte of those six chips, giving a speed of 336 GB/s (336 ÷ 6 = 56 GB/s per chip).

uCQsgut.png


All of the chips have the same underlying GDDR6 speed (56 GB/s).



SmartShift is an intelligent way to gain 14% more performance and make your device easier to cool. But we have zero benchmarks to base any claims on. The question is whether that 14% gain in performance results in a loss somewhere else.
Thanks for the great post, I had no idea the RAM was configured this way. It seems like a good way to get a boost in performance while helping keep costs down.
 
I'm getting tired of Craig.

The Xbox One X is significantly better than the PS4 Pro.
The 12 TFLOPS Xbox Series X is significantly better than the PS5, and a more well-rounded, balanced console.

The difference is in the utilization of the hardware, which requires the perfect harmony, fusion, marrying and synchronization of two opposing things:

ART: good character design, story, color palette, background design, texture detail, smooth fluid animation, etc.
PROGRAMMING: efficient use of CPU and GPU, optimizations in code, maintaining consistent framerates and resolution with enough bandwidth, computational power, RAM, etc.

Sony can do MORE with less. That is why they can create great first-party software; it is the perfect harmony of ART and PROGRAMMING. Plus they don't have an ecosystem to support like MS does. It is a closed, highly customized, tightly integrated console. I think their design philosophy is very similar to Apple's (not that Apple doesn't have an ecosystem, but their hardware is in-house and highly customized, and they can do more with less).

MS has two consoles which are superior to Sony's, the Xbox One X and the Xbox Series X, but their games' visual appeal lacks luster, shine, polish and fluidity; they appear rigid, stiff and bland, and are mediocre at best, due to (speculation) the following reasons:

PROGRAMMING:
-Shackled by supporting the 1.3 TFLOP Xbox One
-Burdened by ecosystem support (Windows desktop, laptop, etc.)

ARTISTIC:
-Unimaginative, uncreative, lackluster artists, content creators and animators


+DirectX 12 Ultimate, along with the newer APIs DirectML and DirectStorage, are great new solutions to alleviate the programming issues.
+New studio acquisitions backed by MS money will hopefully bring the limelight to talented content creators.
 
18% isn't "significant" imo
18% doesn't factor in Series X being a Big Navi full RDNA 2.0 chip while PlayStation 5 is a hybrid 1.0/2.0 chip.

18% doesn't factor in that frequency is not a substitute for physical hardware. With fewer CUs at a higher frequency and matched teraflops, the GPU with more CUs will perform better.

18% doesn't factor in the Series X's hefty memory bandwidth advantage over the PlayStation 5.

18% doesn't factor in that 18% is with the PlayStation 5 at peak frequency and doesn't account for its variability.

In other words 18% isn't 18%....
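For what it's worth, the 18% everyone keeps quoting is just the ratio of the paper-spec peak TFLOPS, so it already assumes PS5 is at its top clock. A quick check from the publicly quoted numbers:

```python
# Where the 18% figure comes from: peak FP32 throughput from the public specs
# (52 CUs at a fixed 1.825 GHz vs. 36 CUs at up to 2.23 GHz).

def tflops(cus, ghz):
    # 64 shaders per CU, 2 FP32 ops per shader per clock (FMA)
    return cus * 64 * 2 * ghz / 1000

xsx = tflops(52, 1.825)   # ~12.15 TF
ps5 = tflops(36, 2.23)    # ~10.28 TF at peak clock

print(f"XSX {xsx:.2f} TF vs PS5 {ps5:.2f} TF -> {(xsx / ps5 - 1) * 100:.0f}% difference")
```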
 

psorcerer

Banned
18% doesn't factor in Series X being a Big Navi full RDNA 2.0 chip while PlayStation 5 is a hybrid 1.0/2.0 chip.

No. It cannot work that way.

18% doesn't factor in that frequency is not a substitute for physical hardware. With fewer CUs at a higher frequency and matched teraflops, the GPU with more CUs will perform better.

No, it will not. Exactly as with CPUs.

18% doesn't factor in that 18% is with the PlayStation 5 at peak frequency and doesn't account for its variability.

It doesn't matter, variability is there to reduce power consumption in degenerate cases.
 
I/O advantages? I think we're not going to see that much difference tbh 😂 The delta between Pro and X was even higher and well...
 

psorcerer

Banned
This is all proven already, there's no counter, bud.

It's not proven by anybody. Sorry.
The next-gen multiplatform quality will be dictated by the lowest common denominator: a small PC with an HDD, exactly like it was this gen.
The difference between PS5 and XBSX will be in the amount of VRS/DLSS-like stuff applied in the post-processing.
 
Last edited:
The next-gen multiplatform quality will be dictated by the lowest common denominator: a small PC with an HDD, exactly like it was this gen.

That's one of the reasons why I can see exclusives being so important for showing off the I/O. If you don't have to give a damn about HDDs, then the SSDs will benefit immensely from that.
 
It's not proven by anybody. Sorry.
The next-gen multiplatform quality will be dictated by the lowest common denominator: a small PC with an HDD, exactly like it was this gen.
The difference between PS5 and XBSX will be in the amount of VRS/DLSS-like stuff applied in the post-processing.
It's 100% fact, don't go down this path because it's very easy to recall these facts.
 

psorcerer

Banned
That's one of the reasons why I can see exclusives being so important for showing off the I/O. If you don't have to give a damn about HDDs, then the SSDs will benefit immensely from that.

I'm pessimistic here.
The only way I see out of that is Sony making a PS5/PC game that heavily uses the hardware and I/O to look incredibly good on PC.
Or MSFT, but I don't really think MSFT is into making games anymore.
 