
Next-Gen PS5 & XSX |OT| Console tEch threaD


llien

Member
That's the point I was trying to make. If AMD's next-gen tech doesn't beat nVidia's current-gen tech, then they'd be in the same boat as always.
Perhaps our understanding of what "Fermi Times" means is very different, chuckle.

2080Ti is a 750mm2 "no fucks given" chip at "no fucks given" price. It's not something even remotely applicable to consoles.
AMD wasting money or not wasting money on such abomination has no console performance impact whatsoever.

as they seem to lack the skill to beat nVidia on the same playing field.
Both the 5700 and the 5700 XT have fewer transistors than the NV counterparts they beat; you need to get back from distorted reality into the real world.
 

xool

Member
Both the 5700 and the 5700 XT have fewer transistors than the NV counterparts they beat; you need to get back from distorted reality into the real world.
I don't think they're quite there.
  • AMD 5700/XT: 10.3 billion transistors, base speed 1.6GHz enabled by 7nm (boost 1.6-1.9GHz)
  • Nvidia 1080: 7.2 billion, 1.6GHz, 16nm
  • Nvidia 2070: 10.8 billion, base speed 1.4GHz (boost 1.6), 12nm
The base 5700 doesn't really beat an 'old' 1080 with ~30% fewer transistors, and the XT doesn't [obviously] beat the 2070, despite being higher clocked. [edit: I accept it's +5% better]
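For what it's worth, here's the perf-per-transistor arithmetic as a minimal sketch; the transistor counts are the ones above, while the relative performance numbers are my own rough readings from the relative-performance chart posted below, not official figures:

```python
# Rough perf per billion transistors. Relative 1440p performance uses the
# RX 5700 as a 100% baseline; these values are eyeballed estimates.
cards = {
    # name:       (relative perf, transistors in billions)
    "GTX 1080":   (95,  7.2),   # ~95% of a 5700, as discussed here
    "RX 5700":    (100, 10.3),
    "RX 5700 XT": (112, 10.3),  # same die as the 5700
    "RTX 2070":   (107, 10.8),  # ~5% behind the 5700 XT
}

for name, (perf, xtors) in cards.items():
    print(f"{name:10s}: {perf / xtors:5.2f} perf points per billion transistors")
```

By this crude metric the 16nm GP104 still comes out ahead, which is the core of the disagreement here.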
 

llien

Member
The base 5700 doesn't really beat an 'old' 1080 with ~30% fewer transistors
The 1080 is slower than the 5700, let alone the 5700 XT, which indeed does beat the 2070.

relative-performance_2560-1440.png


Going beyond 14/12nm, manufacturers actually drop frequency (Intel's 10nm mobile chips, for instance, lose 20% of their clock speed).
 
Perhaps our understanding of what "Fermi Times" means is very different, chuckle.

2080Ti is a 750mm2 "no fucks given" chip at "no fucks given" price. It's not something even remotely applicable to consoles.
AMD wasting money or not wasting money on such abomination has no console performance impact whatsoever.


Both the 5700 and the 5700 XT have fewer transistors than the NV counterparts they beat; you need to get back from distorted reality into the real world.
I guess I'll deal with people telling me I'm wrong until Ampere comes out and destroys AMD's cards
 

llien

Member
I guess I'll deal with people telling me I'm wrong until Ampere comes out and destroys AMD's cards
You struggle to even keep track of the reality at hand, so why do you think your forecasts (not that we were forecasting anything, chuckle) are more accurate than those of others?

The whole concept of "never beats" and "destroys" is kindergarten talk to me, sorry.
There are products, which have: price, performance, power consumption. Rarely, special features.

Products of similar price are to be compared. Products at crazy prices do not matter.
To illustrate: if Intel rolls out a "MegaMonstah5000" that is 10 times faster than the 750mm2 2080Ti at, say, a modest price of, let's be generous, $50 million, what kind of impact would it have on the actual GPU market, let alone upcoming consoles?

If you didn't notice, Navi has significantly changed the landscape. Before it came, realistic expectations were somewhere between Vega 56 and Vega 64. Now even the pessimistic take is Vega 64 + 10%.
 

xool

Member
The 1080 is slower than the 5700, let alone the 5700 XT, which indeed does beat the 2070.

relative-performance_2560-1440.png


Going beyond 14/12nm, manufacturers actually drop frequency (Intel's 10nm mobile chips, for instance, lose 20% of their clock speed).

The 1080 has ~95% of the performance of the 5700 using ~30% fewer transistors and at the same speed.

The 5700 XT has less than a 5% performance advantage over the 2070 at a much higher base/boost clock (1.6/1.9GHz AMD vs 1.4/1.6GHz Nvidia).

Clock speeds can definitely be increased at 7nm.
 
So I finalized my testing in the Simulating Gonzalo thread and wrote up a summary with a pinch of speculation at the end. I think it also fits well in here, hence the following copy-paste.

If you haven't followed the thread, I would suggest you read post 1 to understand what I'm doing in there:

First things first:
  • I'm not saying Gonzalo is the PS5, though it might be!
  • I'm not suggesting this simulated Gonzalo is equivalent to the PS5's power in games; this thread is about how much computational power you can put in a console-sized box, not how efficiently you can use that power.


So what's Gonzalo?

"Gonzalo" was a leak from 3Dmark database earlier this year that hinted to a semicustom (not PC) APU wich through a AMD intern product code told us that it uses a CPU boost clock of 3.2Ghz and a GPU clock of 1.8Ghz.

DECODE3.png



8d7b1542898939.png



You can read a summary of all that here:

https://www.eurogamer.net/articles/digitalfoundry-2019-is-amd-gonzalo-the-ps5-processor-in-theory

So the string "13E9" at the end of the code refers to a driver entry for Navi10 Lite. Navi10 is also the name of the chip behind the recent RX 5700 series of AMD's desktop GPUs, which uses up to 40 Compute Units.
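For fun, a toy decoder for that tail, a minimal sketch assuming the field layout shown in the decode image above; the prefix in the example string is masked out, and the middle field is skipped since its meaning isn't needed here:

```python
# Toy decoder for a Gonzalo-style product code tail. The assumed layout
# (boost clock / middle field / GPU clock, then the GPU driver ID) follows
# the decode image above; the example prefix is masked, not the real code.
def decode_tail(code: str):
    _, clocks, gpu_id = code.rsplit("_", 2)
    cpu_boost, _middle, gpu_clock = clocks.split("/")
    return {
        "cpu_boost_ghz": int(cpu_boost) / 10,  # "32" -> 3.2 GHz
        "gpu_clock_ghz": int(gpu_clock) / 10,  # "18" -> 1.8 GHz
        "gpu_id": gpu_id,                      # "13E9" -> Navi10 Lite entry
    }

print(decode_tail("XXXXXXXXXXXXX_32/10/18_13E9"))
# {'cpu_boost_ghz': 3.2, 'gpu_clock_ghz': 1.8, 'gpu_id': '13E9'}
```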


So, confronted with these numbers, the question must be asked: would the power requirements of such a chip even fit in a console-sized box with its thermal and power constraints?



Simulating Gonzalo:

So here we have the spec sheet of the 5700 XT vs the data we can derive from the leaks:

specsdekos.png



For the CPU part of the APU we expect an 8-core variant. On 7nm, the nearest AMD processors would be the 3700X or 3800X. I snapped up the former.

As for the testing conditions: I put the 5700 XT and 3700X with 16GB DDR4 on a B350 motherboard in a well-ventilated case. As you can see from the spec sheet, the first problem we are confronted with is that the GPU alone is capable of drawing 225W.



So how do we get this combination of gear to a setup comparable to a console?

First we underclock and undervolt the GPU to what the Gonzalo leak suggests. You can achieve that by changing the parameters in AMD's Wattman:

wattman100k13.png


We start by setting the power limit to -30%, which gives us a physical limit on the current drawn that, from my pretesting, equates to around 125W GPU die power (die only, without VRAM and aux).

After that we limit the voltage/frequency curve to just 1800MHz at 975mV (stock is 2032MHz @ 1198mV):

wattman277kif.png



Secondly, we underclock and undervolt the CPU via AMD's Ryzen Master:

ryzenmasterecksd.png


We cap the clock at 3.2GHz and undervolt to 1000mV (stock is IIRC 1.4V).
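As a back-of-the-envelope check on why these undervolts buy so much, here's a minimal sketch using the classic CMOS dynamic-power relation P ∝ f·V². The stock power figures are my own rough assumptions and leakage is ignored, so treat the outputs as ballpark only:

```python
# Scale a stock power figure to new clock/voltage settings, assuming
# dynamic power dominates: P ~ f * V^2 (static leakage is ignored).
def scale(p_stock, f_stock, v_stock, f_new, v_new):
    return p_stock * (f_new / f_stock) * (v_new / v_stock) ** 2

# GPU: stock V/F point 2032 MHz @ 1198 mV vs Gonzalo 1800 MHz @ 975 mV.
# The ~180 W stock die power is an assumption, not a measurement.
print(f"GPU die estimate: {scale(180, 2032, 1.198, 1800, 0.975):.0f} W")
# -> ~106 W, the same ballpark as the measured 119-125 W

# CPU: capped 3.2 GHz @ 1.0 V vs an assumed ~4.1 GHz @ ~1.4 V boost.
# The ~90 W stock package power for a 3700X is also an assumption.
print(f"CPU package estimate: {scale(90, 4.1, 1.4, 3.2, 1.0):.0f} W")
# -> ~36 W, the same order as the ~24 W figure used later in this post
```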





Benchmarks and Testing

3D Mark - Fire Strike:


drumroll
.
.
.

firestrikegonzalolowwgjoc.png


Exactly what the last leak from TUM_APISAK was suggesting.


So how does that actually compare?

firestrikeresults4ijuv.png


First off: wow, the 3700X is a beast. With the 1600 I used before, you couldn't dare to dream of coming close to 20K overall at the 5700 XT's stock settings. The overall score of the Gonzalo configuration is around 10% down from stock settings. The graphics score is down just 6%, but the CPU-dependent physics score takes a hit from the limitations and is down nearly 20%.


But what does it take to get there?

So here is the system power drawn from the wall. That's repeated peak power, not the mean. A wall-side TDP equivalent would probably be around 10W lower:

firestrikepowersijyq.png


That's in Fire Strike's Graphics Test 1, representing the maximum power load. Graphics Test 2, which stresses other aspects of the graphics pipeline more, is around 10 watts lower.


Here are the GPU clock rates, stock and Gonzalo:

firestrikegpuclockz2kam.png


As we can see, we don't hit the 1800MHz under this load due to reaching the power limit. In GT2, where the power load is lower, we reach steadier and higher clocks.


Here's the over-time chart of the GPU die power:

firestrikegpudiepowerm7kv7.png


Further Testing:

To get an idea of how the power characteristics of Navi would change in different circumstances, I repeated the over-time testing in Fire Strike at different frequencies. The goal was to get distinct parameters for every frequency point. For that, I undervolted individually for every frequency point and ran GT1 and GT2 in Fire Strike with those settings. I logged the data of every run with a high sampling rate, so that I could do the over-time charts seen above. I repeated the testing for every data point with a second 5700 XT to ensure that I didn't wander into unstable territory.
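The averaging itself is nothing fancy; here's a minimal sketch of that post-processing step, with a hypothetical file name and column header (both depend on the logging tool):

```python
import csv

# Average one logged sensor column over a whole benchmark run. The file and
# the column name ("gpu_die_power_w") are hypothetical placeholders here.
def average_power(logfile, column="gpu_die_power_w"):
    with open(logfile, newline="") as f:
        values = [float(row[column]) for row in csv.DictReader(f)]
    return sum(values) / len(values)

# e.g. average_power("firestrike_gt1_1800mhz.csv") -> ~119 (watts)
```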

So here's the most meaningful result of that testing: the power drawn (only) by the GPU die over the target frequency in Wattman [green curve]. That's the average power draw over the duration of Graphics Test 1, which is power-wise the most demanding in Fire Strike. To show how performance changes in relation to the power draw, I plotted the Fire Strike graphics score against it [red curve]:

powerscalinggpuonlyuljwr.png


As you can see, power draw looks exponential and really takes off above a 1.9GHz target frequency, all while the Fire Strike score rises not even quite linearly. The yellow dot is the Gonzalo simulation, for reference.
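You can put a rough number on that super-linearity with two die-power figures from this testing (87W at a 1500MHz target, ~119W average at the 1800MHz Gonzalo settings); a minimal sketch:

```python
import math

# Effective exponent n in P ~ f^n between two measured points (each point
# was individually undervolted, so this folds the voltage scaling into n).
f1, p1 = 1500, 87    # MHz, W
f2, p2 = 1800, 119

n = math.log(p2 / p1) / math.log(f2 / f1)
print(f"effective exponent: {n:.2f}")
# -> ~1.7: already well past linear, and the curve gets steeper above 1.9 GHz
```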


Another way to show this is to compare the scaling factors of the Fire Strike score and the GPU die power, with the Gonzalo simulation as the base (100%):

powerscalingfactor5kkho.png


I summarized the results of the testing in the table below:

resultsshjg4.png




Interpretation:

So what does that all mean?

OK, as seen above, my PC drew 208W from the wall socket at the Gonzalo settings. Does that mean Gonzalo would be a >200W console? Let's check the data we have...

For said settings we have a second measurement: the GPU die power over time, as described above. That gets us an average of 119W in Fire Strike Graphics Test 1. Furthermore, I know the power efficiency of my PSU at 1/3 load: it's 90%. We also know the difference between die power and TBP of the 5700 XT at stock. The delta should be noticeably lower at Gonzalo settings, as VRM losses grow disproportionately with temperature and fans suck up disproportionately more current at high RPM. So we also need to account for that.

That framework culminates in the following:

firestrikepower-compoh4jkq.png


So, black values are measured. Yellow are pure estimates (whew, only one). And the rest [red] can be calculated or somewhat derived. Granted, that's no exact science, but I bet it won't be too far off what you would measure if you had the means.


So if we merge the actual computing components into one hypothetical APU, we see that we get under 150W of TDP. Thermal Design Power. The stuff that, when it eventually turns to heat, you have to cool away. Keep in mind that there are some redundant components in the GPU and CPU, like memory controllers, that an APU would have fewer of; those draw power the APU wouldn't need. Also, moving data between RAM, CPU, VRAM and GPU consumes lots of juice that wouldn't be needed if big parts of the data just hung around in on-die caches.
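If you want to retrace the arithmetic behind that chart, here's a minimal reconstruction; wall power, GPU die power and the 90% PSU efficiency are the measured/stated values from above, the rest follows from them:

```python
wall_power = 208      # W, measured at the socket (Gonzalo settings)
psu_eff    = 0.90     # stated PSU efficiency at ~1/3 load
gpu_die    = 119      # W, measured GT1 average

dc_power = wall_power * psu_eff   # what actually reaches the components
psu_loss = wall_power - dc_power
rest     = dc_power - gpu_die     # CPU + RAM + fans + VRM/board to split up

print(f"PSU losses:             {psu_loss:.0f} W")  # ~21 W
print(f"DC-side total:          {dc_power:.0f} W")  # ~187 W
print(f"everything but GPU die: {rest:.0f} W")      # ~68 W

# With the ~24 W CPU figure from the chart, the hypothetical APU lands at
# about 119 + 24 = 143 W, i.e. under the 150 W TDP mentioned above.
print(f"APU ballpark: {gpu_die + 24} W")
```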



Speculation – What would be possible next gen?

This data is certainly useful not only for figuring out what Gonzalo could be, but also for what would be possible with Navi in general. So how much more power than this would we be able to fit into a console, provided we have to go with the same process node?

This depends mainly on two aspects: one, how much power it would draw and therefore how much heat it would produce (you can't heat up a console-sized volume indefinitely); and two, how much beef you can fit on an affordable APU die. Let's start with the second one.

Die layout:

To get an idea of a hypothetical next-gen die, first of all we have to take a look at the Navi10 layout and its dimensions. We have the outer dimensions of the die, and we got this die shot/render showing the on-die components directly from AMD.

navi10die2jrksb.png


With that information, I tried to use relative scaling to figure out what's how big and came up with the following:

navi10dielayoutf9kpu.png
<----->
navi10dielayout28oj25.png



So we have the rough dimensions (rough meaning within a fraction of a mm in this case) of how big the CUs need to be. From several AMD presentations we also know the hierarchy in which Navi is organized. So from this we can extrapolate what a bigger APU would look like (as a dimension constraint I used the die length that proelite from Beyond3D derived from the Scarlett trailer = 24+mm):

consoledielayoutqojie.png


So, as you can see, I added 8 workgroup processors / dual CUs in total, 4 on each side to keep symmetry. I also adjusted the L2 graphics cache and widened the bus to 384-bit. Furthermore, I added two 4-core CCXs (the dimensions are from AnandTech, scaled proportionally). The additional memory controllers would likely wrap around the lower side, as seen on the X1X's die, but I kept everything symmetrical and tidy to show how much empty space is left on the die. Empty space which could accommodate stuff like ray tracing hardware.

For yield reasons this die would not have all 56 CUs enabled. On the RX 5700 non-XT there are 4 CUs disabled. At this point it's not clear how they disable those. In the past, AMD had to disable one CU per shader array, presumably to keep the load symmetric. Since the introduction of dual CUs it's not clear if you can just disable half a dual CU, or if symmetry can be broken now, requiring only one dCU to be disabled per Shader Engine in Navi. If I had to make a guess, I would bet on the former. For visualization purposes I disabled/greyed out one dCU per shader engine in the pic above anyway.

All in all, that would give you a 52CU APU at around 350mm². That should at this point be at most $60 more expensive per die than similar die sizes were when the 16nm or 28nm console SoCs launched. Long story short: such a die should absolutely NOT be cost-prohibitive at $449 or $499, even today.
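As a sanity check on that area figure, a minimal sketch; only the 251mm² Navi10 die size and the ~31mm² Zen 2 CCX (AnandTech) are published numbers, while the per-dual-CU and bus figures are my own guesses from the scaling exercise above:

```python
navi10_die = 251.0   # mm^2, published Navi10 (RX 5700 series) die size
dual_cu    = 4.5     # mm^2 per WGP/dual CU -- my guess
added_wgps = 8       # 4 per side, taking 40 CUs to 56
ccx        = 31.0    # mm^2 per 4-core Zen 2 CCX (AnandTech)
extra_bus  = 10.0    # mm^2 for the widened 384-bit PHY -- my guess

total = navi10_die + added_wgps * dual_cu + 2 * ccx + extra_bus
print(f"~{total:.0f} mm^2")   # ~359 mm^2, i.e. in the ~350 mm^2 region
```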

Now let's check the other aspect: power requirements.


Power Prognosis:

So we learned from the testing that if we clock Navi down slightly, we improve power efficiency disproportionately. At F_t = 1500MHz, the Navi10 GPU die consumed just 87W. Let's round that up to 90W to ensure our underclock would be viable for a wider range of silicon quality.

So 90W for 40 active CUs. What would happen if we scale that up to a 52CU console APU as shown above?

OK, let's just assume that power at the same frequency scales linearly with CU count. That would be some sort of worst-case scenario though, because the ratio of front-end components to CUs would not stay constant in the higher-CU GPU, meaning those parts would contribute less to the total power requirements of the die.

So linear scaling to 52 CUs would bring us to 117W die power. For the CPU side we just take the 24W we derived in the previous chapter.
That said, there are some redundant components in those GPU and CPU figures that wouldn't be present twice in an APU (memory controllers, for example). So from that perspective this is yet again a worst case.

Following our method from the Interpretation chapter, that would give us the following:

GPU: 117W
CPU: 24W
RAM: 12 x 1.5W = 18W (12 x 2GB GDDR6 modules)
PCB/AUX: 15W
PSU losses: 31W

powerbalancenbknz.png
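Summing that up as a quick check (the GPU line is just the linear 40-to-52 CU scaling from above):

```python
gpu = 90 * 52 / 40   # = 117 W, worst-case linear CU scaling
cpu = 24             # W, from the Interpretation chapter
ram = 12 * 1.5       # = 18 W, 12 x 2 GB GDDR6 modules
aux = 15             # W, PCB/AUX
psu = 31             # W, PSU losses

print(f"wall-side total: {gpu + cpu + ram + aux + psu:.0f} W")  # ~205 W
print(f"on-die (APU) share: {gpu + cpu:.0f} W")                 # 141 W < 150 W
```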


Conclusion:

So ~210W. Sounds like much, but not impossible. In my opinion the real hard barrier is the heat density of the die. The other components are manageable passively, but you really can't cool away much more than 160-170W from the die itself in an acceptable manner, provided you're using a vapour chamber plus blower cooler, which I guess you would have to in a console form factor. But since we are safely under 150W even with our rather pessimistic assumptions, we should be good.
 

Mass Shift

Member
What are you talking about, you need to chill with these (absurd) assumptions.

Well it's all assumptions at this point.

Regarding specs though, I wouldn't take bets against MS this time. For the simple reason that they already declared hardware-accelerated RT, 4K, 120fps, etc. That's just going to require more than what AMD has released up to this point.

Sony's posture is equally comfortable, even though I think MS probably has the better box this go round. I think the only folks who are going to implode are the impossibles.
 
I wouldn't take bets against MS this time. For the simple reason that they already declared hardware-accelerated RT, 4K, 120fps, etc.

1080p 30fps RT, 4K 60fps (medium to high), 1080p 120fps, maybe 1440p 120fps (medium to high).

That's just going to require more than what AMD has released up to this point.

The 5700 XT can do that.

even though I think MS probably has the better box this go round.
Yep, might be.
 
Hehe, run out of factual arguments?
All I know is, it's an uneven playing field. AMD is being put up against Intel's 14nm chips and nVidia's 16nm chips, and in each case it's not like AMD is blowing them out of the water at 7nm. Intel is planning 10nm chips and nVidia is going out with 7nm EUV, IIRC. So yes, it's awesome that AMD is wearing a crown right now that is making everyone swoon at the price/performance, but it's not going to last all that long. I do hope, however, that they put up a good fight in pricing and drag the top dogs down a bit.
 

Lort

Banned
From a performance standpoint, neither of the next-gen consoles can really compete with a datacenter, i.e. Stadia. You can build way more advanced games on Stadia. The 10.7TF number is just one instance, for marketing purposes. More power/more instances can be used if needed. But nobody will make AAA exclusives for Stadia to show off this power anyway.

Xbox has offered cloud-backed compute for the whole current generation and no one used it... If someone were to make a server-rendered game, it would come out on all platforms. Google literally has nothing that MS and Sony can't offer.
 

MadAnon

Member
Xbox has offered cloud-backed compute for the whole current generation and no one used it... If someone were to make a server-rendered game, it would come out on all platforms. Google literally has nothing that MS and Sony can't offer.
Wow, another one who doesn't understand what's being discussed. I'm not here to talk about which platform will have the better gaming experience or be more popular. I don't believe in Stadia's success in the near future, if that's what you wanted to hear, but that doesn't mean there's no potential either. Not here for fanboy wars anyway. I'm saying that a console has nothing on a datacenter in terms of raw power and the games you could potentially build. Not that they will be made. And which part of the word exclusive did you not understand?

Cloud-backed compute isn't the same thing as streaming. How many times does it need to be repeated? About to bang my head against the wall...
 

Mass Shift

Member
The 5700 XT can do that.

It doesn't support RT. Plain and simple. And while I'm not expecting magic in a box, the specifications for a hardware-accelerated RT pipeline are pretty demanding.

But even if we were just talking about hardware agnostic solutions, RT still requires performance muscle when there are instances of high rendering complexity.

Whatever the solution, it needs to be both highly efficient and flexible for the consoles for full integration. I for one can't wait for the deep dives.
 
It doesn't support RT. Plain and simple. And while I'm not expecting magic in a box, the specifications for a hardware-accelerated RT pipeline are pretty demanding.

But even if we were just talking about hardware agnostic solutions, RT still requires performance muscle when there are instances of high rendering complexity.

Whatever the solution, it needs to be both highly efficient and flexible for the consoles for full integration. I for one can't wait for the deep dives.
That's what I said: 1080p 30fps RT, possible on the 5700 XT.
 

Lort

Banned
Wow, another one who doesn't understand what's being discussed. I'm not here to talk about which platform will have the better gaming experience or be more popular. I don't believe in Stadia's success in the near future, if that's what you wanted to hear, but that doesn't mean there's no potential either. Not here for fanboy wars anyway. I'm saying that a console has nothing on a datacenter in terms of raw power and the games you could potentially build. Not that they will be made. And which part of the word exclusive did you not understand?

Cloud-backed compute isn't the same thing as streaming. How many times does it need to be repeated? About to bang my head against the wall...

Your post seemed like you may have forgotten this happened... https://www.google.com.au/amp/s/www.engadget.com/amp/2013/05/24/xbox-cloud-computing-gaming/ ...which would be understandable, since it never eventuated into anything...

Google saying it can scale to more compute than a single console is what MS provided 5 years ago.
 

MadAnon

Member
Your post seemed like you may have forgotten this happened... https://www.google.com.au/amp/s/www.engadget.com/amp/2013/05/24/xbox-cloud-computing-gaming/ ...which would be understandable, since it never eventuated into anything...

Google saying it can scale to more compute than a single console is what MS provided 5 years ago.
So which part goes against what I said, exactly? Thanks for confirming that MS did something completely different from running a game client purely in a datacenter and sending you only compressed sound/video output.
 

Gamernyc78

Banned
$600-$800, no less

Sony wouldn't be that stupid to go over $500 like the PS3, lol. Like you guys don't take history into account? They have learned from their experience and the past. It will be between $450 and $500, which I've always said, and will be decided shortly before launch, taking into account other things like competitors' pricing, like they did with the PS4. This is why competition is good: it'll drive both companies to sell their consoles as close in pricing as possible, even if one is going lower.
 

MadAnon

Member
Sony wouldn't be that stupid to go over $500 like the PS3, lol. Like you guys don't take history into account? They have learned from their experience and the past. It will be between $450 and $500, which I've always said, and will be decided shortly before launch, taking into account other things like competitors' pricing, like they did with the PS4. This is why competition is good: it'll drive both companies to sell their consoles as close in pricing as possible, even if one is going lower.
Well, Sony did say that they're positioning the next-gen console as a premium product.

premium product = premium price

So I wouldn't take the $500-600 price range out of the picture.
 

Gamernyc78

Banned
Well, Sony did say that they're positioning the next-gen console as a premium product.

premium product = premium price

So I wouldn't take the $500-600 price range out of the picture.

As stated, $500 is OK for the masses; anything over that they wouldn't do. The PS4 was $400 at launch because of the PS3 debacle. Sony is talking about a console for hardcore players and Chinese tariffs because we all know they are going to go a little higher, but idk if you read the whole PR: they said people will still be pleasantly surprised at the price. $600-800 is ridiculous thinking in my mind, as the whole time since the PS3, Sony has stated they made a mistake in pricing the PS3 too high or creating such a high-end product (it actually was sold at a big loss, so it wasn't overpriced).

No more than $500, and I'm expecting $500.
 
Well, Sony did say that they're positioning the next-gen console as a premium product.

premium product = premium price

So I wouldn't take the $500-600 price range out of the picture.
A premium product doesn't necessarily mean a premium price, and don't forget that it could have been said to lure the competition. Mind games.
 

Fake

Member
$499~599. It will be a premium console from both, and the PS4/Xbox One will continue to sell until the next gen gets proper exclusives and not cross-platform games.
 

DeepEnigma

Gold Member
$499~599. It will be a premium console from both, and the PS4/Xbox One will continue to sell until the next gen gets proper exclusives and not cross-platform games.

I would love to see both coming out swinging with high-powered consoles from the get-go, rather than being conservative by holding back specs (CU counts) for mid-gen refresh garbage.

They don't really have to be as conservative with all the revenue they make with engagement, not to mention the ecosystem is already established.

The more power they launch with, the more it'll help push gaming forward, not just console, but PC the same.
 

Marlenus

Member
The 1080 has ~95% of the performance of the 5700 using ~30% fewer transistors and at the same speed.

The 5700 XT has less than a 5% performance advantage over the 2070 at a much higher base/boost clock (1.6/1.9GHz AMD vs 1.4/1.6GHz Nvidia).

Clock speeds can definitely be increased at 7nm.

Why do you compare the 5700 to the 1080? If you want to compare perf/transistor, then use the fastest versions of each chip, which is the 5700 XT Anniversary vs a top-end 1080.

The 1070 has the same transistor count as the 1080, and the vanilla 5700 is about 25% faster than the 1070, so I don't get why you are comparing the cut-down Navi10 chip with the full-fat GP104 chip.
 

Fake

Member
I would love to see both coming out swinging with high-powered consoles from the get-go, rather than being conservative by holding back specs (CU counts) for mid-gen refresh garbage.

They don't really have to be as conservative with all the revenue they make with engagement, not to mention the ecosystem is already established.

The more power they launch with, the more it'll help push gaming forward, not just console, but PC the same.
Indeed, but at least we're very close to 2020, and judging by the Ryzen reviews I can see very good results. Unlike PC, consoles can extract a massive amount from the hardware, of course in the right hands.
 