
Next-Gen PS5 & XSX |OT| Console tEch threaD

Status
Not open for further replies.

SonGoku

Member
Benchmarks don't mean jack. Show me in game performance, where the 2080 has an undeniable lead.
I never said anything about outperforming, I claimed match as in being in the same ballpark as an RTX 2080
They are gaming benchmarks, I don't know what else you want?
I just don't understand how you think 3 years (or less) after the Xbox one x, console manufacturers are going to be able to cram 2x the graphical horsepower into a similar sized chassis? It's incredibly optimistic - diminishing returns are a thing.
It's simple if you think about it...
In 3 years the PS4 Pro more than doubled the base PS4's floating-point performance, while using a smaller die to boot!
In 4 years the X more than tripled the base PS4's floating-point performance.

Add in a post-GCN arch and everything lines up for a huge jump.
A 56CU to 64CU GPU will fit nicely in a 350mm2 die at 7nm.
 
Last edited:

CrustyBritches

Gold Member
which is why I maintain PS5 will use a 56-64CU GPU

Consoles won't use off-the-shelf discrete cards
A 56-64CU GPU at 12TF to 13TF should match the RTX 2080.
Fill this out if you get a chance and I can get a better idea of where you're coming from:

1. Navi 10 XT list TDP =
2. Navi 10 XT gaming peak power consumption(boost bios) =
3. Navi 10 XT boost clock & # of SPs =
4. PS5 total system peak power consumption =
5. PS5 GPU peak power consumption =
6. PS5 boost clock & # of SPs =
 
I never said anything about outperforming, I claimed match as in being in the same ballpark as an RTX 2080
They are gaming benchmarks, I don't know what else you want?

It's simple if you think about it...
In 3 years the PS4 Pro more than doubled the base PS4's floating-point performance, while using a smaller die to boot!
In 4 years the X more than tripled the base PS4's floating-point performance.

Add in a post-GCN arch and everything lines up for a huge jump.
A 56CU to 64CU GPU will fit nicely in a 350mm2 die at 7nm.

[inserts princess bride meme]...about that not meaning what you think it means.

Radeon VII doesn't match the 2080 in gaming performance. Fact. YouTube/Google is your friend.
Those are those diminishing returns I was talking about. But as I said, I'll be happy to offer you a full apology when PS5 matches a 2080 in gaming performance (notice, I said gaming performance, not raw horsepower, because I'm sure it will lose in that)
 
Last edited:

stetiger

Member
[inserts princess bride meme]...about that not meaning what you think it means.

Radeon VII doesn't match the 2080 in gaming performance. Fact. YouTube/Google is your friend.
Those are those diminishing returns I was talking about. But as I said, I'll be happy to offer you a full apology when PS5 matches a 2080 in gaming performance (notice, I said gaming performance, not raw horsepower, because I'm sure it will lose in that)
Yeah, I would be surprised if the PS5 matches a 2080. The real question though is whether AMD can produce a chip that matches or exceeds the 2080 two years later at a smaller node. Spoiler: yes. I do however expect the PS5 to be around the RTX 2070 + 10% range, but below the 2080. It will be a beast. Combine that with the rumored 4.8GB/s SSD, 8-core Zen 2, 16GB of gaming memory, and specialized chips, and you have a bigger jump from PS4 to PS5 than PS3 to PS4.

Probably the last big jump we will see on consoles. The bigger news here though, is that Stadia is a bit screwed.
 

SonGoku

Member
Fill this out if you get a chance and I can get a better idea of where you're coming from:

1. Navi 10 XT list TDP =
2. Navi 10 XT gaming peak power consumption(boost bios) =
3. Navi 10 XT boost clock & # of SPs =
4. PS5 total system peak power consumption =
5. PS5 GPU peak power consumption =
6. PS5 boost clock & # of SPs =
I don't care about/work with AMD's discrete card lineup; consoles use their own configurations anyway, and AMD tends to clock its cards past the performance-per-watt sweet spot. Case in point: X vs RX 580.
[images: X vs RX 580 power-consumption comparison charts]


I expect a max system power consumption of 200W
Two possible GPU configs
56CUs (64CU chip with 8CUs disabled)
64CUs (72CU chip with 8CUs disabled)
Consoles don't use boost clocks; I expect 1.6GHz to 1.8GHz
Floating-point performance between 12 and 13 teraflops.
Those are those diminishing returns I was talking about.
Such as? We are just hitting two major tech advancements:
7nm and RDNA.

Radeon VII is 13.4TFs btw
Navi at 12TF will be worth 15TF of Vega performance.
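Those teraflop figures can be sanity-checked with the standard GCN/RDNA throughput formula (CUs × 64 shader processors × 2 ops per clock × clock speed); a quick Python sketch of the configs discussed above:

```python
# FLOPS estimate for a GCN/RDNA-style GPU:
# TF = CUs * 64 shader processors per CU * 2 ops per clock (FMA) * clock in GHz
def teraflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

for cus in (56, 64):
    for clock_ghz in (1.6, 1.8):
        print(f"{cus} CUs @ {clock_ghz} GHz -> {teraflops(cus, clock_ghz):.1f} TF")
```

56CUs at 1.8GHz lands at about 12.9TF and 64CUs at 1.6GHz at about 13.1TF, which is where the 12-13TF range above comes from.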
 
Last edited:

CrustyBritches

Gold Member
I don't care about/work with AMD's discrete card lineup; consoles use their own configurations anyway, and AMD tends to clock its cards past the performance-per-watt sweet spot. Case in point: X vs RX 580.
That was the point I was making about comparing the X1X's GPU to a 200W GPU instead of a 156W RX 480 with 300GB/s+ memory bandwidth and more cache. When I OC my RX 480's memory clock, raise the power cap, and undervolt, I get X1X's FH4 4K/30fps and TW3 1800p/30fps Crookback Bog horse ride like DF. PS4 Pro is just like a 4.2TF RX 480 with way lower memory bandwidth. It's important you guesstimate Navi 10 XT's TDP and actual gaming consumption to compare with the RX 480 and the mid-gen refreshes.

PS5 is probably Gonzalo, 1.8GHz core(power capped), and Navi 10 Lite. Probably ~175W total system consumption like X1X.
 

TLZ

Banned
I don't care about/work with AMD's discrete card lineup; consoles use their own configurations anyway, and AMD tends to clock its cards past the performance-per-watt sweet spot. Case in point: X vs RX 580.
[images: X vs RX 580 power-consumption comparison charts]


I expect a max system power consumption of 200W
Two possible GPU configs
56CUs (64CU chip with 8CUs disabled)
64CUs (72CU chip with 8CUs disabled)
Consoles don't use boost clocks; I expect 1.6GHz to 1.8GHz
Floating-point performance between 12 and 13 teraflops.

Such as? We are just hitting two major tech advancements:
7nm and RDNA.

Radeon VII is 13.4TFs btw
Navi at 12TF will be worth 15TF of Vega performance.
Actually isn't this the very reason it should allow them to go lower on TFs? Why the need to go higher if you can match older Vega TFs with lower counts? Like Nvidia.
 
so what did we learn today?

AMD is done with selling high value stuff. they've seen that you will buy overpriced nvidia and intel shit and now they gonna give us just a little bit more perf for a little less dollars. good for my AMD stock, bad for us gamers. but to be honest, for the idiocy of the last few years, we really did earn that.
 

stetiger

Member
so what did we learn today?

AMD is done with selling high value stuff. they've seen that you will buy overpriced nvidia and intel shit and now they gonna give us just a little bit more perf for a little less dollars. good for my AMD stock, bad for us gamers. but to be honest, for the idiocy of the last few years, we really did earn that.
Most accurate comment in this whole thread. Part of the problem, though, are the journalists that keep propping up Nvidia because it performs better, despite their pricing strategy.
 

SonGoku

Member
a 156W RX 480 with 300GB/s+ memory bandwidth and more cache. When I oc my RX 480 memory clock, power cap, and undervolt I get X1X's FH4 4k/30fps and TW3 1800p/30fps Crookback bog horse ride like DF.
Except for the part where the X GPU is nothing like the RX 480's silicon layout; if anything the Pro is a downclocked 480, and CPU-bound benchmarks don't prove anything. The X GPU is beefier than an RX 580 silicon-wise and achieves the same results with less power thanks to having more CUs. That is why lower-CU-count GPUs clocked past the performance-per-watt sweet spot, hitting diminishing returns, are not indicative of the performance per watt you can expect from a console GPU with a higher CU count.

Higher-CU-count cards hit better performance-per-watt sweet spots, which is why it doesn't make sense to use fewer than 56CUs on consoles.
PS5 is probably Gonzalo, 1.8GHz core(power capped), and Navi 10 Lite. Probably ~175W total system consumption like X1X.
56 CU at 1.8GHz would coincide with the 13TF rumors
Monster if true.
 
Last edited:
but I'm glad that Navi is indeed next-gen (so the rumours of the SE rework of the last few days were probably true). the stated 25% IPC gain should bring a 64CU chip at high clocks up to 2080 Ti levels of performance.

of course AMD is fucked once again as soon as Nvidia goes to 7nm.
 
Last edited:

demigod

Member
I was expecting ps5 to be somewhere around 2070, but after today I’m leaning towards 2080.

 

SonGoku

Member
Actually isn't this the very reason it should allow them to go lower on TFs? Why the need to go higher if you can match older Vega TFs with lower counts? Like Nvidia.
Because 7nm plus the wider 32-wide SIMD design means you can fit up to 72CUs on a 350mm2 console APU, and the news that Navi is designed for high clocks means you can comfortably hit 12TF+.
Remember console manufacturers are chasing a clear next-gen leap; 4K and RT will be resource hogs on their own. Every flop counts.

From Cerny interview
A true generational shift tends to include a few foundational adjustments. A console’s CPU and GPU become more powerful, able to deliver previously unattainable graphical fidelity and visual effects

I get what you are asking, but 8-9TF is just too low a mark for next gen, even accounting for Navi's advantages. Sony will want to push the absolute best silicon they can fit on a console APU.
 
Last edited:
Stole this from Era; the leak kinda makes sense. Now I'm wondering if Sony has Navi exclusively.


Highly unlikely. AMD wants to show their dominance and also show people why they should choose their new architecture. Allowing MS to put out an inferior product would be dumb and keeps the Vega platform alive for no reason.
 

CrustyBritches

Gold Member
CPU-bound benchmarks don't prove anything.

Higher-CU-count cards hit better performance-per-watt sweet spots, which is why it doesn't make sense to use fewer than 56CUs on consoles.
Lol. Nice spin. The benchmarks I chose were specifically not CPU-bound, but GPU-bound.

It seems you're intent on totally discarding the implications of Navi 10 for next-gen consoles. We have precedent from the past 2 console releases (2013 and mid-gen) that a card from the previous year, from between the 150W and 175W tiers, has been used. If the Navi 10 line is 160-200W and PS5 has 175W total system consumption, then I don't think we can discard the possibility of its use in PS5/NextBox. Like, for instance, they had a 50ft screen with PlayStation on it during the unveiling of this very card.
[image: PlayStation logo on screen at the Navi reveal]
 
Last edited:
so going by Lisa's wording it looks like AMD's GPU archs will from now on indeed be diverging.

Vega for the datacenter and maybe content creators, RDNA for gaming. that's hopefully another indicator that there will be dedicated raytracing silicon on Navi.
 
Lol. Nice spin. The benchmarks I chose were specifically not CPU-bound, but GPU-bound.

It seems you're intent on totally discarding the implications of Navi 10 for next-gen consoles. We have precedent from the past 2 console releases (2013 and mid-gen) that a card from the previous year, from between the 150W and 175W tiers, has been used. If the Navi 10 line is 160-200W and PS5 has 175W total system consumption, then I don't think we can discard the possibility of its use in PS5/NextBox. Like, for instance, they had a 50ft screen with PlayStation on it during the unveiling of this very card.
[image: PlayStation logo on screen at the Navi reveal]

I'm with him on that one. if the low-CU-count Navi (whatever it's called) is running at higher clocks, and therefore higher on the frequency/voltage curve and therefore TDP, they will go with the high-CU-count Navi at lower clocks in the consoles, because that's where the sweet spot of perf per W is.
looking back to the past isn't a good indication in that case, imo.
 
Last edited:

SonGoku

Member
Lol. Nice spin. The benchmarks I chose were specifically not CPU-bound, but GPU-bound.
What spin? Both games you listed are locked 30fps on the X; we'll never know if it could achieve a higher fps with a better CPU
TW3 is notably CPU-bound on consoles

I don't even get the point of said exercise when we already know the specs of the X GPU surpass the RX580 in everything but clocks.
It seems you're intent on totally discarding the implications of Navi 10 for next-gen consoles
We don't even know what the Navi lineup looks like; to talk about "Navi 10" is meaningless at this point
Let's talk specs instead: A 56-64CU card would fit on a 350mm2 APU

Why would Sony go with a die smaller than the launch PS4's and get worse performance per watt? It doesn't add up
We have precedent with the past 2 console releases(2013 and mid-gen) that a card from the previous year from between the
Wrong. The PS4 Pro was based on a card that launched in the same year, with some Vega features added.
The X used an enhanced version of the RX 580, which launched in the same year.

The situation is very different compared to 2013, a 7970/R9390 would just not fit in a console APU.
If navi is not gcn, won't that make BC with PS4 trickier? Will it have to be cpu and gpu software emulation?
I'm guessing it will be done with a software layer; the CPU is a bigger change imo
 
Last edited:

CrustyBritches

Gold Member
What spin? Both games you listed are locked 30fps on the X; we'll never know if it could achieve a higher fps with a better CPU
TW3 is notably CPU-bound on consoles

We don't even know what the Navi lineup looks like; to talk about "Navi 10" is meaningless at this point
Let's talk specs instead: A 56-64CU card would fit on a 350mm2 APU
Nope. TW3 has dynamic res with a maximum of 4K on X1X. Crookback Bog is 1800p/30fps like my card, so GPU-bound, and it hits memory bandwidth hard with fog effects. If you want CPU-bound you go to Novigrad. Totally different scenarios. You want to chase X1X results with higher compute and push Polaris 10 out of its sweet spot for perf/watt instead of crediting X1X's higher memory bandwidth and additional cache. That's why you must justify using a 200W GPU to get your next-gen estimates, along with the AdoredTV chart on Navi 20/Radeon VII TDP, imo.

We know Sapphire is opening with 2 Navi 10 variants at $399 and $499. We know Navi 10's performance in Strange Brigade relative to a 2070. We know its perf/watt relative to Vega. We actually know a lot about Navi 10.

No thanks on shifting the discussion from Navi perf/watt and console TDP to die-size estimates. I'm afraid I'm not well-versed enough in that area to comment.
 
Last edited:

SonGoku

Member
Nope. TW3 has dynamic res with a maximum of 4K on X1X. Crookback Bog is GPU-bound and hits memory bandwidth hard with fog effects. If you want CPU-bound you go to Novigrad. Totally different scenarios. You want to chase X1X results with higher compute and push Polaris 10 out of its sweet spot for perf/watt instead of crediting X1X's higher memory bandwidth and additional cache. That's why you must justify using a 200W GPU to get your next-gen estimates, along with the AdoredTV chart on Navi 20/Radeon VII TDP, imo.
I'm not sure why you are pushing so hard for this undervolted RX 480, just to fit this made-up narrative of consoles using last year's GPUs, when we already know the specs of the X GPU surpass the RX 580, and the Pro used a GPU that released in the same year with better tech. Why are you trying to deny that water is wet?
There are so many factors, like the CPU not keeping up with GPU draw calls, your own benchmark not using the same settings, the console GPU handling tasks that are done by the CPU on PC, etc.
Xbox One X GPU Specs
Radeon RX 580 Specs
Radeon RX 480 Specs
Let's get this out of the way. Are you denying those specs? Are you denying the X has a 6TF Polaris GPU?

BTW I'm no longer taking into account anything AdoredTV says or any of his charts
I'm basing my estimates on 7nm silicon density and AMD's PR for Navi
We know Sapphire is opening with 2 Navi 10 variants at $399 and $499. We know Navi 10's performance in Strange Brigade relative to a 2070. We know it's perf/watt relative to Vega. We actually know a lot about Navi 10.
I meant via official channels but ok
No thanks about shifting the discussion from Navi perf/watt and console TDP to die size estimates. I'm afraid I'm not well versed enough in that area to comment.
Too bad, because die size and CU count are crucial when discussing perf/watt. Case in point: the X GPU hits 6TF while consuming less power than an RX 580, thanks to its lower-clocked CUs (1172MHz vs 1340MHz)
Let me ask you this instead:
If you knew with certainty that a 56-64CU (enabled) GPU would fit on a 350mm2 APU (launch PS4 size), would you say it's quite likely to be the final CU count on the console?
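That wide-vs-fast comparison is easy to check with the same FLOPS formula; a quick Python sketch using the CU counts widely reported for these parts (40 for the X, 36 for the RX 580 — an assumption beyond the clocks quoted above):

```python
# Compute throughput for the two configs: more CUs at lower clocks (X)
# vs fewer CUs at higher clocks (RX 580).
def teraflops(cus, clock_mhz):
    return cus * 64 * 2 * clock_mhz / 1e6  # 64 SPs/CU, 2 ops per clock (FMA)

x1x   = teraflops(40, 1172)  # wider, lower-clocked
rx580 = teraflops(36, 1340)  # narrower, higher-clocked
print(f"X1X:    {x1x:.2f} TF")    # ~6.00 TF
print(f"RX 580: {rx580:.2f} TF")  # ~6.17 TF
```

Both land at roughly 6TF, but the wider chip gets there at a clock about 13% lower, which is where the power savings come from.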
 
Last edited:

stetiger

Member
If navi is not gcn, won't that make BC with PS4 trickier? Will it have to be cpu and gpu software emulation?
A new architecture is typically merely a refactor. The last 5 years of NVIDIA GPUs have had SMs (NVIDIA's rough equivalent of CUs). Expect Navi to have a lot in common with GCN, with some bottlenecks removed and replaced by newer features. That is more straightforward to do BC on than, let's say, something like Cell, where you are jumping between two fundamentally different architectures.
 
Last edited:

stetiger

Member
Yeah, he said Navi was having issues and said RIP to it. You had folks jumping off the Navi bandwagon in that thread and now all of a sudden they are back on Navi wagon.
Didn't he say that Navi was GCN, and that people wanted to move to the next arch? That sounds like BS to me. That being said, Navi most likely has tons of GCN tech in it.
 

CrustyBritches

Gold Member
Let me ask you this instead:
If you knew with certainty that a 56-64CU (enabled) GPU would fit on a 350mm2 APU (launch PS4 size), would you say it's quite likely to be the final CU count on the console?
It's pretty clear you need X1X to have a 200W GPU instead of a 156W GPU to fit your narrative.

We've been looping on this. We saw Navi 10 tonight and its benchmark result in Strange Brigade relative to a 2070. We know perf/watt relative to Vega. It would be great if that was the Navi 10 Pro, but the Sapphire rep seemed to indicate it was the $499 XT variant. Somebody will measure that die size relative to Dr. Su's fingers and the size of some other chip she's held, then you'll get your die size, but she said it was small, whatever that means.

You know my position, that Navi 10 line could fill the RX 570/580 TDP tiers with 160-200W cards replacing Vega 56/64. I think PS5 is Gonzalo(Navi 10 Lite) and has total system peak consumption around 175W. I'll let you have the last word, thanks.
 

stetiger

Member
Note: if you do quick math on X1X compute, 6 × 1.25 (keeping clocks the same) × 1.5 (keeping power draw the same) = 11.25TF. So we can safely expect next-gen consoles to start from 11TF, and you can hope for the range to go from 11TF to 15TF. You might even be able to get an 11TF console at $399 when you factor in the 7nm efficiency gain, which is 1.25x according to AMD.
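Spelled out, that back-of-the-envelope scaling is just three multipliers; the uplift factors are the poster's assumptions based on AMD's claims, not measured figures:

```python
# Napkin math for a next-gen compute floor, starting from the X1X.
base_tf   = 6.0   # X1X compute, in TF
ipc_gain  = 1.25  # claimed RDNA uplift at the same clocks
watt_gain = 1.5   # extra compute at the same power draw on 7nm

estimate = base_tf * ipc_gain * watt_gain
print(f"{estimate:.2f} TF")  # 11.25 TF
```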
 
Last edited:

SonGoku

Member
It's pretty clear you need X1X to have a 200W GPU instead of a 156W GPU to fit your narrative.
The RX 580 is not even 200W...
You are missing the point at hand: you can hit a better performance-per-watt sweet spot with more CUs clocked lower, as opposed to fewer CUs clocked higher. The X and RX 580 are a prime example of this

Do you agree with the bolded?
Consoles don't use off-the-shelf discrete cards; they tinker with and customize silicon to fit their perf and TDP targets. You get much more flexibility this way.
 
Last edited:

stetiger

Member
Interestingly enough, this also means that next-gen consoles will have around 1080 Ti or 2080 power, with their ray-tracing solution, an SSD, and an 8-core Zen 2 at presumably 3.2GHz. Those were high-end PC specs last summer. 4K/60 will be doable in a lot of games, but I still think they should aim for 1800p upscaled for most games. We could trade off a bit of rasterization cycles for extra geometry and post effects like AA (I am kinda getting tired of TAA's blurriness now haha).

Edit: The PS5 at release will match a two-year-old high-end PC, with developers having had two years to build games for it on devkits. Next gen has a more exciting start than current gen.
 
Last edited:

CrustyBritches

Gold Member
The RX 580 is not even 200W...
You are missing the point at hand: you can hit a better performance-per-watt sweet spot with more CUs clocked lower, as opposed to fewer CUs clocked higher. The X and RX 580 are a prime example of this

Do you agree with the bolded?
Consoles don't use off-the-shelf discrete cards; they tinker with and customize silicon to fit their perf and TDP targets. You get much more flexibility this way.
I've already stated my case for X1X with 175W total system consumption and ~150W peak GPU power consumption. I know your position, let's move on.

Navi 10 was demoed tonight. What's your estimated AMD listed TDP compared to boost bios peak consumption for load gaming? How does The Strange Brigade demo relate to Vega and the claims for perf/watt improvements relative to your console target estimates? Where are you getting your Navi 20 tdp estimates?
 

SonGoku

Member
I've already stated my case for X1X with 175W total system consumption and ~150W peak GPU power consumption. I know your position, let's move on.
And a GPU with more CUs can hit those numbers. I agree with the bolded, maybe a bit higher
What's your estimated AMD listed TDP compared to boost bios peak consumption for load gaming?
Retail GPU configs are inconsequential to consoles (see: X vs RX 580)
Where are you getting your Navi 20 tdp estimates?
I don't even know what Navi 20 is lol
All I know is a 56-64CU chip will fit on a 350mm2 APU, and lower-clocked CUs hit better perf/watt
 
Last edited:

TLZ

Banned
Because 7nm plus the wider 32-wide SIMD design means you can fit up to 72CUs on a 350mm2 console APU, and the news that Navi is designed for high clocks means you can comfortably hit 12TF+.
Remember console manufacturers are chasing a clear next-gen leap; 4K and RT will be resource hogs on their own. Every flop counts.

From Cerny interview


I get what you are asking, but 8-9TF is just too low a mark for next gen, even accounting for Navi's advantages. Sony will want to push the absolute best silicon they can fit on a console APU.
What about total wattage? They're still limited by that. We'd have to know how many watts 72CUs with high clocks would cost. I did watch it again just now, and just heard the higher clocks part. Just wondering how high they can really go without exceeding limits.
 

CrustyBritches

Gold Member
Retail GPU configs are inconsequential to consoles (see: X vs RX 580)

I don't even know what Navi 20 is lol
I find it odd that you so brazenly cast aside the implications of Navi 10 for consoles while saying you don't know what Navi 20 is. They had a giant PlayStation logo on a 50ft screen for this very GPU.
[image: PlayStation logo on screen at the Navi reveal]

Listed TDP and actual load gaming peak power consumption are pretty much the last piece of the puzzle and you don't want to talk Navi 10? You say retail config is inconsequential yet you base everything on the RX 580's perf/watt while you argue high clock speed has poor perf/watt scaling on Polaris 10?
 

SonGoku

Member
What about total wattage? They're still limited by that. We'd have to know how many watts 72CUs with high clocks would cost. I did watch it again just now, and just heard the higher clocks part. Just wondering how high they can really go without exceeding limits.
It would be 64CUs (8 disabled for yields); that'd be the high end of my expectations. The other possibility would be 56CUs
To answer your question: as high as it can go without breaking the bank

The point is, on a high-CU chip on an arch that's supposed to clock high, you won't get less than 11TF
 

SonGoku

Member
They had a giant PlayStation logo on a 50ft screen for this very GPU.
That was the Navi/RDNA reveal; she wasn't talking about any chip in particular at that point
Listed TDP and actual load gaming peak power consumption are pretty much the last piece of the puzzle and you don't want to talk Navi 10?
It's not that I don't want to talk Navi 10; I'm eager for info on Navi 10 and Navi in general.
My point is that retail cards are not representative of console GPUs. Consoles tend to use more flexible configurations that fit their targets.
AMD tends to overshoot clocks, hitting diminishing returns, which is why I don't think their perf/watt is representative.

Lastly, 7nm is not a mature process; 1 year can make a world of difference for yields, which affect clocks and in turn influence TDP
high clock speed has poor perf/watt scaling on Polaris 10
No, I argue that you get a better perf/watt sweet spot with more cores/CUs. This is true of any arch (Nvidia/AMD), CPUs and GPUs alike.
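The argument can be illustrated with a toy power model: dynamic power scales roughly with C·V²·f, and voltage has to rise with clock speed past the sweet spot, so power grows superlinearly with clock. All numbers below are illustrative assumptions, not real silicon figures:

```python
# Toy model of the "wide and slow beats narrow and fast" argument.
def throughput(cus, clock_ghz):
    return cus * clock_ghz                 # proportional to TF

def rel_power(cus, clock_ghz, v_per_ghz=0.5):
    voltage = v_per_ghz * clock_ghz        # crude linear voltage-frequency model
    return cus * voltage ** 2 * clock_ghz  # dynamic power, arbitrary units

wide_cus, wide_clk = 64, 1.4               # more CUs, lower clock
narrow_cus = 40
narrow_clk = wide_cus * wide_clk / narrow_cus  # 2.24 GHz to match throughput

print(round(rel_power(wide_cus, wide_clk), 1))      # 43.9
print(round(rel_power(narrow_cus, narrow_clk), 1))  # 112.4
```

Under this (crude) model the narrow chip needs well over twice the power to deliver the same throughput, which is the perf/watt case for going wide on a console APU.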
 
Last edited:

Munki

Member
They seem to have implied that Sony contributed considerably to R&D.

They also had a speaker from MS, where she and Lisa gushed over their relationship from the early planning of Navi. They also heavily teased the NextBox product(s). There seemed to be some genuine excitement from both of them in regard to what is in store from their work together.

Somebody from ERA made a transcript:

DopeyFish - TY brudduh.

Lisa: AMD and Microsoft have been partners for a long time, but our relationship has changed. Can you talk a little bit about our relationship and what we're doing together?
Roanne: Absolutely. So for me, I look back to 2 years ago when we started back in Redmond; AMD and Microsoft started planning the next-generation Ryzen family together, and I think that we have completely moved the needle in how we approach co-development together... and you're going to see that in what comes to market at the device level, the driver level, and even the customer experiences that are going to be delivered. [random PC talk/customer stuff]

Lisa: so our audience is always wondering... what's next?

Roanne: *grinning ear to ear to Lisa*
Lisa: *grinning ear to ear to Roanne*

<5 seconds later>

Lisa: are you going to tell them?!

Roanne: I KNOW... OH IF ONLY I COULD... Lisa and I have some inside scoops on the depth of partnership between AMD and Microsoft... not just in PC space but also when it comes to the datacenters and cloud and when it comes to graphics and gaming... I am <really> excited about the stuff that we're doing there.... I think it's going to be next level in terms of the things that you can expect from us... [random closing remarks].
 