
Next-Gen PS5 & XSX |OT| Console tEch threaD

Status
Not open for further replies.

Ellery

Member
PC Performance =/= Console Performance. Too many discrepancies to compare. I have a 1080 currently but I can almost guarantee the consoles will perform better due to optimization, extra bandwidth, fewer limitations, and more memory.

Yes, I know. We are discussing discrete graphics cards right now though, and that doesn't change anything about the relative difference between GPUs, because we can't factor in developer talent and architectural/API differences for consoles.

Obviously God of War on the 1.8 TFLOP PS4 looks amazing, better than most PC games on my 5.5 TFLOPS AMD R9 290X.
 

Fake

Member
PC Performance =/= Console Performance. Too many discrepancies to compare. I have a 1080 currently but I can almost guarantee the consoles will perform better due to optimization, extra bandwidth, fewer limitations, and more memory.
Game benchmarks are pretty useless in the console market. Dunno why bring them up.
Holy shit. This is performance per watt.

Dude what are you doing? Are you for real right now?
He is not so wrong, you know. Those benchmarks are from PC; they don't actually translate to the console market.
 

Ellery

Member
He is not wrong, you know. Those benchmarks are from PC; they don't actually translate to the console market.

I think maybe you should re-read exactly what I wrote so far today about graphics cards. I think we are talking about 2 different things here.

I don't know how any of this relates to consoles right now. I didn't even say anything about what GPU is in the console.

It will be a custom Navi-based chip. I have no idea how many CUs or what clock speeds.
 

ethomaz

Banned
He is not wrong, you know. Those benchmarks are from PC; they don't actually translate to the console market.
I wasn't trying to estimate optimizations for consoles... just overall PC graphics performance.
I did get the wrong graphs :(
 

Fake

Member
I think maybe you should re-read exactly what I wrote so far today about graphics cards. I think we are talking about 2 different things here.

I don't know how any of this relates to consoles right now. I didn't even say anything about what GPU is in the console.
It's the subject of the thread, but I understand. Sry.
I wasn't trying to estimate optimizations for consoles... just overall PC graphics performance.
I did get the wrong graphs :(
I get it, but again those PC vs PC/console comparisons are silly. They never translate to the console. I even posted a Digital Foundry video talking about exactly that.
 

Ellery

Member
I'll be so happy when we finally get to see real-world gaming benchmarks for Ryzen 3000 from third-party reviewers.
Not a big fan of all the synthetic stuff, but the R5 3600(X) compared to the Intel i7 8700K is going to be very interesting.
 

LordOfChaos

Member
I find the HBM + DDR4 rumor to be weird. Wouldn't that require 2 separate controllers on board, which would increase price and use more space?

Fun fact: the PS4 already has (at least) two memory controllers on board. There's 256MB of DDR3 attached to the ARM coprocessor.

Heck, it's exactly what the Pro did too, even more directly: the extra 1GB of DDR3 frees up more of the GDDR5. The tiny amount of silicon needed for the MC is evidently worth it over all of the OS memory needlessly sitting in faster GDDR.
 

ethomaz

Banned
Fun fact: the PS4 already has (at least) two memory controllers on board. There's 256MB of DDR3 attached to the ARM coprocessor.

Heck, it's exactly what the Pro did too, even more directly: the extra 1GB of DDR3 frees up more of the GDDR5. The tiny amount of silicon needed for the MC is evidently worth it over all of the OS memory needlessly sitting in faster GDDR.
Not a fair comparison. You're talking about 2 discrete chips (APU + southbridge), while the DDR4/HBM2 scenario refers to the APU having 2 different memory controllers.

Of course it's safe to expect a small amount (like 4GB) of (LP)DDR4 RAM in the southbridge for the OS needs.
 

shark sandwich

tenuously links anime, pedophile and incels
What did everyone truly expect for Navi 10? You aren't going to get a wine-tier GPU at beer-budget prices. The customizations on consoles are where they shine; more bandwidth and a wealth of GDDR6 is where they will get their edge. But expecting RTX 2070 performance or better was setting yourselves up for disappointment to begin with.
Radeon VII has by far the highest bandwidth ever for a GPU (1 TB/s). That barely granted any performance boost over Vega 64, which has half the bandwidth.

If you’re counting on memory bandwidth on consoles to magically improve Navi’s performance then you are going to be disappointed.
 

Fake

Member
If you’re counting on memory bandwidth on consoles to magically improve Navi’s performance then you are going to be disappointed.
Well, those SSDs magically improved the loading times on the PS5 dev kit. It's no use having technology if no one knows how to use it.
 

LordOfChaos

Member
Not a fair comparison. You're talking about 2 discrete chips (APU + southbridge), while the DDR4/HBM2 scenario refers to the APU having 2 different memory controllers.

Of course it's safe to expect a small amount (like 4GB) of (LP)DDR4 RAM in the southbridge for the OS needs.

That's what I was referring to, though. The PS4 Pro has 1GB of DDR3 on the coprocessor and offloaded the OS RAM from the main pool with it. I'm assuming the PS5 would be set up in a similar way.

If the PS4 can offload OS memory to DDR on the ARM coprocessor/southbridge, why does the PS5 need to have both pools on the APU? If people are talking about a split pool of game memory, that's not what I meant.
 

ethomaz

Banned
Guys, 1080-class is Vega 64 territory.

Vega 64 is 12.6 TFs.

It fits most leaks... I don't see any issue with DF's comment.
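As a sanity check on those TFLOP figures: GCN peak FP32 throughput is just CUs × 64 shaders × 2 ops per clock × clock speed. A quick sketch (the clocks below are the published boost/base clocks, rounded):

```python
def gcn_tflops(cus: int, clock_ghz: float) -> float:
    """Peak FP32 TFLOPs for a GCN GPU: CUs x 64 shaders x 2 ops/clock x clock (GHz)."""
    return cus * 64 * 2 * clock_ghz / 1000

vega64 = gcn_tflops(64, 1.546)  # ~12.66 TFLOPs at boost clock
ps4 = gcn_tflops(18, 0.8)       # ~1.84 TFLOPs
```

The same formula is why CU-count guesses and TFLOP leaks constrain each other: a 48 CU part needs roughly a 1.6 GHz clock to land near 10 TFLOPs.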
 

SonGoku

Member
If they can tolerate and manage the power consumption and cooling when using GDDR6 at similar bandwidth/throughput, then they can save R&D budget and invest it somewhere else.
I think HBM2 is too expensive; if HBM3 really is cheaper, I could see Sony replacing the GDDR6 pool with HBM3 for cost reductions.
So if we take the Vega 20 7nm size... half should be about 170 mm². That tells us that at 255 mm² Navi 10 has way more CUs than 36.
Perhaps Navi 10 is a 48 CU chip.
Does it scale linearly like that? Don't other components besides CUs take around 60% of die space, so adding more CUs wouldn't change area drastically? Besides, Navi CUs are supposed to be smaller, right?
Guys, 1080-class is Vega 64 territory.
Vega 64 is 12.6 TFs.
It fits most leaks.
Die size doesn't fit; a 40CU (or 48CU) GPU would be barely an increment over the 40CU Pro or 44CU X at 7nm.
 

ethomaz

Banned
Does it scale linearly like that? Don't other components besides CUs take around 60% of die space, so adding more CUs wouldn't change area drastically? Besides, Navi CUs are supposed to be smaller, right?
Yeah, it is not linear; plus Vega CUs have FP64 units that are not present on Polaris and probably won't be on Navi 10, making them bigger.

Even so, I believe 48 CUs fits the increase over Polaris' 36 CUs while maintaining a similar size.

It is an estimate after all.
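The non-linear scaling is easy to see with a toy model: die area ≈ a fixed "uncore" block (memory controllers, display and media engines, I/O) plus a per-CU cost, so adding CUs grows the die much more slowly than linearly. The area numbers below are purely illustrative assumptions, not measured figures:

```python
def die_area_mm2(cus: int, uncore_mm2: float = 100.0, cu_mm2: float = 2.5) -> float:
    """Toy die-area model: fixed uncore block plus a per-CU cost (illustrative numbers only)."""
    return uncore_mm2 + cus * cu_mm2

a36 = die_area_mm2(36)  # 190.0 mm²
a48 = die_area_mm2(48)  # 220.0 mm²: 33% more CUs for only ~16% more area
```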
 

SonGoku

Member
Yeah, it is not linear; plus Vega CUs have FP64 units that are not present on Polaris and probably won't be on Navi 10, making them bigger.
Even so, I believe 48 CUs fits the increase over Polaris' 36 CUs while maintaining a similar size.
It is an estimate after all.
Yeah, I doubt this chip is bigger than 48 CUs, but I don't think it's the one that will land in consoles. It makes no sense for a console to push clocks to make up for a low CU count and dramatically increase power consumption when there is plenty of room to fit 72 CUs even on a console-sized APU at 7nm.
 

ethomaz

Banned
Yeah, I doubt this chip is bigger than 48 CUs, but I don't think it's the one that will land in consoles. It makes no sense for a console to push clocks to make up for a low CU count and dramatically increase power consumption when there is plenty of room to fit 72 CUs even on a console-sized APU at 7nm.
Yep, console APUs can have customized numbers of CUs.
BTW, by the time PS5 launches AMD will have Navi 20 with more CUs fighting in the high-end market too.

It is just that Navi this year will have that lower CU count... it is aimed at the mainstream/mid-range after all.
 

SonGoku

Member
Yep, console APUs can have customized numbers of CUs.
BTW, by the time PS5 launches AMD will have Navi 20 with more CUs fighting in the high-end market too.

It is just that Navi this year will have that lower CU count.
Yeah, that's why I said Richard is trolling: not because of the GTX 1080 bit but because of his disingenuous die size estimate for a console.
 

demigod

Member
Prices?

$499 Navi 10 XT
$399 Navi 10 Pro

That was what Sapphire leaked.

The RTX 2070, which is 1080-class performance, is $499 too.

I'm coming back for you if that 2070 ends up at $399. I just can't see AMD pricing a card that's barely better than the 2070, without ray tracing, at $499.
 

Ellery

Member
I'm coming back for you if that 2070 ends up at $399. I just can't see AMD pricing a card that's barely better than the 2070, without ray tracing, at $499.

Very much agree with that. I have absolutely zero brand loyalty and buy whatever I deem to be the best value at the time, but a GTX 1080-performing RX 5700 AMD Navi card for $499 would be DOA in my opinion.
Obviously I don't have all the information and actual specifications, including VRAM, overclockability, features and power draw, but why would I consider a $499 card that performs around 2070/1080/Vega 56/64 levels when I could instead go with:

- RTX 2070 for the same price (actually 30€ cheaper), which has ray tracing cores
- Vega 56, which is sub-$300 and can go as low as 230€
- Vega 64, which is sub-$400
- A used GTX 1080 for like $300

But maybe those prices are wrong and the RX 5700 comes in with 16GB GDDR6 at $379 for RTX 2070 performance; then it would be a much more attractive buy.
 

SonGoku

Member
What's disingenuous about it?
The chip he is talking about is likely 48 CUs or fewer. There's no way consoles would use such a small chip at 7nm. They'll get a much better perf/watt ratio using a bigger chip.
For reference on 16nm:
Pro: 40 CUs
X: 44 CUs
But 320-360mm² is roughly what Xbox One, Xbox One X, PS4 and PS4 Pro fall into, isn't it?
Correct, but die size won't double or anything by adding more CUs; other components take more space. Theoretically you could fit 90 CUs on a 350mm² APU, NOT that I expect that much.
The most likely scenario is 64 CUs with 8 disabled (56 enabled), although 72 CUs with 8 disabled (64 enabled) wouldn't be unrealistic either, assuming RDNA breaks the 64 CU limit.
 
Radeon VII has by far the highest bandwidth ever for a GPU (1 TB/s). That barely granted any performance boost over Vega 64, which has half the bandwidth.

If you’re counting on memory bandwidth on consoles to magically improve Navi’s performance then you are going to be disappointed.
True, but that is still the Vega architecture with HBM2, isn't it? HBM2 is still slower than GDDR6 from what I've been reading. HBM3 coupled with Navi is where we will see some performance gains at a lower cost. But I don't have much experience speaking to the hardware side of things; this is just what I'm gathering.
 

SonGoku

Member
HBM3 coupled with NAVI is where we will see some performance gains at a lower cost.
I doubt we'll see consoles going with any type of HBM memory come launch.
HBM2 is still slower than GDDR6 from what I've been reading.
It's not.
HBM2 is more expensive, though, so you get less memory for the same price compared to GDDR6.
Less memory = fewer stacks = smaller bus = lower bandwidth.
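To put numbers on the stacks → bus → bandwidth chain: bandwidth = bus width × data rate per pin ÷ 8, and each HBM2 stack contributes a 1024-bit bus. A quick sketch with published figures:

```python
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Memory bandwidth in GB/s: bus width (bits) x data rate per pin (Gbps) / 8 bits per byte."""
    return bus_bits * gbps_per_pin / 8

radeon_vii = bandwidth_gbs(4 * 1024, 2.0)  # 4 HBM2 stacks: 1024 GB/s (~1 TB/s)
vega_64 = bandwidth_gbs(2 * 1024, 1.89)    # 2 HBM2 stacks: ~484 GB/s
gddr6_256bit = bandwidth_gbs(256, 14.0)    # e.g. a 256-bit GDDR6 card: 448 GB/s
```

Halving the stack count halves the bus, so a cheaper HBM configuration loses bandwidth in lockstep: exactly the trade-off described above.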
 

McHuj

Member
The chip he is talking about is likely 48 CUs or fewer. There's no way consoles would use such a small chip at 7nm. They'll get a much better perf/watt ratio using a bigger chip.
For reference on 16nm:
Pro: 40 CUs
X: 44 CUs

Correct, but die size won't double or anything by adding more CUs; other components take more space. Theoretically you could fit 90 CUs on a 350mm² APU, NOT that I expect that much.
The most likely scenario is 64 CUs with 8 disabled (56 enabled), although 72 CUs with 8 disabled (64 enabled) wouldn't be unrealistic either, assuming RDNA breaks the 64 CU limit.

You really have no clue what you're talking about, do you?
 

shark sandwich

tenuously links anime, pedophile and incels
True, but that is still the Vega architecture with HBM2, isn't it? HBM2 is still slower than GDDR6 from what I've been reading. HBM3 coupled with Navi is where we will see some performance gains at a lower cost. But I don't have much experience speaking to the hardware side of things; this is just what I'm gathering.
The HBM2 that Radeon VII uses is by far the fastest memory configuration of any consumer GPU to date, and is almost certainly faster than what the next gen consoles will have.

I don’t know where you’re getting the idea that HBM3 is going to magically boost Navi performance through the roof while simultaneously lowering cost. Definitely gonna need a citation on that.
 

SonGoku

Member
I'll give a basic summary of the article:
It says that they have been given information that a number of Navi parts are GCN based while others, like many of the ones AMD has decided to highlight, are RDNA.
Apparently there are a number of reasons why but the one they cite in the article is that modern games and engines have been optimized for GCN and they don't want to lose that and have to start from scratch.
And it is not until Navi 20 is released early next year that they will release "pure" RDNA to market and challenge NVIDIA in the highest performance range.

SweClockers is a long-running site, and they used to be really well respected and trusted back in the day, but I haven't kept up with them and don't know what their current reputation is like, so I can't vouch for them.
Interesting! So in layman's terms, the first batch of Navi chips is the potato edition. It would also explain the previous driver leaks with GCN traces.
Makes sense; AMD is waiting for next-gen PS5/SNEK games designed around the new arch, to launch pure RDNA big Navi alongside them.
 