
Latest speculation from "known sources" points to Navi (the GPU in next-gen consoles) having issues.

SonGoku

Member
PS4 Pro released the same year as Polaris 10 and had ~155W average total system consumption for $399. MS released Xbox One X a year later and it got them another ~20W on the TDP budget. In the end they still have a card that's basically just AMD's 2016 $250/150W mid-range part with higher memory bandwidth and more cache.

I just wanted to point out that if you're going by the AdoredTV leaks, the RX 3080 isn't a 150W card any longer; it's 175W and supposedly worse. Something from the 160W tier with power capping and voltage tuning, or from the 130W tier, would be more appropriate. AdoredTV could be bullshit, but it at least gives us a framework to talk about next-gen console TDP and PC pricing tiers.

P.S. - My max prediction (48 CUs at a 1.8GHz peak core clock) isn't very far off your 12ish TF prediction. Anywhere near GTX 1080 power in a console will be awesome.
That's what I'm saying though: a 3080 XT (going by AdoredTV's latest video) with clocks fine-tuned to hit a performance-per-watt sweet spot can reach ~12.5TF at 150W real draw. Another factor influencing power consumption is yields and manufacturing refinements, which could both improve by the time PS5 enters mass production, bringing further power reductions.
I'm talking about the 190W 3080XT, btw.

So realistically we can get ~12.5TF in a 200W box for $500. The console would have to be slightly bigger than the X, but that console is almost as small as the slim anyway, so no issue there.
A 130W GPU (~10-11TF) would be more appropriate if they were trying to hit $399.

PS: I think a 56CU part (60 CU with 4 disabled) will hit a better performance per watt sweet spot.
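For reference, here's the napkin math behind those TF figures. It assumes Navi keeps GCN's layout of 64 shaders per CU doing 2 FLOPs per clock, which is an assumption since nothing about the architecture is confirmed:

```python
# Napkin math: single-precision TFLOPS for a GCN/Navi-style GPU,
# assuming 64 shaders per CU and 2 FLOPs per clock (fused multiply-add).
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000.0

print(tflops(48, 1.80))  # ~11.1 TF - 48 CUs pushed to a 1.8 GHz peak clock
print(tflops(56, 1.55))  # ~11.1 TF - 56 CUs at a gentler clock, same throughput
print(tflops(56, 1.75))  # ~12.5 TF - 56 CUs tuned a bit higher
```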
 

CrustyBritches

Gold Member
Another factor influencing power consumption is yields and manufacturing refinements, which could both improve by the time PS5 enters mass production, bringing further power reductions.

So realistically we can get ~12.5TF in a 200W box for $500. The console would have to be slightly bigger than the X, but that console is almost as small as the slim anyway, so no issue there.

A 130W GPU (~10-11TF) would be more appropriate if they were trying to hit $399.
I guess I'm just conditioned from experience to expect AMD parts to run hotter and hungrier than advertised. Compared to Polaris 10 and PS4 Pro in 2016, a 2019 $399 console would be ~155W total. A 2020 $499 console would be ~172W total and get the previous year's 150W GPU. Everything aside from the GPU is pulling maybe 20-30W, which leaves 140-150W for the GPU.

I'm expecting something compact, quiet, and power efficient from PS5. More Xbox One X than PS3 phat. My guess for Vega 64-level performance on 7nm Navi was ~167W. The RX 3070XT (48 CUs) fits the bill nicely. Consoles need 150W, and that's within a decent margin for underclocking and tuning. It's still on the high side for Gonzalo's clocks, but they could power cap it and advertise that speed while the real sustained clock sits lower. The memory subsystem is what needs work on AMD cards anyway.

Admittedly I'm pulling CU counts and power consumption from AdoredTV and core clocks from Gonzalo, so if either source is wrong my expectations could shift. As of right now, if Navi is power hungry, maybe 44 CUs. If it's decent, then I'd hope for 48 CUs. So 10-11TF based on Navi's performance, which is just fucking ludicrously powerful for a tiny box under 200W.

P.S.-My high and your low aren't too far off.
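If it helps, here's that budget math spelled out; the 20-30W non-GPU figure is my assumption for CPU, RAM, storage, fans and PSU losses, not a known number:

```python
# Subtract an assumed 20-30W of non-GPU draw from the rumoured totals
# to see roughly what's left for the GPU itself.
for total_w, label in [(155, "$399-style box"), (172, "$499-style box")]:
    print(f"{label}: {total_w - 30}-{total_w - 20}W left for the GPU")
# -> $399-style box: 125-135W, $499-style box: 142-152W
```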
 

SonGoku

Member
I guess I'm just conditioned from experience to expect AMD parts to run hotter and hungrier than advertised. Compared to Polaris 10 and PS4 Pro in 2016, a 2019 $399 console would be ~155W total. A 2020 $499 console would be ~172W total and get the previous year's 150W GPU. Everything aside from the GPU is pulling maybe 20-30W, which leaves 140-150W for the GPU.

I'm expecting something compact, quiet, and power efficient from PS5. More Xbox One X than PS3 phat. My guess for Vega 64-level performance on 7nm Navi was ~167W. The RX 3070XT (48 CUs) fits the bill nicely. Consoles need 150W, and that's within a decent margin for underclocking and tuning. It's still on the high side for Gonzalo's clocks, but they could power cap it and advertise that speed while the real sustained clock sits lower. The memory subsystem is what needs work on AMD cards anyway.

Admittedly I'm pulling CU counts and power consumption from AdoredTV and core clocks from Gonzalo, so if either source is wrong my expectations could shift. As of right now, if Navi is power hungry, maybe 44 CUs. If it's decent, then I'd hope for 48 CUs. So 10-11TF based on Navi's performance, which is just fucking ludicrously powerful for a tiny box under 200W.

P.S.-My high and your low aren't too far off.
PS5 doesn't need to sacrifice power for size; launch consoles tend to be big. The launch PS4 is bigger than the X. They can target that sweet spot of size and power.
May I remind you I'm going with the worst-case scenario for Navi, with smaller power reductions than what the shift to 7nm allows. Things might even turn out better and the rumored 13TF becomes a reality, but for now I'm working under the assumption that the latest video is right.

A 56 CU card (60 with 4 disabled) with clocks fine-tuned to hit a performance sweet spot can deliver ~12.5TF at 150W, maybe a bit less. I'm taking AdoredTV's latest video into account for power consumption. Like I commented earlier, I think a 56 CU card will deliver better performance per watt.

This is all doable for $500; if the launch price is $400 then I will lean more towards 11TF.

PS: The reason AMD cards are so power hungry is diminishing returns, with AMD pushing cards well beyond their efficiency sweet spot.
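To illustrate why wider-and-slower tends to win on performance per watt, here's a toy comparison that treats dynamic power as roughly proportional to CU count times frequency times voltage squared; the voltages are made-up illustration values, not leaked figures:

```python
# Toy perf/W comparison: a narrow, highly clocked chip vs. a wide, slower one.
def tflops(cus, ghz):
    return cus * 64 * 2 * ghz / 1000.0

def rel_power(cus, ghz, volts):
    # Dynamic power assumed roughly proportional to active units * f * V^2.
    return cus * ghz * volts ** 2

configs = [("48 CU @ 1.80 GHz", 48, 1.80, 1.05),
           ("56 CU @ 1.55 GHz", 56, 1.55, 0.95)]
base = rel_power(*configs[0][1:])
for name, cus, ghz, v in configs:
    print(f"{name}: {tflops(cus, ghz):.1f} TF, power {rel_power(cus, ghz, v) / base:.2f}x")
# Same ~11 TF either way, but the wider part needs roughly 18% less power here.
```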
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
In short, do not hardcode a specific algorithm on a chip; focus on what raytracing in general may need to run efficiently, and on how to maximise overall chip throughput.
We had GPGPU raytracing before Turing introduced RT cores -- the latter advanced RT by an order of magnitude. I get that people would generally prefer programmability over this or that hardcoded feature, but we eventually have to face the reality of physics. Not only do we need hw-accelerated RT, we need that acceleration to advance by another order of magnitude for the next step in visual fidelity.
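For anyone wondering what the "hardcoded" part actually looks like, here's a toy sketch of one building block -- the ray vs. axis-aligned box slab test used while walking a BVH. This is a generic textbook version, not how Turing implements it; the point is just that it's a small, fixed operation executed millions of times per frame, which is exactly what dedicated units are good at:

```python
# Generic ray vs. axis-aligned bounding-box "slab" test, the kind of fixed
# operation BVH traversal hardware runs over and over instead of shader code.
def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    tmin, tmax = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

# Ray from the origin along +Z against a unit box centred at (0, 0, 5);
# 1e30 stands in for 1/0 on the axes the ray doesn't move along.
print(ray_hits_aabb((0, 0, 0), (1e30, 1e30, 1.0), (-0.5, -0.5, 4.5), (0.5, 0.5, 5.5)))  # True
```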
 

Panajev2001a

GAF's Pleasant Genius
We had GPGPU raytracing before Turing introduced RT cores -- the latter advanced RT by an order of magnitude. I get that people would generally prefer programmability over this or that hardcoded feature, but we eventually have to face the reality of physics. Not only do we need hw-accelerated RT, we need that acceleration to advance by another order of magnitude for the next step in visual fidelity.

I am not saying no to any fixed-function logic -- we still have it for rasterisation/texture filtering/etc. -- but can we minimise the logic needed, see if we can re-use it for other purposes too (increasing reusability), and kind of lead to stronger (re)unified shaders?
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
I am not saying no to any fixed-function logic -- we still have it for rasterisation/texture filtering/etc. -- but can we minimise the logic needed, see if we can re-use it for other purposes too (increasing reusability), and kind of lead to stronger (re)unified shaders?
The great divide in power efficiency in computing has always been programmable logic vs purpose-built logic (i.e. ASICs). Unfortunately we're at a stage where we are at the end of our wits WRT improving the efficiency of programmable units in quite a few areas. We'll be seeing more and more transitions to ASICs in areas where (1) there are established algorithms, and (2) an ever-insatiable need for more throughput. You might recall how CNN computing evolved -- from CPUs, to GPGPUs, to TPUs; while GPGPUs are still kings for CNN training, they are being phased out for inference, just because inference is too important a task today, with too high demands for throughput (and to some degree latency), for people to be willing to pay the price of programmability. Ergo the TPU succeeding the GPGPU for inference as we speak. The situation with raytracing is not too different, at least when it comes to the 'tracing' part -- shaders are another matter.
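A toy example of what "established algorithm" means here: nearly all the work in CNN inference collapses into the same multiply-accumulate pattern below, which is precisely what a TPU-style systolic array hardwires. Shapes and values are made up for illustration:

```python
# Toy dense layer as a plain matrix multiply; the inner multiply-accumulate
# is the operation an inference ASIC bakes into a systolic array.
def matmul(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for k in range(inner):
            aik = a[i][k]
            for j in range(cols):
                out[i][j] += aik * b[k][j]  # the MAC that gets hardwired
    return out

activations = [[1.0, 2.0], [3.0, 4.0]]    # toy input batch
weights = [[0.5, -1.0], [0.25, 2.0]]      # toy trained weights, fixed at inference time
print(matmul(activations, weights))       # [[1.0, 3.0], [2.5, 5.0]]
```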
 

thelastword

Banned
If you're aware that it's an unhealthy attitude, why not change it?

Even when AMD is ahead, they are seen as being behind because... reasons. AMD has had good products, and still does. I challenge you to find a better deal in their price range right now that is superior to the RX 570 or the Vega 56 (referring to US prices here...). People hate AMD when they can't keep up. Sure, they like AMD when they do well, but they like it because nVidia is forced to lower prices, and then they go out and buy nVidia anyway... And that is exactly what propels the gaming industry to where it is today: the exploitation of people who will pay more and more for less and less. People trashed the Radeon VII for its price, but the only reason that card exists in the first place is because nVidia's RTX prices allow it. But no one blames nVidia for it. Sure, they trash RTX, but they don't say that it's nVidia's fault that the Radeon VII has that price. But it definitely is AMD's fault that the RTX 2080 Ti is $1200+ 😵

While I understand your analogy... Lexus is not charging more and more for every new car generation simply because Toyota cannot reach 0-60 mph in only 3.6 seconds. In fact, for the majority of people, 0-60 mph in 4.6 seconds is still overkill. Not that that is helping Lexus sell any more cars, though, because the majority of car buyers are generally sensible enough to buy something for their actual use, unlike gamers, who are more worried about the brand name than about what is actually best for them, both in the short and the long term.

Just remember that your money is a vote for how you want things in the world to be. People have voted that slower cards for more money is fine for multiple generations, and so, here we are.

I don't have hope anymore that this will change. That includes Navi. Because if the RX 570 can't change anything right now, what makes one think that anything else AMD puts out can? At this point they need a $500 card with RTX 2080 Ti-level performance, and even then, I have my doubts that people would buy it.



Oh..... What about this? Example comments:

"1080 Ti here using 419.35..running at 1440p 144hz monitor with gsync on..game does hit 60 but randomly drops down to 30 or lower when fighting enemies. Kinda hard to deflect when the whole game stutters lol. "
"Same, 2080ti with 6core cpu still getting 20-30fps, unplayable even in the lowest possible setting. Tried every single thing on the web and none of them works. Trash. "

https://steamcommunity.com/app/814380/discussions/0/1850323802572206287/

In other words... Anecdotal evidence does not prove that somehow AMD has inferior drivers to nVidia.

No one should buy a 1650, 1050 Ti, 1050, 970, or 1060 3GB over the card in the link below (AMD's RX 570), at $129.95...

https://www.amazon.com/dp/B06ZYRRW9T/?tag=neogaf0e-20

The 8GB version is only a few bucks more, and don't forget you get two of the latest games with the purchase...

---------------
No one should buy a GTX 1070, 1070 Ti, 1660 Ti, or RTX 2060 over the card in the link below (the Sapphire Pulse Vega 56), at $299.99:

https://www.amazon.com/dp/B07BGZNTQJ/?tag=neogaf0e-20

In the latest games and on the latest APIs, Vega 56 is knocking on the GTX 1080's door... A card which was supposed to be a GTX 1070 competitor... You'll notice in Crytek's latest interview on raytracing they keep comparing the Vega 56 to a GTX 1080... With Vega 56 it's simple: you undervolt the card, watch the clocks stabilize with lower temperatures, and enjoy higher performance... Most of the time when you see benchmarks, even when Vega wins in the latest titles, there's still lots of performance left on the table... Benchmark guys never do much with Vega; they keep it at stock, and they always run NVidia cards that come massively OC'd out of the box...
 

The Skull

Member
Benchmark guys never do much with Vega

That's true. I always see the clocks (especially the Vega 64's) sitting in the low 1500s and the HBM left untouched. I have mine at 1650 (around 1620-1640 in-game clock speed) and 1100 on the HBM, with a good undervolt. The max power draw I've seen is between 200-215W.

Although I did enjoy Gamers Nexus insanely overclocking the Vega 56 to beat the 2070, I think it was.
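For what that 1100 MHz HBM clock is worth in raw bandwidth, assuming Vega 10's 2048-bit HBM2 interface (800 MHz being the Vega 56 stock memory clock):

```python
# HBM2 bandwidth = memory clock * 2 (double data rate) * bus width / 8, in GB/s.
def hbm2_bandwidth_gbs(mem_clock_mhz, bus_width_bits=2048):
    return mem_clock_mhz * 2 * bus_width_bits / 8 / 1000

print(hbm2_bandwidth_gbs(800))   # ~410 GB/s at the Vega 56 stock memory clock
print(hbm2_bandwidth_gbs(1100))  # ~563 GB/s with the 1100 MHz overclock above
```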
 