
Alleged AMD Next-Gen Flagship Navi GPU Specifications Leaked – 5120 Cores, 24 GB HBM2e Memory, 2 TB/s Bandwidth



An unreleased GPU allegedly based on AMD's next-generation 'Radeon RX' Navi GPU architecture has been leaked by Twitter user CyberPunkCat, who claims to have sources at SK Hynix who provided him with an info sheet listing the specifications for this new 2020 flagship design.

AMD's Next-Gen, Flagship 'Big Navi' GPU Allegedly Leaks Out - 80 CUs, 5120 Cores, 24 GB HBM2E Memory & 2 TB/s Bandwidth
We know from previous reports that AMD has already promised us both a next-generation 7nm+ Navi lineup and a 7nm refresh lineup of Radeon RX graphics cards in 2020. The 7nm refresh lineup would replace the existing graphics cards with a more fine-tuned design, while enthusiasts will really be looking forward to the 7nm+ lineup of GPUs, which reportedly includes the much-awaited 'Big Navi' chip.
Q: Lisa, can you give us some idea of what new GPUs you're expecting to launch for the rest of 2020, for PCs and for the data center?
LS: Yes. In 2019, we launched our new architecture in GPUs, it's the RDNA architecture, and that was the Navi based products. You should expect that those will be refreshed in 2020 - and we'll have a next-generation RDNA architecture that will be part of our 2020 lineup. So we're pretty excited about that, and we'll talk more about that at our financial analyst day. On the data center GPU side, you should also expect that we'll have some new products in the second half of this year.
One of the key claims about the Big Navi Radeon RX GPU is that it is going to disrupt the 4K gaming segment, similar to how Ryzen disrupted the entire CPU segment. These are some bold claims by AMD, but if these leaked specifications are anything to go by, then they may not be that far-fetched.

“With the Radeon 5000-series we are essentially covering 90-something-percent of the total PC gamers today,” says Chandrasekhar. “And so that’s the reason why no 4K right now, it’s because the vast majority of them are at 1440p and 1080p.
“That doesn’t mean a 4K-capable GPU isn’t coming, it is coming, but for here and now we want to focus on the vast majority of gamers.”
“Similar to Ryzen,” he says, “all of us need a thriving Radeon GPU ecosystem. So, are we going after 4K, and going to similarly disrupt 4K? Absolutely, you can count on that. But that’s all I can say right now.”
- PCGamesN


AMD's Big Navi 'Radeon RX' Enthusiast GPU Alleged Specifications
Coming to the juicy bits: the specifications leaked in a document specifically sourced back to SK Hynix. We can't say how legitimate this document is, but it looks pretty credible at first sight and the specifications do seem plausible. The document mentions the specific GPU config with its codename 'D32310/15'. There's no way to tell whether this is Navi or something else, but I will get to that in a bit. The first question that arises is: how do we know that this is indeed an AMD GPU?

If you go by our recent report, an AMD GPU with a similar codename just passed RRA certification this month. Based on the evidence my colleague had collected, it sure does look like a new flagship Navi GPU and one that might end up being faster than all of AMD's Radeon RX gaming graphics cards released to date.

Surprisingly, the document lists the complete GPU and memory configurations. For the GPU itself, we are looking at 5120 stream processors packed into 80 Compute Units. The GPU consists of a total of 320 TMUs and 96 ROPs. The L2 cache on the chip sits at a mammoth 12 MB. That's 2x the cache of NVIDIA's flagship Tesla V100 GPU and Titan RTX, and 3x the cache of AMD's own Radeon RX 5700 XT, which is based on the Navi 10 GPU. No clock speeds are mentioned for this part, but the memory configuration is listed.
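As a quick sanity check, the leaked numbers are at least self-consistent with RDNA's layout of 64 stream processors per Compute Unit, and the cache comparisons match the public L2 sizes of Navi 10 and GV100. A minimal sketch:

```python
# Napkin math on the leaked 'D32310/15' configuration.
cus = 80
sps_per_cu = 64              # RDNA packs 64 stream processors per CU
stream_processors = cus * sps_per_cu

l2_big_navi_mb = 12          # leaked L2 cache size
l2_navi10_mb = 4             # Radeon RX 5700 XT (Navi 10)
l2_gv100_mb = 6              # Tesla V100 / Titan RTX class

print(stream_processors)                 # 5120, matching the leak
print(l2_big_navi_mb / l2_gv100_mb)      # 2.0 (the "2x" claim)
print(l2_big_navi_mb / l2_navi10_mb)     # 3.0 (the "3x" claim)
```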

In terms of memory, we are looking at 24 GB of high-performance HBM2e. The GPU would feature a 4096-bit bus interface and reportedly up to 2 terabytes per second of bandwidth. The new HBM2e standard was unveiled a while back, with both SK Hynix and Samsung accelerating production this year since new HPC parts are expected from both NVIDIA and AMD. Interestingly, if this leak holds any truth, then it looks like AMD is going back to an HBM2-based design on its flagship gaming graphics cards.
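The 2 TB/s figure checks out against the listed bus width, assuming a per-pin speed at the top of the HBM2e range (the 4 Gb/s rate below is an assumption based on SK Hynix's announced parts, not something stated in the document):

```python
# Peak bandwidth = bus width (bits) x per-pin data rate (Gb/s) / 8 bits-per-byte
bus_width_bits = 4096        # leaked interface width (four HBM2e stacks)
pin_rate_gbps = 4.0          # assumed HBM2e per-pin data rate

bandwidth_gb_s = bus_width_bits * pin_rate_gbps / 8
print(bandwidth_gb_s)        # 2048.0 GB/s, i.e. ~2 TB/s
```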

Recently, AMD made the switch to GDDR6 across its entire Navi portfolio, but there have been no high-end graphics cards in AMD's lineup since the Radeon VII, which did feature an HBM2 design. Bear in mind that every AMD flagship since the Radeon R9 Fury series has featured HBM/HBM2 memory, so it wouldn't be a surprise to see flagship Navi do the same.

However, there are some things we have to consider here. While 24 GB on a 4096-bit bus is possible with 12-hi stacks, AMD revealed some design hindrances with 12-hi stacks back in August 2019, so it is possible that they have changed plans and are instead pushing the limits of what HBM2e has to offer.

Although vendors can stack HBM up to 12-hi, AMD’s Macri believes all vendors will keep to 8-hi stacks. “There are capacitive and resistance limits as you go up the stack. There’s a point you hit where, in order to keep the frequency high, you add another stack of vias. That creates extra area in the design. We’re trying to keep density and cost in balance,” he said (via Semi Engineering).
Just to point out, SK Hynix does indeed have 4 Gb/s HBM2e stacks, which it showcased last week at ISSCC 2020, as noted in a reply to the leaker by Twitter user Hans de Vries.

That would be Big Navi if true. Hynix showed a 4Gb/s HBM2e stack last week at ISSCC-2020.
That's 512 GB/s for the die stack which would be 3 high in this case.
Samsung announced an even faster 5Gb/s HBM2e (640GB/s) and an 8.5Gb/s LPDDR5 sdram. pic.twitter.com/bw6WmLdzSn
— Hans de Vries (@HansDeVriesNL) February 24, 2020
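De Vries' per-stack numbers can be reproduced the same way; each HBM2e stack exposes a 1024-bit interface, so four stacks make up the 4096-bit bus. The 16 Gb die capacity below is an assumption about which HBM2e dies would be used:

```python
pins_per_stack = 1024                      # per-stack HBM interface width
pin_rate_gbps = 4.0                        # SK Hynix ISSCC-2020 figure
per_stack_gb_s = pins_per_stack * pin_rate_gbps / 8

# 24 GB across 4 stacks = 6 GB per stack; with 16 Gb (2 GB) dies
# that's a 3-hi stack, matching the "3 high" in the tweet.
stacks = 4
total_gb = 24
die_capacity_gb = 2                        # assumed 16 Gb HBM2e dies
dies_per_stack = (total_gb // stacks) // die_capacity_gb

print(per_stack_gb_s, dies_per_stack)      # 512.0 GB/s per stack, 3-hi
```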


This would be a similar move to how NVIDIA sourced the best GDDR5X chips for its Pascal lineup and the best GDDR6 chips for its Turing lineup, thanks to its close ties with the memory manufacturers. AMD, being one of the leading partners in HBM development, could do the same and source the best HBM2e dies for its own next-generation flagship gaming GPUs.

To conclude: while all of this seems perfectly plausible, it would end up making the flagship Navi Radeon RX product a very expensive graphics card. We might even be talking about a price close to $999 US, which is pretty much the norm for enthusiast-grade graphics cards these days. We have already seen an unreleased Radeon RX GPU that is up to 17% faster than the RTX 2080 Ti in very early benchmarks, and various reports claim up to a 2x performance increase over the Radeon RX 5700 XT for the flagship Navi part. One thing is certain though: AMD and NVIDIA are both going to push the boundaries of GPU performance in 2020, bringing new levels of graphical compute and horsepower unlike anything we have seen before with their latest lines of next-gen graphics cards.
https://cdn.wccftech.com/wp-content...-Graphics-Card-Flagship-Enthusiast-Design.jpg
 

Shin

Banned
I happened to read this and the article below a few minutes ago on Notebookcheck, so I'm posting it.
The memory amount is crazy, but expected since consoles will raise the bar for what's considered the minimum.
Looking forward to seeing the next wave of top-of-the-line GPUs clash; around Q2/Q3 I'll most likely start building my new desktop.

 

ZywyPL

Gold Member
Meh, not only expensive and highly unavailable HBM all over again, but a whole useless 24GB of it, I guess AMD will never learn? Just slap 16GB GDDR6 and call it a day, how hard can it be?

I happened to read this and the article below a few minutes ago on Notebookcheck, so I'm posting it.
The memory amount is crazy, but expected since consoles will raise the bar for what's considered the minimum.
Looking forward to seeing the next wave of top-of-the-line GPUs clash; around Q2/Q3 I'll most likely start building my new desktop.


I really doubt the rumored 6144 SPs; the 2080 Ti already runs games in 4K at what, 100-150 FPS? So there are like 10 people on the planet with displays that can actually fully utilize those GPUs, so what would be the logic behind putting in even more cores? If anything, the RT performance needs to be seriously stepped up, because it has like just a quarter of the performance of rasterization rendering.
 

Shin

Banned
Meh, not only expensive and highly unavailable HBM all over again, but a whole useless 24GB of it, I guess AMD will never learn? Just slap 16GB GDDR6 and call it a day, how hard can it be?
I agree to an extent, though GDDR6 operates at 1.35V I think, HBM2 or whatever at 1.2V, and then there's the bandwidth difference.
Considering that AMD has/had issues with power draw and by extension heat, it would make sense (assuming that's partially the reason).
 
If XSX eats monsters, this thing is eating the Devil's corpse. Crazy power!
Next gen GPU's gonna be very interesting, Nvidia MCM etc...
At 1500 MHz that's around 15.36 TFLOPs
At 1755 MHz (the 5700 XT's gaming clock) that's around 17.97 TFLOPs
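That napkin math is just the standard FP32 throughput formula: stream processors x 2 ops per clock (one fused multiply-add) x clock speed. A quick sketch:

```python
def fp32_tflops(stream_processors, clock_mhz):
    # 2 FP32 operations per stream processor per clock (FMA)
    return stream_processors * 2 * clock_mhz * 1e6 / 1e12

print(round(fp32_tflops(5120, 1500), 2))   # 15.36
print(round(fp32_tflops(5120, 1755), 2))   # 17.97
```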
RDNA2 is supposed to improve power consumption.
Meh, not only expensive and highly unavailable HBM all over again, but a whole useless 24GB of it, I guess AMD will never learn? Just slap 16GB GDDR6 and call it a day, how hard can it be?
It will obviously have a cheaper GDDR6 variant.
 

Irobot82

Member
Next gen GPU's gonna be very interesting, Nvidia MCM etc...

RDNA2 is supposed to improve power consumption.

It will obviously have a cheaper GDDR6 variant.

Power consumption, as long as it's not out of control, matters little to none to me. I was just doing some napkin math to see what kind of TFLOPs we should expect from the new card.
 

Thaedolus

Gold Member
Sounds like I'll be looking to replace my 1080 Ti sooner rather than later... sucks that my monitor is an OG G-Sync-only one though, which makes a possible jump to AMD sound less appealing.
 

TaySan

Banned
I will believe it when I see it. AMD does this with every one of their GPUs and they always disappoint. Maybe they will turn things around like they did with Ryzen.
 
Meh, not only expensive and highly unavailable HBM all over again, but a whole useless 24GB of it, I guess AMD will never learn? Just slap 16GB GDDR6 and call it a day, how hard can it be?

AMD is forced to use HBM because their GPUs draw way too much power; HBM draws much less power than GDDR, and that is why Nvidia can use GDDR for top-end parts and AMD can't. This power draw difference will become much starker when Ampere arrives on a 7nm process node. Today's reminder: Nvidia has been at 14/12nm for the past 4 years while AMD has been at 7nm for more than a year and still cannot catch up with Nvidia's power efficiency.

Also you can ask any 5700XT owner what AMD's drivers for RDNA are like. They are BAD. When people are openly complaining about the state of drivers at r/AMD you know it's awful.
 
Next gen GPU's gonna be very interesting, Nvidia MCM etc...

RDNA2 is supposed to improve power consumption.

It will obviously have a cheaper GDDR6 variant.
At WCCF they say this Navi card is supposed to be a 275W GPU!!! WTF how is that even possible. If that's the case then the XSX will be <190W. That's absurd.
 
At WCCF they say this Navi card is supposed to be a 275W GPU!!! WTF how is that even possible. If that's the case then the XSX will be <190W. That's absurd.
275W isn't some arbitrary number. It is a practical limit on power draw, because that's about how much you can safely pull from the PCI-e slot plus 2x 6-pin PCI-e connectors. Neither AMD nor Nvidia is crazy enough to release a video card that requires 3 or more 6-pin PCI-e connectors, likely because, as a practical limitation, they also need to design reference coolers that can dissipate 275W of heat without resorting to pure water cooling for a reference card, which would of course be absurd.

Actually I take that back, AMD once made the Radeon R9 Fury X and that was a disaster. Afterwards AMD never fucked around with AIO water cooling reference designs ever again.
 

Ascend

Member
AMD is forced to use HBM because their GPU's draw way too much power, HBM draws much less power than GDDR and that is why Nvidia can use GDDR and AMD can't for top-end parts. This power draw difference will become much more stark when Ampere arrives at 7nm process node, today's reminder that Nvidia has been at 14/12nm for the past 4 years and AMD has been at 7nm more than a year and still cannot catch up with Nvidia's power efficiency.

Also you can ask any 5700XT owner what AMD's drivers for RDNA are like. They are BAD. When people are openly complaining about the state of drivers at r/AMD you know it's awful.
You never pass up an opportunity to trash AMD, do you?

AMD was never 'forced' to use HBM. They chose to use it because they co-developed it to bypass bandwidth limitations. The power efficiency is a bonus. And it's no secret that the RDNA cards are currently bandwidth-starved. Considering the price of GDDR6 now, HBM will make even more sense.

AMD's cards being more power-hungry out of the box is true. But that is only because AMD overvolts their cards more than necessary, for whatever reason. It's why their cards undervolt a lot better than nVidia's cards, and when you do that, the power difference isn't really that big. Aside from that, GCN was really pushed past its frequency limit. RDNA, and especially RDNA2, are completely different beasts in that regard. Assigning the characteristics of the GCN architecture to RDNA is simply short-sighted and biased. The node argument is also short-sighted. Nodes develop over time and become more efficient. How do you think Intel did it over the years? Their 10nm still can't really match their 14nm. And even though TSMC's 7nm is generally better than 14nm, if it really were so much better, nVidia would have already jumped ship. But no one ever asks themselves why nVidia doesn't. nVidia is always later to newer nodes than AMD.

People complain about nVidia drivers on nVidia's forums all the time. Those barely get headlines. It's simply popular to hate on AMD. The latest driver thing seems more like a smear campaign than anything else. But whatever.

Anyway, this card, if these rumors are true, will be a beast that will definitely be faster than a 2080Ti.
 

CuNi

Member
With the number of games that support RTX, and even what can be expected in the coming years, I'm more interested in which card can deliver more 1080p non-RTX performance. That 240Hz screen wants to be properly fed.
 

ZywyPL

Gold Member
It will obviously have a cheaper GDDR6 variant.

Yeah, "obviously", just like Fury, Vega and RVII had. Oh wait...

AMD was never 'forced' to use HBM. They chose to use it because, they co-developed it to bypass bandwidth limitations. The power efficiency is a bonus. And it's no secret that the RDNA cards are currently bandwidth starved. And considering the price of GDDR6 now, HBM will make even more sense.

The thing is, their cards are pretty efficient up until they add HBM to them. Really, they scale so damn well, basically linearly, from 512-core low-power/entry-level models up to 2560-core gaming models, and then they add just a few more cores but change the memory to HBM, and all of a sudden the efficiency dies... Obviously the memory itself isn't the issue here; it's much better than any revision of GDDR. So the question is: what the hell is going wrong? We could say it's the memory controller, OK, but they had HBCC with Vega, which IMO should become a standard feature of every new card, especially in the upcoming consoles. So that's not the case either; it didn't help at all.

But the bottom line is, as just the end consumer, I really don't care what the issue is; that's their problem, not mine. If they can't provide me a product that works as expected, and I mean every single time, not just in some single-case scenarios like Battlefield, Doom or Forza, then I'll go to someone else who can. For me, the inconsistency in performance is what currently puts me off AMD's products the most. I could live with all the other issues, but if I'm dropping a couple hundred dollars on a GPU or CPU, I have every right to demand that it work equally well across game engines, APIs and so on.

So, all that being said, when looking at those specs I simply cannot fight the feeling that we're looking at a $1000-1200 card (if not more) that in half of the cases will perform like a 1080/1080 Ti, i.e. 2-3-year-old, previous-gen NV cards, like AMD GPUs always do...
 

Sosokrates

Gold Member
Seems like a weird naming scheme.

The Navi cards have round numbers (5500, 5600, 5700 so far), so it would make more sense for it to be a 5800 or 5900. They will need something in between a 5700 XT and a 5900.
 

Ascend

Member
Seems like a weird naming scheme.

The Navi cards have round numbers (5500, 5600, 5700 so far), so it would make more sense for it to be a 5800 or 5900. They will need something in between a 5700 XT and a 5900.
Big Navi/RDNA2 is going to be 6000 series.
 

thelastword

Banned
Yeah, Big Navi is gonna take names...
RDNA is designed for gaming. Vega will remain the compute architecture for now. The cut-down version(s) will probably use GDDR6.
Absolutely, but RDNA 2.0 is pure efficiency... It's where they did their hardest work to get maximum frames out of the architecture: less power draw, less latency, fewer bottlenecks and fewer idle compute resources. They are trying to get 99-100% efficiency out of the arch, as opposed to Vega, which had plenty of raw power, upwards of 50% of which sat idle and unutilized...

RDNA was what AMD needed for gaming... However, they have also improved the efficiency of Vega a tonne since then, most probably learning from RDNA... I think the latest Vega APUs have a 54-56% uplift on the same 8-10 CUs... So AMD's new GPU design seems to be benefiting all of its products...

What I'm really waiting on is the breakdown of the tech behind RDNA 2.0 relative to CUs and other general improvements over RDNA 1... I suspect the CUs on RDNA 2 are much larger, with double or triple the instruction sets of RDNA 1. It's going to be detailed/revealed soonish, so I can't wait...
 

Rickyiez

Member
I'm just cautiously optimistic. As much as I want AMD to succeed, their last few releases weren't up to par with the initial hype.
 

ZywyPL

Gold Member
How is this still a thing? AMD drivers have been fine for a while.

Literally just a week or two ago massive reports of black screen issues started to appear after the latest update, if that's "fine" for you then I really don't know what to say...
 

Kenpachii

Gold Member
All that power...destined to be crippled by shit drivers.

Their GPUs only function to pressure Nvidia's prices though. After their last Radeon VII debacle and their shit-tier driver support, which is still a thing, nobody would touch any of their GPUs with a 10-foot pole unless they can deliver a price that is far superior to Nvidia's.
 

Chiggs

Gold Member
Literally just a week or two ago massive reports of black screen issues started to appear after the latest update, if that's "fine" for you then I really don't know what to say...

If only I could find people complaining about Nvidia drivers online....
 

Ascend

Member
Good news, we probably can expect a 3080 Ti right out of the gate.

Now price it at 499.
People always want unrealistic things when it comes to AMD.

I'm expecting this thing to be in the range of 70-80% faster than a 5700 XT, considering AMD's efficiency improvements with CU scaling. Remember that this is double the CUs of the 5700 XT. So this should be around 30% faster than a 2080 Ti. And you think they should price it at $499?

An 80 CU part will most likely be a very large die, which makes that price unreasonable. And that's not even mentioning HBM pricing.
Not only that: when they went low with the R9 290X, the lower price worked against them, even though it was as fast as a Titan at half the price. Selling good products too cheaply gives the impression that the product IS cheap.

I would love for prices to come down more than anything. I would immediately jump on it if it really is $499. But, I think the launch price of the Radeon VII ($699) is the optimal price for this. Even $100 more at $799 is acceptable, although just barely.
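The estimate above can be sketched out explicitly. Both the scaling-efficiency factor and the 2080 Ti baseline below are assumptions for illustration, not measured numbers:

```python
xt_cus, big_navi_cus = 40, 80    # 5700 XT vs. leaked Big Navi CU counts
scaling_eff = 0.88               # assumed: doubling CUs doesn't double performance

big_navi_vs_5700xt = (big_navi_cus / xt_cus) * scaling_eff   # 1.76x
rtx2080ti_vs_5700xt = 1.38       # assumed 2080 Ti lead over the 5700 XT at 4K

big_navi_vs_2080ti = big_navi_vs_5700xt / rtx2080ti_vs_5700xt
print(round(big_navi_vs_5700xt - 1, 2))   # ~0.76, i.e. in the 70-80% range
print(round(big_navi_vs_2080ti - 1, 2))   # ~0.28, i.e. roughly 30% faster
```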

Literally just a week or two ago massive reports of black screen issues started to appear after the latest update, if that's "fine" for you then I really don't know what to say...
Because nVidia never released drivers that even killed cards...
 

thelastword

Banned
If only I could find people complaining about Nvidia drivers online....
Same thing every time... Nvidia drivers have had so many issues over the years, but nobody brings it up because we expect some issues with drivers at times. Yet some folk here are hell-bent on selectively painting AMD as having driver issues, when it is AMD's drivers that have been the most stable for the last ten years... People need to do their research... Some people are pretending that no AMD GPU owner can use his computer at the moment... when I've been gaming on, and typing this from, an AMD-based PC all this time without a problem...

Best interface, best features, most stable drivers for the last decade...
 

phil_t98

Gold Member
Same thing every time... Nvidia drivers have had so many issues over the years, but nobody brings it up because we expect some issues with drivers at times. Yet some folk here are hell-bent on selectively painting AMD as having driver issues, when it is AMD's drivers that have been the most stable for the last ten years... People need to do their research... Some people are pretending that no AMD GPU owner can use his computer at the moment... when I've been gaming on, and typing this from, an AMD-based PC all this time without a problem...

Best interface, best features, most stable drivers for the last decade...
Also, with the Xbox and PS, drivers are less of an issue, and the console versions of the chips run amazingly well.
 

Ascend

Member
275W isn't some arbitrary number. It is a practical limit on power draw, because that's about how much you can safely pull from the PCI-e slot plus 2x 6-pin PCI-e connectors. Neither AMD nor Nvidia is crazy enough to release a video card that requires 3 or more 6-pin PCI-e connectors, likely because, as a practical limitation, they also need to design reference coolers that can dissipate 275W of heat without resorting to pure water cooling for a reference card, which would of course be absurd.

Actually I take that back, AMD once made the Radeon R9 Fury X and that was a disaster. Afterwards AMD never fucked around with AIO water cooling reference designs ever again.
You do know 8-pin connectors exist, right...?
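For reference, the per-connector budgets in the PCIe spec are 75W from the slot, 75W per 6-pin, and 150W per 8-pin, which is why a 275W card needs at least one 8-pin rather than just two 6-pins:

```python
# PCIe CEM power budgets per source
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

print(SLOT_W + 2 * SIX_PIN_W)             # 225 W: slot + 2x 6-pin
print(SLOT_W + SIX_PIN_W + EIGHT_PIN_W)   # 300 W: slot + 6-pin + 8-pin
print(SLOT_W + 2 * EIGHT_PIN_W)           # 375 W: slot + 2x 8-pin
```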
 