
Radeon RX 6900XT and 6800/6800XT final specs. 16 GB G6. Up to ~21 TFLOPS (~24 TFLOPS boost)

Leonidas

Member

6900XT (AMD exclusive)
80 CU (5120 Stream Processors)
2040MHz Game Clock (2330MHz Boost Clock)
16GB GDDR6 @ 16Gbps

6800XT
72 CU (4608 Stream Processors)
2015MHz Game Clock (2250MHz Boost Clock)
16GB GDDR6 @ 16Gbps

6800
64 CU (4096 Stream Processors)
1815MHz Game Clock (2150MHz Boost Clock)
16GB GDDR6 @ 16Gbps

Interested to see where price and performance land. Given that even the 6800 has 16 GB GDDR6 at the same speed as the 6900XT, the 6800 seems like the best option for now. Reference clocks (for the 6800) are lower than the consoles', though... Interested to see if the 6800 can OC to 2 GHz like the others. Could be an interesting card if so.
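For context, the headline TFLOPS figures in the thread title fall straight out of these specs; a minimal sketch in Python (using the standard 2 FLOPs per shader per clock FP32 FMA convention):

```python
# Theoretical FP32 throughput: 2 FLOPs (one FMA) per stream processor per clock.
def fp32_tflops(shaders: int, clock_mhz: float) -> float:
    return 2 * shaders * clock_mhz * 1e6 / 1e12

skus = {  # name: (stream processors, game clock MHz, boost clock MHz)
    "6900XT": (5120, 2040, 2330),
    "6800XT": (4608, 2015, 2250),
    "6800":   (4096, 1815, 2150),
}
for name, (sp, game, boost) in skus.items():
    print(f"{name}: {fp32_tflops(sp, game):.1f} TF game, "
          f"{fp32_tflops(sp, boost):.1f} TF boost")
# 6900XT: 20.9 TF game, 23.9 TF boost -> the ~21/~24 TFLOPS in the title
```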

Power consumption is concerning though, an issue I had with RTX 3080/3090 also.



6900XT is ~350 watts, 6800XT is ~320 watts, 6800 is ~290 watts, if igorsLAB's power consumption numbers are correct.

OC models of 6800 could be over 300 watts too.
 

SlimySnake

Flashless at the Golden Globes
Pretty impressive. They should be able to compete with the 3080 on rasterization performance with the 6900xt.

Let's see if they have decent rt and machine learning capabilities.

Also, that's assuming TFLOPS scale to performance 1:1. The RTX 3080 is 30 TFLOPS but only offers around 20 TFLOPS' worth of performance relative to the 2080. Even Nvidia GPUs are no longer scaling well with extra shader processors. And increasing clocks beyond 2.0 GHz doesn't give you 1:1 performance increases either.

So in reality it could just be aiming at the 3070 instead. We shall see.
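As a rough sketch of that scaling argument (the 2080's ~10 TFLOPS rating is its official spec; the ~2x relative performance restates the claim above, not a benchmark):

```python
# "Effective TFLOPS": scale a baseline card's rated TFLOPS by its measured
# (here: claimed) relative performance.
RTX_2080_TFLOPS = 10.1   # rated FP32 throughput of the 2080
RTX_3080_TFLOPS = 29.8   # rated FP32 throughput of the 3080
claimed_speedup = 2.0    # claim above: the 3080 delivers ~20 TFLOPS' worth

effective = RTX_2080_TFLOPS * claimed_speedup
print(f"effective: ~{effective:.0f} TFLOPS")                     # ~20
print(f"scaling efficiency: {effective / RTX_3080_TFLOPS:.0%}")  # ~68%
```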
 
16GB VRAM on all the SKUs...

Makes 16GB & 20GB 3070s & 3080s all but inevitable (not that there was much doubt, IMO); just a question of when.
It's GDDR6 though. I'll be interested to see how well it scales next to G6X. If it gives the same bandwidth as the 3080's 10GB of G6X, it might be a moot point. But I'd bet it'll scale better.

I really can't wait to see the benchmarks. I'm hopeful AMD is competitive again. We really need someone to knock Nvidia down a peg and get them to get their heads out of their asses.
 
It's GDDR6 though. I'll be interested to see how well it scales next to G6X. If it gives the same bandwidth as the 3080's 10GB of G6X, it might be a moot point. But I'd bet it'll scale better.

I really can't wait to see the benchmarks. I'm hopeful AMD is competitive again. We really need someone to knock Nvidia down a peg and get them to get their heads out of their asses.


We need someone to knock nvidia down a peg from... putting all sorts of innovations on the market like DLSS and having an extremely powerful card selling at 700 bucks? Why exactly do we need nvidia to be put down again? They've consistently innovated and put out spectacular products even in the absence of direct competition. What are we talking about here?
 

Agent_4Seven

Tears of Nintendo
7 days until announcement
And who knows how many more months until AMD actually allows AIB partners to release their custom models, let alone ships stable, working drivers. The reference design looks like crap mixed with a stolen RTX 2000 design; I don't like it at all.

I bet that the prices for XT models will be the same as RTX 3070/80/90.
 
I might go with the 6800XT if the price is right. It will only be marginally slower than the 3080, I'm thinking like 10-15%, while probably being easily 200 bucks cheaper than the 20GB version of the 3080.

Normally I'd wait for the 3070 but I'm not buying an 8 GB card in 2021.
 

longdi

Banned
I'm not sure it will only be marginally slower than the 3080. The rumoured XTX with 88 CU and 2.4 GHz is the one that's in between the 3080 and 3090.

Now it seems there is no XTX, and the game clocks are 20% slower than rumoured.

At least we will see 16/20GB cards from Nvidia.
10GB poors getting shoved by Nvidia :messenger_weary:
 

Reallink

Member
I might go with the 6800XT if the price is right. It will only be marginally slower than the 3080, I'm thinking like 10-15%, while probably being easily 200 bucks cheaper than the 20GB version of the 3080.

Normally I'd wait for the 3070 but I'm not buying an 8 GB card in 2021.

The standard 3080 is $200 cheaper than the 20GB 3080. Why would anyone buy a 6800XT if it's 15% slower than a 3080, and very likely inferior in RT and AI scaling, for the same price? Even $599 would be a terrible value.
 

3liteDragon

Member
Does Game clock mean sustainable clock?

How long can it hold boost clock?
Base clock - worst-case scenario when gaming (under heavy load); unlikely to happen unless you have bad cooling or a weak power supply
Game clock - expected baseline performance when gaming, though frequencies often end up better than this, since AMD has been conservative with its advertised clocks lately
Boost clock - best-case scenario when gaming (depends on how good your cooling and power supply are)

If the GPU's running at, let's say, 100MHz over its specified game clock but thermals and power delivery are still good, that's when you see an automatic increase in frequency, which might hit or get close to the specified boost clock. You should expect your GPU to maintain frequencies between the specified game clock and boost clock most of the time, though.

Hope that clears it up.
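If it helps, here's that explanation as a toy sketch in Python (the game and boost thresholds are the 6800XT's advertised clocks; the base clock is an assumed placeholder, since AMD hasn't published it yet):

```python
# Toy classifier for where an observed core clock sits relative to the
# advertised tiers. BASE_MHZ is an assumed placeholder, not an official spec.
BASE_MHZ, GAME_MHZ, BOOST_MHZ = 1825, 2015, 2250  # 6800XT-style numbers

def classify_clock(observed_mhz: float) -> str:
    if observed_mhz < BASE_MHZ:
        return "below base: throttling (poor cooling or power delivery)"
    if observed_mhz < GAME_MHZ:
        return "base-to-game: heavy sustained load"
    if observed_mhz <= BOOST_MHZ:
        return "game-to-boost: where you should sit most of the time"
    return "above boost: best case, usually brief"

print(classify_clock(2100))  # game-to-boost: where you should sit most of the time
```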
 

regawdless

Banned
So curious about this launch. Sounds like there's a lot going on in the background. There's real potential for another fucked-up launch. How good will the drivers be? When will AIB cards hit? Price will be extremely important.
Exciting times!
 

nosseman

Member
Xbox Series X - RX 6800 (Navi 21)?
Playstation 5 - RX 6700 (Navi 22)?

The game clock for the RX 6800 is very close to the sustained clock of the Xbox Series X.
 

longdi

Banned
Does Game clock mean sustainable clock?

How long can it hold boost clock?

Game clocks are the conservative, sustainable clocks.

For the 5700XT, the young 7nm RDNA family: in games, the average clocks are 2-5% higher than the game clock, but that comes with caveats too.
It depends on your room temp, on existing games (who knows how much stress future games will apply), and on the heatsink/fan assembly.

But at game clocks, you get the optimal fan noise, heat, and power efficiency.

The boost clocks, on the other hand, are very, very optimistic marketing speak; maybe if you run a simple flat game engine it'll hit them every few minutes. But I've not seen many 5700XT reviews that saw boost clocks in action.

To force the GPU to run at boost clocks, you need to apply an offset overclock, raise the voltages, blast the fans, or strap on a 2kg heatsink with triple 120mm fans; overclockers may even apply liquid metal, strap on a water block, and pipe the heat through 360mm radiators! 🤷‍♀️
 
Game clocks are the conservative, sustainable clocks.

For the 5700XT, the young 7nm RDNA family: in games, the average clocks are 2-5% higher than the game clock, but that comes with caveats too.
It depends on your room temp, on existing games (who knows how much stress future games will apply), and on the heatsink/fan assembly.

But at game clocks, you get the optimal fan noise, heat, and power efficiency.

The boost clocks, on the other hand, are very, very optimistic marketing speak; maybe if you run a simple flat game engine it'll hit them every few minutes. But I've not seen many 5700XT reviews that saw boost clocks in action.

To force the GPU to run at boost clocks, you need to apply an offset overclock, raise the voltages, blast the fans, or strap on a 2kg heatsink with triple 120mm fans; overclockers may even apply liquid metal, strap on a water block, and pipe the heat through 360mm radiators! 🤷‍♀️

If I'm understanding you correctly....

Boost clock = Unsustainable. You might see it reach this for a fraction of a second in MSI Afterburner.

My 3900X is the same. They say 4.7GHz .... yeah .... maybe on one core for less than a second occasionally.

Does that sound about right?
 

longdi

Banned
If I'm understanding you correctly....

Boost clock = Unsustainable. You might see it reach this for a fraction of a second in MSI Afterburner.

My 3900X is the same. They say 4.7GHz .... yeah .... maybe on one core for less than a second occasionally.

Does that sound about right?

You need to overclock the GPU and hope your silicon can hold it. Apply big cooling while you're at it.

It's not as bad as AMD's Ryzen CPUs, where the advertised Zen boost clocks are truly impossible to reach for all cores, or even a single-core load, without exotic cooling.
 

longdi

Banned
For RDNA1, the average game clocks are about 3% higher than advertised in existing 'last gen' games.

I'm not sure if RDNA2/6000 will have similar headroom, seeing as they are already pushing 7nm with a 2GHz game clock.

My prediction is that the 6000/RDNA2 game clocks are very much the end-game limits, with no more positive upside. 🤷‍♀️
 

pullcounter

Member
that 6800xt is a monster. you're gonna see almost no difference in performance compared to that 80cu card. my guess:

6800xt $599
6900xt $899-$999
 

regawdless

Banned
Still don't see this competing well if it does not have good RT and a DLSS equivalent.

I'm sure there is a market for a card that offers close-enough performance to a 3080 while being significantly cheaper, for people who don't think raytracing is worth it yet.
Or people who will be fine with raytracing "lite", just like the consoles will offer, which will be the baseline for raytracing for the whole upcoming generation. And we can expect these cards to outperform the consoles in that regard.

I'm not one of these people, but I see why it could be attractive. All depends on the price though.
 
I'm sure there is a market for a card that offers close-enough performance to a 3080 while being significantly cheaper, for people who don't think raytracing is worth it yet.
Or people who will be fine with raytracing "lite", just like the consoles will offer, which will be the baseline for raytracing for the whole upcoming generation. And we can expect these cards to outperform the consoles in that regard.

I'm not one of these people, but I see why it could be attractive. All depends on the price though.


Why would it be "significantly" cheaper if it offers similar performance? Raytracing is only gonna get more and more mainstream going forward; I don't see much value in picking a new card that doesn't do it well. Nvidia just published an article yesterday listing 12 new games getting RTX and DLSS by the end of this year.


AMD is very likely gonna land a fair gap below a 3080 in performance. Shit-tier raytracing that takes performance away from rasterisation, no DLSS. It would need to be cheaper by several hundred dollars, which it won't be.
 

FireFly

Member
AMD is very likely gonna land a fair gap below a 3080 in performance. Shit-tier raytracing that takes performance away from rasterisation, no DLSS. It would need to be cheaper by several hundred dollars, which it won't be.
AMD can do BVH acceleration and shading concurrently, and we have no idea what the performance will be like. If raytracing performance is between a 3080 and 3070 for example, they can just slot in between the two cards, with the value play being double the RAM of the 3070.
 

Md Ray

Member
Memory bandwidth is going to be a big problem on these GPUs, it looks like. 512 GB/s max even on an 80 CU part?

Even if you have 2x the TF of the PS5/XSX GPUs, the performance increase won't be 2x over the consoles due to that low memory bandwidth. Heck, the flagship seems to have even lower bandwidth than the XSX.

I guess this is why they're rumoured to have Infinity Cache or something?
 

ZywyPL

Banned
Xbox Series X - RX 6800 (Navi 21)?
Playstation 5 - RX 6700 (Navi 22)?

The game clock for the RX 6800 is very close to the sustained clock of the Xbox Series X.


The consoles are below those cards' specs. We will most likely see 6700, 6600 and 6500 cards in the near future, which will be more representative of the PS5 and XSX GPUs.
 

Marlenus

Member

6900XT (AMD exclusive)
80 CU (5120 Stream Processors)
2040MHz Game Clock (2330MHz Boost Clock)
16GB GDDR6 @ 16Gbps

6800XT
72 CU (4608 Stream Processors)
2015MHz Game Clock (2250MHz Boost Clock)
16GB GDDR6 @ 16Gbps

6800
64 CU (4096 Stream Processors)
1815MHz Game Clock (2150MHz Boost Clock)
16GB GDDR6 @ 16Gbps

Interested to see where price and performance land. Given that even the 6800 has 16 GB GDDR6 at the same speed as the 6900XT, the 6800 seems like the best option for now. Reference clocks (for the 6800) are lower than the consoles', though... Interested to see if the 6800 can OC to 2 GHz like the others. Could be an interesting card if so.

Power consumption is concerning though, an issue I had with RTX 3080/3090 also.



6900XT is ~350 watts, 6800XT is ~320 watts, 6800 is ~290 watts, if igorsLAB's power consumption numbers are correct.

OC models of 6800 could be over 300 watts too.

Igor's numbers are wrong. The 5700XT has a TDP of 180W and a TBP of 225W. The TGP of N21 is 255W, and that includes memory, so TBP will be sub-300W for stock configs. OC models will push higher.

The Series X GPU + RAM peaks at 150W. Do you really think an extra 12 CUs @ similar clocks is going to consume 140W more? That is a 93% power increase for 23% more CUs. RDNA1 only had a 74% power increase for an 82% CU increase (and 100% more ROPs and memory bus), so the numbers are blatantly incorrect.

The 6800 should be 180W-ish at those clock speeds.
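The percentages in that post do check out; a quick sanity check (assuming the RDNA1 comparison is the 5500 XT vs the 5700 XT, which is my guess at the parts being referenced):

```python
# Sanity-check the scaling percentages quoted above.
def pct_increase(old: float, new: float) -> float:
    return (new - old) / old * 100

# Leaked figures: Series X GPU + RAM ~150W vs ~290W for the 64 CU 6800 (XSX has 52 CUs).
print(f"power +{pct_increase(150, 290):.0f}%, CUs +{pct_increase(52, 64):.0f}%")
# -> power +93%, CUs +23%

# RDNA1 reference (assumed: 5500 XT ~130W / 22 CUs vs 5700 XT ~225W / 40 CUs).
print(f"power +{pct_increase(130, 225):.0f}%, CUs +{pct_increase(22, 40):.0f}%")
# -> power +73%, CUs +82%
```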
 
AMD can do BVH acceleration and shading concurrently, and we have no idea what the performance will be like. If raytracing performance is between a 3080 and 3070 for example, they can just slot in between the two cards, with the value play being double the RAM of the 3070.


It's near impossible for AMD to put out raytracing performance between the 3070 and 3080 on their first gen of this. Nvidia has 2 generations of raytracing hardware on the market while AMD has none as of now. They won't match Turing's dedicated RT cores, never mind land between the 3070 and 3080.
 

onesvenus

Member
If I'm reading this right, we should wait to see the differences between the 6800 and 6800XT before inferring anything about the XSX and PS5 GPUs.
The XSX is a little slower than the 6800, and the PS5 is exactly half a 6800XT by CU count.
 
Memory bandwidth is going to be a big problem on these GPUs, it looks like. 512 GB/s max even on an 80 CU part?

Even if you have 2x the TF of the PS5/XSX GPUs, the performance increase won't be 2x over the consoles due to that low memory bandwidth.

I guess this is why they're rumoured to have Infinity Cache or something?

Yeah, the new Infinity Cache system is supposed to mitigate the narrower bus and supposedly performs at the equivalent of a much bigger bus/higher memory bandwidth. We will see what they have in store pretty soon; only a week left until the official reveal.
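For the curious, the usual back-of-the-envelope model for a large on-die cache looks like this (the hit rate and cache bandwidth below are made-up illustrative values, not leaked figures):

```python
# Effective bandwidth with a big on-die cache: hits are served at cache speed,
# misses fall through to GDDR6. All cache numbers here are illustrative assumptions.
def effective_bandwidth(hit_rate: float, cache_gbs: float, vram_gbs: float) -> float:
    return hit_rate * cache_gbs + (1 - hit_rate) * vram_gbs

VRAM_GBS = 512.0    # 256-bit @ 16 Gbps
CACHE_GBS = 1600.0  # hypothetical on-die cache bandwidth
for hit in (0.0, 0.3, 0.5):
    print(f"hit rate {hit:.0%}: "
          f"~{effective_bandwidth(hit, CACHE_GBS, VRAM_GBS):.0f} GB/s effective")
```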


Power consumption is concerning though, an issue I had with RTX 3080/3090 also.



6900XT is ~350 watts, 6800XT is ~320 watts, 6800 is ~290 watts, if igorsLAB's power consumption numbers are correct.

OC models of 6800 could be over 300 watts too.

As for the power consumption of the 6000 series, that isn't really any kind of "leak" or confirmation; it's just Igor doing some speculative calculations, so I wouldn't take those numbers as gospel. In fact, some very trustworthy people in the tech Twitter sphere have cast serious doubt on those figures, saying they just don't add up. Of course, they could end up being in that ballpark, but we will have to wait and see. Personally, I would take them with a huge grain of salt.

Most likely AMD will beat Nvidia on power draw and efficiency this generation, partially due to the node advantage and partially due to big architectural improvements. After all, the 3000 series seems to be pushed to its absolute limit, way past the efficiency sweet spot, to gain a tiny bit more performance to counter the 6000 series. I don't see AMD clocking in with higher power draw unless they too decided to push their cards to the limit once Nvidia demonstrated that high power draw isn't viewed as a negative anymore.

I would expect the absolute max power draw of the top AMD reference card to be around 280-300W. Of course AIBs will OC their models, increasing power draw somewhat; the good news is that these cards are rumoured to have actual OC headroom. Again, most of what we've heard so far is speculation to some degree, some of it more informed than the rest; we won't know 100% until the reveal on the 28th of October.

It's amazing what a tight ship AMD has run so far with leaks regarding the Big Navi cards. Even a week out from the reveal there are still tons of speculation, conjecture, rumours and contradictory leaks. They have really done a good job keeping their cards close to their chest this time around.

Apparently so much so that, just like Nvidia, they have locked down the BIOS/drivers of the models sent to AIBs, so the partners cannot gauge real unlocked performance until after the reveal (and possibly not until closer to the review/release date?).
 

nosseman

Member
The consoles are below those cards' specs. We will most likely see 6700, 6600 and 6500 cards in the near future, which will be more representative of the PS5 and XSX GPUs.

If you go by the CUs, the Xbox Series X is pretty close to the RX 6800 - i.e. Navi 21.

The 6800 is 64 CUs and the Xbox Series X is 52 (cut-down or disabled CUs to raise yields).

6600 and 6500 are too small - too few CUs.
 

Md Ray

Member
Xbox Series X - RX 6800 (Navi 21)?
Playstation 5 - RX 6700 (Navi 22)?

The game clock for the RX 6800 is very close to the sustained clock of the Xbox Series X.
No. They aren't comparable.

6800/6800XT = 64 and 72 CUs (17 and 20 TF); 6700 = 40 CUs (?? TF).
XSX and PS5 have 52 and 36 active CUs, respectively, in comparison.

Also, memory bandwidth of RX 6800 = 512 GB/s.

XSX = 560 + 336 GB/s
PS5 = 448 GB/s

RX 6700's bandwidth is unknown at this point. But if I have to guess, it'll likely be 384 GB/s: a 192-bit interface @ 16 Gbps.
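That guess is just the standard bandwidth arithmetic (a minimal sketch):

```python
# Peak memory bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps).
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

print(bandwidth_gbs(256, 16))  # 512.0 GB/s -> RX 6800/6900XT class
print(bandwidth_gbs(192, 16))  # 384.0 GB/s -> the RX 6700 guess above
print(bandwidth_gbs(320, 14))  # 560.0 GB/s -> XSX's fast 10GB pool
```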
 
If you go by the CUs, the Xbox Series X is pretty close to the RX 6800 - i.e. Navi 21.

The 6800 is 64 CUs and the Xbox Series X is 52 (cut-down or disabled CUs to raise yields).

6600 and 6500 are too small - too few CUs.

Just to be clear, there won't be a 1:1 equivalent desktop GPU for either the XSX or the PS5. While they are both based on RDNA2 technology, they also carry customizations from MS and Sony, and they are part of a low-power SoC, meaning the GPU and CPU are combined on the same chip. There should be fairly big differences in configuration and performance versus a higher-power, larger, discrete desktop GPU with its own drivers, etc.
 

regawdless

Banned
Why would it be "significantly" cheaper if it offers similar performance ? Raytracing is only gonna get more and more mainstream going forward, i dont see much value in picking a new card that doesnt do that well. Nvidia just published an article yesterday with 12 new games with RTX and DLSS by the end of this year


AMD is very likely gonna have a fair gap lower performance than a 3080. Shit tier raytracing that takes performance away from rasterisation, no DLSS. Its would need to be cheaper by several hundreds of dollars, which it wont.

I was just hypothesising about a scenario in which AMD could be an option.
I'm with you and rather sceptical regarding AMD's value proposition.
As I said, the only chance for AMD is to offer close-enough performance to a 3080, for example being 15% slower in non-raytraced games. While also lacking comparable raytracing performance and DLSS, they'd need to be at least 150 bucks cheaper.
Then I see a market for them, because these cards will still be able to offer better raytracing than the consoles. Of course Nvidia will be miles ahead in raytracing. But keep in mind, consoles will stay the lead platforms in most cases.

Will they be able to offer such an attractive price? I don't know. I think Nvidia did a great job with the pricing of the 3080 and 3070.
Tough spot for AMD.
 

Dampf

Member
Sounds good, but I'm skeptical about the VRAM. Such high amounts of VRAM will only lead to higher prices. Why not a 6800 with 12GB of VRAM instead? Those high amounts of VRAM are unnecessary.
 

nosseman

Member
No. They aren't comparable.

6800 = 64 CUs (15 TF), 6700 = 40 CUs (?? TF).
XSX and PS5 have 52 and 36 active CUs, respectively, in comparison.

Also, memory bandwidth of RX 6800 = 512 GB/s.

XSX = 560 + 336 GB/s
PS5 = 448 GB/s

RX 6700's bandwidth is unknown at this point. But if I have to guess, it'll likely be 384 GB/s: a 192-bit interface @ 16 Gbps.

The chips obviously have cut-down/disabled CUs to raise yields.

The memory bus is also different because a dGPU differs from a GPU on an APU: the Xbox Series X and PlayStation 5 share the memory bus between the GPU and CPU.

Navi 22 (6700 XT) is probably clocked higher than the RX 6800 - perhaps as high as the 6900 XT - but since it has fewer CUs it will still have a lower TF figure.
 

nosseman

Member
Sounds good, but I'm skeptical about the VRAM. Such high amounts of VRAM will only lead to higher prices. Why not a 6800 with 12GB of VRAM instead? Those high amounts of VRAM are unnecessary.

12GB of VRAM would mean a 192-bit memory bus and lower bandwidth, since there are no memory modules available in densities that would allow 12GB of VRAM on a 256-bit bus.
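The underlying constraint: each GDDR6 module has a 32-bit interface and currently ships in 1GB or 2GB densities, so bus width and capacity are locked together (a minimal sketch; clamshell mode, which doubles capacity per channel, is ignored here):

```python
# GDDR6: one module per 32 bits of bus, 1GB or 2GB per module, so the bus
# width fixes which capacities are possible (clamshell mode ignored).
def capacities_gb(bus_bits: int, densities=(1, 2)):
    modules = bus_bits // 32
    return [modules * d for d in densities]

print(capacities_gb(256))  # [8, 16] -> 16GB needs 2GB modules on a 256-bit bus
print(capacities_gb(192))  # [6, 12] -> 12GB forces a narrower 192-bit bus
```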
 