
AMD Radeon RX6800/RX6800XT Reviews/Benchmarks Thread |OT|

Well ... Scalpers gonna scalp. I paid a slight premium, but not a 1000€^^...

I wouldn't have thought that it would keep being as bad as it is now (wrt availability of new gpus, or gpus in general)...

Guess TSMC is REALLY swamped with the PS5, Xbox and all of AMD's new offerings.

Finished Control yesterday. Onto Cyberpunk next I think.
 

supernova8

Banned
Just some comedy gold from my girl Lisa Su at CES 2021:


[image]
 
Yeah, prices and availability across the board for pretty much all GPUs right now are an absolute shitshow. Hopefully things become more sane by March/April, but who am I kidding; it will probably continue into the summer, or even September, before things get better.
 
I paid half that at Alternate^^

Tbf it's a 6900... But mine overclocks to those speeds too. (Or close to)

And Nvidia even dared to release yet another SKU for its unavailable repertoire.
 
Last edited:
Considering selling my 480 now for $250; I mean, the crypto bubble is about to pop and used card prices will drop like a rock, then I just wait a few months until I can get a 6700 or a 6800 for a normal price in a store. Good idea?
 

llien

Member
Well yeah only available if you pay what you'd normally pay for an entire PC
On the other hand, for less than 2080Ti... :D

----------------------------

Clickbaity rumor mill:



TLDW:
  1. MCM design
  2. Fundamental improvements to the architecture, radical changes to the geometry pipeline inspired by Sony's work on the PS5
  3. Improved ray tracing
  4. Performance target at 2.5x of RDNA2 (compared to RDNA2 being 2x of RDNA1)
  5. Drastic IPC and clock improvements
  6. RTG's budget is now massive
  7. Improved super resolution
  8. 2022 launch
 

Kenpachii

Member
On the other hand, for less than 2080Ti... :D

----------------------------

Clickbaity rumor mill:



TLDW:
  1. MCM design
  2. Fundamental improvements to the architecture, radical changes to the geometry pipeline inspired by Sony's work on the PS5
  3. Improved ray tracing
  4. Performance target at 2.5x of RDNA2 (compared to RDNA2 being 2x of RDNA1)
  5. Drastic IPC and clock improvements
  6. RTG's budget is now massive
  7. Improved super resolution
  8. 2022 launch

2.5x sounds like bullshit to me.

Big chance the next GPU is just a minor increase.
 

Ascend

Member
2.5x sounds like bullshit to me.

Big chance the next GPU is just a minor increase.
Well... It is a performance target, which means it will not necessarily be achieved. That being said, AMD has been pretty much spot on with their performance targets the last couple of years.

Many factors are in play here... Remember the Radeon VII? What does it look like compared to the 6800 non-XT? Both the Radeon VII and the 6800 have 60 CUs. The 6800 is almost 50% faster... Why? More clocks, more ROPs, a more efficient architecture... BUT, more importantly... It's all on the same node, while using 50W less and without HBM.

People really have no idea how much AMD has advanced in such a short period of time. 2.5x sounds like a lot because people simply do not realize their advancements. But consider them going to an even better node. That alone will give them better power consumption and better clocks. How much performance do you think they can squeeze out of that? Add in an IPC increase or other architectural improvements, and things are looking quite good. Maybe 2.5x still sounds too far-fetched... But then there's the ace in the hole, and the patents are already out: a chiplet-based GPU.
We don't know if it will happen... But... Don't underestimate AMD...
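
For what it's worth, 2.5x doesn't have to come from any single miracle; it only has to be the product of several smaller factors. A rough back-of-envelope sketch (every number below is an illustrative assumption, not a leak):

    # Back-of-envelope decomposition of a hypothetical 2.5x uplift target.
    # Every factor is an assumption for illustration, not a leaked figure.
    cu_scaling  = 2.0    # e.g. roughly doubling the CU count (chiplets?)
    clock_gain  = 1.15   # better node -> somewhat higher sustained clocks
    ipc_gain    = 1.10   # architectural / IPC improvements
    scaling_eff = 0.95   # real games never scale perfectly with CU count

    projected_uplift = cu_scaling * clock_gain * ipc_gain * scaling_eff
    print(f"Projected uplift over RDNA2: {projected_uplift:.2f}x")  # ~2.40x

Nothing outrageous is needed in any single factor; the question is whether all of them land at once.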

In other 'leaks'...
 
Last edited:

llien

Member
2.5x sounds like bullshit to me.

Big chance the next GPU is just a minor increase.
5nm is claimed to offer 1.8x the density, on top of power consumption savings.

MCM could make monster-size chips much more affordable, if it truly works (with AMD's uber-cache, which allows it to beat NV despite slower VRAM, perhaps it could).
 

FireFly

Member
5nm is claimed to offer 1.8x the density, on top of power consumption savings.

MCM could make monster-size chips much more affordable, if it truly works (with AMD's uber-cache, which allows it to beat NV despite slower VRAM, perhaps it could).
More affordable than they otherwise would have been, but according to Microsoft there isn't a significant reduction in the cost per transistor for 5nm. So if they're doubling the number of transistors again, it's going to be very expensive. I wouldn't be surprised if $1000 was the "new" price point at the high end.
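
To put that cost-per-transistor point into numbers, here's a minimal sketch; the wafer prices and the density figure below are placeholder assumptions, not TSMC quotes:

    # Illustrative only: why flat cost-per-transistor makes a doubled design pricey.
    # Wafer prices and density gain are placeholder assumptions, not real figures.
    wafer_cost_7nm = 9000.0    # assumed $ per wafer
    wafer_cost_5nm = 16000.0   # assumed $ per wafer
    density_gain   = 1.8       # claimed logic density improvement (7nm -> 5nm)

    cost_per_area_ratio       = wafer_cost_5nm / wafer_cost_7nm          # ~1.78
    cost_per_transistor_ratio = cost_per_area_ratio / density_gain       # ~0.99

    print(f"5nm cost per transistor vs 7nm: {cost_per_transistor_ratio:.2f}x")
    # ~1.0x -> doubling the transistor budget roughly doubles the silicon cost.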
 
Last edited:

Ascend

Member
More affordable than they otherwise would have been, but according to Microsoft there isn't a significant reduction in the cost per transistor for 5nm. So if they're doubling the number of transistors again, it's going to be very expensive. I wouldn't be surprised if $1000 was the "new" price point at the high end.
Only if they do a monolithic die. Smaller chiplets mean that wafer defects affect relatively fewer chips, increasing yields and keeping costs down. It's also what allowed Ryzen to stay relatively cheap compared to Intel's 10+ core CPUs.
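
The yield argument can be sketched with a simple Poisson defect model; the defect density and die areas below are illustrative assumptions only, not foundry data:

    import math

    # Minimal sketch of the chiplet-yield argument (Poisson defect model).
    defect_density = 0.09          # defects per cm^2 (assumed)

    def die_yield(area_cm2: float) -> float:
        """Fraction of defect-free dies for a given die area."""
        return math.exp(-defect_density * area_cm2)

    monolithic_area = 5.2          # one big die, cm^2 (assumed)
    chiplet_area    = 1.3          # one of four chiplets covering the same total area

    y_mono    = die_yield(monolithic_area)   # ~63% of big dies come out good
    y_chiplet = die_yield(chiplet_area)      # ~89% of small chiplets come out good

    # Silicon you must fabricate per *good* product (1 good big die vs 4 good chiplets):
    silicon_mono    = monolithic_area / y_mono            # ~8.3 cm^2
    silicon_chiplet = 4 * chiplet_area / y_chiplet        # ~5.8 cm^2

    print(f"Wafer area per good monolithic GPU: {silicon_mono:.1f} cm^2")
    print(f"Wafer area per good chiplet GPU:    {silicon_chiplet:.1f} cm^2")
    # The chiplet build scraps only bad 1.3 cm^2 dies, never a whole 5.2 cm^2 die.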
 
On the other hand, for less than 2080Ti... :D

----------------------------

Clickbaity rumor mill:



TLDW:
  1. MCM design
  2. Fundamental improvements to the architecture, radical changes to the geometry pipeline inspired by Sony's work on the PS5
  3. Improved ray tracing
  4. Performance target at 2.5x of RDNA2 (compared to RDNA2 being 2x of RDNA1)
  5. Drastic IPC and clock improvements
  6. RTG's budget is now massive
  7. Improved super resolution
  8. 2022 launch

8. Oh, you mean for RDNA2? LMFAO
 

llien

Member
More affordable than they otherwise would have been, but according to Microsoft there isn't a significant reduction in the cost per transistor for 5nm. So if they're doubling the number of transistors again, it's going to be very expensive. I wouldn't be surprised if $1000 was the "new" price point at the high end.
Even more so: MCM is inherently cheaper than one monolithic chip.
Given how swiftly AMD was able to catch up (if you check new games only, the 6900 XT beats the 3090; and as for the node difference, it's transistor for transistor, with the rest written off to power consumption), imagine the situation NV would find itself in if AMD really pulls off an MCM chip next gen while NV lags behind.

AMD already has expertise in the area, from the CCX designs of Zen to the "infinity cache", which could be the key to solving MCM design issues.

8. Oh, you mean for RDNA2? LMFAO
Oh, shut the hell up with the bitching. Which RDNA2 card are you unable to buy?

All cards are available to buy in Germany, and in great variety, just not at MSRP (which is kinda not surprising, given the "$499" 3070 goes for 800+ Euro).
I thank green fanbois for that; apparently $1200 for a 2080 Ti was totally OK, even though MSRP was $999.

And why AMD isn't ramping up production any further is clear: TSMC's 7nm fabs are running at full capacity, whereas NV has Samsung's 8nm node nearly exclusively to itself.
 

Kenpachii

Member
I've rechecked RG's track record and they were quite good with their RDNA2 predictions, so they could have a real source.
Given what Dr. Su has been doing on the Intel front, AMD pushing it to the limit is not surprising.

[image]

Didn't know it was 5nm though, but yeah, the next GPU is going to be a wild ride if that's true.
 

Ascend

Member
Didn't know it was 5nm though, but yeah, the next GPU is going to be a wild ride if that's true.
I imagine that if they want to reach the 2.5x performance target, they would have to nearly double the CUs while keeping power consumption in check.
If the chiplet part is true, it might be a 4x 40 CU configuration, similar to how their 32C/64T CPUs are a 4x 8C configuration.

I do not think they would do 2x 80, but it is possible. Maybe the interconnects between the GPUs will be cheaper in a 2x 80 configuration than in a 4x 40 one. The best configuration will depend on which is more cost-effective in terms of chiplet yields and the cost of the GPU's equivalent of an I/O die.
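
As a rough sanity check on the "double the CUs while keeping power in check" part, a crude estimate; every number below is an illustrative assumption, not a spec:

    # Crude power estimate for doubling CUs on a better node (assumptions only).
    rdna2_board_power = 300.0   # W, roughly a big RDNA2 card
    fixed_overhead    = 40.0    # W, memory/board power that doesn't scale with CUs
    cu_scale          = 2.0     # doubling the CU count
    node_power_scale  = 0.70    # assumed per-CU power at similar clocks on the new node

    core_power  = (rdna2_board_power - fixed_overhead) * cu_scale * node_power_scale
    board_power = core_power + fixed_overhead
    print(f"Estimated board power: {board_power:.0f} W")  # ~404 W with these guesses

So even with generous node savings, a straight CU doubling pushes the power budget up unless clocks and voltages give some of it back, which is exactly why the chosen chiplet split and target clocks matter.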
 

Rikkori

Member
5nm process means it will not be minor.
lol, 5nm.
Go check the raw stats on it; the "shrink" is a joke. Intel does more with a "+" revision than TSMC is forecasting for 7nm -> 5nm.
Plus, for GPUs AMD never maxes out the node density, so at best they'll get 10-15% better perf-per-watt; any real performance gains will have to come mostly from architecture.
 
Even more so: MCM is inherently cheaper than one monolithic chip.
Given how swiftly AMD was able to catch up (if you check new games only, the 6900 XT beats the 3090; and as for the node difference, it's transistor for transistor, with the rest written off to power consumption), imagine the situation NV would find itself in if AMD really pulls off an MCM chip next gen while NV lags behind.

AMD already has expertise in the area, from the CCX designs of Zen to the "infinity cache", which could be the key to solving MCM design issues.


Oh, shut the hell up with the bitching. Which RDNA2 card are you unable to buy?

All cards are available to buy in Germany, and in great variety, just not at MSRP (which is kinda not surprising, given the "$499" 3070 goes for 800+ Euro).
I thank green fanbois for that; apparently $1200 for a 2080 Ti was totally OK, even though MSRP was $999.

And why AMD isn't ramping up production any further is clear: TSMC's 7nm fabs are running at full capacity, whereas NV has Samsung's 8nm node nearly exclusively to itself.

Ooooo, touched a nerve, did I? Don't compare them to Nvidia, because you and every other AMD white knight worshipping at the altar of Lisa Su actually believed Frank Azor's bullshit about being able to buy one of these at launch. AMD and AMD fanboys are the reason RDNA is a massive failure in terms of regaining any market share from Nvidia. And it's a shame, really, because they are finally competitive at EVERY rung of the performance ladder. Oh well, there's always "2022", when RDNA3 will surely be widely available. Fuck that stupid hype job though. Any YouTuber pushing that garbage when they know full well that RDNA2 itself isn't likely to be widely available until late 2021 or 2022 is pushing FUD.

Oh, and the numbers, despite whatever anecdotal "evidence" you have, show that at least the lower-end cards in Ampere's lineup sold through in much larger numbers than their corresponding RDNA2 counterparts (i.e. the 3070 being available nearly every day vs. the 6800 never to be found).
 
That makes zero sense.

Doesn't it though? Y'all helped perpetuate the myth that supply for Ampere was going to be so incredibly meager compared to RDNA2 that AMD didn't feel the need to make any real effort, because they knew whatever they did would be a win in the minds of diehards, i.e. there would be no real blowback for complete lies told on Twitter by the head marketing execs.

So, in effect, the hardcore PC fans told them it was ok to completely shaft PC gamers and prioritize and redirect almost all of their 7nm production capacity to consoles because AMD is more profitable in that sector. If they've already won the hearts and minds of the most dedicated PC fanboys, who will buy their inferior products at above-market prices, why try to regain market share in a less profitable business when you only have so much capacity to go around? It's an utter fail, made more possible/acceptable by a community that's been repeatedly kicked (overpromised to and underdelivered) GPU cycle upon GPU cycle. All they've done is bring a semblance of competition back to the GPU space. They have not really moved the needle at all. But feel free to disagree. I'm sure you'll be perfectly happy paying the same or more for a worse GPU because you really WANT to believe.
 

Ascend

Member
Doesn't it though? Y'all helped perpetuate the myth that supply for Ampere was going to be so incredibly meager compared to RDNA2 that AMD didn't feel the need to make any real effort, because they knew whatever they did would be a win in the minds of diehards, i.e. there would be no real blowback for complete lies told on Twitter by the head marketing execs.
You really think that AMD makes business decisions based on loud minorities...? They want mass market share.

So, in effect, the hardcore PC fans told them it was ok to completely shaft PC gamers and prioritize and redirect almost all of their 7nm production capacity to consoles because AMD is more profitable in that sector.
Then why are both consoles also having shortages? Why do AMD's CPUs also have shortages?

If they've already won the hearts and minds of the most dedicated PC fanboys, who will buy their inferior products at above-market prices, why try to regain market share in a less profitable business when you only have so much capacity to go around? It's an utter fail, made more possible/acceptable by a community that's been repeatedly kicked (overpromised to and underdelivered) GPU cycle upon GPU cycle. All they've done is bring a semblance of competition back to the GPU space. They have not really moved the needle at all. But feel free to disagree. I'm sure you'll be perfectly happy paying the same or more for a worse GPU because you really WANT to believe.
I was looking for a 6800XT Nitro+. When I saw the price, I immediately said that I would not be getting it until it can be found within $50 of MSRP. At this point, I might actually be better off waiting for RDNA3 altogether. It's not like I can't use my current card at the resolution I play at.

The reality is that AMD simply did not foresee the situation, and the majority of the tech industry didn't. nVidia was (and still is) also having shortages after all. The only one not having significant shortages was Intel, because barely anyone wants their stuff anymore. Demand for tech in 2020 was a lot higher than anyone expected, and there is no way that AMD could have remedied this, considering TSMC is pretty much at max throughput.

Your way of thinking says a lot. It's not how the world works though. But hey. Do your thing. Just try not to derail the thread.
 
Last edited:
You really think that AMD makes business decisions based on loud minorities...? They want mass market share.


Then why are both consoles also having shortages? Why do AMD's CPUs also have shortages?


I was looking for a 6800XT Nitro+. When I saw the price, I immediately said that I would not be getting it until it can be found within $50 of MSRP. At this point, I might actually be better off waiting for RDNA3 altogether. It's not like I can't use my current card at the resolution I play at.

The reality is that AMD simply did not foresee the situation, and the majority of the tech industry didn't. nVidia was (and still is) also having shortages after all. The only one not having significant shortages was Intel, because barely anyone wants their stuff anymore. Demand for tech in 2020 was a lot higher than anyone expected, and there is no way that AMD could have remedied this, considering TSMC is pretty much at max throughput.

Your way of thinking says a lot. It's not how the world works though. But hey. Do your thing. Just try not to derail the thread.

AMD doesn't make allocation decisions based on a loud minority on the internet, but they sure as shit cater to them when it comes to PR. Case in point: the AMD subreddit. They're constantly on there backing up the insane speculation and saying things to the effect of 'well, wait and see, we're really going to blow you away' whenever anyone postulates insane theories about performance or availability. As I stated, they think they can game Nvidia with talk instead of results. Wake the fuck up.

Also, AMD CPUs, while hard to find, are not nearly as hard to find as the GPUs. Why? Because the CPUs are far more profitable for them. That's just a fact.

And are we totally forgetting, or giving them a pass, for launching later and having worse supply? Of course TSMC has limits, but they already had the advantage of seeing the dumpster fire that was Ampere's launch and somehow managed to make theirs worse.

Of course, this is all in service of the larger point: RDNA2 was just a big, fat fail and a disappointment to anyone who had reasonably high expectations that they'd finally turned the corner and gotten their GPU division back to respectability.
 
Last edited:

supernova8

Banned
On the other hand, for less than 2080Ti... :D

----------------------------

Clickbaity rumor mill:



TLDW:
  1. MCM design
  2. Fundamental improvements to the architecture, radical changes to the geometry pipeline inspired by Sony's work on the PS5
  3. Improved ray tracing
  4. Performance target at 2.5x of RDNA2 (compared to RDNA2 being 2x of RDNA1)
  5. Drastic IPC and clock improvements
  6. RTG's budget is now massive
  7. Improved super resolution
  8. 2022 launch

5:09 DOOWWDDD, DOOOOOOOOOOOOOOOOOOOOOOOOOOIIIUUDDDD

Problem is he provided lots of detail on stuff and then really just glided over ray tracing without much detail at all. Unless there is a clear indication otherwise, I'm expecting RDNA3's ray tracing to be as much of a disappointment against the competition as RDNA2's is.

In terms of raw performance, sure, RDNA2 is great, and if they could keep their prices down it'd be all gravy. I may end up going for a 6700 XT in the end.
 
Last edited:

Ascend

Member
AMD doesn't make allocation decisions based on a loud minority on the internet, but they sure as shit cater to them when it comes to PR. Case in point: the AMD subreddit. They're constantly on there backing up the insane speculation and saying things to the effect of 'well, wait and see, we're really going to blow you away' whenever anyone postulates insane theories about performance or availability. As I stated, they think they can game Nvidia with talk instead of results. Wake the fuck up.

Also, AMD CPUs, while hard to find, are not nearly as hard to find as the GPUs. Why? Because the CPUs are far more profitable for them. That's just a fact.

And are we totally forgetting, or giving them a pass, for launching later and having worse supply? Of course TSMC has limits, but they already had the advantage of seeing the dumpster fire that was Ampere's launch and somehow managed to make theirs worse.

Of course, this is all in service of the larger point: RDNA2 was just a big, fat fail and a disappointment to anyone who had reasonably high expectations that they'd finally turned the corner and gotten their GPU division back to respectability.
RDNA2 was not a fail performance-wise, especially because the majority was expecting at best 2080 Ti performance. Additionally... Considering nVidia's mess of a line-up, obviously reacting to AMD, I'd say AMD's results are bigger than their talk.

No one is getting a pass for messing up their launch. But you can't suddenly change everything in a month, especially in the current landscape worldwide. So watching the failure of the Ampere launch didn't really give AMD enough leeway at all to change much. They did mess up with their PR on Twitter, and with special treatment for a certain group of people. At least they weren't selling cards directly to miners because that's more profitable...
 

Rikkori

Member
1.8 times transistor density.
(the naming is a joke and definitely not comparable to how Intel names its own things)
Nope.
Even Apple only got 1.5x from 7nm -> 5nm, and that's historically the best case for the node, and wholly unattainable for a GPU, which you can see by looking again at RDNA 1 & 2 densities vs. what the paper specs for the node tell you.
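
The density point is easy to sanity-check against the widely reported die figures; treat the numbers below as approximate community estimates, not official specs:

    # Achieved density vs. the node's marketed peak logic density.
    # Transistor counts / die sizes are widely reported approximations.
    chips = {
        "Navi 10 (RDNA1, N7)": (10.3e9, 251.0),   # transistors, die area in mm^2
        "Navi 21 (RDNA2, N7)": (26.8e9, 520.0),
    }
    marketed_n7_density = 91.0   # MTr/mm^2, commonly cited peak figure for N7 logic

    for name, (transistors, area_mm2) in chips.items():
        achieved = transistors / 1e6 / area_mm2   # MTr/mm^2
        print(f"{name}: {achieved:.0f} MTr/mm^2, "
              f"{achieved / marketed_n7_density:.0%} of the marketed peak")
    # ~41 and ~52 MTr/mm^2 -- far below the headline density, so a paper "1.8x"
    # for the next node won't show up 1:1 in an actual GPU either.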
 

llien

Member
Problem is he provided lots of detail on stuff and then really just glided over ray tracing without much detail at all. Unless there is a clear indication otherwise, I'm expecting RDNA3's ray tracing to be as much of a disappointment against the competition as RDNA2's is.
The good part of the embarrassing "deep dive into RT performance" by DF was the revelation that there are several distinct steps in the RT pipeline, and ONLY ONE OF THEM is hardware accelerated.
Notably: denoising, blurring, and whatever <insert tech> somehow produces palatable reflections out of the noisy mess are NOT part of hardware RT.

See the catch?
AMD is likely beating NV at raw RT performance (as in "does this thing intersect that thing"); as I recall, the hardware figures hint at notably higher throughput than in NV cards.
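
To make the "only one step is hardware" point concrete, here is a schematic of a typical hybrid RT effect pass; these are placeholder functions for illustration only, not a real graphics API:

    # Schematic of a hybrid ray-traced effect (e.g. RT reflections).
    # Placeholder functions for illustration, not a real API.

    def build_or_refit_bvh(scene):        # per-frame driver/compute work
        return {"bvh": scene}

    def trace_rays(bvh, rays):            # the ONLY fixed-function stage:
        return [None] * len(rays)         # ray/box + ray/triangle intersection

    def shade_hits(hits):                 # ordinary shader ALU work
        return [(0.0, 0.0, 0.0) for _ in hits]

    def denoise_and_blur(noisy_image):    # compute-shader filtering, not "hardware RT";
        return noisy_image                # this is where the palatable image comes from

    def rt_reflection_pass(scene, rays):
        bvh   = build_or_refit_bvh(scene)
        hits  = trace_rays(bvh, rays)     # sparse rays (e.g. ~1 per pixel) -> noisy result
        noisy = shade_hits(hits)
        return denoise_and_blur(noisy)    # a big chunk of both frame time and final quality

Which of the non-intersection stages a vendor (and a given game) optimizes can easily matter as much as the raw intersection rate.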

Even Apple
1.5x is not "only".
There are other considerations/reasons for a more sparse transistor design.
I chuckled at "even".

AMD shifted from tightly packing transistors and running them at lower clocks to sparser designs with higher clocks.
 

supernova8

Banned
See the catch?
AMD is likely beating NV at raw RT performance (as in "does this thing intersect that thing"); as I recall, the hardware figures hint at notably higher throughput than in NV cards.
I'm not really sure what you're referring to, but whether it's through specific RT cores or whatever, there's no getting around the fact that NVIDIA is better at ray tracing right now. (In other words, even if AMD's RT is better when it comes to the non-RT-core parts, either they increase those cores or they aren't as performant overall.)

Even if RDNA3 gets better raytracing, NVIDIA's not going to leave their RT performance as is. It may even be the case that they handicapped the RT performance this year because they knew/expected AMD would still have a relatively poor RT showing.


I think I could agree that AMD is not letting NVIDIA win easily, but I don't really see AMD GPUs "winning" convincingly in anything, and therefore I'm still on the fence about buying RDNA2. I will probably go for a 6700 XT if the prices are great (i.e. cheap), but AMD's pricing for the 6800/6800 XT suggests they will kinda just match the 3060/3060 Ti, and if that's the case, I'd rather go for a 3060 Ti.
 
Last edited:

llien

Member
I'm not really sure what you're referring to, but whether it's through specific RT cores or whatever, there's no getting around the fact that NVIDIA is better at ray tracing right now. (In other words, even if AMD's RT is better when it comes to the non-RT-core parts, either they increase those cores or they aren't as performant overall.)

DF said that, of the 4 steps involved in anything RT, only 1 is hardware accelerated (the ray intersection test).
The rest is not.

NV is FASTER in green-sponsored games; AMD is FASTER in red-involved games (Dirt 5).

But to the point above: it's not about the hardware being capable of casting rays at all; AMD is likely ahead of NV on that front. It's the other 3 steps that make the large performance difference, and none of them is "hardware RT"; they're software tricks to achieve some effect.

AMD will likely invest in that stuff just in case, but it's notable that even 2 years after its introduction, RT remains largely an "fps destroying" feature with highly questionable visual benefits (e.g. CP2077). I won't even mention "effects yet unseen", which is a plain lie.

I may be remembering wrong, but didn't DF disable the de-noising and it remained much slower?
There is nothing coming out of hardware RT step (intersections) that you could simply visualize as is.
 

spyshagg

Should not be allowed to breed
DF said that, of the 4 steps involved in anything RT, only 1 is hardware accelerated (the ray intersection test).
The rest is not.

NV is FASTER in green-sponsored games; AMD is FASTER in red-involved games (Dirt 5).

But to the point above: it's not about the hardware being capable of casting rays at all; AMD is likely ahead of NV on that front. It's the other 3 steps that make the large performance difference, and none of them is "hardware RT"; they're software tricks to achieve some effect.

AMD will likely invest in that stuff just in case, but it's notable that even 2 years after its introduction, RT remains largely an "fps destroying" feature with highly questionable visual benefits (e.g. CP2077). I won't even mention "effects yet unseen", which is a plain lie.


There is nothing coming out of hardware RT step (intersections) that you could simply visualize as is.

They did disable it, either in Quake or Legion, and captured the performance.
 

supernova8

Banned
NV is FASTER in green-sponsored games; AMD is FASTER in red-involved games (Dirt 5).


This Crytek ray tracing benchmark apparently has no outside input from NVIDIA or AMD, and you can see that NVIDIA is comfortably ahead (the 6800 XT, 6900 XT, RTX 3080 and RTX 3090 comparisons are what I'm referring to specifically).

Even at 1080p NVIDIA wins, and the gap gets wider as you move up to 1440p, and more so when you switch to 4K. Considering these cards are all above $600 (MSRP, at least), you'd probably expect the target customers to be playing at 1440p minimum and probably 4K. Don't get me wrong, I think AMD has come a million miles since their old shitty GPUs, but NVIDIA is still ahead.

I don't think it really matters how you break down the different stages of ray tracing. If NVIDIA's performance is better, then NVIDIA's performance is better, regardless of how you slice it.

Besides, maybe I'm mistaken, but I remember someone (maybe not DF?) saying that while RT in Dirt 5 ran better on AMD, the RT on AMD was lower quality than the RT on NVIDIA. If so, it would suggest that AMD's RT implementation is throttling calculations to make up the difference. Again, I could be mistaken, but I think I did see it.
 

llien

Member
They did disable it, either in Quake or Legion, and captured the performance.
As I said, the RT hardware step alone doesn't produce anything palatable; you can't disable everything else and still get it rendered. You could, perhaps, disable some of the "other steps".


This Crytek ray tracing benchmark apparently has no outside input from NVIDIA or AMD, and you can see that NVIDIA is comfortably ahead (the 6800 XT, 6900 XT, RTX 3080 and RTX 3090 comparisons are what I'm referring to specifically).
That's curious (ignoring what wccf is), but it's far from obvious what is being tested to begin with.
It's the demo that was first shown on a Vega GPU, with no hardware RT being used; I'm puzzled as to whether hardware RT is used in this demo at all.

The hardware intersection bit I'm referring to is about the "up to" numbers: NV claimed "up to 10 billion intersections" per second for Ampere (which is 10 times more than Tesla :)), vs AMD's... 380 billion on the XSeX (again: up to).

Fishiness of "up to" aside (e.g. imagine "it's that many only if all the structures fit into cache"), "it is largely not about intersections at all" explains why we do not see even remotely as major an RT perf jump as is claimed.

Last but not least, how many games based on Crytek's RT engine are out there? Not even one, right?
 
Last edited:

Buggy Loop

Member
Dirt 5 is not faster with RT on compared to Nvidia because of RT cores, but because the rasterization baseline is 30% higher to begin with, since they fucked up VRS on the Nvidia side and made an exclusive VRS algorithm only for AMD.

This has been talked about so many times already; it's fucking Groundhog Day with you. But, much like the Watch Dogs FUD you were pushing with a broken RT game, where you got called out over and over, you only filter the news that confirms your bias.
 