
AMD Radeon RX6800/RX6800XT Reviews/Benchmarks Thread |OT|

Radeon is only relevant nowadays because of consoles and cheaper prices. Now, however? nVidia is light years ahead with all of their features and stuff like that. On top of that, something like Infinity Cache must be very expensive to incorporate into a chip; I would not be surprised if Ampere were cheaper to build.

Navi 21 is 519.8 mm²
GA102 is 628 mm²

Infinity Cache takes up a lot of die space, sure, but it's still an overall smaller die than GA102, which means it should cost less to manufacture and will typically have better yields.

GA102 could be cheaper, but the only reason for that would be if Samsung was basically giving 8nm wafers away to Nvidia at cost with zero margins. TSMC N7, meanwhile, will be a more expensive node to use.
By the way, the key word there is "could". We don't actually know.

Also, Nvidia uses GDDR6X, which is significantly more expensive than regular GDDR6. Not just that, but Nvidia co-developed it, so it ultimately cost them a fuckload of money. Add to that what is known to be a very expensive Founders Edition cooler and all of the insane VRM requirements made necessary by Ampere's high power consumption. All told, it wouldn't surprise me if, while the GA102 die could be cheaper than the Navi 21 die, it's ultimately more expensive to make an RTX 3000 series graphics card.

Exactly, and that way it's eating into compute capabilities, unlike nVidia's solution, if my assumption is right...

This is also wrong. The RT Accelerator is a fixed function unit that exists in the Compute Unit. They just didn't give it fancy branding like "RT Core" as Nvidia did.
It works differently to how Nvidia does it, but it is dedicated hardware.
 
This is also wrong. The RT Accelerator is a fixed function unit that exists in the Compute Unit. They just didn't give it fancy branding like "RT Core" as Nvidia did.
It works differently to how Nvidia does it, but it is dedicated hardware.
My alternator on my vehicle is dedicated to producing power. When my sound system pulls ~400 amps of current at 15 volts, it's not "free" power, even though my alt can sustain the majority of that while my battery bank supplies the rest. It's still taking power from my engine and reducing my gas mileage, similar to how AMD could lose performance by enabling raytracing. With that being said, it's not free with Nvidia's solution either, but it costs less performance. Probably not the best analogy, but whatever.
 

llien

Member


Someone complaining about coolers on these is a clear "full of shit" indicator.

[Chart: fan noise under load]
 

mitchman

Gold Member
Correct. But it's specific to Sony, and specific to internal use for the console itself. You're not going to see that translate to the PC, which will use DXR.

That's my understanding anyway. Insomniac was literally boasting that it was best in class compared to what they were seeing from others.
Yes, and? The API used is irrelevant here, both support HWA RT just fine. GNM has traditionally had the performance edge with slightly less API overhead, though.
 

llien

Member
Another piece of evidence here is their site Techspot, which is spreading the false information: https://www.techspot.com/news/86900-capacitor-issues-causing-rtx-30803090-crashes.html
So we are discussing HU, which has data that aligns pretty damn well with other sources.

Igor's Lab has posted an interesting investigative article where he advances a possible reason for the recent crash to desktop problems for RTX 3080 owners.

What exactly is false about capacitor issues again?
 
Navi 21 is 519.8 mm²
GA102 is 628 mm²

Infinity Cache takes up a lot of die space, sure, but it's still an overall smaller die than GA102, which means it should cost less to manufacture and will typically have better yields.

GA102 could be cheaper, but the only reason for that would be if Samsung was basically giving 8nm wafers away to Nvidia at cost with zero margins. TSMC N7, meanwhile, will be a more expensive node to use.
By the way, the key word there is "could". We don't actually know.

Also, Nvidia uses GDDR6X, which is significantly more expensive than regular GDDR6. Not just that, but Nvidia co-developed it, so it ultimately cost them a fuckload of money. Add to that what is known to be a very expensive Founders Edition cooler and all of the insane VRM requirements made necessary by Ampere's high power consumption. All told, it wouldn't surprise me if, while the GA102 die could be cheaper than the Navi 21 die, it's ultimately more expensive to make an RTX 3000 series graphics card.

But the fact is that the new RDNA2 and Ampere cards are priced very similarly.

And since Nvidia is beating AMD by a huge margin when new exciting features such as RT are used, Nvidia is able to provide better bang for the buck in this case!

To me it seems that AMD using the silicon budget for a big cache like Infinity Cache, instead of more CUs or dedicated tensor cores, is not working for them. Maybe AMD should try to secure GDDR6X (or something similar) for upcoming RDNA2/3 products and use the available silicon for computational assets.

At this point the RDNA2 GPUs cannot be recommended for new and upcoming AAA/AA titles which, by and large, will use RT and provide support for DLSS.
 

regawdless

Banned
I find some of the enthusiasm for AMD kinda weird. 6800XT vs RTX3080.

Let's take PS5 vs Series X as an example. While they have nearly identical performance and cost the same, imagine the Series X having only around 60% of the raytracing performance, making it basically not feasible to use, AND Sony having DLSS while MS has no alternative.
It would be a bloodbath, and I doubt people would celebrate MS for this achievement.

You can think what you want about raytracing, but it will be supported by a lot of upcoming AAA games, and smaller games like Mortal Shell are receiving patches. In one form or another, it will become more and more important. We're talking 650+ bucks enthusiast cards here, and the 50 bucks difference is insignificant at this price point.
 

llien

Member
Cards have sold out in their first weeks since forever.
How good or bad the situation is will become visible in a couple of months.

But the fact is that the new RDNA2 and Ampere cards are priced very similarly.

You are missing the elephant in the room: Ampere is priced in wild contrast to NV's usual pricing practices.
Guess what the reason for that is.

Let's take for example PS5 vs Series X. While having nearly identical performance and costing the same, imagine the Series X having only around 60% of the raytracing performance, making it basically not feasible to use, AND Sony having DLSS while MS has no alternative.
Current RT figures are worth jack shit, as it is all NV sponsored titles, most of which are not even using Microsoft's API.
It is not even remotely imaginable that RT could become a mainstream feature without being optimized for RDNA2.

If it ever gets there, that is. I remember Godfall promised something.
 

llien

Member
Ahahaaaaa:

The new 2.0.95 update has added ray tracing into Godfall, but only for “applicable AMD GPUs”.

“This quick update, 2.0.95, is just for PC and enables the ability for players to play Godfall with ray tracing on while using applicable AMD GPUs,” the patch notes said. “Note: players will need to have the latest AMD GPU drivers to enable this. Ray tracing using NVIDIA GPUs will come with a future update.”



A- is complaining?
An A- for an outstanding cooler is not passing the "full of shit" test.
 

psorcerer

Banned
So they are going to gimp PC versions from now on?

No, but the implementations will need to perform adequately on consoles.
Nobody will create a game with Minecraft-like graphics just so they can use path tracing on PC.

It's not just implementation. And even if it were, how often have we seen AMD overtake Nvidia with driver support that's maximum efficiency with EVERY game released? The drivers are shitty, or have been for several years, which is why we don't use the boards here. I'm not seeing any difference in Minecraft that's worthwhile compared to Nvidia boards.

Drivers are irrelevant here.
Current implementations AFAIK use high-level primitives from NV frameworks.

What is a console game? Any game that comes out on consoles? A game that only comes out on consoles?

Any game that needs to perform on console.
 

regawdless

Banned
looks like Nvidia is yet again the superior choice

I'm very curious how AMD will try to compete with Nvidia in the future. They went all in on rasterization with extremely high clocks, Infinity Cache, etc., and are able to match Nvidia in this regard, while leaving a big hole in raytracing performance.
Nvidia being so far ahead with dedicated HW for raytracing does not bode well for AMD.

I think it'll make more sense for them to cover lower-end products, with great efficiency and good performance per dollar.

BUT I'm still glad that they're going after Nvidia in the high-end market. Nvidia needs to be pushed, otherwise they'll pull a 20xx gen again.
Good guy AMD makes my Nvidia cards better and cheaper I guess.
 

llien

Member
I'm very curious how AMD will try to compete with Nvidia in the future

I wonder how people will cope when AMD, after trouncing Ampere with RDNA2, continues the trend the way it went with Intel. :messenger_beaming:

RT and upscaling are the only remaining points in Ampere's favour (it loses on price, VRAM size, and power consumption, ties/loses at 1440p, and is 5%-ish ahead at 4K with no SAM enabled), but the RT situation is far from clear, while Lisa is after upscaling sprinkled with fancy techie words, it seems.


Infinity cache allows AMD to get away with ridiculously narrow mem bus.


Whether NV can produce and sell the claimed cards at the claimed price any time soon is dubious at best; I expect this situation to last until hardware is bumped.



It's amazing how we got from "AMD would barely beat 2080Ti, if at all" to taking on 3090 (even 6800XT beats it in a number of games).
 

ZywyPL

Banned
I'm very curious how AMD will try to compete with Nvidia in the future. They went all in on rasterization with extremely high clocks, Infinity Cache, etc., and are able to match Nvidia in this regard, while leaving a big hole in raytracing performance.
Nvidia being so far ahead with dedicated HW for raytracing does not bode well for AMD.

Well, given that rasterization performance comes directly from the number of cores/CUs, AMD can always just double/triple/quadruple the RT cores attached to the CUs once the 5nm process node becomes widely available, and hence easily double/triple/quadruple the RT performance.
 

regawdless

Banned
I wonder how people will cope when AMD, after trouncing Ampere with RDNA2, continues the trend the way it went with Intel. :messenger_beaming:

RT and upscaling are the only remaining points in Ampere's favour (it loses on price, VRAM size, and power consumption, ties/loses at 1440p, and is 5%-ish ahead at 4K with no SAM enabled), but the RT situation is far from clear, while Lisa is after upscaling sprinkled with fancy techie words, it seems.


Infinity cache allows AMD to get away with ridiculously narrow mem bus.


Whether NV can produce and sell the claimed cards at the claimed price any time soon is dubious at best; I expect this situation to last until hardware is bumped.



It's amazing how we got from "AMD would barely beat 2080Ti, if at all" to taking on 3090 (even 6800XT beats it in a number of games).

You, sir, have an agenda.

The price difference is insignificant, especially considering Nvidia offers more features. It looks like the 3080 is a bit faster at 1440p, while with SAM, the 6800XT is a tiny bit faster. VRAM size doesn't matter, as benchmarks show that at higher resolutions the 6800XT has bandwidth issues that the 3080 doesn't have, despite the fancy techie word "infinity cache". Nvidia has way faster RAM.

But why am I even arguing? You've shown time and time again that you're the absolute AMD fanboi. Keep fighting the good fight and enjoy your AMD GPU.

Just don't turn on raytracing.
 
I wonder how people will cope when AMD, after trouncing Ampere with RDNA2, continues the trend the way it went with Intel. :messenger_beaming:


AMD didn't trounce Ampere. They're behind in literally every possible aspect outside of VRAM size. Like, literally everything. Even productivity aspects, AV decoding. Ray tracing, DLSS, performance. They lose at every resolution: 1080p, 1440p and 4K. You have to look really selectively at tests to conclude that they tie at 1440p. They don't.

The price is the only benefit.


 

psorcerer

Banned
Well, given that rasterization performance comes directly from the number of cores/CUs, AMD can always just double/triple/quadruple the RT cores attached to the CUs once the 5nm process node becomes widely available, and hence easily double/triple/quadruple the RT performance.

Will not work. There is no such thing as RT performance. It's an NV marketing term.
 

regawdless

Banned
Well, given that rasterization performance comes directly from the number of cores/CUs, AMD can always just double/triple/quadruple the RT cores attached to the CUs once the 5nm process node becomes widely available, and hence easily double/triple/quadruple the RT performance.

Don't believe it's that easy.
 

spyshagg

Should not be allowed to breed
Once again my educated predictions come true.

The REAL value in these next-gen GPUs is the RT. Without that - you have nothing but an upgraded 2080Ti for rasterization.

AMD boards are nearly 2x slower on average than the 3090. And that's without using DLSS 2.0.

That's the reason these Nvidia boards cost what they do. They deliver where it counts.

Raytracing is "where it counts" in 2021? RT is dead for Navi 2 in 2021; the 10GB is dead for the 3080 by 2021.

What are you smoking? Give me it.
 

Ascend

Member
AMD didn't trounce Ampere. They're behind in literally every possible aspect outside of VRAM size. Like, literally everything. Even productivity aspects, AV decoding. Ray tracing, DLSS, performance. They lose at every resolution: 1080p, 1440p and 4K. You have to look really selectively at tests to conclude that they tie at 1440p. They don't.

The price is the only benefit.
This is objectively false.
AMD wins out at both 1080p and 1440p
AMD has lower power consumption
AMD has smart access memory while nVidia does not (if you discount AMD's DLSS equivalent coming in the future, you also have to discount nVidia's Smart Access Memory equivalent coming in the future)
AMD is performing significantly better in some of the most recent titles, like AC Valhalla.

It is true that AMD didn't trounce Ampere, but Ampere is not trouncing AMD either. It might in the minds of biased ones, but it factually does not. If it had, nVidia would not have priced their cards this 'low'.
 
yes, in >2022, maybe. Different ball game when we get there.


How are you reaching 2023 when nearly all the big games released at the end of this year sport ray tracing, and a lot of smaller ones do as well? Sony is releasing nearly every 1st party game with ray tracing. This will explode just now going into 2021. I think you'll have trouble finding an AAA game that doesn't have ray tracing by this time next year.
 
AMD wins out at both 1080p and 1440p
uh-huh.... according to all tests I've seen, AMD is at best equal/trading blows on a game-by-game basis. They don't "win" anywhere. And the moment any kind of RT is activated they always lose horribly....

you save 50 bucks and lose DLSS and RT, basically. In the price regions we are talking about, I'd call that really bad value.
 

Mister Wolf

Gold Member
This is objectively false.
AMD wins out at both 1080p and 1440p
AMD has lower power consumption
AMD has smart access memory while nVidia does not (if you discount AMD's DLSS equivalent coming in the future, you also have to discount nVidia's Smart Access Memory equivalent coming in the future)
AMD is performing significantly better in some of the most recent titles, like AC Valhalla.

It is true that AMD didn't trounce Ampere, but Ampere is not trouncing AMD either. It might in the minds of biased ones, but it factually does not. If it had, nVidia would not have priced their cards this 'low'.

[Chart: relative performance, 2560x1440]
 

llien

Member
Well, given that rasterization performance comes directly from the number of cores/CUs, AMD can always just double/triple/quadruple the RT cores attached to the CUs once the 5nm process node becomes widely available, and hence easily double/triple/quadruple the RT performance.
It ain't clear at all where we currently are on the RT front.
I suspect just boosting RT cores is not enough; you need to address the memory bandwidth issue, and here is where it is actually AMD who has the secret sauce from Zen 3, the "Infinity Cache".
 

llien

Member
They don't "win" anywhere.

TPU is fairly awkward; check this, same site, one day later, but using Zen 3 instead of a 9900K:

[Chart: relative performance, 2560x1440]


 

Ascend

Member
uh-huh.... according to all tests I've seen, AMD is at best equal/trading blows on a game-by-game basis. They don't "win" anywhere. And the moment any kind of RT is activated they always lose horribly....
Then you didn't look very hard...
[Chart: 1440p average FPS]

 
Overall I think this is a pretty big win for AMD/Radeon compared to their past performance that a lot of people in this thread seem to be missing.

Reference Cooler:
AMD was notorious for releasing terrible "blower" style reference coolers that ran loud with high temps. Making only AIB models viable buys.

This time around their cooler seems to be pretty damn good, essentially matching the expensive custom 3080 reference cooler in heat/noise. That is a pretty huge improvement from where they were in the past. Granted, not everyone cares about reference models and AIBs will take the lion's share of sales, but this is still a huge improvement and a nice win for AMD, showing that they are serious about competing and are willing to up their game and improve on their obvious weak spots.

Power Draw:
For the last few generations of GPUs the Radeon family were consistently lambasted for higher power draw and hot-running, inefficient designs. This time around we see that AMD have really focused on improving here and are actually running mostly below their specified 300W. This time around they draw less power across the board compared to Nvidia, which is a huge turnaround/upset compared to the previous generation's landscape. Another great win for AMD here.

Efficiency:
This goes hand in hand with power draw, but AMD has massively increased their efficiency even from RDNA1 to RDNA2 while running on the same 7nm node. You have to remember that RDNA1 was already a massive efficiency improvement over Vega. This is pretty impressive stuff and allows these GPUs to run at very high clocks while reducing power draw, a really impressive outcome for AMD; they seem to have a really great engineering team behind these RDNA2 cards.

Rasterization Performance:
For the last 5-7 years of GPU releases AMD was mostly behind in performance at the highest end of the stack. Sometimes by a lot sometimes by a little but they were almost always behind in performance at the top end of the stack by a noticeable amount.

Prior to the RDNA2 reveal we had many rumours and tons of "concerned" Nvidia fans stating, almost as an outright fact, that RDNA2 GPUs would definitely, at the very top end, be at 2080ti levels of performance and might compete with a 3070. We were told that in an absolute best case scenario maybe they would scrape out 2080ti + 10-15% performance.

AMD could never compete with the powerhouse performance of Nvidia's latest GPUs, you see; there were "power king" meme threads, the way it is meant to be played, after all. As some later rumours started becoming clearer and it seemed likely that AMD might compete with the 3080, we were told "Yeah, but they will never touch the 3090! lolol". Along comes the 6900XT, competing for $500 less. And here we are: Radeon group has seemingly done the impossible and is competing at the high end across the stack, even with the Hail Mary release of the 3090. This is a pretty incredible improvement for Radeon group and cannot be overstated.

Release Cadence:
Previous Radeon GPUs tended to arrive much later than their Nvidia counterparts, oftentimes more than a year behind Nvidia, and often with worse performance, or performance close to that of a GPU that was soon to be replaced by a newer Nvidia model. Here we have essentially release parity, with only a month or two of difference between releases. AMD have finally caught up to Nvidia here, which again is a huge improvement.

Current rumours have AMD set to potentially release RDNA3 this time next year, which would be a huge improvement and an upset to the normal two-year development/release cycle we normally see for GPUs. We will have to wait and see how they progress, but they already went from RDNA1 to RDNA2 in a little over a year.
------

The above are some pretty fantastic improvements. In addition previously AMD cards were missing RT hardware/functionality altogether and they now have RT functionality performing at around 2080ti levels or higher depending on the title. Granted not as performant as Nvidia's 2nd generation of Ray Tracing with Ampere, but so far almost all of the RT enabled games were optimized/tested against Nvidia cards, most of them being sponsored by Nvidia to the point that Nvidia themselves coded most of the RTX functionality for some titles.

When you look at Dirt 5, which is the first RT enabled game optimized for AMD's solution it seems to perform pretty well on AMD cards. It will be interesting to revisit the RT landscape a year from now and see how these cards perform on (non Nvidia sponsored) console ports and on AMD sponsored titles.

Nvidia will still have a clear win with RT overall for this generation, if you need the best RT performance you should definitely go with Nvidia. However I find it interesting that 2080ti levels of RT performance (and some games higher) is suddenly considered "unplayable trash" but that is for another discussion.

Features:
Much like Nvidia was first to market with MS DirectStorage and rebranded it "RTX IO", and how Nvidia was first to market with ray tracing and rebranded DXR as "RTX", AMD here are first to market with resizable BAR support in Windows as part of the PCIe spec. Like Nvidia, they have rebranded this feature as SAM.

SAM seems like a cool feature that can offer in a lot of cases 3-5% performance increases for essentially free. There are some games that don't benefit from this, and some outliers that get up to 11% increase such as Forza. Overall it seems like a cool tech that AMD are first to bring to the table.

Nvidia are working on adding their own resizable BAR support but we have no idea how long it will take them to release it or what Mobo/CPU combos will be certified to support it.

Regarding DLSS, this is definitely somewhere that AMD are behind. AMD are currently working on their FidelityFX Super Resolution technology to compete with DLSS. We don't know if this will be an algorithmic approach or if it will leverage ML in some way, or possibly a combination. AMD have mentioned their solution will work differently to DLSS so we can only guess for now until we have more information.

What we do know is that it will be a cross platform, cross vendor feature that should also work on Nvidia GPUs. If they go with an algorithmic approach then it may end up working on every game or a lot of them as an on/off toggle in the drivers. Of course this is all speculation until we have more concrete info to go off. What we do know is they are currently developing it and it should hopefully release by end of Q1 2021.

Value Proposition:
The 6800XT offers roughly equal rasterization performance with a 3080. It does this with less power draw and still runs cool and quiet. It has 16GB of VRAM vs 10GB VRAM on the 3080.

It offers RT, but still behind the 3080. I know this is a big issue for a lot of people; believe it or not, there are actually tons of people out there who don't care about RT at all right now and won't for the foreseeable future. There are also people who view it as a "nice to have" but don't believe the benefits are currently worth the trade-off, until hardware improves to a much higher level and more games go heavily into designing their games around RT.

Personally I think RT is the future; in 10 years time almost all games will support RT and hardware will be fast enough, with engines designed around it, that performance will be great. That time is not quite right now though, at least for me. So I fall into the "nice to have" group at the moment.

AIB models seem to offer some overclocking headroom.

It offers SAM, which is a nice bonus but is currently missing a DLSS competitor. It is a little disappointing that Super Resolution was not ready for launch but they are currently working on it so hopefully the wait should not be too long.

Would it be more competitive if it was priced at $600 rather than $650? Definitely, and I can understand the argument that for $50 more you can get better RT, DLSS and CUDA. The 3080 is a fantastic card, after all. Of course, the chances of actually getting one for anywhere near MSRP even after shortages are sorted is another story altogether. Especially for AIB models.

The reality is that AMD and pretty much all manufacturers are wafer constrained on the supply side. AMD CPUs are very high profit margin products on the same 7nm wafer as a GPU. The consoles are also produced in huge quantities, AMD is on TSMC 7nm which is not cheap.

The simple reality is that right now AMD would sell out the entirety of their stock regardless of the price. They have no need to lower the price currently as they would simply be losing guaranteed money. AMD are also looking to position themselves out of the "budget" brand market.
 

Mister Wolf

Gold Member
TPU is fairly awkward; check this, same site, one day later, but using Zen 3 instead of a 9900K:

[Chart: relative performance, 2560x1440]



I'm not worried about that SAM bullshit any more than you are factoring in DLSS.
 
This is objectively false.
AMD wins out at both 1080p and 1440p
AMD has lower power consumption
AMD has smart access memory while nVidia does not (if you discount AMD's DLSS equivalent coming in the future, you also have to discount nVidia's Smart Access Memory equivalent coming in the future)
AMD is performing significantly better in some of the most recent titles, like AC Valhalla.

It is true that AMD didn't trounce Ampere, but Ampere is not trouncing AMD either. It might in the minds of biased ones, but it factually does not. If it had, nVidia would not have priced their cards this 'low'.


No, it doesn't. It loses at every resolution.


The Reddit fellow who gathered all the 4K benchmarks will also get the data from every website for 1440p and 1080p and compile the results. You will soon have confirmation that AMD loses at every resolution. It loses with SAM as well because, as Gamers Nexus pointed out, it's not a universal boost. It's per game. You get 1% higher, you get 2%, or you get nothing at all. Hitman 2 was the only game, as far as I know, that had a big performance boost, of 10%. That's it.

AMD performs better in Valhalla, true. The 3080 performs significantly better in Legion. It depends on the game when you have this sort of anomaly. I've seen that some are trying to create a narrative that AMD does better in newer games. It does not. It does worse in all of them outside of Assassin's Creed and Dirt.
 

The Skull

Member
Overall I think this is a pretty big win for AMD/Radeon compared to their past performance that a lot of people in this thread seem to be missing.

Think you're in the wrong thread buddy. No room here for nuanced takes!
 

llien

Member
I think it needs to be remembered what is behind those average figures across games.
Here is TPU's chart in a nicer way, even on the i9-9900K:

[Chart: average FPS, 2560x1440]



[Chart: average FPS, 3840x2160]


Just picking the right number of "sponsored" games lets one get the desired results.


I'm not worried about that SAM bullshit any more than you are factoring in DLSS.



DLSS is running something at a lower resolution and pretending it didn't happen; SAM is a feature one has when using Radeons with the best gaming CPUs available at the moment.
It's ridiculous to equate the two.
 

psorcerer

Banned
How are you reaching 2023 when nearly all the big games released at the end of this year sport ray tracing, and a lot of smaller ones do as well? Sony is releasing nearly every 1st party game with ray tracing. This will explode just now going into 2021. I think you'll have trouble finding an AAA game that doesn't have ray tracing by this time next year.

And what exactly makes you think that all these games will perform better on NV?
 

Mister Wolf

Gold Member
I think it needs to be remembered what is behind those average figures across games.
Here is TPU's chart in a nicer way, even on the i9-9900K:

[Chart: average FPS, 2560x1440]



[Chart: average FPS, 3840x2160]


Just picking the right number of "sponsored" games lets one get the desired results.






DLSS is running something at a lower resolution and pretending it didn't happen; SAM is a feature one has when using Radeons with the best gaming CPUs available at the moment.
It's ridiculous to equate the two.

DLSS will be used by more consumers than SAM. That's even including people who currently use an AMD CPU like myself.
 
And what exactly makes you think that all these games will perform better on NV?

Since Nvidia has the better-designed and higher-performing ray tracing implementation, I went with that. But bullshit like what happened with Dirt 5 now, or Godfall implementing ray tracing only on AMD and later on Nvidia, isn't out of the question. At the moment, an AMD sponsored game means actively gimping performance on Nvidia and removing features that are beneficial for the paying customer. AMD turned out the bad guy in the end: high prices on CPUs the first second they saw themselves on top, doing shit like this with the graphics cards now. There's really not much reason to cheer for them. They're being shitbags.
 

Papacheeks

Banned
DLSS will be used by more consumers than SAM. That's even including people who currently use an AMD CPU like myself.

No. SAM is not proprietary. Once Intel incorporates it, which they said they are working on, more people will be able to take advantage of it, not just those with a Ryzen 5000 series.
 
About time it happened to Nvidia after all their GameWorks bullshit.


It's not the same. Those were proprietary tech. The shit AMD does now, with SAM, with raytracing, with gimping performance in AMD sponsored games: these are standard features that work on everything. They're just holding them hostage from Nvidia or Intel. They've become complete scum in a very short amount of time. Even the biggest cheerleaders for them shouldn't praise this. Locking features from competing parts even if they can run them? That's too scummy. They locked SAM to their latest parts to make you buy a combo, even: mobo, CPU and GPU.
 

psorcerer

Banned
Since Nvidia has the better-designed and higher-performing ray tracing implementation,

I'm not sure how you arrived at that.
What exactly is a "raytracing implementation"?
We do know from the past that all the "GameWorks" implementations were subpar at best. Why do you think it suddenly got better?

At the moment, an AMD sponsored game means actively gimping performance on Nvidia and removing features that are beneficial for the paying customer.

I'm not sure either why "gimping" is used here. You have two different vendor-specific implementations. They always perform worse on the other vendor's hardware.
I'm just telling you that it may mean nothing for future games, where in-house implementations will be used.
 

The Skull

Member
Locking features from competing parts even if they can run them? That's too scummy

I absolutely agree. Open standards, so that everyone, regardless of which hardware vendor you go with, can enjoy acceptable performance. I still can't deny it's funny to see it happen to Nvidia though, albeit I don't wish for the trend to continue.
 

rodrigolfp

Haptic Gamepads 4 Life
"Ray Tracing (Hybrid Rendering) - 3080 pulls significantly ahead in most titles.
Ray Tracing (Path Tracing) - 3080 pulls way ahead. (Minecraft and Quake DXR are the only PT titles at the moment)
Ray Tracing DLSS - 3080 Pulls even further ahead, but AMD's Super Resolution is not available yet to compare.
Productivity Performance (Blender etc..) - 3080 still maintains its general lead in most cases due to the massive CUDA advantage."

So, Nvidia again...
 