
RDNA2 Isn't As Impressive As It Seems

DLSS 2 and drivers are the main benefits NVIDIA has right now. They would make me reconsider an AMD card. Not in love with how they gimped the VRAM on the 3080, though.
 
Yeah, OP is just a giant Nvidia fanboy.
Or he prefers a better feature set for the future? No one is taking away the joy you get from your AMD cards. Some just want to be future-proof instead of having the rasterization king, when DLSS and ray tracing are more performant on Nvidia. I can buy both enthusiast cards right now, but guess what I'll choose, since I want to be future-proof.
 

llien

Member
Jokes aside, besides beating NV all around on a per-transistor basis (the difference in power consumption could be "justified" by the process difference), AMD did an amazing job on the memory front.

RDNA2 is using slower VRAM with a smaller bus yet achieves better performance, an area where NV had the edge not so long ago.
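For reference, here's what that works out to in raw bandwidth, using the published bus widths and memory speeds (a quick sketch; exact clocks vary by SKU):

```python
# Peak memory bandwidth in GB/s = (bus width in bits / 8) * data rate in Gbps.
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

# Published specs: the 6900 XT uses 256-bit GDDR6 at 16 Gbps;
# the RTX 3090 uses 384-bit GDDR6X at 19.5 Gbps.
print(f"6900 XT:  {bandwidth_gbs(256, 16.0):.0f} GB/s")   # 512 GB/s
print(f"RTX 3090: {bandwidth_gbs(384, 19.5):.0f} GB/s")   # 936 GB/s
```

And that's before counting the Infinity Cache, which is how AMD gets away with the narrower bus.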
 
Jokes aside, besides beating NV all around on a per-transistor basis (the difference in power consumption could be "justified" by the process difference), AMD did an amazing job on the memory front.

RDNA2 is using slower VRAM with a smaller bus yet achieves better performance, an area where NV had the edge not so long ago.
So better current-gen performance in older games is what matters now? What about games that use better techniques like ray tracing or DLSS? You can't tell the difference anyway, so what does it matter to you?
 

BluRayHiDef

Banned
Feels like OP is really trying to justify buying a 3090 with this thread.

I'm a grown man; I don't need to justify anything; I'm satisfied with my purchase. However, I'm also intrigued by the specifics of computer hardware, hence this thread.
 

llien

Member
What about games that use better techniques like ray tracing or DLSS?

What about the handful of games that use NV's proprietary RT (DXR as a standard exists, I know), and the games that use TAA-based upscaling (which obviously blurs shit) with some generic NN sprinkled over it?

Yeah, what about them?
 
I think we all know Nvidia will launch their Super 3000 series on TSMC's 7nm process and slap more VRAM on it. Gonna be some buyer's remorse yet again.


This buyer's remorse talk is complete nonsense unless they were to launch a stronger card a month later at the exact same price. Why would anyone ever have buyer's remorse because a refresh line will launch six months or a year down the line? Entire GPU generations used to take six months, then one every year. Should people never have bought anything because something better would come out in 12 months?
 
What about the handful of games that use NV's proprietary RT (DXR as a standard exists, I know), and the games that use TAA-based upscaling (which obviously blurs shit) with some generic NN sprinkled over it?

Yeah, what about them?
Like what, for example? Please list examples so we can all be educated.
 

BluRayHiDef

Banned
I was asking about the handful of games with NV-specific RT and upscaling, and I don't quite understand your answer.

It's more than a handful of games and you know that. Twelve more games that support Nvidia's DLSS and their implementation of ray tracing will be released this year alone, including the biggest title of the foreseeable future: Cyberpunk 2077, which alone makes an RTX 30 Series card worth buying. I'm betting that AMD's cards will perform worse in this game relative to Nvidia's cards, especially with RTX and/or DLSS enabled.
 

Ascend

Member
So better current-gen performance in older games is what matters now? What about games that use better techniques like ray tracing or DLSS? You can't tell the difference anyway, so what does it matter to you?
[moving the goalposts GIF]
 
This buyer's remorse talk is complete nonsense unless they were to launch a stronger card a month later at the exact same price. Why would anyone ever have buyer's remorse because a refresh line will launch six months or a year down the line? Entire GPU generations used to take six months, then one every year. Should people never have bought anything because something better would come out in 12 months?
I guess we'll see. What is for sure is that Nvidia sure as hell didn't want to use Samsung's node, but they screwed the pooch with TSMC. The 3000 series will actually be able to stretch its legs on a TSMC node, and they will certainly slap on more VRAM. If you're happy with the 3000 series, fantastic, but I'm interested to see the performance gains in the future.
 

iJudged

Banned
So what? I am still buying the 6800 XT to shit on Nvidia's lousy practices, like charging 1500 dollars for the 2080 Ti because there was no competition.
Only reason I am jumping ship. Glad I held off on getting both the 2080 and 3080.
 

Ascend

Member
Pretty sure I do, considering I've been building, troubleshooting, upgrading and overclocking PCs for like 15 years.


There's no need to be upset.
Building for 15 years, but still have no clue...


I'd be upset if I had no idea either. No offense to you bro.
 

Revas

Member
ITT: a lot of clueless fanboys who should stick to slinging rocks in the console wars. OP is objectively correct in his calculations and understanding of what's what. AMD may win this round, but it's by the luck of having a better foundry make their chips rather than some miraculous design brilliance on their part.

Regardless, I'm still sitting comfortably on my 1080 Ti and will happily wait for a 40 series from Nvidia, most likely on 5nm, where the jump over Ampere will be massive. 4080 Ti, here I come.

AMD didn't get "lucky" with TSMC and Nvidia didn't get "unlucky" with Samsung. Besides, what does "impressive" mean within the given context anyway? It may not be a marvel of modern architecture, but it looks like AMD went from competing with Nvidia only at the mid-range to competing with them across the stack. That's what most people care about, and judging from the reactions I've seen, many are some degree of shocked, cautious, and impressed. So what does the architecture itself, in comparison to Ampere, mean exactly? It's cheaper, almost as good, and here's the kicker: it may be much more available.
 

Ascend

Member
Building for 15 years, but still have no clue...


Tell me something... Did you buy an AMD card because of Async compute? No? Did you buy an AMD card supporting DX12 when nVidia's 900 series was still stuck with DX11..? Let me guess. You didn't, did you? Obviously you're full of shit with your "better technique" and "superior/newer technology" nonsense. You're constantly shifting the goalposts as to why nVidia is better rather than judging the cards on their actual merits. Go ahead. Lie again, just to save face.

The most important thing: the 6800 cards are DX12U compliant, just like nVidia's 3000 series (if not more so). That says more than enough about the hardware-supported features for games.
 


Tell me something... Did you buy an AMD card because of Async compute? No? Did you buy an AMD card supporting DX12 when nVidia's 900 series was still stuck with DX11..? Let me guess. You didn't, did you? Obviously you're full of shit with your "better technique" and "superior/newer technology" nonsense. You're constantly shifting the goalposts as to why nVidia is better rather than judging the cards on their actual merits. Go ahead. Lie again, just to save face.

The most important thing: the 6800 cards are DX12U compliant, just like nVidia's 3000 series (if not more so). That says more than enough about the hardware-supported features for games.
I get Nvidia because they have better performance. That's literally the endgame. You can be triggered all you want, but you can't convince me a small percentage advantage in rasterization is better than better ray tracing. You have absolutely no clue about building PCs if you think rasterization will beat ray tracing in the future. You are completely oblivious at this point in time. You keep coming after me and failing every single time now. Maybe I should block these minuscule arguments at this point if you keep losing them.
 
Look, if you want to talk about DLSS and other upscaling tech, there is a thread for that.

BluRayHiDef mentioned there are a "dozen" games with RT and that fancy upscaling, and I wonder if that includes WoW.

As for unreleased games (a whopping one of them), we'll judge it once it is released.
You sound triggered. Moving the goalposts. No offense. But you have no argument. At all.
 

Ascend

Member
I get Nvidia because they have better performance. That's literally the endgame. You can be triggered all you want, but you can't convince me a small percentage advantage in rasterization is better than better ray tracing. You have absolutely no clue about building PCs if you think rasterization will beat ray tracing in the future. You are completely oblivious at this point in time. You keep coming after me and failing every single time now. Maybe I should block these minuscule arguments at this point if you keep losing them.
You heard it here first, folks... He gets nVidia because they have better performance. Except when that performance is in the area that benefits AMD, which happens to be the area that games have used for years and will continue to use for years, considering that ray tracing is too heavy. Suddenly, a "small" percentage advantage in rasterization is irrelevant and not better than a small percentage advantage in ray tracing. Even though that 'small' percentage advantage in rasterization means a $579 card occasionally beating a $1499 one. And even though both vendors support hardware-accelerated ray tracing, only one of them matters. And this is all coming from someone who claims to be impartial... Of course you are, bro. Of course you are...

[pigeon playing chess image]
 
You heard it here first, folks... He gets nVidia because they have better performance. Except when that performance is in the area that benefits AMD, which happens to be the area that games have used for years and will continue to use for years, considering that ray tracing is too heavy. Suddenly, a "small" percentage advantage in rasterization is irrelevant and not better than a small percentage advantage in ray tracing. Even though that 'small' percentage advantage in rasterization means a $579 card occasionally beating a $1499 one. And even though both vendors support hardware-accelerated ray tracing, only one of them matters. And this is all coming from someone who claims to be impartial... Of course you are, bro. Of course you are...
Why do you hate that Nvidia leads in future performance? Why is that a bad thing for AMD? They have people who want to spend less, for much less performance. PC is not the same arena as consoles, so keep that retarded mindset out of PC gaming space. People who want the best of the best will still go to Nvidia. The ones who have a budget or miraculously don't care for raytracing will go AMD. No need to keep throwing the lesser results in our face.
 

mr.dilya

Banned
Or he prefers a better feature set for the future? No one is taking away the joy you get from your AMD cards. Some just want to be future-proof instead of having the rasterization king, when DLSS and ray tracing are more performant on Nvidia. I can buy both enthusiast cards right now, but guess what I'll choose, since I want to be future-proof.

DLSS is def a factor, but double the VRAM on AMD's first two entries more than makes up for it. When you factor in the price difference, there's no way you can say Nvidia isn't hosing consumers.
 
DLSS is def a factor, but double the VRAM on AMD's first two entries more than makes up for it. When you factor in the price difference, there's no way you can say Nvidia isn't hosing consumers.
I think we'll have issues with 16GB of VRAM not being able to hold textures in future games before DLSS saves performance with lower resolution, higher texture quality, and lower bandwidth costs. GDDR6X vs regular GDDR6. DirectStorage coming out will be huge for RTX I/O. What does AMD have then, with ray tracing being the de facto measure of ultimate performance?

I want AMD to be competition, but I'll have to wait and see.
 

Ascend

Member
Why do you hate that Nvidia leads in future performance?
Define "future performance". You still didn't answer whether you bought a DX12 AMD GPU when nVidia only had DX11 cards. So, back then, were you for "future performance", or were you for "fastest performance available"?
Vague language like "future performance" is marketing speak and manipulative.

But let's use another metric. Last time I checked, the majority of people still game at 1080p. On Steam, that is 65.5% of users. Who's in 2nd place? 1366x768, believe it or not, at 9.3%. Third place is 1440p at 6.9%, and 4K doesn't even reach the Top 5 at 2.3%. How long has 4K been around? It is still not widely adopted. 4K has been around longer than ray tracing, and if you're really going to base your purchasing decision on a feature that just came around the corner, you have to define what "future performance" means. Because in a year or two, all current cards will still be too slow to use RT properly, and in 10 years, when it possibly catches on, these cards will be too slow anyway. And DLSS does not change that, especially now that nVidia will most definitely be losing market share.

Why is that a bad thing for AMD? They have people who want to spend less, for much less performance.
Your bias is shining through once again... Spend less for much less performance? You sound like the people supporting Intel a while back. All I can say to this is...


PC is not the same arena as consoles, so keep that retarded mindset out of PC gaming space.
You mean the one of blindly following your favorite brand? Yeah... Take your own advice.

People who want the best of the best will still go to Nvidia.
And the people that have actual brains will go for AMD cards this generation.

The ones who have a budget or miraculously don't care for raytracing will go AMD.
$999 is a 'budget'?

No need to keep throwing the lesser results in our face.
Of course oh unbiased holy one...
 
Define "future performance". You still didn't answer whether you bought a DX12 AMD GPU when nVidia only had DX11 cards. So, back then, were you for "future performance", or were you for "fastest performance available"?
Vague language like "future performance" is marketing speak and manipulative.

But let's use another metric. Last time I checked, the majority of people still game at 1080p. On Steam, that is 65.5% of users. Who's in 2nd place? 1366x768, believe it or not, at 9.3%. Third place is 1440p at 6.9%, and 4K doesn't even reach the Top 5 at 2.3%. How long has 4K been around? It is still not widely adopted. 4K has been around longer than ray tracing, and if you're really going to base your purchasing decision on a feature that just came around the corner, you have to define what "future performance" means. Because in a year or two, all current cards will still be too slow to use RT properly, and in 10 years, when it possibly catches on, these cards will be too slow anyway. And DLSS does not change that, especially now that nVidia will most definitely be losing market share.


Your bias is shining through once again... Spend less for much less performance? You sound like the people supporting Intel a while back. All I can say to this is...



You mean the one of blindly following your favorite brand? Yeah... Take your own advice.


And the people that have actual brains will go for AMD cards this generation.


$999 is a 'budget'?


Of course oh unbiased holy one...
So you have no issue with the lower ray tracing performance. Cool for you guys who prefer no ray tracing, as I've said before. If you don't care for future performance like Ascend, AMD is perfectly fine for you. Those who only care about the here and now and give no shit about the future, you guys are perfectly safe with AMD. The ones who care not only about the here and now but also about the future are looking towards future games. Again, what games does Ascend have as proof, besides AMD-picked results WITH ABSOLUTELY NO RAYTRACING BENCHMARKS in sight? Please don't hate on Ascend, as he doesn't care for future titles or even current-gen games that feature ray tracing. Only older games, not even current games.
 

Ascend

Member
So you have no issue with the lower ray tracing performance. Cool for you guys who prefer no ray tracing, as I've said before. If you don't care for future performance like Ascend, AMD is perfectly fine for you. Those who only care about the here and now and give no shit about the future, you guys are perfectly safe with AMD. The ones who care not only about the here and now but also about the future are looking towards future games. Again, what games does Ascend have as proof, besides AMD-picked results WITH ABSOLUTELY NO RAYTRACING BENCHMARKS in sight? Please don't hate on Ascend, as he doesn't care for future titles or even current-gen games that feature ray tracing. Only older games, not even current games.
Very funny, considering AMD graphics cards are well known to age a lot better than nVidia's, and, aside from the RTX 3090, AMD has more VRAM as well... And considering that consoles drive the baseline for game development in most cases and they carry RDNA2... Yeah...

As for ray tracing, absence of evidence is not the same as evidence of absence. These 6800 series cards have hardware accelerated ray tracing. Stop pretending like they don't. And I'll say it again. The 6800 cards are fully DX12U compliant. Your whole "Ascend cares only about older games" assertion is nothing more than a shallow shaming tactic that bullies use when they are cornered. AMD achieved great things with RDNA2 on the same node as RDNA and support all the required DX12U features. And the ones not even considering AMD this time around were never going to buy an AMD card anyway, especially since nVidia has extreme shortages in supply of their 3000 series cards.

If you'd rather have no card than an AMD card, that says quite a lot.
 
RDNA2 is manufactured on the "7nm" manufacturing process of Taiwan Semiconductor Manufacturing Company (TSMC), whereas Ampere is manufactured on the "8nm" manufacturing process of Samsung. Despite the actual sizes of these manufacturing processes not being consistent with their marketing names (hence I've put them in quotation marks), TSMC's "7nm" process is indeed smaller than Samsung's "8nm" process, which is what's important to consider.

Hence, because RDNA2 is manufactured on the smaller process, it packs more transistors per square millimeter. For example, the following calculations show the difference in transistor density between RDNA2's largest consumer chip, Navi 21, and Ampere's largest consumer chip, GA102.

Navi 21: 536 square millimeters and 26.8 billion transistors -> 26.8 billion/536mm^2 = 50,000,000 transistors per square millimeter

GA102: 628.4 square millimeters and 28.3 billion transistors -> 28.3 billion/628.4mm^2 = 45,035,009.5 transistors per square millimeter

50,000,000 / 45,035,009.5 = 1.110247351 -> 1.110247351 - 1 = 0.110247351 -> 0.110247351 x 100 = 11.0247351% -> ~11%
The fact that Nvidia chose to use an inferior node, for whatever reason, is not our problem. They fucked up; that's on them. Could Ampere have been better on TSMC N7? Maybe, who knows, but they didn't use it, so it doesn't fucking matter.
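For what it's worth, the density arithmetic in the quote does check out. A quick sketch using the quoted figures:

```python
# Sanity check of the transistor-density numbers quoted above.
navi21_transistors = 26.8e9   # Navi 21 (RDNA2), quoted figure
navi21_area_mm2 = 536.0

ga102_transistors = 28.3e9    # GA102 (Ampere), quoted figure
ga102_area_mm2 = 628.4

navi21_density = navi21_transistors / navi21_area_mm2   # ~50.0M per mm^2
ga102_density = ga102_transistors / ga102_area_mm2      # ~45.0M per mm^2

# Navi 21's density advantage as a percentage of GA102's density.
print(f"{(navi21_density / ga102_density - 1) * 100:.1f}%")   # ~11.0%
```

The math isn't the problem; the conclusion drawn from it is.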

This additional 11% of transistors per square millimeter is why RDNA2 performs as well as it does in rasterization relative to Ampere even though Ampere has more transistors overall; the entirety of Ampere's 28.3 billion transistors cannot be used exclusively for rasterization since many of them comprise RT Cores and Tensor Cores that exclusively perform ray tracing and artificially intelligent upscaling, respectively.

So what? That's not our problem. If Nvidia wasted transistors on specialised hardware units at the expense of their general purpose graphics performance, that isn't our problem and it isn't AMD's problem.

Also, maybe you had forgotten, or perhaps it hadn't occurred to you, but AMD also has hardware RT acceleration units, one per compute unit. You're also forgetting the Infinity Cache, of which there is a whopping 128MB.

While the exact number of transistors that comprise RT Cores and Tensor Cores is not known, we can be sure that they amount to more than the difference in the overall number of transistors in RDNA2 and Ampere (28.3 billion - 26.8 billion = 1.5 billion) based on diagrams that illustrate the relative sizes of CUDA Cores, RT Cores, and Tensor Cores.

I believe it was Dr. Ian Cutress over on Twitter who pointed out that, assuming 6T SRAM cells, AMD have dedicated a whopping ~6 billion transistors to the Infinity Cache.

Which means only around 20 billion transistors are actually going towards the GPU itself (less when you factor in the display unit and encode/decode blocks, but let's not get too technical here). So whatever point it is you're making about Nvidia somehow having better performance per transistor is completely invalid.
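Back-of-the-envelope version of that estimate, assuming plain 6T cells and ignoring tag arrays and peripheral logic (so treat it as a lower bound):

```python
# Rough Infinity Cache transistor budget: 128 MB of SRAM at 6 transistors per bit.
cache_bits = 128 * 1024 * 1024 * 8      # 128 MB expressed in bits
transistors_per_bit = 6                 # classic 6T SRAM cell
cache_transistors = cache_bits * transistors_per_bit

navi21_total = 26.8e9
print(f"Infinity Cache: ~{cache_transistors / 1e9:.1f}B transistors")                  # ~6.4B
print(f"Left for everything else: ~{(navi21_total - cache_transistors) / 1e9:.1f}B")   # ~20.4B
```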

And even if it wasn't invalid, its still meaningless. Ultimately, the performance of the product and its pricing are what matters to the end users. All the rest of it is just posturing.

Not sure what the purpose of the GA102 block diagram is here tbh...

Hence, Ampere performs roughly as well as RDNA2 in rasterization with fewer transistors.
No it doesn't: See above.
And even if it did, it doesn't fucking matter. Ampere is what it is. Could Nvidia have won if they had thrown yet more transistors at the problem for more performance? Yes. Did they? No. So it doesn't fucking matter.
Should also mention that more transistors mean a bigger die, which means higher cost. Nvidia aren't going to eat that cost so you can benefit from more performance. You'd best believe they'd charge you for it.

This indicates that RDNA2 isn't as efficiently designed as Ampere or - at the very least - isn't as efficiently put to use by AMD's drivers as Ampere is put to use by Nvidia's drivers.

Isn't as efficient by what metric? I've already established that your claim of AMD having more transistors put towards rasterisation is dubious at best, and patently false in all likelihood.

This assertion is based on the following rationale: an additional 11% of transistors should always result in better performance in rasterization (when other features are not enabled), but as AMD themselves showed at their announcement event, RDNA2 is faster than Ampere in rasterization only some of the time and is barely so whenever it is. Hence, despite having more transistors per square millimeter and despite being able to use all of them for rasterization (whereas Ampere can use only some of its transistors for rasterization),

Once again: 6 billion transistors in Navi 21 are put into the 128 MB of Infinity Cache. Once again: AMD also have transistors put towards hardware ray tracing. So your point is moot.

RDNA2 is only as fast or slightly faster than Ampere in rasterization. Hence, RDNA2 isn't as impressive as it seems.

There are numerous ways in which RDNA2 is far more impressive than Ampere. One of those ways is how an 80 CU Navi 21 can match an 82 (active) SM GA102. Another is how a 22 TFLOP GPU manages to perform as well as a 36 TFLOP GPU. Another is how the 6900 XT with 512 GB/s of GDDR6 matches a 3090 with 936 GB/s of GDDR6X. Another is how the 6900 XT at 300W matches the 3090 at (over) 350W.
Sure, Ampere could have better performance per transistor in rasterisation (which it doesn't), but in all other areas it's pretty fucking poor.
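If anyone wants to sanity-check those TFLOP figures, FP32 throughput is just shader count x 2 ops per clock (FMA) x clock speed. A sketch using the published boost clocks (sustained game clocks sit a bit lower, which is where the ~22 TFLOP figure comes from):

```python
# FP32 throughput in TFLOPs = shaders * 2 ops/clock (FMA) * clock in GHz / 1000.
def tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000

print(f"6900 XT  (5120 shaders @ 2.25 GHz boost):  {tflops(5120, 2.25):.1f}")   # ~23.0
print(f"RTX 3090 (10496 shaders @ 1.70 GHz boost): {tflops(10496, 1.70):.1f}")  # ~35.7
```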

It can be argued that RDNA2 is indeed more efficient because it's performing as well as it is in rasterization relative to Ampere despite using less power; Navi 21 uses 300 watts at most at stock settings but GA102 uses 350 watts at most at stock settings. However, it must be considered that Navi 21 is - once again - manufactured on a smaller manufacturing process, that 350 watts is only 16.7% more than 300 watts, and that Navi 21 uses more transistors for rasterization (which naturally require less power since they don't have to be clocked as high as a smaller number of transistors).

Already explained how your hypothesis about transistor performance is not really valid.
The process node Nvidia chose would have been decided 2-3 years ago now, as the chip design must be closely linked with the process it's being manufactured on. And Nvidia chose wrong. That's not AMD's problem. There are no what-ifs or could-have-beens. Ampere is what it is, and it's a culmination of poor decisions by Nvidia. The blame lies solely with Nvidia.
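For the record, the power gap reads differently depending on which card you take as the baseline; at equal performance:

```python
# Same performance at different board power: quote the gap both ways.
navi_w, ga_w = 300.0, 350.0
print(f"Navi 21 draws {(1 - navi_w / ga_w) * 100:.1f}% less power")          # 14.3%
print(f"...equivalently, ~{(ga_w / navi_w - 1) * 100:.1f}% better perf/W")   # 16.7%
```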

Hence, if Ampere were to be refreshed on TSMC's "7nm" manufacturing process, it would be outright faster than RDNA2 in rasterization.

Who knows. Maybe. Unfortunately, they didn't use TSMC N7. So here we are.
 
Very funny, considering AMD graphics cards are well known to age a lot better than nVidia's, and, aside from the RTX 3090, AMD has more VRAM as well... And considering that consoles drive the baseline for game development in most cases and they carry RDNA2... Yeah...

As for ray tracing, absence of evidence is not the same as evidence of absence. These 6800 series cards have hardware accelerated ray tracing. Stop pretending like they don't. And I'll say it again. The 6800 cards are fully DX12U compliant. Your whole "Ascend cares only about older games" assertion is nothing more than a shallow shaming tactic that bullies use when they are cornered. AMD achieved great things with RDNA2 on the same node as RDNA and support all the required DX12U features. And the ones not even considering AMD this time around were never going to buy an AMD card anyway, especially since nVidia has extreme shortages in supply of their 3000 series cards.

If you'd rather have no card than an AMD card, that says quite a lot.
The ray tracing is not as good as Nvidia's, so no matter how hard you battle for AMD, people who want next-gen graphics cards have nowhere to go but Nvidia. Sorry dude, but many people love the way ray tracing looks, and no matter how hard you rally for them, they won't have better ray tracing or AI neural networks. Older games will play better on AMD, but not current or next-gen games with ray tracing, unless you turn ray tracing off to get better framerates. You can cry to the moon, but they won't believe you.

Also, Guilhermegrg is an AMD employee who hates to see Nvidia have better ray tracing.
 

SF Kosmo

Al Jazeera Special Reporter
Does RDNA2 not also have part of its die dedicated to ray tracing functions and not rasterization?

Honest question; I haven't read anything technical about the architecture yet, but I know it supports RT.
 

Elias

Member
The ray tracing is not as good as Nvidia's, so no matter how hard you battle for AMD, people who want next-gen graphics cards have nowhere to go but Nvidia. Sorry dude, but many people love the way ray tracing looks, and no matter how hard you rally for them, they won't have better ray tracing or AI neural networks. Older games will play better on AMD, but not current or next-gen games with ray tracing, unless you turn ray tracing off to get better framerates. You can cry to the moon, but they won't believe you.

Also, Guilhermegrg is an AMD employee who hates to see Nvidia have better ray tracing.
AMD's ray tracing performance is actually looking pretty good. You may want to wait before passing judgement.
 

BluRayHiDef

Banned
Does RDNA2 not also have part of its die dedicated to ray tracing functions and not rasterization?

Honest question; I haven't read anything technical about the architecture yet, but I know it supports RT.
The ray tracing hardware is built into the CUs, which also perform rasterization. So, it's the same hardware that does both functions.
 