
AMD Radeon RX6800/RX6800XT Reviews/Benchmarks Thread |OT|

ZywyPL

Banned
I strongly disagree on the VRAM, releasing a high end GPU with only 8GB VRAM in 2020 would have been a terrible move. You could even argue that the 3080 doesn't have enough VRAM with only 10GB for a high end 2020 GPU.

That's just your personal opinion that is not reflected in the actual real world, sorry; until games really start having hiccups due to insufficient RAM, the issue doesn't exist, literally.
 

BluRayHiDef

Banned
No there's not.

I embedded a video. You can deny what's proven in it, but that doesn't make it untrue.
Heck, there's also the following:

Zrzutekranu202012061.png
 
This thread continues to be wild :) You have people posting hard data, averages, facts - labeled Nvidia fanboys. Then you have people outright looking sideways and ignoring the results, making false claims, making claims based on a potential future, claims based on nonexistent technologies - who call themselves neutral. Not biased towards anything at all. It's amazing.
 
I strongly disagree on the VRAM, releasing a high end GPU with only 8GB VRAM in 2020 would have been a terrible move. You could even argue that the 3080 doesn't have enough VRAM with only 10GB for a high end 2020 GPU. Not really looking to get into an allocated vs used memory discussion as that's been done to death but once the 3070ti launches with 16GB and the 3080ti launches with 20GB and Nvidia switches focus to those I assume a lot of people will suddenly change their tune on VRAM amounts. In simple terms, having more VRAM is always better than having less.

That's pretty much your superstition - so far we don't see negative effects of 10GB of VRAM on the 3080, while Radeon's magic cache already shows a noticeable performance drop as resolution increases.
 
I embedded a video. You can deny what's proven in it, but that doesn't make it untrue.
Heck, there's also the following:

Zrzutekranu202012061.png

Watch Dogs Legion was already explained earlier in the thread as there is a driver issue acknowledged by AMD, meaning it is not rendering as intended and will be fixed at some stage with an update. Stop trying to spread this false narrative.

Regarding your "proof" it comes from a single youtuber who I've never heard of who compares scenes at two different times of day and says the Nvidia scene "looks darker" so there must be an issue. Hardly a smoking gun the way you seem to be trying to claim. I haven't seen this claim verified or corroborated by anyone else on any tech site, tech twitter etc...

If there was an issue it would be widely known by now similarly to how it was spotted and discussed very quickly that Watch Dogs had an issue with RT not rendering properly due to the driver bug. You need to drop this false narrative as it is really not helping your credibility.

If anything, seeing as DIRT5 is an AMD sponsored/optimized title, it would be more likely that if there were some kind of rendering difference (there's not), the AMD-rendered frame would be the source of truth as intended, and the Nvidia-rendered frame would be the one that's off due to being unoptimized or the Nvidia driver not being ready for that title.

Until I see this verified and corroborated by credible people it is nonsense. And even if it somehow was true with DIRT5 and wasn't a weird driver issue or something like that, it would need to be true for all games with RT enabled when running on AMD cards. We have seen multiple games benchmarked and compared with RT and this has never come up.
 
That's just your personal opinion that is not reflected in the actual real world, sorry; until games really start having hiccups due to insufficient RAM, the issue doesn't exist, literally.

I respect your opinion, I just happen to disagree on this. I suppose we will see one way or the other as more titles release over the next 2-3 years. Maybe 10GB ends up being enough in the end but we will have to wait and see. Doesn't change the fact that more VRAM is always better than less VRAM in a general sense.
 

Ascend

Member
slow your roll bro. Bigotry? Is that the right word?
If anything, that word is not strong enough.

Edit;
Just for clarification, here's the definition of bigotry;

bigotry
/ˈbɪɡətri/
obstinate or unreasonable attachment to a belief, opinion, or faction; in particular, prejudice against a person or people on the basis of their membership of a particular group.


One only has to say that they prefer the 6800 cards, in a 6800 thread, to see the army come in to 'correct' that person...

This thread continues to be wild :) You have people posting hard data, averages, facts - labeled Nvidia fanboys. Then you have people outright looking sideways and ignoring the results, making false claims, making claims based on a potential future, claims based on nonexistent technologies - who call themselves neutral. Not biased towards anything at all. It's amazing.
That's what the whole RT argument is about...
 
Last edited:

ZywyPL

Banned
Doesn't change the fact that more VRAM is always better than less VRAM in a general sense.

That's true, but unfortunately more RAM boosts the price significantly, as seen with the 3090 and all previous Titan cards, and that cost is being thrown on us, the end consumers. So in practice, ironically, the less RAM the better; by the time 8-10GB really won't be enough anymore, the cards will be outdated already and won't be able to run games on more than Low-Medium anyway. Take the 1060 for example: many people said the 6GB wouldn't be enough for the years to come, and yet the card aged very well and served its owners for many years, and today no one even cares about the card anymore - 8GB wouldn't make it any more performant today.
 
Last edited:

Ascend

Member
That's true, but unfortunately more RAM boosts the price significantly, as seen with the 3090 and all previous Titan cards, and that cost is being thrown on us, the end consumers. So in practice, ironically, the less RAM the better; by the time 8-10GB really won't be enough anymore, the cards will be outdated already and won't be able to run games on more than Low-Medium anyway. Take the 1060 for example: many people said the 6GB wouldn't be enough for the years to come, and yet the card aged very well and served its owners for many years, and today no one even cares about the card anymore - 8GB wouldn't make it any more performant today.
There's one game that already exceeds 8GB.
The idea that by the time the VRAM wouldn't be enough these cards would be too slow to use it, is the same as the idea that by the time RT becomes prevalent or mandatory, these cards would be too slow to actually use it.
 
Last edited:
Infinity Cache does good work at lower resolutions, not so much at 4K. You have games that run faster at 1440p on the 6800XT but drop harder than the 3080 when you go to 4K. It does some job for them, to alleviate the slow memory and narrow bus, but it's not like the real thing. There are limits. 4K seems to be it. The same happens in Cyberpunk.

Linus messed around with Cyberpunk at 4K and 8K, with a 3080 and a 3090. It was at 8K that 10 gigs of VRAM was not sufficient. Since RT in Cyberpunk effectively mandates DLSS, it means you're gonna be rendering far lower. Saw a guy on another forum, at 1440p, maxed raster and RT, DLSS set to Quality, who was below 5 gigs of VRAM.

The 10 gigs are a non-issue at the present time. However, at some point it will become insufficient. We will have to wait and see how far into the future this happens. If by the time 10 gigs becomes a bottleneck the 5080 is out, then it will be a non-issue. If it happens next year ...
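For a rough feel of why a cache-plus-narrow-bus setup can fall off at 4K, here's a back-of-the-envelope sketch; the hit rates and cache bandwidth below are illustrative guesses, not measured RDNA2 numbers, and the weighted-sum model is a crude simplification:

```python
# Crude model: effective bandwidth of an on-die cache in front of a narrow bus.
# Hit rates and cache bandwidth are illustrative guesses, not AMD's figures.

CACHE_BW_GBPS = 1900.0   # assumed ballpark for on-die cache bandwidth
VRAM_BW_GBPS = 512.0     # 256-bit bus @ 16 Gbps GDDR6 (RX 6800 XT)

def effective_bandwidth(hit_rate):
    # Requests that hit the cache are served from it; misses fall back to VRAM.
    return hit_rate * CACHE_BW_GBPS + (1.0 - hit_rate) * VRAM_BW_GBPS

for res, hit_rate in [("1080p", 0.75), ("1440p", 0.65), ("4K", 0.50)]:
    print(f"{res}: ~{effective_bandwidth(hit_rate):.0f} GB/s effective")
```

As the working set grows with resolution, the hit rate drops and the effective number slides back toward the raw 512 GB/s of the bus, which is one plausible reading of the 4K drop-off.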
 

Ascend

Member
How much VRAM does Cyberpunk 2077 actually use at max settings at 4K?
Actual usage is currently not known. I assume Techspot will look at that in the upcoming days/weeks. So far, this is what I know:

The RTX 3090 at 4K Ultra settings +RT allocates around 12GB of VRAM and achieves 16 fps.
The RTX 3080 at 4K Ultra settings +RT + DLSSQ allocates around 9GB of VRAM and achieves 30fps.
The RTX 3070 at 4K Ultra settings +RT allocates pretty much the whole 8GB of VRAM and achieves 3fps.
The RTX 3070 at 1440p Ultra settings + RT +DLSSB allocates pretty much the whole 8GB of VRAM and achieves 54 fps.

Who knows what the actual rendering resolution would be for 1440p + DLSSB?
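For reference, the commonly cited DLSS 2.0 per-axis scale factors are roughly 2/3 for Quality, 0.58 for Balanced and 1/2 for Performance, so the internal resolution works out to something like this (quick sketch using those commonly quoted factors):

```python
# Commonly cited DLSS 2.0 per-axis scale factors; internal render resolution
# is simply the output resolution multiplied by the factor on each axis.

SCALE = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5}

def internal_res(width, height, mode):
    f = SCALE[mode]
    return round(width * f), round(height * f)

print(internal_res(2560, 1440, "Balanced"))   # roughly 1485 x 835
print(internal_res(3840, 2160, "Quality"))    # 2560 x 1440
```

So 1440p + DLSS Balanced would be rendering somewhere around 1485x835 internally before reconstruction, if those factors hold.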
 
Last edited:

BluRayHiDef

Banned
Actual usage is currently not known. I assume Techspot will look at that in the upcoming days/weeks. So far, this is what I know:

The RTX 3090 at 4K Ultra settings +RT allocates around 12GB of VRAM and achieves 16 fps.
The RTX 3080 at 4K Ultra settings +RT + DLSSQ allocates around 9GB of VRAM and achieves 30fps.
The RTX 3070 at 4K Ultra settings +RT allocates pretty much the whole 8GB of VRAM and achieves 3fps.
The RTX 3070 at 1440p Ultra settings + RT +DLSSB allocates pretty much the whole 8GB of VRAM and achieves 54 fps.

Who knows what the actual rendering resolution would be for 1440p + DLSSB?

What is the source of this data? Also, what is the frame rate at which the RTX 3090 renders the game when using the settings chosen for the RTX 3080?
 

Buggy Loop

Member
If anything, that word is not strong enough.

Edit;
Just for clarification, here's the definition of bigotry;

bigotry
/ˈbɪɡətri/
obstinate or unreasonable attachment to a belief, opinion, or faction; in particular, prejudice against a person or people on the basis of their membership of a particular group.


One only has to say that they prefer the 6800 cards, in a 6800 thread, to see the army come in to 'correct' that person...


That's what the whole RT argument is about...

Are you referring to me as the bigot here, from the few previous posts I made? Just @ me rather than try to side-eye me and smirk with your "friends".

I doubt this forum kept archives from that far back, but yeah, from 2004 to 2016 you would probably find my name in many AMD threads. It's not a "plot" or a tactic like you seem to allude to. I simply grew out of it; AMD is probably the biggest hardware cult there is. I used to always hang out on Rage3D too, and here on this very forum, analysing the ATI Flipper to find that tiny advantage over other architectures. Always shitting on Nvidia for their proprietary tech. In the end, I realized all of this makes no sense; it's a moral high ground, but who the fuck cares.

As an electrical engineer with some basic knowledge of semiconductors, I wanted to ask a simple question: how come Nvidia even survived this battle in rasterization? If you can answer that, please go ahead.

If AMD went with an RT hybrid solution, and as per their patent, they want a simplified version to save on silicon area/complexity at the cost of RT performance? Fine. Like I said earlier, it's a legit decision.

If they have accelerated ML integer math in the pipeline rather than dedicated ML cores, to again save on silicon area, fine. It can probably crunch enough for some upscaling or the kind of AI texture upscaling that Xbox developer wants to do.

But with these sacrifices, I'd then expect them to be godly in rasterization. Somehow, either that fizzled on AMD's side or Nvidia's doubled shader pipelines were a surprise to them.

I have a theory that the SRAM solution ate too much silicon area that should have been used for more CUs. These cards should have been on HBM2. Otherwise, all these sacrifices serve just the SRAM implementation. Not sure it was the right move.

I wish a tech site would dive in these architectures and explain it better.
 

Sun Blaze

Banned
The problem with the 10GB of VRAM talk is that when you're buying a $700+ GPU, you shouldn't even worry about VRAM AT ALL. The 1080 Ti, released 4 years prior, had 11GB. I know the 3080 is GDDR6X compared to GDDR5X, but that was 4 years ago and the 1080 Ti still has 1 extra GB of VRAM.
It's mind-boggling that the amount hasn't budged in 2 generations. The 1070 was 8GB, just like the 3070, which has bog-standard GDDR6. How is that acceptable? The argument isn't whether 10GB is enough. The argument is that at that price range it should NOT EVEN BE a discussion.
 

SantaC

Member
Infinity Cache does good work at lower resolutions, not so much at 4K. You have games that run faster at 1440p on the 6800XT but drop harder than the 3080 when you go to 4K. It does some job for them, to alleviate the slow memory and narrow bus, but it's not like the real thing. There are limits. 4K seems to be it. The same happens in Cyberpunk.

Linus messed around with Cyberpunk at 4K and 8K, with a 3080 and a 3090. It was at 8K that 10 gigs of VRAM was not sufficient. Since RT in Cyberpunk effectively mandates DLSS, it means you're gonna be rendering far lower. Saw a guy on another forum, at 1440p, maxed raster and RT, DLSS set to Quality, who was below 5 gigs of VRAM.

The 10 gigs are a non-issue at the present time. However, at some point it will become insufficient. We will have to wait and see how far into the future this happens. If by the time 10 gigs becomes a bottleneck the 5080 is out, then it will be a non-issue. If it happens next year ...
4K seems fine to me? As long as you don't use ray tracing.
 
Infinity Cache does good work at lower resolutions, not so much at 4K. You have games that run faster at 1440p on the 6800XT but drop harder than the 3080 when you go to 4K. It does some job for them, to alleviate the slow memory and narrow bus, but it's not like the real thing. There are limits. 4K seems to be it. The same happens in Cyberpunk.

That seems to be what most people assumed at first, looking at the benchmark results, but apparently the 6000 series scales "normally" when going up to 4K, in the same proportion that Turing does, for example.

The reason the 3000 series performs better at 4K and scales relatively worse compared to its 4K performance at lower resolutions is due to the high compute power not being fully utilized at lower resolutions. We saw similar things with Vega where the 4K results were proportionally better than what they should have been based on lower resolution results due to the higher compute power.

Of course, Vega cards are not really what we'd call 4K-capable cards at any kind of playable framerate, but I'm simply pointing out the proportionality of it.
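To put rough numbers on the "more parallel work at higher resolutions" point, this is just pixel-count arithmetic, nothing vendor-specific:

```python
# Pixels per frame at common resolutions: 4K offers roughly 4x the parallel
# pixel work of 1080p, which is broadly why very wide GPUs are easier to keep
# fully occupied at 4K than at lower resolutions.

resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
base = 1920 * 1080
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels / 1e6:.1f} MPix ({pixels / base:.2f}x 1080p)")
```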



Of course, having more bandwidth is certainly always better than having less, so maybe performance might improve a little if the 6000 series had a wider bus, but we can't really know for sure. The Infinity Cache seems to do a great job of mitigating the need for a wider bus.
 

Sky-X

Neo Member
I feel AMD is being a bit under-appreciated for what they're bringing to the table, though this is to be expected due to Nvidia's long dominance over the high-end segment and innovations like RT/DLSS. AMD made a huge comeback and the architecture is phenomenal; what they have done in rasterization performance is not useless. It's still the main way to play video games today, as not every game supports DLSS - even relatively new games like RDR2 don't support the feature. Plus there is value in having 16GB of VRAM, even if it's slower than the 3080's. It may prove useful, it may not, but better to have it than not. Looking back at the GTX 780 3GB model, it's still capable of playing games, but a lot of them stutter due to lack of memory. It feels like if you invested in AMD, you will have a card to carry you through the whole generation (even if by using low/med settings after a few years).

DLSS and RT performance are big advantages for Nvidia, no doubt about that. I think owning any of these new cards, whether from Nvidia or AMD, will be great regardless. With Nvidia you will have the best visual effects and performance in today's games; with AMD you will have performance on roughly the same level and what seems like a longer-term investment with the bigger memory, which is also great, even if you can't use RT. Not everyone cares about RT anyway.
 

Ascend

Member
What is the source of this data? Also, what is the frame rate at which the RTX 3090 renders the game when using the settings chosen for the RTX 3080?
Multiple YouTube videos.

Are you referring to me as the bigot here, from the few previous posts I made? Just @ me rather than try to side-eye me and smirk with your "friends".
I was not targeting you personally. But there are those who are constantly in here trying to shove their opinions down others' throats. You're not one of them, in my view. Note that that can always change :messenger_smiling_horns:

I doubt this forum kept archives from that far back, but yeah, from 2004 to 2016 you would probably find my name in many AMD threads. It's not a "plot" or a tactic like you seem to allude to. I simply grew out of it; AMD is probably the biggest hardware cult there is. I used to always hang out on Rage3D too, and here on this very forum, analysing the ATI Flipper to find that tiny advantage over other architectures. Always shitting on Nvidia for their proprietary tech. In the end, I realized all of this makes no sense; it's a moral high ground, but who the fuck cares.
That's the exact issue. People don't seem to care that GPU prices have gone up astronomically, exactly because of nVidia's price gouging and being anti-competitive. But by all means... Let people stop caring. Soon we'll be paying over $800 for mid range cards.

As an electrical engineer with some basic knowledge of semiconductors, I wanted to ask a simple question: how come Nvidia even survived this battle in rasterization? If you can answer that, please go ahead.
I'm not exactly sure what you mean by this question. If you mean how they survived what you called 'the biggest hardware cult there is', the answer is simple:
AMD is not the biggest hardware cult there is. To many people, only nVidia exists in the graphics space.
The majority of people I know have never heard of the R9 290X, the HD7970, the HD5850, the X1950 Pro, the 5700XT... When I told a friend (who is a programmer and a gamer) that I got a good deal on an R9 Fury, she looked at me confused and said, "What is that?" Then I was confused and said that it's an AMD graphics card. She replied with, "Oh... is that what they're called?" I remember that conversation clearly because I expected her to at least know they existed. And she was talking about getting the GTX 980 at the time.

nVidia has more mind share and is known by pretty much every gamer that is anything but a mobile gamer. ATi/AMD/Radeon only gets traction when their CPUs are doing well, and even then, it remains obscure compared to nVidia.

How did nVidia get into this position? One reason is definitely better marketing. The other is anti-competitive practices. And I'd actually place good products in last place, because the other two have caused people to buy nVidia independently of whether the products were good or not. Prime example: Fermi.

If AMD went with an RT hybrid solution, and as per their patent, they want a simplified version to save on silicon area/complexity at the cost of RT performance? Fine. Like I said earlier, it's a legit decision.

If they have accelerated ML integer math in the pipeline rather than dedicated ML cores, to again save on silicon area, fine. It can probably crunch enough for some upscaling or the kind of AI texture upscaling that Xbox developer wants to do.

But with these sacrifices, I'd then expect them to be godly in rasterization. Somehow, either that fizzled on AMD's side or Nvidia's doubled shader pipelines were a surprise to them.
If you compare AMD to AMD, you will notice the huge jump they actually made. And if they can keep doing those jumps, sort of like they are doing with Ryzen... Let's just say competition will be good.

I have a theory that the SRAM solution ate too much silicon area that should have been used for more CUs. These cards should have been on HBM2. Otherwise, all these sacrifices serve just the SRAM implementation. Not sure it was the right move.

I wish a tech site would dive in these architectures and explain it better.
There's a balance between bandwidth and CUs. AMD went the infinity cache way, likely because it is cheaper than using HBM, while achieving similar results.
I don't think the performance drop at 4K is a bandwidth limit either. It's more that nVidia's cards scale down poorly at low resolutions because of the massive amounts of parallel FP ALUs. AMD's cards used to scale better than nVidia's at higher resolutions, because they had a lot of unused CUs at the lower resolutions.

I don't know if you've noticed, but the design philosophy of AMD and nVidia has kind of flipped.
 

llien

Member
We already heard it 7 years ago...
We barely had AMD presence on PC 7 years ago.
Unreal Engine 4 was based on NV 7 years ago.
Remember how UE5 was demoed this year?




the best visual effects and performance of today's game
Have you seen CP2077 RT on vs Off? :)

 
Last edited:

llien

Member
If AMD went with an RT hybrid solution, and as per their patent, they want a simplified version to save on silicon area/complexity at the cost of RT performance? Fine. Like I said earlier, it's a legit decision.
This is nothing but buzz.
No understanding of what a "deep" solution is or is not, no understanding of what "hybrid" means in this context, and completely ignoring the actual dedicated hardware pieces.

It's like "DLSS 2.0 is AI" bullshit, but just applied to something else.
 
Last edited:

Buggy Loop

Member
This is nothing but buzz.
No understanding of what a "deep" solution is or is not, no understanding of what "hybrid" means in this context, and completely ignoring the actual dedicated hardware pieces.

It's like "DLSS 2.0 is AI" bullshit, but just applied to something else.

TmwEd2PhN4KqnwwNiceixn-1200-80.jpg


AMD’s patent itself mentions it removes complexity and saves area...

Their BVH traversal is not fully independent; it's accelerated, but it has to knock back on the shader's door to ask what to do next, while Nvidia went with an ASIC solution.

Like I said, don’t care, but if you save area and complexity, it’s silicon area that should be used for more power elsewhere.
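To make the hybrid-vs-fixed-function distinction concrete, here's a rough pseudocode sketch in Python; the function names and structure are purely illustrative, not AMD's or Nvidia's actual interfaces:

```python
# Illustrative sketch only - not AMD's or Nvidia's actual implementation.
# "Hybrid": the shader drives the traversal loop, and fixed-function hardware
# only answers the box/triangle intersection queries.
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    is_leaf: bool = False
    hit_distance: Optional[float] = None        # stand-in for a triangle hit
    children: list[Node] = field(default_factory=list)

def ray_accelerator_intersect(ray, node) -> list[Node]:
    """Hypothetical stand-in for a hardware intersection instruction:
    returns which children the ray touches (stubbed here as 'all of them')."""
    return node.children

def trace_ray_hybrid(ray, root: Node) -> Optional[float]:
    # The traversal loop itself is shader work: push/pop the stack, decide next.
    stack, closest = [root], None
    while stack:
        node = stack.pop()
        for child in ray_accelerator_intersect(ray, node):
            if child.is_leaf:
                if closest is None or child.hit_distance < closest:
                    closest = child.hit_distance
            else:
                stack.append(child)             # bookkeeping done by the shader
    return closest

# A fully fixed-function approach would hide this whole loop inside the RT core:
# the shader would just call something like trace_ray(ray, bvh) and get the hit.

# Toy usage: one internal node with a single leaf hit at distance 3.0
bvh = Node(children=[Node(is_leaf=True, hit_distance=3.0)])
print(trace_ray_hybrid(ray=None, root=bvh))     # -> 3.0
```

The point being argued is where the traversal loop (the stack handling and "what next" decisions) lives: in shader code that calls a hardware intersection instruction, or entirely inside a dedicated unit.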
 

Ascend

Member
TmwEd2PhN4KqnwwNiceixn-1200-80.jpg


AMD’s patent itself mentions it removes complexity and saves area...

Their BVH traversal is not fully independent; it's accelerated, but it has to knock back on the shader's door to ask what to do next, while Nvidia went with an ASIC solution.

Like I said, don’t care, but if you save area and complexity, it’s silicon area that should be used for more power elsewhere.
The area went towards the Infinity Cache to allow a narrower memory bus. The memory bus is one of the most expensive components. That's the reason no one wants to make cards with a 512-bit bus anymore. Too expensive. And as cards become more powerful, more bandwidth is needed. And HBM... well... it's expensive because of the complexity of the interposer, among other things.
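For context, the standard bus-width arithmetic, using the published memory speeds for these cards (a quick sketch, not a full cost model):

```python
# Memory bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps
def bandwidth_gb_s(bus_bits: int, data_rate_gbps: float) -> float:
    return bus_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(256, 16))   # RX 6800 XT, 16 Gbps GDDR6:  512 GB/s
print(bandwidth_gb_s(320, 19))   # RTX 3080, 19 Gbps GDDR6X:   760 GB/s
print(bandwidth_gb_s(512, 16))   # hypothetical 512-bit board: 1024 GB/s, but far more pins, traces and PCB cost
```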

Yup. It seems to add a lot to the atmosphere of the game.
RT Ultra seems to make a big difference. RT Medium is a waste of performance.

 

llien

Member
TmwEd2PhN4KqnwwNiceixn-1200-80.jpg


AMD’s patent itself mentions it removes complexity and saves area...

Their BVH traversal is not fully independent; it's accelerated, but it has to knock back on the shader's door to ask what to do next, while Nvidia went with an ASIC solution.

Like I said, don’t care, but if you save area and complexity, it’s silicon area that should be used for more power elsewhere.
This literally says "we have a more flexible solution", not "we've saved on hardware".
There is NO WAY to be flexible about which structures you could use if that part were fixed in hardware.
Specialized hardware that can only work with structure bla is a very dumb (and simple to implement) thing. AMD simply says it brings you nowhere.
 

Ascend

Member
It's surprising to me how many were arguing about having no compromises with RTX cards, and how many people are now turning down settings on their RTX cards to be able to run Cyberpunk 2077 smoothly...
 
How come Nvidia even survived this battle in rasterization? If you can answer that, please go ahead.

Well firstly improvements to graphics technology and the R&D required is an incremental process. Each advancement you make today helps you tomorrow and you build and improve. Similarly each delayed improvement, milestone not met or advancement not yet reached will stack up over time.

For the last 5-7 years AMD's Radeon group was massively starved of cash for GPU R&D, which made each product they released weaker relative to Nvidia's. AMD at one point was essentially bankrupt; were it not for the console deals they had in place, the company would have shuttered its doors by now and no longer exist.

AMD accrued a ton of tech debt during that time frame where Nvidia was unimpeded. Nvidia is the market leader by a very solid margin and has maintained that position for the last 7+ years.

Radeon group fell so far behind in both their products and release cycle that prior to RDNA2 unveiling/launch we had tons of people writing them off completely saying they expected 2080ti levels of performance or maybe at best 2080ti+10-15%. A lot of these same people are the ones in this thread mentioning how disappointed they are with the performance of these cards and how there is no reason why anyone would ever want to buy one and that Nvidia is better in every way.

If you look at RDNA1 which was the best GPU AMD had released at that point in time it was significantly behind the 2080ti. AMD wasn't even competing at the top end at all for several years. RDNA2 massively improved over RDNA1, including AMD getting a handle on their release cycles not being behind Nvidia anymore. They are now competing with Nvidia's top of the line cards in rasterization which is a massive incremental improvement over RDNA1 and way beyond what many had thought possible from them.

The thing is these improvements happen incrementally and AMD made a massive leap forward here to eliminate a ton of their tech debt.

Nvidia is many times larger than AMD, and that is compared to the entire AMD company including CPUs. In terms of the Radeon group within AMD, Nvidia is possibly even 10 times larger. They have more engineers and developers and more revenue for R&D. Nvidia has also maintained this size/revenue/R&D advantage for years, allowing them to build a smooth-running machine of an organization with some of the best graphics engineers and developers in the world. The question is not "How come AMD doesn't beat them soundly in raster?"; the question should be "How the hell did they catch up in raster so fast?".

Regarding the actual silicon, Nvidia has more transistors overall and a much larger die area/size. Although Nvidia has put more of their transistors towards RT and Tensor math AMD also put portions of their silicon towards these things (although less) so it is not like AMD only put resources towards rasterization. Would AMD have better performance if they made a larger die size and packed more CUs? Certainly but then power draw and heat would also increase and efficiency would go down while cost would rise. There are always tradeoffs when it comes to engineering.

AMD has shed most of its tech debt and essentially caught up with Nvidia in a general sense. That is a huge accomplishment, and a lot of it can be owed to actually having revenue due to the success of Ryzen, and being able to hire more engineers and developers than before. They will likely continue to improve heavily with their next card, incrementing from RDNA2.
 

Buggy Loop

Member
This literally says "we have a more flexible solution", not "we've saved on hardware".
There is NO WAY to be flexible about which structures you could use if that part were fixed in hardware.
Specialized hardware that can only work with structure bla is a very dumb (and simple to implement) thing. AMD simply says it brings you nowhere.

Are we still going on assuming we don't know about these cards' RT performance? Might have been an interesting discussion months ago, but now?

No really, I'm not saying it's a bad choice they made; it's a choice, and like anything else in engineering, it's a decision to perhaps get an edge elsewhere. So what are you arguing about here? That AMD's solution will suddenly catch up in RT path tracing as soon as it's optimized for them? Are we really going there?

I wish I could skip this whole gen of cards to be honest; I would prefer squeezing my 1060 more until RDNA 3 or Hopper, because Cyberpunk 2077 is the sole reason I want to upgrade for RT, and the solutions for that, even with Ampere which is on its own turf, aren't that satisfying.
 
TmwEd2PhN4KqnwwNiceixn-1200-80.jpg


AMD’s patent itself mentions it removes complexity and saves area...

Their BVH traversal is not fully independent; it's accelerated, but it has to knock back on the shader's door to ask what to do next, while Nvidia went with an ASIC solution.

Like I said, don’t care, but if you save area and complexity, it’s silicon area that should be used for more power elsewhere.

BVHs are built via compute shaders on the CUDA cores and not inside the RT cores.


Function-wise, the RT cores and the RAs are much of a muchness (and please don't misquote me on that: it absolutely does not automatically mean that AMD must be doing it at the same efficiency as Nvidia).
 

Buggy Loop

Member
Well firstly improvements to graphics technology and the R&D required is an incremental process. Each advancement you make today helps you tomorrow and you build and improve. Similarly each delayed improvement, milestone not met or advancement not yet reached will stack up over time.

For the last 5-7 years AMD's Radeon group was massively starved of cash for GPU R&D, which made each product they released weaker relative to Nvidia's. AMD at one point was essentially bankrupt; were it not for the console deals they had in place, the company would have shuttered its doors by now and no longer exist.

AMD accrued a ton of tech debt during that time frame where Nvidia was unimpeded. Nvidia is the market leader by a very solid margin and has maintained that position for the last 7+ years.

Radeon group fell so far behind in both their products and release cycle that prior to RDNA2 unveiling/launch we had tons of people writing them off completely saying they expected 2080ti levels of performance or maybe at best 2080ti+10-15%. A lot of these same people are the ones in this thread mentioning how disappointed they are with the performance of these cards and how there is no reason why anyone would ever want to buy one and that Nvidia is better in every way.

If you look at RDNA1 which was the best GPU AMD had released at that point in time it was significantly behind the 2080ti. AMD wasn't even competing at the top end at all for several years. RDNA2 massively improved over RDNA1, including AMD getting a handle on their release cycles not being behind Nvidia anymore. They are now competing with Nvidia's top of the line cards in rasterization which is a massive incremental improvement over RDNA1 and way beyond what many had thought possible from them.

The thing is these improvements happen incrementally and AMD made a massive leap forward here to eliminate a ton of their tech debt.

Nvidia is many times larger than AMD, and that is compared to the entire AMD company including CPUs. In terms of the Radeon group within AMD, Nvidia is possibly even 10 times larger. They have more engineers and developers and more revenue for R&D. Nvidia has also maintained this size/revenue/R&D advantage for years, allowing them to build a smooth-running machine of an organization with some of the best graphics engineers and developers in the world. The question is not "How come AMD doesn't beat them soundly in raster?"; the question should be "How the hell did they catch up in raster so fast?".

Regarding the actual silicon, Nvidia has more transistors overall and a much larger die area/size. Although Nvidia has put more of their transistors towards RT and Tensor math AMD also put portions of their silicon towards these things (although less) so it is not like AMD only put resources towards rasterization. Would AMD have better performance if they made a larger die size and packed more CUs? Certainly but then power draw and heat would also increase and efficiency would go down while cost would rise. There are always tradeoffs when it comes to engineering.

AMD has shed most of its tech debt and essentially caught up with Nvidia in a general sense. That is a huge accomplishment, and a lot of it can be owed to actually having revenue due to the success of Ryzen, and being able to hire more engineers and developers than before. They will likely continue to improve heavily with their next card, incrementing from RDNA2.

I'll give you that, it's not an easy battle against a company as big as Nvidia.

I hope they have a memory solution that is not SRAM for RDNA 3, so that we can see what silicon fully filled with CUs can do.
 
I'll give you that, it's not an easy battle against a company as big as Nvidia.

I hope they have a memory solution that is not SRAM for RDNA 3, so that we can see what silicon fully filled with CUs can do.

I think Infinity Cache in some form is here to stay for the foreseeable future with RDNA-based cards. Supposedly RDNA3 is to be an MCM/chiplet design, with a potential node shrink to 5nm as well, so it will be interesting to see what they can accomplish there. I wonder whether the Ryzen team will bring any more unexpected CPU know-how to the table for RDNA3?

I agree it would be interesting to see what RDNA2/3 would look like with a 512-bit bus and HBM2e and more transistors put towards raster or RT, but AMD already had HBM cards with the Vega line and it didn't seem to help much, so who knows what kind of performance a hypothetical card would have yielded. I do know it would have probably been hotter, drawn more power, clocked lower, been less efficient, and cost AMD more to produce.
 

Ascend

Member
Leaks again...

6700XT is supposedly 15%-20% faster than the 5700XT at $349.
6700 has similar performance to the 5700XT for $299.



I hoped for a little better performance... At least the pricing seems ok.
 

llien

Member
Are we still going on assuming we don't know about these cards' RT performance? Might have been an interesting discussion months ago, but now?

What are "card performances"?
Card performance in some quick-hack "port it back to DXR from proprietary NV" stuff?
Do you know why BF was not using DXR? Oh, perf issues and bugs, said the changelog.

Is it performance in a pure DXR 1.0 or DXR 1.1 game that we compare? Oh, are there any?

Is it Dirt 5 (DXR 1.1) perhaps?

Even setting aside that there's no true apples-to-apples comparison, what on earth would it tell you about AMD's approach, which, as far as I read it, can be used to accelerate Lumen (the UE5 engine thing), something NV's black-box approach would struggle with?

Cyberpunk 2077 is the sole reason I want to upgrade for RT

I don't know why anyone would look at an honest CP77 RT on/off comparison and still want to upgrade for RT in that game, given how it looks, but that is a matter of opinion anyway.
 

llien

Member
Leaks again...

6700XT is supposedly 15%-20% faster than the 5700XT at $349.
6700 has similar performance to the 5700XT for $299.



I hoped for a little better performance... At least the pricing seems ok.

What is dude's track record?

3060Ti is about 20-25% faster than 5700XT.
 

Ascend

Member
What is dude's track record?

3060Ti is about 20-25% faster than 5700XT.
He was pretty much on-point with almost everything regarding the 6800 series cards.

Anyway... TGOG is calling out Jayztwocent's review;
"AMD 6900XT Reviews... Something isn't right... Jayztwocents Review of the RX 6900XT Exposes a LOT of Problems in Today's Review Community. But Why Was It SO Bad? Today we go over Jayz Review and how the Tech Media create their own narrative to fit their own agenda. "

 

Buggy Loop

Member
I'd play with it off, given a choice.
This:

bea35145-eb2f-5a36-8408-d8207be3e78c


looks much more involved to me, than this:

3bfb403f-3afe-5161-a5e0-0f9ec42dd872

That's because what you don't see (and why screenshots without context don't make any fucking sense for a dynamic lighting solution) is that there are doors to the outside right there. What you see is the outdoor light coming into the room.

Now you might prefer one over the other in a screenshot as an artistic choice, but dynamically, in motion, the idea that outside light sources have no influence as soon as you step indoors does not make any fucking sense to any possible human. The above screenshot only makes sense if it's night time.

And no matter whether it's DXR 1.0 or 1.1, you must be the only one, or one of the few, who thinks that's somehow gonna make AMD gain ground in RT performance. I'm not talking about just RT shadows here. Don't hesitate to @ me whenever that secret sauce happens.
 