Jaguar Victory
DLSS2 and drivers are the main benefits to NVIDIA right now. They would make me reconsider an AMD card. Not in love with how they gimped VRAM on 3080 though
Yeah, OP is just a giant Nvidia fanboy: "I spent a lot of money on RTX 3080 and 3090 cards, so I can't handle the fact these AMD cards might be better and cheaper, so I'm gonna make a thread to shit on them and make myself feel better."
> Yeah OP is just a giant nvidia fanboy

Or he prefers a better feature set for the future? No one is taking away the joy you have from your AMD cards. Some just want to be future proof, instead of being rasterization king, when DLSS and raytracing are more performant on Nvidia. I can buy both enthusiast cards right now, but guess what I'll choose, since I want to be future proof.
Always nice to see the effects of shattered narratives.
"Amd cant compete with nvidia. At best they will have a 3070 competitor".
This is ryzen all over again.
So, better current gen performance in older games is now better? What about games that use better techniques like raytracing or DLSS? You can't tell the difference anyway, so what does it matter to you?
Jokes aside: besides beating NV all around on a per-transistor basis (the difference in power consumption could be "justified" by the process difference), AMD did an amazing job on the memory front.
RDNA2 uses slower VRAM on a smaller bus yet achieves better performance, something where NV had the edge not so long ago.
Now you AMD zealots can reign once more and the lands shall prosper!

I was waiting for the day Radeon would make a mockery of Nvidia again. It's been a long 8 years, and it's just as hilarious now as it was then watching all of the Nvidia zealots have a meltdown.
Feels like OP is really trying to justify buying a 3090 with this thread.
> What about games that use better technique like raytracing or DLSS?
I think we all know Nvidia will launch their Super 3000 series on TSMC's 7nm process and slap more VRAM on it. Gonna be some buyers remorse yet again.
> Like what for example? Please list examples so we can all be educated.

What about the handful of games that use NV's proprietary RT (DXR as a standard exists, I know), and the games that use TAA-based upscaling (that obviously blurs shit) with some generic NN sprinkled over it?
Yeah, what about them?
> Like what for example? Please list examples so we can all be educated.

I was asking "what about the handful of games with NV-specific RT and upscaling", and I don't quite understand your answer.
> This buyers remorse is complete nonsense unless they were to launch a stronger card a month later at the exact same price. Why would anyone ever have buyers remorse because a refresh line will launch 6 months or a year down the line? Entire GPU generations used to take 6 months, then one every year. Should people never have bought anything because in 12 months something better will come out?

I guess we'll see. What is for sure is Nvidia sure as hell didn't want to use Samsung's node, but they screwed the pooch with TSMC. The 3000 series will actually be able to stretch its legs on a TSMC node, and they will certainly slap on more VRAM. If you're happy with the 3000 series, fantastic, but I'm interested to see the performance gains in the future.
> I was asking what about the handful of games with NV-specific RT and upscaling, and I don't quite understand your answer.

So why does DLSS look better than AMD's offering? Let's not forget your meme.
Sounds like you have no idea what you are talking about, and constantly try and come at me after I have constantly proved you wrong, over and over.
> So what? I am still buying the 6800 XT card to shit on Nvidia's lousy practices, like charging 1500 dollars for the 2080 Ti because there was no competition.

Only reason I am jumping ship. Glad I held off getting both the 2080 and 3080.
> Sounds like you have no idea what you are talking about, and constantly try and come at me after I have constantly proved you wrong, over and over.

Pretty sure I do, considering I've been building, troubleshooting, upgrading and overclocking PCs for like 15 years.
> Pretty sure I do, considering I've been building, troubleshooting, upgrading and overclocking PCs for like 15 years.

Building for 15 years, but still have no clue...
There's no need to be upset.
ITT: a lot of clueless fanboys that should stick to slinging rocks in the console wars. OP is objectively correct in his calculations and understanding of what's what. AMD may win this round but it's by luck of having a better foundry make their chips rather than some miraculous design brilliance on their part.
Regardless, I'm still sitting comfortable on my 1080 Ti and will happily wait for a 40 series from Nvidia with most likely 5nm where the jump over Ampere will be massive. 4080 Ti here I come.
> Building for 15 years, but still have no clue...
I get Nvidia because they have better performance. That's literally the end game. You can be triggered all you want, but you can't convince me a small percentage of rasterization is better than better ray tracing. You have absolutely no clue about building PCs if you think rasterization will beat raytracing in the future. You are completely oblivious at this point in time. You keep coming after me and failing every single time now. Maybe I should block these minuscule arguments at this point if you keep losing them.
Tell me something... Did you buy an AMD card because of async compute? No? Did you buy an AMD card supporting DX12 when nVidia's 900 series was still stuck with DX11? Let me guess: you didn't, did you? Obviously you're full of shit with your "better technique" and "superior/newer technology" nonsense. You're constantly shifting the goalposts as to why nVidia is better rather than judging the cards on their actual merits. Go ahead. Lie again, just to save face.
The most important thing: the 6800 cards are DX12U compliant, just like nVidia's 3000 series (if not more so). That says more than enough about the hardware-supported features for games.
> look better

Look, if you want to talk about DLSS and other upscaling tech, there is a thread for that.
> Look, if you want to talk about DLSS and other upscaling tech, there is a thread for that.

You sound triggered. Moving the goalposts. No offense, but you have no argument. At all.
BluRayHiDef mentioned there are a "dozen" games with RT and that fancy upscaling, and I wonder if that includes WoW.
As for unreleased games (a whopping one of them), we'll judge once they're released.
> I get Nvidia because they have better performance. That's literally the end game. You can be triggered all you want. But you can't convince me a small percentage of rasterization is better than better ray tracing. You have absolutely no clue on building PCs if you think rasterization will beat raytracing in the future. You are completely oblivious at this point in time. You keep coming after me and failing every single time now. Maybe I should block these minuscule arguments at this point if you keep losing them.

You heard it here first, folks... He gets nVidia because they have better performance. Except that that performance is not only in the area that benefits AMD; it's in the area that games have used for years and will continue to use for years, considering that ray tracing is too heavy. Suddenly, a "small" percentage advantage in rasterization is irrelevant and not better than a small percentage advantage in ray tracing. Even though that "small" percentage advantage in rasterization means a $579 card occasionally beating a $1499 one. And even though both vendors support hardware accelerated ray tracing, only one of them matters. And this is all coming from someone that claims to be impartial... Of course you are, bro. Of course you are...
> You heard it here first folks... He gets nVidia because they have better performance. Except when that performance is not only in the area that benefits AMD, it's in the area that games have used for years and will continue to use for years, considering that ray tracing is too heavy. Suddenly, a "small" percentage advantage of rasterization is irrelevant and not better than a small percentage advantage in ray tracing. Even though that 'small' percentage advantage in rasterization means a $579 card occasionally beating a $1499 one. And even though both vendors support hardware accelerated ray tracing, only one of them matters. And this is all coming from someone that claims to be impartial... Of course you are bro. Of course you are...

Why do you hate that Nvidia leads in future performance? Why is that a bad thing to AMD? They have people who want to spend less, for much less performance. PC is not the same arena as consoles, so keep that retarded mindset out of the PC gaming space. People who want the best of the best will still go to Nvidia. The ones who have a budget or miraculously don't care for raytracing will go AMD. No need to keep throwing the lesser results in our face.
> Dlss is def a factor, but double the Vram on AMD's first two entries more than makes up for it. When you factor in the price difference there's no way you can say Nvidia isn't hosing consumers.

I think we'll have issues with 16 GB of RAM not being able to hold textures in future games before DLSS saves performance with lower resolution and higher texture quality, with lower bandwidth costs (GDDR6X vs regular GDDR6). DirectStorage coming out will be huge for RTX IO. What does AMD have then, with raytracing being the de facto measure of ultimate performance?
Define "future performance". You still didn't answer whether you bought a DX12 AMD GPU when nVidia only had DX11 cards. So, back then, were you for "future performance", or were you for "fastest performance available"?Why do you hate that Nvidia leads in future performance?
> Why is that a bad thing to AMD? They have people who want to spend less, for much less performance.

Your bias is shining through once again... Spend less for much less performance? You sound like the people supporting Intel a while back. All I can say to this is...
> PC is not the same arena as consoles, so keep that retarded mindset out of PC gaming space.

You mean the one of blindly following your favorite brand? Yeah... Take your own advice.
> People who want the best of the best will still go to Nvidia.

And the people that have actual brains will go for AMD cards this generation.
> The ones who have a budget or miraculously don't care for raytracing will go AMD.

$999 is a 'budget'?
> No need to keep throwing the lesser results in our face.

Of course, oh unbiased holy one...
> Define "future performance". You still didn't answer whether you bought a DX12 AMD GPU when nVidia only had DX11 cards. So, back then, were you for "future performance", or were you for "fastest performance available"?

So you have no issue with the lower performance in raytracing. Cool for you, if you prefer no raytracing, as I've said before. If you don't care for future performance, like Ascend, AMD is perfectly fine for you. Those who only care about the here and now, and give no shit about the future, are perfectly safe with AMD. The ones who care about not only the here and now but also the future are looking towards future games. Again, what games does Ascend have to prove his point, besides AMD-picked results WITH ABSOLUTELY NO RAYTRACING BENCHMARKS in sight? Please don't hate on Ascend, as he doesn't care for future titles or even current gen games that feature raytracing. Only older games, not even current games.
Vague language like "future performance" is marketing speak, and manipulative.
But let's use another metric. Last time I checked, the majority of people still game at 1080p. On Steam, that is 65.5% of users. Who's in 2nd place? 1366x768 believe it or not, at 9.3%. Third place is 1440p at 6.9%, and 4K doesn't even reach Top 5 at 2.3%. How long has 4K been around? It is still not widely adopted. 4K has been around longer than ray tracing, and if you're really going to base your purchasing decision on a feature that just came around the corner, you have to define what "future performance" means. Because in a year or two, all current cards will still be too slow to use RT properly, and in 10 years when it possibly latches on, these cards will be too slow anyway. And DLSS does not change that, especially now that nVidia will most definitely be losing market share.
> So you have no issue with the lower performance in raytracing. Cool for you, if you prefer no raytracing, as I've said before. If you don't care for future performance, like Ascend, AMD is perfectly fine for you. Those who only care about the here and now, and give no shit about the future, are perfectly safe with AMD. The ones who care about not only the here and now but also the future are looking towards future games. Again, what games does Ascend have to prove his point, besides AMD-picked results WITH ABSOLUTELY NO RAYTRACING BENCHMARKS in sight? Please don't hate on Ascend, as he doesn't care for future titles or even current gen games that feature raytracing. Only older games, not even current games.

Very funny, considering AMD graphics cards are well known to age a lot better than nVidia's, and, aside from the RTX 3090, AMD has more VRAM as well... And considering that consoles drive the baseline for game development in most cases and they carry RDNA2... Yeah...
The fact that Nvidia chose to use an inferior node, for whatever reason, is not our problem. They fucked up; that's on them. Could Ampere have been better on TSMC N7? Maybe, who knows, but they didn't, so it doesn't fucking matter.

> RDNA2 is manufactured on the "7nm" manufacturing process of Taiwan Semiconductor Manufacturing Company (TSMC), whereas Ampere is manufactured on the "8nm" manufacturing process of Samsung. Despite the actual sizes of these manufacturing processes not being consistent with their marketing names (hence the quotation marks), TSMC's "7nm" process is indeed smaller than Samsung's "8nm" process, which is what's important to consider.
> Hence, because RDNA2 is manufactured on the smaller process, it packs more transistors per square millimeter. For example, the following calculations show the difference in transistor density between RDNA2's largest consumer chip, Navi 21, and Ampere's largest consumer chip, GA102.
> Navi 21: 536 square millimeters and 26.8 billion transistors -> 26.8 billion / 536 mm^2 = 50,000,000 transistors per square millimeter
> GA102: 628.4 square millimeters and 28.3 billion transistors -> 28.3 billion / 628.4 mm^2 = 45,035,009.5 transistors per square millimeter
> 50,000,000 / 45,035,009.5 = 1.110247351 -> 1.110247351 - 1 = 0.110247351 -> 0.110247351 x 100 = 11.0247351% -> ~11%
> This additional 11% of transistors per square millimeter is why RDNA2 performs as well as it does in rasterization relative to Ampere even though Ampere has more transistors overall; the entirety of Ampere's 28.3 billion transistors cannot be used exclusively for rasterization, since many of them comprise RT Cores and Tensor Cores that exclusively perform ray tracing and AI upscaling, respectively.
> While the exact number of transistors that comprise RT Cores and Tensor Cores is not known, we can be sure that they amount to more than the difference in the overall number of transistors between RDNA2 and Ampere (28.3 billion - 26.8 billion = 1.5 billion), based on diagrams that illustrate the relative sizes of CUDA Cores, RT Cores, and Tensor Cores.
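The quoted density arithmetic is easy to sanity-check; here is a minimal sketch (the die sizes and transistor counts are just the figures quoted above, not independently verified):

```python
def density_per_mm2(transistors: float, die_area_mm2: float) -> float:
    """Transistors per square millimeter of die area."""
    return transistors / die_area_mm2

# Figures as quoted: Navi 21 (26.8B transistors, 536 mm^2),
# GA102 (28.3B transistors, 628.4 mm^2).
navi21 = density_per_mm2(26.8e9, 536.0)
ga102 = density_per_mm2(28.3e9, 628.4)

# Percentage density advantage of Navi 21 over GA102.
advantage = (navi21 / ga102 - 1) * 100

print(f"Navi 21: {navi21 / 1e6:.1f}M transistors/mm^2")  # 50.0M
print(f"GA102:   {ga102 / 1e6:.1f}M transistors/mm^2")   # 45.0M
print(f"Density advantage: {advantage:.1f}%")            # 11.0%
```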
Not sure what the purpose of the GA102 block diagram is here tbh...
> Hence, Ampere performs roughly as well as RDNA2 in rasterization with less transistors.

No it doesn't: see above.
> This indicates that RDNA2 isn't as efficiently designed as Ampere or - at the very least - isn't as efficiently put to use by AMD's drivers as Ampere is by Nvidia's drivers.
> This assertion is based on the following rationale: an additional 11% of transistors should always result in better rasterization performance (when other features are not enabled), but as AMD themselves showed at their announcement event, RDNA2 is faster than Ampere in rasterization only some of the time, and barely so whenever it is. Hence, despite having more transistors per square millimeter, and despite being able to use all of them for rasterization (whereas Ampere can use only some of its transistors for rasterization), RDNA2 is only as fast as or slightly faster than Ampere in rasterization. Hence, RDNA2 isn't as impressive as it seems.
> It can be argued that RDNA2 is indeed more efficient because it performs as well as it does in rasterization relative to Ampere despite using less power; Navi 21 uses 300 watts at most at stock settings, but GA102 uses 350 watts at most at stock settings. However, it must be considered that Navi 21 is - once again - manufactured on a smaller manufacturing process, that 300 watts is only about 14% less than 350 watts, and that Navi 21 uses more transistors for rasterization (which naturally require less power, since they don't have to be clocked as fast as a smaller number of transistors would).
> Hence, if Ampere were to be refreshed on TSMC's "7nm" manufacturing process, it would be outright faster than RDNA2 in rasterization.
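As a quick check of the wattage comparison in that quoted passage, a sketch using the stock figures as quoted (300 W for Navi 21, 350 W for GA102):

```python
# Stock board power figures as quoted in the post above.
navi21_watts = 300.0
ga102_watts = 350.0

# How much less power Navi 21 draws than GA102, as a percentage of GA102's draw.
saving_pct = (ga102_watts - navi21_watts) / ga102_watts * 100

print(f"Navi 21 draws {saving_pct:.1f}% less power than GA102")  # 14.3%
```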
> Very funny, considering AMD graphics cards are well known to age a lot better than nVidia's, and, aside from the RTX 3090, AMD has more VRAM as well... And considering that consoles drive the baseline for game development in most cases and they carry RDNA2... Yeah...

The raytracing is not as good as Nvidia's, so no matter how hard you battle for AMD, people who want next gen graphics cards have no other place to go than Nvidia. Sorry dude, but many people love the way raytracing looks, and no matter how hard you rally for them, they won't have better raytracing or AI neural networks. Older games will play better on AMD, but not current or next gen games with raytracing, unless you turn off raytracing to have better framerates. You can cry to the moon, but they won't believe.
As for ray tracing, absence of evidence is not the same as evidence of absence. These 6800 series cards have hardware accelerated ray tracing. Stop pretending like they don't. And I'll say it again. The 6800 cards are fully DX12U compliant. Your whole "Ascend cares only about older games" assertion is nothing more than a shallow shaming tactic that bullies use when they are cornered. AMD achieved great things with RDNA2 on the same node as RDNA and support all the required DX12U features. And the ones not even considering AMD this time around were never going to buy an AMD card anyway, especially since nVidia has extreme shortages in supply of their 3000 series cards.
If you'd rather have no card than an AMD card, that says quite a lot.
> The raytracing is not as good as Nvidia's, so no matter how hard you battle for AMD, people who want next gen graphics cards have no other place to go than Nvidia. Sorry dude, but many people love the way raytracing looks, and no matter how hard you rally for them, they won't have better raytracing or AI neural networks. Older games will play better on AMD, but not current or next gen games with raytracing, unless you turn off raytracing to have better framerates. You can cry to the moon, but they won't believe.

AMD's RT performance is actually looking pretty good. You may want to wait before passing judgement.
Also, that Guilhermegrg is an AMD employee who hates to see Nvidia have better raytracing.
> AMD's RT performance is actually looking pretty good. You may want to wait before passing judgement.

You got actual benchmarks, vs picked ones? You waited for Nvidia's; why won't you wait for AMD's?
> Does RDNA2 not also have part of its die dedicated to ray tracing functions and not rasterization? Honest question; I haven't read anything technical about the architecture yet, but I know it supports RT.

The ray tracing hardware is built into the CUs, which also perform rasterization. So it's the same hardware that does both functions.