
[DF] AMD vs NVIDIA - Ray Tracing Performance Deep Dive Feat. RX 6800XT & RTX 3080

Rikkori

Member
May 9, 2020
 

KungFucius

Member
Jul 16, 2008
Fuck AMD. It would have been better for us all if they could do RT. Now we are going to get non-RT, AMD-optimized games so they can hawk their overpriced cards.
 

scydrex

Member
Aug 10, 2017
AMD RT right now sucks... Nvidia is the better option if you can find one, plus DLSS 2.0.
 

ethomaz

is mad because DF didn't do a video on a video of a video of a video on PS5
Mar 19, 2013
Brazil
That was already known.

AMD's RT solution works well for simple ray-tracing effects.
The more complex the effect becomes, the further nVidia pulls ahead of AMD's solution.
It is clear that AMD's RT hardware is not as strong as nVidia's.

That said, you can do amazing effects with AMD RT hardware, as Miles Morales showed, but people need to understand that Sony's API gives devs more freedom because it is focused on a single hardware target... nVidia hardware under the same conditions could do even better.

AMD and nVidia are tied to heavy-overhead APIs that follow predefined standards.
 

Dr.D00p

Member
May 23, 2019
They can just about get away with poor RT performance this gen, but in 2-3 years, when the new generation of cards is due and RT is firmly established in game releases, people will start to go out of their way to avoid them in a way they won't with the 68xx cards.
 
Apr 11, 2016
AMD should have priced these cards better; they are too close in price to NVidia while missing so many features.

They cost more than nVidia. I mean the actual prices you see in stores, not what AMD tried to pass off as MSRP. I saw two cards in stock at a local retailer two days ago: a regular 6800 was 985 euros and a 6900 XT was 1430 euros.
 

Kataploom

Member
Jan 30, 2014
Colombia
Well, I think it's normal, considering Nvidia is not only ahead of AMD in RT but games are also more optimized for their hardware, since the RDNA2 cards' market share is almost non-existent. Let alone that devs didn't even have actual RT support for AMD cards until those came out (or so I think, I may be wrong).

AMD is also late with their DLSS alternative, which could help a lot with performance. I really hope they hurry, because I'd rather have their implementation, which will supposedly work in every game and not only in specific ones.
 

Ascend

Member
Jul 23, 2018
AMD should have priced these cards better; they are too close in price to NVidia while missing so many features.
Recent leaks show that the 6800 XT, then called the 5900 XT, was planned for an MSRP of $540 US;

That means that they are charging $110 more than what they had initially planned. AMD is either taking advantage of the shortages they knew would happen... Or they overestimated the RTX cards and got greedy after they saw nVidia's performance. Or both. Either way, if things settle down and AMD can't sell their cards, they will definitely drop their prices.

Not that I think anyone should base their graphics card buying decision on RT performance right now, but whatever.
 

llien

Member
Feb 1, 2017
It's a good, honest and informative video.
Then I went to the site and, in the very first paragraphs, came across:

"RDNA 2 calculates ray traversal on the compute units, introducing competition for resources, while Nvidia does this in a specialised processor within the RT core."

Which is BS; notably, while nobody can say for sure what the heck is inside, AMD has clearly stated that it uses a dedicated piece of hardware:




PS
Oh boy, and then it goes on to the TAA-derivative DLSS magical bazinga... just the "deep dive" I'd expect from DF, no less.
 

llien

Member
Feb 1, 2017
That means that they are charging $110 more than what they had initially planned. AMD is either taking advantage of the shortages they knew would happen... Or they overestimated the RTX cards
Look at the unusual pricing and batshit bananas memory configurations on Ampere cards; RDNA2 wrecked J.H.'s plans and very obviously caused him to drop everything down a tier (3080 => 3070 with half the VRAM, 3080 Ti => 3080 with half the VRAM, Titan => 3090).
 

Soltype

Member
Mar 30, 2015
Recent leaks show that the 6800 XT, then called the 5900 XT, was planned for an MSRP of $540 US;

That means that they are charging $110 more than what they had initially planned. AMD is either taking advantage of the shortages they knew would happen... Or they overestimated the RTX cards and got greedy after they saw nVidia's performance. Or both. Either way, if things settle down and AMD can't sell their cards, they will definitely drop their prices.

Not that I think anyone should base their graphics card buying decision on RT performance right now, but whatever.
Never knew about the 5900 XT; if the 6800 XT had been $540, it'd have been bonkers.
 

ethomaz

is mad because DF didn't do a video on a video of a video of a video on PS5
Mar 19, 2013
Brazil
Then I went to the site and, in the very first paragraphs, came across:

"RDNA 2 calculates ray traversal on the compute units, introducing competition for resources, while Nvidia does this in a specialised processor within the RT core."

Which is BS; notably, while nobody can say for sure what the heck is inside, AMD has clearly stated that it uses a dedicated piece of hardware:




PS
Oh boy, and then it goes on to the TAA-derivative DLSS magical bazinga... just the "deep dive" I'd expect from DF, no less.
Your own pic says it uses the CUs lol
 

Superayate

Member
Nov 24, 2020
Then I went to the site and, in the very first paragraphs, came across:

"RDNA 2 calculates ray traversal on the compute units, introducing competition for resources, while Nvidia does this in a specialised processor within the RT core."

Which is BS; notably, while nobody can say for sure what the heck is inside, AMD has clearly stated that it uses a dedicated piece of hardware:




PS
Oh boy, and then it goes on to the TAA-derivative DLSS magical bazinga... just the "deep dive" I'd expect from DF, no less.

Why is it BS? Your own AMD slide says that "traversal of the BVH and shading of ray results is handled by shader code running on the Compute Units". Am I wrong?
 

ethomaz

is mad because DF didn't do a video on a video of a video of a video on PS5
Mar 19, 2013
Brazil
Why is it BS? Your own AMD slide says that "traversal of the BVH and shading of ray results is handled by shader code running on the Compute Units". Am I wrong?
Nope... all RT calculations are done by the Compute Units in RDNA 2... inside the CUs there is a new small unit that handles only ray/box and ray/triangle intersections, but that is still inside the CUs.
AMD has stated that many times.

Alex just repeated what AMD said.
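For reference, the "ray/box" half of that small intersection unit is, in fixed-function form, the classic slab test. A minimal software sketch of it (Python standing in for shader code; illustrative only, not AMD's actual implementation):

```python
import math

def ray_aabb_hit(origin, direction, box_min, box_max):
    """Slab test: does origin + t*direction (t >= 0) hit the axis-aligned box?"""
    tmin, tmax = 0.0, math.inf
    for axis in range(3):
        # reciprocal of the direction; math.inf handles axis-parallel rays
        inv = 1.0 / direction[axis] if direction[axis] != 0.0 else math.inf
        t1 = (box_min[axis] - origin[axis]) * inv
        t2 = (box_max[axis] - origin[axis]) * inv
        # shrink [tmin, tmax] to the interval where the ray is inside every slab
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax
```

In RDNA 2 a test like this runs in the ray accelerator inside the CU, while deciding which box to test next goes back to shader code; on nVidia the whole walk stays inside the RT core.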
 

Darius87

Member
Jul 16, 2018
Euh... I think you need to compare what AMD and Nvidia are currently offering on the market; they're at nearly the same price, so it's normal to compare these two GPUs.
If we're comparing price/performance then I agree with you, but if we're comparing only RT, like in this video, then it's not fair.
 

Sean Mirrsen

Neo Member
Sep 30, 2020
Togliatti, Russian Federation
Then I went to the site and, in the very first paragraphs, came across:

"RDNA 2 calculates ray traversal on the compute units, introducing competition for resources, while Nvidia does this in a specialised processor within the RT core."

Which is BS; notably, while nobody can say for sure what the heck is inside, AMD has clearly stated that it uses a dedicated piece of hardware:




PS
Oh boy, and then it goes on to the TAA-derivative DLSS magical bazinga... just the "deep dive" I'd expect from DF, no less.

It does say in the image that it uses the CUs. The CUs may have "ray accelerators", but it still means that across the overall scene, RT cannibalizes shader performance by having CUs do raytracing instead of shader work.

Which I think is why the AMD card loses performance when hitting complex RT, and the Nvidia card doesn't. It's kind of like an integrated GPU versus a discrete one: even if the total processing power available between the two were comparable, the dedicated setup will have more stable performance as complexity scales up, and in the shared setup either half can bottleneck the other.
 

Buggy Loop

Member
Jun 9, 2004
Quebec, Canada
It does say in the image that it uses the CUs. The CUs may have "ray accelerators", but it still means that across the overall scene, RT cannibalizes shader performance by having CUs do raytracing instead of shader work.

Which I think is why the AMD card loses performance when hitting complex RT, and the Nvidia card doesn't. It's kind of like an integrated GPU versus a discrete one: even if the total processing power available between the two were comparable, the dedicated setup will have more stable performance as complexity scales up, and in the shared setup either half can bottleneck the other.

Bingo

Every time the BVH accelerator (the only ray-tracing hardware added to RDNA 2's shader core arrangement, which is otherwise identical to the 5700 series, including the shader caches) finishes its fixed-function work for a single node of the BVH tree, it has to stop and ask the shader « ok, what next? ». All instructions from then on run on the programmable shader cores, just like on a GTX card. Turing/Ampere, on the other hand, handle this in an ASIC that returns the result to the shading unit only afterwards, according to Nvidia after up to thousands of instructions per ray.

Now, if the RT is simple, it's easy to see that you can probably fit a few instructions here and there into the scheduling without saturating the shading pipeline. The more complex it gets, the more Nvidia's ASIC approach makes sense, as it leaves the SM alone and works concurrently even if the instructions loop over many nodes to find an intersection. ASIC management of a stackless traversal saves a ton of cycles.

And that's not even considering the enormous amount of research and published papers they have put out since 2012 on BVH optimization: approximation methods, parallel construction of high-quality BV hierarchies, surface area heuristic methods, triangle splitting, parallel radix tree construction, treelet formation, plus the queuing/scheduling, batch processing and compression of incoherent rays. Only Nvidia knows what's really inside the ASIC structure, but they've put in the research and work, and it shows.
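As a rough sketch of the loop described above (Python standing in for shader code; the toy BVH layout and all names are made up for illustration): the fixed-function unit answers exactly one box query per node visited, and everything between those queries, the stack management and the "what next?" decision, is ordinary shader work competing with shading for the CUs:

```python
def traverse(nodes, origin, direction, hit_box):
    """Shader-style BVH walk: an explicit stack drives one fixed-function
    box query per visited node (hit_box); everything between those queries
    runs on the programmable shader cores."""
    stack = [0]            # start at the root, node index 0
    hits, visited = [], 0
    while stack:           # this loop is ordinary shader work
        node = nodes[stack.pop()]
        visited += 1       # one round trip to the fixed-function box tester
        if not hit_box(origin, direction, node["min"], node["max"]):
            continue       # missed the bounding box: prune the whole subtree
        if "tri" in node:
            hits.append(node["tri"])        # leaf: candidate triangle
        else:
            stack.extend(node["children"])  # interior node: schedule children
    return hits, visited
```

On Turing/Ampere, by contrast, the equivalent of this whole `while` loop lives inside the RT core, so the SM only sees the final result.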
 

Rikkori

Member
May 9, 2020
They cost more than nVidia. I mean the actual prices you see in stores, not what AMD tried to pass off as MSRP. I saw two cards in stock at a local retailer two days ago: a regular 6800 was 985 euros and a 6900 XT was 1430 euros.
So long as mining season is in full swing, you'll actually have a better chance of getting an AMD card at a reasonable cost than an Ampere card. Right now they just have no chips, though.
 
Dec 29, 2018
Never mind the ray tracing. AMD has a bigger hurdle to overcome in the form of DLSS. This is Nvidia's killer feature.

Ultra ray tracing at 1080p with little to no performance hit on an RTX card.

I'd whore myself out for a fiver to put towards picking up an RTX card.
 

RoboFu

Member
Oct 10, 2017
Basically, the higher the raytracing settings, the smaller the difference between the two; but if you want low raytracing, then the 3080 is your card.

lol that’s a weird sell. The king of crappy raytracing!!
 