
[DF] AMD vs NVIDIA - Ray Tracing Performance Deep Dive Feat. RX 6800XT & RTX 3080

Patrick S.

Banned
So an Nvidia card is better prepared for the future...
sarcastic nicholas cage GIF
 
AMD is still a couple of generations behind Nvidia.
And at this point they don't even have a DLSS equivalent. Their proposed super resolution tech isn't ML-based but rather an AA solution.
 

Rikkori

Member
The other way around. The simpler the RT, the closer AMD is to competitive performance. As complexity increases RTX pulls ahead.
Not quite. After a certain point the difference starts shrinking, which is the opposite of what we'd expect. Watch the video again.

Not that it matters so much, at those settings the games are barely playable anyway, but it's an interesting finding.
 

Buggy Loop

Member
Basically if you care about RT, go with NVIDIA and if you don’t care about RT, go with AMD.

Hoping to see some major improvements in RDNA 3 though.

That’s really a tiring false narrative that has been circulating since the reviews. Even in pure rasterization, the 3080 pulls ahead on average, even at lower resolutions, while also not choking at 4K.



and is better for VR



Then there's DLSS, NVENC, RT, RTX Voice... and AMD's reputation of taking 6 months to fix the 5700 XT via drivers.

For $50 less?

Ryan Reynolds Reaction GIF
 

Denton

Member
That was a fantastic video, particularly for showing the real difference in RT performance between the two architectures in milliseconds. That's something I hope all HW outlets will do when comparing RT performance, in addition to the standard framerate metric.
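The conversion is trivial too; a quick sketch (the fps numbers here are made up, just to show how the per-frame RT cost in milliseconds falls out of two framerate readings):

# Convert average fps with RT off/on into the per-frame cost of RT in ms.
# The fps values below are illustrative placeholders, not measured data.
def rt_cost_ms(fps_rt_off, fps_rt_on):
    frame_time_off = 1000.0 / fps_rt_off  # ms per frame without RT
    frame_time_on = 1000.0 / fps_rt_on    # ms per frame with RT
    return frame_time_on - frame_time_off

print(round(rt_cost_ms(80.0, 55.0), 1))   # -> 5.7 ms of the frame spent on RT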
 
While we're on the topic of Nvidia cards:

3060 Ti or 3070?

for 1080p, 1440p and occasionally 4K (if the game is really well optimized, like Doom Eternal. Fuck me, this game can run at 4K/60 on a goddamn toaster)
 

regawdless

Banned
Very good and informative video. So, huge advantages for Nvidia once a higher degree of complexity is added. But after a certain ceiling is hit and the cards' capabilities seem maxed out, the gap flattens by a good margin.

It highlights the severe limitations of AMD cards. Hope they'll improve their next cards by a lot.
 

llien

Member
It does say in the image it uses the CUs. The CUs may have "ray accelerators", but it still means that in the overall scene, RT performance is cannibalized from shader performance by having CUs do raytracing instead of shader work.
This is "Zen3 is RISC" level of BS.

Why is it BS? It is written in your AMD slide that "traversal of the BVH and shading of ray results is handled by shader code running on the Compute Units". Am I wrong?
Because I doubt you, or most users hyping the shit out of NV tech, understand what it means.
Shading of ray results is handled by shader code in both worlds to begin with.
RT intersection is the most time-consuming operation; traversal of the structures is not.
And as for the traversal: it needs a tiny bit of compute power (a handful of shaders would be enough, and for reference, the 6800 XT has 4608 of them), and shifting traversal to specialized HW severely limits the structure types you can use while RT-ing.
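If anyone wonders what "traversal handled by shader code" roughly looks like, here's a toy CPU sketch (purely illustrative, nothing like AMD's actual implementation): the walk itself is just a stack plus a few cheap box tests per node, while the heavy lifting is the ray/triangle intersection at the leaves.

import math
from dataclasses import dataclass

# Toy BVH walk (illustrative only). Traversal = stack + cheap slab tests;
# the expensive per-ray work is the triangle intersection at the leaves.
@dataclass
class Node:
    lo: tuple            # AABB min corner
    hi: tuple            # AABB max corner
    tris: list = None    # leaf: list of triangles (v0, v1, v2)
    left: "Node" = None
    right: "Node" = None

def hit_box(orig, inv_dir, lo, hi):
    # Slab test: a handful of multiplies and comparisons per node.
    tmin, tmax = 0.0, math.inf
    for a in range(3):
        t1 = (lo[a] - orig[a]) * inv_dir[a]
        t2 = (hi[a] - orig[a]) * inv_dir[a]
        tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
    return tmin <= tmax

def hit_triangle(orig, d, v0, v1, v2):
    # Moeller-Trumbore ray/triangle test: the comparatively costly part.
    e1 = [v1[i] - v0[i] for i in range(3)]
    e2 = [v2[i] - v0[i] for i in range(3)]
    p = [d[1]*e2[2]-d[2]*e2[1], d[2]*e2[0]-d[0]*e2[2], d[0]*e2[1]-d[1]*e2[0]]
    det = sum(e1[i]*p[i] for i in range(3))
    if abs(det) < 1e-8:
        return None
    s = [orig[i] - v0[i] for i in range(3)]
    u = sum(s[i]*p[i] for i in range(3)) / det
    q = [s[1]*e1[2]-s[2]*e1[1], s[2]*e1[0]-s[0]*e1[2], s[0]*e1[1]-s[1]*e1[0]]
    v = sum(d[i]*q[i] for i in range(3)) / det
    t = sum(e2[i]*q[i] for i in range(3)) / det
    return t if (u >= 0 and v >= 0 and u + v <= 1 and t > 0) else None

def traverse(orig, d, root):
    inv_dir = tuple(1.0 / c if c != 0 else math.inf for c in d)
    best, stack = None, [root]
    while stack:
        node = stack.pop()
        if not hit_box(orig, inv_dir, node.lo, node.hi):
            continue
        if node.tris is not None:                       # leaf node
            for v0, v1, v2 in node.tris:
                t = hit_triangle(orig, d, v0, v1, v2)
                if t is not None and (best is None or t < best):
                    best = t
        else:                                           # inner node
            stack.extend((node.left, node.right))
    return best

# One triangle in a single-node "BVH", one ray fired straight at it:
leaf = Node(lo=(-1, -1, 4), hi=(1, 1, 6),
            tris=[((-1, -1, 5), (1, -1, 5), (0, 1, 5))])
print(traverse((0, 0, 0), (0, 0, 1), leaf))   # -> 5.0 (hit distance)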

E.g. Lumen, demoed here:



is using structures that NVidia cannot work with, but AMD with its approach could.


And ultimately, yet another full-of-shit faux "deep dive" that is in fact a hidden ad for some company's tech has this buried somewhere in there:

Typically, in any RT scenario, there are four steps. To begin with, the scene is prepared on the GPU, filled with all of the objects that can potentially affect ray tracing. In the second step, rays are shot out into that scene, traversing it and being tested to see if they hit objects. Then there's the next step, where the results from step two are shaded - like the colour of a reflection or whether a pixel is in or out of shadow. The final step is denoising. You see, the GPU can't send out unlimited amounts of rays to be traced - only a finite amount can be traced, so the end result looks quite noisy. Denoising smooths out the image, producing the final effect.

So, there are numerous factors at play in dealing with RT performance. Of the four steps, only the second one is hardware accelerated.



Yay, look at that. It's not RT perf at all, but shading and denoising, neither of which is "RT" or has anything to do with "RT cores". Oh, and we are testing games like Quake RTX filled with NV-specific code for doing that non-RT stuff (and there is a lot of it, if you're interested in checking it out).
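For what it's worth, here's how those four steps hang together in a toy Monte Carlo example (purely illustrative, not real DXR code) - only the trace step maps to the "RT core" work, the shading and the denoise are ordinary compute:

import random

# Toy illustration of the four steps quoted above (nothing to do with real DXR).
# With only a few rays per pixel the raw result is noisy; averaging (a crude
# stand-in for a denoiser) recovers the expected value.
random.seed(1)

def prepare_scene():
    # Step 1: the "scene" is just the fraction of the hemisphere that's occluded.
    return {"occluded_fraction": 0.5}

def trace_shadow_rays(scene, n_rays):
    # Step 2: each ray either hits the occluder or escapes (True = blocked).
    return [random.random() < scene["occluded_fraction"] for _ in range(n_rays)]

def shade(hits):
    # Step 3: pixel brightness = share of rays that escaped the occluder.
    return sum(0.0 if hit else 1.0 for hit in hits) / len(hits)

def denoise(samples):
    # Step 4: crude "denoiser" - just average a neighbourhood of noisy samples.
    return sum(samples) / len(samples)

scene = prepare_scene()
noisy = [shade(trace_shadow_rays(scene, n_rays=4)) for _ in range(16)]
print(noisy[:4])       # individual 4-ray estimates jump around
print(denoise(noisy))  # the averaged result lands close to the expected 0.5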


Oh, and look at that:

PlayStation 5's Spider-Man: Miles Morales demonstrates that Radeon ray tracing can produce some impressive results on more challenging effects - and that's using a GPU that's significantly less powerful than the 6800 XT.


If anything, AMD's Infinity Cache, which already allows it to use much slower VRAM, among other things, should give AMD an edge in RT activities.
Which might or might not matter, as RT in its DXR form might simply evaporate.

AMD manages to hide from us for 2 months
Welp, NV manages to hide effects unseen before DXR for more than 2 years, so I'd give AMD some time too.
 

alucard0712_rus

Gold Member
With 10GB? Nope.
Why not?
First of all, devs make games for certain GPU and memory budgets.
The 3080, even with 10GB, will be very, very good for years; there are no problems facing the non-Ti 30 series, because those cards are good and popular.

Also, the Ti models are coming with twice the RAM.
AMD, on the other hand, has wasted its additional VRAM capacity - by the time games require more than 8-10GB, the RX 6000 series will be obsolete anyway.
 

Md Ray

Member
While we're on the topic of Nvidia cards:

3060 Ti or 3070?

for 1080p, 1440p and occasionally 4K (if the game is really well optimized, like Doom Eternal. Fuck me, this game can run at 4K/60 on a goddamn toaster)
If your aim is to get 2x the perf (e.g. 30fps -> 60fps) of the next-gen base console (i.e. Series S), then I suspect even a 3060 Ti would be enough. RTX 3070 will allow you to enable some higher quality non-RT effects/increase draw distance on top while also delivering 2x the perf.

Heck, even a 3060 Ti should be able to do that, but with the 3070 you get an additional 10-15% more perf headroom over the 3060 Ti, which will come in handy for keeping your target frame-rate consistent while the graphics are pushed harder. I'm talking about pure rasterization only. When you take RT and DLSS into consideration you're going to be on a whole other level. So yeah, I'd recommend the 3070.
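To put that 10-15% headroom into frame-time terms (rough numbers only, assuming a 60fps target):

# Rough numbers only: what 10-15% extra GPU throughput buys at a 60fps target.
target_ms = 1000.0 / 60.0                  # 16.7 ms per-frame budget
for extra in (0.10, 0.15):
    new_ms = target_ms / (1.0 + extra)     # same workload on the faster card
    print(f"+{extra:.0%}: {new_ms:.1f} ms/frame, {target_ms - new_ms:.1f} ms of headroom")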

My only gripe with these cards is the limited amount of VRAM NVIDIA has included. I'd have preferred a minimum of 10-11GB because I feel like 8GB is right on the edge of getting maxed out very quickly as soon as next-gen games arrive. I think 8GB is already not enough for DOOM Eternal's top-end textures.

Even if 3070 is capable of running games at 4K, in the future it's going to be held back by its VRAM capacity. Kinda like the Radeon HD 7870 card was severely limited by its 2GB VRAM when the silicon itself was essentially a fully enabled PS4 GPU with 75% fewer ACEs.
 

regawdless

Banned
I think the real raytracing test will be Cyberpunk once the AMD patch hits. With reflections and GI lighting, it'll make the cards sweat.

But I'm curious what that'll even look like, because it's so damn demanding that even Nvidia cards need DLSS in that game. Without an alternative from AMD, their cards will have a very rough time.
 
How long until the dreaded time when 10 gigs will not be enough arrives? We're almost 4 months into Ampere's life. No issues so far. Nothing on the horizon this year that should pose any trouble. When should we expect those troubles to come?

Usually AMD cards start to show their mythical long-term advantages somewhere around the time when you can buy a <$200 low-power GPU with the processing power of your 2-generations-old flagship :D
 

Ascend

Member
In 10 years we likely will all be buying cards with RT, and comparing RT performance at that point is warranted.
Right now, basing your graphics card buying decision primarily on RT performance is like basing your current car purchase primarily on how good its self-driving / autonomous driving capability is.
 

littlecat

Neo Member
How long until the dreaded time when 10 gigs will not be enough arrives? We're almost 4 months into Ampere's life. No issues so far. Nothing on the horizon this year that should pose any trouble. When should we expect those troubles to come?
They also overlooked the fact that the 3080 has huge bandwidth at 760GB/s and RTX IO support. In fact, I think it will age better than AMD cards.
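That 760GB/s figure is just the memory configuration multiplied out; for comparison, the 6800 XT's raw number before Infinity Cache comes into play:

# Raw VRAM bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps).
def bandwidth_gb_s(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gb_s(320, 19.0))   # RTX 3080, GDDR6X -> 760.0 GB/s
print(bandwidth_gb_s(256, 16.0))   # RX 6800 XT, GDDR6 -> 512.0 GB/s (before Infinity Cache)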
 

llien

Member
They also overlooked the fact that the 3080 has huge bandwidth at 760GB/s and RTX IO support.
Oh boy, see, is it just a coincidence that most green tech is surrounded by smoke and mirrors?

The DirectStorage API (you can call it RTX IO, DLSS BigBoobs, or Susan, if it makes you feel better) is part of DX12 Ultimate, supported by RDNA2.
Not even mentioning the "direct from SSD" hype coming from the PS5 world, which, last time I checked, was powered by an AMD APU.
 

Allandor

Member
They also overlooked the fact that the 3080 has huge bandwidth at 760GB/s and RTX IO support. In fact, I think it will age better than AMD cards.
Well... I don't think AMD needs that much bandwidth. Many bandwidth-intensive things are caught by the big cache. It will be really interesting to see in future benchmarks, though. But it's like with any GPU generation: don't buy GPUs for the future, buy them for the games you want to play right now.

I really have no doubt that by the time games demand the full 16GB of memory, the card will no longer perform that well and you'll have to reduce details (the same applies to Nvidia cards).

Also, currently you can save a lot of memory by reducing texture resolution a bit without losing much quality. High vs. ultra texture settings are normally not really distinguishable except in screenshots. And the more new games get optimized for SSDs, the less extra memory is needed. And the more the data in GPU memory fluctuates, the less bandwidth is gained through the big cache (though it's still great for calculations or e.g. render targets).
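As a ballpark of how much a single texture step saves (assuming BC7 compression at 1 byte per texel and a full mip chain - my assumptions, not anything from the video):

# Ballpark texture memory: assume BC7 (1 byte per texel) plus roughly a third
# extra for the full mip chain. Halving the resolution cuts about 75%.
def texture_mib(side, bytes_per_texel=1.0):
    return side * side * bytes_per_texel * (4 / 3) / (1024 ** 2)

print(f"{texture_mib(4096):.1f} MiB")   # "ultra" 4K texture -> ~21.3 MiB
print(f"{texture_mib(2048):.1f} MiB")   # "high" 2K texture  -> ~5.3 MiB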
 

littlecat

Neo Member
Oh boy, see, is it just a coincidence that most green tech is surrounded by smoke and mirrors?

The DirectStorage API (you can call it RTX IO, DLSS BigBoobs, or Susan, if it makes you feel better) is part of DX12 Ultimate, supported by RDNA2.
Not even mentioning the "direct from SSD" hype coming from the PS5 world, which, last time I checked, was powered by an AMD APU.
What does AMD call "resizable BAR" again? Being a corporation, they can call it whatever they want as long as they implement it.
DLSS is another story and it requires additional transistors and proprietary IP.
 

Ascend

Member
What does AMD call "resizable BAR" again? Being a corporation, they can call it whatever they want as long as they implement it.
DLSS is another story and it requires additional transistors and proprietary IP.
It doesn't require that. There's DirectML, for example.
It's just the way nVidia chose to implement it, like nVidia does with all their tech. Look at PhysX, G-Sync and GameWorks.
 

regawdless

Banned
In 10 years we likely will all be buying cards with RT, and comparing RT performance at that point is warranted.
Right now, basing your graphics card buying decision primarily on RT performance is like basing your current car purchase primarily on how good its self-driving / autonomous driving capability is.

But buying an RTX card today is feasible and makes sense. The current gen with the 3080 is the target HW for raytracing; devs scale and adjust the raytracing to work reasonably well on these cards. 10 years in the future is an absurdly long time in GPU terms.

Today raytracing can achieve amazing results, even if you look at the comparatively weak next-gen consoles. People are jizzing over the reflections in Miles Morales on PS5, and it really does add a lot and looks great. So even with major shortcuts, RT can make a big difference.

Ignoring that feature seems pretty ignorant, especially because it's the main differentiator between these two cards. Sure, you can also bring up the VRAM argument: slower but significantly more for AMD, less but significantly faster for Nvidia. It will maybe be a factor in 3+ years, while raytracing is already a factor in some key games and will only gain importance.
 

littlecat

Neo Member
It doesn't require that. There's DirectML, for example.
It's just the way nVidia chose to implement it, like nVidia does with all their tech. Look at PhysX, G-Sync and GameWorks.
AMD's Navi cards just lack the hardware for a DLSS-like feature. If AMD can pull this off, I will applaud them. Right now, I have put my money into a Ryzen 2 CPU and an RTX 3080, and I'm satisfied with the purchase.
 

Ascend

Member
AMD's Navi cards just lack the hardware for a DLSS-like feature. If AMD can pull this off, I will applaud them. Right now, I have put my money into a Ryzen 2 CPU and an RTX 3080, and I'm satisfied with the purchase.
Maybe. Maybe not. In hindsight, one could also have argued that monitors lacked the hardware for variable refresh rate like G-Sync, and look where we are now. Freesync pretty much pushed G-Sync out of the market, and now HDMI 2.1 is making both irrelevant.

Your purchase is fine, and most people would be satisfied with it. Provided you got the graphics card at a reasonable price. Right now I'm not touching the graphics card market with a 10 foot pole. I'll let things die down a bit.

But buying an RTX card today is feasible and makes sense. The current gen with the 3080 is the target HW for raytracing; devs scale and adjust the raytracing to work reasonably well on these cards. 10 years in the future is an absurdly long time in GPU terms.
I am not interested in paying over $700 to lower graphics settings for playable framerates.

Ignoring that feature seems pretty ignorant, especially because it's the main differentiator between these two cards. Sure, you can also bring up the VRAM argument: slower but significantly more for AMD, less but significantly faster for Nvidia. It will maybe be a factor in 3+ years, while raytracing is already a factor in some key games and will only gain importance.
I never said to ignore it, but to not put a high priority on it.
 

Rbk_3

Member
Am I the only one that couldn't give 2 shits about Ray Tracing? That will never be worth the performance hit to me
 

Ascend

Member
Am I the only one that couldn't give 2 shits about Ray Tracing? That will never be worth the performance hit to me
Well, I wouldn't say never. There was a point when MSAA, AO and tessellation were each too demanding. Over time RT will become more playable. But with tech like this, it's generally not a good idea to be an early adopter.

DLSS is a stronger argument for going RTX over AMD cards than RT is, in my view. DLSS works best for mid-range GPUs running 4K, or if you want high-framerate gaming. But the idea was to make DLSS work for all games without developer intervention. Until that's the case, I really would not base my purchase primarily on that either. It is indeed being adopted. But ultimately every nVidia tech seems to end up either dead or obsolete, once a non-proprietary solution that works for everyone is implemented.
 

Rbk_3

Member
Well, I wouldn't say never. There was a point when MSAA, AO and tessellation were each too demanding. Over time RT will become more playable. But with tech like this, it's generally not a good idea to be an early adopter.

DLSS is a stronger argument for going RTX over AMD cards than RT is, in my view. DLSS works best for mid-range GPUs running 4K, or if you want high-framerate gaming. But the idea was to make DLSS work for all games without developer intervention. Until that's the case, I really would not base my purchase primarily on that either. It is indeed being adopted. But ultimately every nVidia tech seems to end up either dead or obsolete, once a non-proprietary solution that works for everyone is implemented.

I also think DLSS is pretty overrated. In Cold War, even on Quality, it isn't great. You can definitely tell it is not native resolution.
 