
Is DLSS Really "Better Than Native"? - 24 Game Comparison, DLSS 2 vs FSR 2 vs Native

Zathalus

Member
Just as I thought, you have nothing on CDNA3, so you can't really tell what the difference is.
What is the latest available CDNA architecture again? That's right, it is CDNA2. There is no point in debating CDNA3 when no product that contains it has been released or even has a release date. Because then you might as well mention Nvidia Blackwell, which is releasing next year.

BTW, AMD is claiming an 8x increase in AI performance... which would still put them behind the H100 (which came out over 6 months ago) in AI workloads, and that is including the CPU as well.
 

winjer

Gold Member
What is the latest available CDNA architecture again? That's right, it is CDNA2. There is no point in debating CDNA3 when no product that contains it has been released or even has a release date. Because then you might as well mention Nvidia Blackwell, which is releasing next year.

BTW, AMD is claiming an 8x increase in AI performance... which would still put them behind the H100 (which came out over 6 months ago) in AI workloads, and that is including the CPU as well.

Still, you are comparing a 6 month old product with a 2 year old one.
Now, I'm not saying AMD is going to beat Nvidia. But let's compare products from the same generation.
At least you are no longer comparing with a GPU that has no tensor units.
 

Loxus

Member
What do you mean not utilized? RDNA3 and CDNA have been out for a while. That being said, the specifications for AI acceleration for all the cards have been released, and Nvidia's Tensor cores are simply on another level. The very few AI benchmarks you can get for the 7900 XTX support this.
This thread is about DLSS vs FSR.
Does FSR utilize dedicated hardware?
Currently, FSR gives good, if not the best, results, and that's without any use of dedicated hardware. So with the use of dedicated hardware, FSR should be more competitive with DLSS.

This article talks about AMD's plan for AI acceleration within RDNA 3. It shouldn't be surprising if FSR 3 utilizes this AI acceleration.
AMD won't have users 'paying for features they never use' when it comes to AI in GPUs
However, the new approach with RDNA 3 has been to implement AI, with the new AI Matrix Accelerator block inside the Navi 31 GPU, but only where it's really needed.

"We are focused on including the specs that users want and need to give them enjoyment in consumer GPUs. Otherwise, users are paying for features they never use."

"Even if AI is used for image processing, AI should be in charge of more advanced processing," says Wang. The plan is to ensure any AI tech AMD brings to the table isn't limited to image processing.
 

Zathalus

Member
Still, you are comparing a 6 month old product with a 2 year old one.
Now, I'm not saying AMD is going to beat Nvidia. But let's compare products from the same generation.
At least you are no longer comparing with a GPU that has no tensor units.
7 months vs 17 months actually. Only a 10 month difference between them, so not even a year. I can also only compare what is actually out at the moment, with CDNA3 likely still over half a year away. Lastly, CDNA2 does have Matrix accelerators.
 

winjer

Gold Member
7 months vs 17 months actually. Only a 10 month difference between them, so not even a year. I can also only compare what is actually out at the moment, with CDNA3 likely still over half a year away. Lastly, CDNA2 does have Matrix accelerators.

Of course CDNA2 has tensor units. That's why I brought up CDNA when you were making comparisons to an RDNA card.
CDNA3 should be released mid-summer, like CDNA2 was. So it's only a few months away.
 

Zathalus

Member
This thread is about DLSS vs FSR.
Does FSR utilize dedicated hardware?
Currently, FSR gives good, if not the best, results, and that's without any use of dedicated hardware. So with the use of dedicated hardware, FSR should be more competitive with DLSS.

This article talks about AMD's plan for AI acceleration within RDNA 3. It shouldn't be surprising if FSR 3 utilizes this AI acceleration.
AMD won't have users 'paying for features they never use' when it comes to AI in GPUs
However, the new approach with RDNA 3 has been to implement AI, with the new AI Matrix Accelerator block inside the Navi 31 GPU, but only where it's really needed.

"We are focused on including the specs that users want and need to give them enjoyment in consumer GPUs. Otherwise, users are paying for features they never use."

"Even if AI is used for image processing, AI should be in charge of more advanced processing," says Wang. The plan is to ensure any AI tech AMD brings to the table isn't limited to image processing.
I know FSR doesn't use dedicated hardware; I never claimed otherwise. I was just pointing out that even the AI acceleration of RDNA3 is miles behind the capability of Nvidia's Tensor cores. Who knows if it is fast enough for something like DLSS or XeSS. If it were, you would think AMD would be able to match something Intel, of all people, released over a year ago.
 
Yes, FSR2 still lags in Performance mode. But that mode is only usable at 4K, be it with DLSS2 or FSR2.
I never used DLSS2 below Quality mode at 1440p. Even with Balanced I could immediately tell the drop in quality. Performance mode is even worse.
In static shots it kind of looks OK. But in movement, the drop in quality is immediately apparent.

To be honest, DLSS Performance mode ain't too hot even at 4K, but then I have my PC connected up to a 4K 48" display that I sit relatively close to. Quality is pretty much there these days, so I generally always use it even if it's just as AA. Balanced looks noticeably softer at times but is worth the trade-off on something like Darktide to get RTGI at 60fps, so I don't mind it. Performance is absolute naff though. I had to use it recently for the Cyberpunk Overdrive mode and it was fuzzy, with constant shimmering and noticeable ghosting on brake lights etc. I guess display size can be a factor in it: what makes Performance mode unusable to me may not be noticeable to some folk championing it here if they're using a smaller display.
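For context on why Performance mode falls apart faster at lower output resolutions: each DLSS 2 mode renders internally at a fixed fraction of the output resolution per axis (roughly 66.7% for Quality, 58% for Balanced, and 50% for Performance, going by Nvidia's published ratios), so a quick sketch of what the GPU is actually drawing:

```python
# Rough sketch of DLSS 2 internal render resolutions per quality mode.
# Per-axis scale factors: Quality ~0.667, Balanced ~0.58, Performance 0.5.
SCALES = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5}

def internal_res(width, height, mode):
    """Return the approximate internal render resolution for a DLSS mode."""
    s = SCALES[mode]
    return round(width * s), round(height * s)

for out_w, out_h in [(3840, 2160), (2560, 1440)]:
    for mode in SCALES:
        print(f"{out_w}x{out_h} {mode}: {internal_res(out_w, out_h, mode)}")
```

At 1440p output, Performance mode is reconstructing from a 720p image, while at 4K output the same mode starts from 1080p, which lines up with it holding together better on a 4K display.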
 

Clear

CliffyB's Cock Holster
Don't really understand how any sort of upscaled image can be "better" than native. At best it's the same as native with some additional sharpening or other post-effect applied.
 

01011001

Banned
Don't really understand how any sort of upscaled image can be "better" than native. At best it's the same as native with some additional sharpening or other post-effect applied.

TAA often sucks... that's why.

if you play the native + TAA version of Death Stranding you'll think you set it to the wrong resolution, that's how bad it looks.

meanwhile with DLSS it looks basically perfect.
even DLSS performance mode will usually look better than native in that game.
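For anyone wondering why TAA specifically gets the blame: TAA blends each new frame into an exponentially weighted history buffer, and when the reprojected history doesn't line up perfectly with the current frame (i.e. in motion), fine detail gets averaged away. A toy 1-D illustration of that accumulation, not any engine's actual implementation:

```python
# Toy 1-D sketch of why TAA softens detail in motion: each frame is blended
# into an exponentially weighted history. When the scene moves and the history
# isn't reprojected perfectly, a sharp edge gets smeared across pixels.
ALPHA = 0.1  # weight of the current frame; the history keeps the other 90%

def taa_accumulate(frames, alpha=ALPHA):
    """Blend a sequence of 1-D 'frames' the way a simple TAA history would."""
    history = frames[0]
    for current in frames[1:]:
        # history = lerp(history, current, alpha), per pixel
        history = [(1 - alpha) * h + alpha * c for h, c in zip(history, current)]
    return history

# A full-contrast edge that shifts one pixel per frame (camera motion):
frames = [
    [0, 0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
]
print(taa_accumulate(frames))  # no pixel keeps full contrast; the edge is smeared
```

DLSS does the same kind of temporal accumulation, but uses a trained network to decide per pixel how much history to keep, which is broadly why it can dodge the worst of this smearing while still reusing samples across frames.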
 

Clear

CliffyB's Cock Holster
TAA often sucks... that's why.

if you play the native + TAA version of Death Stranding you'll think you set it to the wrong resolution, that's how bad it looks.

meanwhile with DLSS it looks basically perfect.
even DLSS performance mode will usually look better than native in that game.

Fair point. I was just saying that it's kind of strange that an upscaled lower-res version can be claimed to be perceptually "better" (in inverted commas because it's subjective what each of us finds most pleasing) than the actual higher resolution in a representative sense.
 

iQuasarLV

Member
Yes, FSR2 still lags in Performance mode. But that mode is only usable at 4K, be it with DLSS2 or FSR2.
I never used DLSS2 below Quality mode at 1440p. Even with Balanced I could immediately tell the drop in quality. Performance mode is even worse.
In static shots it kind of looks OK. But in movement, the drop in quality is immediately apparent.
Yea man, it looks like a bad game-streaming connection. When you don't move it looks okay, but once you do it starts to get pixelated.
 