
4k upscaling in Death Stranding: DLSS 2.0 (NV only) vs FidelityFX (cross-plat)

Rikkori

Member
DLSS needs the latest game ready drivers to look good.

Try DLSS balanced, it should exceed TAA quality.

DLSS off

WatchDogsLegion_no-DLSS_WQHD_2-pcgh.png

DLSS Balanced

WatchDogsLegion_DLSS-B_WQHD_2-pcgh.png
I'm pointing out RT quality in particular, go find that, and keep in mind DLSS also applies sharpening, so for a more accurate comparison you have to add CAS to native.
 

rofif

Can’t Git Gud
Nvidia's marketing = undefeatable. That's why I don't bother with these discussions so much. It's like trying to tell PS fans the SSD isn't going to substitute for VRAM.

In reality DLSS is nowhere near native even with crappy TAA. Funny how you never hear that DLSS nukes RT quality (it happened even in W:YB) or about all the various artifacts it suffers from, in particular as it relates to particles. Magic AI upscaling my ass.

UeykcQW.jpg
But DLSS is 1440p or 1080p or whatever depending on the quality setting on a 4K monitor. It's not running native 4K, so expecting the same is stupid.
It still manages to look almost as sharp but stabilizes the image greatly and adds a fantastic AA solution... all at 1440p performance at 4K.
 

Dampf

Member
I'm pointing out RT quality in particular, go find that, and keep in mind DLSS also applies sharpening, so for a more accurate comparison you have to add CAS to native.
The fine lines on the London Eye cannot be reconstructed with sharpening. That is the result of the 16K DLSS training feed.

RT quality doesn't have much to do with DLSS though. In this case, the RT render resolution is just tied to the DLSS input resolution, similar to how it is in Fortnite, for better performance. You could just as well run the RT reflections at 100% screen percentage with DLSS, but the developers decided not to, to further optimize performance with DLSS. Running RT effects at 100% is extremely cost intensive and not worth it, as you are not likely to notice lower-res reflections in gameplay. I'm not saying the optimization in these two games is good, however; it still runs like crap, but without these changes you'd surely not have framerates above 30.
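To make the pixel math concrete, here is a small editor's sketch (not anything from the games or Nvidia) of what tying RT effects to the DLSS input resolution does to the ray budget; the per-mode scale factors are the commonly cited DLSS 2.0 presets and should be treated as assumptions:

```python
# Editor's sketch, not Nvidia's implementation: if RT effects are traced at the
# DLSS *input* resolution instead of the output resolution, the ray budget
# shrinks with the square of the per-axis scale factor. The scale factors below
# are the commonly cited DLSS 2.0 presets (treated here as assumptions).

DLSS_SCALE = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 0.5,
    "Ultra Performance": 1 / 3,
}

def rt_budget(out_w, out_h, mode, rays_per_pixel=1):
    """Pixels (and rays, at a fixed rpp) traced when RT is tied to the DLSS input res."""
    s = DLSS_SCALE[mode]
    in_w, in_h = round(out_w * s), round(out_h * s)
    return in_w, in_h, in_w * in_h * rays_per_pixel

for mode in DLSS_SCALE:
    w, h, rays = rt_budget(3840, 2160, mode)
    share = rays / (3840 * 2160)
    print(f"{mode:17s} -> RT traced at {w}x{h}, about {share:.0%} of the native-res ray count")
```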


What's so funny? We have solid evidence for this being the case, as F1 2020 DLSS was blurry with outdated drivers.
 

llien

Member
What's so funny?
That now there is "oh, this DLSS 2.0 frame looks like shit... oh, that's gotta be bad drivers" and even "well, it's the latest drivers... but wait, NEW, much better drivers will come and then!"

I guess one day there might even be 3080 supply beating demand at $699... as most users will go with 6800/6800XT... :messenger_tears_of_joy:
 

Dampf

Member
That now there is "oh, this DLSS 2.0 frame looks like shit... oh, that's gotta be bad drivers" and even "well, it's the latest drivers... but wait, NEW, much better drivers will come and then!"

I guess one day there might even be 3080 supply beating demand at $699... as most users will go with 6800/6800XT... :messenger_tears_of_joy:
It's not an excuse, but these game ready drivers are really important for the best DLSS quality, as they deliver the newest nvdlss.sys file for the game.

Anyway, as I said, RT reflections do not have to run at the DLSS input resolution; that's a developer's choice. I do wonder why the RT reflections are not reconstructed like the rest of the image, however; maybe there's some sort of technical limitation.
 

llien

Member
the newest nvdlss.sys file for the game.
See how you are lost in FUD.
If you'd check the OP, you'd know that per game NN training is a feature of DLSS 1.0, no longer present in DLSS 2.0.

Anyway, as I said RT reflections do not have to run at the input DLSS resolution
"RT reflections" have a very abstract concept of "resolution":
You cast rays, you get "something".
With the number of rays cards can cast today, what you get is a noisy mess:

85KG1Xo.png


r4LJppH.png


Frame above is from Quake RT by NV (an open source thingy).
More on this here:
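As a toy illustration of that "noisy mess" point (an editor's made-up scene, not a real renderer), the sketch below shows how the pixel-to-pixel noise of a Monte Carlo estimate only falls off with the square root of the rays-per-pixel count:

```python
# Editor's toy illustration, not a real renderer: with only a few rays per
# pixel, a Monte Carlo estimate of reflected light has large pixel-to-pixel
# variance, which is why RT output needs heavy denoising.
import random
import statistics

def shade_pixel(rays_per_pixel, rng):
    # Hypothetical material: a ray hits a bright light 10% of the time (value 10.0),
    # otherwise dark geometry (value 0.0). The true average brightness is 1.0.
    total = sum(10.0 if rng.random() < 0.1 else 0.0 for _ in range(rays_per_pixel))
    return total / rays_per_pixel

rng = random.Random(0)
for rpp in (1, 4, 64, 1024):
    pixels = [shade_pixel(rpp, rng) for _ in range(2000)]
    noise = statistics.pstdev(pixels)  # spread around the true value of 1.0
    print(f"{rpp:5d} rays/pixel -> noise ~ {noise:.2f} (falls roughly as 1/sqrt(rays))")
```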
 

MadAnon

Member
CAS

l1J2Cv1.png


DLSS 2:

SnpwykJ.png


To me, the one below looks like a blurry version of the one above, with loss of detail.

But such evidence is in the eye of the beholder.
That closest bush looks more blurry with DLSS but the rest of the image looks blurrier with CAS. I mean pay attention to that triangular rock and the bush behind it. Especially those dark leaves hanging over. Those are clearly way blurrier and pixelated with CAS. CAS looks like it has that one closest bush more in focus while the rest of the image is out of focus.
 

GymWolf

Member
Cough:



Not sure what that is supposed to mean, but I honestly don't see much of a difference, can't even say which of the two I like more.


If AMD called RDNA "GCN <insert number>", people would perceive it as worse than it is.
To me that looked like a PR move.

They might release a new version with some "cool" buzzwords thrown around; I doubt it would change much.
Highly subjective picture peeping.
Not to sound rude but...

tenor.gif

tenor.gif

stare.gif


Are you for real dude??! Even Stevie Wonder can see the staggering difference...
 

llien

Member
That closest bush looks more blurry with DLSS but the rest of the image looks blurrier with CAS
That is true and I've pointed that out myself in this very thread.

However, getting blurry is what DLSS tends to do on a number of occasions, which, again, is quite expected from TAA-based processing.

The most glaring is the entire screen being washed out in the Ars Technica example with a quickly moving camera.
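For what it's worth, here is a minimal editor's sketch of the TAA-style history blend being described (assumed blend weight, no real motion-vector reprojection or history clamping); it shows how stale history smears into the output after a fast move, which is the washout/ghosting effect in question:

```python
# Editor's minimal sketch of an exponential TAA-style history blend (assumed
# blend weight, no motion-vector reprojection or history clamping): when the
# image moves faster than the history can adapt, old samples linger and smear.
import numpy as np

ALPHA = 0.1  # weight of the *current* frame; typical TAA blend weights are small

def taa_blend(history, current):
    """Blend the new frame into the accumulated history buffer."""
    return (1.0 - ALPHA) * history + ALPHA * current

# Hypothetical 1-D "scanline": a bright band that jumps 8 pixels in one frame.
frame_a = np.zeros(32); frame_a[8:16] = 1.0
frame_b = np.zeros(32); frame_b[16:24] = 1.0   # the same band after a fast move

history = frame_a.copy()
for _ in range(3):                 # a few frames after the move
    history = taa_blend(history, frame_b)

print(np.round(history, 2))        # leftover energy at pixels 8-15 = ghost/blur trail
```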

Not to sound rude but...
Oh, not to sound rude but, please print your post on a high quality paper, roll it up and carefully shove it up your ass, and let it shine there, where it truly belongs, being so freaking useful.
 

thelastword

Banned
But DLSS is 1440p or 1080p or whatever depending on the quality setting on a 4K monitor. It's not running native 4K, so expecting the same is stupid.
It still manages to look almost as sharp but stabilizes the image greatly and adds a fantastic AA solution... all at 1440p performance at 4K.
I guess you weren't around when numerous people and even DF suggested that DLSS looks as good as or even better than native, when I told these folks that from what I'm seeing, DLSS loses a lot of detail in distant geometry and background detail. DLSS 2.0 is simply 1440p upscaled to 4K via AI; they are using TAA now because of the aliasing issues that DLSS 1.0 had, but are also using sharpening to mitigate the blurriness that comes with TAA... You will see the compromises of DLSS 2.0 easily on background details and certain imagery and fine lines not properly approximated through the AI algo...

As I said, not enough comparisons, screens, etc. have been put side by side. For example, and this would be a good test: ask the biased folk trying to sell DLSS right now to compare a DLSS 2.0 image with lots of distant detail vs a native 4K image with no TAA. Then you will clearly see what I mean... But don't worry, the comparisons are coming. Super Resolution vs DLSS 2.0 will get its faceoff in December, and it's not going to be close as far as resolved background detail and higher frames go...
 

rofif

Can’t Git Gud
I guess you weren't around when numerous people and even DF suggested that DLSS looks as good as or even better than native, when I told these folks that from what I'm seeing, DLSS loses a lot of detail in distant geometry and background detail. DLSS 2.0 is simply 1440p upscaled to 4K via AI; they are using TAA now because of the aliasing issues that DLSS 1.0 had, but are also using sharpening to mitigate the blurriness that comes with TAA... You will see the compromises of DLSS 2.0 easily on background details and certain imagery and fine lines not properly approximated through the AI algo...

As I said, not enough comparisons, screens, etc. have been put side by side. For example, and this would be a good test: ask the biased folk trying to sell DLSS right now to compare a DLSS 2.0 image with lots of distant detail vs a native 4K image with no TAA. Then you will clearly see what I mean... But don't worry, the comparisons are coming. Super Resolution vs DLSS 2.0 will get its faceoff in December, and it's not going to be close as far as resolved background detail and higher frames go...
It does have some aspects that look better than pure native without any AA solution, like stabilizing shader shimmering and all kinds of edge crawling, but native is native.
 

GymWolf

Member
That is true and I've pointed that out myself in this very thread.

However, getting blurry is what DLSS tends to do on a number of occasions, which, again, is quite expected from TAA-based processing.

The most glaring is the entire screen being washed out in the Ars Technica example with a quickly moving camera.


Oh, not to sound rude but, please print your post on a high quality paper, roll it up and carefully shove it up your ass, and let it shine there, where it truly belongs, being so freaking useful.
Can I roll it up and smoke it instead of putting it in my ass?! Maybe while stoned I'm not gonna see a big difference in that comparison like you do 🕺
 
Nobody in this thread denied it being "beneficial".
The issue is FUD surrounding it, making it something it clearly is not.
Raytracing looks and reacts much better than baked lighting. DLSS can look just as good as, or sometimes even better than, native 4K while giving you much more performance, and it can even allow you to run raytracing while maintaining a playable framerate. It's definitely better than saying raytracing is pointless and DLSS sucks, as many have proven otherwise.
 

Armorian

Banned
Raytracing looks and reacts much better than baked lighting. DLSS can look just as good as, or sometimes even better than, native 4K while giving you much more performance, and it can even allow you to run raytracing while maintaining a playable framerate. It's definitely better than saying raytracing is pointless and DLSS sucks, as many have proven otherwise.

Almost 2x performance in Ghostrunner, and that's with DLSS QUALITY.
 

llien

Member
Raytracing looks and reacts much better than baked lighting
Yes. Yet baked lighting is not the only way lighting effects work, e.g. UE5.
RT struggles to impress in a number of cases, from Watch Dogs to WoW.

DLSS can look just as good, or sometimes even better than native 4K
Exhibits all the downsides of TAA: it tends to blur stuff, washes out fine detail in textures, and struggles with fast motion. It is not even remotely what it is claimed to be.
Lower resolution improving framerates being sold as some sort of breakthrough comes straight from braindead territory.

 
Yes. Yet baked lighting is not the only way lighting effects work, e.g. UE5.
RT struggles to impress in a number of cases, from Watch Dogs to WoW.


Exhibits all the downsides of TAA: it tends to blur stuff, washes out fine detail in textures, and struggles with fast motion. It is not even remotely what it is claimed to be.
Lower resolution improving framerates being sold as some sort of breakthrough comes straight from braindead territory.


Why can't you just admit you hate Nvidia to death? I've never seen someone so bitter/salty towards a company in all my life 😂. I bet if AMD had created DLSS, you would claim how amazing and crisp the image is, and how it's the best thing since sliced bread.


 

Dampf

Member
See how you are lost in FUD.
If you'd check the OP, you'd know that per game NN training is a feature of DLSS 1.0, no longer present in DLSS 2.0.


"RT reflections" have a very abstract concept of "resolution":
You cast rays, you get "something".
With the number of rays cards can cast today, what you get is a noisy mess:

85KG1Xo.png


r4LJppH.png


Frame above is from Quake RT by NV (an open source thingy).
More on this here:

While it is true that per-game training is not needed with DLSS 2.0, it is necessary to deploy the newest versions of DLSS in game ready drivers for the best image quality. And yes, in some cases per-game training can still be helpful. Just because the model is general now doesn't mean it can't be trained with game-specific footage for even better results.

You're confusing rays per pixel (rpp) with the render resolution of RT effects. For example, the reflections in Gran Turismo 7 run at 25% screen resolution, meaning 1080p at 4K. In Unreal Engine 4, you can adjust the render resolution via console commands.

The render resolution of RT reflections and other RT effects is usually determined by quality settings. Ultra can be 100% screen percentage, High 50%, Medium 25%, and so on. For that, the target resolution is used; DLSS doesn't have much to do with that.
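A small editor's sketch of that preset arithmetic (illustrative values following the "Ultra 100%, High 50%, Medium 25%" example above, not taken from any game's code), reading the percentage as a fraction of the pixel count so that 25% at 4K comes out to 1080p:

```python
# Editor's sketch of the screen-percentage arithmetic above (assumed preset
# table). The percentage is read as a fraction of the *pixel count*, so the
# per-axis scale is its square root; that way "25% at 4K" comes out to 1920x1080.
import math

TARGET = (3840, 2160)                                        # output/target resolution
RT_PRESETS = {"Ultra": 1.00, "High": 0.50, "Medium": 0.25}   # assumed values

def rt_effect_resolution(target, pixel_fraction):
    per_axis = math.sqrt(pixel_fraction)   # pixel-count fraction -> per-axis scale
    return round(target[0] * per_axis), round(target[1] * per_axis)

for preset, fraction in RT_PRESETS.items():
    w, h = rt_effect_resolution(TARGET, fraction)
    print(f"{preset:7s} RT effects rendered at {w}x{h}")
```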

In Watch Dogs and Fortnite it is different, however. There the RT effects are tied to the DLSS input resolution (like 1080p at 4K with DLSS Performance). This is why you perceive the RT effects to be of lower quality with DLSS. However, this is a developer decision and not the fault of DLSS. In Watch Dogs the quality settings don't change the render resolution much or at all, and this is why the performance impact of Medium raytracing is not much different than with Ultra settings.

Also, it seems strange to me that you always nitpick one specific image where DLSS is seemingly not doing a great job and repeat it all over this thread, completely ignoring the otherwise outstanding results of DLSS.
 

rofif

Can’t Git Gud
I finally booted Death Stranding on PC and holy crap, this thread is so full of shit with those saying DLSS looks bad.
The most important thing, hard to see in screenshots, is how it behaves in motion.
TAA creates a lot of motion blur in this game during motion. The grass looks bad, and distant objects and lines flicker and shimmer. DLSS stabilizes it so much, it looks like playing the game with a ground-truth render. No shimmer or jaggies ever, even in motion. Even grass patches are stable.
DF showed it nicely in their video. If there is any loss of sharpness, I can't see it though. The image stability is king. And yes, I play on a 4K monitor and I am looking for the detail.
The fact that it runs better than native 4K, looks better and is more stable... mind blowing, and people still complain!
 
I finally booted Death Stranding on PC and holy crap, this thread is so full of shit with those saying DLSS looks bad.
The most important thing, hard to see in screenshots, is how it behaves in motion.
TAA creates a lot of motion blur in this game during motion. The grass looks bad, and distant objects and lines flicker and shimmer. DLSS stabilizes it so much, it looks like playing the game with a ground-truth render. No shimmer or jaggies ever, even in motion. Even grass patches are stable.
DF showed it nicely in their video. If there is any loss of sharpness, I can't see it though. The image stability is king. And yes, I play on a 4K monitor and I am looking for the detail.
The fact that it runs better than native 4K, looks better and is more stable... mind blowing, and people still complain!
It's literally only the dumb fanboys doing that, while peddling that FidelityFX is superior (fucking lol 😂😂😂).
 

Rikkori

Member
Like we've said all along! All the "AI magic" was BULLSHIT! People just don't understand how shit most TAA implementations have been; that's why this suddenly seems so "magical". Then you go play Division 2 and see what proper TAA upsampling does, and then you realise "oh yeah, the AI part really was just horse shit" and how you don't need tensor cores to get these results. Or SotTR, where SMAA T2x is some real good shit. Or Kingdom Come Deliverance (CryEngine, bitches!). Like I said, expect this to be deprecated once next-gen actually hits and we see all the temporal reconstruction techniques used there get ported over to PC too.
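As a toy picture of the temporal reconstruction idea being referenced (an editor's sketch, not any engine's actual code), the snippet below scatters two jittered half-resolution "frames" into a 2x output grid and shows that together they cover every high-res pixel, which is the core trick behind TAA upsampling:

```python
# Editor's toy picture of temporal upsampling (TAAU / SMAA T2x style), not any
# engine's actual code: two half-resolution "frames", rendered with different
# sub-pixel jitter, together cover every pixel of a 2x output grid.
import numpy as np

HI, SCALE = 16, 2                                     # 16-pixel hi-res scanline, 2x upsample
ground_truth = np.sin(np.linspace(0, 3 * np.pi, HI))  # stand-in for the "real" image

accum = np.zeros(HI)
counts = np.zeros(HI)
for jitter in (0.0, 0.5):                             # alternate sub-pixel offsets per frame
    # "Render" a half-res frame: each low-res pixel samples a jittered hi-res position.
    positions = np.clip(np.round((np.arange(HI // SCALE) + jitter) * SCALE).astype(int), 0, HI - 1)
    lo_frame = ground_truth[positions]                # the low-res frame for this jitter
    accum[positions] += lo_frame                      # scatter into the hi-res history
    counts[positions] += 1

print("hi-res pixels covered by 2 jittered low-res frames:",
      int((counts > 0).sum()), "of", HI)
```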



 

Elog

Member
In reality these two methods produce quite similar results at the moment with various angles, objects and scenes being better/worse with one of them compared to the other. Not sure why this is so sensitive.
 

rofif

Can’t Git Gud
In reality these two methods produce quite similar results at the moment with various angles, objects and scenes being better/worse with one of them compared to the other. Not sure why this is so sensitive.
Motion. DLSS provides super stable motion. No shimmering or break up of grass etc
 

Elog

Member
Motion. DLSS provides super stable motion. No shimmering or break up of grass etc

My point is more that if the current implementations of both methods make technology people split hairs between them, they are essentially the same.

My takeaway is variability: both methods create fantastic results at times and less so at others, and it varies across titles, within titles, and sometimes even within the same scene depending on camera angle, etc.

That makes screenshots taken to show that one method is better than the other intellectual garbage. This requires very thorough analysis of the quality of the picture output across a gaming session.
 
My point is more that if the current implementations of both methods make technology people split hairs between them, they are essentially the same.

My takeaway is variability: both methods create fantastic results at times and less so at others, and it varies across titles, within titles, and sometimes even within the same scene depending on camera angle, etc.

That makes screenshots taken to show that one method is better than the other intellectual garbage. This requires very thorough analysis of the quality of the picture output across a gaming session.


But they don't produce similar results, not by a long shot. The actual reality of the situation is that both DLSS and ray tracing are best observed live, in motion, while playing. Not by AMD enthusiasts who think they cracked the code and the entire tech world is feeding bullshit when they talk about DLSS.
 
If DLSS were bullshit, "didn't AHCKTUALLLY use Tensor cores at all", and was no different from regular TAA, then AMD would already have a DLSS competitor. But they don't. They have a shitty sharpening filter. And consoles would have their own DLSS competitor too, but they don't. They have shitty checkerboard rendering.
 

Phase

Member
I hope some fun mods come out for Death Stranding. I played through it all on release but I would definitely jump back in with some added incentive.
 

MarlboroRed

Member
DLSS 2.0 is a great tool to have. I'm really impressed with the implementation in CP77. Visual quality takes a noticeable hit, but boy, the performance trade-off is well worth it.
 

Dampf

Member
DLSS 2.0 is a great tool to have. I'm really impressed with the implementation in CP77. Visual quality takes a noticeable hit, but boy, the performance trade-off is well worth it.
It doesn't actually. TAA is just oversharpened in this game and DLSS is not sharpened at all. Use CAS Sharpening or Nvidia Control Panel and it will exceed native quality in many aspects.
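For reference, a simplified NumPy sketch of what "contrast adaptive sharpening" means in principle; this is not AMD's FidelityFX CAS shader, just the general idea of scaling a negative-lobe kernel by the local min/max contrast:

```python
# Editor's simplified sketch of contrast-adaptive sharpening; this is NOT AMD's
# FidelityFX CAS shader, just the general idea: apply a negative-lobe cross
# kernel, but scale its strength by the local min/max contrast so strong edges
# are not over-driven.
import numpy as np

def cas_like_sharpen(img, strength=1.0):
    """img: 2-D float array in [0, 1]; strength in [0, 1]. Returns a sharpened copy."""
    p = np.pad(img, 1, mode="edge")
    c = p[1:-1, 1:-1]
    n, s = p[:-2, 1:-1], p[2:, 1:-1]
    w, e = p[1:-1, :-2], p[1:-1, 2:]

    # Local contrast of the cross neighborhood.
    mn = np.minimum.reduce([c, n, s, w, e])
    mx = np.maximum.reduce([c, n, s, w, e])

    # More headroom (low local contrast) -> stronger sharpening, and vice versa.
    amount = np.sqrt(np.clip(np.minimum(mn, 1.0 - mx) / np.maximum(mx, 1e-5), 0.0, 1.0))
    neg = -0.2 * strength * amount      # per-pixel negative weight, kept small for stability

    out = (c + neg * (n + s + w + e)) / (1.0 + 4.0 * neg)
    return np.clip(out, 0.0, 1.0)

# Example: a soft edge gets steeper after sharpening.
row = np.array([0.2, 0.2, 0.2, 0.35, 0.65, 0.8, 0.8, 0.8])
print(np.round(cas_like_sharpen(np.tile(row, (8, 1)))[0], 2))
```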
 

MarlboroRed

Member
It doesn't actually. TAA is just oversharpened in this game and DLSS is not sharpened at all. Use CAS Sharpening or Nvidia Control Panel and it will exceed native quality in many aspects.
Makes sense. The game locks off any anti-aliasing options when choosing DLSS. I already slapped on some sharpening with the NVCP.
Felt a little dirty, and it still didn't really reach native IQ, but again, the performance increase puts my aging rig over the magic 60 FPS line for a smooth picture.

Running the game at native resolution, I lose 1/4 to 1/3 of my FPS.
Personally, I'm bothered more by suboptimal performance. I gladly take the hit in IQ.

I booted up RDR2 again the other night and tried out ReShade's CAS shader at 0.875 resolution scale. The IQ, to my eye, is almost identical to native with no sharpening applied.
NVCP/Freestyle in that game, on the other hand, looks artificial to me, no matter what settings I set.
 