
4k upscaling in Death Stranding: DLSS 2.0 (NV only) vs FidelityFX (cross-plat)

llien

Member
With the major Big Navi announcement coming in 3 days, let's fight over upscaling tech before it becomes much more important, shall we?

What resolution?
I find the "delivers... 4k" trend, started by known types, very annoying. No matter how many buzzwords surround a technology, upscaling is upscaling. There is no good reason to call it by the target resolution other than deception.

"but isn't it sharpening", "but isn't it AI", "but deep learning", "but <insert buzzword>"
You have an original resolution, lower than 4K, which is upscaled (upsampled, transformed, bazinga-ed, susaned, pick your poison) to 4K.
At the end of the day: you can call it Susan, if it makes you happy.

Why only one game?
This is the only game that supports both (at least to my knowledge)

Who has reviewed?
Tom's Hardware and Ars Technica.


What did they conclude?
They disagree (see more below)
Tom's is short on details, while the Ars Technica article is filled with pictures; I'd advise you to actually check out the article.

Ars Technica's conclusion:
FidelityFX CAS is both faster and delivers better quality.

Toms:
FidelityFX does a great job, but... very positive on DLSS 2.0 (and a rather frustrating "no difference unless you are pixel peeping").
Complains about shimmering seen with FidelityFX.
(I wish I knew how to embed juxtapose links)

Side note on what DLSS 2 & 1 are (and what they are not)
1.0
(source: nvidia) 1.0 was a "true" AI-based approach, with a neural network trained per game at datacenters using higher-resolution images. At least in theory it could have led to great results (the NN would be biased in ways that match the visuals of a particular game), but that didn't quite work.
2.0
(source: anandtech) 2.0 ditched per-game training; it uses TAA (temporal anti-aliasing) and a "one size fits all" NN transformation, with no more per-game training at the datacenter.
Hence it has the weaknesses typical of TAA (problems with small, quickly moving objects; blur) as well as its strengths (reduced shimmering).
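To make the temporal part concrete, here is a minimal sketch in Python (my own toy, not NVIDIA's pipeline; the function name, the nearest-neighbour fallback and the blend factor are all made-up illustrations) of how motion vectors let an upscaler reuse the previous output frame:

    import numpy as np

    def temporal_upscale(low_res, prev_high_res, motion_vectors, blend=0.9):
        h, w = prev_high_res.shape[:2]
        # Spatial upscale of the current frame (nearest neighbour for
        # brevity; the real thing uses a smarter filter / NN pass).
        ys = np.arange(h) * low_res.shape[0] // h
        xs = np.arange(w) * low_res.shape[1] // w
        upscaled = low_res[ys][:, xs]
        # Reproject history: fetch each output pixel from where it was
        # last frame, using the game's per-pixel motion vectors.
        yy, xx = np.mgrid[0:h, 0:w]
        py = np.clip(yy - motion_vectors[..., 1].astype(int), 0, h - 1)
        px = np.clip(xx - motion_vectors[..., 0].astype(int), 0, w - 1)
        history = prev_high_res[py, px]
        # Leaning heavily on history is where the TAA-like blur and the
        # trouble with small, fast-moving objects come from.
        return blend * history + (1 - blend) * upscaled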
What about something-something-deep-learning?
That took place with 1.0, at datacenters. Training was never meant to run on customer PCs with either version.

Briefly on Neural Networks
It started when humans discovered how neurons work:
A bunch of "inputs".
A bunch of "outputs".
When a certain signal level on the inputs is reached, the neuron signals its active state on the outputs.
Each input has a multiplier called a "weight" (just a single value, cool, eh?), and the neuron triggers once the weighted sum of its inputs crosses a threshold.
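As a toy illustration (mine, not from any real implementation), a single neuron boils down to a weighted sum and a threshold:

    def neuron(inputs, weights, threshold):
        # Weighted sum of the inputs; the neuron "fires" (outputs 1)
        # once the sum crosses the threshold.
        activation = sum(i * w for i, w in zip(inputs, weights))
        return 1 if activation >= threshold else 0

    neuron([1, 0, 1], [0.5, 0.9, 0.4], threshold=0.8)  # -> 1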

Apparently there are many ways to connect neurons (and in humans, very distinct structures have been discovered, e.g. in the eyes); one can create networks with wildly different numbers of neurons, arrangements, numbers of layers, and types of connections between layers. Once the NN structure is settled, it needs to be trained.

It is fed inputs, and the weights are adjusted to achieve the desired output. This is "learning" or "training", a very time-consuming process. Note that it's up to the trainer how many sample inputs to feed to the network.
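A minimal sketch of that adjustment, reusing the toy neuron() above (a plain perceptron update; the learning rate and epoch count are arbitrary):

    def train(samples, weights, threshold, lr=0.1, epochs=100):
        # samples: list of (inputs, desired_output) pairs; how many
        # you feed in is, as said, up to the trainer.
        for _ in range(epochs):
            for inputs, desired in samples:
                error = desired - neuron(inputs, weights, threshold)
                # Nudge each weight toward the desired output.
                weights = [w + lr * error * i
                           for w, i in zip(weights, inputs)]
        return weights

Inference, described next, is then nothing more than calling neuron() with the weights this returns.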

Once training is done, a set of weights is known. Using those weights to process input is called inference. It is a much, much less computationally expensive process than training; even mobile phones can do it (though of course that also depends on the complexity of the network).

Being massively parallel (think of CUs as small, dumb CPU cores), GPUs are inherently good at both inference and training of NNs. Tensor cores could improve performance further, or not; it depends.

Is NN approach inherently superior to procedural methods?
In general, no, not at all. See, for instance, a calculator.
It depends on the type of problem: the more ambiguous, "somehow catch the clue" type it is, the more likely an NN is to excel at it.

And your point was?
1) There is only one game that supports both, and the results are far from clear; it is highly subjective.
2) A world in which some games get AMD's upscaling tech and others NVIDIA's would be terrible, think about that. Luckily, though, FidelityFX is cross-platform.
 
Last edited:

00_Zer0

Member
I'm hoping AMD comes up with an open source version of this technology that any developer who wants it in their game can have. If not, I believe more cross-platform solutions like FidelityFX will come out, and this technology will start becoming commonplace in most games.

Edit - Oops, it seems that this is AMD technology, and judging by the results of it vs. DLSS 2.0, it seems that Nvidia's technology is better. Having said that, I am hopeful AMD creates more direct competition with DLSS 2.0 through a new program that uses AI and upscaling technology to better compete with Nvidia. Call it what you want, FidelityFX 2.0 or whatever, but make it distinct enough that it can compete directly with Nvidia's technology. I am rooting for RDNA2 Big Navi, but clearly, even with FidelityFX, they need a better answer to DLSS 2.0.
 
Last edited:

geordiemp

Member
With the major Big Navi announcement coming in 3 days, let's fight over upscaling tech before it becomes much more important, shall we?
...

Let's wait and see what the new versions in RDNA2 are, as little has been leaked.

Note that temporal techniques have also been improving, and on consoles they work better at 60 FPS.

At the end of the day, there are 4,000 last-gen games just on PS4, and god knows how many last-gen games in total; it would be good for everyone if this were more widely available rather than limited to a handful of games.
 
Last edited:
Is this really what's happening? Console fanboyism, due to them using AMD, is pouring over to PC GPUs? You have the tech from Nvidia, which adds detail while lowering performance cost, resulting in a richer, more detailed picture than native, and you have AMD's sharpening filter, which is nothing worth talking about at all. It does nothing except lower the image quality. Why is the discourse around DLSS so moronic?
 

geordiemp

Member
Is this really what's happening? Console fanboyism, due to them using AMD, is pouring over to PC GPUs? You have the tech from Nvidia, which adds detail while lowering performance cost, resulting in a richer, more detailed picture than native, and you have AMD's sharpening filter, which is nothing worth talking about at all. It does nothing except lower the image quality. Why is the discourse around DLSS so moronic?

So an upscaling technique adds details - it's magic, of course.

And competitor systems are rubbish sharpening, and everyone is moronic.

Good inputs, thanks.
 

Mister Wolf

Member
FidelityFX:
[screenshot]


DLSS 2.0:
[screenshot]


One is a sharpening filter and the other is actually upscaling. The end.
 

llien

Member
I'm hoping AMD comes up with an open source version of this technology that any developer who wants it in their game can have.

Cough:


actually upscaling
Not sure what that is supposed to mean, but I honestly don't see much of a difference, can't even say which of the two I like more.

Let's wait and see what the new versions in RDNA2 are, as little has been leaked.
If AMD had called RDNA "GCN <insert number>", people would perceive it as worse than it is.
To me that looked like a PR move.

They might release a new version with some "cool" buzzwords thrown around; I doubt it would change much.
Highly subjective picture peeping.
 
Last edited:
So an upscaling technique adds details - it's magic, of course.

And competitor systems are rubbish sharpening, and everyone is moronic.

Good inputs, thanks.

That's exactly right. You can see for yourself; this is the latest game supporting RTX and DLSS, Pumpkin Jack.

[screenshot]
[screenshot]


Comparing DLSS to AMD's sharpening shit is literally nonsense. So is dismissing DLSS, or repeating that not many games support it. There aren't, in fact, that many AAA games released since it came out that don't support it. It's only 2 years old. How many AAA games came out in that time that don't support DLSS? Borderlands 3, Horizon, and Red Dead Redemption 2 would certainly benefit from it. Outside of them, I can't think of any other game that would need DLSS and doesn't have it. Games like Gears 5, Doom Eternal, or Resident Evil 3 run extremely well on every hardware; DLSS is not a requirement for them. Other than that, Control, Metro, Anthem, Avengers, Tomb Raider, Death Stranding, Monster Hunter World, Battlefield 5, Final Fantasy XV, and Wolfenstein all have DLSS. Watch Dogs Legion, Call of Duty, and Cyberpunk will all have DLSS.

People who keep repeating the bullshit about how "there aren't many games using this" fail to name which other games should have DLSS by now. It only just turned 2 years old; not enough games have been released in that window for an inflated number of titles to be using it.
 

Skifi28

Member
These days you can get some great upscaling results with just TAA reconstruction, without even leaning on any vendor-specific solutions; we have seen some great results even on consoles. Sure, if it comes down to zooming 16x you'll see differences, but I think all this DLSS circle jerk is a little pointless for the 5-6 games that actually support it.
 

JimboJones

Member
I use AMD's sharpening filter all the time with games that support TAA; it really helps get some clarity back in Warzone, and I wish it had been available when I played Mankind Divided (its built-in sharpening is waaaay too aggressive).
But it's limited in what it can do; it's not adding any details, and I would never use it to lower the resolution.
 

cheezcake

Member
Side note on what DLSS 2 & 1 are (and what they are not)
1.0
(source: nvidia) 1.0 was a "true" AI-based approach, with a neural network trained per game at datacenters using higher-resolution images. At least in theory it could have led to great results (the NN would be biased in ways that match the visuals of a particular game), but that didn't quite work.
2.0
(source: anandtech) 2.0 ditched per-game training; it uses TAA (temporal anti-aliasing) and a "one size fits all" NN transformation, with no more per-game training at the datacenter.
Hence it has the weaknesses typical of TAA (problems with small, quickly moving objects; blur) as well as its strengths (reduced shimmering).
What about something-something-deep-learning?
That took place with 1.0, at datacenters. Training was never meant to run on customer PCs with either version.

You've fundamentally misunderstood and misrepresented DLSS here.
  1. "uses TAA (temporal anti-aliasing)"
    • No it doesn't. They added motion vector information from the game as an input to the NN model. TAA also uses motion vector data. By your logic DLSS uses FXAA because both take the final rasterized image as inputs.
  2. The whole insinuation that "1.0 was a "true" AI-based approach" and hence 2.0 is not, and the phrasing of ""one size fits all" NN transformation, with no more per-game training at the datacenter" as a bad thing.
    • Anyone who works in ML will tell you 2.0 is the actually sensible approach, and that 1.0 was the hacky version. The reason is simple: the true aim of any ML-based approach is generalisation. You give it a limited set of data to train on and expect it to give a very close to "correct" output for a totally new and unseen set of inputs. Limiting the model to work per-game just means it hasn't generalised strongly. DLSS 2.0 is able to use more data (because you can now feed input data from multiple games) and generalises more strongly (because the same model clearly works across multiple games). It is strictly superior. (A toy illustration of what generalisation means follows below.)
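To make the generalisation point concrete, here is a toy illustration with made-up data (nothing DLSS-related; a linear least-squares fit stands in for the NN): a model fitted on samples from two "games" is judged by its error on a third, held-out one.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_game_data(n, noise=0.05):
        # Hypothetical stand-in: inputs are pixel features, targets
        # the "correct" outputs (here a fixed linear map plus noise).
        x = rng.random((n, 3))
        return x, x @ np.array([0.2, 0.5, 0.3]) + rng.normal(0, noise, n)

    game_a, game_b, unseen_game = (make_game_data(500) for _ in range(3))
    x = np.vstack([game_a[0], game_b[0]])
    y = np.concatenate([game_a[1], game_b[1]])
    weights, *_ = np.linalg.lstsq(x, y, rcond=None)  # "training"
    x_new, y_new = unseen_game
    test_error = np.mean((x_new @ weights - y_new) ** 2)
    # Low error on the unseen "game" is what generalising strongly means.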
 
Last edited:

00_Zer0

Member
Cough:
...
Oops, I never realized AMD created this, but I'm hoping they come up with something new to compete with DLSS 2.0.
 

llien

Member
You've fundamentally misunderstood and misrepresented DLSS here.
  1. "uses TAA (temporal anti-aliasing)"
    • No it doesn't.

Yes, it does. From the article linked in the OP:

"...So for their second stab at AI upscaling, NVIDIA is taking a different tack. Instead of relying on individual, per-game neural networks, NVIDIA has built a single generic neural network that they are optimizing the hell out of. And to make up for the lack of information that comes from per-game networks, the company is making up for it by integrating real-time motion vector information from the game itself, a fundamental aspect of temporal anti-aliasing (TAA) and similar techniques. The net result is that DLSS 2.0 behaves a lot more like a temporal upscaling solution, which makes it dumber in some ways, but also smarter in others. ..."


By limiting the model to only work per-game..
You allow it to be biased in per-game ways.
A game that looks like a typical Nintendo game and a game that looks like, say, GoW 2018 would have different biases.
That would make perfect sense.
Except it failed in practice (not because the concept itself is wrong, though).

Really? Look at the eyebrows, eyelashes and hair. With FidelityFX it's all pixelated. With DLSS it's smooth and not aliased.
TAA tends to blur things in general. regawdless
Of course a blurry image is less pixelated.

upscaling technique adds details
Nothing beats a baseless statement repeated many times.
This concrete game gives an example of details being wiped out (raindrops, for instance) as, wait for it, one would have expected from TAA-based upscaling.
 

llien

Member
CAS:
[screenshot]


DLSS:
[screenshot]


One look at the grass will tell you that DLSS is clearly producing a "sharper" more detailed image.

Now that we are into nuances, the bush looks hands-down worse in DLSS 2.0 (blurry, very evident loss of detail).

It is curious what is happening with the rest, and whether either of the two upscaling techs is fighting against shaders that blur the background on purpose.

Comparing the frame as a whole, I like the part to the left of the rock in the CAS version, and the upper-right part in the D2 version.

regawdless
Do the bits in this post look better to you in the D2 version?

Just clarifying.
 
Last edited:

Dampf

Member
Again...

DLSS is something entirely different from FidelityFX. FidelityFX is upscaling plus sharpening; it's similar to turning down the resolution scale in a game like BF5 and adding RIS/Image Sharpening on top, and that is all FidelityFX does. The reason it still looks good is that those comparisons are made at 4K, meaning the internal render resolution is already higher than most monitors people view the comparisons on (1440p and 1080p). The reason Ars found FidelityFX to look better is that DLSS has issues with the low-quality motion vectors Death Stranding provides for its reconstruction, resulting in artifacts that do not happen with simple upscaling like FidelityFX. I mean, 1800p or 1600p (seemingly FidelityFX's render resolution at 4K) plus sharpening still looks very good, almost close to native 4K. However, DLSS can render as low as 1080p and still look pretty much like native 4K, or in some cases even better.
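To illustrate what "upscaling plus sharpening" means in practice, here is a rough sketch (an unsharp-mask stand-in for CAS, not AMD's actual shader; the scale and amount values are arbitrary) for a float RGB image in [0, 1]:

    import numpy as np
    from scipy.ndimage import uniform_filter, zoom

    def upscale_and_sharpen(img, scale=4 / 3, amount=0.5):
        # Step 1: plain spatial upscale (what rendering below native
        # resolution and scaling to the display amounts to).
        up = zoom(img, (scale, scale, 1), order=1)
        # Step 2: boost local contrast. Real CAS adapts the strength
        # per pixel from its neighbourhood; a fixed unsharp mask is
        # the simplest stand-in.
        blurred = uniform_filter(up, size=(3, 3, 1))
        return np.clip(up + amount * (up - blurred), 0.0, 1.0)

Note that nothing here creates information that wasn't already in the frame; it only redistributes contrast, which is the point being made.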

DLSS is AI-based reconstruction and adds missing details, an entirely different method compared to just upscaling. At 4K you can clearly see that on every monitor when zooming into the picture, which is why DF does it; in gameplay the differences are rather subtle. At lower resolutions, however, it becomes immediately apparent that FidelityFX is much inferior to DLSS. Take a look at this comparison at 1080p (please view fullscreen): https://imgsli.com/MTk0MTQ/0/1 . The difference in image quality and detail is immediately very clear.

That is all there is to say, really.
 
Last edited:
One is a sharpening filter and the other is actually upscaling. The end.
Upscaling is the general word used to describe scaling from a lower resolution to a higher one using anything other than nearest neighbor (the term became popular when people got their HD TVs and wanted their content "upscaled" to 1080p).

AI "upscaling" adds a neural network element to it in order to inject actual details in the image, instead of just blowing up the original and then making it sharp (no details gain), but they are two techniques to get your games to output at your monitor's native resolution.
 

llien

Member
It seems people are too swayed by that weird "fullscreen face" shot.
More examples (CAS always on the left, D2 on the right):


"Yet even in Nvidia's own officially captured footage, its DLSS model sometimes fails to convince. Here, the CAS + FXAA side offers an arguably sharper and clearer interpretation of stones, foliage, and rushing, moving water. You may prefer one method over the other, but the gap is less pronounced—and AMD's method has a performance edge. "
[screenshot]



DLSS:
This is a zoomed crop of a cut scene captured with DLSS enabled, upscaling to 2160p. Notice the lack of fine particle detail in the rain droplets landing on this black-and-gold mask.
[screenshot]


CAS:
Another zoomed crop of the same scene rendered with AMD's CAS and upscaling method, upscaled to 2160p. The fine particle details survive the process.
[screenshot]



In motion:
DLSS reconstruction to 1440p. I'm moving my mouse at a high speed, and the image is somewhat soft.
[screenshot]


CAS:
CAS reconstruction to 1440p of the same scene. Same mouse speed, but the pixel information is much crisper without looking overly aliased.
[screenshot]


PhoenixTank
Thanks, updated.
 
It seems people are too swayed by that weird "fullscreen face" shot.
More examples (CAS always on the left, D2 on the right):
...

I don't trust your screenshots because you clearly have an agenda. If you uploaded a video, I'd reconsider (make sure to show the settings, of course).
 

llien

Member
I don't trust your screenshots because you clearly have an agenda
Dude, seriously, did you just accuse me of forging images? :messenger_face_screaming:
That's one hell of a denial level.

The ones in the last post are from Ars Technica. Go and check for yourself.
The other one, with green bushes that look much better with CAS, has been used by other users in this thread.

PS
The interesting part about the bushes bit is that it comes from a source hyping D2.
 
Last edited:

Great Hair

Banned
FidelityFX:
[screenshot]


DLSS 2.0:
[screenshot]


One is a sharpening filter and the other is actually upscaling. The end.

Playing games this up close ain't fun!


(LEFT: DLSS, MIDDLE: NATIVE, RIGHT: FIDELITYFX)


In Death Stranding, FidelityFX delivers a much better image, very deep/far as well. DLSS2 blurs the crap out of the objects and assets in the background. Both have similar performance.

Guess it depends on the scene.

The water texture is gone with DLSS2.

LEFT: CAS
MIDDLE: DLSS
RIGHT: NATIVE

 
Last edited: