
Nvidia Live at GTC: DLSS 5

DLSS 1 (February 2019) → DLSS 2 (April 2020) was quite a crazy ramp-up.
I can already see it:
Switch 3 launching in 2032-33, in theory below PS6 or even PS6 Pro performance, but thanks to DLSS 5 (or whatever its next iteration ends up being called) making games look much more realistic than Sony's consoles and AMD's fake frame gen; the meltdowns would have no end :D
It's not even that improbable a scenario.
"Artistic vision" would be screamed from the rooftops :messenger_smiling_hearts:
 
I can already see it: Switch3 launching in 2032-33... "Artistic vision" would be screamed from the rooftops :messenger_smiling_hearts:
 
I really hope any pushback from Devs and fans alike is significant enough to minimise the ugly stifling of real art and creativity.
I hate this homogenized slop.
 




Even before AI, the messaging from developers made us feel hopeful about graphics; MGS4 in 2008, with Old Snake, was impressive. What are the console manufacturers saying about AI now?

We can't judge AI DLSS 5, or whatever it ends up being called, until a full game is released with it.

Are graphics getting worse?
 
Anyone know what those two sliders do exactly?

wWMYmBPNeqXSGtcs.jpg

Yes, this site is simply applying the off-the-shelf open source FLUX.2 klein model to your input image.

FLUX.2 is a diffusion model (with a transformer backbone, so a DiT): to generate images, it takes random noise and denoises it iteratively, in steps. The "seed" just initializes the random data for that noise; it's an RNG seed, basically (same seed -> same noise, useful if you want to rerun something and tweak it slightly with a different image or prompt).

Usually the number of steps needed is much higher, but this is a quick model distilled from a larger one, so it only needs about 4 steps to get a decent image. More steps help a little, but quality hits a ceiling since the model is small and not that capable.

As for your reference image, that is given to the model alongside the random noise that it is iterating over. It's like giving it a "before and after" shot where the after side is just tv static and it has to extract something from it in a few steps to match the input image on the left and the task given in the prompt (the prompt for this site is literally just "make it more realistic").

FLUX.2 klein is a cool model for being very fast (you can run this entire demo on your own computer with a modest GPU, or easily and quickly on a typical Apple Silicon Mac), but it isn't the greatest quality. Other models are much better, even the larger FLUX ones that are still free to use.
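To make those two sliders concrete, here's a toy, runnable sketch of that seed → noise → few-step loop. To be clear, the "denoiser" here is just a blend toward the reference image, not a real model, and all the names are mine for illustration; the actual FLUX.2 klein step is a big prompt-guided transformer:

```python
import numpy as np

def toy_denoise(reference, steps=4, seed=0):
    """Toy version of the loop a few-step model like FLUX.2 klein runs:
    start from seeded random noise ("tv static") and refine it toward
    the reference over a handful of steps. The real model is a diffusion
    transformer guided by the prompt; the blend below only illustrates
    what the two sliders control."""
    rng = np.random.default_rng(seed)         # the "seed" slider: same seed -> same noise
    x = rng.standard_normal(reference.shape)  # start from pure noise
    for t in range(steps):                    # the "steps" slider: ~4 for a distilled model
        alpha = (t + 1) / (steps + 1)         # denoise a bit harder each step
        x = (1 - alpha) * x + alpha * reference
    return x

ref = np.ones((8, 8))                         # stand-in for your input image
out = toy_denoise(ref, steps=4, seed=123)
```

Rerunning with the same seed reproduces the output exactly, while a different seed changes the leftover noise; that's all the seed slider does. Cranking the steps just shrinks that residual noise further, which is why it stops visibly helping after a point.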
 
Judging by this and some other threads, the anti-AI sentiment does a lot to keep AI relevant and popular. People supposedly hate it but can't stop thinking about it and bashing it non-stop, kind of like how an atheist probably thinks about God more than a fervent believer does.

Deep down, it's as if the AI-hater is more excited for DLSS 5 than someone like me, who likes the technology and wants to see where it leads. In a way I almost wish they made this tech mandatory and had every game use it by default, just for the reaction of the "slop" accusers.
 
Longer video posted today had some more moments with faces I haven't seen, new comparisons to look at.


ka34PNNdh0BBWWNG.jpg

zXW81kmNqUHIBSZ0.jpg


vA4mfcWMO9PcZSpb.jpg
40TI0iRIUw51tERY.jpg


c64I5ePBDrtltn5O.jpg
0zQm4AKIUfP5ZoQI.jpg


non-human shots, guns
w764SoyTGRjUHKkn.jpg
zUPxN7NwyzAgeZWw.jpg


uMtki42ZpZRu8ecA.jpg
f6HJMOBoGp3Eh8CU.jpg


I don't know why anyone would play a game like Starfield without this feature, if given the choice.
 
Longer video posted today had some more moments with faces I haven't seen... I don't know why anyone would play a game like Starfield without this feature, if given the choice.

Yeah, this is why I'd love it if you could force it at the driver level on older games. It would be a lot of fun going through my library testing that :)
 
Longer video posted today had some more moments with faces I haven't seen... I don't know why anyone would play a game like Starfield without this feature, if given the choice.

With more mature tech and better implementations, this will eventually produce genuinely good generational differences.
 
Probably because most of us can't afford two 4090s.
It's doubtful that you'll need anything like that. Tensor performance is the one area where Blackwell outperforms Ada by a significant margin: the 4090 is outperformed by a 5070 Ti, and even the base 5070 isn't too far off.
 