
[DF] Death Stranding PC DLSS 2.0 vs PS4 Pro Checkerboarding: Image Reconstruction Analysis

[image: fdzYHsv.png]




DLSS 2.0 is magical
 
You don't need dedicated hardware to use machine learning models; they just run faster on certain hardware. I use GTX 1080s for machine learning (training and evaluation), and it's the training that takes the time. At frame-time budgets, though, I can imagine dedicated hardware with lots of INT4 and INT8 instructions working well. A lot of people just use plain ol' CPUs.
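A minimal sketch of that split in PyTorch (toy model and random stand-in data, just to show where the time goes: the training loop is the slow part, while the single no-grad forward pass at the end is all that inference costs, whether on a 1080 or a plain CPU):

```python
# Minimal sketch: training is the expensive part of ML; running the trained
# model (inference) is a single forward pass and works on CPU or any GPU.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"  # a GTX 1080 or a plain CPU both work

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Training: many passes over data, gradients, optimiser steps -- this is what takes hours or days.
for step in range(1000):
    x = torch.randn(256, 64, device=device)           # stand-in for a real dataset
    y = torch.randint(0, 10, (256,), device=device)
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# Inference: one forward pass, no gradients -- cheap enough for modest hardware.
model.eval()
with torch.no_grad():
    pred = model(torch.randn(1, 64, device=device)).argmax(dim=1)
```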
 

ZywyPL

Banned
This has so much potential for Xbox as well with DirectML. Hopefully they flesh it out and make it a viable feature for devs to use.

Unfortunately, XBX ML doesn't run on dedicated hardware but on the CUs; it's basically a second iteration of the Rapid Packed Math found in the PS4 Pro and RX Vega GPUs, except instead of FP16 you can do INT8 and INT4 computations. So those 49/97 TOPS are strictly theoretical, with not a single TFLOP left for the actual rendering. In practice, say you want 10TF like the PS5: you'd need 44 CUs (which happens to give exactly the same 10.28TF), leaving you 8 CUs, the equivalent of 7.5/15 TOPS (INT8/INT4), versus 228/455 TOPS in a 2080 Ti. That's... nothing.
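For what it's worth, the arithmetic above checks out. A quick back-of-the-envelope in Python, assuming the usual RDNA 2 figures (64 FP32 ALUs per CU, 2 ops per clock via FMA, ~1.825 GHz, and 4x/8x packed-math rates for INT8/INT4):

```python
# Back-of-the-envelope check of the figures quoted above.
CLOCK_GHZ = 1.825
OPS_PER_CU = 64 * 2  # 64 FP32 ALUs per CU, fused multiply-add = 2 ops per clock

def tflops(cus):
    return cus * OPS_PER_CU * CLOCK_GHZ / 1000.0

print(tflops(52))       # ~12.15 TF FP32 for the full 52-CU GPU
print(tflops(52) * 4)   # ~48.6  TOPS INT8  (the "49 TOPS" figure)
print(tflops(52) * 8)   # ~97.2  TOPS INT4  (the "97 TOPS" figure)
print(tflops(44))       # ~10.28 TF FP32 left for rendering
print(tflops(8) * 4)    # ~7.5   TOPS INT8 on the remaining 8 CUs
print(tflops(8) * 8)    # ~14.9  TOPS INT4 on the remaining 8 CUs
```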
 

Entroyp

Member
Without dedicated hardware for DLSS-like features on consoles, which are already limited on resources, is it worth it?
 
Last edited:
Unfortunately, XBX ML doesn't run on dedicated hardware but on the CUs; it's basically a second iteration of the Rapid Packed Math found in the PS4 Pro and RX Vega GPUs, except instead of FP16 you can do INT8 and INT4 computations. So those 49/97 TOPS are strictly theoretical, with not a single TFLOP left for the actual rendering. In practice, say you want 10TF like the PS5: you'd need 44 CUs (which happens to give exactly the same 10.28TF), leaving you 8 CUs, the equivalent of 7.5/15 TOPS (INT8/INT4), versus 228/455 TOPS in a 2080 Ti. That's... nothing.

I’m sure they have their use of it thoroughly planned out seeing as it seems to be part of the new Xbox (as far as I’m aware).

Maybe it won't do exactly what DLSS does, but who knows; they aren't making it for nothing.

Well, maybe since it’s Microsoft.
 
What's better: a blurred image when you zoom in 8x with your bionic eyes, or artifacts on particles while playing normally?

DLSS quality mode looks pretty good, but those particle problems are nasty.

anyway, https://www.psu.com/news/sony-paten...ampling-technique-could-be-used-on-ps5-games/
If you had a toggle in game settings, you would pick DLSS over checkerboarding every goddamn time. Stop lying to yourself.

I hope this Sony patent is sound. This kind of tech is the future.
 

martino

Member
I don't see the point. Of course DLSS, a costlier method with dedicated hardware, destroys an older one that was great for its time.
 

llien

Member
Oh, look, someone is comparing PS4 (double 7870) pics to some abstract "PC". Oh, and "machine learning" all over the place.

[image: HyP9dUF.png]



Without dedicated hardware for DLSS-like features on consoles
GPUs are well suited to neural network inference without any dedicated hardware.
Nvidia does the actual "machine learning" on a per-game basis in its datacenters.
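As a toy illustration of that point (made-up weights standing in for a model trained elsewhere), inference is just matrix multiplies and activations, which any CPU or GPU can run without dedicated ML units:

```python
# Toy two-layer network in plain NumPy; the weights stand in for a model
# that was trained somewhere else. Inference is just matmuls + activations.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((64, 128)), np.zeros(128)   # pretend these came from training
W2, b2 = rng.standard_normal((128, 10)), np.zeros(10)

def infer(x):
    h = np.maximum(x @ W1 + b1, 0.0)   # hidden layer + ReLU
    return h @ W2 + b2                 # output logits

x = rng.standard_normal((1, 64))       # one input sample
print(infer(x).argmax())
```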
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
If you had a toggle in game settings, you would pick DLSS over checkerboarding every goddamn time. Stop lying to yourself.

I hope this Sony patent is sound. This kind of tech is the future.

Nobody is saying otherwise, but as ZywyPL was mentioning earlier, on both consoles you do not have additional dedicated units working exclusively on ML computation. The number of operations per second these consoles can do versus what nVIDIA cards do on Tensor Cores is not even in the same ballpark; it's not even in its rear-view mirror.

It is possible a mid-generation upgrade will bet on ML accelerators, and the generation of consoles after that surely will, IMHO, but we cannot compare these consoles to dedicated ML accelerators.
 

IntentionalPun

Ask me about my wife's perfect butthole
Unfortunately, XBX ML doesn't run on dedicated hardware but on the CUs; it's basically a second iteration of the Rapid Packed Math found in the PS4 Pro and RX Vega GPUs, except instead of FP16 you can do INT8 and INT4 computations. So those 49/97 TOPS are strictly theoretical, with not a single TFLOP left for the actual rendering. In practice, say you want 10TF like the PS5: you'd need 44 CUs (which happens to give exactly the same 10.28TF), leaving you 8 CUs, the equivalent of 7.5/15 TOPS (INT8/INT4), versus 228/455 TOPS in a 2080 Ti. That's... nothing.
Why would you need 10TF to process something like DLSS?

Most of the actual ML work is done on servers, doing the heavy lifting of "learning" how to produce an image; the GPU on the client end then does very little compared to rendering that same image natively.

Edit: Ah, I guess the tensor cores really are that much better at this stuff? I'm still confused about how much of that power is used during rendering versus training the data model on the backend.
 
Last edited:

Entroyp

Member
[image: HyP9dUF.png]




GPUs are well suited to neural network inference without any dedicated hardware.
Nvidia does the actual "machine learning" on a per-game basis in its datacenters.

They have to sacrifice CUs for ML, that's my point. It's not like these machines have power to spare. I never said CUs can't do ML.
 
DLSS is bonkers.

It could make ultra settings and ray tracing trivial on high-end next-gen GPUs with minimal image quality loss.

Max out everything and just DLSS that shit to oblivion to reach 240Hz xD

Even performance mode (1080p internal) is comparable to native 4K.
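The rough pixel math behind performance mode, assuming a 1080p internal resolution reconstructed to a 4K output and ignoring the (non-zero) cost of the DLSS pass itself:

```python
# Shading cost scales with the internal pixel count, not the output one.
native_4k = 3840 * 2160          # ~8.3M pixels shaded per frame at native 4K
internal_1080p = 1920 * 1080     # ~2.1M pixels shaded per frame in performance mode

print(native_4k / internal_1080p)  # 4.0 -> roughly a quarter of the shading work
```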
 
Last edited:

A.Romero

Member
Since they announced it, it seemed to me DLSS was the biggest application of the tensor cores.

We are only looking at the beginning of it.
 

nemiroff

Gold Member
PS5 has machine learning too?

We're using low-end Linux computers at work to run the results of ML training. If you want to do heavier, time-critical tasks like DLSS, on the other hand, you need hardware acceleration à la the tensor cores on Nvidia's RTX cards.
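That workflow is essentially: train on the big machine, export the result once, run the exported model on the small box. A minimal sketch assuming PyTorch for the export and onnxruntime (CPU) on the low-end machine; the model and file name are made up for illustration:

```python
# On the training machine: export the trained model once.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 10))        # stands in for a fully trained model
model.eval()
dummy = torch.randn(1, 64)
torch.onnx.export(model, dummy, "model.onnx")   # hypothetical file name

# On the low-end Linux box: only onnxruntime (CPU) is needed to run it.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")
input_name = sess.get_inputs()[0].name
out = sess.run(None, {input_name: np.random.randn(1, 64).astype(np.float32)})
```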
 
Last edited:

A.Romero

Member
Just realised... Is he really comparing a 4 year old technique against a 2 year old technique?

Yes. Not only that, the hardware needed to run DLSS is much more expensive than checkerboarding, which can be applied to any piece of modern hardware.

Still, it is important to compare them in order to see just how much better the newer (and more expensive) technique is. It's information, not a diss.

It's like comparing the performance of two different GPUs running a game, even if one of them is considerably older than the other.
 
We're using low-end Linux computers at work to run the results of ML training. If you want to do heavier, time-critical tasks like DLSS, on the other hand, you need hardware acceleration à la the tensor cores on Nvidia's RTX cards.

I'm not well versed in ML, but is it possible to ship the tech with a well-trained model already? Perhaps per game, for its particular visual style?

I mean, Facebook intends ML-assisted image reconstruction like that to power their standalone VR headsets, running on just a Snapdragon, not a frigging next-gen GPU. So I guess it's trained beforehand and only the resulting model is run; that can't be too computationally costly...
 

JonnyMP3

Member
Yes. Not only that, the hardware needed to run DLSS is much more expensive than checkerboarding, which can be applied to any piece of modern hardware.

Still, it is important to compare them in order to see just how much better the newer (and more expensive) technique is. It's information, not a diss.

It's like comparing the performance of two different GPUs running a game, even if one of them is considerably older than the other.
Maybe if he framed the narrative as a technological-improvement comparison rather than a technique-vs-technique comparison. But the fact that he put the Cerny vs Jensen fight graphic at the start doesn't agree with that theory. Without the bullshit, he's basically comparing a technique invented to simulate 4K on GCN cores against something that needs RTX... :messenger_unamused:
 

FireFly

Member
Maybe if he framed the narrative as a technological-improvement comparison rather than a technique-vs-technique comparison. But the fact that he put the Cerny vs Jensen fight graphic at the start doesn't agree with that theory. Without the bullshit, he's basically comparing a technique invented to simulate 4K on GCN cores against something that needs RTX... :messenger_unamused:
He said in the video that it wasn't a fair comparison.
 
Last edited:

A.Romero

Member
I'm not well versed in ML, but is it possible to ship the tech with a well-trained model already? Perhaps per game, for its particular visual style?

I mean, Facebook intends ML-assisted image reconstruction like that to power their standalone VR headsets, running on just a Snapdragon, not a frigging next-gen GPU. So I guess it's trained beforehand and only the resulting model is run; that can't be too computationally costly...

It used to be a model per game; now it's a generic model that devs can apply.

The ML (training) part is done at Nvidia's datacenters. Applying the model is what happens at the tensor-core level.

I think that in the future the principle behind this will permeate every kind of device and computational application. Machine learning is in its infancy, but it's advancing at an exponential rate (which is hard to imagine).
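A minimal sketch of what "applying the model" looks like on the client side, assuming PyTorch and a CUDA-capable RTX card; the network and the weight file are hypothetical, and the FP16 forward pass under autocast is the kind of work the tensor cores accelerate:

```python
import torch
import torch.nn as nn

# Stand-in for a generic, pre-trained reconstruction network shipped with the driver/game.
model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 3))
# In practice you would load the shipped weights here, e.g.:
# model.load_state_dict(torch.load("generic_upscaler.pt"))   # hypothetical file name
model.eval().cuda()

frame_features = torch.randn(1, 256, device="cuda")  # stand-in for per-frame inputs
with torch.no_grad(), torch.autocast("cuda", dtype=torch.float16):
    output = model(frame_features)  # half-precision matmuls -> tensor-core path on RTX GPUs
```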
 

IntentionalPun

Ask me about my wife's perfect butthole
So... Why compare something that he has admitted is unfair?
Hey guys! It's unfair of me to put up a Toyota against a Ferrari but I'm gonna do it anyway!

Because people want to see the difference?

It's a testament to the PS4 Pro how close it is.

If you raced a Ferrari against a Toyota and the Ferrari barely won, that's an unfair comparison, but the results still make the Toyota look great.
 
Last edited:

JonnyMP3

Member
Because people want to see the difference?

It's a testament to the PS4 Pro how close it is.

If you raced a Ferrari against a Toyota and the Ferrari barely won, that's an unfair comparison, but the results still make the Toyota look great.
Seeing the difference is cool. As I said, I don't mind watching the tech progress and the improvements across generations; I love tech. But it's framed as CB vs DLSS rather than CB to DLSS.
 

FireFly

Member
So... Why compare something that he has admitted is unfair?
Hey guys! It's unfair of me to put up a Toyota against a Ferrari but I'm gonna do it anyway!
Because the point of doing the comparison is to find out how much better DLSS is.

I think you could do exactly the same kind of video with a Ferrari vs. a Toyota. You would start the video by saying "hey this isn't a fair comparison at all, but let's find out what the extra few hundred thousand buys you".
 
Last edited: