
DLSS / AI Scaling and next gen consoles

THE DUCK

voted poster of the decade by bots
A couple of thoughts - wondering what everyone here thinks...

As DLSS/AI scaling has shown itself to be a huge improvement to performance for Nvidia, I am wondering the following:

1) Will this technology play a factor in this generation of consoles? If so, when? We haven't really seen anything yet.

2) Is Nintendo being super sneaky here? What if they are secretly (ok, not so secretly) working with Nvidia on a Super Switch as rumored, and the target is a 1080p machine that's not portable (but plays Switch games, and new games will still play in neutered form on Switch)? This $299 megaton could potentially be in the 6-8TF range and output image quality similar to the PS5 and Series X.

3) Is there some reason Sony and/or MS didn't chase Nvidia tech, or did they even try? Or was Nvidia the problem?
 

VFXVeteran

Banned
A couple of thoughts - wondering what everyone here thinks...

As DLSS/AI scaling has shown itself to be a huge improvement to performance for Nvidia, I am wondering the following:

1) Will this technology play a factor in this generation of consoles? If so, when? We haven't really seen anything yet.

The consoles don't appear to have anything like Nvidia's AI cores. If they did, it would have been announced by now.

2) Is Nintendo being super sneaky here? What if they are secretly (ok, not so secretly) working with Nvidia on a Super Switch as rumored, and the target is a 1080p machine that's not portable (but plays Switch games, and new games will still play in neutered form on Switch)? This $299 megaton could potentially be in the 6-8TF range and output image quality similar to the PS5 and Series X.

Nintendo could very well get a subset of the Tensor cores found on Nvidia's boards.

3) Is there some reason Sony and/or MS didn't chase Nvidia tech, or did they even try? Or was Nvidia the problem?

Nvidia wanted money for the premium hardware they provide. Neither MS nor Sony wanted to pay that premium, so they went with AMD.
 
A couple of thoughts - wondering what everyone here thinks...

As DLSS/AI scaling has shown itself to be a huge improvement to performance for Nvidia, I am wondering the following:

3) Is there some reason Sony and/or MS didn't chase Nvidia tech, or did they even try? Or was Nvidia the problem?

Sony has patented their own technology similar to DLSS.
 
Yes, I expect reconstruction techniques to be heavily utilized. Sony has already been using checkerboard rendering, and Nvidia has DLSS 1.9, which doesn't use any tensor cores, so it's possible to do without dedicated hardware. Even AMD's FidelityFX, which is little more than a sharpening filter, makes a big difference. I believe the UE5 demo uses some form of reconstruction too.
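For anyone curious what "little more than a sharpening filter" means in practice, here's a minimal sketch of an unsharp-mask-style sharpen in plain NumPy. To be clear, this is not AMD's actual FidelityFX CAS code - just a toy pass showing how cheap this kind of post-process is after an upscale.

```python
import numpy as np

def sharpen(image: np.ndarray, strength: float = 0.5) -> np.ndarray:
    """Rough unsharp-mask style sharpen (NOT AMD's FidelityFX CAS, just an illustration).

    image: HxW (or HxWxC) float array in [0, 1].
    """
    # Blur with a 3x3 box filter built from shifted copies of the image.
    padded = np.pad(image, [(1, 1), (1, 1)] + [(0, 0)] * (image.ndim - 2), mode="edge")
    blur = np.zeros_like(image, dtype=np.float32)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            blur += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    blur /= 9.0

    # Add back a scaled high-frequency term (detail = image - blur).
    return np.clip(image + strength * (image - blur), 0.0, 1.0)

# Usage: upscale first (e.g. bilinear), then sharpen the result.
frame = np.random.rand(360, 640).astype(np.float32)   # stand-in for an upscaled frame
crisp = sharpen(frame, strength=0.6)
```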
 

Rikkori

Member
Unfortunately DLSS has been so propagandised by outlets like DF that facts are almost gone from the discussion.

The main thing to understand about DLSS (2.0) is that what it's really good at is being a better AA than the general-purpose TAA we usually see. Certain studios have already demonstrated TAA on par with or better than DLSS 2.0, including when reconstructing from a lower render resolution. The best example of this is The Division 2.

Therefore the whole AI angle is nothing but nonsense (as should've been evident from their first go at it) and not at all necessary in order to achieve what we care about: better image quality at lower resolution for the purpose of improving performance.

That's why you'll see - and indeed it's exactly what Insomniac (and every other first-party studio) is doing - temporal injection and dynamic resolution. They accumulate frames at a lower resolution, which keeps image quality near native BUT offers much better performance than native. This scales much better at higher resolutions, so it's the perfect case: they want 4K and can't reach it, but temporal techniques allow them to close that gap, and 99% of people can't tell the difference.

This was in fact pioneered early on by Remedy for Quantum Break, and it was the only way they could make the game they wanted run on Xbox One. You can also see this become more prevalent in Ubisoft games (The Division 2 as mentioned above - best in class by far, even above DLSS - but it's also available in Breakpoint).

There's really a lot to discuss but I don't want to get into the weeds too much. For the purposes of consoles what you need to understand is this: there is an alternative to DLSS, and it doesn't require AI, and you will absolutely see it going forward. So don't fret about missing out on Nvidia marketing. :)
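To make the "accumulate frames at lower resolution" idea concrete, here's a toy NumPy sketch of temporal accumulation: each frame contributes a cheap, noisy sample, and an exponential blend resolves detail over time. This is only an illustration of the general TAA/temporal-injection idea, not Insomniac's or anyone's actual implementation - real versions also reproject the history with motion vectors and reject stale samples.

```python
import numpy as np

def accumulate(history: np.ndarray, current: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Exponential blend of the newest (jittered, low-cost) frame into the history buffer.

    Real TAA / temporal upsampling also reprojects `history` with motion vectors
    and clamps or rejects samples that no longer match the scene; omitted here.
    """
    return alpha * current + (1.0 - alpha) * history

# Toy "scene": the ground truth we would like to resolve over time.
rng = np.random.default_rng(0)
truth = rng.random((180, 320)).astype(np.float32)

history = truth + rng.normal(0.0, 0.2, truth.shape).astype(np.float32)  # noisy first frame
for _ in range(32):
    # Each new frame is a cheap, noisy estimate of the same image
    # (standing in for a jittered lower-resolution render).
    current = truth + rng.normal(0.0, 0.2, truth.shape).astype(np.float32)
    history = accumulate(history, current, alpha=0.1)

# The per-pixel error shrinks as samples accumulate, which is the whole point.
print("mean abs error after accumulation:", float(np.abs(history - truth).mean()))
```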
 
If that's the case, why isn't it being implemented in any of the launch titles? All of those games are using the same old tech from before: dynamic res and upscaled images. We've seen no 4K/60 games using max features.
Probably not mature enough to include as part of the PS5. It's probably a similar reason why Nvidia says you might prefer TAA over DLSS depending on the game.
Also, what do you mean by max features? Like 4K/60fps and RT?
 

Bo_Hazem

Banned
Sony has that covered. They started with checkerboarding on PS4 Pro, before DLSS.



The patent is in Japanese, though:

 
Last edited:

VFXVeteran

Banned
Probably not mature enough to include as part of the PS5. It's probably a similar reason why Nvidia says you might prefer TAA over DLSS depending on the game.
Also, what do you mean by max features? Like 4K/60fps and RT?
Yeah. I hate TAA - it's way too blurry and based on the old-school form of AA.
 
Checkerboarding is not likely to be an answer to DLSS 2.0+

While it is trying to accomplish the same thing - achieving a higher perceived resolution using fewer pixels - as of right now checkerboard rendering is substantially inferior to DLSS 2.0. They go about achieving that goal in completely different ways.



Is there a checkerboarding 2.0 on the horizon?

Can any form of checkerboard rendering ever hope to compete with DLSS that uses hardware acceleration (Tensor Cores)?

There is no doubt a serious effort going on behind the scenes (Sony and MS) to try and come up with their own solutions.

However, there's a pretty good chance that Nvidia will be the only game in town for this kind of tech for a while, and will enjoy the fruits of this exclusive advantage.

Looks like the new consoles came just in time to dabble in raytracing but just missed the boat on comparable DLSS-like tech.
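As a rough illustration of what checkerboard rendering does (and why it's cheaper than native), here's a toy NumPy sketch: each frame only half the pixels are shaded, in an alternating checkerboard pattern, and the missing half is filled in from the previous frame's output. The actual console implementations (e.g. on PS4 Pro) are far more sophisticated, using motion vectors and ID buffers, so treat this purely as a sketch of the core idea.

```python
import numpy as np

def checkerboard_frame(render_full, prev_output, frame_index):
    """Shade only half the pixels this frame; take the other half from the last frame.

    render_full: callable returning the fully shaded frame (stands in for the GPU).
    prev_output: last reconstructed frame (same shape), or None on the first frame.
    """
    full = render_full()                      # a real renderer would only shade the masked half
    h, w = full.shape[:2]
    ys, xs = np.indices((h, w))
    mask = (xs + ys + frame_index) % 2 == 0   # which half of the checkerboard is "fresh"

    return np.where(mask, full, prev_output if prev_output is not None else full)

# Usage sketch: a static scene, alternating the pattern so every pixel refreshes over two frames.
scene = np.random.rand(180, 320).astype(np.float32)
prev = None
for i in range(4):
    prev = checkerboard_frame(lambda: scene, prev, i)
```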
 

quest

Not Banned from OT
Without tensor-like cores it is not happening. Sony and Microsoft can have patents till the cows come home, but running them on the shader cores is too slow. There is just not enough horsepower. On top of that, the shader cores will have enough pressure running shaders, RT, etc. Too bad this came along so recently and AMD had nothing to hardware-assist it. The only AI will be in older BC games with lots of extra render time and low overhead usage.
 

THE DUCK

voted poster of the decade by bots
The lack of hardware units might not stop Nintendo if they have the right Nvidia hardware.
 

Bo_Hazem

Banned
Checkerboarding is not likely to be an answer to DLSS 2.0+

While it is trying to accomplish the same thing - achieving a higher perceived resolution using fewer pixels - as of right now checkerboard rendering is substantially inferior to DLSS 2.0. They go about achieving that goal in completely different ways.



Is there a checkerboarding 2.0 on the horizon?

Can any form of checkerboard rendering ever hope to compete with DLSS that uses hardware acceleration (Tensor Cores)?

There is no doubt a serious effort going on behind the scenes (Sony and MS) to try and come up with their own solutions.

However, there's a pretty good chance that Nvidia will be the only game in town for this kind of tech for a while, and will enjoy the fruits of this exclusive advantage.

Looks like the new consoles came just in time to dabble in raytracing but just missed the boat on comparable DLSS-like tech.


It's already 4-year-old tech. We should learn more after AMD's RDNA2 reveal. Many NDAs are tied to it, and we might find out that even Xbox has a similar solution borrowed from RDNA2, while Sony's patents already confirm it has its own version of new AI image reconstruction.
 

VFXVeteran

Banned
Without tensor-like cores it is not happening. Sony and Microsoft can have patents till the cows come home, but running them on the shader cores is too slow. There is just not enough horsepower. On top of that, the shader cores will have enough pressure running shaders, RT, etc. Too bad this came along so recently and AMD had nothing to hardware-assist it. The only AI will be in older BC games with lots of extra render time and low overhead usage.

This is why I feel AMD fast-tracked RT support in their hardware for the next-gen consoles. I really believe they planned on skipping this generation in order to refine 4K/60FPS performance and basically reach parity with the 2080 Ti's GPU features from last gen. It just doesn't make much sense to have a watered-down RT solution (not supporting many RT features at once) without any hardware bandwidth to keep FPS high.
 
Last edited:
It's already 4-year-old tech. We should learn more after AMD's RDNA2 reveal. Many NDAs are tied to it, and we might find out that even Xbox has a similar solution borrowed from RDNA2, while Sony's patents already confirm it has its own version of new AI image reconstruction.

Those articles you've linked to - I've read them before. IMO they have misinterpreted the Sony patent; it is in fact a patent relating to cameras and image processing (an industry Sony is big in) and not at all a response to DLSS.

It's possible that AMD will announce RDNA 2 in a couple of weeks and that they will have strong raytracing capabilities and their own robust response to DLSS... I just wouldn't bet on it. I do hope so, though.
 
Last edited:

Bo_Hazem

Banned
Those articles you've linked to - I've read them before. IMO they have misinterpreted the Sony patent; it is in fact a patent relating to cameras and image processing (an industry Sony is big in) and not at all a response to DLSS.

It's possible that AMD will announce RDNA 2 in a couple of weeks and that they will have strong raytracing capabilities and their own robust response to DLSS... I just wouldn't bet on it. I do hope so, though.

Could be. It's better to wait for the RDNA2 reveal later this month anyway; after that we should learn more, or at the next GDC in the worst-case scenario.
 

Bo_Hazem

Banned
It's not a force. There is no evidence of it being used yet. Where is it? Show me a demo. Also, about the hardware ML units: they aren't on the blueprints for the XSX, nor were they talked about in Cerny's presentation. It's an assumption based on logic, since nothing has been shown.

Well, Sony was using checkerboarding in 2016 and it was pretty solid, and all of a sudden they'll scratch the whole idea and brute-force their way to 4K@30-60fps with raytracing? Anyway, it's better we wait for the RDNA2 reveal; more will come out for both sides.
 

VFXVeteran

Banned
Well, Sony was using checkerboarding in 2016 and it was pretty solid, and all of a sudden they'll scratch the whole idea and brute-force their way to 4K@30-60fps with raytracing? Anyway, it's better we wait for the RDNA2 reveal; more will come out for both sides.

I'm not very hopeful, tbh. I don't think AMD would stay silent on RT and ML just for this reveal. Everyone that has been close to the story has the impression that there are no Tensor-core-like units and definitely no separate RT cores like Nvidia's. We've seen the XSX system and you can see there is nothing there.
 

Bo_Hazem

Banned
I'm not very hopeful, tbh. I don't think AMD would stay silent on RT and ML just for this reveal. Everyone that has been close to the story has the impression that there are no Tensor-core-like units and definitely no separate RT cores like Nvidia's. We've seen the XSX system and you can see there is nothing there.

The XSX isn't running clocks as high as RDNA2's, so I won't judge RDNA2 by its custom die anyway. It's not too far away; only 2 weeks to go. Also, I think we could see smart software-based RT on PS5, just like LocalRay, alongside the HW RT they have.
 
The XSX isn't running clocks as high as RDNA2's, so I won't judge RDNA2 by its custom die anyway. It's not too far away; only 2 weeks to go. Also, I think we could see smart software-based RT on PS5, just like LocalRay, alongside the HW RT they have.
Will PC RDNA2 GPUs use liquid metal as well? That is another reason people doubt the 2.5GHz clocks on the PC RDNA2 GPUs - the full-fledged chips, which wouldn't be starved for power.
 

Bo_Hazem

Banned
Will PC RDNA2 GPUs use liquid metal as well? That is another reason people doubt the 2.5GHz clocks on the PC RDNA2 GPUs - the full-fledged chips, which wouldn't be starved for power.

It's patented by Sony, but AMD could offer Sony better prices to get it. Sony helped them shape their roadmap before, so I won't be surprised if they use the same method to keep the liquid metal from leaking and causing a mess.
 
Last edited:
I'm not very hopeful tbh. I don't think AMD would stay silent on RT and ML just for this reveal. Everyone that has been close to that story has the impression that there is no Tensor-core like units and definitely no separate RT cores like in Nvidia. We've seen the XSX system and you can see there is nothing there.
The simple question is: how much does DLSS tax the Tensor cores? If it is using only a fraction of the Tensor cores, it should be easy to perform similarly with a bit of compute shader work.

I mean, does using DLSS mean developers can't use the Tensor cores for anything else in their games? Or is there ample free Tensor core performance for other uses?
 
The simple question is: how much does DLSS tax the Tensor cores? If it is using only a fraction of the Tensor cores, it should be easy to perform similarly with a bit of compute shader work.

I mean, does using DLSS mean developers can't use the Tensor cores for anything else in their games? Or is there ample free Tensor core performance for other uses?
I don't believe they are used the same way. Otherwise we would have seen a humongous jump in rasterization, without raytracing enabled, within the same generation of GPUs, whether Turing or Ampere. I think it's closer to a free performance boost.
 
I don't believe they are used the same way. Otherwise we would have seen a humongous jump in rasterization, without raytracing enabled, within the same generation of GPUs, whether Turing or Ampere. I think it's closer to a free performance boost.
?
Obviously tensor cores are different from compute shaders. What I'm saying is that if DLSS doesn't heavily make use of the tensor cores, AMD could probably achieve something similar with a bit of compute shaders.
 
?
Obviously tensor cores are different from compute shaders. What I'm saying is that if DLSS doesn't heavily make use of the tensor cores, AMD could probably achieve something similar with a bit of compute shaders.
And where is the evidence for what you're implying, that DLSS doesn't make much use of the tensor cores?
 
?
Obviously tensor cores are different from compute shaders. What I'm saying is that if DLSS doesn't heavily make use of the tensor cores, AMD could probably achieve something similar with a bit of compute shaders.

I get what you're saying. It seems unlikely, but it's possible.

It's possible that, at any time, someone could come up with a solution that is superior to DLSS, and that it is cheap, efficient and doesn't require any kind of specialized hardware. I just think it's more likely that Nvidia will get to enjoy their DLSS exclusive advantage for quite some time. I think it's more likely that Nvidia will keep improving DLSS and the gap between DLSS and other solutions will actually widen.

If someone can come up with something that's both superior and doesn't require tensor cores, that would be pretty embarrassing for Nvidia.

Take a look at this...



I don't know who owns that technique, or what hardware they were using to achieve it. But it's very impressive and shows that others are working on DLSS-like solutions.

It does seem that, moving forward, using neural networks to intelligently upscale is the way to increase resolution as needed, rather than just endlessly adding more shader cores.
 
Last edited:
Serious question: all this for a $299 handheld console that can be docked for display mode? That would be amazing. How feasible is this?
 
And where is the evidence for what you're implying, that DLSS doesn't make much use of the tensor cores?
I'm not sure; that's why I'm asking. The RTX 2060 can do 4K DLSS at high speed, so it seems even its lower tensor count is sufficient. But is even that fully utilized? Can't developers use ML/AI techniques in-game if there's DLSS?
 
I'm not sure; that's why I'm asking. The RTX 2060 can do 4K DLSS at high speed, so it seems even its lower tensor count is sufficient. But is even that fully utilized? Can't developers use ML/AI techniques in-game if there's DLSS?
The 2060 can make use of that in a few games that aren't so demanding. You won't see it slaughtering every game at 4K resolution.
 
The 2060 can make use of that in a few games that aren't so demanding. You won't see it slaughtering every game at 4K resolution.
But is that due to the low tensor count, or merely due to lower rasterization performance? It is conceivable that with more ray tracing and rasterization hardware it could easily coast through all games at 4K DLSS with its current tensor cores.
 

geordiemp

Member
There is no evidence that it's being used, nor whether it's a software solution or not. There are no hardware ML units in either console.

ML can be done on most logic; the AMD patent on shared L1 with write-back is about IPC in deep-learning workloads.

My thoughts are that PS5 will have replaced checkerboarding with something new, and there is the patent (in Japanese only) which describes a mixture of ML and temporal, and temporal always works better at 60 FPS.

There is more to be revealed. To me this potential is the most exciting, as it would also work naturally with PS4 Pro games that used it, without dev effort.

And as a final thought, even 30 FPS temporal was great with the new techniques. For anybody who watches this, what is the downside exactly?

 
Last edited:

regawdless

Banned
I think Sony and MS will have some form of new / optimized AA or reconstruction tech.
Over the course of the generation, I would honestly prefer that they go for a base rendering resolution of 1440p with checkerboarding or whatever upscaling tech they come up with. This would leave enough headroom for nice effects and offer good enough IQ.
Stuff will get complicated though with performance and quality modes.
 

psorcerer

Banned
As usual, the thread is full of misconceptions and NV PR bullshit.
Kinda tired of explaining; nobody listens anyway.
1. DLSS is a post-processing pass, i.e. it absolutely must run after the frame is finished, meaning all the shader cores are idle at that point.
2. DNN benchmarks show us that FP16 (tensor cores) is only 2x faster than FP32 (shader cores) in real workloads on NV cards. That's why Ampere cut these cores in half and still has the same perf.
3. Any DL solution is 99.99% software. Most of the time it's the tedious task of creating a dataset of images to train on that wins the DL game.
4. The selling point of DLSS is that you can integrate it into your game without thinking too much. If you're proficient enough with TAA, your own in-house AA solution will look better.
5. Any RDNA2 part can run FP16 at 2x FP32 speed.
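Point 3 is easy to underestimate, so here's a minimal sketch of the data-preparation side: building (low-res input, native-res target) training pairs by downsampling captured frames. This only illustrates the general workflow psorcerer describes; real pipelines such as Nvidia's also capture motion vectors, jitter offsets and very high quality ground-truth renders.

```python
import numpy as np

def downsample_2x(frame: np.ndarray) -> np.ndarray:
    """Naive 2x box downsample: average each 2x2 block (stand-in for a low-res render)."""
    h, w = frame.shape[:2]
    h, w = h - h % 2, w - w % 2
    f = frame[:h, :w]
    return 0.25 * (f[0::2, 0::2] + f[1::2, 0::2] + f[0::2, 1::2] + f[1::2, 1::2])

def make_training_pairs(native_frames):
    """Turn captured native-resolution frames into (low-res, native) pairs for training."""
    pairs = []
    for target in native_frames:
        low = downsample_2x(target)           # network input: a quarter of the pixel count
        pairs.append((low, target))           # network target: the native-res frame
    return pairs

# Usage sketch with random stand-in frames; a real dataset would be thousands of captures.
frames = [np.random.rand(720, 1280).astype(np.float32) for _ in range(3)]
dataset = make_training_pairs(frames)
print(len(dataset), dataset[0][0].shape, dataset[0][1].shape)
```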
 