
So, after all the hype, it turns out that it's the PC that had the real next-gen "secret sauce" all along?

GymWolf

Member
[reaction gif]
 

psorcerer

Banned
The hype for the upcoming consoles has focused primarily on their new I/O infrastructures, especially when it comes to the PS5 (as attested by the million or so GAF threads on the subject). The Series X looks to be no slouch in this area either, with its own (much less talked about) solution, the Velocity Architecture. Other types of "secret sauce" are often alluded to, but rarely actually explained in detail.

Who knew that, all along, the chefs at Nvidia were busy in the kitchen working on a delicious concoction of their own? I'm talking about DLSS 2.0, of course. While PCs are often characterised as big lumbering hulks, having to use raw power (and PC users' willingness to spend copious amounts of money on said power) to drive past the level of performance seen on consoles, this time around it seems that the PC is the one taking the more nimble and efficient approach.

I'm not usually one to buy into the hype, but the results of DLSS 2.0 are already plain to see. What's more, those results are only on the current line of Nvidia GPUs; we can almost certainly expect an even more impressive performance when the next-gen Nvidia GPUs drop (probably a little earlier than the new consoles). I suppose AMD could have something up their sleeves regarding machine learning (it would be strange if they had ignored such a hot field completely), but if any of this tech is making its way into the next-gen consoles, then both AMD and Sony/MS are keeping really quiet about it. One reason for concern is that DLSS 2.0 seems partially dependent on hardware (i.e. the tensor cores), which the consoles appear to lack.

Speaking of Nvidia and consoles, I wonder what they could potentially offer Nintendo for a theoretical Switch 2 in 2021/22? Maybe a 2-teraflop next-gen Tegra GPU loaded with DLSS 3.0 tech could significantly close the gap with the much more powerful home consoles?

Anyway, the proof of any good sauce is in the tasting and I can't wait for the next-gen consoles and GPUs to be released later this year so that we can finally know for sure.

Until there is a perf benchmark for DLSS, I would assume that it's easily achieved on any GPU. After all, "DL" is just software, and DLSS is just fancy TAA.
 
Weren't there rumours floating about that Nvidia are looking to jack up their prices again? They want $600 to be the entry level now (which would be the normal 3060).
Even if they did, they won't sell 3070s at $650+. I see prices either remaining static or going down now that AMD might be looking to compete in all segments.
 
relax, everyone will be using AI-assisted upscaling, even Facebook in mobile chips

Still, consoles will offer true next-gen chips and fast SSDs relatively cheap, because they subsidize the hardware to sell software in big volumes.

So they will have a similar solution on cheap next-gen hardware, and your old PC or Facebook's mobile chips still won't match that.
 

Ceadeus

Member
I don't understand why anyone would still compare PC to console; they were never meant to be in the same ecosystem, as PC is an open platform whose price of admission can be double or triple that of a console.

I know it's not the main point of discussion in this thread, but I can't help but think about anyone who compared PC to console, and people who still do.
 

Clear

CliffyB's Cock Holster
It's funny to me; so much for people dismissing the PS4 Pro's CBR as "fake 4K".

DLSS may be better reconstruction, but it's still a reconstruction technique. Not arguing which one is better, but in hindsight Cerny's approach of facilitating CBR at a hardware level seems remarkably forward-looking.

Also, if you consider the implementation cost and how that affects the number of titles supporting the technique, why would anyone suspect that DLSS-style ML reconstruction is going to be more prevalent on console than CBR was? It's more work, and Nvidia has no reason to share its profiles with anyone else.
 
Why do some PC gamers find it hard to understand that consoles are not PCs and are not competing with them? Consoles are for people that don't want a PC taking up room, or that don't care for having the ultimate tech at a huge cost, or tech that is outdated so often... consoles are easy to plug in and play, and more accessible.
 

ZywyPL

Banned
I don't understand why anyone would still compare PC to console; they were never meant to be in the same ecosystem, as PC is an open platform whose price of admission can be double or triple that of a console.

I know it's not the main point of discussion in this thread, but I can't help but think about anyone who compared PC to console, and people who still do.
Why do some PC gamers find it hard to understand that consoles are not PCs and are not competing with them? Consoles are for people that don't want a PC taking up room, or that don't care for having the ultimate tech at a huge cost, or tech that is outdated so often... consoles are easy to plug in and play, and more accessible.

That's why so many people (especially here) fight over their beloved plastic boxes: which one is better, more powerful, more advanced, etc. It's funny to see kids perform their daily console wars, but whenever PC steps in, they both go "buuu this man, nobody asked you for your opinion, nobody cares about PC" etc... Truth hurts: those consoles aren't as advanced as people would like them to be, they are just a cheap alternative, nothing more.
 
Why do some PC gamers find it hard to understand that consoles are not PCs and are not competing with them? Consoles are for people that don't want a PC taking up room, or that don't care for having the ultimate tech at a huge cost, or tech that is outdated so often... consoles are easy to plug in and play, and more accessible.
Because console gamers are constantly going "hur dur, look at what my $500 console can do that your $600 gaming rig can't." You're talking as if it were only PC gamers who compare PC to consoles, when the opposite is just as true.
 

Darklor01

Might need to stop sniffing glue
WOW.. so.. you mean to tell me.. PCs are more powerful and have more features than consoles? Well, I'll be damned.
 
We know the new Nvidia GPUs will support HDMI 2.1, which will allow them to output 4K 120Hz with HDR. Has anyone confirmed the consoles will have that capability?
 

Starfield

Member
I still don't understand why neither console manufacturer is negotiating a deal with Nvidia to use cheaper versions of their cards for future consoles. The tech is just so much better than what AMD will ever release.
 
I still don't understand why neither console manufacturer is negotiating a deal with Nvidia to use cheaper versions of their cards for future consoles. The tech is just so much better than what AMD will ever release.
Cause NVIDIA are assholes at the negotiation table.
 

llien

Member
DLSS is just ML-assisted upscaling (both versions of it).
As with any upscaling of that type, it is prone to heavy artifacting.

"OMG, I can use upscaling to get higher FPS" is funny and scary at the same time. People are so orgasmic about dubious green technologies that I wonder if it is some sort of brain implant.
 

Kazza

Member
DLSS is just ML-assisted upscaling (both versions of it).
As with any upscaling of that type, it is prone to heavy artifacting.

"OMG, I can use upscaling to get higher FPS" is funny and scary at the same time. People are so orgasmic about dubious green technologies that I wonder if it is some sort of brain implant.

From the looks of it, DLSS 2.0 isn't prone to much artifacting at all, which is what makes the tech so exciting. Being able to run the game at 720p internally and have it output a 1440p image (that's pretty much indistinguishable from real 1440p) means I might have some hope of running the latest games at 100+ fps, even on a 3070.
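To put rough numbers on why that's such a big win: 1440p has four times the pixels of 720p, so the GPU only shades a quarter of the pixels per frame before the reconstruction step (quick back-of-the-envelope check, ignoring the fixed cost of the DLSS pass itself):

```python
# Rough pixel-count comparison: internal 720p render vs native 1440p.
# Back-of-the-envelope only; ignores the cost of the DLSS pass itself.
internal = 1280 * 720     # pixels shaded per frame when rendering at 720p
native = 2560 * 1440      # pixels shaded per frame at native 1440p
print(native / internal)  # 4.0 -> native 1440p shades 4x as many pixels
```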
 

Kazza

Member
Yeah, merely lost raindrops upon brief inspection in the other thread.

The word merely is very apt here. Out of the overwhelmingly positive response to DLSS 2.0, the worst that people have to point out is... a few missing raindrops. It's the exception that proves the rule - as I said, not much artifacting at all. And don't forget, we're only on the second major iteration of the software side, and the first on the hardware side. I see no reason why even those rare exceptions won't be almost eliminated by the time the 3000 series/DLSS 3.0 rolls around.

Of course, anyone really bothered by any artifacting (no matter how small) is always free to go the old-fashioned brute-force route. That's one of the great things about PCs, after all - choice. For most, I suspect that enabling DLSS will become a no-brainer - it may even become the default.
 

martino

Member
The real secret sauce is called great studios and money.

this.
And MS has not really shown anything good enough to be called competitive, and that's slowly becoming worrying for them if they plan to keep Game Pass fed.
Also, if fans could suddenly judge technical stuff objectively, outside of those studios' talent, the state of the forums would be better on this subject, though.
 

Freeman

Banned
Is DLSS applied after the image has been completely constructed? Could console manufacturers add dedicated chips to upscale the image using ML after it was rendered, just like the PS4 Pro has a dedicated upscaling chip (if I remember correctly)?

Could Nvidia sell TV manufacturers chips that use DLSS to upscale stuff? If this is possible, please make every game have a 1080p mode and we will eventually be able to upscale them well.

Reading about it, I think it relies on additional information being generated alongside the original image as well, but that's possibly something they could work around.
 
It's crazy how far ahead Nvidia is on this compared to literally every other company on the planet. I came across a post on B3D that goes into why we aren't seeing anything nearly as competitive with DLSS 2.0 from Microsoft and Sony (and, obviously, AMD). The post is from iroboto:

well, sorry man. This wasn't meant to be dismissive of Sony. Everyone has the same problem.

Unless Nvidia wants to start handing out models, everyone is stuck unless they want to build their own, which can by all means be very expensive.

To put things into perspective, Google, Amazon, and MS are the largest cloud providers for AI processing. None of them has a DLSS model. Facebook is trying, but has something inferior to Nvidia's as I understand it. Even using RTX AI hardware, it's magnitudes away from DLSS performance.

MS can tout ML capabilities on the console, but with no model it's pointless.
The technology for AI is in the model; the hardware to run it is trivial.

Further explanation on this front: a trained model consists of data, processing, and the network. Even if you have the neural network to train with, and let's say it was open source, you still need data, and then you need processing power.

To put things into perspective, BERT is a transformer network whose job is natural language processing. It can read sentences and understand context, as it reads both forwards and backwards. BERT the network is open source. The data is not. The data source is Wikipedia (the whole of Wikipedia is read into BERT for training), but you'd still have to process the data ahead of time for it to be ready for training. Assuming you had a setup capable of training on so much data, you then get into the compute part of the equation. Simply put, only a handful of companies in this world can train a proper BERT model. So while there are all sorts of white papers on BERT, small teams can't verify the results or keep up, because the compute requirements are so high.

For a single training:
How long does it take to pre-train BERT?
BERT-base was trained on 4 cloud TPUs for 4 days and BERT-large was trained on 16 TPUs for 4 days. There is a recent paper that talks about bringing down BERT pre-training time – Large Batch Optimization for Deep Learning: Training BERT in 76 minutes.

If you make any change to it - any change to the network or data set - that's another 4 days of training before you can see the result. Iteration time is very slow without more horsepower on these complex networks.

***

Google BERT — estimated total training cost: US$6,912
Released last year by Google Research, BERT is a bidirectional transformer model that redefined the state of the art for 11 natural language processing tasks.

From the Google research paper: “training of BERT – Large was performed on 16 Cloud TPUs (64 TPU chips total). Each pretraining took 4 days to complete.” Assuming the training device was Cloud TPU v2, the total price of one-time pretraining should be 16 (devices) * 4 (days) * 24 (hours) * 4.5 (US$ per hour) = US$6,912. Google suggests researchers with tight budgets could pretrain a smaller BERT-Base model on a single preemptible Cloud TPU v2, which takes about two weeks with a cost of about US$500.

...

What may surprise many is the staggering cost of training an XLNet model. A recent tweet from Elliot Turner — the serial entrepreneur and AI expert who is now the CEO and Co-Founder of Hologram AI — has prompted heated discussion on social media. Turner wrote “it costs $245,000 to train the XLNet model (the one that’s beating BERT on NLP tasks).” His calculation is based on a resource breakdown provided in the paper: “We train XLNet-Large on 512 TPU v3 chips for 500K steps with an Adam optimizer, linear learning rate decay and a batch size of 2048, which takes about 2.5 days.”

***

None of these costs account for the R&D - how many times they had to run training just to get the result they wanted - or the labour and education required from the researchers. The above is just the cost of running the hardware.

Nvidia has been in the business since the beginning, sucking up a ton of AI research talent. They have the hardware, the resources, and the subject-matter expertise from a long legacy of graphics to make it happen. It's understandable how they were able to create the models for DLSS.

I frankly can't see anyone else being able to pull this off. Not nearly as effectively. At least not anytime soon.
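For what it's worth, the headline BERT figure above is easy to reproduce from the numbers given in the quote (16 Cloud TPU v2 devices, 4 days, US$4.50 per device-hour); a quick sanity check of the arithmetic:

```python
# Reproducing the BERT-Large pretraining cost estimate quoted above.
# All figures come straight from the quote: 16 Cloud TPU v2 devices,
# 4 days, US$4.50 per device-hour.
devices = 16
days = 4
rate_usd_per_hour = 4.50

total = devices * days * 24 * rate_usd_per_hour
print(f"One BERT-Large pretraining run: ${total:,.0f}")  # $6,912
```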

Bolded emphasis (sans underline) mine. It seems a lot of us have underestimated the degree of resources, experience, time and investment required for a DLSS 2.0-style data model. In some ways it makes me wish at least one of the console makers had still gone with Nvidia. It'd be hilarious if Nvidia adds DLSS 2.0 support to the next Switch and it ends up punching well above its spec weight because of it xD.

Xbox One jokes aside, this is exactly the case where the "Cloud" is relevant. They do the heavy lifting somewhere else so the consumer cards don't have to.

Yes, the cloud is essential for this type of thing. The question is, does MS have a data model they've been able to rigorously train in order to implement something comparable in design to DLSS 2.0 on the Series systems?

If there's one of the very few companies in the world with the financial capital to fund that type of development, it's MS (Apple, Google, Amazon, Facebook and Tencent are the only other comparable companies with the capital to do this). The question is whether they've done so, over what span of time, with what engineering talent, and to what extent they've customized the ML capabilities of their GPUs to simulate something like Tensor cores without actually having them (because those do a lot of the heavy lifting in DLSS, and without them you'd waste a ton of system resources replicating the same thing, much as lacking dedicated RT hardware requires magnitudes more GPU power to replicate the technique in software).
 

martino

Member
Is DLSS applied after the image has been completely constructed? Could console manufacturers add dedicated chips to upscale the image using ML after it was rendered, just like the PS4 Pro has a dedicated upscaling chip (if I remember correctly)?

Could Nvidia sell TV manufacturers chips that use DLSS to upscale stuff? If this is possible, please make every game have a 1080p mode and we will eventually be able to upscale them well.

Reading about it, I think it relies on additional information being generated alongside the original image as well.
Seems possible to me that this could be done after the GPU has finished its work, going by this simplified explanation:

Under the hood, the DLSS 2.0 neural graphics framework is, in layman's terms, trained to see your current image and then predict what the next image will be, so it can make it look sharper and more realistic. The convolutional autoencoder takes the current frame at low resolution and the last frame at high resolution, then calculates pixel by pixel how it can turn the current image into a higher-quality one. While training this AI, the output gets compared to the original high-quality image, so the system can learn from its mistakes and get even better results the next time.
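To make that data flow concrete, here's a toy numpy sketch of the idea - purely illustrative, not Nvidia's actual network: upsample the current low-res frame, reproject the previous high-res output using the game's motion vectors, and combine the two. In real DLSS 2.0, the fixed blend at the end is what the trained convolutional autoencoder replaces.

```python
import numpy as np

def reproject(prev_hi, motion_vectors):
    """Warp the previous high-res frame using per-pixel motion vectors
    (motion_vectors[y, x] = (dy, dx) in pixels, rounded for simplicity)."""
    h, w = prev_hi.shape
    ys, xs = np.indices((h, w))
    src_y = np.clip(ys - motion_vectors[..., 0].round().astype(int), 0, h - 1)
    src_x = np.clip(xs - motion_vectors[..., 1].round().astype(int), 0, w - 1)
    return prev_hi[src_y, src_x]

def toy_reconstruct(low_res, prev_hi, motion_vectors, blend=0.8):
    """Toy stand-in for the DLSS 2.0 data flow: upsample the current low-res
    frame, reproject the previous high-res output with motion vectors, and
    blend the two. The real thing replaces this fixed blend with a trained
    convolutional autoencoder."""
    scale = prev_hi.shape[0] // low_res.shape[0]
    upsampled = np.kron(low_res, np.ones((scale, scale)))  # nearest-neighbour upsample
    history = reproject(prev_hi, motion_vectors)
    return blend * history + (1.0 - blend) * upsampled

# Tiny example: a 4x4 "low-res" frame reconstructed to 8x8 using an 8x8 history.
low = np.random.rand(4, 4)
prev = np.random.rand(8, 8)
mv = np.zeros((8, 8, 2))  # zero motion this frame
print(toy_reconstruct(low, prev, mv).shape)  # (8, 8)
```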
 

Freeman

Banned
seems possible to be done after gpu has done its work to me with this simplified explanation :
Doing a quick read, it looks like DLSS 2.0 requires the game to generate motion vectors to go along with the image, but I don't think that kills the idea of a dedicated chip added to the console or to the displays.

If HDMI 2.1 can transmit 4K frames at 120Hz, I guess it can transmit 1080p 60Hz + these motion vectors.
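Rough uncompressed numbers back that intuition up. These are back-of-the-envelope assumptions (10-bit RGB colour, two 16-bit motion-vector components per pixel, no blanking or encoding overhead), not real HDMI figures:

```python
# Back-of-the-envelope link bandwidth comparison (uncompressed payload only;
# ignores blanking, FEC and encoding overhead, so these are not exact HDMI numbers).
def gbps(width, height, fps, bits_per_pixel):
    return width * height * fps * bits_per_pixel / 1e9

video_4k120 = gbps(3840, 2160, 120, 30)     # 10-bit RGB
video_1080p60 = gbps(1920, 1080, 60, 30)    # 10-bit RGB
mvecs_1080p60 = gbps(1920, 1080, 60, 32)    # assumed 2 x 16-bit motion vector components

print(f"4K120 video:                 {video_4k120:.1f} Gbps")                    # ~29.9 Gbps
print(f"1080p60 video + motion vecs: {video_1080p60 + mvecs_1080p60:.1f} Gbps")  # ~7.7 Gbps
```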
 

llien

Member
...overwhelmingly positive response...
No, the key was "brief inspection".

...the worst that people have to point out is... a few missing raindrops.
It would help if you would not be in denial about something extremely obvious and well known: AI upscaling is prone to heavy artifacting. The more aggressive the upscaling gets, the worse the artifacts get.
NV has applied it in a handful of games, using both DLSS 1.0 and 2.0, but the tech itself is old.

Could Nvidia sell TV manufacturers chips that use DLSS to upscale stuff?
Dude...
It works only in certain games, because the AI was trained on certain games. The outcome of that training lives inside the drivers.
If someone comes up with a generic neural network good enough for generic image processing, nobody would need Nvidia to turn it into a chip.

this simplified explanation:
That describes what happens on a supercomputer, in a datacenter.
 

Freeman

Banned
Dude...
It works only in certain games, because the AI was trained on certain games. The outcome of that training lives inside the drivers.
If someone comes up with a generic neural network good enough for generic image processing, nobody would need Nvidia to turn it into a chip.
DLSS 2.0 is generic and doesn't require per-game training.

Pretty much every ML project is using Nvidia hardware these days, no? Whenever I dabbled with ML, support for AMD hardware sucked and using the CPU was out of the question since it was much slower; maybe that has changed since then.

As I said before, the answer to DLSS doesn't need to be better than it; you just need to be in the same ballpark.
 

kingbean

Member
DLSS 2.0 was enough for me to sell my 1080 Ti for a 2080.

DLSS 3.0 should be doable on the 2000 series, so I feel pretty good about it.
 

Freeman

Banned
Since we are talking about technology pushed by the masters of FUD, the level of misinformation is not surprising... at all:

[attached screenshot of the Wikipedia article on DLSS]

It's not per-game according to what you posted; it specifically mentions that DLSS 1.0 had to be trained on a per-game basis.

What is the FUD you are talking about?

Stuff like this would be great for consoles, being able to just render games internally at 1080p would help us get a much bigger jump than if they have to target higher resolution or God forbid native 4K.

This sort of tech is great for everyone.
 

Dampf

Member
Since we are talking about technology pushed by the masters of FUD, the level of misinformation is not surprising... at all:

[attached screenshot of the Wikipedia article on DLSS]


To me it looks more like you are the one who spreads misinformation in this thread on a regular basis, calling other members weird names, and you seemingly aren't even aware of the difference between reconstruction and upscaling, so technically you lack the base knowledge to even argue here.

DLSS 2.0 is a generalized model for games. Game-specific training is not required; Nvidia stated that. Your Wikipedia article refers to games in general, because of course the model has to be trained on game footage, otherwise it can't work. With DLSS 1.0 you had to train a model for a specific game, like Metro Exodus, feeding it with footage of that specific game; with DLSS 2.0, however, you have one model that can be applied to many games.
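A purely hypothetical sketch of that difference (the file names and loader are invented for illustration; only the one-model-per-game vs one-generic-model distinction is the real point):

```python
# Hypothetical illustration of the DLSS 1.0 vs 2.0 difference in model handling.
# The paths and load_model() helper are invented; nothing here is Nvidia's API.

def load_model(path):
    ...  # placeholder loader

# DLSS 1.0 style: a separately trained network per title, shipped with the driver.
per_game_models = {
    "metro_exodus": "models/metro_exodus.bin",
    "control": "models/control.bin",
}
model_v1 = load_model(per_game_models["metro_exodus"])

# DLSS 2.0 style: one generalized network, reused across every title
# that integrates the SDK.
model_v2 = load_model("models/generic_dlss2.bin")
```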

That's not to say per-game training is entirely out of the question; I could see Nvidia doing that with some games to improve image quality even further. The cryptobiotes from Death Stranding are one case where per-game training would certainly prove useful, as they're an unknown quantity for the AI, and it doesn't help that the engine appears to give them false motion vectors or none at all.

DLSS 2.0 in general is very artifact-free and even improves image quality, especially in regards to temporal stability, as Digital Foundry has shown countless times.
 

Kokoloko85

Member
Lol. Did anyone question the PC elite and their $900+ tagline?

The PC elite have to remind everyone they are the best or they will melt, even if no one asked...
 

00_Zer0

Member
This is awesome technology, but the one downfall is that it can't be applied to every game by the user themselves. This has to be an effort between Nvidia and each developer on a per-game basis. In other words, this feature, while rock solid, currently doesn't boast enough support to make it worth an upgrade right now.
 

Dampf

Member
This is awesome technology, but the one downfall is that it can't be applied to every game by the user themselves. This has to be an effort between Nvidia and each developer on a per-game basis. In other words, this feature, while rock solid, currently doesn't boast enough support to make it worth an upgrade right now.
And it will stay that way.

A technique like DLSS will always require developer input. It has to take the place of TAA in the rendering pipeline. But I do think that as more and more engines integrate it, like the UE4 builds, DLSS will require much less developer input in the future. That should help the technology push even further.
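A hypothetical sketch of where that swap sits in a frame (every function here is a made-up placeholder, not a real engine API; the point is just that the DLSS-style pass occupies the slot TAA normally does, consuming the jittered low-res colour buffer plus motion vectors and producing the full-resolution frame):

```python
# Hypothetical frame loop showing where a DLSS-style pass replaces TAA.
# Every function below is a placeholder for illustration, not a real engine API.

def rasterize(scene, res, jitter):                          # pretend G-buffer/colour pass
    return "color", "motion", "depth"

def dlss_like_upscale(color, motion, depth, history, out_res):
    return f"reconstructed {out_res} frame"

def taa_resolve(color, motion, history):
    return "native-res TAA-resolved frame"

def post_process(frame):                                    # tonemap, UI, etc.
    return frame

def render_frame(scene, history, use_dlss, native_res=(3840, 2160), internal_res=(2560, 1440)):
    if use_dlss:
        # Render at a lower internal resolution with sub-pixel jitter, then
        # reconstruct to native resolution in the slot TAA would normally occupy.
        color, motion, depth = rasterize(scene, internal_res, jitter=True)
        frame = dlss_like_upscale(color, motion, depth, history, native_res)
    else:
        # Traditional path: render at native resolution and resolve with TAA.
        color, motion, depth = rasterize(scene, native_res, jitter=True)
        frame = taa_resolve(color, motion, history)
    return post_process(frame)  # post and UI run at native resolution either way

print(render_frame("scene", history=None, use_dlss=True))
```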

DLSS injection via the Nvidia control panel is unlikely to be a thing; please don't get your expectations up just because a random YouTuber told you otherwise.
 

Kazza

Member
No, the key was "brief inspection".


It would help if you would not be in denial about something extremely obvious and well known: AI upscaling is prone to heavy artifacting. The more aggressive the upscaling gets, the worse the artifacts get.
NV has applied it in a handful of games, using both DLSS 1.0 and 2.0, but the tech itself is old.


Dude...
It works only in certain games, because the AI was trained on certain games. The outcome of that training lives inside the drivers.
If someone comes up with a generic neural network good enough for generic image processing, nobody would need Nvidia to turn it into a chip.


That describes what happens on a supercomputer, in a datacenter.

Denial? I never denied there was any artifacting, but what counts as "heavy" artifacting, and how much of it would lead someone to describe a technique as "prone" to artifacting, are entirely subjective. For you, it seems that even the DLSS 2.0 implementation still isn't satisfactory in these regards, but for me (and for pretty much everyone else, be it professional analysts such as DF and NX Gamer, or just ordinary gamers) it is already very impressive. Seeing how much it improved between versions 1 and 2, and also how fast machine learning in general seems to progress, it doesn't seem too much of a stretch that DLSS 3.0 will produce a similar leap in quality. There's no denial here, just respect for what has already been achieved so far, and healthy optimism for what likely lies ahead.

This is awesome technology, but the one downfall is that it can't be applied to every game by the user themselves. This has to be an effort between Nvidia and each developer on a per-game basis. In other words, this feature, while rock solid, currently doesn't boast enough support to make it worth an upgrade right now.

This is pretty much the only thing that concerns me. If the consoles really do lack any way to implement this kind of technique, then its uptake may be a little slower than one might hope for. I suppose the most critical thing is how much time, money and effort it takes to implement this into your game. If the cost is relatively low, then why not include it?
 

Kazza

Member
It's crazy how far ahead Nvidia is on this compared to literally every other company on the planet. I came across a post on B3D that goes into why we aren't seeing anything nearly as competitive with DLSS 2.0 from Microsoft and Sony (and, obviously, AMD). The post is from iroboto:



Bolded emphasis (sans underline) mine. It seems a lot of us have underestimated the degree of resources, experience, time and investment required for a DLSS 2.0-style data model. In some ways it makes me wish at least one of the console makers had still gone with Nvidia. It'd be hilarious if Nvidia adds DLSS 2.0 support to the next Switch and it ends up punching well above its spec weight because of it xD.



Yes, the cloud is essential for this type of thing. The question is, does MS have a data model they've been able to rigorously train in order to implement something comparable in design to DLSS 2.0 on the Series systems?

If there's one of the very few companies in the world with the financial capital to fund that type of development, it's MS (Apple, Google, Amazon, Facebook and Tencent are the only other comparable companies with the capital to do this). The question is whether they've done so, over what span of time, with what engineering talent, and to what extent they've customized the ML capabilities of their GPUs to simulate something like Tensor cores without actually having them (because those do a lot of the heavy lifting in DLSS, and without them you'd waste a ton of system resources replicating the same thing, much as lacking dedicated RT hardware requires magnitudes more GPU power to replicate the technique in software).

That's some interesting background info. While I hope AMD hasn't been left in the dust on this, that does increasingly look to be the case.

It really would be incredible if the Switch 2 was able to output 4K in docked mode on the back of some kind of future DLSS 3/4 implementation. The GAF threads would be epic! I wonder if the Nintendo people here would maintain their neutral stance, or would they go into full console warrior mode?
 

llien

Member
Game-specific training is not required; Nvidia stated that.
It's true they did, as weird as it is.
"trains using non-game-specific content" is one strange statement, though.
Using the same network across the board is not.

"Non-game specific content" contradicts common sense and NVs statements on the same page:

[attached screenshot of Nvidia's statement]


Looks like marketing people didn't get "game developers do not need to provide game specific assets" wrong.

There's no denial here, just respect...
There are also "better than native" claims.

To me it looks more like...
It does not matter how the upscaling works; it is still upscaling - you get from a lower resolution to a higher one.
Again, call it Susan, if it makes you feel better.
"But it's not technique Bla" is meaningless - who cares if it is?
If tomorrow we learned demonic spells that did even better upscaling, "but it's not the tech named X" wouldn't matter (and it doesn't matter now either).
 
That's some interesting background info. While I hope AMD hasn't been left in the dust on this, that does increasingly look to be the case.

It really would be incredible if the Switch 2 was able to output 4K in docked mode on the back of some kind of future DLSS 3/4 implementation. The GAF threads would be epic! I wonder if the Nintendo people here would maintain their neutral stance, or would they go into full console warrior mode?

I think you already can predict the answer to this one xD.

But yeah, it could indeed be super-epic if Nintendo gets some version of this tech in the next Switch, although I don't know where Nvidia is with the Tegra line as of now. In any case, I can see them making those customizations for Nintendo if requested; I mean, they make good money from supplying Nintendo those Tegras, and honestly they don't have any other major clients for that processor line anyway.
 