
So, after all the hype, it turns out that it's the PC that had the real next-gen "secret sauce" all along?

Kazza

Member
I think you can already predict the answer to this one xD.

But yeah, it could indeed be super epic if Nintendo gets some version of this tech into the next Switch, although I don't know where Nvidia is with the Tegra line right now. In any case, I can see them making those customizations for Nintendo if requested; they make good money supplying Nintendo with those Tegras, and honestly they don't have any other major clients for that processor line anyway.

Oh no! The Nintendo OT is an oasis of sanity at the moment, I'll be sad if it becomes filled with anti-Sony/Xbox memes. I suppose we could create a special Nintendo DLSS King OT and hope that the warring stays contained in there.

For anyone who wants to keep up with the latest developments in machine learning and AI in general, I can highly recommend this channel:



I like the way he calls me a "fellow scholar" at the start of every video, and usually ends them by saying "What a time to be alive!". I hope he does a video on DLSS one day, but maybe it isn't cutting edge enough for him.
 

Kamina

Golden Boy
Some RTX 2070s can already be had for $400. You're out to lunch if you think the price will jump by 50%+, especially with direct competition from AMD, which prompted Nvidia to bump down the price of the 2070 and introduce the 2070S in its stead.
In Europe you can't get it anywhere near that cheap ☹️
Part of the reason why I never went ahead with any of the RTX cards.
 

Freeman

Banned
Yes.



They are already there. RDNA2 can do FP16/INT8/INT4 at 2x/4x/8x rate.
If that's enough, we should be good; it's only a matter of time.

DLSS is just software trained on real games.
It turns out Nvidia has access to real game data.
That's it. There's no secret technology involved: whoever has the better dataset wins the DL game.
The data is extremely easy to recreate: just render the game at two different resolutions for training. Getting the right data tends to be one of the harder limitations of ML; in this case it's not a big problem.

Building a model that works with that training data and training it to work on every game is what's probably hard.
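A minimal sketch of that data-generation idea (purely illustrative, not Nvidia's actual pipeline): take a high-resolution rendered frame and pair it with a box-filter downsample of itself, standing in for rendering the same frame at a lower resolution.

```python
import numpy as np

def make_training_pair(hi_frame, scale=2):
    """Turn one high-res frame into a (low-res input, high-res target) pair.

    hi_frame: H x W x C array with H and W divisible by `scale`.
    The box-filter downsample stands in for rendering the same
    frame at a lower resolution.
    """
    h, w, c = hi_frame.shape
    lo = hi_frame.reshape(h // scale, scale, w // scale, scale, c).mean(axis=(1, 3))
    return lo, hi_frame

# Mock 8x8 "4K" frame paired with its 4x4 "1080p" counterpart.
frame = np.random.rand(8, 8, 3).astype(np.float32)
lo, hi = make_training_pair(frame)
print(lo.shape, hi.shape)  # (4, 4, 3) (8, 8, 3)
```

In practice you'd render the low-res frame natively (with jittered sampling, motion vectors, etc.) rather than downsampling, but the point stands: the pairs are cheap to produce if you have the game.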
 

Freeman

Banned
I think it's just that nobody thought it would be viable.
But if it is, it's not that hard to do.
It's also good that on the user end it's just a fixed input and output; the hard work is all done in training. Once it's refined enough, I could even see specialized hardware just for this, to make it even faster without competing with other GPU tasks for resources.

And training DL on your own game is a much easier task than creating a universal solution for all games, like DLSS.
No. If you already know that a generic model works extremely well, why would you bother stopping at specialized ones? What they could do is use the generic model as a starting point for specialized models that would work even better and require a lot less training.

The first step to making it really worthwhile is what Nvidia is doing.
 

Freeman

Banned
Remove vendor lock-in.



Not viable, unless you have nothing else to do anyway, like NV.
I think you misunderstood what I'm saying: I'm talking about other vendors trying to copy Nvidia, not them using Nvidia's tech and becoming dependent on it. To me it would even make sense for multiple vendors to work together on a solution.

How is it not viable if we already see it working great? That's pretty much what tensor cores are right now.
 

psorcerer

Banned
To me it would even make sense for multiple vendors to work together on a solution.

Training an "all fit" DNN would be a much harder task than training DL-TAA for a specific game.
Why should other vendors do it like NV does?
It makes no sense for consoles.
 

Freeman

Banned
Training an "all fit" DNN would be a much harder task than training DL-TAA for a specific game.
Why should other vendors do it like NV does?
It makes no sense for consoles.
Of course it does: requiring it to be trained for every game makes it much less practical.

What makes it easier for consoles is that you only need to care about going from 1080p to 4K, that might make things a little bit less complicated (or not, depending on how it actually works).
 

VFXVeteran

Banned
This is just false. Demonstrably false.

DLSS 2.0 is clearly and substantially better than checkerboard rendering, not only in performance saved but also in image quality and image stability AND it can achieve these results using FEWER pixels than are needed for checkerboard rendering.

You are arguing with a guy that will deal with 4x anisotropic filtering and 1440p so long as it's on the PS. He would be marveling over it if the next-gen consoles had it. I'd steer away from his remarks on anything NOT Sony.
 

VFXVeteran

Banned
Why do some PC gamers find it hard to understand that consoles are not PCs and are not competing with them? Consoles are for people that don't want a PC taking up room, or that don't care for having the ultimate tech at a huge cost or tech that is outdated so often... consoles are easy to plug in and play, and more accessible.

Maybe because every single chance they get, they talk about how components like the SSD are going to make their games look better than PC games. Or because their favorite exclusive looks so amazingly better than any PC game? Have you been missing out on all the threads?
 

SF Kosmo

Al Jazeera Special Reporter
Speaking of Nvidia and consoles, I wonder what they could potentially offer Nintendo for a theoretical Switch 2 in 2021/22? Maybe a 2-teraflop next-gen Tegra GPU loaded with DLSS 3.0 tech could significantly close the gap with the much more powerful home consoles?
It doesn't really work this way. DLSS runs on the GPU's tensor cores and takes a lot of compute to do; it's just worth it because it so drastically reduces the overhead on the rest of the GPU. It's not gonna work on a Tegra-style GPU; it would need to pack some pretty hefty extra silicon purpose-built for that.

It's also the kind of thing that requires a certain base level of quality to work. Like if you can't render at 720p with TAA you're not going to be able to give that algorithm enough to work with, and even then it's much more obvious at that base resolution.

If you really did have 2TFlops of power at your disposal I guess it's possible but I don't see that happening in a mobile system with remotely decent battery life for many years. Switch is like 0.4 TFlops. The most powerful mobile chips are maybe 1.25 and nowhere near the power efficiency or thermals Nintendo would want. I just don't see that happening until nvidia moves to 3.5nm, still a few years off.

But DLSS is definitely a game changer for people who want to play at 4K and even more so when it comes to unlocking the potential of ray tracing. It is not the only game in town, there are other reconstruction and upsampling methods that I think often look good enough, but DLSS lets you pare down the native res more aggressively and still looks better.
 

geordiemp

Member
Let me introduce you to a perfect upscaling solution coming to all consoles and PC next year



Triangle-per-pixel rendering doesn't need ML. :messenger_beaming:
 

Lethal01

Member
Let me introduce you to a perfect upscaling solution coming to all consoles and PC next year



Triangle-per-pixel rendering doesn't need ML. :messenger_beaming:


The note about "triangles per pixel" is not referring to upscaling.

This demo seems to have a great upscaling solution, but it's got nothing to do with Nanite.
 
Maybe because every single chance they get they talk about how their components like the SSD is going to make their games look better than PC games. Or because their favorite exclusive looks so amazingily better than any PC game? Have you been missing out on all the threads?
Okay, I do also mean vice versa :)
 
"My preferred platform will virtually be ignored leading up to the launch of the new consoles. Let me try to spin this the best way I can"

@mods. please change the title of the thread to the above. k thx.
Whoa, why so triggered? I never understood this mindset. Literally anyone can build a decent gaming PC. It's not rocket science, and no, you don't need a degree in nuclear physics to build one. PCs aren't some super-expensive machines that are unaffordable to everyone except billionaires and the Illuminati.

I'm hoping this was a joke post, but several individuals do get upset whenever Digital Foundry whips out the "best version of the game is on PC".
 

Rikkori

Member
The Division 2's temporal supersampling upscaling > DLSS 2.0

Rest assured UbiMassive will not remain the only studio with this (universally applicable) technique. DLSS can't even manage a pathetic one-game-per-month adoption rate, never mind availability at launch, and relies solely on Nvidia "partnerships". It's dead in the water except as marketing fodder.
 

Redlancet

Banned
Or because their favorite exclusive looks so amazingily better than any PC game? Have you been missing out on all the threads?
Because they do. Not res or framerate, just money spent. PC now can't compete with the budgets of ND, Guerrilla and others. What PC games are being made exclusively with that kind of budget? None.
 

hyperbertha

Member
I have to say DLSS 2.0 definitely has my mouth watering. It's as close as it gets when it comes to revolutionary tech. Apparently it can be done on consoles; all it takes is training it on a per-game basis. It's not a stretch that console manufacturers can make something to rival it.
 

Redlancet

Banned
Good thing the biggest games on the market are multiplats lol.

Don't know, man. There are a lot of excellent exclusives out there. I have an RTX 2080 Ti and still find things like The Last of Us 2 amazing, and the "biggest" games on the market is debatable.
 

RoadHazard

Gold Member
Enlighten me. What is the potential of a faster transfer rate feeding the system ram game assets?

It allows you to load massive amounts of data in real time, in a way that just isn't possible currently. Yes, one very nice effect of this is, as you say, that you waste less RAM on stuff you might not even need (because you can load it on demand instead of having to do it in advance in case you need it later), and you can therefore load more data that's actually useful. But it's selling it a bit short to say that's all it will do. It also makes new things possible that just couldn't be done with an HDD as the baseline spec. The simple example is the hidden loading corridors/crevices/elevators you see everywhere now, that simply won't be needed anymore. But that's a very unimaginative example, clever devs will be able to create experiences that just are not possible at all right now.
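The load-on-demand point can be sketched with a toy model (all names and numbers made up, not from any real engine): with an HDD baseline the engine must keep every zone the player *might* reach resident in RAM; with fast storage, only the immediate neighbourhood needs to be.

```python
# Toy model: how much of the world must sit in RAM around the player.
# PRELOAD_RADIUS mimics an HDD baseline (load well in advance);
# STREAM_RADIUS mimics a fast SSD (fetch just-in-time).
PRELOAD_RADIUS = 3
STREAM_RADIUS = 1

def resident_zones(player_zone, radius):
    """Set of world zones that must be held in RAM."""
    return set(range(player_zone - radius, player_zone + radius + 1))

hdd_resident = resident_zones(10, PRELOAD_RADIUS)
ssd_resident = resident_zones(10, STREAM_RADIUS)
print(len(hdd_resident), len(ssd_resident))  # 7 3
```

The freed RAM is what lets devs either hold more useful data or design spaces (fast traversal, portals, instant scene swaps) that an HDD could never feed in time.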
 

sendit

Member
Enlighten me. What is the potential of a faster transfer rate feeding the system ram game assets?

Most computer systems have a limited amount of RAM, more so than a limited amount of storage. In the case of consoles, this RAM limitation is very real. I'm sure you're capable of putting the rest together and letting your imagination run wild when this limitation is lifted. Or not.
 

Mister Wolf

Member
It allows you to load massive amounts of data in real time, in a way that just isn't possible currently. Yes, one very nice effect of this is, as you say, that you waste less RAM on stuff you might not even need (because you can load it on demand instead of having to do it in advance in case you need it later), and you can therefore load more data that's actually useful. But it's selling it a bit short to say that's all it will do. It also makes new things possible that just couldn't be done with an HDD as the baseline spec. The simple example is the hidden loading corridors/crevices/elevators you see everywhere now, that simply won't be needed anymore. But that's a very unimaginative example, clever devs will be able to create experiences that just are not possible at all right now.

5 + 5 = 2.5 + 7.5. The bandwidth of the storage is inversely proportional to the amount of system RAM needed to accomplish the same task.

Does that give it an edge over the Xbox? Sure. The Xbox would need a bigger pool of RAM to accomplish the same task. Thing is, these games weren't using a lot of RAM in the first place, and no multiplatform developer would design their game around the PS5's faster storage and smaller pool of RAM. I understand clearly what's happening. It's you guys sensationalizing, and all it comes down to is "oh well, use more memory", as seen in the PC specs for The Medium, which recommend a system with 16GB of RAM (still not a lot) while their last game recommended 8GB.
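That trade-off can be put in rough numbers. A back-of-the-envelope sketch (all figures illustrative, not measured from any console): the RAM that must be pre-loaded is whatever the storage cannot deliver within the look-ahead window.

```python
def streaming_buffer_gb(bandwidth_gbps, lookahead_s, demand_gb):
    """RAM (GB) that must be pre-loaded because storage can't deliver it in time."""
    deliverable = bandwidth_gbps * lookahead_s  # what storage can stream in the window
    return max(demand_gb - deliverable, 0.0)

demand = 4.0   # GB of assets possibly needed in the next 2 seconds
window = 2.0   # seconds of look-ahead

for name, bw in [("HDD ~0.1 GB/s", 0.1),
                 ("SATA SSD ~0.5 GB/s", 0.5),
                 ("PS5-class ~5.5 GB/s", 5.5)]:
    print(f"{name}: pre-load {streaming_buffer_gb(bw, window, demand):.1f} GB")
```

Under these made-up numbers, the HDD system must hold almost all 4GB in RAM, while the fast SSD can stream everything on demand and pre-load nothing, which is the sense in which faster storage substitutes for RAM.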
 

AllBizness

Banned
The hype for the upcoming consoles has focused primarily on their new I/O infrastructures, especially when it comes to the PS5 (as attested by the million or so GAF threads on the subject). The Series X looks like being no slouch in this area either, with its own (much less talked about) solution, the Velocity Architecture. Other types of "secret sauce" are often alluded to, but rarely actually explained in detail.

Who knew that all along the chefs at Nvidia were busy in the kitchen working on a delicious concoction of their own. I'm talking about DLSS 2.0, of course. While PCs are often characterised as big lumbering hulks, having to use raw power (and PC users willingness to spend copious amounts of money on said power) to drive past the level of performance seen on consoles, this time around it seems that the PC is the one taking the more nimble and efficient approach.

I'm not usually one to buy into the hype, but the results of DLSS 2.0 are already plain to see. What's more, those results are only on the current line of Nvidia GPUs; we can almost certainly expect even more impressive performance when the next-gen Nvidia GPUs drop (probably a little earlier than the new consoles). I suppose AMD could have something up their sleeve regarding machine learning (it would be strange if they had ignored such a hot field completely), but if any of this tech is making its way into the next-gen consoles, then both they and Sony/MS are keeping really quiet about it. One reason for concern is that DLSS 2.0 seems partially dependent on hardware (i.e. the tensor cores), which the consoles appear to lack.

Speaking of Nvidia and consoles, I wonder what they could potentially offer Nintendo for a theoretical Switch 2 in 2021/22? Maybe a 2-teraflop next-gen Tegra GPU loaded with DLSS 3.0 tech could significantly close the gap with the much more powerful home consoles?

Anyway, the proof of any good sauce is in the tasting and I can't wait for the next-gen consoles and GPUs to be released later this year so that we can finally know for sure.
Nah, PC doesn't have those I/O capabilities; the SSD is more of an advantage on console. A PC has to do more than play games, and this is how it's always going to be. On PC the SSD is literally only used for faster loading.
 

VFXVeteran

Banned
It allows you to load massive amounts of data in real time, in a way that just isn't possible currently. Yes, one very nice effect of this is, as you say, that you waste less RAM on stuff you might not even need (because you can load it on demand instead of having to do it in advance in case you need it later), and you can therefore load more data that's actually useful. But it's selling it a bit short to say that's all it will do. It also makes new things possible that just couldn't be done with an HDD as the baseline spec. The simple example is the hidden loading corridors/crevices/elevators you see everywhere now, that simply won't be needed anymore. But that's a very unimaginative example, clever devs will be able to create experiences that just are not possible at all right now.

What good is it to load the data if you can't render it all at a good FPS and good quality? Your bottleneck is always going to be the GPU.
 

diffusionx

Gold Member
It’s just a SSD. It is a marginally faster one than what most of us have, but it’s just a SSD. We’ve had SSDs in our computers for a decade now. It’s not going to change gaming, or computer graphics. I feel like the hype that came from the SSDs was more people just looking for something to get hyped up about, because the rest of the console (both of them) are pretty pedestrian. They look like solid machines, but they’re pedestrian.

I'll catch up with the previous posts in a second, but I thought XSX had machine learning, no?

? Machine learning is just a program. A computing platform (and the GPU is a computing platform by itself) can run this program. The DLSS method uses the tensor cores to run it, the same tensor cores used to denoise ray tracing. We don't know how it's set up to run on the consoles, though. I still don't think we know how AMD is structuring this GPU, how it is doing ray tracing, etc.
 

ViolentP

Member
Any time an argument is made regarding console performance vs. PC performance, I ignore it. It's a subject where PC has, and always will have, an unfair advantage. When I hear someone trying to make a competitive console argument, it just comes off as sad. Like your movie-loving friend showing you his shitty film and thinking he can compete with the likes of Kubrick.

What is rarely discussed, however, is how the closed nature of consoles allows a developer to, at some point, stop thinking about performance and start focusing on design and the distribution of that performance to reach their intended goal. It's why a lot of top-tier console games are so impressive even at a fraction of the power. It's efficiency in development. The volatile nature of PC hardware simply does not allow developers the benefit of restriction. World's your fucking oyster on PC, but when given an immense amount of power, many don't know how to efficiently distribute it. Look at fucking Star Citizen, my bros.
 

diffusionx

Gold Member
Any time an argument is made regarding console performance vs. PC performance, I ignore it. It's a subject where PC has, and always will have, an unfair advantage. When I hear someone trying to make a competitive console argument, it just comes off as sad. Like your movie-loving friend showing you his shitty film and thinking he can compete with the likes of Kubrick.

What is rarely discussed, however, is how the closed nature of consoles allows a developer to, at some point, stop thinking about performance and start focusing on design and the distribution of that performance to reach their intended goal. It's why a lot of top-tier console games are so impressive even at a fraction of the power. It's efficiency in development. The volatile nature of PC hardware simply does not allow developers the benefit of restriction. World's your fucking oyster on PC, but when given an immense amount of power, many don't know how to efficiently distribute it. Look at fucking Star Citizen, my bros.

Star Citizen should only be looked at as a way to fleece old boomers out of their retirement money.

Consoles and PCs are structured so similarly these days that they both benefit from each other's existence. It's so easy to make a game multiplatform these days that it just makes no sense not to, and the gaming audience is so huge. While I do wish sometimes we could go back to the days when there were, say, Quake 2 and Unreal just melting PCs, that's not feasible anymore with how much games cost. Still, even within the confines of multiplatform, PCs are still breaking new ground, as they did with VR and ray tracing last generation. I kind of shake my head whenever anyone is like "muh efficiency", because ultimately it doesn't matter. Yeah, Death Stranding or HZD look great on PS4, but they look and run wayyyy better on PC, as does every other game, and I think the mere existence of a top-end platform where these games can look and run at their best pushes the tech forward. The question isn't "wow, look at how great TLOU2 looks on a piece of shit PS4", the question is how good CAN that game look, and right now we just don't have that answer, and we are missing out because of that.
 
Last edited: