Weren't there rumours floating about that Nvidia is looking to jack up their prices again? They want $600 to be the entry level now (which would be the normal 3060).
The hype for the upcoming consoles has focused primarily on their new I/O infrastructures, especially when it comes to the PS5 (as attested by the million or so GAF threads on the subject). The Series X looks to be no slouch in this area either, with its own (much less talked about) solution, Velocity Architecture. Other types of "secret sauce" are often alluded to, but rarely actually explained in detail.
Who knew that all along the chefs at Nvidia were busy in the kitchen working on a delicious concoction of their own. I'm talking about DLSS 2.0, of course. While PCs are often characterised as big lumbering hulks, having to use raw power (and PC users' willingness to spend copious amounts of money on said power) to drive past the level of performance seen on consoles, this time around it seems that the PC is the one taking the more nimble and efficient approach.
I'm not usually one to buy into the hype, but the results of DLSS 2.0 are already plain to see. What's more, those results are only on the current line of Nvidia GPUs; we can almost certainly expect an even more impressive performance when the next-gen Nvidia GPUs drop (probably a little earlier than the new consoles). I suppose AMD could have something up their sleeves regarding machine learning (it would be strange if they had ignored such a hot field completely), but if any of this tech is making its way into the next-gen consoles, then both AMD and Sony/MS are keeping really quiet about it. One reason for concern is that DLSS 2.0 seems partially dependent on hardware (i.e. the tensor cores), which the consoles appear to lack.
Speaking of Nvidia and consoles, I wonder what they could potentially offer Nintendo for a theoretical Switch 2 in 2021/22? Maybe a 2-teraflop next-gen Tegra GPU loaded with DLSS 3.0 tech could significantly close the gap with the much more powerful home consoles?
Anyway, the proof of any good sauce is in the tasting and I can't wait for the next-gen consoles and GPUs to be released later this year so that we can finally know for sure.
Even if they did, they won't sell 3070s at $650+. I see prices either remaining static or going down now that AMD might be looking to compete in all segments.
I don't understand why anyone would still compare PC to console; they were never meant to be in the same ecosystem, as the PC is an open platform whose price of admission can be double or triple that of a console.
I know it's not the main point of discussion in this thread, but I can't help thinking about anyone who compared PC to console, and about the people who still do.
Why do some PC gamers find it hard to understand that consoles are not PCs and are not competing with them? Consoles are for people that don't want a PC taking up room, or that don't care for having the ultimate tech at a huge cost, or tech that is outdated so often... consoles are easy to plug in and play, and more accessible.
Because console gamers are constantly going "hur dur, look at what my $500 console can do that your $600 gaming rig can't." You're talking as if it were only PC gamers who compare PC to consoles, when the opposite is just as true.
I still don't understand why neither console manufacturer is negotiating a deal with Nvidia to use cheaper versions of their cards for future consoles. The tech is just so much better than what AMD will ever release.
Cause NVIDIA are assholes at the negotiation table.
DLSS is just ML-assisted upscaling (both versions of it).
As with any upscaling of that type, it is prone to heavy artifacting.
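A hypothetical toy illustration of the underlying point (not DLSS itself, and all names here are made up for the example): once an image has been downscaled, a plain non-ML upscaler cannot recover the fine detail that was averaged away, which is exactly the gap that artifact-prone reconstruction techniques try to paper over.

```python
import numpy as np

def downscale_2x(img):
    # Average each 2x2 block into one pixel.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale_2x_nearest(img):
    # Repeat each pixel 2x2 -- the simplest possible upscaler.
    return img.repeat(2, axis=0).repeat(2, axis=1)

rng = np.random.default_rng(0)
# A "detailed" 64x64 image: smooth gradient plus high-frequency noise.
y, x = np.mgrid[0:64, 0:64]
image = x / 64.0 + 0.3 * rng.standard_normal((64, 64))

reconstructed = upscale_2x_nearest(downscale_2x(image))
error = np.abs(image - reconstructed).mean()
print(f"mean reconstruction error: {error:.3f}")  # nonzero: fine detail is gone
```

The nonzero error is the information that naive upscaling simply cannot get back; ML approaches hallucinate a plausible replacement for it, which is where the artifacts come from.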
"OMG, I can use upscaling to get higher FPS" is funny and scary at the same time. People are so orgasmic about dubious green technologies, I wonder if it is some sort of brain implant.
isn't prone to much artifacting at all
Yeah, merely lost raindrops upon brief inspection in the other thread.
The real secret sauce is called great studios and money.
I'll take native resolution over reconstruction thanks.
Well, sorry man. This wasn't meant to be dismissive of Sony. Everyone has the same problem.
Unless Nvidia wants to start handing out models, everyone is stuck unless they want to build their own, which can by all means be very expensive.
To put things into perspective, Google, Amazon, and MS are the largest cloud-processing providers for AI. None of them has a DLSS-style model. Facebook is trying but has something inferior to Nvidia's, as I understand it. Even using RTX AI hardware, it's orders of magnitude away from DLSS performance.
MS can tout ML capabilities on the console, but without a model it's pointless. The technology for AI is in the model; the hardware to run it is trivial.
Further explanation on this front: a trained model consists of data, processing, and the network. Even if you have the neural network to train with, and let's say it was open source, you still need data, and then you need processing power.
To put things into perspective, BERT is a transformer network whose job is natural language processing. It can read sentences and understand context, as it reads both forwards and backwards. BERT the network is open source. The data is not. The data source is Wikipedia (the whole of Wikipedia is read into BERT for training), but you'd still have to process the data ahead of time for it to be ready for training. Assuming you had a setup capable of training on so much data, that then gets into the compute part of the equation. Simply put, only a handful of companies in this world can train a proper BERT model. So while there are all sorts of white papers on BERT, small teams can't verify the results or keep up, because the compute requirements are so high.
For a single training:
How long does it take to pre-train BERT?
BERT-base was trained on 4 cloud TPUs for 4 days and BERT-large was trained on 16 TPUs for 4 days. There is a recent paper that talks about bringing down BERT pre-training time – Large Batch Optimization for Deep Learning: Training BERT in 76 minutes.
If you make any change to it, any change to the network or data set, that's another 4 days of training before you can see the result. Iteration time is very slow without more horsepower on these complex networks.
***
Google BERT — estimated total training cost: US$6,912
Released last year by Google Research, BERT is a bidirectional transformer model that redefined the state of the art for 11 natural language processing tasks.
From the Google research paper: “training of BERT – Large was performed on 16 Cloud TPUs (64 TPU chips total). Each pretraining took 4 days to complete.” Assuming the training device was Cloud TPU v2, the total price of one-time pretraining should be 16 (devices) * 4 (days) * 24 (hours) * 4.5 (US$ per hour) = US$6,912. Google suggests researchers with tight budgets could pretrain a smaller BERT-Base model on a single preemptible Cloud TPU v2, which takes about two weeks with a cost of about US$500.
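The arithmetic in the quote above can be checked directly. The US$4.5/hour Cloud TPU v2 rate is from the quote itself; the ~US$1.35/hour preemptible rate used for the BERT-Base estimate is my own assumption, chosen because it makes the "about US$500" figure line up.

```python
# Reproducing the training-cost arithmetic from the quoted article.
devices = 16          # Cloud TPU v2 devices used for BERT-Large
days = 4              # each pretraining run
rate_on_demand = 4.5  # US$ per device-hour (from the quote)

bert_large_cost = devices * days * 24 * rate_on_demand
print(f"BERT-Large pretraining: ${bert_large_cost:,.0f}")  # $6,912

# BERT-Base on a single preemptible TPU v2 for ~two weeks:
rate_preemptible = 1.35  # US$ per hour (assumed, not from the quote)
bert_base_cost = 1 * 14 * 24 * rate_preemptible
print(f"BERT-Base pretraining:  ${bert_base_cost:,.0f}")   # roughly $500
```

Either way, the numbers make the point: a single training run is affordable for a lab, but iterating dozens of times at the BERT-Large or XLNet scale is not.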
...
What may surprise many is the staggering cost of training an XLNet model. A recent tweet from Elliot Turner — the serial entrepreneur and AI expert who is now the CEO and Co-Founder of Hologram AI — has prompted heated discussion on social media. Turner wrote “it costs $245,000 to train the XLNet model (the one that’s beating BERT on NLP tasks).” His calculation is based on a resource breakdown provided in the paper: “We train XLNet-Large on 512 TPU v3 chips for 500K steps with an Adam optimizer, linear learning rate decay and a batch size of 2048, which takes about 2.5 days.”
***
None of these costs account for the R&D: how many times they had to run training just to get the result they wanted, or the labour and education required from the researchers. The above is just the cost of running the hardware.
Nvidia has been in the business since the beginning, sucking up a ton of AI research talent. They have the hardware, the resources, and the subject-matter expertise from a long legacy in graphics to make it happen. It's understandable how they were able to create the models for DLSS.
I frankly can't see anyone else being able to pull this off. Not nearly as effectively. At least not anytime soon.
Xbox One jokes aside, this is exactly the case where the "Cloud" is relevant. They do the heavy lifting somewhere else so the consumer cards don't have to.
Is DLSS applied after the image has been completely constructed? Could console manufacturers add dedicated chips to upscale the image after it was rendered, using ML, just like the PS4 Pro has a dedicated upscaling chip (if I remember correctly)? Could Nvidia sell TV manufacturers chips that use DLSS to upscale stuff? If this is possible, please make every game have a 1080p mode and we will eventually be able to upscale them well.
Seems possible to be done after the GPU has done its work, to me, with this simplified explanation:
Reading about it, I think it relies on additional information being generated along with the original image as well.
Under the hood, the DLSS 2.0 neural graphics framework is, in layman's terms, trained to see your current image and then predict what the next image will be, so it can make it look sharper and more realistic. The convolutional autoencoder takes the current low-resolution frame and the last high-resolution frame, then calculates pixel by pixel how it can turn the current image into a higher-quality one. While training this AI, the output gets compared to the original high-quality image, so the system can learn from its mistakes and get even better results the next time.
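The "current low-res frame plus last high-res frame" idea can be sketched without any neural network at all. The toy below reprojects the previous high-resolution frame using per-pixel motion vectors and blends it with a naive upsample of the current low-resolution frame; real DLSS 2.0 replaces the fixed blend with a learned convolutional autoencoder, so every function and constant here is illustrative only.

```python
import numpy as np

def upsample_2x(img):
    # Nearest-neighbour upsample of the current low-res frame.
    return img.repeat(2, axis=0).repeat(2, axis=1)

def reproject(prev_hr, motion):
    # Pull each pixel from where the motion vectors say it came from.
    h, w = prev_hr.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys - motion[..., 0], 0, h - 1)
    src_x = np.clip(xs - motion[..., 1], 0, w - 1)
    return prev_hr[src_y, src_x]

def temporal_upscale(curr_lr, prev_hr, motion, history_weight=0.8):
    # Blend warped history with the upsampled current frame.
    return history_weight * reproject(prev_hr, motion) + \
           (1 - history_weight) * upsample_2x(curr_lr)

# Toy frames: a static 8x8 high-res scene, its 4x4 low-res capture,
# and zero motion (a still camera).
prev_hr = np.arange(64, dtype=float).reshape(8, 8)
curr_lr = prev_hr.reshape(4, 2, 4, 2).mean(axis=(1, 3))
motion = np.zeros((8, 8, 2), dtype=int)

result = temporal_upscale(curr_lr, prev_hr, motion)
print(result.shape)  # (8, 8): low-res input, high-res output
```

With a static scene and correct motion vectors, most of the output detail comes from the warped history rather than the low-res frame, which is why the technique needs the game to supply those vectors and why disocclusions (pixels with no valid history) are where the artifacts show up.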
I fear that all the DLSS in the world can't save poor Craig
or that ship
Doing a quick reading, it looks like DLSS 2.0 requires the game to generate motion vectors to go along with the image, but I don't think that kills the idea of a dedicated chip added to the console or to the displays.
...overwhelmingly positive response...
No, the key was "brief inspection".
...the worst that people have to point out is... a few missing raindrops.
It would help if you would not be in denial about something extremely obvious and well known: AI upscaling is prone to heavy artifacting. The more aggressive the upscaling gets, the worse the artifacts.
Could Nvidia sell TV manufacturers chips that use DLSS to upscale stuff?
Dude...
this simplified explanation:
Describes what is going on on a supercomputer, in a datacenter.
Dude...
DLSS 2.0 is generic and doesn't require per-game training.
It works only in certain games, because the AI was trained on those certain games. The outcome of that training is inside the drivers.
If someone comes up with a generic neural network good enough for generic image processing, nobody would need Nvidia to turn it into a chip.
DLSS 2.0 is generic and doesn't require per-game training.
Since we are talking about technology pushed by the masters of FUD, the level of misinformation is not surprising... at all:
It's not per-game according to what you posted; it specifically mentions that DLSS 1.0 had to be trained on a per-game basis.
Deep learning super sampling - Wikipedia
en.wikipedia.org
People are counting out the SSD impact already? The gen has not even started. These developers can't all be lying.
SSD impact? You're in for disappointment.
Consoles are almost more powerful than PCs and that concept scares the shit out of PC gamers.
I can assure you the Xbox One X is not "almost more powerful" than upper mid-range PCs.
This is awesome technology, but the one downfall is that it can't be applied to every game by the user themselves. This has to be an effort between Nvidia and each developer on a per-game basis. In other words, this feature, while rock solid, currently doesn't boast enough support to make it worth an upgrade right now.
And it will stay that way.
Got my Secret Sauce PC build ready
Finally someone had the guts to "meme" it.
NV has applied it in a handful of games, using both DLSS 1.0 and 2.0, but the tech itself is old.
I expected someone named MrFunSocks to be more interested in what makes games fun (it's not native 4K).
It's not DLSS either.
It's crazy how far ahead Nvidia is on this compared to literally every other company on the planet. Came across this post on B3D that goes into why we aren't seeing anything near as competitive with DLSS 2.0 from Microsoft and Sony (and obviously, AMD). Post is from iroboto:
Bolded emphasis (sans underline) mine. Seems a lot of us have underestimated the degree of resources, experience, time, and investment required for a DLSS 2.0-style data model. In some ways it makes me wish at least one of the console devs had still gone with Nvidia. It'd be hilarious if Nvidia adds DLSS 2.0 support to the next Switch and it ends up punching well above its spec weight because of it xD.
Yes, the cloud is essential for this type of thing. The question is, does MS have a data model they've been able to rigorously train to implement something comparable to DLSS 2.0 in design on the Series systems?
If there's one of the very few companies in the world with the financial capital to fund that type of development, it's MS (Apple, Google, Amazon, Facebook and Tencent are the only other comparable companies with the financial capital to do this). The questions are whether they've done so, to what extent, in what span of time, and with what engineering talent. There's also the question of how far they have customized the ML capabilities on their GPUs to simulate some type of approach similar to Tensor cores without actually having access to Tensor cores, because those do a lot of the uplift in processing tasks for DLSS, and without them you'd waste a ton of system resources replicating the same thing (sort of like how lacking dedicated RT hardware requires magnitudes more GPU power to replicate the technique in software).
Game-specific training is not required, Nvidia stated that.
It's true they did, as weird as it is.
There's no denial here, just respect...
There are also "better than native" claims.
To me it more looks like...
It does not matter how the upscaling works; it is still upscaling: you get from a lower resolution to a higher one.
That's some interesting background info. While I hope AMD hasn't been left in the dust on this, that does increasingly look to be the case.
It really would be incredible if the Switch 2 was able to output 4K in docked mode on the back of some kind of future DLSS 3/4 implementation. The GAF threads would be epic! I wonder if the Nintendo people here would maintain their neutral stance, or would they go into full console warrior mode?