> It seems you are being a bit deceptive.
> So lack of reading comprehension skills, it hurts.

In my eyes the CAS looks like shit and DLSS 2.0
When your effect improves lines but adds blur/loses texture finesse, it will excel where you have lines but lack any texture in which you could detect the loss of detail. As in, wait for it, IN THE VERY EXAMPLE just presented above.
And, the best of it, this is an example of how great the glorified TAA derivative in question is.
> In my eyes

Ok.
> I am also working pretty heavily now

Good luck with that, whatever it means.
Is it sarcasm?
I hope it is sarcasm.
> The bush in question is from a noticeably different time of day.
> image quality in terms of lines

Is better (as expected).

> and textures are

Worse.
> There's a very obvious difference in quality in the side-by-side, which is why they probably didn't do a direct scene comparison and left it as a side-by-side, as it'd look much worse and even more noticeable doing a swipe.
I agree with that (blur and loss in detail is very clearly visible), but take into account that "quality" in green world is called "ultra quality" in this presentation.
Overall, AMD just needs a "good enough" solution.
It doesn't need to beat anything on the market that 1060 and 1080 Ti users cannot use anyhow.
> I agree with that (blur and loss in detail is very clearly visible), but take into account that "quality" in green world is called "ultra quality" in this presentation.
> Overall, AMD just needs a "good enough" solution.
> It doesn't need to beat anything on the market that 1060 and 1080 Ti users cannot use anyhow.
> All fanboys from both sides do that, parroting some nonsense that they read somewhere. Xbox is using hardware ML to do auto HDR on old games that do not have HDR.

What exactly do you mean by hardware? Because PS5 also has hardware ML (it's not clear to me whether it's part of the GE or just a Navi feature), but that's totally different from having dedicated chips/cores, which is what MS let their fans blindly believe.
Yup. Quality on AMD is probably the same level as the middle setting on DLSS, not the quality setting... which looks close.
I do not expect this to beat DLSS out of the gate, but this is open source and it can only get better. Especially when most games are being developed on Xbox / PlayStation / AMD hardware.
Honestly, if it’s not a head-to-head YouTube video zoomed in, it’s very hard for me as a hardcore gamer to even notice. But most gamers out there will see it as a welcome enhancement to their gaming experience.
That is fine.
The previous version of the explanation that I've heard was along the lines of "it actually looks the same".
I call it progress.
Is better (as expected)
Worse.
PS
Dude, not to go into nowhere land for arguments: on this very forum, a green GPU owner challenged me to guess which of two pics was the 1440p-to-4K upscale (also known as "4K DLSS quality", chuckle).
And, guess what, it was easy peasy: the pic that added blur and lost texture detail was the upscaled one.
Shocking eh?
> Eh, TSR is a better solution in that case. It uses fewer resources and looks better than this thing. You know how DLSS 1.0 was garbage? It was because it did things the wrong way, the same way this does. I expect a complete overhaul of how this works.

But we don’t know yet. We don’t know how good this is, because we didn’t see a real high-quality mode yet in a proper head-to-head. Come June 23rd, or whenever the release date is, you still won’t see a proper head-to-head, because my gut is telling me those selected AMD games won’t have a DLSS option enabled on day one. (I don’t think Godfall has it.)
> But we don’t know yet. We don’t know how good this is, because we didn’t see a real high-quality mode yet in a proper head-to-head. …

Doesn't Godfall use Unreal Engine? You can probably use TSR there.
> Doesn't Godfall use Unreal Engine? You can probably use TSR there.

I don’t know, I am planning to buy that game for $15 when it’s on sale. Lol
> What exactly do you mean by hardware? Because PS5 also has hardware ML (it's not clear to me whether it's part of the GE or just a Navi feature), but that's totally different from having dedicated chips/cores, which is what MS let their fans blindly believe.
The countdown to the Riky laugh emoticon has started
> What exactly do you mean by hardware? Because PS5 also has hardware ML …

I never even talked about PS5 in my post. I never said it was not present on Sony's side.
Epic's comparison:
Native 4k:
FSR from 1080p:
> Very interesting. There is a loss of detail with FSR, but considering it comes from 1080p, it's a very good improvement versus the quality lost, and the result is not that bad (I am generally against this type of tech). A curious detail is the background: it tries to reconstruct detail from the blurred background. I wonder if a rework of how depth of field is implemented, along with some tweaks to texture LODs and mipmaps, could help games using these techniques.

That isn't FSR.
> What exactly do you mean by hardware? Because PS5 also has hardware ML …

Everyone can run ML through the GPU; the problem is how fast. PS5 doesn't have INT4 or INT8, at least going by their presentation and what a PS5 engineer said.
This is pretty awesome. DLSS took a while to get to its current state; FSR may take a bit of practical use to get up to speed.
How does this get implemented on an Nvidia card?
> AMD has basically given up on the GPU space with this move, at least for a few more generations. They've chosen mass adoption, hence a simple and weak solution for upscaling, but in return DLSS will wipe the floor with them both from an image quality and a performance standpoint. So now not only do they have a massive deficit to make up for on the RT front, but they're still 3+ years behind on upscaling tech, and because they're too weak to add something without having it for consoles too, there's little hope they'll change course. Nvidia's going to finally hit that magic 90% discrete GPU market share.
> GG no re. After 21 years of Radeon it looks like I have to move on.

AMD seems to be going more for mainstream. Not only PC, but consoles, phones, etc. Nvidia is becoming more broad, but at the same time more specialized with their tech. If you want performance and quality, go green. If you want a basic solution, go red. Although I love AMD CPUs.
> AMD seems to be going more for mainstream. Not only PC, but consoles, phones, etc. …

There is no such thing as market segmentation when any sort of product is unavailable or sells out instantly due to an ongoing worldwide shortage.
Will never happen, so it's a dud then.
> I never even talked about PS5 in my post. I never said it was not present on Sony's side.
> When Sony announced the PS5 specs, they didn't mention Machine Learning (ML) capabilities. Microsoft made it one of the big features to focus on in XSX's marketing campaigns. Now it's MS fans' fault for believing simple facts?

Cerny mentioned PS5's ML capability in Road to PS5. Now, I never said the Series X is not ML capable. But it's not as capable as Xbox fans think, because it doesn't have dedicated cores for it. So if you want heavy ML features, hardware resources useful for other stuff will be sacrificed just for that. It's simple logic.
Devs will use the AMD solution because it's free and has the widest support. Nvidia will pay for exclusives and try to milk DLSS as much as possible, until it becomes untenable, just like G-Sync.
> AMD has basically given up on the GPU space with this move, at least for a few more generations. …
> GG no re. After 21 years of Radeon it looks like I have to move on.

I'm still trying to understand the true point of your posts, but I'm starting to think you are just here to troll about AMD and nothing more. Outside of "AMD is doomed" I barely see any other argumentation.
> Devs will use the AMD solution because it's free and has the widest support. Nvidia will pay for exclusives and try to milk DLSS as much as possible, until it becomes untenable, just like G-Sync.
> Free? Please tell me the cost of DLSS.

It's Nvidia, so I just assumed there was some form of licensing or consultation cost involved. Reading the license agreement, that doesn't seem to be the case (apart from the standard free marketing).
> I'm still trying to understand the true point of your posts, but I'm starting to think you are just here to troll about AMD …

If you're so lacking in basic reading comprehension, then use the ignore button; I can't spell things out more simply than I already have.
> AMD has basically given up on the GPU space with this move, at least for a few more generations. …
> GG no re. After 21 years of Radeon it looks like I have to move on.

The GPU market is now dictated by the laptop market, which is decided by what OEMs offer. Nvidia has been pretty much unchallenged in that market for, well, as long as discrete laptop GPUs have been a thing.
> You know how DLSS 1.0 was garbage? It was because it did things the wrong way, the same way this does.
> Will never happen, so it's a dud then.

It is an open-source product that Intel intends to grab and optimize.
> DLSS is here to stay.

Only if supporting it is really a very low effort (which I doubt).
> a massive deficit to make up for on the RT front

Are you reading too much into green-sponsored pre-RDNA games?
> Both solutions now need to be implemented at the driver level, on a game-by-game basis

Dude, are you for real?
You know what DLSS 1 was? A true AI solution, with per game training.
Nothing else on this planet, including DLSS 2, is in any way "the same way" as DLSS 1.
DLSS 2 is 90% TAA, with some AI to it.
Tensor cores are just a lame excuse to ban Pascal.
DLSS 2.0 Selectable Modes
One of the most notable changes between the original DLSS and the fancy DLSS 2.0 version is the introduction of selectable image quality modes: Quality, Balanced, or Performance — and Ultra Performance with 2.1. This affects the game's rendering resolution, with improved performance but lower image quality as you go through that list.
With 2.0, Performance mode offered the biggest jump, upscaling games from 1080p to 4K. That's 4x upscaling (2x width and 2x height). Balanced mode uses 3x upscaling, and Quality mode uses 2x upscaling. The Ultra Performance mode introduced with DLSS 2.1 uses 9x upscaling and is mostly intended for gaming at 8K resolution (7680 x 4320) with the RTX 3090. While it can technically be used at lower target resolutions, the upscaling artifacts are very noticeable, even at 4K (720p upscaled). Basically, DLSS looks better as it gets more pixels to work with, so while 720p to 1080p looks good, rendering at 1080p or higher resolutions will achieve a better end result.
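The arithmetic behind those modes can be sketched in a few lines. This is only an illustration of the pixel-count factors quoted above (the per-axis scale is the square root of the factor); actual DLSS render resolutions are chosen by the game and may differ slightly:

```python
import math

# Pixel-count upscaling factors per mode, as quoted in the text above.
MODE_FACTORS = {
    "quality": 2,            # 2x upscaling
    "balanced": 3,           # 3x upscaling
    "performance": 4,        # 2x width and 2x height
    "ultra_performance": 9,  # DLSS 2.1: 3x width and 3x height
}

def render_resolution(target_w, target_h, mode):
    # The per-axis scale is the square root of the pixel-count factor.
    axis_scale = math.sqrt(MODE_FACTORS[mode])
    return round(target_w / axis_scale), round(target_h / axis_scale)

# At a 4K (3840 x 2160) target:
print(render_resolution(3840, 2160, "performance"))        # (1920, 1080)
print(render_resolution(3840, 2160, "ultra_performance"))  # (1280, 720)
```

This matches the examples in the text: Performance mode at a 4K target renders 1080p, and Ultra Performance renders 720p.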
How does all of that affect performance and quality compared to the original DLSS? For an idea, we can turn to Control, which shipped with DLSS 1.0 and later received DLSS 2.0 support. (Remember, the following image comes from Nvidia, so it'd be wise to take it with a grain of salt too.)
Control at 1080p with DLSS off (top), DLSS 1.0 on (middle) and DLSS 2.0 Quality Mode on (bottom) (Image credit: Nvidia)
One of the improvements DLSS 2.0 is supposed to bring is strong image quality in areas with moving objects. The updated rendering in the above fan image looks far better than the image using DLSS 1.0, which actually looked noticeably worse than having DLSS off.
DLSS 2.0 is also supposed to provide an improvement over standard DLSS in areas of the image where details are more subtle.
Control at 1440p using the original DLSS (top) and DLSS 2.0 Quality Mode (bottom) (Image credit: Nvidia)
Nvidia promised that DLSS 2.0 would result in greater game adoption. That's because the original DLSS required training the AI network for every new game that needed DLSS support. DLSS 2.0 uses a generalized network, meaning it works across all games and is trained using "non-game-specific content," as per Nvidia.
For a game to support the original DLSS, the developer had to implement it, and then the AI network had to be trained specifically for that game. With DLSS 2.0, that latter step is eliminated. The game developer still has to implement DLSS 2.0, but it should take a lot less work, since it's a general AI network. It also means updates to the DLSS engine (in the drivers) can improve quality for existing games. Unreal Engine 4 and Unity have both also added DLSS 2.0 support, which means it's trivial for games based on those engines to enable the feature.
How Does DLSS Work?
Both the original DLSS and DLSS 2.0 work with Nvidia's NGX supercomputer for training of their respective AI networks, as well as RTX cards' Tensor Cores, which are used for AI-based rendering.
For a game to get DLSS 1.0 support, first Nvidia had to train the DLSS AI neural network, a type of AI network called a convolutional autoencoder, with NGX. It started by showing the network thousands of screen captures from the game, each with 64x supersample anti-aliasing. Nvidia also showed the neural network images that didn't use anti-aliasing. The network then compared the shots to learn how to "approximate the quality" of the 64x supersample anti-aliased image using lower quality source frames. The goal was higher image quality without hurting the framerate too much.
The AI network would then repeat this process, tweaking its algorithms along the way so that it could eventually come close to matching the 64x quality with the base quality images via inference. The end result was "anti-aliasing approaching the quality of [64x Super Sampled], whilst avoiding the issues associated with TAA, such as screen-wide blurring, motion-based blur, ghosting and artifacting on transparencies," Nvidia explained in 2018.
DLSS also uses what Nvidia calls "temporal feedback techniques" to ensure sharp detail in the game's images and "improved stability from frame to frame." Temporal feedback is the process of applying motion vectors, which describe the directions objects in the image are moving in across frames, to the native/higher resolution output, so the appearance of the next frame can be estimated in advance.
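The core of that idea is image warping: use the per-pixel motion vectors to fetch, for each pixel of the upcoming frame, where it was in the previous frame. Below is a minimal sketch of such a reprojection step; the function name, the integer motion vectors, and the nearest-pixel border clamping are all simplifying assumptions, not Nvidia's implementation:

```python
import numpy as np

def reproject_previous_frame(prev_frame, motion_vectors):
    """Warp the previous frame along per-pixel motion vectors to
    estimate the appearance of the next frame.

    prev_frame:     (H, W) array of pixel values
    motion_vectors: (H, W, 2) integer array of (dy, dx) offsets,
                    describing how far each pixel moved since the
                    previous frame.
    """
    h, w = prev_frame.shape
    ys, xs = np.indices((h, w))
    # Each current pixel (y, x) came from (y - dy, x - dx) in the
    # previous frame; clamp lookups to the frame borders.
    src_y = np.clip(ys - motion_vectors[..., 0], 0, h - 1)
    src_x = np.clip(xs - motion_vectors[..., 1], 0, w - 1)
    return prev_frame[src_y, src_x]

# A tiny 2x2 frame whose content moves one pixel to the right:
prev = np.array([[1, 2],
                 [3, 4]])
mv = np.zeros((2, 2, 2), dtype=int)
mv[..., 1] = 1  # dx = 1 everywhere
print(reproject_previous_frame(prev, mv))  # [[1 1]
                                           #  [3 3]]
```

A real temporal upscaler would additionally blend this prediction with the newly rendered low-resolution frame and reject samples that the warp invalidated (disocclusions), which is where the AI part comes in.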
DLSS 2.0 (Image credit: Nvidia)
DLSS 2.0 gets its speed boost through its updated AI network that uses Tensor Cores more efficiently, allowing for better framerates and the elimination of limitations on GPUs, settings and resolutions. Team Green also says DLSS 2.0 renders just 25-50% of the pixels (and only 11% of the pixels for DLSS 2.1 Ultra Performance mode), and uses new temporal feedback techniques for even sharper details and better stability over the original DLSS.
Nvidia's NGX supercomputer still has to train the DLSS 2.0 network, which is also a convolutional autoencoder. Two things go into it, as per Nvidia: "low resolution, aliased images rendered by the game engine" and "low resolution, motion vectors from the same images — also generated by the game engine."
DLSS 2.0 uses those motion vectors for temporal feedback, which the convolutional autoencoder (or DLSS 2.0 network) performs by taking "the low resolution current frame and the high resolution previous frame to determine on a pixel-by-pixel basis how to generate a higher quality current frame," as Nvidia puts it.
The training process for the DLSS 2.0 network also includes comparing the image output to an "ultra-high-quality" reference image rendered offline in 16K resolution (15360 x 8640). Differences between the images are sent to the AI network for learning and improvements. Nvidia's supercomputer repeatedly runs this process, on potentially tens of thousands or even millions of reference images over time, yielding a trained AI network that can reliably produce images with satisfactory quality and resolution.
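The loop described here, produce an output, compare it against a higher-quality reference, feed the differences back, and repeat, is ordinary supervised training. As a toy illustration only (a linear model standing in for the convolutional autoencoder, random vectors standing in for game frames and 16K references):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-ins: 200 "frames" of 16 features each, and reference
# outputs produced by an unknown linear map the model must learn.
frames = rng.normal(size=(200, 16))
hidden_map = rng.normal(size=(16,))
references = frames @ hidden_map

weights = np.zeros(16)
learning_rate = 0.1
for _ in range(1000):
    outputs = frames @ weights
    differences = outputs - references  # compare against the references
    # Feed the differences back to adjust the model (gradient descent).
    weights -= learning_rate * frames.T @ differences / len(frames)

# After many iterations the model reproduces the hidden map closely.
assert np.abs(weights - hidden_map).max() < 1e-3
```

The real training differs in every practical detail (a deep convolutional network, perceptual losses, enormous image datasets), but the repeat-until-the-differences-shrink structure is the same.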
With both the original DLSS and DLSS 2.0, once the AI network's training is complete, the NGX supercomputer sends the AI models to Nvidia RTX graphics cards through GeForce Game Ready drivers. From there, your GPU can use its Tensor Cores' AI power to run DLSS in real-time alongside the supported game.
Because DLSS 2.0 is a general approach rather than being trained by a single game, it also means the quality of the DLSS 2.0 algorithm can improve over time without a game needing to include updates from Nvidia. The updates reside in the drivers and can impact all games that utilize DLSS 2.0.
> If you're so lacking in basic reading comprehension, then use the ignore button; I can't spell things out more simply than I already have.

Most of your post is just "what a mess, what a disaster, Nvidia will destroy them", and now I'm lacking in basic reading comprehension and trolling because I pointed out that it's better to stop with this childish, unnecessary argumentation? It's not like you can predict every possible evolution this early and with such scarce tech details available. If I'm not wrong, DLSS 1.0 was considered a massive failure and a disaster not so many years ago.
AMD will be fine, CPU division will carry the day in a poetic reversal of fortune.
Your concern trolling is noted though.
Can anyone tell me why HWU got so excited about FSR here?
Honestly... it looks horrible.
There's a very obvious difference in quality in the side-by-side, which is why they probably didn't do a direct scene comparison and left it as a side-by-side, as it'd look much worse and even more noticeable doing a swipe. DLSS looks substantially better. Very underwhelmed considering how long they've been cooking it. Hopefully this is just like Nvidia's first implementation of DLSS, where the next part of the tree actually does what they intend.
> HWU's entire channel in a nutshell the past year.

I only watch Hardware Unboxed's monitor reviews. Everything else is pretty shit. Actually got my monitor based on their review.
His monitor reviews are amazing, though. But he definitely has a hard-on for AMD GPUs.
> Eh what.

My man, over the last few months Ilien has made it his life’s work to go into every upscaling-related thread and spread random FUD about DLSS to serve his waifu Lisa Su. Just leave him be.
Thank you, stranger, for:
> Eh what.
> random FUD about DLSS

Exactly the same shit was repeated (exactly the same stupid shit keeps popping up from folks who think of technology as some sort of magic), and I grew tired of repeating all the links every time.