
AMD FidelityFX Super Resolution may launch in spring

MonarchJT

Banned
Finally we'll see whether AMD's solution lives up to the hype and how it can compete with DLSS.
It would also be interesting to know whether the consoles will use it, and whether the Xbox Series X|S will be able to take advantage of the modifications made to the GPU's CUs to support machine learning.

 

assurdum

Member
Finally we'll see whether AMD's solution lives up to the hype and how it can compete with DLSS.
It would also be interesting to know whether the consoles will use it, and whether the Xbox Series X|S will be able to take advantage of the modifications made to the GPU's CUs to support machine learning.

What modification?
 

MonarchJT

Banned
What modification?


 
Last edited:

KungFucius

Member
I doubt I will be using it. The main issue with these implementations, including DLSS, is that they are game specific. For them to be truly useful, they need to be universal.
If both major GPU vendors have something similar, it will be more likely that most games support both. Of course they will moneyhat some games and probably bundle them too, but overall it will be supported in more games.
 

MonarchJT

Banned
I doubt I will be using it. The main issue with these implementations, including DLSS, is that they are game specific. For them to be truly useful, they need to be universal.
If there are any evident performance increases, it would be idiocy not to use it. Machine learning is clearly the future, and I hope AMD hurries to release something that knocks down the computing power needed to play at 4K 60 fps.
 

martino

Member
If both major GPU vendors have something similar, it will be more likely that most games support both. Of course they will moneyhat some games and probably bundle them too, but overall it will be supported in more games.
He missed the news where AMD says it wants it to work on RDNA2 consoles.
If AMD delivers (a good solution also on console), DLSS will keep being used as often as it currently is (read: few games, with partnerships) and AMD's solution will become the common one.
 

MonarchJT

Banned
If one console does better at this than the other and has good results it really could be huge.
First we need to see how it performs and the graphical results obtained. The first version of DLSS wasn't great, but I think AMD started development with 2.0 as the target, even though what is missing is the number of tensor cores available in NVIDIA GPUs to accelerate it. Having said that, using a part of the CUs, and therefore giving up some TFs of power to dedicate them to machine learning, would lead to exponential advantages. And as you say, if a console performed better at ML upscaling it would be huge: it would leave tons of resources for effects, assets, and frames per second. As I have already posted above, during Hot Chips 2020 Microsoft revealed that it modified the CUs of the Xbox precisely to accelerate machine learning. We will see.
 
Last edited:
First we need to see how it performs and the graphical results obtained. The first version of DLSS wasn't great, but I think AMD started development with 2.0 as the target, even though what is missing is the number of tensor cores available in NVIDIA GPUs to accelerate it. Having said that, using a part of the CUs, and therefore giving up some TFs of power to dedicate them to machine learning, would lead to exponential advantages. And as you say, if a console performed better at ML upscaling it would be huge: it would leave tons of resources for effects, assets, and frames per second. As I have already posted above, during Hot Chips 2020 Microsoft revealed that it modified the CUs of the Xbox precisely to accelerate machine learning. We will see.
I'm PC only currently and won't be upgrading my GPU for at least 2 years. I just love drama, and if there is an actual difference between the two, with decent results, you know it's gonna lead to a whole bunch of drama.
 

MonarchJT

Banned
Maybe I'm blind, but I don't see absolutely any mention of a hardware modification in those panels.
Hot Chips 2020, during the presentation of the Series X SoC ... in the slides: "ML (machine learning) inference acceleration", where they clearly point to image upscaling. In the second slide, under "Hardware SoC innovation", it is clearly written: "machine learning acceleration".
 
Last edited:

M1chl

Currently Gif and Meme Champion
Maybe I'm blind, but I don't see absolutely any mention of a hardware modification in those panels.
"Very small area cost, 3-10x improvement" sounds like something for ML.

Not saying it's specific to the XSX/XSS though...
 

Clear

Member
Seems like something innate to RDNA2. I find it doubtful that Sony would have missed out on incorporating comparable tech, given that the PS4 Pro was at the forefront of hardware support for reconstruction with its CBR modifications.

It's obviously going to be necessary for 8K, so I suspect support will be linked to that.
 
If both major GPU vendors have something similar, it will be more likely that most games support both. Of course they will moneyhat some games and probably bundle them too, but overall it will be supported in more games.
I'm fairly sure that the systems need to be trained separately.

Hopefully there is a generic AI scaling trainer released one day.
 

MonarchJT

Banned
But it's kind of weird why it's not ready, or why they don't say "it's coming...".
It's one thing to have the hardware for acceleration; it's another to have the software that manages the acceleration and, above all, exploits it (that would be up to the devs). As far as I have read, the specificity of the Series X SoC lies in the collaboration with the Azure team: they have implemented INT4, INT8 and FP8. To be clear, the old PS4 Pro supported up to FP16, and we know how many interviews Mark Cerny gave about it, but unfortunately machine learning was not yet so developed, nor had Sony developed APIs such as DirectML. As for the PS5, we don't even know if it is still, like the PS4 Pro, even compatible with FP16.
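The low-precision integer formats mentioned above are the core of the hardware story: quantizing network weights to INT8 lets the same silicon push several times more ML operations per clock. As a minimal illustration of the general idea (a generic symmetric-quantization sketch, not Microsoft's or AMD's actual pipeline):

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization: map the largest magnitude
    to 127 and round everything else onto the int8 grid."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 grid."""
    return q.astype(np.float32) * scale

w = np.array([0.02, -1.27, 0.5, 0.9], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)  # close to w, within one quantization step
```

The trade the consoles are making is exactly this: a small loss of numeric precision in exchange for much cheaper inference math.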
 
Last edited:

Rikkori

Member
Don't underestimate Radeon Boost 2.0 either, based on VRS. Performance is gonna skyrocket, and it's applicable to consoles too! And hell, EVEN handhelds next year (Rembrandt APUs)!

Let's just see if they also make an effort to contact devs to get it into some older popular games, or some demanding raytracing titles.
 

MonarchJT

Banned
Seems like something innate to RDNA2. I find it doubtful that Sony would have missed out on incorporating comparable tech, given that the PS4 Pro was at the forefront of hardware support for reconstruction with its CBR modifications.

It's obviously going to be necessary for 8K, so I suspect support will be linked to that.
We clearly don't know, but given the rumors and the countless interviews in which Sony advertised the PS4 Pro's FP16 support when machine learning was not yet "famous", I find it difficult to believe that, if the PS5 supported well beyond FP16, they wouldn't say a word about it. However, they can always use it via software, obviously losing considerable performance.
 
Last edited:

M1chl

Currently Gif and Meme Champion
It's one thing to have the hardware for acceleration; it's another to have the software that manages the acceleration and, above all, exploits it (that would be up to the devs). As far as I have read, the specificity of the Series X SoC lies in the collaboration with the Azure team: they have implemented INT4, INT8 and FP8. To be clear, the old PS4 Pro supported up to FP16, and we know how many interviews Mark Cerny gave about it, but unfortunately machine learning was not yet so developed, nor had Sony developed APIs such as DirectML. As for the PS5, we don't even know if it is still, like the PS4, even compatible with FP16.
All GPUs probably support FP16; it's just that the PS4 Pro had "rapid packed math", where 2xFP16 can be packed into 1xFP32 register, which saves cycles. FP16 is so-called "half-precision"; GPUs were FP16 before they were FP32...
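For illustration, the packing can be sketched on the CPU with NumPy: two FP16 values occupy exactly one 32-bit word, which is why a 2xFP16 operation can ride through an FP32-wide register (a toy demonstration of the storage layout, not actual GPU code):

```python
import numpy as np

# Two half-precision values: together they are 4 bytes,
# the same size as a single FP32 register.
halves = np.array([1.5, -2.25], dtype=np.float16)
packed = halves.view(np.uint32)[0]   # one 32-bit word holding both

# Unpacking is just reinterpreting the same bits again.
unpacked = np.array([packed], dtype=np.uint32).view(np.float16)
assert (unpacked == halves).all()
```

The hardware win comes from executing both halves of the word in one cycle rather than two.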

I believe Sony had some patent for an ML solution, if I am not wrong? I believe whatever console does something akin to DLSS, even if of bad quality (imprecise), is going to lead every benchmark. Picture quality is going to suffer, but then again you can have multiple modes.
 

MonarchJT

Banned
All GPUs probably support FP16; it's just that the PS4 Pro had "rapid packed math", where 2xFP16 can be packed into 1xFP32 register, which saves cycles. FP16 is so-called "half-precision"; GPUs were FP16 before they were FP32...

I believe Sony had some patent for an ML solution, if I am not wrong? I believe whatever console does something akin to DLSS, even if of bad quality (imprecise), is going to lead every benchmark. Picture quality is going to suffer, but then again you can have multiple modes.
Rapid packed math, yes... and that was the acceleration, in fact. Do they already have an API? I'm curious to read about it; do you have any links, kindly?
 
Last edited:

M1chl

Currently Gif and Meme Champion
Rapid packed math, yes... and that was the acceleration, in fact. Do they already have an API? I'm curious to read about it; do you have any links, kindly?
No, it's speculation; besides, those APIs are not publicly available, so I am not sure how far along they are.
 
As for the PS5, we don't even know if it is still, like the PS4, even compatible with FP16.
I have no source for that, but given Sony's approach to backward compatibility they would have made sure that the feature was not removed from the newer architecture, where it's built-in anyway.
It's one thing to have the hardware for acceleration; it's another to have the software that manages the acceleration and, above all, exploits it (that would be up to the devs).
100% true, but sometimes the APIs can be updated to use the new hardware features automatically. Sony did this by moving some calculations from the CPU to the GPU in some of their PS4 libraries, and Apple does this when they change CPU architecture (68k -> PPC -> x86 -> ARM): when code on the new CPU calls a system library, it gets the native version instead of an emulated one (which is sometimes hardware accelerated).

Now, I have seen no such information about this kind of automatic use of AI features, and it seems unlikely, since the networks need to be trained... MS seems to push VRS quite a bit, so maybe that is easy to just switch on.
 

llien

Member
Last edited:

MonarchJT

Banned
Unfortunately I can't find anything about Sony, the PS5, or whether they have an API for ML (if anyone finds something, please post it), but here is an interesting video (for those who have never seen it). I think MS co-developed the super resolution tech to then release it inside their own DX12U, and collaborated with AMD to make it compatible with all RDNA2 GPUs in any case. From an article:

"While AMD clearly announced the feature as a part of FidelityFX technology, Tom Warren claims that this technology will be open and cross-platform. It is hard to imagine NVIDIA supporting FidelityFX supersampling, so we assume that this technology will be part of Microsoft DirectML technology.

XBOX Series X/S also features ML inference acceleration. The AI ‘tensor’ cores require a very small area of the die, while they can provide a 3x-10x performance improvement, a slide from Microsoft claimed.

DirectML super-resolution is a Microsoft technology that was demonstrated back in 2019 during the Game Developers Conference. It can provide a higher framerate and lower latency compared to TensorFlow, which was not designed for real-time super-resolution."

 
Last edited:

Clear

Member
We clearly don't know, but given the rumors and the countless interviews in which Sony advertised the PS4 Pro's FP16 support when machine learning was not yet "famous", I find it difficult to believe that, if the PS5 supported well beyond FP16, they wouldn't say a word about it. However, they can always use it via software, obviously losing considerable performance.

It was more than FP16; it was stuff like the G-buffer implementation and support for tracking motion vectors. Cerny detailed a bunch of features specifically to help a 4 TF GPU handle 4K output better.
 

Ascend

Member
Huh? So if a game you're playing supports that feature you'll not use it because some other game doesn't have it?
Lol I didn't mean it like that.

The main reason I say that I don't think I will be using it is that I play mostly older games that won't support it. That's where the argument comes from: if it's not universal, then it's of no use to me. I still have The Witcher 3 in my backlog, just to give you an idea. The newest game I bought is, I think, Pokemon Sword, and on PC the newest game I have purchased is Soul Calibur VI.

He missed the news where AMD says it wants it to work on RDNA2 consoles.
If AMD delivers (a good solution also on console), DLSS will keep being used as often as it currently is (read: few games, with partnerships) and AMD's solution will become the common one.
If AMD gets it working on consoles, their method is more likely to become prevalent, yes. Then again, upscaling on consoles isn't new at all, so I don't know what the fuss is all about. Just because it's ML doesn't necessarily make it better than what we had before, especially in the early implementations.
 

Greeno

Member
They also mention it in this article:


Where they say "we have gone even further introducing additional next-generation innovation such as hardware accelerated Machine Learning capabilities for better NPC intelligence, more lifelike animation, and improved visual quality via techniques such as ML powered super resolution".

It seems that they are suggesting it is not included in the standard RDNA2 feature set.
 
Last edited:

llien

Member
I like how you call a synthetically trained machine learning network a "generic filter".
I love how "synthetically trained machine" is some kind of magic... :messenger_beaming:

I have shared a TAA upscaling project done by a handful of Facebook engineers.




There is nothing "magical" about it, not even remotely.

Just how miserably DLSS 1.0 failed, with much more impressive tech behind it, tells you what kind of "uber" engineers there are at NV.
Nobody argues about them being good at optimizing shit for NV hardware; anywhere else, though, I need to see receipts, cough.
 
Last edited:
All GPUs probably support FP16; it's just that the PS4 Pro had "rapid packed math", where 2xFP16 can be packed into 1xFP32 register, which saves cycles. FP16 is so-called "half-precision"; GPUs were FP16 before they were FP32...

I believe Sony had some patent for an ML solution, if I am not wrong? I believe whatever console does something akin to DLSS, even if of bad quality (imprecise), is going to lead every benchmark. Picture quality is going to suffer, but then again you can have multiple modes.

Unfortunately I can't find anything about Sony, the PS5, or whether they have an API for ML (if anyone finds something, please post it), but here is an interesting video (for those who have never seen it). I think MS co-developed the super resolution tech to then release it inside their own DX12U, and collaborated with AMD to make it compatible with all RDNA2 GPUs in any case.

Sony doesn't need AMD to put their own solution in their own API. If the PS4 Pro had a CB reconstruction technique, why wouldn't that be in the PS5? Anyway:

 

M1chl

Currently Gif and Meme Champion
No, they do not have the same thing as MS; also, CB is very different from something like DLSS 2.0.

Sure, CB is different from DLSS. I'm just saying: why wouldn't Sony have something of their own like CB? But nevertheless, the patent shows that they have a different solution.
 

Foorbits

Member
Since ML is built into the Series X|S at the hardware level, it'll be interesting to see if/when they decide to pull that card out. The Series S putting out 4K on the level of the PS5 and Series X would be crazy to watch.
 

Rikkori

Member
1.0 was game specific, 2.0 is a generic filter that needs motion vectors to work (TAA, cough).



All AMD needs is thorough use of the buzzwords that gamers want to hear.
Exactly. What the AI part does now is the 'fit' of the data: in theory you don't even need tensor cores to see that benefit, you just need the right model for clamping, which is what's usually done manually for TAA reconstruction already. In practice it doesn't work so well because not all studios are created equal, but we can see that Massive (Ubisoft) can do this as well as or better than DLSS 2.0 with The Division 2's TAA. Not to mention, DLSS doesn't always work out so well either, being unable to reconstruct raytraced reflections in many cases (CP2077, WD:L).
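The "clamping" in question is typically neighborhood clamping: the reprojected history color gets clipped to the min/max of the current frame's local neighborhood, which rejects stale history with no neural network involved. A grayscale NumPy sketch of the idea (a simplified illustration; real TAA implementations work per-channel in YCoCg and use tighter variance clipping):

```python
import numpy as np

def clamp_history(current, history):
    """Clamp each history pixel to the min/max of the 3x3
    neighborhood around the matching current-frame pixel."""
    h, w = current.shape
    padded = np.pad(current, 1, mode='edge')
    neigh = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    lo, hi = neigh.min(axis=0), neigh.max(axis=0)
    return np.clip(history, lo, hi)

def taa_resolve(current, history, alpha=0.1):
    """Blend clamped history with the current frame, as TAA does."""
    return alpha * current + (1 - alpha) * clamp_history(current, history)

# Stale history (all 5s) over a black current frame gets clamped away.
cur = np.zeros((3, 3), dtype=np.float32)
hist = np.full((3, 3), 5.0, dtype=np.float32)
resolved = taa_resolve(cur, hist)
```

The ML part of DLSS 2.0 essentially learns a better version of this accept/reject decision instead of hand-tuning it.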

Where Nvidia wins, as always, is marketing. I said it even from day 1, when they had shitty DLSS 1.0: they win, because people are stupid and want to self-deceive anyway, so if you just tell them the AI is that good, they'll believe it.
 

Schmick

Member
Since ML is built into the Series X|S at the hardware level, it'll be interesting to see if/when they decide to pull that card out. The Series S putting out 4K on the level of the PS5 and Series X would be crazy to watch.
So ML is a thing and will definitely happen on XSX/XSS and PS5? Kinda makes all these comparison threads pointless, don't you think? And what's the point of a 3080, or even a 3070, if with DLSS a 3060 will happily play a game at 4K 60fps? I sure am excited about all this.
 
So FidelityFX Super Resolution didn't come in time for the Navi 21 reviews, but at this rate maybe it'll arrive in time for actual availability of those cards.
I wonder if this also matches up with the release of the Radeon 6700/XT Navi 22 models.

Something that competes with DLSS2 for rendering at lower resolution and then upsampling should be especially good for the Navi 2x cards with Infinity Cache.
Lower render resolution means more hits on the Infinity Cache, which then increases the GPU's effective memory bandwidth.

If FFX-SuperRes is effective at e.g. 1440p -> 4K upscaling, then according to AMD the hit rate on the 128MB Infinity Cache would go from ~55% to ~75%:




So using AMD's own numbers, the effective memory bandwidth would be 2TB/s from Inf.Cache * 75% + 512GB/s from GDDR6 * 25%, resulting in 1664GB/s.


To summarize: AMD has a lot more to gain from reducing rendering resolution from 4K down to 1440p than Nvidia does, as that way they get a sizeable boost in effective memory bandwidth.
Which is why we so often see the Navi 21 cards taking substantial leads over the GA102 cards at 1440p. At "pure 4K" the Navi 21 cards are probably bandwidth-limited.
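The arithmetic above is just a hit-rate weighted average of the two bandwidths. As a quick sanity check (using 2 TB/s = 2048 GB/s, which is what the 1664 GB/s figure implies):

```python
def effective_bandwidth(cache_gbps: float, vram_gbps: float, hit_rate: float) -> float:
    """Hit-rate weighted average of cache and VRAM bandwidth."""
    return cache_gbps * hit_rate + vram_gbps * (1 - hit_rate)

# AMD's figures: 2048 GB/s Infinity Cache, 512 GB/s GDDR6
print(effective_bandwidth(2048, 512, 0.75))  # 1664.0 GB/s at the upscaled-4K hit rate
print(effective_bandwidth(2048, 512, 0.55))  # 1356.8 GB/s at the native-4K hit rate
```

So the jump from a ~55% to a ~75% hit rate alone is worth roughly 300 GB/s of effective bandwidth, on top of the reduced work from rendering fewer pixels.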
 

Foorbits

Member
So ML is a thing and will definitely happen on XSX/XSS and PS5? Kinda makes all these comparison threads pointless, don't you think? And what's the point of a 3080, or even a 3070, if with DLSS a 3060 will happily play a game at 4K 60fps? I sure am excited about all this.
The difference is, and I'm sure someone will gladly correct me if I'm wrong, that the Series S|X have a built-in hardware solution for ML whereas Sony doesn't. This could be a factor in the next few years of Digital Foundry console warring.

For normal people, comparison threads have always been pointless. If you have to compare screens side by side and view videos in meticulous detail to see how they differ, then it really doesn't matter which one you play.
 

wachie

Member
I don't have high hopes (just like with their RT); the first gen may be rough.

Hopefully they take it seriously. DLSS 2.0 (where available) is actually quite good.
 