Whatever the heck it was, I hope you won't later claim that llien made up the point that DLSS 2 is a TAA derivative.
I think DLSS 1 used 16K ground-truth renders for the AI to learn from.
DLSS 1 was a true AI solution.
They were grabbing high-resolution assets, rendering frames at datacenters, and doing per-game training.
It is actually very logical to do it that way, as the AI tries to "guess" what could be somewhere, and that depends on a given game's visuals.
I have no doubt this approach would work... say, if Google did it that way in their datacenters.
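To make the per-game training idea concrete, here is a toy sketch: supervised learning where the input is a downscaled render and the target is the high-res ground truth. Everything here is made up for illustration (the random "frames", the one-weight-per-phase "network", the learning rate); a real upscaler would be a convolutional network trained on actual game renders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pair(n=8):
    # Toy stand-in for the training data: "ground truth" is a random
    # high-res image (in reality, a 16K offline render); the input is its
    # 2x box-filtered downscale (in reality, the game's low-res frame).
    hi = rng.random((2 * n, 2 * n))
    lo = 0.25 * (hi[0::2, 0::2] + hi[1::2, 0::2]
                 + hi[0::2, 1::2] + hi[1::2, 1::2])
    return lo, hi

PHASES = [(0, 0), (0, 1), (1, 0), (1, 1)]

def upscale(lo, w):
    # The "network": one learned weight per output phase inside each 2x2
    # block -- a deliberately crude stand-in for a convolutional upscaler.
    hi = np.empty((lo.shape[0] * 2, lo.shape[1] * 2))
    for k, (i, j) in enumerate(PHASES):
        hi[i::2, j::2] = w[k] * lo
    return hi

w = np.zeros(4)  # start knowing nothing
lr = 0.05
for step in range(500):
    lo, hi = sample_pair()
    err = upscale(lo, w) - hi
    for k, (i, j) in enumerate(PHASES):
        # SGD on 0.5 * mean squared error for this phase
        w[k] -= lr * np.mean(err[i::2, j::2] * lo)

# On featureless random data, each weight converges toward ~1.0, i.e. the
# best this tiny model can do is nearest-neighbour replication. Training on
# a specific game's visuals is what would let a bigger model do better.
```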
People keep saying how great NV is at AI, while failing to reference a single impressive achievement, and no, rolling out oversized compute crunchers is not one.
Google, on the other hand, is one hell of a juggernaut in that field.
In this case, though, I doubt it was particularly challenging on the AI side; rather, the network is supposed to be fairly small and simple, for at least two reasons:
1) limited resources for it on the GPU itself
2) even long term, it would be idiotic to have the upscaling NN consume more resources than the rendering itself
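Point 2 is easy to put rough numbers on. All figures below are assumptions for illustration, not measurements:

```python
# Back-of-the-envelope frame budget (assumed numbers, not benchmarks):
target_fps = 60
frame_budget_ms = 1000 / target_fps   # ~16.7 ms per frame at 60 fps

native_render_ms = 14.0    # assumed cost of rendering at native res
upscaled_render_ms = 7.0   # assumed cost of rendering at reduced res

# The upscaling pass only pays off while it costs less than the render
# time it saves; past that point you'd be better off rendering natively.
max_useful_upscale_ms = native_render_ms - upscaled_render_ms  # 7.0 ms
```

So even under generous assumptions the NN gets a few milliseconds at most, which rules out anything big.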
We know AI is fairly capable at upscaling (if it helps, you could call it supersampling) from the DirectML demo (which wasn't about supersampling at all).
Anyhow, that per-game approach failed to produce palatable results.
The DirectML demo's supersampling was great, but too computationally expensive.
So, instead, things went "temporal".
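The core of the "temporal" idea can be sketched in a few lines: render each frame with sub-pixel jitter and accumulate samples across frames with an exponential moving average, TAA-style. This toy leaves out motion-vector reprojection and history rejection (which is where DLSS 2's network reportedly earns its keep), and all numbers are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

alpha = 0.1                  # blend weight given to each new frame
truth = rng.random((8, 8))   # the signal we are trying to resolve
history = np.zeros((8, 8))   # accumulated result across frames

for frame in range(200):
    # Each frame contributes a noisy (jittered/undersampled) observation.
    noisy = truth + 0.2 * rng.standard_normal((8, 8))
    # TAA-style exponential accumulation of history with the new sample.
    history = (1 - alpha) * history + alpha * noisy

# After enough frames the history converges toward the true signal, with
# the per-frame noise averaged down -- detail "for free" over time, at the
# cost of ghosting whenever stale history isn't rejected.
```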
And if you think NV has something unique, something nobody else has done, look at what FB researchers did:
Real-time rendering in virtual reality presents a unique set of challenges — chief among them being the need to support...
research.fb.com
Note that they didn't hide any aspect of their work (and the rendering was done on a Titan V).