
Intel talks up DLSS-like method that includes AI scaling for tessellation, more

LordOfChaos

Member
Mar 31, 2014
12,156
7,762
985


Sounds interesting — it looks like they're applying DLSS-style, performance-saving deep learning to more areas of the graphics pipeline.

Recently a new Intel patent was published that shows a series of new inventions to implement a complete AI GPGPU solution which, unlike Nvidia's DLSS, is totally independent of the traditional GPU pipeline. This programmable graphics neural network pipeline includes several neural network hardware blocks, such as AI tessellation, AI texture generation, an AI scheduler, an AI memory optimizer, and an AI visibility processor.

The AI-based tessellation mechanism proposed in the Intel patent can perform higher-order geometry and tessellation inference using a neural network trained on a dataset of pre- and post-tessellation vertices. This can replace the entire tessellation logic within a graphics pipeline: the output of the AI tessellation logic and the neural network pipeline flows through the remaining programmable and fixed-function portions of the graphics pipeline. The proposed mechanism can also use a pre-trained neural network to simulate tessellation of a 3D scene at the pixel level, reproducing the look of a tessellated image without performing tessellation at the vertex or geometry level.
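To make the "trained on pre- and post-tessellation vertices" idea concrete, here's a minimal toy sketch (my own illustration, not from the patent): a single linear layer learns 1-to-4 triangle subdivision — predicting the three edge midpoints from the three input vertices — purely from example pairs. Midpoint subdivision happens to be a linear map, so even this tiny model can learn it from data; the patent's networks would of course target far richer, non-linear tessellation patterns.

```python
import numpy as np

rng = np.random.default_rng(0)

def midpoints(tri):
    # "Post-tessellation" target: the three edge midpoints of a triangle,
    # i.e. the new vertices a 1-to-4 subdivision would introduce.
    a, b, c = tri
    return np.concatenate([(a + b) / 2, (b + c) / 2, (c + a) / 2])

# Training set: random triangles (pre-tessellation vertices, flattened)
# paired with their subdivision vertices (post-tessellation).
X = rng.normal(size=(512, 6))  # 512 triangles, 3 vertices x 2D
Y = np.stack([midpoints(x.reshape(3, 2)) for x in X])

# One linear layer trained by gradient descent on mean-squared error.
W = rng.normal(scale=0.1, size=(6, 6))
for _ in range(2000):
    grad = X.T @ (X @ W - Y) / len(X)
    W -= 0.1 * grad

# Inference on an unseen triangle: the model "tessellates" it.
test_tri = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
pred = (test_tri.reshape(-1) @ W).reshape(3, 2)
print(np.round(pred, 2))  # close to the true midpoints (1,0), (1,1), (0,1)
```

The interesting part is that nothing in the model hard-codes the subdivision rule — it's recovered entirely from the vertex pairs, which is the same framing the patent describes at much larger scale.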




To complement this stunning new AI GPGPU architecture, Intel also recently published a patent revealing its new heterogeneous tensor core architecture, which enables matrix compute granularities to be changed on-the-fly and in real-time across a heterogeneous set of hardware resources. The method detailed in Intel's patent describes a normalized tile execution solution that enhances performance by taking runtime conditions into account when generating the partition configuration; leveraging knowledge about expected power consumption may enable the mapping of tensor operations to more power-efficient core pools.
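The "map tensor operations to more power-efficient core pools" idea can be sketched as a simple scheduling policy. The pool names, throughput, and power numbers below are entirely made up for illustration — the patent doesn't specify a concrete algorithm — but the shape of the decision is: among pools fast enough to meet a latency budget, pick the one that spends the least energy on the op.

```python
from dataclasses import dataclass

@dataclass
class CorePool:
    name: str
    tflops: float  # sustained throughput (TFLOPs/s), hypothetical
    watts: float   # power draw while active, hypothetical

def pick_pool(pools, op_tflop, latency_budget_s):
    """Greedy sketch: among pools that can finish the op within the latency
    budget, choose the one with the lowest energy (power x runtime)."""
    feasible = [p for p in pools if op_tflop / p.tflops <= latency_budget_s]
    if not feasible:
        # Nothing meets the budget: fall back to the fastest pool.
        return max(pools, key=lambda p: p.tflops)
    return min(feasible, key=lambda p: p.watts * (op_tflop / p.tflops))

pools = [CorePool("big", tflops=20.0, watts=60.0),
         CorePool("efficient", tflops=5.0, watts=10.0)]

# A 1-TFLOP op with a relaxed 0.5 s budget: the efficient pool costs 2 J
# (10 W x 0.2 s) versus 3 J on the big pool (60 W x 0.05 s).
chosen = pick_pool(pools, op_tflop=1.0, latency_budget_s=0.5)
print(chosen.name)
```

Tighten the budget below 0.2 s and the same policy falls back to the big pool — which is exactly the runtime-condition-aware behavior the patent's partition configuration seems to be after.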

Looks like they have Tensor cores too.



Would be an interesting advantage for sure if it works well.
 

LordOfChaos

Member
Mar 31, 2014
12,156
7,762
985
Right now, dedicated Intel Xe cards still feel like vaporware

I mean, it hasn't hit retail yet — the release date is in the future. But it definitely doesn't meet the "only a concept" criterion: the Xe architecture is already shipping in Tiger Lake, and the retail cards' version of the architecture is nearly done.