NVIDIA’s implementation of RTX ray-tracing in Quake II - NVIDIA/Q2RTX
Quake II RTX
Quake II RTX is NVIDIA's attempt at implementing a fully functional version of Id Software's 1997 hit game Quake II with RTX path-traced global illumination.
Quake II RTX builds upon the Q2VKPT branch of the Quake II open source engine. Q2VKPT was created by former NVIDIA intern Christoph Schied, a Ph.D. student at the Karlsruhe Institute of Technology in Germany.
Q2VKPT, in turn, builds upon Q2PRO, which is a modernized version of the Quake II engine. Consequently, many of the settings and console variables that work for Q2PRO also work for Quake II RTX.
About
Quake II RTX is fully ray-traced and includes the three levels from the original shareware distribution.
From looking at the pt_tracer.c code, I can tell it's a student's work (PhD or not). It looks like he took the Vulkan API and went down it in a linear fashion while implementing the path tracer. The code is all written in C. Yuck. It also has memory barriers all over the place, which makes me nervous.
I can say with absolute confidence that the Marble demo is a different beast. I also think it was probably implemented in a more modern way. I believe Nvidia will release the source code for that demo soon.
I might play with writing a path tracer in the coming years... then again, I might not. I'm already on an Unreal Engine project here, and I would rather stay away from programming when I get off work. I see it too much every day.
1) How could you possibly know?
2) Note how now it is "NVidia's" code (that dude just started it)
The key "accusation" about Quake is that most of the picture is produced by clever "fakery" (which is not wrong per se, but as a side effect, you cannot claim the picture was created by path tracing).
Well, you can see that it is path tracing. Primary rays are cast, then intersections are computed against all the various types of shaders/objects in the scene; the results are stored into buffers, and then a filter (the denoiser) runs on the images before compositing.
I have a gut feeling that the Marble demo has way more polish to it. It is using DLSS 2.0, and the materials are way more advanced than in this Quake demo. I would be surprised if the Marble demo isn't DXR and C++11 or higher, and it was probably developed by more experienced engineers (possibly ones who have worked on offline path tracers before).
I could be wrong. But I wouldn't claim Marble = Quake RTX in terms of development pipeline.
VFXVeteran
Why does hardware even matter when assessing the number of rays needed to achieve certain results (in a "true path tracing" scenario)?
Why can't one pick up any of the RT-ing apps, create geometry of interest and see what picture will be rendered by having X rays per pixel?
Interesting. If I believed in any technology, I wouldn't have a long history of trolling the same thing. I hope I don't have to bring up more of your history of trying to refute PC tech. Please get on topic, or get ignored. I'm completely down for a conversation or debate, just not an immature one.
It all comes down to numerical analysis. We can't approximate real-world light transport exactly on a discrete screen with finite pixels, so Monte Carlo sampling is used to approximate it well enough. The problem is that you need a significant number of samples to converge on a solution. Just spamming random rays into the scene gets you somewhat close, but it is too costly. That's where importance sampling comes in: an entire branch of rendering research deals with path tracing and with using importance sampling to converge on a solution much faster.
Why can't one pick up any of the RT-ing apps, create geometry of interest and see what picture will be rendered by having X rays per pixel?
Get Maya with Arnold and pick up the book Physically Based Rendering. I spent years reading that book, learning Arnold, having meetings with their software team, and putting it into full production on more than three movies.