
Crytek demos realtime raytracing with Cryengine running on AMD's Vega56

Edit: Seems like this post came out wrong. This is not meant as a negative post, I'm just trying to add some context and explain why Crytek is able to achieve this on normal hardware.

It's a tech demo, just like the RTX Star Wars demo; neither is really possible in-game, even less so without raytracing hardware.

 
Last edited:
It's a tech demo, just like the RTX Star Wars demo; neither is really possible in-game, even less so without raytracing hardware.


They are possible without dedicated hardware; you just need theoretically optimal algorithms. Those aren't easy to come by, so until then you have to optimize the algorithms you have, but a good enough programmer can pull it off.
 

scalman

Member
Because it was never an RTX-only thing. The Metro devs said themselves it could be done on other GPUs and on the new consoles, no problem.
It was just an Nvidia thing to get people to buy their new GPUs.
 
Last edited:

DeepEnigma

Gold Member
From my understanding, dedicated hardware was never 'necessary.' It just makes it more efficient.

And it makes things reliant on (their) hardware, just like many other Nvidia-exclusive features they have pushed over the years: PhysX, Hairworks, GameWorks, etc.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
I could've sworn I heard Alex from Digital Foundry say Metro on Ultra DXR is 1 ray per pixel.
Very unlikely, even with fancy reconstruction algos. In order for a pixel to get (1) a shadow from a single light source, and (2) reflections, you need at the very least one shadow ray and one reflective ray.
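To put rough numbers on that point, here is a back-of-the-envelope sketch in Python; the resolution and frame rate are illustrative figures, not taken from Metro or the Crytek demo:

# Rough ray-budget arithmetic: one shadow ray plus one reflection ray is
# already 2 rays per pixel, so a "1 ray per pixel" budget cannot cover both
# effects in the same frame without dropping or alternating one of them.
WIDTH, HEIGHT, FPS = 1920, 1080, 60   # illustrative figures only

def rays_per_second(rays_per_pixel):
    return WIDTH * HEIGHT * FPS * rays_per_pixel

print(f"1 rpp budget:                {rays_per_second(1):,} rays/s")
print(f"shadow + reflection (2 rpp): {rays_per_second(2):,} rays/s")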
 
Last edited:

MetalRain

Member
It's great that game engines are making raytracing viable for a wider audience and for both brands of GPU. But I wonder whether RTX hardware could make this run faster, or is it too focused on a specific workload?
 
Last edited:
Very unlikely, even with fancy reconstruction algos. In order for a pixel to get (1) a shadow from a single light source, and (2) reflections, you need at the very least one shadow ray and one reflective ray.
Metro doesn't use it for reflections though, just global illumination - so shadows and light bounce.
 

McHuj

Member
I'm waiting for the lighting in games to look like this. With a Vega 56 used here, hopefully it's a hint at what we can expect from next gen.
 
I'm waiting for the lighting in games to look like this. With a Vega 56 used here, hopefully it's a hint at what we can expect from next gen.

I think the first Navi GPUs are supposed to be GTX 1080 level of performance, which would make them faster than a Vega 56 by 10% or more depending on the game. So hopefully the Navi cores in the PS5 APU hit that level of performance. The Navi dGPUs will have a significant clock speed advantage, but the Navi in the console will be customized to make up for it (more compute units, more ROPs, etc.).
 
Last edited:

dark10x

Digital Foundry pixel pusher
This is really impressive and also fascinating as there are some interesting artefacts (image trails) on dynamic objects being reflected. Very cool.
 

mortal

Gold Member
We are looking at the thing that will stop the push for 60fps on next gen consoles.

Raytracing can stay as a demo, 60fps is more important.
Don't be so cut and dried. There will certainly be developers that want to push visual fidelity, with higher-poly models, high-res textures, more realistic shaders, more detailed animations, and realtime raytracing, with an aim for 30fps/unlocked framerates at 4K.
There will also be plenty of developers, 1st party and 3rd party, that will opt for better performance, which will still come with a noticeable improvement in presentation. New games running at 60fps at 4K will definitely become more common as well.

There's also what developers could be making for the next iterations of VR, which is pushing technology that will benefit gaming in general.
 
Last edited:

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Metro doesn't use it for reflections though just global illumination - so shadows and light bounce.
If it's just shadows from a single source then 1 ray/pixel could do. Add GI bounces and you'd be waiting for a while until that image converged with 1 ray/pixel/frame. So again, chances for 1 ray/pixel/frame are very slim.
 
Last edited:

Grinchy

Banned
That looked more impressive than some of the demos shown for the RTX cards. I don't really understand how they can make it happen with regular GPUs this well, but this makes me even more pumped for the next generation of consoles and then GPUs that come out a couple years later.
 
Last edited:

thelastword

Banned
This is the most impressive raytracing I've seen so far this gen... Amazing stuff by Crytek and the open standard.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
The programmer on Metro said it's "up to one ray per pixel on ultra", and half the rays on high settings.

https://www.eurogamer.net/articles/digitalfoundry-2019-metro-exodus-tech-interview
Ah, thanks for the link.
The problem is that when you try to bring your number of samples right down, sometimes to one or less per pixel, you can really see the noise. So that is why we have a denoising TAA. Any individual frame will look very noisy, but when you accumulate information over a few frames and denoise as you go then you can build up the coverage you require.
So basically they rely on a very aggressive TAA to handle extreme under-sampling. Even though in such situations the AA would make sure no noise is visible, sample starvation at sharp camera movements/turn-arounds should manifest as spots and areas changing luma abruptly over a few frames.

That said, we don't have such a rigid rpp limitation with our in-house RTX tech, but we still rely on TAA, as we don't do hybrid.

ed: BTW, in the video accompanying the article they mention they do 3 rpp, and the undersampling effects I'm referring to are still visible.
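A toy illustration of that accumulation idea, in Python/NumPy; the exponential history blend below is a generic stand-in for a denoising TAA, not 4A's actual filter:

# Each frame contributes one noisy ~1 spp estimate for a pixel, and blending
# it into a running history converges over a few frames -- until the history
# is invalidated (e.g. a sharp camera turn) and the noise briefly returns.
import numpy as np

rng = np.random.default_rng(0)
true_radiance = 0.5          # ground-truth value for one pixel
alpha = 0.15                 # weight of the new sample in the blend
history = None

for frame in range(1, 41):
    sample = true_radiance + rng.normal(0.0, 0.3)   # noisy per-frame estimate
    if history is None or frame == 21:              # frame 21: "camera cut",
        history = sample                            # history thrown away
    else:
        history = (1.0 - alpha) * history + alpha * sample
    if frame in (1, 10, 20, 21, 30, 40):
        print(f"frame {frame:2d}: accumulated={history:.3f} "
              f"error={abs(history - true_radiance):.3f}")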
 
Last edited:
Ah, thanks for the link.


So basically they rely on a very aggressive TAA to handle extreme under-sampling. Even though in such situations the AA would make sure no noise is visible, sample starvation at sharp camera movements/turn-arounds should manifest as spots and areas changing luma abruptly over a few frames.

That said, we don't have such a rigid rpp limitation with our in-house RTX tech, but we still rely on TAA, as we don't do hybrid.

ed: BTW, in the video accompanying the article they mention they do 3 rpp, and the undersampling effects I'm referring to are still visible.
Well, that's confusing. Is it 1 rpp or 3?

I noticed the noise in Alex's video too, btw, even at Ultra.
 
Last edited:

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Well, that's confusing. Is it 1 rpp or 3?

I noticed the noise in Alex's video too, btw, even at Ultra.
There's a chance they do 1/sub-1 rpp at 4K (checkered) and 3 rpp at HD. Also, as I mentioned, RTX stack support evolved with time (along with devs' understanding of the tech), and many devs managed to squeeze out more rpp as a result.
 
One thing to keep in mind is that Crytek's implementation is supposedly based on SVOGI (Sparse Voxel Octree Global Illumination), which is a lot cheaper than the per-pixel ray-based raytracing we see with NVIDIA's RTX. That would explain how they got this running on a Vega 56 without dedicated hardware.

Realistically speaking, the examples we saw in Battlefield 5 showed that Nvidia's solution shouldn't be used without the assistance of older techniques either. In many areas Battlefield 5 had so many temporal artifacts that it looked much worse than what we have now, without even taking the drop in performance into account (in my opinion, obviously); the same goes for the much simpler ray-traced Quake 2.

Metro seems like the only good-enough implementation in a game so far. I don't think they use it for the whole screen or for all effects (only lighting, if I remember correctly), and I'm still wondering if they gimped their implementation for those without RTX, given that Nvidia wants to sell those games...

I'm in way over my head on this, but the reflections in the demo seemed to show fewer temporal artifacts than what I have seen in RTX Battlefield 5.
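On the SVOGI point above, here is a rough sketch in Python/NumPy of generic voxel cone tracing; it is a toy, not Crytek's actual data structures or code, but it shows why marching a few cones through a prefiltered voxel grid is so much cheaper than firing per-pixel rays at triangle geometry: each cone needs only a handful of lookups, sampling coarser mip levels as it widens.

# Toy 1-channel occupancy volume with mip levels; accumulate occlusion
# along a single cone, front to back.
import numpy as np

def build_mips(volume):
    """Prefilter: average 2x2x2 blocks repeatedly, like 3D mip levels."""
    mips = [volume]
    while mips[-1].shape[0] > 1:
        v = mips[-1]
        mips.append(v.reshape(v.shape[0] // 2, 2, v.shape[1] // 2, 2,
                              v.shape[2] // 2, 2).mean(axis=(1, 3, 5)))
    return mips

def cone_trace(mips, origin, direction, aperture=0.5, steps=8):
    """Accumulate occlusion along one cone; wider cone -> coarser mip."""
    occlusion, t = 0.0, 1.0
    for _ in range(steps):
        radius = aperture * t
        level = min(int(np.log2(max(radius, 1.0))), len(mips) - 1)
        pos = (origin + direction * t).astype(int)
        grid = mips[level]
        idx = np.clip(pos >> level, 0, grid.shape[0] - 1)
        occlusion += (1.0 - occlusion) * grid[tuple(idx)]
        t += max(radius, 1.0)          # step size grows with the cone
    return occlusion

# 32^3 occupancy volume with a blocker region somewhere ahead of the origin.
vol = np.zeros((32, 32, 32), dtype=np.float32)
vol[20:28, 10:20, 10:20] = 1.0
mips = build_mips(vol)
o = np.array([4.0, 15.0, 15.0])
d = np.array([1.0, 0.0, 0.0])
print("occlusion along +X cone:", round(float(cone_trace(mips, o, d)), 3))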
 

SonGoku

Member
60fps as standard
lol, give it up already, every gen is the same.
There's no such thing as a 60 fps standard!
One thing to keep in mind is that Crytek's implementation is supposedly based on SVOGI (Sparse Voxel Octree Global Illumination), which is a lot cheaper than the per-pixel ray-based raytracing we see with NVIDIA's RTX.
Hey! That's what I thought too! I expect more of these types of GI techniques to take off next gen, not raytracing.
So this is a rasterization technique? Can somebody who knows explain?
 
Last edited:

Ascend

Member
lol, give it up already, every gen is the same.
There's no such thing as a 60 fps standard!

Hey! That's what I thought too! I expect more of these types of GI techniques to take off next gen, not raytracing.
So this is a rasterization technique? Can somebody who knows explain?
If you want some reading material...



This is from 3 years ago:
 

Ascend

Member
So more of a raster technique?
I'm not sure exactly what they did in the recent demo. But basically, whether it's cone tracing or voxel tracing, it is a form of ray tracing; you can see it as the equivalent of lowering the resolution. Native ray tracing uses a ray (or multiple rays) for every single pixel, whereas with voxels you're basically ray tracing, say, 9 pixels at the same time, which really lowers the load. It's a viable solution until hardware catches up to full ray tracing.
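A concrete toy version of that analogy in Python/NumPy; fake_trace is a made-up stand-in for a real ray tracer, but tracing one ray per 3x3 block and then upscaling shows the roughly 9x reduction in ray count:

import numpy as np

W, H, BLOCK = 1920, 1080, 3

def fake_trace(x, y):
    """Stand-in for a real ray trace; returns some radiance per coordinate."""
    return (np.sin(x * 0.01) * np.cos(y * 0.01) + 1.0) * 0.5

# Trace only at block centres (one "ray" per 3x3 pixel block)...
xs = np.arange(BLOCK // 2, W, BLOCK)
ys = np.arange(BLOCK // 2, H, BLOCK)
low_res = fake_trace(*np.meshgrid(xs, ys))

# ...then nearest-neighbour upscale back to full resolution.
full_res = np.repeat(np.repeat(low_res, BLOCK, axis=0), BLOCK, axis=1)[:H, :W]

print("rays traced:", low_res.size, "vs full-res:", W * H,
      "-> reduction ~", round(W * H / low_res.size, 1), "x")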
 