
Killzone 4 Uses Raytracing For Its Real-Time Reflections

Status
Not open for further replies.

RiZ III

Member
The reflections were one of the first cool graphical things I noticed in the trailer they showed originally. Looked really nice.
 

RiverBed

Banned
I wish the thread title said KZ:SF instead since it took me off guard for a sec.

Also, so many things I haven't noticed in the trailer. >_>
 

Portugeezer

Gold Member
Although that is impressive, one thing I'm wondering, will developers not "cheat" any more for graphical effects?

I mean, reflections and stuff could be faked to good effect and be much less taxing, right? Seems like a lot of effort is going into these small details, which is fine, but will there be enough small enhancements for the bigger picture to see a big improvement?
 

Swifty

Member
Although that is impressive, one thing I'm wondering, will developers not "cheat" any more for graphical effects?

I mean, reflections and stuff could be faked to good effect and be much less taxing, right? Seems like a lot of effort is going into these small details, which is fine, but will there be enough small enhancements for the bigger picture to see a big improvement?
Computer rendering on the whole is pretty much cheating one way or another.
 

farisr

Member
Although that is impressive, one thing I'm wondering, will developers not "cheat" any more for graphical effects?

I mean, reflections and stuff could be faked to good effect and be much less taxing, right? Seems like a lot of effort is going into these small details, which is fine, but will there be enough small enhancements for the bigger picture to see a big improvement?

I believe I read that the majority of the reflections on the really shiny reflective glass skyscrapers in the demo were done through cheating; supposedly they weren't actually reflecting the environment. Just what I heard. Not that I care, it looked really great.
 
Ah, okay.
Do you have any links I can read on what they really do? Information about ROPs is really scarce on the internet; you get countless links that tell what feels like 30% of the story.
I want to clear up for myself what ROPs precisely do.

Yes please. I don't know much about how they work and what they can do besides rendering an output image.
 

Izick

Member
Was reading up on this thread earlier this morning when I couldn't post.

Anyway, this is really awesome to see, and thanks to all of you here who explained it a little more for laymen who didn't understand what it was, or how it affects the engine and look overall.
 
Having just written a raytracer, I can tell you right now that we probably won't even see true glossy reflections in the generation after next-gen. To render them you need to super-sample reflection rays stochastically (the randomness is determined by the roughness) and then average the results together. To get a decent-looking result you need to sample at least 32 rays (per reflection, so imagine how quickly the number of traces grows for reflection rays that bounce off multiple surfaces), and even then, reflections from further away in the scene will be noisy. Of course there are doubtless going to be ways to fake it and accelerate the process, but at this point in time any distributed raytracing techniques (glossy reflections, true-to-life depth of field, soft shadows, etc.) would take far too much computational time to be of any use in a real-time application.

So I'm confused, what are they doing here? These are real-time reflections, but they're not true glossy reflections?
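The stochastic super-sampling described in the quoted post can be sketched in a few lines of toy Python. Everything here is illustrative (the function names and the uniform-jitter roughness model are made up for the sketch; real renderers use proper importance-sampled distributions):

```python
import math
import random

def reflect(d, n):
    # Mirror direction d about unit normal n (both 3-tuples).
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

def glossy_sample_dirs(d, n, roughness, count, rng):
    # Jitter the mirror direction by a roughness-scaled random offset,
    # then renormalize. A renderer would trace each direction and
    # average the traced colors to get the blurry (glossy) reflection.
    mirror = reflect(d, n)
    dirs = []
    for _ in range(count):
        j = tuple(m + roughness * rng.uniform(-0.5, 0.5) for m in mirror)
        length = math.sqrt(sum(c * c for c in j)) or 1.0
        dirs.append(tuple(c / length for c in j))
    return dirs
```

With roughness 0 every sample collapses onto the mirror direction (a perfect mirror); as roughness grows the directions spread out, which is why rough surfaces need so many rays to average down the noise.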
 

A More Normal Bird

Unconfirmed Member
Although that is impressive, one thing I'm wondering, will developers not "cheat" any more for graphical effects?

I mean, reflections and stuff could be faked to good effect and be much less taxing, right? Seems like a lot of effort is going into these small details, which is fine, but will there be enough small enhancements for the bigger picture to see a big improvement?

Screen space effects like this are cheats in many ways. As for effort going into small details: the more things like this can be handled in real time by the renderer, the less the artists and designers have to worry about it and the easier it is to see changes without dealing with lengthy baking processes. If the power is there it seems like a pretty sensible use of resources to me.
 

pottuvoi

Banned
I was gonna come in here and say "I think you're mistaking ray-tracing for ray-casting or some other hack being used to approximate ray-traced reflections."

And then saw about a dozen people had already covered it.

Love to see this stuff in action even if it isn't "Ray-tracing" in the strictest sense.
Depends on what you mean by the strictest sense...
If you mean the difference between ray-tracing and ray-casting, there really isn't any difference outside of some secondary rays, which are very easy to implement.
Ray-tracing just means that you trace rays; it says nothing about what kind of dataset you are tracing.

In this case you either have direct tracing or sampling of a screen-space-aligned heightfield.
Or you use a heightfield or quadtree as an acceleration structure, and might have an accurate ray-quad hit test for quality.

I'm pretty sure it's a quadtree-based method in which you cone-trace to get blurry reflections.
You mean parallax occlusion mapping of bullet-hole textures, not ray-tracing.
POM usually shoots secondary rays for shadows, so it should be considered a ray-tracing method within texture space. (It would also be easy to add reflections/refractions and GI.)
Having just written a raytracer, I can tell you right now that we probably won't even see true glossy reflections in the generation after next-gen. To render them you need to super-sample reflection rays stochastically (the randomness is determined by the roughness) and then average the results together. To get a decent-looking result you need to sample at least 32 rays (per reflection, so imagine how quickly the number of traces grows for reflection rays that bounce off multiple surfaces), and even then, reflections from further away in the scene will be noisy. Of course there are doubtless going to be ways to fake it and accelerate the process, but at this point in time any distributed raytracing techniques (glossy reflections, true-to-life depth of field, soft shadows, etc.) would take far too much computational time to be of any use in a real-time application.
I'm pretty sure tracing pre-filtered geometry can be considered a 'trueish' glossy reflection.
It's not quite as correct as shooting thousands of rays, but it does reduce the number of rays needed to get a good-quality reflection no matter how glossy the surface is. (It's also a lot more stable.)
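The "sampling of a screen-space-aligned heightfield" idea above can be sketched with a toy 1-D depth buffer. This is purely illustrative (a real SSR pass marches a 2-D buffer, usually with hierarchical steps as pottuvoi describes), but it shows the core loop and the characteristic failure mode: the ray can simply leave the screen.

```python
def march_heightfield(depth_buffer, x0, z0, dx, dz, max_steps=64):
    # Linear ray march against a screen-space depth buffer (1-D here).
    # Returns the index of the first cell the ray passes behind, or
    # None if it exits the screen or runs out of steps -- the case
    # where SSR has no data and must fall back to e.g. a cubemap.
    x, z = x0, z0
    for _ in range(max_steps):
        x += dx
        z += dz
        ix = int(x)
        if ix < 0 or ix >= len(depth_buffer):
            return None   # left screen space: nothing to reflect
        if z >= depth_buffer[ix]:
            return ix     # ray dipped behind the stored surface
    return None
```

A quadtree / hierarchical variant replaces the fixed step with jumps sized by a min-depth mip chain, which is what makes the march cheap enough for real time.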
 

antic604

Banned
I think that was because the smoke entered the scene.

he moves farther away from the wall

It looks more like the wall is less glossy / more matte closer to the camera, therefore the reflection is gone? If I wanted to stretch my belief in GG's tech, I'd even say that the wall became 'rougher' in that spot because of the grenade that was thrown into the corridor a few seconds earlier :)
 
It looks more like the wall is less glossy / more matte closer to the camera, therefore the reflection is gone?

No, the angle of reflection became too perpendicular, so the reflected point would be out of screen space (i.e. on the character's side) and would probably be out of the viewer's sight anyway.
 

antic604

Banned
but it is not correct :/

I checked both the original video and the Jimmy Fallon demo, and while they both confirm the wall is matte / rough in that spot, it's not caused by the grenade, which is thrown much further into the scene in the former and not thrown at all in the latter.

Still, I stick to my theory that it's the roughness of the material that causes the reflection to vanish. The GG guy during the presentation says (around the 2:00 mark) that the 'reflections are distance / roughness dependent', so that'd make sense.
 
So I'm confused, what are they doing here? These are real-time reflections, but they're not true glossy reflections?

I think he means that true glossy reflections trace light all the way back to a light source.
If you have multiple glossy surfaces, you have to trace more steps back to the source.

Here they bounce only once, after applying environment cube maps to the scene:
for every pixel, send out a ray; if the surface reflects, shoot another ray; if it collides, sample the color.

True reflection probably goes like this:
for every pixel, shoot a ray; if the surface reflects, shoot another ray; if the next surface reflects, shoot another ray; and so on, until you reach some non-reflective surface or a light source.
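The single-bounce vs. fully recursive distinction above can be sketched with a toy scene. The scene layout and names here are invented for illustration; it only shows how a bounce budget cuts the reflection chain short:

```python
def trace(scene, surface, max_bounces):
    # scene maps a surface id to (color, reflects, next_surface).
    # max_bounces=1 models the single extra bounce described above;
    # a large budget follows reflections until a non-reflective
    # surface (or light source) ends the chain.
    color, reflects, next_surface = scene[surface]
    if reflects and next_surface is not None and max_bounces > 0:
        return trace(scene, next_surface, max_bounces - 1)
    return color
```

With one bounce, a reflective floor shows you the wall it faces; with a full budget, the chain keeps going until it terminates at something matte or emissive.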

Yes please. I don't know much about how they work and what they can do besides rendering an output image.

This is what I found by searching for "render back-ends" instead of ROPs.
http://www.xbitlabs.com/articles/graphics/display/r600-architecture_7.html#sect0
http://www.extremetech.com/computing/78670-radeon-hd-2000-series-3d-architecture-explained/6
http://www.tomshardware.com/reviews/r600-finally-dx10-hardware-ati,1600-11.html
 

KKRT00

Member
I believe I read that the majority of the reflections on the really shiny reflective glass skyscrapers in the demo were done through cheating; supposedly they weren't actually reflecting the environment. Just what I heard. Not that I care, it looked really great.

They were just high quality cubemaps, so prerendered reflections.
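As a rough illustration of why cubemaps are so cheap compared to tracing: a lookup just picks one of six prebaked faces from the dominant axis of the reflection direction and reads a texel. This is a toy sketch of the face-selection step only, not anyone's actual shader code:

```python
def cubemap_face(d):
    # Pick which of the six prebaked faces a reflection direction
    # samples: the dominant axis and its sign. A real lookup then
    # projects the other two components into that face's texture.
    ax = max(range(3), key=lambda i: abs(d[i]))
    return ("xyz"[ax], 1 if d[ax] >= 0 else -1)
```

Since the faces are prerendered, the reflection never reacts to anything dynamic in the scene, which is exactly the "cheat" being described.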
 

HTupolev

Member
Impossible to do mirrors using screen-space reflections. If you can't see your face, the mirror can't either.
Yeah, this is a very limited and visually-unstable technique. Look at this digitalfoundry video. Note how the composition of the reflections in the water just sort of pop in and out with the player's movement and zoom, and how the first-person weapon is actually occluding and screwing with the reflections in the water.

These reflections are a really neat effect, but they have a lot of weaknesses. Offloading weird things to be handled by the screen-space buffers has been done a lot in the past few years; in addition to deferred shading, you have things like particles bouncing in the buffers rather than in actual geometry space in games like Halo: Reach. But offloading reflections to screen-space buffers means you have very little of the information that you often want to draw from, as reflections by their very nature bounce all over the scene.
 
I think he means that true glossy reflections trace light all the way back to a light source.
If you have multiple glossy surfaces, you have to trace more steps back to the source.

Here they bounce only once, after applying environment cube maps to the scene:
for every pixel, send out a ray; if the surface reflects, shoot another ray; if it collides, sample the color.

True reflection probably goes like this:
for every pixel, shoot a ray; if the surface reflects, shoot another ray; if the next surface reflects, shoot another ray; and so on, until you reach some non-reflective surface or a light source.



This is what I found by searching for "render back-ends" instead of ROPs.
http://www.xbitlabs.com/articles/graphics/display/r600-architecture_7.html#sect0
http://www.extremetech.com/computing/78670-radeon-hd-2000-series-3d-architecture-explained/6
http://www.tomshardware.com/reviews/r600-finally-dx10-hardware-ati,1600-11.html

So more ROPs would allow more alpha channels? I thought alpha was limited by bandwidth? Or is it both?
 

BigTnaples

Todd Howard's Secret GAF Account
Although that is impressive, one thing I'm wondering, will developers not "cheat" any more for graphical effects?

I mean, reflections and stuff could be faked to good effect and be much less taxing, right? Seems like a lot of effort is going into these small details, which is fine, but will there be enough small enhancements for the bigger picture to see a big improvement?

You call it a small detail. But when you are actually playing the game you will sing a different tune.


In Crysis 2 DX11 it was by far the most consistently impressive graphical effect, alongside bokeh and tessellation.

When reflections are done real time and you are actually interacting with the game-world instead of watching a video of it, it constantly gives you those "Wow" moments.

These need to be standard next gen.




CliffyB's name reminds me of the locked weapon skins @ 108 KB in Gears of War 3. Don't care what he says; let's keep him out of this thread.

Are you serious?


He is one of the most in touch game developers in the industry, and has done a ton of good work. Not to mention he is a great poster here on the Gaf. Get out of here with that nonsense.
 

Lord Error

Insane For Sony
They were just high quality cubemaps, so prerendered reflections.
In the presentation here, you can see that almost every surface they have in the scene is a blend of this raytraced reflection and the cubemap. They've got some metal and glass architecture on that picture where you can clearly see that something is added there when he enables realtime reflections.
 

KKRT00

Member
Amount? No. Speed? Yes, but they were likely using a similar-bandwidth GPU in their kits.

Even RAM speed isn't relevant for SSR.

--
In the presentation here, you can see that almost every surface they have in the scene is a blend of this raytraced reflection and the cubemap. They've got some metal and glass architecture on that picture where you can clearly see that something is added there when he enables realtime reflections.

Yes, glossy reflections [sunlight reflections], but those reflections on buildings are pure cubemaps, which you can even see in the presentation when they compare the buffer with cubemaps to the buffer with cubemaps and SSR.

---

FULL Presentation has been uploaded
http://www.youtube.com/watch?annota...&feature=iv&src_vid=DDYVcQNgu4Y&v=_29M8F-sRsU
 
I've been to the Wikipedia page on ray tracing, but I cannot figure out what this is or why so many people are seemingly impressed with it. Can anyone help me understand?
 
I've been to the Wikipedia page on ray tracing, but I cannot figure out what this is or why so many people are seemingly impressed with it. Can anyone help me understand?

This is not using that exact same technique. Even then... ray tracing is by FAR not the best technique for realistic rendering.
 

Lord Error

Insane For Sony
Yes, glossy reflections, but those reflections on buildings are pure cubemaps.
It looked like the glass surface on the building in that picture was being updated as well when he switched over to the "& raytraced reflections" slide, but it was a distant building in a blurry video, so I'm not sure what to think. It seemed like the realtime reflections layer was pretty liberally spread out with content that was in it.

Anyway, as someone said earlier, perhaps the more interesting aspect of this presentation is that they're ditching point lights completely and going with physically based area lights for everything.
 

Randdalf

Member
I think he means that true glossy reflections trace light all the way back to a light source.
If you have multiple glossy surfaces, you have to trace more steps back to the source.

Here they bounce only once, after applying environment cube maps to the scene:
for every pixel, send out a ray; if the surface reflects, shoot another ray; if it collides, sample the color.

True reflection probably goes like this:
for every pixel, shoot a ray; if the surface reflects, shoot another ray; if the next surface reflects, shoot another ray; and so on, until you reach some non-reflective surface or a light source.

No. What I mean is that glossy reflections, in ray tracing, are done by basically simulating the same light bounce multiple times with a randomly offset angle determined by the roughness. All these simulations are then averaged together to produce the glossy reflection. A rougher surface hence becomes blurrier because the rays are more randomly spread. This becomes prohibitive because the number of ray traces required to produce a good-looking result is quite large.
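A quick way to see why the sample count matters: the error of an averaged jittered estimate shrinks roughly with the square root of the ray count, so cutting noise in half costs four times the rays. A toy Python model of that averaging (not a renderer; the uniform-noise stand-in for traced rays is an assumption of the sketch):

```python
import random

def glossy_estimate(true_value, spread, n, rng):
    # Average n noisy samples -- a stand-in for tracing n jittered
    # reflection rays whose results scatter around the true value.
    return sum(true_value + rng.uniform(-spread, spread) for _ in range(n)) / n

rng = random.Random(42)
# Typical per-pixel error over 200 "pixels", 4 rays vs 64 rays each:
err4 = sum(abs(glossy_estimate(0.5, 0.4, 4, rng) - 0.5) for _ in range(200)) / 200
err64 = sum(abs(glossy_estimate(0.5, 0.4, 64, rng) - 0.5) for _ in range(200)) / 200
```

err64 comes out far smaller than err4, but at 16x the tracing cost per pixel, which is exactly the budget problem for real-time use.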
 

Raist

Banned
Remember the huge leap in quality between the alpha and final version of KZ2?

We saw the alpha for KZ:SF, on dev kits which were lacking in RAM. So yeah, bring it on, GG.

I hope they use that GOW3 MLAA.

That shit was super clean.

KZ3 used MLAA, I don't see why they'd change for KZSF. Unless they figured out something better.
 

KKRT00

Member
It looked like the glass surface on the building in that picture was being updated as well when he switched over to the "& raytraced reflections" slide, but it was a distant building in a blurry video, so I'm not sure what to think. It seemed like the realtime reflections layer was pretty liberally spread out with content that was in it.

Anyway, as someone said earlier, perhaps the more interesting aspect of this presentation is that they're ditching point lights completely and going with physically based area lights for everything.

Have you noticed that the first buffer they show is with cubemaps, the second without anything, and the third is with cubemaps and SSR?

I was the guy who was talking about area lights :). OK, time to watch the whole presentation.
 

Lord Error

Insane For Sony
Have you noticed that the first buffer they show is with cubemaps, the second without anything, and the third is with cubemaps and SSR?

I was the guy who was talking about area lights :). OK, time to watch the whole presentation.
Yeah, I did notice, which makes it hard to see what was updated exactly. I was trying to see what was on just SSR layer (with masking) which they show at some point. What looks great with their building reflections is that they have situations where they're applying them on multiple layers of overlapping transparent surfaces which creates a pretty rich look.
 

TEXAN

Banned
sega_spectrum.jpg
 
No. What I mean is that glossy reflections, in ray tracing, are done by basically simulating the same light bounce multiple times with a randomly offset angle determined by the roughness. All these simulations are then averaged together to produce the glossy reflection. A rougher surface hence becomes blurrier because the rays are more randomly spread. This becomes prohibitive because the number of ray traces required to produce a good-looking result is quite large.

So how is KZ:SF not already doing that? How is what they're doing different?
 