There's no excuse now; I don't want to see any more broken mirrors when I enter a bathroom.
The dev kit they were using to make Killzone only had, IIRC, 2-3 GB of GDDR5.
Did they even confirm whether it was GDDR5 in the devkits they had at the time?
> Although that is impressive, one thing I'm wondering, will developers not "cheat" any more for graphical effects? I mean, reflections and stuff could be faked to good effect and be much less taxing, right? Seems like a lot of effort is going into these small details, which is fine, but will there be enough small enhancements for the bigger picture to see a big improvement?

Computer rendering, on the whole, is pretty much cheating one way or another.
Although that is impressive, one thing I'm wondering, will developers not "cheat" any more for graphical effects?
I mean, reflections and stuff could be faked to good effect and be much less taxing, right? Seems like a lot of effort is going into these small details, which is fine, but will there be enough small enhancements for the bigger picture to see a big improvement?
Wouldn't they probably be using an off-the-shelf card like a 7850/7870?
Sexy beast of a game, cannot wait.
Looks like it trails off really quick. Understandable I guess but hm, guess you can't really tell from this one scene.
Ah, okay. Do you have any links I can read on what they really do? Information about ROPs is really scarce on the internet; you get countless links that each tell what feels like 30% of the story. I want to clear up for myself what ROPs precisely do.
Having just written a raytracer, I can tell you right now that we probably won't even see true glossy reflections in the generation after next-gen. To render them you need to super-sample reflection rays stochastically (the randomness is determined by the roughness) and then average the results together. To get a decent-looking result you need to sample at least 32 rays (per reflection, so imagine how quickly the number of traces needed for reflection rays that bounce off multiple surfaces grows), and even then, reflections from further away in the scene will be noisy. Of course there are doubtless going to be ways to fake it and accelerate the process, but at this point in time any distributed raytracing techniques (glossy reflections, true-to-life depth of field, soft shadows, etc.) would take far too much computational time to be of any use in a real-time application.
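To put rough numbers on that, here's a back-of-the-envelope sketch (my own arithmetic, using the 32-samples-per-bounce figure from the post):

```python
# Rough cost arithmetic for stochastic glossy reflections: each glossy
# bounce spawns 32 new rays, so the trace count grows geometrically.
SAMPLES_PER_BOUNCE = 32

def traces_per_pixel(bounces):
    """Total reflection rays traced for one pixel: 32 + 32^2 + ... + 32^bounces."""
    return sum(SAMPLES_PER_BOUNCE ** b for b in range(1, bounces + 1))

for depth in (1, 2, 3):
    print(depth, traces_per_pixel(depth))
# 1 -> 32, 2 -> 1056, 3 -> 33824 rays, per pixel, per frame
```

And that's before multiplying by the roughly two million pixels in a 1080p frame, which is why it's nowhere near real-time.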
Although that is impressive, one thing I'm wondering, will developers not "cheat" any more for graphical effects?
I mean, reflections and stuff could be faked to good effect and be much less taxing, right? Seems like a lot of effort is going into these small details, which is fine, but will there be enough small enhancements for the bigger picture to see a big improvement?
> I was gonna come in here and say "I think you're mistaking ray-tracing for ray-casting or some other hack being used to approximate ray-traced reflections." And then saw about a dozen people had already covered it. Love to see this stuff in action even if it isn't "ray-tracing" in the strictest sense.

Depends what you mean by the strictest sense...
> You mean parallax occlusion mapping of bullet-hole textures, not ray-tracing.

POM usually shoots secondary rays for shadows, so it should be considered a ray-tracing method within texture space. (It would also be easy to add reflections/refractions and GI.)
> Having just written a raytracer, I can tell you right now that we probably won't even see true glossy reflections in the generation after next-gen. [...]

I'm pretty sure tracing pre-filtered geometry can be considered a 'trueish' glossy reflection.
I think that was because the smoke entered the scene.
he moves farther away from the wall
It looks more like the wall is less glossy / more matte closer to the camera, therefore the reflection is gone?
No, the angle of reflection became too perpendicular: the reflected content (i.e., the character's side) would be out of screen space, and would probably be out of the viewer's sight anyway.
Could be, but let me stick to my opinion - it sounds much more advanced
but it is not correct :/
So I'm confused: what are they doing here? These are real-time reflections, but they're not true glossy reflections?
Yes please. I don't know much about how they work and what they can do besides rendering an output image.
I believe I read that the majority of the reflections on the really shiny, reflective glass skyscrapers in the demo were done through cheating; supposedly they weren't actually reflecting the environment. Just what I heard. Not that I care, it looked really great.
> Impossible to do mirrors using screen-space reflections. If you can't see your face, the mirror can't either.

Yeah, this is a very limited and visually unstable technique. Look at this Digital Foundry video: note how the composition of the reflections in the water just sort of pops in and out with the player's movement and zoom, and how the first-person weapon is actually occluding and screwing with the reflections in the water.
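A toy sketch of that limitation (the function name and the one-dimensional "screen" are invented for illustration): SSR marches the reflected ray through the depth buffer, so anything off-screen simply has no data to hit.

```python
# Toy 1-D illustration of why SSR can't do mirrors: it can only reflect
# what is already on screen (i.e., present in the depth buffer).
def ssr_march(depth_buffer, start_x, step, max_steps=64):
    """March a reflected ray across screen columns; return hit column or None."""
    x = start_x
    for _ in range(max_steps):
        x += step
        if x < 0 or x >= len(depth_buffer):
            return None  # ray left the screen: nothing to sample
        if depth_buffer[x] is not None:
            return x  # hit on-screen geometry: reflection works
    return None

screen = [None, None, 2.0, None, None]  # only column 2 holds visible geometry
print(ssr_march(screen, 0, 1))    # 2: on-screen geometry gets reflected
print(ssr_march(screen, 2, -3))   # None: ray exits the screen, like your off-screen face
```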
I think he means that true glossy reflections trace light all the way back to a light source. If you have multiple glossy surfaces, you have to trace more steps back to the source. Here they bounce only once, after applying environment cube maps to the scene.

From every pixel, send out a ray; if the surface reflects, shoot another ray; if it collides, sample the color.

True reflection probably goes like this: for every pixel, shoot a ray; if the surface reflects, shoot another ray; if the next surface reflects, shoot another ray; and so on, until you reach some non-reflective surface or a light source.
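The loop described above can be sketched literally (the scene format here is made up purely for illustration):

```python
# Follow a chain of reflections until a non-reflective surface is hit,
# then sample its color -- the "true reflection" loop described above.
def trace(scene, surface_id, max_bounces=8):
    for _ in range(max_bounces):
        surface = scene[surface_id]
        if not surface["reflective"]:
            return surface["color"]   # non-reflective: sample color and stop
        surface_id = surface["hits"]  # reflective: shoot another ray
    return (0, 0, 0)  # bounce budget exhausted

scene = {
    "mirror_a": {"reflective": True,  "hits": "mirror_b"},
    "mirror_b": {"reflective": True,  "hits": "red_wall"},
    "red_wall": {"reflective": False, "color": (255, 0, 0)},
}
print(trace(scene, "mirror_a"))  # (255, 0, 0)
```

Real tracers follow actual ray-geometry intersections rather than a lookup table, but the control flow is the same.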
This is what I found searching for "render back-ends" instead of ROPs:
http://www.xbitlabs.com/articles/graphics/display/r600-architecture_7.html#sect0
http://www.extremetech.com/computing/78670-radeon-hd-2000-series-3d-architecture-explained/6
http://www.tomshardware.com/reviews/r600-finally-dx10-hardware-ati,1600-11.html
Yup, it was 2 GB.
Does RAM play a big part in features like these? Honest question because I have no idea
CliffyB's name reminds me of the locked weapon skins at 108 KB in Gears of War 3. Don't care what he says; let's keep him out of this thread.
> They were just high-quality cubemaps, so prerendered reflections.

In the presentation here, you can see that almost every surface they have in the scene is a blend of this raytraced reflection and the cubemap. They've got some metal and glass architecture in that picture where you can clearly see that something is added when he enables realtime reflections.
Amount? No. Speed? Yes, but they were likely using a similar-bandwidth GPU in their kits.
> Computer rendering on a whole is pretty much cheating one way or another.

Rasterization, sure.
I've been to the Wikipedia page on ray tracing, but I cannot figure out what this is or why so many people are seemingly impressed by it. Can anyone help me understand?
> Yes, glossy reflections, but those reflections on buildings are pure cubemaps.

It looked like the glass surface on the building in that picture was being updated as well when he switched over to the "& raytraced reflections" slide, but it was a distant building in a blurry video, so I'm not sure what to think. It seemed like the realtime reflections layer was spread pretty liberally across the scene's content.
I hope they use that GOW3 MLAA.
That shit was super clean.
Anyway, as someone said earlier, perhaps the more interesting aspect of this presentation is that they're ditching point lights completely and going with physically based area lights for everything.
> Have you noticed that the first buffer they show is with cubemaps, then without anything, and the third is with cubemaps and SSR?

Yeah, I did notice, which makes it hard to see exactly what was updated. I was trying to see what was on just the SSR layer (with masking), which they show at some point. What looks great about their building reflections is that they're applying them on multiple layers of overlapping transparent surfaces, which creates a pretty rich look.
I was the guy who was talking about area lights. OK, time to watch the whole presentation.
---
FULL Presentation has been uploaded
Thanks
No. What I mean is that glossy reflections, in ray tracing, are done by basically simulating the same light bounce multiple times with a randomly offset angle determined by the roughness. All these simulations are then averaged together to produce the glossy reflection. A rougher surface hence becomes blurrier because the rays are more randomly spread. This becomes prohibitive because the number of ray traces required to produce a good-looking result is quite large.
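A minimal sketch of that averaging, with a stand-in sample_scene function instead of a real ray trace (everything here, including the 1-D "directions", is invented for illustration):

```python
import random

def glossy_reflection(mirror_dir, roughness, sample_scene, n=32):
    """Average n samples whose directions are jittered in proportion to roughness."""
    total = 0.0
    for _ in range(n):
        jitter = random.gauss(0.0, roughness)  # random offset scaled by roughness
        total += sample_scene(mirror_dir + jitter)
    return total / n  # the averaging is what produces the blur

# Toy "scene": brightness falls off away from direction 0.0
scene = lambda d: max(0.0, 1.0 - abs(d))
random.seed(1)
print(glossy_reflection(0.0, 0.0, scene))  # perfectly smooth surface: exactly 1.0
print(glossy_reflection(0.0, 0.5, scene))  # rough surface: blurred average below 1.0
```

With roughness 0 every sample hits the mirror direction, so the result is sharp; as roughness grows, the samples spread out and the average blurs, and n has to grow to keep the result from looking noisy.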