
Game Graphics Technology | 64bit, procedural, high-fidelity debating

pottuvoi

Banned
I thought the rays there were just typical screen space god rays. My memory could be failing me, but I rotated the camera behind the light source and the rays disappeared. Perhaps I am misremembering...
Even in cases where volumetrics are done properly, they might be hard to see when looking from the light source toward the fog, due to the scattering methods used.
Screen-space methods usually break when the light goes off screen, and they do not handle hidden occluders.
 
Question about motion blur that I wanted to put here instead of one of the DF threads because I wanted to avoid a shit fit. I'm a huge fan of the effect and it was one area that I found lacking in Uncharted 4 in terms of quality. Could someone tell me if it's just lower res/fewer samples, or an entirely different implementation than Doom's? Doom's looks perfect. Screens to illustrate what I mean. I'd love to see that be an area of improvement in a PS Neo patch for Uncharted.

26491214234_afbbb03774_o.png


26822389550_249b917e58_o.png

Looking @ UC 4 - I think there are a couple of things coming into play here. Looking at some screenshots right before release, and a couple from post-release (that you sent me <3 ), we know the DOF is basically quarter resolution, and you can see this when it intersects with overlapping geometry or alpha effects, or overlapping obmb. Though, if there is no obvious intersection or shallow cut-off from DOF, it looks pretty darn smooth in spite of its internal resolution; the upscale in a static image is not too bad or blocky... unlike, say, KZ:SF where it is really obviously quarter resolution. I would imagine, based upon some of the serious stair stepping in the screen above, that the motion blur is running at that exact same internal resolution as depth of field (I would be surprised to learn that the various post processes are in fact running at different resolutions, as depth of field and MB are usually conceptually coupled). So you already have a low internal resolution of the effect when it is rendered in one slice. Then combine that with the fact that there is a low number of sample slices (which are visible in the screen you posted as well), and then couple that with how the motion blur is upsampled on top of how it blends between samples. How the samples are blended and weighted increases their visual plausibility in stills and even in motion. This presentation from Jorge Jimenez goes into the various steps of making a plausible reconstruction filter for motion blur.
Or this paper from Tiago Sousa after slide 45.
So 4 things in total if you take the time to look through both those presentations:
1. Lower internal resolution for motion blur (not uncommon, does not need to be a huge detriment)
2. How that is upsampled to native resolution
3. How the motion blur blends between individual samples / slices
4. How it blends over non-motion blurred parts of the screen, or motion blurred areas moving in a different direction

Points 2 through 4 are the most important IMO, as that is where you can fudge and blur and sample interestingly enough to create really plausible looking results. The various different looks provided in that Call of Duty paper show just how different the end result of the same amount of motion blur slices can look based upon how it is upscaled and blended along its vector.
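To make points 2-4 a bit more concrete, here is a minimal CPU-side sketch of that kind of reconstruction, in the spirit of the Jimenez/McGuire-style filters linked above. It is illustrative only, not ND's or id's actual shader; the helper types, tap count and weights are made-up assumptions:

```cpp
// Illustrative single-pixel motion blur reconstruction (CPU sketch only, not
// the actual UC4 or DOOM shader).  Image<T> and the weights below are made up.
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };
struct Vec3 { float r, g, b; };

template <typename T>
struct Image {
    int w = 0, h = 0;
    std::vector<T> data;
    const T& at(int x, int y) const {
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h - 1);
        return data[y * w + x];
    }
};

// Points 3/4 above: gather taps along the pixel's velocity and weight each tap
// by a soft depth comparison, so a blurred foreground smears over the sharp
// background without a hard cut-off line (and not the other way around).
Vec3 reconstructMotionBlur(const Image<Vec3>& color,
                           const Image<float>& depth,     // larger = farther
                           const Image<Vec2>& velocity,   // pixels per frame
                           int x, int y, int taps = 8)
{
    Vec2 v = velocity.at(x, y);
    float len = std::sqrt(v.x * v.x + v.y * v.y);
    Vec3 center = color.at(x, y);
    if (len < 0.5f) return center;               // nearly static: keep it sharp

    Vec3 sum{0.0f, 0.0f, 0.0f};
    float wsum = 0.0f;
    for (int i = 0; i < taps; ++i) {
        float t = (i + 0.5f) / taps - 0.5f;      // -0.5 .. +0.5 along the vector
        int sx = int(x + v.x * t);
        int sy = int(y + v.y * t);
        // Soft foreground test: taps closer to the camera than the centre pixel
        // contribute fully, taps behind it fade out instead of being cut hard.
        float zDiff = depth.at(x, y) - depth.at(sx, sy);   // > 0: tap is closer
        float w = std::clamp(0.5f + zDiff * 10.0f, 0.05f, 1.0f);
        Vec3 c = color.at(sx, sy);
        sum = {sum.r + c.r * w, sum.g + c.g * w, sum.b + c.b * w};
        wsum += w;
    }
    return {sum.r / wsum, sum.g / wsum, sum.b / wsum};
}
```

The soft depth weight is the part that decides point 4: whether a blurred foreground smears plausibly over a sharp background, or gets cut off with a hard visible line.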

Looking @ doom - it just seems to do steps 2 and 4 much better, to the point where its internal resolution is really hard to even notice (it is doubtlessly less than full res, though, considering it is on console as well), and to the point where you cannot obviously see the geometric cut-off where obmb begins and where it ends. That is quite unlike UC4, where you can see a hard cut-off line more often than not in the motion blur itself, either between its internal samples or between motion blur on top of non-motion-blurred screen parts, or motion blur going in a different direction (look at the rocks or around the red kerchief on the head of the NPC). Contrast this with Doom:
The chainsaw blades are moving and being motion blurred, but you cannot see the obvious cut-off line against the non-motion-blurred areas of the screen or the individual samples themselves. Although, and this may just be because the console obmb in Doom is not as great, you can see some of the individual slices in your screenshot of the mancubus, as part of the motion blur on top of his left arm (as it is in front of a high contrast red/white light area).
 
Not sure if this is the right thread for this. I have recently learned about DisplayPort 1.4. It seems that there isn't really a difference compared to DisplayPort 1.3 on the physical side of things. But DisplayPort 1.4 will introduce a new compression method called DSC 1.2 (Display Stream Compression), which allows higher resolutions etc. I have read that the new Nvidia GPUs will already be DisplayPort 1.4 "ready". So there might be a chance that manufacturers will "skip" DP 1.3 - the only thing which makes me question this a bit is the publication date of the specs. DP 1.3 is nearly 2 years old, and only now are we getting the first devices. And the DP 1.4 specs were published in March this year.

What I wonder, and this is why I posted here: is it possible that DSC 1.2 really works that well, or is there a catch? Because it sounds a bit too good to be true. Basically double the effective bandwidth on the same physical connector? They wrote that it has up to 3:1 compression, and that the result is "visually lossless". I'm a bit sceptical about the last part, but I also know how big the size difference is between, for example, different picture formats where I cannot see any difference with the naked eye. (Some rough bandwidth numbers are sketched below.)

If this added compression feature is as good and as easy to implement as suggested, this could mean that we might get probably the best display ever in 2017:

4K + 120hz + HDR (at the same time!)
Ultra HD Premium
HDMI 2.1, DisplayPort 1.4
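Some back-of-the-envelope arithmetic for that wishlist (my own numbers, not quoted from the DP 1.4 spec): HBR3 on DP 1.3/1.4 is 4 lanes x 8.1 Gbps = 32.4 Gbps raw, about 25.9 Gbps of payload after 8b/10b encoding, while 4K 120 Hz at 10-bit RGB needs roughly 30 Gbps before blanking, so it only fits with DSC engaged:

```cpp
// Rough link-budget check for "4K + 120 Hz + HDR" over DisplayPort.
// Sketch only: ignores blanking intervals and protocol overhead beyond 8b/10b.
#include <cstdio>

int main() {
    const double lanes = 4.0, gbpsPerLane = 8.1;            // HBR3 (DP 1.3/1.4)
    const double linkPayload = lanes * gbpsPerLane * 0.8;   // 8b/10b -> ~25.9 Gbps

    const double w = 3840, h = 2160, hz = 120, bpp = 30;    // 10-bit RGB
    const double uncompressed = w * h * hz * bpp / 1e9;     // ~29.9 Gbps
    const double withDsc = uncompressed / 3.0;              // "up to 3:1" claim

    std::printf("link payload  : %5.1f Gbps\n", linkPayload);
    std::printf("uncompressed  : %5.1f Gbps (%s)\n", uncompressed,
                uncompressed <= linkPayload ? "fits" : "does not fit");
    std::printf("with DSC 3:1  : %5.1f Gbps\n", withDsc);
    return 0;
}
```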
 

NXGamer

Member

Good post but just wanted to add some parts.

UC4, from some tests, does vary between half and quarter res at times, and the actual volumetric lights are NOT SSR but ray-marched volumes; this is not a new addition to the engine for the PS4. DoF is also the same, I recall, but I would need to confirm.

Also, the OMB looks great in DOOM as they apply it AFTER any DOF is applied, so this merges the samples, hiding any harsh edges and the "cutout" look you can get. Like the AA solution, it has multiple taps and is depth culled with background/foreground occlusion work. Half rate and filtered with a bilinear reconstruction over 2 passes, and relatively cheap at 0.5 ms for both.

IMO it is up with the best from Advanced Warfare, Ratchet and Clank, and now this. ND have delivered their best MB yet, but as good as it is, I think even they would agree it does the job rather than lead the pack.
 
Good post but just wanted to add some parts.

UC4, from some tests, does vary between half and quarter res at times, and the actual volumetric lights are NOT SSR but ray-marched volumes; this is not a new addition to the engine for the PS4. DoF is also the same, I recall, but I would need to confirm.
Oh yeah, I will look for some of those screens Roboplato sent me to show off the DOF resolution. BTW, are you speaking about the internal resolution of DOF varying? Or the resolution of the volumetrics? But yeah, confirming the internal resolution of MB should likewise be pretty simple with some exemplary screenshots.
Also, the OMB looks great in DOOM as they apply it AFTER any DOF is applied, so this merges the samples, hiding any harsh edges and the "cutout" look you can get. Like the AA solution, it has multiple taps and is depth culled with background/foreground occlusion work. Half rate and filtered with a bilinear reconstruction over 2 passes, and relatively cheap at 0.5 ms for both.
Yeah, the ordering is pretty important in a lot of cases, as the one Crytek presentation shows. It would be nice if one day depth of field and motion blur were done in a single, fully related fashion.
IMO it is up with the best from Advanced Warfare, Ratchet and Clank, and now this. ND have delivered their best MB yet, but as good as it is, I think even they would agree it does the job rather than lead the pack.
The Doom one or the one in UC4? I think the UC4 one is OK like you say, but it sadly just does not hold up in any screens really... unlike other obmbs. Still, it is nice that it has it, as every game should :p
 

nOoblet16

Member
The blending (between samples as well as with the object itself) in UC4 is quite poor tbh... one of the prime reasons why it looks so much worse than Doom's implementation. The ordering is an important aspect as well; trust Crytek developers (even if ex-Crytek now) to know their OMB.
 

platina

Member
Yes, in *cut scenes* where the camera point of view is locked in photo mode, with only a depth of field slider and a field of view option that (sometimes) allows a bit of movement in and out, Nathan Drake has little hairs perfectly rendered on his chin, and in general everything that is human looks just amazeballs.

However, when the camera is allowed to roam, you can no longer get close enough to find those same hairs, and in fact if you do manage to trick the camera into getting close to someone's face, the face disappears. From a greater distance, the model does not look to be the same fidelity as it was from the fixed viewpoint.

It's a bit of a parlour trick. I don't mind because of the seamless nature of the whole game, the beautiful vegetation and shadow work in the organic areas, and everything else they are doing. But showing close-ups of cut-scene photo mode should come with disclaimers that even if it is the game engine, it isn't the full-freedom-camera game engine. That's a big difference.
You can actually see them in gameplay if you know where to go. It was a pain taking these pictures at the right time during gameplay because the slightest movement and his whole face completely blurs. You can already see it happening on the beard.
agtcig3vk3o.png

vkdnkkl3jdz.png

aqx30pi1k95.png
 

wesly999

Banned
You are not wrong. The hair is still rendered in gameplay and there is no difference between photomode model and the ingame one. Here are some pics taken by a Beyond3D member showing closest view in gameplay and in photomode: both show the rendered facial hair:
https://forum.beyond3d.com/threads/...cuss-much-spoilers.57865/page-34#post-1915124

Is it real hair or is it sprite-cards? That's the real question. Only game I've ever seen real hair is Tomb Raider on PC.
 

RoboPlato

I'd be in the dick

Thanks, guys!

Definitely agree with NXG. It looks fine in motion (which is obviously what the goal is) but doesn't hold up well in stills. Looking over my screens I have found some segments where it looks better (the prison yard brawl, for instance), so it may also vary in quality based on the scene.

I'm glad it's in, that effect really brings games to life for me and I'm incredibly sensitive to it. Maybe it'll get some love on PS Neo. It's literally the only area where I think Uncharted 4 isn't at the top of the class. I get nitpicky with motion blur. Those three games (plus CryEngine) are also my favorites in terms of the effect.
 
You can actually see them in gameplay if you know where to go. It was a pain taking these pictures at the right time during gameplay because the slightest movement and his whole face completely blurs. You can already see it happening on the beard.
I'm still highly suspicious of cut scene photo mode where it comes to model fidelity.

There doesn't seem to be any good reason to have forced that, unless the game engine is switching up its rendering to a different mode where it can luxuriate in the joys of a fixed POV and the possibility of a 1gb buffer of pre-calculated geometry movement, one that is played like a tape, and even a down-sampled image.

If they patch it to allow camera orbits in photo mode during cut scenes, indicating dynamic calculation of hidden surfaces and dynamic global illumination, it will stop niggling at me.
 

gamerMan

Member
Is it real hair or is it sprite-cards? That's the real question. Only game I've ever seen real hair is Tomb Raider on PC.

I think the hair is created by snapping polygons to the model. These serve as hair cards. Alpha transparency textures are used to draw stubble.

GSmfH4E.jpg
 

Frozone

Member
Springfoot,

I don't mean to beat a dead horse, but I'd like to talk about this volumetric lighting in UC4 again, in hopes that we can come to terms on its true definition and whether we can truly replicate a true ray-marched volume test in UC4. I didn't really have the time to investigate thoroughly, so I took your word for it (not wanting to come off smug), but I got some time today, did a little homework and wanted to discuss this further.

If it were truly volumetric, you'd be able to trace shadow lines from the shadow casting features on the box down through the volume and to the floor, but as you can see in the video,

This can still be "hacked" with respect to the camera and what it's seeing. UC4 may very well be walking samples towards a light source from the shading point, but it appears to only be half-correct or optimized such that if the light source direction and the camera's view direction approach parallel directions, it does not ray-march at all. This would not be considered a true ray-march of an entire volume -- as it only works at some angles despite the light source being offscreen. Btw, any light source can be offscreen and still cast shadows, as the light direction can always be rendered in world space and stored as 3D vectors represented by a 2D image plane and retrieved during shader evaluation.

Here is a vid of one of your screenshots where Nate is underwater and what appear to be volume light shafts penetrating the ocean. Sure, they look great from the right angles (camera dot eye direction), but what happens when you make the camera completely parallel to the light beams and look at Nate? Why does the contribution of the dust particles in the water not get added into the overall color of Nate (i.e. he should be desaturated at the very least in some areas where the accumulated opacity in the volume is > 0):

https://youtu.be/QtL-y-SsTFA?t=843

Here is another one where Nate walks directly under the chandelier that's in the volume casting a shadow on the ground and yet, there is no desaturation from the added particle color onto the pixels that make up his body (nor the ground).

https://www.youtube.com/watch?v=FXwuUDdikio

Now, let's look at Deck 13's implementation (where they actually give a talk on their technique, so we know it's true ray-marching). Notice how it doesn't matter where I look with the camera; I'll still get accumulation of opacity on the main character (and definitely on what is seen all around him). Hence, what I would call a true ray-marched volume, because the marching happens independently of camera location:

https://www.youtube.com/watch?v=jBp_2DeprbY

Long story short -- I'm still skeptical with regard to gamers using the term (with the exception of LoTF); however, I am willing to agree it's a "pseudo" volume light shaft unless you have a vid that I haven't seen before that is similar to LoTF. I've honestly looked very hard for this and am now on Chapter 19.
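For reference, here is roughly what a "true" march through a lit volume means in the sense used above: walk the camera ray through the fog and test visibility toward the light at every step, so occluders carve shafts and whatever surface ends the ray (Nate included) is attenuated and tinted by the accumulated scattering no matter where the camera points. This is a generic sketch, not UC4's or Deck 13's actual code, and shadowMapVisible() is a hypothetical stand-in for a real shadow-map lookup:

```cpp
// Generic single-scattering fog march along one camera ray (illustrative only;
// shadowMapVisible() is a hypothetical stand-in for a real shadow-map lookup).
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// 1 if the point can see the light (normally a shadow-map test), else 0.
static float shadowMapVisible(Vec3 /*worldPos*/) { return 1.0f; }

struct ScatterResult {
    Vec3  inscatter;      // light scattered toward the camera by the fog
    float transmittance;  // how much of the surface behind the fog survives
};

ScatterResult marchFog(Vec3 rayOrigin, Vec3 rayDir, float rayLength,
                       Vec3 lightColor, float density, int steps = 64)
{
    ScatterResult r{{0.0f, 0.0f, 0.0f}, 1.0f};
    const float dt = rayLength / steps;
    for (int i = 0; i < steps; ++i) {
        Vec3 p = add(rayOrigin, mul(rayDir, (i + 0.5f) * dt));
        float vis = shadowMapVisible(p);         // occluders carve visible shafts
        float scatter = density * dt;
        // In-scattering at this step, attenuated by the fog already traversed.
        // Because this is accumulated in world space along the ray, it also dims
        // and tints whatever surface ends the ray, regardless of camera angle.
        r.inscatter = add(r.inscatter,
                          mul(lightColor, vis * scatter * r.transmittance));
        r.transmittance *= std::exp(-scatter);   // Beer-Lambert extinction
    }
    return r;   // final pixel = surfaceColor * transmittance + inscatter
}
```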

EDIT: I finally found an example in the game! Wahoo!!

https://www.youtube.com/watch?v=tWa1difcAFE

So I agree 100%! It's really weird how they used several different techniques depending on the scene. It can be very confusing. Well, at least we are on the same page now. :)
 

pottuvoi

Banned
I think the hair is created by snapping polygons to the model. These serve as hair cards. Alpha transparency textures are used to draw stubble.
Yup.
There is a very good reason for alpha cards for stubble: simply rendering polylines would cause ridiculous amounts of aliasing.
 
The highlighted portion here you mention above:

Looks like SSR to me. But the light hitting the table looks like a simulated bounce. How? Not sure. Do they automatically place a point light on the surface or something upon intersection with a surface?
Yup.
There is a very good reason for alpha cards for stubble: simply rendering polylines would cause ridiculous amounts of aliasing.

Unless you run 8xMSAA only on the strands :p
 
It's not, though. Check out the video I linked that I took that shot from.

Here at 4:11 (timecode link) you can see I swing the illuminated surface off screen, but the highlight from the GI light remains. Also happens at 2:24.

And earlier at 1:35 (link) I managed to get the light in a position where it strikes 3 separate surfaces with breaks in between, and the highlight from (what I'm assuming is a GI pointlight) separates into the 3 appropriate highlights.
0o0 thx
Since the specular lighting remains even when out of screen space (as the video you point to shows quite well at that point), there are 4 things that I imagine they could be doing.
1. They actually could have a planar geometry reflection right there in that specific area (and a mirrored light for it as well)
2. There could be a real time box projected cube map in that section
3. It could be some actual form of out-of-screen-space tracing (I doubt this though, since the game is doing lots of expensive stuff as is)
4. Something I have no idea about

Is it possible to create an "inner object reflection / occlusion" with the system in that area and not just one with the floor? For example, the specular reflection of a table leg on the table itself?

If it can do inner object reflections, and not just along a planar surface or reprojected from an IBL, then it would point toward 3 or... 4!
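For context, the cheapest version of the kind of bounce highlight being speculated about here would be something like the single moving point light described elsewhere in the thread for The Last of Us: trace the flashlight beam, then park a small light at the hit point, tinted by the surface it hit. A hypothetical sketch (traceRay() and these structs are made up, and this is not claiming to be ND's system):

```cpp
// Cheapest form of the bounce highlight being discussed: trace the flashlight
// beam and park a small point light at the hit point, tinted by the surface
// albedo.  Hypothetical sketch: traceRay() and these structs are made up.
struct Vec3 { float x, y, z; };

struct Hit { bool valid; Vec3 position; Vec3 normal; Vec3 albedo; };

// Stand-in for a raycast against scene geometry.
Hit traceRay(Vec3 /*origin*/, Vec3 /*dir*/) { return {false, {}, {}, {}}; }

struct PointLight { Vec3 position; Vec3 color; float radius; bool enabled; };

void updateFlashlightBounce(Vec3 flashlightPos, Vec3 flashlightDir,
                            Vec3 flashlightColor, PointLight& bounce)
{
    Hit hit = traceRay(flashlightPos, flashlightDir);
    bounce.enabled = hit.valid;
    if (!hit.valid) return;
    // Lift the light slightly off the surface so it spills onto nearby geometry,
    // and tint it by the surface it "bounced" off.
    bounce.position = {hit.position.x + hit.normal.x * 0.1f,
                       hit.position.y + hit.normal.y * 0.1f,
                       hit.position.z + hit.normal.z * 0.1f};
    bounce.color = {flashlightColor.x * hit.albedo.x,
                    flashlightColor.y * hit.albedo.y,
                    flashlightColor.z * hit.albedo.z};
    bounce.radius = 2.0f;   // arbitrary falloff radius for the sketch
}
```

Note that a single light like this would not split into the three separate highlights seen in the video, which is exactly why the fancier options above are on the table.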
 

HTupolev

Member
And earlier at 1:35 (link) I managed to get the light in a position where it strikes 3 separate surfaces with breaks in between, and the highlight from (what I'm assuming is a GI pointlight) separates into the 3 appropriate highlights. This is counter to what I originally suspected, which was that they had a simple pointlight representing the GI bounce. I think it may still be derived in some way from a simple pointlight (again, because the GI bouncelight doesn't actually cast shadows and passes straight through other objects) but with some extra effects going on to fake some kind of shadowing from the object it's in contact with? I'm not sure.
Something like GI spotlights spawned on reflection vectors, perhaps?

Thing is, it seems like the direct lighting is pretty grainy in that section too.

It's odd.
 

Javin98

Banned
Since we're on the subject of global illumination with Springfoot and Dictator on the case, I have a few questions. If I remember correctly, Crysis 3 and Ryse both use light probes for GI and Crytek regards this as real time GI. Would you guys agree that it can be referred to as real time? The recent Far Cry games also use light probes for GI. Are the solutions in any way similar? Also, besides the flashlight GI, Uncharted 4 uses light probes for GI in several levels, although it can look inaccurate and exaggerated. Anyway, my main question is, do light probes even count as dynamic global illumination?
 
#3 Some kind of very basic, largely inaccurate (good 'nuff) ray tracing crossed my mind. These sections with the flashlight and dynamic GI only ever happen in highly controlled, contained, and slow-paced exploration sections in generally small rooms. The player never has this flashlight or the dynamic GI in any large, effects heavy action sections.

I'm not convinced that's necessarily what they're doing since so many other places the effect looks extremely similar to the one used in the Last of Us (a pointlight that was moved around to the surface the flashlight beam was hitting and colored based on the underlying surface).

Another possibility is that I'm conflating two separate effects. There's the basic 'occlusion wrapping' (still not sure what's going on there since it doesn't fit with what I had suspected about the 180 degree cone for the GI light) that I call out in the screenshots with the white lines, and then there are the accurate floor reflections. I think they're tied together as part of the same dynamic GI system, but maybe the floor reflections are just the result of a very low res, dark/high contrast, blurry planar reflection capture?
Yeah, it is curious. I definitely need time to watch the video more. Will do that ASAP!
Since we're on the subject of global illumination with Springfoot and Dictator on the case, I have a few questions. If I remember correctly, Crysis 3 and Ryse both use light probes for GI and Crytek regards this as real time GI. Would you guys agree that it can be referred to as real time? The recent Far Cry games also use light probes for GI. Are the solutions in any way similar? Also, besides the flashlight GI, Uncharted 4 uses light probes for GI in several levels, although it can look inaccurate and exaggerated. Anyway, my main question is, do light probes even count as dynamic global illumination?
CryEngine 3.5.4 in Crysis 3 on PC uses LPV (Light Propagation Volumes) along with darkening and lightening probes. It uses the single-bounce, diffuse-only variant (the other extensions could technically be enabled in the editor or as part of the then CrySDK). Since Ryse, this system was deprecated (but could be enabled with tweaks) and instead replaced by manually placed probes that would dial up and down the intensity of ambient lighting. At the same time they added in SSDO colour sampling, so all indirect GI effects were no longer done via LPV at this point, and were pretty standard and largely inaccurate. Definitely not real time, and not even too awesome IMO in Ryse... it works well enough, though, I guess, to get it on xb1.

Only after Ryse did they debut SVOGI, which is yeah...real time in a pretty brutal way.
 

Javin98

Banned
CryEngine 3.5.4 in Crysis 3 on PC uses LPV (Light Propagation Volumes) along with darkening and lightening probes. It uses the single-bounce, diffuse-only variant (the other extensions could technically be enabled in the editor or as part of the then CrySDK). Since Ryse, this system was deprecated (but could be enabled with tweaks) and instead replaced by manually placed probes that would dial up and down the intensity of ambient lighting. At the same time they added in SSDO colour sampling, so all indirect GI effects were no longer done via LPV at this point, and were pretty standard and largely inaccurate. Definitely not real time, and not even too awesome IMO in Ryse... it works well enough, though, I guess, to get it on xb1.

Only after Ryse did they debut SVOGI, which is yeah...real time in a pretty brutal way.
Thanks, yeah, I heard of SVOGI, pretty expensive GI solution, right? Also, I know you provided some good answers, but mind responding to my questions about Far Cry? And just to clarify, I'm guessing you don't consider light probes to be real time?
 
Thanks, yeah, I heard of SVOGI, pretty expensive GI solution, right? Also, I know you provided some good answers, but mind responding to my questions about Far Cry? And just to clarify, I'm guessing you don't consider light probes to be real time?

Oh yeah, um sorry
lol

1 - Yeah, SVOGI if taken to the nth degree (multiple bounces + depth trace + specular) is pretty god damn expensive. Just barely hitting 60fps in a nigh empty scene on an OC'd Titan X @ 1920X1080 kind of expensive.

2 - I guess I would consider parts of the probe system in Far Cry to be "dynamic", but largely not at all. It does interpolate between different values for different times of day, which makes it at least able to work across all manner of different times (which baked maps cannot do). But it is all based off pre-generated data anyway, so it is definitely not "real time", and moving objects do not affect the GI at all. The PC version of FC 3 supported the SH probes sampling local point lights as well as the sun... which is interesting enough.

I guess the best way to decide whether GI is real time or not is the qualifier: "is the information feeding the eventual screen output of GI (how it is to be shaded, scattered, relit, whatever) generated at run time?" If yes, it probably makes sense to call it "real time", IMO.
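To illustrate the distinction, here is a sketch of the "baked per time of day, blended at run time" probe setup described for Far Cry above. The field names, keyframe count and layout are made up, not Ubisoft's actual data:

```cpp
// Sketch of "baked per time of day, blended at run time" ambient probes, in the
// spirit of what is described for Far Cry above.  Field names, keyframe count
// and layout are made up.
#include <array>

struct Vec3 { float r, g, b; };

// One probe: bounce irradiance baked offline for a handful of times of day.
struct Probe {
    std::array<Vec3, 6> keyframes;   // e.g. baked at 0h, 4h, 8h, 12h, 16h, 20h
};

// The run-time part: pick the two nearest keyframes and lerp between them.
Vec3 sampleProbe(const Probe& p, float hourOfDay)
{
    const int n = int(p.keyframes.size());
    float slot = hourOfDay / 24.0f * n;          // continuous keyframe index
    int i0 = int(slot) % n;
    int i1 = (i0 + 1) % n;
    float t = slot - float(int(slot));
    const Vec3& a = p.keyframes[i0];
    const Vec3& b = p.keyframes[i1];
    return {a.r + (b.r - a.r) * t,
            a.g + (b.g - a.g) * t,
            a.b + (b.b - a.b) * t};
}
// Dynamic objects can read the blended value, but they cannot write new bounce
// into it, which is why calling this "real time" GI feels like a stretch.
```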
 

Javin98

Banned
Oh yeah, um sorry
lol

1 - Yeah, SVOGI if taken to the nth degree (multiple bounces + depth trace + specular) is pretty god damn expensive. Just barely hitting 60fps in a nigh empty scene on an OC'd Titan X @ 1920X1080 kind of expensive.

2 - I guess I would consider parts of the probe system in Far Cry to be "dynamic", but largely not at all. It does interpolate between different values for different times of day, which makes it at least able to work across all manner of different times (which baked maps cannot do). But it is all based off pre-generated data anyway, so it is definitely not "real time", and moving objects do not affect the GI at all. The PC version of FC 3 supported the SH probes sampling local point lights as well as the sun... which is interesting enough.
Hey, thanks! So, I'm guessing SVOGI isn't viable for even next gen, in its current form, at least. Hopefully, we'll get a cheaper form of real time GI for next gen. As for Far Cry, hmm, yeah, I've never seen moving objects receive the light probes. So I guess light probes are generally not considered real time GI. Thanks again, I learnt so much from you, man!

Edit: Hmm, going by your edit, that seems kinda strict for "real time". The last time we talked about it, I recall even GI with pre calculated stuff but updated at run time still being considered real time.
 
Hey, thanks! So, I'm guessing SVOGI isn't viable for even next gen, in its current form, at least. Hopefully, we'll get a cheaper form of real time GI for next gen. As for Far Cry, hmm, yeah, I've never seen moving objects receive the light probes. So I guess light probes are generally not considered real time GI. Thanks again, I learnt so much from you, man!

To clarify: you do not necessarily need to do all the crazy multiple bounce, full resolution, indirect specular, etc. form of SVOGI... it is cheaper if you disable some of the things which start making it approach offline rendering quality, at which point it is manageable even on moderate GPUs.

The characters and dynamic objects in Far Cry receive bounce light from the environment as cast from the sun. But they themselves (characters, dynamic objects) cannot influence the indirect lighting, nor can the environment geometry itself change and further influence the indirect lighting, since all of the indirect lighting information from said environmental geometry and the sun was generated before the game even started.
edit:
Edit: Hmm, going by your edit, that seems kinda strict for "real time". The last time we talked about it, I recall even GI with pre calculated stuff but updated at run time still being considered real time.

Yeah it is probably a really strict interpretation of the wording. Definitely not the only way to look at it. But using one catch all qualifying phrase is always a bit problematic unless it is completely clear. It is probably best to just describe what is actually happening rather than just saying "real time" or "not real time" :D
 

Javin98

Banned
To clarify: you do not necessarily need to do all the crazy multiple bounce, full resolution, indirect specular, etc. form of SVOGI... it is cheaper if you disable some of the things which start making it approach offline rendering quality, at which point it is manageable even on moderate GPUs.

The characters and dynamic objects in Far Cry receive bounce light from the environment as cast from the sun. But they themselves (characters, dynamic objects) cannot influence the indirect lighting, nor can the environment geometry itself change and further influence the indirect lighting, since all of the indirect lighting information from said environmental geometry and the sun was generated before the game even started.
edit:


Yeah it is probably a really strict interpretation of the wording. Definitely not the only way to look at it. But using one catch all qualifying phrase is always a bit problematic unless it is completely clear. It is probably best to just describe what is actually happening rather than just saying "real time" or "not real time" :D
Wow, thanks for the further clarification on Far Cry's GI solution. Now I need to sit down and think through all of it. Man, wonder when I'll learn all this stuff in my course.
 

Frozone

Member
Yea, on the GI thing.

UC4 has some form of dynamic GI with the flashlight (had it back with TLOU).
Alien:Isolation does as well.
QB/DOOM is also doing it. But for me it's the SSDO that's factoring into the term "realtime GI". Anytime you are getting dynamic AO when not lit by a direct light source, to me, it's realtime if the object moves (even if in screen space).

While crude, and only working when the reflected objects are unoccluded and on screen, SSR is still considered dynamic GI.

Global Illumination could be split up into these tiers:

1) Indirect diffuse bounced light
2) Shadowing of that indirect bounced diffuse (i.e. AO)
3) Indirect reflections (i.e. SSR for now)
4) Shadowing of those reflections (AO)
5) Indirect refractions (never seen it done before)
6) Shadowing of those refractions (AO)
7) Caustics (also never really done right)

Also, this GI could be applied to light that propagates through media as well (i.e. smoke, hair). But haven't seen anything remotely close to that in realtime.
 

KKRT00

Member
It's not, though. Check out the video I linked that I took that shot from.

Here at 4:11 (timecode link) you can see I swing the illuminated surface off screen, but the highlight from the GI light remains. Also happens at 2:24.

And earlier at 1:35 (link) I managed to get the light in a position where it strikes 3 separate surfaces with breaks in between, and the highlight from (what I'm assuming is a GI pointlight) separates into the 3 appropriate highlights. This is counter to what I originally suspected, which was that they had a simple pointlight representing the GI bounce. I think it may still be derived in some way from a simple pointlight (again, because the GI bouncelight doesn't actually cast shadows and passes straight through other objects) but with some extra effects going on to fake some kind of shadowing from the object it's in contact with? I'm not sure.
That's interesting.
Also interesting is the fact that this scene has SSR disabled.
 
Hey, thanks! So, I'm guessing SVOGI isn't viable for even next gen, in its current form, at least. Hopefully, we'll get a cheaper form of real time GI for next gen. As for Far Cry, hmm, yeah, I've never seen moving objects receive the light probes. So I guess light probes are generally not considered real time GI. Thanks again, I learnt so much from you, man!

Edit: Hmm, going by your edit, that seems kinda strict for "real time". The last time we talked about it, I recall even GI with pre calculated stuff but updated at run time still being considered real time.

SVOTI is actually used in Kingdom Come: Deliverance, which is coming to consoles.
https://www.youtube.com/watch?v=PEfqtOYjolE
 

HTupolev

Member
Far Cry 3 falls under the category of precomputed radiance transfer. In the probes, you bake information about how light bounces through the scene.
The ultra-lightweight part of the system that's used on both PC and console is a very classical approach which can only handle an infinite-distance light distribution. So, if you represent the sky as a blurry "cubemap" of sorts, you can apply that to the scene and get its lit GI response for cheap.

Calculating how light bounces isn't done in real time, but calculating the bounce result is. So you could say that elements of it are real time.
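A minimal sketch of what that split looks like in classic precomputed radiance transfer terms, assuming a spherical harmonics basis (coefficient counts and names are illustrative, not Far Cry 3's actual format): the bounce behaviour is baked into a per-probe transfer vector offline, and the run-time "bounce result" is just a dot product against the current sky:

```cpp
// Minimal PRT-style relighting sketch: offline, bake how an infinitely distant
// light distribution bounces into each probe (a transfer vector); at run time,
// project the current sky into the same SH basis and take a dot product.
// Coefficient counts and names are illustrative, not Far Cry 3's actual data.
#include <array>

constexpr int kShCoeffs = 9;                 // 3-band spherical harmonics
using Sh = std::array<float, kShCoeffs>;

struct ProbePrt {
    Sh transferR, transferG, transferB;      // baked offline, one set per probe
};

// "Calculating the bounce result" at run time is just three dot products.
void relightProbe(const ProbePrt& probe, const Sh& skyRadiance,
                  float& outR, float& outG, float& outB)
{
    outR = outG = outB = 0.0f;
    for (int i = 0; i < kShCoeffs; ++i) {
        outR += probe.transferR[i] * skyRadiance[i];
        outG += probe.transferG[i] * skyRadiance[i];
        outB += probe.transferB[i] * skyRadiance[i];
    }
}
// Change the sky (time of day) and the GI response updates for free; move a
// wall and it does not, because the transfer vectors were baked around it.
```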
 

Frozone

Member
Far Cry 3 falls under the category of precomputed radiance transfer. In the probes, you bake information about how light bounces through the scene.
The ultra-lightweight part of the system that's used on both PC and console is a very classical approach which can only handle an infinite-distance light distribution. So, if you represent the sky as a blurry "cubemap" of sorts, you can apply that to the scene and get its lit GI response for cheap.

Calculating how light bounces isn't done in real time, but calculating the bounce result is. So you could say that elements of it are real time.

Meh. I'd only consider it "realtime" when the data can be dynamically created per frame.

Also, how are you game devs handling environment lighting? Are you always just using a directional light source and a cube map for when the object is in shadow? In our offline rendering, we use directional lights but oftentimes use an HDR image that already includes the sun baked into the image, and just importance sample that image, which gives us nice soft/colored shadows -- we consider that direct lighting, however, and not indirect.
 
That's interesting.
Also interesting is the fact that this scene has SSR disabled.

The engine is just optimized to hit the 30fps mark 99.9% of the time. The IQ of the character models is not consistent at all. Some chapters have more shaders/effects than others. Here's my example:
ztBtMR.png

5Qx4wZ.png


You see nice skin shader/sss on Sam here, but a couple minutes later you zoom in and this happens.
kPdFf4.png

FClD2e.png


Here is an example with Drake
yIknEk.png

SSS is reduced heavily (in most of the levels it looks disabled to me) and the shaders look worse. The engine is seriously impressive and has amazing scale; it's not open world, but it's not linear either, and it gives lots of freedom. The draw distance is amazing too.
ypkbCS.png


But it's seriously annoying that on some levels you pull the camera close to drake and he looks like this:
ma1PsP.png


I just hope that maybe with the PS4K patch the game could look like the cutscenes, which sport better lighting on characters and SSS.
 

Javin98

Banned
Far Cry 3 falls under the category of precomputed radiance transfer. In the probes, you bake information about how light bounces through the scene.
The ultra-lightweight part of the system that's used on both PC and console is a very classical approach which can only handle an infinite-distance light distribution. So, if you represent the sky as a blurry "cubemap" of sorts, you can apply that to the scene and get its lit GI response for cheap.

Calculating how light bounces isn't done in real time, but calculating the bounce result is. So you could say that elements of it are real time.
Thanks, this makes a bit more sense now why some refer to light probes as real time GI.

Gameplay models do still use SSS, though. However, the SSS in cutscene models is considerably ramped up, but from comparison shots, it is generally accepted that the gameplay model and cutscene model have the same geometry count. The gameplay model certainly doesn't hold up well in direct lighting. In ambient lighting conditions, it holds up much better, although your example of Sam isn't the best one.
 

Noobcraft

Member
Thanks, this makes a bit more sense now why some refer to light probes as real time GI.


Gameplay models do still use SSS, though. However, the SSS in cutscene models is considerably ramped up, but from comparison shots, it is generally accepted that the gameplay model and cutscene model have the same geometry count. The gameplay model certainly doesn't hold up well in direct lighting. In ambient lighting conditions, it holds up much better, although your example of Sam isn't the best one.
In the pics posted above the cutscene model looks to have more geometry, you can see a sharp corner on Drake's ear in the gameplay model that looks absent in the cutscene model. Unless it's just better hidden in cutscenes.
 
Thanks, this makes a bit more sense now why some refer to light probes as real time GI.


Gameplay models do still use SSS, though. However, the SSS in cutscene models is considerably ramped up, but from comparison shots, it is generally accepted that the gameplay model and cutscene model have the same geometry count. The gameplay model certainly doesn't hold up well in direct lighting. In ambient lighting conditions, it holds up much better although your example of Sam isn't the best one.

Why is that though? I've seen this happen in many games: in direct lighting the models look completely off, but go somewhere else with less light and it looks like a completely different model.
 

HTupolev

Member
Meh. I'd only consider it "realtime" when the data can be dynamically created per frame.
When what data can be dynamically created per-frame? The transfer behavior isn't calculated in real time, but I wasn't claiming it was. It is factually the case that Far Cry 3 produces its irradiance volume data dynamically as time progresses and as the player moves throughout the map.

Also, how are you game devs handling environment lighting?
I think I also tried to clarify this a couple weeks ago, but: I've never done any professional game development.

Are you always just using a directional light source and a cube map for when the object is in shadow?
Am I misreading this? We're just now talking about Far Cry 3's bounce irradiance volume data, which is an example of further information used to light things. Games with fully baked irradiance also frequently apply it to dynamic objects.
 

Javin98

Banned
Why is that though? I've seen this happen in many games: in direct lighting the models look completely off, but go somewhere else with less light and it looks like a completely different model.
This is mostly because when direct lighting is shone on the character's face, it looks flat due to the lack of self shadowing. On the other hand, in ambient lighting, the flaw is less evident because the lighting is not shining directly at the character model.

In the pics posted above the cutscene model looks to have more geometry, you can see a sharp corner on Drake's ear in the gameplay model that looks absent in the cutscene model. Unless it's just better hidden in cutscenes.
Hmm, interesting. I will take a closer look at the pics, but the general consensus was that the gameplay and cutscene models are identical in geometry.
 
In the pics posted above the cutscene model looks to have more geometry, you can see a sharp corner on Drake's ear in the gameplay model that looks absent in the cutscene model. Unless it's just better hidden in cutscenes.
Trust me, I've spent hours comparing endlessly, and the character models, even for Nadine, Rafe... everyone, are 1:1 the same; the only difference is a much better SSS implementation and better shaders/lighting during cutscenes. That's about all the difference I can spot.
K4Wc5L.png

XEjb3i.png


For example in this level the gameplay model of Sam is very close to the cutscene model regarding shaders and sss.
 

KKRT00

Member
The engine is just optimized to hit the 30fps mark 99.9% of the time. The IQ of the character models is not consistent at all. Some chapters have more shaders/effects than others. Here's my example:

I don't think this is dynamic depending on the engine's workload; I think ND just sets it per level.
 
Trust me, I've spent hours comparing endlessly, and the character models, even for Nadine, Rafe... everyone, are 1:1 the same; the only difference is a much better SSS implementation and better shaders/lighting during cutscenes. That's about all the difference I can spot.
K4Wc5L.png

XEjb3i.png


For example in this level the gameplay model of Sam is very close to the cutscene model regarding shaders and sss.

Such situations like the ones you posted above in your previous post are really weird since most of the time the gameplay shaders and SSS are top notch high quality and exactly like on the cutscenes: http://www.neogaf.com/forum/showpost.php?p=203529345&postcount=7001

Here are some little details about the characters' rendering from the artists themselves, until more details at SIGGRAPH:

http://magazine.artstation.com/2016/05/uncharted-4-art-blast/?sf26483992=1

https://www.artstation.com/artwork/lNk0z

https://www.artstation.com/artwork/6kEmV

https://www.artstation.com/artwork/JOY4A

https://www.artstation.com/artwork/4lGm8
 

KKRT00

Member
Such situations like the ones you posted above in your previous post are really weird since most of the time the gameplay shaders and SSS are top notch high quality and exactly like on the cutscenes: http://www.neogaf.com/forum/showpost.php?p=203529345&postcount=7001
Yeah no, and posting one example doesn't disprove it when like 90% of the shots in the console thread show lower quality shading on characters in gameplay.

PS: Nothing wrong with it, that's a completely fine trade-off.
 
Yeah no, and posting one example doesn't disprove it when like 90% of the shots in the console thread show lower quality shading on characters in gameplay.

PS: Nothing wrong with it, that's a completely fine trade-off.

Yeah, there are some instances (mostly in night levels) where you can get really close to cutscene quality. I posted this over at B3D:
First: Cutscene
Second: Gameplay
hudqtpdtkkw.png

rynyty8sjl8.png


Upon further inspection of The Order: 1886, I can comfortably say that Uncharted completely overpowers it in the character model department. There are polygon edges and low-res textures on the characters in The Order: 1886; Uncharted does much better here. However, once we get to the lighting, especially during gameplay, I can say that The Order: 1886 is in a whole different league. The lighting in Uncharted can get embarrassingly flat during gameplay.

theorder_1886_2016052byjhr.png

fo7phhf6ja5.png

khn0tz0ek4x.png

h0c2szy5jlb.png


I'll be doing an npc comparison of uncharted 4 later.
 

nOoblet16

Member
It's not like UC4 models don't have polygon edges visible, I can see visible edges on Nadine's vest for instance and other places. I see nothing in UC4 that tells me it "overpowers" the ones in TO1886...certainly not in the pictures you posted. TO1886 also has full on cloth physics for the characters.

The engine is just optimized to hit the 30fps mark 99.9% of the time. The IQ of the character models is not consistent at all. Some chapters have more shaders/effects than others. Here's my example:

You see nice skin shader/sss on Sam here, but a couple minutes later you zoom in and this happens.

Here is an example with Drake
SSS is reduced heavily (in most of the levels it looks disabled to me) and the shaders look worse. The engine is seriously impressive and has amazing scale; it's not open world, but it's not linear either, and it gives lots of freedom. The draw distance is amazing too.

But it's seriously annoying that on some levels you pull the camera close to drake and he looks like this:

I just hope that maybe with the PS4K patch the game could look like the cutscenes, which sport better lighting on characters and SSS.

This is just a difference of direct light in gameplay vs soft lighting in cutscenes that are placed in a way to bring out the best in character. Try to find a soft light source in gameplay to see how close you can get. Nothing to do with IQ, nothing to do with the character model themselves.

You'll notice similar stuff in real life too, although not as drastic a difference, of course. For instance, the umbrella lighting that professional photographers use: it's there to give the light that falls on their subjects a more diffuse look with softer self-shadows.
 
This is just a difference of direct light in gameplay vs soft lighting in cutscenes that are placed in a way to bring out the best in character. Try to find a soft light source in gameplay to see how close you can get. Nothing to do with IQ, nothing to do with the character model themselves.

You'll notice similar stuff in real life too, although not as drastic a difference, of course. For instance, the umbrella lighting that professional photographers use: it's there to give the light that falls on their subjects a more diffuse look with softer self-shadows.


Exactly. Here's a gameplay and cutscenes comparison that shows no difference: https://forum.beyond3d.com/threads/...cuss-much-spoilers.57865/page-39#post-1916046

while in that same area or location, some managed to take pics where the gameplay models look not quite as well lit.



Is it real hair or is it sprite-cards? That's the real question. Only game I've ever seen real hair is Tomb Raider on PC.

I think the hair is created by snapping polygons to the model. These serve as hair cards. Alpha transparency textures are used to draw stubble.

According to the Senior Shading Artist at Naughty Dog, fur and stubble are not using alpha cards or displacement; they are using similar tech called parallax, combined with "shell" tech.

Here is her answer in the comments: https://www.artstation.com/artwork/4lGm8
 
It's not like UC4 models don't have polygon edges visible, I can see visible edges on Nadine's vest for instance and other places. I see nothing in UC4 that tells me it "overpowers" the ones in TO1886...certainly not in the pictures you posted. TO1886 also has full on cloth physics for the characters.



This is just a difference of direct light in gameplay vs soft lighting in cutscenes that are placed in a way to bring out the best in character. Try to find a soft light source in gameplay to see how close you can get. Nothing to do with IQ, nothing to do with the character model themselves.

You'll notice similar stuff in real life too, although not as drastic a difference, of course. For instance, the umbrella lighting that professional photographers use: it's there to give the light that falls on their subjects a more diffuse look with softer self-shadows.
I meant skin definition/detail actually. You'll notice that in the order 1886 you are never meant to zoom in on characters, at the normal distance they look stunning!
theorder_1886_201605223kvu.png

theorder_1886_201605227js7.png


But if you zoom in you can see it's clear Uncharted 4 has the upper hand here. Like i said the order is unmatched in the lighting, post process, and cloth physics(like you mentioned) department.

uncharted4_athiefsendggjjw.png

theorder_1886_2016052z2jro.png

uncharted4_athiefsendkckj8.png

theorder_1886_20160525wkb5.png

theorder_1886_20160522sk6r.png

theorder_1886_2016052wwjqw.png

uncharted4_athiefsendlfkf1.png
 

nOoblet16

Member
I meant skin definition/detail actually. You'll notice that in the order 1886 you are never meant to zoom in on characters, at the normal distance they look stunning!

But if you zoom in you can see it's clear Uncharted 4 has the upper hand here. Like i said the order is unmatched in the lighting, post process, and cloth physics(like you mentioned) department.

I do remember seeing some photomode screens of TO1886 with the camera close to Gallahad under softer lighting conditions that looked a lot better.
Whose fingers are those supposed to be, by the way? Ingraine?
 

Frozone

Member
When what data can be dynamically created per-frame? The transfer behavior isn't calculated in real time, but I wasn't claiming it was. It is factually the case that Far Cry 3 produces its irradiance volume data dynamically as time progresses and as the player moves throughout the map.

Sorry about the misunderstanding HTupolev.

I'm just speaking to the word "dynamic" with my own warped definition. I'd call even baking out data (even if it's some world-space computation) non-dynamic. I would consider actually casting rays to compute color information (several times/sec) on a per-pixel basis dynamic. But this is just my own personal opinion. I get that retrieving data (that's already baked or recomputed every few secs) during animation is considered dynamic.
 

wesly999

Banned
According to the Senior Shading Artist at Naughty Dog, fur and stubble are not using alpha cards or displacement; they are using similar tech called parallax, combined with "shell" tech.

Yea, it seems like a much easier technique (i.e. overlapping shells that use a PRNG to pick a pre-masked texture from which hairs grow out of the surface) than sprite cards, although the approach seems similar. Among the drawbacks I see, getting rid of aliasing would be high on the list. Also, the diffuse wrap lighting (evaluating NdotL beyond the normal range) doesn't really add self-shadowing to the hair. I'd rather see ray-tracing for shadows, or some form of occlusion to give the hair more shape (especially when the hair is indirectly lit). Obviously getting it to react to forces, gravity, etc. would be a plus.
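For anyone curious what the "shell" half of that usually looks like, here is a hypothetical sketch (hairDensityTexture() is a stand-in for an authored or PRNG-seeded density map, and none of this is Naughty Dog's actual shader): the mesh is drawn several times, each pass pushed a little further out along the normal, and each pass keeps only the texels dense enough to still have hair at that height:

```cpp
// Sketch of the "shell" half of shell/parallax fur.  hairDensityTexture() is a
// hypothetical stand-in for an authored or PRNG-seeded density map; none of
// this is Naughty Dog's actual shader.
#include <cmath>

// Stand-in density lookup: in a real asset this would be a texture fetch.
float hairDensityTexture(float u, float v) {
    return 0.5f + 0.5f * std::sin(u * 431.0f) * std::cos(v * 917.0f);
}

struct ShellFragment { bool keep; float aoDarken; };

// Evaluated per fragment, per shell pass (shellIndex = 0 .. shellCount-1).
// The mesh is drawn shellCount times, each pass displaced a bit further along
// the vertex normal.
ShellFragment shadeFurShell(float u, float v, int shellIndex, int shellCount)
{
    float height = (shellIndex + 1.0f) / shellCount;   // 0 near skin, 1 at tips
    float density = hairDensityTexture(u, v);
    ShellFragment f;
    // A texel survives this shell only if its density exceeds the shell height,
    // so fewer texels survive toward the tips and the "hairs" taper.
    f.keep = density > height;
    // Cheap fake self-occlusion: darken strands near the skin, one way to get
    // back some of the shaping that wrap lighting alone doesn't provide.
    f.aoDarken = 0.4f + 0.6f * height;
    return f;
}
// The aliasing mentioned above is the classic drawback: at glancing angles you
// can see through the gaps between shells.
```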
 