
Game Graphics Technology | 64bit, procedural, high-fidelity debating

lazygecko

Member
Here is a diffuse texture I made, with its corresponding height map:

diffusem0qi7.png
heightmapn4obq.png

Here is a mesh it is applied on, without and with POM active:


And as you can see, when the angle isn't optimal, the illusion of depth breaks:

 
There is no real cure for pop-in; maybe one can mitigate it a bit with a Rockstar-style dithering effect. Will that suffice for you? Mind though that you'll have to pay for it with other kinds of visual artifacts, like mentioned here https://www.reddit.com/r/GrandTheft...is_there_a_name_for_this_graphical_thing_yet/

I know there are sliders you can adjust, or you can go into some games' config files, but damn, this freakin' blows. Hopefully with more research we can develop techniques that'll eliminate pop-in.

We also need better texture streaming for normal maps and shit popping up.
 
I know there are sliders you can adjust, or you can go into some games' config files, but damn, this freakin' blows. Hopefully with more research we can develop techniques that'll eliminate pop-in.

We also need better texture streaming for normal maps and shit popping up.

I don't know that it's something you can "eliminate." Pop-in is a tradeoff among a multitude of tradeoffs that have to occur to get a game out. If you were to come up with a more aggressive LOD system that would have smaller intervals between the detail levels, then you'd be spending a lot of time swapping resources which eats into your memory budget, disk polling time, and CPU time. When it comes down to it, having an enemy react to your actions in six different ways as opposed to three is more important for the game actually being fun.
 

lazygecko

Member
I find the prominence of the dithered fade-in quite weird. I first saw it in Just Cause 2 in 2010, and it seems to just be getting more popular since then. I thought dithering was something we left behind in the '90s.

What's worse is that in Wolfenstein TNO/idtech 5, the actual lighting itself seems to be dithered at times. It looks really off-putting and I have no idea why they thought it would be a good idea to implement.
 

HTupolev

Member
I find the prominence of the dithered fade-in quite weird. I first saw it in Just Cause 2 in 2010, and it seems to just be getting more popular since then. I thought dithering was something we left behind in the '90s.
Fading between LODs is sort of a tricky issue.

Doing a simple cross fade between the two LODs with alpha transparency wouldn't be perfect, because the combination of the two would be transparent during the transition (two transparent objects blended together is still a transparent object). What you want is for the combination of the two to be opaque at pixels touched by both LODs and use the cross fade transparency values at pixels touched by only one of the LODs; this would technically be doable, but would require some setup prior to the blend into the main framebuffer, and you might still wind up with some depth oddities regarding how your cross-fading LOD objects interact with other transparent stuff and post-processing.

Fading one in and then removing the other once the new one is fully opaque could be done, but you'd still get a pop unless the new LOD was strictly larger than the old one. You could make this work seamlessly-ish by putting out a requirement that the low LOD meshes always fully enclose the high LOD meshes or vice versa, but that's sort of a bizarre requirement that would have significant implications on how things are authored.

With a dithered fade-in, the old LOD fades out with one dither mask while the new LOD fades in with an inverted dither mask, essentially switching the objects out pixel by pixel. This avoids the oddities of transparency blending approaches while also being cheaper to render. Not having to worry about what happens when an object switches between transparent and opaque also means that it plays nicely with more lighting/rendering/whatever approaches.
Obviously the penalty is that the dither looks like dither.
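Roughly, the per-pixel switch looks like this in GLSL -- a minimal sketch, with uFade as a hypothetical per-object fade factor (the old LOD runs the same test with 1.0 - uFade):

uniform float uFade; // 0 -> 1 over the transition

// 4x4 Bayer matrix, normalized to [0, 1).
float bayer4x4(ivec2 p)
{
    const int mat[16] = int[16]( 0,  8,  2, 10,
                                12,  4, 14,  6,
                                 3, 11,  1,  9,
                                15,  7, 13,  5);
    return float(mat[(p.y & 3) * 4 + (p.x & 3)]) / 16.0;
}

void main()
{
    // Kill the fragment wherever the fade hasn't passed this pixel's
    // threshold yet; the inverted mask on the other LOD covers it, so
    // every pixel stays opaque throughout the transition.
    if (uFade < bayer4x4(ivec2(gl_FragCoord.xy)))
        discard;
    // ...regular opaque shading continues here...
}

Since it's just a discard, it works with deferred shading, depth prepasses, and shadow maps without any of the sorting/blending headaches.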
 

ss_lemonade

Member
I want developers to focus more on eliminating things like pop-in. This is currently my biggest annoyance. Things were smoothed out a lot during the PS2/XB/GC era, and then BAM, it came back with a vengeance last gen and is still happening this generation, even on a powerful PC. Whether it's normal maps popping into view or LOD popping a few feet from your character. What's causing all these problems?

Not a fan of this either. It's easily noticeable if you have the Master Chief Collection. Halo 1, for instance, doesn't have any model swapping, but switch to the remastered version and you'll notice models swapping in and out just a few feet away.
 

Skinpop

Member
I find the prominence of the dithered fade-in quite weird. I first saw it in Just Cause 2 in 2010, and it seems to just be getting more popular since then. I thought dithering was something we left behind in the '90s.

What's worse is that in Wolfenstein TNO/idtech 5, the actual lighting itself seems to be dithered at times. It looks really off-putting and I have no idea why they thought it would be a good idea to implement.
Dithering is cheap. I personally like it. Stuff like this is pretty much porn to me.
 

Frozone

Member
There are a lot of things I've been wondering about, like HBAO and SSAO and how they approach shading the scene, but one thing I've been wondering about for a long time is how Parallax Occlusion works.

So, could anyone explain how Parallax Occlusion Mapping works and how it compares to Tessellation?

What I hate about POM is the texture stretching and the obvious breakdown in parallax at extreme viewing angles. It would be much better if games could just support displacement everywhere. You'd get the shadowing, occlusion and AO automagically.
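For context, my current understanding is that the core of POM is a ray march through a height map in the pixel shader -- something like this minimal GLSL sketch (uniform names are placeholders, and real implementations add a refinement step between the last two samples):

uniform sampler2D uHeightMap;
uniform float uHeightScale; // e.g. 0.05

// viewDirTS is the view direction in tangent space.
vec2 parallaxOcclusionUV(vec2 uv, vec3 viewDirTS)
{
    const int numSteps = 16;
    float layerStep = 1.0 / float(numSteps);
    // How far the UV shifts per step; note the divide by viewDirTS.z:
    // at grazing angles z -> 0, the steps get huge, and the effect
    // falls apart -- which is exactly the stretching I'm describing.
    vec2 uvStep = viewDirTS.xy / viewDirTS.z * uHeightScale * layerStep;

    float rayHeight = 1.0;
    float mapHeight = textureLod(uHeightMap, uv, 0.0).r;
    // March down through the height field until the ray dips below it.
    for (int i = 0; i < numSteps && rayHeight > mapHeight; ++i)
    {
        uv -= uvStep;
        rayHeight -= layerStep;
        mapHeight = textureLod(uHeightMap, uv, 0.0).r;
    }
    return uv; // sample the diffuse/normal maps at the displaced UV
}

Whereas tessellation subdivides the mesh and displaces real vertices, which is why the shadowing and occlusion come for free there.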
 

Smash88

Banned
Splinter Cell Conviction did it, even for the text used for mission objectives, which was projected onto environment objects. The video projection that you see in the second image is not from an in-world projector but is used to show a flashback; still, it's the same technique as a projector.



I am breaking a rule here but wanted to show an example of how it exists in other games.

The guy(s) who thought of displaying your objective in a neat projector style should get a pay raise or some sort of love. That method of delivery blew my mind at the time.

If you are out there, Splinter Cell Conviction projector objective dude(s): thank you.
 

Durante

Member
About the LoD pop-in discussion: it's not hard in theory to eliminate pop-in -- you just have to fade in your higher LoD model when the differences are still sub-pixel sized. In practice, that's just not feasible on current hardware.

Regarding how the crossfade is actually accomplished, I wonder if the temporal accumulation buffers (used for modern TAA) could somehow be leveraged.
 

Knurek

Member
Can anyone shed some light on why SGSSAA is so good at eliminating temporal aliasing?

I've been playing Tales of Zestiria, and the FXAA option the game has is terrible when it comes to distant objects - the game shimmers so much it's almost unplayable (and certainly annoying). I've also tried downsampling from 4K to 1080p, and while that's better, it's still annoying.

Now, just a simple SGSSAAx4 is enough to get rid of all the shimmer, while having a performance cost... hmm, I want to say between 3K and 4K. Good enough for my single 970 to handle most of the time.
 
Can anyone shed some light on why SGSSAA is so good at eliminating temporal aliasing?

I've been playing Tales of Zestiria, and the FXAA option the game has is terrible when it comes to distant objects - the game shimmers so much it's almost unplayable (and certainly annoying). I've also tried downsampling from 4K to 1080p, and while that's better, it's still annoying.

Now, just a simple SGSSAAx4 is enough to get rid of all the shimmer, while having a performance cost... hmm, I want to say between 3K and 4K. Good enough for my single 970 to handle most of the time.

SGSSAA is supersampling (downsampling). It just uses a different pattern than straightforward downsampling.

First of all, temporal aliasing happens thanks to subpixel changes between frames. Suppose texture pixels (texels) A and B are both within the sample space of screen pixel X, but texel A is just a bit closer to the center of X than B, then A is going to be sampled. If the model stays in the same place (relative to the camera/perspective) or if it moves an integer amount, then texel A will again be sampled. If it moves a non-integer amount, then there's a chance that texel B might be sampled instead of A. So you get flickering between the different texels, often also resulting in a moire pattern.

Straightforward downsampling says: sample the texel for each of the subpixels W, X, Y, and Z, then blend them together. Each of those subpixels will still have flickering, especially if each subpixel happens to sample the same texel, but it will be reduced by some margin as well as smoothed out.

SGSSAA, however, does the inverse. It says: sample the texels A, B, C, and D and then blend them together for pixel X. For this reason SGSSAA can sometimes look blurry. But you're less likely to get flickering, because none of the texels are ever being missed.
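To make the pattern difference concrete, here's a shadertoy-style GLSL sketch (the checkerboard is just a stand-in scene; the offsets are the classic rotated-grid pattern, one flavor of sparse grid):

// Stand-in scene: a fine checkerboard that shimmers badly in motion.
vec3 shade(vec2 uv)
{
    float c = mod(floor(uv.x * 40.0) + floor(uv.y * 40.0), 2.0);
    return vec3(c);
}

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // Rotated-grid (RGSS) offsets in pixels: unlike an ordered 2x2
    // grid, these give four distinct x positions and four distinct y
    // positions, so near-horizontal/vertical edges resolve better.
    const vec2 offs[4] = vec2[4](vec2(-0.375, -0.125), vec2(0.125, -0.375),
                                 vec2(0.375, 0.125), vec2(-0.125, 0.375));
    vec3 col = vec3(0.0);
    for (int i = 0; i < 4; ++i)
        col += shade((fragCoord + offs[i]) / iResolution.xy);
    fragColor = vec4(col * 0.25, 1.0);
}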
 
SGSSAA is supersampling (downsampling). It just uses a different pattern than straightforward downsampling.

First of all, temporal aliasing happens thanks to subpixel changes between frames. Suppose texture pixels (texels) A and B are both within the sample space of screen pixel X, but texel A is just a bit closer to the center of X than B, then A is going to be sampled. If the model stays in the same place (relative to the camera/perspective) or if it moves an integer amount, then texel A will again be sampled. If it moves a non-integer amount, then there's a chance that texel B might be sampled instead of A. So you get flickering between the different texels, often also resulting in a moire pattern.

Straightforward downsampling says: sample the texel for each of the subpixels W, X, Y, and Z, then blend them together. Each of those subpixels will still have flickering, especially if each subpixel happens to sample the same texel, but it will be reduced by some margin as well as smoothed out.

SGSSAA, however, does the inverse. It says: sample the texels A, B, C, and D and then blend them together for pixel X. For this reason SGSSAA can sometimes look blurry. But you're less likely to get flickering, because none of the texels are ever being missed.

Short answer is that sparse grid (rotated grid is even better) sampling is far superior to the ordered grid sampling used in driver downsampling and in-game resolution scaling.
 

Human_me

Member
Here is a diffuse texture I made, with its corresponding height map:

Here is a mesh it is applied on, without and with POM active:

And as you can see, when the angle isn't optimal, the illusion of depth breaks:

POM can be hit and miss; I personally recommend it more on ground materials than walls.
If you want good detail when the camera gets close up, try adaptive tessellation instead.
It works wonders when applied correctly.
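In case anyone wonders what "adaptive" means in practice: it usually just means picking the subdivision level per patch from camera distance. A minimal GLSL tessellation control shader sketch (uCameraPos and the level mapping are placeholders):

#version 400 core
layout(vertices = 3) out;
uniform vec3 uCameraPos;

void main()
{
    // Pass the patch's vertices through untouched.
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
    if (gl_InvocationID == 0)
    {
        // Closer patches get more subdivision, clamped to a sane range.
        float dist = distance(uCameraPos, gl_in[0].gl_Position.xyz);
        float level = clamp(64.0 / dist, 1.0, 64.0);
        gl_TessLevelInner[0] = level;
        gl_TessLevelOuter[0] = level;
        gl_TessLevelOuter[1] = level;
        gl_TessLevelOuter[2] = level;
    }
}

The matching evaluation shader then displaces the new vertices with the same height map POM would have used.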
 
Short answer is that sparse grid (rotated grid is even better) sampling is far superior to the ordered grid sampling used in driver downsampling and in-game resolution scaling.

Right. The relatively irregular pattern increases the coverage.


POM can be hit and miss; I personally recommend it more on ground materials than walls.
If you want good detail when the camera gets close up, try adaptive tessellation instead.
It works wonders when applied correctly.

Agreed. It helps improve the look of gravel and small rocks, but something like the brick wall sample has more cons than pros. Mainly the high-contrast edges with stretched texels all look bad to me. A good normal map goes further than POM most of the time.
 

Sakujou

Banned
Noob question:

Is there a wiki or list of every effect ever invented?

I remember a time, when I was a school kid, throwing words like Mode 7 and blast processing around... Z-buffer and Gouraud shading and stuff like that sounded like they came from another universe. Hope there is something where I can look up every effect ever invented, with crazy names and a description that will be understood by the standard people out there.

These days I came across words like chromatic aberration and realistic filtering or something (the stuff which was first fully used correctly in Battlefront)...

But I'm actually not sure if I understood them in the right way...

A picture with the effect switched on/off is the best way to show this, similar to how Lens of Truth/Digital Foundry do it...
 

Coll1der

Banned
Noob question:
Is there a wiki or list of every effect ever invented?

Wikipedia has almost everything covered at least to some degree. Implementations vary from place to place and are often proprietary, so you won't be seeing those too often. There is a good book series called GPU Gems, published by Nvidia every now and then; it has a lot of info on the most popular techniques and their respective open-source implementations. To get at the really cool stuff, though, you'll either need to research it professionally, reading various university papers, or actually work on some of it every day as your day job.
 

Dunkley

Member
lazygecko, AntiAlias, squidyj, thank you so much for the explanations!

I feel like I finally understand how it works a bit better, and it's all thanks to you! Thank you so much.

I know I said thanks three times (and I guess a fourth time with this), but I can't tell you how great it feels to understand POM now.
 
POM can be hit and miss; I personally recommend it more on ground materials than walls.
If you want good detail when the camera gets close up, try adaptive tessellation instead.
It works wonders when applied correctly.

I personally think POM these days is best used for negative surface detail (i.e. cutouts in a wood surface, not the raised areas), so that you almost never have problems with it cresting at sharp angles or flattening. That would then work for ground and walls or almost any surface shape.
 

Raist

Banned
I thought this was a nice touch in ROTR. I don't think I've seen a projector cast an image onto a character before in a game. It seems fancy.
screenshot-original-6fnsgm.png

I think there's something like that in Until Dawn, but I can't remember for sure.
It's a nice effect yeah, although I don't know if it's a tech feat or something.
 
You finally went and did it! Good luck... although do you think you could add some carriage returns between the paragraphs in the OP, as it's a bit of a mess on mobile?

OK, n00b question - what is a shader?

A generic term for a small program that runs on the GPU. They come in many different types. For a simple render you would have a vertex shader, which would determine the placement of the vertices in the scene, and a fragment shader that would determine the colors of the pixels across the faces of polygons.

There are geometry shaders, tessellation shaders, and compute shaders, among others. I'm using OpenGL names here; terminology can vary.
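To make that concrete, here's about the smallest complete GLSL pair you can write -- a minimal sketch that just draws a flat-colored mesh:

// Vertex shader: decides where each vertex lands on screen.
#version 330 core
layout(location = 0) in vec3 aPosition;
uniform mat4 uModelViewProjection;
void main()
{
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
}

// Fragment shader: decides the color of every pixel the triangles cover.
#version 330 core
uniform vec3 uColor;
out vec4 fragColor;
void main()
{
    fragColor = vec4(uColor, 1.0);
}

Everything fancier - lighting, POM, SSR - is ultimately just more math inside programs like these.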
 

NBtoaster

Member
I need to ask why screen space reflections have become so popular. I understand they're more automated/artist-friendly(?) than older solutions, but there is usually so much artefacting it's hard to believe the positives outweigh the negatives. Dying Light had a particularly bad implementation.

A recent example:
http://abload.de/img/justcause32015-12-0900dqfd.png

Obvious artefacts on the edge of the screen, around the character, and between the land and water. And reflections disappear when the object is off screen. Sometimes it looks good just on wet surfaces, but water really needs something else.
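As I understand it, the core of SSR is a march through the depth buffer, which makes it obvious where those artifacts come from. A minimal GLSL sketch with placeholder uniforms, not any particular game's implementation:

uniform sampler2D uDepthTex; // scene depth from the G-buffer
uniform sampler2D uColorTex; // lit scene color

// originSS/reflectDirSS: ray origin and reflected direction, already
// projected into screen space (xy in [0,1], z = depth).
vec3 screenSpaceReflect(vec3 originSS, vec3 reflectDirSS)
{
    vec3 p = originSS;
    for (int i = 0; i < 64; ++i)
    {
        p += reflectDirSS * 0.01;
        // Ray left the screen: there's simply no data to reflect,
        // hence the screen-edge fade and disappearing objects.
        if (p.x < 0.0 || p.x > 1.0 || p.y < 0.0 || p.y > 1.0)
            return vec3(0.0);
        // Ray went behind the stored depth: call it a hit.
        if (p.z > texture(uDepthTex, p.xy).r)
            return texture(uColorTex, p.xy).rgb;
    }
    return vec3(0.0); // no hit within the step budget
}

Anything occluded or off screen was never rendered into those textures, so the reflection physically can't contain it -- hence the holes around the character and at the screen edges.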
 

Durante

Member
I need to ask why screen space reflections have become so popular. I understand they're more automated/artist-friendly(?) than older solutions, but there is usually so much artefacting it's hard to believe the positives outweigh the negatives. Dying Light had a particularly bad implementation.
Like every time something that is far from ideal gets really popular (like FXAA), the answer is usually that it's really cheap in terms of performance and considered "good enough" in terms of quality (given that low performance requirement).
 

Caayn

Member
I personally think POM these days is best used for negative surface detail (i.e. cutouts in a wood surface, not the raised areas), so that you almost never have problems with it cresting at sharp angles or flattening. That would then work for ground and walls or almost any surface shape.
Wouldn't that only work as long as there's no negative surface detail on the edges?
 

Kezen

Banned
I need to ask why screen space reflections have become so popular. I understand they're more automated/artist-friendly(?) than older solutions, but there is usually so much artefacting it's hard to believe the positives outweigh the negatives. Dying Light had a particularly bad implementation.

A recent example:
http://abload.de/img/justcause32015-12-0900dqfd.png

Obvious artefacts on the edge of the screen, around the character, and between the land and water. And reflections disappear when the object is off screen. Sometimes it looks good just on wet surfaces, but water really needs something else.

Everything is here:
http://bartwronski.com/2014/01/25/the-future-of-screenspace-reflections/
 
Wouldn't that only work as long as there's no negative surface detail on the edges?

Yeah, surprisingly you can get away with that on lots of surfaces though, as long as you keep the detail small or maintain a certain standard of shape and application (as I recently found out). Boxes (which have angle cut-offs built in), objects where the camera cannot easily reach grazing angles, or micro detail make it nigh indistinguishable from geometric detail, as 99% of the time the camera cannot get to a place where you would see it having grazing-angle problems.

Examples:
Tiny detail so the camera cannot manage to make it distort (that screw inlay is about .5 cm across in game dimensions and tucked in a corner)
Same with these inlet screws:
You can never get to see this door at a grazing angle
Even micro surface ground detail is possible, because the prone camera for the character will always be at an acute angle to the ground and not at a grazing one.

Building it so you cannot see the edge, either because geometry cuts off the viewing angle or by terminating the POM before it hits a surface edge:
Geometry framing the POM
starcitizen_2015_12_042s2h.png

Here geometry cuts off the ability to view it at a grazing angle
starcitizen_2015_12_0yisag.png

starcitizen_2015_12_0itsul.png

Here the POM is terminated before it can hit a geometry edge.
starcitizen_2015_12_01iso8.png
This type of use case is far from the Crysis 1 style of applying it to rocky bump surfaces, where the player character can easily get it to distort just by walking around. I think it really works in a smarter use case.
 

NBtoaster

Member
Like every time something that is far from ideal gets really popular (like FXAA), the answer is usually that it's really cheap in terms of performance and considered "good enough" in terms of quality (given that low performance requirement).

Interesting, I guess the DX11 requirement in Crysis 2 ingrained in me that it was somehow expensive.


This answers a lot of my questions about how it compares to cubemaps. Seems most implementations haven't followed his advice of augmenting it with another technique, though.
 
I don't know that it's something you can "eliminate." Pop-in is a tradeoff among a multitude of tradeoffs that have to occur to get a game out. If you were to come up with a more aggressive LOD system that would have smaller intervals between the detail levels, then you'd be spending a lot of time swapping resources which eats into your memory budget, disk polling time, and CPU time. When it comes down to it, having an enemy react to your actions in six different ways as opposed to three is more important for the game actually being fun.

I know. Maybe someday someone will develop all-new techniques and algorithms to handle LOD better.

Would better texture streaming be something that's more feasible in the near future? Normal maps popping in is another thing I hate.
 

Durante

Member
SSR is not cheap, it can be very expensive, but it adds a lot to the scene, so it's worth it.
SSR is cheap compared to other ways of doing dynamic reflections. It's obviously not cheap compared to not doing them, or to using prebaked data.
 

JNT

Member
I find the prominence of the dithered fade-in quite weird. I first saw it in Just Cause 2 in 2010, and it seems to just be getting more popular since then. I thought dithering was something we left behind in the '90s.

What's worse is that in Wolfenstein TNO/idtech 5, the actual lighting itself seems to be dithered at times. It looks really off-putting and I have no idea why they thought it would be a good idea to implement.

Dithering the lighting helps reduce banding.

ByyAJsSIYAABcta.png:large
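The mechanics, roughly: add a pixel's worth of noise before the lighting gets quantized to 8 bits, so the rounding error turns into fine grain instead of stair-stepped bands. A GLSL sketch (the hash is a common screen-space noise stand-in, not necessarily what id actually does):

float hash(vec2 p)
{
    return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
}

vec3 ditherTo8Bit(vec3 color, vec2 fragCoord)
{
    float noise = hash(fragCoord) - 0.5; // [-0.5, 0.5)
    return color + noise / 255.0;        // one output LSB of dither
}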
 

Danlord

Member
...

This type of use case is far from the Crysis 1 style of applying it to rocky bump surfaces, where the player character can easily get it to distort just by walking around. I think it really works in a smarter use case.

Everybody's Gone to the Rapture (a CryEngine game) uses POM this way, and it can be easily broken. This particular one is okay, but later on in the game there are more areas with it and it is very easily broken. It also uses the effect for bricks on houses, but as mentioned it can be broken when looking at it from a specific angle.
everybodysgonetotheraalsjf.png



I'm really interested in learning a lot more about using distance fields. I saw a really cool effect for adjusting the flow of water and I would like to know more about it.
 

Mik2121

Member
Everybody's Gone to the Rapture (a CryEngine game) uses POM this way, and it can be easily broken. This particular one is okay, but later on in the game there are more areas with it and it is very easily broken. It also uses the effect for bricks on houses, but as mentioned it can be broken when looking at it from a specific angle.
everybodysgonetotheraalsjf.png



I'm really interested in learning a lot more about using distance fields. I saw a really cool effect for adjusting the flow of water and I would like to know more about it.

Distance fields seem to be the next big thing for many uses like AO and shadows.
I have only read some stuff regarding flowmaps using DF, but yes, it looks quite cool!
 

pottuvoi

Banned
Distance fields seem to be the next big thing for many uses like AO and shadows.
I have only read some stuff regarding flowmaps using DF, but yes, it looks quite cool!
Here is a gold mine of tricks for rendering distance fields.
http://iquilezles.org/www/index.htm
http://iquilezles.org/www/articles/distfunctions/distfunctions.htm

Of course you can use the information for particle collisions and plenty of other things as well.
https://www.youtube.com/watch?v=3YH8oTzSQwY
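The basic idiom from those pages in a nutshell, as a minimal GLSL sketch: the scene is just a function returning the distance to the nearest surface, and you render it by stepping exactly that distance each iteration (sphere tracing):

float sdSphere(vec3 p, float r)
{
    return length(p) - r;
}

float sceneSDF(vec3 p)
{
    // Union of two spheres is just the min of their distances.
    return min(sdSphere(p - vec3(-0.6, 0.0, 0.0), 0.5),
               sdSphere(p - vec3( 0.6, 0.0, 0.0), 0.5));
}

float rayMarch(vec3 ro, vec3 rd)
{
    float t = 0.0;
    for (int i = 0; i < 64; ++i)
    {
        float d = sceneSDF(ro + rd * t);
        if (d < 0.001) return t; // close enough: hit
        t += d;                  // safe step: nothing is nearer than d
        if (t > 20.0) break;
    }
    return -1.0; // miss
}

AO, soft shadows, and collisions all fall out of being able to cheaply ask "how far is the nearest surface from this point".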
 

KKRT00

Member
SSR is cheap compared to other ways of doing dynamic reflections. It's obviously not cheap compared to not doing them, or to using prebaked data.

Yes, of course :) I was talking about a console 33ms frametime.

Everybody's Gone to the Rapture (a CryEngine game) uses POM this way, and it can be easily broken. This particular one is okay, but later on in the game there are more areas with it and it is very easily broken. It also uses the effect for bricks on houses, but as mentioned it can be broken when looking at it from a specific angle.
everybodysgonetotheraalsjf.png



I'm really interested in learning a lot more about using distance fields. I saw a really cool effect for adjusting the flow of water and I would like to know more about it.

The problem with Everybody's Gone to the Rapture is that it uses very crappy AF, which breaks POM much faster.
 
I think DA:I did the ground the right way, with a tessellation radius surrounding the character. Non-flat ground makes such a huge visual difference, and goes way beyond anything that can be produced with normal maps and POM. I wish more games did the same.
 

Durante

Member
I'm really interested in learning a lot more about using distance fields. I saw a really cool effect for adjusting the flow of water and I would like to know more about it.
I agree, my first post in this thread was actually about distance fields:
The most exciting new thing for me recently in graphics technology is all the amazing uses for dynamic distance field data (as available in UE4.9+).

E.g. one of my (many) pet peeves with graphics is flowing water reacting to obstacles. So far, you had either manual techniques which take a lot of effort, or offline computation which can't react to dynamic changes. Not so with distance field materials.

Of course, you can also use distance fields for lots of other things like AO, efficient GPU-based particle collision, and volume decals.

Thanks for the links, pottuvoi!
If anyone has more smart examples of where they are used, please do post them.
 

Antialias

Member
If anyone has more smart examples of where they are used, please do post them.

Distance fields are frequently used as collision geometry representations in high-level computer animation and this will probably filter down into realtime in the next few years. I am certainly planning to investigate their use for realtime cloth and hair.

See also this talk by Jaymin Kessler about using distance fields for lighting.
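The collision use is pleasingly simple. A GLSL sketch, reusing a sceneSDF like the ones in pottuvoi's links (the helper name is mine): if the field says a particle is inside (negative distance), push it out along the gradient, estimated with central differences:

vec3 resolveCollision(vec3 p)
{
    float d = sceneSDF(p);
    if (d >= 0.0) return p; // outside the geometry, nothing to do
    const float e = 0.001;
    // Central differences approximate the field's gradient, which
    // points toward increasing distance, i.e. out of the surface.
    vec3 n = normalize(vec3(
        sceneSDF(p + vec3(e, 0.0, 0.0)) - sceneSDF(p - vec3(e, 0.0, 0.0)),
        sceneSDF(p + vec3(0.0, e, 0.0)) - sceneSDF(p - vec3(0.0, e, 0.0)),
        sceneSDF(p + vec3(0.0, 0.0, e)) - sceneSDF(p - vec3(0.0, 0.0, e))));
    return p - n * d; // d < 0, so this moves p out to the surface
}

For cloth or hair you'd run that per particle per step, which is why a cheap distance query is such a big deal compared to mesh-vs-mesh tests.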
 

Skinpop

Member
One thing that could bring AA to the next level is, how should I put it, programmable supersampling. From what I remember, hardware AA is designed to sample each pixel the same way, I guess right in the middle. So if you have a camera standing still and without any animations going on, you'd have a perfectly stable image. If you could skew the sampling per pixel, you'd get something closer to real life or cameras: a stream of light with slight variations over time. This would also add temporal AA that could help greatly with IQ in motion. People obviously already do this in offline rendering, but last I checked it's impossible on current hardware, isn't it?
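The closest thing I've seen on current hardware is jittering the whole projection by a sub-pixel amount that changes every frame and letting a temporal accumulation pass integrate the samples (basically the standard TAA setup). A GLSL sketch with placeholder uniforms:

uniform int uFrame;
uniform vec2 uResolution;

// Halton(2,3) low-discrepancy sequence: a well-spread jitter pattern.
float halton(int index, int base)
{
    float f = 1.0, r = 0.0;
    while (index > 0)
    {
        f /= float(base);
        r += f * float(index % base);
        index /= base;
    }
    return r;
}

vec2 currentJitter() // clip-space offset within +/- half a pixel
{
    int i = uFrame % 8 + 1;
    vec2 j = vec2(halton(i, 2), halton(i, 3));
    return (j - 0.5) * 2.0 / uResolution;
}
// In the vertex shader: gl_Position.xy += currentJitter() * gl_Position.w;

But that's camera-level jitter, not the fully per-pixel programmable sample positions I mean; as far as I know those generally aren't exposed.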

EDIT:
Also, you might want to add this to the first post, but I think Shadertoy is a great tool for learning about shaders and computer graphics. The code is often really simple, so even beginners can look at and modify examples (in the browser) to get an understanding of how things work. It's different from the pipeline in a game, but still a really cool app that anyone interested should take a look at.

It's made by the same guy linked to above.
 