Game Graphics Technology | 64bit, procedural, high-fidelity debating


lazygecko

Member
I want developers to focus more on eliminating things like pop-ups. This is currently my biggest annoyance. It started coming back last generation. Things were smoothed out a lot during the PS2/XB/GC era, and BAM, they're back with a vengeance last gen and still happening this generation, even on a powerful PC. Whether it's normal maps popping into view or LOD popping a few feet from your character. What's causing all these problems?

I think there are two separate issues here: one is LoD popping as the meshes are switched from the distant versions to the actual full-detail ones. The other problem is texture streaming, which started getting very common in UE3-powered games and onward. The texture pop-in from streaming is probably due to developers trading it off for shorter loading times.

LoD pop-in, and ugly LoDs in general, can be mitigated by covering it all up with a heavy DoF (and maybe some fog to help it along).
 

RoboPlato

I'd be in the dick
Great job finally getting this thread up! I know you've been planning it for a long time now.

What game are you using for the examples in the Post Processing post? Star Citizen or is it something in UE4?
 

HTupolev

Member
OK, n00b question - what is a shader?
The origin of the term refers to determining the shade of a pixel.

It's sort of become a general term to refer to any program that might be executed on the programmable units in a GPU. This initially dealt with calculating lighting for a pixel or vertex, but as the hardware got more versatile and people developed new algorithms, these days people are running things like particle physics as shaders.
"Shaders" is also sometimes used to refer to the programmable GPU units themselves, which is optimized around running many instances of a computational task in parallel (This tends to make sense for graphics tasks. i.e. a whole bunch of pixels that are next to each other might require the exact same sequence of operations to calculate the lighting result, just each pixel has different texture values and is seeing the light source from a slightly different position).
 

Demise

Member
Imho, motion blur should be disabled when you're sure your game can hit 60 fps at all times. It looks awful and is costly performance-wise.

Depth of field too.
 

c0de

Member
OK, n00b question - what is a shader?

I think the OP should include at least some definitions for what the thread is about. Not only rules but also explanations of the basics, so that people who don't know the subject can learn what it's about.
 
Imho, motion blur should be disabled when you're sure your game can hit 60 fps at all times. It looks awful and is costly performance-wise.

Depth of field too.
Nah, it should stay as an option that's exposed to the user. If you don't like it, it's never turned on, but if you do like it like I do (I have to have it in racing games), then it's there.

Good motion blur also looks incredibly silky smooth above 100fps.
 

cheezcake

Member
Imho, motion blur should be disabled when you're sure your game can hit 60 fps at all times. It looks awful and is costly performance-wise.

Depth of field too.

I'd agree if it's just camera motion blur, but solid per-object motion blur can really convey a sense of motion at times.
 

Painguy

Member
Neat thread idea. I'm not graphics savvy enough to give any meaningful input/debate, but I guess this is a place where questions can be asked too?

I thought this was a nice touch in ROTR. I don't think I've seen a projector cast an image onto a character before in a game. It seems fancy.
[image: screenshot-original-6fnsgm.png]

That's like a 90's era effect. I do agree that it is a nice touch though.
 

Demise

Member
Nah, it should stay as an option that's exposed to the user. If you don't like it, it's never turned on, but if you do like it like I do (I have to have it in racing games), then it's there.

Good motion blur also looks incredibly silky smooth above 100fps.

Yes, of course it has to remain an option. The more you have, the better. It was a piece of advice for players.

That being said, you like motion blur on PC? Wow. Even in racing games I would rather have crisp camera movement than a smoothed, somewhat blurry feel.

And since we can ask questions: what is the best AA solution?
 

Durante

Member
Also, is it reusable (i.e. like when you generate a voxel grid from geometry that can be re-used for tons of different things)?

I can only agree though that moving away from pure screen space is something I find really awesome.
I can't answer your first question, but regarding this one, yes, it's reusable for many different things. E.g. you can use the same representation for distance field AO, particle collision and materials with distance input (like the flow map generation I posted).

That's like a 90's era effect. I do agree that it is a nice touch though.
Yeah, projective texturing has been available for a long time. 2001 whitepaper.
 
As mentioned on the previous page, Splinter Cell back in its day had those awesome projected textures/shadows over Sam, with light beaming out of holes in the walls, and its soft-body physics like cloth made my jaw drop. I don't think there was anything else quite like it at the time. Forgot to mention the volumetric lighting.
 

Durante

Member
And since we can ask questions: what is the best AA solution?
That's not a question with a simple answer (unless you define "best" with only regard to image quality and not performance, which doesn't make much sense).

I wrote an article some time ago which discusses all the different types of aliasing which can occur, and how they can be alleviated. It doesn't include the most recent methods, but it can serve as a starting point for discussion.

Currently, the approaches with the best aliasing-reduction/performance are probably multi-frame accumulation temporal methods like UE4 TAA.
 

Henrar

Member
I have a different question but I think it belongs to this thread.

x86 instruction set - are there any performance benefits to compiling games/engines with newer instruction support (like AVX, for example)? If so, are there any examples of games that utilise those instructions? For example, GRID 2 had an additional executable compiled with AVX.
 

Demise

Member
That's not a question with a simple answer (unless you define "best" with only regard to image quality and not performance, which doesn't make much sense).

I wrote an article some time ago which discusses all the different types of aliasing which can occur, and how they can be alleviated. It doesn't include the most recent methods, but it can serve as a starting point for discussion.

Currently, the approaches with the best aliasing-reduction/performance are probably multi-frame accumulation temporal methods like UE4 TAA.

Durante, like the Durante who saves PC games with his fixes? Damn, this is one hell of a coincidence. I did try to contact you back then about a project. Do you think I can PM you about it?

I'll read your article.
 
Yes, of course it has to remain an option. The more you have, the better. It was a piece of advice for players.

That being said, you like motion blur on PC? Wow. Even in racing games I would rather have crisp camera movement than a smoothed, somewhat blurry feel.

And since we can ask questions: what is the best AA solution?

On phone so can't cut the quote but yeah, I like motion blur on everything provided it's done right. If it's just a shitty streak overlay then I turn it off. If it's beautiful, multiple-sample smooth motion blur like GTA 4/5/Just Cause 3, it stays on, even on PC.

I've always preferred it for some reason; I even keep it on in shooter games that support it, like Battlefield and Red Orchestra, as sacrilegious as that sounds lol.

I have a different question but I think it belongs to this thread.

x86 instruction set - are there any performance benefits to compiling games/engines with newer instruction support (like AVX, for example)? If so, are there any examples of games that utilise those instructions? For example, GRID 2 had an additional executable compiled with AVX.

I'd like to know this too, actually. I didn't realise AVX meant the instruction set when I ran it, but I don't see any difference at all between the stock and AVX .exe for those games.
 

TUSR

Banned
Of course, I added that to the OP.


That would be, based upon what I can see, a projected texture cast from a spotlight. I am breaking my own rule right now by posting this, but early examples of it can be found in Source Engine games (but apparently not cs_office according to the above), Splinter Cell (Unreal 2.5), idTech 4 and in this original Unreal Engine 3 demo.
Here are some screens from my collection to show how it can be used:

Doom 3 can only render stencil shadows with hard, distinct lines, as per the shadows in image 1. But notice the shadows from the grating on the right-hand wall in image 2, which are soft and have a gradation. That is just a faked texture projection cast from the light's perspective onto the world. If that texture were animated, you could also use it for a film projector like in Tomb Raider.
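For illustration only (not Doom 3's actual code), the projection itself boils down to transforming the shaded point into the light's clip space and using the result as texture coordinates into the projected image. A rough C-style sketch, with all names made up:

```cpp
#include <algorithm>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };  // row-major

static Vec4 mul(const Mat4& M, const Vec4& v) {
    Vec4 r;
    r.x = M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w;
    r.y = M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w;
    r.z = M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w;
    r.w = M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w;
    return r;
}

// Sample a projected light texture (a "cookie"/gobo) for a world-space point.
// lightViewProj is the light's view-projection matrix; cookie is a W x H greyscale image.
float projectedLightMask(const Mat4& lightViewProj, const Vec4& worldPos,
                         const float* cookie, int W, int H)
{
    Vec4 clip = mul(lightViewProj, worldPos);
    if (clip.w <= 0.0f) return 0.0f;                     // behind the light
    // Perspective divide, then remap [-1,1] -> [0,1] texture coordinates.
    float u = 0.5f * (clip.x / clip.w) + 0.5f;
    float v = 0.5f * (clip.y / clip.w) + 0.5f;
    if (u < 0.0f || u > 1.0f || v < 0.0f || v > 1.0f) return 0.0f; // outside the cone
    int px = std::min(W - 1, static_cast<int>(u * W));   // nearest-neighbour lookup
    int py = std::min(H - 1, static_cast<int>(v * H));
    return cookie[py * W + px];   // masks the light: soft "shadows" from grates etc. for free
}
```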
----
I am currently preparing a megapost on post-processing btw.

Is that similar to the shadows in The Last of Us?
 

cheezcake

Member
I have a different question but I think it belongs to this thread.

x86 instruction set - are there any performance benefits to compiling games/engines with newer instruction support (like AVX, for example)? If so, are there any examples of games that utilise those instructions? For example, GRID 2 had an additional executable compiled with AVX.

Sure, the SSE instruction set is used pretty commonly for low-level optimisation IIRC. Don't know about AVX though.
 

HTupolev

Member
x86 instruction set - are there any performance benefits to compiling games/engines with newer instruction support (like AVX, for example)?
There can be. If you have a vector operation that can be done with an AVX-capable execution unit in your CPU, and you use standard scalar operations instead, you'll need to execute more instructions.
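A rough illustration, assuming an AVX-capable CPU and compiler support (intrinsics from <immintrin.h>): one AVX instruction operates on eight floats at once, where the scalar loop needs one instruction per element.

```cpp
#include <immintrin.h>  // AVX intrinsics
#include <cstddef>

// Scalar version: one add per element.
void addScalar(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}

// AVX version: eight adds per instruction (n assumed to be a multiple of 8 here).
void addAVX(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);                 // load 8 floats
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));   // 8 additions at once
    }
}
```

Whether that shows up as an overall win depends on how much of the frame is actually spent in loops like this, of course.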
 

Coll1der

Banned
N64.

I'm not a fan of fake camera effects like DoF. I really want new ways to get around this rather than blurring the picture.

https://www.youtube.com/watch?v=-Wjx8gBSTSI
https://www.youtube.com/watch?v=SNFSw107MQs

Look at this trailer. The system requirements are extremely high, but I can't get over the LOD pop-in.

So here's the thing - games usually want to target mid-level PCs with their specs; that's especially true of MMORPGs. Black Desert has a big draw distance and a lot of objects, and it also has a lot of fancy new effects that need GPU time. What is happening here is developers opting to accept pop-in to keep those effects. After all, you can adjust pop-in on the fly, while you cannot adjust a lot of these effects; they are either in or out completely.

PC games usually have these draw distance and LOD sliders. There is no real cure for pop-in; maybe one can mitigate it a bit with a Rockstar-style dithering effect. Will that suffice for you? Mind though that you'll have to pay for it with other kinds of visual artifacts, like mentioned here https://www.reddit.com/r/GrandTheft...is_there_a_name_for_this_graphical_thing_yet/
 
Very good idea for a thread. Subscribed.
Thx. It has been on the drawing board for ages. That is not to say, though, that it did not have an acute catalyst the other day lol.
It seems like most of the bloom flickering out there is due to aliasing in the initial render. Even simple bilinear rescale bloom tends to be blocky more than flickery.
Bloom filters definitely can experience flickering though, FFXIII being the most severe example that comes to mind.
Thx for the input HTupolev! I should amend what I have written then. How would you best summarize a point about possible artifacting / aliasing in bloom filters?
You finally went and did it! Good luck...although do you think you could add some carriage returns between the paragraphs in the OP as it's a bit of a mess on mobile.
Thx! I cannot tell you how many times I created a master document for the thread and scrapped it over the last 2 years or so. Carriage returns. Right!
OK, n00b question - what is a shader?
Some great answers in the thread already thankfully.
I don't think that "no jpegs!" requirement is going to be very fruitful.
I agree with this. Perfect is the enemy of good and all that.
It is mainly to keep a high standard of quality in a thread that can, tbh, afford it. It is not like there are time constraints regarding posting :D

BUT! At the same time I agree. What I imagine we should try and do is, if someone cannot provide high-quality media... then it is at least replaced in due time, either by other posters nicely chipping in with their own image stock or the original poster adding it. What does everyone think?
Great job finally getting this thread up! I know you've been planning it for a long time now.
Thx man. As I said above, this is the 4th time I have made this thread, but the first time I have posted it, lol. It is arguably less complete than previous times I "made" it, though. I think I want more community feedback from GAF before I start arbitrarily laying down exactly how it "should" work. Also, if you want to contribute some screenshots/media for a certain passage, just post about it.
What game are you using for the examples in the Post Processing post? Star Citizen or is it something in UE4?
Yeah... I should probably require people to label the games from which they post media... (myself included). Star Citizen Alpha 2.0H is used for the colour grading and motion blur shots. The bloom comparison shot is from Natural Selection 2.
I think the OP should include at least some definitions for what the thread is about. Not only rules but also explanations of the basics, so that people who don't know the subject can learn what it's about.
That is the general idea, and I am going to be collecting suggestions on how exactly the thread should work in the meantime, before it is set in stone. Currently it is working pretty alright though IMO! The end goal is to collate information about how 3D rendering works, from a high-level perspective at first (not too detailed at the start); then common terms will be linked in the OP. These will then form the basis for any discussion we have here, so we are at least on the same page when talking about stuff. Also, it would be nice if the thread were friendly to those interested in asking questions casually, or who are passively interested in learning about stuff.
That's not a question with a simple answer (unless you define "best" with only regard to image quality and not performance, which doesn't make much sense).

I wrote an article some time ago which discusses all the different types of aliasing which can occur, and how they can be alleviated. It doesn't include the most recent methods, but it can serve as a starting point for discussion.

Currently, the approaches with the best aliasing-reduction/performance are probably multi-frame accumulation temporal methods like UE4 TAA.
Durante,
would you mind if I linked your post among others in the OP? Furthermore, would you perhaps like to take up describing a certain base term / concept regarding VG graphics as part of a megapost as linked in the OP? I currently just started one regarding post-processing, which will have a section on post-process-based AA (of which there are so many now...).
Is that similar to the shadows in The Last of Us?
AFAIK, I am pretty sure TLOU, at least for outdoor shadows, uses shadow maps which are actually calculated in real time and take intervening geometry into account. The soft shadows in Doom 3, by contrast, are just a texture lookup that does not actually line up with the geometry, because it was not calculated in real time or even beforehand; rather, some artist just placed it really smartly. Doom 3 was really smart about the few things it did not manage to do in real time (motion blur is another thing it did in a really cool way in one scene).
Someone should post some Alien Isolation PC screenshots, it has some great looking effects and materials.
Yeah it is a great game to show things off in. Perhaps I will install it again this evening and just snap some stuff.
Don't think there is high quality footage available for FFVII remake.
:(
Anyways, I can do DoF.
It's late at night for me though...so tomorrow
Awesome! Do you think you could capture 2 different things? A game with DOF on/off where it is a Gaussian DOF, and then a game where it is bokeh (on/off)?
 

squidyj

Member
Is that similar to the shadows in The Last of Us?

No.

The Last of Us bakes its lighting information into textures called light maps. It stores two components: an ambient component that is directionless, and a dominant directional component.
Then, when it's rendering, for each background pixel (not quite - it operates at 1/4 res) it asks "how occluded is my ambient component?" and "how occluded is my directional component?" and scales each component by that amount.
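In rough shader-style pseudocode (my guess at the structure from the description above, not Naughty Dog's actual code), the per-pixel combine is something like:

```cpp
struct Vec3 { float x, y, z; };

static Vec3 scale(const Vec3& c, float s) { return { c.x * s, c.y * s, c.z * s }; }
static Vec3 add(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

// ambient / directional are the two baked lightmap components for this pixel;
// ambientOcclusion / directionalOcclusion are the per-pixel estimates of how much
// of each component is blocked (e.g. by nearby dynamic geometry).
Vec3 backgroundLighting(const Vec3& ambient, float ambientOcclusion,
                        const Vec3& directional, float directionalOcclusion)
{
    return add(scale(ambient, ambientOcclusion),
               scale(directional, directionalOcclusion));
}
```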
 

gofreak

GAF's Bob Woodward
Beyond the latest U.E. iterations, where else has this seen application? Also, is it reusable (i.e. like when you generate a voxel grid from geometry that can be re-used for tons of different things)?

I can only agree though that moving away from pure screen space is something I find really awesome.

I'm not sure of other engines/games that are using distance fields like UE does - as a proxy for another, more primary type of scene geometry. The Dreams engine is distance fields from top to toe, basically, but it's using them as the primary representation, and I doubt many others will go all the way with it like that.

I imagine it could see more use like it does in UE4. I'm curious how faithful the distance fields in UE4 are to the original geometry. I presume it is a better approximation than something voxel-based - though the properties of distance functions might make them appealing even if not.
 

Antialias

Member
Cloth physics done right - that seems like the proper next step (I won't go into hair...).
The earliest example that I can think of (where it wasn't a robe or anything) is probably the original Uncharted, where Nate's shirt would stretch according to how he moved and how his legs were positioned.

But that wasn't truly 3D, nor was it a simulation. It seemed like some sort of texture transition, I don't know what else to call it (I think FIFA uses it too). But it's not truly 3D.
I think an NBA 2K title used full-on 3D cloth physics, but I can't remember which one.

But I'm generally referring to action adventure games, or third person shooters. Uncharted 4 seems to have some really amazing physics in general, so here's hoping ND floors me again!
But I'm guessing we'll have to wait until next gen for proper cloth simulation - anyone have any proper examples?

The feature you are referring to is joint-actuated normal map blending.

Artists do some offline simulations (these days probably using Marvelous Designer) and bake the results down to normal maps in various poses, then blend between these maps depending on the angles of various joints at runtime. I believe MGS5 also does this (it's very noticeable in the prologue when you're wearing the hospital gown).

FIFA also uses cloth sim underneath, although a fairly low-resolution one. The problem with this is that the wrinkle maps don't know about the state of the cloth sim, so the cloth can be going one way while the wrinkles go the other because of the joint angles. Can be quite jarring if you know to look for it.
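A crude sketch of the runtime blend (the parameterisation here is made up, just to show the idea): sample a "relaxed" and a "fully bent" wrinkle normal map and lerp between them based on how far the joint is bent.

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
}

// relaxedNormal / bentNormal: texels from the two baked wrinkle maps for this pixel.
// jointAngle: current elbow/knee/etc. angle in radians; maxAngle: angle at which the
// "fully bent" map should be used on its own.
Vec3 blendWrinkleNormal(const Vec3& relaxedNormal, const Vec3& bentNormal,
                        float jointAngle, float maxAngle)
{
    float t = std::clamp(jointAngle / maxAngle, 0.0f, 1.0f); // blend weight from the joint
    Vec3 n = lerp(relaxedNormal, bentNormal, t);
    // (A real implementation would renormalise n and do the blend in tangent space.)
    return n;
}
```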
 
E.g. one of my (many) pet peeves with graphics is flowing water reacting to obstacles. So far, you had either manual techniques which take a lot of effort, or offline computation which can't react to dynamic changes. Not so with distance field materials.

Looks neat, but I still hope we'll get to see more and better realtime volumetrics within a few years.
While the distance field idea produces a neat effect, it doesn't impact fluid flow beyond stretching it and adding a foam line. Water still comes out of the boulder at full speed, instead of flowing around it.

Not just for fluid simulation, but also dust clouds or flame.

Realtime:
Deep Down engine demo:
https://www.youtube.com/watch?v=EYNQMxMPgmU
FlameWorks:
https://www.youtube.com/watch?v=L1577QeCdwk

Offline rendering:
A Stream Function Solver for Liquid Simulations:
https://www.youtube.com/watch?v=86W8ub8j3is

On the high end:
SpaceX advances in combustion CFD:
https://youtu.be/txk-VO1hzBY?t=9m57s
Nanoscale heat transfer:
http://www.engineering.com/DesignSo...cks-Nanoscale-Heat-Transfer-Calculations.aspx
 

Durante

Member
Durante, like the Durante who saves PC games with his fixes? Damn, this is one hell of a coincidence. I did try to contact you back then about a project. Do you think I can PM you about it?
That's me. You can PM me; you should know though that I get a lot of requests and have a limited pool of free time ;)

I have a different question but I think it belongs to this thread.

x86 instruction set - are there any performance benefits to compiling games/engines with newer instruction support (like AVX, for example)? If so, are there any examples of games that utilise those instructions? For example, GRID 2 had an additional executable compiled with AVX.
It depends on how good the compiler is, and how many parts of the code can be heavily vectorized (and how significant they are). It can range from nothing to, potentially, a factor of 8 for tight SIMD loops compared to no vectorization. Of course, the better you get at vectorizing/parallelizing your engine code, the more likely you are to run into other limits, e.g. bandwidth on a consumer CPU.

Durante,
would you mind if I linked your post among others in the OP? Furthermore, would you perhaps like to take up describing a certain base term / concept regarding VG graphics as part of a megapost as linked in the OP? I currently just started one regarding post-processing, which will have a section on post-process-based AA (of which there are so many now...).
Sure, link away. While it's not directly related to graphics technology, maybe this post of mine could be worth a link for a tech terminology baseline. I don't have much time now but I'll see what else I can do.
 

Noobcraft

Member
I think it would be great if someone could explain the differences between Forward/clustered forward, and deferred rendering. I kind of understand how they work thanks to Google but there's probably quite a bit that I missed.
 
The feature you are referring to is joint-actuated normal map blending.

Artists do some offline simulations (these days probably using Marvelous Designer) and bake the results down to normal maps in various poses, then blend between these maps depending on the angles of various joints at runtime. I believe MGS5 also does this (it's very noticeable in the prologue when you're wearing the hospital gown).

FIFA also uses cloth sim underneath, although a fairly low-resolution one. The problem with this is that the wrinkle maps don't know about the state of the cloth sim, so the cloth can be going one way while the wrinkles go the other because of the joint angles. Can be quite jarring if you know to look for it.
Thanks! Now I can use the proper term for it. It's something they used for Snake's muscles in the opening sequence, as well as his trousers.
But I haven't noticed it being used out in the open world as extensively or in nearly as much detail as in the prologue. Maybe on his scarf.
And if I'm not mistaken, they used screen space reflections inside the building where you met Huey, but they were almost never used elsewhere. It seems they added detail in more confined areas. It's fairly noticeable if you look for it.
 
Sure, link away. While it's not directly related to graphics technology, maybe this post of mine could be worth a link for a tech terminology baseline. I don't have much time now but I'll see what else I can do.
Thx, I will add both to the OP. Beyond that, of course do not feel obligated to contribute a dedicated post on some subject. Only if you are interested and have the time.
And if I'm not mistaken, they used screen space reflections inside the building where you met Huey, but they were almost never used elsewhere. It seems they added detail in more confined areas. It's fairly noticeable if you look for it.
They are actually on everywhere in the PC version at all times; it's just that most surfaces in the game are not nearly glossy enough for them to be very obvious. On console they are only on in select directed cutscenes or areas (hospital, Huey scene).
 

Durante

Member
I think it would be great if someone could explain the differences between Forward/clustered forward, and deferred rendering.
At a high level, it's not really that complicated.

In traditional forward rendering, the contribution of each light to each surface is evaluated when you render that surface. This is straightforward, but leads to some issues:
1) you might be spending a lot of computational time shading surfaces which are actually hidden behind other opaque geometry.
2) with a large number of lights, you might have to render the geometry multiple times to take all of them into account, each time again also re-calculating light-independent facets such as projection etc.

The first point can be alleviated by performing a Z-only prepass, which simply means rendering your scene once without any materials so that you don't incur shading overhead for hidden surfaces on the actual rendering pass(es).

Clustered forward and deferred shading are both attempts at solving issue #2. Deferred shading stores all the information required for shading (that is, material properties such as diffuse/specular color and reflectivity as well as geometry properties such as normal directions and Z depth) in an intermediate (set of) buffer(s), usually called a G-buffer. Shading is then performed simply as "2D" passes on that buffer. Clustered forward rendering, conversely, tries to still use a forward rendering approach, but clusters geometry into batches corresponding to the lights which affect them.

There's a distinct set of advantages and disadvantages for each approach, e.g. forward rendering allows you to use hardware-accelerated MSAA, which is cheap and effective.
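In heavily condensed pseudo-C++ (structure only - a sketch, not how any production renderer is actually written), the contrast looks roughly like this:

```cpp
#include <cstddef>
#include <vector>

// Heavily simplified stand-ins, just to contrast the structure of the two approaches.
struct Fragment { std::size_t px; float depth, nDotL, albedo; }; // one rasterised surface sample
struct Light    { float intensity; };
struct GBufferTexel { float nDotL, albedo; };   // material + geometry data, no lighting yet

// Forward: every rasterised fragment is lit as it is drawn - including ones that a
// closer surface later overwrites (unless a Z-prepass already rejected them).
void forwardPass(const std::vector<Fragment>& fragments, const std::vector<Light>& lights,
                 std::vector<float>& colorBuffer, std::vector<float>& depthBuffer)
{
    for (const Fragment& f : fragments) {
        if (f.depth > depthBuffer[f.px]) continue;   // depth test
        float lit = 0.0f;
        for (const Light& l : lights)                // all lights evaluated per fragment
            lit += l.intensity * f.nDotL;
        colorBuffer[f.px] = lit * f.albedo;
        depthBuffer[f.px] = f.depth;
    }
}

// Deferred: the geometry pass only filled the G-buffer; lighting is then a "2D" pass
// over that buffer, once per screen pixel, independent of how much geometry there was.
void deferredLightingPass(const std::vector<GBufferTexel>& gbuffer,
                          const std::vector<Light>& lights, std::vector<float>& colorBuffer)
{
    for (std::size_t px = 0; px < gbuffer.size(); ++px) {
        float lit = 0.0f;
        for (const Light& l : lights)
            lit += l.intensity * gbuffer[px].nDotL;
        colorBuffer[px] = lit * gbuffer[px].albedo;
    }
}
```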
 

tuxfool

Banned
Clustered forward rendering, conversely, tries to still use a forward rendering approach, but clusters geometry into batches corresponding to the lights which affect them.

I should also add that there is Tiled Deferred, which does a similar thing to Tiled/Clustered Forward, in the sense that it splits up the scene into chunks. The objective here is to reduce the memory bandwidth required to shade a massive amount of data at once. Another advantage of a tiling system is that you can switch easily between deferred and forward, which one may want to do when rendering transparencies.

I refer people to this paper explaining all 4 different forms. It also goes into implementations and analysis of the performance of each method.
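To give a feel for the tiling part, here's a toy sketch (nothing like production code, and it ignores the depth bounds a real implementation would use): the screen is cut into tiles, each tile gets a list of only the lights that can reach it, and shading then loops over that short list instead of every light in the scene.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Light { float x, y, radius; };   // screen-space position + radius of influence

// Build per-tile light lists for a screenWidth x screenHeight image cut into
// tileSize x tileSize tiles. Each list holds indices into `lights`.
std::vector<std::vector<std::size_t>>
buildTileLightLists(const std::vector<Light>& lights,
                    int screenWidth, int screenHeight, int tileSize)
{
    int tilesX = (screenWidth + tileSize - 1) / tileSize;
    int tilesY = (screenHeight + tileSize - 1) / tileSize;
    std::vector<std::vector<std::size_t>> tiles(tilesX * tilesY);

    for (std::size_t li = 0; li < lights.size(); ++li) {
        const Light& l = lights[li];
        // Conservative bounds of the tiles this light can touch.
        int x0 = std::max(0, static_cast<int>((l.x - l.radius) / tileSize));
        int x1 = std::min(tilesX - 1, static_cast<int>((l.x + l.radius) / tileSize));
        int y0 = std::max(0, static_cast<int>((l.y - l.radius) / tileSize));
        int y1 = std::min(tilesY - 1, static_cast<int>((l.y + l.radius) / tileSize));
        for (int ty = y0; ty <= y1; ++ty)
            for (int tx = x0; tx <= x1; ++tx)
                tiles[ty * tilesX + tx].push_back(li);  // this light affects this tile
    }
    return tiles;   // shading later reads only the (short) list for its own tile
}
```

Clustered forward does the same kind of binning, but in 3D (tiles also split by depth), and then shades forward-style using the per-cluster lists.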
 

dogen

Member
I should also add that there is Tiled Deferred, which does a similar thing to Tiled/Clustered Forward, in the sense that it splits up the scene into chunks. The objective here is to reduce the memory bandwidth required to shade a massive amount of data at once. Another advantage of a tiling system is that you can switch easily between deferred and forward, which one may want to do when rendering transparencies.

I refer people to this paper explaining all 4 different forms. It also goes into implementations and analysis of the performance of each method.

This is also really good.

https://newq.net/publications/more/sa2014-many-lights-course

Part 1 is a nice overview with lots of good pictures too.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
At a high level, it's not really that complicated.

In traditional forward rendering, the contribution of each light to each surface is evaluated when you render that surface. This is straightforward, but leads to some issues:
1) you might be spending a lot of computational time shading surfaces which are actually hidden behind other opaque geometry.
2) with a large number of lights, you might have to render the geometry multiple times to take all of them into account, each time again also re-calculating light-independent facets such as projection etc.

The first point can be alleviated by performing a Z-only prepass, which simply means rendering your scene once without any materials so that you don't incur shading overhead for hidden surfaces on the actual rendering pass(es).

Clustered forward and deferred shading are both attempts at solving issue #2. Deferred shading stores all the information required for shading (that is, material properties such as diffuse/specular color and reflectivity as well as geometry properties such as normal directions and Z depth) in an intermediate (set of) buffer(s), usually called a G-buffer. Shading is then performed simply as "2D" passes on that buffer. Clustered forward rendering, conversely, tries to still use a forward rendering approach, but clusters geometry into batches corresponding to the lights which affect them.

There's a distinct set of advantages and disadvantages for each approach, e.g. forward rendering allows you to use hardware-accelerated MSAA, which is cheap and effective.
It's important to make the distinction between deferred rendering and deferred shading. The latter is what you describe. The former is a form of scene capturing where draw calls are collected (per tile or otherwise) until the scene is ready, and only then emitted, ideally with some early occlusion applied.
 

Dunkley

Member
There's been a lot of things I've been wondering about, like HBAO and SSAO and how they approach shading the scene, but one thing I've been wondering for a long time is how Parallax Occlusion works.

So, could anyone explain how Parallax Occlusion Mapping works and how it compares to Tessellation?
 

lazygecko

Member
So, could anyone explain how Parallax Occlusion Mapping works and how it compares to Tessellation?

Parallax is derived from a monochrome height map texture, where the brighter it is, the more elevated it appears. I don't know as much about tessellation, but it looks more like some kind of actual mesh morphing, unlike the "illusion" of 3D that is POM.

I have worked with POM textures but I understand it on about the same level as I understand how to operate a washing machine. I can predict what will happen based on the parameters I specify, just don't ask me what's actually happening on the technical level.
 

Antialias

Member
There's been a lot of things I've been wondering about, like HBAO and SSAO and how they approach shading the scene, but one thing I've been wondering for a long time is how Parallax Occlusion works.

So, could anyone explain how Parallax Occlusion Mapping works and how it compares to Tessellation?

POM and tessellation are both ways of implementing Displacement Mapping, which is the actual feature you are referring to. Tessellation on its own just refers to subdividing geometry dynamically on the GPU.

Displacement mapping means having a texture where each texel stores a vector that we want that part of the object to be moved by, usually in/out of the surface (along the normal).

POM is a screen space effect (i.e. you do it in the pixel shader) where you do root-finding along the camera ray to find the intersection point with the displacement map.
[image: ccs-209764-0-56739300-1372616593.jpg]

The bonus of POM over normal mapping is that you get occlusion within the texture (deep valleys in your displacement map will be occluded by other parts of the map). However, it will not change the silhouette of your object (without a bunch more work, that is). Another disadvantage is the root-finding generally needs a bit of tuning and gets more expensive the more different the displaced surface is from the mesh. As far as I know POM can only handle Z displacement (in/out of the surface).

Tessellation is, as explained, the ability to subdivide the render mesh at runtime. Since DX10 there are special parts of the graphics pipeline where you specify how to tessellate the mesh (Hull and Domain shaders in DX). Once you've computed the subdivided points, you can just move them around by sampling your displacement map, and then the graphics pipeline proceeds as normal. This has the big advantage of also improving the silhouette of the object. Other advantages are not having sampling issues like POM, and you can use the full X/Y/Z displacement if you want (not sure too many people do, though). Tessellation displacement mapping is notorious for having issues with cracks in geometry, however (the boundary between one level of refinement and another can be very tough to get properly smooth).
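A toy version of that root-finding march, written as plain C++ standing in for the pixel shader (all names made up, nearest-neighbour sampling for brevity):

```cpp
#include <algorithm>
#include <vector>

// Height map stored as depth: values in [0,1]; 0 = surface level, 1 = deepest carved-in point.
struct HeightMap {
    int w, h;
    std::vector<float> depthTexels;
    float sampleDepth(float u, float v) const {           // nearest-neighbour, clamped
        int x = std::clamp(static_cast<int>(u * w), 0, w - 1);
        int y = std::clamp(static_cast<int>(v * h), 0, h - 1);
        return depthTexels[y * w + x];
    }
};

// March along the view ray in texture space until we dip below the height field,
// then interpolate the last two samples to approximate the true crossing point.
// (viewDirU, viewDirV) is how far the UV shifts per unit of depth along the ray.
void parallaxOcclusionUV(const HeightMap& hm, float u, float v,
                         float viewDirU, float viewDirV, int numSteps,
                         float& outU, float& outV)
{
    const float step = 1.0f / numSteps;
    float rayDepth = 0.0f;                            // how deep the ray has gone
    float prevU = u, prevV = v;
    float prevDiff = -hm.sampleDepth(u, v);           // rayDepth - surfaceDepth (negative while above)

    for (int i = 0; i < numSteps; ++i) {
        rayDepth += step;
        u += viewDirU * step;                         // slide the sample point along the ray
        v += viewDirV * step;
        float diff = rayDepth - hm.sampleDepth(u, v);
        if (diff >= 0.0f) {                           // crossed below the surface
            float t = prevDiff / (prevDiff - diff);   // linear intersection between samples
            outU = prevU + (u - prevU) * t;
            outV = prevV + (v - prevV) * t;
            return;
        }
        prevU = u; prevV = v; prevDiff = diff;
    }
    outU = u; outV = v;                               // never crossed: keep the last sample
}
```

The returned UV is then what you use for the albedo/normal lookups, as described above.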
 

squidyj

Member
There's been a lot of things I've been wondering about, like HBAO and SSAO and how they approach shading the scene, but one thing I've been wondering for a long time is how Parallax Occlusion works.

So, could anyone explain how Parallax Occlusion Mapping works and how it compares to Tessellation?

Okay - mapped to vertices are texture coordinates that tell you where to sample the texture from; we interpolate between vertices to figure out where to sample a given texture for any point we're trying to shade.

With parallax mapping and parallax occlusion mapping, what we're doing is changing where we sample our other textures by using a texture called a height map. The height map describes the 'actual' surface we're trying to model, as opposed to what is represented by the polygons. With parallax occlusion mapping, the height of the 'actual' surface is always 'lower' than the polygon, as this allows us to start at the point on the surface of the polygon and work forward.
Since we know that we started 'above' the heightmap, we march forward along our viewing vector in some increment each time and check to see if we are now 'underneath' the heightmap. When we find a sample where we are, we interpolate between that sample point and our previous sample point to get a more accurate approximation of where we actually crossed the heightmap.

This new sample position is what we use when we read the albedo texture, normal map and any other texture we might need. The offset from the sample point on the surface of the polygon gives it that parallax effect, and the marching technique allows one part of the texture to occlude another. We can use the same technique when lighting, to see if our new sample point can 'see' the light source, giving us self-shadowing.


In comparison to tessellation, it doesn't interact with the geometry at all, so it can't alter the silhouette of an object.
 
Neat thread idea. I'm not graphics savvy enough to give any meaningful input/debate, but I guess this is a place where questions can be asked too?

I thought this was a nice touch in ROTR. I don't think I've seen a projector cast an image onto a character before in a game. It seems fancy.
[image: screenshot-original-6fnsgm.png]

Like others have said, this has been done before. What I would like to see is the projected image losing focus as the character moves closer to the projector. Are there any games that do that?
 
POM and tessellation are both ways of implementing Displacement Mapping, which is the actual feature you are referring to. Tessellation on its own just refers to subdividing geometry dynamically on the GPU.

Displacement mapping means having a texture where each texel stores a vector that we want that part of the object to be moved by, usually in/out of the surface (along the normal).

POM is a screen space effect (i.e. you do it in the pixel shader) where you do root-finding along the camera ray to find the intersection point with the displacement map.
[image: ccs-209764-0-56739300-1372616593.jpg]

The bonus of POM over normal mapping is that you get occlusion within the texture (deep valleys in your displacement map will be occluded by other parts of the map). However, it will not change the silhouette of your object (without a bunch more work, that is). Another disadvantage is the root-finding generally needs a bit of tuning and gets more expensive the more different the displaced surface is from the mesh. As far as I know POM can only handle Z displacement (in/out of the surface).

Tessellation is, as explained, the ability to subdivide the render mesh at runtime. Since DX10 there are special parts of the graphics pipeline where you specify how to tessellate the mesh (Hull and Domain shaders in DX). Once you've computed the subdivided points, you can just move them around by sampling your displacement map, and then the graphics pipeline proceeds as normal. This has the big advantage of also improving the silhouette of the object. Other advantages are not having sampling issues like POM, and you can use the full X/Y/Z displacement if you want (not sure too many people do, though). Tessellation displacement mapping is notorious for having issues with cracks in geometry, however (the boundary between one level of refinement and another can be very tough to get properly smooth).
Thank you for this post!
Would you mind if I used some of your nice description here for the miscellaneous section of the post-processing megapost (which partially covers POM)?
I'm posting because it doesn't inform you when I subscribe, but I'm incredibly excited for the future of this thread.

I am excited as well. I hope it turns out well.

I will be dedicated to it at least!
 

Antialias

Member
Thank you for this post!
Would you mind if I used some of your nice description here for the miscellaneous section of the post-processing megapost (which partially covers POM)?

Please go ahead :)

Edit - Having said that, I wouldn't consider POM a post-process.
 