
Graphics Technology Discussion: All games on consoles and PCs.

On the subject of SSS, UE4 is getting a new one this week!

[image]


https://www.unrealengine.com/blog/unreal-engine-45-preview-notes

Yes, I talked about that in a previous post in this same thread. This Thursday's Twitch stream will be about it. It uses the same approach Jorge Jimenez presented last year.
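
Basically, the diffuse lighting gets blurred with a scattering profile in two separable 1D screen-space passes, with the blur width scaled by depth. Here's a minimal CPU-side sketch of one pass, assuming a single-channel image and a plain Gaussian standing in for the tuned skin diffusion profile (the names are mine, not Epic's):

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // One 1D pass of a separable screen-space SSS blur; the full effect
    // runs this horizontally, then vertically, over the diffuse lighting
    // buffer. A real implementation scales radiusPixels by depth so the
    // scattering width stays constant in world units.
    struct Image {
        int w, h;
        std::vector<float> px;                    // w*h intensities
        float at(int x, int y) const { return px[y * w + x]; }
    };

    Image sssBlurHorizontal(const Image& src, float radiusPixels) {
        Image dst{src.w, src.h, std::vector<float>(src.px.size())};
        const int taps = 7;                       // small fixed kernel
        for (int y = 0; y < src.h; ++y)
            for (int x = 0; x < src.w; ++x) {
                float sum = 0.f, wsum = 0.f;
                for (int i = -taps; i <= taps; ++i) {
                    int sx = std::clamp(x + i, 0, src.w - 1);
                    float d = i / radiusPixels;   // distance in blur radii
                    float wgt = std::exp(-0.5f * d * d);
                    sum += wgt * src.at(sx, y);
                    wsum += wgt;
                }
                dst.px[y * dst.w + x] = sum / wsum;
            }
        return dst;
    }

The separable trick is the whole point: two 1D passes approximate the expensive 2D diffusion convolution at a fraction of the cost.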
 

pottuvoi

Banned
Alien: Isolation has a nice dynamic GI/bounce light solution, quite noticeable with the flashlight.
Pretty sure it's used for all lights.
 
So uh, you know that timewarp thingamabob that Nvidia is marketing with Maxwell? Apparently all GCN cards are already capable of it, or something like it.

http://www.maximumpc.com/amd_claims_its_gpus_are_great_tackling_vr_latency_2014

This is interesting because the PS4 and Morpheus should support it by extension, and should have no problem exposing it through their respective APIs (whereas on PC, who knows when DX will have it?)

Apparently this was revealed three weeks ago, so I'm super late. It's from Tom's coverage of the Oculus keynotes: http://www.tomshardware.com/news/oculus-oculus-connect-vr-amd-nvidia,27729.html
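
For the curious, the core idea is simple: render with the head pose sampled at frame start, then just before scan-out re-project the finished image with the freshest pose, which hides most of the rotational latency. A rough sketch of the pose-delta math, assuming a minimal quaternion type (illustrative names, not any vendor's actual API):

    #include <cmath>
    #include <cstdio>

    struct Quat { float w, x, y, z; };
    struct Vec3 { float x, y, z; };

    Quat conjugate(Quat q) { return {q.w, -q.x, -q.y, -q.z}; }

    Quat mul(Quat a, Quat b) {                    // Hamilton product
        return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
                 a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
                 a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
                 a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
    }

    // Rotate a view ray by the pose delta; applied to the corner rays of
    // the rendered image, this drives the re-projection of the frame.
    Vec3 rotate(Quat q, Vec3 v) {
        Quat p = {0, v.x, v.y, v.z};
        Quat r = mul(mul(q, p), conjugate(q));
        return {r.x, r.y, r.z};
    }

    int main() {
        Quat renderPose = {1, 0, 0, 0};           // pose used to render
        Quat latestPose = {0.999f, 0, 0.035f, 0}; // pose just before scan-out
        Quat delta = mul(latestPose, conjugate(renderPose));
        Vec3 ray = rotate(delta, {0, 0, -1});     // corrected view ray
        std::printf("warped ray: %f %f %f\n", ray.x, ray.y, ray.z);
    }

Only rotation can be corrected this cheaply; positional changes would need depth information, which is why the technique works best for head turns in particular.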
 
I finally found the GI feature that everyone is talking about in TLOU.

In the DLC, Ellie is trying to heal Joel and comes across a pharmacy inside a mall. Turning on the flashlight in this store yields the GI feature. However, there are several things about their implementation that I don't like:

1) Only certain walls are flagged to cast secondary light bounces. Therefore the feature isn't always "on".

2) The distance the secondary light travels is extremely short compared to Alien: Isolation (where an entire room can be almost lit up).

3) The GI seems to only work on objects that have normals totally perpendicular to the direction of the flashlight instead of *all* surfaces.

4) You only get the effect if you are fairly close to the wall, so if you walk away from the wall, the GI disappears. In contrast, in Alien you can be several feet away from a wall and just turning on the flashlight will yield a secondary bounce.

5) Color bleeding only works on certain walls that have exaggerated colors on them.


-M
 

pottuvoi

Banned
I finally found the GI feature that everyone is talking about in TLOU.

In the DLC, Ellie is trying to heal Joel and comes across a pharmacy inside a mall. Turning on the flashlight in this store yields the GI feature. However, there are several things about their implementation that I don't like:

1) Only certain walls are flagged to cast secondary light bounces. Therefore the feature isn't always "on".

2) The distance the secondary light travels is extremely short compared to Alien: Isolation (where an entire room can be almost lit up).

3) The GI seems to only work on objects that have normals totally perpendicular to the direction of the flashlight instead of *all* surfaces.

4) You only get the effect if you are fairly close to the wall, so if you walk away from the wall, the GI disappears. In contrast, in Alien you can be several feet away from a wall and just turning on the flashlight will yield a secondary bounce.

5) Color bleeding only works on certain walls that have exaggerated colors on them.


-M
Yes, it's quite low quality, but it was there on PS3, which made it quite impressive at the time.
I really hope more games begin to use decent GI for dynamic lights, even if most of the lighting in a scene is pre-baked.
Also, indirect shadows are very important; TLoU had some nice tricks for them (character shadows using capsules, and volume textures storing AO for big objects, similar to what Infamous 2 did).
 

KKRT00

Member
Naughty Dog has included sub-surface scattering since Uncharted 2, after they saw Nvidia's human head tech demo.

Oh, for the doubters who will ask for links:

http://lousodrome.net/blog/light/tag/screen-space-sub-surface-scattering/

http://advances.realtimerendering.com/s2010/index.html

Download the Uncharted 2: Character Lighting and Shading PDF.

Many games have used SSS similar to the one in U2, but that's what was being talked about.

---
I heard from a Steam friend that Sierra and LucasArts have been revived and are slowly coming back. Any news about that? I was very interested in that Indiana Jones game that used DMM (Digital Molecular Matter) for realistic destruction (yes, it had a better version than the simplistic one in Star Wars: The Force Unleashed)!

https://www.youtube.com/watch?v=UEJDInk1NXQ

The tech demo was nice; sadly the final game wasn't that advanced in that domain, with no shards and debris like that, just big chunks of wood breaking apart and disappearing later. I watched a longer live presentation of DMM at Games Convention in 2007 or 2008, using the same scene, with R2-D2 breaking crystals, metal bending very realistically, and dinosaur bones breaking too. In Star Wars you can use the Force to open metal doors by bending them, but it looked very limited and almost scripted or animated.

Such amazing tech. Both this and Euphoria are very expensive, unfortunately. ;\
 

JordanN

Banned
New UE4 livestream discussion is up, talking about the new ray-traced shadows, improvements to DFAO, and SSS.
https://www.youtube.com/watch?v=DQt_OopZadI&list=UUBobmJyzsJ6Ll7UbfhI4iwQ

I was going to type it up, but there's already a thread that covers the video, with the developers themselves posting there.

Some interesting takeaways though:
-Ray-traced shadows are based on sphere tracing (see the sketch after this list)
-Ray-traced shadows have no per-polygon performance cost. They're calculated from the distance fields around a mesh (so 10 polygons or 1,000 polygons, no difference)
-Ray-traced shadows are actually faster to calculate than cascaded shadow maps. Even when rendered at half resolution, the quality can still come out superior
-With future optimizations, ray-traced shadows can have no CPU cost; the work is actually GPU-intensive.
-Specular occlusion is free [no performance cost]
-DFAO is tracing 9 cones in every direction. Their old SVOGI method was using 4.5 cones.
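
For reference, here's roughly what sphere tracing a distance field for shadows looks like. This is my own minimal sketch, not Epic's code: a single analytic sphere stands in for the precomputed per-mesh distance field volumes, and the soft penumbra comes from the classic closest-near-miss trick:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };
    Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
    float length(Vec3 v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

    // Stand-in signed distance field: one unit sphere at the origin. A
    // real engine samples a baked distance field volume per mesh, which
    // is why triangle count stops mattering once the field is built.
    float sceneSDF(Vec3 p) { return length(p) - 1.0f; }

    // March toward the light, stepping by the distance to the nearest
    // surface; the closest near-miss yields a cheap soft penumbra.
    float shadowFactor(Vec3 origin, Vec3 toLight, float lightDist, float k) {
        float res = 1.0f;
        float t = 0.05f;                     // offset to avoid self-hit
        for (int i = 0; i < 64 && t < lightDist; ++i) {
            float d = sceneSDF(add(origin, scale(toLight, t)));
            if (d < 0.001f) return 0.0f;     // fully occluded
            res = std::min(res, k * d / t);  // penumbra from near-misses
            t += d;                          // the "sphere" step
        }
        return res;                          // 0 = shadowed, 1 = lit
    }

    int main() {
        // A point below the sphere looking straight up is occluded.
        std::printf("%f\n", shadowFactor({0, -2, 0}, {0, 1, 0}, 10.f, 8.f));
    }

The key property is visible in the loop: cost scales with the number of steps through the field, not with how many triangles built it.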

Here is also Martin Mittring's (UE4's graphics architect) programmer blog.
 
Their naming convention is pretty controversial; they aren't actually doing real ray tracing, and the dev even mentioned it should have been called sphere tracing. In any case, it looks good for static scenes, but a dynamic implementation is still a ways off.
 

pottuvoi

Banned
Alien seems to calculate some lighting for transparent surfaces in the domain shader (a view-facing quad is tessellated and its vertices shaded, instead of the usual per-pixel or per-vertex shading).
Something similar to this.

A very nice idea when you have huge resolutions and mostly low-frequency effects, although some AA would have been nice, as in some cases it can be quite visible (a spotlight through fog).
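
If that sounds abstract, here's the gist as a CPU-side sketch: evaluate the expensive but low-frequency effect only at the vertices of a tessellated screen-facing grid, then let interpolation fill in the pixels. Grid size and the scattering function are placeholders of mine:

    #include <cmath>
    #include <vector>

    // Stand-in for per-vertex light scattering math (e.g. fog in-scatter).
    float expensiveScattering(float u, float v) {
        return std::exp(-4.0f * ((u - .5f) * (u - .5f) + (v - .5f) * (v - .5f)));
    }

    // Evaluate at (gridN+1)^2 vertices instead of pixW*pixH pixels; the
    // bilinear interpolation below stands in for the rasterizer.
    std::vector<float> shadeQuad(int gridN, int pixW, int pixH) {
        std::vector<float> verts((gridN + 1) * (gridN + 1));
        for (int j = 0; j <= gridN; ++j)
            for (int i = 0; i <= gridN; ++i)
                verts[j * (gridN + 1) + i] =
                    expensiveScattering(float(i) / gridN, float(j) / gridN);

        std::vector<float> out(pixW * pixH);
        for (int y = 0; y < pixH; ++y)
            for (int x = 0; x < pixW; ++x) {
                float u = float(x) / (pixW - 1) * gridN;
                float v = float(y) / (pixH - 1) * gridN;
                int i = int(u) < gridN ? int(u) : gridN - 1;
                int j = int(v) < gridN ? int(v) : gridN - 1;
                float fu = u - i, fv = v - j;
                float a = verts[j * (gridN + 1) + i];
                float b = verts[j * (gridN + 1) + i + 1];
                float c = verts[(j + 1) * (gridN + 1) + i];
                float d = verts[(j + 1) * (gridN + 1) + i + 1];
                out[y * pixW + x] = (a * (1 - fu) + b * fu) * (1 - fv)
                                  + (c * (1 - fu) + d * fu) * fv;
            }
        return out;
    }

With a 16x16 grid on a 1080p target, that's 289 evaluations of the expensive function instead of about two million.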
 

JordanN

Banned
Not exactly breaking news or the first game to do it, but the Russian PS4 game Without Memory is using tessellation.

[image]


The rest of the game is expected to look like this (1080p).

I guess what makes this newsworthy is that the developers claimed to overcome a bug that prevented it. There's a PowerPoint out there from CD Projekt Red where they said they dropped tessellation because they couldn't overcome "cracking" issues.
 

RoboPlato

I'd be in the dick
Not exactly breaking news or the first game to do it, but the Russian PS4 game Without Memory is using tessellation.

[image]


The rest of the game is expected to look like this (1080p).


I guess what makes this newsworthy is that the developers claimed to overcome a bug that prevented it. There's a PowerPoint out there from CD Projekt Red where they said they dropped tessellation because they couldn't overcome "cracking" issues.
This is really cool. Hopefully, if it was an SDK issue, they contacted Sony about how they overcame it so it can be fixed for everyone.
 

JordanN

Banned
This isn't a video game, but I was watching this kids' CGI cartoon on YouTube.

It's pretty low budget, but it blew me away. To me, this is what PS5/XB4 graphics should look like.

What catches my eye is not the actual lighting but the color. There have been discussions in the game industry about "physically accurate skies" and "sRGB linear curves", but whatever this video is doing, I want the same technology for next gen (path tracing? Unbiased rendering? Plus ray tracing and global illumination).

It's so real!

[image]

Look at the natural blue tint as the garbage enters the shadows.

[image]

A very realistic sky coupled with an accurate sunset.


[image]

Another example of hyperrealistic global illumination. Also check out the truck's reflection in the bottom-right window and the ambient occlusion between all the objects.


I know it's all pre-rendered, but I would kill for games to look like this. This is when graphics would finally feel "good enough".
 

Exuro

Member
Not exactly breaking news or the first game to do it, but the Russian PS4 game Without Memory is using tessellation.

[image]


The rest of the game is expected to look like this (1080p).


I guess what makes this newsworthy is that the developers claimed to overcome a bug that prevented it. There's a PowerPoint out there from CD Projekt Red where they said they dropped tessellation because they couldn't overcome "cracking" issues.
For someone who knows: what is the difference between tessellation and displacement mapping?
 

JordanN

Banned
For someone who knows: what is the difference between tessellation and displacement mapping?

Tessellation is a form of tiling. For example, take a square and break it into two triangles, then break those down again to get four, and so on. It basically subdivides a mesh to increase surface geometry.

Displacement mapping takes information from a height map and pushes vertices up or down. Tessellation is very important for this, because it takes a lot of polygons to make a displacement map look good.
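
A minimal illustration of the subdivision idea (recursive midpoint splitting; hardware tessellators use adjustable per-edge factors rather than recursion, but the effect on triangle count is the same):

    #include <vector>

    // Each level splits a triangle at its edge midpoints into four
    // smaller ones, so counts grow 4x per level: 1 -> 4 -> 16 -> 64...
    struct V2 { float x, y; };
    struct Tri { V2 a, b, c; };

    V2 mid(V2 p, V2 q) { return {(p.x + q.x) * 0.5f, (p.y + q.y) * 0.5f}; }

    void subdivide(const Tri& t, int levels, std::vector<Tri>& out) {
        if (levels == 0) { out.push_back(t); return; }
        V2 ab = mid(t.a, t.b), bc = mid(t.b, t.c), ca = mid(t.c, t.a);
        subdivide({t.a, ab, ca}, levels - 1, out);   // corner triangles
        subdivide({ab, t.b, bc}, levels - 1, out);
        subdivide({ca, bc, t.c}, levels - 1, out);
        subdivide({ab, bc, ca}, levels - 1, out);    // center triangle
    }

Three levels turn one triangle into 64, which is the kind of budget displacement mapping needs to have something to push around.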
 

Exuro

Member
Tessellation is a form of tiling. For example, take a square and break it into two triangles, then break those down again to get four, and so on. It basically subdivides a mesh to increase surface geometry.

Displacement mapping takes information from a height map and pushes vertices up or down. Tessellation is very important, because it takes a lot of polygons to make a displacement map look good.
Ah, so they're used in conjunction? So for a displacement map you don't want an object that originally has a ton of vertices; you take a simple object, tessellate it, and then displace? If so, how do you determine the height map, or is that already at some high resolution? I don't know what kind of data the height map is, like whether it's stored per vertex or something else. I'm finishing up an intro to graphics course and we covered only a small amount on displacement mapping, so I'm pretty intrigued by the things we barely talked about.
 

JordanN

Banned
Ah, so they're used in conjunction? So for a displacement map you don't want an object that originally has a ton of vertices; you take a simple object, tessellate it, and then displace? If so, how do you determine the height map, or is that already at some high resolution? I don't know what kind of data the height map is, like whether it's stored per vertex or something else. I'm finishing up an intro to graphics course and we covered only a small amount on displacement mapping, so I'm pretty intrigued by the things we barely talked about.
The height map is pre-calculated. Just like normal maps are created by shooting rays from a low-poly to a high-poly mesh, a displacement map takes information from a high-poly mesh and converts it to a grayscale texture.

And yes, they're used in conjunction. Video games have polygon budgets, so you can't bring an already high-poly mesh in. The game engine reads from a material that tessellates your mesh, giving you lots of polygons, and then takes the displacement map information and applies it. This is great, as it means the game engine doesn't have to render millions of polygons all the time: it can add the polygons as you zoom in closer to a mesh, and get rid of them when the camera moves far away or looks in another direction.
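
The displacement step itself is tiny in code. Here's a sketch of my own simplification (nearest-neighbor sampling, heights already baked to 0..1): fetch the grayscale height and push the tessellated vertex along its normal:

    #include <cstddef>
    #include <vector>

    struct V3 { float x, y, z; };

    struct HeightMap {
        int w, h;
        std::vector<float> texels;               // 0..1 grayscale heights
        float sample(float u, float v) const {   // nearest-neighbor fetch
            int x = int(u * (w - 1) + 0.5f);
            int y = int(v * (h - 1) + 0.5f);
            return texels[std::size_t(y) * w + x];
        }
    };

    // Run after tessellation: every new vertex gets pushed along its
    // normal by the baked height, scaled by an artist-chosen amount.
    V3 displace(V3 pos, V3 normal, float u, float v,
                const HeightMap& map, float scale) {
        float height = map.sample(u, v);
        return { pos.x + normal.x * height * scale,
                 pos.y + normal.y * height * scale,
                 pos.z + normal.z * height * scale };
    }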

I'm not sure about the true technicalities of height maps; I'm only a game artist who uses them when modeling.
 
Graphics needs to advance not in how it renders a single frame -- because photo modes in games that don't add any extra processing already look amazing -- but in how it all comes together when the camera or the world is moving.

It isn't frames per second either. Nor is it resolution increases.

You can build a hugely persuasive 3d moving image at 30fps and 1080p, or even 720p.

What needs to improve is the little things that scream "fake" within 20 seconds of play time:

- visible level of detail jumps
- inaccurate or lower resolution shadows
- insufficient number of lights or fakery with multiple lights
- mirrors that don't work (GTA5 has wonderful indoor mirrors! but fails with outdoor ones)
- geometry+textures that use too many tricks to fool the eye in a static shot but fall apart upon inspection
- aliasing (solutions that work everywhere are needed; downsampling is probably necessary)
- visible switches in texture quality from near to far
- visible trickery in rendering lots of NPCs or lots of moving vehicles (you can create a traffic jam in GTA5, which is great! But if you turn 180 degrees to look elsewhere, then turn back, half the cars have vanished)
- caching algorithms for assets that reveal themselves instead of working behind the scenes

Etc., etc. I'd trade resolution and frame rate to spend those floating-point operations on improvements in these areas. I want GTA5 in motion to look as persuasive as it does in a Snapmatic photo. It already looks better than anything else out there in a single frame grab because of the art direction, lighting, and assets. Stop improving that and start improving how things look in motion.

Games should sell their graphics (if graphics is their main thing) with 20 seconds of uncompressed video, not photos - even if those photos are actual frame grabs.
 

joesiv

Member
Graphics needs to advance not in how it renders a single frame -- because photo modes in games that don't add any extra processing already look amazing -- but in how it all comes together when the camera or the world is moving.

It isn't frames per second either. Nor is it resolution increases.

You can build a hugely persuasive 3d moving image at 30fps and 1080p, or even 720p.

What needs to improve is the little things that scream "fake" within 20 seconds of play time:

- visible level of detail jumps
- inaccurate or lower resolution shadows
- insufficient number of lights or fakery with multiple lights
- mirrors that don't work (GTA5 has wonderful indoor mirrors! but fails with outdoor ones)
- geometry+textures that use too many tricks to fool the eye in a static shot but fall apart upon inspection
- aliasing (solutions that work everywhere are needed; downsampling is probably necessary)
- visible switches in texture quality from near to far
- visible trickery in rendering lots of NPCs or lots of moving vehicles (you can create a traffic jam in GTA5, which is great! But if you turn 180 degrees to look elsewhere, then turn back, half the cars have vanished)
- caching algorithms for assets that reveal themselves instead of working behind the scenes

Etc., etc. I'd trade resolution and frame rate to spend those floating-point operations on improvements in these areas. I want GTA5 in motion to look as persuasive as it does in a Snapmatic photo. It already looks better than anything else out there in a single frame grab because of the art direction, lighting, and assets. Stop improving that and start improving how things look in motion.

Games should sell their graphics (if graphics is their main thing) with 20 seconds of uncompressed video, not photos - even if those photos are actual frame grabs.
Sadly, all of the things you list are performance-related: CPU, GPU, or bandwidth. The only reason they exist is that games are pushing too much, so they need to swap models, occlude objects, etc.

If you don't want that, you need better hardware, better engines (that hide it better or are more efficient), or a drop in the scale of the games.

You brush off framerate/motion, but actually I think that is one area where videogame renderers really need to focus to give a more realistic look. You're right that still frames often look good, but game renderers are set up to produce a sequence of still images. Sure, we have some motion blur algorithms, some better than others, but I feel better motion blur is really key to a convincing visual experience.

People harp on games that drop to 24fps -- har har, they're more "filmic" -- and it's true that 24fps is filmic, but in videogames it's not, because it's equivalent to a 24fps movie shot at a 1/1000th-second shutter. The motion is very hard on your eyes. In film, people aim for something like a 1/180 or slower shutter, so that the exposure gives a natural blur between frames that meets up with the next frame. 60fps, or even 48fps, gets equated with the soap opera look: uncannily real.

There are post-processing motion blurs like this one:
http://udn.epicgames.com/Three/MotionBlur.html
But as a post effect, it can have artifacts which can be quite jarring.
[image: motion blur transition]

You can see it where the blurred region meets the non-blurred one.
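
For context, the usual post-process approach stores a per-pixel screen-space velocity and gathers color samples along it, which is exactly where those boundary artifacts come from: the gather happily crosses object edges. A minimal sketch of the gather, assuming single-channel color for brevity (names are mine):

    #include <algorithm>
    #include <vector>

    // Velocity-buffer motion blur: each pixel stores the screen-space
    // motion it underwent this frame; the blur averages color samples
    // taken along that vector. Note the gather has no idea where one
    // object ends and another begins, hence the edge artifacts.
    struct Frame {
        int w, h;
        std::vector<float> color;           // single channel for brevity
        std::vector<float> velX, velY;      // per-pixel motion, pixels/frame
    };

    std::vector<float> motionBlur(const Frame& f, int samples) {
        std::vector<float> out(f.color.size());
        for (int y = 0; y < f.h; ++y)
            for (int x = 0; x < f.w; ++x) {
                int idx = y * f.w + x;
                float sum = 0.f;
                for (int s = 0; s < samples; ++s) {
                    float t = (s + 0.5f) / samples - 0.5f;  // -0.5..0.5
                    int sx = std::clamp(int(x + f.velX[idx] * t), 0, f.w - 1);
                    int sy = std::clamp(int(y + f.velY[idx] * t), 0, f.h - 1);
                    sum += f.color[sy * f.w + sx];
                }
                out[idx] = sum / samples;
            }
        return out;
    }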

There is also vector/directional motion blurring, but that's based on simply blurring multiple renders together (sometimes lower-resolution renders), and the greater the blur, the greater the chance of the effect looking poor:
[image]


So is the only solution to have more horsepower to do more renders to bridge the gap between blurs? Or is there a better algorithm out there for realistic motion blur?

Here is an interesting read on vector space motion blur with ray tracing:
http://www.kunzhou.net/2010/mptracing.pdf
 

laxu

Member
So is the only solution to have more horsepower to do more renders to bridge the gap between blurs? Or is there a better algorithm out there for realistic motion blur?

I'd rather see them stop playing with motion blur so much. In pretty much all the games that have a setting for it, I turn it off because it looks annoying. In reality, any motion blur you see is generally your eyes being unable to focus on something moving quickly, and even then it doesn't look anywhere near as pronounced as what passes for motion blur in games.

Not to mention that, aside from recent developments with strobe-backlight LCDs, display technology itself is already the biggest cause of motion blur, through display persistence.
 
This isn't a video game, but I was watching this kids' CGI cartoon on YouTube.

It's pretty low budget, but it blew me away. To me, this is what PS5/XB4 graphics should look like.

What catches my eye is not the actual lighting but the color. There have been discussions in the game industry about "physically accurate skies" and "sRGB linear curves", but whatever this video is doing, I want the same technology for next gen (path tracing? Unbiased rendering? Plus ray tracing and global illumination).

It's so real!

[image]

Look at the natural blue tint as the garbage enters the shadows.

[image]

A very realistic sky coupled with an accurate sunset.


[image]

Another example of hyperrealistic global illumination. Also check out the truck's reflection in the bottom-right window and the ambient occlusion between all the objects.


I know it's all pre-rendered, but I would kill for games to look like this. This is when graphics would finally feel "good enough".

AC:U is closer to that "good enough" game (at least, so far).

[image]


[image]


[image]
 

NahaNago

Member
Okay, that image with the ball and the liquidy floor is weird. It's like once the ball has finished diving in, the floor is twitching like it's alive.
 

joesiv

Member
Okay, that image with the ball and the liquidy floor is weird. It's like once the ball has finished diving in, the floor is twitching like it's alive.

Yeah, it's definitely not ready for prime time. But maybe they just used the wrong textures; it looks more like an advanced cloth simulation than a mud/fluid simulation.

In this case the example looks poor because we recognize the rocks in the texture as rigid surfaces, so the fluid should flow around them rather than have them move with it.
 

joesiv

Member
I'd rather see them stop playing with motion blur so much. In pretty much all the games that have a setting for it, I turn it off because it looks annoying. In reality, any motion blur you see is generally your eyes being unable to focus on something moving quickly, and even then it doesn't look anywhere near as pronounced as what passes for motion blur in games.

Not to mention that, aside from recent developments with strobe-backlight LCDs, display technology itself is already the biggest cause of motion blur, through display persistence.

I agree overuse of motion blur can really hamper gameplay, as it makes it hard to keep an eye on the action. However, a lot of it has to do with camera movement. For example, in racing games like Forza Motorsport, motion blur makes for a much more realistic image and doesn't distract from gameplay.

The key is not going overboard with motion blur. It's like any effect: you can go overboard.
 

NahaNago

Member
Yeah, it's definitely not ready for prime time. But maybe they just used the wrong textures; it looks more like an advanced cloth simulation than a mud/fluid simulation.

In this case the example looks poor because we recognize the rocks in the texture as rigid surfaces, so the fluid should flow around them rather than have them move with it.

The mud simulation was fine up until the ball gets halfway covered in mud, with the mud twitching and vibrating on top of it. There's not enough fluttering all around for a cloth simulation, but it does seem interesting, like you could use it in a pressure-sensitive situation: say you needed to do surgery in a game, and when you press down on something, the organs and blood beat harder the firmer you press. Granted, this all could just be a mess-up on their part in what they were trying to show.
 
Great thread!

I'd like to hear your thoughts on interactive geometry deformation, which has lately been cautiously raising its head.

Some examples:

Crytek

[image]


Source: https://www.youtube.com/watch?v=6eNtO4tQWzA&feature=player_detailpage#t=29

NVIDIA PhysX Dynamic Heightfield Modification. Used in SpinTires.

https://developer.nvidia.com/content/physx-dynamic-heightfield-modifications
https://www.youtube.com/watch?v=Mxv8ObEo2qU
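
The arithmetic behind dynamic heightfield deformation is surprisingly simple. Here's a sketch of the core idea (my own illustration, not the PhysX API): lower the samples under the pressing shape and pile the displaced volume onto the rim, so material is conserved:

    #include <cmath>
    #include <vector>

    struct HeightField {
        int n;                               // n x n grid of height samples
        std::vector<float> h;
        float& at(int x, int y) { return h[y * n + x]; }
    };

    void press(HeightField& hf, float cx, float cy, float radius, float depth) {
        float displaced = 0.f;
        int rimCells = 0;
        for (int y = 0; y < hf.n; ++y)
            for (int x = 0; x < hf.n; ++x) {
                float d = std::hypot(x - cx, y - cy);
                if (d < radius) {            // under the tire/footprint
                    float push = depth * (1.f - d / radius);
                    hf.at(x, y) -= push;     // sink the ground
                    displaced += push;
                } else if (d < radius * 1.5f) {
                    ++rimCells;              // these cells get the spoil
                }
            }
        if (rimCells == 0) return;
        float lift = displaced / rimCells;   // conserve displaced volume
        for (int y = 0; y < hf.n; ++y)
            for (int x = 0; x < hf.n; ++x) {
                float d = std::hypot(x - cx, y - cy);
                if (d >= radius && d < radius * 1.5f)
                    hf.at(x, y) += lift;     // raise the rim
            }
    }

The expensive parts in practice are elsewhere: re-tessellating and re-uploading the modified region, and keeping physics, rendering, and normals in sync.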

This needs to happen soon. It's been one of my biggest gripes with real-time graphics; I hate seeing pre-baked shadows standing in for wrinkles in clothing and sheets.
 

KKRT00

Member
Great thread!

I'd like to hear your thoughts on interactive geometry deformation, which has lately been cautiously raising its head.

Some examples:

Crytek

[image]


Source: https://www.youtube.com/watch?v=6eNtO4tQWzA&feature=player_detailpage#t=29

NVIDIA PhysX Dynamic Heightfield Modification. Used in SpinTires.

https://developer.nvidia.com/content/physx-dynamic-heightfield-modifications
https://www.youtube.com/watch?v=Mxv8ObEo2qU

Oh wow, they are really pushing Geocache tech to make it completely interactive.
 

LeleSocho

Banned
I always wanted to ask a question that is more hardware-based; I hope it's okay if I ask it here...
To have a game render at a higher resolution without compromising performance, is it enough to have a higher pixel fillrate?
For example, take a given game that runs at, let's say, 1080p. If I want the same game to run exactly the same in terms of texture/mesh/shader complexity and framerate at 4K, all I have to do is get a GPU that is identical aside from having a higher pixel fillrate (which is indicated by the number of ROPs, IIRC), right?
 

KKRT00

Member
No, because tons of stuff depends on the framebuffer. For example, the whole post-processing pipeline depends on framebuffer size: the higher the screen resolution, the more computation you need, but you also get higher precision. For instance, 3840x2160 is exactly four times the pixels of 1920x1080, so every per-pixel pass costs roughly four times as much.
 

LeleSocho

Banned
So it also depends on the framebuffer size (which I totally forgot)... but I don't understand whether it needs more processing power or it's just a matter of those two things.
 
RESET also has some impressive atmosphere technology. http://www.youtube.com/watch?feature=player_detailpage&v=0ygZCDoCVec#t=236
This and TrueSky look very good. I wonder if something similar can be done with smoke effects. Still waiting for that KZ2 smoke.

Star Citizen is going to be using volumetrics for its in-game smoke and particle effects:
We’ve also started on one of the largest visual tech features we’ll be developing in the next six months which is a fully volumetric gas shader. The intention is to use this shader for both massive gas clouds to bring our space environments to life, and also smaller vfx like smoke and explosions. Rendering large semi-transparent volumes with real-time lighting is a significant challenge and is rarely tackled in computer games other than perhaps more limited solutions for cloud shaders in flight sims. As a result there are many aspects to this tech we’ll need to research separately such as the building/placement of the volumes, the complex shape and movement, the light scattering and shadowing, and efficient rendering.
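
The usual starting point for a shader like that is ray marching: step through the volume, accumulate in-scattered light, and attenuate by the density crossed so far. Here's a minimal sketch with placeholder density and lighting functions; the hard parts the quote lists (shaping the volume, shadowing, performance) all live inside those two functions:

    #include <algorithm>
    #include <cmath>

    struct V3 { float x, y, z; };
    V3 add(V3 a, V3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    V3 scale(V3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }

    float density(V3 p) {                  // stand-in: a soft sphere of gas
        float r2 = p.x * p.x + p.y * p.y + p.z * p.z;
        return std::max(0.f, 1.f - r2);
    }
    float lightAt(V3 p) { return 1.f; }    // stand-in for shadowed lighting

    // Ray march through the volume: accumulate in-scattered light,
    // attenuated by the density already crossed (Beer-Lambert falloff).
    float marchVolume(V3 origin, V3 dir, float tMax, int steps) {
        float dt = tMax / steps;
        float transmittance = 1.f, radiance = 0.f;
        for (int i = 0; i < steps; ++i) {
            V3 p = add(origin, scale(dir, (i + 0.5f) * dt));
            float sigma = density(p);
            radiance += transmittance * sigma * lightAt(p) * dt;
            transmittance *= std::exp(-sigma * dt);
        }
        return radiance;
    }

The cost problem is obvious from the loop: dozens of samples per pixel, and a shadowed lightAt would need its own march toward the light at each of them.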
 
Great thread!

I'd like to hear your thoughts on interactive geometry deformation, which has lately been cautiously raising its head.

Some examples:

Crytek

[image]


Source: https://www.youtube.com/watch?v=6eNtO4tQWzA&feature=player_detailpage#t=29

NVIDIA PhysX Dynamic Heightfield Modification. Used in SpinTires.

https://developer.nvidia.com/content/physx-dynamic-heightfield-modifications
https://www.youtube.com/watch?v=Mxv8ObEo2qU

I'm loving this; I can't wait to blow chunks of meaty, flexible flesh off of demon spawn from hell in Doom 6, etc.
 
So it also depends on the framebuffer size (which I totally forgot)... but I don't understand whether it needs more processing power or it's just a matter of those two things.

Every game needs a buffer to render information into. Most of the time you want to match the framebuffer to the actual resolution of the game, so if you increase that resolution, you also increase all the buffers involved in reading from and writing to it. There is a lot of repeated scene rendering going on, especially across the various "passes" (lighting, shadows, tone mapping, etc.). All of this grows as you grow the framebuffer.
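
To make the scaling concrete, a quick back-of-envelope: every per-pixel buffer (G-buffer layers, depth, post-process targets) grows with pixel count. The 16 bytes per pixel below is just an illustrative fat HDR target, not any particular engine's layout:

    #include <cstdio>

    // Pixel counts and per-buffer memory at common resolutions; 4K is
    // exactly four times the pixels of 1080p, so every per-pixel cost
    // (shading, blending, bandwidth) scales by roughly the same factor.
    int main() {
        struct Res { const char* name; int w, h; };
        const Res resolutions[] = { {"720p",  1280,  720},
                                    {"1080p", 1920, 1080},
                                    {"4K",    3840, 2160} };
        const int bytesPerPixel = 16;        // illustrative fat HDR target
        for (const Res& r : resolutions) {
            long long pixels = 1LL * r.w * r.h;
            std::printf("%-6s %9lld pixels  %7.1f MB per 16-byte buffer\n",
                        r.name, pixels,
                        pixels * bytesPerPixel / (1024.0 * 1024.0));
        }
    }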
 

As soon as I saw "per-face texture mapping" I immediately thought of the guy I worked with at Disney who created per-face texturing, called Ptex.

The biggest downside to this method is having to mip-map every single face; they even mention it in the intro to the paper. We don't use Ptex at our studio for this very reason: our path tracer would have to mip-map all the faces, and it is very slow.
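
To get a feel for the scale of the problem, a back-of-envelope sketch (the face count and per-face resolution are illustrative, not Disney's numbers): a conventional atlas builds one mip chain for the whole texture, while Ptex needs a chain per face:

    #include <cmath>
    #include <cstdio>

    // Why per-face mip-mapping hurts: an atlas builds one mip chain for
    // the whole texture; Ptex builds one per face, and on production
    // meshes the face count dominates everything else.
    int main() {
        const long long faces = 1000000;   // quads on a film-quality mesh
        const int faceRes = 64;            // 64x64 texels per face
        const int levels = int(std::log2(double(faceRes))) + 1;  // 7 levels
        std::printf("mip chains: %lld faces x %d levels = %lld\n",
                    faces, levels, faces * levels);
    }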

-M
 