
Game Graphics Technology | 64bit, procedural, high-fidelity debating

Danlord

Member
I agree, my first post in this thread was actually about distance fields:


Thanks, that might actually have been where I first saw it before forgetting about it. The flowing-water implementation is very nice. Is there a good explanation, fairly simple but with some technical detail, of how this data is created and then used? I like fairly technical stuff, but sometimes explanations go over my head.
 
One thing that could bring AA to the next level is, how should I put it, programmable supersampling. From what I remember, hardware AA is designed to sample each pixel the same way, I guess right in the middle. So with a camera standing still and no animations going on, you'd have a perfectly stable image. If you could skew the sampling per pixel, you'd get something closer to real life or cameras: a stream of light with slight variations over time. This would also add temporal AA that could help greatly with IQ in motion. People obviously already do this in offline rendering, but last I checked it's impossible on current hardware, isn't it?
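(For the curious, here's a toy Python sketch of the idea. Everything in it is invented for illustration; it's not how any real GPU exposes sampling. Instead of always shading the pixel centre, each frame samples a random offset inside the pixel and the results are averaged over time:)

```python
import random

def scene(x, y):
    # Toy "scene": a hard diagonal edge, 1.0 above the line, 0.0 below.
    return 1.0 if y > 0.6 * x else 0.0

def centre_sample(px, py):
    # What fixed hardware sampling does: always the same point per pixel.
    return scene(px + 0.5, py + 0.5)

def jittered_sample(px, py, frames=64):
    # "Programmable" stochastic sampling: a different random offset each
    # frame, averaged over time -- converges to the true pixel coverage.
    total = 0.0
    for _ in range(frames):
        total += scene(px + random.random(), py + random.random())
    return total / frames

# The centre sample is binary (aliased); the jittered average approaches
# the fraction of the pixel actually covered by the edge (~0.7 here).
print(centre_sample(10, 6), round(jittered_sample(10, 6), 2))
```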

EDIT:
Also, you might want to add this to the first post: I think Shadertoy is a great tool for learning about shaders and computer graphics. The code is often really simple, so even beginners can look at and modify examples (in the browser) to get an understanding of how things work. It's different from the pipeline in a game, but it's still a really cool app that anyone interested should take a look at.

It's made by the same guy linked to above.

If I'm understanding you properly, AMD has supported an MSAA version of this for a long time (with no way to access it through DirectX), and NVIDIA started with Maxwell. That's how MFAA is done.
 

MajorTom

Member
The image of Nathan Drake in the first post is scary good.
Literally looks like a real person.

Seems like a pretty interesting thread.

Subbed
 

Skinpop

Member
If I'm understanding you properly, AMD has supported an MSAA version of this for a long time (with no way to access it through DirectX), and NVIDIA started with Maxwell. That's how MFAA is done.

I see. I remember reading up on this as future GPU tech a year or two ago. I guess you can't change the sampling pattern on a per-pixel basis?
 

watchdog

Member
Great thread. Haven't read through it all yet, but it's fascinating so far.

I don't know if my question has been brought up by anyone else yet but has there been research done on other methods of generating 3D objects besides using polygons and voxels? If so, are there any games or demos that have been produced using these other methods?
 
Great thread. Haven't read through it all yet, but it's fascinating so far.

I don't know if my question has been brought up by anyone else yet but has there been research done on other methods of generating 3D objects besides using polygons and voxels? If so, are there any games or demos that have been produced using these other methods?

There's been a bit of discussion in the thread about using distance fields for that purpose. That's an emerging technology with a lot of promise.

An older tech that didn't pan out is NURBS: basically the Bézier curves of 3D, except hard to work with and not very efficient.
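(Since distance fields keep coming up, here's a minimal sphere-tracing sketch in Python — a toy with a single hard-coded sphere, not any engine's actual implementation. The key property is that the field's value at any point is a safe distance to step along a ray without overshooting geometry:)

```python
import math

def sdf(x, y, z):
    # Signed distance to a unit sphere at the origin: negative inside,
    # positive outside, zero exactly on the surface.
    return math.sqrt(x * x + y * y + z * z) - 1.0

def sphere_trace(ox, oy, oz, dx, dy, dz, max_steps=64, eps=1e-4):
    # March along the ray from (ox, oy, oz) in direction (dx, dy, dz);
    # each step advances by the field value, which can never overshoot.
    t = 0.0
    for _ in range(max_steps):
        d = sdf(ox + dx * t, oy + dy * t, oz + dz * t)
        if d < eps:
            return t      # hit: distance along the ray to the surface
        t += d
    return None           # miss (or ran out of steps)

print(sphere_trace(0.0, 0.0, -3.0, 0.0, 0.0, 1.0))  # ~2.0
```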
 

Danlord

Member
One thing that could bring AA to the next level is, how should I put it, programmable supersampling. From what I remember, hardware AA is designed to sample each pixel the same way, I guess right in the middle. So with a camera standing still and no animations going on, you'd have a perfectly stable image. If you could skew the sampling per pixel, you'd get something closer to real life or cameras: a stream of light with slight variations over time. This would also add temporal AA that could help greatly with IQ in motion. People obviously already do this in offline rendering, but last I checked it's impossible on current hardware, isn't it?
...

Re-reading this thread while playing through DRIVECLUB reminded me: is this how Driveclub does its adaptive AA solution in Photo Mode? You keep the camera still and the AA improves (among other things, like more distant shadows and more prop objects in some scenes). Photo Mode shots of DRIVECLUB have incredibly good image quality because of it.
 

Kezen

Banned
I was looking at some Rise of the Tomb Raider shots and was wondering, how "high" quality is the motion blur?

[Rise of the Tomb Raider screenshots]


How does it stack up against the best motion blur implementations out there?

I have another question: for a game supposedly authored with PBR, I find many materials lacking plausibility (I'm of course not expecting physical correctness).
The stone, for instance:
[screenshot: stone materials]


One last shot highlighting the crude AF on display:
[screenshot: low-quality anisotropic filtering]
 
Great thread OP, subbed.

I have nothing to add here yet other than "graphix is purrddyyy!!!" but this will be a great place for me to learn a lot more. ;)
 

Durante

Member
Could we please not drag this great technical thread into the insufferable "this company is more evil than this other company" ghetto? Thanks.

Hint: they are all evil and none of them love you.
 

Kezen

Banned
Could we please not drag this great technical thread into the insufferable "this company is more evil than this other company" ghetto? Thanks.

Hint: they are all evil and none of them love you.

I understand. I just thought it fit the theme of this thread.
 

Durante

Member
And of course the old halo effect in games like Far Cry 3 is disgusting.
But that's just a really really really terrible implementation of AO. I don't think it should even be called AO. It's hardly an indictment of the entire practice.

And in any case, distance field AO can't really be compared to SSAO anyway - for one, it takes into account geometry which isn't on screen.
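(As a rough illustration of why it behaves differently: distance-field AO samples the scene's field along the surface normal, so anything represented in the field, on-screen or not, contributes occlusion. A hedged Python sketch with made-up step counts, reusing an `sdf(x, y, z)` function like the sphere-tracing sketch earlier in the thread:)

```python
def distance_field_ao(sdf, px, py, pz, nx, ny, nz, steps=5, delta=0.2):
    # Step away from the surface along the normal; wherever the field
    # value is smaller than the distance stepped, nearby geometry is
    # crowding in, so occlusion accumulates -- regardless of whether
    # that geometry is visible on screen.
    occlusion = 0.0
    for i in range(1, steps + 1):
        t = delta * i
        d = sdf(px + nx * t, py + ny * t, pz + nz * t)
        occlusion += max(0.0, t - d) / t
    return 1.0 - min(1.0, occlusion / steps)
```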
 

Fafalada

Fafracer forever
Springfoot said:
Like everything in games development, it's a matter of budgeting the available rendering/performance resources.
The problem is that in modern games you're dealing with hundreds (or more) of different performance-impacting quantities that are usually not orthogonal. So budgeting generally boils down to brute-force generalizations across systems and assets that remove most of the fine-grained control necessary for something like optimal LOD-distance tweaking - and larger team sizes make this problem orders of magnitude worse.

I.e., in controlled-scope scenarios it's actually relatively easy to make optimal LOD transitions, but most (larger) games aren't built with controlled scope.

Anyway as long as we're talking pet-peeves, I'd add Shadow Cascade boundaries to the mix. They're more egregious in open-space games, and outright obnoxious in VR, and there aren't really any good alternatives that don't dramatically increase costs.
 

Durante

Member
So I doubt it would be hard at this point to use the same solution on a dithered crossfade between two LOD models. It would certainly be less jarring and noticeable than simple popping but wouldn't be quite as 'retro' looking as the visible dithering solutions that have been used in some titles recently. It also completely sidesteps the issue of trying to use forward-rendered transparency, which will never quite match the deferred shading of solid or masked objects.
Yeah, I think it could be a good alternative.

I think one of the most egregious issues with the dithered crossfade, though, is when it isn't simply set to trigger at a certain point and take X seconds to happen, but rather set to fade over a particular depth range. The latter approach has snuck its way into some fairly major releases over the past couple years and means that players can sometimes simply stand still or walk slowly at specific distances and see the two dithered LODs just sit there warbling in place without committing to one or the other.
Can you name an example of this? It sounds quite terrible.
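(To make the two trigger strategies above concrete, here's a hedged Python sketch — all thresholds invented. With the time-based rule the fade always completes; with the depth-range rule, a player parked inside the band sees both LODs dithering indefinitely:)

```python
def fade_alpha_time(seconds_since_trigger, duration=0.5):
    # Time-based: once the LOD swap is triggered, it finishes in
    # `duration` seconds no matter what the player does.
    return min(seconds_since_trigger / duration, 1.0)

def fade_alpha_depth(distance, near=48.0, far=52.0):
    # Depth-range based: alpha is a pure function of camera distance,
    # so standing still inside [near, far] leaves both LODs half-faded.
    return max(0.0, min((distance - near) / (far - near), 1.0))

def outgoing_lod_visible(alpha, px, py):
    # 4x4 ordered (Bayer) screen-door test deciding, per pixel, whether
    # the outgoing LOD still draws at this fade alpha.
    bayer4 = [[ 0,  8,  2, 10],
              [12,  4, 14,  6],
              [ 3, 11,  1,  9],
              [15,  7, 13,  5]]
    threshold = (bayer4[py % 4][px % 4] + 0.5) / 16.0
    return alpha < threshold
```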
 

Jux

Member
I think Far Cry 3 does this. Or maybe the time threshold was so small that you could just go back and forth a few dozen centimeters and make objects dither in and out. It was pretty noticeable anyway.
 
I'm a bit busy with RL at the moment, but I'll have more than enough stuff to post in here when I'm done with that.
I was looking at some Rise of the Tomb Raider shots and was wondering, how "high" quality is the motion blur?

[Rise of the Tomb Raider screenshots]


How does it stack up against the best motion blur implementations out there?

The best way to tell, IMO, would be to have an object that takes up a lot of screen space move close to the camera. Then you could see the object motion blur in more detail and compare it better against others. It doesn't seem really artifacty at that distance in these screens, though.
 

Dezeer

Member
Aren't the new consoles 64-bit? If so, then why do they run x86? When my OS runs a 32-bit program it's usually x86.

x86-64 natively supports x86 (32-bit) and x86-64 (64-bit), but what the OP probably means by 64-bit is a 64-bit coordinate space.
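(A quick way to see why coordinate precision, rather than instruction width, is the interesting part: 32-bit floats lose positional resolution far from the origin, which is what pushes big-world engines toward 64-bit coordinates or origin shifting. A small Python demonstration:)

```python
import struct

def as_f32(x):
    # Round-trip a Python float through 32-bit storage to expose
    # single-precision granularity.
    return struct.unpack('f', struct.pack('f', x))[0]

# Near the origin, float32 resolves steps of ~0.0001 units just fine...
print(as_f32(1.0001) - as_f32(1.0))                       # ~0.0001
# ...but 10,000,000 units out, the nearest representable step is a full
# unit, so a half-unit move simply vanishes.
print(as_f32(10_000_000.0 + 0.5) - as_f32(10_000_000.0))  # 0.0
```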

Can you name an example of this? It sounds quite terrible.

GTA 5 does a distance-based dithering crossfade for new models that changes only with distance to the object, not time. And I think it might even use it for bringing in new LODs, but I haven't really noticed that.



Does any game developer/engine have soft shadows that aren't done with PCSS or CHS?


Is there any game that has fully procedural destruction? I know RB6:S has very well done destruction, but only for select surfaces. And SC has a mixture of procedural destruction and "health pools" for select components, with predetermined destruction points.
 

Peterthumpa

Member
Not sure if this was asked yet, although I believe I know the reason, but...

Why is collision detection so hard to accomplish? Things like part of a character's arm clipping through a wall or through another character, for example.

Could it be explained with the "it's a costly effect" argument? Or is it more that, e.g., there's no animation for when a soldier's weapon is clipping through things, and therefore no option besides basically letting it happen? Couldn't IK be used to overcome that, like in recent games where a character standing on a roof adapts his stance to the roof's angles?
 
Not sure if this was asked yet, although I believe I know the reason, but...

Why is collision detection so hard to accomplish? Things like part of a character's arm clipping through a wall or through another character, for example.

Could it be explained with the "it's a costly effect" argument? Or is it more that, e.g., there's no animation for when a soldier's weapon is clipping through things, and therefore no option besides basically letting it happen? Couldn't IK be used to overcome that, like in recent games where a character standing on a roof adapts his stance to the roof's angles?

Most collision questions can be answered with "it's just expensive". Characters are tricky because the player's body usually only interacts with the ground. You can do full character collisions, like with Euphoria, but then it's hard to make them look realistic; usually the character ends up looking drunk.

The cost of getting realistic character collisions is generally considered not worth it when the available resources could be used on other things.
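(On the IK point: the math itself is cheap, which is why limb placement is one of the few collision fixes that usually is considered worth it. A minimal 2D two-bone solver via the law of cosines — all names and lengths made up for illustration:)

```python
import math

def two_bone_ik(l1, l2, tx, ty):
    # Solve hip and knee angles so a two-segment limb (lengths l1, l2)
    # reaches target (tx, ty), clamping the target to maximum reach.
    dist = min(math.hypot(tx, ty), l1 + l2 - 1e-6)
    # Interior angle at the knee, from the law of cosines.
    cos_inner = (l1 * l1 + l2 * l2 - dist * dist) / (2 * l1 * l2)
    inner = math.acos(max(-1.0, min(1.0, cos_inner)))
    knee_bend = math.pi - inner          # 0 means the limb is straight
    # Hip angle: direction to the target, minus the offset of the first
    # bone from that line (law of sines).
    hip = math.atan2(ty, tx) - math.asin(l2 * math.sin(inner) / dist)
    return hip, knee_bend

# e.g. plant a foot 0.7 units forward and 0.9 units down a sloped roof:
print(two_bone_ik(0.5, 0.5, 0.7, -0.9))
```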
 

Durante

Member
Also, if you want truly "accurate" response to arbitrary collisions between dynamic models (and especially human actors) you need to basically model your entire game around that. E.g. Exanima is trying to accomplish that, and another example is Overgrowth.

I don't think it's something you can just plug into a AAA production pipeline, even if you had 10x the hardware power available.
 
Gonna repost a question I asked a while back (not sure if it was in this thread or another)

Does anyone have screenshots or a demonstration of global illumination in use in Driveclub? It supposedly has it, and I'm not doubting it, but it's hard to find a situation where you'd even be able to tell.
 

Kezen

Banned
I'm a bit busy with RL at the moment, but I'll have more than enough stuff to post in here when I'm done with that.


The best way to tell, IMO, would be to have an object that takes up a lot of screen space move close to the camera. Then you could see the object motion blur in more detail and compare it better against others. It doesn't seem really artifacty at that distance in these screens, though.

Yeah I thought it looked really good.

I have another unrelated question: is it possible for DirectX 11 to limit gaming PCs to the extent of missing out on some visual tech?
 

Ishida

Banned
I thought this was a nice touch in ROTR. I don't think I've seen a projector cast an image onto a character before in a game. It seems fancy.
[ROTR screenshot: projected image cast onto a character]

Metal Gear Solid 4 also did it, in the scene where Mei Ling is explaining the plans to infiltrate Outer Haven. You can even move the projected picture around.
 
Yeah I thought it looked really good.

I have another unrelated question: is it possible for DirectX 11 to limit gaming PCs to the extent of missing out on some visual tech?

Yes. Far Cry 4's HRAA was a recent example.

Edit: according to Ubi, anyway.
 

Micerider

Member
Yeah, I think it could be a good alternative.

Can you name an example of this? It sounds quite terrible.

I don't have a screenshot at hand to show, but that's what I got out of Rise of the Tomb Raider, Witcher 3 (PS4), and even on very distant objects in the Uncharted 4 beta. It's not frequent in any of those (Witcher 3 being the worst), but sometimes a bit jarring, as it looks like very ugly artifacts.
 
In a thread I can no longer remember the title of or find through search (it was a thread about common graphical annoyances in engines), someone said that Unreal Engine 4 doesn't handle screen space reflections properly by default.

Can anyone expand upon this?
 
In a thread I can no longer remember the title of or find through search (it was a thread about common graphical annoyances in engines), someone said that Unreal Engine 4 doesn't handle screen space reflections properly by default.

Can anyone expand upon this?
Properly? I am not sure what they may have meant by that.

It does have some oddness to it, like cutting off at screen edges, fading in slowly to become more detailed, and leaving trails (it uses samples from previous frames to increase its quality and cut down on aliasing), as well as not representing some surface properties as well as other SSRs do (namely the one in Frostbite).

But I think it is pretty great.
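(For intuition about the screen-edge cut-off specifically, here's a toy screen-space trace in Python — names invented and grossly simplified; real implementations march in homogeneous coordinates, use hierarchical depth buffers, etc. The moment the reflected ray leaves the frame there's nothing left to sample, hence the fade-out at screen borders:)

```python
def trace_ssr(depth, x, y, step_x, step_y, ray_depth, depth_step,
              max_steps=64):
    # depth: 2D list, the scene's depth buffer. March the reflected ray
    # in screen space until it passes behind the stored depth (a hit) or
    # exits the frame (no data to reflect -- the classic SSR cut-off).
    h, w = len(depth), len(depth[0])
    for _ in range(max_steps):
        x += step_x
        y += step_y
        ray_depth += depth_step
        xi, yi = int(x), int(y)
        if not (0 <= xi < w and 0 <= yi < h):
            return None                 # ray left the screen: no data
        if depth[yi][xi] < ray_depth:
            return (xi, yi)             # ray went behind the scene: hit
    return None
```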
Yeah I thought it looked really good.

I have another unrelated question: is it possible for DirectX 11 to limit gaming PCs to the extent of missing out on some visual tech?

Yes. Far Cry 4's HRAA was a recent example.

Edit: according to Ubi, anyway.

The way I understand it is this: many of the hardware features on GPUs that are not exposed under DX11 would be limited, or just plain run slowly, in comparison to an API which exposes them to be tweaked and messed with. Hence how something like HRAA works, or how something like order-independent transparency could be really slow under DX11.

That doesn't mean it's impossible, though, as DX11.3 adds in a lot of the GPU-related features but just cuts out all the memory and threading management. Also, stuff like TXAA, MFAA, CUDA, PhysX, etc. all run under DX11, but with vendor-specific extensions.
 
Properly? I am not sure what they may have meant by that.

It does have some oddness to it, like cutting off at screen edges, fading in slowly to become more detailed, and leaving trails (it uses samples from previous frames to increase its quality and cut down on aliasing), as well as not representing some surface properties as well as other SSRs do (namely the one in Frostbite).

But I think it is pretty great.




The way I understand it is this: many of the hardware features on GPUs that are not exposed under DX11 would be limited, or just plain run slowly, in comparison to an API which exposes them to be tweaked and messed with. Hence how something like HRAA works, or how something like order-independent transparency could be really slow under DX11.

That doesn't mean it's impossible, though, as DX11.3 adds in a lot of the GPU-related features but just cuts out all the memory and threading management. Also, stuff like TXAA, MFAA, CUDA, PhysX, etc. all run under DX11, but with vendor-specific extensions.

From what I understand, a lot of the unique hardware features of AMD and NVIDIA are not exposed under DX11 (and even DX12). They can only be accessed with OpenGL extensions (basically irrelevant), and NVIDIA has NVAPI, but I'm not sure how transparent NVAPI is for developers to use. Regardless, most aren't going to anyway, since it's a bunch of extra QA work that won't benefit AMD, much less the consoles.

OIT is quite fast on Intel's graphics architecture under DX11, but it seems like it will be slow on Maxwell even when DX12 adds support for their implementation. NVIDIA itself recommends being cautious with its use. AMD is even worse; they still have to rely on linked pixel lists.
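(For anyone wondering what "linked pixel lists" buy you: each pixel accumulates every transparent fragment that lands on it during rendering, and a resolve pass sorts and blends them per pixel, so geometry never needs CPU-side sorting. A toy Python resolve — the data layout is invented for illustration:)

```python
def resolve_oit(fragments):
    # fragments: {(x, y): [(depth, color, alpha), ...]} in arbitrary
    # draw order, mimicking a per-pixel linked list. Sort far-to-near
    # per pixel, then alpha-blend back-to-front.
    resolved = {}
    for pixel, frags in fragments.items():
        color = 0.0  # background
        for depth, frag_color, alpha in sorted(frags, reverse=True):
            color = frag_color * alpha + color * (1.0 - alpha)
        resolved[pixel] = color
    return resolved

# Two overlapping transparent layers submitted out of depth order:
print(resolve_oit({(0, 0): [(1.0, 0.8, 0.5), (2.0, 0.2, 0.5)]}))
```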
 

orioto

Good Art™
How is it that teams of teen hackers writing emulators from their parents' basements can get their video output to look like this without breaking a sweat:



while a multi-million dollar corporation is okay with releasing things that look like this:

The answer is really sad, be prepared: mainstream gamers will find the second screen better looking.
 
This is the first time I've read about distance fields; that's super interesting. If I'm understanding it correctly, it's basically a memory-for-performance tradeoff, yeah? You cache the distance field every frame and use it to speed up the raymarching. Is that correct?
 

Kezen

Banned
The way I understand it is this: many of the hardware features on GPUs that are not exposed under DX11 would be limited, or just plain run slowly, in comparison to an API which exposes them to be tweaked and messed with. Hence how something like HRAA works, or how something like order-independent transparency could be really slow under DX11.

That doesn't mean it's impossible, though, as DX11.3 adds in a lot of the GPU-related features but just cuts out all the memory and threading management. Also, stuff like TXAA, MFAA, CUDA, PhysX, etc. all run under DX11, but with vendor-specific extensions.

Which critical GPU features have yet to be exposed by DirectX 12?

Looks good to me.
For the best motion blur, I keep coming back to CryEngine games. They have really good motion blur.
They are in a league of their own for sure. I was blown away by Crysis 2's DX11 motion blur back in the day.
http://international.download.nvidia.com/webassets/en_US/shared/images/articles/crysis2uu/MotionBlur.gif
 

Durante

Member
Switching gears, I also agree about cascaded shadow map boundaries being annoying, but there's just not a great alternative at the moment, and I'm generally in favor of maintaining shadows as far into the distance as possible rather than dropping them entirely as some open world console releases tend to do.
Generally, when you need distant shadows, it's in a landscape. For such cases, UE4's Ray Traced Distance Field Soft Shadows seem like a great alternative.
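(The trick behind those distance-field shadows, roughly: while marching from the surface toward the light, track how narrowly the ray misses geometry; near-misses darken into a penumbra. A hedged Python sketch — the `k` sharpness constant is invented, and it reuses an `sdf(x, y, z)` function like the earlier examples:)

```python
def soft_shadow(sdf, px, py, pz, lx, ly, lz, t_max=10.0, k=8.0):
    # (lx, ly, lz) is the direction toward the light, assumed normalised.
    shadow = 1.0
    t = 0.02                     # start a little off the surface
    while t < t_max:
        d = sdf(px + lx * t, py + ly * t, pz + lz * t)
        if d < 1e-4:
            return 0.0           # ray hit an occluder: full shadow
        # A small d at small t means a close miss near the receiver;
        # dividing by t makes distant near-misses soften less.
        shadow = min(shadow, k * d / t)
        t += d
    return shadow
```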

That actually reminded me of another piece of shadow-related tech that I'd love to see more frequently, which is the bokeh shadows Crytek did starting with Crysis 2. They were still using standard shadow maps at their core, but up close and in revealing situations, like the noisy shadows of tree leaves falling onto flat city streets, they looked great and hid any low resolution shadow maps. I'm trying to find the write-up they gave, but it might be buried in one of their 300mb rendering tech presentations.
Sounds interesting, I can't really imagine how bokeh and shadows relate just from the name.

Which critical GPU features have yet to be exposed by DirectX 12?
Unless you have a very particular definition of "critical": none.
 

2+2=5

The Amiga Brotherhood
How is it that teams of teen hackers writing emulators from their parents' basements can get their video output to look like this without breaking a sweat:



while a multi-million dollar corporation is okay with releasing things that look like this:

That's not enough, though; it just adds scanlines, but the image basically remains the same. To be faithful to the original vision it needs to make the pixels disappear (yes, 8- and 16-bit developers didn't want you to be able to distinguish pixels!) by applying blur and more contrast:
[FFVI comparison screenshots: raw upscale vs. NTSC-filtered]
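(The "make pixels disappear" part is basically a low-pass filter plus scanline modulation. A crude grayscale Python sketch — coefficients invented; real NTSC simulation also models the color subcarrier, which this ignores:)

```python
def crt_filter(frame, blur=0.5, scanline_dark=0.35):
    # frame: 2D list of grayscale values in [0, 1].
    out = []
    for y, row in enumerate(frame):
        # Horizontal blur: a cheap stand-in for NTSC's limited bandwidth,
        # which smears hard pixel edges together.
        blurred = []
        for x, v in enumerate(row):
            left = row[max(x - 1, 0)]
            right = row[min(x + 1, len(row) - 1)]
            blurred.append(v * (1.0 - blur) + (left + right) * blur / 2.0)
        # Darken every other line to mimic the CRT scanline gaps.
        gain = 1.0 - scanline_dark if y % 2 else 1.0
        out.append([min(1.0, b * gain) for b in blurred])
    return out
```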
 

Durante

Member
So I dug through some of the old Crytek presentations and found some of the material on them. The linked slide is actually leading into more discussion of variable shadow blur based on distance (true soft shadows, similar to other percentage-closer effects, where the farther the shadow caster is from the surface, the softer the shadow), but the bokeh effect is used even when that aspect is disabled and a single penumbra blur is applied to all shadows regardless of distance.

I've also attached a few photos I found quickly (JPGs, sorry!) that illustrate the real-life effect they're faking.

Note the circular bokeh dots of light on the far walls:
[photo: circular bokeh light spots on a wall]


The slide is taken from their 2011 SIGGRAPH paper, available here.
Really neat!
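(For reference, the variable-penumbra part described above is essentially the percentage-closer soft shadows estimate: search the shadow map for occluders, then size the blur by how far the average occluder sits from the receiver. A hedged Python sketch — function names and the light-size constant are invented:)

```python
def average_blocker_depth(shadow_map, u, v, receiver_depth, radius=2):
    # Scan a small shadow-map neighbourhood for texels closer to the
    # light than the receiver, i.e. potential occluders. Assumes (u, v)
    # is at least `radius` texels from the map border.
    total, count = 0.0, 0
    for dv in range(-radius, radius + 1):
        for du in range(-radius, radius + 1):
            d = shadow_map[v + dv][u + du]
            if d < receiver_depth:
                total += d
                count += 1
    return total / count if count else None   # None: fully lit

def penumbra_width(blocker_depth, receiver_depth, light_size=0.5):
    # Similar triangles: the farther the occluder is from the receiver
    # (relative to the light), the wider and softer the penumbra.
    return light_size * (receiver_depth - blocker_depth) / blocker_depth
```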
 

dogen

Member
Which critical GPU features have yet to be exposed by DirectX 12?

I know sebbbi mentioned he really wanted to see programmable sample patterns (for his MSAA trick to work on PC), along with control over coverage samples (you know, CSAA/EQAA). He also mentioned a few other fancy-sounding shader things. Nothing critical, I guess.


Note the circular bokeh dots of light on the far walls:

I never noticed that until now, but it looks awesome.



But that's just a really really really terrible implementation of AO. I don't think it should even be called AO. It's hardly an indictment of the entire practice.

And in any case, distance field AO can't really be compared to SSAO anyway - for one, it takes into account geometry which isn't on screen.

Yeah, I didn't mean they're all like Far Cry 3. That's just the worst example I know of. Other games had halos too, the original Crysis for example. AO is still generally non-essential for me, because I notice performance more than I care about my video game looking like a video game (which I don't mind at all :p).
 