
Game Graphics Technology | 64bit, procedural, high-fidelity debating

For the moment, I think that no studio surpasses what Guerrilla Games has achieved with their cloudscape engine:
https://www.youtube.com/watch?v=ezKR3VuqD9k

https://www.guerrilla-games.com/read/the-real-time-volumetric-cloudscapes-of-horizon-zero-dawn

But Theory's work is amazing too, just a bit below GG's, because GG's cloudscape is more varied: it can create many types of clouds in the same sky. In the video I shared, you can see cirrus + cumulus in the sky at 0:30. In Theory's sky, I see only cumulus.

You may want to watch the video I linked again, specifically the beginning where it goes over all the layering of the different simulations: cirrus clouds are present.

Likewise, one thing that I think is interesting about the simulation in Reset is how the clouds cast shadows onto the ground as well as through fog (thus creating cool and obvious volumetric effects, or the appearance of variable fog densities)... something that the one from GG does not do, to my knowledge. Or at least the video you sent me does not show easily visible cloud shadows, or cloud shadows visibly affecting fog.
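For anyone curious how that works mechanically, here is a minimal CPU-side sketch of the idea (entirely my own toy code and numbers, not Reset's or GG's actual implementation): for each fog sample, march a secondary ray toward the sun through the cloud layer and attenuate the fog's in-scattered sunlight by the resulting transmittance.

```cpp
#include <cmath>
#include <cstdio>

// Toy cloud density field: a slab of cloud between 1000 m and 1500 m
// altitude, covering x in [1000 m, 3000 m] (values are made up).
float cloudDensityAt(float x, float y) {
    bool inSlab = (y > 1000.0f && y < 1500.0f) && (x > 1000.0f && x < 3000.0f);
    return inSlab ? 0.002f : 0.0f;
}

// March from a fog sample toward the sun, accumulating optical depth
// through the cloud layer; the Beer-Lambert transmittance returned here
// would scale the fog's in-scattered sunlight at that sample.
float sunTransmittance(float x, float y, float sunX, float sunY) {
    float depth = 0.0f;
    const float step = 100.0f;          // metres per march step
    for (int i = 0; i < 32; ++i) {
        x += sunX * step;
        y += sunY * step;
        depth += cloudDensityAt(x, y) * step;
    }
    return std::exp(-depth);
}

int main() {
    // Two ground-level fog samples: open sky vs. under the cloud slab.
    std::printf("open sky   : %.3f\n", sunTransmittance(0.0f,   0.0f, 0.5f, 0.86f));
    std::printf("under cloud: %.3f\n", sunTransmittance(400.0f, 0.0f, 0.5f, 0.86f));
}
```

A fog sample shaded with the second transmittance receives roughly a third of the sunlight, which is what reads on screen as a shadow shaft through the fog.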
 

dr guildo

Member
You may want to watch the video I linked again, specifically the beginning where it goes over all the layering of the different simulations: cirrus clouds are present.

Likewise, one thing that I think is interesting about the simulation in Reset is how the clouds cast shadows onto the ground as well as through fog (thus creating cool and obvious volumetric effects, or the appearance of variable fog densities)... something that the one from GG does not do, to my knowledge. Or at least the video you sent me does not show easily visible cloud shadows, or cloud shadows visibly affecting fog.

OK, you are right about this; I hadn't seen the part about the layering of the different simulations. Now, concerning cloud shadows, they can be faked à la UC4, for instance. Is there a viable way to check the accuracy of cloud shadows?

Oh, and have you checked GG's real-time volumetric cloudscapes publication?
I'll repost the link here: https://www.guerrilla-games.com/read/the-real-time-volumetric-cloudscapes-of-horizon-zero-dawn

You will see that their clouds are unique in what they call the "Beer's-Powder" effect:

Here
https://image.slidesharecdn.com/thereal-timevolumetriccloudscapesofhorizon-zerodawn-artrcorrectedspelling-150824100858-lva1-app6891/95/the-realtime-volumetric-cloudscapes-of-horizon-zero-dawn-65-1024.jpg?cb=1440505619
https://image.slidesharecdn.com/thereal-timevolumetriccloudscapesofhorizon-zerodawn-artrcorrectedspelling-150824100858-lva1-app6891/95/the-realtime-volumetric-cloudscapes-of-horizon-zero-dawn-66-1024.jpg?cb=1440505619
https://image.slidesharecdn.com/thereal-timevolumetriccloudscapesofhorizon-zerodawn-artrcorrectedspelling-150824100858-lva1-app6891/95/the-realtime-volumetric-cloudscapes-of-horizon-zero-dawn-67-1024.jpg?cb=1440505619
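For reference, the slides above boil the effect down to classic Beer-Lambert extinction multiplied by a "powder" term that darkens the light-facing edges of a cloud at low densities. A minimal numeric sketch of that combined curve (my own toy code, not GG's implementation):

```cpp
#include <cmath>
#include <cstdio>

// "Beer's-Powder" curve as the slides present it: Beer-Lambert extinction
// times a powder term that suppresses energy at low optical depths, giving
// the dark, crinkly edges on the sun-facing side of a cloud.
float beerPowder(float opticalDepth) {
    float beer   = std::exp(-opticalDepth);               // classic falloff
    float powder = 1.0f - std::exp(-2.0f * opticalDepth); // edge darkening
    return 2.0f * beer * powder;                          // x2 rescales the peak upward
}

int main() {
    for (float d = 0.0f; d <= 2.01f; d += 0.25f)
        std::printf("optical depth %.2f -> energy %.3f\n", d, beerPowder(d));
}
```

Note the curve starts at zero, peaks, then decays: thin cloud edges facing the light stay dark instead of being the brightest part, which is exactly the look in the gifs below.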

I doubt that Theory's cloud engine could produce an effect like this; at least, I haven't seen it in your video.
Just recheck my video for it:
https://www.youtube.com/watch?v=bjDuvCifC2A

The effect in motion:
19gztn.gif

HorizonVolumes.gif


The reality:
the-realtime-volumetric-cloudscapes-of-horizon-zero-dawn-59-1024.jpg
 

nOoblet16

Member
So... are there any examples, outside of Crysis 3 and Killzone Shadowfall, of area lights being used in games?

Seems like it's one of those things that devs really wanted to do at the start but that went poof later on. I don't know if Star Citizen uses them, but I wouldn't be surprised, since that game can afford to use everything.
 

illamap

Member
So... are there any examples, outside of Crysis 3 and Killzone Shadowfall, of area lights being used in games?

Seems like it's one of those things that devs really wanted to do at the start but that went poof later on. I don't know if Star Citizen uses them, but I wouldn't be surprised, since that game can afford to use everything.

My understanding is that pretty much every PBR-enabled engine uses area lights for the diffuse component but not for specular, because there is no fast solution for the latter.
 

pottuvoi

Banned
My understanding is that pretty much every PBR-enabled engine uses area lights for the diffuse component but not for specular, because there is no fast solution for the latter.
If a game has area lights, it most likely has at least area specular, as UE4 does.
The Killzone engine has area diffuse and specular, and so does Frostbite.

There was a really cool polygonal area light paper a while back, which was used for the newest Unity tech demo.
https://eheitzresearch.wordpress.com/415-2/

The next problem is how to shadow these area lights or indirect lights.
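On the fast area-specular point: the usual cheap path for simple shapes (described in UE4's shading course notes) is a "representative point" approximation: pick the point on the light closest to the reflection ray and shade it as a point light. A toy sketch for a sphere light, with the vector helpers written out (not any engine's actual code):

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float length(Vec3 a) { return std::sqrt(dot(a, a)); }

// Representative-point approximation for a sphere light: take the point on
// the sphere closest to the reflection ray and shade it as a point light.
// Gives plausibly enlarged/stretched highlights at near point-light cost.
Vec3 spherePoint(Vec3 shadePos, Vec3 reflDir, Vec3 center, float radius) {
    Vec3 toLight = sub(center, shadePos);
    // vector from the sphere center to the closest point on the ray
    Vec3 centerToRay = sub(mul(reflDir, dot(toLight, reflDir)), toLight);
    float t = std::fmin(1.0f, radius / std::fmax(length(centerToRay), 1e-4f));
    return add(center, mul(centerToRay, t));  // clamped to the sphere surface
}

int main() {
    Vec3 p{0, 0, 0}, r{0, 0, 1}, c{1, 0, 5};
    Vec3 q = spherePoint(p, r, c, 0.5f);
    std::printf("representative point: (%.2f, %.2f, %.2f)\n", q.x, q.y, q.z);
}
```

LTC, from the paper linked above, generalizes this to polygonal lights with far better energy preservation, at the cost of a couple of lookup textures.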
 

KKRT00

Member
So... are there any examples, outside of Crysis 3 and Killzone Shadowfall, of area lights being used in games?

Seems like it's one of those things that devs really wanted to do at the start but that went poof later on. I don't know if Star Citizen uses them, but I wouldn't be surprised, since that game can afford to use everything.
Yep, SC is using Area Lights extensively. Most of the lights here are Area Lights - https://www.youtube.com/watch?v=pRMCI-d1rUc

Another game that uses Area Lights is Mirror's Edge Catalyst, and Mass Effect Andromeda will use them as well.

I think Detroit is also using Area Lights, but I need an HQ video of it to confirm.
 
Yeah, SC has rectangular (4-sided) area lights and bulb-sized area lights... so it can represent signs, posters, or large rounded bulbs... but it does not have tube lights or arbitrary shapes like KZ:SF.
Just some examples (you can see both types in these screens):
starcitizen_2016_06_1hqz9y.png

starcitizen_2016_06_12obun.png

starcitizen_2016_06_1zyb44.png


Another game which had some area lights (tube lights and, I think, rectangular lights) was Alien Isolation. A lot of those standing "repair person" lights (a long tube on a tripod of sorts) were area lights, as well as all the map terminals.
 

Javin98

Banned
So with Star Citizen set to be the most technically impressive game on any platform when it releases, one can't help but wonder what GI solution it uses. I've been searching on Google for answers but couldn't find anything concrete. So, guys, what GI solution is it using? I'm also going to go out on a limb and assume it's a real-time solution.
 
So with Star Citizen set to be the most technically impressive game on any platform when it releases, one can't help but wonder what GI solution it uses. I've been searching on Google for answers but couldn't find anything concrete. So, guys, what GI solution is it using? I'm also going to go out on a limb and assume it's a real-time solution.
ATM the game has no GI. From writing to them in the "Ask a Dev" section of their message boards, we know that they are considering perhaps a mix of different GI solutions depending upon the scale. Getting one solution that fits all the game's scales and dynamism (from micro interiors all the way to millions of km in space with multiple suns...) is not exactly straightforward, even for SVOGI. Here is what they have posted before:
Hi @napoleonic,
There are three (now two) things that I think prevent(ed) us just jumping into SVOGI with both feet:
1) Sun only. We had the impression, though upthread @Dictator has said we were mistaken, that SVOGI is only applicable to sun light. Obviously that would have been a much more limiting problem for us than for Kingdom Come. There's still also the question of how it handles local dynamic lights, and whether it adds cost to the shadow map calculation, for instance.
2) Local zone only. Assuming we can use indoor lights, that's probably the only place we'll use them. SVOGI takes many frames to build up its octree, meaning that you can't have arbitrarily rotating bits of scenery. Since the zone system is a CIG thing, someone would also have to add the code that lets it distinguish between stationary brushes in your ship and stationary brushes in someone else's.
3) Manpower. It's an unfortunate fact that whenever we turn on a feature, even a supposedly mature one, it's often broken. Sometimes we'll have trampled over some assumption it was making, sometimes a small mistake was balancing against another mistake that we've fixed, sometimes it'll have clearly always been working wrong. Either way, I'd honestly stake a month's salary on there being at least a month's worth of edge cases, bug hunts and piecewise rewrites if we tried to use it. Once you get to estimates of that size, the discussion stops being "shall we use this feature" and starts being "what GI options are on the table".

Hope that helps explain where we are :)
Hi Dictator,

Cubemaps are currently hand-placed, generally one per room inside ships and space stations. These probes can be dynamically tinted to handle minor lighting changes (e.g. lights dimming or turning off), but they won't capture major changes in lighting unless the artist bakes a 2nd version to swap to in these cases.

Currently for the exteriors we place one cubemap per star system, and then multiple near each point of interest. However, this isn't sustainable for the PU longer term, and so next year we'll be developing tech to re-bake cubemaps live. Initially this will most likely happen each time you go in and out of quantum drive, because the cost of rendering a cubemap live at full PBR quality can be quite prohibitive (generally only racing games do this, and it tends to be a very optimised code path). We'll also need tech to allow the live exterior cubemap to pass through windows and affect the interiors (e.g. so the glow of a nearby planet will light up your cockpit).

Cheers,

Ali Brown
Director of Graphics Engineering

Hi @MingX,
I think I talked about this before, but it might have been in the now-dead thread. The short answer is it's probably a bad fit for us for two reasons:
1) "Static" geometry. It builds the voxel grid over several frames by only voxelizing static geometry. We've already had several problems with things that the engine thought would never move, but that we've now built ships out of. So out-of-the-box, it would probably take a little work and then only apply to stations, then with a little more work might apply to the interior of the ship you're in, but would be a monstrous job to make work for the full outdoor environment.
2) Technical debt. I know for a fact the SVOGI implementation has some subtle bugs in it, because it has copy-pastes of code from the tiled lighting system where I already had to fix them. We just can't know how much more of it doesn't really work, and trying to use it could suddenly swallow a man-month or two that we don't have right now. It's tempting to think of engine features as "flipping a switch", but it's more common that only specific subsets of engine features really play ball, even in the unmodified version of the engine.
I absolutely agree, though, that some kind of GI solution would be a major lift in quality, and besides the glass shader it's the thing I most frequently bug Ali about. I just don't think it's necessarily SVOGI that will save us.
Since the voxelisation is done on static geometry, and one would imagine the main things providing interesting light bounce would be ships, there would probably need to be some concept that allows a whole ship to think of itself as "static", and some additional system to let those spaces relate to one another. An alternative would be to only handle the player's current zone, or the most recent zone that has anything worth voxelising in it, though no doubt you'd have weird edge cases where, e.g., a player disembarks from a parked ship in a hangar.
etc...

So basically, they have to consider quite a lot more than most devs when it comes to getting GI to work for everything the game provides.

LPV?
It's built into the engine, wouldn't be surprised if that's it.

LPV is deprecated in anything past 3.6.3 in CE. You could activate it... but it would disable some secondary features like tiled lighting... but since the guys @ CIG rewrote the entire tiled lighting system in CE... perhaps it could work again and work well.
I think Detroit is also using Area Lights, but I need an HQ video of it to confirm.

It is a bit hard to see in this screen:
vlcsnap-2016-06-18-12p9yuy.png

But the floor lighting there at least has a proper diffuse shape... it is hard to see the specular, though, with the angles provided in the trailer.
 

Javin98

Banned
ATM the game has no GI. From writing to them in the "Ask a Dev" section of their message boards, we know that they are considering perhaps a mix of different GI solutions depending upon the scale. Getting one solution that fits all the game's scales and dynamism (from micro interiors all the way to millions of km in space with multiple suns...) is not exactly straightforward, even for SVOGI. Here is what they have posted before:





etc...

So basically, they have to consider quite a lot more than most devs when it comes to getting GI to work for everything the game provides.



LPV is deprecated in anything past 3.6.3 in CE. You could activate it... but it would disable some secondary features like tiled lighting... but since the guys @ CIG rewrote the entire tiled lighting system in CE... perhaps it could work again and work well.


It is a bit hard to see in this screen:
vlcsnap-2016-06-18-12p9yuy.png

But the floor lighting there at least has a proper diffuse shape... it is hard to see the specular, though, with the angles provided in the trailer.
Thanks for the megapost, man. Pretty informative. So if I'm understanding this right, manpower, budget and time are the reasons a GI solution has not been implemented, rather than hardware horsepower. Perhaps it would be more convenient for them to mix baked and real-time GI like Uncharted 4?
 

_machine

Member
Thanks for the megapost, man. Pretty informative. So if I'm understanding this right, manpower, budget and time are the reasons a GI solution has not been implemented, rather than hardware horsepower. Perhaps it would be more convenient for them to mix baked and real-time GI like Uncharted 4?
I would also say that SVOGI in CE is still far from production-ready in general; it can work in some use cases, but generally I would not consider it ready for implementation in most games. Manpower, budget and time are pretty much the reasons for any decision, though :)

As for convenience, yes, a mixture of baked and real-time GI is probably the most convenient and technically least risky solution. The probe-based, half-baked solution used heavily by Ubisoft and others in particular is probably the most researched one; it is very broadly applicable and comes with the least risk. That doesn't mean it's the likeliest, as they definitely like to push the envelope, and with a custom CE nothing is simple, but I could definitely see it being one of the best solutions, even if it's not the most visually or technically impressive anymore.

That's some good stuff though, Dictator. Would you happen to have a link to some of those discussions? It's kind of refreshing to have developers talk more in-depth about the risks and realities of graphics technologies outside of engine manufacturers' own forums.
 
Thanks for the megapost, man. Pretty informative. So if I'm understanding this right, manpower, budget and time are the reasons a GI solution has not been implemented, rather than hardware horsepower. Perhaps it would be more convenient for them to mix baked and real-time GI like Uncharted 4?

I would also say that SVOGI in CE is still far from production-ready in general; it can work in some use cases, but generally I would not consider it ready for implementation in most games. Manpower, budget and time are pretty much the reasons for any decision, though :)

As for convenience, yes, a mixture of baked and real-time GI is probably the most convenient and technically least risky solution. The probe-based, half-baked solution used heavily by Ubisoft and others in particular is probably the most researched one; it is very broadly applicable and comes with the least risk. That doesn't mean it's the likeliest, as they definitely like to push the envelope, and with a custom CE nothing is simple, but I could definitely see it being one of the best solutions, even if it's not the most visually or technically impressive anymore.
I would usually agree that a mixed solution would be good for a game with a single-player-focused story and set-piece areas and whatnot... but considering what we know about the game (i.e. planetary orbits being modeled, and being able to shoot out or turn off all the lights on a ship or station at the flick of a switch, with this being a big part of the gameplay), it is hard to imagine how baking would work there. Especially for diffuse GI.

Edit: Just saw this!
That's some good stuff though, Dictator. Would you happen to have a link to some of those discussions? It's kind of refreshing to have developers talk more in-depth about the risks and realities of graphics technologies outside of engine manufacturers' own forums.
Here is the old thread: https://forums.robertsspaceindustries.com/discussion/95610/persistent-universe-programming/p1

Here is the new thread: https://forums.robertsspaceindustries.com/discussion/324310/programming-engine-api-hardware-etc/p1

They also release monthly reports, and the graphics/engine teams usually write up quite a lot on what they are working on. For example, in the last monthly report:
The Graphics team in the UK:
This month the graphics team have finished off a number of features and fixes which we hope will be in the backers’ hands very soon.

The first of these is a major upgrade to the particle lighting system. Previously the particles were lit by a different system to the rest of the game, and while this was cheaper, it had many visual problems, such as a lack of shadows and incorrect brightness or colour for some light types. We've now changed it so that it uses the new tiled lighting system and so matches the rest of the lighting much better, and we've also added fully volumetric shadowing. This means we get accurate shafts of light passing through particles, which looks very impressive in our tests, and we can't wait to see what the artists can do with this.

The tiled lighting system in general has also had a major upgrade, partly to allow the particle lighting, but primarily to achieve greater performance on modern GPUs. This has been deactivated in recent public builds, but we expect it to be enabled in the next major release. While looking at the lighting we've also started to improve the quality of rectangular area lights. Real-time renderers need to make many compromises to achieve any form of area lighting in real time, but we're hoping to improve the results because sci-fi environments so often use rectangular lights.

Our work on the various high-dynamic-range features continues with the completion of the light-linking feature to allow realistic levels of brightness and glow from our lights. Next we'll be focussing on the exposure control system to make it better approximate the complexities of the human eye and brain's amazing ability to adapt to environments with high contrast or very low light levels.

Some other more minor changes include: uncapping the GPU texture budget so that GPUs with more memory can benefit from higher texture resolutions and less likelihood of seeing any ‘popping’ from the texture streaming, completion of our internal render-profiler, continuation of our major UI refactor mentioned last month, and various bug fixes with culling.
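As an aside, since tiled lighting comes up twice in that report: the core idea is to split the screen into small tiles and build a short per-tile list of the lights that can affect it, so the shading pass never loops over every light in the scene. A minimal CPU-side sketch (tile size and the screen-space light representation are my own toy choices, not CIG's code):

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// A light already projected to screen space: pixel center + pixel radius.
struct ScreenLight { float x, y, radius; };

int main() {
    const int W = 1280, H = 720, TILE = 16;
    const int tilesX = W / TILE, tilesY = H / TILE;
    std::vector<ScreenLight> lights = {
        {200.0f, 100.0f, 48.0f}, {640.0f, 360.0f, 120.0f}, {1200.0f, 700.0f, 32.0f}};

    // Per-tile list of indices of the lights overlapping that tile.
    std::vector<std::vector<int>> tileLights(tilesX * tilesY);
    for (int i = 0; i < (int)lights.size(); ++i) {
        const ScreenLight& l = lights[i];
        int x0 = std::max(0, (int)((l.x - l.radius) / TILE));
        int x1 = std::min(tilesX - 1, (int)((l.x + l.radius) / TILE));
        int y0 = std::max(0, (int)((l.y - l.radius) / TILE));
        int y1 = std::min(tilesY - 1, (int)((l.y + l.radius) / TILE));
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                tileLights[y * tilesX + x].push_back(i);
    }

    // A shading pass would now loop over tileLights[tile] per pixel
    // instead of over all lights; here we just report the coverage.
    int touched = 0;
    for (const auto& t : tileLights) touched += t.empty() ? 0 : 1;
    std::printf("%d of %d tiles have at least one light\n", touched, tilesX * tilesY);
}
```

GPU versions do the same overlap test in a compute shader, typically against each tile's min/max depth as well, but the data layout is the same idea.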

Or their DE engineering team:
Renderer refactoring: On the renderer side, we did some housecleaning and optimizations. Based on the refactorings from last month to increase the object count, we started to simplify the data upload to the GPU. Previously the CryEngine system was based on reflection, so that the code could find out what data was needed on the GPU and upload only that data. While this sounds like a solid idea, finding out what was needed was more expensive than a straight data upload. So we began to remove those reflection code paths in the time-critical areas. This also improved the readability of the code, as we can now see what the code does rather than the logic to find out what to do. Related to this change, we also ensured the same data is only uploaded to the GPU once. Previously it uploaded a data buffer for each object, and then uploaded the same data again if the code decided to use instancing. This is now fixed.

Data Patcher: On the Data Patcher (the tool which will be responsible for creating the data for the engine to use when we switch to incremental patching), we made a little progress by better defining how to store the data. Not much reportable progress here, as much of the work is about infrastructure discussions.

Optimizations: To further optimize the streaming code, we added timeslicing support to it again. This way, the cost of updating the distance to objects not visible to the player is paid less frequently.

Tag System comes into the ZoneSystem: Initial support was written for storing tags inside the ZoneSystem. A tag is a small string which we use to give context information to an entity object. For example, if we want to know if something is a chair, we can tag it as a chair. This way the AI system can query all objects and find out if they are chairs. We already have such systems inside the engine, but they lack spatial information, so they can only answer "is this object a chair?" but not "find me all chairs around me". This led to some inefficient solutions, as the code had to brute-force through many objects and check their type. To overcome this limitation we are moving the support for tags into the ZoneSystem, our spatial position system. This required some changes and new systems:

Storing and comparing a string is not very efficient on a computer (but very convenient for a human, thus we need it), so we implemented a trie to give us a very efficient way to map unique strings to a fixed integer range. (We wanted an integer range instead of a hash, as a range allows us better broad-phase checks and more efficient data storage.) A sketch of the idea follows after this list.
Since not all data types which we store inside the ZoneSystem require tags, we made the whole zone system more flexible to allow the client code to specify the properties to store per object type. This also reduced our memory usage in Crusader by 50 MB.
We implemented a specialized allocator for the tags so they can be efficiently culled by the low-level zone system, which is implemented in SIMD, so the tags must follow a certain size and alignment.
And as a last thing, we implemented code to allow filtering tags by a DNF (disjunctive normal form), which is a fixed format and can be used for efficient checking of arbitrary boolean expressions.
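Here is the sketch referenced in the first bullet: a toy trie that interns tag strings into a dense 0..N-1 integer range (my own illustration of the idea as described, not CIG's code):

```cpp
#include <cstdio>
#include <map>
#include <string>
#include <vector>

// Interns unique tag strings into a dense integer range 0..N-1 (rather
// than a sparse hash), so tags can be stored compactly per object and
// compared with cheap integer tests.
struct TagTrie {
    struct Node { std::map<char, int> children; int id = -1; };
    std::vector<Node> nodes{1};   // node 0 is the root
    int uniqueTags = 0;

    int intern(const std::string& tag) {
        int cur = 0;
        for (char c : tag) {
            auto it = nodes[cur].children.find(c);
            if (it == nodes[cur].children.end()) {
                int next = (int)nodes.size();
                nodes[cur].children[c] = next;
                nodes.emplace_back();
                cur = next;
            } else {
                cur = it->second;
            }
        }
        if (nodes[cur].id < 0) nodes[cur].id = uniqueTags++;  // first time seen
        return nodes[cur].id;
    }
};

int main() {
    TagTrie trie;
    std::printf("chair -> %d\n", trie.intern("chair"));  // 0
    std::printf("table -> %d\n", trie.intern("table"));  // 1
    std::printf("chair -> %d\n", trie.intern("chair"));  // 0 again: stable id
}
```

Because the ids form a contiguous range, per-object tag storage can presumably be a compact bitset or small-integer array, which is what makes broad-phase checks like the SIMD culling mentioned above practical.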

Runtime Skel-Extensions: The character customization system in Star Citizen internally uses an engine feature called "skinned attachments". With skinned attachments it is possible to replace every deformable item on a character (i.e. cloth, shoes, space suits, helmets, etc.) and even entire body parts such as faces, hands, or upper and lower body parts. Each skin attachment has its own set of joints that are automatically animated and deformed by the base skeleton. It is also possible to use skinned attachments that have more joints, and different joints, than the base skeleton, and it is possible to merge all types of skeletons together, even skeletons from totally different characters. That means you can have a minimalistic base skeleton which can be extended by an arbitrarily complex skinning skeleton. In the original CryEngine this was an offline- or loading-time feature, because the entire process was pretty CPU-intensive. For Star Citizen we turned this into a runtime feature that allows us to extend a base skeleton at any time while the game is running, no matter if the character is alive and playing animations or in a driven- or floppy-ragdoll state. This means that you don't have to know in advance the type of joints you might need in the base skeleton, nor do you have to carry extra joints around just in case you might need them. Instead, the system allows you to add new joints at will, whenever they are required.

Full-Body Experience: We also invested a lot of time in improving the full-body experience in first person. Our main goal was to make the head-bobbing customizable. In Star Citizen the head-bobbing is a natural side effect of the mocap data, because third- and first-person use the same animation. To make the controls as smooth and precise as possible, we implemented a new IK solution to eliminate all unwanted effects of the third-person body animations on the first-person view and weapon handling.

Yeah, it is darn refreshing to hear so much from a dev about their tech. I wish all devs had development as open as SC's... and as much fan input.
 

nOoblet16

Member
I see.
IMO I wouldn't hold out for any GI solution at all unless they go for a fully real-time approach like Quantum Break (granted, this is where the lines get blurred between what we consider "real time" and what we consider a mixture of that and offline).
 

Javin98

Banned
I would usually agree that a mixed solution would be good for a game with a single-player-focused story and set-piece areas and whatnot... but considering what we know about the game (i.e. planetary orbits being modeled, and being able to shoot out or turn off all the lights on a ship or station at the flick of a switch, with this being a big part of the gameplay), it is hard to imagine how baking would work there. Especially for diffuse GI.
Huh, in that case, yeah, a baked solution would be pretty hard to use. Well, if you don't mind, keep me updated on this. I'm very interested in the use of GI in games. :)

I would also say that SVOGI in CE is still far from production-ready in general; it can work in some use cases, but generally I would not consider it ready for implementation in most games. Manpower, budget and time are pretty much the reasons for any decision, though :)

As for convenience, yes, a mixture of baked and real-time GI is probably the most convenient and technically least risky solution. The probe-based, half-baked solution used heavily by Ubisoft and others in particular is probably the most researched one; it is very broadly applicable and comes with the least risk. That doesn't mean it's the likeliest, as they definitely like to push the envelope, and with a custom CE nothing is simple, but I could definitely see it being one of the best solutions, even if it's not the most visually or technically impressive anymore.
That's what I thought, but it seems like it won't work well.
 

DGaio

Member
Javin98, regarding Parallax Mapping, you can actually see how it works in one of the dev talks that Epic has every month, where they describe the technique, how it's used, and all that. This video illustrates how it works in Unreal Engine 4, but it's the same technique used in the majority of other game engines when implemented: https://www.youtube.com/watch?v=4gBAOB7b5Mg

(starts around 11:50)
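For the curious, the basic parallax trick those techniques build on fits in a few lines: offset the sampled UV along the tangent-space view direction in proportion to the height at that point. (POM proper ray-marches the height field in small UV steps instead of using a single sample.) Toy code, not UE4's implementation:

```cpp
#include <cmath>
#include <cstdio>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Stand-in for a height-map sample: a smooth bump at the texture center.
float heightAt(Vec2 uv) {
    float dx = uv.x - 0.5f, dy = uv.y - 0.5f;
    return std::exp(-25.0f * (dx * dx + dy * dy));
}

// Shift the UV along the tangent-space view direction, scaled by height.
// Dividing by z exaggerates the shift at grazing angles, which is what
// makes the relief appear to move with perspective.
Vec2 parallaxUV(Vec2 uv, Vec3 viewTS, float scale) {
    float h = heightAt(uv) * scale;
    return { uv.x + viewTS.x / viewTS.z * h,
             uv.y + viewTS.y / viewTS.z * h };
}

int main() {
    Vec2 uv{0.5f, 0.5f};
    Vec3 headOn{0.0f, 0.0f, 1.0f}, grazing{0.7f, 0.0f, 0.3f};
    Vec2 a = parallaxUV(uv, headOn, 0.05f);
    Vec2 b = parallaxUV(uv, grazing, 0.05f);
    std::printf("head-on: (%.3f, %.3f)  grazing: (%.3f, %.3f)\n", a.x, a.y, b.x, b.y);
}
```

Head-on views produce no shift, while grazing views produce a large one, which is why the effect is easiest to spot when you tilt the camera, as comes up later in the thread.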
 

_machine

Member
I would usually agree that a mixed solution would be good for a game with a single-player-focused story and set-piece areas and whatnot... but considering what we know about the game (i.e. planetary orbits being modeled, and being able to shoot out or turn off all the lights on a ship or station at the flick of a switch, with this being a big part of the gameplay), it is hard to imagine how baking would work there. Especially for diffuse GI.
Yeah, all of the dynamic aspects will be a problem no matter what their solution is. That said, because that kind of implementation is not so "in your face" (as seen in The Division), nor does it necessarily include even close to the full stack (and usually handles very limited sources), it's still very much a possibility, as they already precompute other things to a degree (like the env/cube maps). Still, given what they are doing in other areas (like Area Lights, for example), any GI solution is going to be a hurdle of serious trade-offs and risks.

Here is the old thread: https://forums.robertsspaceindustries.com/discussion/95610/persistent-universe-programming/p1

Here is the new thread: https://forums.robertsspaceindustries.com/discussion/324310/programming-engine-api-hardware-etc/p1

They also release monthly reports, and the graphics/engine teams usually write up quite a lot on what they are working on. For example, in the last monthly report:
The Graphics team in the UK:


Or their DE engineering team:


Yeah, it is darn refreshing to hear so much from a dev about their tech. I wish all devs had development as open as SC's... and as much fan input.
Thanks, pretty cool stuff to read (if only I had more time...), and I would love to visit Foundry 42 in Frankfurt before I leave Germany for my next job. Especially as it shouldn't be too hard to arrange, and I already had the pleasure of talking with some of the talented staff at last year's Gamescom/SC event.
 

Javin98

Banned
Javin98, regarding Parallax Mapping, you can actually see how it works in one of the dev talks that Epic has every month, where they describe the technique, how it's used, and all that. This video illustrates how it works in Unreal Engine 4, but it's the same technique used in the majority of other game engines when implemented: https://www.youtube.com/watch?v=4gBAOB7b5Mg

(starts around 11:50)
Hey, thanks! Sounds interesting. But to be honest, I don't know how parallax mapping got into this discussion. We were talking about global illumination after all. :p
 

DGaio

Member
Hey, thanks! Sounds interesting. But to be honest, I don't know how parallax mapping got into this discussion. We were talking about global illumination after all. :p

Oh sorry! I was actually referring to one of your previous posts about spotting Parallax Mapping in Uncharted 4. Even without self-shadowing it can be spotted when you tilt or move the camera, since the parallax elements change depending on the perspective.
 

Javin98

Banned
Oh sorry! I was actually referring to one of your previous posts about spotting Parallax Mapping in Uncharted 4. Even without self-shadowing it can be spotted when you tilt or move the camera, since the parallax elements change depending on the perspective.
Oh, haha, I thought so. Thanks, I'll watch the video if I have time. Like I said, I noticed bumpy surfaces almost everywhere, but I still have a tough time telling geometry and POM apart. Surprisingly, a lot of the bumpy surfaces in Uncharted 4 are actually geometry. POM is also used more often than I originally thought, however. Footprints in the sand seem to be POM as well. But I really haven't found any instances of self-shadowing POM.
 

KKRT00

Member
Yeah, the thing with Star Citizen, beyond major changes to the rendering pipeline and really one of the most complex rendering setups in games to date (a lot of transparencies and 3D holographic scenes), is that they completely scrapped how normal games/engines think about geometry, due to ships being practically full playable levels that can be detached and sometimes even deformed.
The game's lighting is also fully dynamic, since spaceships have fully dynamic and destructible lighting and planets have full rotations, so any form of baked component is really not viable for them.

---
It is a bit hard to see in this screen:
https://abload.de/img/vlcsnap-2016-06-18-12p9yuy.png
But the floor lighting there at least has a proper diffuse shape... it is hard to see the specular, though, with the angles provided in the trailer.

That's exactly what I had in mind, but it could be baked. We need more angles in an HQ video to really judge some of the lights.
 

Sakujou

Banned
Is there a wiki where every graphics effect is listed and explained so that a beginner can understand it too?
Things like Gouraud shading, z-buffering, blast processing, all the way up to chromatic aberration and whatever is important these days.

Would love to see comparison shots or videos to make sure it is easily understandable.
 
Is there a wiki where every graphics effect is listed and explained so that a beginner can understand it too?
Things like Gouraud shading, z-buffering, blast processing, all the way up to chromatic aberration and whatever is important these days.

Would love to see comparison shots or videos to make sure it is easily understandable.

Would love this too
 

Javin98

Banned
Is there a wiki where every graphics effect is listed and explained so that a beginner can understand it too?
Things like Gouraud shading, z-buffering, blast processing, all the way up to chromatic aberration and whatever is important these days.

Would love to see comparison shots or videos to make sure it is easily understandable.
I wish there was something like that too. I know a good deal about game graphics technology, but the more complex terms and explanations of how effects are implemented can still make my head spin. I'd say the best way is to follow any threads on GAF related to visuals and technical terms in games. DF threads are best for this, but they often devolve into fanboy duels. NXGamer can also be a good source, but he occasionally makes wild assumptions. Beyond3D is a great place, since the main discussions are pretty much this thread in forum form. It takes a long time to catch up on the terms and techniques, though, and even longer to train your eyes to spot them.
 

Javin98

Banned
I've been reading up on Hairworks in some games and damn, that stuff looks pretty impressive. Still not sure about the implementation on Geralt's hair in The Witcher 3, but at least the fur on animals and monsters looks much better. Do you guys think a method similar to Hairworks is feasible on next-gen consoles?

Also, I've been wondering this for a while, but do we have a detailed analysis of how fur is done in Bloodborne? I recall it being the typical fur shaders, but it looks far better than most I've seen in other games, and it even sways somewhat realistically. Here's a pic of the mighty Cleric Beast:
2835195-6795550820-28351.jpg


Here's a video of the boss fight showing the fur swaying according to movement:
https://youtu.be/PwHgNxF7v9c
 
I've been reading up on Hairworks in some games and damn, that stuff looks pretty impressive. Still not sure about the implementation on Geralt's hair in The Witcher 3, but at least the fur on animals and monsters looks much better. Do you guys think a method similar to Hairworks is feasible on next-gen consoles?

Also, I've been wondering this for a while, but do we have a detailed analysis of how fur is done in Bloodborne? I recall it being the typical fur shaders, but it looks far better than most I've seen in other games, and it even sways somewhat realistically. Here's a pic of the mighty Cleric Beast:
2835195-6795550820-28351.jpg


Here's a video of the boss fight showing the fur swaying according to movement:
https://youtu.be/PwHgNxF7v9c

Hairworks uses isoline tessellation. I wouldn't expect anyone in the industry to adopt it outside of Nvidia because it's an inefficient solution, but it makes AMD look bad in benchmarks, so its purpose is served.
 

Javin98

Banned
Hairworks uses isoline tessellation. I wouldn't expect anyone in the industry to adopt it outside of Nvidia because it's an inefficient solution, but it makes AMD look bad in benchmarks, so its purpose is served.
Well, when I said similar methods, I meant something that looks similar enough but possibly does things differently, like, say, TressFX. Basically, I'm asking: do you guys think that next-gen games will still use polygon strips for hair, or more advanced methods like individual strands of hair?

Also, inefficient as Hairworks may be, at least the fur on animals looks excellent. I recall TressFX being applied to animal fur in RotTR, but I don't think it looked good at all.
 
Well, when I said similar methods, I meant something that looks similar enough but possibly does things differently, like, say, TressFX. Basically, I'm asking: do you guys think that next-gen games will still use polygon strips for hair, or more advanced methods like individual strands of hair?

Also, inefficient as Hairworks may be, at least the fur on animals looks excellent. I recall TressFX being applied to animal fur in RotTR, but I don't think it looked good at all.

RotTR only used PureHair on Lara. It certainly looks much better than Hairworks and has something like a 3% perf hit. Hairworks absolutely craters perf while looking worse. I expect most games will continue to use strips.
 

Javin98

Banned
SquareEnix at Siggraph 2016 - Rendering techniques of Final Fantasy XV

https://www.youtube.com/watch?v=ZuF_-aAZmNk&t=40m25s
Sweet! Will watch this later.

RotTR only used PureHair on Lara. It certainly looks much better than Hairworks and has something like a 3% perf hit. Hairworks absolutely craters perf while looking worse. I expect most games will continue to use strips.
Hmm, that would be disappointing if devs continue using strips. Looking at the past three generations, the polygon count on main characters has typically tripled from the previous gen. If the trend continues, I think devs should definitely spend a significant share of those polygons on characters' hair next gen. Polygon edges on bodies, and especially faces, are already pretty hard to notice on characters this gen, so it would be a waste to continue spending it all there.
 

pottuvoi

Banned
Also, I've been wondering this for a while, but do we have a detailed analysis of how fur is done in Bloodborne? I recall it being the typical fur shaders, but it looks far better than most I've seen in other games, and it even sways somewhat realistically. Here's a pic of the mighty Cleric Beast:
2835195-6795550820-28351.jpg


Here's a video of the boss fight showing the fur swaying according to movement:
https://youtu.be/PwHgNxF7v9c
Looks like the basic hair sheet method with quite a low polycount. (Described earlier in this thread.)
Not sure if they use a hair lighting method for it or if they're handling it just like any other opaque object.
 

Javin98

Banned
Looks like the basic hair sheet method with quite a low polycount. (Described earlier in this thread.)
Not sure if they use a hair lighting method for it or if they're handling it just like any other opaque object.
Hmm, interesting. Do you have a link that describes this method? I know it's earlier in this thread, but it would help me immensely since I'm on mobile data.
 

Noobcraft

Member
Havok cloth on the polygon strips to simulate the movement of the fur? That's actually a pretty good idea, if true.
That seems like it would be the standard (cheapest) method of doing physics-based hair. I never thought the hair in Bloodborne looked good in motion. I actually think Shadow of the Colossus on PS2 did hair much better.
 