
Siggraph 2015 game graphics papers

I hadn't seen a compilation thread set up for this yet, but I thought it would be a good idea since people really enjoy looking through this stuff.

As in years past, Stephen Hill collates compilation pages for the various courses, and this year is no exception. A good number have been posted so far!

Physically Based Shading Course

Advances in Real-Time Rendering

Open Problems in Real-Time Rendering

Not all of the papers have been released through official channels yet (the websites still have to update), so if people share the ones they find, I'll update the OP with the interesting ones.

So far I have read through a number of RAD's papers:
Advanced Lighting RnD at RAD
Rendering the Alternate History of The Order: 1886

Media Molecule's presentation about alternative renderers

GPU Driven Rendering Pipeline used in AC Unity

4Gamer Article on Witcher 3 Siggraph Presentation.

Paper on recognizing style similarity in 3D objects.
Article: http://phys.org/news/2015-08-scientists-graphics-software.html
Paper: http://people.cs.umass.edu/~zlun/papers/StyleSimilarity/StyleSimilarity.pdf

I like to think about how this could be used to make more cohesive procedurally generated levels.

Better skin rendering with microstructure simulation:
Article: http://gizmodo.com/a-graphics-breakthrough-makes-perfect-cgi-skin-1723675920
 

ShutterMunster

Junior Member
Thanks for this. I've been reading through the "Advanced lighting RnD @ RAD" while on set this morning. It was a bit much for 7:30 in the AM, but I'm learning!
 

gofreak

GAF's Bob Woodward
This presentation is crazy. What a pity that the more ambitious versions didn't pan out; still an awesome use of point cloud tech.
I wonder what their frametime is right now, how much overhead they have, and what the limits are for splats on PS4 with all the lighting applied.

Slide 125 mentions 'tens of millions' of point splats per frame...which is a bit vague I guess.

Beyond the geometry side of things, it's a regular enough deferred lighting pipe, so the number of splats and the lighting cost would be fairly independent.

There is debug output on some of the screens in the slide. One has mention of 28.2m 'main splats' and the frametime for all the components listed adds up to just shy of 19ms. But I don't know if that list includes everything involved in the rendering, there isn't a 'total frametime' displayed.

In the heel of the hunt, I imagine they have a splat budget for a target framerate, and those point splats are distributed as judiciously as possible across the objects in the frame. The slides mention various LODs for objects and the ability to decimate and expand point clouds dynamically in between those LODs. Scalability in handling varying scene complexity would come from the distribution of a fixed number of splats.

(Thinking about the hinted VR mode, it could be that if your target is 60fps, it simply means you have a reduced avg splat count per object and rely heavier on temporal filtering and coherence to cover up artefacts that may result from that depending on the scene complexity.)
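To make that budget idea concrete, here's a toy Python sketch of the kind of scheme I'm imagining: a fixed per-frame splat budget split across visible objects in proportion to their approximate screen coverage, clamped to each object's LOD range. The function, field names, and proportional-coverage heuristic are my own assumptions for illustration, not RAD's actual implementation:

```python
# Toy sketch: distribute a fixed per-frame splat budget across visible
# objects in proportion to their approximate screen coverage (in pixels),
# clamped to each object's coarsest/densest LOD point counts.
# Purely illustrative -- names and heuristic are assumptions, not RAD's scheme.

def distribute_splats(objects, frame_budget):
    """objects: list of dicts with 'coverage_px', 'min_splats', 'max_splats'.
    Returns per-object splat counts summing to at most ~frame_budget."""
    total_coverage = sum(o["coverage_px"] for o in objects)
    allocation = []
    for o in objects:
        share = frame_budget * o["coverage_px"] // total_coverage
        # Clamp to the object's LOD range: never decimate below the
        # coarsest point cloud, never expand past the densest one.
        allocation.append(max(o["min_splats"], min(share, o["max_splats"])))
    return allocation

scene = [
    {"coverage_px": 500_000, "min_splats": 10_000, "max_splats": 20_000_000},
    {"coverage_px": 300_000, "min_splats": 10_000, "max_splats": 5_000_000},
    {"coverage_px": 200_000, "min_splats": 10_000, "max_splats": 10_000_000},
]
# A "tens of millions" budget like the slides mention:
print(distribute_splats(scene, frame_budget=28_200_000))
# [14100000, 5000000, 5640000] -- the second object hit its max-LOD clamp
```

A scheme like this degrades gracefully: more objects on screen just means each one gets a smaller share, which matches the decimate/expand-between-LODs behaviour the slides describe.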
 
I have links to a lot of the technical papers I attended. I'll post some when I get time (and after deciphering my chicken-scratch notes).

Here are some production talks from Ready at Dawn that were pretty fun to watch.

http://readyatdawn.com/presentations/

Definitely post 'em when you can!

I enjoyed how honest the RAD papers were: they were extremely happy with their AA and material pipeline because it was consistent and physically correct, but found their rather arbitrary lighting and post-processing to be problematic, too time-consuming, and not accurate.
 
Definitely post 'em when you can!

I enjoyed how honest the RAD papers were: they were extremely happy with their AA and material pipeline because it was consistent and physically correct, but found their rather arbitrary lighting and post-processing to be problematic, too time-consuming, and not accurate.

Is there actually a thread where people can talk about rendering / game engine tech in general? I know of the indie thread but not sure if that is the right place to post presentations and ask tech questions.
 

KKRT00

Member
Is there actually a thread where people can talk about rendering / game engine tech in general? I know of the indie thread but not sure if that is the right place to post presentations and ask tech questions.

We talked here a little :)
http://www.neogaf.com/forum/showthread.php?p=171085271#post171085271

---

Slide 125 mentions 'tens of millions' of point splats per frame...which is a bit vague I guess.

Beyond the geometry side of things, it's a regular enough deferred lighting pipe, so the number of splats and the lighting cost would be fairly independent.

There is debug output on some of the screens in the slide. One has mention of 28.2m 'main splats' and the frametime for all the components listed adds up to just shy of 19ms. But I don't know if that list includes everything involved in the rendering, there isn't a 'total frametime' displayed.

In the heel of the hunt, I imagine they have a splat budget for a target framerate, and those point splats are distributed as judiciously as possible across the objects in the frame. The slides mention various LODs for objects and the ability to decimate and expand point clouds dynamically in between those LODs. Scalability in handling varying scene complexity would come from the distribution of a fixed number of splats.

(Thinking about the hinted VR mode, it could be that if your target is 60fps, it simply means you have a reduced avg splat count per object and rely heavier on temporal filtering and coherence to cover up artefacts that may result from that depending on the scene complexity.)

I seem to have missed the VR talk, but yeah, it definitely is scalable down with their tech.
I know there are some metrics, but they were just metrics, without any full context of how much frametime they take in general or what the current limits of the PS4 are with this tech.
 
Is there actually a thread where people can talk about rendering / game engine tech in general? I know of the indie thread but not sure if that is the right place to post presentations and ask tech questions.

This has literally been an idea I have had for ages that unfortunately never comes to fruition, mainly because it requires a lot of image work and text to set up the main OP.

The problem is that people often use tons of words that are either imprecise or misapplied. If you're going to have a discussion about game graphics, we need at least a common language and set of references! Especially so people who are more casually interested can enjoy the conversation and not get browbeaten. Hence the longer, more detailed OP.

Added the GPU-driven rendering pipeline talk from Ubisoft to the OP. Cool info there about how it helped their game's performance on the consoles, and it partially explains why Unity is so GPU-heavy!
 

spekkeh

Banned
I thought Siggraph was a Silicon Graphics thing, and they no longer exist, right?
As SquirrelWide said, Siggraph is a conference organized by the ACM (Association for Computing Machinery), one of the big American computer science organizations (the USA confusingly has two, the ACM and the IEEE Computer Society). It's mostly intended for the advancement of science, but Siggraph has become so popular that it has branched out (or deteriorated, depending on what type of scientist you are) into industry, and became something of an all-things-CGI conference/trade show. Another well-known ACM conference you may have heard of is CHI, which recently spun off a game/play-focused conference, CHI Play.
Many of the world's national computer science organizations are members of the International Federation for Information Processing (IFIP), which was established under the auspices of UNESCO.
 

Durante

Member
Not a scientific presentation, but still highly relevant and interesting: a Vulkan / OpenGL ES / OpenGL BOF video is up.

I care mostly about the Vulkan part, which starts here: https://youtu.be/faYDPjI2zhU?t=1h11m20s
In the big picture, they confirmed a 2015 release, and that conformance testing is in great shape, which should be music to the ears of everyone who has ever fought different OpenGL drivers' "interpretations" of the API.

I personally also found some of the specifics about windowing system integration interesting, but that was probably not too exciting for most. The "why Vulkan is great" part of the presentation should be easily digestible, though.

Also, there was confirmation that Vulkan development will be fully integrated and supported in the Android NDK, which was to be expected after the initial Android announcement but is still good to hear.
 
Me reading MM's presentation

[gif: tumblr_inline_mocmirmhCa1rlpk9c.gif]
 

JNT

Member
Also, there was confirmation that Vulkan development will be fully integrated and supported in the Android NDK, which was to be expected after the initial Android announcement but is still good to hear.

Has Apple mentioned a plan to support Vulkan on their devices?
 

tuxfool

Banned
Has Apple mentioned a plan to support Vulkan on their devices?

No. Their logo has also been absent from more recent presentations. Not saying it won't happen, but they seem pretty happy with Metal and can bullishly force their API onto the faithful. We'll probably see it in OS X before iOS (if at all).
 

Chobel

Member
How different is the Metal API from DX12/Vulkan? Or in other words, how easy is it to port DX12/Vulkan code to the Metal API?
 

tuxfool

Banned
How different is the Metal API from DX12/Vulkan? Or in other words, how easy is it to port DX12/Vulkan code to the Metal API?

Basically, if you know DX11 and DX12, think of Metal as DX11.5. It keeps things like the binding model from the older APIs, but also has the separate command buffer and submission process of the newer ones.

This means it is easier to use than Vulkan or DX12, but it doesn't do things like bindless resources, which enable command buffer reuse.

Hopefully I have characterised it correctly, but that is my understanding from what I've read. Somebody stomp on me if this is wrong.

tl;dr: Metal is easier to use but not as flexible as DX12/Vulkan.
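To illustrate that distinction with a toy model (plain Python standing in for the APIs; none of this is real Metal/Vulkan/DX12 code, and all the names are made up): with slot-based binding, a recorded draw bakes in whatever resource occupied the slot at record time, so new resources mean re-recording; with bindless indexing, the draw stores only an index into a descriptor table, so the same recorded command can be replayed after the table's contents change.

```python
# Toy model (plain Python, not real API code) of why bindless resources
# enable command buffer reuse while a slot-based binding model doesn't.

class SlotBoundDraw:
    """DX11/Metal-style: the draw captures whatever occupies the binding
    slot at record time, so swapping resources means re-recording it."""
    def __init__(self, slots, slot_index):
        self.texture = slots[slot_index]   # resolved at *record* time

    def execute(self):
        return f"draw with {self.texture}"

class BindlessDraw:
    """DX12/Vulkan-style: the draw stores only an index into a large
    descriptor table; the indirection survives descriptor updates, so
    the recorded command can be replayed frame after frame."""
    def __init__(self, table_index):
        self.table_index = table_index     # resolved at *execute* time

    def execute(self, descriptor_table):
        return f"draw with {descriptor_table[self.table_index]}"

slots = ["brick.png"]
table = ["brick.png"]
bound = SlotBoundDraw(slots, 0)            # recorded during frame 1
bindless = BindlessDraw(0)                 # recorded during frame 1

slots[0] = table[0] = "stone.png"          # a new texture arrives for frame 2
print(bound.execute())       # draw with brick.png  (stale; must re-record)
print(bindless.execute(table))  # draw with stone.png  (reusable as-is)
```

The real APIs are of course far more involved, but the extra level of indirection is the essence of why bindless designs let engines record command buffers once and replay them.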
 

dr_rus

Member
[image: qnYZjuG.png]


Haha

This one is good as well:

[image: ufnbNr9.png]


Kinda illustrates why we're unlikely to get many (good) DX12 ports of DX11 games that are already released or releasing soon.
 

Kezen

Banned
We already knew the high-performance APIs are a hell of a hill to climb, but the benefits should be worth it for everyone.
It helps that multiplatform developers already have such a layer by virtue of doing Xbox One/PS4 work.

I expect the transition to go relatively smoothly for experienced AAA multiplatform devs. Can't wait to see how far they can push PCs with DX12.
Deus Ex Mankind Divided should look really great with it, and hopefully run well.
 

tuxfool

Banned
We already knew the high-performance APIs are a hell of a hill to climb, but the benefits should be worth it for everyone.
It helps that multiplatform developers already have such a layer by virtue of doing Xbox One/PS4 work.

I expect the transition to go relatively smoothly for experienced AAA multiplatform devs. Can't wait to see how far they can push PCs with DX12.
Deus Ex Mankind Divided should look really great with it, and hopefully run well.

Any developer familiar with the console APIs should be able to use the new ones just fine; the paradigm is similar even if the specifics of the API differ. This is only something new to those who aren't familiar with consoles.
 

The hope expressed by many presenters, even prior to the Khronos BoF sessions, was that Vulkan would lead to many different implementations of the rendering pipeline and brand-new techniques that the stateful APIs could not effectively accommodate.

Until people start building more interesting, higher level infrastructure on top of Vulkan, most developers will probably build directly on top of the existing, stateful APIs unless they have very demanding performance requirements.*

Think about what Three.js has done for WebGL for some idea of what I mean by "higher-level infrastructure on top of Vulkan". Or what Core Animation did for the iPhone, by (allegedly) building directly against the PowerVR hardware rather than going through a higher-level graphics API.


* - Which, granted, could be interpreted as "all games" to some degree. :)
 