
Game Graphics Technology | 64bit, procedural, high-fidelity debating

KKRT00

Member
Thx. It should get even more structured in time hopefully! I just wish more people would post :D
---
GDC 2016 talks are starting to pop up and they already look pretty interesting:

Object Space Rendering in DirectX 12 sounds awesome.

Also interesting to see that Timothy Lottes is working on HDR displays.

-----
There will be a new, big CryEngine release in March, so expect a nice tech trailer from Crytek at GDC.
http://www.cryengine.com/news/cryengine-s-new-release-cycle
 

RoboPlato

I'd be in the dick
Something I've been wondering about for a while is the question of 4K potential in the next gen consoles. I'm of the mind that, in order to hit a $399 price point, native 4K with an expected next gen leap in fidelity would be unlikely. However, after getting Rainbow Six: Siege, I think its MSAA reconstruction technique could be quite beneficial on consoles next gen. Being someone who is primarily a console gamer (can't run anything more complex than Source games on my laptop), I would love to see widespread use of these techniques next gen in order to prevent scaling artifacts, so we don't have another gen like the last where everything has to be scaled in some way.

For those of you unaware, on consoles Siege renders at half resolution in order to hit 60fps in the competitive mode and uses the data from 2x MSAA samples to reconstruct a 1080p output on PS4 and a 900p output on Xbox One. I do not have experience with the Xbox One version, but on PS4 the final output is shockingly convincing, even in motion.
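As a rough illustration, the reconstruction works out to something like this toy numpy sketch (my own simplification of the general idea, definitely not Ubisoft's actual implementation, and ignoring motion vectors/reprojection entirely): the current frame only shades half the pixels in a checkerboard pattern (roughly what the 2x MSAA sample positions give you), and the other half is filled in from the previous frame, with a neighbour-average fallback where the history is rejected.

import numpy as np

def reconstruct_checkerboard(curr_half, prev_full, frame_index, reject_mask=None):
    # Toy checkerboard reconstruction (illustrative only).
    # curr_half  : (H, W//2) newly shaded samples for this frame
    # prev_full  : (H, W)    previous reconstructed frame (assumed already reprojected)
    # frame_index: parity decides which checkerboard cells were shaded this frame
    # reject_mask: (H, W) bool, True where history is untrustworthy (disocclusion etc.)
    H, W2 = curr_half.shape
    W = W2 * 2
    out = prev_full.copy()                       # start from history for the "missing" pixels

    ys, xs = np.mgrid[0:H, 0:W]
    shaded = ((xs + ys + frame_index) % 2) == 0  # cells covered by this frame's samples
    out[shaded] = curr_half[ys[shaded], xs[shaded] // 2]

    if reject_mask is not None:                  # where history is rejected, fall back to a
        bad = reject_mask & ~shaded              # horizontal neighbour average (both neighbours
        left = np.roll(out, 1, axis=1)           # were freshly shaded this frame)
        right = np.roll(out, -1, axis=1)
        out[bad] = 0.5 * (left[bad] + right[bad])
    return out

# Alternate the parity each frame so that two consecutive frames cover every pixel:
# frame_n = reconstruct_checkerboard(new_samples, prev_frame, frame_index=n % 2)

The real implementation obviously runs on the GPU, reprojects the previous frame with motion vectors and works on actual MSAA sample data, but the halved shading cost falls out of exactly this kind of interleaving, and so do the high-contrast edge artifacts you can spot in the shots below.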

Example:
24504519551_e81efb9f6d_o.png


There is some artifacting but it's mostly only visible on high contrast edges and when there is a lot of motion. It's relatively minor and looks more reminiscent of aliasing than a more distracting aberration.

Example (around the gun):
24586791135_1cc7eb8758_o.png


Some of you may remember Killzone: Shadow Fall's multiplayer mode attempting to do something similar. Guerrilla's technique used alternating lines from temporal sampling in order to reconstruct a 1080p image, and I was personally not a fan of it. It didn't save enough performance to justify the overall loss in clarity, and the image was less temporally stable than what Siege is doing.

I'd love to get more information on methods like this and their practical application. Dictator93 pointed me to this piece by Matt Pettineo from Ready At Dawn. https://mynameismjp.wordpress.com/2015/09/13/programmable-sample-points/

If anyone has more insight into these or similar techniques I'd like to hear about it, examples would be excellent as well. Do you agree that it would be a useful pursuit for consoles in the next generation? I think it would be, especially with a much higher resolution for the original image.
 

dogen

Member
RoboPlato said:
Something I've been wondering about for a while is the question of 4K potential in the next gen consoles.
/snip

The next Trials game will be doing a somewhat similar trick.

They discuss it and the differences between it and RS:Siege's trick in this thread.

https://forum.beyond3d.com/threads/gpu-driven-rendering-siggraph-2015-follow-up.57240/
 
Does anyone know if HDR will incur a performance penalty? With DisplayPort 1.3 coming later this year, there will be two new and interesting types of display:

4K HDR at 60Hz

4K SDR at 120Hz

But 120Hz at such a resolution is obviously not really feasible for quite some time, at least not for AAA gaming with very high settings, even with the new GPU generation starting soon. If HDR does not come with a performance penalty, this could make those displays very attractive.
 

pottuvoi

Banned
Gemüsepizza;193837325 said:
Does anyone know if HDR will incur a performance penalty?
Games already render to HDR buffers, so the cost of HDR display shouldn't be that big.
What will be interesting to see is how much HDR will magnify aliasing.. (And thus it might need more processing power to get decent overall quality..)

We should know more after GDC and Timothy Lottes presentation.
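For reference, the display side of it is mostly an encoding change at the very end of the frame: the scene is still rendered into a floating-point HDR buffer as usual, and instead of the old gamma ramp the tonemapped output gets encoded with the SMPTE ST.2084 "PQ" curve for an HDR10-style signal. A minimal sketch of just that encode step (the constants come from the ST.2084 spec; everything around it is assumed):

import numpy as np

# SMPTE ST.2084 (PQ) constants
M1 = 2610.0 / 16384.0          # 0.1593017578125
M2 = 2523.0 / 4096.0 * 128.0   # 78.84375
C1 = 3424.0 / 4096.0           # 0.8359375
C2 = 2413.0 / 4096.0 * 32.0    # 18.8515625
C3 = 2392.0 / 4096.0 * 32.0    # 18.6875

def pq_encode(nits):
    # Encode absolute luminance (cd/m^2, 0..10000) to a 0..1 PQ signal value.
    y = np.clip(np.asarray(nits, dtype=np.float64) / 10000.0, 0.0, 1.0)
    yp = y ** M1
    return ((C1 + C2 * yp) / (1.0 + C3 * yp)) ** M2

# e.g. a pixel the tonemapper decided should be 1000 nits:
print(pq_encode(1000.0))   # ~0.75 of the signal range

So the per-pixel cost of the output encode itself is tiny; the aliasing question above is the more interesting one, because much brighter highlights make unresolved bright sub-pixel detail flicker far more visibly.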
 
Am I alone in thinking 4K is going to massively stifle graphics tech? If developers are aiming for 2160p30 on next-gen consoles then I think we're in for a very underwhelming generation indeed. 4K is just a massive burden I personally do not want.
 

Smash88

Banned
Am I alone in thinking 4K is going to massively stifle graphics tech? If developers are aiming for 2160p30 on next-gen consoles then I think we're in for a very underwhelming generation indeed. 4K is just a massive burden I personally do not want.

Consoles are struggling to hit 1080p/30fps. Most gaming PCs (based on the Steam survey) are now able to handle 1080p/60fps. I honestly don't see next-gen consoles hitting 4K, never mind 2K, while trying to be affordable and generate a profit for their company. I have a 980Ti and these new games coming out make it hard for me to hit 1440p(2K)/60fps steady - and that GPU cost me what amounts to two current-gen consoles, which doesn't take into account the rest of the computer, components, shipping, packaging, etc. I think next-gen consoles will prioritize 1080p/60fps gaming; I can see that being attainable at a reasonable price to consumers. I wouldn't expect 2K support until next-next-gen.

I was wondering if anyone here with more in-depth knowledge knows when actual full-scene raytracing in games will be a possibility. How far away are we from it becoming a regular thing in PC games? Will DX12 make it more manageable? Do you think a GTX 1080 (or whatever it will be called) could do it?
 

_machine

Member
Consoles are struggling to hit 1080p/30fps. Most gaming PCs (based on the Steam survey) are now able to handle 1080p/60fps. I honestly don't see next-gen consoles hitting 4K, never mind 2K, while trying to be affordable and generate a profit for their company. I have a 980Ti and these new games coming out make it hard for me to hit 1440p(2K)/60fps steady - and that GPU cost me what amounts to two current-gen consoles, which doesn't take into account the rest of the computer, components, shipping, packaging, etc. I think next-gen consoles will prioritize 1080p/60fps gaming; I can see that being attainable at a reasonable price to consumers. I wouldn't expect 2K support until next-next-gen.
A minor quibble, but 1920x1080 is actually "2K"; no one uses the term since 1080p rose to popularity before 4K became the big marketing term.

But I would say you are definitely right in predicting that 4K will not make it to consoles for quite a while. At least in my experience with UE/Unity and the consoles, the renderers use a lot of effects that scale heavily with resolution (it's not just video memory usage), and the step-up in performance requirements is massive, so I doubt the next generation will be able to cope with it on top of ever more complex rendering solutions.
 
Games already render to HDR buffers, so the cost of HDR display shouldn't be that big.
What will be interesting to see is how much HDR will magnify aliasing.. (And thus it might need more processing power to get decent overall quality..)

We should know more after GDC and Timothy Lottes presentation.

I hope the GDC presentations go online quickly this year; last year a number of them never made it up even after the wait period :/

Does GDC Vault access include recorded proceedings from each session? It would be something I would consider if that were the case.
---

OT - I am curious how HDR screens will actually affect VR; they could increase aliasing, but also help with believability. Do VR games have post-processing for lighting in general, or is that against the comfort guidelines? For example, would post-process bloom be something desirable in VR? Is such a thing a good idea, especially on LDR displays? Are things like eye adaptation a no-no, or perhaps good for preventing eye fatigue?
 
More GDC talks are coming online and some caught my eye:

Photogrammetry in Star Wars Battlefront
Photogrammetry has started to gain steam within the games industry in recent years. At DICE, the technique was first used on Battlefield, and the technology and workflow were fully embraced for Star Wars: Battlefront. This talk will cover their research and development, planning and production, techniques, key takeaways and plans for the future. The speakers will cover photogrammetry as a technology, but more than that, show that it's not a magic bullet but instead a tool like any other that can be used to help achieve your artistic vision and craft.
Takeaway

Come and learn how (and why) photogrammetry was used to create the world of Star Wars. This talk will cover Battlefront's use of the technology from pre-production to launch as well as some of their philosophies around photogrammetry as a tool. Many visuals will be included!
Rendering Rainbow Six Siege
Rainbow Six | Siege is based on the first iteration of a new current-gen-only rendering engine. With massively and procedurally destructible levels, it was important to invest in techniques that allow for better scaling on both CPU and GPU. This session will describe the most interesting work done by the R6 graphics team to ship a competitive game on Xbox One, PS4 and up to five-year-old PCs. It will focus on architectural optimizations that leverage compute and are only possible with current-generation hardware. The talk will also present their new checkerboard rendering technique that allows for up to 50% faster rendering times without great quality loss.
Takeaway

With temporal checkerboard rendering you can save up to 50% GPU time without significant quality loss. This technique also allows you to cut the memory footprint of all fullscreen render targets in half.
Using a GPU-driven pipeline can vastly improve draw-call scalability, allowing you to draw tens of thousands of possibly unique objects per frame.

Rendering Mirror's Edge Catalyst

Designing a big city that players can explore by day and by night, while improving on the unique visuals of the first Mirror's Edge game, isn't an easy task.
In this talk, the tools and technology used to render Mirror's Edge: Catalyst will be discussed. From the physical sky to the reflection tech, the speakers will show how they tamed the new Frostbite 3 PBR engine to deliver realistic images with stylized visuals.
They will talk about the artistic and technical challenges they faced and how they tried to overcome them, from the simple light settings and Enlighten workflow to character shading and color grading.
Takeaway

Attendees will gain insight into the technical and artistic techniques used to create a dynamic time-of-day system with updating radiosity and reflections.
Temporal Reprojection Anti-Aliasing in Inside
Playdead's INSIDE makes strong use of Temporal Reprojection Anti-Aliasing to deliver satisfactorily clean and stable images.

Temporal Reprojection Anti-Aliasing is a spatio-temporal post-process technique, where fragments from the most recent frame are correlated with fragments from a history buffer through reprojection. By carefully jittering the view frustum, and by making sensible choices for when to accept or reject a history sample, this technique can produce images that are superior to the input in terms of information density, because the information in every fragment accumulates over time.

This talk will focus on Temporal Reprojection Anti-Aliasing in the context of INSIDE. It will touch on the process, the initial research, and the pleasant side-effects. Most importantly, it will discuss in-depth the individual stages of the implementation written for INSIDE, and how it deals with common problems such as disocclusion and trailing artefacts.
Takeaway

Attendees will gain insight into the general concepts and intuition behind Temporal Anti-Aliasing, as well as a detailed look at how this technique was implemented in INSIDE. For programmers, there will be aspects of the presentation that are directly applicable in constructing their own temporal anti-aliasing solution.
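For anyone curious what that accept/reject step usually boils down to, here is a very stripped-down toy version of the common neighbourhood-clamp approach (my own sketch, not Playdead's code): the reprojected history sample is clamped to the colour range of the current pixel's 3x3 neighbourhood before being blended in, which is one popular way of rejecting stale history.

import numpy as np

def taa_resolve(curr, history, blend=0.9):
    # Toy temporal AA resolve (illustrative only).
    # curr, history: (H, W, 3) float arrays; history is assumed to already be
    # reprojected into the current frame using motion vectors.
    nmin = curr.copy()
    nmax = curr.copy()
    for dy in (-1, 0, 1):                         # per-pixel min/max of the 3x3 neighbourhood
        for dx in (-1, 0, 1):
            shifted = np.roll(np.roll(curr, dy, axis=0), dx, axis=1)
            nmin = np.minimum(nmin, shifted)
            nmax = np.maximum(nmax, shifted)

    clamped = np.clip(history, nmin, nmax)        # reject history outside the plausible range
    return blend * clamped + (1.0 - blend) * curr

# Each frame: jitter the projection by a sub-pixel offset, render, then
# history = taa_resolve(current_frame, reproject(history, motion_vectors))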
Taming the Jaguar: Insomniac Games
In this session the low-level optimizations in the AMD Jaguar CPU used in PS4 and XBOX ONE will be analyzed. Optimizing for the out of order Jaguar CPU is very different from previous console CPUs, and in this session a few key optimization techniques will be reexamined in the context of out of order CPUs.
Takeaway

Attendees will learn a variety of practical techniques for optimizing code on the Jaguar CPU, including proper handling of prefetching, unrolling and SIMD code.
Optimising the GPU pipeline with Compute
With further advancement in the current console cycle, new tricks are being learned to squeeze the maximum performance out of the hardware. This talk will present how the compute power of the console and PC GPUs can be used to improve the triangle throughput beyond the limits of the fixed function hardware. The discussed method shows a way to perform efficient "just-in-time" optimization of geometry, and opens the way for per-primitive filtering kernels and procedural geometry processing.
Takeaway

Attendees will learn how to preprocess geometry on-the-fly per frame to improve rendering performance and efficiency.
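The "per-primitive filtering" part of that is conceptually simple even though the hard part is doing it efficiently in a compute shader; a CPU-side toy version of the tests each triangle might go through could look like this (my own sketch with made-up thresholds, not the presented implementation):

import numpy as np

def filter_triangles(clip_pos, indices, viewport=(1920.0, 1080.0)):
    # Toy per-primitive filter: drop back-facing, fully off-screen and sub-pixel
    # triangles before they reach the rasterizer (illustrative only).
    # clip_pos: (N, 4) vertex positions already in clip space
    # indices : (T, 3) triangle vertex indices; returns a reduced index list
    keep = []
    for tri in indices:
        v = clip_pos[tri]
        if np.any(v[:, 3] <= 0.0):                 # crosses the camera plane: let the clipper handle it
            keep.append(tri)
            continue
        ndc = v[:, :2] / v[:, 3:4]                 # perspective divide to NDC
        if np.all(ndc[:, 0] < -1) or np.all(ndc[:, 0] > 1) or \
           np.all(ndc[:, 1] < -1) or np.all(ndc[:, 1] > 1):
            continue                               # every vertex outside the same frustum side
        px = (ndc + 1.0) * 0.5 * np.array(viewport)
        e1, e2 = px[1] - px[0], px[2] - px[0]
        area2 = e1[0] * e2[1] - e1[1] * e2[0]      # twice the signed screen-space area
        if area2 <= 0.0:                           # back-facing or degenerate (CCW front faces assumed)
            continue
        if area2 < 1.0:                            # under ~half a pixel: contributes nothing visible
            continue
        keep.append(tri)
    return np.array(keep).reshape(-1, 3)

The GPU version does the same classes of tests in parallel and compacts the surviving indices, so the fixed-function triangle setup never sees the primitives that would have been thrown away anyway.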
Mixed Resolution Rendering in Skylanders
Skylanders: Superchargers leveraged mixed resolution rendering to help optimize the fill-heavy effects that established the feeling of the Skylands, such as clouds and localized atmospherics. This lecture will cover the improvements in quality and performance that were developed to make this possible for the wide range of platforms that needed to be supported. The topics will include best practices for depth downsampling, bilateral filter weighting improvements, and a performant branchless upsample technique.
Takeaway

Attendees will gain some useful insight into best practices when working with offscreen VFX and mixed resolution rendering. There will be aspects of the presentation that they can walk away with and implement that day to improve quality and performance.
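The bilateral weighting mentioned there is the key trick when compositing a low-resolution effects buffer back over the full-resolution frame: each full-res pixel favours the low-res samples whose downsampled depth best matches its own, so clouds and fog don't bleed across silhouettes. A crude single-pixel sketch of that weighting (my own illustration, not the actual Skylanders implementation):

import numpy as np

def bilateral_upsample_pixel(lo_color, lo_depth, hi_depth, bilinear_w, sigma_z=0.1):
    # Weight the 4 nearest low-res samples by their bilinear weights AND by how
    # closely their (downsampled) depth matches the full-res pixel's depth.
    # lo_color: (4, 3), lo_depth: (4,), hi_depth: float, bilinear_w: (4,)
    depth_w = 1.0 / (np.abs(lo_depth - hi_depth) / sigma_z + 1e-4)  # closer depth -> bigger weight
    w = bilinear_w * depth_w
    w /= w.sum()
    return (lo_color * w[:, None]).sum(axis=0)

# e.g. a full-res pixel sitting on an edge between near fog (depth 0.2) and far sky (depth 0.9):
color = bilateral_upsample_pixel(
    lo_color=np.array([[0.8, 0.8, 0.9], [0.8, 0.8, 0.9], [0.1, 0.2, 0.4], [0.1, 0.2, 0.4]]),
    lo_depth=np.array([0.2, 0.2, 0.9, 0.9]),
    hi_depth=0.21,
    bilinear_w=np.array([0.25, 0.25, 0.25, 0.25]))
print(color)   # dominated by the two near-fog samples, so the silhouette stays crisp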
Global Illumination in Tom Clancy's Division
The session will describe the dynamic global illumination system that Ubisoft Massive created for "Tom Clancy's The Division". Our implementation is based on radiance transfer probes and allows real-time bounce lighting from completely dynamic light sources, both on consoles and PC. During production, the system gives our lighting artists instant feedback and makes quick iterations possible.
The talk will cover in-depth technical details of the system and how it integrates into our physically-based rendering pipeline. A number of solutions to common problems will be presented, such as how to handle probe bleeding in indoor areas. The session will also discuss performance and memory optimization for consoles.
Takeaway

Attendees will gain understanding of the rendering techniques behind precomputed radiance transfer. We will also share what production issues we encountered and how we solved them - for example, moving the offline calculations to the GPU and managing the precomputed data size.
 

kinn

Member
Great thread. Not sure if this is the best place to ask but I'll ask anyway.

Seems like a few console games are using dynamic resolution scaling to maintain 60fps etc. How hard is this to incorporate into an engine? Is this something that 3rd party engines have already or can be modified to use? Or is this something that needs to be planned from the beginning of creating a custom engine?

And finally, any PC games use this? If not, why not? Seems like a good solution to maintain a solid framerate.
 

KKRT00

Member
Great thread. Not sure if this is the best place to ask but I'll ask anyway.

Seems like a few console games are using dynamic resolution scaling to maintain 60fps etc. How hard is this to incorporate into an engine? Is this something that 3rd party engines have already or can be modified to use? Or is this something that needs to be planned from the beginning of creating a custom engine?

And finally, any PC games use this? If not, why not? Seems like a good solution to maintain a solid framerate.

All id Tech 5 games use it. People generally turn it off, but for low-end machines it's quite a decent feature. It definitely needs some additional settings though, like how far the resolution drop can go, for example.
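The control loop itself is simple; most of the work is making the renderer happy with a viewport that changes size every frame. Something along these lines, as a toy sketch with made-up numbers (including the minimum-scale setting mentioned above):

def update_render_scale(scale, gpu_ms, target_ms=16.6, min_scale=0.7, max_scale=1.0, gain=0.05):
    # Toy dynamic-resolution controller: nudge the resolution scale up or down
    # based on how far the last frame's GPU time was from the target (illustrative only).
    error = (target_ms - gpu_ms) / target_ms      # positive = headroom, negative = over budget
    scale += gain * error
    return max(min_scale, min(max_scale, scale))

# e.g. a few frames of a GPU spike and recovery:
scale = 1.0
for gpu_ms in (15.0, 19.0, 21.0, 18.0, 15.5, 14.0):
    scale = update_render_scale(scale, gpu_ms)
    print(round(scale, 3))   # drops toward min_scale under load, creeps back up afterwards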
 

kinn

Member
All id Tech 5 games use it. People generally turn it off, but for low-end machines it's quite a decent feature. It definitely needs some additional settings though, like how far the resolution drop can go, for example.

Thanks for the info. I'd love to see this become a common option.

id Tech 5 is being phased out though, right?
 

RoboPlato

I'd be in the dick

These all sound super interesting. I know Naughty Dog is doing a talk at GDC as well. Is it a tech or design talk? I'd love for a look at the final tech of Uncharted 4 shortly before launch.
 

_machine

Member
These all sound super interesting. I know Naughty Dog is doing a talk at GDC as well. Is it a tech or design talk? I'd love for a look at the final tech of Uncharted 4 shortly before launch.
One tech art (with production tips), one texturing pipeline (with Allegorithmic):
Texturing Uncharted 4: a matter of Substance (presented by ALLEGORITHMIC)
Technical Art Culture of Uncharted 4

Too bad I couldn't fit a trip to GDC into my schedule and budget this year, though I do hope to have a chat with some tech developers from European studios at GDCEU (even if that event is merely a shadow of the main one).

Interesting that they're doing a talk on this, considering it's a feature that has been almost completely cut from the game.
You don't really "almost cut" a feature; it's either in or not, and GI itself definitely seems to be in the game even if the effect isn't that noticeable. In any case, it's an area that definitely needs more research, and hopefully the talk will shed some light on how the complete feature compares against the early implementation from the vertical slice.
 

RoboPlato

I'd be in the dick
You don't really "almost cut" a feature; it's either in or not, and GI itself definitely seems to be in the game even if the effect isn't that noticeable. In any case, it's an area that definitely needs more research, and hopefully the talk will shed some light on how the complete feature compares against the early implementation from the vertical slice.

Not just the vertical slice. Footage from early last year showed a much stronger presence of GI. Then again, it's not as though we don't know why it was cut :/
 

etta

my hard graphic balls
Do you think the Xbox SDK has received some significant improvements, given that we're seeing more games come out at 1080p? I initially thought there was a physical limitation (more or less) with the ESRAM having barely enough space for a 1080p buffer, but I'm curious if they have found a way to get around that via software. The Division (open world) is 1080p, Hitman is now also confirmed 1080p with the same assets as PS4, Quantum Break is confirmed 1080p, and the Gears of War 4 campaign is said to be 1080p/30. Also, speaking of The Division, they said the engine will be used for other Ubisoft games, so could that mean the next Assassin's Creed will be 1080p on both consoles?

Paging Dictator93.
 
Heyo!

Sorry everyone that I have not updated the front page in a bit (the last time I did was about a month ago). Unfortunately real life has made the pursuit of my passions difficult.
Paging Dictator93.
Heyo!
Do you think the Xbox SDK has received some significant improvements, given that we're seeing more games come out at 1080p?
I obviously would not know directly, but I think there is a convergence of things in general, whereby the largest part would be devs coming to a greater understanding of how to manage around the xb1's shortcomings. I would not doubt that MS has provided guidelines for how to manage the ESRAM depending upon different use cases.
I initially thought there was a physical limitation (more or less) with the ESRAM having barely enough space for a 1080p buffer, but I'm curious if they have found a way to get around that via software.
ESRAM should still be one of the main factors getting in the way of a game reaching 1920x1080 if it has a traditionally fatter framebuffer (I cannot know for certain, but I believe this is the main reason Fox Engine games are 1280x720, for example). Even then, games that do not have the aforementioned problem can still hit shading and other GPU limitations. Devs have to know where to cut corners IMO, and that is one of the reasons why native 1080p games are starting to become a thing on xb1.
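Just to put rough numbers on the "barely enough space" point, here is a back-of-the-envelope calculation assuming a fairly typical deferred setup (purely illustrative formats and target counts, not any specific game):

def buffer_mb(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel / (1024 * 1024)

# a fairly common deferred layout at 1920x1080 (illustrative only)
targets_1080p = {
    "gbuffer albedo (RGBA8)":    buffer_mb(1920, 1080, 4),
    "gbuffer normals (RGBA8)":   buffer_mb(1920, 1080, 4),
    "gbuffer misc (RGBA8)":      buffer_mb(1920, 1080, 4),
    "depth/stencil (D24S8)":     buffer_mb(1920, 1080, 4),
    "HDR light accum (RGBA16F)": buffer_mb(1920, 1080, 8),
}
total = sum(targets_1080p.values())
print(f"total ~{total:.1f} MB vs 32 MB of ESRAM")   # ~47 MB of render targets alone

It simply does not all fit, so devs have to spill some targets out to DDR3 or shrink formats and resolutions, and how well that juggling act goes is a big part of where the final resolution lands.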
The Division (open world) is 1080p, Hitman is now also confirmed 1080p with the same assets as PS4, Quantum Break is confirmed 1080p, and the Gears of War 4 campaign is said to be 1080p/30.
One must not forget that The Division is only 1920x1080 sometimes. So even if they managed to make ESRAM work for their framebuffer size in a performant manner, the oomph is missing elsewhere to maintain that resolution all the time in terms of raw shading / bandwidth performance. This is where I again want to mention the basic thing that has probably been happening: devs understanding where and how to cut corners to mitigate large and visible quality drops. This often means lower-resolution internal rendering for lots of effects instead of the final output resolution.

And that is how Quantum Break and the new Gears game will probably be made possible, IMO. I am really curious about Gears though, especially as UE4 is really unproven on the new consoles, and I think its base rendering costs for things like realtime shadows and post-processing make the possibility of 1080p/30 somewhat questionable. Perhaps they will have to render many elements at lower than output resolution to achieve that (SSAO, SSR, etc). They even apparently have added volumetric lighting to their UE4 branch... lots of expensive stuff to have at 1.3 TF at 1920x1080! The last game with a similar suite of rendering effects on xb1 that I can think of is Ryse (obmb, global volumetric lighting, visible bokeh, etc).

bokeh
spot light volumetric
what looks like global volumetric lighting
obmb

All of which look to be rather high quality / high internal resolution in the first video they released, which is atypical of what we have seen for similar effects in other UE4 games on xb1, other engines on xb1, or even nearly all games on PS4.

Basically, I am curious to see what compromises will have to be made for both if they do want to hit 1080 on xb1. We already know the compromises of QB to a certain extent (many 720p buffers with a 1080p output).
Also, speaking of The Division, they said the engine will be used for other Ubisoft games, so could that mean the next Assassin's Creed will be 1080p on both consoles?
I do find The Division's engine to be really well balanced, whereas the AC engine obviously overshot the gen in its first outing... hence the 1600x900 and barely 30fps it managed. Then again, AC has really different requirements than The Division in terms of AI routines and animation (there need to be way more animated characters on screen at once in AC than there are in The Division).
Do you really think they would toss out all the iterations of Anvil Next and start using Snowdrop?
----

Maybe my post is satisfactory in answering your question, maybe not! It is a lot of speculation of course, but in the end I would conservatively say that it is more reasonable to assume that the xb1's achievements in hitting certain resolutions are down to devs and their design priorities rather than deft overarching decisions from MS.
 

etta

my hard graphic balls
Heyo!
.
.
.
Interesting, so render some elements at lower resolution to save buffer space. Thanks, and it will certainly be interesting to see where this goes, if indeed more games start coming out at 1080p on the Xbox. I'm personally hoping Deus Ex will get to 1080p, as I pre-ordered the Xbox version. I can always switch if they do mention the resolution prior to release.
 

gofreak

GAF's Bob Woodward
Object space shading isn't an entirely new concept, but it will be interesting to see if it takes flight now. Lots of potential wins in a variety of contexts (e.g. VR). Tempted to try an implementation.
 

AmyS

Member
Hopefully high-end console / PC games during the 2020s will make major advances in using real-time ray tracing and related techniques.
 

Raticus79

Seek victory, not fairness
When VR headsets get eye tracking and foveated rendering, do you think their eye tracking could pass through to benefit old flat-screen games running in virtual cinema mode? e.g. the headset's layer knows where the player is looking on the virtual movie screen, and that gets passed to the driver which handles rendering non-focus regions at lower resolution without the game knowing about any of it.

Could actually be a strong case for virtual cinema mode if that concept works.
 

Javin98

Banned
Just to bump this thread since there hasn't been any interesting discussion for a long time. Now we know that Star Wars Battlefront is using Enlighten for its global illumination solution. Here's the link for confirmation:
http://www.geomerics.com/news/enlighten-in-star-wars-battlefront/

Anyway, my question is, is the GI solution real time or baked? Well, I tried to verify this by going under an area where color bleeding is in effect. Here's a pic of it - not the best, but I can't take a better one right now:

maxresdefault.jpg


I'm referring to the orange hue under the sheet. I went under it and the character was completely unaffected by it. No orange hue is cast on the character. So I'm guessing that means the GI is baked. Can anyone who understands GI better leave some thoughts?
 

RoboPlato

I'd be in the dick
Javin98 said:
Anyway, my question is, is the GI solution real time or baked?
/snip
Enlighten has been pretty much just a baked solution until very recently. At GDC they demoed their real-time GI in UE4, but as far as I know Frostbite isn't using it yet.
 

Javin98

Banned
Enlighten has been pretty much just a baked solution until very recently. GDC they demoed their real time GI in UE4 but as far as I know Frostbite isn't using it yet.
That's strange. I thought Enlighten proudly claimed that their GI solution was real time for a long time now. Also, UE4 has real time GI now? Last I heard it was only using Lightmass.
 

RoboPlato

I'd be in the dick
That's strange. I thought Enlighten proudly claimed that their GI solution was real time for a long time now. Also, UE4 has real time GI now? Last I heard it was only using Lightmass.
Really? I only remember them talking about how much faster it is to bake with them. Could you give me a link?

Enlighten is compatible with UE4, but I don't think it's a part of the engine.
 
I wonder how Naughty Dog was able to achieve their volumetric lighting solution in Uncharted 3 and The Last of Us. I thought it looked really good.
 

pottuvoi

Banned
Javin98 said:
Anyway, my question is, is the GI solution real time or baked?
/snip
That test doesn't really tell you whether the game has real-time GI, just whether there is a light probe placed under the orange object.
Lightmass and pretty much all completely baked methods can pass that test.

A better test would have been to see whether a moving light source gives bounce light (which dynamic Enlighten can do).
 

Javin98

Banned
So another post from me today in my attempt to revitalize this thread briefly.
I also have genuine questions to ask. :p

Now, Batman: Arkham Knight has a model viewer mode that lets you get up close to the character models to appreciate all the details. The mode is set in pitch darkness with only a spotlight on the character model. So I've always been wondering whether the character models here are the gameplay/cutscene models or entirely different, higher-fidelity models. To put that to the test, I made this comparison:
evjhv4p.png

G5NdFeb.png


The first pic is a photo mode shot, which, as far as I know, doesn't improve the gameplay model, with DOF cranked up to focus only on Batman. So, in my opinion at least, both models are very similar, different lighting conditions aside. The model viewer mode probably uses higher quality AA, though. Thoughts, anyone?
 

Javin98

Banned
So I know double posting is frowned upon but my last post got no replies and I have another question, so....

I've been playing The Witcher 3 and I'm close to the ending, 85 hours in or so. Anyway, the game looks great on PS4 and I think the water looks great, but I'm wondering, does the water use screen space reflections? It certainly behaves like SSR as the reflected objects disappear from the surface when you pan the camera to move away from said objects.
 
A minor quibble, but 1920x1080 is actually "2K"; no one uses the term since 1080p rose to popularity before 4K became the big marketing term.

A minor quibble, but 2K is a defined resolution - 2048x1080, as defined by DCI. 4K gets a little complicated, as DCI refers to 4K as 4096x2160 (double 2K), while UHD-1 is 3840x2160. Depending on your point of view both are technically 4K, just from different bodies, whereas 2K only ever referred to resolutions 2048 pixels wide (there are variations of 2K with differing vertical resolutions, such as 2048x1556 anamorphic), none of which included 1080p as "2K". Many still consider 4K to be the DCI standard of 4096x2160, while TVs are UHD 4K.
 
So I know double posting is frowned upon but my last post got no replies and I have another question, so....

I've been playing The Witcher 3 and I'm close to the ending, 85 hours in or so. Anyway, the game looks great on PS4 and I think the water looks great, but I'm wondering, does the water use screen space reflections? It certainly behaves like SSR as the reflected objects disappear from the surface when you pan the camera to move away from said objects.

Hey Javin,
sorry for not responding at all to your last question!

Yeah, TW3 uses SSR... just seemingly for water surfaces (which is arguably a pretty hard place to do it, since you are projecting the reflections onto a transparency). TW3 very rarely has extra-smooth glossy metal surfaces on which to check whether the game uses SSR elsewhere, but I do not think it does.

Sand Bar reflected
witcher3_2016_04_25_145a2m.png

Sand Bar not reflected
witcher3_2016_04_25_1fryfa.png


And like you said, the best way to check (beyond looking for occlusion errors and stuff) would be to just pan the camera down below the object in question along the surface to see if its reflection disappears.
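For anyone wondering why that test works: SSR only has the current frame's depth and colour buffers to march through, so anything that is not visible on screen simply cannot show up in the reflection. A bare-bones toy version of the ray march (my own 2D sketch of the general idea, not CDPR's implementation):

import numpy as np

def ssr_trace(depth_buf, color_buf, start_xy, start_depth, step_xy, step_depth,
              max_steps=64, thickness=0.01):
    # Toy screen-space reflection march: walk a reflected ray across the screen and
    # return the colour where it first dips just behind the depth buffer.
    # Returns None when the ray leaves the screen -- that information simply is not
    # there, which is SSR's fundamental limitation.
    h, w = depth_buf.shape
    x, y, z = float(start_xy[0]), float(start_xy[1]), float(start_depth)
    for _ in range(max_steps):
        x += step_xy[0]; y += step_xy[1]; z += step_depth
        xi, yi = int(round(x)), int(round(y))
        if not (0 <= xi < w and 0 <= yi < h):
            return None                        # ray left the screen: no data to reflect
        scene_z = depth_buf[yi, xi]
        if scene_z < z < scene_z + thickness:  # ray passed just behind visible geometry: hit
            return color_buf[yi, xi]
    return None

# tiny usage example: a flat "water" depth buffer with one closer pillar in column 5
depth = np.full((8, 8), 0.9); depth[:, 5] = 0.3
color = np.zeros((8, 8, 3)); color[:, 5] = [1.0, 0.5, 0.0]
print(ssr_trace(depth, color, start_xy=(0, 4), start_depth=0.25,
                step_xy=(1.0, 0.0), step_depth=0.011))   # hits the pillar -> orange

Pan the camera so the pillar leaves the frame and the same trace returns None, which is exactly the reflection pop-out you see on the water in TW3.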
So another post from me today in my attempt to revitalize this thread briefly.
I also have genuine questions to ask. :p

Now Batman Arkham Knight has a model viewer mode that lets you get up close to the character models to appreciate all the details. The mode is in complete pitch dark with only a spotlight on the character model. So I've always been wondering if the character models here are the gameplay/cutscene models or entirely different models with higher fidelity. To put that to test, I made this comparison:
/snip

I am not sure the AA is different; rather, the lighting conditions and lack of sub-pixel detail in the distance may just make the AA coverage appear that much better. But yeah, I think the model is the same in gameplay and elsewhere. You can especially tell in the intro sequence of the game, when you get the new bat suit and it quickly zooms in on Batman's suit, then cuts immediately to gameplay.
 

Javin98

Banned
Hey Javin,
sorry for not responding at all to your last question!
/snip
Thanks a lot, man! I was quite sure it was SSR in The Witcher 3, but I wanted confirmation. I tried looking around the Internet, but I couldn't find a definitive answer, so I decided to ask here. And you came to the rescue, thanks again! ;) Also, yeah, The Witcher 3 has very few, if any, glossy surfaces around the world and I haven't seen SSR anywhere else besides the water surface. I'm actually curious to see if a better reflection technique will come that removes the biggest limitation of SSR.

As for Batman: Arkham Knight, your observations do make sense. The character models appear cleaner overall, but there is still some nasty shimmering on some of them - Man-Bat's character model, especially. Hair is also a victim of shimmering. In any case, it's pretty impressive that the gameplay models look that good.

Also, if you don't mind, there will be a new trailer for Uncharted 4 later and it would be nice if you could post your impressions on the visual fidelity here. ;)
 

HTupolev

Member
I'm actually curious to see if a better reflection technique will come that removes the biggest limitation of SSR.
Classic planar reflections and realtime cubemaps don't suffer from SSR's limitation, although they have applicability and efficiency challenges.

Some modern games (Tomorrow Children is a decent example) have been playing around with generating low-frequency volumetric irradiance maps in real time, which can allow for a pretty general-purpose solution, although it basically always takes a lot of power and memory to get such things to offer particularly sharp reflections.
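For the planar case, the core of it is just mirroring the camera about the reflection plane and rendering the scene a second time into a texture; the reflection matrix itself is tiny. A generic sketch for a plane n·x + d = 0 (nothing engine-specific assumed):

import numpy as np

def reflection_matrix(n, d):
    # 4x4 matrix that mirrors points across the plane n.x + d = 0 (n must be unit length).
    nx, ny, nz = n
    return np.array([
        [1 - 2 * nx * nx,    -2 * nx * ny,    -2 * nx * nz, -2 * nx * d],
        [   -2 * ny * nx, 1 - 2 * ny * ny,    -2 * ny * nz, -2 * ny * d],
        [   -2 * nz * nx,    -2 * nz * ny, 1 - 2 * nz * nz, -2 * nz * d],
        [              0,               0,               0,           1],
    ])

# mirror a camera sitting 2 units above a ground/water plane y = 0 (n = (0, 1, 0), d = 0):
R = reflection_matrix((0.0, 1.0, 0.0), 0.0)
cam_pos = np.array([3.0, 2.0, -5.0, 1.0])
print(R @ cam_pos)   # -> [ 3. -2. -5.  1.], the camera reflected below the plane

Render the scene from that mirrored camera into a texture and sample it on the water or mirror surface; unlike SSR it can reflect things that are off screen, at the cost of drawing the scene again for every reflective plane.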
 