
VRFocus: Epic’s UE4 Showdown demo shown on Project Morpheus at 120Hz

Very encouraging news, especially re UE4 optimisation on PS4.

Doesn't VR prove that people who claim to be unable to perceive the difference between 30 and 60 fps have no idea what they're talking about?

Clearly human vision runs up to 120Hz or higher if that's needed to produce convincing VR.

Ignorance is bliss, kinda like 1080p vs 900p, fps and other shit. But then there's a difference between not seeing, and seeing and not caring.
 

arter_2

Member
My issue with this demo is that many of the effects they are using are cheated effects. The shadows are just blob shader projections and the explosions are geometry deformations. While it looks good, the demo is on rails and heavily art-directed around the flaws in its creation. It is a brilliant showcase of old-school 3D tricks. I think VR will be amazing, just not anytime soon. We are just going to have to get used to the fact that many of the techniques used to make modern games look so good do not work in VR. We just don't have the graphical power yet, especially not with the PS4. Effects you have gotten used to, like dynamic lighting/shadows and many shader effects, just don't work well in VR yet. There is no getting around the ~1500 draw call budget and, at bare minimum, 90fps.
 

Seanspeed

Banned
Very encouraging news, especially re UE4 optimisation on PS4.

Doesn't VR prove that people who claim to be unable to perceive the difference between 30 and 60 fps have no idea what they're talking about?

Clearly human vision runs up to 120Hz or higher if that's needed to produce convincing VR.
Framerates have a more meaningful impact in VR than they do on a flat display.

First off, the farther away you are from a screen, or just generally the smaller a display is in your vision, the less 'sensitive' you will be to choppy motion/low framerates, as the choppiness takes up very little of your overall vision. Up close, a display takes up more of your vision, making any choppy video more noticeable and uncomfortable. So clearly in VR, where the video is taking up nearly ALL of your forward vision, choppy video is going to be far, far more uncomfortable. So already, we've made a case for higher framerates in VR. I also think this partly explains why gaming in a living room situation (as with consoles, more often than not) tends to be more comfortable at 30fps than it is on PC, where people play on close-up monitors.
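
To put rough numbers on that "share of your vision" point (my own back-of-the-envelope sketch; the sizes are illustrative and the formula is just basic trig for the angle a flat screen subtends):

#include <cmath>
#include <cstdio>

// Horizontal angle (in degrees) that a flat display of width w subtends
// when viewed head-on from distance d (same units for both).
double visualAngleDeg(double width, double distance) {
    const double kPi = 3.14159265358979323846;
    return 2.0 * std::atan2(width / 2.0, distance) * 180.0 / kPi;
}

int main() {
    // Illustrative numbers: a ~48" TV from 3m vs a 24" monitor from 60cm.
    std::printf("living-room TV:  ~%.0f degrees\n", visualAngleDeg(1.07, 3.0));
    std::printf("desktop monitor: ~%.0f degrees\n", visualAngleDeg(0.53, 0.6));
    // ~20 vs ~48 degrees; a VR headset covers roughly 90-100 degrees, so
    // any judder occupies several times more of your visual field.
}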

Anyways, back to VR: there comes a point, with a high enough frame/refresh rate, that we pass the flicker fusion threshold, which is basically the point where the human eye stops noticing individual frames being 'flashed' to it (as each update from a display is, turning an image on and off very quickly), and this is basically the magical point where, in VR, our brains fully accept the motion of the video as 'real'. This doesn't mean there aren't additional benefits from going higher, but anything below this and the core sense of presence is dramatically reduced, or extinguished entirely. So that's why it's so important to VR.

But yes, it also would show that we can indeed detect differences in framerates, but that never exactly needed proving in the first place.
 

Seanspeed

Banned
My issue with this demo is that many of the effects they are using are cheated effects. The shadows are just blob shader projections and the explosions are geometry deformations. While it looks good, the demo is on rails and heavily art-directed around the flaws in its creation. It is a brilliant showcase of old-school 3D tricks. I think VR will be amazing, just not anytime soon. We are just going to have to get used to the fact that many of the techniques used to make modern games look so good do not work in VR. We just don't have the graphical power yet, especially not with the PS4. Effects you have gotten used to, like dynamic lighting/shadows and many shader effects, just don't work well in VR yet. There is no getting around the ~1500 draw call budget and, at bare minimum, 90fps.
Well actually, consoles can currently handle far more draw calls than PCs can. And with DX12/Vulkan coming up, PC will gain a similar capability. Combined with further support from graphics card manufacturers and even OS devs like Microsoft, there is a ton of reason to be optimistic about things going forward.
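
For anyone wondering why draw call counts matter in the first place: every submission has a fixed CPU-side driver cost, which is exactly what the lower-overhead APIs attack. A toy model of the arithmetic (my own sketch; the 0.05ms per-call cost is purely illustrative):

#include <cstdio>

// Toy model: each draw call costs a fixed amount of CPU time in the driver.
struct DrawStats { int calls = 0; double cpuMs = 0.0; };

const double kPerCallOverheadMs = 0.05; // illustrative, not a real measurement

void drawNaive(int objects, DrawStats& s) {
    // One draw call per object: overhead scales linearly with object count.
    for (int i = 0; i < objects; ++i) { s.calls++; s.cpuMs += kPerCallOverheadMs; }
}

void drawInstanced(int objects, DrawStats& s) {
    // Instancing: per-object transforms live in a GPU buffer, so a single
    // call submits every copy at once.
    (void)objects;
    s.calls++; s.cpuMs += kPerCallOverheadMs;
}

int main() {
    DrawStats naive, instanced;
    drawNaive(1500, naive);        // the 1500-call budget mentioned above
    drawInstanced(1500, instanced);
    std::printf("naive:     %d calls, ~%.1f ms CPU\n", naive.calls, naive.cpuMs);
    std::printf("instanced: %d call,  ~%.2f ms CPU\n", instanced.calls, instanced.cpuMs);
    // At 90fps you only have ~11.1ms per frame, so submission overhead alone
    // can blow the budget; cheaper calls and batching are the way out.
}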

Foveated rendering is another technique in development that will improve the rendering process for high res/high framerate VR.
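
The basic idea there, as a rough sketch (my own toy numbers, not from any shipping implementation): shade a small central region at full resolution and the periphery at reduced resolution, and the pixel count collapses:

#include <cstdio>

// Toy arithmetic for foveated rendering: full resolution only where the
// eye is looking, reduced resolution everywhere else.
int main() {
    const double w = 1920.0, h = 1080.0;  // illustrative eye-buffer size
    const double fullCost = w * h;        // pixels shaded without foveation

    const double fovealFrac  = 0.2;       // central 20% of each axis, full res
    const double periphScale = 0.5;       // periphery at half res per axis

    const double foveal = (w * fovealFrac) * (h * fovealFrac);
    const double periph = (fullCost - foveal) * periphScale * periphScale;

    std::printf("full: %.0f px, foveated: %.0f px (~%.0f%% of full)\n",
                fullCost, foveal + periph, 100.0 * (foveal + periph) / fullCost);
}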
 

arter_2

Member
Well actually, consoles can currently handle far more draw calls than PCs can. And with DX12/Vulkan coming up, PC will gain a similar capability. Combined with further support from graphics card manufacturers and even OS devs like Microsoft, there is a ton of reason to be optimistic about things going forward.

Foveated rendering is another technique in development that will improve the rendering process for high res/high framerate VR.

Granted, I'm not negative on the whole thing; I think VR and especially AR are super exciting. I just think many of the current VR demos are disingenuous and don't actually portray the current state of VR tech. I saw this demo at GDC on a computer with the newest Titan Black card and it still stuttered. I'm just trying to keep expectations in check. We cannot get around some very important flaws in VR rendering, the biggest one being that normal mapping doesn't really work anymore.
 

Seanspeed

Banned
Granted, I'm not negative on the whole thing; I think VR and especially AR are super exciting. I just think many of the current VR demos are disingenuous and don't actually portray the current state of VR tech. I saw this demo at GDC on a computer with the newest Titan Black card and it still stuttered. I'm just trying to keep expectations in check. We cannot get around some very important flaws in VR rendering, the biggest one being that normal mapping doesn't really work anymore.
Valve seems to think that normal maps can still be used; you just have to be careful with how you use them:

[image: AKl4ulT.png]
 

DieH@rd

Banned
That EVE footage on PS4 is pretty slick...

Which is why people are really wanting the return of Colony Wars/G-Police. :D

The genre of cockpit games [cars/spaceships/mechs] is uniquely suited to be great in VR, with almost no additional tweaks needed for gameplay. The user at home is sitting in their chair, and is immediately and easily transported into the cockpit of their VR vehicle.
 

arter_2

Member
Valve seems to think that normal maps can still be used; you just have to be careful with how you use them:

Eh, from my experience they kinda work, but they end up acting more like spec maps, aiding lighting detail. Normal maps serve a very important purpose in real-time 3D graphics at the moment; in VR this no longer works and breaks down. You can solve this by using real amounts of high-poly geometry or by using tessellation/displacement mapping. This, however, runs completely counter to the 90fps necessary for clean, non-dizzying VR.
 

Zaptruder

Banned
Valve seems to think that normal maps can still be used; you just have to be careful with how you use them:

[image: AKl4ulT.png]

Basically, assets should be designed with the understanding that users can get up close and personal with them.

Texture detail is a really fantastic candidate for normal mapping, though; those subtle bumps that dramatically affect the object's lighting should be part of an intelligently considered PBR approach to rendering.
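
For reference, here's roughly what a normal map contributes during shading, as a minimal sketch (plain C++ standing in for shader code; the vectors are made up and tangent-space details are omitted). The map perturbs the normal fed into the lighting equation, which is why it reads as lighting detail, while silhouettes and parallax still come from the real geometry:

#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

Vec3 normalize(Vec3 v) {
    const double len = std::sqrt(dot(v, v));
    return { v.x/len, v.y/len, v.z/len };
}

int main() {
    const Vec3 lightDir = normalize({0.3, 0.8, 0.5});
    const Vec3 flatN    = {0.0, 0.0, 1.0};             // geometric surface normal
    const Vec3 mappedN  = normalize({0.2, -0.1, 0.9}); // decoded from a normal map

    // Lambertian term: the map changes the shading result, but the geometry
    // is unchanged, which is exactly what stereo viewing gives away.
    std::printf("flat:   %.3f\n", std::fmax(0.0, dot(flatN, lightDir)));
    std::printf("mapped: %.3f\n", std::fmax(0.0, dot(mappedN, lightDir)));
}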
 

arter_2

Member
Which is why people are really wanting the return of Colony Wars/G-Police. :D

The genre of cockpit games [cars/spaceships/mechs] is uniquely suited to be great in VR, with almost no additional tweaks needed for gameplay. The user at home is sitting in their chair, and is immediately and easily transported into the cockpit of their VR vehicle.

This is also an easier way to control what's in the renderable area to keep draw calls down. It's much easier to render a window where all that's on screen is a few ships, asteroids, and the vastness of space.
 

DieH@rd

Banned
This is also an easier way to control what's in the renderable area to keep draw calls down. It's much easier to render a window where all that's on screen is a few ships, asteroids, and the vastness of space.

Well... space is big, but that does not mean the user is always surrounded by nothingness. :D And mecha/driving games can very easily put a metric shitton of stuff on screen [pCARS can handle ~44 cars on screen on PS4].

I would say that it is easier to keep draw calls down in a slow-paced adventure game than in a space cockpit shooter, where users expect a lot of action, large fleets, debris, lasers [that cast light], particles, fancy space environments [nebulas, cracked planets, asteroid fields, detailed orbiting stations/massive alien tech remnants], etc. :)
 

arter_2

Member
Well... space is big, but that does not mean the user is always surrounded by nothingness. :D And mecha/driving games can very easily put a metric shitton of stuff on screen [pCARS can handle ~44 cars on screen on PS4].

I would say that it is easier to keep draw calls down in a slow-paced adventure game than in a space cockpit shooter, where users expect a lot of action, large fleets, debris, lasers [that cast light], particles, fancy space environments [nebulas, cracked planets, asteroid fields, detailed orbiting stations/massive alien tech remnants], etc. :)

You'd be surprised how much easier it is to do anything but a landscape. Things like foliage really add up. Also, shadows are a lot easier to handle in a space game. Games like Elite Dangerous do a great job of hiding the fact that they are rendering a much smaller live area than the entire screen.
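
A toy illustration of that "smaller live area" idea (my own sketch, nothing to do with Elite's actual renderer): test objects against the view before issuing draws, and empty space costs you nothing:

#include <cstdio>
#include <vector>

struct Sphere { double x, y, z, r; };

// Crude visibility test: keep objects in front of the camera and within
// draw distance. Real engines test against the full view frustum.
bool visible(const Sphere& s, double maxDist) {
    return s.z > 0.0 && s.z - s.r < maxDist;
}

int main() {
    const std::vector<Sphere> ships = {
        {0, 0, 50, 5}, {10, 2, 900, 5}, {-3, 1, 5000, 5}, {0, 0, -100, 5}
    };
    int drawn = 0;
    for (const auto& s : ships)
        if (visible(s, 2000.0)) ++drawn;  // only these generate draw calls
    std::printf("%d of %zu ships drawn\n", drawn, ships.size());
}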
 

Synth

Member
Which is why people are really wanting the return of Colony Wars/G-Police. :D

The genre of cockpit games [cars/spaceships/mechs] is uniquely suited to be great in VR, with almost no additional tweaks needed for gameplay. The user at home is sitting in their chair, and is immediately and easily transported into the cockpit of their VR vehicle.

I want a VR remaster of Virtua Racing. It makes too much sense.
 

Synth

Member
My wish is a VR adaptation of the first track from Moto Racer
http://www.youtube.com/watch?v=3oml876_Cg0

Such a phenomenal game and track. I loved racing it so much.

That looks pretty awesome. I've never played it before. I'll probably check to see if it's available via PSN.

One of the reasons I'd like to see Virtua Racing in VR is because I think the limitations of polygon rendering at the time led the game to employ a graphical style that's infinitely scalable without looking unpleasant. You can add additional flourishes (better lighting etc), but the game will never suffer from ugly texturing regardless of what size it's blown up to, and probably wouldn't even struggle much on mobile hardware.

Something like this PS2 release (running through PCSX2) with a much deeper draw distance would be perfect imo.
 

DieH@rd

Banned
Holy shit, that is a lot of chromatic aberration.

Actually, it's anti-chromatic aberration. The lenses in a VR headset create a strong CA effect of their own, so the game's rendering needs to cancel it out by offsetting the colors in the opposite direction. When viewing the game from inside the headset, there is zero CA visible [except if a dev wants to purposefully introduce it].
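
The usual trick, sketched very roughly (the per-channel factors below are made up; real SDKs calibrate them per lens): sample red, green and blue at slightly different radial offsets from the lens center, so that the lens's own dispersion re-aligns them:

#include <cstdio>

struct UV { double u, v; };

// Scale a lens-centered coordinate toward/away from the center. The lens
// spreads colors radially, so each channel gets its own scale factor.
UV scaleFromCenter(UV uv, double k) {
    return { uv.u * k, uv.v * k };   // uv in [-1, 1], (0, 0) at lens center
}

int main() {
    const UV px = {0.8, 0.4};                    // pixel position, lens-centered
    const UV red   = scaleFromCenter(px, 0.99);  // made-up per-channel factors
    const UV green = scaleFromCenter(px, 1.00);
    const UV blue  = scaleFromCenter(px, 1.01);
    std::printf("sample R at (%.3f, %.3f), G at (%.3f, %.3f), B at (%.3f, %.3f)\n",
                red.u, red.v, green.u, green.v, blue.u, blue.v);
}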
 

Shin-Ra

Junior Member
Volumetric or no, I fucking hated that explosion effect. Good riddance.
I don't like you any more. :(

Those mini-nukes were a thing of beauty. Bionic Commando had a similar effect later, and Sunset Overdrive more recently. Gears 2 added the geometry blood splatter I think.
 

BlazinAm

Junior Member
I've found this PDF which describes some of the optimisations and stereo improvements made for the Showdown demo, including those lovely volumetric effects.

http://static.oculus.com/connect/slides/OculusConnect_Epic_UE4_Integration_and_Demos.pdf

Hopefully this'll help coax developers away from billboard particles in all kinds of games.

edit: I'm still salty we lost this after Drake's Fortune:

[image: UnchartedExplosion.gif]

I am expecting the smoke monster from Lost to jump out of the screen or something; it looks hella weird.
 
Granted, I'm not negative on the whole thing; I think VR and especially AR are super exciting. I just think many of the current VR demos are disingenuous and don't actually portray the current state of VR tech. I saw this demo at GDC on a computer with the newest Titan Black card and it still stuttered. I'm just trying to keep expectations in check. We cannot get around some very important flaws in VR rendering, the biggest one being that normal mapping doesn't really work anymore.

Well, the exciting thing to me is that presence ushers in a new opportunity to capitalize on robust poly work, which modern systems are absolutely beastly at. I'm hoping voxels get some love. But when you're in there, it all looks so good. So yeah, many of our current tricks won't work as well, but we'll develop new ones.
 

Bsigg12

Member
I know you are not entirely downplaying the achievement here, but equating this with VR 3D videos?

To be fair, it is being rendered on the console in real time, whereas a 3D, 360-degree video is just being decoded and played. It's just that the lack of interactive elements allows the developer of the experience to make a very compelling and interesting product, because they don't have to worry about the user wandering off to a corner. I think those sorts of experiences will play a large part in demoing VR to the masses, showing them the idea of presence without overwhelming them by forcing them to then have to control something.

I do think it's cool they've got it running on the PS4 and Morpheus; I just want to see more interactive things like the Heist demo.
 

Fafalada

Fafracer forever
Zaptruder said:
but their SDK seems like it should be flexible enough to allow for other approaches if developers want to delve into it.
Not so much anymore - the latest SDK removes all control from the application, so as of this moment you get no flexibility as such. It'll happen whenever Oculus puts it in their SDK.

All that aside - async shaders are coming from both NVidia and AMD, as outlined in their respective "VR API" approaches, and thus won't be tied to just DX12 or Win10.

It's a semantic quibble.
It is at the moment - eventually we'll get to more elaborate approaches as I mentioned in the other post, which would also be more application specific as well. Current warping implementations are generic enough to "just work" across the board, which is why VR vendors adopted them first.
 

Kinthalis

Banned
is the reprojection going to cause input lag, aka motion sickness?

No, it's not really interpolation in the same way your TV does it to increase perceived frame rate. That would indeed increase latency.

In fact, I don't see any way that this thing isn't still outputting the original frame rate of the game. All reprojection is doing is warping the current frame in response to your head movement while it waits for a new frame.
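
A stripped-down sketch of that warp as I understand it (my own toy model, yaw only and with a made-up pixels-per-degree figure; real timewarp reprojects the whole frame against the latest rotation):

#include <cstdio>

// Toy 'timewarp': given the yaw a frame was rendered at and the yaw the head
// is at right now, shift the image to compensate before scanout.
double warpOffsetPixels(double renderedYawDeg, double currentYawDeg,
                        double pixelsPerDegree) {
    return (currentYawDeg - renderedYawDeg) * pixelsPerDegree;
}

int main() {
    // Illustrative: ~10 px/deg, head turned 0.5 deg since the frame started.
    const double offset = warpOffsetPixels(30.0, 30.5, 10.0);
    std::printf("shift last frame by %.1f px before scanout\n", offset);
    // No added render latency: the game keeps producing frames at its own
    // rate; the warp just applies the freshest tracking data at display time.
}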

My guess is that games will NEED to be at least 60 FPS for the experience to feel good, regardless of what Sony is saying about their tech - given that similar tech exists on the PC side, and yet no one is saying you can have a game running at 30 FPS.
 

Seanspeed

Banned
In fact, I don't see any way that this thing isn't still outputting the original frame rate of the game. All reprojection is doing is warping the current frame in response to your head movement while it waits for a new frame.
The prediction of the movement data creates a new intermediate frame. It is, for all intents and purposes, 120fps.

My guess is that games will NEED to be at least 60 FPS for the experience to feel good, regardless of what Sony is saying about their tech
They haven't said otherwise. Reprojecting from half the refresh rate is important to reduce artifacts as much as possible. When you get into varying framerates, things get more difficult. So essentially, the goal will be a v-sync'd, rock-solid 60fps, the alternative being a true 120fps, with nothing in between.
 

bj00rn_

Banned
is the reprojection going to cause input lag, aka motion sickness?

It helps with lag and motion sickness. And if used as a "framerate doubler", it buys framerate, and thus more detail than native rendering could afford. But it does of course come with drawbacks, like decreased image quality (judder, ghosting/blur), since it's not a "true scene" but a 2D warp of the last rendered frame against tracking data. Thus it's best suited to scenarios with head rotation from a static viewpoint. Native framerate is of course best, unless you're a developer who needs a crutch to reach higher and more stable framerates while keeping a certain amount of detail. As a framerate doubler this is mostly a PS4 thing; on the Oculus dev kit 2 it has only been used for lag/framerate transient smoothing.

(Disclaimer: All AFAIK)
 

Nzyme32

Member
That's interesting. Thanks, I did not know that.

Chromatic aberration emulates the aberrations that occur through a film camera lens. The great thing about VR is moving away from the approach current games take of trying to emulate all the artifacts of film to be "cinematic", as if the player were a camera. None of this makes sense in VR - unless you are using lenses, a helmet, or something else within the virtual reality that could actually cause such effects.
 
The prediction of the movement data creates a new intermediate frame. It is, for all intents and purposes, 120fps.
Are you sure about that? I was under the impression that the second output frame (of2) would be the same as the first rendered frame (rf1), but shifted to compensate for any head movement since the rendering began. Then they'd start work on rf2 — used as the basis for of3 and of4 — using any predictive hints offered by the system. That said, even of1 would also be subject to reprojection, based on any movement which took place between starting rendering on rf1 and final output of of1.

With native 120 fps output — where the engine really is rendering 120 frames a second, as in the Magic Controller demo — reprojection is still used to compensate for any head movement between rendering and output.

So with the former, you get 60 unique frames, adjusted for current head position 120 times a second, and with the latter, you get 120 unique frames, all adjusted for any late movement by the user.

Is that not how it works? Was my explanation even clear? lol
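
To make that concrete, here's my sequence as a toy timeline (just a sketch of the description above, not Sony's actual scheduler):

#include <cstdio>

// 60fps render, 120Hz display: each rendered frame (rf) is reprojected twice,
// each time with a freshly sampled head pose, producing two output frames (of).
int main() {
    for (int of = 1; of <= 6; ++of) {
        const int rf = (of + 1) / 2;  // rf1 -> of1, of2; rf2 -> of3, of4; ...
        std::printf("of%d: reproject rf%d with pose sampled at t=%.1f ms\n",
                    of, rf, (of - 1) * 8.333);
    }
    // At native 120fps every output frame gets its own rf, but each is still
    // reprojected once to absorb movement between render start and scanout.
}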
 