
Sony London details PSVR optimization & in-house VR engine

Man

Member
Saw iWaggleVR tweet a link to a GDC session by Sony London studios as reported in Japanese on Game Watch. Here's the Google translate link:
https://translate.google.co.uk/tran....impress.co.jp/docs/news/20160315_748313.html

Seems like Sony London Studio has created an in-house engine from scratch purely for VR, named LSSDK (it's actually a multiplatform engine that also runs on Windows, but Windows is used only as a quick-iteration development environment). Some technologies are being shared with third parties.
Some cool aspects: it has voxel-based global illumination as well as a really effective Resolution Gradient technique (saving 25% of GPU power). Valve presented a similar radial density masking method at GDC last week, but Sony's technique seems much more mature, as it uses a shifting pattern and also applies temporal AA to fix up the masked pixels. Really smart stuff.
The presentation also confirms Sony renders at 1.2-1.3x the screen resolution (minus masked pixels), like the other VR HMDs.
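For anyone who wants to picture the masking, here's a minimal sketch of how a shifting, zoned checkerboard could work (Python/NumPy). The zone radii, densities, and the 2x2 quad size are my own illustrative guesses, not numbers from the talk, and the history fill is a naive stand-in for the real temporal AA pass:

```python
import numpy as np

QUAD = 2  # GPUs shade in 2x2 pixel quads, so masking whole quads saves real work

def gradient_mask(w, h, frame):
    """Per-pixel mask: True = shade this frame, False = fill from history."""
    qh, qw = h // QUAD, w // QUAD
    qy, qx = np.mgrid[0:qh, 0:qw]
    # normalised distance of each quad from the image centre (lens axis)
    r = np.hypot(qy - qh / 2, qx - qw / 2) / np.hypot(qh / 2, qw / 2)
    half = (qx + qy + frame) % 2 == 0                # 1/2 density, shifts each frame
    quarter = (qx % 2) + 2 * (qy % 2) == frame % 4   # 1/4 density, cycles 4 phases
    quads = np.where(r < 0.45, True, np.where(r < 0.75, half, quarter))
    return quads.repeat(QUAD, axis=0).repeat(QUAD, axis=1)

def resolve(shaded, history, mask):
    """Naive stand-in for the TAA fix-up: masked pixels reuse last frame."""
    return np.where(mask[..., None], shaded, history)

mask = gradient_mask(256, 256, frame=0)
print("pixels shaded this frame: %.0f%%" % (100 * mask.mean()))
```

Because the pattern shifts every frame, each quad gets freshly shaded within a few frames, which is what lets a temporal pass reconstruct the skipped pixels instead of just blurring them.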

[Slides: 03.jpg, 04.jpg, 05.jpg]

[Embedded media: voxel-based global illumination technique called 'Lightfield']

[Embedded media: shifting pattern for Resolution Gradient masking]
 

Kaako

Felium Defensor
Thanks for linking. An in-house VR engine from scratch absolutely sounds like the way to go IF you have the technical prowess and resources backing you up.

Now I'm curious, can we potentially see PSVR PC support from Sony themselves since this engine is actually PC & PS4? OR even PSVR <--> PCVR cross-platform game(s)!? Cause that would be dope as hell too and I'm sure it's fully possible. Just requires a shit ton of work.
 
I'm not smart enough to understand half this stuff but it sure sounds neat. It will be interesting to see what this studio does after shipping VR Worlds.
 

luoapp

Member
In the last Bombcast, Jeff (?) mentioned that the image from PSVR gets fuzzy faster than the other two headsets as you get close to the edge. I guess that explains it.
 

gofreak

GAF's Bob Woodward
Sounds like it will be available to PS devs. They know most devs will need to be multiplatform, hence the PC support. PhyreEngine is the same.
 

Panajev2001a

GAF's Pleasant Genius
Saw iWaggleVR tweet a link to a GDC session by Sony London studios as reported in Japanese on Game Watch. [...]

Really cool approach :).
 

Man

Member
In last bombcast, Jeff (?) mentioned the image from PSVR gets fuzzy faster than the other two when close to the edge. I guess that explains it.
Valve is using a similar technique for their stuff (Aperture VR Demo and The Lab).
 

Shin-Ra

Junior Member

So it looks like:

INNER
Zone 1: full res each frame
Zone 2: 3/4 res each frame
Zone 3: 1/2 res each frame
Zone 4: 1/4 res each frame
OUTER

Dropping resolution towards peripheral vision.
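If the zones break down like that, a quick back-of-envelope sum shows how a saving in the ballpark of the quoted 25% could fall out. The area fractions below are pure guesses on my part, just to see the shape of the numbers:

```python
zones = [  # (fraction of frame area, fraction of pixels shaded) -- assumed values
    (0.40, 1.00),  # zone 1: full res
    (0.25, 0.75),  # zone 2: 3/4 density
    (0.20, 0.50),  # zone 3: 1/2 density
    (0.15, 0.25),  # zone 4: 1/4 density
]
shaded = sum(area * density for area, density in zones)
print("pixels shaded: %.0f%% -> saving: %.0f%%" % (100 * shaded, 100 * (1 - shaded)))
# pixels shaded: 72% -> saving: 28%
```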

Artefacting is probably most similar to KZSF multiplayer if it's on a fixed pixel grid. The reliability of reprojection determines how accurate the pixels NOT RENDERED ANEW each frame are; less accurate pixels look fuzzier, rather than having the blurry appearance of an upscale.

KZSF had full vertical fields instead of the varying checker pattern though, so even the 1/2 res zone would differ.


Is that 1.2-1.3x the x*y (total) resolution, or applied to x and y individually?
 

cyberheater

PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 Xbone PS4 PS4
Resolution Gradient technique (saving 25% GPU power).

That's a biggie, and the improvements will only get better.

This will ensure great looking games on PS VR.
 

Man

Member
Dropping resolution towards peripheral vision.

Artefacting is probably most similar to KZSF multiplayer if it's on a fixed pixel grid. The reliability of reprojection determines how accurate the pixels NOT RENDERED ANEW each frame are; less accurate pixels look fuzzier, rather than having the blurry appearance of an upscale.
Actually, the rendered image is warped (after rendering) to compensate for the optics. Meaning: if the rendering resolution were uniform across the image, the engine would be rendering an unnecessary number of pixels. In other words, if the resolution didn't drop gradually towards the edges, you would end up rendering e.g. 2 software pixels for 1 physical screen pixel (ergo: wasting GPU resources for nothing). The aim of this technique is to stay as close as possible to rendering 1 software pixel per 1 physical screen pixel across the whole image. So the 'artifacts' are actually (truly) invisible.
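To put rough numbers on that 1:1 aim, here's a tiny sketch assuming a generic polynomial lens-distortion profile (the coefficients are invented, not PSVR's real lens data). The warp's local derivative tells you how many rendered pixels collapse onto one physical pixel at each radius, and therefore how far shading density can drop there without losing anything:

```python
k1, k2 = 0.22, 0.24  # invented distortion coefficients, not a real lens profile

def rendered_px_per_display_px(r):
    # assume the display->render-target mapping r_src = r * (1 + k1*r^2 + k2*r^4);
    # its derivative is the local "rendered pixels consumed per display pixel"
    return 1 + 3 * k1 * r**2 + 5 * k2 * r**4

for r in (0.0, 0.5, 1.0):
    m = rendered_px_per_display_px(r)
    print("radius %.1f: %.2f rendered px per display px -> ~1/%.1f density is enough" % (r, m, m))
```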
 

Shin-Ra

Junior Member
I'd be very interested to see the results of a resolution gradient applied to a non-VR display, especially a UHD frame, where the lowered density toward the edges would be 1/4 res = 1080p density at worst.

It'd be even better if the lower density zones were dynamically proportioned and only introduced as needed.
 

Shin-Ra

Junior Member
Valve's Radial Density Masking technique is described in this PDF.

They appear to have two zones: inner at full density, outer at half density. Their reconstruction filter just averages neighbouring pixels, with no temporal reprojection.
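A naive toy version of that neighbour-averaging fill is easy to sketch (my own illustration in Python/NumPy, not Valve's actual shader). Each masked pixel just takes the mean of whichever of its four neighbours were rendered:

```python
import numpy as np

def fill_masked(img, rendered):
    """img: float (H, W, 3); rendered: bool (H, W). Fills unrendered pixels."""
    out = img.copy()
    acc = np.zeros_like(img)
    cnt = np.zeros(rendered.shape)
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        acc += np.roll(img, (dy, dx), axis=(0, 1)) * np.roll(rendered, (dy, dx), axis=(0, 1))[..., None]
        cnt += np.roll(rendered, (dy, dx), axis=(0, 1))
    holes = ~rendered & (cnt > 0)
    out[holes] = acc[holes] / cnt[holes][..., None]
    return out  # note: np.roll wraps at the borders; fine for a sketch
```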


The KZSF temporal reprojection technique only fell back to using neighbouring pixels if the motion-predictability of the pixel was low.
 

teiresias

Member
Can someone explain the whole "masking out pixel quads on edges" thing? What is it masking exactly and what are they doing to mask it? Is this a screen artifact due to the optics that they're trying to hide?
 

Urthor

Member
Hopefully Oculus and Valve can rip off these uber-efficient VR techniques and implement them on their platforms.

Realistically though, PSVR isn't going to close the gap except by downgrading visuals to maintain the framerate; perfectly reasonable, but still.

The PS4 doesn't get 60 FPS in regular gaming.
 
Can someone explain the whole "masking out pixel quads on edges" thing? What is it masking exactly and what are they doing to mask it? Is this a screen artifact due to the optics that they're trying to hide?

Is it (and please, someone more tech-minded, intervene here) that image fidelity is optimized to mirror how human vision works, as in not as much fidelity being needed at the borders of our peripheral vision?
 

Shin-Ra

Junior Member
The presenter probably explained it better, "at edges" can be interpreted too many ways.

The other part that's not immediately obvious is whether the closeup masking patterns shown are 2x2 pixels with 4 subsamples each (4xMSAA) or just 4x4 pixels.

edit: it's also a bit confusing that the gameplay shot uses dark pixels to represent the not-rendered parts, whereas the closeup masking pattern uses white for them.
 

cheezcake

Member
Well, I suppose that's finally confirmation that Sony is supersampling as well.

Can someone explain the whole "masking out pixel quads on edges" thing? What is it masking exactly and what are they doing to mask it? Is this a screen artifact due to the optics that they're trying to hide?

[image006.jpg: pincushion vs. barrel distortion example]


So the lens in the VR headset applies an effect called pincushion distortion to the image displayed on the screen. When something is rendered normally and displayed on the screen, it will actually look warped to the user (like the middle part of this example image). To counteract this, you apply a barrel distortion and send that to the screen; the pincushion distortion then cancels it out, and you get the "normal" image as the end result.

The thing with barrel distortion, as you can see, is that the middle of the image gets expanded to take up more of the screen while the edges get contracted. Since rendering the edges at max resolution would be wasteful (they end up occupying only a small part of the screen), what Sony and Valve describe is effectively rendering only a portion of the screen: they put a checkerboard-like mask over it so nothing masked has to be rendered. You get a much lower-detail image, but they then reconstruct it (i.e. fill in the masked gaps), with temporal AA on PSVR and a separate reconstruction filter for Valve. It's still a lower-detail image, but that matters less, since those parts of the screen get contracted by the barrel distortion anyway.

You can think of it as a way to save quite a bit of GPU power with a small hit on end image quality.
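For the curious, here's a toy version of that counter-warp, using the standard inverse-mapping approach (each output pixel looks up where it came from in the rendered image). The distortion coefficient and the nearest-neighbour sampling are just for illustration, not any real headset's profile:

```python
import numpy as np

K1 = 0.25  # invented distortion strength

def barrel_prewarp(img):
    """Resample img so the lens's pincushion distortion cancels it out."""
    h, w, _ = img.shape
    y, x = np.mgrid[0:h, 0:w]
    # normalised coordinates centred on the lens axis
    nx, ny = (x - w / 2) / (w / 2), (y - h / 2) / (h / 2)
    # each output pixel samples *outward* in the source, which squeezes
    # the edges and leaves the centre magnified -- i.e. barrel distortion
    scale = 1 + K1 * (nx * nx + ny * ny)
    sx = np.clip((nx * scale + 1) * w / 2, 0, w - 1).astype(int)
    sy = np.clip((ny * scale + 1) * h / 2, 0, h - 1).astype(int)
    return img[sy, sx]
```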
 

dr_rus

Member
Why is it Voxel based? Just curious.

Any world-space global illumination technique is likely to be based on 3D volumes, as there is no other practical way to trace light bounces through a 3D space made of polygons.
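For intuition, here's a heavily stripped-down sketch of the general voxel idea: inject radiance into a coarse 3D grid, then march through the grid to gather a bounce. To be clear, this is a generic illustration of the technique family, not LSSDK's actual 'Lightfield' implementation:

```python
import numpy as np

N = 32
radiance = np.zeros((N, N, N, 3))
radiance[16, 28, 16] = (50.0, 45.0, 40.0)  # a lit surface injected into the grid

def gather_indirect(p, dirs, steps=32, step_len=1.0):
    """March from point p through the voxel grid and average what each ray hits."""
    total = np.zeros(3)
    for d in dirs:
        d = np.asarray(d, dtype=float)
        d /= np.linalg.norm(d)
        for i in range(1, steps):
            v = np.floor(p + d * (i * step_len)).astype(int)
            if (v < 0).any() or (v >= N).any():
                break  # ray left the grid without hitting anything lit
            if radiance[tuple(v)].any():
                total += radiance[tuple(v)] / (i * step_len) ** 2  # distance falloff
                break
    return total / len(dirs)

# one-bounce estimate at a receiver point, from a handful of upward directions
dirs = [(0, 1, 0), (1, 1, 0), (-1, 1, 0), (0, 1, 1), (0, 1, -1)]
print(gather_indirect(np.array([16.0, 4.0, 16.0]), dirs))
```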

Can someone explain the whole "masking out pixel quads on edges" thing? What is it masking exactly and what are they doing to mask it? Is this a screen artifact due to the optics that they're trying to hide?

This is a performance optimization for VR rendering. They're excluding some pixels at the periphery of the eye's view from processing and filling the resulting holes with copies of adjacent processed pixels, essentially lowering the rendering resolution in places where the human eye natively has low-resolution perception.
 

cheezcake

Member
This is a performance optimization for VR rendering. They're excluding some pixels at the periphery of the eye's view from processing and filling the resulting holes with copies of adjacent processed pixels, essentially lowering the rendering resolution in places where the human eye natively has low-resolution perception.

It's not so much to do with the human eye, as your eyes still move and focus on what's at the edge of the screen a fair amount.
 
Why is it Voxel based? Just curious.

Because of Minecraft. (Not at all.)

My best guess, not knowing anything about this: it's basically baking GI into the scene using a static volumetric structure (voxels), the way lightmaps bake lighting around light sources.

Just my guess. One of the slides says something like that, I think.
 
Well, I suppose that's finally confirmation that Sony is supersampling as well. [...] You can think of it as a way to save quite a bit of GPU power with a small hit on end image quality.

Very interesting, cool.
 

dr_rus

Member
It's not so much to do with the human eye, as your eyes still move and focus on what's at the edge of the screen a fair amount.

While you're right that the immediate benefit comes from how VR lenses distort the render target today, the underlying idea is fundamentally about rendering the peripheral viewing areas at lower spatial resolution, because that's how our eye works: it can't see full resolution outside of a roughly 5% central viewing cone. The same idea will eventually work with eye-tracking techniques and on very wide FOV VR displays, whenever those become a reality.
 

Shin-Ra

Junior Member
Into the Deep and Dangerball look by far the most beautiful and polished in the latest VR Worlds trailer.

Scavenger's Odyssey is new to me and mixed: nice metal materials and lighting, but obvious pop-in in the hangar environment, and very basic geometry and low-res shadows in open space.

The London Heist looks very good apart from the very grey motorway and the baldy interrogation gangster. Intricate environment detail in the desk-cover shooting and interrogation scenes, but low-res shadows again. The briefcase gangster and most of the environment behind him look great.

VR Luge looks plentifully detailed for how fast you're whizzing along; it could benefit from a bit more contrast, and the shadow draw distance is close, but maybe that's not noticeable when you're focusing on the upcoming dangers.
 

cheezcake

Member
While you're right that the immediate benefit comes from how VR lenses distort the render target today, the underlying idea is fundamentally about rendering the peripheral viewing areas at lower spatial resolution, because that's how our eye works: it can't see full resolution outside of a roughly 5% central viewing cone. The same idea will eventually work with eye-tracking techniques and on very wide FOV VR displays, whenever those become a reality.

I'm not so sure that's the fundamental idea behind this. Effectively, you would have to be very confident that people don't move their eyes from the center of the screen very much. I don't personally know if that's true; if it is, you'd be right, but my instinct says we move our gaze around the screen often enough that simple foveated-style rendering with only a center focus is far from ideal.
 

Javin98

Banned
So can someone please explain whether this voxel-based global illumination technique is dynamic or baked into the environment?
 

slapnuts

Junior Member
Will this have the same effect that the current Gear VR has, with the blurring of the outer portions of the screen that a lot of us moaned about the last few days?
 
Well, I suppose that's finally confirmation that Sony is supersampling as well. [...] You can think of it as a way to save quite a bit of GPU power with a small hit on end image quality.

That's also the reason for rendering at a higher (>1080p) base resolution: the middle gets expanded and covers more physical pixels on the display. Combining higher-res rendering with a selective resolution drop at the edges gets you closer to 1:1 resolution on the display.
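On Shin-Ra's earlier question of whether the 1.2-1.3x applies per axis or to the total pixel count: the thread doesn't settle it, but the two readings work out quite differently. My own arithmetic, assuming PSVR's 1920x1080 panel is split into roughly 960x1080 per eye:

```python
eye_w, eye_h = 960, 1080  # approx. per-eye panel resolution on PSVR

for k in (1.2, 1.3):
    # reading 1: multiplier applied to each axis
    print("%.1fx per axis -> %dx%d render target (%.2fx the pixels)"
          % (k, round(eye_w * k), round(eye_h * k), k * k))
    # reading 2: multiplier applied to the total pixel count
    print("%.1fx total px -> %d rendered px vs %d native"
          % (k, eye_w * eye_h * k, eye_w * eye_h))
```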
 

Peltz

Member
I've posted my slides up here now, as you guys seem interested. :)
Cool. Thanks for sharing.
Well, I suppose that's finally confirmation that Sony is supersampling as well. [...] You can think of it as a way to save quite a bit of GPU power with a small hit on end image quality.

This is all so fascinating. Very interesting stuff.
 

twobear

sputum-flecked apoplexy
so if they're doing all of this stuff on low-to-mid-range GPU hardware... why does the Oculus Rift demand so much more?
 

darkinstinct

...lacks reading comprehension.
so if they're doing all of this stuff on low-to-mid-range GPU hardware... why does the Oculus Rift demand so much more?

Because it comes at a cost: image quality. These are workarounds to make it run on low-end hardware; it's always better to just use better hardware and get optimal image quality, especially in VR, where jaggies or artefacts can easily take you out of the experience. Which is why Sony is planning that PS4K for PSVR.
 