
30min PSVR technical presentation (Feb. 2016)

Joystick

Member
Quick question: how readable is text while playing VR games?

It depends on the size of the text and its contrast with the background, of course. With the DK2 I have found some text a little tricky in some games due to the resolution, lack of AA, or SDE (screen-door effect). If you move your head around a little it is easier to read, as you get to see pixels in the text that are missing when motionless.
 

RoadHazard

Gold Member
There is no secret sauce.

There is (although it's not very secret): the full-RGB pixels vs. the pentile displays of the OR/Vive. Anyone who has owned a phone with a pentile display (at a not-super-high resolution) can tell you this is not insignificant for image quality. Which is probably why so many are saying the PSVR has surprisingly good and clear IQ despite its lower res.
 
Sadly, I don't know if much has been done in this area. Personally, using PSVR, I don't know if it would fit small heads. Your best bet is probably people using their own DK1, DK2, Gear VR, or Cardboard.

There are also the warnings on video games (particularly 3D) about visual development, but I never knew how serious those were.



Put it in my veins. The applications for horror in VR (especially with eye tracking) are staggering. Having some Silent Hill-esque creatures always hiding in your peripheral vision... I'm hoping for a renaissance in horror movies and games.

And I never thought about the heat maps. That has interesting implications. They could inject that tech into movies too.




Yeah, me too. While I'm glad they'll support Move, I'm still hoping they'll come out with a Move that has an analog stick. I understand it would fragment the market, but the experience would be so much better, and not all games would have to require it. It would be like the Wii MotionPlus or the Circle Pad Pro, and could be marketed similarly.

I always thought it was really weird that there was no second analog stick, considering they were developing VR when the Moves came out.
 
Are you sure about that? I would expect them to use frame packing over HDMI, as with 3D video, to send two 1920x1080 images, since that would require very little effort to split between the two screens.

Fairly sure, it's from an interview with Ito, and he should know!

Link (it's Japanese, but Google Translate does a reasonable job).
 

bj00rn_

Banned
Anyone who has owned a phone with a pentile display (at a not-super-high resolution) can tell you this is not insignificant for image quality.

...for static image quality, basically, in certain monochrome cases with certain colors. Video is different, and there are also ways to arrange the sub-pixels and even diffuse them to recover some of the drawbacks. Sony is doing all the right things from where they are, and I commend them for that, but sub-pixel density is not making pentile exactly 33% "worse" like they would have you believe. The challenge is a lot more dynamic than that. Anyone who has owned a DK2 with a pentile display can tell you this ;)
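
For what it's worth, here's the raw subpixel arithmetic behind that "33%" figure (panel resolutions are the public specs; 3 vs. 2 subpixels per pixel is the standard RGB-stripe vs. RGBG PenTile difference):

```python
# PSVR: single 1920x1080 RGB-stripe panel, 3 subpixels per pixel
psvr_subpixels = 1920 * 1080 * 3       # 6,220,800

# Rift/Vive: 2160x1200 combined, PenTile RGBG, ~2 subpixels per pixel
pentile_subpixels = 2160 * 1200 * 2    # 5,184,000

print(psvr_subpixels / pentile_subpixels)  # ~1.2x, despite the lower pixel count
```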
 

gofreak

GAF's Bob Woodward
So their secret sauce is rendering at a lower resolution and lower frame rate. Interesting.
They rely on re-projection to "hide" it, but I highly doubt it's going to be as good as actual 90fps and the higher resolution.
Edit: It can be good enough for a lower price point (I would assume ~$400)

There are a number of games doing native 90+ fps. Sony seems to be warding devs off 60fps, in fact, depending on the content. There's absolutely no requirement to run at 60 and rely purely on reprojection to cover the latency. Indeed, there'll be games on this 'good enough' platform pushing and displaying a higher native frame rate than the others... These are all just options.
 
Quick question: how readable is text while playing VR games?

Try something like Quake VR on a Cardboard/Xingear; it's perfectly fine, as the whole image is sharp. Quake has blocky text, though, so resolution-wise just don't expect to be playing text- and UI-heavy games like Civilization on the first-gen devices.
 

pottuvoi

Banned
So they are doing all the warping and re-projection on the PS4's GPU? Strange...
Not really; all the necessary data from rendering is near at hand and ready to use.
Re-projection is a 0.5ms compute job, so it's not that bad. (Also, it's now always on, apparently even at 120Hz.)
The engine takes the last rendered frame (one that was already sent to the user's eyes), waits for the correct time, samples the latest tracking data, applies the rotation to that old frame, and sends it to the user.
Apparently the finished frame is re-projected for every refresh, so it's also done for the frame that just finished rendering.
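
For the curious, here's roughly what a rotation-only re-projection amounts to, as a Python/numpy sketch (the function names and pinhole-camera setup are my own illustration, not Sony's actual implementation):

```python
import numpy as np

def reproject_rotation_only(frame, K, R_render, R_latest):
    """Warp a finished frame from the head orientation it was rendered
    with (R_render) to the latest sampled orientation (R_latest).
    Rotation-only, so no depth buffer is needed: every output pixel is
    mapped through the homography H = K * R_delta * K^-1."""
    h, w = frame.shape[:2]
    R_delta = R_render.T @ R_latest          # rotation between the two poses
    H = K @ R_delta @ np.linalg.inv(K)       # output-pixel -> source-pixel map
    ys, xs = np.mgrid[0:h, 0:w]
    rays = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    src = H @ rays
    src = np.round(src[:2] / src[2]).astype(int)
    valid = (0 <= src[0]) & (src[0] < w) & (0 <= src[1]) & (src[1] < h)
    out = np.zeros_like(frame)               # unfilled pixels stay black...
    out.reshape(-1, frame.shape[2])[valid] = frame[src[1, valid], src[0, valid]]
    return out  # ...which is exactly the edge gap discussed later in the thread
```

A per-pixel matrix multiply and texture fetch is a trivially parallel job for a GPU, which makes the quoted 0.5ms figure plausible.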
 

DieH@rd

Banned
Apparently the finished frame is re-projected for every refresh, so it's also done for the frame that just finished rendering.

Yes, when a brand-new frame is rendered [in ~16ms], just before it is sent to the user it also gets reprojected to match the latest head-position data.
 

Joystick

Member
Yes, when a brand-new frame is rendered [in ~16ms], just before it is sent to the user it also gets reprojected to match the latest head-position data.

I understand how reprojection works, but the way that devs explain it is that they take the sensor data at the start of the frame, render, then as late as possible poll the sensors again and reproject for the new orientation.

What I haven't seen or heard mentioned is if the engine uses that sensor data as-is or instead estimates the final orientation/position for the future point in time that the image will reach the player's eyes (based on remaining frame time & latency to send to the display), renders for that later estimate, then does the same when estimating final orientation for reprojection.

For example, if head rotation is constant it is easy to calculate the orientation in 16ms time, just add the same rotation that occurred during the last frame and render for that. Even acceleration could be taken into account. After all, the polling rate of the sensors and camera is very high (~1000Hz) and even fast head movement is comparatively quite slow so you could calculate speed and acceleration quite well.

I'm assuming that this is what is actually being done, right? In which case reprojection is only filling a small gap, especially if natively rendering at 120fps. I'd like to know how many degrees and typical pixels we're talking about.
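
That kind of dead-reckoning is simple to express. A toy sketch of the idea (yaw only; every name here is made up for illustration, and real trackers work on quaternions and fuse IMU + camera data):

```python
def predict_yaw(samples, lookahead_s):
    """Extrapolate head yaw `lookahead_s` seconds ahead from recent
    (timestamp_s, yaw_deg) sensor samples, using the last measured
    angular velocity plus a crude acceleration term."""
    (t0, y0), (t1, y1), (t2, y2) = samples[-3:]
    v_old = (y1 - y0) / (t1 - t0)         # velocity over the older interval
    v_new = (y2 - y1) / (t2 - t1)         # velocity over the newest interval
    accel = (v_new - v_old) / (t2 - t0)   # rough angular acceleration
    return y2 + v_new * lookahead_s + 0.5 * accel * lookahead_s ** 2

# Render for where the head *will* be when the frame reaches the eyes:
# yaw_at_display = predict_yaw(imu_history, remaining_frame_time + scanout_latency)
```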
 

mrklaw

MrArseFace
I wonder if reprojection can help with wireless VR?

Have the host computer do the brunt of the work for rendering, but then do reprojection locally on the headset to reduce the effects of latency. Would that be feasible?
 

bj00rn_

Banned
I wonder if reprojection can help with wireless VR?

Have the host computer do the brunt of the work for rendering, but then do reprojection locally on the headset to reduce the effects of latency. Would that be feasible?

Don't you need, among possibly other things (I'm no VR dev..), the z-buffer data first?
 

DieH@rd

Banned
I understand how reprojection works, but the way that devs explain it is that they take the sensor data at the start of the frame, render, then as late as possible poll the sensors again and reproject for the new orientation.
Yes, that is how it works. The engine does a very complicated routine to calculate the final image for the initial sensor data [camera position], and at the end of rendering a very quick reprojection happens to finalize the output with the latest tracking data.

As for estimating future positions, some of that is happening already [all VR headset makers promote it as an essential part of tracking] before the engine gets tracking data and starts working on a frame.

I don't know if prediction is used when the engine asks for tracking data for reprojection. In any case, from the time work on reprojection starts to the moment the user sees that image, very little time has passed. Maybe there is no need for movement prediction over such a small timeframe, or if it is used, it has a very small influence on camera rotation.
 
Haven't followed this whole conversation, but with PSVR, head movement and position are not taken into account for reprojection; only rotation is (think Google Street View).

So no depth map is required if you wanted to defer this to a theoretical headset-based processor.
(I'm guessing the lens correction is done *after* the reprojection, so such a design would need to take care of that too.)

This is confirmed again at 10 minutes into the talk.
 

ZehDon

Gold Member
I wonder if reprojection can help with wireless VR?

Have the host computer do the brunt of the work for rendering, but then do reprojection locally on the headset to reduce the effects of latency. Would that be feasible?
Microsoft showed off some interesting work on this back in 2015, actually, titled Project Irides. Cloud-based VR that used frame prediction and re-projection to accomplish the final result. They used Doom 3 for their example, and it was considered about the same quality as the original Doom 3 VR demo John Carmack showed off, back when the Rift was still a backyard project. Video and breakdown here. When Microsoft joins the VR space, I suspect it'll look more like this than PSVR.
 

TTP

Have a fun! Enjoy!
I understand how reprojection works, but the way that devs explain it is that they take the sensor data at the start of the frame, render, then as late as possible poll the sensors again and reproject for the new orientation.

What I haven't seen or heard mentioned is if the engine uses that sensor data as-is or instead estimates the final orientation/position for the future point in time that the image will reach the player's eyes (based on remaining frame time & latency to send to the display), renders for that later estimate, then does the same when estimating final orientation for reprojection.

For example, if head rotation is constant it is easy to calculate the orientation in 16ms time, just add the same rotation that occurred during the last frame and render for that. Even acceleration could be taken into account. After all, the polling rate of the sensors and camera is very high (~1000Hz) and even fast head movement is comparatively quite slow so you could calculate speed and acceleration quite well.

I'm assuming that this is what is actually being done, right? In which case reprojection is only filling a small gap, especially if natively rendering at 120fps. I'd like to know how many degrees and typical pixels we're talking about.

Some degree of prediction is "always on", I think, so it is factored in whether you are rendering or just re-projecting.

As for how many pixels we are talking about, you can get an idea by looking at the re-projection artifacts in this direct-feed video from The London Heist.

https://www.youtube.com/watch?v=m2CXbjwLv2w

Pay very close attention to the top corners when the view turns around, especially the top-left one. It's *very* hard to notice in motion (they seem to be hiding it with some sort of edge mirroring, so you don't see black areas popping in), but here are a few frames I've captured that clearly show the artifacts.

*[three captured frames showing the mirrored-edge fill in the top-left corner]*

Reminds me of the old PS1 games :D

Anyway... it's worth noting this footage is over a year old. Stuff might have changed since then.

Finally, here is a good video showing re-projection in action on the Rift with VorpX

https://youtu.be/12R8Z4mssoY?t=12m43s
 

mrklaw

MrArseFace
Microsoft showed off some interesting work with this back in 2015, actually, titled Project Irides. Cloud based VR that used frame prediction and re-projection to accomplish the final result. They used Doom 3 for their example, and it was considered about the same quality to the original Doom 3 VR demo John Carmack showed off, back when the Rift was still a backyard project. Video and breakdown here. When Microsoft join the VR space, I suspect it'll look more like this than PSVR.

Nice. Although it's a bit cheeky how the guy is like this :| when they're showing the baseline example, and then like this :) when the prediction is on.

Also lol at 'trilemma'.

Although this is looking at cloud computing, I think it'd apply well to a local server with lower and more predictable latency. You'd probably have to render a larger area to allow for changes in direction or inaccurate predictions, which would place an additional burden on the GPU, but the mobility it allows could be worth it.
 

pottuvoi

Banned
I understand how reprojection works, but the way that devs explain it is that they take the sensor data at the start of the frame, render, then as late as possible poll the sensors again and reproject for the new orientation.

What I haven't seen or heard mentioned is if the engine uses that sensor data as-is or instead estimates the final orientation/position for the future point in time that the image will reach the player's eyes (based on remaining frame time & latency to send to the display), renders for that later estimate, then does the same when estimating final orientation for reprojection.

For example, if head rotation is constant it is easy to calculate the orientation in 16ms time, just add the same rotation that occurred during the last frame and render for that. Even acceleration could be taken into account. After all, the polling rate of the sensors and camera is very high (~1000Hz) and even fast head movement is comparatively quite slow so you could calculate speed and acceleration quite well.

I'm assuming that this is what is actually being done, right? In which case reprojection is only filling a small gap, especially if natively rendering at 120fps. I'd like to know how many degrees and typical pixels we're talking about.
https://twitter.com/id_aa_carmack/status/254009866690121728
If the rotation speed is around 300 degrees/s, the head movement over one 120Hz frame is around 2.5 degrees. (5 times the angular size of the moon... if PSVR has a horizontal FOV of 90 degrees, it should be at around 100 pixels for the whole movement.)

Agreed that movement estimation should be a good choice before rendering the frame, with just a small adjustment at the end of the frame.
 

Fafalada

Fafracer forever
bj00rn_ said:
Async timewarp is already a standard in the Oculus SDK for DK1 and DK2..
Minor quibble - TW has been there pretty much since the beginning, but it's not async yet as of 0.8.

mrklaw said:
I wonder if reprojection can help with wireless VR?
There's a limited amount of latency you can absorb with reprojection - and it's basically consumed by normal game updates already. If wireless adds any non-trivial delay (>10ms), it already falls into the realm of a bad experience.
I'd argue there are other ways to hide latency - for instance, scanline-based rendering and scanline-ordered transmission - but the former is extremely difficult to balance on non-closed platforms, and I don't have the expertise to speak on the complexity of the latter.
 

Z3M0G

Member
That entire YouTube channel looks to be loaded with interesting information... I may give more of it a look.

What I found to be a scary takeaway from this video is that the Disk Battle thing was put together at the last minute before a stage show that happened not that long ago, if I'm not mistaken... I guess they could have other actual games further along in development, but even from that project they learned some basic things and are clearly still learning what does and doesn't work... But I guess that's more toward the connected-experience and physics-based-play side of things. Early games will likely just be basic shooting experiences with minimal interaction with objects in the world. So a retail experience like Disk Battle is still a long way away, and may not even be possible until future generations of hardware.

Edit: Lol, most videos on that channel have 100-400 views or so... yet this video is over 21k.

This highlight reel is cool, shows a LOT of games and random projects: https://www.youtube.com/watch?v=QxGf8h_7lQc
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
When you look at a virtual girl she will smile back. Characters can make true eye-contact with you (or notice when you're perving). "Don't roll your eyes at me!".
When you look away from a window, a silhouette slowly appears (and hides again when you look back).
Game designers can record heat-maps based on eye-movements during focus testing, which can help them make better interfaces and level cues.

WHOA! This just blew my mind.
 

Blanquito

Member
Some degree of prediction is "always on", I think, so it is factored in whether you are rendering or just re-projecting.

As for how many pixels we are talking about, you can get an idea by looking at the re-projection artifacts in this direct-feed video from The London Heist.

https://www.youtube.com/watch?v=m2CXbjwLv2w

Pay very close attention to the top corners when the view turns around, especially the top-left one. It's *very* hard to notice in motion (they seem to be hiding it with some sort of edge mirroring, so you don't see black areas popping in), but here are a few frames I've captured that clearly show the artifacts.

*images*

Reminds me of the old PS1 games :D

Anyway... it's worth noting this footage is over a year old. Stuff might have changed since then.

Finally, here is a good video showing re-projection in action on the Rift with VorpX

https://youtu.be/12R8Z4mssoY?t=12m43s

Thanks for the pictures, I had never seen/noticed those artifacts before.
 
Quick question: how readable is text while playing VR games?
The newest Oculus SDKs can render 3D text using distance fields, so smoothness is always maximized (preserving letter shapes better at small sizes).

This is the same type of rendering used (for everything, not just text) in Media Molecule's Dreams - a method that doesn't use polygons as traditional renderers do.
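
The distance-field trick, in essence: store each glyph as a distance-to-edge texture and re-threshold it per pixel after scaling, so edges stay sharp where a plain bitmap would blur. A fragment-shader-style sketch in Python (the 0.5 edge value and smoothing width follow the common Valve-style convention, not any specific Oculus API):

```python
import numpy as np

def smoothstep(edge0, edge1, x):
    """Hermite interpolation, as in GLSL's smoothstep()."""
    t = np.clip((x - edge0) / (edge1 - edge0), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def sdf_glyph_alpha(distance_sample, smoothing=0.04):
    """Convert a sampled glyph distance value (0.5 = exact edge) into
    antialiased coverage. The threshold happens after magnification,
    which is why letter shapes survive small text sizes."""
    return smoothstep(0.5 - smoothing, 0.5 + smoothing, distance_sample)
```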
 

Ogni-XR21

Member
Some degree of prediction is "always on", I think, so it is factored in whether you are rendering or just re-projecting.

As for how many pixels we are talking about, you can get an idea by looking at the re-projection artifacts in this direct-feed video from The London Heist.

https://www.youtube.com/watch?v=m2CXbjwLv2w

Pay very close attention to the top corners when the view turns around, especially the top-left one. It's *very* hard to notice in motion (they seem to be hiding it with some sort of edge mirroring, so you don't see black areas popping in), but here are a few frames I've captured that clearly show the artifacts.

*images*

Reminds me of the old PS1 games :D

Anyway... it's worth noting this footage is over a year old. Stuff might have changed since then.

Finally, here is a good video showing re-projection in action on the Rift with VorpX

https://youtu.be/12R8Z4mssoY?t=12m43s

Could this distortion also be due to it being the social screen and have nothing to do with reprojection?
 
Wow, that's a pretty impressive spin. The funny thing is, what it really means is cunningly baked into the paragraph. But look at people eating it up. Works like a charm. PR speak 101.

Doesn't really matter. In the end the comparisons will come, and we'll see what the games look like and what we get.

If reprojection were truly a good solution for all games, I'd figure OR would be all over using it to fill in for 60 frames as well, allowing older PCs to use it. Why wouldn't you do this to spread VR like mad?

My guess is its use will be limited to certain games where artifacting wouldn't happen as much, like other posters are saying, while others will run native 90. Some have said that close-up objects cause artifacting, so I guess the real question is: what does running at 60 frames with reprojection limit games to? Since Sony has recently recommended native 90, it seems that method is not a complete solution.

The OR will constantly aim for 90, so it wouldn't be limited or hampered in any game or application devs want to run. But because it does this, Sony can say stuff like "60% better", since the OR will be aiming for 90 at all times in all apps/games and not use reprojection as a fill-in for, say, games that run at 60fps and use timewarp for the other 60. It will only use it to cover small drops to keep that 90. That doesn't even include the fact that DX12/Vulkan are hitting soon and will assist with VR as well.

Either way, we'll see in time. I really do wish timewarp/reprojection were a silver bullet that could be used 100% of the time, because that would mean weaker PCs could use it and VR would spread faster.
 

bj00rn_

Banned
Minor quibble - TW has been there pretty much since beginning, but it's not Async yet as of 0.8.

No, it's not a minor quibble; I appreciate you correcting me. So Oculus is still working on async then. But Nvidia enabled preemption for them, no?
 

ZehDon

Gold Member
Nice. Although it's a bit cheeky how the guy is like this :| when they're showing the baseline example, and then like this :) when the prediction is on.

Also lol at 'trilemma'.

Although this is looking at cloud computing, I think it'd apply well to a local server with lower and more predictable latency. You'd probably have to render a larger area to allow for changes in direction or inaccurate predictions, which would place an additional burden on the GPU, but the mobility it allows could be worth it.
I tend to agree. I think the first generation of this would probably be more like the next Xbox console or a Windows PC streaming to a headset rather than a true cloud-based solution. It does require a more substantial processing footprint, since you have to render potential outcomes instead of just what's actually happening, but yes - the result is high-quality VR without any cables. Frankly, it's just nice to see that people are working on these problems, because it means Gen 2 VR is going to be another huge leap forward in terms of quality.
 

Joystick

Member
As for how many pixels we are talking about, you can get an idea by looking at the re-projection artifacts in this direct-feed video from The London Heist.

https://www.youtube.com/watch?v=m2CXbjwLv2w

Pay very close attention to the top corners when the view turns around ... here are a few frames I've captured that clearly show the artifacts.

Thanks for the example!

Could this distortion also be due to it being the social screen and have nothing to do with reprojection?

That's what I thought at first, or that it was simply filling the otherwise black area around the circular lens projection, considering that it's a large area of the top-left of the frame in these shots. But then I watched the video: there is no fill when the head is still or moving right - it's actual rendered content - and it only appears when the head moves left. The amount of fill increases as the head movement increases.

Strange that it's only/mostly the top-left side. I would guess this is from the left-eye view, but you'd still expect some on the right, unless the view is somehow cropped or there is less of a gap in the area between the eyes.
 

Joystick

Member
https://twitter.com/id_aa_carmack/status/254009866690121728
If the rotation speed is around 300 degrees/s, the head movement over one 120Hz frame is around 2.5 degrees. (5 times the angular size of the moon... if PSVR has a horizontal FOV of 90 degrees, it should be at around 100 pixels for the whole movement.)

Thanks. I had previously estimated my max head rotation at around 360deg/s (6deg/frame at 60fps) for anything sustained (quickly turning left and right 90 degrees), so I guess a peak of around 1000deg/s is plausible.


So with a 90-degree horizontal field of view, that's up to ~9% (1000/90/120) of the view at 120fps. At 960 pixels wide, that would be around 90 pixels. Double that at 60fps. But it would likely average only around 10% of that.
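
Sanity-checking those numbers (same assumed FOV and per-eye width as above):

```python
rot_speed = 1000.0   # deg/s, near-peak head rotation
fov       = 90.0     # deg, horizontal field of view (assumed)
eye_width = 960.0    # px per eye (1920 / 2)

for fps in (120, 60):
    deg_per_frame = rot_speed / fps
    frac = deg_per_frame / fov
    print(f"{fps}fps: {deg_per_frame:.1f} deg/frame = "
          f"{frac:.1%} of the view = ~{frac * eye_width:.0f} px")
# 120fps: 8.3 deg/frame = 9.3% of the view = ~89 px
# 60fps: 16.7 deg/frame = 18.5% of the view = ~178 px
```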
 

Man

Member
Pay very close attention to the top corners when the view turns around, especially the top-left one. It's *very* hard to notice in motion (they seem to be hiding it with some sort of edge mirroring, so you don't see black areas popping in), but here are a few frames I've captured that clearly show the artifacts.

*images*

Reminds me of the old PS1 games :D
Nice detective work. :)
 

gmoran

Member
It's the spin post which sells stenciling out the invisible part of a viewport as a "classified technique" again :(

Stuff like that just drives me up a wall. This mystification of standard technology once it is associated with one particular product.

Hi Durante, hope what I'm about to say comes across the right way, but I think you and the other PC VR defenders are sort of getting the wrong end of the stick on this sort of stuff.

People are getting excited because Sony have put a lot of (what I would call) engineering into PSVR to get it to the right level. They have had to: the PS4 is undoubtedly a bit weak in comparison to a PC, but their VR solution is a fully fledged one where they have also prioritized VR comfort and presence.

To do this they have used a variety of techniques, including some software techniques pioneered elsewhere.

Sony have had to do this; PC doesn't have to so much.

As a result, all of us PSVR junkies are super excited, because we know Sony have put the work in to make a good product that will be very competitive with seated and standing PC VR (but obviously not room-scale).

And I think you PC VR guys should give us a bit of leeway on this.

However accurate corrections and general VR insights are always welcome, after all we are all just excited and enthusiastic for VR.
 

gmoran

Member
It's the spin post which sells stenciling out the invisible part of a viewport as a "classified technique" again.

P.S. With the YantraVR quote, I've assumed from how he referenced it that this is now in a Sony library; he also implies that he can't talk about it directly, so he might be operating under an NDA.

In other words, it may come across as Sony secret sauce, but really it's just a dev trying to explain something he can't discuss explicitly because of the NDA.
 
I'm pretty much convinced that Sony hires wizards as engineers at this point. Never in my wildest dreams did I expect VR to come to PS4. Then I actually tried it myself. I'm totally blown away at some of the tech that comes from this company.
 

mrklaw

MrArseFace
Hi Durante, hope what I'm about to say comes across the right way, but I think you and the other PC VR defenders are sort of getting the wrong end of the stick on this sort of stuff.

People are getting excited because Sony have put a lot of (what I would call) engineering into PSVR to get it to the right level. They have had to: the PS4 is undoubtedly a bit weak in comparison to a PC, but their VR solution is a fully fledged one where they have also prioritized VR comfort and presence.

To do this they have used a variety of techniques, including some software techniques pioneered elsewhere.

Sony have had to do this; PC doesn't have to so much.

As a result, all of us PSVR junkies are super excited, because we know Sony have put the work in to make a good product that will be very competitive with seated and standing PC VR (but obviously not room-scale).

And I think you PC VR guys should give us a bit of leeway on this.

However accurate corrections and general VR insights are always welcome, after all we are all just excited and enthusiastic for VR.

It's a relief that Sony is doing it properly. They could easily have half-assed it and risked damaging VR uptake generally, as they were always likely to be the most affordable entry into VR (aside from mobile solutions).
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
I'm pretty much convinced that Sony hires wizards as engineers at this point. Never in my wildest dreams did I expect VR to come to PS4. Then I actually tried it myself. I'm totally blown away at some of the tech that comes from this company.

What games did you try out? Do you remember the names?
 

Inuhanyou

Believes Dragon Quest is a franchise managed by Sony
Although I'm not personally interested in VR, I hope it turns out well for Sony and reinvigorates a part of the industry and consumer base that is hungry for a new experience.
 

Abdiel

Member
What games did you try out? Do you remember the names?

I personally tried EVE Valkyrie, RIGS, and The Kitchen demo. I'm not really affected by horror, so the impact of the last one was a bit muted for me, but I was certainly impressed by the level of atmosphere and 'oh shit' they'd managed to build into a roughly 5-10 minute experience.

EVE and RIGS were incredible for me though. Exactly the sort of gaming experience I'm looking to get out of them. Exhilarating and intense.
 

DeepEnigma

Gold Member
I personally tried EVE Valkyrie, RIGS, and The Kitchen demo. I'm not really affected by horror, so the impact of the last one was a bit muted for me, but I was certainly impressed by the level of atmosphere and 'oh shit' they'd managed to build into a roughly 5-10 minute experience.

EVE and RIGS were incredible for me though. Exactly the sort of gaming experience I'm looking to get out of them. Exhilarating and intense.

Nice! Making me even more excited.

My only experience with VR was at a trade show over 20 years ago, when I was younger: the big helmet and controls, in this big ring thing, and blocky, Lawnmower Man-type graphics (less texture detail). And it was still cool as shit back then, and something I was always hyped to have at the consumer level some day.

I am holding off trying it first-hand until I get my own. The impressions have me excited enough, since they sound very similar to how it was back then, only with way more advanced tech, PQ, graphics, etc.
 

Mindlog

Member
Nice! Making me even more excited.

My only experience with VR was at a trade show over 20 years ago, when I was younger: the big helmet and controls, in this big ring thing, and blocky, Lawnmower Man-type graphics (less texture detail). And it was still cool as shit back then, and something I was always hyped to have at the consumer level some day.

I am holding off trying it first-hand until I get my own. The impressions have me excited enough, since they sound very similar to how it was back then, only with way more advanced tech, PQ, graphics, etc.
*[image of an early-90s arcade VR rig]*

It's almost here!

I have about a half dozen games I've been sitting on so I can experience them in a high-quality VR product. We've been waiting years for an EVE space combat game, and we're finally getting one in VR. To go from "never going to happen" to "here it is, in VR" so quickly is absurd and wonderful.
 

DeepEnigma

Gold Member
*[image of an early-90s arcade VR rig]*

It's almost here!

I have about a half dozen games I've been sitting on so I can experience them in a high-quality VR product. We've been waiting years for an EVE space combat game, and we're finally getting one in VR. To go from "never going to happen" to "here it is, in VR" so quickly is absurd and wonderful.

Yes, lol. It was very similar to that one.
 
If the predicted position from the accelerometers in headsets is already used (forward prediction of orientation for a time T that includes the latency of the entire pipeline), then reprojection should not be necessary if we only ever made linear head motions, right?
It only fixes up the incorrect predictions made when there is a non-linear change in head position?

I imagine in the future some clever electrode placement could get an advance look at the nerve signals to the neck muscles and remove the need for reprojection, because the lag of a VR system is probably already better than the lag humans have when commanding muscle movement.

I guess the brain does loads of internal reprojection as well; otherwise we would get nauseous from the lag involved in commanding laggy eyeballs and head muscles.
 

Joystick

Member
*[image of an early-90s arcade VR rig]*

It's almost here!

Hard to read the name in that image, but I think it is Virtuality. It ran on Amiga computers (at least initially), which were bad at 3D graphics because the bitplane video memory layout was slow to update all the bits of a single pixel.

I played the Dactyl game in all its 5-10fps (and about that many polys) glory, so I'm stoked for all of the major VR platforms this time around.
 

Occam

Member
That looks terrible. Completely immersion breaking.

People who have used VR, is this as bad as it looks in the pic? Does it ruin the experience?

DK2 was slightly better when I tested it, but it's definitely quite noticeable and annoying to me.
*[DK1 vs. DK2 comparison image]*
 

Skyrise

Member
As for how many pixels we are talking about, you can get n idea by looking at the re-projection artifacts in this direct feed video from The London Heist.

https://www.youtube.com/watch?v=m2CXbjwLv2w

Pay very close attention to the top corners when view turns around. Especially the top/left one. It's *very* hard to notice in motion (they seem to be hiding it with some sort of edge mirroring so you don't see black areas popping in) but here are a few frames I've captured that clearly show the artifacts.

We're developing on PSVR, and the effect you see here is something that appears only when the game drops frames and you move your head, not an artifact that's always on just because you're using reprojection.

In this case I think it's old code plus the capture equipment used; I've never noticed it in person while trying out The London Heist. :)
 