
30min PSVR technical presentation (Feb.2016)

mrklaw

MrArseFace
DK2 was slightly better when I tested it, but it's definitely quite noticeable and annoying to me.
[image: DK1 vs. DK2 screen-door comparison (Indiana Jones)]

That's a big improvement. I survived for years with a 480p projector, watching DVDs at 100" across. If I can manage that, I think I can handle VR :)
 

TTP

Have a fun! Enjoy!
We're developing on PSVR, and the effect you have here is something you see only when the game drops frames and you move your head, not an artifact that's always on just because you're using reprojection.

In this case I think it's old code + the capturing equipment used, I've never noticed it in person while trying out The London Heist. :)

Thanks for the clarification. It goes without saying that I was not implying the artifacts are that visible through the headset. They occur in the peripheral vision anyway.
 

It's almost here!

I have about a half dozen games I've been sitting on so I can experience them in a high-quality VR product. We've been waiting years for an EVE space combat game and we're finally getting one in VR. To go from "never going to happen" to "here it is, in VR" so quickly is absurd and wonderful.

While I never had an opportunity to try it, this is the virtual reality experience I remember being featured the most in the early '90s. I remember being desperate to play it, as I was obsessed with virtual reality at the time.
 
Ugh, ffs The Verge....

The "Ultimate" VR Buyers Guide

The one headset that we don’t know anything about right now is PlayStation VR. Early price estimates for both the Rift and Vive were way off — people underestimated the former and overestimated the latter — so we’ll refrain from making any guesses on PSVR. But we know it’s going to be "several hundred dollars," and it’s not just a headset; there’s also a box that upgrades the PlayStation 4’s graphics capabilities.

I know you shouldn't take technology advice from a populist site like The Verge, but this persistent misinformation about that dumb little box really annoys the hell out of me.
 

ZehDon

Gold Member
What amuses me is that Dr. Marks has taken every opportunity to explain the breakout box and correct this misinformation, and people are still misreporting what it does.
 
Here's the screendoor effect on Oculus DK2:

Fuck, I only tried the DK1, and the screen door effect was terrible, but I thought they would solve it. Will it be like this in the consumer version? :/


EDIT: that Harrison Ford image is crazy when zooming in. Blows my mind how we can make that trickery from just 3 colors of light.
 

Man

Member
It's not like that in person, and it might be from the DK1, as someone pointed out (or zoomed in). I never really noticed a screendoor on DK2 unless I was looking for it.
 
I know you shouldn't take technology advice from a populist site like The Verge, but this persistent misinformation about that dumb little box really annoys the hell out of me.

Isn't it kinda right, though? By processing 3D audio in the box instead of in the PS4 itself, there should be a small performance gain. Their description does sound too promising, though.
 

III-V

Member
Isn't it kinda right, though? By processing 3D audio in the box instead of in the PS4 itself, there should be a small performance gain. Their description does sound too promising, though.

Yes, this makes sense. If the PS4 were capable of this alone, a box would not be required. Therefore, the box does enhance the standard PS4's capability.
 

pottuvoi

Banned
It's not like that in person, and it might be from the DK1, as someone pointed out (or zoomed in). I never really noticed a screendoor on DK2 unless I was looking for it.
The PenTile arrangement was quite visible in Elite Dangerous when I tested it. (The dark of space and bright, thin objects were not kind to it.)
Haven't seen DK1 so I cannot really comment on that.
 
Can you expand on why that is needed for VR? Isn't that the point of surround sound in games, rendering it live based on the positional data (usually of the camera/point of view)? I don't get the difference. Why would you need to process the surround audio synced to the camera (i.e. VR helmet) movements afterwards?
As Dahaka was saying, it's more about accuracy than anything else. Sure, having seven speakers is great, but Sony's solution actually simulates a 60-speaker rig, and they built a real-life 60-speaker sound stage to test it against, to ensure the super-complicated math was really doing what they wanted.

So yeah, it's not fundamentally different from surround sound. It's just got much higher resolution, so it's sorta like saying a 4K display isn't really any different from a 480p display.
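
For anyone curious what "simulating a 60-speaker rig" might look like in practice, here's a minimal sketch of the general idea behind virtual speaker arrays. This is my own toy illustration, not Sony's actual algorithm, and the constant-power pan between neighbouring speakers is just one common choice:

# Minimal sketch of object-based audio via a ring of virtual speakers.
# More virtual speakers = finer spatial resolution, much like more
# pixels on a display. Azimuth is in radians, in [0, 2*pi).
import math

def speaker_gains(source_azimuth, num_speakers=60):
    """Amplitude-pan a mono source across a ring of virtual speakers."""
    spacing = 2 * math.pi / num_speakers
    gains = [0.0] * num_speakers
    # Find the two virtual speakers straddling the source direction.
    i = int(source_azimuth // spacing) % num_speakers
    j = (i + 1) % num_speakers
    frac = (source_azimuth % spacing) / spacing
    # Constant-power pan between the neighbouring pair.
    gains[i] = math.cos(frac * math.pi / 2)
    gains[j] = math.sin(frac * math.pi / 2)
    return gains

# Each virtual speaker's signal would then be filtered through an HRTF
# for its direction and summed into the final headphone mix.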


So if I follow this information correctly, the breakout box basically handles all of the audio, freeing up the PS4 itself to focus on GPU tasks?
They were probably doing this stuff on the GPU. When Cerny was doing the rounds pre-launch, "ray-casting for 3D audio" was one of the examples he generally gave when people asked what GPGPU was good for.

Regarding how much GPU time we're talking about, they haven't really said, and it would vary by game and situation anyway. All we really know is that it was cheap enough that doing it on the GPU was always the plan, but expensive enough that they decided it was worthwhile to spend a couple extra bucks to offload it to the breakout box. Of course, if you can spend $3 to make it free for devs, that's a pretty easy decision with any non-trivial usage on the GPU.

Obviously, a CPU can do this kind of work too, but it's more efficient to do it on a GPU or a dedicated chip.
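
To make "ray-casting for 3D audio" a bit more concrete: the usual idea is to trace rays between each sound source and the listener to detect occlusion (and, in fancier versions, reflections). A hypothetical, heavily simplified CPU version with a single sphere obstacle might look like this; the 0.3 "muffle" gain is an arbitrary placeholder:

def ray_hits_sphere(origin, direction, center, radius, max_t):
    # Standard ray/sphere test via the quadratic discriminant;
    # `direction` is assumed normalized, so the quadratic's a = 1.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return False                 # ray misses the sphere entirely
    t = (-b - disc ** 0.5) / 2       # nearest intersection distance
    return 0.0 <= t <= max_t         # hit must lie between source and listener

def occlusion_gain(source, listener, obstacle_center, obstacle_radius):
    # Attenuate a source whose direct path to the listener is blocked.
    d = [l - s for l, s in zip(listener, source)]
    dist = sum(x * x for x in d) ** 0.5
    direction = [x / dist for x in d]
    blocked = ray_hits_sphere(source, direction, obstacle_center,
                              obstacle_radius, dist)
    return 0.3 if blocked else 1.0   # placeholder "muffled" gain

# e.g. a pillar of radius 1 at (5,0,0) between source and listener:
# occlusion_gain((0,0,0), (10,0,0), (5,0,0), 1.0) -> 0.3

A real engine traces thousands of such rays per frame, which is presumably why the workload maps so naturally onto GPGPU, and onto the breakout box's dedicated hardware.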


I think what else helps Sony is that their HMD refresh rate is 120Hz, so they can reproject 60fps games to 120. They can't do that trick with 90fps games. I've seen a few dev threads talking about this, and the consensus is that you can't reproject 60fps to 90, and that 45fps is just too low.
You can't do 60->90 because while you can reuse frames, you need to use all of the frames an equal number of times. If you tried to dupe every alternate frame of a 60 fps feed, you'd have weird pacing problems.
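
The arithmetic makes the pacing problem obvious; here it is in a few lines of Python:

# Each rendered frame must occupy a whole number of display refreshes,
# and the SAME number for every frame, or pacing gets uneven.
for display_hz in (120, 90):
    repeats = display_hz / 60             # refreshes per rendered frame
    print(display_hz, repeats, repeats.is_integer())
# 120 -> 2.0, True:  every frame shown exactly twice; smooth
# 90  -> 1.5, False: frames alternate 2,1,2,1 refreshes; judder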

So I'm guessing Sony designed some aspects of their HMD around the limitations of the PS4 (the RGB 120Hz display), whereas OR and Vive don't have to because of the raw power of the PC platform, and hence save some money.
Sony used a 120 Hz panel because it was the best available to them. The others are using 90 Hz for the same reason; it was the best they could reasonably do. If they had 120 Hz displays, they'd likely be offering the same options and recommendations that Sony currently do: 60 duped to 120, 90 native, or 120 native. As always, higher is better, but it becomes especially important when you have objects translating near your face.


The Vive/Oculus CV2 SDKs will assuredly offer reprojection as an option once they have screens that can run at 120Hz+. You need a base framerate of 60 to pull it off (as stated by Sony); lower wasn't good enough.
Sony's jargon is sorta confusing, to be honest. "Reprojection" tends to make people think "reuse," but the reuse is optional, and the "reprojection" part is really more of a last-minute realignment.
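
As a rough illustration of that "last-minute realignment" (my own simplification, not Sony's implementation): real systems do a 3D warp on the GPU, but for a small yaw change you can think of it as sliding the finished image sideways by a few pixels just before scanout. The 100° FOV and 1920px width are assumed example numbers:

# Crude 2D approximation of reprojection-as-realignment: nudge the
# already-rendered frame to match the newest head orientation.
def reproject_shift_px(yaw_at_render, yaw_at_scanout, fov_deg=100, width_px=1920):
    delta = yaw_at_scanout - yaw_at_render   # degrees turned since render started
    px_per_degree = width_px / fov_deg       # assumes a simple linear mapping
    return delta * px_per_degree             # blit the frame offset by this many px

# e.g. the head turned 0.5 deg while the frame was being drawn:
print(reproject_shift_px(0.0, 0.5))          # -> 9.6 px horizontal nudge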

There is a breakthrough incoming in 2017/2018, which is eye-tracking (capable of tracking saccadic eye movement). The importance of this for VR cannot be overstated. It will massively enhance visuals thanks to foveated rendering, and it will open up a million doors for game design.
Time has come today.

TIME!
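
To put rough numbers on why foveated rendering is such a big deal, here's a toy back-of-envelope with made-up fractions, not figures from any shipping SDK: shade a small high-res inset where the eye is looking, shade everything else at low resolution, and compare pixel counts against naive full-res rendering.

# Toy foveated rendering budget: sharp inset + cheap periphery.
def foveated_pixel_budget(width, height, fovea_frac=0.2, periphery_scale=0.25):
    full = width * height                                        # naive full-res cost
    fovea = int(width * fovea_frac) * int(height * fovea_frac)   # sharp inset
    periphery = int(width * periphery_scale) * int(height * periphery_scale)
    return fovea + periphery, full

shaded, naive = foveated_pixel_budget(1920, 1080)
print(shaded, naive, naive / shaded)
# ~213k vs ~2.07M shaded pixels; roughly a 10x saving before upscaling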


The far bigger problem is disocclusion due to parallax movement.
Doing nothing certainly doesn't solve that.

Reprojection is certainly not a panacea.
So what should we be doing instead?


What I haven't seen or heard mentioned is whether the engine uses that sensor data as-is, or instead estimates the final orientation/position for the future point in time when the image will reach the player's eyes (based on remaining frame time and latency to the display), renders for that later estimate, then does the same when estimating the final orientation for reprojection.

For example, if head rotation is constant it is easy to calculate the orientation 16ms ahead: just add the same rotation that occurred during the last frame and render for that. Even acceleration could be taken into account. After all, the polling rate of the sensors and camera is very high (~1000Hz), and even fast head movement is comparatively quite slow, so you could calculate speed and acceleration quite well.

I'm assuming that this is what is actually being done, right? In which case reprojection is only filling a small gap, especially if natively rendering at 120fps. I'd like to know how many degrees, and typically how many pixels, we're talking about.
Marks said they use "a little bit of prediction" (emphasis his), so maybe? They may use prediction on the front end as well. You may be facing 0º currently, but based on how your head is turning to the right, they may be able to predict you'll be facing 5º by the time the photons are leaving the display. So instead of doing the rendering with a 0º facing, you do it with a 5º facing, and once that picture is drawn, they slide it to its final position of 4.55º, which is where your head actually is now, or rather where it's likely to be by the time the signals travel the length of the wires, as you say.

That would help minimize stuff you can't really compensate for, like parallax and disocclusion. Most of your latency comes from the rendering phase, so you want to do as much prediction ahead of time as you can, assuming you're doing a good job of it. Obviously the best solution is to increase your frame rate, but even if you're already at 120 fps, this will make things smoother still; that's why Sony are using it at 60, 90, and 120 fps.
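
Putting the two stages together, the pipeline being described might look roughly like this. This is an assumed structure for illustration; the constant-acceleration model and all the numbers are mine, not Sony's:

def predict_yaw(yaw, vel, acc, dt):
    # Dead-reckon head yaw dt seconds ahead from IMU-derived rate data.
    return yaw + vel * dt + 0.5 * acc * dt * dt

# Frame start: predict the pose at scanout (~16 ms away) and render for it.
yaw_render = predict_yaw(yaw=0.0, vel=300.0, acc=0.0, dt=0.016)    # ~4.8 deg

# Just before scanout: re-sample the IMU for a fresher, shorter prediction
# (~2 ms left) and warp the finished frame by the small residual error.
yaw_final = predict_yaw(yaw=4.2, vel=280.0, acc=-500.0, dt=0.002)  # ~4.76 deg
residual = yaw_final - yaw_render   # tiny correction handled by reprojection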


If predicted position from accelerometers in headsets is already used (forward prediction of orientation for a time T that includes latency for the entire pipeline), then reprojection should not be necessary if we only ever made linear head motions, right?
It only fixes up the incorrect headset predictions made when there is a non-linear change in head position?

I imagine in future some clever electrode placement could get an advance look at nerve signals to neck muscles and remove the need for reprojection because the lag of a VR system is probably already better than the lag humans have when commanding muscle movement.
Well, the movements are inherently non-linear, and we already know that part, but they're still hard to predict perfectly. Sure, measuring muscle impulses could help improve prediction even further, but it probably still won't be "perfect." That may not even be required to hit "good enough," to be honest. If you want "perfect," go for direct neural stimulation.

I guess the brain does loads of internal reprojection as well, otherwise we would get nauseous from the lag involved with commanding laggy eyeballs and head muscles.
Most of that "drastic" movement is done by the eyes during saccades. The nausea is eliminated by saccadic masking.
 

DeepEnigma

Gold Member
^ Thanks for the info, serversurfer. Learned a few more new things today. Can't wait for the announcement of price/release date.
 
Here's a random idea I got from reading this tweet, and this is probably as good a place to stick it as any.

So, the Vive has an outward-facing camera that shows the view "from inside" the headset. VentureBeat has an article with a little video showing the view from that camera as a virtual window that springs out of the side of the wand. At one point, the guy held it up to his face and basically went, "Oh, sweet. You can do this, and it's like you're really looking out yourself."
><

So anyway, I thought that was dumb, and didn't really understand why "hanging from your face" wasn't the default location for that view to appear. I conceded to myself it might be nice to have it be detachable, but I couldn't really imagine any situation where you'd want a picture frame sitting on your desk showing a live feed of wherever your head was pointed, unless you're on acid, and even then, I suspect you'd find the resolution somewhat disappointing. Having a live feed on your desk like that doesn't really provide much utility unless it allows you to passively observe a given location, kind of like a feed from a security camera.

So then it occurred to me that the PS4 Camera is actually positioned to provide just such a view. Presumably, you've got it positioned such that it has a good view of your playing area, so you can simply pass its view of the room along to your virtual display, which can be a frame propped on your desk as described above, or part of a body-relative HUD that moves around with you. This would allow you to keep an eye on yourself as you shuffle about in your "standing experience," and it would also allow you to monitor the space itself, to see if any children/pets/burglars enter the room while you're preoccupied. Hell, it's a stereo camera, so they could even show a little virtual hologram of the room, though that'd require some additional computing resources to be diverted from the game, I suspect.

Oh, and speaking of acid, somebody should make a multiplayer Mandelbrot explorer for us to fly around in, please. kthxbai
 

bj00rn_

Banned
So anyway, I thought that was dumb, and didn't really understand why "hanging from your face" wasn't the default location for that view to appear

Eh, because that's just how that particular application (or debug view) is handling it? It's an optional experimental detached first person view where the main full FOV experience is not completely abandoned? Hardly dumb.

Here's a different version of how the chaperone system of the Vive could augment details and room limits into the VR experience in normal full FOV first person view.


Edit: Wait, what are you even talking about:

I couldn't really imagine any situation where you'd want a picture frame sitting on your desk showing a live feed of wherever your head was pointed

"a picture frame sitting on your desk"..? What do you mean?

The camera isn't about an external feed; it's about safety, and about not having to take off the HMD to do other momentary stuff. Hence there are also apps for bringing your phone into the Vive. I guess you could also do rudimentary AR stuff with it.
 
Edit: Wait, what are you even talking about:

"a picture frame sitting on your desk"..? What do you mean?

The camera isn't about an external feed; it's about safety, and about not having to take off the HMD to do other momentary stuff. Hence there are also apps for bringing your phone into the Vive. I guess you could also do rudimentary AR stuff with it.
Yes, that was my point, actually. Seems like you'd want it to be a momentary-on switch that effectively raises the blast shield on your helmet so you can see what's going on around you in meatspace, right? Shouldn't that be positioned directly in front of your eyes, as though you were actually using them instead of the camera?

Instead, the view from his virtual eyeballs was being projected into his virtual hand, and he was waving it around, seemingly without any real understanding of why he was doing so, until he finally held it up to his face and went, "Oh hey, this would be sorta useful; now I can see what I'm doing…"

So, much like the dude in the video, I wasn't really sure what the point of waving your own first-person view around in your hand was meant to be. I thought it might be nice if he could set it down somewhere — either on his virtual desk or just hanging in space if he wanted — but that didn't seem super useful either, unless maybe you were making a virtual window to check on your actual infant, or something like that.

Anyway, I get the safety thing. It just seemed like a weird place to locate a first-person viewport, and it got me thinking about how the Camera could actually provide a security-cam-type view, which would allow the user to not just grab stuff, but also survey their surroundings a bit while they played.
 
Yes, this makes sense. If the PS4 were capable of this alone, a box would not be required. Therefore, the box does enhance the standard PS4's capability.

A box was always going to be required because there's only one HDMI out on the PS4. You need something to split the feed to the TV for normal games or to the headset for VR; otherwise you'd be constantly unplugging cables.
 

TTP

Have a fun! Enjoy!
A box was always going to be required because there's only one HDMI out on the PS4. You need something to split the feed to the TV for normal games or to the headset for VR; otherwise you'd be constantly unplugging cables.

On a related note, I assume the box will be "always on" even when playing normal games on the TV (otherwise you'd need to unplug cables anyway). I guess it will be a low-consumption, standby kind of "on," acting as a pure splitter.
 

bj00rn_

Banned
Shouldn't that be positioned directly in front of your eyes, as though you were actually using them instead of the camera?

Yes, and my point was that's how the chaperone system has been presented to us already, with visible room limits inside apps. And now with a camera there's also the possibility of showing additional camera pass-through with more detail, like in the animated gif. And it's not unlikely you will be able to choose the fullscreen detail/color camera feed as well.

What you saw in the video you're talking about is yet another way of doing it, and that was via a debug view.
 
Oh, that was a debug mode? Right on. I assumed it was meant to be a cool tech demo.

In any case, that wasn't really the point anyway. It was just backstory for what prompted my idea.

On a related note, I assume the box will be "always on" even when playing normal games on the TV (otherwise you'd need to unplug cables anyway). I guess it will be a low-consumption, standby kind of "on," acting as a pure splitter.
Yeah, and it'll probably switch to a low-latency pass-through mode when the headset is inactive.
 
Ugh, ffs The Verge....

The "Ultimate" VR Buyers Guide



I know you shouldn't take technology advice from a populist site like The Verge, but this persistent misinformation about that dumb little box really annoys the hell out of me.
Honestly, until Sony comes out with videos and material directed at consumers explaining that the box is a splitter that undoes the barrel distortion and handles 3D audio, just expect everyone to be varying degrees of wrong. Even Giant Bomb claimed on their podcast that it's extra horsepower for the PS4 to render VR.
 