DK2 was slightly better when I tested it, but it's definitely quite noticeable and annoying to me.
That's a big improvement. I survived for years with a 480p projector, watching DVDs at 100" across. If I can manage that, I think I can handle VR.
We're developing on PSVR, and the effect you have here is something you see only when the game drops frames and you move your head, not an artifact that's always on just because you're using reprojection.
In this case I think it's old code plus the capturing equipment used; I've never noticed it in person while trying out The London Heist.
It's almost here!
I have about a half dozen games I've been sitting on so I can experience them in a high-quality VR product. We've been waiting years for an EVE space combat game and we're finally getting one in VR. To go from "never going to happen" to "here it is, in VR" so quickly is absurd and wonderful.
The one headset that we don't know anything about right now is PlayStation VR. Early price estimates for both the Rift and Vive were way off (people underestimated the former and overestimated the latter), so we'll refrain from making any guesses on PSVR. But we know it's going to be "several hundred dollars," and it's not just a headset; there's also a box that upgrades the PlayStation 4's graphics capabilities.
Here's the screendoor effect on Oculus DK2:
I know you shouldn't take technology advice from a populist site like The Verge, but this persistent misinformation about that dumb little box really annoys the hell out of me.
Isn't it kinda right, though? By processing 3D audio in the box, not in the PS4 itself, there should be a small performance increase. Their description does sound overly promising, though.
> It's not like that in person and it might be from the DK1 as someone pointed out (or zoomed in). I never really noticed a screendoor on DK2 unless I was looking for it.

The pentile arrangement was quite visible in Elite Dangerous when I tested it. (The dark of space and bright thin objects were not kind to it.)
> Isn't it kinda right, though?

No.
> Can you expand on why that is needed for VR? Isn't that the point of surround sound in games, rendering it live based on the positional data (usually of the camera/point of view)? I don't get the difference. Why would you need to process the surround audio synced to the camera (i.e. VR helmet) movements afterwards?

As Dahaka was saying, it's more about accuracy than anything else. Sure, having seven speakers is great, but Sony's solution actually simulates a 60-speaker rig, and they built a real-life 60-speaker sound stage to test it against, to ensure the super-complicated math was really doing what they wanted.
> So if I follow this information correctly, the breakout box basically handles all of the audio, freeing up the PS4 itself to focus solely on GPU tasks?

They were probably doing this stuff on the GPU. When Cerny was doing the rounds pre-launch, "ray-casting for 3D audio" was one of the examples he generally gave when people asked what GPGPU was good for.
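To make the per-frame work concrete, here is a toy sketch of positional audio. This is emphatically not Sony's actual algorithm (which ray-casts against scene geometry and simulates a 60-speaker rig); it just shows the kind of math — distance attenuation plus an interaural time difference, using Woodworth's spherical-head approximation — that has to be recomputed for every source each time the head moves:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.09      # m, rough average adult head

def positional_audio(source, listener, facing_deg):
    """Per-frame positional-audio update for one source in 2D:
    inverse-distance gain plus the interaural time difference (ITD)
    via Woodworth's spherical-head approximation, (a/c)(sin θ + θ)."""
    dx, dy = source[0] - listener[0], source[1] - listener[1]
    dist = math.hypot(dx, dy)
    gain = 1.0 / max(dist, 1.0)  # inverse-distance falloff, clamped
    # Source azimuth relative to where the head is facing
    angle = math.atan2(dy, dx) - math.radians(facing_deg)
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(angle) + angle)
    return gain, itd
```

The point of the sketch: `facing_deg` changes every single frame in VR, so none of this can be baked in advance — it is genuinely per-frame work that can live on the GPU, or in the breakout box.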
> I think what else helps Sony is that their HMD refresh rate is 120 Hz, so they can reproject 60fps games to 120. They can't do that trick with 90fps games, and I've seen a few dev threads talking about this; the consensus is that you can't reproject 60fps to 90, and that 45fps is just too low.

You can't do 60->90 because while you can reuse frames, you need to use all of the frames an equal number of times. If you tried to dupe every alternate frame of a 60 fps feed, you'd have weird pacing problems.
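The pacing problem is easy to see with a little arithmetic. A quick sketch, assuming each display refresh simply repeats the most recently completed source frame:

```python
def display_pattern(source_fps, display_hz, n_frames=6):
    """How many consecutive display refreshes each source frame
    occupies when every refresh shows the newest finished frame."""
    counts = []
    refresh = 0.0
    for i in range(n_frames):
        deadline = (i + 1) / source_fps  # when the next frame is ready
        shown = 0
        while refresh < deadline - 1e-9:
            refresh += 1.0 / display_hz
            shown += 1
        counts.append(shown)
    return counts
```

60 fps on a 120 Hz panel gives a steady `[2, 2, 2, 2, 2, 2]` — every frame is shown exactly twice. 60 fps on a 90 Hz panel gives `[2, 1, 2, 1, 2, 1]`: alternating display times, i.e. exactly the judder being described.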
> So I'm guessing Sony designed some aspects of their HMD around the limitations of the PS4 (the RGB 120 Hz display), whereas OR and Vive don't have to because of the raw power of the PC platform, and hence save some money.

Sony used a 120 Hz panel because it was the best available to them. The others are using 90 Hz for the same reason; it was the best they could reasonably do. If they had 120 Hz displays, they'd likely be offering the same options and recommendations that Sony currently does: 60 duped to 120, 90 native, or 120 native. As always, higher is better, but it becomes especially important when you have objects translating near your face.
> The Vive/Oculus CV2 SDKs will assuredly offer reprojection as an option, having screens that can perform at 120 Hz+. You need a base framerate of 60 to pull it off (as stated by Sony); lower wasn't good enough.

Sony's jargon is sorta confusing, to be honest. "Reprojection" tends to make people think "reuse," but the reuse is actually optional, and the "reprojection" part is actually more of a last-minute realignment.
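To first order, that last-minute realignment is just a horizontal shift of the finished image. A rough sketch — the 100° FOV and 1080-pixel eye width are made-up illustrative numbers, and real reprojection warps per-pixel rather than sliding the whole frame:

```python
import math

def reprojection_shift_px(rendered_yaw_deg, measured_yaw_deg,
                          h_fov_deg=100.0, eye_width_px=1080):
    """Horizontal pixel shift that realigns a frame rendered for one
    head yaw with the yaw measured just before scan-out. Rotation
    only: translation causes parallax/disocclusion that no 2D shift
    can repair."""
    # Pinhole-style focal length in pixels for the given FOV
    focal_px = (eye_width_px / 2) / math.tan(math.radians(h_fov_deg) / 2)
    delta = math.radians(measured_yaw_deg - rendered_yaw_deg)
    return focal_px * math.tan(delta)
```

For a render/scan-out mismatch of half a degree or so, the correction under these assumptions is only a few pixels — which is why it can be done so late in the frame.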
> There is a breakthrough incoming in 2017/2018, which is eye tracking (capable of tracking saccadic eye movement). The importance of this for VR cannot be overstated. It will massively enhance visuals thanks to foveated rendering, and it will open up a million doors for game design.

Time has come today.
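The payoff claimed for foveated rendering is easy to ballpark. A back-of-the-envelope sketch — the 110° FOV, 10° fovea, and quarter-resolution periphery are assumed illustrative numbers, not any vendor's spec:

```python
def foveated_shading_fraction(fov_deg=110.0, fovea_deg=10.0,
                              periphery_scale=0.25):
    """Fraction of full-resolution shading work remaining when the
    fovea keeps full resolution and the periphery is shaded at a
    reduced scale in each dimension."""
    fovea_area = (fovea_deg / fov_deg) ** 2  # share of the frame
    periphery_area = 1.0 - fovea_area
    return fovea_area + periphery_area * periphery_scale ** 2
```

Under those assumptions only about 7% of the original shading work remains, which is why eye tracking is considered such a big lever for VR performance.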
> The far bigger problem is disocclusion due to parallax movement.

Doing nothing certainly doesn't solve that.
> Reprojection is certainly not a panacea.

So what should we be doing instead?
> What I haven't seen or heard mentioned is whether the engine uses that sensor data as-is or instead estimates the final orientation/position for the future point in time that the image will reach the player's eyes (based on remaining frame time and latency to send to the display), renders for that later estimate, then does the same when estimating the final orientation for reprojection.
>
> For example, if head rotation is constant it is easy to calculate the orientation 16 ms from now: just add the same rotation that occurred during the last frame and render for that. Even acceleration could be taken into account. After all, the polling rate of the sensors and camera is very high (~1000 Hz), and even fast head movement is comparatively quite slow, so you could estimate speed and acceleration quite well.
>
> I'm assuming that this is what is actually being done, right? In which case reprojection is only filling a small gap, especially if natively rendering at 120fps. I'd like to know how many degrees and typical pixels we're talking about.

Marks said they use "a little bit of prediction" (emphasis his), so maybe? They may use prediction on the front end as well. You may be facing 0° currently, but based on how your head is currently turning to the right, they may be able to predict you'll be facing 5° by the time the photons are leaving the display. So instead of doing the rendering with a 0° facing, you do it with a 5° facing, and once that picture is drawn, they then slide it to its final position of 4.55°, which is where your head actually is now, or rather where it's likely to be by the time the signals travel the length of the wires, as you say.
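That constant-velocity/constant-acceleration extrapolation can be sketched in a few lines. This is a simplified single-axis finite-difference version; real trackers fuse gyro and accelerometer data through filters, and the sample timing here is made up:

```python
def predict_yaw(samples, lookahead_s):
    """Extrapolate head yaw `lookahead_s` seconds ahead from the last
    three (time_s, yaw_deg) sensor samples, assuming roughly constant
    angular acceleration. Reprojection then only has to correct
    whatever this prediction gets wrong."""
    (t0, y0), (t1, y1), (t2, y2) = samples[-3:]
    v1 = (y1 - y0) / (t1 - t0)       # older velocity estimate, deg/s
    v2 = (y2 - y1) / (t2 - t1)       # newer velocity estimate, deg/s
    a = (v2 - v1) / ((t2 - t0) / 2)  # finite-difference accel, deg/s^2
    return y2 + v2 * lookahead_s + 0.5 * a * lookahead_s ** 2
```

At a steady 100°/s turn, for example, looking 16 ms ahead simply adds 1.6° to the current yaw; the interesting (and error-prone) cases are the non-linear ones discussed below.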
> If predicted position from accelerometers in headsets is already used (forward prediction of orientation for time T that includes latency for the entire pipeline), then reprojection should not be necessary if we only ever made linear head motions, right? It only fixes up the incorrect headset predictions made when there is a non-linear change in head position?

Well, the movements are inherently non-linear, and we already know that part, but they're still hard to predict perfectly. Sure, measuring muscle impulses could help improve prediction even further, but it probably still won't be "perfect." That may not even be required to hit "good enough," to be honest. If you want "perfect," go for direct neural stimulation.
I imagine in the future some clever electrode placement could get an advance look at nerve signals to neck muscles and remove the need for reprojection, because the lag of a VR system is probably already better than the lag humans have when commanding muscle movement.
> I guess the brain does loads of internal reprojection as well, otherwise we would get nauseous from the lag involved with commanding laggy eyeballs and head muscles.

Most of that "drastic" movement is done by the eyes during saccades. The nausea is eliminated by saccadic masking.
So anyway, I thought that was dumb, and didn't really understand why "hanging from your face" wasn't the default location for that view to appear.
I couldn't really imagine any situation where you'd want a picture frame sitting on your desk showing a live feed of wherever your head was pointed.
Yes, that was my point, actually. Seems like you'd want it to be a momentary-on switch that effectively raises the blast shield on your helmet so you can see what's going on around you in meatspace, right? Shouldn't that be positioned directly in front of your eyes, as though you were actually using them instead of the camera?

Edit: Wait, what are you even talking about:
"a picture frame sitting on your desk"? What do you mean?
The camera isn't about an external feed, it's about safety, and about not having to take off the HMD to do other momentary stuff. Hence there are also apps for augmenting your phone into the Vive. I guess you could also do rudimentary AR stuff with it.
Yes, this makes sense. If the PS4 was capable of this alone, a box would not be required. Therefore, the box function does enhance the standard PS4 capability.
- Console effectively ~60% more powerful than same-spec PC (as reported by middleware providers, not Sony).
A box was always going to be required because there's only one HDMI out on the PS4. You need something to split the feed to TV for normal games or headset for VR, otherwise you'd be constantly unplugging cables.
> Shouldn't that be positioned directly in front of your eyes, as though you were actually using them instead of the camera?
> On a related note, I assume the box will be "always on" even when playing normal games on TV (otherwise you'd need to unplug cables anyway). I guess it will be a low-consumption/standby kind of "on," acting as a pure splitter.

Yeah, and it'll probably switch to a low-latency pass-through mode when the headset is inactive.
> Ugh, ffs The Verge....

Honestly, until Sony comes out with videos and material directed at consumers about the box being a splitter that undoes the barrel distortion and handles 3D audio, just expect everyone to be varying degrees of wrong. Even Giantbomb talked about it on their podcast as being extra horsepower for the PS4 to render VR.
The "Ultimate" VR Buyers Guide