
Positional tracking: what it is, how it's done, and why Valve's tracker is a revolution

Krejlooc

Banned
There is a lot of talk about positional tracking going on at the moment, and for good reason, but I'm seeing a bunch of people struggling with the concept and with why what's happening is such a big deal to those of us who work with this technology. So I figured a very rough, basic, layman's explanation of the problems that have faced positional tracking might enlighten some as to why it's such a big deal right now, and maybe even answer why they should give a damn when they grew tired of "waggle."

The Wiimote

Any modern discussion of positional tracking needs to begin with the Wii remote, because it really did spark something big. The Wii remote operated using an Inertial Measurement Unit (IMU) inside the controller, colloquially known as an accelerometer. This type of sensor is essentially blind - it can detect motion along 3 axes (x, y, and z) but has no solid point of origin to determine where in space it is moving, only which direction. To make matters worse, the IMU inside the Wiimote is pretty inaccurate, which causes problems. In essence, the Wiimote's IMUs can know, for example, that it moved "right 3 times, left once, up twice, then down 5 times" and will translate that motion on screen.

The problem with this method of positional tracking is revealed in how one actually arrives at position from these types of IMU. Using a bit of calculus (think back to those classes) - you can arrive at position from acceleration by first integrating acceleration over time, which yields velocity. Then, by integrating velocity over time again, you yield position. This double integration leaves a lot of room for error, because any small error in the acceleration reading gets compounded twice. To make matters worse, force vectors (think back to physics, now) are additive, so every acceleration the sensor detects also includes the constant vector of gravity. Now, math is awesome and we can cancel gravity out of our calculations to arrive at pure acceleration, but the problem is figuring out which direction gravity points.
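To make that double integration concrete, here's a minimal Python sketch. The sample rate, bias value, and gravity handling are simplified assumptions for illustration - this is not any particular controller's firmware:

Code:
GRAVITY = 9.81  # m/s^2, assumed here to point straight down along z

def integrate_position(accel_samples, dt):
    """accel_samples: list of (ax, ay, az) accelerometer readings in m/s^2."""
    vx = vy = vz = 0.0   # velocity
    px = py = pz = 0.0   # position
    for ax, ay, az in accel_samples:
        az -= GRAVITY                                # cancel the constant pull of gravity
        vx += ax * dt; vy += ay * dt; vz += az * dt  # first integration  -> velocity
        px += vx * dt; py += vy * dt; pz += vz * dt  # second integration -> position
    return px, py, pz

# A tiny constant bias (0.05 m/s^2) on a controller that is not moving at all:
samples = [(0.05, 0.0, GRAVITY)] * 1000              # 1 second of data at 1000 Hz
print(integrate_position(samples, dt=0.001))         # x is already ~0.025 m off after one second

Even with gravity known perfectly, a tiny sensor bias snowballs through the two integrations - and that's before the gravity-direction problem described next.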

Typically, this is solved by placing a gyroscope within the controller to gain a constant reading of "down" - but this isn't perfect either. Gyroscopes poll at odd and slow intervals, and thanks to the magic of timing, it's actually possible for the IMUs inside the Wii remote to read their acceleration in space before the gyroscope can determine which direction gravity faces. Since all translations in space done on the Wiimote using IMUs are blind, a small error adds up very quickly. Over time, these errors become greater and greater until, before you know it, you are facing a completely different direction than you expected. This is called drift.

Now, the Wiimote did something else really cool to combat this - it placed a camera on the Wiimote itself to watch a set of stationary IR LEDs in space. This is called inside-out positional tracking and it is, far and away, the single best tracking solution we currently have. In truth, the Wiimote used what is called sensor fusion - a method of combining input from multiple sensors into one single type of reading - to come up with precise pointer tracking, relying mainly on sensor fusion between the camera and the IMUs inside the Wiimote. But to keep things simple, we'll ignore that and look at the camera tracking in isolation.

Inside-out positional tracking vs. outside-in positional tracking

This type of positional tracking relies on line of sight from a camera. The placement of the camera determines whether the tracking is inside-out or outside-in. Inside-out tracking places the camera on the device being tracked - the device itself is looking from the inside out at the world. The opposite, of course, is a camera placed externally, looking "outside-in" at the device being tracked.

Let's conceptualize how these tracking systems work with a quick scenario. Imagine placing a flashlight onto a table - picture this flashlight having a very narrow spread (like a Maglite). Now imagine that, any time the light being emitted by the flashlight comes in contact with the color blue, it can determine where in space that blue object is.

Now let's say you're wearing a blue shirt. When you stand in front of the flashlight, you are being tracked. However, if you step left or right outside of the cone of light being put off by the flashlight, no more light is touching the blue on your shirt, and thus you stop being tracked. Further, imagine you stood with your back to the flashlight while wearing a blue wristband on your right hand. When the wristband is in the light, it can be tracked. But if you put your wrist on your chest, your body blocks it from the light. Even though your wrist is technically within this cone of light, no light touches the wristband because your own shadow blocks it.

This is called occlusion and it's a very real problem with the type of tracking just described, which is outside-in positional tracking. The PlayStation Move, Oculus Rift DK2, and PlayStation Morpheus all use this type of tracking. It's very accurate, but this problem of line of sight is major. For example, Morpheus and the Rift place complex arrays of flashing IR LEDs on the back of their headsets to try and compensate for people turning around and leaving the gaze of the camera - the lack of such an array on the back of the Rift DK2 is precisely why it can't do 360-degree tracking, only roughly 180 degrees facing forward.
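As a toy illustration of that line-of-sight constraint (the camera position, field of view, and marker coordinates below are made up, and a real tracker would also have to check that nothing - like your own body - blocks the ray), a marker only counts as trackable while it sits inside the camera's view cone:

Code:
import math

def in_view_cone(camera_pos, camera_forward, half_fov_deg, marker_pos):
    """True if the marker lies within the camera's cone of view."""
    to_marker = tuple(m - c for m, c in zip(marker_pos, camera_pos))
    dot = sum(f * t for f, t in zip(camera_forward, to_marker))
    mags = (math.sqrt(sum(f * f for f in camera_forward)) *
            math.sqrt(sum(t * t for t in to_marker)))
    return math.degrees(math.acos(dot / mags)) <= half_fov_deg

camera = (0.0, 1.0, 0.0)
forward = (0.0, 0.0, 1.0)  # camera looks down +z

print(in_view_cone(camera, forward, 35.0, (0.2, 1.2, 2.0)))  # in front of the camera -> True
print(in_view_cone(camera, forward, 35.0, (3.0, 1.0, 0.5)))  # far off to the side    -> False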

So let's contrast this to inside-out positional tracking. Using the same situation, imagine now that you have a bunch of blue posters on your walls, and you are actually holding the flashlight in your hand. So long as the beam of light being emitted by your flashlight falls somewhere on a strip of blue poster on your walls, you can determine exactly where that flashlight is in space.

With this method of tracking in mind, you are realistically never subject to occlusion so long as you have enough blue on your walls. If you painted the entire room blue, as an example, you would never, ever lose tracking. You'd be tracked with precision everywhere you went.

This is very useful, clearly, but the problem is obvious - you need to paint your big white room solid blue in order for this to be feasible. The Wiimote, when used as a pointer, operated with inside-out positional tracking watching stationary LEDs. But, as everyone is aware, if you moved too far away, such that your Wiimote's camera couldn't see the LEDs anymore, you lost position.

This was the problem facing those trying to do positional tracking. Everybody knew how to do it right - inside-out tracking is the best method of tracking - but the logistics of doing it right were not feasible for mass consumer use. How many people would really want to paint their walls to play a game, to use our example?

Enter Valve - they had a really good idea. Rather than painting your walls blue, why not shine a blue light into the room? The blue light would turn all your walls blue for you, without you needing to manually paint your walls every time you wanted to play your game. Valve's Lighthouse positional tracking base does just this. Essentially, it works like this:

[image: xp2BJqD.jpg]


Except invisibly. The fundamental logistics problem of inside-out positional tracking is now solved. This has been the holy grail of tracking for many years now, and Valve has deftly solved it.

But wait - you hate "waggle" so you're not excited, right?

Gesture recognition vs. positional tracking

I hate waggle. So much, actually. Waggle wound up dominating the Wii, which was sad. Waggle is not positional tracking. Remember above when I said the IMUs in the Wii remote could describe which directions the controller was moving, but not where it was? That is how waggle worked - rather than actually tracking your hand through space, it merely watched which directions your hands were moving and tried to match them up with an expected gesture.

As an example, say you are throwing a pitch in a baseball game. What would happen is the developers would record several test pitches with a Wiimote in hand and watch the readings from the sensors. They would wind up looking something like this:

[image: KImkqD9.png]


This is a few sensors mapped out over time. You can actually visually recognize some of the motions - the dip in the beginning, for example, is the headset being raised and lowered (this was from work reverse engineering the DK2 tracking LEDs). After identifying a pattern of motion that one typically goes through when they "throw a pitch," the developers would then check every motion you made against this expected motion. Hence, if your hands ever moved in a motion that looked similar to the curve a pitching motion made, it would trigger an event in the code to send a pitch.
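Here's a hypothetical sketch of that kind of gesture matching: compare the live accelerometer trace against a pre-recorded "pitch" template and fire a canned event if they're similar enough. The template values, threshold, and callback are all made up for illustration:

Code:
def gesture_score(trace, template):
    """Lower score = closer match (sum of squared differences)."""
    return sum((a - b) ** 2 for a, b in zip(trace, template))

PITCH_TEMPLATE = [0.0, 0.5, 1.8, 3.2, 2.1, 0.4, -0.6, 0.0]  # one axis of a recorded test pitch
THRESHOLD = 2.0                                             # tuned by hand

def throw_pitch_event():
    print("pitch!")              # hypothetical game callback - a canned pitch animation plays

def check_for_pitch(recent_accel):
    if gesture_score(recent_accel, PITCH_TEMPLATE) < THRESHOLD:
        throw_pitch_event()

check_for_pitch([0.1, 0.6, 1.7, 3.0, 2.2, 0.3, -0.5, 0.1])  # close enough -> "pitch!"

Note that nothing here knows where your hand actually is - it only knows that the shape of the motion roughly matched.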

This, obviously, does not feel very good or realistic. In truth, you are basically pantomiming motions to trigger button presses when you do this. That sucks.

Positional tracking is what everybody always wanted motion controls to be. In the above example, if we could accurately track the position of our hand in space (i.e. knowing with absolute certainty where it was every single time we checked, in X, Y, and Z) then we could let a physics engine take over and actually launch the ball correctly. In this example we're not triggering a pitching event; rather, the actual position of the hand is driving a physics engine. This is how reality works, and it feels much better.
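By contrast, with real positional tracking there is no canned event to fire. A minimal sketch (the frame data and frame rate are made up) of handing the tracked hand's velocity straight to a physics engine:

Code:
def release_velocity(pos_prev, pos_now, dt):
    """Velocity of the hand between two tracked frames, in m/s."""
    return tuple((b - a) / dt for a, b in zip(pos_prev, pos_now))

# Two consecutive tracked hand positions (x, y, z in meters), 1/90th of a second apart:
hand_at_t0 = (0.10, 1.40, 0.30)
hand_at_t1 = (0.18, 1.42, 0.22)

ball_velocity = release_velocity(hand_at_t0, hand_at_t1, dt=1.0 / 90.0)
print(ball_velocity)  # the physics engine integrates this forward - no gesture matching involved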

So why is Valve's positional tracking so revolutionary? Because, by nature of their inside-out tracking, you can now always have accurate 1:1 positional tracking, in all space, all at once.

This is a very big deal going forward. This not only influences tracking of limbs, but also the head itself. Valve's Lighthouse tracker is about to change motion controls in a big way. If anybody ever dreamed of a lightsaber demo with the Wii, Valve's tracker makes it possible today.

Feel free to ask for any clarification - I adore talking about positional tracking. I think it's fundamentally fascinating.
 

Krejlooc

Banned
Can you explain what MotionPlus contributed? Because it made positional tracking much more accurate on the Wii.

It is a gyroscope, used to determine which way gravity points.

Today, throwing in an IMU for sensor fusion is basically what every single device does. It's not perfect in isolation, but it provides a very nice tool to work with.

The best thing about IMUs is that they poll extremely fast - 1000 Hz or more. That means, in 1 second, an IMU will report its acceleration 1000 times. Compare that to the typical 60 Hz refresh rate of a camera (both inside-out and outside-in). So, for moment-to-moment movement, you use your IMU for quick updates, and sensor fusion with the camera for accurate updates.
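A rough sketch of that rate mixing - not Valve's or Oculus's actual filter, just the idea, with illustrative numbers: dead-reckon with the fast IMU every tick, and whenever a slower camera frame arrives, pull the estimate back toward that absolute fix:

Code:
IMU_HZ, CAMERA_HZ = 1000, 60

estimate = [0.0, 0.0, 0.0]

def on_imu_sample(delta):
    """Called ~1000 times a second with the motion since the last sample."""
    for i in range(3):
        estimate[i] += delta[i]                                 # fast, but drifts

def on_camera_frame(absolute_pos, blend=0.5):
    """Called ~60 times a second with a drift-free (but slower) position."""
    for i in range(3):
        estimate[i] += blend * (absolute_pos[i] - estimate[i])  # pull back toward the camera fix

# One camera frame's worth of IMU ticks, with a slightly biased IMU:
for _ in range(IMU_HZ // CAMERA_HZ):
    on_imu_sample((0.001, 0.0, 0.0))
on_camera_frame((0.012, 0.0, 0.0))
print(estimate)  # corrected partway back toward the camera's reading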

People also misunderstand the extent of "drift." The name makes it sound gradual. "Ramping to infinity" is more accurate.
 
What about the Kinect and its current version? The method used is similar to "lighthouse," with invisible rays being projected into the room. Didn't see you mention it in the OP.
 
Thank you very much for the writeup! I am just going to post my question from the other thread though...

Maybe there's something I'm not understanding

What's the appeal of having a room with finite space set up for VR movement when no game is going to conform to that room size anyway?
 
What about the Kinect and its current version? The method used is similar to "lighthouse," with invisible rays being projected into the room. Didn't see you mention it in the OP.

Yeah, IIRC, the Kinect has a "depth camera" that consists of an infrared projector and camera, and it basically does the same thing. It looks at the pattern distortion and spacing to figure out the geometry and distance of what it's seeing.

Compare that to the typical 60 Hz refresh rate of a camera (both inside-out and outside-in). So, for moment-to-moment movement, you use your IMU for quick updates, and sensor fusion with the camera for accurate updates.

I'm pretty sure the Wiimote camera polled much higher than a normal one - around 100 Hz (and the Kinect's was 120 Hz, from what I read some time ago).
 

Krejlooc

Banned
What about the Kinect and its current version? The method used is similar to "lighthouse," with invisible rays being projected into the room. Didn't see you mention it in the OP.

That's because Kinect isn't really using optical tracking, not in the way I'm describing. Kinect uses time-of-flight cameras to measure how long it takes for a fired beam of light to reach a surface and reflect back. By counting the fraction of a second it takes to complete this round trip, they can map a single point of "depth." By doing this hundreds of thousands of times a second, you can form a depth map from this data. From this 3D depth map, Microsoft then does lots of image post-processing to try and figure out the skeletal shape of the figure being captured, and then maps position onto that skeleton.

It's not a very accurate or quick process.
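Still, the core of the time-of-flight idea is just round-trip arithmetic (the timing number below is illustrative):

Code:
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def depth_from_round_trip(seconds):
    """Distance to the surface = (time out and back * c) / 2."""
    return SPEED_OF_LIGHT * seconds / 2.0

# A return after ~13.3 nanoseconds means the surface is about 2 meters away:
print(depth_from_round_trip(13.34e-9))  # ~2.0 m

Do that for every pixel of the sensor, many times a second, and you have the depth map that the skeleton fitting then runs on.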

There is also magnetic induction tracking, which the Razer Hydra and Sixense STEM use, but those have problems as well. I was mainly concentrating on the 3 biggest methods of positional tracking used - pure IMU data, outside-in positional tracking, and inside-out positional tracking.

As far as our collective knowledge is concerned, inside-out positional tracking is the current apex of positional tracking, outside of things like mechanically tracking your limbs (i.e. using servos attached to your body).
 

Krejlooc

Banned
Thank you very much for the writeup! I am just going to post my question from the other thread though...

Have you ever tried to walk in a straight line with your eyes closed? Even if you are being very careful, over a distance you will assuredly veer off course in a weird direction, without even knowing it. This is because visual information is the single most important tool we use to determine our orientation as we move through space. While it's true we have other senses that help us determine where we are (this is called proprioception), our eyes are the primary thing we use to orient our bodies while we walk.

With this in mind, you can take advantage of this and curve your display (have it rotate slightly as you walk) to steer a person into walking in circles. This is called redirected walking, and it actually works:

[image: C3qZ9n5.jpg]


The people walking these curved paths would swear they walked in a straight line. The obvious problem with redirected walking is that your computer has to know where you are in relation to the walls in order to steer you around the room. Hence it had the same problem inside-out tracking used to have - you'd have to meticulously set up your environment so your computer would know the parameters of your room to get you walking in circles.

Valve's positional tracking lighthouse does this for your computer automatically.
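For the curious, the steering trick itself is tiny. A sketch with a made-up gain value (real systems tune the gain to stay below what people can perceive): inject a little extra world rotation for every meter walked:

Code:
CURVATURE_GAIN_DEG_PER_METER = 2.5  # illustrative only

def redirected_yaw(current_yaw_deg, meters_walked_this_frame):
    """Accumulate a small, unnoticed world rotation as the user walks."""
    return current_yaw_deg + CURVATURE_GAIN_DEG_PER_METER * meters_walked_this_frame

yaw = 0.0
for _ in range(100):                # user takes 100 steps of ~0.7 m, "straight ahead" in VR
    yaw = redirected_yaw(yaw, 0.7)

print(yaw)                          # ~175 degrees of world rotation injected - the user has
                                    # physically walked a long arc while feeling like they went straight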
 
Have you ever tried to walk in a straight line with your eyes closed? [...] Valve's positional tracking lighthouse does this for your computer automatically.

Holy. Crap.
 

Alx

Member
Yeah, IIRC, the Kinect has a "depth camera" that consists of an infrared projector and camera, and it basically does the same thing. It looks at the pattern distortion and spacing to figure out the geometry and distance of what it's seeing.

That's how the first Kinect worked (PrimeSense tech, which now belongs to Apple). Like Krejlooc said, MS switched to time of flight for Kinect 2.
 

Krejlooc

Banned
Is this, or couldn't this be used for greenscreen solutions as well? (Lightroom)

I'm not sure I understand your question. You mean can you use this to better subtract "selected" color in film? Like, traditional green screening?

Actually, a 3D camera like Kinect is much better suited for that job.
 

epmode

Member
Have you ever tried to walk in a straight line with your eyes closed? [...] Valve's positional tracking lighthouse does this for your computer automatically.

This is pretty awesome.
༼ つ ◕_◕ ༽つ Give HOLODECK pls Volvo
 

Krejlooc

Banned
Holy. Crap.

This is a concept Oculus has been high on for a while. I've done work with it, too - it is really cool stuff. This is definitely something big. Here's a cool representation of it:

[image: eTFBf3F.jpg]


The path on the left is what the user thinks he is walking. The path on the right is what he actually walks.
 

onQ123

Member
This is what separated the PlayStation Move from the Wiimote, but sadly people were caught up on it just being a Wiimote.
 
This is a concept Oculus has been high on for a while. I've done work with it, too - it is really cool stuff. This is definitely something big. Here's a cool representation of it:

[image: eTFBf3F.jpg]

The path on the left is what the user thinks he is walking. The path on the right is what he actually walks.

And the headset does this automatically, or do games have to be made with this method in mind?
 

Krejlooc

Banned
This is what separated the PlayStation Move from the Wiimote, but sadly people were caught up on it just being a Wiimote.

Correct - the Move could do very accurate positional tracking in 3D space, with the sole really big problem being occlusion. Occlusion is actually a huge problem but it's still better than the alternatives.

To further conceptualize inside-out tracking vs outside-in: inside-out tracking is the device watching the world around it to figure out its position. Outside-in is the world watching the device to figure out its position.

The BEST thing about inside-out tracking: You can have multiple cameras tracking independently.
 

Krejlooc

Banned
And the headset does this automatically, or do games have to be made with this method in mind?

Right now, no API will do this for you. You basically have to build a demo specifically to show off redirected walking.

Sometime in the future, this could be done automatically for the developer at a driver level, or possibly through the API. The tech Valve uses to fade in real-life walls as you approach them with the Vive is a precursor to this.
 
Thank you so much for this writeup (and subsequent clarification posts). It's so informative.

Although that feels like a wasted post as I've only ever seen you post outstanding insight, regardless of topic. Definitely one of the best posters on here.
 

Krejlooc

Banned
Thank you so much for this writeup (and subsequent clarification posts). It's so informative.

Although that feels like a wasted post as I've only ever seen you post outstanding insight, regardless of topic. Definitely one of the best posters on here.

I wrote this because this is a huge deal to me. I think many here realize I do dev work, but I'm not the guy who is making levels or coming up with encounter mechanics. I can do that stuff, mind you, but that's not my area of focus. Positional tracking is. Today is like the Super Bowl for me. When I first heard that Valve had a markerless inside-out positional tracking system, I lost my shit. This is such an enormous deal but sadly, because it's pretty technical in nature, it probably got lost on many people.

To me, this is like going from the N64 to the Dreamcast. That "WOW" factor you had when you saw the graphics for the first time? I'm having that same reaction to this. I thought markerless inside-out positional tracking was a good 3-5 years off. This is incredible.

Like, don't get me wrong, I knew this would eventually be coming. As I said, Oculus has been doing this research as well, and John Carmack speaks about the superiority of inside-out tracking all the time in lectures. It just legitimately shocked me that Valve nailed it on their first attempt. They seriously have blown me away with their ingenuity.
 
Have you ever tried to walk in a straight line with your eyes closed? [...] Valve's positional tracking lighthouse does this for your computer automatically.
My problem with that is that it requires too much space to be practical, and even more if you are running. Would a better solution be to have a button press indicate forward movement, but have the player's real-life direction determine their direction in the game? You could also use real-life movement for actions like ducking or dodging that don't involve much lateral movement. You could then use redirected movement to keep the player in a much smaller confined area.
 

Krejlooc

Banned
My problem with that is that it requires too much space to be practical, and even more if you are running. Would a better solution be to have a button press indicate forward movement, but have the player's real-life direction determine their direction in the game? You could also use real-life movement for actions like ducking or dodging that don't involve much lateral movement. You could then use redirected movement to keep the player in a much smaller confined area.

You can pull off redirected walking in the space of a bedroom - I did. But yes, you won't be running with this stuff. That falls on the developer to recognize good design practices.
 

JoeBoy101

Member
Again, thanks for the writeup. Truly was interesting and gave me a better appreciation on what the tech has done and what Valve has done.
 
Have you ever tried to walk in a straight line with your eyes closed? [...] Valve's positional tracking lighthouse does this for your computer automatically.

I was excited for VR before, but you've made me a believer. Thank you.
 

Krejlooc

Banned
I was excited for VR before, but you've made me a believer. Thank you.

You can do really cool stuff with VR. I don't know if many grasp what it actually does - it's taking over an entire sense. Being a man of science, I believe we are nothing more than a series of impulses in our brain, fed by our senses. Hijacking a major sense like sight gives us the power to alter our ability to perceive. We create realities in our heads.

This is the coolest of cool techs, IMO.
 
The balance stuff is really interesting. I was highly sceptical about full motion-based VR because of that. If people want to experiment themselves: stand on one leg and time how long it takes before you have to make large movements to balance yourself, then do it again but with your eyes closed.

There's still the problem of the power and mobility of the hardware, but in an arcade space it hypothetically shouldn't be too big a problem.
 

Krejlooc

Banned
Again, thanks for the writeup. Truly was interesting and gave me a better appreciation on what the tech has done and what Valve has done.

Valve has evidently been busy as shit. They have been revealing a lot of really incredible information today. As an example - they claim that multi-GPU rendering using AMD's LiquidVR yields double the framerate. That is a massive performance gain, well outside the usual realm of SLI/Crossfire.
 
You can pull off redirected walking in the space of a bedroom - I did. But yes, you won't be running with this stuff. That falls on the developer to recognize good design practices.
I think we've had this conversation before, because I remember that response. I assume you had to create a special virtual environment to make that work - I'm guessing one that forced the player to change direction often and didn't allow them to go in a straight line for any period of time. Is that true? If it is, then that would greatly reduce its applicability.
 

Krejlooc

Banned
I think we've had this conversation before, because I remember that response. I assume you had to create a special virtual environment to make that work - I'm guessing one that forced the player to change direction often and didn't allow them to go in a straight line for any period of time. Is that true? If it is, then that would greatly reduce its applicability.

Right - see the picture I posted above? The path isn't a straight line, it plays off of a sine-wave-like pattern. This is because there are two fundamental types of movements in redirected walking - natural curves as generated by the computer, and pivots by the person. Pivots basically fuck up our ability to discern our direction - spin around a few times with your eyes closed and you'll completely lose track of your direction. So you can use pivots to sort of "reboot" the process as a shortcut.
 
Like, don't get me wrong, I knew this would eventually be coming. As I said, Oculus has been doing this research as well, and John Carmack speaks about the superiority of inside-out tracking all the time in lectures. It just legitimately shocked me that Valve nailed it on their first attempt. They seriously have blown me away with their ingenuity.

Any idea why Oculus didn't adopt a similar system? Is it a cost issue? Or do they want to jump right to markerless tracking?
 

markot

Banned
People want to walk around their rooms while wearing VR?

And people bitched about having to stand up to play wii golf >.<
 

Arkage

Banned
I don't see this as a living room device, and I can't imagine people having dedicated dark rooms void of furniture just to play VR games in this way. It's interesting, but there are just too many levels of inconvenience to use for any sort of widespread adoption or use.

But hey, maybe there will be dedicated VRcades we can go to in the future.
 

Dmax3901

Member
Have you ever tried to walk in a straight line with your eyes closed? [...] Valve's positional tracking lighthouse does this for your computer automatically.

Holy mother fucking crap.
 

Krejlooc

Banned
Any idea why Oculus didn't adopt a similar system? Is it a cost issue? Or do they want to jump right to markerless tracking?

I assume they will, eventually. I massively simplified the process above, mind you. While the process was understood, getting it to work as intended is not trivial. Valve pulled off something special here. This is a testament to the sort of talent that works at Valve.
 
Have you ever tried to walk in a straight line with your eyes closed? [...] Valve's positional tracking lighthouse does this for your computer automatically.
This, too, is great. Wonderful "hack" of human perception.
 
How does the Vive account for occlusion? Doesn't the light from the Lighthouse run into the same issues? I imagine that would leave gaps in the tracking space as well, since the light doesn't pass through you, no?
 

Krejlooc

Banned
People want to walk around their rooms while wearing VR?

And people bitched about having to stand up to play wii golf >.<

I don't see this as a living room device, and I can't imagine people having dedicated dark rooms void of furniture just to play VR games in this way. It's interesting, but there are just too many levels of inconvenience to use for any sort of widespread adoption or use.

But hey, maybe there will be dedicated VRcades we can go to in the future.

So this doesn't just extend to walking around the room. This is basically the ideal wireless positional tracking system. This stuff will make tracking your leans while sitting down, as an example, much more accurate and failsafe.

Games made for Gear VR currently recommend using a swivel chair to let yourself rotate around in space. An obvious drawback is the complete lack of positional tracking. What Valve has here could be used to add positional tracking to Gear VR.
 

Krejlooc

Banned
How does the Vive account for occlusion? Doesn't the light from the Lighthouse run into the same issues? I imagine that would leave gaps in the tracking space as well, since the light doesn't pass through you, no?

The light painting the room is redundant. In the example I proposed, I said that, so long as the beam of light can see any color blue, it can track. While your shadow will occlude some light, the entire room is blue in the example. You basically can't occlude enough to prevent tracking.
 
So this doesn't just extend to walking around the room. This is basically the ideal wireless positional tracking system. This stuff will make tracking your leans while sitting down, as an example, much more accurate and failsafe.

Games made for Gear VR currently recommend using a swivel chair to let yourself rotate around in space. An obvious drawback is the complete lack of positional tracking. What Valve has here could be used to add positional tracking to Gear VR.

Does a tracker need to communicate with a base station? Could multiple HMDs/devices share the same marker source?
 
The light painting the room is redundant. In the example I proposed, I said that, so long as the beam of light can see any color blue, it can track. While your shadow will occlude some light, the entire room is blue in the example. You basically can't occlude enough to prevent tracking.

Gotcha, so what is it exactly that Valve's controllers track? I'm very much not in the know regarding these things. I assume it's not actually tracking the color blue, for example, but what is it based on?
 
You can do really cool stuff with VR. I don't know if many grasp what it actually does - it's taking over an entire sense. Being a man of science, I believe we are nothing more than a series of impulses in our brain, fed by our senses. Hijacking a major sense like sight gives us the power to alter our ability to perceive. We create realities in our heads.

This is the coolest of cool techs, IMO.

The fact that this can more or less hijack our walking is incredibly awesome. I have a DK2 and I'm fully aware of how completely and utterly it takes over your vision, but I never thought of how it could take over everything else. I have noticed, however, that when playing Elite, if I look down at my hand - even though it's off by about 10 inches - my brain is still completely convinced that not only is that my hand I'm looking at, but also that it's where my hand is.

As far as the positional tracking goes, while I was able to visualize it very well just from that PM, I noticed a lot of people were having difficulty, and starting a new thread describing it is about the only way to ensure your description gets maximum visibility. You might also want to make a few other threads, since I notice you talk yourself blue in the face. (Also important to explain the key differences between VR controls and gamepad controls, and why you don't need nearly as many buttons haha.)

EDIT:
Does a tracker need to communicate with a base station? Could multiple HMDs/devices share the same marker source?

No, the Lighthouse itself is very much like the Wii's sensor bar -- it doesn't actually do any communication, it just emits IR lights. As such, each device itself is what interprets the IR data, and figures out its position in the world.
NOW, the Lighthouse MIGHT be given a specific instruction on HOW to emit those IR lights, such as to make patterns that the other devices can recognize, but with Lighthouse, every device is its own thing. Theoretically you could have an entire room of Lighthouse capable devices, from keyboards to chairs to desks and tables and couches, and as long as the game in question knew to pick them up, they'd work.

Personally, I'm waiting for a Lighthouse Keyboard attachment and mouse, and a Lighthouse mug to drink from.
 

Krejlooc

Banned
Does a tracker need to communicate with a base station?

The fine details of the base station aren't known, but I'm assuming not, because Gabe Newell talked about eventually building this tech into televisions themselves. Keep in mind, this tracker replaces sheets of QR codes they used to hang around the room just a year ago. They are merely creating landmarks to be tracked around the room.

Could multiple HMDs/devices share the same marker source?

Absolutely. In fact, that's what happens with Valve's VR system - the controllers are basically little tiny headsets, minus the screens.

Gotcha, so what is it exactly that Valve's controllers track? I'm very much not in the know regarding these things. I assume it's not actually tracking the color blue, for example, but what is it based on?

It's infrared light. It's the part of the light spectrum that we can't normally see, but can be really easily detected. So it actually is tracking light/color, just the kind that is invisible to our eye.
 

markot

Banned
So this doesn't just extend to walking around the room. This is basically the ideal wireless positional tracking system. This stuff will make tracking your leans while sitting down, as an example, much more accurate and failsafe.

Games made for Gear VR currently recommend using a swivel chair to let yourself rotate around in space. An obvious drawback is the complete lack of positional tracking. What Valve has here could be used to add positional tracking to Gear VR.

....

What the hell are controllers for?

Why does it need any tracking at all? Can't I move with my fingers?

A swivel chair?

Yeah.

This is going to take off like 3d gaming and movies.

If you thought having to wear terrible glasses was bad, now imagine a terrible headset for long periods of time where you have to spin in a chair! I'll take 12!
 

Arkage

Banned
So this doesn't just extend to walking around the room. This is basically the ideal wireless positional tracking system. This stuff will make tracking your leans while sitting down, as an example, much more accurate and failsafe.

Games made for Gear VR currently recommend using a swivel chair to let yourself rotate around in space. An obvious drawback is the complete lack of positional tracking. What Valve has here could be used to add positional tracking to Gear VR.

True, I was just focusing on the "walking around" pics when it's obviously applicable to any range of motion with your body. Though, is it literal light that tracks your motion? Because in that case I think there would be issues with shadows or positioning, if your hands were obscured by other parts of your body when pivoting, etc.
 