
Verge: New Hololens impressions "demo videos are all basically a lie"

Raonak

Banned
Ugh, this is really awesome technology, but MS's constant overselling is so annoying.

Ah I see. That's disappointing. How come nobody has mentioned that about the VR solutions? Seems like something to criticize equally as they are doing with Hololens.

With VR, because there's nothing outside the screen FOV, it's not nearly as jarring. It's just all pitch black. All it creates is an edge to your vision. There's no clipping or anything going on.

With HoloLens, you can see outside the FOV, so when you move, you see the AR objects clip out of existence. I imagine that's jarring as fuck.
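To put rough numbers on the clipping (a minimal sketch; the ~30° window is the estimate from the doc-ok.org write-up quoted further down in the thread, and the ~200° figure for human horizontal vision is a generic ballpark, not a HoloLens spec):

# Assumed figures for illustration only: ~30 deg hologram window
# (doc-ok.org estimate), ~200 deg human horizontal FOV incl. peripheral.
DISPLAY_FOV_DEG = 30.0
HUMAN_FOV_DEG = 200.0

def in_view(offset_deg, fov_deg):
    # A world-anchored object offset_deg from straight ahead is inside
    # a centered field of view of fov_deg if it's within half the FOV.
    return abs(offset_deg) <= fov_deg / 2

# Turn your head away from a hologram that starts dead centre:
for turn in (0, 10, 20, 40, 80):
    print(f"{turn:>3} deg turn: rendered={in_view(turn, DISPLAY_FOV_DEG)}, "
          f"still in your natural FOV={in_view(turn, HUMAN_FOV_DEG)}")

By a ~20 degree head turn the object is already outside the render window while still sitting squarely inside your natural field of view, which is exactly the "clip out of existence" effect.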
 
AFAIK Hololens does not use retinal projection. It uses a special kind of optic fiber to project the light through the glass surface until it leaves through the display surface, lighting up a pixel. The light is bounced several times before getting there, which allows the virtual screen to have a much larger focal distance than the display surface without using lenses.

I'm also pretty sure HoloLens uses a laser video projector to produce the images, not an LCD screen. I'm not sure what kind of resolutions miniature LVPs have nowadays, but they work very differently from LCD displays.

From what I understand it has varying focus across the image rather than just a large focal distance, so your eyes can focus naturally on the AR objects as you'd expect to with the matching real life distances. I guess using a laser makes this easier but I wouldn't be able to explain how exactly.
 

xxracerxx

Don't worry, I'll vouch for them.
What area? So far every article I've read says Hololens uses a projector that shines directly into your eyes, not onto a surface in front of them. Which is part of how the image meshes so well with your surroundings.

Directly into your eyes? Hhahahaaha...no.
 
Welp, can't say I'm surprised tbh. It's just not all that impressive. I hope they rethink the whole "glasses" thing and move this out into a single touch peripheral that can make this a true evolution. This is just not cutting it for me.
Not impressive? o_O

You know, if you are gonna show a concept video, make sure you mention that it's a concept video.
Wasn't a concept. The camera we saw through just had a larger FOV configured.
 

Skux

Member
Something about this photo just amuses me. It's like kids at a concert taking cellphone pics.

[attached image]
 

Durante

Member
That what everyone expected to be the most difficult parts of HoloLens are working astoundingly well, and the biggest problem seems like one that could be easily solved with a revision or two?


(disclaimer I guess, I work at Microsoft, but not on HoloLens. I'm just going based off of the impressions I've read online including The Verge)
Actually, the FoV was the most obvious thing about the initial pitch that was completely unrealistic, and it's certainly not easy (and maybe not possible) to fix in the form factor they are selling. Which is one thing people seem to be very keen on compared to so-called "clunky" VR devices (which feature immersive FoVs).

It wasn't a concept video--it just wasn't representative of the field of view of the actual HoloLens wearer. All of the "holograms" were there and functional.
There's a massive qualitative difference between a "hologram" (ugh) which takes up a small portion of the center of your FoV (and disappears entirely outside it) and one presented as being substantial across the entire FoV.
 

Three

Member
From what I understand it has varying focus across the image rather than just a large focal distance, so your eyes can focus naturally on the AR objects as you'd expect to with the matching real life distances. I guess using a laser makes this easier but I wouldn't be able to explain how exactly.

Has this been confirmed? I hear that it is fixed.
And the third, and final, big display question was: does HoloLens provide accommodation cues, i.e., does it present virtual objects at the proper focal distance, like a real hologram or a light field display? This one I can’t answer definitively. I was going to test it by moving very close to a virtual object and comparing the object’s focus against my hand right next to it, but it turns out the HoloLens’ near plane is set at about 60cm, meaning objects can’t be viewed up close. As HoloLens is supposed to augment human-sized environments, it can assume that virtual objects only appear between 60cm (near plane) and a few meters distance, and could get away with a fixed focal distance somewhere in the middle, which I think is exactly what it does.

From here
http://doc-ok.org/?p=1223
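If the focus really is fixed, the cost is a vergence/accommodation mismatch that grows for near objects. A minimal sketch of that trade-off; the 2 m focal plane below is purely my assumption for illustration, since the article only speculates that focus sits somewhere between the 60 cm near plane and a few meters:

# Accommodation demand in diopters is 1/distance (meters). With a fixed
# focal plane, your eyes converge on the virtual object's distance but
# must focus at the fixed plane; the gap is the focus error.
def diopters(distance_m):
    return 1.0 / distance_m

FIXED_FOCUS_M = 2.0  # assumed, not a published spec

for virtual_object_m in (0.6, 1.0, 2.0, 5.0):
    error = abs(diopters(virtual_object_m) - diopters(FIXED_FOCUS_M))
    print(f"object at {virtual_object_m} m -> focus error {error:.2f} D")

The error is largest for close objects (over 1 D at 60 cm with these assumptions), which would also be a plausible reason to put the near plane at 60 cm in the first place.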

Wasn't a concept. The camera we saw through just had a larger FOV configured.

It was more than just a different FOV. It had no jitter, and markerless tracking with little error. It had none of the delays that have been seen in the prototypes, for example. It was staged, and the person on stage was not controlling it. That onstage demonstration was very much a concept. A concept that they may reach, but a concept of the experience at this stage. The device has no eye tracking, so that is not a valid reason why the interface performed actions without user interaction. It uses head position, the finger-click gesture and voice only.
 

DrPizza

Banned
It was more than just a different FOV. It had no jitter, and markerless tracking with little error. It had none of the delays that have been seen in the prototypes, for example. It was staged, and the person on stage was not controlling it. That onstage demonstration was very much a concept. A concept that they may reach, but a concept of the experience at this stage. The device has no eye tracking, so that is not a valid reason why the interface performed actions without user interaction. It uses head position, the finger-click gesture and voice only.
The demo given during the Build keynote wasn't "staged", and the spatial mapping/room tracking in the HoloLens uses the same basic tech as Kinect, which is also what the cameras used for the "audience" view relied on. The real HoloLens (both the prototype I used in January and the one I used yesterday) has astonishingly good tracking, and it really does keep virtual objects anchored to the correct position in reality.
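For anyone wondering what "anchored to the correct position in reality" amounts to computationally, here's a minimal sketch (my own illustration with made-up numbers, not HoloLens code): the hologram's pose lives in a fixed world frame built from the spatial map, and each frame it is re-expressed in the current head frame, so it stays put while you move.

import numpy as np

# A hologram pinned 2 m ahead and 1 m to the right in the world frame.
hologram_world = np.array([1.0, 0.0, 2.0])

def to_head_frame(point_world, head_pos, head_yaw_deg):
    """Express a world-space point in head (view) coordinates for a head
    at head_pos, rotated head_yaw_deg about the vertical axis."""
    yaw = np.radians(head_yaw_deg)
    # World-to-head rotation (inverse of the head's own rotation).
    rot = np.array([[ np.cos(yaw), 0.0, -np.sin(yaw)],
                    [ 0.0,         1.0,  0.0        ],
                    [ np.sin(yaw), 0.0,  np.cos(yaw)]])
    return rot @ (point_world - head_pos)

# Stand still, then step half a metre right and turn 20 degrees:
# the world-space anchor never changes, only where it lands in your view.
print(to_head_frame(hologram_world, np.array([0.0, 0.0, 0.0]), 0.0))
print(to_head_frame(hologram_world, np.array([0.5, 0.0, 0.0]), 20.0))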
 

Jomjom

Banned
Ugh, this is really awesome technology, but MS's constant overselling is so annoying.



With VR, because there's nothing outside the screen FOV, it's not nearly as jarring. It's just all pitch black. All it creates is an edge to your vision. There's no clipping or anything going on.

With HoloLens, you can see outside the FOV, so when you move, you see the AR objects clip out of existence. I imagine that's jarring as fuck.

Ahhh I see. Thanks for the great clarification. That does seem like it would be super jarring, like an object just disappearing into the ether. I'm pretty disappointed to hear that VR has a rectangular screen of sorts too, though.
 

Three

Member
The demo given during the Build keynote wasn't "staged", and the spatial mapping/room tracking in the HoloLens uses the same basic tech as Kinect, which is also what the cameras used for the "audience" view relied on. The real HoloLens (both the prototype I used in January and the one I used yesterday) has astonishingly good tracking, and it really does keep virtual objects anchored to the correct position in reality.
It was, especially as the guy missed his cue. It has Kinect tech, but it cannot use the Kinect tech to anchor at those distances, not the distances shown in the staged demo. It would completely kill the battery. There was a demonstration of the range of the Kinect-like sensor with the wire mesh, remember. It would much more likely rely on markerless image tracking at those distances. The demos people actually tried had jitter, and they also had a delay in tracking as you moved your head. Check the link above for example. It was still impressive markerless tracking, but not the same as what they showed on stage.
 
Yeah, this will sell a bit based on ignorance, I think.
They're betting on shitty tech again, imo.
Well, not shitty tech. But I guess it's not that great for gaming right now.
 

jem0208

Member
Damn it, that's so disappointing; the article is glowingly positive about the technology other than the FoV. Such a shame.
Isn't that a good thing?

The FoV is almost certainly one of the easiest aspects to solve. If the problem was that the holograms weren't very good then yes it would be a shame.
 

Fliesen

Member
Isn't that a good thing?

The FoV is almost certainly one of the easiest aspects to solve. If the problem was that the holograms weren't very good then yes it would be a shame.

People keep saying that. If it was so easy to solve, wouldn't MS have solved it and just demoed the device a year later?

This isn't just "increasing the size of the semi-transparent screen". This isn't so much a "screen technology" issue as an optics issue, I presume.
To be fully immersive, the holograms would have to adapt to being in your peripheral vision, which might come with additional challenges.

Not saying it's unsolvable, but people shouldn't dismiss this issue so easily. If it was so trivial, MS would have "fixed" it by now.

If the FoV means you'll have to turn your head to face your "virtual displays", it'll be far less useful for office work, as every single glance would require you to tilt and turn your head, putting a huge strain on your neck.
 

EVIL

Member
from http://doc-ok.org/?p=1223

Let’s start with the display. The biggest question going in was: how big is the field of view? And the answer is: small. As I was stripped of all devices and gadgets before being allowed into the demo room, I had to guesstimeasure it by covering the visible screen with my hands (fingers splayed) at arm’s length, ending up with 1 3/4 hands horizontally, and 1 hand vertically (in other words, a 16:9 screen aspect ratio). In non-Doc-Ok units, that comes out to about 30° by 17.5° (for comparison, the Oculus Rift DK2′s field of view is about 100° by 100°). From a practical point of view, this means that virtual objects only appear in the center of the viewer’s field of view, which turns out to be very distracting and annoying. Interestingly, this is compounded by the visor’s much larger physical field of view: on the plus side, the user doesn’t get tunnel vision (it feels a bit like wearing lab safety glasses), but on the other hand, the virtual objects get cut off for no visible reason, such as spectacle frames, and simply vanish into thin air at the edges of the screen.
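As a rough sanity check of that hand-based estimate (assuming a splayed hand spans about 20 cm and arm's length is about 70 cm; both numbers are my guesses, not the article's):

import math

HAND_SPAN_CM = 20.0   # fingers splayed, assumed
ARM_LENGTH_CM = 70.0  # assumed

def angular_size_deg(width_cm, distance_cm):
    # Angle subtended by a flat width seen face-on at a given distance.
    return math.degrees(2 * math.atan(width_cm / (2 * distance_cm)))

horizontal = angular_size_deg(1.75 * HAND_SPAN_CM, ARM_LENGTH_CM)  # 1 3/4 hands
vertical = angular_size_deg(1.0 * HAND_SPAN_CM, ARM_LENGTH_CM)     # 1 hand
print(f"~{horizontal:.0f} deg x {vertical:.0f} deg")  # roughly 28 x 16 deg

That lands in the same ballpark as the 30° by 17.5° figure quoted above.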

edit:

Isn't that a good thing?

The FoV is almost certainly one of the easiest aspects to solve. If the problem was that the holograms weren't very good then yes it would be a shame.

Actually, in this case it's one of the hardest problems to solve.
In the case of VR HMDs, you look at a screen which displays a warped image, with lenses in front of it that unwarp the image, making it look like it fills your FOV around you. To increase FOV in these types of headsets you can use better optics or larger displays; these are pretty cheap and easy solutions. Because everything in such an image is virtual, you can do all kinds of tricks.

With HoloLens, if you did the same (pre-compute a distortion in the image), you do not have a way to bend the image around you with optics, because that would also warp whatever you see behind the image you are trying to display. Increasing the FOV will most likely only happen by having large transparent displays.

Take Magic Leap, for example. They have been working on developing AR glasses for a couple of years now, and from what I hear they still have the best AR prototypes in the world, but even they haven't been able to get a FOV higher than 40 degrees horizontal and vertical.
 
Isn't that a good thing?

The FoV is almost certainly one of the easiest aspects to solve. If the problem was that the holograms weren't very good then yes it would be a shame.

Oh yes, absolutely - IMO they've proven the "hard bit" works - the bit that seems like an impossible pipe dream. I wonder how much of the FoV issue is to do with the display and how much is due to the room sensors only being able to work in a certain zone.
 

Fliesen

Member



Oh yes, absolutely - IMO they've proven the "hard bit" works - the bit that seems like an impossible pipe dream. I wonder how much of the FoV issue is to do with the display and how much is due to the room sensors only being able to work in a certain zone.

I'm not so sure about that. We're talking immersive optics here. Going beyond overlaying a rather tiny bit of your entire field of view with holographic visuals to adding elements not just to what you are looking at directly but also to your peripheral vision seems non-trivial.
It's far easier to create a composite holographic image for the center of our field of vision, as that is mostly just the same imagery with tiny differences in perspective. Once you get to peripheral vision, the two images created for each eye would be far more complicated to "compute".

Again: I don't think this is trivial at all; the heavy lifting is yet to happen.
 

Zedox

Member
It was, especially as the guy missed his cue. It has Kinect tech, but it cannot use the Kinect tech to anchor at those distances, not the distances shown in the staged demo. It would completely kill the battery. There was a demonstration of the range of the Kinect-like sensor with the wire mesh, remember. It would much more likely rely on markerless image tracking at those distances. The demos people actually tried had jitter, and they also had a delay in tracking as you moved your head. Check the link above for example. It was still impressive markerless tracking, but not the same as what they showed on stage.

So you're gonna tell a guy who has tried it both times and wrote about it on Ars Technica that it was a lie? Smh...
 
That little FOV box they did in the video says it all. Been skeptical of this device since it was announced and I'll remain so.
 
Gotta say I was duped; I did not know about the FOV issues.

And for me that was the thing, you know, the Minority Report-style dragging of things in and out of your peripheral vision. If they vanish when outside of that window, I don't know, man, just put it on a screen instead.
 

DrPizza

Banned
It was, especially as the guy missed his cue. It has Kinect tech, but it cannot use the Kinect tech to anchor at those distances, not the distances shown in the staged demo. It would completely kill the battery. There was a demonstration of the range of the Kinect-like sensor with the wire mesh, remember. It would much more likely rely on markerless image tracking at those distances. The demos people actually tried had jitter, and they also had a delay in tracking as you moved your head. Check the link above for example. It was still impressive markerless tracking, but not the same as what they showed on stage.

It uses Kinect tech to build a spatial map of the environment. It's difficult to know exactly how far it can see (I didn't get a chance to play with the render front and back plane distances), but I think it should be good for 15 feet or more. In the tutorial we made the spatial map visible (Unity can render it as a triangle mesh), and we could see it being continuously built and rebuilt as people moved around.
 
This has nothing to do with computing power for resolution when we have mobiles pushing higher-res displays than other, bulkier hardware. It has to do with the screen tech being flat. The fact that they can't use lenses for this type of AR without distorting your real-world vision means they have a low FOV. With flat display technology, the display would need to protrude from the side of your head, and the distance it protrudes becomes greater and greater for less and less gain in FOV.

The demo the press saw back in January was the same except for a larger FOV, and the headset they were using was tethered to a larger pack for the processing hardware. It has nothing to do with a flat screen.
 

mrklaw

MrArseFace
How much larger was the field of view on the tethered January units? I would have thought one of the primary difficulties would be creating that larger FoV, but it seems like it might be reduced on the demo units as a function of the processor power needed to render that large a view. If so, that's good IMO, because it suggests they can increase it as processor power increases - they aren't limited to a narrow FoV for optical reasons.
 
How much larger was the field of view on the tethered January units? I would have thought one of the primary difficulties would be creating that larger FoV, but it seems like it might be reduced on the demo units as a function of the processor power needed to render that large a view. If so, that's good IMO, because it suggests they can increase it as processor power increases - they aren't limited to a narrow FoV for optical reasons.

According to Paul Thurrott on yesterday's Windows Weekly, it was waaay larger. He said he honestly felt the unit was broken. He also said that with the one back in January you could easily see an entire scene, whereas with this version not so much.
 

Pie and Beans

Look for me on the local news, I'll be the guy arrested for trying to burn down a Nintendo exec's house.
How much larger was the field of view on the tethered January units? I would have thought one of the primary difficulties would be creating that larger FoV, but it seems like it might be reduced on the demo units as a function of the processor power needed to render that large a view. If so, that's good IMO, because it suggests they can increase it as processor power increases - they aren't limited to a narrow FoV for optical reasons.

It's worth mentioning the prototype was a giant, blocky, bulky one, which required the user to wear a CPU box around their neck and all sorts of other stuff. The HMD was most likely larger, not the 'sleek' goggles of now, and thus would have allowed for a marginally bigger FOV. Not huge, mind you; people were comparing it to holding up a sheet of paper at arm's length back then as well.

[photo of an earlier HoloLens prototype]

There was this photo of a prototype at one point, but how that differed in FOV size is debatable. Thurrott exaggerates quite often.

The reason this FOV thing and other issues like occlusion are news even to peeps on tech enthusiast forums like NeoGAF? That classic Microsoft spin and all those "concept" videos. When you lie about a product today, you trade against its tomorrow.
 

Pie and Beans

Look for me on the local news, I'll be the guy arrested for trying to burn down a Nintendo exec's house.
What is so difficult about having a wide FOV I wonder

From a tech standpoint I mean

In detail:

Actually, in this case it's one of the hardest problems to solve.
In the case of VR HMDs, you look at a screen which displays a warped image, with lenses in front of it that unwarp the image, making it look like it fills your FOV around you. To increase FOV in these types of headsets you can use better optics or larger displays; these are pretty cheap and easy solutions. Because everything in such an image is virtual, you can do all kinds of tricks.

With HoloLens, if you did the same (pre-compute a distortion in the image), you do not have a way to bend the image around you with optics, because that would also warp whatever you see behind the image you are trying to display. Increasing the FOV will most likely only happen by having large transparent displays.

Take Magic Leap, for example. They have been working on developing AR glasses for a couple of years now, and from what I hear they still have the best AR prototypes in the world, but even they haven't been able to get a FOV higher than 40 degrees horizontal and vertical.

Short version: in VR it's all fake, so you can play with warpy optic lenses to give a huge FOV. In AR it's real life, and thus standard lenses, so you can't. So you either need a much larger surface area that curves around, and even higher resolution (and of course there's the issue of stereoscopy, meaning there's only so much space between your two eyes to expand into), or you're shit out of luck.

Not entirely sure there's a solution until the "directly onto your retinas!" level of future tech.
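To illustrate the "much larger surface area" point with the raw window geometry alone (a sketch that ignores the waveguide's own angular limits; the 25 mm eye relief is an assumed, glasses-like distance):

import math

EYE_RELIEF_MM = 25.0  # assumed distance from pupil to the combiner glass

for fov_deg in (30, 60, 90, 120):
    half_angle = math.radians(fov_deg / 2)
    # Width of flat glass needed (per eye) so rays at the FOV edge still
    # pass through it on the way to the pupil.
    width_mm = 2 * EYE_RELIEF_MM * math.tan(half_angle)
    print(f"{fov_deg:>3} deg FOV -> ~{width_mm:.0f} mm of glass per eye")

The width needed per extra degree keeps climbing, so the form factor stops looking like glasses well before you get anywhere near VR-like FOVs.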
 

Durante

Member
What is so difficult about having a wide FOV I wonder

From a tech standpoint I mean
It's exceedingly simple really: you can only ever see something in any direction (as in, direction of a ray cast through the eye's pupil) where a device can either (a) display something directly or (b) optically route an existing image (e.g. by lenses).

In the form factor advertised for HoloLens since the beginning, neither of those allows for filling out a large FoV -- and everyone who thinks about these things should have realized that.
 

StudioTan

Hold on, friend! I'd love to share with you some swell news about the Windows 8 Metro UI! Wait, where are you going?
It's exceedingly simple really: you can only ever see something in any direction (as in, direction of a ray cast through the eye's pupil) where a device can either (a) display something directly or (b) optically route an existing image (e.g. by lenses).

In the form factor advertised for HoloLens since the beginning, neither of those allows for filling out a large FoV -- and everyone who thinks about these things should have realized that.

Then how did the version from January have a larger FoV?
 

eso76

Member
Wasn't a concept. The camera we saw through just had a larger FOV configured.

The camera was probably the same as the real unit, but that was what, a 24mm focal length?
Your FOV is A LOT wider than that, so even if the HoloLens prototype they had could render stuff to fill a 24mm-focal-length FOV, it would still appear tiny overlaid onto your actual FOV.

That can be improved, but I don't think it can be solved entirely, unless they somehow artificially limit your FOV.
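For scale, the horizontal FOV a 24mm lens actually covers (assuming a full-frame, 36 mm wide sensor; the demo camera's real sensor size is unknown, so treat this as an illustration):

import math

SENSOR_WIDTH_MM = 36.0  # assumed full-frame sensor
FOCAL_LENGTH_MM = 24.0

# Horizontal FOV of a rectilinear lens.
fov_deg = math.degrees(2 * math.atan(SENSOR_WIDTH_MM / (2 * FOCAL_LENGTH_MM)))
print(f"~{fov_deg:.0f} deg horizontal")  # ~74 deg, vs. roughly 200 deg of human vision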

Unrelated: I think the computing unit should be elsewhere, and the images on the glasses should be streamed wirelessly from it.
 

Pie and Beans

Look for me on the local news, I'll be the guy arrested for trying to burn down a Nintendo exec's house.
Then how did the version from January have a larger FoV?

Closer to the eyes, a slightly larger display window, that sorta thing. Either way, there's a hard limit on how high that FOV can be in this form factor, and it's gonna be a box until huge evolutionary changes in AR (5-10 years out).
 

hodgy100

Member
The camera was probably the same as the real unit, but that was what, a 24mm focal length?
Your FOV is A LOT wider than that, so even if the HoloLens prototype they had could render stuff to fill a 24mm-focal-length FOV, it would still appear tiny overlaid onto your actual FOV.

That can be improved, but I don't think it can be solved entirely, unless they somehow artificially limit your FOV.

Unrelated: I think the computing unit should be elsewhere, and the images on the glasses should be streamed wirelessly from it.

You don't want wireless for this kind of thing, really; it introduces too much of a delay. All the same problems that apply to VR also apply to AR, and then there are some further issues to solve. Overall MS's tech is impressive; it's a shame they feel the need to oversell it, though.
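To put a rough number on "too much of a delay" (both figures below are illustrative assumptions, not measurements):

# Extra motion-to-photon latency turns into angular error while the head
# is moving: the hologram lags behind its real-world anchor by
# head speed x added latency.
HEAD_SPEED_DEG_PER_S = 100.0  # a moderate head turn, assumed
EXTRA_LATENCY_S = 0.020       # ~20 ms added by wireless streaming, assumed

error_deg = HEAD_SPEED_DEG_PER_S * EXTRA_LATENCY_S
print(f"~{error_deg:.1f} deg of drift during the turn")  # ~2 deg, easy to notice on world-locked objects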
 

AegisScott

Neo Member
Directly into your eyes? Hhahahaaha...no.

This is not so far off. There have been numerous demos of drawing directly onto the eyes of players/viewers/riders. It's just not very good, nor cost-effective, and it makes people squeamish. But the tech exists!
 

Azriell

Member
I kinda forgot about this, and wasn't really sure what it was until I watched the very nice video. It seems odd to me if MS sees this as a competitor to Rift or Morpheus. HoloLens seems almost like something that should work with Kinect for those crappy moron games where you're "touching" stuff that doesn't exist. For me, the best case scenario would be projecting a game made for TV onto a TV that isn't really there (wall/ceiling/back of the seat in front of you). But VR can do this already (albeit not in the same way, but VR can create a flat screen for you to look at, e.g. that Rift horror game where you sit on a couch playing a horror game while creepy stuff happens behind you in your VR house). I'm actually really excited for these sorts of applications, if only so I can play console/PC games in bed at night without lighting the room up with my screen. But VR can also do so much more, and that's why this is a confusing move. Of course, if they're looking at this as more of a Google Glass, then that makes a lot more sense.
 

Bluefoot

Banned
That's too bad the FOV is such an issue. I really thought it completely covered your perspective.

That's too bad. The main reason I wanted one was so I could watch movies on IMAX-size screens at home, but for entertainment purposes it seems like HoloLens version 1 won't be very useful.
 

Juanfp

Member
I kinda forgot about this, and wasn't really sure what it was until I watched the very nice video. It seems odd to me if MS sees this as a competitor to Rift or Morpheus. HoloLens seems almost like something that should work with Kinect for those crappy moron games where you're "touching" stuff that doesn't exist. For me, the best case scenario would be projecting a game made for TV onto a TV that isn't really there (wall/ceiling/back of the seat in front of you). But VR can do this already (albeit not in the same way, but VR can create a flat screen for you to look at, e.g. that Rift horror game where you sit on a couch playing a horror game while creepy stuff happens behind you in your VR house). I'm actually really excited for these sorts of applications, if only so I can play console/PC games in bed at night without lighting the room up with my screen. But VR can also do so much more, and that's why this is a confusing move. Of course, if they're looking at this as more of a Google Glass, then that makes a lot more sense.

They are promoting this as a separate product, not an Xbox complement.
 

KoopaTheCasual

Junior Member
That's too bad the FOV is such an issue. I really thought it completely covered your perspective.

That's too bad. The main reason I wanted one was so I could watch movies on IMAX-size screens at home, but for entertainment purposes it seems like HoloLens version 1 won't be very useful.
This was my primary interest as well. It would (ideally) function as a scalable display that would take up infinitely less room at home with none of the drawbacks. I guess that future is still a ways off. Sad.
 

vio

Member
The concept is great, just not there yet. The use of the word "hologram" for this is a bit annoying, though.
 