
New Microsoft hololens demo. Goddamn future

Status
Not open for further replies.
I think too many people are looking exclusively at what MS has demoed and trying to evaluate whether or not it has value to them, as opposed to looking at the potential. Even the Kinect had great applications outside of what MS originally intended it for.

"You mustn't be afraid to dream a little bigger, darling."

That's sort of their job, isn't it?
 
That's sort of their job, isn't it?
Yes and no. Microsoft is only one company; you can't reasonably expect them to show off every use case of a product, especially not one which hasn't released yet. They're doing what every hardware developer does: they show off its capabilities, give you a few handpicked demos which are easy to build, and then hand the APIs to developers to do stuff with.

Wait until developers get their hands on it, that's when you see the novel uses coming out for it.
 
Yes, but "creator of product has to show that there is some value to it" is their job, I would think. And for some of us, at least, they haven't met that. Some of us obviously believe they have!
 
What would it take to make the hololens change the environment to say a forest or jungle?

Why didn't they demo it showing a holographic human sitting on his couch? Is that really hard to do currently?
 
Interesting.. Windows Holographic is an operating system designed to run on AR devices, and HoloLens is just the first device, Microsoft's own implementation. Kipman hopes for many devices running Windows Holographic in the near future.

Makes sense. I think we're still missing one piece of the puzzle, though: tablets (or phones) with integrated depth sensing. I think Dell intended to produce one, but I don't know how far along they are. They could be a better entry-level product for that AR environment.

And I'm not sure what you mean by hiding your eyes.

Well even if the screen is mostly translucent, it does make it very hard to see the user's eyes, which is a big drawback for social interaction.


We communicate a lot with our eyes ("mirrors of the soul"), and hiding them is not very socially acceptable; that's why people look like douches when they keep their Ray-Bans on while talking to other people (especially indoors).
When Fortaleza leaked I hoped you could see enough of the face through the visor for that to be a minor annoyance, but we're not there yet. That's too bad, because otherwise it could be a cool device for social interaction: you could 3D-Skype people by having them scanned by a Kinect on their side and displayed in your living room sitting on your couch (just like in that Kingsman scene). It's still possible with the current system, but you would see a bunch of people wearing gigantic sunglasses.
 
Seen some pretty stupid replies: too many screens, sensory overload... Really?

You can pin your screens behind your lounge if you want, and un-pin them when you need them... It's pretty much like having apps on a phone.

I can see this being useful, I thought the first demo was impressive but gimmicky, now I see the potential.

Imagine if Apple showed this: best thing since sliced bread!!
 
Are there any examples of how they deal with real-life objects being placed between the person wearing the hololens and the holographic object? Do they bother to deal with occlusion in that scenario, or does the holographic image just overlay the real-life object?
 
Are there any examples of how they deal with real-life objects being placed between the person wearing the hololens and the holographic object? Do they bother to deal with occlusion in that scenario, or does the holographic image just overlay the real-life object?

I just watched the videos again, and apparently the current version does always overlay the holographic object over the real image. They were actually very careful not to let it happen on stage (they had a skilled cameraman there), but you can see it for a frame or two here:
https://youtu.be/mSCrviBGTeQ?t=166

Kipman's hand brushes over the robot head for a few frames, and it seems to be hidden by it although he's closer to the camera. You can also see the dots of the planned trajectory appearing over his feet.

I think in theory the HoloLens has enough information to know there are objects in front of the hologram, using its depth sensors. But it may be difficult to render that perfectly, since the depth information would be less accurate near the edges of objects (there's also a slight offset between the depth sensor position and the wearer's eyes to consider).
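That sensor-to-eye offset can be quantified with a toy pinhole-camera model. Everything below is a sketch with made-up numbers (`FOCAL` and `BASELINE` are assumptions, not HoloLens specs); it only shows why nearby objects misalign more than distant ones.

```python
# Toy pinhole model of the sensor-to-eye offset problem.
# FOCAL and BASELINE are hypothetical numbers, not HoloLens specs.

FOCAL = 500.0     # focal length in pixels (assumed)
BASELINE = 0.03   # sensor-to-eye offset in metres (assumed)

def project(x, z, baseline=0.0):
    """Horizontal image coordinate of the point (x, z) as seen by a
    pinhole camera shifted by `baseline` metres along the x axis."""
    return FOCAL * (x - baseline) / z

def parallax_error(x, z):
    """Pixel misalignment if a depth sample taken at the sensor is
    reused unchanged in the eye's frame of reference."""
    return abs(project(x, z) - project(x, z, BASELINE))

# Near objects misalign far more than distant ones:
print(round(parallax_error(0.0, 0.5)))   # ~30 px at half a metre
print(round(parallax_error(0.0, 3.0)))   # ~5 px at three metres
```

So a mask computed in the sensor's frame lands a few dozen pixels off for anything at arm's length, which is exactly where hands and obstacles tend to be.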
 
I just watched the videos again, and apparently the current version does always overlay the holographic object over the real image. They were actually very careful not to let it happen on stage (they had a skilled cameraman there), but you can see it for a frame or two here:
https://youtu.be/mSCrviBGTeQ?t=166

Kipman's hand brushes over the robot head for a few frames, and it seems to be hidden by it although he's closer to the camera. You can also see the dots of the planned trajectory appearing over his feet.
Yeah, I noticed it in that video too, which prompted me to ask if there were any other, more obvious examples.

I imagine that would be a really hard problem to solve.
 
The base problem is not that hard; it was actually one of the very first hacks with the original Kinect.
https://www.youtube.com/watch?v=P3gfMXwQOGI

Handling occlusions when you're dealing with 3D information is quite straightforward, but the main issue is making it look good in all the places where you have less accurate (or missing) measurements. The new Kinect's data is already less noisy and more accurate than what we see in the above video, but it's not perfect (and never will be).
 
The base problem is not that hard; it was actually one of the very first hacks with the original Kinect.
https://www.youtube.com/watch?v=P3gfMXwQOGI

Handling occlusions when you're dealing with 3D information is quite straightforward, but the main issue is making it look good in all the places where you have less accurate (or missing) measurements. The new Kinect's data is already less noisy and more accurate than what we see in the above video, but it's not perfect (and never will be).

It's easier to solve when the whole thing is a video feed of a set quality being fed back; it's a lot harder when you're overlaying real life, where you'd then have to overlay a crap video feed of the occluding objects.

It's going to be something Microsoft hides all the way up until launch and probably even beyond that. Along with the boxy FOV, it's gonna be something AR struggles to solve for a long time.
 
It's easier to solve when the whole thing is a video feed of a set quality being fed back; it's a lot harder when you're overlaying real life, where you'd then have to overlay a crap video feed of the occluding objects.

You don't need to overlay a video feed in that case; just add to the CG layer a mask of the pixels where it should not appear, so that you see "real life" there instead. In short: detect where an obstacle is in front of the object, and disable rendering of the object there.
When you're not perfectly accurate, though, the transition between the real obstacle and the half-hidden object will look weird and/or fuzzy. But it should still be better than always having the CG layer on top, since that breaks the depth illusion.
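The masking idea above can be sketched in a few lines. The function and all the depth values are invented for illustration (a real pipeline would do this per-pixel on the GPU); it just makes the compare-depths-and-mask step concrete.

```python
# Sketch of the masking idea: draw a hologram pixel only where the
# hologram is nearer than the measured real-world surface. The
# function and all depth values are invented for illustration.

def mask_hologram(holo_depth, real_depth):
    """Per-pixel mask: True where the hologram should be rendered.

    holo_depth -- distance of the hologram from the viewer, per pixel
    real_depth -- distance of the nearest real surface, per pixel
                  (None where the depth sensor has no measurement)
    """
    mask = []
    for h, r in zip(holo_depth, real_depth):
        if r is None:
            mask.append(True)    # missing data: draw anyway
                                 # (these are the fuzzy-edge spots)
        else:
            mask.append(h < r)   # draw only if the hologram is nearer
    return mask

# One "scanline": hologram at 2 m, a real hand at 1.5 m covering the
# middle two pixels, one pixel with no depth reading.
print(mask_hologram([2.0, 2.0, 2.0, 2.0], [3.0, 1.5, 1.5, None]))
# [True, False, False, True]
```

The `None` branch is where the "weird and/or fuzzy" transitions come from: wherever the sensor has no reliable reading, you have to guess which layer wins.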
 
I just watched the videos again, and apparently the current version does always overlay the holographic object over the real image. They were actually very careful not to let it happen on stage (they had a skilled cameraman there), but you can see it for a frame or two here:
https://youtu.be/mSCrviBGTeQ?t=166
Good find. That's exactly what I was saying earlier in the thread: this needs to be solved and working first, otherwise the illusion is completely broken. Imagine pets jumping in front of you, people walking past, etc.
 
Good find. That's exactly what I was saying earlier in the thread: this needs to be solved and working first, otherwise the illusion is completely broken. Imagine pets jumping in front of you, people walking past, etc.

There was a very noticeable situation like that during the HoloLens reveal when they switched to a "first person" view for a moment to show the glance and hand gestures.
 
Video of 3 devs talking about what they did during the HoloLens dev session. They go through some of the coding and talk about their experience with the device itself.

The comments are very positive. One thing I thought was interesting was when they talked about the audio capabilities, which I haven't seen discussed much. They were extremely impressed with it.

Battery life seems like another big issue right now.
 
Well rehearsed and contrived demo, and it still looks janky and laggy.
 
Video of 3 devs talking about what they did during the HoloLens dev session. They go through some of the coding and talk about their experience with the device itself.

The comments are very positive. One thing I thought was interesting was when they talked about the audio capabilities which I haven't seen talked about much. They were extremely impressed with it.

Battery life seems like another big issue right now.

Good thing I checked before I posted, was just about to post this.

They seemed pretty excited, and apparently the overall impression from everyone there is an extremely positive one.

They spoke about the FOV a bit and said that even though it was smaller than they'd like, they still felt immersed, which is good news. However, since the product still seems a couple of months from release and the FOV is everyone's biggest gripe, hopefully it can be addressed by then.
 
Good find. That's exactly what I was saying earlier in the thread: this needs to be solved and working first, otherwise the illusion is completely broken. Imagine pets jumping in front of you, people walking past, etc.

There was a very noticeable situation like that during the HoloLens reveal when they switched to a "first person" view for a moment to show the glance and hand gestures.

I'm curious how they'll handle that. They clearly have some way to designate objects that are exceptions to the overlay: when they debuted this in January, one of the demos had you walking around the surface of Mars, but there was an actual computer (which was in the room) on the surface of Mars with you.

Well rehearsed and contrived demo, and it still looks janky and laggy.

Every stage demo you've ever seen has been well-rehearsed, and several impressions have mentioned little to no lag:

Through HoloLens, I saw a 3D projection of the new building on top of the physical model. On the monitor, I could click and drag parts of the building and raise them up or shrink them down, and my adjustments reflected on the hologram in real time. There was no discernible lag between my mouse events on the monitor and that of the hologram, which I found impressive.

Next, I moved the mouse cursor from the 2D monitor to my view of the hologram. (Effectively, the hologram was simply a screen extension of the drawing on the monitor.) Again, there was no lag.

I clicked a spot on the hologram with the mouse, and suddenly I had a photo rendering of the area in Denver I was supposed to build in. I looked around the room, all 360 degrees, and I could see a 360-degree view of Denver. Again, there was no lag, unless I really moved my head as quickly as possible. (Are you seeing a theme here?)

As I noted above, the HoloLens did not feel quite like a consumer-ready product. However, the demos did show many of the potential uses for the headset, and although the limited display size was a notable limitation, the device did a marvelous job of providing a snappy, lag-free, intuitive user experience. There was virtually no learning curve.
 
At least the gaming side can have a rational discussion about the pros and cons of the device. Such a breath of fresh air over there.
 
A smartphone already provides me with all the applications they demoed here- without having to wear something on my head.
 
At least the gaming side can have a rational discussion about the pros and cons of the device. Such a breath of fresh air over there.

Ha, are you joking? It's just the same old Kinect stuff over there...

It's the reason I'm over here.. lol.

So, the whole FOV thing... is it because of the (current) limited rendering power of the machine, or is it because of the size of the lens? Or both, for the time being?

Also, are there any other devices that use photons to display an image? Just trying to get an idea of the image quality here.
 
A PC already provides me with all the applications a smartphone does- without having to keep a bulky device in my pocket.
LOL @ the thought of a smartphone being "bulky."

Congrats.

P.S. The crucial difference between a PC and a smartphone is that a smartphone is readily accessible anywhere you go. Hololens can't compete either, because you'll look ridiculous in public and you'll quickly feel discomfort with something on your face all the time.
 
A smartphone already provides me with all the applications they demoed here- without having to wear something on my head.

I don't really understand this argument. Phones do what computers did before, just more conveniently and portably. Hololens does what TVs/PCs/Phones did before, just more conveniently and portably.
 
LOL @ the thought of a smartphone being "bulky."

Congrats.
When I actually upgraded to a smartphone from a tiny flip phone, my pocket did get a lot more cramped. That's not really the point, though.

What you posted has been said about every piece of technology that attempts to encapsulate and improve on the features of an existing product. Most people don't need it now but it's likely that it will become a near-necessity in the future like smartphones are now.

I for one welcome our new holographic overlords.
 
I'm curious how they'll handle that. They clearly have some way to designate objects that are exceptions to the overlay: when they debuted this in January, one of the demos had you walking around the surface of Mars, but there was an actual computer (which was in the room) on the surface of Mars with you.

I'm assuming they might've modeled a near-exact replica of the computer as an AR object that was then used to hide the Mars terrain. In a controlled environment I'd say it's doable, but doing that in real-time situations with all kinds of rooms gets much more difficult.

Although, I can imagine having the Kinect hardware create these occlusion meshes, as well as having something like the Leap Motion create such a mesh for the user's hands. I've no idea how feasible all of this is at the moment; it probably won't quite be there for a few years.
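As a rough illustration of what building such an occlusion mesh from depth data involves: turn a grid of depth samples into triangles, leaving gaps wherever the sensor had no reading. The function and the tiny grid below are invented for the sketch; real spatial-mapping pipelines are far more sophisticated.

```python
# Rough sketch of building an occlusion mesh from a depth grid: two
# triangles per grid cell, skipping cells with missing samples. The
# function and grid are invented; real reconstruction pipelines are
# far more sophisticated than this.

def depth_grid_to_mesh(depth, width):
    """depth: flat row-major list of z samples (None = no reading).
    Returns (vertices, triangles); triangles index into vertices."""
    height = len(depth) // width
    verts = [(x, y, depth[y * width + x])
             for y in range(height) for x in range(width)]
    tris = []
    for y in range(height - 1):
        for x in range(width - 1):
            i = y * width + x
            cell = (i, i + 1, i + width, i + width + 1)
            if any(depth[c] is None for c in cell):
                continue                      # hole: leave a gap
            tris.append((i, i + 1, i + width))
            tris.append((i + 1, i + width + 1, i + width))
    return verts, tris

# A 2x2 grid with all samples valid yields one cell = two triangles.
verts, tris = depth_grid_to_mesh([1.0, 1.0, 1.2, 1.2], width=2)
print(len(verts), len(tris))   # 4 2
```

The holes left by missing samples are the same problem as before: the renderer has to decide what to show where the mesh has a gap.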
 
Really? The gamingside threads on Hololens were straight-up dumpster fires the last time I checked.

Ha, are you joking? It's just the same old Kinect stuff over there...

It's the reason I'm over here.. lol.

So, the whole FOV thing... is it because of the (current) limited rendering power of the machine, or is it because of the size of the lens? Or both, for the time being?

Also, are there any other devices that use photons to display an image? Just trying to get an idea of the image quality here.

Yeah, I was being sarcastic.

The interesting thing about the FOV discussion is that the prototypes the press tried in January seem to have had a much better FOV. So there is likely another reason the FOV has gotten smaller, one that's not related to inherent design limitations of the visor. Maybe it has to do with processing power and battery life?
 
Video of 3 devs talking about what they did during the HoloLens dev session. They go through some of the coding and talk about their experience with the device itself.

The comments are very positive. One thing I thought was interesting was when they talked about the audio capabilities which I haven't seen talked about much. They were extremely impressed with it.

Battery life seems like another big issue right now.

Yeah I'd like to know if there is a headphone jack or how the audio works.
 
Yeah, I was being sarcastic.

The interesting thing about the FOV discussion is that the prototypes the press tried in January seem to have had a much better FOV. So there is likely another reason the FOV has gotten smaller, one that's not related to inherent design limitations of the visor. Maybe it has to do with processing power and battery life?

Windows Central and Paul Thurrott think the lower FOV was to reduce motion sickness, apparently.
 
Yeah, I was being sarcastic.

The interesting thing about the FOV discussion is that the prototypes the press tried in January seem to have had a much better FOV. So there is likely another reason the FOV has gotten smaller, one that's not related to inherent design limitations of the visor. Maybe it has to do with processing power and battery life?

There's also this rumor: "Paul Thurrott had heard but was unable to confirm that the hologram field-of-view was purposefully kept smaller to reduce motion sickness (something common with VR headsets like Oculus)." (source)

Edit: damn you, Lazaro
 
Hmm, that's disappointing but interesting.

EDIT: Having said that, it means it's something that can be increased in the future.
 
There's also this rumor: "Paul Thurrott had heard but was unable to confirm that the hologram field-of-view was purposefully kept smaller to reduce motion sickness (something common with VR headsets like Oculus)." (source)

Edit: damn you, Lazaro

People get motion sick from augmented reality? That seems interesting.
 
Well even if the screen is mostly translucent, it does make it very hard to see the user's eyes, which is a big drawback for social interaction.



We communicate a lot with our eyes ("mirrors of the soul"), and hiding them is not very socially acceptable; that's why people look like douches when they keep their Ray-Bans on while talking to other people (especially indoors).
When Fortaleza leaked I hoped you could see enough of the face through the visor for that to be a minor annoyance, but we're not there yet. That's too bad, because otherwise it could be a cool device for social interaction: you could 3D-Skype people by having them scanned by a Kinect on their side and displayed in your living room sitting on your couch (just like in that Kingsman scene). It's still possible with the current system, but you would see a bunch of people wearing gigantic sunglasses.

They should make it so networked Hololenses see googly eyes superimposed on other Hololenses that dynamically change with the info from the eye tracking.
 
I don't really understand this argument. Phones do what computers did before, just more conveniently and portably. Hololens does what TVs/PCs/Phones did before, just more conveniently and portably.
If you're already wearing the visor, perhaps that's true. But 1) I'm not going to wear a visor outside my home or workplace, and 2) I'm not going to wear it for any extended period of time.
 
The base problem is not that hard; it was actually one of the very first hacks with the original Kinect.
https://www.youtube.com/watch?v=P3gfMXwQOGI

Handling occlusions when you're dealing with 3D information is quite straightforward, but the main issue is making it look good in all the places where you have less accurate (or missing) measurements. The new Kinect's data is already less noisy and more accurate than what we see in the above video, but it's not perfect (and never will be).
This is a problem Magic Leap also claims to have solved. They talk about their depth perception and recognition a lot, and they have that mock video they put out about a month ago. I really want to see the tech in action, though; only a few people have so far.
 