
HoloLens Minecraft Demo

SigSig

Member
What they didn't show is that the guy's hand, when pulling and pushing blocks, is located BEHIND the hologram.

This is a mockup of what you'll see... the hologram will always be in front of your hands...

[mockup image: 3HojmHa.png]


This alone removes all the WOW factor for me... it looks cool when nothing is in FRONT of what I'm looking at... but as soon as something is supposed to move in front of it... all immersion is lost.

Shouldn't this be possible to solve by adding a Kinect-style depth camera and just masking out objects "in front" of the projected content? The mask could even be fairly crude; I'm sure the wow effect is stronger than the desire for everything to be hyper-perfect at launch.
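For what it's worth, the masking idea above is conceptually simple: compare the depth camera's reading against the hologram's virtual depth and drop any hologram pixel that a real object sits in front of. A minimal sketch (not actual HoloLens code; the array shapes and the additive-display assumption are mine):

```python
import numpy as np

def composite_with_occlusion(holo_rgb, holo_depth, scene_depth):
    """Keep a hologram pixel only where nothing real is closer to the viewer.

    holo_rgb:    (H, W, 3) rendered hologram colors
    holo_depth:  (H, W) virtual depth of each hologram pixel, in meters
    scene_depth: (H, W) per-pixel depth-camera reading, in meters
                 (use np.inf where the sensor sees nothing)
    On an additive display, black pixels are transparent, so zeroing
    occluded pixels lets the real object show through.
    """
    visible = holo_depth < scene_depth    # True where the hologram is in front
    return holo_rgb * visible[..., None]  # zero out occluded hologram pixels

# Hologram at 2 m everywhere; a "hand" at 1 m covers the top-left pixel.
holo = np.ones((2, 2, 3))
out = composite_with_occlusion(
    holo,
    np.full((2, 2), 2.0),
    np.array([[1.0, 3.0],
              [3.0, 3.0]]),
)
```

Here the top-left hologram pixel renders black (the real hand wins), while the rest stays lit. Even a crude mask like this would sell the effect, which is why the lack of hand occlusion in the demo stands out.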
 

Soi-Fong

Member
I'd sign off on this, definitely. There are really good and helpful opportunities here, and this is where Microsoft could shine.
After that, add some tabletop stuff and some nice puzzle games and that's all fine and dandy for gaming.
But there's no need to drag it onto the E3 stage and give us a show of smoke and mirrors.

They know Morpheus will be on Sony's stage. They had to show something futuristic really. And Kinect is well.. Yeah..
 
People saying AR is not useful for gaming: I don't know, Illumiroom looked pretty cool to me. No reason why you couldn't do everything they showed off in those Illumiroom videos with a HoloLens. I think it would be fun to use a HoloLens in conjunction with regular console/PC games. The RoomAlive stuff also looks promising to me, especially for certain genres like horror. HoloLens could make that whole RoomAlive projection setup possible for the average consumer without much fuss.

Obviously Minecraft on HoloLens is going to be a big seller to millions of gamers as well.

They know Morpheus will be on Sony's stage. They had to show something futuristic really. And Kinect is well.. Yeah..

That's the thing though, Xbox Kinect should be a component of HoloLens in the same way PS4 camera and Move are components of Project Morpheus. In my opinion Sony is poised to crush Xbox at this E3 now. We'll see what happens tonight. The Xbox team is looking very lost and as ill-prepared for the future as the day Nintendo unveiled the Wii.

When you look at all the major gaming headsets (Oculus, Vive, Morpheus) all of them are pushing forward with motion controllers. Meanwhile the HoloLens team are stuck with lame finger tapping. It's really pathetic to see the Xbox team fall so far behind everyone in motion control.
 

golem

Member
Please tell me your reasons. I have many for my statement, but I wanna hear your thoughts first.

I have more than 500 hours in the Oculus DK2 developing and playing in it. I have many many reasons.

I had a DK2 too. Was not impressed.

First, IMO it is a pretty limited experience as it stands now. With better input methods it will become more versatile but right now most gaming experiences seem awkward on it. Things just don't work quite as you expect them to, whether from the first person perspective or others. Also even with better input devices, you will be divorced from seeing yourself act upon those input devices. In a cockpit or racing simulator, it is much more preferable IMO to see the actual physical objects you are interacting with. AR seems more flexible, especially if it can actually block out large parts of reality. For example, I have no idea why Insom's new game is third person in VR, but I think it could work better as an AR game from the same viewpoint.

Second, I believe there will be a social stigma against strapping essentially a blindfold on top of your head. It is not a very social (as in living room) experience and may not be well accepted by the masses. In its current form I think it will be seen as a novelty, not as a mainstream choice for entertainment. When I showed my gf my DK2, she was impressed but I was left twiddling my thumbs while she played in her world. Now, yes, I can strap another DK2 on and jump in -- most likely eventually causing a collision with her as we try to maneuver things around the living room in the dark. Yes, VR could recreate the whole scene correctly spatially, but after my experience with Kinect, I believe motion tracking has a long, long way to go before it actually works with any sort of acceptable accuracy. With AR we can have a shared playspace and maintain a connection of sorts to the outside world.

Finally, I believe that VR is being rushed to market. The resolution and pixel density of the DK2 were not impressive, and I'm worried that solutions coming out next year won't actually have much of an advance in display technology. Also, as noted above, alternative input methods are still unproven and traditional input methods don't quite cut it. This can all lead to VR being seen as a failed experiment and a novelty at best, at least in the short term.
 
Shouldn't this be possible to solve by adding a Kinect-style depth camera and just masking out objects "in front" of the projected content? The mask could even be fairly crude; I'm sure the wow effect is stronger than the desire for everything to be hyper-perfect at launch.

The HoloLens has two Kinect cameras built into it. It does map out and mask all objects it sees in front of it.
 
So, isn't the quoted a non-issue then?

Not necessarily. There are processing/rendering speed limitations of the HoloLens computer and field-of-view limitations with the HoloLens Kinect cameras. I've never used a HoloLens, so I have no idea where the limitations are for its masking/object-tracking capabilities. Early press reports are that the room-masking capability worked very well, though.
 

mrklaw

MrArseFace
The comments.

The demo was embellished. I wouldn't go so far as to call it fake, or to call Microsoft liars.

in the original demo when hololens was first shown, with the second camera rig setup (same as today), the presenter said that the camera is as though it had its own hololens mounted on it. Literally telling the audience that whatever the camera guy sees is how the actual hololens unit will work.

That seems a tad more than embellishment.
 
in the original demo when hololens was first shown, with the second camera rig setup (same as today), the presenter said that the camera is as though it had its own hololens mounted on it. Literally telling the audience that whatever the camera guy sees is how the actual hololens unit will work.

That seems a tad more than embellishment.

Considering the product isn't even fully-developed or finished yet, I disagree.

It very well could be how the actual HoloLens unit will work. Eventually.
 
What's it like to go through life being this jaded? You must be a riot at parties.
Jaded? I'm one of the biggest technophiles you're likely to encounter. As such, I have a fairly good grasp of what is and isn't possible with current and near-future technology. Kind of like how, after years of intense training, a sommelier can discern the difference between a fine Chardonnay and a bottle of wombat piss. The ironic thing is, you know full well you're drinking wombat piss, yet you eagerly gulp it down and say, "C'mon, guys! If you pretend it's Chardonnay, it's delicious!"

"It pays to keep an open mind, but not so open your brains fall out." ~Carl Sagan


Yes... there's a specific name for it... like pixel occlusion or something. But you can't have anything actually be in front of the hologram... it is overlaid on top of everything, unless they can make the image "disappear in the shape of your arm" when you wave your hand around in front of it.

If MS can solve the occlusion problem... then yeah, it'll be amazing... but they have yet to show a demo with occlusion.
In theory, the Kinect on the front of the headset should be able to do just that, but if they haven't demonstrated it yet, my guess would be that putting the theory into practice is far easier said than done.


I wouldn't go so far as to call it fake,
It was completely fake. The "game" was responding to commands the guy hadn't even issued yet, FFS.

or to call Microsoft liars.
Fair enough. The clip in the OP was cut off a bit, so I didn't see how it was introduced. Did they say something like, "We'd like to show you a little play that gives you an idea of what we might be able to achieve in the future," or did they say something like, "And we're going to demonstrate it for you right now"? I didn't see any kind of "Product Vision" disclaimer in the clip.


It was a good conference apart from this part which was just MS falling back into bad old habits.

Clearly a lot of smoke and mirrors, years away from becoming a product, and only in the conference as a diversionary tactic for Sony's Morpheus showcase coming later.
Yup, and as you said, same ol' MS. Just like they hurriedly purchased Kinect and made a bunch of fake videos for it when they got wind that Sony were getting ready to launch Move.
 

CoG

Member
Why are so many people stretching to find negatives?

The demo is bullshit. Everyone who used the actual units at Build says that the POV is limited to a small window dead center that clips as holograms go out of view. Maybe MS is trying to represent the end game (2-3 years down the road), but if people expect what they showed today anytime before that, prepare for disappointment.
 

DavidDesu

Member
Impressive. I've heard the FOV is poor atm. Of course this will improve. This is a fantastic concept and I hope it comes to fruition. One day I'd love my regular spectacles to have this built in so I can augment stuff in my living room etc.

This and VR go hand in hand. The stuff in the HoloLens demo can be done in VR too, and is a great example of non-first-person VR and the possibilities it has. Microsoft are doing great things here and I just hope it's not BS. Of course, HoloLens is supposedly all run from the headset itself, so graphics won't be hugely more advanced than the Minecraft stuff for now, which is why VR driven by either home consoles or PC will still be where the widest range of experiences will be found.

However you look at it, the future is bright. Unless you're a complete cynic who can't bear to see anything new happen, which is where some of the anti-VR crew seem to come from. :p
 

KiraXD

Member
In theory, the Kinect on the front of the headset should be able to do just that, but if they haven't demonstrated it yet, my guess would be that putting the theory into practice is far easier said than done.

The problem is that it COULD be done... but hasn't been so far. Occlusion is the make-or-break point... I also assume it would take a LOT of processing to get fairly decent 1:1 image motion... anything that passes in front of the hologram would require re-rendering the hologram MINUS the "object" passing in front... for the entire time it's in motion.

The masking would need to omit that section of the processed hologram... with any lag... your arm would always appear to clip behind the image...

MS isn't showing this... why? They do a precision dance every time it's demoed so things never end up in front...

It will probably be as precise as Kinect 1.0 was... i.e. not very.
 
The demo is bullshit. Everyone who used the actual units at Build says that the POV is limited to a small window dead center that clips as holograms go out of view. Maybe MS is trying to represent the end game (2-3 years down the road), but if people expect what they showed today anytime before that, prepare for disappointment.

How does that make the demo bullshit? As far as I can tell the camera usually (but not always) stayed far enough away that the entire table would always be within its POV (remember, POV isn't a tube, it's a cone). Also, this isn't Build, it's very possible they've improved that, since the very first prototypes didn't have the POV problem. And how do you figure "2-3 years down the road"? Are you an expert in the technology they are using, so that you can say without a doubt that the POV problem is so technically hard that it'll take 2-3 years to fix?

The problem is that it COULD be done... but hasn't been so far. Occlusion is the make-or-break point... I also assume it would take a LOT of processing to get fairly decent 1:1 image motion... anything that passes in front of the hologram would require re-rendering the hologram MINUS the "object" passing in front... for the entire time it's in motion.

The masking would need to omit that section of the processed hologram... with any lag... your arm would always appear to clip behind the image...

MS isn't showing this... why? They do a precision dance every time it's demoed so things never end up in front...

It will probably be as precise as Kinect 1.0 was... i.e. not very.
How do you know? As for all that processing time needed, it *already* does full depth sensing on the entire scene in realtime and rendering based on the positional changes, are you seeing a ton of latency in the motion? And why would it be only as precise as Kinect 1.0 when they have Kinect 2.0 technology?
 

CoG

Member
How does that make the demo bullshit? As far as I can tell the camera usually (but not always) stayed far enough away that the entire table would always be within its POV (remember, POV isn't a tube, it's a cone). Also, this isn't Build, it's very possible they've improved that, since the very first prototypes didn't have the POV problem. And how do you figure "2-3 years down the road"? Are you an expert in the technology they are using, so that you can say without a doubt that the POV problem is so technically hard that it'll take 2-3 years to fix?

The camera being far away is deceptive, because the impression they are portraying is that you're seeing what the guy at the table sees. The limitations are in both the display and the processing, and experts in the field put full-FOV AR 2-3 years down the road, based on what I read coming out of Build. There's no way the tech advanced that much in a month; in fact it's been scaled back quite a bit since the Windows 10 event in January, when you needed a PC strapped around your neck to use it.
 

Grinchy

Banned
I was blown the fuck away by this. Then I started reading all the comments about how it might be bullshit. If Microsoft wasn't so well known for faking everything at these conferences, my excitement would still be there. Now I'm a little annoyed but still hopeful.
 
What matters most right now are journalist impressions of using HoloLens at E3. If the FoV sucks then they're going to say so. We have no idea where the FoV is at right now or where it will be when the product ships. We don't even know when they plan to ship this thing.

They have successfully overcome most of the issues that should make a product like HoloLens impossible. When we talk about FoV we're talking about an issue of scale, processing power. The tethered units in January achieved completely what they are promising on stage. By Moore's law they will eventually be able to deliver that experience in the HMD.


in the original demo when hololens was first shown, with the second camera rig setup (same as today), the presenter said that the camera is as though it had its own hololens mounted on it. Literally telling the audience that whatever the camera guy sees is how the actual hololens unit will work.

That seems a tad more than embellishment.

The tethered devices they had the media use in January had the FoV they were promising in the camera setup. They should have been more upfront at BUILD regarding the challenges they are working to overcome in the portable unit as Oculus has always done with their Rift.

FoV seems like something they can potentially improve over time as software gets more optimized and computational power increases. It would be incredibly disappointing if HoloLens FoV doesn't improve between BUILD demo and whatever the ship date is.

Oculus has had to overcome a lot of limitations and problems over the past few years while trying to sell people on the idea of VR and the thing that was cool is that they acknowledged the challenges openly. I would be more excited to hear Microsoft talk about how they're working on improving the product before release rather than pretending there are no challenges to overcome.
 

KiraXD

Member
How does that make the demo bullshit? As far as I can tell the camera usually (but not always) stayed far enough away that the entire table would always be within its POV (remember, POV isn't a tube, it's a cone). Also, this isn't Build, it's very possible they've improved that, since the very first prototypes didn't have the POV problem. And how do you figure "2-3 years down the road"? Are you an expert in the technology they are using, so that you can say without a doubt that the POV problem is so technically hard that it'll take 2-3 years to fix?


How do you know? As for all that processing time needed, it *already* does full depth sensing on the entire scene in realtime and rendering based on the positional changes, are you seeing a ton of latency in the motion? And why would it be only as precise as Kinect 1.0 when they have Kinect 2.0 technology?

How do I know it hasn't been done yet? Because they haven't shown it yet... please show me one instance of HoloLens occlusion. There's lag with the hologram as is... to process occlusion... if you think it'll be 1:1 you're delusional.

If it's not a big deal... then why haven't we seen it yet? We're not talking JUST depth sensing... we're talking about rendering an entire scene 1:1 and removing the part of the hologram behind whatever passes in front of it, based on the scene and the objects (including moving objects) in it.
 

Josman

Member
Seems like it has potential, but for gaming it has about as much as Kinect had. There are still issues that need to be addressed, like the FOV, the depth perception (like the GIF posted in this thread where the hand is out of place with the hologram), size, looks, and cost.

This tech is not even close to being released, at least not with Xbone compatibility; it seemed like a preemptive demo against Morpheus.

Outside of gaming and once those issues above are fixed, I would buy one, just for virtual screens.
 
If you don't include the most important one, occlusion, then yeh, they have. They've not shown any solution for that yet.

I heard a few complaints from BUILD impressions that if you get too close to the holograms you go right through them. But I didn't hear many complaints about occlusion with your hands, or anything saying that the holograms always cover your hand.

This tech is not even close to being released, at least not with Xbone compatibility; it seemed like a preemptive demo against Morpheus.

Outside of gaming and once those issues above are fixed, I would buy one, just for virtual screens.

HoloLens at this point has absolutely nothing to do with Xbox or Morpheus. It is a standalone Windows 10 PC that does AR. It is not a VR headset or a peripheral for a game console. We'd known about HoloLens's existence (as Project Fortaleza) for many years before Morpheus, too.

Partnering with Oculus was a preemptive move against Morpheus VR. A pretty lame maneuver in my opinion, too. Xbox attaching its name and controller to Oculus was a pathetic grab at relevance in a market they are quickly falling behind in. Phil Spencer doesn't seem to have a workable plan if VR gaming does take off on console and PC.
 

Stinkles

Clothed, sober, cooperative
If you don't include the most important one, occlusion, then yeh, they have. They've not shown any solution for that yet.

Do you mean occlusion by real life objects? Like if you ducked down really low, the bottom of the minecraft level would be occluded by the edge and underside of the table? Because that works right now.
 
Do you mean occlusion by real life objects? Like if you ducked down really low, the bottom of the minecraft level would be occluded by the edge and underside of the table? Because that works right now.

Niiiiiice.

Seriously, that's killer. Will definitely help for immersion.
 
Finally, I believe that VR is being rushed to market. The resolution and pixel density of the DK2 was not impressive and I'm worried that solutions coming out next year won't actually have much of an advance in display technology

I think you're underestimating how much the quality can improve based on one spec (resolution). DK2 uses an off-the-shelf Note 3 screen overclocked to 75 fps. It wasn't designed for VR, nor is it even really well suited to VR (pixel density is poor, and its PenTile layout makes the screen-door effect even worse). The consumer version is using two state-of-the-art screens custom-made for VR, and based on the response to the Crescent Bay prototype it's a vast improvement.
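To put the pixel-density complaint in rough numbers, here is a back-of-envelope pixels-per-degree estimate (the 960-pixels-per-eye and ~100 degree figures are approximations for the DK2, and the 60 px/deg acuity figure is a commonly quoted ballpark, not a measured spec):

```python
def pixels_per_degree(horizontal_pixels_per_eye, horizontal_fov_deg):
    """Approximate angular resolution of an HMD, ignoring lens distortion."""
    return horizontal_pixels_per_eye / horizontal_fov_deg

# DK2: a 1920x1080 panel split between the eyes, roughly 100 degrees across.
dk2 = pixels_per_degree(960, 100)  # ~9.6 px/deg
# Foveal acuity is often ballparked around 60 px/deg, so the DK2 sits
# well below one-sixth of "retinal" resolution -- hence the visible pixels.
```

Even doubling the panel resolution only nudges this figure, which is why a screen purpose-built for VR matters more than raw megapixels.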
 

KiraXD

Member
Do you mean occlusion by real life objects? Like if you ducked down really low, the bottom of the minecraft level would be occluded by the edge and underside of the table? Because that works right now.

so if you waved your hand in front of the level... it would be in front of the hologram?

Thats my only complaint... if thats taken care of ill be hyped.
 
The problem is that it COULD be done... but hasn't been so far. Occlusion is the make-or-break point... I also assume it would take a LOT of processing to get fairly decent 1:1 image motion... anything that passes in front of the hologram would require re-rendering the hologram MINUS the "object" passing in front... for the entire time it's in motion.

The masking would need to omit that section of the processed hologram... with any lag... your arm would always appear to clip behind the image...

MS isn't showing this... why? They do a precision dance every time it's demoed so things never end up in front...

It will probably be as precise as Kinect 1.0 was... i.e. not very.
Yes, we're in agreement. :) There's the processing power involved with processing the information from the embedded Kinect, and also just the general issue of frame rates. IIRC, K2 only runs at 30 Hz even on the Bone, and they'll need much higher refresh rates to avoid "z-buffer artifacts" when physical objects pass in front of virtual objects.
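As a quick sketch of how that sensor rate turns into visible clipping (the hand speed and rates here are illustrative assumptions, not measured HoloLens numbers):

```python
def mask_lag_error_m(hand_speed_mps, sensor_hz, extra_latency_s=0.0):
    """Worst-case distance a moving hand travels between depth-mask updates."""
    return hand_speed_mps * (1.0 / sensor_hz + extra_latency_s)

# A hand sweeping at 1 m/s against a 30 Hz depth feed can trail its
# occlusion mask by up to ~3.3 cm, before any processing latency is added.
error = mask_lag_error_m(hand_speed_mps=1.0, sensor_hz=30.0)
```

A centimeter-scale halo where the hologram bleeds over a moving hand is exactly the kind of artifact a higher sensor refresh rate would shrink.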


How does that make the demo bullshit? As far as I can tell the camera usually (but not always) stayed far enough away that the entire table would always be within its POV (remember, POV isn't a tube, it's a cone). Also, this isn't Build, it's very possible they've improved that, since the very first prototypes didn't have the POV problem. And how do you figure "2-3 years down the road"? Are you an expert in the technology they are using, so that you can say without a doubt that the POV problem is so technically hard that it'll take 2-3 years to fix?
When they pushed in on the window, the virtual objects filled the entire view. That's far from what's actually achievable.

Incidentally, your Point of View is where you're standing, and your Field of View is what you can see from there. <3

How do you know? As for all that processing time needed, it *already* does full depth sensing on the entire scene in realtime and rendering based on the positional changes, are you seeing a ton of latency in the motion? And why would it be only as precise as Kinect 1.0 when they have Kinect 2.0 technology?
Does it? I mean, I know that's what it's supposed to do, but is there any evidence it's actually doing that yet? I know some members of the press were treated to private demos. Did anyone try kicking the coffee table out from under the game board? What happened? Did the board move with the table? Did it fall to the floor? Did it keep chugging along as though nothing had happened?


The tethered devices they had the media use in January had the FoV they were promising in the camera setup. They should have been more upfront at BUILD regarding the challenges they are working to overcome in the portable unit as Oculus has always done with their Rift.
Are you sure about that? I thought I remembered reading that none of the actual demo units have had the FOV they've been "demonstrating," but the tethered unit wasn't nearly as bad as the untethered unit.


Do you mean occlusion by real life objects? Like if you ducked down really low, the bottom of the minecraft level would be occluded by the edge and underside of the table? Because that works right now.
So, the virtual objects are occluded by the furniture, but not by your body? That would support my hypothesis that they aren't actually scanning the room in real time, and even the private demos were using a pre-mapped room. So, more fakery, even in the "real" demos, basically.
 

Froli

Member
What they didn't show is that the guy's hand, when pulling and pushing blocks, is located BEHIND the hologram.

This is a mockup of what you'll see... the hologram will always be in front of your hands...

[mockup image: 3HojmHa.png]


This alone removes all the WOW factor for me... it looks cool when nothing is in FRONT of what I'm looking at... but as soon as something is supposed to move in front of it... all immersion is lost.

Wow, I didn't know about this, and then add the FOV problem on top. This is going to bite them in the ass in the end.
 

Alx

Member
Complaining about the field of view is like complaining about image resolution in VR: sure, it's an issue, but it misses the point of why the tech is impressive. And just like resolution, it will improve in the future, maybe even before the thing is released.
 

cheezcake

Member
so if you waved your hand in front of the level... it would be in front of the hologram?

Thats my only complaint... if thats taken care of ill be hyped.

Do you have a source for this "hand problem" or are you just theorising? Doesn't this thing have depth cameras? Seems trivial to occlude objects like your hand if so.
 

Bsigg12

Member
so if you waved your hand in front of the level... it would be in front of the hologram?

Thats my only complaint... if thats taken care of ill be hyped.

I'm pretty sure they have already shown that the system will anchor things and allow for what you're talking about.

https://www.youtube.com/watch?v=XLljp8CVpKg

The way things are anchored, as well as the moment when the guy with the HoloLens walks past one of them, shows they're placed in space and not like you pictured a few pages back.

The biggest issue facing Hololens is a small FOV.
 

Crema

Member
Jaded? I'm one of the biggest technophiles you're likely to encounter. As such, I have a fairly good grasp of what is and isn't possible with current and near-future technology. Kind of like how, after years of intense training, a sommelier can discern the difference between a fine Chardonnay and a bottle of wombat piss. The ironic thing is, you know full well you're drinking wombat piss, yet you eagerly gulp it down and say, "C'mon, guys! If you pretend it's Chardonnay, it's delicious!"

Come on, there's no need to be so unpleasant just because you're on the internet.

The tech looks really cool. It's obviously not going to work as seamlessly as we wish yet but the most enjoyable part of E3 is getting an insight into new ideas and the future direction of the industry. Best thing I've seen so far and really hope they can pull it off.
 

NoPiece

Member
Complaining about the field of view is like complaining about image resolution in VR: sure, it's an issue, but it misses the point of why the tech is impressive. And just like resolution, it will improve in the future, maybe even before the thing is released.

Well, note that FOV is an issue in VR as well, and Oculus hasn't substantially improved it since DK1. Increasing FOV isn't trivial: at a fixed angular resolution it means far more pixels rendered (the count grows roughly with the square of the field of view), and HoloLens is compute-constrained as a standalone device.


Also, the optics start to get much more complicated as FOV gets wider. The lenses Oculus and Valve are relying on can't work for AR, because you also need to see the real world undistorted.
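To make the rendering-cost point concrete, here's a toy calculation holding angular resolution fixed while widening the view (the FOV and pixels-per-degree figures are assumptions for illustration, using a flat-field approximation rather than proper tan-based projection math):

```python
def pixels_for_fov(h_fov_deg, v_fov_deg, px_per_deg):
    """Pixels needed to hold angular resolution constant (flat-field approximation)."""
    return round(h_fov_deg * px_per_deg) * round(v_fov_deg * px_per_deg)

narrow = pixels_for_fov(30, 17, 40)    # a small AR window, as press reports described
wide = pixels_for_fov(100, 100, 40)    # a Rift-class field of view
# wide is roughly 20x narrow: widening the view multiplies the rendering
# load, which is brutal for a self-contained, battery-powered device.
```

In other words, even if the optics problem were solved tomorrow, filling a wide field of view at the same sharpness is an order-of-magnitude jump in compute.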
 

Bsigg12

Member
I'm 99% sure this was fake as hell, just like when Kinect got revealed with that "amazing" Star Wars demo

If this thing were supposed to be powered by an Xbox One, sure, I would be in the same boat. The biggest mistake people are making is assuming it's a gaming accessory when in fact it's a fully self-contained computer. It'll probably end up running $800-$1,000 when it comes out. Bringing games to it is probably the least of Microsoft's concerns with the device, but they are sure as hell going to show it off at every opportunity they get.
 