What they didn't show was that the guy's hand, when pulling and pushing blocks, is located BEHIND the hologram.
This is (mockup) what you'll see... the hologram will always be in front of your hands...
This alone removes all WOW factor for me... it looks cool when nothing is in FRONT of what I'm looking at... but as soon as something is supposed to move in front of it... all immersion is lost.
THIS I would sign, definitely. There are really good and helpful opportunities there. And here Microsoft could shine.
After that, add some tabletop stuff and some nice puzzle games and that's nice and dandy for gaming.
But there is no need to drag it onto the E3 stage and give us heroes of smoke and mirrors.
They know Morpheus will be on Sony's stage. They had to show something really futuristic. And Kinect is, well... yeah...
Please tell me your reasons. I have many for my statement, but I wanna hear your thoughts first.
I have more than 500 hours in the Oculus DK2, developing and playing in it. I have many, many reasons.
> Sorry, but from what I've seen and experienced, VR is the future, not AR.

You don't have to apologize. AR and VR will probably be delivered through one device in the future, so it's not like they're competing.
Shouldn't this be possible to solve by adding a Kinect-style depth camera and just masking out objects "in front" of the projected content? The mask could even be fairly crude; I'm sure the wow effect is stronger than wanting everything to be hyper-perfect at launch.
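That masking idea is basically a per-pixel depth comparison. A minimal sketch with NumPy, assuming you already have a depth map from the sensor and a depth buffer for the rendered hologram (all names here are illustrative, not any real HoloLens API):

```python
import numpy as np

def occlusion_mask(sensor_depth, hologram_depth):
    """Return a boolean mask: True where the hologram should be drawn.

    sensor_depth   -- per-pixel distance (m) to the nearest real object
    hologram_depth -- per-pixel distance (m) of the rendered hologram
                      (np.inf where there is no hologram content)
    """
    # Draw a hologram pixel only if no real object sits in front of it.
    return hologram_depth < sensor_depth

# Toy 1x3 scene: a hand at 0.5 m covers the middle pixel,
# hologram rendered at 1.0 m everywhere.
sensor = np.array([[2.0, 0.5, 2.0]])   # hand in the middle
holo   = np.array([[1.0, 1.0, 1.0]])
mask = occlusion_mask(sensor, holo)
print(mask)  # [[ True False  True]] -- middle pixel hidden behind the hand
```

Even a crude mask like this would let a hand pass "in front of" a hologram; the hard part in practice is doing it at display frame rate with a noisy, lower-resolution depth sensor.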
The HoloLens has two Kinect cameras built into it. It does map out and mask all objects it sees in front of it.
So, isn't the quoted a non-issue then?
The comments.
The demo was embellished. I wouldn't go so far as to call it fake, or to call Microsoft liars.
In the original demo, when HoloLens was first shown with the second camera rig setup (same as today), the presenter said that the camera was as though it had its own HoloLens mounted on it. He was literally telling the audience that whatever the camera guy sees is how the actual HoloLens unit will work.
That seems a tad more than embellishment.
> What's it like to go through life being this jaded? You must be a riot at parties.

Jaded? I'm one of the biggest technophiles you're likely to encounter. As such, I have a fairly good grasp of what is and isn't possible with current and near-future technology. Kind of like how, after years of intense training, a sommelier can discern the difference between a fine Chardonnay and a bottle of wombat piss. The ironic thing is, you know full well you're drinking wombat piss, yet you eagerly gulp it down and say, "C'mon, guys! If you pretend it's Chardonnay, it's delicious!"
> In theory, the Kinect on the front of the headset should be able to do just that, but if they haven't demonstrated it yet, my guess would be that putting the theory into practice is far easier said than done.

Yes... there's a specific name for it... like pixel occlusion or something. But you can't have anything actually be in front of the hologram... it is overlaid on top of everything, unless they can make the image "disappear in the shape of your arm" when you wave your hand around in front of it.
If MS can solve the occlusion problem... then yeah, it'll be amazing... but they have yet to show a demo with occlusion.
> I wouldn't go so far as to call it fake,

It was completely fake. The "game" was responding to commands the guy hadn't even issued yet, FFS.
> or to call Microsoft liars.

Fair enough. The clip in the OP was cut off a bit, so I didn't see how it was introduced. Did they say something like, "We'd like to show you a little play that gives you an idea of what we might be able to achieve in the future," or did they say something like, "And we're going to demonstrate it for you right now"? I didn't see any kind of "Product Vision" disclaimer in the clip.
> It was a good conference apart from this part, which was just MS falling back into bad old habits.

Yup, and as you said, same ol' MS. Just like they hurriedly purchased Kinect and made a bunch of fake videos for it when they got wind that Sony were getting ready to launch Move.
Clearly a lot of smoke and mirrors, years away from becoming a product, and only in the conference as a diversionary tactic for Sony's Morpheus showcase coming later.
Looks super cool and super fake.
Why are so many people stretching to find negatives?
> Why are so many people stretching to find negatives?

How is it stretching, exactly?
In theory, the Kinect on the front of the headset should be able to do just that, but if they haven't demonstrated it yet, my guess would be that putting the theory in to practice is far easier said than done.
The demo is bullshit. Everyone who used the actual units at Build says that the POV is limited to a small window dead center that clips holograms as they go out of view. Maybe MS is trying to represent the end game (2-3 years down the road), but if people expect what they showed today anytime before that, prepare for disappointment.
> The problem is that it COULD be done... but hasn't so far. Occlusion is the make or break point... I also assume it would take a LOT of processing to get fairly decent 1:1 image motion... anything that passes in front of the hologram would need the hologram rendered MINUS the "object" passing in front... for the entire time it's in motion.
> The masking would need to omit that section of the processed hologram... with any lag... you would always appear to clip behind the image...
> MS isn't showing this... why? They do a precision dance every time it's demoed so things never end up in front...
> It will probably be as precise as Kinect 1.0 was... ie: not very.

How do you know? As for all that processing time needed, it *already* does full depth sensing on the entire scene in real time and renders based on the positional changes. Are you seeing a ton of latency in the motion? And why would it be only as precise as Kinect 1.0 when they have Kinect 2.0 technology?
How does that make the demo bullshit? As far as I can tell, the camera usually (but not always) stayed far enough away that the entire table would always be within its POV (remember, POV isn't a tube, it's a cone). Also, this isn't Build; it's very possible they've improved that, since the very first prototypes didn't have the POV problem. And how do you figure "2-3 years down the road"? Are you an expert in the technology they are using, so that you can say without a doubt that the POV problem is so technically hard that it'll take 2-3 years to fix?
> They have successfully overcome most of the issues that should make a product like HoloLens impossible.

If you don't include the most important one, occlusion, then yeah, they have. They've not shown any solution for that yet.
This tech is not even close to being released, at least not with Xbone compatibility; it seemed like a preemptive demo against Morpheus.
Outside of gaming, and once those issues above are fixed, I would buy one just for virtual screens.
Do you mean occlusion by real-life objects? Like if you ducked down really low, the bottom of the Minecraft level would be occluded by the edge and underside of the table? Because that works right now.
Finally, I believe that VR is being rushed to market. The resolution and pixel density of the DK2 were not impressive, and I'm worried that solutions coming out next year won't actually have much of an advance in display technology.
> The problem is that it COULD be done... but hasn't so far. Occlusion is the make or break point... I also assume it would take a LOT of processing to get fairly decent 1:1 image motion... anything that passes in front of the hologram would need the hologram rendered MINUS the "object" passing in front... for the entire time it's in motion.
> The masking would need to omit that section of the processed hologram... with any lag... you would always appear to clip behind the image...
> MS isn't showing this... why? They do a precision dance every time it's demoed so things never end up in front...
> It will probably be as precise as Kinect 1.0 was... ie: not very.

Yes, we're in agreement. There's the processing power involved with processing the information from the embedded Kinect, and also just the general issue of frame rates. IIRC, K2 only runs at 30 Hz even on the Bone, and they'll need much higher refresh rates to avoid "z-buffer artifacts" when physical objects pass in front of virtual objects.
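The lag-clipping worry above is easy to put numbers on. A back-of-the-envelope sketch, using assumed figures (a 30 Hz Kinect-class sensor and a casual hand speed, not measured HoloLens specs):

```python
# How far does an occlusion mask trail a moving hand if the depth
# data is one sensor frame stale? All numbers are assumptions.
sensor_hz = 30                 # Kinect-class depth camera refresh rate
latency_s = 1 / sensor_hz      # mask is at best one frame old
hand_speed_m_s = 1.0           # a casual hand wave

trail_m = hand_speed_m_s * latency_s
print(f"mask trails the hand by ~{trail_m * 100:.1f} cm")  # ~3.3 cm
```

A 3 cm sliver of hologram drawn over the leading edge of your hand every frame is exactly the kind of clipping artifact being described, which is why a higher-rate mask (or prediction) would be needed.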
> How does that make the demo bullshit? As far as I can tell the camera usually (but not always) stayed far enough away that the entire table would always be within its POV (remember, POV isn't a tube, it's a cone). Also, this isn't Build; it's very possible they've improved that, since the very first prototypes didn't have the POV problem. And how do you figure "2-3 years down the road"? Are you an expert in the technology they are using, so that you can say without a doubt that the POV problem is so technically hard that it'll take 2-3 years to fix?

When they pushed in on the window, the virtual objects filled the entire view. That's far from what's actually achievable.
> How do you know? As for all that processing time needed, it *already* does full depth sensing on the entire scene in realtime and rendering based on the positional changes, are you seeing a ton of latency in the motion? And why would it be only as precise as Kinect 1.0 when they have Kinect 2.0 technology?

Does it? I mean, I know that's what it's supposed to do, but is there any evidence it's actually doing that yet? I know some members of the press were treated to private demos. Did anyone try kicking the coffee table out from under the game board? What happened? Did the board move with the table? Did it fall to the floor? Did it keep chugging along as though nothing had happened?
> The tethered devices they had the media use in January had the FoV they were promising in the camera setup. They should have been more upfront at BUILD about the challenges they are working to overcome in the portable unit, as Oculus has always done with their Rift.

Are you sure about that? I thought I remembered reading that none of the actual demo units have had the FOV they've been "demonstrating," but the tethered unit wasn't nearly as bad as the untethered unit.
> Do you mean occlusion by real life objects? Like if you ducked down really low, the bottom of the minecraft level would be occluded by the edge and underside of the table? Because that works right now.

So, the virtual objects are occluded by the furniture, but not by your body? That would support my hypothesis that they aren't actually scanning the room in real time, and that even the private demos were using a pre-mapped room. So, more fakery, even in the "real" demos, basically.
So if you waved your hand in front of the level... it would be in front of the hologram?
That's my only complaint... if that's taken care of, I'll be hyped.
Complaining about the field of view is like complaining about image resolution in VR: sure, it's an issue, but it's missing the point of why the tech is impressive. And just like resolution, it will improve in the future, maybe even before the thing is released.
I'm 99% sure this was fake as hell, just like when Kinect got revealed with that "amazing" Star Wars demo.