MightyHedgehog said:
Sure, you can track where the Move wands are...and those act as your hands, but unless you're going to require the well-lit playing environment that a normal RGB cam, like the PS Eye, needs in order to see the player, the Move ball points are only going to provide a rough reference for guessing where your arms, shoulders, neck and head are. Natal has no such weak point, since it can work in the dark or in very bright and/or unevenly-lit rooms thanks to the depth pickup inherent to its design.
I'd hold your horses on assumptions about Natal's (in)sensitivity to lighting conditions. Last I read, the tracking uses RGB data too, and that automatically implies at least some dependency on the quality of that data, and thus on lighting conditions. It may not be exclusively dependent on RGB like the Eye is, but I wouldn't assume independence from it yet based on what's been reported.
I also think the range of lighting conditions the PS Eye is usable in, in this context, is reasonable enough for devs to rely on it if they wish.
MightyHedgehog said:
So, Move could have more accurate sub-pixel positioning of the wands to represent your hands, but it has to rely on buttons and triggers to act in the normal video game fashion, and it has limited to no sight beyond that unless you want to require well-set-up lighting that I find it impossible to believe most people already have in their gaming environment.
EyeToy didn't sell into millions of homes because those homes were perfectly lit. It's more robust than is sometimes suggested, and the PS Eye has made advances over it as well.
As for buttons, you say it like it's a bad thing, but buttons at the fingertips are an advantage. If you need to trigger something very reliably, a button is still by far the best way to do it.
MightyHedgehog said:
The detected positions in the space in front of the camera for the wands' balls, plus their feedback of angle, are the only real data to rely on in all lighting situations...and it will have limited accuracy for everything that's estimated from that data.
This is true: turn off the lights and you just have the spheres. But turn the lights on and, again, I think the typical operating environment would be good enough that it wouldn't prevent a dev from looking at data beyond the wands.
I mean, I know I'll get some reply saying 'I had to set up 20 1000W spotlights in my living room before it would work!', but I think most people are typically able to get these things working OK in their environment. A consumer product wouldn't have already spawned from this tech, and been successful in its own way, if lighting were some huge showstopper.
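For what it's worth, the reason the spheres survive in the dark is that a glowing ball of known physical size lets you recover a full 3D position from a single camera with basic pinhole geometry. Here's a minimal sketch of that idea; the focal length and ball radius are illustrative values I've assumed, not real PS Eye specs:

```python
# Sketch: recovering a Move ball's 3D position from one camera frame.
# Pinhole-camera model; all constants are illustrative, not real specs.

BALL_RADIUS_MM = 22.5        # physical radius of the glowing sphere (assumed)
FOCAL_LENGTH_PX = 540.0      # camera focal length in pixels (assumed)
CX, CY = 320.0, 240.0        # image centre for a 640x480 frame

def ball_to_3d(u, v, r_px):
    """Given the ball's image centre (u, v) and apparent radius r_px in
    pixels, return an (x, y, z) estimate in mm, camera-relative.
    Depth follows from similar triangles: z = f * R / r."""
    z = FOCAL_LENGTH_PX * BALL_RADIUS_MM / r_px
    x = (u - CX) * z / FOCAL_LENGTH_PX
    y = (v - CY) * z / FOCAL_LENGTH_PX
    return (x, y, z)

# A ball seen dead-centre with a 12-pixel apparent radius sits ~1 m away:
x, y, z = ball_to_3d(320.0, 240.0, 12.0)
```

The key point is that none of this needs the room lit; the ball is its own light source, which is exactly why everything *beyond* the two spheres is where the lighting question starts to matter.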
MightyHedgehog said:
Natal can pick up your entire body because it's tracking you to match a skeleton, mapping an on-screen avatar/game character (or just virtual hands and such) to it, while doing things like pinning your position in front of the camera to a point in the game environment. That means that any number of simultaneously-tracked stances, movements, and special gestures can be acted upon and considered as input for your game character at the same time, with more natural movement and control, because you aren't limited to buttons and sticks.
*snip*
In any case, you could be tracked with Natal doing multiple things at once with just your body and still communicate specific spatial relationship to the game world at the same time, like walking along a virtual path on-screen that corresponds to the space in front of the camera...sort of like augmented reality without the need for the looking-at-myself-in-the-mirror effect as your avatar/game character would face into the screen, too, if that's how it was designed.
The sum of your points here seems to be that Natal can track you doing multiple different things simultaneously...but that's not really a unique characteristic. If I jump to the left and turn my head at the same time, a PS Eye (with the right software, like the demo Marks did) would be able to detect that.
The bit I snipped, about putting your leg or foot in different areas around you to indicate movement in that direction: that is something the PS Eye would certainly have problems with.
MightyHedgehog said:
Move is limited primarily by its weakness of standard camera pickup which needs adequate lighting to see you and has no consistent way to track space and distance without the wands and those only represent two points, one for each controller if you are using them. It could guesstimate distance to the camera by comparing relative size versus a capture reference point before the session begins. Natal has much better vision in both hardware and software because of its focus on nothing but the player without the aid of devices or props...which it could still use if desired. It's easy to forget that there's nothing wrong with the possibility of Natal plus controller, too.
True, although that wasn't part of the comparison that was offered. Regarding spatial awareness of your body, though: the Eye's relative weakness here is depth, but it's not helpless. If I step back, my torso gets smaller but stays in proportion. If I step forward, it gets bigger. If I lean forward, my head gets bigger and my torso shorter (or it disappears, depending). If I step backward, my head gets smaller and my torso shorter. It is possible to track these changes and correlate them to the corresponding body movement. The difference with the Eye is that the granularity of depth-change detection is going to be coarser...but whether that yields a game-breaking difference is a different question.
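The size-scaling point boils down to one relation: apparent size is inversely proportional to distance. A toy sketch, assuming you've measured the torso's pixel height in a calibration frame (the numbers are made up):

```python
# Sketch: inferring coarse relative depth from how big the player's
# torso appears. A calibration frame supplies the reference height.

def relative_depth(ref_height_px, cur_height_px):
    """Return estimated distance as a multiple of the calibration
    distance. Apparent size scales inversely with distance, so a torso
    that shrinks to half its calibrated pixel height is roughly twice
    as far from the camera."""
    return ref_height_px / cur_height_px

# Calibrated at 200 px tall; the torso now spans 160 px:
d = relative_depth(200.0, 160.0)   # -> 1.25x the calibration distance
```

Obviously this is far noisier than a real depth sensor (segmentation error, clothing, leaning all pollute the height estimate), which is exactly the coarser granularity I mean.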
A lot of EyeToy games didn't even try to identify parts of the body discretely; they just looked for motion. But as we've seen with Kung Fu Live, at least, segmentation of the body and labelling of parts with bounding boxes is possible with the Eye.
MightyHedgehog said:
Oh, I know the basic gist of what I described could be achieved with Move, just as it could be done in a rougher fashion with Wii.
No, it could be done a lot better than on Wii.
Wii is entirely blind, and the original Wiimote, at least, is like a man who is spun around every few seconds, with only passing relative knowledge of where he is. With the Eye and Move you can know very accurately the position of the controller (your hands) in the space in front of the TV, and you can also track head/torso, and even legs (though in a much more limited way). The difference, of course, is that for the parts of your body that pass in front of other parts (arms and legs), you're going to have difficulty with the Eye. With a Move in your hand, though, that's solved for the arms, and then the only difference I can see between what's being done in Ricochet and what was done in Marks' 'robot' demo, for the upper body, is how well elbows are dealt with. Elbows are generally a pain in the ass for any tracking system because there's so much potential for occlusion, so Marks' demo doesn't even bother with them. It'll be interesting to see just how reliably Natal 'gets' elbows.
MightyHedgehog said:
There's nothing stopping prop-use with Natal, but even without it, the accuracy of position should be more than good enough for most gameplay scenarios, IMO. What if I want to elbow someone? Shoulder charge them? Grab them with my arms? Use a palm strike as opposed to a close-fisted jab, or even alternate between the two? What if that palm could be tracked to trap and capture an incoming strike, as opposed to a closed fist that is detected as a purely offensive move? Then there are my legs, and the ability to dodge an incoming attack with the more realistic flexibility that full skeletal tracking of my whole body's movement would allow. What if I want to hop over a sweep kick? Or how about the crane kick from Karate Kid? :lol Joking...
Some of those things are problematic with Move. Legs, for example, because we can't really ask you to strap Moves to them (whatever Sony's patents might say). But a shoulder charge? Sure (at least where there's overt translation of the shoulder/torso). Opening and closing the palm? If that were required and I was making a game for Move, I'd map that to a button. It'll probably be far more reliable and unambiguous than the hand-state detection Natal offers as your hand moves around and becomes more or less visible (and funnily enough, in the Marks robot demo the trigger was indeed mapped to opening and closing the palm, and because it's nice and analog you can get a nice range of open-ness and closed-ness). An elbow? Very tricky with Move. Jumping, I think, could actually be detected with the Eye. Ducking certainly.
MightyHedgehog said:
Sure, man. But again, it's a bit too simple to simply lop off the level of body tracking and estimation Natal brings to the table in order to make Move look better in the comparison.
I mean, you just made the poor on-screen character lose the ability to use his legs and body to hit and block with.
I'm not, and didn't intend to. I intended to compare what the Eye and Move in total could offer beyond just the hands, though I thought it worth noting that for the hands, having a controller is, I think, an advantage, and will probably remain so for a while. As I said in my first post, the rest of the tracking is not as good or as general as Natal's, but it's also not absent, and if you want to make a game with more full-body awareness in tandem with the controllers, you definitely have options.
As I also said, though, Sony isn't as interested at the moment in encouraging development along those lines, or so it seems. They seem to be focussing on building a bridge to the Wii, which might make strategic sense, since that's where the target content is right now. But in a technological comparison it sits a bit more in the middle between the two approaches (camera | controller).