chubigans said:
I don't disagree with you there...but the purpose of the article was if MS was able to get close to their original vision. They mostly succeeded. Whether or not this will turn into anything games wise is another discussion.
I personally thought you were a bit too generous here and too strict there. Above all, it feels as if this kind of article just doesn't do justice to either of the technologies, and a lack of technical understanding seems to be to blame. That's more than fine for a blog entry, mind, but personally I would categorise the types of experience shown into general techniques (say, augmented reality, 3D object recognition), compare them against what we now know about the tech to determine its theoretical possibilities and limitations, and then catalogue the examples of actual gameplay we can currently find out there.
With that in mind, let's look at a few details:
Body Tracking
The Vision: As Anton walks around the stage, the game (in a first-person viewpoint) also follows Anton's body as he moves close, far, and side to side. He's not simply stepping slightly in a direction; he's actively walking around while the Move is used for the gun/camera orientation.
The Reality: since we are following the direction of the gun, the only really relevant part here is the 3D location of the gun. In other words, the camera viewpoint and aiming reticle sit at the position of the tip of the Move controller. Since we can track the exact position and orientation of the Move controller in 3D space, you don't need any body tracking for this whatsoever.
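Concretely, once you have the tracked tip position and orientation, deriving an aim point is just a ray-plane intersection. A minimal sketch of the idea; the quaternion convention, plane depth and function names are my own illustrative assumptions, not anything from Sony's SDK:

```python
def quat_forward(q):
    """Rotate the local forward axis (0, 0, -1) by unit quaternion
    q = (w, x, y, z). Assumed convention: the controller 'looks'
    down -Z in camera space, as in OpenGL."""
    w, x, y, z = q
    # Apply the rotation matrix's third column to (0, 0, -1)
    fx = -(2 * (x * z + w * y))
    fy = -(2 * (y * z - w * x))
    fz = -(1 - 2 * (x * x + y * y))
    return (fx, fy, fz)

def reticle_on_plane(tip_pos, q, plane_z=-3.0):
    """Intersect the aim ray from the controller tip with a virtual
    screen plane at z = plane_z (metres, camera space)."""
    d = quat_forward(q)
    if abs(d[2]) < 1e-6:
        return None  # aiming parallel to the plane
    t = (plane_z - tip_pos[2]) / d[2]
    if t < 0:
        return None  # plane is behind the controller
    return (tip_pos[0] + t * d[0], tip_pos[1] + t * d[1], plane_z)
```

With an identity orientation and the tip at (0.1, 0.2, 0), the reticle simply projects straight ahead to (0.1, 0.2) on the plane - no body tracking involved anywhere.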
The first real body tracking shown for the Move in tech-demo form is actually the robot-overlay thing. It uses a combination of two Move controllers and face recognition / head tracking. Parts of this are actually used already in a few games. In table tennis, you can move closer to and further away from the table, which I believe was said to use face recognition to help determine your position, although since you play in first-person view, here too only the position of the paddle is relevant, so face tracking may have been forgone altogether in the end. In The Shoot, you can duck down and sideways, though again I don't know if this uses face tracking. Face tracking is in Gran Turismo 5, but there no Move is being used. The only game where I do think face tracking is actually used is The Fight: Lights Out. Of course there are also games like the Kung Fu one, but that one belongs more to object scanning, of which more later.
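For what it's worth, estimating player distance from a detected face is straightforward in principle: under a pinhole-camera model, the face's width in pixels shrinks in proportion to distance. A toy sketch; the focal length and real-world face width are illustrative guesses, not PlayStation Eye specifics:

```python
def distance_from_face_width(face_px, focal_px=600.0, face_m=0.16):
    """Pinhole-camera estimate of player distance in metres:
    the farther the player, the smaller the detected face.
    focal_px (camera focal length in pixels) and face_m (typical
    face width in metres) are made-up illustrative values."""
    return focal_px * face_m / face_px
```

So a face detected at 96 pixels wide would put the player at roughly 1 metre, and at 48 pixels roughly 2 metres - coarse, but plenty to decide how close you are standing to a virtual table.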
Controller Utilization
The Vision: A variety of different Move implementations shown in painting, in an RTS type situation and more.
The Reality: Beat Sketchers, EyePet, Socom 4, and Start the Party show the painting bit. The RTS situation shown at E3 2009 was very primitive compared to this year's version with the clever paint-selection mechanism, and RUSE contains a lot more than that, showing a really effective Move implementation (pull up/down to zoom out/in, move left/right to turn the camera, and so on, besides the regular selection options). Even if you find this vision unambitious (namely: this controller works well in a variety of more hardcore game types, not just a handful of party-game stuff), it has been fully realised - there is a wide spectrum of Move support in games out there (including, say, RUSE, the MAG beta, and Resident Evil 5) showing that the Move controller is indeed effective in these more hardcore genres.
Sports demo and 1:1 combat
You raise an interesting point. Although in earlier builds of Archery in Sports Champions I saw it was still in there, it may be that in the final game (I have no access to that) they have changed it, because the second Move controller controlling the arrow can too easily be blocked from the camera's view by the first one. In that case they may be relying on less precise measurements from the combination of gyro and accelerometer alone. (Was it already confirmed that this 100% force thing isn't just for the lower difficulty settings?)
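Falling back on gyro and accelerometer while the sphere is occluded is classic sensor fusion. A common generic approach is a complementary filter: integrate the fast-but-drifting gyro, then nudge the estimate toward the noisy-but-drift-free angle implied by gravity in the accelerometer. This is a textbook sketch under my own axis conventions, not Sony's actual fusion code:

```python
import math

def complementary_pitch(pitch_prev, gyro_rate, accel, dt, alpha=0.98):
    """One update step of a complementary filter for pitch (radians).
    gyro_rate: angular velocity around the pitch axis (rad/s).
    accel: (ax, ay, az) in g, z pointing up when the device is level.
    alpha close to 1 trusts the gyro short-term; the accelerometer
    slowly corrects the drift."""
    ax, ay, az = accel
    # Pitch implied by the gravity vector alone
    accel_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    # Blend gyro integration with the gravity-based correction
    return alpha * (pitch_prev + gyro_rate * dt) + (1 - alpha) * accel_pitch
```

Run per frame, a drifted estimate decays back toward the accelerometer's answer whenever the controller is held still - which is exactly why occlusion-only tracking is workable but noticeably less precise than the camera-assisted 1:1 tracking.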
Move: Final Thoughts
I think a little credit should perhaps also go not only to the sheer volume and breadth of games (almost 50 games supporting Move this year, if not more, covering control types ranging from mouse, touch, and lightgun, all with a high degree of success, as well as adding 3D pointing, 1:1 tracking and AR tech of its own), but also to the actual implementations, which in many games already go considerably beyond the tech demos before the hardware is even out. Kudos to EyePet in particular for doing a number of things that were shown in Milo, like taking a real-life drawing from you and bringing it into the game, and even bringing it to life in the form of, for instance, a car you can drive yourself with the DS3 as a remote control while the EyePet chases it (and EyePet already did this in 2009, when the European version released - I've recently been playing it again and look forward to the Move version).
As for all the Wiimote vs Move vs EyeToy discussions, Richard Marks made this pretty clear himself: for the longest time he and his team believed that controllerless gaming was the holy grail, and they tried and tried to get it working well enough for a large variety of games. They couldn't figure it out - even PrimeSense (which was shown to Sony and Nintendo before it came to Microsoft; there are ancient videos out there where Sony tries out the tech) didn't match up to their own requirements. Then Nintendo came along and brought out a controller-based system that proved to be wildly popular. The lesson for Marks' team: we don't actually have to get rid of the controller completely. As long as we keep it simple enough, plenty of 'casuals' still get it, and get it in a big way, even with the limitations of Nintendo's pioneering device. So credit, from Marks himself, where credit is due. At the same time, there should be no mistake whatsoever that the Move takes things considerably beyond what Nintendo is doing. It may be confusing for onlookers that the Move can replicate everything the Wii and Motion+ can do (bar mimicking a phone), but you really have to want not to look to say that the Move doesn't go considerably beyond that, with persistent and highly accurate 1:1 tracking (and pointing), lag-free augmented reality, analog triggers, quality rumble and visual feedback.
Going to Kinect:
Voice Communication/Fighting Gameplay
You give Microsoft too much credit here. The technology is not fast enough to do this at this point - look at Ubisoft's fighting game, which ended up doing things completely differently. The lag on full-body motion tracking is simply too high for a fighting game, unless you're simulating a fight where both fighters are already worn out, beaten to a pulp and very, very tired. More shocking (but also to some extent redeeming) is that a surprising number of games forgo the laggier skeletal tracking offered by Kinect's SDK and instead process the 3D point cloud themselves. When it comes to voice recognition, this has always been about libraries in the first place. Several games, including SingStar on the PS3, and these days even phones (the iPhone is pretty good at this, I'm finding), have done this in the past. The vision was more than anything that Microsoft could do it using the Kinect's microphones and offer an SDK for all developers to use. Right now, however, they seem to struggle even to get it into the NXE in time and everywhere, and can only offer a few languages on day one. So yeah, while in theory you can fight using your own body, in practice no game will be able to use it the way we were promised, and the same as yet holds for voice recognition.
Virtual Peripherals
I'd say mostly a success. You cannot do it sitting down as yet, and even when you can, there is currently no hint that the game will be able to track your leg for acceleration. Even when standing, that seemed to work in nothing more than an on/off manner, hence both Joyride and Forza so far showing only auto-acceleration gameplay. Kinect can't track anything from the elbow down either, so that poses additional limits. As multiplayer is confirmed to be limited to two players, that amounts to a long list of reasons why Kinect would not be able to do four players simulating pressing a quiz buzzer. It would have to be something like raising hands - a manual interpretation of the point cloud may be able to do this, perhaps. Against the scope of the ambition all this may seem almost trivial, but these are still far larger limitations than any qualification we have had to make on Move functionality as it was presented to us.
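That raised-hands idea really is about as simple as point-cloud interpretation gets. A crude sketch of what such a manual check could look like - the thresholds and the assumption that you already know each player's head height are mine, not anything from the Kinect SDK:

```python
def hands_raised(points, head_y, margin=0.15, min_points=25):
    """Crude raised-hand check on a depth point cloud.
    points: list of (x, y, z) in metres, y pointing up.
    head_y: tracked head height for this player (assumed known).
    A hand counts as raised if enough points sit clearly above
    the head; margin and min_points are illustrative thresholds."""
    above = [p for p in points if p[1] > head_y + margin]
    return len(above) >= min_points
```

A cluster of thirty points 30 cm above a 1.7 m head height would register as a raised hand; a few stray points, or points merely level with the head, would not. Robust per-player segmentation is of course the harder part that this sketch skips.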
Full Body Motion Capture
No real comments here, other than the qualifications already made above: fewer body points are tracked, and bodies can't reliably be tracked unless standing clear of anything around you, and there's a decent amount of lag limiting its real-time use.
Your use of Your Shape as an example is unfortunate though, as it doesn't actually appear to use full body motion capture; it analyses only what it needs straight from the point cloud itself (as the exercises are pre-defined, it just needs to match them against what you're doing, which is relatively easy). Dance Central is the best example of appearing lag-free that you're ever going to get, I think, as it very cleverly hides the lag by not showing your movements on screen at all, and instead ranking your movement against the dancer showing you what to do. A small delay there is not a problem, because your real-time movements are never mirrored on screen. The game can also predict whether you'll have done well enough (by matching the first 90% of a move, say), so it can give feedback right at the end of the movement, further reinforcing the appearance of lag-free gameplay. Excellent work, and the game deserves all the recognition it's been getting.
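The early-judging trick described above can be sketched in a few lines. This is an illustrative reduction of the idea - score a move on only its first 90% of frames so the verdict is ready the instant the move ends - and emphatically not Dance Central's actual algorithm; the frame representation and tolerance are assumptions:

```python
def move_score(player_frames, reference_frames, early_fraction=0.9, tol=0.25):
    """Judge a dance move using only the first early_fraction of its
    frames, so feedback can appear right as the move ends.
    Frames here are pre-computed per-frame pose values; a frame
    'hits' if it is within tol of the reference. Returns the hit
    ratio in [0, 1]."""
    n = int(len(reference_frames) * early_fraction)
    errors = [abs(p - r) for p, r in zip(player_frames[:n], reference_frames[:n])]
    hit = sum(1 for e in errors if e <= tol)
    return hit / max(1, len(errors))
```

Because the last 10% of frames never affect the score, the system can flash "Flawless" at the exact beat the move finishes, even though the sensor data for those final frames is still in flight.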
Scanning Real World Objects
Actually, isolating an object as shown in the picture should be fairly easy for Kinect. The point cloud is 3D, and this helps a great deal - see also the Yoostar developers' discussion of this matter. However, as you point out, the resolution of the point cloud is only 320x240, which is indeed a limitation and gives a really jagged outline of what to grab (which is exactly what an early version of Yoostar on 360 showed). In the end you still need a clever algorithm that uses both the point cloud and the regular camera feed to get a nice image. We haven't seen it used like that yet, though you could perhaps take the Your Shape demonstration at E3 2010 as an example, where the demonstrating lady takes off her vest and you can see it separate from her and then disappear. It shows that, with the resolution qualifications we made, it should be possible.
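To make the jagged-outline point concrete: a depth-band threshold gives you the silhouette almost for free, but upscaling that 320x240 mask to the colour feed's resolution magnifies the staircase edges. A toy sketch, with invented data shapes rather than real Kinect API calls:

```python
def depth_mask(depth, near, far):
    """Segment an object by depth band from a low-res depth frame
    (rows of distances in metres). True marks pixels inside the band.
    This is the easy part: the 3D data separates the object cleanly."""
    return [[near <= d <= far for d in row] for row in depth]

def upscale_nearest(mask, sx, sy):
    """Nearest-neighbour upscale of the boolean mask by factors
    (sx, sy) toward the RGB camera's resolution. Note this preserves
    and magnifies the jagged edges - fixing that is where a clever
    depth+RGB matting algorithm would have to come in."""
    return [[mask[y // sy][x // sx]
             for x in range(len(mask[0]) * sx)]
            for y in range(len(mask) * sy)]
```

Each low-res mask pixel just becomes an sx-by-sy block in the output, which is exactly the blocky cut-out an early Yoostar build showed; a production version would refine the mask edges against the full-resolution colour image.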
The other topics are covered already above (phew, as this post is getting LONG)
Kinect: Final Thoughts
I think Kinect can be very, very powerful in the two very big genres of dancing and fitness, and that may already be more than enough to make this device a big success. What its applications will be beyond that, though, I have fairly strong doubts about, and I can't help feeling that it will take adding a motion controller with buttons to make it really great.
Here's why: let there be no mistake that both Kinect and Move bring in highly valuable new control mechanisms. Full body tracking is a small miracle; a precise, practically lag-free 3D pointer is another. However, people's most valuable 'controllers' are their hands, with their ability to handle tools, and all our most important interfaces with computers, machines, tools, cars and whatnot depend on those more than anything else. The Move covers by far the larger part of this toolset, with accurate and immediate representation of whatever we do to manipulate its position with our fingers and wrists, plus the ability to apply pressure from two sides through the analog Trigger and Action buttons.
So, for general interfacing, and for games in particular, I highly prefer Move for now, and dream of a future that gives us both technologies on one platform.