I thought it'd be interesting to look back at all the vision/prototype videos we saw of Kinect/Move last year and compare them to the final tech specs that are rolling out to market this holiday season. How close did they come to their original concepts?
Link is below, as well as the full article sans images.
http://www.gamasutra.com/blogs/DavidGalindo/20100903/5883/Kinect_and_Move_from_Vision_to_Retail.php
As we near this holiday season, Microsoft and Sony are gearing up their retail plans for a battle of motion control peripherals. It was just a year ago at E3 2009 that MS and Sony gave the public their first view of Natal and the Sony Motion Controller, with each company giving its pitch at its own conference.
Now that the official, final tech specs have come out for both devices as they near their shipping dates, I thought it would be interesting to go back and see just how close Kinect/Move came to their original visions. I'll be comparing what we know about motion control now versus what we saw in the concept and tech demos just a year ago.
The first thing to note is that Sony had the advantage of showing no conceptual footage, only tech demos. In that regard, those demos were already up and running; there's not a vision here so much as a foundation for something better and greater. What we didn't know about the tech demos was whether they could be easily applied to games, or how much of the PS3's processing power they were using up. Kinect, or Natal at the time, showed off a bit of the tech but demonstrated the majority of its features via a concept video (a "vision video," as they called it).
So, let's start with the first release of the holiday season, the Playstation Move: how close did it come to the original E3 presentation?
Playstation Move
Demoed by Richard Marks and Anton Mikhailov of Sony, the Move was showcased via a variety of tech demos over a roughly ten-minute presentation at Sony's E3 2009 conference.
3D Space and Interaction
The Vision: We see the controller being manipulated within a 3D space, backwards and forwards, side to side, with a level of tracking we've never quite seen before.
The Reality: Tumble (PSN) has taken the concept and built a game around the kind of precision seen in the demo. While we haven't seen any object molding/transforming quite like the tech demo, it wouldn't be outside the realm of possibility.
Augmented Reality
The Vision: Using the Playstation Eye camera as a video feed, players see themselves on the screen holding a 3D object tracked in real time. You can rotate it, interact with it via the Move's buttons, and use it as an object within the game space.
The Reality: Probably the only case where a tech demo was translated nearly flawlessly into an actual game, Start the Party uses these features in a variety of minigames and party modes.
Body Tracking
The Vision: As Anton walks around the stage, the game (in a first-person viewpoint) follows Anton's body as he moves close, far, and side to side. He's not simply stepping slightly in one direction; he's actively walking around, with the Move serving as the gun/camera orientation.
The Reality: The camera could be implemented easily enough, but the body tracking would be a bit more difficult. Is it being done via the Eye's tracking, or via the positioning of the Move itself? Using the Eye would be prone to the usual problems seen in earlier Eye-only games, such as difficulty tracking a player in low-light conditions. Using the Move would be a better solution, but can it track depth to the extent that Anton was demonstrating on stage? If so, that's rather impressive, but aside from head tracking there haven't been any games that utilize this demo in the way it was shown.
Controller Utilization
The Vision: A variety of different Move implementations shown in painting, in an RTS-type situation, and more.
The Reality: Games like RUSE have taken advantage of the Move, but none of what was shown in the demo was very advanced tech-wise; it was more a demonstration of the kinds of implementations Move might be able to provide in normal, everyday games.
Sports Demo and 1:1 Combat
The Vision: Multiple scenarios in an arena-type environment are shown. There's archery, blade throwing, and sword/shield swinging.
The Reality: All of these demos have been implemented in Sports Champions. Watching the videos, you could say it's a near-perfect port of the tech demo into an actual game, but there are some differences. The sub-millimeter precision in the tech demo has been altered in Sports Champions; while you could control how far back you pulled your bow in the tech demo, in SC you're forced to fire at 100% tension every time. The arena battles aren't quite as 1:1 as the demo was either, but they're very close. Were these changes made to make the game friendlier to gamers, or were they a result of the software changes needed to make the game run as smoothly as possible? It's not quite clear right now, but regardless, it's still a very strong effort.
Move: Final Thoughts
It's clear that Sony has been polishing the Move and working with devs to give the controller the best support it can get out of the launch gate. It's quite amazing to see all of the demonstrations shown just a year ago matched right at the release date; it certainly bodes well for all the new tech demos we've seen since then. Perhaps Sony has learned from the pie-in-the-sky expectations that the pre-rendered Killzone 2 trailer fiasco caused; no matter how graphically solid the game became, the general media would always compare it to the E3 trailer. Move debuted with fairly high expectations grounded in reality (i.e., the tech demos running in real time), and met them on launch day. You could argue about the quality of some of those launch games, but regardless, it's still quite a feat to pull off.
Microsoft's Kinect
Kinect debuted under the Natal project name at E3 '09, with a few game demos shown before throwing to a four-minute trailer of features. Unlike Move's, the footage was all conceptual, but presented as a feature list. So how many of these features made it to launch? And can the rest be added down the line?
It's important to note that this concept video was shown as a perfect 1:1 representation of the person using Kinect; obviously this can't be done without some lag on any motion device on the market today, so I'll be ignoring that aspect of the video.
Voice Communication/Fighting Gameplay
The Vision: The player walks into the living room as the virtual challenger looks on, turning his head to track the player's movement across the room. The challenger addresses the player by name, exchanges a few words, and then the fight begins. The player dodges punches left and right, then kicks in the air and sends the challenger flying.
The Reality: This is our first look at what Kinect might be able to do, and it's actually a very realistic representation of its features. Kinect can very easily track the player's body so that the virtual challenger could turn his head and follow them, and even having the challenger address the player by name is feasible, using a databank of names much like the racing game GRID did (by selecting a name, you'd hear the announcer say it during certain menu options and scenarios). The small clip of fighting is also easy for Kinect to pull off using its body-tracking capabilities.
Virtual Peripherals
The Vision: A family watches together as a player mimics holding a steering wheel and makes shifting/pedal motions. She then pulls into a pit stop, where it's another player's turn to jump up and act as the pit crew: unfastening the lug nuts, placing the tire outside the TV area, then grabbing another one to put back on the car. Then the car races off as the first player gets behind the wheel again.
The Reality: Controlling a car in this manner has been a rather popular tech demo for Kinect; we've seen it implemented in Burnout Paradise, Forza 3, and the launch title Joyride. The pit-crew mechanic should be no problem for Kinect either, except for the player standing up to play the pit game. Kinect seems to have a bit of trouble when the player keeps shifting between seated and standing positions; the easier and more comfortable solution would simply be to play the pit mechanic while seated. Aside from that, this vision has been easily accomplished.
Body Tracking/Monster Gameplay
The Vision: The player clomps around the room as a giant monster fills the screen, swinging his arms around to swat at planes before yelling, which causes the monster to emit a laser beam from its mouth.
The Reality: Again, most of the mechanics shown here (arm tracking, etc.) can easily be pulled off by Kinect. Even the yelling, while not tracked by Kinect's camera, could be picked up by its microphone and trigger the laser beam in the game. The only problem is the way the player walks closer to the TV as the monster walks downtown; Kinect requires a minimum distance from the TV, more so than most motion control devices, since it has to track the entire body. Getting too close to the TV will result in your head or legs being cut off from the device's view, messing up the tracking in the process.
Full Body Motion Capture
The Vision: Two players face each other in a soccer goal showdown using split screen mode. One player kicks the ball, the other tries to block the shot, but the ball makes it in. Goalllllllllll!
The Reality: No problems here; full-body motion tracking is Kinect's bread and butter, as seen in games like Ubisoft's Your Shape.
Scanning Real World Objects
The Vision: The player brings his real skateboard into the living room and tells Kinect to scan it as he turns it from back to front. Kinect then imports that skateboard into the actual game, decals and all. The player then does some motion-based skateboarding.
The Reality: The video leaves a lot of questions. Did Kinect know it was a skateboard? How did it distinguish the skateboard from the background? The answer is that Kinect probably can't separate foreground from background accurately enough to make a 3D representation of an object; in fact, no consumer-level device can for the foreseeable future. While it's feasible that Kinect could use pictures as textures on objects, that wouldn't be anything new compared to normal web/game cameras, and the downgraded resolution in the final design specs (to 320x240) makes it all the more unlikely that scanning objects could ever be accomplished on Kinect. New design specs aside, this was a pretty far-fetched goal to begin with.
Facial Recognition/Video Chat and Interaction
The Vision: The player walks up to the TV, waves, and Kinect automatically signs in the correct user via facial recognition. She then video chats with a friend online, and the friend shows off a dress she might like by bringing up a menu on the 360. The player grabs the dress with her hand, throws it onto a picture of her body on the left side of the screen, and models it as the dress/body is rotated on screen.
The Reality: The waving gesture has made it into Kinect's launch-day features; however, the facial recognition has been swapped for the much easier-to-support voice recognition. Video chat has already been demonstrated, with Kinect able to track the player around the room. The friend showing off something she thinks her friend might like could easily be added, but the idea of an accurate 3D model of you modeling clothes is pretty much another idea in the clouds. While it could never be implemented the way the video showed, it could easily be done using your avatar and avatar outfits, which would make a lot more sense, too. But nothing like that has been announced so far.
Voice Recognition/Family Play
The Vision: A family sits close together on the couch, using their hands as buzzers. The virtual host asks a question, and one family member buzzes in first. They say the answer out loud, the game repeats the answer, and then it tells the player they're correct. The game then switches to another family playing the same game online.
The Reality: Voice recognition can be taken as far as Microsoft and other devs are willing to go; everything shown here can be done by Kinect. The buzzer gestures might be a bit tougher to implement: Kinect has some problems tracking that many players at once, and with a lightning-fast gesture like the buzzer motion, it would have to determine who was first and who that person is. It's not insurmountable, but it's going to be a tough thing to pull off. Then again, there's nothing here that a controller wouldn't fix, as nothing shown in this segment really requires Kinect in the first place.
Voice Commands
The Vision: A couple sits on their sofa, navigating the Xbox Dash with their hands. They find a movie and say "Play movie" out loud to Kinect. After the movie is finished, the Xbox is turned off with the "Goodnight" voice command.
The Reality: While voice commands will be part of Kinect's day-one functionality, they won't work within the normal dashboard; players will have to use a special Kinect Hub for voice commands. How the Hub differs from the normal Dashboard remains to be seen, but hopefully it's only a matter of time before MS makes the whole navigation experience voice-commandable.
Kinect: Final Thoughts
Despite a few completely unlikely scenarios, the original concept trailer has been realized rather admirably by Kinect. And while launch games might not use the majority of these features, it's definitely within Kinect's power to do so when devs find a way to use them. Kinect's launch might not have the gee-whiz feeling that Move has been able to accomplish with its tech, but it'll certainly be an original experience that draws even more people to the casual side of videogames; and the more people who play and buy games with us, the better.
-----------
Check out my indie game releases at http://www.vertigogaming.net. Thanks for reading!