I don't think it's unreasonable to have to prompt the system with a simple voice command, like "XBOX, menu," to 'wake' it before it acquires your next gesture, so it can distinguish incidental movement from a motion meant to navigate a menu. Even if the system can't detect your full skeleton because you're partially outside the trackable volume in front of the camera, it should still be able to work by locating your head relative to another tracked point while singling out an outstretched arm and hand for navigation, or by falling back to arms/hands-only tracking and input. Again, if this is an issue, it's something they'll have to work into the Kinect and game software before launch and probably tweak post-release. The system should also be able to capture the play volume and everything in it (sofas, chairs, etc.), store a map of that space while it stays unchanged, and adapt to whatever does change. If that's not possible, MS didn't do their homework well enough for their intended applications.
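To make the wake-command idea concrete, here's a minimal sketch (in Python, with entirely made-up class and method names, not anything from the actual Kinect SDK) of how a voice-gated gesture recognizer might work: all motion is ignored until a wake phrase arms the system, one gesture is accepted within a short window, and then the gate closes again.

```python
# Hypothetical sketch of voice-gated gesture input: movement is ignored
# until a wake phrase (e.g. "XBOX, menu") arms the recognizer, so casual
# motion isn't mistaken for menu navigation. All names here are invented
# for illustration; this is not real Kinect SDK code.

class GestureGate:
    def __init__(self, timeout_frames=150):  # ~5 s at 30 fps (assumed rate)
        self.armed = False
        self.frames_left = 0
        self.timeout_frames = timeout_frames

    def on_voice(self, phrase):
        """Arm the gate when the wake phrase is heard."""
        if phrase.lower() == "xbox, menu":
            self.armed = True
            self.frames_left = self.timeout_frames

    def on_frame(self, gesture):
        """gesture: a recognized motion label, or None for idle movement.
        Returns the gesture to act on, or None if it should be ignored."""
        if not self.armed:
            return None          # system is 'asleep'; ignore all motion
        self.frames_left -= 1
        if self.frames_left <= 0:
            self.armed = False   # window expired without a command
            return None
        if gesture is not None:
            self.armed = False   # consume one command, then re-sleep
            return gesture
        return None

gate = GestureGate()
print(gate.on_frame("swipe_left"))   # not armed: ignored, prints None
gate.on_voice("XBOX, menu")
print(gate.on_frame("swipe_left"))   # armed: prints swipe_left
print(gate.on_frame("swipe_left"))   # gate re-closed: prints None
```

The point of the timeout is the same as in the comment above: the system only has to interpret a gesture deliberately preceded by a voice prompt, rather than guessing at every movement in the room.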