
Kudo Tsunoda and Kinect on the Engadget Show

Adam J. said:
I'm totally being serious here... You guys think it would be possible to make a game for dogs with Kinect? My dog loves to bark and jump around in front of my TV whenever I'm watching Animal Planet. The first time I saw Kinectimals, my initial reaction was "man, my dog would totally go nuts if it saw this cute little cat hopping around on screen". It'd be neat if they could make a game that could recognize an animal and respond accordingly. It's a dumb idea, but I thought it'd be an interesting concept to explore. In the end your pet would probably just get pissed off and knock your TV over. :lol

Dogs and cats will play with anything... why bother with a sensor? Just play the right DVD, and it'll keep them busy for a while. :D

But to answer your question, yes, it would probably be possible, even if dogs could create specific difficulties, since they show much bigger variations in size, shape, fur, etc. than humans.
Anyway, you probably won't need full-body analysis, unless you want the "game" to train your dog to sit, roll over, etc. :D (After the virtual pet, the virtual master! Maybe you could have the two games play with each other... :P)
 
TheOddOne said:
epk4fa.jpg


And

http://www.neogaf.com/forum/showpost.php?p=21853654&postcount=8171

TheOddOne said:
Can you link to that second statement?
Sure. http://www.neogaf.com/forum/showpost.php?p=21911396&postcount=1
UPDATE: A Microsoft spokesperson told me after the publication of this article that the company is certain that Kinect gesture control will work for movies, ESPN and other "entertainment" features before the sensor is launched. As I originally reported, that is not an implemented feature yet. The spokesperson was not able to provide any update on the Kinect's tolerance of a person who sits while playing games.
 
expy said:

The Forza demo behind closed doors *was* real-time. Those screenshots were not from the real-time demo. The keynote demo was likely pre-rehearsed video, but a day later they were showing the exact same thing running in real time for the press.

There are tons of videos and first-hand impressions from people who played the demo. Did you just choose to ignore all of those?
 
Zabka said:
Watched the HD video. Here's the weird black hole effect I was seeing with the dude wearing all black.

2qa4e89.jpg


And for the kids:

http://i48.tinypic.com/250077a.jpg


It's a very interesting gif.
From what I understand, the depth sensor in kinect works by projecting a grid pattern into the room (in IR light) from its laser or diode (the lens that is offset from the centre two). The IR camera then captures an image of the room, and by detecting the grid pattern, it can work out depth by looking at how the grid is offset based on perspective projection (given the separation between the projection laser/diode and camera).
The fact it can pick up depth as well as it can is amazing enough. You can see the ripples in Kudos tee-shirt.
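As a rough sketch of that triangulation idea (not Kinect's actual code; the baseline and focal-length numbers here are invented for illustration):

```python
def depth_from_disparity(disparity_px, baseline_m=0.075, focal_px=580.0):
    """Structured-light triangulation: a projected IR dot seen by the
    offset camera shifts horizontally by `disparity_px` pixels, and
    depth falls out as Z = f * b / d. The baseline and focal length
    are made-up illustrative values, not Kinect's real specs."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Bigger shifts mean closer surfaces:
print(depth_from_disparity(10.0))   # dot shifted 10 px  -> 4.35 m away
print(depth_from_disparity(100.0))  # shifted 100 px     -> 0.435 m away
```

This also fits the "black hole" effect: if a surface absorbs or scatters the projected IR dots (like a dark satin shirt), there's no disparity to measure, so that region simply has no depth data.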

I imagine the reason the green presenter isn't being picked up properly is his shirt. His legs, hands and face are fine, but his shirt is a dark black (satin-like?) material, not an ideal fabric for reflecting light. If it is satin-like, that would explain it fairly well, as that sort of surface usually reflects light in very unusual ways (although I would have thought Kudo's sunglasses would have confused the system pretty badly).

The fact it's still able to detect his approximate skeletal form, even though the data is clearly corrupted, is pretty amazing.
Recognizing a human form is hard-wired into the human brain, but teaching a computer to do it, in real time with very limited data, is bloody well incredible.

And as for Kinectimals: I imagine Frontier will do a good job, and it's nice to see another casual game with really polished visuals... And is that finger tracking? :lol
 
flyinpiranha said:
Are we going to make a NEW thread for every time Kinect appears somewhere? Seriously? That's what you want?


Like the man said, relax, and don't click on the goddamn threads if you don't like the content. We had these exact kinds of copy-pasta threads for the Wii. Fucking threads started for Martha Stewart making a Wii cake, a new thread for every magazine cover, a new thread for any title that looked like it MIGHT be a core game, and don't forget a new thread for every laughable shovelware title so we could laugh: Ninjabread Man, Horsez, etc.

This is commonplace. Stop acting like you're new here and get over yourself.
 
expy said:
Yes, that Forza thing on stage was faked. But it is working; there are tons of impressions and videos released showing it working behind closed doors. It's funny that you went into the Forza thread and found that post, but forgot to read further, where at least one real-time video is linked :lol

Second, that PR thing does not confirm that it wasn't working properly while sitting for certain applications.
 
REMEMBER CITADEL said:
It was real-time: http://gamevideos.1up.com/video/id/29918

You can even see the journalist changing the car color from red to yellow and it's reflected both in interactive and scripted sequences - all real-time.

Now, those screenshots might or might not be real-time grabs (probably some kind of photo mode), but the demo was genuine.

Uhhhh, yea. I proved that what "che" said was false (that the footage seen at the press conference was real-time). My point is still valid and backed-up. What happened afterward is irrelevant.

Edit: Just watched the video... And yeah, you can clearly tell from the ground textures alone (in the vid) that the screenshots posted in the FM3 thread are bullshots and that the actual in-game demo runs at a much lower texture resolution, among other things.
 
Zabka said:
Watched the HD video. Here's the weird black hole effect I was seeing with the dude wearing all black.

2qa4e89.jpg


:o at the flipping out skeleton

I'm sure some smart guys could work out a way to rule out some of those false positives by them being physically impossible based on the previous frames... this thing is meant to work if people walk past in front of you. Or was it dogs walking in front of you? I think I'm trying to say it's meant to cope with imperfect conditions.
 
expy said:
Uhhhh, yea. I proved that what "che" said was false (that the footage seen at the press conference was real-time). My point is still valid and backed-up. What happened afterward is irrelevant.

You do realize that he was talking about the footage being real-time, as in not pre-rendered? The real-time rendered on-stage demo was, of course, pre-recorded.


expy said:
Edit: Just watched the video... And yeah, you can clearly tell from the ground textures alone (in the vid) that the screenshots posted in the FM3 thread are bullshots and that the actual in-game demo runs at a much lower texture resolution, among other things.

The interactive part, yes. However, the scripted sequences look much better - seemingly about on par with those screenshots - and they're obviously running in real-time as well. It's hard to tell whether the ground textures in scripted sequences are as detailed as in those screenshots, what with watching an SD video of an off-screen recording. But yeah, like I already said, those are probably from some sort of a photo mode, although we have no way of either confirming or debunking that at the moment.
 
REMEMBER CITADEL said:
You do realize that he was talking about the footage being real-time, as in not pre-rendered? The real-time rendered on-stage demo was, of course, pre-recorded.
When you say "real-time" you have to mean real-time as being rendered as the guy was controlling it (but he wasn't). So, it wasn't real-time, it was pre-recorded footage.
 
expy said:
When you say "real-time" you have to mean real-time as being rendered as the guy was controlling it (but he wasn't). So, it wasn't real-time, it was pre-recorded footage.

That's just.... Jesus. I quit, you're too smart.
 
REMEMBER CITADEL said:
The interactive part, yes. However, the scripted sequences look much better - seemingly about on par with those screenshots - and they're obviously running in real-time as well. It's hard to tell whether the ground textures in scripted sequences are as detailed as in those screenshots, what with watching an SD video of an off-screen recording. But yeah, like I already said, those are probably from some sort of a photo mode, although we have no way of either confirming or debunking that at the moment.
That's the thing, it's so blatantly obvious even from SD footage that it pretty much confirms those screenshots are bullshots taken from some type of photo mode. The ground textures are at a ridiculously low resolution compared to those "screenshots" posted in the other thread. Just compare the "cracks" you can clearly see in the SD footage to the "clean dirt textures" of the screenshots.

The Kinect part works fine, tracking the player's movement to move around the car, nothing spectacular about that, you're just moving the game camera around the car model.
 
expy said:
That's the thing, it's so blatantly obvious even from SD footage that it pretty much confirms those screenshots are bullshots taken from some type of photo mode. The ground textures are at a ridiculously low resolution compared to those "screenshots" posted in the other thread. Just compare the "cracks" you can clearly see in the SD footage to the "clean dirt textures" of the screenshots.

The Kinect part works fine, tracking the player's movement to move around the car, nothing spectacular about that, you're just moving the game camera around the car model.
It's like Turn 10 ran over your dog or something :lol
 
CoG said:
Kudo always does the same over-exaggerated arm wave when he demos Kinect. He did it at E3 last year and on Fallon with the orange jumpsuits.


It's interesting that he wears such baggy clothing, which is clearly doing it no favours. Look how long his legs are!
 
Graphics Horse said:
It's interesting that he wears such baggy clothing, which is clearly doing it no favours. Look how long his legs are!

He's probably the one person in the world that Kinect works on 100% of the time. Think about it, the camera and code have most likely been deeply calibrated to his body and movements.
 
expy said:
Uhhhh, yea. I proved that what "che" said was false (that the footage seen at the press conference was real-time). My point is still valid and backed-up. What happened afterward is irrelevant.

Edit: Just watched the video... And yeah, you can clearly tell from the ground textures alone (in the vid) that the screenshots posted in the FM3 thread are bullshots and that the actual in-game demo runs at a much lower texture resolution, among other things.
Twist and turn. The important thing is that your argument is justified, right? You sound mad.
 
CoG said:
He's probably the one person in the world that Kinect works on 100% of the time. Think about it, the camera and code have most likely been deeply calibrated to his body and movements.

You may have to dress like that to have Kinect work perfectly, which is going to be a problem for me because I like my pants and shirts to actually fit. Can't believe that guy's wife lets him walk out of the house dressed like that. Lol. His shirts are so huge!
 
Graphics Horse said:
:o at the flipping out skeleton

I'm sure some smart guys could work out a way to rule out some of those false positives by them being physically impossible based on the previous frames...

They could, but it would likely have unrealistic resource requirements.
 
Raistlin said:
They could, but it would likely have unrealistic resource requirements.
For raw capture, like what's shown there, I think it's unlikely you'll see that sort of correction, because it's just raw data before much work is done to it beyond creating the skeleton. But if you're animating an Avatar with that same data, you could employ a filter (possibly adding some latency, depending on how aggressive it is) to prevent that sort of crazy point jumping from producing equally jumpy poses on the model, based on a set of tolerance thresholds that maintain stability. I mean, a great deal of the basis for Kinect's pose estimation system is working within the known limitations of the human form and skeleton, from constantly updated libraries, down to norms for how far a specific joint or bone can move.

Basically, I think the noticeable tradeoff at this point is quality and stability of the end data the game uses versus the extra latency introduced to perform more (and better) checks against larger libraries and enforce constraints. That's something they could keep refining and optimizing for a given situation, using known quantities. So, for instance, the more you use Kinect under your profile, the more data it could store on your specific range of motion for different extremities, understood metrics like your dimensions, and other gathered data such as known ideal gestures (for that person, for that profile). All of that could be shared and used by any Kinect title to better suit itself to you, possibly eliminating the future need for specific calibration calls: a constantly evolving profile of limits, ideals, etc. I believe that's how Kinect already works, at least on some level. And along with a more refined understanding of you, from voice to movement to appearance, the traversal of the trees of possible poses gets custom-tailored to you, possibly resulting in less latency and, most certainly, output in games that is far less prone to error. Basically, a problem of heuristics.
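The tolerance-threshold filtering described above could, in its simplest form, look something like this (a minimal sketch with invented thresholds, not Kinect's actual pipeline):

```python
class JointFilter:
    """Per-joint filter: reject physically implausible jumps between
    frames, then exponentially smooth the accepted positions. The
    jump threshold and smoothing weight are made-up illustrative
    values, not anything taken from Kinect."""

    def __init__(self, max_jump_m=0.5, alpha=0.6):
        self.max_jump_m = max_jump_m  # max plausible move per frame (metres)
        self.alpha = alpha            # weight given to the new sample
        self.state = None             # last accepted (x, y, z)

    def update(self, pos):
        x, y, z = pos
        if self.state is None:
            self.state = (x, y, z)
            return self.state
        sx, sy, sz = self.state
        dist = ((x - sx) ** 2 + (y - sy) ** 2 + (z - sz) ** 2) ** 0.5
        if dist > self.max_jump_m:
            # Likely a false positive (the "flipping out" skeleton):
            # hold the last good pose instead of following the spike.
            return self.state
        a = self.alpha
        self.state = (a * x + (1 - a) * sx,
                      a * y + (1 - a) * sy,
                      a * z + (1 - a) * sz)
        return self.state
```

The tradeoff mentioned above is visible right in the two parameters: a lower `alpha` or tighter `max_jump_m` gives a more stable skeleton at the cost of lag behind fast, genuine movement.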

Eventually, on better hardware with more and faster memory and better-quality capture, you could have AIs that learn your tendencies. That kind of learning has long been limited to very specific, known ranges of possibilities, like estimating how you'll move on a chessboard; here the range is a great deal larger, with more spatial possibility, like trying to intercept an attack from your body before you've actually committed to the movement, where the speed of the attack forces the system to decide early. Real learning mechanisms with important local histories/memories that individual AI entities can access, share in any fashion with groups, or reserve for just those that can 'witness' it and, with training, recognize it.

For Kinect on X360, though, I think it'll get faster and more efficient through application-specific optimization. So, by second-gen releases, as late as the end of next year to mid-to-late 2012, the software should have made marked improvements in speed, accuracy, and general range of use. I think the most incredible uses will end up being more believable and 'alive' virtual characters in more densely useful and interactive virtual environments, with a focus on virtual props and tools that have more simulation elements and, in general, more complicated mechanical workings for direct interaction during action-game moments. The direct body-to-body mapping stuff should obviously improve, but I think most games will implement a strong blend of gestural input and 1:1 mapping to ensure the most fun and usable mix of abstracted game activity and player activity. I know a lot has been made of 1:1 being the holy grail of motion control, but games are still mostly about learning a language of game mechanics, and gestures can work better and be more efficient than super-fidelity movement as the main input. Considering fatigue and being friendly to the player across longer play sessions is a requirement if you're going to do more than short-burst-style games.
 