
Kinect and Move: from Vision to Retail...were their respective visions met?

Noshino said:
The main problem for Microsoft right now is that everything they have shown so far CAN be replicated by the PS Eye (with or without the Move). Even the one game that looks decent, Dance Central, has been described by its own developers as doable with the PS Eye...

And that doesn't seem like it is going to change for a while... as even the Eye Toy was able to do motion tracking (http://www.youtube.com/watch?v=5-MRi67GCoM first scene)

I don't agree with this, but I'm going to leave this alone. It's been beaten to death.



beast786 said:
If the support is like the gimmicky Sixaxis, then it would be a failure.

Using it with a controller would be nothing more than a gimmick, because the game would still be based on the controller IMO, like the Sixaxis.

A game built for Kinect from the ground up is what it needs.



Possibly, but I think it's a bit early to make these calls. A developer may have some very inventive ideas that we're not considering. I think Navy SEAL-style gestures could add a level of depth to shooters. Maybe there are opportunities for head tracking. Who knows.
 

RoadHazard

Gold Member
chubigans said:
The Vision: The player walks into the living room as the virtual challenger looks on, his head tracking the player’s body as his head moves across. The challenger ...
The Reality: This is our first look at what Kinect might be able to do, and in fact is a very realistic representation of all its features. Kinect can very easily track the player's body so that the virtual challenger could turn his head and follow them, and even ...

I haven't read the whole thread, so I'm sorry if this has already been brought up (and I also know it's not what the thread is about), but I just can't not comment on the bolded parts:

A polygonal character on a 2D screen, tracking you by turning his head and thus "following" you with his eyes? Really? Sorry chubigans, but you really didn't think this through. If you want the eyes of someone on a screen to follow you (and his face to face you), you simply make him look dead ahead (right into the virtual camera) at all times, without moving a muscle. No matter where you observe the screen from, the character will be looking right into your eyes, since what you're looking at is a flat image. It's not a holographic 3D image, so you can't actually look at the side of his head by moving to the left. If you make the character turn his head as you move, he will no longer be looking at you.

Again, sorry for this bit of OT ranting, I just had to say something about this logical fallacy...

Or are you talking about what the stationary, non-participating observers are seeing? If so, yes, having the on-screen character turn his head as the player moves would probably look cool, but it would not be cool at all for the player (who should be the focus, after all).

---

Other than that, nice read! I personally think Move has come really close to the initial vision, while Kinect has not really convinced me at all. And of course that has much to do with the fact that Move was shown off with actual tech demos right from the get-go, while Kinect was mostly smoke and mirrors for a long time.
 

yurinka

Member
Alx said:
What they don't show you is that before playing, you have to take a snapshot of the background without anybody there, to help the game detect where you are. That means that:
- it will be disturbed if there is something moving in the background, or if light conditions change
- you probably won't be able to play with your friends in the field of view of the camera, unless they stay perfectly motionless.
- you may have difficulties with the color of your clothes, if they're too similar to the background.

Besides, the system doesn't look like it tracks body parts; like the old EyeToy games, it only gets what's moving in the picture (or in this case, what's different from the background). The main difference is that it recognizes key shapes, but that's still far from body tracking (and limited to 2D anyway).
I know about the one-time (per game session, I assume) "calibration" that requires you to step out of the camera's view. I saw it in a video interview, and it isn't a big issue.

The game features 5-player offline multiplayer (the other four using pads), so I assume other players moving isn't a problem.

Here is an interview/gameplay video from Gamescom where they demo the game and say they developed tech to recognize separate body parts:
http://www.in.com/videos/watchvideo-gamescom-2010-kungfu-live-interview-gameplay-9325705.html

BTW, the game features some moves that are activated by performing certain poses (special attacks), so it isn't just "every pixel that changed from the previous frame".

In any case, it isn't that different from a lot of Kinect games where the main actions to be detected are just a few single ones like "jump", "crouch", "scratch the face of the tiger", etc.

Alx said:
The depth info is not there to help: it's the main info used for body tracking (and most probably even the only one), and that's what makes the whole difference. Everything that makes image processing difficult (light, colors, reduction of information from 3D to 2D) is made almost irrelevant by this tech.
So being the main input for body recognition and getting rid of the main problems of a normal 2D camera doesn't count as "helping"?
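For the curious, the background-snapshot technique being debated here boils down to simple frame differencing. A toy Python sketch (illustrative only; the function name and threshold are made up, not anything from the actual game):

```python
import numpy as np

def motion_mask(frame, background, threshold=30):
    """Flag pixels that differ from a stored background snapshot.

    frame, background: HxWx3 uint8 camera images (e.g. 320x240).
    Summing the per-channel differences and thresholding gives the
    EyeToy-style "what changed" silhouette. Note the weaknesses
    mentioned above: a lighting change or a second moving person
    alters pixels too, and there is no notion of body parts or depth.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff.sum(axis=2) > threshold
```

Recognizing poses on top of this (as Kung-Fu Live claims to) would mean matching key shapes in that silhouette, which is still a long way from true skeletal tracking.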
 

Belfast

Member
PopcornMegaphone said:
The reason we're not seeing any core games at launch is partly marketing, but mostly the early Kinect SDK was kind of gimped. We know support for sitting wasn't ready until recently which would make it impossible to develop hybrid kinect/controller games for launch. I'm disappointed with Kinect's launch lineup, but I think 2nd and 3rd gen games could be very interesting. Clearly Kinect's SDK is improving a great deal. MS reps have said there will be core game support sometime next year.

But at the end of the day, the "core" game functionality will be only marginally different from that on the Move or Wii Motion+ (well, without a camera, that will be left behind). The difference between throwing a grenade with Move and throwing a grenade with Kinect is simply whether or not you're holding a wand in your hand, and at that point you can't even use the excuse of controller-less gaming since you'll still be holding onto your 360 pad to do the majority of the actions in hardcore games.
 
RoadHazard said:
A polygonal character on a 2D screen, tracking you by turning his head and thus "following" you with his eyes? Really? Sorry chubigans, but you really didn't think this through. If you want the eyes of someone on a screen to follow you (and his face to face you), you simply make him look dead ahead (right into the virtual camera) at all times, without moving a muscle. No matter where you observe the screen from, the character will be looking right into your eyes, since what you're looking at is a flat image. It's not a holographic 3D image, so you can't actually look at the side of his head by moving to the left. If you make the character turn his head as you move, he will no longer be looking at you.

I've pointed this out myself, but nobody seemed interested. The original video never made any sense if you took it literally, but it did show what Chubigans described.
I suggested that, to achieve that in a real game, the character could rotate their body away from the player with their head staying aimed at the camera, but I'm not sure how weird that would look.
 

RoadHazard

Gold Member
Graphics Horse said:
I've pointed this out myself, but nobody seemed interested. The original video never made any sense if you took it literally.
I suggested the character could rotate their body away from the player with their head staying aimed at the camera, but I'm not sure how weird that would look.

Well, what would actually work (and which is kind of what you're saying, I think) is having the virtual camera (representing the player's point of view) rotate as you move left and right. Then you could have the on-screen character's head rotate (relative to his body) to keep looking into the camera.
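A rough sketch of that idea in Python (purely illustrative; the gain and coordinate conventions are my assumptions, not anything from a real implementation):

```python
def virtual_camera_yaw(head_x, image_width, max_yaw_deg=30.0):
    """Map the tracked head's horizontal position to a virtual-camera yaw.

    head_x: the player's head position in camera-image pixels
    (0..image_width). The scene rotates opposite to the player's
    movement, approximating a 'virtual window' look.
    """
    centred = (head_x / image_width) - 0.5        # -0.5 .. +0.5
    return -centred * 2.0 * max_yaw_deg           # degrees

def character_head_yaw(camera_yaw_deg):
    # The character counter-rotates his head so he keeps looking
    # straight into the (now rotated) virtual camera, i.e. at the player.
    return -camera_yaw_deg
```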
 
Belfast said:
But at the end of the day, the "core" game functionality will be only marginally different from that on the Move or Wii Motion+ (well, without a camera, that will be left behind). The difference between throwing a grenade with Move and throwing a grenade with Kinect is simply whether or not you're holding a wand in your hand, and at that point you can't even use the excuse of controller-less gaming since you'll still be holding onto your 360 pad to do the majority of the actions in hardcore games.


Perhaps, but we'll have to see what developers come up with. Even if Kinect provides virtually the same experiences as M+ and Move for core games, it wouldn't be the end of the world. Clearly Kinect wasn't designed with core games in mind, but I believe that as the SDK improves and developers become more familiar with it, the odds of good Kinect "core games" improve significantly.

I guess I have more of a wait and see attitude than other posters.

It's worth noting everything I'm saying will be moot if Kinect bombs, which is a real possibility.
 
RoadHazard said:
Well, what would actually work (and which is kind of what you're saying, I think) is having the virtual camera (representing the player's point of view) rotate as you move left and right. So as you move left, everything on the screen would be rotated to the right. Then you could have the on-screen character's head rotate (relative to his body) to keep looking into the camera.

Exactly that. It might work well, much like the old reversed Wii trick. I wouldn't like to see it freak out when my head goes out of frame, though.
 

Alx

Member
yurinka said:
So to be the main input for body recognition and to get rid of the main problems of a normal 2D camera doesn't help?

Maybe I misunderstood what you meant by "help", but at this level I don't consider that it "helps" with the tracking; it's doing the tracking, from start to finish. It's not support for an existing tech, it's an all-new (and efficient) way to do it. Actually, there is currently no other way to achieve the same results, except for the old "ping pong balls" motion capture systems.

Graphics Horse said:
I suggested that, to achieve that in a real game, the character could rotate their body away from the player with their head staying aimed at the camera, but I'm not sure how weird that would look.

I don't know exactly what they wanted to illustrate in the original video, but it's also feasible to adapt the virtual camera to the player's position, to give a virtual window effect. Then the gaze following thing could work.
 

cgcg

Member
Alx said:
Well, first, I can't know for sure if it tracks the head or not, but as long as it works, it works, doesn't it? But more logically, if you can track the arms with enough precision to use the hand as a pointer, then it's not so hard to localize the trunk (it should be the big blob in the depth map attached to the arm...), and that's enough for leaning detection.

If it's only tracking the head, then it's a problem. What if a person doesn't raise their arms above the couch? It can't track them. So basically it requires a person to hold their hands above a certain level at all times; they can't operate the game from a relaxed position.

Good lord, just saw that video. Just as I figured, he was sitting in a chair with a low, skinny back. Nothing was obstructing his body, and he still has to get up to do certain things. How does that address the issue of sitting on a couch?

Raw data is interesting because it shows the level of information you really have. The interesting thing is that what we saw in the video was the raw data and nothing else. But we could still understand what was going on, even without the visible pixels mapped onto the depth data (except maybe when the two guys were wrestling together). That means that the information is there, and that software smart enough can extract it.
Of course, the difficult part is building this software, but MS has shown that they are able to add new features to their libraries (which is not really a surprise).

Well, then until it happens, it's not really working. The wrestling part demonstrates the problem perfectly; don't try to brush that off. It's the main reason why it doesn't work while sitting on a couch.
 

Alx

Member
cgcg said:
If it's only tracking the head, then it's a problem. What if a person doesn't raise their arms above the couch? It can't track them. So basically it requires a person to hold their hands above a certain level at all times; they can't operate the game from a relaxed position.

You keep forgetting that we're dealing with depth here. You don't need to raise your hands above the couch, because your body, arms, etc. are in front of the back of the couch. With at least a good 10 cm depth difference, the sensor should have no trouble separating the upper body from the couch.
The difficult part of the couch scenario is the legs: the thighs are not visible ("hidden" behind the knees, and maybe too deep in the cushions), and the lower legs can be in many different positions (crossed or not, on the floor or not, ...). So if you use the same algorithm that tries to find thighs and legs, you get random results.
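That 10 cm depth gap is easy to exploit in code. A toy sketch (hypothetical names and thresholds; real skeletal tracking is of course far more involved):

```python
import numpy as np

def split_player_from_couch(depth_mm, min_gap_mm=100):
    """Separate a seated player from the couch behind them using depth alone.

    depth_mm: HxW depth map in millimetres (0 = no reading).
    Anything at least min_gap_mm (~10 cm) closer than the furthest
    surface is treated as the player; no colour or lighting needed.
    """
    valid = depth_mm > 0
    background_depth = depth_mm[valid].max()   # crude: furthest valid pixel
    return valid & (depth_mm < background_depth - min_gap_mm)
```

The legs would still be the hard part, exactly as described above: thighs hidden behind the knees sit at nearly the same depth as the cushions, so a simple threshold can't recover them.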


cgcg said:
Well, then until it happens, it's not really working. The wrestling part demonstrates the problem perfectly; don't try to brush that off. It's the main reason why it doesn't work while sitting on a couch.

You will always find scenarios in which it won't work; it's all about the use conditions. The first games made the assumption that the user is standing, and will always spazz out when he's not. Other scenarios can be handled (as the sitting situation was, or as "handling objects" can be, provided the objects respect certain conditions).
I doubt the wrestling scenario will be solved, because it's very difficult but also because there's no point in doing it. So whatever game you play, you'll be able to confuse the system if you wrestle with the player. The solution is simple: just don't do it.
 

Mr_Zombie

Member
cgcg said:
If it's only tracking the head, then it's a problem. What if a person doesn't raise their arms above the couch? It can't track them. So basically it requires a person to hold their hands above a certain level at all times; they can't operate the game from a relaxed position.

Why do you think it won't work when the player is sitting on a couch? We know that the user will at least be able to navigate the Kinect Hub from the couch; to do so, you don't need to raise your arms above the couch (where did you even get that idea?). That would not only be uncomfortable, it would also make it impossible to aim your arms (hands) at the TV unless your TV is hung somewhere on a wall.


The problem with the wrestling part was that there were two people closely interacting with each other, moving constantly. That's not the case with sitting on a couch. The couch doesn't move, it's just a static background; there's no difference here between a couch's headrest and a wall or any other furniture.
 

yurinka

Member
Alx said:
Maybe I misunderstood what you meant by "help", but at this level, I don't consider it "helps" for the tracking, it's doing it, from start to finish. It's not a support to an existing tech, it's an all new (and efficient) way to do it. Actually there is currently no other way to achieve the same results, except for the old "ping pong balls" motion capture systems.
The depth sensor just provides a depth value for each pixel of each captured 320x240 frame and sends it to the console, skipping all the work of handling lighting problems, checking colors, etc., as you said.

Then the software (in the console, not in the camera) searches that depth data for a human body, etc.

In early versions, Kinect had a dedicated chip to run those body-recognition algorithms, but it was removed to reduce costs, to cut lag (the console's CPU was faster than this chip), to make future revisions/updates/optimizations of this code easier, and to allow it to be customized for each game.
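As a rough illustration of the first console-side step on that 320x240 depth stream (a toy sketch with an assumed depth cutoff; the real body-recognition algorithms are far more sophisticated):

```python
import numpy as np

def player_mask(depth_mm, play_space_mm=2500):
    """First pass over a raw Kinect-style depth frame.

    depth_mm: 240x320 array of per-pixel depths in millimetres, i.e.
    the kind of data the sensor streams to the console. Pixels with a
    valid reading inside the assumed play space are kept as player
    candidates; later stages would fit a skeleton to this blob.
    """
    valid = depth_mm > 0                 # 0 = sensor couldn't measure
    return valid & (depth_mm < play_space_mm)
```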
 

Alx

Member
yurinka said:
The depth sensor just provides a depth value for each pixel of each captured 320x240 frame and sends it to the console.

Then the software (in the console, not in the camera) searches that depth data for a human body, etc.

In early versions, Kinect had a dedicated chip to run those body-recognition algorithms, but it was removed to reduce costs, to cut lag (the console's CPU was faster than this chip), to make future revisions/updates/optimizations of this code easier, and to allow it to be customized for each game.

Sure, but performance questions aside, it doesn't change anything whether the data is processed by a dedicated chip or by the console... the way it works is still the same, and it's still a new and specific way to capture motion.
 
Gee, I wonder what NeoGAF thinks of Kinect compared to Move?

Who would have ever guessed how this thread turned out? How utterly predictable. The hive mind is well in place here.
 
swanlee597 said:
Gee, I wonder what NeoGAF thinks of Kinect compared to Move?

Who would have ever guessed how this thread turned out? How utterly predictable. The hive mind is well in place here.

Well, it's clear one has delivered on its vision more than the other. That was the point of the thread. One was more ambitious, yes, but one has clearly met its vision.

I watched both of the press conferences, and after Microsoft's I thought "Sony is totally screwed". When I saw Anton and Dr. Marks come on stage like a school science project, I did facepalm... but then I saw the technology actually working.

Ultimately I have to be selfish about this, because I don't really care about other people when I'm playing alone. I'll get more satisfaction out of Move. However, I was expecting a lot more from Microsoft before this year's E3. When they went to all that effort with MTV and that circus thing, I really thought they were going to have something massive to show us.

Then we had the leaks, and I was left scratching my head. I expected at least one "core" game built from the ground up by Microsoft to be shown at their conference, but there wasn't any. I thought before E3 that Microsoft would target both the casual and the core. I was wrong. Kinect's just not for me, it seems. Fair enough.

Sony have impressed me though. I'm actually really looking forward to the second wave of games.
 

Shurs

Member
swanlee597 said:
Gee, I wonder what NeoGAF thinks of Kinect compared to Move?

Who would have ever guessed how this thread turned out? How utterly predictable. The hive mind is well in place here.
 

cgcg

Member
Mr_Zombie said:
Why do you think it won't work when the player is sitting on a couch? We know that the user will at least be able to navigate the Kinect Hub from the couch; to do so, you don't need to raise your arms above the couch (where did you even get that idea?). That would not only be uncomfortable, it would also make it impossible to aim your arms (hands) at the TV unless your TV is hung somewhere on a wall.


The problem with the wrestling part was that there were two people closely interacting with each other, moving constantly. That's not the case with sitting on a couch. The couch doesn't move, it's just a static background; there's no difference here between a couch's headrest and a wall or any other furniture.

In the part where the guy grabs the other guy and knees him, the camera loses track of the guy on the left completely, and it also got the player confused. The guy on the left might not have been static, but he was hardly flailing around. That's just one of many issues in that video. If you people think that's good, then what can I say?
 

cakefoo

Member
REMEMBER CITADEL said:
People can't seem to wrap their heads around the concept of the whole launch line-up being specifically tailored to attract new, different customers. Microsoft obviously didn't want core/violent titles there as the Xbox brand already suffers from the shooter box status in many people's eyes. It couldn't have been that hard to tack some Kinect functionality onto Halo Reach or Black Ops, but it won't even be in Fable III at launch. Changing perception and being careful not to scare away more casual audiences is what it's all about.

Hybrid stuff is coming later, people. Attracting you guys isn't the primary goal at this moment.
You seem convinced FPSes would have good Kinect integration; how exactly do you envision them working with it?
 

PSGames

Junior Member
cgcg said:
This has to be a joke right? :lol

This was done on the original EyeToy. Please enlighten us: how is this different from what Dance Central's doing?

http://www.youtube.com/watch?v=Au4d5anfjnA

If you still don't get it after watching the video here's a hint: look at the yellow silhouette.

That game is using basic image analysis and is in no way equivalent to Kinect. It detects motion on the screen, but it can't tell which body parts are which and doesn't detect you on the Z axis. It can't reproduce your skeleton. It's a very crude implementation that won't work unless you are in perfect lighting conditions.

For one, you notice he's wearing yellow gloves, right? Here's what the guy in the video had to say:
SidepocketPro
Question: Why do you use gloves when you are not hitting anything physical?

xiayu27: Good question! The yellow colouring on the gloves reflects light much better than my hands, and that makes the camera more responsive. :)

And here's what he had to say about the lighting required:


79siggy
good vid! how did you set the eye toy up - did you have really bright lights? Do you still play the game now? thx

xiayu27
Honestly, these videos were recorded on my first and second runs of the exercises I did. I followed the entire program for 6 out of the 8 weeks and then stopped. Haven't had the space to play where I live at now. I used 100w lighting on both sides of me throughout most of the play for best performance, and was about 12~15 feet from the TV.

It's almost like some people are scared to admit that Kinect does anything right. Kinect has full-body motion capture for use in games regardless of lighting conditions, and it detects you in 3D space. The PS Eye does not, and no amount of FUD is going to change that.
 

teiresias

Member
PSGames said:
Ok, this is becoming annoying. The amount of FUD being spread around by rabid Sony fans is at an all-time high. That game is using basic image analysis and is in no way equivalent to Kinect. It detects motion on the screen, but it can't tell which body parts are which and doesn't detect you on the Z axis. It can't reproduce your skeleton. It's a very crude implementation that won't work unless you are in perfect lighting conditions.

........

It's almost like some people are scared to admit that Kinect does anything right. Kinect has full-body motion capture for use in games regardless of lighting conditions, and it detects you in 3D space. The PS Eye does not, and no amount of FUD is going to change that.

I think the point of this comparison is that, despite what Kinect is supposed to be able to do better than a vanilla PS Eye, I have yet to see any software that really does anything with the stuff Kinect is supposed to be better at than typical edge detection and image analysis.

I mean, you have to hand it to MS, though: they market well in a controlled environment. I still have people watch the Dance Central video and come away thinking the characters on screen are actually doing the moves you're doing in front of the TV, which is not the case.

PSGames said:
For one you notice he's wearing Yellow gloves right? Here's what the guy in the video had to say:

Yellow gloves increase the responsiveness of PSeye? Just like not sitting in a fluffy couch increases the responsiveness of Kinect?
 

PSGames

Junior Member
teiresias said:
I think the point of this comparison is that, despite what Kinect is supposed to be able to do better than a vanilla PS Eye, I have yet to see any software that really does anything with the stuff Kinect is supposed to be better at than typical edge detection and image analysis.

I mean, you have to hand it to MS, though: they market well in a controlled environment. I still have people watch the Dance Central video and come away thinking the characters on screen are actually doing the moves you're doing in front of the TV, which is not the case.
The launch games Joy Ride, Kinect Sports (bowling, boxing, running, table tennis, soccer), Your Shape, Kinect Adventures (River Rush, Ricochet, etc.), Dance Central, and a bunch of others all require 3D full-body limb detection in order to work. They cannot be done on the PS Eye.

Yellow gloves increase the responsiveness of PSeye? Just like not sitting in a fluffy couch increases the responsiveness of Kinect?

Kinect works perfectly seated. See for yourself.
 

Alx

Member
cgcg said:
In the part where the guy grabs the other guy and knees him, the camera loses track of the guy on the left completely, and it also got the player confused. The guy on the left might not have been static, but he was hardly flailing around. That's just one of many issues in that video. If you people think that's good, then what can I say?

The software is meant to analyze human shapes. Two people grabbing each other do not look like a human shape, so the software doesn't work correctly. What's so hard to understand about that?
 

distrbnce

Banned
Belfast said:
But at the end of the day, the "core" game functionality will be only marginally different from that on the Move or Wii Motion+ (well, without a camera, that will be left behind). The difference between throwing a grenade with Move and throwing a grenade with Kinect is simply whether or not you're holding a wand in your hand, and at that point you can't even use the excuse of controller-less gaming since you'll still be holding onto your 360 pad to do the majority of the actions in hardcore games.

You're saying games like MAG, Killzone, SOCOM, and R.U.S.E. would function similarly between Move and Kinect? (Assuming they were all 3rd-party titles.)

Not really fathomable.

And I'm willing to bet we'll never see a game that can use Kinect accurately enough to throw a grenade, unless it's a fixed distance kind of thing. Just a cynical guess though.
 
So let's get a solid example going, and this is a question for everyone: from what we have seen so far, and from what we know of the technology, what truly meaningful implementation of Kinect could come with, let's say, a Halo game? By truly meaningful implementation, I mean something that uses Kinect in a way not available on other systems.
 

Atomski

Member
wizword said:
Why do people have emotional attachment toward awful motion control devices.
Console Warz!

I notice people seem to be more open minded to Move these days. What happened to "NO WAGGLE!" that was on the boards months ago.
 

Alx

Member
TheExecutive said:
what truly meaningful implementation of Kinect could come with, lets say, a Halo game?

I'm not sure that's really the best approach... new systems are there to allow for new things, not to be shoehorned into existing games that never asked for any change.
FPSes were created for keyboard and mouse. After decades, there is still no better way to play an FPS than keyboard and mouse. Gamepads, the Wiimote, Move, and Kinect won't change anything about that.
 
Kinect: Peter Molyneux over-promised, as usual. Not necessarily a fault of the device. Tell Peter to sell rubies, and he'll promise diamonds.

Move: A much more sober vision, but Sony made good on it, more or less. I just wish it had a good hardcore experience to offer rather than Wii-esque games with better precision.
 

Sydle

Member
TheExecutive said:
So let's get a solid example going and this is a question for everyone: from what we have seen so far and from what we know of the technology, what truly meaningful implementation of Kinect could come with, lets say, a Halo game? What I mean by truly meaningful implementation is something that uses Kinect in a way different than available on different systems.

This is exactly what I hope developers are not doing (trying to shoehorn their existing ideas into Kinect games), but I think that with the right UI another Halo Wars could work beautifully with it.
 

RedStep

Member
distrbnce said:
And I'm willing to bet we'll never see a game that can use Kinect accurately enough to throw a grenade, unless it's a fixed distance kind of thing. Just a cynical guess though.

For some functions, why not? Controllers usually don't offer much control over a grenade either: push a button to throw, sometimes hold longer to throw farther. Hand motions could be used for that (in conjunction with a controller).

I'm thinking that hand signals in something like Rainbow Six could be awesome. I'd rather put my hand up to tell my team to hold than hold LB and hit Left (or whatever).
 

farnham

Banned
What I would like to see from Kinect is games played with your normal pad, but with added recognition of body movement.
 
Atomski said:
Console Warz!

I notice people seem to be more open minded to Move these days. What happened to "NO WAGGLE!" that was on the boards months ago.


Mine was always "NO WAGGLE IF IT MEANS SHITTY VISUALS!"

Problem solved. And now the waggle actually works.
 

Guevara

Member
Atomski said:
Console Warz!

I notice people seem to be more open minded to Move these days. What happened to "NO WAGGLE!" that was on the boards months ago.

I for one am anti-waggle but pro-light-gun. Metroid Prime 3, RE4: Wii Edition, and The Conduit are NOT ENOUGH. I'm hoping Move comes to serious hardcore FPS/TPS games, since no one is making them for the Wii.
 

beast786

Member
RedStep said:
For some functions, why not? Controllers usually don't offer much control over a grenade either: push a button to throw, sometimes hold longer to throw farther. Hand motions could be used for that (in conjunction with a controller).

I'm thinking that hand signals in something like Rainbow Six could be awesome. I'd rather put my hand up to tell my team to hold than hold LB and hit Left (or whatever).

I just think all this trying to force Kinect in is a superficial way to implement it. When you're playing fast games, I don't think you want to wave your arm just for the sake of it when there's a faster, easier option in front of you.

Unless the games are made with Kinect in mind, it will always feel gimmicky IMO. For Kinect to look and feel fresh, the developers have to think outside the box and be creative.

The question is: who is willing to take that chance?
 

distrbnce

Banned
Atomski said:
Console Warz!

I notice people seem to be more open minded to Move these days. What happened to "NO WAGGLE!" that was on the boards months ago.

I always thought the "waggle" term itself was a reference to the Wii's lack of precision (in the early days at least), where most games could be played just by waggling the controller, and thus they could be shunned easily by the hardcore community.
 
swanlee597 said:
Gee, I wonder what NeoGAF thinks of Kinect compared to Move?

Who would have ever guessed how this thread turned out? How utterly predictable. The hive mind is well in place here.

Not sure which way you were leaning, but GAF was always pro-Kinect when it was first announced. I personally never understood why, but to each their own.
 

distrbnce

Banned
TheRagnCajun said:
Kinect: Peter Molyneux over-promised, as usual. Not necessarily a fault of the device. Tell Peter to sell rubies, and he'll promise diamonds.

Move: A much more sober vision, but Sony made good on it, more or less. I just wish it had a good hardcore experience to offer rather than Wii-esque games with better precision.

It already does, and it looks to be increasing at a higher rate than anyone expected.
 
Paco said:
This is exactly what I hope developers are not doing (trying to shoehorn their existing ideas into Kinect games), but I think that with the right UI another Halo Wars could work beautifully with it.


So, explain to me how that would work? Do you have to stand to give commands? I didn't think Kinect could read individual fingers? How would you select units, etc.? It seems too inaccurate to do a lot of things that would be necessary for a successful implementation of a UI for RTS games. But once again, I am posing a question. Inform me, GAF!
 

PSGames

Junior Member
TheExecutive said:
So, explain to me how that would work? Do you have to stand to give commands? I didn't think Kinect could read individual fingers? How would you select units, etc.? It seems too inaccurate to do a lot of things that would be necessary for a successful implementation of a UI for RTS games. But once again, I am posing a question. Inform me, GAF!

Think of the area in front of you as a giant touch pad or Microsoft Surface and you'll be headed in the right direction.

Here's a video of an RTS on Surface:

http://www.youtube.com/watch?v=58a-IsZ1zIg&feature=player_embedded

I imagine it would work just like that, plus with the added aid of an Xbox 360 controller.
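The touch-pad mapping PSGames describes can be sketched in a few lines: treat an arm's-reach square in front of the shoulder as the pad, and scale the hand's offset within it to screen pixels. Everything below (the joint coordinates, the reach value, the function name) is invented for illustration; none of it is real Kinect SDK output.

```python
# Hypothetical sketch: map a tracked hand position to a screen cursor,
# treating an arm's-reach square in front of the player as a touch pad.
# Joint coordinates and the reach value are made-up assumptions.

def hand_to_cursor(hand_x, hand_y, shoulder_x, shoulder_y,
                   reach, screen_w, screen_h):
    """Map the hand's offset from the shoulder onto screen pixels.

    The 'touch pad' is a square of side `reach` (roughly arm's length)
    centered on the shoulder; results are clamped to the screen edges.
    """
    # Normalize the offset to [0, 1] across the reach square.
    nx = (hand_x - shoulder_x) / reach + 0.5
    ny = (shoulder_y - hand_y) / reach + 0.5  # screen y grows downward
    nx = min(max(nx, 0.0), 1.0)
    ny = min(max(ny, 0.0), 1.0)
    return int(nx * (screen_w - 1)), int(ny * (screen_h - 1))
```

Clamping at the edges means the cursor parks on the screen border instead of vanishing when the hand drifts outside the pad.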
 

distrbnce

Banned
PSGames said:
Think of the area in front of you as a giant touch pad or Microsoft Surface and you'll be headed in the right direction.

Here's a video of an RTS on Surface:

http://www.youtube.com/watch?v=58a-IsZ1zIg&feature=player_embedded

I imagine it would work just like that, plus with the added aid of an Xbox 360 controller.

Er, you can think of the area in front of you as a giant touchpad, but Kinect can't. That's where the problem lies.

I don't see how Kinect will exist in the long run without allowing controllers to be used at the same time, but that would go against their entire message of controller-free gaming, wouldn't it?

Have they said anything about this specifically? Other than allowing secondary players to use controllers.
 

Zoe

Member
PSGames said:
Think of the area in front of you as a giant touch pad or Microsoft Surface and you'll be headed in the right direction.

But is Kinect going to have enough precision for that? People were having a hard enough time with the UI controls during the mall demos.
 

JaggedSac

Member
distrbnce said:
Er, you can think of the area in front of you as a giant touchpad, but Kinect can't. That's where the problem lies.

I don't see how Kinect will exist in the long run without allowing controllers to be used at the same time, but that would go against their entire message of controller-free gaming, wouldn't it?

Have they said anything about this specifically? Other than allowing secondary players to use controllers.

Why would that implementation not work with Kinect? One could create a personal "touch plane" in front of a person based on arms length. Would take some getting used to, but there is nothing technically preventing the method.

Kudo has said that he expects those sorts of hybrid Kinect/controller experiences.
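The "personal touch plane" idea above can be sketched with depth alone: register a press when the hand crosses a virtual plane set some fraction of arm's length in front of the shoulder. The depth values and the 0.6 fraction below are invented tuning assumptions, not anything measured from the actual hardware.

```python
# Toy sketch of a depth-based "touch plane" press. Depths are distances
# from the camera in meters; the fraction is a hypothetical tuning knob.

def is_pressing(hand_depth, shoulder_depth, arm_length, fraction=0.6):
    """True when the hand has crossed the virtual touch plane.

    A smaller camera distance means the hand is closer to the screen,
    so a press is the hand_depth dropping below shoulder_depth minus
    a fraction of the player's arm length.
    """
    plane = shoulder_depth - fraction * arm_length
    return hand_depth <= plane
```

Calibrating `arm_length` per player (measured once from shoulder to wrist) is what would make the plane "personal," as the post suggests.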
 

distrbnce

Banned
JaggedSac said:
Why would that implementation not work with Kinect? One could create a personal "touch plane" in front of a person based on arms length. Would take some getting used to, but there is nothing technically preventing the method.

Kudo has said that he expects those sorts of hybrid Kinect/controller experiences.

I'd bite if there was a shooting game or something where you can manipulate the pointer with your index finger, but I haven't seen that.

The idea would work for a Minority Report situation, where the monitor is bigger than your armspan, and you're about a foot or two away from it... but not on a 50" TV from across the living room.

The burden of proof lies with Microsoft though, not our imaginations.
 

Sydle

Member
TheExecutive said:
So, explain to me how that would work? Do you have to stand to give commands? I didn't think Kinect could read individual fingers? How would you select units, etc.? It seems too inaccurate to do a lot of things that would be necessary for a successful implementation of a UI for RTS games. But once again, I am posing a question. Inform me, GAF!

Your hands are pointers, imagine your palm is facing the screen or whatever else is comfortable for you (doesn't really matter since it's tracking your wrist). With your left hand create a quick circular motion around a unit to select them. Bring up your right hand and place the virtual cursor on a destination and then move your left hand towards your right until they touch. Think of it in quick motions. Now think how you could repeat that several times in a matter of seconds to control your units.

I imagine there would need to be menus and such, but those could be pop ups when you move your hand to the far side of the display. Then you just swipe your way through it. This could also work with Viva Pinata. :lol

Just a quick idea without much thought admittedly, I'm sure someone else could come up with one better.
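For what it's worth, the "quick circular motion" part is detectable with plain geometry, no finger tracking required: sum the angle the wrist trail sweeps around its own centroid and call it a circle once it nears a full turn. A toy sketch (the sample count and turn threshold are guesses, not values from any real gesture API):

```python
import math

# Rough sketch of recognizing a circular wrist gesture from a trail of
# 2D points: check whether they sweep ~a full revolution around their
# centroid. All thresholds here are invented for illustration.

def is_circle(points, min_turn=2 * math.pi * 0.9):
    """Return True if the point trail sweeps roughly a full turn."""
    if len(points) < 8:
        return False
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    angles = [math.atan2(y - cy, x - cx) for x, y in points]
    total = 0.0
    for a, b in zip(angles, angles[1:]):
        d = b - a
        # Unwrap jumps across the +/-pi boundary.
        if d > math.pi:
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        total += d
    return abs(total) >= min_turn
```

A straight swipe sums to nearly zero net angle, so it would not trigger the selection, which is what you'd want for telling a "lasso" apart from an ordinary cursor move.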
 

RedStep

Member
distrbnce said:
Er, you can think of the area in front of you as a giant touchpad, but Kinect can't. That's where the problem lies.

The device absolutely can detect the depth/distance of your hands separately. This should work - push your hand forward to select/shoot/whatever.
 

Zoe

Member
Paco said:
Your hands are pointers, imagine your palm is facing the screen or whatever else is comfortable for you (doesn't really matter since it's tracking your wrist). With your left hand create a quick circular motion around a unit to select them. Bring up your right hand and place the virtual cursor on a destination and then move your left hand towards your right until they touch. Think of it in quick motions. Now think how you could repeat that several times in a matter of seconds to control your units.

Ugh, you'd really want to go through all of that trouble instead of using a KBM?

That would get tiring very quickly.
 

Sydle

Member
Zoe said:
Ugh, you'd really want to go through all of that trouble instead of using a KBM?

That would get tiring very quickly.

No, but if I'm going to shoehorn one of my existing games into the Kinect lineup, then I think it's the only one that really fits. Your sentiment is exactly why I think we need to drop the idea that Kinect should be used for existing games when their respective controls work just fine.

We should be expecting new games where our controllers don't make sense.
 
RedStep said:
The device absolutely can detect the depth/distance of your hands separately. This should work - push your hand forward to select/shoot/whatever.

That's more or less how the EyeToy Play interface (and some others) works. You move your hand along an "invisible wall" until it is over a button, then wiggle your fingers (instead of pushing your hand forward) to press it.

It's not precise, even with your hands reflected on the screen, and you get tired after 30 minutes or an hour of gameplay. It's strange, but it's very difficult to move your hand into the correct position; you never get a good sense of the size of the invisible wall.
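The hover-to-press scheme described here boils down to a dwell timer: the cursor has to sit inside a button's rectangle for some period before the press fires. A toy sketch of that pattern (geometry and timing invented, not taken from EyeToy Play):

```python
# Hypothetical dwell-to-press button, as used by camera UIs where there
# is no physical trigger: hold the cursor over the button long enough
# and it "clicks". Dimensions and the dwell period are made up.

class DwellButton:
    def __init__(self, x, y, w, h, dwell_s=1.0):
        self.rect = (x, y, w, h)
        self.dwell_s = dwell_s
        self.hover_time = 0.0

    def update(self, cursor_x, cursor_y, dt):
        """Advance by dt seconds; return True on the frame a press fires."""
        x, y, w, h = self.rect
        if x <= cursor_x < x + w and y <= cursor_y < y + h:
            self.hover_time += dt
            if self.hover_time >= self.dwell_s:
                self.hover_time = 0.0  # re-arm for the next press
                return True
        else:
            self.hover_time = 0.0  # leaving the button cancels the dwell
        return False
```

The dwell delay is exactly the sluggishness the post complains about: every press costs the full hold time, which is part of why this style of interface tires people out.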
 