
Project Natal - controller free gaming on 360

gofreak

GAF's Bob Woodward
Not sure if this is old or markedly surprising even, but just ran across this:

http://www.develop-conference.com/developconference2010/keynotes_2010.html

GEORGE ANDREAS - DESIGN TRACK KEYNOTE

The Future IS Controller-Free Games and Entertainment

The growth of physical-based gaming has introduced an entirely new audience to our industry and, like it or not, it's here to stay. As the technology improves, diversifies and becomes ever more complex, designing compelling experiences is a challenge, but this is something we should embrace as developers and not fear. But where do we go from here?

In this session George, Creative Director at Rare, will talk candidly about how Rare has created one of the premier launch titles for "Project Natal" and the challenges faced regarding the new design philosophy at the studio: "everything you've learnt over the last 25 years - throw it away!" We also take a brief look at Rare's last foray into the physical play space from yesteryear. We outline our vision for the future of physical-based gaming and why, as an industry, we have only just scratched the surface of the vast array of possibilities now available to us, and what this means for developers and for future generations of gamers alike.

Is Rare Natal-only now, or...? Stuff like this makes it sound like they've kind of wholly embraced it as their future or whatever.
 

TheOddOne

Member
gofreak said:
Is Rare Natal-only now, or...? Stuff like this makes it sound like they've kind of wholly embraced it as their future or whatever.
Would not be surprised though; over the last few months they have been spreading the Natal philosophy.
 
Razgreez said:
Could you show some sort of example where PN has done that though? Even something as simple as a 3D button press i.e. fully depressed, half way depressed, completely pressed sorta thing. Just curious
There is depth sensing going on, as clearly seen in many Ricochet demo vids. Not sure why it's so difficult to believe that Natal can do this since it is tested technology and a successful approach that has several showings from various vendors in video form on the 'net. I mean, MS isn't going to spend millions for patent clearance to tech regarding depth sensing methods if they aren't going to use similar-enough technology. If that was the case, why not just stick with the Vision camera, which is their standard and cheap webcam for X360 that's already out there?

cakefoo said:
Your claim was that you'd lose complete control of your body; in the Kinetic video it's demonstrating that you could have at least 2D control, which is much better than no control at all.

No, my claim was that without depth sensing and the specific software, the ability to track the body would be far more limited when trying to puppeteer a 3D model with real Z movement that goes beyond, say, the whole body moving forward or backward. A depth-capable setup allows individual parts of that body to move into and out of the flat plane normally seen in video-only capture and analysis.

gofreak said:
I'd hold your horses on assumptions about Natal's (in)sensitivity to lighting conditions. Last I read the tracking uses RGB data also - that automatically implies at least some dependency on the quality of that data, and thus lighting conditions. It may not be exclusively dependent on RGB like eye is, but I wouldn't assume independence from it yet based on what's been reported.

The range of conditions PS eye is usable for in this context I think is also reasonable enough to allow devs to use that if they wish.

It can use the RGB cam to pick up video for color and image recognition, and general video capture, but it doesn't need it for the depth, motion, and general body recognition used to track the points that form its skeletons; players, or even props allowed to be recognized, can be tracked in the dark or in uneven lighting. There's plenty of video out there showing consistent and accurate capture in dark rooms where the game's display is the only light source. The NIR depth-capture part of the hardware ensures that no other source of lighting is necessary for skeletal tracking.


Eyetoy didn't sell into millions of homes because they were perfectly lit. It's more robust than is sometimes suggested, and PSeye has made advances over it also.
I dunno. My experience with Eyetoy, PS Eye, and webcam motion capture software has all been contingent upon having lighting conditions that I don't normally maintain in my playing environment...such as when I just want to play in relatively low lighting in the living room, with a secondary source coming from behind me and my primary source being the display itself. Finicky is the word that best describes the lighting requirements for consistent capture...you practically need an adjusted source of fill lighting to guarantee acceptably performing feedback from something like EyePet. But it depends on how it's trying to capture, I suppose. If it depends on a lot of feature recognition to drive whatever the software is trying to do, there usually needs to be a lot more light coming from the direction of the camera than I have set up by default.

As for buttons, you say it like it's a bad thing, but buttons at the finger tips are an advantage. If you need to trigger something very reliably, a button is still by far the best way to do that.
I don't think it's a bad thing, but in the context of trying to fulfill the ideal of no-controller play, as MS often plugged it in its Natal PR from last year, I think it's important to see how far it can go reliably. I do wish that a secondary wireless controller, either wand-like or a small handheld or worn device, would be released alongside to start...just to cover all bases. I've expressed this sentiment since shortly after Natal's reveal at last E3. Shooting and continued movement through a 3D environment are certainly the most convenient actions to cover with buttons and/or sticks on a handheld device in conjunction with camera input.

This is true, turn off the lights and you just have the spheres. But turn on the lights and, again, I think the operating environment would be good enough for that not to prevent a dev from looking at data beyond the wands.
Well, if we're looking at the most comfortable, default conditions already at a player's home, it's an important consideration since not everyone will be able to or want to adjust to suit a game...and, ideally, they really shouldn't have to...it should just work. Whatever the reality, it is something that devs have to consider since it can make or break features that depend on better video capture of details to ensure something, like feature tracking of a face or other distinct and recognizable images, would be reliable enough during gameplay to do key things inside of that experience.

I mean, I know I'll have some reply who says 'I had to setup 20 1000W spotlights in my living room before it would work!', but I think typically most people are able to get these things working OK in their environment. A consumer product wouldn't have already spawned out of this and been successful in its own way if this were some huge showstopper.
If you're talking about Eyetoy or PS Eye, I really think the experiences probably range from unplayable to semi-playable to 'I spent twenty minutes getting it just right and have to make sure the lights are just so...'. There may well be a significant chunk out there who had no problems whatsoever, but I doubt that's the vast majority. I'd be curious what their user-experience feedback looks like and how those numbers compare to the total sold. I haven't ever had a consistent experience with normal cam-capture software for games and demos without having to adjust for it using calibration...which was fine because I was highly interested in the novelty of seeing it work optimally...but I don't think that's a good road to take for something that's supposed to appeal to non-hardcore gamers or non-tech geeks. But who knows...maybe they've adjusted what it's tracking to work across more varied lighting conditions, or can rely on display-based lighting since that's the most common light source in any normal use situation. It still doesn't change the hardware limitation of the cam itself, no matter what the software may be doing differently on the analysis end of the captured image.

Perhaps they've worked out how to pulse or use colors and intensities displayed during important moments at regular enough intervals to ensure decent lighting conditions during gameplay...like flashing bright images or making the entire display change just long enough to capture enough detail for tracking even in darker situations, I dunno.

The sum of your points here seems to be that Natal can track you doing multiple different things simultaneously...but that's not really some unique characteristic. If I jump to the left and turn my head at the same time, a PSeye (with the right software...like the demo Marks did) would be able to detect that.
I think it's unique in the sense that the combination of simultaneous 3D movements and input data being tracked and estimated by the Natal software can result in equally simultaneous and complex output, and that allows for a more complex effect on a game situation for the game software to act on. Just to come up with a crazy yet conceivable example, you could be crawling toward the screen while holding out your hands to interact with a virtual on-screen object, and your avatar/game character could track along doing the same thing. That's not going to happen with PS Eye, because it cannot tell what I'm doing without understanding the depth and relationship of my body and its appendages in such a position. Perhaps it could if you were wearing colors and/or patterns to help in the determination, because without the ability to generate and move a joint/bone-constrained 3D skeleton from a depth image, it has to rely on apparent detail, seen with the help of an adequate light source, to track. For simpler actions, like your example of jumping or ducking, I agree that it wouldn't be a stretch at all as long as it were kept to X/Y movement as the camera sees it, recognizing a jump by watching the mass in front of it move, but turning your head and working off of that might be problematic due to inconsistent lighting for tracking a face.

Anyway, I didn't mean to overemphasize or belabor Natal's ability to do so much analysis simultaneously as a key point in and of itself. Rather, I should have focused my statement on what it should do with so much concurrent action coming in from the camera, and spit out reliably for a game to depend on with consistent timing...to the point that it would be reasonable to ask the player to rub their tummy and pat their head while speaking key words and stooping under an oncoming railroad trestle as they walk forward atop a moving train. Perhaps an exaggeration of the hardware and software's ability and, more importantly, a bit much to expect a real game situation to ask a player to reasonably perform...but let's wait until next week to see how things look from what is shown.


True, although that wasn't part of the comparison that was offered. Regarding spatial awareness of your body, though... eye's relative weakness here would be depth, but it's not helpless. If I step back, my torso will get smaller, but remain in proportion. If I step forward it'll get bigger. If I lean forward my head will get bigger and my torso shorter (or it'll disappear, depending). If I step backward my head will get smaller and my torso shorter. It is possible to track these changes and correlate them to body movement. The difference with eye is that the granularity of depth-change detection is going to be coarser...but whether this yields a game-breaking difference or not might be a different question.
Right, I offered this solution in that post when I talked about comparing against a starting image and size and sensing scale changes from that initial capture. Not perfect within a certain range of change, because the analysis will likely be limited by the resolution of capture and, perhaps, by feature recognition and tracking that require (again!) good enough lighting, but good enough if the change going toward or away from the camera is large enough.
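The scale-comparison idea being described here can be sketched in a few lines. This is only an illustration, assuming an ideal pinhole camera (apparent size is inversely proportional to distance) and invented function names and pixel values; it's not anyone's actual implementation:

```python
def estimate_relative_depth(ref_height_px, cur_height_px, ref_distance=1.0):
    """Rough relative depth from apparent size change.

    Under a pinhole-camera model, apparent height scales as 1/distance,
    so distance ~ ref_distance * (ref_height_px / cur_height_px).
    """
    if cur_height_px <= 0:
        raise ValueError("bounding box height must be positive")
    return ref_distance * (ref_height_px / cur_height_px)

# Calibrate: the player's torso spans 200 px at the starting position.
# Later the torso spans only 100 px -> the player is roughly twice as far away.
print(estimate_relative_depth(200, 100))  # 2.0
```

As noted above, the estimate gets noisy for small changes, since a few pixels of segmentation error swamp the scale difference at typical webcam resolutions.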

A lot of eyetoy games didn't even try to identify parts of the body discretely, they just looked for motion, but as we've seen with Kung Fu Live, at least, segmentation of the body and labelling of parts with bounding boxes is possible with eye.
Indeed, as I think they're calibrating before the gameplay session starts by asking the player to hold out and scan in those distinct parts for tracking. So, my guess is the normal consideration for movement and collision: looking at frames to see what pixels move in which directions and at what estimated speed, while performing some form of feature/image tracking based on the scanned appendage and how it then works with the total silhouette to conform to a known and expected body structure, a rudimentary 2D skellie.
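The "see what pixels move" part of that guess is classic frame differencing. A minimal sketch of the idea with NumPy; the threshold, region test, and all names are made up for illustration, not taken from any actual EyeToy title:

```python
import numpy as np

def motion_mask(prev_frame, cur_frame, threshold=25):
    """Flag pixels whose grayscale intensity changed by more than `threshold`
    between two frames -- the crude 'if it moves' test EyeToy-style games rely on."""
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def hit_region(mask, region):
    """Treat a rectangular screen region as 'hit' if enough moving pixels fall in it."""
    y0, y1, x0, x1 = region
    return mask[y0:y1, x0:x1].mean() > 0.1  # >10% of the region's pixels are moving

# Two 240x320 grayscale frames; the second has a bright moving blob top-left.
prev = np.zeros((240, 320), dtype=np.uint8)
cur = prev.copy()
cur[10:60, 10:60] = 200
mask = motion_mask(prev, cur)
print(hit_region(mask, (0, 80, 0, 80)))        # motion inside this region
print(hit_region(mask, (160, 240, 160, 320)))  # nothing moving here
```

Note that nothing in this identifies *what* moved, which is exactly the cat-could-play-it limitation discussed later in the thread; the calibration scan is what lets the software start labelling which moving blob is which limb.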


No, it could be done a lot better than on Wii :p Wii is entirely blind, and the original Wiimote, at least, is like a man who is spun around every few seconds, with only passing relative knowledge of where it's at. With eye and move you can know very accurately the position in space of the controller (hands) in front of the TV, and you can also track head/torso, and indeed even legs (though in a much more limited way). The difference of course is that for the parts of your body that go in front of other parts, you're going to have difficulty with eye (arms and legs). With a move in your hand, though, that's solved for the arms, and then the only difference I can see between what's being done in Ricochet and what was done in the Marks 'robot' demo, for upper body, was how well elbows would be dealt with. Elbows are generally a pain in the ass for any tracking because there's so much potential for occlusion, so Marks' demo doesn't even bother with them. It'll be interesting to see just how reliably Natal 'gets' elbows.
Well, I meant rough in the nicest way possible...but it could be done if we're honest about how all of this is still, no matter how accurate it may be, an illusion completed for the sake of the traditionally grander, smoke-and-mirrors illusion of a video game experience. Simplified movements and some teaching of expected gestures to trigger the desired result would be necessary...but that might be necessary no matter the fidelity of tracking, just to ensure a fun and playable game for most people. I think Natal will be good with elbows and joints once realistic constraints are placed on the models being puppeteered. Ricochet is just not a good example of how good it could be...I think we'll see that soon enough when Monday rolls around.


Some of those things are problematic with Move. Legs, for example, because we can't really ask you to strap moves to them (whatever Sony's patents might say :p). But shoulder charge? Sure (at least where there's overt translation of the shoulder/torso). Opening and closing palm? If that were required and I was making a game for Move, I'd map that to a button. It'll probably be far more reliable/unambiguous than the state detection Natal offers as your hand moves around and becomes more or less visible (and funnily enough, in the Marks robot demo the trigger was indeed mapped to opening and closing the palm, and because it's nice and analog you can get a nice range of open-ness and closed-ness). Elbow? Very tricky with move. Jumping could actually, I think, be detected with eye. Ducking certainly.
Yeah, it depends, but I think if you're facing the screen and, with it, your camera, the palm/fist detection should be easy enough, outside of people who have strange hands or cannot reliably go fully open on the palm transition...and even then, you'd think just checking for the smaller image of a fist versus the bigger palm thrust toward the camera should be the baseline for comparison and determination. Jumping and ducking will, on the Eye, obviously be a rough check for coverage, just comparing how many pixels and/or how much position changes from one frame or state to another to guess the current action, if the analysis is just looking at quick or dramatic changes. Subtlety will clearly be a bit messier to track in a game situation without reliable enough detection, so looking at extreme poses and positions of movement is probably going to be more useful for PS Eye, but that all depends on what you're doing, of course.
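That smaller-fist-versus-bigger-palm baseline check can be written down directly. A sketch only: the 0.6 ratio, the pixel areas, and the function name are all invented for illustration, and a real system would need per-player calibration and smoothing over frames:

```python
def classify_hand(cur_area_px, open_palm_area_px, fist_ratio=0.6):
    """Crude open/closed hand classification: a closed fist covers noticeably
    fewer silhouette pixels than the calibrated open palm, so compare areas."""
    if open_palm_area_px <= 0:
        raise ValueError("calibration area must be positive")
    ratio = cur_area_px / open_palm_area_px
    return "fist" if ratio < fist_ratio else "palm"

# Calibration: the player's open palm covers 3000 px at their play distance.
print(classify_hand(1500, 3000))  # a half-sized blob reads as a fist
print(classify_hand(2800, 3000))  # near-full area reads as a palm
```

The weakness is exactly the one raised above: the same area change also happens when the hand moves away from the camera, so without depth data the two cases are ambiguous.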


I'm not and didn't intend that. I intended to make the comparison on what eye and move in total could offer beyond just the hands, though I thought it was worth noting that for the hands, having a controller here I think is an advantage, and will probably remain so for a while. As I said in my first post, the rest of the tracking is not as good or as general as Natal, but it's also not absent, and if you're wanting to make a game with more full body awareness in tandem with the controllers, you definitely have options.

As I also said, though, Sony isn't as interested at the moment in encouraging development along those lines - or so it seems. They seem to be focussing on building a bridge to Wii which strategically might make sense, since that's where the target content's at right now. But as a technological comparison it's a bit more in the middle between the two approaches (camera | controller).
Yeah, I didn't mean to come across that Natal should do absolutely everything better than Eye (just most things :)), as there are advantages to the wand with buttons, and sticks, and physical sensors. We'll see what happens next week as I think most clear distinctions will be able to be drawn following the Sony conference on Tuesday after the latest stuff is shown off for the public.
 

Razgreez

Member
MightyHedgehog said:
There is depth sensing going on, as clearly seen in many Ricochet demo vids. Not sure why it's so difficult to believe that Natal can do this since it is tested technology and a successful approach that has several showings from various vendors in video form on the 'net. I mean, MS isn't going to spend millions for patent clearance to tech regarding depth sensing methods if they aren't going to use similar-enough technology. If that was the case, why not just stick with the Vision camera, which is their standard and cheap webcam for X360 that's already out there?

Geez, all I asked for was an example. The Ricochet vid looks exactly like the old Eyetoy Kinetic vids to me. I can see the avatar's limbs moving slightly forward or backward but no noticeable depth effect on the balls, i.e. they might as well remain on the same plane. Or are you saying you don't have an example and it's all conjecture on your part as much as it is on everybody else's? What I've learnt from experience is that it's one thing to provide a proof of concept/prototype and file a patent, and something totally different getting the actual product to perform as you want it to in reality.

Would be beyond awesome to play a game like Smart Bomb on the PSP using PN "as advertised". Pulling the pins out carefully, tilting the tables, connecting the wires, aligning the laser reflectors, all with your hands and all with an accurate sense of depth and stability. I can picture the sweat dripping from my brow while I sit there deep in anxiety and concentration.
 
Razgreez said:
The ricochet vid looks exactly like the old eye-toy kinetic vids to me.
Are you serious? In Ricochet, you can see a polygonal mapped character/avatar portraying your entire body's every move. In Kinetic, you just see a mirror image of yourself on screen.
 
gofreak said:
Not sure if this is old or markedly surprising even, but just ran across this:

http://www.develop-conference.com/developconference2010/keynotes_2010.html



Is Rare Natal-only now, or...? Stuff like this makes it sound like they've kind of wholly embraced it as their future or whatever.

Rare may see Natal as their second chance to make themselves relevant to more than a small number of people in that room. They know that outside of that room, nobody gives a shit about Rare games.

Natal is their chance to start anew. And they are doing exactly that with all the re-branding.
 

ViolentP

Member
Monty Mole said:
Are you serious? In Ricochet, you can see a polygonal mapped character/avatar portraying your entire body's every move. In Kinetic, you just see a mirror image of yourself on screen.

So you're saying Kinetic looks better?
 

Alx

Member
Razgreez said:
Geez, all I asked for was an example. The Ricochet vid looks exactly like the old Eyetoy Kinetic vids to me. I can see the avatar's limbs moving slightly forward or backward but no noticeable depth effect on the balls, i.e. they might as well remain on the same plane.

Even if that were the case (which I doubt), it's only a matter of game design... the in-game avatar animation is proof enough that Natal is able to map your 3D motions to a virtual character. Once that's done, you can decide how you will use this mapping in your game, and let depth and 3D models have more or less influence on the physics of the ball...

Even if as a game it looks similar, Eyetoy Kinetic is much more basic; all it does is "if it moves, it can destroy the objects". It doesn't know what a limb, a torso or an object is... your cat could play the game and it wouldn't make any difference.
 
Monty Mole said:
Are you serious? In Ricochet, you can see a polygonal mapped character/avatar portraying your entire body's every move. In Kinetic, you just see a mirror image of yourself on screen.


Reminds me of the Armando Iannucci sketch about green-screening all the drinks glasses in Eastenders!!


http://www.youtube.com/watch?v=UjhBf4vlTw0


"No that's how you do it, but why do you do it?"

Seems pointless when the mirror image is cooler.
 
derFeef said:


Sorry, but I am all sorts of confused as to what I'm supposed to be seeing.


Wait, are you talking about the concept art to the right of the art showing what appears to be rain and headlights in FORZA 3? I'm guessing that's showing head tracking, isn't it? That's all fantastic and whatnot, but seeing how there is no rain or weather in Forza 3, it's hard for me to believe head tracking is coming.
 

ShapeGSX

Member
Razgreez said:
Geez, all I asked for was an example. The Ricochet vid looks exactly like the old Eyetoy Kinetic vids to me. I can see the avatar's limbs moving slightly forward or backward but no noticeable depth effect on the balls, i.e. they might as well remain on the same plane.

Ancient example. 363 days old.

http://www.latenightwithjimmyfallon.com/blogs/2009/06/kudo-tsunado-demos-project-natal/

When he moves forward, the avatar moves forward. This affects when his hands will hit the balls.

When the ball is reset, you have to move your hand forward in order to smack it.

There you go. Depth is affecting game play.
 

Alx

Member
travisbickle said:
Seems pointless when the mirror image is cooler.

It may be "cooler", but it's also much more limited, both in control and in design: with the mirror image, you have to build all your games around the picture of the player facing the camera. No boxer in shorts, no elf running in the meadow, no bald space marine, only yourself...
The single fact that with Natal you can show the back of the player, like in Ricochet, changes everything, since it gives you a point of view more adapted to gameplay: since you can see what's in front of your character, you can have a real 3D game. Notice that one of the main gameplay differences between Ricochet and Kinetic is that the first one has a 3D playfield.
 
Never noticed the glitches in that Jimmy Fallon video before; after he misses the serve, it thinks he's facing away from the screen for a bit. Very odd.
 
I think just having the mirror image looks cheap, and it severely limits the style of games. It just reminds me of all the mediocre cam based games I've played before. I owned an Eyetoy, and the "tracking" was... well, I don't even know if you could call it that. Most of the games just looked for movement of any kind, so you could easily cheat. The window washing game, for example, with Natal it would track your hand, which would have to approach and touch the window, and only your hand could clean it. With the Eyetoy you could literally just go right up to the camera, swipe your finger across and clean the window in a millisecond. I've no doubt that you could hit the targets in Eyetoy Kinetic by just throwing things in front of the camera.
 
derFeef said:
OMG I can see MYSELF in the TV, and that virtual objects are bouncing off of me! OMG!


So "I can see a polygonal mapped character/avatar portraying my entire body's every move in the TV, and that virtual objects are bouncing off of it! OMG!" is a lot more 2010?

It's just another case of improving tech for the sake of it and not for improving the essence of the game.
 

derFeef

Member
travisbickle said:
So "I can see a polygonal mapped character/avatar portraying my entire body's every move in the TV, and that virtual objects are bouncing off of it! OMG!" is a lot more 2010?

It's just another case of improving tech for the sake of it and not for improving the essence of the game.
You are right, it's not that different if you look at it two-dimensionally, but still very different in appeal. It feels a lot less cheap. And yeah, dare them improve the tech!
 

Alx

Member
InaudibleWhispa said:
Most of the games just looked for movement of any kind, so you could easily cheat.

Heh, I high-scored the stupid Crazy Taxi minigame in Sega Superstars by holding the camera in my hand, shaking it and blowing on the microphone. :D
 
travisbickle said:
So "I can see a polygonal mapped character/avatar portraying my entire body's every move in the TV, and that virtual objects are bouncing off of it! OMG!" is a lot more 2010?
Of course it is. Nearly every person standing in front of Natal for the first time has a "whoa" moment when they see the game character reflect their every move. The game Ricochet itself mightn't take advantage of that technology in a great way, but the tech itself is certainly 'a lot more 2010' and it can be used in many more ways.
 
InaudibleWhispa said:
I think just having the mirror image looks cheap, and it severely limits the style of games. It just reminds me of all the mediocre cam based games I've played before. I owned an Eyetoy, and the "tracking" was... well, I don't even know if you could call it that. Most of the games just looked for movement of any kind, so you could easily cheat. The window washing game, for example, with Natal it would track your hand, which would have to approach and touch the window, and only your hand could clean it. With the Eyetoy you could literally just go right up to the camera, swipe your finger across and clean the window in a millisecond. I've no doubt that you could hit the targets in Eyetoy Kinetic by just throwing things in front of the camera.


I was once playing the eyetoy with my back to the window, and suddenly the real window cleaner showed up, and started washing the windows both in real life and in the game at the same time! I'd like to see Natal try and do that.
 
Graphics Horse said:
I was once playing the eyetoy with my back to the window, and suddenly the real window cleaner showed up, and started washing the windows both in real life and in the game at the same time! I'd like to see Natal try and do that.
:lol
 

Man

Member
tinfoilhatman said:
Seriously people are still comparing Natal to eyetoy?!?!? Gimme a break already this should be a bannable offence.
People are comparing Move to Wiimote plus.
 

ShogunX

Member
Man said:
People are comparing Move to Wiimote plus.

kids-crying.jpg


Grow up.
 

Ristlager

Member
tinfoilhatman said:
Seriously people are still comparing Natal to eyetoy?!?!? Gimme a break already this should be a bannable offence.

Until we see something radically different from the old Eyetoy games, comparing them isn't that wrong. We have not seen anything in Natal games so far that hasn't been possible on Eyetoy (but we have only seen Ricochet and that raft game). But I doubt we are going to make the comparison after E3, since Microsoft are going to show us the big guns, and they should be far more impressive.
 
Ristlager said:
Until we see something radically different from the old Eyetoy games, comparing them isn't that wrong. We have not seen anything in Natal games so far that hasn't been possible on Eyetoy (but we have only seen Ricochet and that raft game). But I doubt we are going to make the comparison after E3, since Microsoft are going to show us the big guns, and they should be far more impressive.
It's like comparing a square and a cube, no matter how you look at it. It doesn't matter what they've shown because we know what the technology is, and we know that Natal isn't just a better camera. And when has Eyetoy, a standard 2D webcam, been capable of rendering a character that can fully mimic the full range of human body movement in 3D? That is what Ricochet is doing, and as far as I know Eyetoy hasn't done anything of the sort. All of the Eyetoy games I've played are absolutely primitive in comparison.
 

cakefoo

Member
MightyHedgehog said:
cakefoo said:
Your claim was that you'd lose complete control of your body; in the Kinetic video it's demonstrating that you could have at least 2D control, which is much better than no control at all.
No, my claim was that without depth sensing and the specific software, the ability to track the body would be extra limited when trying to puppeteer a 3D model with real Z movement that goes beyond, say, the whole body moving forward or backward.
*ahem*
MightyHedgehog said:
 

-COOLIO-

The Everyman
Shogun PaiN said:
http://scrapetv.com/News/News%20Pages/Everyone%20Else/images/kids-crying.jpg

Grow up.

he makes a good point.

eyetoy -> natal is comparable to wiimote -> move
 
cakefoo said:
Eh, my larger point was about the lack of comprehensive tracking. So, yes, if you want to pick out that bit, go right ahead, despite my acknowledging that any webcam, including the PS Eye, could track the legs as a part of you (but not necessarily as the working legs part of you) and not do much with it beyond the simple stuff you've seen on PS2. You cannot puppeteer a 3D model using the Sony cam going into and out of the screen to do anything gameplay-related with any consistency...not without depth-sensing hardware and software. Not sure why you're bothering, honestly. You know what I meant, or do I have to keep reiterating the same point of 3D tracking of your body over and over again? You cannot kick into or away from the camera with your legs using PS Eye or Eyetoy and hit something in front of you with any sort of accuracy, if it could be detected reliably at all. If it were to your left or right or generally outside of the silhouette staring back at you, you'd be able to 'hit' other things outside of your outline using the old and simple 2D collision we've all seen since before the PS2 had it.
 
-COOLIO- said:
he makes a good point.

eyetoy -> natal is comparable to wiimote -> move
I'm not fully up to date on how different Wii Motion+ and Move are, but they appear to be capable of pretty much the same thing, just with Move being better at it. Natal is a completely different piece of technology to the Eyetoy, which is just a camera, and as such it has the potential to open the doors to different gaming experiences. I'm not sure Move is going to do that. To me it looks like it'll be pretty much Wii games, but better.
 

Kafel

Banned
This thread is annoying.

The unveiling can't come soon enough so we see that it simply "works" and then the games will speak for themselves.
 

Alx

Member
-COOLIO- said:
he makes a good point.

eyetoy -> natal is comparable to wiimote -> move

Not really...wiimote and move are very close both in technology and functionality.
The wiimote uses IR sources (the sensor bar), a (very basic) camera to detect those sources, and inertial sensors. Move uses a light source (the glowing ball), a standard camera to detect it, and inertial sensors. The similar components are arranged differently, but the output is the same: measurement of the six degrees of freedom of the controller held by the user. The main difference is that the Move camera is a real camera, so it can also be used in "eyetoy mode"; basically, Move = eyetoy + Wiimote(+).

The difference between eyetoy and natal is more important, since both the technology and functionalities are different. The Natal output is a lot more than an eyetoy image with better quality.
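One way a single 2D camera can recover the controller's depth, as Alx describes for Move, is from the apparent size of the glowing sphere. The sketch below is purely illustrative (the focal length is an assumed value, and Sony's actual tracker is far more sophisticated): under a pinhole-camera model, a sphere of known physical radius that appears smaller in the image must be proportionally farther away.

```python
# Illustrative sketch, not Sony's actual algorithm: with a sphere of known
# physical radius, a pinhole-camera model recovers the controller's depth
# from its apparent radius in the image. This is how one 2D camera can add
# the Z axis that plain blob tracking lacks.

FOCAL_LENGTH_PX = 800.0   # assumed camera focal length, in pixels
BALL_RADIUS_CM = 2.25     # roughly the Move sphere's real radius

def depth_from_radius(radius_px):
    """Distance (cm) at which a BALL_RADIUS_CM sphere appears radius_px wide."""
    return FOCAL_LENGTH_PX * BALL_RADIUS_CM / radius_px

print(round(depth_from_radius(18.0)))  # ~100 cm when the ball spans 18 px
print(round(depth_from_radius(9.0)))   # twice as far (~200 cm) at 9 px
```

Halving the apparent radius doubles the estimated distance, which is the inverse-size relationship the pinhole model predicts.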
 

cakefoo

Member
MightyHedgehog said:
Eh, my larger point was about the lack of comprehensive tracking. So, yes, if you want to pick out that bit, go right ahead despite my acknowledging the ability that any webcam, including the PS Eye, could track the legs as a part of you
But it directly conflicts with your saying you'd "lose the ability to use (your) legs and body to hit and block." It's plain as day; I isolated that bit because it was factually untrue.
 

ShogunX

Member
-COOLIO- said:
he makes a good point.

eyetoy -> natal is comparable to wiimote -> move

Maybe I'm missing something about Move, but from what I've seen so far it doesn't appear to do anything that you couldn't pretty much do with the Wii - well, apart from being used with HD games, of course :D

The same can't be said about Natal and the Eyetoy. To be honest, a lot of posters in here are either misinformed about what Natal can do or are choosing to be a little ignorant - I guess you could say the same about me and my views on Move.
 
Kafel said:
This thread is annoying.

The unveiling can't come soon enough so we see that it simply "works" and then the games will speak for themselves.
Yes, it is. Monday is so far away...but I suspect that virtually all of the most doggedly asserted crap will vaporize the moment we see more than Ricochet, despite most comments about its limitations having already been proven false by just that simple game demo, which a lot of people seem so unimpressed with and some are even comparing to Eyetoy games...

cakefoo said:
But it directly conflicts when you say you'd "lose the ability to use (your) legs and body to hit and block."
Are you deliberately being dense? Because I was talking to gofreak about Ricochet in that long-ass post that you've clipped that tasty morsel from. Please tell me you think PS Eye or Eyetoy can do Ricochet as we've all seen it, and if you do, I'll just figure that you don't understand what I'm talking about and we'll just call it a day. You want to ignore the context inside of the convo, you go right ahead.
 

gofreak

GAF's Bob Woodward
MightyHedgehog said:
It can use the RGB cam to pick up video for color and image recognition, and for general video capture. It doesn't need it for depth, motion, or general body recognition: the points that form its skeletons, tracked from players or even recognized props, can be captured in the dark or in uneven lighting. Plenty of vids out there show consistent and accurate capture in dark rooms where the game's display is the only light source. The NIR depth-capture part of the hardware ensures that no other lighting is necessary for skeletal tracking.

I haven't seen those. If it's only using depth for the tracking, then for sure, it should work in the dark even. I thought I recalled reading that RGB was used also here, but if not, then for sure the dependencies there go away.
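The reason depth-only tracking works in the dark can be shown with a toy example. This is a hedged sketch under simplified assumptions (real skeletal tracking involves far more than thresholding): a structured-IR sensor returns per-pixel distances in millimetres regardless of visible light, so the player can be cut out of the scene by depth range alone.

```python
# Hedged sketch: a structured-IR depth camera returns per-pixel distances
# in millimetres, independent of visible light, so a player can be
# segmented from the background by depth alone, even in a dark room.

def segment_player(depth_mm, near=500, far=2500):
    """Keep pixels whose depth falls inside the expected play-space range."""
    return [[near <= d <= far for d in row] for row in depth_mm]

# Toy 1x4 depth row: wall at 3 m, player torso at ~1.2 m, sensor dropout at 0.
row = [[3000, 1200, 1150, 0]]
print(segment_player(row))  # [[False, True, True, False]]
```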

MightyHedgehog said:
Finicky is the word that best describes the lighting requirements for consistent capture...

I dunno if I'd call it finicky to require standard artificial light in a room, or daylight (vs low light)...most people can turn up the lights in their room, and it's not a huge deal. That said, I've seen Marks lately taking his avatar demo out in less than ideal lighting conditions (most recently at Games For Health, where it was typically dimmer, artificial 'presentation lighting' to accommodate a projector), and the demos worked fine.

I'm not saying no one - or you in your setup specifically - has problems, but I just wonder how frequently it is that there are unavoidable environment issues.

MightyHedgehog said:
Jumping and ducking will, on the Eye, obviously be a rough check for coverage, just comparing how many pixels change and/or how much position shifts from one frame or state to another to determine or guess the current action, if the analysis is just looking at quick or dramatic changes. Any kind of subtlety will clearly be messier to track in a game situation without reliable enough detection, so extreme poses and positions of movement are probably going to be more useful for the PS Eye, but that all depends on what you're doing, of course.

Depends how you're implementing it, I think. You can do something quite granular and fluid; it doesn't have to be binary 'I'm standing up'/'I'm crouching' detection...you can follow along with the in-between bits. The puppet/avatar demo does a pretty good job of giving a strong impression of quite granular 1:1 upper body mapping, including ducking and jumping etc, but it has cases where it'd fall apart and require something more sophisticated than what it's doing (which is really just using head tracking and the controllers at the hands...but it's amazing how much mileage you get out of that). You'd want to do more if you wanted to distinguish between forward/backward leaning and x/y translation of the head, for example.
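The "granular, not binary" idea gofreak describes can be sketched simply. This is a hypothetical illustration only: instead of a crouch/stand switch, track the vertical centroid of the player's silhouette each frame and map that value continuously onto the avatar, so the in-between poses come along for free.

```python
# Minimal sketch of continuous (rather than binary) crouch tracking from a
# plain 2D silhouette: the mean row index of the player's pixels drifts
# downward as they duck, giving a smooth value to drive an avatar with.

def vertical_centroid(mask):
    """Mean row index of all silhouette pixels (0 = top of frame)."""
    rows = [y for y, row in enumerate(mask) for px in row if px]
    return sum(rows) / float(len(rows)) if rows else None

standing = [[True] * 4 for _ in range(8)]        # body fills rows 0-7
crouched = [[False] * 4] * 4 + [[True] * 4] * 4  # body only in rows 4-7
print(vertical_centroid(standing))  # 3.5 (mid-frame)
print(vertical_centroid(crouched))  # 5.5 (lower, so the avatar crouches more)
```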

MightyHedgehog said:
We'll see what happens next week as I think most clear distinctions will be able to be drawn following the Sony conference on Tuesday after the latest stuff is shown off for the public.

Well, like I said earlier, this is a somewhat moot debate anyway, since software-wise Sony is mostly exploring one end of things, and as it appears, isn't really focusing too much on the general camera-input end. The distinctions based on what the software reflects are already clear I think. Maybe we'll see some tech demos from Sony that mix in the camera more though (as with the puppet demo).

As for MS, they've got plenty they'll hopefully show. The applications that show the expansion of camera-based control, and its relevance/applicability etc. Perhaps even some pure tech demo stuff that might show more than the first software does. Some final specs would be nice too :p

MightyHedgehog said:
Yes, it is. Monday is so far away...but I suspect that virtually all of most doggedly asserted crap will vaporize the moment we see more than Ricochet despite most comments about its limitations being already proven false by just that simple game demo a lot of people seem so unimpressed with and some are even comparing to Eyetoy games...

I think people are comparing to eyetoy games because of how they're seeing the tech applied vs the quality of the tech itself. But we have indeed only seen one game or demo, or heard of only a couple of games, and I expect a lot more variety to talk about next week.

Shogun PaiN said:
Maybe im missing something about Move but from what ive seen so far it doesn't appear to do anything that you couldn't pretty much do with the Wii

There's plenty of improvements - not least thanks to the camera - but it's probably best to ask about them in a Move thread...
 

cakefoo

Member
MightyHedgehog said:
Are you deliberately being dense? Because I was talking to gofreak about Ricochet in that long-ass post that you've clipped that tasty morsel from. Please tell me you think PS Eye or Eyetoy can do Ricochet as we've all seen it, and if you do, I'll just figure that you don't understand what I'm talking about and we'll just call it a day. You want to ignore the context inside of the convo, you go right ahead.
Well now you're saying Eye can track legs and body, but without the 3D fidelity. But that's all gofreak said too:
gofreak said:
I mean for something like Ricochet, you could definitely do that on Move, albeit with more compromised leg interaction
But you countered him, saying he would "lose the ability to use his legs and body to hit and block with."

In a game like Ricochet you don't need 3D tracking to block a ball. Will 3D help? Sure. But lacking it won't completely remove limb control if you just had a 2D camera. What you said was understood as losing "the" ability to use those parts, not just "some" ability to use them.
 

mrwilt

Member
Wouldn't you need 3D tracking rather than 2D for the Ricochet game? I would imagine that if it was just 2D, when you lift your hand to serve the ball it would just knock the ball straight up, because the camera wouldn't know if your hand is behind the ball, underneath it, or in front of it. Z depth definitely plays a part in that.
 

Yamauchi

Banned
I've watched the Youtube trailer about 10 times now. I was fairly amazed when I first saw it, but now it's starting to seem really silly to me. Maybe I'm just simple-minded in this aspect, but holding a make-believe steering wheel doesn't seem fun.

Ricochet looks really fun, though. A few more games like that and it could be really popular.
 