This isn't PR.
> Videogames give you the "context"

Voice control =/= lip reading. It's simply impossible to do with high accuracy in a commercially viable product.
Right now, lip reading is done by people, and there are plenty of situations where disagreements arise because some lip movements simply can't be read. You need context for that, and an electronic device won't provide you with context.
Then we get into the whole issue of accents, pronunciation errors, etc.
Voice control will end up being similar: you would need a lot of processing power to analyze tone of voice, pitch, etc. Just because you can do stupid one-word commands on Kinect doesn't mean the technology will improve at the same rate as CPUs, RAM, etc. The box is for playing video games and watching crap TV shows, FFS!
Microsoft at its best - PR bullshit.
Ever heard of "controlled leak"?
We got a group of friends together last weekend and played Kinect Sports and Dance Central. Worked great and everyone had fun. I also play Child Of Eden regularly and I prefer the Kinect controls to the controller. I've also heard good things about Gunstringer and Fruit Ninja.
Haven't tried it, but I also hear good things about it... It's a bit of a smartphone-app thingy, but it seems to work well.
How about you get the first one kinda functioning before talking about the next one?
> You don't use raw data for the depth field. You detect jitter between adjacent points. Meaning it's physically impossible to produce a depth field of the same resolution as the camera. If the adjacent camera pixel is lit up by another ray, the jitter information is destroyed. That's why Kinect depth fields are 320x240, even though the two cameras used to produce them are both 640x480.

Uh, of course it's hampered by USB; you can even observe it firsthand with the different SDKs on PC, official or not: you can retrieve raw data at 640x480 precision from the sensor. The only reason the 360 doesn't do it is that it takes too much bandwidth.
Compression could have helped, but it also adds latency, costs computing resources, and reduces measurement precision.
USB is not really a good protocol, especially for video streaming. It's an old problem, but since it's the most common standard, engineers have to deal with it.
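For scale, here's a back-of-the-envelope bandwidth estimate (my own numbers, not official specs: uncompressed frames, 32-bit color plus 16-bit depth, 30 fps, and roughly 35 MB/s of practical USB 2.0 throughput after protocol overhead) showing why a full-resolution stream is a squeeze:

```python
# Rough check: raw Kinect-style stream vs. USB 2.0.
# Assumed (not official): 32-bit RGB + 16-bit depth per pixel at 30 fps;
# USB 2.0 sustains ~35 MB/s in practice (the 480 Mbit/s line rate is
# never reachable once protocol overhead is accounted for).

def stream_mb_per_s(width, height, bits_per_pixel, fps):
    """Uncompressed video bandwidth in megabytes per second."""
    return width * height * bits_per_pixel / 8 * fps / 1e6

color = stream_mb_per_s(640, 480, 32, 30)  # ~36.9 MB/s
depth = stream_mb_per_s(640, 480, 16, 30)  # ~18.4 MB/s
print(round(color, 1), round(depth, 1), round(color + depth, 1))
```

Under those assumptions, the color stream alone already exceeds what USB 2.0 sustains in practice, which fits the claim above that the 360 falls back to a lower-resolution depth field to save bandwidth.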
The hardware is working as it should.
If some devs use it for things it isn't built for, don't blame the hardware; blame the software.
But trust Pachter, advanced devkits aren't out there... really.

> "It can be cabled straight through on any number of technologies that just take phenomenally high res data straight to the main processor and straight to the main RAM and ask, what do you want to do with it?" our source said.
> Voice control =/= lip reading. It's simply impossible to do it with high accuracy in a commercially-viable product.
> You don't use raw data for the depth field. You detect jitter between adjacent points. Meaning it's physically impossible to produce a depth field of the same resolution as the camera. If the adjacent camera pixel is lit up by another ray, the jitter information is destroyed. That's why Kinect depth fields are 320x240, even though the two cameras used to produce them are both 640x480.
Is it even possible?
I mean, lip reading is an iffy science as it is.
And why would you need to lip read if it can recognize voice commands anyway?
So it can automatically ragequit games for you?
> You don't use raw data for the depth field. You detect jitter between adjacent points. Meaning it's physically impossible to produce a depth field of the same resolution as the camera. If the adjacent camera pixel is lit up by another ray, the jitter information is destroyed. That's why Kinect depth fields are 320x240, even though the two cameras used to produce them are both 640x480.
USB was not the bottleneck; the skeletal recognition takes huge processing power and RAM. I've watched it choke an i7 with 16 GB of RAM.
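The half-resolution argument in the quoted post can be sketched numerically. This is a toy model, not Kinect's actual pipeline: assume each depth sample is derived by comparing a small block of camera pixels (a 2x2 block here; the real correlation window size is my assumption):

```python
# Toy model of the resolution argument (not Kinect's actual algorithm):
# if each depth sample needs an N x N block of camera pixels to measure
# speckle displacement, a W x H sensor can only yield (W/N) x (H/N)
# independent depth samples.

def depth_resolution(sensor_w, sensor_h, block=2):
    """Depth-field size when each sample consumes a block of pixels."""
    return sensor_w // block, sensor_h // block

print(depth_resolution(640, 480))  # (320, 240), matching the quoted figure
```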
Good idea. When you throw down the controller, it sends an angry message to your opposition, logs you off, and shuts down the Xbox.
> Ever heard of "controlled leak"?

Yep, and game journalists get constantly bribed to write good reviews.
> USB being a bottleneck for a 320x240 signal at 30hz... weird, my PSEye works at 640x480 at 60hz or 320x240 at 120hz and it also uses USB ;O

And it also doesn't have a 3D depth camera, which is the thing that is supposedly being limited.
> USB being a bottleneck for a 320x240 signal at 30hz... weird, my PSEye works at 640x480 at 60hz or 320x240 at 120hz and it also uses USB ;O
The PSEye has been optimized to stream video data and only that (I think RGB info is coded on 3 bytes per pixel instead of the usual 4).
A Kinect stream is a 32-bit color image + a 16-bit depth image (+ sound). It's bigger than a regular image stream, and probably less optimized.
I think we did the calculations a few months ago: the PSEye would use the full bandwidth of a USB 2.0 port. You probably can't use another device on the same USB controller (but that's not a big problem on PS3, where few peripherals are connected through USB).
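Redoing that calculation (assuming raw, uncompressed frames at 3 bytes per RGB pixel as suggested above; the PSEye's actual on-wire format may well be more compact, e.g. Bayer or YUV):

```python
# PSEye modes vs. USB 2.0: raw bandwidth for the two modes mentioned.
# Assumes 3 bytes per RGB pixel; the real wire format may differ.

def mb_per_s(width, height, bytes_per_pixel, fps):
    """Uncompressed video bandwidth in megabytes per second."""
    return width * height * bytes_per_pixel * fps / 1e6

vga = mb_per_s(640, 480, 3, 60)    # 640x480 @ 60 Hz -> ~55.3 MB/s
qvga = mb_per_s(320, 240, 3, 120)  # 320x240 @ 120 Hz -> ~27.6 MB/s
print(round(vga, 1), round(qvga, 1))
```

Both figures are at or beyond what USB 2.0 sustains in practice (~35 MB/s), which is consistent with the recollection that the PSEye can saturate the port; the real device presumably relies on a more compact pixel format than raw RGB.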
Doesn't say anything about it being built in. That'd be dumb with entertainment centers.
It also uses sound, I dunno. I can have both the PSEye using headtracking + voice chat and a driving wheel on my 80GB fat PS3. I would think MS would've optimized it better.
I think what they really mean is packed in.
> It also uses sound, I dunno. I can have both the PSEye using headtracking + voice chat and a driving wheel on my 80GB fat PS3. I would think MS would've optimized it better.

The PSEye is different technology to Kinect's main sensor; you can't really compare them like that. Headtracking is just software making use of the standard PSEye stream. Kinect is passing along the same kind of RGB data from one camera, as well as feeding the depth data from a sensor reading an IR projection like this.
$599
> So it can automatically ragequit games for you?
I wish so bad this would happen. It would break the internet. The two MS douche-looking guys come out (one is a wannabe Tom Cruise, the other is that sunglasses Kinect guy), and they show off Kinect with a family on stage, etc. Forty minutes pass of this over and over. Then: $599 US dollars. Explosion.
> The PSEye is different technology to Kinect's main sensor, you can't really compare them like that. Headtracking is just software making use of the standard PSEye stream. Kinect is passing along the same kind of RGB data from one camera, as well as feeding the depth data from a sensor reading an IR projection like this.
Why would anyone want a company to fuck up so bad? That's no different than the dude in the other thread who said he can't wait for Sony to fuck up with the PS4.
Don't worry, I already know the answer, but it's still sad.
Ah, I guess it doesn't compress the information from the IR matrix depth calculation, and it does consume a bit of bandwidth. Will a better connection/bandwidth make Kinect that much better, though?

I'm guessing this rumour is referring to new hardware, though, so naturally more bandwidth will be hugely beneficial.
It registers you yelling, sends a poorly written message full of homophobic slurs to the guy who killed you, and logs you off XBL. We might be onto something here.