now. I'm happy. But I need Linux support!
Like he says himself at the beginning of the video, it is the 3DV (not 3DView, my bad) presentation I was talking about, and he says that he only spent a few days at 3DV toying with the demos... So yeah, you could say he worked on it, but apart from those few days there is no sign of major activity on the subject; it's more technical intelligence.
Which is fine, and indeed it's his job to do it, but you can't pretend he's an expert in everything he's laid his hands on.
When I said "Sony didn't believe in 3D cams and chose physical motion controllers instead" I didn't imply that the decision was forced on him. If anything, since he's in charge, maybe I should have said "Marks didn't believe in 3D cams...", except that I'm not 100% sure the decision was his.
Anyway, that's fine too; sometimes you have to make a choice, you can't bet on everything. And Move was a rather sensible choice, one that many gamers here will support over Kinect-like systems.
"Like he says himself at the beginning of the video, it is the 3DV (not 3DView, my bad) presentation I was talking about, and he says that he only spent a few days at 3DV toying with the demos... So yeah, you could say he worked on it, but apart from those few days there is no sign of major activity on the subject; it's more technical intelligence."
In fairness, that's just the first batch of demos he showed, and they were their own demos, not 3DV's, albeit existing ones rigged together quickly to make use of the z-data in those few days. But they did seemingly later get their hands on their own unit and spent more time r&d-ing on it in their own labs - one result being the motion capture stuff he shows later in the vid.
There was a team at SCEA working on that, they even presented their work at Siggraph 2003. Their work definitely wasn't just paperware.
"We tried a lot of different 3D cameras. I love the 3D camera technology; personally, I like the technology part of it. We worked closely with our game teams on what it would enable, and it enabled making the things we already did with EyeToy more robust, but it didn't really enable as many new experiences as we were hoping it would. So it made the things we were already able to do a little bit more robust, which is good, but it adds a lot of cost, and it didn't enable some of the other experiences we wanted to achieve."
Man, my TV is going to start looking like this:
with all the stupid gimmick add-ons that are going to be standard across all platforms, each of them proprietary.
That specific motion capture sequence was the standard 3DV demo at the time, I don't think he has anything to do with it.
"It's pretty easy to forget with the lack of support Sony gave it."

It's nice to see people forget that Sony created EyeToy, you know, just like Kinect but much worse. Short translation: same shit, different package.
The Siggraph thing wasn't simply a Marks-chaired workshop or track that SCEA itself wasn't presenting work at; it was an SCEA presentation within the emerging tech track. All the people mentioned there were SCEA people. The 2003 emerging tech track isn't online in the ACM Digital Library, unfortunately (they seem to have started putting them online in 2004), but there are citations of their paper in later papers, so I assume it exists.
"It's been over a year since Kinect launched and I'd say Microsoft isn't doing much better."

It's pretty easy to forget with the lack of support Sony gave it.
Time to get a gaming PC. Consoles are all becoming a joke.
It's been over a year since Kinect launched and I'd say Microsoft isn't doing much better.
"First post that actually gets it. I did some research:"

Thanks for bringing up this old patent, as I was about to run an article about it claiming it was something new.
By the way, I think many are missing the point of this patent, or rather what's actually patented here.
I've read the whole thing and it's more like an indication of how a depth camera can improve the augmented reality thing.
The patent, in fact, doesn't focus on user interaction a la Kinect (though that's mentioned as a possible application), but more on what might be an evolution of the current AR tech seen in games such as Start The Party.
It basically postulates that, thanks to depth data, virtual objects can actually be rendered "within" the real 3D space rather than simply applied over it. Virtual stuff could move behind real objects and even cast realistic shadows on them. Users could be dressed in virtual clothes and stuff like that. This is basically an EyeToy 3 more than a Kinect 2, if you get what I mean. And besides, this stuff could theoretically be done with the current Kinect tech already.
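The occlusion idea described above boils down to a per-pixel depth test between the camera's depth map and the virtual object's depth buffer. A minimal sketch in NumPy (the toy data and function names are mine, not from the patent):

```python
import numpy as np

def composite_with_depth(camera_rgb, camera_depth, virtual_rgb, virtual_depth):
    """Depth-aware AR compositing: draw a virtual pixel only where the
    virtual object is closer to the camera than the real scene, so virtual
    objects can pass behind real ones instead of being pasted on top."""
    mask = virtual_depth < camera_depth  # True where the virtual object wins
    out = camera_rgb.copy()
    out[mask] = virtual_rgb[mask]
    return out

# Toy 2x2 scene: the real scene sits at 2.0 m everywhere; the virtual object
# is at 1.0 m in the left column (in front) and 3.0 m in the right (behind).
cam_rgb    = np.zeros((2, 2, 3), dtype=np.uint8)      # black "live video"
cam_depth  = np.full((2, 2), 2.0)
virt_rgb   = np.full((2, 2, 3), 255, dtype=np.uint8)  # white virtual object
virt_depth = np.array([[1.0, 3.0],
                       [1.0, 3.0]])

out = composite_with_depth(cam_rgb, cam_depth, virt_rgb, virt_depth)
# Left column shows the virtual object; right column keeps the real scene.
```

With a plain 2D camera there is no `camera_depth` at all, so the virtual layer can only ever sit on top of the video, which is exactly the EyeToy-era limitation this would remove.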
Honestly, I don't really give a shit about this stuff, because I believe AR is only as cool a tech as its possible applications, and those are conceptually "meh". Especially if the point is to create a virtual world around your real self rather than "virtualizing" yourself within a virtual world on the screen, one you can actually explore (like Kinect does). I mean, what's more immersive? Interacting with a virtual dragon in your living room (facing the screen rather than the dragon, no less; talk about disconnection), or actually being inside Skyrim?
Richard should really let the idea of bringing stuff out of the screen die and focus on how to put you into the screen.
"An invention is provided for affording a real-time three-dimensional interactive environment using a depth sensing device. The invention includes obtaining depth values indicating distances from one or more physical objects in a physical scene to a depth sensing device."
"Embodiments of the present invention provide real-time interactive gaming experiences for users. For example, users can interact with various computer-generated objects in real-time. Furthermore, video scenes can be altered in real-time to enhance the user's game experience. For example, computer generated costumes can be inserted over the user's clothing, and computer generated light sources can be utilised to project virtual shadows within a video scene. Hence, using the embodiments of the present invention and a depth camera, user's can experience an interactive game environment within their own living room. "
"The processing system 174 can be implemented by an entertainment system, such as a Sony.RTM. Playstation.TM. II or Sony.RTM. Playstation.TM. I type of processing and computer entertainment system. It should be noted, however, that processing system 174 can be implemented in other types of computer systems, such as personal computers, workstations, laptop computers, wireless computing devices, or any other type of computing device that is capable of receiving and processing graphical image data."
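The "computer generated costumes" part of that quote depends on knowing which pixels belong to the user, and a depth camera makes that almost trivial: threshold the depth values. A rough sketch (NumPy; the distances and function names are illustrative, not from the patent):

```python
import numpy as np

def segment_user(depth, max_user_distance=1.5):
    """Classify pixels closer than max_user_distance (metres) as the user.
    Real systems refine this with tracking and smoothing, but a threshold
    on raw depth values is the core trick a plain RGB camera cannot do."""
    return depth < max_user_distance

def overlay_costume(frame, user_mask, costume_color=(255, 0, 0)):
    """Paint a flat costume colour over every user pixel, leaving the
    rest of the live video untouched."""
    out = frame.copy()
    out[user_mask] = costume_color
    return out

# Toy 2x3 frame: the user stands ~1 m away in the middle column,
# the wall is 3 m away everywhere else.
depth = np.array([[3.0, 1.0, 3.0],
                  [3.0, 1.0, 3.0]])
frame = np.zeros((2, 3, 3), dtype=np.uint8)

dressed = overlay_costume(frame, segment_user(depth))
# Only the middle column (the user) gets the costume colour.
```

A real implementation would render an actual costume model into the masked region rather than a flat colour, but the segmentation step is the part the depth data buys you.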
The OpenMAX IL 1.2 camera component is also updated with the following advanced capabilities:
• Enhanced Focus Range, Region and Status support;
• Field of View controls;
• Flash status reporting;
• ND Filter support;
• Assistant Light Control support;
• Flicker Rejection support;
• Histogram information;
• Sharpness control;
• Ability to synchronize shutter opening and closing events with audio playback.