
Sony Patents Kinect-Like 3D Depth-Sensing Camera for PlayStation Consoles

Like he says himself at the beginning of the video, it is the 3DV (not 3DView, my bad) presentation I was talking about, and he says that he only spent a few days at 3DV toying with the demos... So yeah, you could say he worked on it, but apart from those few days there is no sign of much activity on the subject; it's more technical intelligence.
Which is fine, and indeed it's his job to do it, but you can't pretend he's an expert in everything he has laid his hands on.

When I said "Sony didn't believe in 3D cams and chose physical motion controllers instead" I didn't imply that the decision was forced on him. If anything, since he's in charge, maybe I should have said "Marks didn't believe in 3D cams...", except that I'm not 100% sure that the decision was his.
Anyway, that's fine too; sometimes you have to make a choice, and you can't bet on everything. And Move was a rather sensible choice, one that many gamers here will support over Kinect-like systems.

I think he is a very smart guy, and I would hazard to say he is an expert in that field because of the time he spent working in it.

He had nothing negative to say about the 3DV tech except that it was expensive at the time, and I doubt he would have called that a reasonable cost to put on an accessory.

Given the date he made the videos, I am sure the tech was not feasible. But "betting" is something Sony doesn't really do. Their choices in experimental tech are very obvious and practical.

MS, on the other hand, basically bought 3DV, had a team of smart and dedicated engineers figuring out how to lower costs, and another team of brilliant software engineers working on the built-in recognition for the cams and mics and building a flexible SDK around it. Then they spent half a billion on advertising.

I get the impression that it was less of a "bet" for MS, given the scope of their operations and the money they were able to lay down for R&D on the fiscal reports.
 

gofreak

GAF's Bob Woodward
Like he says himself at the beginning of the video, it is the 3DV (not 3DView, my bad) presentation I was talking about, and he says that he only spent a few days at 3DV toying with the demos... So yeah, you could say he worked on it, but apart from those few days there is no sign of much activity on the subject; it's more technical intelligence.


In fairness, that's just the first batch of demos he showed, and they were their own demos, not 3DV's, albeit existing ones rigged together quickly to make use of the z-data in those few days. But they did seemingly later get their hands on their own unit and spent more time r&d-ing on it in their own labs - one result being the motion capture stuff he shows later in the vid. There was a team at SCEA working on that; they even presented their work at Siggraph 2003. Their work definitely wasn't just paperware.

I think it's more than fair to say they 'worked with the tech'. That's not to say they are absolute domain experts or the leaders in the field or anything of the sort, but it's worth pointing out so that people don't think the stuff in the OP is necessarily new work. I'm not sure how different the patent in the OP is from the original, if at all, but I think it's worth pointing out it may be legacy research.
 

Alx

Member
In fairness, that's just the first batch of demos he showed, and they were their own demos, not 3DV's, albeit existing ones rigged together quickly to make use of the z-data in those few days. But they did seemingly later get their hands on their own unit and spent more time r&d-ing on it in their own labs - one result being the motion capture stuff he shows later in the vid.

That specific motion capture sequence was the standard 3DV demo at the time; I don't think he had anything to do with it.

There was a team at SCEA working on that; they even presented their work at Siggraph 2003. Their work definitely wasn't just paperware.

But that's precisely what is bothering me when people say they worked on it: I can't find any trace of said work, be it publications or demos. Your link is the closest thing to that (thanks for it, BTW), but it only shows that Marks was in charge of a SIGGRAPH conference; there is no mention of a paper from a Sony lab (I don't have access to IEEE papers though, so maybe it would be easier for someone in a lab to give me a precise reference?). At the moment, all the publications, demos and products from Marks I know of were based on 2D computer vision.
I'm ready to revise my position on the state of their work once I see some real evidence (actually I was getting there when I read your link, but it's still not convincing), but for the moment even Marks' own statements make me think that they considered the tech, tested it and discarded it.

http://www.vg247.com/2010/03/30/mar...because-it-didnt-enable-many-new-experiences/

We tried a lot of different 3D cameras. I love the 3D camera technology; personally, I like the technology part of it. We worked closely with our game teams at what it would enable, and it enabled making the things we already did with EyeToy more robust, but it didn’t really enable as many new experiences as what we were hoping it would enable, so it made the things we were already able to do a little bit more robust — which is good — but it adds a lot of cost and it didn’t enable some of the other experiences we wanted to achieve.

That comment even hints that they intended to use it for simple user segmentation in EyeToy-like games (which makes sense, since full gesture tracking wasn't ready at the time and would have been too CPU-heavy for PS2 hardware).
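For the curious, here's roughly what that kind of depth-based segmentation boils down to. A minimal sketch, assuming a depth map in millimeters where 0 means "no reading"; the thresholds are illustrative, not taken from anything Sony or 3DV actually shipped:

[CODE]
#include <cstdint>
#include <vector>

// Depth-threshold user segmentation: flag every pixel whose depth falls
// inside a "player volume" in front of the camera. Depth is assumed to
// be in millimeters, 0 = no reading. The thresholds are illustrative.
std::vector<uint8_t> segmentUser(const std::vector<uint16_t>& depthMm,
                                 int width, int height,
                                 uint16_t nearMm = 500,   // ignore anything closer than 0.5 m
                                 uint16_t farMm = 2500)   // ignore the room beyond 2.5 m
{
    std::vector<uint8_t> mask(width * height, 0);
    for (int i = 0; i < width * height; ++i) {
        const uint16_t d = depthMm[i];
        if (d >= nearMm && d <= farMm)
            mask[i] = 255;  // foreground: pixel belongs to the user
    }
    return mask;
}
[/CODE]

Compare that single range test with EyeToy-style 2D segmentation, which has to rely on color and motion differencing and falls apart when the player resembles the background. With a z-buffer the cut is trivial, which is exactly why "more robust EyeToy" was the obvious first application.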
 

Hayeya

Banned
Man, my TV is going to start looking like this:

[image: sega-tower-of-power.jpg]


with all the stupid gimmick add-ons that are going to be standard across all platforms, each of them proprietary.

HOLY SHIT... this pic deserves a thread of its own.

Regarding the topic, I believe it's an updated EyeToy cam that works with Move... They can't just abandon Move, right? Right? ...Anyone? Unless their R&D costs $0.
 

dark10x

Digital Foundry pixel pusher
NO no no no! I do NOT want this to come to fruition. They already copied Nintendo with the Move, and it's actually a decent product, but Kinect is just awful. I can't even begin to imagine why such a product has sold in such big numbers. It's like people have made up all these reasons in their heads for why it's awesome, but nothing delivers. I've talked to so many people who are all "Modern Warfare is going to be SICK on Kinect" without ever stopping to consider how that might work.
 

gofreak

GAF's Bob Woodward
That specific motion capture sequence was the standard 3DV demo at the time; I don't think he had anything to do with it.

Are you sure? As he presented it in that video, it was different from the first demos: in a different lab, with another SCEA guy demoing it. (Although I should say, those first demos were also their work...) Maybe they were working in parallel on their own mocap solution at that time - all I know is they were doing that realtime mocap work with z-data around that time, they presented it at Siggraph as their own, and they also filed patents around it... so it would be kind of bold to do that with 3DV's work :)

The Siggraph thing wasn't simply a Marks-chaired workshop or a track that SCEA itself wasn't presenting work at - it was an SCEA presentation within the emerging tech track. All the people mentioned there were SCEA people. The 2003 emerging tech track isn't online in the ACM digital library, unfortunately (they seem to have started putting them online in '04), but there are citations of their paper in later papers, so I assume it exists.
 

Alx

Member
The Siggraph thing wasn't simply a Marks-chaired workshop or a track that SCEA itself wasn't presenting work at - it was an SCEA presentation within the emerging tech track. All the people mentioned there were SCEA people. The 2003 emerging tech track isn't online in the ACM digital library, unfortunately (they seem to have started putting them online in '04), but there are citations of their paper in later papers, so I assume it exists.

I suppose I'll give them the benefit of the doubt then, but I'd still very much like to see some real results of that work. Like I said, filing a patent is easy; making it work is another thing.
 
I love that the "Kinect is basically EyeToy" meme has survived into 2012. As Draft once said: here's your blue ocean, fellas, drink deep.
 

J-Rzez

Member
Not good news for us. Just another junk peripheral that will be packed into the PS4 box, whose cost could have gone toward further pushing the real hardware in the box. :(
 
It's been over a year since Kinect launched and I'd say Microsoft isn't doing much better.

By the end of this year Microsoft will have published more Kinect games (not counting Kinect-enhanced titles such as Forza 4 and Halo CE Anniversary), some of them high-profile releases, than Sony managed with EyeToy software in five years.
 
Depth-sensing cameras are now inexpensive, and Sony's 2011 consumer cameras already do single-lens (simulated) 3D using depth sensing.
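For reference, that single-lens trick is presumably something like depth-image-based rendering: shift each pixel sideways by a disparity inversely proportional to its depth to fake the second eye's view. A toy sketch with made-up focal length and eye-separation values, and no hole filling:

[CODE]
#include <cstdint>
#include <vector>

struct RGB { uint8_t r, g, b; };

// Toy depth-image-based rendering: synthesize a "right eye" view from one
// color image plus its depth map by shifting each pixel by a disparity
// inversely proportional to depth. focalPx and baselineMm are made-up
// values; real implementations also do hole filling and smoothing.
std::vector<RGB> synthesizeRightView(const std::vector<RGB>& color,
                                     const std::vector<uint16_t>& depthMm,
                                     int width, int height,
                                     float focalPx = 700.0f,
                                     float baselineMm = 60.0f)
{
    std::vector<RGB> right(color.size(), RGB{0, 0, 0});
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const uint16_t z = depthMm[y * width + x];
            if (z == 0) continue;  // no depth reading: leave a hole
            const int disparity = static_cast<int>(focalPx * baselineMm / z);
            const int xr = x - disparity;  // nearer pixels shift further
            if (xr >= 0)
                right[y * width + xr] = color[y * width + x];
        }
    }
    return right;
}
[/CODE]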

Thanks for bringing up this old patent as I was about to run an article about it claiming it was something new :p

By the way, I think many are missing the point of this patent, or rather what's actually patented here.

I've read the whole thing and it's more like an indication of how a depth camera can improve the augmented reality thing.

The patent, in fact, doesn't focus on user interaction à la Kinect (although it's mentioned as a possible application), but more on what might be an evolution of the current AR tech as seen in games such as Start The Party.

It basically postulates that, thanks to depth data, virtual objects can actually be rendered "within" the real 3D space rather than simply applied over it. Virtual stuff could move behind real objects and even cast shadows on them realistically. Users could be dressed in virtual clothes, and so on. This is basically an EyeToy 3 more than a Kinect 2, if you get what I mean. And besides, this stuff could theoretically be done with the current Kinect tech already.
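The occlusion part is simple to picture in code. A minimal sketch (mine, not the patent's): composite a rendered virtual layer over the camera image with a per-pixel depth test, so the virtual object only wins where it's closer than the real scene:

[CODE]
#include <cstdint>
#include <vector>

struct RGB { uint8_t r, g, b; };

// Per-pixel depth-test compositing: the virtual object overwrites the
// camera image only where it is closer than the real scene, so it can
// pass behind real furniture. All buffers are width*height, depths in mm.
void compositeWithOcclusion(std::vector<RGB>& frame,                  // live camera image (in/out)
                            const std::vector<uint16_t>& realDepth,  // from the depth camera
                            const std::vector<RGB>& virtColor,       // rendered virtual layer
                            const std::vector<uint16_t>& virtDepth,  // its z-buffer, same units
                            const std::vector<uint8_t>& virtMask)    // nonzero where object drawn
{
    for (std::size_t i = 0; i < frame.size(); ++i) {
        if (virtMask[i] && virtDepth[i] < realDepth[i])
            frame[i] = virtColor[i];  // virtual pixel is in front: it wins
        // otherwise the real pixel stays and the virtual object is occluded
    }
}
[/CODE]

Shadows work the same way in reverse: project the virtual object's silhouette onto the reconstructed real geometry and darken those pixels. None of this needs more than a depth map, which is why the patent reads as EyeToy-plus-z rather than anything exotic.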

Honestly, I don't really give a shit about this stuff, because I believe AR is as cool a tech as its possible applications are conceptually "meh". Especially if the point is to create a virtual world around your real self rather than "virtualizing" yourself within a virtual world on the screen which you can actually explore (like Kinect does). I mean, what's more immersive? Interacting with a virtual dragon in your living room (facing the screen rather than the dragon, no less - talk about disconnection), or actually being in Skyrim?

Richard should really let the idea of bringing stuff out of the screen die and focus on how to put you into the screen.

First post that actually gets it. I did some research:

1) The patent was filed on 26th October 2011, by PlayStation Eye creator Dr Richard Marks. The patent was published on 16th February 2012.
2) OpenMax IL 1.2 Spec internally released Nov 7, 2011 and published Feb 2012 (supports hardware APIs necessary for AR, especially the Camera APIs)
3) Khronos meeting discussing AR Nov 11, 2011 with Sept 2012 target date (Leveraging Browser technology)

So the technology standards and patent applications for AR were developed/filed in late 2011 (Oct-Nov) but not published until Feb 2012.

Sony Publishes a Patent for a Kinect-style "USER-DRIVEN THREE-DIMENSIONAL INTERACTIVE GAMING ENVIRONMENT"

[image: isthisthepstationmotionjtjt.png]


An invention is provided for affording a real-time three-dimensional interactive environment using a depth sensing device. The invention includes obtaining depth values indicating distances from one or more physical objects in a physical scene to a depth sensing device.

"Embodiments of the present invention provide real-time interactive gaming experiences for users. For example, users can interact with various computer-generated objects in real-time. Furthermore, video scenes can be altered in real-time to enhance the user's game experience. For example, computer generated costumes can be inserted over the user's clothing, and computer generated light sources can be utilised to project virtual shadows within a video scene. Hence, using the embodiments of the present invention and a depth camera, user's can experience an interactive game environment within their own living room. "

"The processing system 174 can be implemented by an entertainment system, such as a Sony.RTM. Playstation.TM. II or Sony.RTM. Playstation.TM. I type of processing and computer entertainment system. It should be noted, however, that processing system 174 can be implemented in other types of computer systems, such as personal computers, workstations, laptop computers, wireless computing devices, or any other type of computing device that is capable of receiving and processing graphical image data."

Augmented Reality support: we now have multiple data points showing that Sony is going to support ADVANCED AR on the PS3, Vita and PS4.
Sony outlines a long term roadmap for Playstation tech
Sony SmartAR delivers high-speed markerless augmented reality
Augmented Reality: Sony PS Vita: AR Suite demo
PS3 augmented reality demo video


Accessories (a new camera) coming by Sept 2012 for the PS3? Khronos published a PDF outlining, among other things, AR by Sept 2012. The PDF mentions what's needed from a camera for AR, and the OpenMAX IL 1.2 spec fills that need. (A managed infrared source is needed, as mentioned in the patent.)

The OpenMAX IL 1.2 camera component is also updated with the following advanced capabilities:

• Enhanced Focus Range, Region and Status support;
• Field of View controls;
• Flash status reporting;
• ND Filter support;
• Assistant Light Control support;
• Flicker Rejection support;
• Histogram information;
• Sharpness control;
• Ability to synchronize shutter opening and closing events with audio playback.

Current StreamInput participants aiming for production implementations in September 2012 include Sony (page 16).
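To make the camera-component talk concrete, here's a minimal sketch of how an application drives a camera through the OpenMAX IL core API. The component name "OMX.vendor.camera" is a placeholder (real names are vendor-specific), the IL 1.2 index names for the new features listed above may differ, and error checking is omitted; the focus-control config shown already existed in IL 1.1:

[CODE]
// Minimal OpenMAX IL camera sketch. "OMX.vendor.camera" is a placeholder
// name; real component names are vendor-specific. Error checking omitted.
#include <cstring>
#include <OMX_Core.h>
#include <OMX_Image.h>

static OMX_ERRORTYPE onEvent(OMX_HANDLETYPE, OMX_PTR, OMX_EVENTTYPE,
                             OMX_U32, OMX_U32, OMX_PTR) { return OMX_ErrorNone; }
static OMX_ERRORTYPE onEmptyDone(OMX_HANDLETYPE, OMX_PTR, OMX_BUFFERHEADERTYPE*) { return OMX_ErrorNone; }
static OMX_ERRORTYPE onFillDone(OMX_HANDLETYPE, OMX_PTR, OMX_BUFFERHEADERTYPE*) { return OMX_ErrorNone; }

int main()
{
    OMX_Init();

    OMX_CALLBACKTYPE callbacks = { onEvent, onEmptyDone, onFillDone };
    OMX_HANDLETYPE camera = NULL;
    OMX_GetHandle(&camera, (OMX_STRING)"OMX.vendor.camera", NULL, &callbacks);

    // Switch the camera to continuous autofocus. This config struct and
    // index predate IL 1.2; the enhanced 1.2 focus/ND-filter/flicker
    // controls would follow the same OMX_SetConfig pattern.
    OMX_IMAGE_CONFIG_FOCUSCONTROLTYPE focus;
    std::memset(&focus, 0, sizeof(focus));
    focus.nSize = sizeof(focus);
    focus.nVersion.s.nVersionMajor = 1;
    focus.nVersion.s.nVersionMinor = 1;
    focus.nPortIndex = 0;  // the camera output port index is component-specific
    focus.eFocusControl = OMX_IMAGE_FocusControlAuto;
    OMX_SetConfig(camera, OMX_IndexConfigFocusControl, &focus);

    OMX_FreeHandle(camera);
    OMX_Deinit();
    return 0;
}
[/CODE]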


This May 2011 article, "Turning 'natural interface' input into a new data standard", outlines the recently published OpenMAX IL 1.2 specs (above).
 