This seems to indicate you might be able to turn any flat, tablet-like surface into a virtual touchpad - perhaps by placing specific strips on it. Like making your own AR cards for the Vita.
This seems to be a tablet-like product for the PlayStation.
If you've ever used AR cards on the Vita or PS3 you will see how similar this is to that, especially the last passage/page you quoted.
OK my mistake, didn't realise the cameras were actually on the device. Note to self, read patents carefully first before commenting.
Still having difficulty understanding the second page though. Does the unit have a screen or not? If so, why would you not just make it a touchscreen instead of using cameras to judge swipes and positioning?
It has a touchpad, which leads me to believe that this is the DS4 controller.
& I hope I'm right, because that would mean you could place the controller on your table and it would track your hands in 3D, be used for 3D scanning of objects into games, and even scan your face.
Won't it have to be rather... huge to facilitate decent scanning of objects like hands - or soda cans, as in the patent? For that reason I doubt it's the DS4. Most likely an added peripheral.
A further theory/speculation - it is the pad which can be utilized with the "break-apart" DS4 controller
The cameras could have wide-angle lenses, & if you read it they talk about rotating the object to get a full scan if it's too big.
The 'EyePad' thing is just a tongue in cheek reference to iPad...
I think the patent is just the product of brainstorming that might have been linked to PS4's controller (with the touchpad and all). With the rumours we have, I don't think its key idea - the depth volume for finger/object tracking around the touchpad - has made it into the final product.
This might explain both the Sony one-camera depth-sensing (via reflected IR intensity) patent and the two-camera depth-sensing (via parallax) patents. Both may be used next generation with different accessories: one-camera depth sensing being cheaper, with less overhead, in smaller, cheaper accessories and perhaps in a new camera for the PS3 PlayStation Eye; two cameras in the PS4 PlayStation Eye.
Good point, and after a few minutes' thought: these features are in handhelds operating on batteries and draw little power, in part because of accelerator hardware (co-processors), which I expect in the PS4 and Xbox 720.

Razgreez said:
I've finally read all of it (had some time to burn at work). The device seems to range from being a simple (if somewhat more robust) AR marker with controls attached, to having capabilities such as a touchscreen, 3D facial and skeletal scanning and even its own built-in co-processor. The latter features obviously being more expensive and difficult to implement.
Research appears to be quite detailed based on the patent so it remains to be seen what features the device ships with - if or when it does. Co-processors, active stereoscopic cameras and touchscreens could potentially all lead to large power consumption issues
PS4 to offer Gaikai-like serving to handhelds in the home, and to be served via Sony's Gaikai servers.

Having failed to find a sustainable business model in its original form, cloud gaming technology is coming home. By the end of the year, several major manufacturers will be offering players the ability to stream gameplay from PCs to mobile devices and set-top boxes in the home - with next-gen consoles perfectly positioned to follow suit.
Nvidia has already revealed its plans with the intriguing Project Shield announcement at CES last month. Integrating a state-of-the-art mobile processor in the form of Tegra 4, Shield not only allows for Android gaming on the move but also connects via WiFi to a GeForce "Kepler" GTX-equipped PC, allowing for any game to be streamed over a home network onto the handheld. Valve is set to follow suit with its entry-level Steambox, which in concept sounds very similar indeed to the OnLive microconsole - a low-power device designed for media streaming and equipped with interfaces for gaming controllers.
My guess also.

Gaikai (PSTV): the complete back catalogue of PS games from PS1/PS2/PSP/PS3/Vita/PS Mobile, streamable to ANY device that is connected to the internet. Your ID is locked to the new controller and NOT the console. PSTV would be integrated into every Bravia TV too, and any TV whose maker bought the licensing agreement.
In fact I believe that IS the biggest announcement we will get on WED.
Home 1.75 has us moving local stored personal data to the server. This likely means Home will soon be Gaikai served which also means it can expand to Handhelds and Smart TVs.
With this there should be no load times to move from site to site in Home.
I expect Gaikai to be a big part of the Playstation announcements on the 20th.
That controller pic shows the lens on the back next to the USB port, so it's a different idea, but I think there could be something to this Leap Motion theory, simply because the pad looks the same size and shape, and glassy. You'd be able to put the pad down in front of you and do intricate, Kinect-y finger movements above it.
[0074] In addition to this AR marker functionality, as noted above the stereoscopic views of the common volume located above the touch panel (or equivalent surface area) of the EyePad provide depth maps for any real object that is positioned within the common volume, such as the user's hand. From these depth maps and the known positions of the cameras on the EyePad, it is possible to construct a 3D model or estimate of the user's hand with respect to the location and orientation of the EyePad, and hence also with respect to the location and orientation of the EyePet (or other virtual entities) that are interacting with the EyePad. The 3D model of the EyePet and the 3D model of the user's hand can thus occupy a common virtual space, enabling very precise interaction between them.
[0075] For example, where a user points with their hand within the common volume, the depth maps from the two stereoscopic cameras describe the location of points on the surface of the user's index finger within the common volume, and hence it is possible to calculate whether those surface points coincide with the surface model of the virtual EyePet. This gives the user the precision to stroke the EyePet's ear, tap its nose, tickle its tummy or otherwise interact with it in very specific ways, and moreover to do so for whatever arbitrary position or orientation they are holding the EyePad in.
[0076] The common volume can also be used as a proxy for a 3D virtual environment. For example, the PS3 can display a fish tank on the TV. The common volume on the EyePad can then correspond to the virtual volume of the fish tank, enabling the user to interact with a virtual fish in the tank by moving their finger to a corresponding position within the common volume. A similar mode of interaction could be used to explore a graphical rendering of a room in a point-and-click style adventure. Other examples will be apparent to a person skilled in the art, such as using a finger tip to specify a path for a rollercoaster, or playing a virtual version of the well known electrified wire-loop game.
[0077] In conjunction with the video images obtained by the stereoscopic cameras, the EyePad (or the PS3 in conjunction with the EyePad) can also construct a virtual model of an object placed upon the touchpad or equivalent central area; for example if the user places a can of cola on the touchpad, depth maps and images for both sides of the can are obtained, enabling the generation of a virtual model of the can.
[0078] Optionally, to obtain an improved image of the can near the centreline of the diagonal of the touchpad running between the stereo cameras, the user may rotate the can, and the rotation is measured using known optical flow techniques; the resulting images and depth maps from the new angle provide redundancy that enables an improved image and model of the can to be generated. Alternatively or in addition, further stereoscopic cameras 1030C(L, R) and 1030D(L, R) may be provided at the remaining corners of the touchpad to provide such redundancy in the captured information.
[0079] In this way, the user can place an object on the EyePad, and see it copied into the virtual world of the game.
[0080] In a similar manner, the user can put their face within the common volume in order to import their own face onto an in-game character or other avatar. Where the common volume is smaller than the user's face, again an optical flow technique can be used to build multiple partial models of the user's face as it is passed through the common volume, and to assemble these partial models into a full model of the face. This technique can be used more generally to sample larger objects, relating the accumulated depth maps and images to each other using a combination of optical flow and the motion detection of the EyePad to create a final model of the object.
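The depth-from-stereo mechanism that paragraphs [0074]-[0077] rely on can be sketched as follows. This is a minimal illustration with hypothetical numbers - the focal length, baseline, disparity and "common volume" bounds below are not from the patent, just plausible placeholders:

```python
# Minimal sketch of depth-from-stereo as described in the patent:
# two cameras a known baseline apart see the same feature at different
# horizontal pixel positions; the difference (disparity) yields depth
# by triangulation. All numbers are illustrative, not from the patent.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point in front of both cameras)")
    return focal_px * baseline_m / disparity_px

def in_common_volume(point, volume_min, volume_max) -> bool:
    """True if a 3D point lies inside the axis-aligned 'common volume'."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, volume_min, volume_max))

# Example: 500 px focal length, 15 cm baseline, 50 px disparity -> 1.5 m depth.
z = depth_from_disparity(500.0, 0.15, 50.0)
print(z)  # 1.5

# A fingertip triangulated to 10 cm above the pad falls inside a
# 20 x 20 x 30 cm common volume, so it can interact with the EyePet.
print(in_common_volume((0.0, 0.0, 0.1), (-0.1, -0.1, 0.0), (0.1, 0.1, 0.3)))  # True
```

A full implementation would first rectify the two camera images and match features between them to obtain the disparity; the patent's per-pixel "depth maps" are just this computation applied densely.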
Looks like there was nothing to that mini EyePad idea, now that the trackpad is no longer glossy.
http://www.theregister.co.uk/2013/01/15/chrome_adopts_web_speech_api/ said:
The API is a W3C initiative that makes it possible for browsers to tune into audio input, or even record sound.
If you're brave enough to run the Chrome 25 beta, available here, Google's demo of a voice-driven email composer shows off the new feature nicely. Google says the API also works in the Chrome Android beta.
Once the feature makes it into production versions of Chrome, things could get mighty interesting. A voice-enabled Android browser would take the mobile OS's voice-recognition capabilities beyond its current Voice Actions abilities. It would also take Android past Apple's flawed Siri speech recognition - which is, in your correspondent's experience, rather less useful than the voice-driven search in Google's iOS search app. A version of Chrome on iOS with speech baked in would put the cat among the pigeons.
Speech in Chrome is not Google's only post-WIMPs experiment. One worth looking at is the collection of games created to promote global men's health charity Movember. The games use a PC's camera to control an on-screen mustache, with changes of your head position and wiggles of your upper lip replacing more conventional game controllers.
Google's adoption of the Web Speech API comes on top of numerous gesture-recognition efforts, including Intel's recent release of an SDK for its perceptual computing toolkit, Kinect for PC, and gesture-driven interfaces in all manner of Smart TVs. The frequency of announcements of this ilk signals that the industry is moving beyond the WIMPs interface. Google's addition of speech to Chrome will offer another post-WIMPs method of interaction, and do so in a crucial class of application.
Edit to be clear:
If Sony is using 8 GB of GDDR5 for main RAM then they must have two systems in the PS4 - a low-power and a performance SoC all in one. The low-power system cannot use the GDDR5 memory or the USB 3.0 port, and this is likely the reason for the separate Kinect/Eye port. Google TV platforms pull 8 watts average and a PS3 pulls 61 watts minimum. A PS4 with GDDR5 memory, even clocked lower with a Jaguar CPU and only 4 CUs active, is going to pull more than 25 watts, and likely more than 45 watts at idle or while streaming, mainly because of the GDDR5 memory (45 watts is the current max for IPTV streaming, and that is likely to drop). A Temash with DDR3 and 2 CUs pulls between 5-15 watts, and with Wide IO 176 GB/sec DRAM a few watts more; add to this 5 watts for the zero-power GPU mode. There will be power-mode regulations that impact the PS4 and Xbox 720 designs.
DF said:
There was also talk of a new processing module in the PS4 hardware designed to handle tasks like background downloading. Our sources suggest a low-power ARM core designed to handle "standby" tasks along these lines, while the console also saves the current gameplay state when the system is closed down, meaning instant access to the last game you played when you power up again.
DF said:
There are a number of new ideas we love about the PlayStation 4, revealed for the first time last night. A low-power ARM processor manages the PS4 while it's on standby, and freeze-frames current gameplay in memory for instant-on gaming when you power up.
Thanks for the cites... I do understand that there is probably a complete low-power system in addition to a performance system in the PS4. If Sony is going to support something like Google TV, as speculated by Microsoft in their leaked Xbox 720 PowerPoint, then a low-power system (less than 10 watts) with most of the Google TV external-box features is needed (including 8 GB memory). How are they going to accomplish that? Also, when, as rumored, Sony releases an "other OS" for the PS4, is it going to include support for all hardware or only parts of it?

Hey Jeff, I assume you missed the reveal that the PS4 has an onboard ARM CPU, for OS and/or low-power tasks.
http://www.eurogamer.net/articles/df-hardware-spec-analysis-playstation-4
Very interesting read summarizing PS4 architectural choices (sorry if old, I've not seen it):
http://www.bradfordtaylor.com/insert-blank-press-start/ps4-vs-the-great-discord/
Think of me as an informed armchair quarterback with no inside track. I understand systems and can follow schematics, flow charts and timing charts, but do not design or program on these systems. I'm 61 and was involved in programming in the 80's with two partners; we developed 4 commercial products, of which one was bought by EA (Music Construction Set for the ST) and one was distributed in Europe (Revolver) - this was before GPUs. As such my opinion on gaming performance is worthless, other than that I agree with the very good article cited above your post.

Jeff, I am curious, what are your thoughts on the PS4 as a gaming machine?
If RVU support is coming for the PS3, and WebKit is to get accelerated (GPU) compositing, then Sony likely has an answer that would allow Other OS Linux for the PS3.

Insiders from Sony say they have introduced a customized kernel version rather than using the basic kernel to support this feature. This customized kernel may support specific versions of Linux only as a part of beta testing. Subsequently Sony will enable all version support after successful completion of beta testing.
But this time Sony is confident that they won't block this feature, and that they have an alternative to block the security threats.
An inside source also says Sony's firmware upgrade during the release of the PlayStation 4 will re-enable the Other OS support in the PlayStation 3 as well. So it's good news for PlayStation 3 owners too, after suffering for a couple of years. Moreover it's believed to be a gamble to boost PlayStation 4 sales.
Same applies to the PS4.

http://forum.beyond3d.com/showpost.php?p=1716237&postcount=968 said:
This is the HDMI CEC standard: http://en.wikipedia.org/wiki/HDMI#CEC
The most interesting IMO parts are the tuner control, I guess that means that the Xbox ought to be able to tell the cable device to switch to X channel. There is also one touch play, one touch record, volume, time record etc. I would say that an Xbox with inline HDMI (HDMI pass-thru) ought to be able to tell the cable box to record X program, or switch to Y channel at a set time and record X program and directly play any stored information on the device. I don't think there are too many cable companies/satellite companies so as a standard it ought to be workable with the most common dozen or so devices.
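The one-touch-play flow described above can be sketched at the byte level. This is a hedged sketch: the frame layout (initiator/destination nibbles in a header byte, then an opcode and operands) and the opcode values (Image View On = 0x04, Active Source = 0x82) are from the CEC specification, but the helper function and the example physical address are illustrative:

```python
# Hedged sketch of raw HDMI-CEC frames. A frame is a header byte
# (initiator logical address in the high nibble, destination in the low
# nibble) followed by an opcode and optional operands. Logical address 0
# is the TV; 4 is Playback Device 1 (e.g. a console or Blu-ray player).

TV, PLAYBACK_1 = 0x0, 0x4

def cec_frame(initiator: int, destination: int, opcode: int, operands: bytes = b"") -> bytes:
    header = ((initiator & 0xF) << 4) | (destination & 0xF)
    return bytes([header, opcode]) + operands

# "One Touch Play": the source wakes the TV with Image View On, then
# broadcasts Active Source with its physical address (here 1.0.0.0).
image_view_on = cec_frame(PLAYBACK_1, TV, 0x04)
active_source = cec_frame(PLAYBACK_1, 0xF, 0x82, bytes([0x10, 0x00]))

print(image_view_on.hex())  # 4004
print(active_source.hex())  # 4f821000
```

A console with HDMI pass-through would send frames like these onto the shared CEC wire to switch inputs or wake the cable box, which is what makes the tuner-control scenarios above plausible without any vendor-specific protocol.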
Version 2.0
The HDMI Forum is working on the HDMI 2.0 specification.[151][152] In a 2012 CES press release HDMI Licensing, LLC stated that the expected release date for the next version of HDMI was the second half of 2012 and that important improvements needed for HDMI include increased bandwidth to allow for higher resolutions and broader video timing support.[153] Longer term goals for HDMI include better support for mobile devices and improved control functions.[153]
On January 8, 2013, HDMI Licensing, LLC announced that the next HDMI version is being worked on by the 83 members of the HDMI Forum and that it is expected to be released in the first half of 2013.[12][13][14]
Based on HDMI Forum meetings it is expected that HDMI 2.0 will increase the maximum TMDS per channel throughput from 3.4 Gbit/s to 6 Gbit/s which would allow a maximum total TMDS throughput of 18 Gbit/s.[154][155] This will allow HDMI 2.0 to support 4K resolution at 60 frames per second (fps).[154] Other features that are expected for HDMI 2.0 include support for 4:2:0 chroma subsampling, support for 25 fps 3D formats, improved 3D capability, support for more than 8 channels of audio, support for the HE-AAC and DRA audio standards, dynamic auto lip-sync, and additional CEC functions.[154] The Sony PlayStation 4 will utilize this standard.
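The quoted throughput figures can be sanity-checked with a little arithmetic. Two assumptions not stated in the quote: TMDS uses 8b/10b coding (so only 80% of the raw rate carries video data), and 4K60 uses the standard total timing of 4400 x 2250 pixels including blanking:

```python
# Sanity-check the quoted HDMI 2.0 numbers: 3 TMDS channels at
# 6 Gbit/s each gives 18 Gbit/s raw; 8b/10b coding leaves 80% of that
# for payload. 4K60 at 8-bit 4:4:4 with the standard total timing
# (4400 x 2250 including blanking) just fits underneath.

raw_gbps = 3 * 6.0                         # 18 Gbit/s total TMDS
effective_gbps = raw_gbps * 8 / 10         # 14.4 Gbit/s after 8b/10b

pixel_clock_hz = 4400 * 2250 * 60          # 594 MHz pixel clock
required_gbps = pixel_clock_hz * 24 / 1e9  # 24 bits/pixel (8-bit RGB)

print(effective_gbps)  # 14.4
print(required_gbps)   # 14.256
```

So 4K60 at 8-bit colour consumes 14.256 of the 14.4 Gbit/s available, which is why HDMI 2.0 also needs 4:2:0 chroma subsampling to reach deeper colour or higher rates.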
Jeff, I love your posts.

jeff_rigby said:
...EVERY SINGLE THING HE SAYS...
The advent of RVU soft clients not only on new Sony BRAVIA TVs but also on the estimated 16 million active connected PlayStation 3s in the US represents a significant expansion in the number of TVs that can use a RVU software client
Strategically, the relatively fast porting of the RVU client to new platforms emphasizes a key advantage of simple, remotely-rendered remote user interface (RUI) standards such as RVU: moving complexity from the client to the server (Picture based Menu) makes it easier to port the client to a range of hardware platforms.
The other class of remote UI being discussed in the industry is one based on HTML5, which would rely more heavily on the local graphics and rendering capabilities of the clients. HTML5-based remote UIs are being discussed by a wide range of service providers and pay-TV software companies. It is too early to foresee how RVU and HTML5 will propagate through the market, whether one standard will supersede the other, or whether any degree of cross-compatibility will be achieved between them.
I.e. there are two SoCs in one, with an ARM low-power TrustZone side separate from the high-performance game console side... an ARM system for secure, low-power IPTV and an x86/GPU for gaming/performance. Some fabric must exist to manage threads across the ARM bus and x86 bus, as they are different and incompatible... this is why in some use cases we can have two systems in one.

http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.prd29-genc-009492c/ch06s03s02.html said:
The security of the system is achieved by partitioning all of the SoC's hardware and software resources so that they exist in one of two worlds - the Secure world for the security subsystem, and the Normal world for everything else. Hardware logic present in the TrustZone-enabled AMBA3 AXI bus fabric ensures that no Secure world resources can be accessed by the Normal world components, enabling a strong security perimeter to be built between the two. A design that places the sensitive resources in the Secure world, and implements robust software running on the secure processor cores, can protect almost any asset against many of the possible attacks, including those which are normally difficult to secure, such as passwords entered using a keyboard or touch-screen.
Which means the ARM TrustZone processor can also be used for some of the work on the "real world" game console side.

The second aspect of the TrustZone hardware architecture is the extensions that have been implemented in some of the ARM processor cores. These additions enable a single physical processor core to safely and efficiently execute code from both the Normal world and the Secure world in a time-sliced fashion. This removes the need for a dedicated security processor core, which saves silicon area and power, and allows high performance security software to run alongside the Normal world operating environment.
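The Secure/Normal world partitioning the quote describes can be illustrated with a toy model. This is purely conceptual: real TrustZone enforcement is hardware logic driven by the Non-Secure (NS) signal on the AXI fabric, not software like this, and the slave names below are made up:

```python
# Toy model (illustration only, not ARM code) of TrustZone-style bus
# partitioning: every transaction carries a Non-Secure (NS) flag, and
# the fabric blocks Normal-world (NS=True) accesses to Secure-only
# slaves, while Secure-world accesses pass through.

class SecurityError(Exception):
    pass

class Fabric:
    def __init__(self):
        self.slaves = {}  # name -> (secure_only flag, backing storage)

    def add_slave(self, name: str, secure_only: bool):
        self.slaves[name] = (secure_only, {})

    def write(self, slave: str, addr: int, value, ns: bool):
        secure_only, mem = self.slaves[slave]
        if secure_only and ns:
            # In hardware this rejection happens in the AXI fabric itself.
            raise SecurityError(f"Normal world denied access to {slave}")
        mem[addr] = value

fabric = Fabric()
fabric.add_slave("key_store", secure_only=True)   # e.g. DRM keys
fabric.add_slave("dram", secure_only=False)       # ordinary memory

fabric.write("dram", 0x1000, "game data", ns=True)    # Normal world: allowed
fabric.write("key_store", 0x0, "drm key", ns=False)   # Secure world: allowed
try:
    fabric.write("key_store", 0x0, "hack", ns=True)   # Normal world: blocked
except SecurityError as e:
    print(e)  # Normal world denied access to key_store
```

The point of the model is the asymmetry: the Normal world can never reach Secure resources, but Secure-world code sees everything, which is what lets one physical core time-slice between both worlds safely.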
http://www.arm.com/products/processors/cortex-a/cortex-a5.php said:The ARM Cortex-A5 processor is the smallest, lowest cost and very energy efficient applications processor capable of delivering the internet to the widest possible range of devices: from low-cost entry-level smartphones, feature phones and smart mobile devices, to pervasive embedded, consumer and industrial devices.
http://en.wikipedia.org/wiki/Advanced_Microcontroller_Bus_Architecture said:
The Advanced Microcontroller Bus Architecture (AMBA) is used as the on-chip bus in system-on-a-chip (SoC) designs. Since its inception, the scope of AMBA has gone far beyond microcontroller devices, and it is now widely used on a range of ASIC and SoC parts, including the applications processors used in modern portable mobile devices like smartphones.
The AMBA protocol is an open standard, on-chip interconnect specification for the connection and management of functional blocks in a System-on-Chip (SoC). It facilitates right-first-time development of multi-processor designs with large numbers of controllers and peripherals.
Advanced eXtensible Interface (AXI)
AXI, the third generation of AMBA interface defined in the AMBA 3 specification, is targeted at high performance, high clock frequency system designs and includes features which make it very suitable for high speed sub-micrometer interconnect:
AMBA products
A family of synthesizable intellectual property (IP) cores AMBA Products licensable from ARM Limited that implement a digital highway in a SoC for the efficient moving and storing of data using the AMBA protocol specifications. The AMBA family includes AMBA Network Interconnect (NIC-301), SDRAM and FLASH memory controllers (DMC-34x, SMC-35x), DMA controllers (DMA-230, DMA-330), level 2 cache controllers (L2C-310), etc.
Some manufacturers utilize AMBA buses for non-ARM designs. As an example Infineon uses an AMBA bus for the ADM5120 SoC based on the MIPS architecture.
@Jeff,
I'm really trying to follow your greatly documented news and analyses, but I - and I believe I speak for many more - have no clue what you want to say. So, what does that all mean in terms of real-life applications / use cases?
Something like Google TV is coming to both consoles and will use an ARM processor/bus and accelerators so that they can get really low-power (under 10 watts) and secure IPTV. How much is running on the ARM bus is unknown. How they are going to integrate the ARM and x86 buses and CPU code is unknown. The OS just got interesting <grin>.
As to real-life applications/use cases, I think I have gone over this. Take Google TV features + cloud + RVU-DLNA + DTCP-IP + XTV + ATSC 2.0 + OCAP applications + HTML5 + HTML5 browser desktop + voice and gesture recognition + Skype + IPTV and augmented reality, add some imagination, and think "own the living room".
Likely the ARM processor is handling the Bluetooth for the Sixaxis controller; maybe the Kinect-like gesture and voice control is via accelerators connected to the ARM bus - who knows? If it makes sense for security or low power, then I'd guess the ARM low-power subsystem is being used.

Can the OS run on the ARM processor as well (at the same time), in the end freeing up the Jaguar CPUs for games?
For those that don't understand anything beyond Google TV-like features: all the initials are about STANDARDS that require a software stack, starting with a "networked" Blu-ray player that has an HTML5 browser and supports DLNA with DTCP-IP security (think of the PS3 as a "networked" Blu-ray player). Think of the PS4 as a PS3 on steroids with a more secure DRM called "TrustZone" that allows for low-power IPTV, so Google TV is practical.

@Jeff, thanks - this I can understand
Compare this (3.5 watts) with the PS3 using 61 watts idle and less than 90 watts IPTV streaming.

Mali-400 MP
The world's first OpenGL® ES 2.0 conformant multi-core GPU provides 2D and 3D acceleration with performance scalable up to 1080p resolutions, while maintaining ARM® leadership on power and bandwidth efficiency.
With support for 2D vector graphics through OpenVG 1.1 and 3D graphics through OpenGL ES 1.1 and 2.0, the Mali-400 MP provides a complete graphics acceleration platform, based on open standards.
Scalable from 1 to 4 cores, the Mali-400 MP enables a wide range of different use cases - from mobile user interfaces up to smartbooks, HDTV and mobile gaming - to be addressed with a single IP. One single driver stack for all multi-core configurations simplifies application porting, system integration and maintenance. Multicore scheduling and performance scaling is fully handled within the graphics system, with no special considerations required from the application developer.
The provision of an industry standard AMBA® AXI interface makes integration of Mali-400 MP into system-on-chip designs straight-forward, and also provides a well-defined interface for connecting to other bus architectures. ARM is in the unique position to provide an optimized compute platform that uses ARM Cortex processors, Mali GPU and ARM CoreLink CCI-400 technologies. This heterogeneous approach means that a range of applications is more efficiently processed when shared between the CPU and the GPU. This makes full use of the inherent capabilities of each system component to achieve the best possible balance of power and performance.