
PS4's GPU customization revealed (paging Jeff)

I give up, I'll just wait for the teardown.

But don't you think they'd mention this? Or mention this supposed second GPU?


Why would Cerny play down this second CPU as an I/O and file-management handler if it's part of this amazing media processor you're describing?
First, let me say that I respect your opinions; you actually read the cites and come up with valid arguments.

First, game mode is not regulated. The latest PS3 has a 61-watt minimum (the minimum should equal the EPA-regulated system mode of 40 watts with the GPU driving an active UI) and a 95-watt maximum. The PS4 may have a minimum of 60 watts (that's a guess, but remember the Wii U draws 37 watts) with a maximum of 140 watts using the AMD GPU, while being nearly 10X more powerful than the PS3.

Streaming IPTV on the PS3 runs near the maximum power rating, about 85 watts (the EPA limit is 50 watts). The PS4 using the AMD GPU would be on the order of 40-60 watts (with accelerators), or about 7-10 watts if using a second GPU.
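As a quick sanity check on those numbers, here is the IPTV comparison in a few lines of Python. The wattages are this post's own estimates, not measurements; the point is that only the second-GPU design clearly clears the 50-watt cap:

IPTV_CAP_W = 50  # the Energy Star limit cited above for IPTV streaming

designs = {
    "PS3 (near max rating)":        (85, 85),
    "PS4, AMD GPU + accelerators":  (40, 60),
    "PS4, second low-power GPU":    (7, 10),
}

for name, (lo, hi) in designs.items():
    verdict = ("fits" if hi <= IPTV_CAP_W
               else "borderline" if lo <= IPTV_CAP_W
               else "exceeds")
    print(f"{name}: {lo}-{hi} W -> {verdict} vs the {IPTV_CAP_W} W cap")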

They haven't mentioned an HDMI-in (not necessary if using RVU, so HDMI-in is not a lock for this reason), or that they will be supporting RVU, or that RVU is going to be the hook that allows them to own the living room. The PS4 looks like it's going to be the most powerful game console this generation, and so far this is about gaming.

It's not an amazing media processor; it's what is going to be standard. Wake-on-voice, wake-on-gesture, and wake-on-LAN; AOAC in the first home platform serving all CE devices in the home; added to RVU, XTV, and the most powerful game-console GPU...

i-Lo said:
Given the lack of a time stamp, this failure to live up to the EPA: was it before or after the announcement of the current specs?
That's the biggest issue I have with the document. These regs start in 2008 with a three-tier phase-in and with feedback from game console manufacturers. To get a better understanding, read the following:

Energy Star: EPA-sponsored voluntary compliance standards, three tiers starting with Tier 1 in 2010

There are EU power regulations for always-on standby mode with exceptions for "special features". Standby is 500 mW, but special exceptions are allowed, and I have not been able to find the power level that is authorized. It applies to the PS4 and Xbox 720. The always-on mode for the Xbox and PS4 is not required to be 500 mW; read the exceptions and use cases. One has a game console able to turn on a Blu-ray player, control it, and play the Blu-ray in the player; RVU should allow such a use case.

Joint power mode paper from all Game console makers.
Response to the above paper from EU energy conservation groups
EPA Publishes Voluntary Criteria for ENERGY STAR Game Consoles

It costs the manufacturers money to comply with EPA Energy Star regulations, and if you read all the above you see them whining and trying to set levels they can meet without having to spend extra to comply.

The first Sony patent dealing with two GPUs for power savings is from 2009 (the first EPA proposals were in 2008); the latest, building on the previous, is from July 2012. So Sony knew early on that they would have issues with one GPU.

Additionally, the game console must meet the STB feature requirements for computers (which are less stringent but comparable to the above).

Continuation of the discussion on SemiAccurate.

Consider ARM's big.LITTLE and Sony's big-little GPU patent. Consider AMD's HSA, and that the first third-party IP they put in their SoCs is ARM TrustZone. Consider that the Durango is a Windows 8 AOAC game console, and AOAC is a power-saving tablet feature. See a pattern?
 

i-Lo

Member
^I think the ARM chip was created to facilitate conformance with the EPA standard, given it was put into place in 2010 (as you pointed out).

I would also speculate that going with Jaguar may have something to do with this, beyond the discussion about how Steamroller was not going to be ready.

Given the public announcement in 2013, I would assume that Sony has come close enough to the proposed target to satisfy the EPA.
 

onQ123

Member
Hello and acknowledged.

Can you confirm that Starsha is the GNB, and is there anything else you can add now that the other specs are out?


Oh, and sorry for reposting this info, but you know once it hits the internet it's never truly gone. In fact, I found it on another website like a week after it was removed, lol.
 

onQ123

Member
All I can say is that I had to face consequences for whatever information I posted in this forum, and I think I still am.

Sorry to hear that. Guess you were caught up in the moment, not really thinking about the consequences at the time. Hopefully you didn't get your friend in too much trouble.

Might have been a better idea to post it on one of those random blogs with a few lies mixed in with the truth to make the trail harder to follow, then PM it to someone else and let them post it.
 
From a SemiAccurate Thread:

Comparing a 7970 to Sony's GPU in the Jaguar APU to determine whether the PS4 APU can meet Energy Star's 40 watts for an active screen without user input and 50 watts for IPTV streaming using H.265 (IPTV using H.264 is hardware-accelerated, so the GPU is not needed).

Originally Posted by Drunkenmaster

In the case of a 7970, idle is something around 300MHz (I forget exactly; with dual screens it's higher, so I haven't seen the lower clock in ages), but for ACCELERATED Flash content the card isn't in idle anymore. What is stable on the GPU at 300MHz and, say, 0.9V isn't stable under load when Flash acceleration is being used. Occasionally with new cards you get a situation where the card won't clock up from the ~300MHz idle clocks to the 500MHz, marginally-higher-voltage desktop/load mode; in games it will clock up to 900-1000MHz at 1.1V or so.

LP silicon can't go to 0.9V. Given that the letters to the EPA are slightly vague, I believe they are saying that with LIMITED (due to cost) frequency/voltage scaling it would not be possible to scale a PS4 GPU from full game power down to a clock/voltage low enough to do H.265 at under 50 watts; a GPU that could scale over that range would be too expensive. I think you have to change the silicon being used to get that voltage range. TSMC uses the same low-power silicon for all Jaguar-based APUs; it's not the same silicon used for the high-performance GPUs, which have a wider range of voltages.

Based purely on cost, and given that LP silicon is TSMC's first formula at 28nm and fits the clock speeds being used, I think it's what is being used for the PS4. It has the smallest voltage range over which to scale voltage with frequency.

EPA Draft said:
One stakeholder commented that power scaling does not have infinite elasticity and that in order for the manufacturer to “hit” a certain downscaled number for media streaming, the manufacturer may have to opt for a chip that has an energy ceiling below that which is optimal for other, non-scaled functions, like game play. Alternatively, there may exist a subset of chips that could handle both extremes (e.g., a chip designed for high-end ultra books) but at an exorbitant cost relative to what is an affordable option for a device priced at several hundred dollars. It also noted that redesigning consoles’ motherboards to accommodate scalable architecture is an incredibly complicated and expensive process and even if the next generation was updated it may require more processing power.

For instance, when you stated 0.9V for the AMD GPU, I remembered researching the types of silicon that might be used for the PS4 AMD APU and concluding that LP would be used, and LP can't support 0.9V; it's generally 1.05V, and even a 0.01V difference creates a large difference in waste heat. High-performance silicon can support a wider range of voltages. Reduce the voltage and the silicon uses less power, so you can scale a GPU higher in frequency without it getting hot; it can also scale to a higher frequency with a larger voltage. Sweet spots are calculated with multiple-hour test runs.
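A minimal sketch of the switching-power relationship behind this voltage argument, P ≈ C·V²·f. The capacitance and the clock/voltage pairs below are illustrative assumptions, not PS4 figures; it just shows why silicon that can drop its voltage at idle saves disproportionately more power:

def dynamic_power(c_eff, voltage, freq_hz):
    """Approximate CMOS switching power: P = C_eff * V^2 * f."""
    return c_eff * voltage**2 * freq_hz

C_EFF = 1e-9  # effective switched capacitance in farads (made-up value)

# HP silicon can drop to ~0.9 V at idle clocks; LP silicon stays near
# ~1.05 V, so it saves less when downclocked to the same frequency.
for label, volts, hertz in [("HP idle", 0.90, 300e6),
                            ("LP idle", 1.05, 300e6),
                            ("load   ", 1.10, 800e6)]:
    print(f"{label}: {dynamic_power(C_EFF, volts, hertz):.2f} W")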

"Practical Power Gating and Dynamic Voltage/Frequency Scaling" is an AMD paper whose example is Llano, a high-power/performance APU where the cost-benefit is greater for gating and power/frequency scaling than it is for LP silicon and Jaguar APUs. I can't find any information on this, but Jaguar APUs on LP silicon likely have fewer of these expensive power-scaling features. It follows that unless Sony customized the APU, it also has limited power scaling and gating. So comparing the power gating and scaling of Jaguar APUs to Llano or Trinity high-performance APUs is likely to give incorrect conclusions. The same goes for comparing high-performance AMD GPUs to Jaguar-based APUs with larger GPUs.

LP silicon fits with a 10-year life across PS4 refreshes. The next refresh is going to use LPM and Jaguar or the next iteration of Jaguar. This was part of one of Sweetvar26's leaks ("They moved on to Jaguar from Steamroller for the 10 year life"), and it's also supported by H. Goto in PC Watch articles: high-performance/high-power CPUs have no future in AMD's published roadmaps (he extends the roadmap with a "?"). Handhelds and battery life are the thrust of the design criteria and the major market, so high-performance CPUs for PCs are not in the roadmap. With more power-efficient low-power designs, power/frequency scaling and gating give less benefit for the costs incurred, so we should see less of them in the future.

Originally Posted by http://semimd.com/blog/tag/amd/
Sony introduced the PlayStation 4, which is based on AMD’s single-chip, eight-core custom processor. The x86 processor, dubbed Jaguar, is a 28nm device built by TSMC.
 
The STB functionality limit, which was 20 watts from 2010 to about 2011 and applied to game consoles, was lifted by Energy Star because stakeholders stated that the multiple features provided beyond a traditional STB would use more than 20 watts. STB here means RVU, and it stems from the FCC mandate passed in 2010 that cable companies would have to provide the following:

The new set of FCC rules (from 2010), set to be made mandatory by June 2, 2014, also clarifies what capabilities are expected of the HD streams:

recordable high-definition video => DVR capability in both "always on" consoles, no TV tuner needed, using RVU
closed captioning data
service discovery
video transport
remote control command pass-through

DLNA Premium Video Profile, an HD-compliant version of the secure-streaming standard set to be ratified in 2013, was suggested as one possible option for cable companies.
The STB 20 watt limit may be revisited by Energy Star and other countries.

The PS4's STB functionality @ 20 watts had to include an ARM GPU & CPU. So from 2008 through sometime in 2011, the PS4 design MUST have included ARM GPUs. As I said before, I think the statements from two stakeholders that they couldn't meet the EPA specs with an AMD GPU mean the ARM GPU is in the second chip as a southbridge (PowerVR Series 6, which is similar to the ARM Mali 600 series and can support a 4K UI). Series 6 GPUs are about equal to a PS3 GPU and were never considered for the primary gaming GPU in the PS4. Two GPUs have been planned since 2009 (see the Sony patent from 2009 for two GPUs, low + high power).
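To make the two-GPU idea concrete, here is a minimal sketch of the handoff as I read that 2009 patent: a low-power GPU owns the STB/media duties, and the big AMD GPU wakes only for games. The task names, routing rule, and wattage comments are my illustrative assumptions, not anything from the patent:

class DualGpuPowerManager:
    # workloads the small GPU could serve within the ~20 W STB envelope
    LOW_POWER_TASKS = {"ui", "iptv", "bluray", "rvu", "xtv"}

    def __init__(self):
        self.active_gpu = "low"  # boot into the low-power GPU

    def schedule(self, task):
        """Route a workload to the cheapest GPU that can serve it."""
        self.active_gpu = "low" if task in self.LOW_POWER_TASKS else "high"
        return self.active_gpu

mgr = DualGpuPowerManager()
for task in ("ui", "game", "iptv"):
    print(task, "->", mgr.schedule(task), "GPU")  # ui/iptv stay on the ~7-10 W path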

The following from 2009 was taken to refer to the Vita, but Sony signed an agreement for the rights to use the PowerVR Series 5 GPU in other consumer products. The following says Series 6 is in the PS4. Series 5 is in the Vita, right? 5 or 6 in the 22nm PS3?

http://translate.google.co.uk/translate?u=http://ps3clan.nl/2009/11/ps4-krijgt-ongeevenaarde-graphics/&sl=nl&tl=en&hl=en&ie=UTF-8 said:
FNGonline: SCEI has officially chosen the PowerVR Series 6, made by Imagination Technologies, for the production of the next console in 2012. (If the film now running in the cinema is right, the world ends; fine!) IMGTec's PowerVR technology will use a technique called TBDR and perform 3 to 5 times better than a competing nVidia/ATI graphics card. TBDR was the basis of the SEGA Dreamcast and the reason that console had incredible graphics for its time. With all the current developments, the technology is now so far along that the results will be incredible. So the Series 6 card plus the Cell processor... world domination!

"The PlayStation 4 Shall use a high-end variant of the 6 Series line. Performance, specifications and features are unknown at this time. The Series 6 Shall receive an official announcement from IMGTEC sometime in 2010, with initial models targeting the smartphone and netbook sector.

http://gamingbolt.com/sony-licenses-more-powervr-sgx-series5xt-graphics-chips-from-imagination-technologies said:
Imagination Technologies Group plc (LSE: IMG; “Imagination”), a leader in System-on-Chip Intellectual Property (“SoC IP”), has signed a further license agreement with Sony Corporation, (“Sony”), a leading consumer electronics company, for IP cores from Imagination’s PowerVR SGX Series5XT graphics family.

Sony will deploy Imagination’s technologies in SoCs targeting consumer markets.

Under the terms of the licence agreement Imagination receives licence fees and royalty revenues based on shipments of semiconductor products incorporating Imagination’s IP.
Were changes to the PS4 design made in 2011, or is an ARM 600-series GPU in the second chip as a southbridge?
 

onQ123

Member
The STB functionality limit, which was 20 watts from 2010 to about 2011 and applied to game consoles, was lifted by Energy Star... Were changes to the PS4 design made in 2011, or is an ARM 600-series GPU in the second chip as a southbridge?

The only problem with the PowerVR 6 rumor is where it came from:




11/16/09 WORLD EXCLUSIVE: Sony has chosen the GPU for the next generation PlayStation.

We can officially reveal in this world exclusive that SCEI has officially chosen the Imagination Technologies currently-in-development PowerVR Series 6 architecture for its next generation PlayStation console scheduled for 2012 worldwide deployment.

The PlayStation 4 shall use a high end variant of the Series 6 line. Performance, specifications and features are at this time unknown. The Series 6 shall receive an official announcement from IMGTEC sometime in 2010, with initial models targeting the smartphone and netbook sector.

It is believed that Sony has gained exclusive rights to the technology for the console space.

IMGTEC's PowerVR technology uses an advanced technique called TBDR which can outperform a competing IMR product from nVidia/ATi by 3-5 fold whilst maintaining equal die size and price point. TBDR was the primary reason the SEGA DreamCast was capable of such astonishing graphical feats as early as 1998.

Interestingly SCEI has also chosen IMGTEC as the graphics provider for their next generation PSP. That particular product shall however be using the Series 5XT.

Mr Zachary Morris

Zack Morris is known to make up crazy rumors about PowerVR & Dreamcast 2.
 
The only problem with the PowerVR 6 rumor is where it came from.

Zack Morris is known to make up crazy rumors about PowerVR & Dreamcast 2.

Yup, you can question the source, and when this was first released in 2009 it was savaged on Beyond3D and declared false. But they were looking at it as the primary GPU, not as the STB or smaller GPU in the 2009 Sony patent (low-power + high-power GPU).

The quote out of the post ("The PlayStation 4 shall use a high-end variant of the Series 6 line. Performance, specifications and features are unknown at this time. The Series 6 shall receive an official announcement from IMGTEC sometime in 2010, with initial models targeting the smartphone and netbook sector.") is in a different, "professional" style. Information was released for ARM Mali 600-series GPUs in 2010, and for the IMGTEC Series 6 in 2011.

The IMGTEC Series 6 has some custom high-end variants that support DX 11.1, and designs were released early last year (2012). On IMGTEC's website the Series 6 is said to be designed for everything from tablets to PCs and game consoles, as the DX 11.1 and OpenGL ES 3.0 support indicates. DX 11.1 was not a handheld GPU spec until Windows 8, I'd guess, and DX 11.1 is exactly what the Durango and PS4 AMD GPUs are supposed to support.

Too much in that 2009 quote is spot on, and too early for anyone but an insider, so this is likely an accurate leak. I wouldn't bet on it, but I won't be surprised if it's accurate.

If we look at what the low-power GPU used with STB features has to support:

Blu-ray playback (includes Java)
4K blu-ray
RVU + XTV (browser & Java)
Augmented Reality & Browser
4K UI
Codec accelerators for encode and decode, including multi-stream for 4K glasses-free S3D and multiple head-mounted VR displays

Imagination touts the codec and augmented-reality features of their GPUs.

http://www.imgtec.com/powervr/sgx_series6.asp said:
Based on a scalable number of compute clusters the PowerVR Rogue architecture is designed to target the requirements of a growing range of demanding markets from mobile to the highest performance embedded graphics including smartphones, tablets, PC, console, automotive, DTV and more. Compute clusters are arrays of programmable computing elements that are designed to offer high performance and efficiency while minimising power and bandwidth requirements.

PowerVR Series6 GPU cores are designed to offer computing performance exceeding 100GFLOPS (gigaFLOPS) and reaching the TFLOPS (teraFLOPS) range enabling high-level graphics performance from mobile through to high-end compute and graphics solutions.

The PowerVR Series6 family delivers a significant portfolio of new technologies and features, including: an advanced scalable compute cluster architecture; high efficiency compression technology including lossless image and parameter compression and the widely respected PVRTC™ texture compression; an enhanced scheduling architecture; dedicated housekeeping processors; and a next generation Tile Based Deferred Rendering architecture. These features combine to produce a highly latency tolerant architecture that consumes the lowest memory bandwidth in the industry while delivering the best performance per mm2 and per mW.

All members of the Series6 family support all features of the latest graphics APIs including OpenGL ES 3.0*, OpenGL ES 2.0, OpenGL 3.x/4.x, OpenCL 1.x and DirectX10 with certain family members extending their capabilities to full WHQL-compliant DirectX11.1 functionality.
The PowerVR™ range of multi-standard video cores offer performance scalability from single stream low-definition video decode acceleration to multi stream high-definition hardware video encode and decode. The widest range of multi-standard format support includes: H.264 (High, Main and Base Profiles), H.263, MPEG-4, MPEG-2, WMV9/VC-1, AVS and JPEG.

The PowerVR VXD range of multi-standard video accelerators decode a range of compressed video formats on a multi-standard basis utilizing multimode hardware.

Also remember our discussion of ray tracing and your find, Imagination Technologies:

http://www.imgtec.com/powervr/powervr_openrl_raytracing_technology.asp said:
PowerVR OpenRL™ is a flexible low level API, available for download as an SDK for accelerating ray tracing in both graphics and non-graphics (e.g., physics) applications. It is being integrated by a rapidly growing number of developers in rendering applications for a wide range of markets, including film and video, games, architecture, and industrial design.

This cross-OS, cross-platform API is the only ray tracer that supports multiple graphics devices with optimal performance that is vendor agnostic. Like OpenGL, OpenRL hides the complexities of interfacing with different graphics devices by presenting a single, uniform interface.

A free perpetual license of OpenRL is available for integration, with either commercial or non-commercial applications.
 
Hmm... according to the patent, they are aiming for context switching between the two, technology AMD won't implement in their APUs until 2014. Am I confusing something, or is the PS4 more impressive than I thought?
 

onQ123

Member
Hmm... according to the patent, they are aiming for context switching between the two, technology AMD won't implement in their APUs until 2014. Am I confusing something, or is the PS4 more impressive than I thought?

You don't think it took Sony six years to just pick an AMD APU and put it in a console, do you?
 

Angry Fork

Member
I imagine Jeff as this mastermind genius who helped design the PS1/PS2, but then was fired for siding with Kutaragi's opposition within the organization, so now he spends all his days getting revenge by leaking/explaining all their secrets.
 

RoboPlato

I'd be in the dick
Word. PS4 is looking good.

I agree. I was initially a tad disappointed with the flop ratings of the CPU/GPU (I was expecting 2.2-2.5 TFLOPS), but it sounds like the architectural improvements are going to put the actual output of the console above what I had initially expected. Everything I've heard about its design has been brilliant and really adds to the performance they'll get, especially in a closed-box APU.
 
I agree. I was initially a tad disappointed with the flop ratings of the CPU/GPU (I was expecting 2.2-2.5 TFLOPS), but it sounds like the architectural improvements are going to put the actual output of the console above what I had initially expected.

Indeed. You win Mark Cerny.
 
I agree. I was initially a tad disappointed with the flop ratings of the CPU/GPU (I was expecting 2.2-2.5 TFLOPS), but it sounds like the architectural improvements are going to put the actual output of the console above what I had initially expected. Everything I've heard about its design has been brilliant and really adds to the performance they'll get, especially in a closed-box APU.
Never doubt Based Cerny.
 
The only problem with the PowerVR 6 rumor is where it came from.
Read the thread you started on Beyond3D, "What are the Pros & Cons of having a PowerVR GPU in a Next Gen Console?", with two GPUs in mind, along with the IMGTec employee comments about having cake and eating it too. Think on the IMGTec employee's comment that, among other things, the .1 in 11.1 is about notifying that the GPU supports TBDR. The IMGTec employee talks up Rogue for the next generation one moment, then denies it with a comment, in Latin, that the die is already cast.

Weak support, but the thread is primarily about Sony, and an IMGTec employee talks about a support chip that could be used in a console to support ray tracing. Also mentioned: Sony is pushing the limits.

So the second chip could contain even more than I'm guessing... an AMD Fusion chip to allow easy cross-platform porting from the PC, and more in the second chip than support for low-power RVU/XTV.

Was this the reason for rumors that Sony was interested in Larrabee for ray tracing? "2007 saw Intel working on the top secret Larrabee project, a massively parallel multi processor destined to become a graphics processor and real time ray tracer. With 40 simple CPU cores running in parallel Larrabee was hoped to be Intel’s foot in the graphics processor door."

Good article here.

Sony is to have DirectX 11.1 support for the PS4. 11.1 is also supposed to be required for Windows 8, and the Xbox 720 GPU will support this, right?

AMD’s Graphics Core Next (GCN) Architecture, available in AMD Radeon™ HD 7700, HD 7800 and HD 7900 Series graphics, along with AMD FirePro™ W series cards, provides complete support for DirectX 11.1 Feature Level 11_1 in Windows 8. Examples of some of the new features it brings to the table, which are not yet available on competing products, include:

Target Independent Rasterization: accelerates rendering of 2D vector graphics (used by the Windows Modern UI, HTML5 web pages, and .SVG image files) by up to 500% or more.

UAV improvements: allow DirectCompute shaders to share data with any stage of the Direct3D rendering pipeline, enabling new hybrid graphics techniques that seamlessly combine GPU compute with traditional 3D rendering

Sum of Absolute Differences: exposes new shader instructions on the GPU that can massively accelerate a wide range of image processing tasks, including video image stabilization, photo/video search, and gesture or face recognition
The above three features give us an idea of the .1 difference between DX 11 and DX 11.1 and how AMD thinks they will be used. A guess would be HTML5 games, augmented reality, and social features (pictures, home movies, sharing, indexing).
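As an illustration of the third feature, here is the Sum of Absolute Differences written out in plain Python over tiny blocks of pixel values. The data is made up; the point is that this is the inner loop a dedicated SAD shader instruction would collapse into hardware for motion search, stabilization, or gesture matching:

def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized pixel blocks."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def best_match(target, candidates):
    """Pick the candidate with the lowest SAD, as block-matching
    motion search or video stabilization would."""
    return min(candidates, key=lambda c: sad(target, c))

target = [10, 12, 11, 13]  # a tiny "block" of pixel values
candidates = [[9, 12, 10, 14], [50, 60, 55, 58]]
print(best_match(target, candidates))  # -> [9, 12, 10, 14]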

Wouldn't the UAV improvements and Target Independent Rasterization above, combined with the changes made to the PS4 GPU, support combined ray tracing and rasterization => OpenRL, the PowerVR ray tracing API from Imagination Technologies? Ray tracing needs lots of compute, right? Doesn't the PS4 GPU design appear to be very flexible, allowing more compute than is possibly needed for a GPU-rasterized game?

John Carmack (id Software) said:
I wrote the following (slightly redacted) up a little while ago for another company looking at consumer-level ray tracing hardware as it relates to games. I do think workstation applications are the correct entry point for ray tracing acceleration, rather than games, so the same level of pessimism might not be appropriate. I have no details on Imagination’s particular technology (feel free to send me some, guys!).

------------

The primary advantages of ray tracing over rasterization are:

Accurate shadows, without explicit sizing of shadow buffer resolutions or massive stencil volume overdraw. With reasonable area light source bundles for softening, this is the most useful and attainable near-term goal.

Accurate reflections without environment maps or subview rendering. This benefit is tempered by the fact that it is only practical at real time speeds for mirror-like surfaces. Slightly glossy surfaces require a bare minimum of 16 secondary rays to look decent, and even mirror surfaces alias badly in larger scenes with bump mapping. Rasterization approximations are inaccurate, but mip map based filtering greatly reduces aliasing, which is usually more important. I was very disappointed when this sunk in for me during my research – I had thought that there might be a place for a high end “ray traced reflections” option in upcoming games, but it requires a huge number of rays for it to actually be a positive feature.

Some other “advantages” that are often touted for ray tracing are not really benefits:

Accurate refraction. This won’t make a difference to anyone building an application.

Global illumination. This requires BILLIONS of rays per second to approach usability. Trying to do it with a handful of tests per pixel just results in a noisy mess.

Because ray tracing involves a log2 scale of the number of primitives, while rasterization is linear, it appears that highly complex scenes will render faster with ray tracing, but it turns out that the constant factors are so different that no dataset that fits in memory actually crosses the time order threshold.
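A quick numeric check of Carmack's scaling point, with invented constants (assumptions, not measurements): per-frame ray tracing cost grows like pixels x log2(n) while rasterization grows like n, yet the crossover lands past any scene that fits in memory:

import math

PIXELS = 2_000_000   # roughly one primary ray per 1080p pixel
RAY_COST = 100.0     # assumed cost units per BVH level per ray (large constant)
RASTER_COST = 1.0    # assumed cost units per primitive (small constant)

def ray_trace_cost(n):
    return PIXELS * RAY_COST * math.log2(n)  # log2-depth hierarchy walk

def raster_cost(n):
    return RASTER_COST * n  # linear sweep over primitives

for n in (10**6, 10**8, 10**10):
    print(f"n={n:.0e}: ray tracing {ray_trace_cost(n):.1e}, raster {raster_cost(n):.1e}")

# With these constants the crossover sits near ~6e9 primitives, i.e.
# hundreds of GB of geometry: the constant factors dominate in practice.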

Classic Whitted ray tracing is significantly inferior to modern rasterization engines for the vast majority of scenes that people care about. Only when two orders of magnitude more rays are cast to provide soft shadows, glossy reflections, and global illumination does the quality commonly associated with “ray tracing” become apparent. For example, all surfaces that are shaded with interpolated normal will have an unnatural shadow discontinuity at the silhouette edges with single shadow ray traces. This is most noticeable on animating characters, but also visible on things like pipes. A typical solution if the shadows can’t be filtered better is to make the characters “no self shadow” with additional flags in the datasets. There are lots of things like this that require little tweaks in places that won’t be very accessible with the proposed architecture.

The huge disadvantage is the requirement to maintain acceleration structures, which are costly to create and more than double the memory footprint. The tradeoffs that get made for faster build time can have significant costs in the delivered ray tracing time versus fully optimized acceleration structures. For any game that is not grossly GPU bound, a ray tracing chip will be a decelerator, due to the additional cost of maintaining dynamic accelerator structures.
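A back-of-envelope sketch of that memory-footprint claim, with illustrative sizes only: a binary BVH needs roughly 2n-1 nodes over n triangles, so the acceleration structure alone can outweigh the geometry it indexes:

TRI_BYTES = 36        # 3 vertices x 3 floats x 4 bytes
BVH_NODE_BYTES = 32   # 6 floats for the AABB + two child/leaf indices

def scene_footprint(num_tris):
    geometry = num_tris * TRI_BYTES
    bvh = (2 * num_tris - 1) * BVH_NODE_BYTES  # ~2n-1 nodes in a binary BVH
    return geometry, bvh

geo, bvh = scene_footprint(1_000_000)
print(f"geometry {geo/1e6:.0f} MB, BVH {bvh/1e6:.0f} MB")  # 36 MB vs 64 MB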

Rasterization is a tiny part of the work that a GPU does. The texture sampling, shader program invocation, blending, etc., would all have to be duplicated on a ray tracing part as well. Primary ray tracing can give an overdraw factor of 1.0, but hierarchical depth buffers in rasterization-based systems already deliver very good overdraw rejection in modern game engines. Contrary to some popular beliefs, most of the rendering work is not done to be “realistic”, but to be artistic or stylish.

I am 90% sure that the eventual path to integration of ray tracing hardware into consumer devices will be as minor tweaks to the existing GPU microarchitectures.

John Carmack
"minor tweaks to the existing GPU microarchitectures" = the .1 in DX 11.1?

http://arstechnica.com/gadgets/2013/01/shedding-some-realistic-light-on-imaginations-real-time-ray-tracing-card/ said:
The Caustic technology's (ray tracing) path to the mass market will be similar: Imagination intends to integrate it into future versions of its PowerVR GPUs. This isn't going to happen anytime soon—the Imagination representative gave us a tentative estimate of "four to five years" from now—but it may be that the phones and tablets of tomorrow will be capable of 3D rendering that is only now beginning to hit high-end workstations.

This would dovetail nicely with the way the industry is moving. Mobile devices are already getting more productive as they get more powerful, and the major hardware manufacturers seem determined to deliver devices that can be all things to all people—phones that can double as tablets, tablets that can double as laptops, and so on. By 2018, it's easy to imagine a tablet that can also do high-end CAD work, and if Imagination has its way, the Caustic ray tracing technology will be leading that charge.
During the life of both the PS4 and Xbox 720, handhelds will gain support for ray tracing. Combining rasterization with a few select objects being ray traced might be an option for next-generation consoles.

http://www.neogaf.com/forum/showthread.php?t=482792&highlight=ray+tracing onQ123 appears to be really up on this technology; I'm only just beginning to investigate it.

YouTube member ‘aimedehaire’ (aka icelaglace or Hayssam Keilany) has uploaded an interesting DX11 tech demo. According to the GTA IV and Skyrim modder, this is a DX11 ray tracing tech demo running on an Intel i7 and a Radeon 5870. Keilany has not shared any additional information about it, so go ahead and enjoy it for what it truly is – a tech demo showcasing what the next-gen era could actually bring to the table. Enjoy!
http://www.youtube.com/watch?feature=player_embedded&v=gbjW57zlVfc


Also, some comments from AMD indicate there won't be a DX 12. One speculation would be an industry (Microsoft) change to OpenGL. If Microsoft is moving to handheld support with Windows 8, then they need to accept the industry's open standards. The article has several possibilities.

More Carmack ray tracing here:
 
http://seekingalpha.com/article/1355781-the-turnaround-accelerates-for-amd?source=google_news said:
Jaguar core APUs, code-named Kabini and Temash, will be built on TSMC's 28nm process. There are conflicting reports of where the PS4 APU will be fabbed. But, we do know that it is capable of being ported to GlobalFoundries' 28nm process.

While we haven't gotten the official word from Microsoft (MSFT), it is the worst kept secret in technology that the "Xbox Next" will be an AMD CPU/GPU for the game processing and an ARM Holdings (ARMH) core for running the OS and set top box functions. With Microsoft apparently a silent member of the HSA, it doesn't make any sense for them to not be building an x86 based SoC.

"ARM Holdings (ARMH) core for running the OS and set top box functions." PS4 second chip the same!
 