
PS4's GPU customization revealed (paging Jeff)

Posting on behalf of onQ123. Paging Jeff_Rigby, who found these clues 4 months ago (Dec), but no one listened. (NOTE: More info at Jeff's link)

Sometimes the answer can be right under our nose and we never see it.


Remember the sweetvar26 post about the PS4 chip?

PS4:

New Starsha GNB 28nm TSMC
Milos
Southern Islands

DX11
SM 5.0
Open CL 1.0
Quad Pixel pipes 4
SIMD’s 5
Texture Units 5TCP/2TCC
Render back ends 2
Scalar ALU’s 320

EDIT: Some of those were crossed out; maybe they were updated/changed at a later date, I have no idea.
Quote:
Couple of more updates

Graphic North Bridge(GNB) Highlights
Fusion 1.9 support
DCE 7.0
UVD 4.0
VCE
IOMMU
ACP
5x8 GPP PCIE cores
SCLK 800MHz/LCLK 800MHz

Pretty weak compared to the PS4 GPU huh?

Wait, what the hell is a 'Graphic North Bridge'?

Google it, and what do you find?

http://www.indeed.com/r/Rami-Dornala/e0704aad508659b2

Rami Dornala
Waltham, MA
Work Experience
Graphic processor
AMD - Waltham, MA
September 2011 to Present
Project:1 GNB core SOC
Duration: Sept 2011, till date
Location: AMD
Description:
GNB core is based on the AMD fusion core technology. The GNB is a fusion of graphic processor, power optimizer, audio processor, south bridge and north bridge which share a common interface with system memory.

Role: Tech Lead, Was responsible for Delivery of verification for Tapeout
Contribution:
1. Responsible for Functional verification of GNB.
2. Integrated ACP IP into the GNB environment
3. Integrated ISP IP into the GNB environment.
4. Aware of BIA, IFRIT flows.
5. Responsible for SAMARA and PENNAR integration.
6. Involved in kabini coverage closure, involved in LSC for kabini
7. Involved in fc mpu integration.
8. ONION and GARLIC bus OVC understanding and GNB environment set up for samara database.
9. Involved in LSA for Samara and Pennar GNB's
10. Involved in setting up of Pennar database with GF libraries
11. Involved with migration of Pennar database from TSMC to GF libraries.

Team Size: 12
Technology used:
Verification environment is a hybrid mixture of System-C, SystemVerilog and C++ language. GNB is targeted for 20nm technological library with GF foundries.
Project:2 G4Main SOC

Oh, so it's a Graphic North Bridge. Yeah, that makes sense! Wait, no it doesn't; this is just as crazy as Mark Cerny saying that the PS4 custom chip is a south bridge.


these people are crazy

Wait, what? ONION and GARLIC? Where have I seen that before?

[Image: lvp2.jpg]

As onQ123 pointed out, Cerny has said the chip in charge of background downloads and other basic OS functions is on the South Bridge... that makes sense, as the ARM TrustZone is part of the South Bridge since it has to keep the I/O secure...

So, if everything is in order, we have the custom South Bridge ARM chip, the CPU (Jaguar), the GPU (7800-series GCN), and the custom GNB, all in the AMD "Fusion" design for extremely efficient caching between the CPU and GPU.

TL;DR: (SOMEONE CORRECT ME IF I'M WRONG)
This provides a super fast and efficient way of caching data without having to do redundant work. Cerny mentioned the CPU and GPU not having to copy redundant info from the cache in order to use it. It allows data transfer straight from the CPU to the GPU. All of this is integrated into the CPU, GPU and North Bridge (memory controller). All of this will significantly reduce latency beyond just providing a large L2 cache, because a lot of unnecessary work is cut out and more "shortcuts" are provided.


Even more TL;DR: Worried about GDDR5 latencies? Don't be. A large cache, a very capable bus, shortcuts and clever data transfer between the CPU and GPU make that a non-issue.
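To make the copy-vs-no-copy point concrete, here's a rough C sketch (completely made-up function names, nothing from any real Sony/AMD SDK) of the difference between staging data for a discrete GPU and just handing a unified-memory GPU a pointer:

Code:
#include <string.h>

/* Hypothetical function names for illustration only. */

/* Discrete-GPU style: the CPU result has to be copied into a
   GPU-visible staging buffer, then DMA'd across before use. */
void share_by_copy(const float *cpu_result, float *gpu_staging, size_t n)
{
    memcpy(gpu_staging, cpu_result, n * sizeof(float)); /* redundant copy */
    /* ...then a DMA transfer, cache flushes, and a wait... */
}

/* Unified-memory ("Fusion") style: CPU and GPU address the same
   physical memory, so sharing is just passing a pointer. */
const float *share_by_pointer(const float *cpu_result)
{
    return cpu_result; /* no copy; the buses/caches keep it coherent */
}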
 

Saberus

Member
No matter what you think of Jeff, he doesn't make stuff up. He looks for clues from online sources and tries to put a puzzle together.. yes, he can be wrong, but he never hides that fact. It's always fun to speculate.

IMO.. he's more correct than not.
 

GraveRobberX

Platinum Trophy: Learned to Shit While Upright Again.
I'll take Jeff's word as gospel before that hack of an analyst Michael Pachter when it comes to industry insider know-how.
 

DBT85

Member
Is this a running gag on GAF? Why don't people listen to this Jeff guy?

Because he casts a net so wide that eventually you'll catch a fish.

And because to 99% of the people reading his posts they are entirely unintelligible.
 

iceatcs

Junior Member
Oh, I was saying Durango would have ARM and AMD. But I didn't realise it seems to be in the PS4 too.

My speculation tells me Durango will have exactly the same architecture as the PS4; the only question is the type of RAM and all of those processors.
 

Router

Hopsiah the Kanga-Jew
Yeah I talked with Jeff about some of this stuff and after like 2 minutes it just all goes over my head.
 

Saberus

Member
Oh, I was saying Durango would have ARM and AMD. But I didn't realise it seems to be in the PS4 too.

My speculation tells me Durango will have exactly the same architecture as the PS4; the only question is the type of RAM and all of those processors.

Sounds about right.. but like you said.. Sony will have their own customization of the APU and specialized RAM. That will make the difference between the two.
 

JaseC

gave away the keys to the kingdom.
Is this a running gag on GAF? Why don't people listen to this Jeff guy?

Because even he doesn't have a strong understanding of what he posts; more often than not he finds two 2s and comes up with a product of 5 (I found X, which seems related to Y, therefore [wild theory]). I do admire his enthusiasm and intrepid spirit, though.
 
Updated OP with a very small TL;DR.

GDDR5 high latencies am cry. No need to worry about it. There are likely many other implications, but I simply don't understand it all.

Oh, I was saying Durango would have ARM and AMD. But I didn't realise it seems to be in the PS4 too.

My speculation tells me Durango will have exactly the same architecture as the PS4; the only question is the type of RAM and all of those processors.

This isn't about the ARM processor (but the PS4 will have it as part of the ARM TrustZone system).
 

Ce-Lin

Member
at this point I'm starting to believe we will see games compatible with both Durango and PS4... one console future?
 
Is this a running gag on GAF? Why don't people listen to this Jeff guy?

The level of detail he puts into his speculation is pretty insane. He used to post briefly at a PC/console technology forum. Right or wrong, I'm impressed with his passion and persistence. However, I haven't bothered to read any of his threads closely on this forum, so I am not sure why folks here don't listen. But from my perspective he was seeing stuff that wasn't there at Beyond3D, and some folks much more knowledgeable than me when it comes to console technology took him to task, if I recall correctly. So I'm glad to hear he's getting stuff right recently here at a gamer haven like NeoGAF.
 

Triple U

Banned
So the PS4 GPU is based on the 7800 series, not the 8xxx series. Disappointing to hear.
I really hate getting into the speculatory crap, but per AMD/Sony, it's based off of their next-gen tech. I really don't see why these types of threads are necessary when we've already gotten so much.
 
Is this a running gag on GAF? Why don't people listen to this Jeff guy?

Because he's wrong most of the time, and when he's right it's random. He's like someone winning $100 from the lottery after buying a million tickets.

We should be thanking sweetvar for the original leak; it's been the best, most accurate one until recently.
 
I believe Garlic and Onion are already a part of AMD's pre-existing APU designs, and the new part in the PS4 is the Onion+ part. I believe that's how it is.
 

jwk94

Member
He's not always right. Lol.
oh

It's more like 90%(?) of GAF has no idea what he's talking about. :(
I tried reading the OP and was lost ><

Because even he doesn't have a strong understanding of what he posts; more often than not he finds two 2s and comes up with a product of 5 (I found X, which seems related to Y, therefore [wild theory]). I do admire his enthusiasm and intrepid spirit, though.
Alright, thanks.
 
I believe Garlic and Onion are already a part of AMD's pre-existing APU designs, and the new part in the PS4 is the Onion+ part. I believe that's how it is.

I think so too. This Fusion is probably available to the Durango as well. It probably just comes down to whether or not they have the die space for it.
 

pants

Member
The one cool thing about Jeff: no one else really posts the tech stories he does, so it is useful to have them here as a reference. He casts his net so wide and randomly that later down the line it's useful to come back to some of the stuff he posts for more information and context on something.

At the time of posting, though, he's usually fishing for needles in the Mariana Trench.
 

onQ123

Member
Also remember what Eurogamer said

Additional hardware: GPU-like Compute module, some resources reserved by the OS

However, there's a fair amount of "secret sauce" in Orbis and we can disclose details on one of the more interesting additions. Paired up with the eight AMD cores, we find a bespoke GPU-like "Compute" module, designed to ease the burden on certain operations - physics calculations are a good example of traditional CPU work that are often hived off to GPU cores. We're assured that this is bespoke hardware that is not a part of the main graphics pipeline but we remain rather mystified by its standalone inclusion, bearing in mind Compute functions could be run off the main graphics cores and that devs could have the option to utilise that power for additional graphical grunt, if they so chose.
 

Razgreez

Member
Also remember what Eurogamer said

This seems to make logical sense. It seems that Eurogamer misled themselves by thinking that the extra compute module would be used for physics. It also gives credence to previous posters who, though they appeared crazy at the time, highlighted the significance of the/a "companion chip".
 

Karak

Member
No matter what you think of Jeff, he doesn't make stuff up. He looks for clues from online sources and tries to put a puzzle together.. yes, he can be wrong, but he never hides that fact. It's always fun to speculate.

IMO.. he's more correct than not.

And he finds some interesting things. He may connect dots that are not there at times but aside from THEODDONE he is one of the best at gathering interesting data.
 

DieH@rd

Banned
I'm not sure all the data in the OP is true, but Cerny's explanation gave us the best insight into the APU customization and fast communication between the CPU and GPU modules.
http://www.neogaf.com/forum/showthread.php?t=532077

Translation from Japanese said:
Cerny: The GPGPU for us is a feature that is of utmost importance. For that purpose, we've customized the existing technologies in many ways.

Just as an example... when the CPU and GPU exchange information in a generic PC, the CPU inputs information, and the GPU needs to read the information and clear the cache, initially. When returning the results, the GPU needs to clear the cache, then return the result to the CPU. We've created a cache bypass. The GPU can return the result using this bypass directly. By using this design, we can send data directly from the main memory to the GPU shader core. Essentially, we can bypass the GPU L1 and L2 cache. Of course, this isn't just for data read, but also for write. Because of this, we have an extremely high bandwidth of 10GB/sec.

Also, we've added a little tag to the L2 cache. We call this the VOLATILE tag. We are able to control data in the cache based on whether the data is marked with VOLATILE or not. If this tag is used, this data can be written directly to the memory. As a result, the entirety of the cache can be used efficiently for graphics processing.

This function allows for harmonization of graphics processing and computing, and allows for efficient function of both. Essentially "Harmony" in Japanese. We're trying to replicate the SPU Runtime System (SPURS) of the PS3 by heavily customizing the cache and bus. SPURS is designed to virtualize and independently manage SPU resources. For the PS4 hardware, the GPU can also be used in an analogous manner as x86-64 to use resources at various levels. This idea has 8 pipes and each pipe(?) has 8 computation queues. Each queue can execute things such as physics computation middleware, and other proprietarily designed workflows. This, while simultaneously handling graphics processing.

This type of functionality isn't used widely in the launch titles. However, I expect this to be used widely in many games throughout the life of the console and see this becoming an extremely important feature.
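Purely as a mental model of that last part (my assumed structure from the interview wording, not Sony's actual scheduler or API), the "8 pipes, each with 8 computation queues" arrangement would look something like this in C:

Code:
#define NUM_PIPES       8
#define QUEUES_PER_PIPE 8

/* Hypothetical model of one compute command queue. */
typedef struct {
    int    priority;         /* e.g. physics middleware vs. background job */
    void (*kernel)(void *);  /* work to dispatch to the compute units */
    void  *args;
} compute_queue;

/* 8 pipes x 8 queues = 64 independently schedulable compute queues,
   all running alongside (not instead of) the graphics pipeline. */
typedef struct {
    compute_queue queues[NUM_PIPES][QUEUES_PER_PIPE];
} compute_frontend;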
 
The weak AMD chip with only 320 ALUs that sweetvar26 leaked last year wasn't the PS4's GPU; it was the secondary chip.


[Image: secondary-ps4-chip-e1361421157584.png]

Wait, that's what you were postulating on Beyond3D? Cause that makes zero sense. GNB is just a term AMD uses for the way they integrate the GPU and memory controller in their APU designs. It has nothing to do with a potential ARM core in the south bridge for housekeeping in low-power states. And an off-chip south bridge is a terrible place to lock up a bunch of shaders.

We can already guess the weak APU was the 4-core Bulldozer-based chip used in the early devkits. We know the final PS4 specs. Your crusade to discover hidden compute resources is pointless.
 

TL;DR: (SOMEONE CORRECT ME IF I'M WRONG)
This provides a super fast and efficient way of caching data without having to do redundant work. Cerny mentioned the CPU and GPU not having to copy redundant info from the cache in order to use it. It allows data transfer straight from the CPU to the GPU. All of this is integrated into the CPU, GPU and North Bridge (memory controller). All of this will significantly reduce latency beyond just providing a large L2 cache, because a lot of unnecessary work is cut out and more "shortcuts" are provided.


Even more TL;DR: Worried about GDDR5 latencies? Don't be. A large cache, a very capable bus, shortcuts and clever data transfer between the CPU and GPU make that a non-issue.

Cerny talked about this exact same thing in another interview. It's posted in one of the PS4 Cerny interview threads (it's a different interview than the one in the OP of said thread).

edit: DieH@rd posted exactly what I was referring to above.

I know, but I hoped that some elements of the 8xxx series would come to the PS4 GPU, like AMD did with the X360.

It may be. The PS4's custom compute architecture may have a lot more in common with GCN2 than GCN. We won't know for sure until GCN2 is revealed, but what's found in the PS4 is definitely GCN++.
 

Razgreez

Member
Not always, but Jeff's about 79% right most of the time. And when he's wrong, it's usually wild guessing based on solid research.

IJWT
(In Jeff We Trust)

I'm assuming your ambivalent wording means he's right approximately 79% of the time... in which case he should take up asset management, since he would become the greatest asset manager in the world (for reference, a great asset manager is seen as one who gets it right 51% of the time).
 

jaosobno

Member
PS4 looks to be an extremely efficient system. Considering that the 1st wave of games doesn't even use this kind of optimization and still manages to look amazing, the 2nd wave of games should be mind-blowing, not just because of this cache bypass but because devs will learn many little tricks (that we have no clue are there) that will produce incredible results.

@jeff, my hat's off to you, sir; once again your meticulous research method has proven successful.
 

gofreak

GAF's Bob Woodward
Well, we're not talking about magic bullets wrt latency.

If you have to hit main memory you have to eat that latency, even if you do save some latency on cache checks.

But there is nice flexibility about which caches you do or don't use. The GPU can go through no cache, go through the GPU cache and CPU cache, or go through the CPU cache only.

So if you are running compute work and graphics work together on the GPU, the compute threads can work with the CPU and go through the CPU cache only, or through no cache if you're not working with the CPU, so the GPU cache remains unpolluted for the graphics threads. It works at a finer granularity than that (you're not locked to a certain type of access per thread), but you get the idea :)


edit - this is talking about the bus stuff... which was already revealed via VGLeaks and the Cerny interview.
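A hypothetical sketch of what gofreak describes (invented flag names, not the real PS4 or AMD API): if each buffer, or even each access, can pick its own cache path, compute work can stay out of the GPU caches that graphics is using:

Code:
/* Invented access-policy flags; per-buffer (or finer) cache routing. */
typedef enum {
    PATH_UNCACHED,      /* bypass GPU L1/L2 entirely ("Garlic"-style stream) */
    PATH_CPU_COHERENT,  /* snoop CPU caches only ("Onion"-style access)      */
    PATH_GPU_CACHED     /* normal path through the GPU L1/L2                 */
} cache_path;

typedef struct {
    void      *base;
    cache_path path;   /* compute threads sharing data with the CPU pick
                          PATH_CPU_COHERENT, so the GPU caches stay clean
                          for the graphics threads */
} gpu_buffer;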
 
http://www.neogaf.com/forum/showpost.php?p=52485990&postcount=41 said:
RE: onQ123 post. Wait, that's what you were postulating on Beyond3D? Cause that makes zero sense. GNB is just a term AMD uses for the way they integrate the GPU and memory controller in their APU designs. It has nothing to do with a potential ARM core in the south bridge for housekeeping in low-power states. And an off-chip south bridge is a terrible place to lock up a bunch of shaders.

We can already guess the weak APU was the 4-core Bulldozer-based chip used in the early devkits. We know the final PS4 specs. Your crusade to discover hidden compute resources is pointless.
Yeah, the off-chip southbridge sorta kills the AMD stock GNB all-in-one idea (potted MCM still likely though). If you look at ARM TrustZone and assume the second custom chip has ARM TrustZone plus all the low-power support for multiple planned background services as well as something like Google TV, then this differs from the AMD stock APU design goals and would require something different from AMD's GNB (which is where AMD would put TrustZone, and likely why everything IO is in the UNB; AMD Kabini is the first SoC with IO in the SoC, and the first SoC with third-party IP (TrustZone and more)).

TrustZone security includes all IO and the UI: anything that can be intercepted by malware to grab bank PIN numbers or intercept pay-TV or IPTV streams. I assume the PS4 will have a GPU-accelerated UI, which requires an ARM GPU, and if the PS4 is to support a 4K UI then it needs a Mali 600-series GPU, which has Compute 1.1 support. Example: the UI is used by TrustZone applications to display an icon (see the lock and green check mark in the picture below) that assures the user that entering a PIN is done via a secure system. If the UI were not part of TrustZone, then malware could mimic this icon even though the system is not secure; you need a secure UI so that the icon can be trusted.

There are multiple voluntary and required power modes for Game Consoles.

Idle Menu: less than 35 watts. As soon as you fire up the AMD APU with everything on, idle power would exceed this due to the GPU and GDDR5 memory. Keep the GPU in Zero Power mode (5 watts), add GDDR5 memory, and you are at 25+ watts. The PS3 XMB is XML using GPU-accelerated OpenVG, and the PS4 is supposed to support a 4K UI; I would think it would need GPU acceleration, and we are back to more than 35 watts.

The only way this works is with an APU + GPU design where a 2-4 CU GPU is on while the second GPU is in Zero Power mode. When Sony announced only one GPU and GDDR5 memory, all assumptions that this mode could be accomplished with AMD hardware vanished (and I freaked; my assumption was APU + GPU with stacked DRAM for low power), so something else must be used for low power and the OS UI. ARM could do a 4K UI with GPU acceleration for about 5+ watts total if it had its own memory.

AMD recommended APU + GPU till 2014 for two reasons: 1) context switching between GPU and compute workloads, and 2) idle power mode. With the inclusion of ARM TrustZone, UI and IP streaming can be done with ARM, and the large GPU in a single-APU (not APU + GPU) design can sleep.

There are EU power regulations for always-on standby mode with exceptions for "special features". Standby is 500mW, but special exceptions are allowed, and I have not been able to find the power that is authorized. It applies to the PS4 and Xbox 720. The always-on mode for the Xbox and PS4 is not required to be 500mW; read the exceptions and use cases. One has a game console able to turn on a Blu-ray player and control it, as well as play the Blu-ray in the player; RVU should allow such a use case.

Cerny did state that the CPU in the second chip (the so-called Southbridge) is there to handle background tasks because of restrictive EU power regulations. The Southbridge is on and the APU is mostly turned off; that should include the GDDR5 controller and memory. This depends on what the EU regulations will allow, as well as GPU and GDDR5 standby power requirements.

The low-standby-power assumption is that the hard disk is sleeping and the GDDR5 and APU are off. Instant on & instant start of gameplay would be a snapshot roll-in of x86 register information and GDDR5 memory data from flash. Waking up a hard disk, then decompressing and decrypting a file to snapshot roll-in to x86 from an (assumption) ARM-controlled hard disk, should be slower than from flash. This also requires that the ARM CPU has its own LP memory. The Southbridge would essentially be an ARM SoC supporting a 500mW standby, a 2-5 watt background mode, and XTV/RVU/DVR.

Higher-power standby mode (nearly instant on and gameplay restart): even if the GCN GPU does have a 3-watt standby mode and GDDR5 is in the same range (memory and GPU registers need to maintain their data), for a total of 5 watts in addition to the ARM (less than 500mW), it would be an always-on 5+ watts, which does not make sense and will probably not be allowed. Best from an always-on power view, though more complicated, is the first case above.
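Running Jeff's own numbers (all figures are his estimates from the paragraphs above, not measured values):

Code:
#include <stdio.h>

int main(void)
{
    /* Jeff's estimates from the post above -- assumptions, not spec. */
    double gpu_standby_w = 3.0;  /* GCN GPU keeping its registers alive */
    double gddr5_w       = 2.0;  /* GDDR5 "in the same range", rounded  */
    double arm_w         = 0.5;  /* ARM southbridge standby budget      */

    printf("always-on draw: ~%.1f W vs. the 0.5 W EU standby target\n",
           gpu_standby_w + gddr5_w + arm_w);
    /* ~5.5 W is why keeping the APU + GDDR5 alive looks untenable, and
       why a snapshot to flash + ARM-only standby is the cleaner fit.  */
    return 0;
}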

The PS3 hypervisor, with one SPU used and encryption/decryption of the PS3 kernel to the hard disk, matches some of the TrustZone features. TrustZone is used with IO, so the Southbridge, close to the IO and hard disk, is where it should be located. Second, TrustZone is ARM code only, but it can manage another ISA family's boot and function as a move engine.

Supporting Xtended TV is going to allow Java and JavaScript code from TV video streams and the websites targeted in those streams. I.e., the PS4 and Xbox 720 will be open systems with security concerns. Sony can no longer just restrict access to the PS3/PS4. XTV and RVU will be on when the TV is on, and I am guessing that ARM is used for this, including an ARM GPU for a GPU-accelerated OS UI and XTV.

ARM Trustzone in Game consoles Likely

ARM is famous for its low-power chip designs, Gemalto is known for its NFC security features, and Giesecke & Devrient brings some nice nano-SIM notoriety to the table. As a trio, these companies want to push forward a security standard that could be readily used in a wide range of web-connected devices, including tablets, smart TVs, game consoles and smartphones. The standard itself is built on ARM's TrustZone hardware-based security, which has been around for a while and is built into every ARM Cortex-A series processor.

[Image: arm-trusted-2.jpg]


http://www.xbitlabs.com/news/mobile/display/20121218221526_ARM_G_D_and_Industry_Players_Develop_Trusted_Execution_Environment_for_Mobile_Devices.html said:
A Trusted Execution Environment (TEE) is a secure area that resides in the application processor of an electronic device. Separated by hardware from the main operating system, a TEE ensures the secure storage and processing of sensitive data and trusted applications. It protects the integrity and confidentiality of key resources, such as the user interface and service provider assets. A TEE manages and executes trusted applications built in by device makers as well as trusted applications installed as people demand them. Trusted applications running in a TEE have access to the full power of a device's main processor and memory, while hardware isolation protects these from user installed apps running in a main operating system. Software and cryptographic isolation inside the TEE protect the trusted applications contained within from each other. Device and chip makers use TEEs to build platforms that have trust built in from the start, while service and content providers rely on integral trust to start launching innovative services and new business opportunities.

"Trustonic will accelerate the adoption and widespread use of ARM TrustZone technology in a diverse set of trusted enterprise, commerce and entertainment services by delivering a Trusted Execution Environment to the broad ARM ecosystem," said Warren East, chief executive officer of ARM.

Numerous companies, including 20th Century Fox Home Entertainment, Cisco, Discretix, Good Technology, Inside Secure, Irdeto, MasterCard, Nvidia, Samsung Electronics, Sprint, Symantec, and Wave Systems, plan to work with Trustonic and adopt the TEE.
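As a loose conceptual model of the TEE idea quoted above (a toy sketch only, nothing to do with ARM's actual secure monitor interface): the main OS can reach secure services solely through one checked gate, never by direct call:

Code:
/* Toy model only: the "normal world" OS reaches secure services solely
   through a single checked gate, never by calling them directly.      */
typedef enum { SVC_STORE_KEY, SVC_SHOW_TRUSTED_PIN_UI } secure_service;

/* Lives in hardware-isolated memory the main OS cannot read or write. */
static int secure_world_dispatch(secure_service id)
{
    switch (id) {
    case SVC_STORE_KEY:           /* ...handle key storage...   */ return 0;
    case SVC_SHOW_TRUSTED_PIN_UI: /* ...draw the trusted icon... */ return 0;
    }
    return -1; /* unknown request: refused, not crashed */
}

/* The only entry point exposed to the normal world. */
int tee_call(secure_service id)
{
    return secure_world_dispatch(id);
}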

FCC to Force all Cable TV Providers to Stream HD With "Open" Standard by 2014

Was supposed to go into effect Dec 2012 but was delayed by TiVo.

The U.S. Federal Communications Commission -- upset that cable television providers (CTPs) did not allow streaming of HD video via secured connections like the Digital Living Network Alliance (DLNA) standard -- in 2010 decided to force the issue, proposing an order to force CTPs to stream.

The new set of rules, set to be made mandatory by June 2, 2014, also clarifies what capabilities are expected of the HD streams:

recordable high-definition video
closed captioning data
service discovery
video transport
remote control command pass-through

DLNA Premium Video Profile, an HD-compliant version of the secure-streaming standard set to be ratified in 2013, was suggested as one possible option for cable companies.
Looks like the RVU additions to DLNA recently adopted are the standard. Xtended TV is coming to OTA with ATSC 2.0 in the US this year and those standards should make their way into cable.

So DVR ability with an always-on Xbox 720 or PS4 is a given, right? Broadcasters are going to insist the platforms be secure, and most new set-top boxes are including ARM TrustZone.

This is the hook, the reason an always-on game console can command the living room. But there is going to be competition from cheap ARM-powered Google TV STBs, so the consoles need to be easier to use, more powerful, and offer more features than the cheaper ARM platforms, starting with a camera and Skype included.

[Image: dlnapremiumvideo.png]


Well actually, in addition to the cable box gateway device, the home network can have DLNA media servers. Every RVU device that can access the cable gateway can access the DLNA media server, and likely the DVR in the PS4 and Xbox 720. For a fee, the PS4 will likely find the commercial media (music, pictures and video) on the home network and index it for you (with cover art). Family movies and pictures will likely be indexed for free, and will allow Picasa-like searches by face or scene detection. Lots of cloud services for both Microsoft and Sony to offer on a tiered fee schedule.
 

onQ123

Member
Yeah, the off-chip southbridge sorta kills the AMD stock GNB all-in-one idea (potted MCM still likely though). [...]


sweetvar26 said that the ARM security wasn't in the PS4, only the Xbox3.

A feature for the Xbox that the PS4 doesn't have is something related to "ARM security".
 
As far as I can tell, there's nothing really special in that info. If anything, it reveals the Onion/Garlic memory buses are a standard AMD APU feature rather than any kind of PS4 customization.
 
sweetvar26 said that the ARM security wasn't in the PS4, only the Xbox3.
It's not in Thebe (it's in the second chip), and sweetvar26 was relying on his roommate for information from AMD; the second custom chip mentioned is likely not produced by AMD. Kryptos, meaning hidden, likely means something like Google TV is hidden in Kryptos. Kryptos as one chip containing an ARM system (the reason for DDR3) means a larger chip or reducing the number of CUs. Sony going with GDDR5 means the Google TV-like feature set could not use GDDR5 memory (too hot and draws too much energy), so a second SoC with its own memory.

The lowest-power design would use DDR3 components in a 2.5D stack on an interposer. Oban may be the interposer for Microsoft. This would allow LPDDR3 power levels at 67GB/sec speeds with 512-bit-wide memory => 2 256-bit or 4 128-bit channels. Wide IO uses DDR3 components and has 4 channels = 4 move controllers.
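Backing out the transfer rate from those figures (the 67GB/sec and 512-bit numbers are from the post above; the arithmetic is just illustrative):

Code:
#include <stdio.h>

int main(void)
{
    /* 512-bit bus and 67GB/sec target are from the post above. */
    double bus_bytes = 512.0 / 8.0;                 /* 64 bytes/transfer */
    double target_bw = 67e9;                        /* bytes per second  */
    double rate      = target_bw / bus_bytes / 1e9; /* GT/s per pin      */

    printf("needed per-pin rate: ~%.2f GT/s\n", rate);
    /* ~1.05 GT/s: DDR3-1066-class components on a 512-bit (2x256 or
       4x128 channel) interface would hit the quoted 67GB/sec.       */
    return 0;
}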

All speculation, but it fits. It's also totally ass-backwards from what I was expecting for the PS4 based on the Sony CTO, the Sony SVP of Technology Platform, Charlie at Semi-Accurate and the Yole PDF. It likely follows that GF is producing Kryptos and it's being packaged by AMKOR, and the now-rumored delays for the Xbox are due to the faster Wide IO memory shortages.

Most of what you speculated is going to be in 2014 AMD designs and is likely in Kryptos. As with many of my speculations, you are half right; the underlying logic is there, but Sony decided to do it their way, which could not be anticipated.

If I'm correct, and there is a very large amount of speculation in this, Kryptos and the PS4 will price out about the same, with Kryptos quickly becoming much cheaper but Sony able to get the PS4 to market before Kryptos. A refresh to use stacked DDR4 for the PS4 is very possible in 1 to 2 years. A good decision on Sony's part, I think.

Edit:
onQ123 said:
The weak AMD chip with only 320 ALUs that sweetvar26 leaked last year wasn't the PS4's GPU; it was the secondary chip.
Yup, but that creates an issue with my assumption above about sweetvar26 not knowing about the second chip. Likely the 320-ALU GPU is the minimum necessary for a GPU-accelerated HTML5 browser and/or UI.
 