
Interesting new patents by SCE, multiple GPUs in mind

Panajev2001a

GAF's Pleasant Genius
URL: http://appft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=5&f=G&l=50&co1=AND&d=PG01&s1=%22Sony+Computer%22.AS.&OS=AN/%22Sony+Computer%22&RS=AN/%22Sony+Computer%22

I am personally quite puzzled by this patent, as it opens the door either to a portable system with a "low power GPU" coupled to a "high performance GPU", or to a CPU+GPU system in which the CPU (with a small GPU inside?) can take the graphics processing duty away from the GPU when there is a need to conserve battery life (at the cost of performance).
It does cover the case of multiple GPUs used to increase performance, but I think that is more of a "let's cover our legal butt for future products (that is, PS4)" kind of thing than the main element of this patent IMHO... which would seem to be PSP2 related, being an SCE patent (an SCEA R&D patent, to be more accurate).

This patent covers quite a bit of ground, and a good place to start is this bit from the patent:

[...] Some laptop computers are beginning to solve this problem by introducing two GPUs in one laptop-one a low-performance, low-power consumption GPU and the other a high-performance, high-power consumption GPU-and letting the user decide which GPU to use.

[...]

[0004]Unfortunately, architecturally dissimilar GPUs are not capable of cooperating with one another in a manner that allows seamless context switching between them. Therefore a problem arises in computing devices that use two or more architecturally dissimilar GPUs in that in order to switch from one GPU to another the user must stop what they are doing, select a different GPU, and then reboot the device.

[...]

[0006]It would be desirable to allow the context switching to be hidden from the user and performed automatically in the background. Unfortunately, no solution is presently available that allows for dynamic, real-time context switching between architecturally distinct GPUs. The closest prior art is the Apple MacBook Pro, from Apple Computer of Cupertino, Calif., which contains two architecturally distinct GPUs but does not allow dynamic context switches between them.

Basically this is talking about a way to solve the problem of dynamically switching which graphics processor is active without user intervention. The example given is the older MacBook Pro, which has both an integrated GPU (or IGP) and a dedicated GPU (the GeForce 9600M GT), letting you choose between maximum battery life (with the IGP) and highest performance (with the dedicated GPU). Partially because of the "effort" needed to switch the active GPU (you have to log out and log back in after telling the OS you want to switch), and partially because the IGP provides good enough performance (especially when coupled with a fast enough CPU), most of the time you end up using the IGP even though you have a fast GPU with lots of dedicated RAM attached to it.

This patent identifies two problems:

1.) you want the system to be able to switch between GPUs dynamically, at runtime, without requiring user intervention or interrupting gameplay.

2.) you want to be able to decouple the kind of GPU used from the GPU the OS sees. You want to be able to use two completely different GPUs (one of which might even be the CPU running a software renderer) and seamlessly move from one of the GPUs to the other(s).

[0047]Another solution would be to have the CPU interpret the architecture neutral instruction set and have the GPU Context Controller completely shut down the GPU. Graphics performance might severely degrade but potentially less power would be consumed. According to this solution the CPU would take over the processing tasks handled by the GPU. In such a case, this solution may be implemented in a system with just one GPU. Specifically, the CPU could take over for the GPU by performing a context switch between the GPU and the CPU.

Point 2.) pretty much requires separating the GPU's native display list format (the display list contains all the references to data and the instructions the GPU needs to do its work) from the display list format the OS/application side deals with. The app writes to a neutral display list format, and that format is later compiled and optimized for whichever GPU the OS wants to render on.
While this does introduce a certain overhead, it can also have some positive side effects on performance, as the neutral display list can be designed to help the GPU driver recognize optimization opportunities and act on them as it sees fit. It might also be easier to expose to developers a fully documented and well understood neutral "virtual GPU" and keep the details of the physical GPU hidden from reverse engineering, if the GPU maker so wishes. Another benefit is easier backwards compatibility, as you only need to add a new target to the code that compiles the native GPU display list... the game thinks in terms of this virtual GPU and not of a physical GPU with its particular quirks (not a 100% foolproof solution, but it can help... after all, not even on the iDevices are you completely abstracted from specific PowerVR features).
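Just to make the idea concrete, here is a minimal sketch of what "compiling" a neutral display list per GPU could look like. All the names (NeutralCmd, GpuBackend, etc.) are mine, not from the patent, and one of the backends could just as well be the CPU software path mentioned in [0047]:

```cpp
#include <cstdio>
#include <string>
#include <vector>

// A neutral command: what the game/app writes, with no knowledge of the physical GPU.
struct NeutralCmd { std::string op; int arg; };           // e.g. {"draw", meshId}

// One backend per physical GPU (or a CPU software-renderer path, as in [0047]).
struct GpuBackend {
    std::string name;
    // "Compile" the neutral list into whatever this GPU actually consumes.
    std::vector<std::string> compile(const std::vector<NeutralCmd>& list) const {
        std::vector<std::string> native;
        for (const auto& c : list)
            native.push_back(name + ":" + c.op + "(" + std::to_string(c.arg) + ")");
        return native;
    }
};

int main() {
    std::vector<NeutralCmd> displayList = {{"bind_texture", 7}, {"draw", 42}};
    GpuBackend bigGpu{"HighPerfGPU"}, smallGpu{"LowPowerGPU"};

    bool onBattery = false;                                // pretend power state
    const GpuBackend& active = onBattery ? smallGpu : bigGpu;

    // The game only ever wrote the neutral list; which backend consumes it is the driver's business.
    for (const auto& native : active.compile(displayList))
        std::printf("%s\n", native.c_str());
    return 0;
}
```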

This patent also covers the possibility of having two or more GPUs and distributing work across them, shifting it from GPU to GPU when needed without user/programmer intervention.
[0045]Embodiments of the present invention as described herein may be extended to enable dynamic load balancing between two or more graphics processors for the purpose of increasing performance at the cost of power, but with architecturally similar GPUs (not identical GPUs as with SLI). By way of example, and not by way of limitation, a context switch may be performed between the two similar GPUs based on which one would have the higher performance for processing a given set of GPU input. Performance may be based, e.g., on an estimated amount of time or number of processor cycles to process the input.
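Reduced to a toy model (the throughput numbers and names below are invented by me, purely for illustration), the load-balancing rule in [0045] is basically "estimate, then dispatch to whichever GPU is predicted to finish first":

```cpp
#include <cstdio>

struct Gpu {
    const char* name;
    double cyclesPerSec;          // crude throughput model for this GPU
    double queuedCycles;          // work already sitting in its queue
};

// Estimated time for `g` to finish its backlog plus the new batch of input.
double estimateSeconds(const Gpu& g, double workCycles) {
    return (g.queuedCycles + workCycles) / g.cyclesPerSec;
}

// Hand the work to whichever GPU is predicted to finish it sooner.
const Gpu& pickGpu(const Gpu& a, const Gpu& b, double workCycles) {
    return estimateSeconds(a, workCycles) <= estimateSeconds(b, workCycles) ? a : b;
}

int main() {
    Gpu busyFast{"GPU-A", 4.0e9, 9.0e8};    // fast, but already has a big backlog
    Gpu idleSlow{"GPU-B", 1.0e9, 0.0};      // slower, but idle
    double batch = 2.0e8;                   // pretend cost of the next batch of input

    std::printf("dispatching to %s\n", pickGpu(busyFast, idleSlow, batch).name);
    return 0;
}
```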

Still puzzled...
 

gofreak

GAF's Bob Woodward
It does raise some questions.

But it may be pie-in-the-sky stuff.

I mentioned this patent briefly in the PSP2 thread - I think one noteworthy thing is that it's from a Sony Bend staffer.
 

RavenFox

Banned
The Firm that filed this needs to correct the Assignee section.

The embodiments section has some sweet stuff in there.
 

Arnie

Member
Wouldn't it be fair to assume that while this technique increases battery life it also increases cost, as Sony would have to include multiple GPUs per system? Going from the example given, I have a MacBook Pro with the ability to choose between GPUs depending on what work I'm doing, and I really love the feature. It would be great if the PSP2 is a device capable of 3G web browsing or even a phone, in which case you could disable the powerful GPU for such mundane tasks but flick it on for playing games.

In fact, doesn't this patent as a whole pretty much confirm the PSP2's ability to perform tasks other than just games, paving the way for a PSPhone of some sort? I know it's already been wildly speculated, but it really does make sense given this patent.
 

gofreak

GAF's Bob Woodward
It could relate to a phone device that has more than one type of processor. But if a Playstation phone did exist, why not just use the PSP2 GPU for everything including the phone tasks?

The other thing that springs to mind is if they include a version of the PSP graphics chip for BC. Letting it handle things when it can, then switching over to the PSP2 GPU when necessary.

The only other thing I could think of is having versions of the same core, one low power, and dynamically switching between them as load increases and decreases. But the architecturally distinct bit wouldn't fit here, and I'm not sure if this would save a lot of power vs normal power stepping and such within one GPU (?)

OTOH, I'd sort of be surprised if a Sony Bend staffer was working on something so low level. I can believe they'd be doing PSP2 work and research, but at this level? I mean, coming up with this architecture neutral instruction list would be a fairly big deal, no?
 
badcrumble

Pardon my ignorance, but wouldn't this sort of defeat the purpose of a dedicated gaming console (i.e. developers know exactly which spec they're developing and QAing for)? I mean, the PSP's ability to switch between 222 and 333 MHz has already done this, to a degree. But I feel like this sort of solution is much more at home on personal computers than it is on a dedicated single-spec piece of hardware. If it has to do with backwards compatibility then that's a different matter entirely, I suppose, though it'd likely drive up build costs in the same way that including PS2 hardware in the PS3 did.
 

androvsky

Member
Seems like that would cause all sorts of problems in a dedicated gaming device. If you switch to the crappy GPU in the middle of a game, does the game need to recognize this and switch to lower-resolution assets? If it switches to the CPU, does that mean devs have to leave a lot of CPU power unused so that the game is still playable when the CPU is suddenly tasked with rendering the graphics on top of everything else? They'll have to have such an overpowered CPU that they might be better off not bothering with any of this.

And it runs contrary to the big benefit of dedicated gaming systems delivering a predictable end-user experience.
 

RavenFox

Banned
badcrumble said:
Pardon my ignorance, but wouldn't this sort of defeat the purpose of a dedicated gaming console (i.e. developers know exactly which spec they're developing and QAing for)? I mean, the PSP's ability to switch between 222 and 333 MHz has already done this, to a degree. But I feel like this sort of solution is much more at home on personal computers than it is on a dedicated single-spec piece of hardware. If it has to do with backwards compatibility then that's a different matter entirely, I suppose, though it'd likely drive up build costs in the same way that including PS2 hardware in the PS3 did.
The CPU on the PSP was locked. This is different. Why do you think it would be any different to do this on a home console, when they behave just like computers and you code on them the same way? I mean, they are computers. Remember, Cell and RSX are already doing this to a degree. This is the extension of that for Sony.
androvsky said:
Seems like that would cause all sorts of problems in a dedicated gaming device. If you switch to the crappy GPU in the middle of a game, does the game need to recognize this and switch to lower-resolution assets? If it switches to the CPU, does that mean devs have to leave a lot of CPU power unused so that the game is still playable when the CPU is suddenly tasked with rendering the graphics on top of everything else? They'll have to have such an overpowered CPU that they might be better off not bothering with any of this.

And it runs contrary to the big benefit of dedicated gaming systems delivering a predictable end-user experience.
That's not going to happen.

GPU Context Controller is configured to perform a context switch from the high power GPU to the low power GPU if the high power GPU is the active GPU and the high power GPU is operating at a processing capacity that is less than or equal to the maximum processing capacity of the low power GPU.
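As a toy rule (my own names and numbers, just to show the shape of that claim; the switch back up when load exceeds the small GPU is my addition, not quoted above):

```cpp
#include <cstdio>
#include <initializer_list>

struct Gpu { const char* name; double maxCapacity; };     // capacity in arbitrary load units

// The quoted rule: drop to the low-power GPU whenever it could handle the current
// load on its own. (The switch back up is my own addition for illustration.)
const Gpu* contextSwitch(const Gpu* active, const Gpu& hi, const Gpu& lo, double currentLoad) {
    if (active == &hi && currentLoad <= lo.maxCapacity) return &lo;   // save power
    if (active == &lo && currentLoad >  lo.maxCapacity) return &hi;   // get performance back
    return active;
}

int main() {
    Gpu hi{"high-power GPU", 100.0};
    Gpu lo{"low-power GPU", 30.0};
    const Gpu* active = &hi;

    for (double load : {80.0, 25.0, 25.0, 60.0}) {        // pretend per-frame load samples
        active = contextSwitch(active, hi, lo, load);
        std::printf("load %5.1f -> %s\n", load, active->name);
    }
    return 0;
}
```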
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Am I the only one whose first thought after reading this is: Gallium.
 

poppabk

Cheeks Spread for Digital Only Future
Pardon my ignorance, but why is it easier to have two GPUs rather than one GPU that scales based on demand? Does a powerful modern GPU require that much more power to display a static windows screen than a lower-level piece of hardware - and if so, why?
 

gofreak

GAF's Bob Woodward
androvsky said:
Seems like that would cause all sorts of problems in a dedicated gaming device. If you switch to the crappy GPU in the middle of a game, does the game need to recognize this and switch to lower-resolution assets? If it switches to the CPU, does that mean devs have to leave a lot of CPU power unused so that the game is still playable when the CPU is suddenly tasked with rendering the graphics on top of everything else? They'll have to have such an overpowered CPU that they might be better off not bothering with any of this.

And it runs contrary to the big benefit of dedicated gaming systems delivering a predictable end-user experience.

It wouldn't necessarily be something exposed to the end-user as something they could twiddle.

i.e. something the system does in the background, but only stepping up when necessary, and only stepping down when the load on the system is low enough to be handled by the lower power chip - i.e. do it without degradation.

Thinking about the notion of a PSP chip being in there though, I'm not sure why you'd need to dynamically switch between them rather than just have apps tell the system which chip they'll use, as normal. Unless they really think some PSP2 games, or parts thereof, might run OK on that chip, but that seems unlikely.

The only other thing I can think of is if they put two chips in there from different manufacturers, both with a reasonable minimum level of capability, but one lower powered, one high powered, each potentially with a different API, and then flick between them as games demand using a scheme like this. But would this really save power over power-stepping in one GPU? And there's the cost of something like this too.
 

PistolGrip

sex vacation in Guam
Weak GPU for Android (gimped if you will)
Strong GPU for full-fledged PSP licensed games

Oh snap blew yo' mind.
 

mrklaw

MrArseFace
Perhaps it uses the low power GPU while on batteries for portable gaming, but when you dock it at home to charge and play on your HDTV it switches to higher power GPU?

Or IGP for movie decoding and discrete GPU for games.
 

gofreak

GAF's Bob Woodward
mrklaw said:
Perhaps it uses the low power GPU while on batteries for portable gaming, but when you dock it at home to charge and play on your HDTV it switches to higher power GPU?

Or IGP for movie decoding and discrete GPU for games.

In terms of the 'plug in at home for HD' thing, I don't think this is at all about scaling output quality one way or the other. It's 'just' about figuring out what resources are or aren't needed as a game plays, and switching processing between a lower-powered and higher-powered GPU to match those needs. You also wouldn't need this kind of dynamic switching for putting movie playback on one chip and higher-powered stuff on another either... that would be hardcoded. What's described is much more on-the-fly, so for example, when your render load decreases because you're in a small room, it might switch, and then switch again when the render load increases as you walk out into a busier scene or whatever.

Question is whether this really saves power over power-stepping within one GPU, and also whether the switch would always be invisible to the user. If you like, it's taking power-stepping from within one GPU to between two GPUs that might even be quite different in how they operate. But is this worth it?

(I guess we'll find out when Sony announces their tech - if it really does give a big saving on power and they can manage the cost, then maybe a scheme like this will appear in PSP2. If not, I'm sure they won't use it.)
 

spwolf

Member
PistolGrip said:
Weak GPU for Android (gimped if you will)
Strong GPU for full-fledged PSP licensed games

Oh snap blew yo' mind.


Weak GPU for non-gaming, full GPU for gaming, more likely...


You could not scale the games that much to work on both, not 3D games... the good thing about this is a lot lower power consumption, as the powerful GPU is turned off completely... this is why Optimus has a lot of potential vs. a GPU just turning parts of itself off.
 

Panajev2001a

GAF's Pleasant Genius
gofreak said:
It does raise some questions.

But it may be pie-in-the-sky stuff.

I mentioned this patent briefly in the PSP2 thread - I think one noteworthy thing is that it's from a Sony Bend staffer.

I thought I caught this before you... your patent-sniffing skills are still very good ;).
 

Panajev2001a

GAF's Pleasant Genius
poppabk said:
Pardon my ignorance, but why is it easier to have two GPUs rather than one GPU that scales based on demand? Does a powerful modern GPU require that much more power to display a static windows screen than a lower-level piece of hardware - and if so, why?

It might be easier to use a small GPU that does a simple task optimally than to just put to sleep and wake up portions of a big GPU to do that simple task. You do that with big chips to save on power consumption, but it might yield even better results if you can afford to keep two separate GPUs, each optimized for the task at hand. By better I mean reducing the power consumption even further.
 

Panajev2001a

GAF's Pleasant Genius
gofreak said:
In terms of the 'plug in at home for HD' thing, I don't think this is at all about scaling output quality one way or the other. It's 'just' about figuring out what resources are or aren't needed as a game plays, and switching processing between a lower-powered and higher-powered GPU to match those needs. You also wouldn't need this kind of dynamic switching for putting movie playback on one chip and higher-powered stuff on another either... that would be hardcoded. What's described is much more on-the-fly, so for example, when your render load decreases because you're in a small room, it might switch, and then switch again when the render load increases as you walk out into a busier scene or whatever.

Question is whether this really saves power over power-stepping within one GPU, and also whether the switch would always be invisible to the user. If you like, it's taking power-stepping from within one GPU to between two GPUs that might even be quite different in how they operate. But is this worth it?

(I guess we'll find out when Sony announces their tech - if it really does give a big saving on power and they can manage the cost, then maybe a scheme like this will appear in PSP2. If not, I'm sure they won't use it.)

I do not think that they would do GPU switching at a frame-level granularity, that is, each frame based on predicted load. Unless the load changed dramatically, there would be little point over just using the fast GPU and letting it optimize its power consumption automatically. It might be easier when you have very simple loading screens, FMVs, and mostly static elements. A power-optimized GPU, with access to the HW video decode hardware, might handle that kind of load very efficiently.
 

gofreak

GAF's Bob Woodward
M3d10n said:
Sorry Sony, it has been done and put on the market already: NVidia Optimus.

What's described here goes deeper than Optimus.

AFAIK - though please correct me if I'm wrong - Optimus works simply with application profiles. If this is a game it knows needs a GPU, fire up the main GPU. If this is a web browser, use the integrated GPU. Its claim to fame is doing this without user intervention, and without screen flicker.

This patent is about generalising graphics work to run on either processor, and switching on increased or decreased processing load while apps are running, not based on an application profile at start time. So theoretically it could switch at a much more granular level - for example, when scenes within a game become more or less complex.
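To put that contrast in code form (a totally made-up sketch of mine, not how either driver actually works):

```cpp
#include <cstdio>
#include <initializer_list>
#include <map>
#include <string>

enum class Gpu { Integrated, Discrete };

// Optimus-style (as I understand it): one decision per application, taken at launch.
Gpu profileBased(const std::string& app) {
    static const std::map<std::string, Gpu> profiles = {
        {"browser", Gpu::Integrated}, {"game", Gpu::Discrete}};
    auto it = profiles.find(app);
    return it != profiles.end() ? it->second : Gpu::Integrated;
}

// Patent-style: re-evaluated continuously from the measured render load,
// with a bit of hysteresis so it doesn't flap between chips every frame.
Gpu loadBased(Gpu current, double load, double upAt = 0.8, double downAt = 0.4) {
    if (current == Gpu::Integrated && load > upAt)   return Gpu::Discrete;
    if (current == Gpu::Discrete   && load < downAt) return Gpu::Integrated;
    return current;
}

int main() {
    std::printf("profile: game -> %s\n",
                profileBased("game") == Gpu::Discrete ? "discrete" : "integrated");

    Gpu g = Gpu::Integrated;
    for (double load : {0.2, 0.9, 0.7, 0.3}) {            // scene complexity over time
        g = loadBased(g, load);
        std::printf("load %.1f -> %s\n", load, g == Gpu::Discrete ? "discrete" : "integrated");
    }
    return 0;
}
```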


Panajev2001a said:
I do not think that they would do GPU switching at a frame-level granularity, that is, each frame based on predicted load. Unless the load changed dramatically, there would be little point over just using the fast GPU and letting it optimize its power consumption automatically. It might be easier when you have very simple loading screens, FMVs, and mostly static elements. A power-optimized GPU, with access to the HW video decode hardware, might handle that kind of load very efficiently.

If there's no benefit to at least a certain level of dynamic switching - not necessarily frame by frame, but scene by scene, level by level, or over sets of frames - then this patent becomes sort of useless, since you could just do the application-profile switching Optimus does, for example, without having to 'neutralise' the instructions sent to the GPU. This patent seems aimed at switching without the game or app knowing about it, and that would only become necessary if it was being done 'on the fly' in response to changing conditions within the application.
 

Panajev2001a

GAF's Pleasant Genius
M3d10n said:
Sorry Sony, it has been done and put on the market already: NVidia Optimus.

True, but I do not think it goes as far as Sony wants it to go; that is, I do not think nVIDIA wants a GPU-agnostic solution that makes changing GPU vendor easier than before while preserving backward compatibility :p.
 

AndyD

aka andydumi
RavenFox said:
The Firm that filed this needs to correct the Assignee section.

The embodiments section has some sweet stuff in there.

Haha yeah.

It could be interesting. Could you use them in conjunction, say weak GPU for XMB/phone overlays onto the strong GPU's 3D graphics?

The idea of switching is a nice one, particularly if invisible.

Could you also use it to migrate software across devices?
 

Panajev2001a

GAF's Pleasant Genius
gofreak said:
It could relate to a phone device that has more than one type of processor. But if a Playstation phone did exist, why not just use the PSP2 GPU for everything including the phone tasks?

The other thing that springs to mind is if they include a version of the PSP graphics chip for BC. Letting it handle things when it can, then switching over to the PSP2 GPU when necessary.

The only other thing I could think of is having versions of the same core, one low power, and dynamically switching between them as load increases and decreases. But the architecturally distinct bit wouldn't fit here, and I'm not sure if this would save a lot of power vs normal power stepping and such within one GPU (?)

OTOH, I'd sort of be surprised if a Sony Bend staffer was working on something so low level. I can believe they'd be doing PSP2 work and research, but at this level? I mean, coming up with this architecture neutral instruction list would be a fairly big deal, no?

Yes, but it is not so low level that it crosses into the HW engineers' job. You can also see dynamic shader partitioning and re-compilation/optimization patents by SCEA R&D that could have helped PS2's GS emulation and are probably part of the kit Sony is now using for their PS2 HD ports. I am personally very happy if SCE is letting their very bright software R&D guys take a big lead in developing the next-generation technologies that will drive the PlayStation business forward (the entire Move deal showed how strong and productive their R&D teams are).
 

Panajev2001a

GAF's Pleasant Genius
AndyD said:
Haha yeah.

It could be interesting. Could you use them in conjunction, say weak GPU for XMB/phone overlays onto the strong GPU's 3D graphics?

The idea of switching is a nice one, particularly if invisible.

Could you also use it to migrate software across devices?

Yes and... kind of yes ;).

GPU commands are not everything, but it would make for easier emulation for sure, unless you were trying to run the same software across devices with vastly differing performance profiles (and were trying to move the software to a slower device).
 