
IBM: Cell continues as integrated part of Power roadmap; working on next consoles

DrXym

Member
It's funny how people proclaimed the Cell "too hard" when the entire industry has started going the same way with CUDA / OpenCL and GPGPU programming in general.

I would be surprised if MS does adopt the Cell, but they have something called DirectCompute which is an abstraction layer much like OpenCL that could form the basis of creating work units which are offloaded onto the GPU or some other exotic high speed processor.
 
If they continue to use Cell for the Playstation line, they need to find a better replacement for the PPE on the Cell itself, and a GPU with fewer bottlenecks. I'd love to see what the current cell could do with a better GPU, but that likely will never be observed.
 
gofreak said:
In terms of downsides that have manifested in a very material way in end results, I think Cell's are probably less prominent than others. I'd hazard a guess that most developers wouldn't tell you, today at least, that Cell is where their performance bottleneck was in PS3 games... far from it, probably.

And I wouldn't for one minute disagree with that, but it certainly caused some headaches, especially early on, and if you're going to increase the number of SPEs, it's definitely a problem which needs solving.

DrXym said:
It's funny how people proclaimed the Cell "too hard" when the entire industry has started going the same way with CUDA / OpenCL and GPGPU programming in general.

I would be surprised if MS does adopt the Cell, but they have something called DirectCompute which is an abstraction layer much like OpenCL that could form the basis of creating work units which are offloaded onto the GPU or some other exotic high speed processor.

The irony is, of course, that CELL would likely perform tremendously well on most DirectCompute tasks if Microsoft/IBM wrote a driver for it.
 

panda21

Member
DrXym said:
It's funny how people proclaimed the Cell "too hard" when the entire industry has started going the same way with CUDA / OpenCL and GPGPU programming in general.

cell still is much harder than GPGPU programming. they are not especially similar architectures and the way cell works is much harder to program for, but the GPGPU is also only good for certain problems where you want to do the exact same thing thousands of times.

basically they had a neat idea but it seems the conventional CPU manufacturers aren't too good at delivering on this stuff at the moment (e.g. intel with larrabee)
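to make that concrete, here's roughly the kind of thing gpgpu is good at - a trivial opencl kernel (made up for illustration, not from any real game) where every one of potentially millions of work-items runs the identical little program on its own slot of one big flat buffer:

Code:
/* OpenCL C: one tiny program, launched once per element.
   Every work-item does the exact same multiply-add on its own
   element of a flat buffer in global memory. */
__kernel void scale_and_bias(__global const float *in,
                             __global float *out,
                             const float scale,
                             const float bias,
                             const unsigned int n)
{
    size_t i = get_global_id(0);   /* which element am i? */
    if (i < n)                     /* guard the tail of the buffer */
        out[i] = in[i] * scale + bias;
}

if your problem doesn't look like that (same operation over a huge regular data set), the model doesn't help you much.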
 
Man said:
SCE studios and a few 3rd parties are offloading a lot of graphics related work to the Cell.
The "few 3rd parties" part is the problem though.

Every retail PS3 developer is offloading some amount of graphics workload to CELL these days. Some are just doing more than others.
 
panda21 said:
cell still is much harder than GPGPU programming.

Utter hogwash. Loading half a dozen threads is much easier than loading hundreds of threads. Creating "GPGPU" code is only "easier" because there's a simple high level language to work with, but there's absolutely nothing stopping someone from creating an OpenCL or DirectCompute driver for CELL. Oh and yes, CELL could chew through most OpenCL code very quickly if someone wrote a decent driver for it.

Do you think it's just a coincidence that most of the current usage cases for DirectCompute (like better quality SSAO and motion blur) are exactly the same things that are being offloaded to CELL? Oh, and fyi, the highest quality current implementation of motion blur is actually on the PS3, not the PC, and no, it's not running on RSX! :lol
 

Truespeed

Member
DonMigs85 said:
If they care at all about BC they'll have to continue using IBM parts. However, the Wii and 360's respective CPUs shouldn't be that tough to emulate on a decent Intel/AMD chip, but the Cell will be.

Fortunately, one thing Intel or AMD couldn't emulate is price - which is why they all went running to IBM in the first place.
 
Truespeed said:
Fortunately, one thing Intel or AMD couldn't emulate is price - which is why they all went running to IBM in the first place.

Back in 2004? No, but I think a Fusion design from AMD certainly could these days, especially since you're now free to fab that chip wherever you want. The AMD of 2010 is completely different to the AMD of 2004.

No one's going to go with Intel though, I'll give you that.
 

gofreak

GAF's Bob Woodward
brain_stew said:
And I wouldn't for one minute disagree with that but it certainly caused some headaches, especially early on and if you're going to increase the number of SPEs, its definitely a problem which needs solving.

If they scale the number of SPEs up they'll scale the number of Power cores too.

It's interesting that they say Cell will become integrated with their next line - meaning, presumably, Power8. Perhaps, like their recent networking chip, Power8 will have the main Power core and then an assortment of other co-processors and accelerators dedicated to certain tasks. In that case, SPUs - or something SPU-compatible - might become the default co-processor option in Power8, possibly as optional add-ins.

I would guess then, if 'their next line' refers to Power8, that PS4 will use a Power8 chip.

It leaves open the possibility too of MS or Nintendo using Power8, and thus also some Cell flavouring. Unless Sony has some contractual clauses about that.
 

panda21

Member
brain_stew said:
Utter hogwash. Loading half a dozen threads is much easier than loading hundreds of threads. Creating "GPGPU" code is only "easier" because there's a simple high level language to work with, but there's absolutely nothing stopping someone from creating an OpenCL or DirectCompute driver for CELL. Oh and yes, CELL could chew through most OpenCL code very quickly if someone writes a decent driver for it.

what i meant is that given the capabilities of current GPUs, the model you would be forced to go with is much simpler to program than cell, where you can potentially have each SPU executing completely different code and need to keep moving data around the very small local memory each SPU has.

on the GPU you are just running one kernel that has access to several gigs of main memory (and shared memory if you want) that does the exact same thing in each thread. i agree it might be much harder to find a use for it in current games, it's just that the limitations of it mean what you can do is bound to be simpler, and so easier to do.

i'd contest that cell could compete with a fermi though if you were to try and make it execute opencl/cuda code, given the restrictions on the way those things work. in fact you just plain couldn't with most things because generally in gpgpu you have a huge amount of main memory to work with. even if you could i don't think it would compare well to a fairly plain recent desktop gpu just because it's quite old now.
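as a rough sketch of what the cell side involves (assuming the ibm cell sdk - spu-gcc and spu_mfcio.h - with made-up buffer handling, not code from any real engine): the spu can't just dereference main memory, it has to dma each chunk into its 256KB local store, work on it there, and dma the result back out.

Code:
#include <spu_mfcio.h>   /* MFC DMA intrinsics from the IBM Cell SDK */
#include <stdint.h>

#define CHUNK 4096  /* bytes per DMA; a single transfer tops out at 16KB */

/* local store buffer - DMA targets want 16-byte (ideally 128-byte) alignment */
static float ls_buf[CHUNK / sizeof(float)] __attribute__((aligned(128)));

/* argp = effective address of the array in main memory,
   envp = number of bytes to process (both handed over by the PPE program);
   assumes the size is a multiple of 16 bytes, as DMA sizes must be */
int main(uint64_t speid, uint64_t argp, uint64_t envp)
{
    uint64_t ea = argp;
    uint64_t remaining = envp;
    const unsigned int tag = 1;
    (void)speid;

    while (remaining > 0) {
        uint32_t size = remaining > CHUNK ? CHUNK : (uint32_t)remaining;

        /* pull a chunk of main memory into local store and wait for it */
        mfc_get(ls_buf, ea, size, tag, 0, 0);
        mfc_write_tag_mask(1 << tag);
        mfc_read_tag_status_all();

        /* the actual work happens on the copy in local store */
        for (uint32_t i = 0; i < size / sizeof(float); i++)
            ls_buf[i] = ls_buf[i] * 2.0f + 1.0f;

        /* push the result back out to main memory */
        mfc_put(ls_buf, ea, size, tag, 0, 0);
        mfc_write_tag_mask(1 << tag);
        mfc_read_tag_status_all();

        ea += size;
        remaining -= size;
    }
    return 0;
}

and that's the naive version - a real implementation would double-buffer, issuing the next mfc_get while still crunching the current chunk, which is exactly the sort of extra bookkeeping the gpu model never asks of you.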
 

Argyle

Member
DonMigs85 said:
Is RSX's 128-bit memory bus a direct consequence of only giving it 8 ROPs compared to 16 on a 7800 GTX, or would it have been possible to retain a 256-bit bus with only 8 ROPs?
The RSX has a 128-bit path to video memory because essentially the other 128 bits of the bus is connected to the system's main memory.
 

Truespeed

Member
Galvanise_ said:
If they continue to use Cell for the Playstation line, they need to find a better replacement for the PPE on the Cell itself, and a GPU with fewer bottlenecks. I'd love to see what the current cell could do with a better GPU, but that likely will never be observed.

With the exception of cost, I really don't think there's anything preventing IBM from just using symmetric cores in future Cell lines.
 

DrXym

Member
panda21 said:
cell still is much harder than GPGPU programming. they are not especially similar architectures and the way cell works is much harder to program for, but the GPGPU is also only good for certain problems where you want to do the exact same thing thousands of times.

basically they had a neat idea but it seems the conventional CPU manufacturers arent too good on delivering on this stuff at the moment (e.g intel with larrabee)

I see Cell, shaders, OpenCL, CUDA etc. as much of a muchness. They all use C-like languages, they all have ways of defining and executing units of work, they all have event/signal architectures for talking back and forth, they all have primitives for working over vectors of data rapidly.

Every platform has gotchas, but the techs have a lot in common. I expect anyone versed in one would pick up another fairly quickly. That's why I never got the "hard" claim when presumably every game team has people who were writing shaders anyway.

There are OpenCL implementations that run over Cell, CPUs and NVidia / AMD GPUs so I imagine that it will prove to be a very attractive proposition to games devs.
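The host side is where that portability shows up. Here's a minimal sketch (standard OpenCL 1.x C API, error handling trimmed, purely illustrative) that grabs whatever device the platform exposes - an NVidia/AMD GPU, an x86 CPU, or one of the Cell implementations mentioned above - and builds the same kernel source for it:

Code:
#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;
    char name[256];

    /* Take the first platform and the first device of any type it offers */
    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, NULL) != CL_SUCCESS) {
        fprintf(stderr, "no OpenCL platform/device found\n");
        return 1;
    }
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
    printf("building for: %s\n", name);

    /* The kernel source is identical no matter what device sits behind it */
    const char *src =
        "__kernel void scale(__global float *buf, float k) {\n"
        "    buf[get_global_id(0)] *= k;\n"
        "}\n";

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    err = clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    printf("build %s\n", err == CL_SUCCESS ? "succeeded" : "failed");

    clReleaseProgram(prog);
    clReleaseContext(ctx);
    return 0;
}

The work unit definitions, the event model, everything above this layer stays the same across vendors; only the driver underneath changes.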
 

DonMigs85

Member
Argyle said:
The RSX has a 128-bit path to video memory because essentially the other 128 bits of the bus is connected to the system's main memory.
Hmm really? I wonder if it would have been wiser to just devote the full 256 bits to VRAM alone. After all it basically ends up just stealing bandwidth from Cell, and do most games really need more than 256MB of VRAM?
 
DonMigs85 said:
Hmm really? I wonder if it would have been wiser to just devote the full 256 bits to VRAM alone. After all it basically ends up just stealing bandwidth from Cell, and do most games really need more than 256MB of VRAM?

RSX has a 128-bit bus to GDDR3 VRAM and Cell has a 64-bit bus to XDR RAM, 192 bits of bus in total. The reason they used a 128-bit bus is that you can only fit about 2 pins per square millimetre of chip. A 256-bit bus plus the interconnect to the CPU and other connections would force the die to be over 150 mm^2, and the RSX in the PS3 Slim is already smaller than that. With a 128-bit bus they can cost-reduce the machine much further over time.
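For a sense of what each configuration buys you, bandwidth is just bus width times transfer rate. A quick back-of-the-envelope in C, using the figures usually quoted for the retail PS3 (treat the clocks as approximate):

Code:
#include <stdio.h>

/* GB/s = (bus width in bits / 8 bits per byte) * billions of transfers per second */
static double bw_gbs(int bus_bits, double gigatransfers_per_sec)
{
    return (bus_bits / 8.0) * gigatransfers_per_sec;
}

int main(void)
{
    /* Commonly quoted PS3 numbers: GDDR3 at ~700MHz double data rate
       (~1.4 GT/s), XDR at 3.2 GT/s - both approximate */
    printf("RSX -> GDDR3, 128-bit: %.1f GB/s\n", bw_gbs(128, 1.4)); /* ~22.4 */
    printf("Cell -> XDR,   64-bit: %.1f GB/s\n", bw_gbs(64, 3.2));  /* ~25.6 */
    printf("Hypothetical 256-bit GDDR3: %.1f GB/s\n", bw_gbs(256, 1.4));
    return 0;
}

So a 256-bit bus would roughly double the GDDR3 bandwidth, but you'd pay for it in pins and die area on every RSX ever manufactured.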
 

Truespeed

Member
brain_stew said:
Back in 2004? No, but I think a Fusion design from AMD certainly could these days, especially since you're now free to fab that chip wherever you want. The AMD of 2010 is completely different to the AMD of 2004.

No one's going to go with Intel though, I'll give you that.

Perhaps, but whatever AMD comes up with will likely be undercut by IBM.
 

Argyle

Member
DonMigs85 said:
Hmm really? I wonder if it would have been wiser to just devote the full 256 bits to VRAM alone. After all it basically ends up just stealing bandwidth from Cell, and do most games really need more than 256MB of VRAM?
If you thought the problems associated with dividing the RAM up into two pools were bad, imagine how bad it would be if the GPU had only very slow access to main RAM. As it is, the CPU has very slow access to video RAM and developers rely on the GPU to write things into video RAM.

IMHO the original vision for the PS3 was to have the two different pools of memory acting as a unified pool - the CPU and GPU would have fast access to either pool. But reality didn't work out that way (largely due to limitations in the RSX and its PC heritage) and so we got what we got (GPU has fast access to both, CPU only has fast access to main RAM).

You can still effectively use all 256 bits of the bus on the RSX by moving some textures into main memory and reading from those while writing to video memory. I think many games do that with their post process effects, having certain buffers in main memory means the SPEs can work on them, and you'll get more effective fillrate when using the GPU. Developers can't use main RAM for everything though as it is a scarce commodity since all your CPU code and data has to reside there...
 

Dennis

Banned
Does anybody have an idea how long before a new console launches the deals have to be settled? What kind of timeframe are we looking at? If no deals have been struck yet, would it imply that we are, say, at least a year from launch?
 

DonMigs85

Member
Argyle said:
If you thought the problems associated with dividing the RAM up into two pools was bad, imagine how bad it would be if the GPU had only very slow access to main RAM. As it is the CPU has very slow access to video RAM and developers rely on the GPU to write things into video RAM.

IMHO the original vision for the PS3 was to have the two different pools of memory acting as a unified pool - the CPU and GPU would have fast access to either pool. But reality didn't work out that way (largely due to limitations in the RSX and its PC heritage) and so we got what we got (GPU has fast access to both, CPU only has fast access to main RAM).

You can still effectively use all 256 bits of the bus on the RSX by moving some textures into main memory and reading from those while writing to video memory. I think many games do that with their post process effects, having certain buffers in main memory means the SPEs can work on them, and you'll get more effective fillrate when using the GPU. Developers can't use main RAM for everything though as it is a scarce commodity since all your CPU code and data has to reside there...
Thanks, this really clarifies a lot.
I guess that recent OS memory footprint reduction that gives devs an extra 70MB can come in really handy then eh?
 
DennisK4 said:
Does anybody have an idea how long before a new console launches the deals have to be settled? What kind of timeframe are we looking at? If no deals have been struck yet would it imply that we are say, at least a year from launch?

More like 3 years. I remember MS running X360 demos on Apple hardware back in 2003, when they said exactly what kind of a CPU design we could expect.
 

Argyle

Member
DonMigs85 said:
Thanks, this really clarifies a lot.
I guess that recent OS memory footprint reduction that gives devs an extra 70MB can come in really handy then eh?
Any reduction in OS main memory footprint is helpful of course...your definition of recent might be different than mine, I thought that happened years ago :)
 
Lagspike_exe said:
More like 3 years. I remember MS running X360 demos on Apple hardware back in 2003, when they said exactly what kind of a CPU design we could expect.

Aye. Sony's software guys are being kept much more in the loop this time with regard to the hardware. I wouldn't be surprised if the ICE team and SCEE's tech team in Cambridge are already working on the next-gen hardware (devkit software, scouting out software techniques, etc.).
 
Truespeed said:
Perhaps, but whatever AMD comes up with will likely be undercut by IBM.

If AMD flatly refuse to allow their GPU to be integrated with an IBM CPU, I don't see how they'll be able to do that.
 

Dennis

Banned
Lagspike_exe said:
More like 3 years. I remember MS running X360 demos on Apple hardware back in 2003, when they said exactly what kind of a CPU design we could expect.
Would that imply that the hardware specifications are set about 3 years before actual launch? That does not seem probable to me. Perhaps a rough outline, but I would be interested in how flexible (or not) this process is. And how soon before launch does the final hardware have to be decided?

Another thing, are Sony and MS waiting for Nintendo to make the first move or do they only care about each other?
 
DennisK4 said:
Would that imply that the hardware specifications are set about 3 years before actual launch? That does not seem probable to me. Perhaps a rough outline but I would be interested in how flexible (or not) this process is. And how soon before launch the final hardware have to be decided.

Another thing, are Sony and MS waiting for Nintendo to make the first move or do they only care about each other?

Final specifications are set around a year before launch. We're talking about making deals here. You need to sign them at least 3 years in advance if you want the other side to develop a custom solution for your console. More than 2 years before the X360 launch, it was known that MS would use an IBM/ATI combo:
http://www.gamespot.com/news/6078054.html

Nvidia's RSX doesn't fall into this category, as it is not a custom solution, but rather a modified PC GPU that was mounted on top of Sony's existing PS3 architecture.
 
DennisK4 said:
Would that imply that the hardware specifications are set about 3 years before actual launch? That does not seem probable to me. Perhaps a rough outline but I would be interested in how flexible (or not) this process is. And how soon before launch the final hardware have to be decided.

Another thing, are Sony and MS waiting for Nintendo to make the first move or do they only care about each other?

Xbox 360's specification was changed just a few months before launch (RAM doubled and Xenon clockspeeds reduced).
 

McHuj

Member
DennisK4 said:
Would that imply that the hardware specifications are set about 3 years before actual launch? That does not seem probable to me. Perhaps a rough outline but I would be interested in how flexible (or not) this process is. And how soon before launch the final hardware have to be decided.

Another thing, are Sony and MS waiting for Nintendo to make the first move or do they only care about each other?

I would guess that it's probably close to 18 months to 2 years for an architecture spec freeze before launch.

Probably 6-9 months to develop the processor (assuming some sort of existing starting base), then spec freeze, another 6 months or so of exhaustive testing, 4-6 weeks for initial manufacture, 3 or so months for validation of the chip when it comes back, and if there are no major bug fixes needed, it's probably ready for mass manufacture. (At least this has been my experience.)

There can always be tweaks in a processor design up until tapeout, but the closer you get to manufacture, the smaller those tweaks become.

Some stuff like clock rate, memory size, storage, etc could probably fluctuate up to half a year before launch.

If MS and Sony are targeting a late 2012 release, now is probably the right time to start the development processes going.
 

Mr_Brit

Banned
DennisK4 said:
Would that imply that the hardware specifications are set about 3 years before actual launch? That does not seem probable to me. Perhaps a rough outline but I would be interested in how flexible (or not) this process is. And how soon before launch the final hardware have to be decided.

Another thing, are Sony and MS waiting for Nintendo to make the first move or do they only care about each other?
Considering that the PS3's specs changed drastically less than a year before launch, I'd say that is a huge overestimate. I'd say more like 8-12 months, with a definite final spec at around 4 months before production (not launch) starts.
 
Pardon my ignorance, but why is it notable MS isn't negotiating with IBM? Does IBM produce almost all CPU/GPU architectures for consoles? The Xbox and Nintendo use ATI (AMD), correct?
 
I think Sony is not going to do any more R&D after wasting 2 billion on PS3 development. I think that if the console manufacturers were aiming for a 10-year cycle without new consoles overlapping it, they should have at least included 1GB of RAM in the console.
 

DonMigs85

Member
Unregistered007 said:
I think SONY is not going to do any more R & D after wasting 2 Billion on PS3 Development. I think that if the Console Manufacturers were aiming for 10 year cycle without new consoles overlapping the cycle they should have at-least included 1 gig rams in the console.
Could've been worse. Just be glad Epic came to the rescue with their tech demo, otherwise the 360 would've only had 256MB total RAM.
 

mrklaw

MrArseFace
brain_stew said:
I'm not expecting discrete CPU and GPU chips next generation, it'll only make some of the potential workloads infinitely more difficult and drive up costs. If we do see two chips then they'll probably be two separate CPU/GPU hybrids, not two specific parts each with a tightly defined function.

Not sure. They might be forced to go with twin chips if the power they want to settle at is too much for an integrated solution. But perhaps they'd roadmap for it to migrate to a single chip over time as they naturally engineer cost out of it.


Gofreak, your comments about GPGPUs are completely valid, but if Sony do go that route, doesn't that dilute the value of CELL in the first place?


What's the expectation regarding GPU performance from AMD/Nvidia? Are AMD leading the way on power consumption/heat, which could be important for a home console, or are they both fairly even now? Any thoughts as to where the line will be drawn? Take a current top-end GPU assuming it'll be mid-range by the time of launch, or something less powerful to save costs (basically, is it likely the GPU being used is already in consumer PCs as of now)?
 
PopcornMegaphone said:
Pardon my ignorance, but why is it notable MS isn't negotiating with IBM? Does IBM produce almost all CPU/GPU architectures for consoles? The Xbox and Nintendo uses ATI (AMD), correct?

The Xbox 360 and Nintendo Wii did use ATI GPUs, but they also used IBM CPUs.
 

GCX

Member
PopcornMegaphone said:
Pardon my ignorance, but why is it notable MS isn't negotiating with IBM? Does IBM produce almost all CPU/GPU architectures for consoles? The Xbox and Nintendo uses ATI (AMD), correct?
IBM produced the Gekko and Broadway CPUs for GC and Wii.
 

mrklaw

MrArseFace
Unregistered007 said:
I think SONY is not going to do any more R & D after wasting 2 Billion on PS3 Development. I think that if the Console Manufacturers were aiming for 10 year cycle without new consoles overlapping the cycle they should have at-least included 1 gig rams in the console.

Potentially, a lot of that R&D will pay off with PS4. Blu-ray will be dirt cheap to include and the storage will be useful (MS will hit that issue with the 720 - do they license Blu-ray, resurrect HD DVD or go with a weird custom solution?), and CELL is starting to show its strength

and the '10 year' thing is just PR bullshit they always spout to persuade you to buy into the console midway through a generation. 5-6 years of active life before becoming the 'second tier' console is pretty good.
 
Thanks for the replies.

DonMigs85 said:
But at the time, the AMD/ATI merger hadn't gone through yet.

Yeah, that makes sense. It seems to me AMD offers "one-stop shopping" and perhaps a more integrated/cost-effective solution. Isn't there more work involved in getting two different CPU/GPU manufacturers working together effectively?

But yeah, I fully admit I'm way out of my depth here.
 

Mr_Brit

Banned
mrklaw said:
Potentially, a lot of that R&D will pay off with PS4. Bluray will be dirt cheap to include and the storage will be useful (MS will hit that issue with 720 - do they license bluray, resurrect HDDVD or go with a wierd custom solution?), and CELL is starting to show its strength

and the '10 year' thing is just PR bullshit they always spout to persuade you to buy into the console midway through a generation. 5-6 years of active life before becoming the 'second tier' console is pretty good.
Why wouldn't MS use Blu-ray? There isn't any other viable optical disc format out there. There's an almost guaranteed chance that both Sony and MS will use the new Blu-ray format that allows up to 200GB discs. It will be cheap to manufacture and the discs will be very cheap as well, it still lets you use normal 25 and 50GB discs if you don't need that much space, it's BC with PS3 (for Sony, naturally) and backwards compatible with DVDs and CDs (for MS and 360 BC, naturally), and the lasers will most likely be upgraded to read at faster than normal speeds, so one of these drives reading from a 6x disc should be able to easily load enough data to fill the 4+GB of RAM these next consoles will have.
 

NinjaFridge

Unconfirmed Member
Galvanise_ said:
Aye. Sony's software guys are being much more kept in the loop this time with regards to the hardware. I wouldn't be surprised if the ICE team and SCEE's tech team in Cambridge are already working on the next gen hardware (devkit software, scouting out software techniques etc).

God, I hope it isn't SCEJ doing the firmware for PS4.
 
Speaking of Cell, does anyone know when we can expect a 32nm revision? I thought it was on schedule for this Fall, but haven't heard anything about it.
 
PopcornMegaphone said:
Isn't there more work involved getting two different CPU/GPU manufacturers working together effectively?

Obviously, but Microsoft managed it with Valhalla, so it's not impossible either.
 

No_Style

Member
Truespeed said:
Fortunately, one thing Intel or AMD couldn't emulate is price - which is why they all went running to IBM in the first place.

That's not necessarily true. The console hardware manufacturers (especially MS) learned that going to Intel/AMD was not the right choice because those big chip manufacturers didn't allow them to do whatever they wanted with the designs.

Intel/AMD were selling chips, IBM were selling designs.

The console manufacturers want designs because they can then integrate them with the GPU and cut costs down the road.

With the AMD/ATI merger, it is possible that any one of these console manufacturers could go with AMD for all their design needs. IBM is still very much in the design game, which is why everyone goes to them; at least one of the console manufacturers will use them. Intel? Not so much.

The console design process is really fascinating and anyone who's interested in this sort of thing should read Xbox 360 Uncloaked. Lots of great info and insight into the process.
 

Dennis

Banned
mrklaw said:
and the '10 year' thing is just PR bullshit they always spout to persuade you to buy into the console midway through a generation. 5-6 years of active life before becoming the 'second tier' console is pretty good.
This gen is 5 years old and has no end in sight (*sigh*). It's not going to be 5 more years before a new console, of course, but the business seems to be changing. The current way seems not to be profitable, and I wonder if the next-gen consoles will simply be minor upgrades, mainly to enable 3D and motion controls. I am becoming increasingly sceptical that we will see the massive tech improvements we have been used to.

Sony and MS must focus on not losing so much damn money on every unit sold.

If they think the hardware must be a very significant upgrade, it makes sense to wait 7-8 years until what represents a big upgrade has become quite cheap.
 

JardeL

Member
brain_stew said:
Oh, and fyi, the highest quality current implementation of motion blur is actually on the PS3, not the PC, and no, its not running on RSX! :lol
Do you have a source for this? I'd like to read a comparison article.

Also, if you have a link to an article about MLAA [on CELL] vs. PC AA, I'd like to know that too.
 
NinjaFridge said:
God, I hope it isn't SCEJ doing the firmware for PS4.

I've wanted the team based in Cambridge to handle it for years. I would like to think that the PS4 will launch with everything we miss at present, plus some nice added features.
 

gofreak

GAF's Bob Woodward
JardeL said:
Do you have a source for this? I'd like to read a comparison article.

I'm not sure if it's the example brain_stew is thinking of, but The Force Unleashed 2 does 16-sample motion blur on Cell @ 1.4ms vs 5-to-11 sample motion blur on Xenos @ 2.2ms. I'm a wee bit surprised if PC-land implementations don't match or better this kind of thing though.

I know KZ2 did motion and DoF blur on Cell too - their DoF implementation on Cell had over 4 times the samples as their original RSX one (36 vs 8), with better quality sampling too IIRC.

It makes it all the more unfortunate when you hear about multi-plat games on PS3 with lower quality (RSX-based) motion blur or whatever. There are sample implementations in the Edge toolkit now AFAIK, so you'd think there'd be little excuse...
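For anyone wondering what 'N samples' actually means in cost terms, here's a rough per-pixel sketch of the usual velocity-buffer motion blur (plain C, made-up buffer layout - not the TFU2 or KZ2 code). The inner loop runs once per sample, so 16 taps vs 5-11 taps is a straightforward quality-for-time trade:

Code:
/* One colour tap per sample along the pixel's screen-space velocity.
   Cost scales linearly with 'samples'; so does the smoothness of the trail.
   Assumes samples >= 2. */
typedef struct { float r, g, b; } rgb;

rgb motion_blur_pixel(const rgb *colour, const float *vel_x, const float *vel_y,
                      int w, int h, int x, int y, int samples)
{
    rgb acc = {0.0f, 0.0f, 0.0f};
    float vx = vel_x[y * w + x];   /* velocity buffer, in pixels per frame */
    float vy = vel_y[y * w + x];

    for (int s = 0; s < samples; s++) {
        /* step along the velocity vector, centred on the current pixel */
        float t = (float)s / (float)(samples - 1) - 0.5f;
        int sx = x + (int)(vx * t);
        int sy = y + (int)(vy * t);

        /* clamp to the frame so we never read outside the buffers */
        if (sx < 0) sx = 0; else if (sx >= w) sx = w - 1;
        if (sy < 0) sy = 0; else if (sy >= h) sy = h - 1;

        rgb c = colour[sy * w + sx];
        acc.r += c.r; acc.g += c.g; acc.b += c.b;
    }
    acc.r /= samples; acc.g /= samples; acc.b /= samples;
    return acc;
}

On SPUs you'd run something like that over tiles DMA'd into local store; the point is just that every extra sample is another gather and accumulate per pixel.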
 