
Guerrilla Dev: Cell CPU is more powerful than modern desktop chips, even the fastest Intel ones

Bernkastel

Ask me about my fanboy energy!

In the PS3 section
VAN DER LEEUW: Even desktop chips nowadays, the fastest Intel stuff you can buy is not by far as powerful as the Cell CPU, but it’s very difficult to get power out of the Cell. I think it was ahead of its age, because it was a little bit more like how GPUs work nowadays, but it was maybe not balanced nicely and it was too hard to use. It overshot a little bit in power and undershot in usability, but it was definitely visionary.
 

EverydayBeast

thinks Halo Infinite is a new graphical benchmark
I thought it made sense for Sony to go with the Cell during the PS3 era; they come into generations with new tech all the time (CD, DVD, etc.). It was only considered a failure until developers figured it out in the later part of the PS3's life.
 

nowhat

Member
I read the article (a good read BTW), and I think you're oversimplifying what is being said here. I'd rephrase it as "more powerful than modern desktop chips on certain tasks, and a bitch to program for". I can understand why developers prefer x64.
 

Aion002

Member
So it could run games at 4K and at 60fps? This guy just wants to make headlines since the big boss left for Sony.
Really?

I would never have thought that... A dude who works at a company wants to promote the company's product.
 
Never understood why console manufacturers haven't pursued PowerPC CPUs, which have always been better (and used in professional devices, and Macs) than the x86/x64 architecture, which is too old and full of over-patched instructions.
Yeah, APUs are a great idea, but are they worth it?
 

PhoenixTank

Member
Never understood why console manufacturers haven't pursued PowerPC CPUs, which have always been better (and used in professional devices, and Macs) than the x86/x64 architecture, which is too old and full of over-patched instructions.
Yeah, APUs are a great idea, but are they worth it?
I don't understand this post. PPC had its day, but it doesn't cut it anymore. Macs use x86 like normal PCs and have for quite some time now.
 

ethomaz

Banned
Well, it can still do close to double the FLOPS of any modern multi-core CPU.
But it has really specific uses... for general-purpose computing, current CPUs are infinitely better.

Cell on PS3 with one core disabled can do 230 GFLOPS.
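For anyone wondering where numbers like that come from, here's a rough back-of-the-envelope sketch (my own arithmetic, not from the interview), assuming the commonly cited 3.2 GHz clock and 4-wide single-precision fused multiply-add per core. The ~230 figure counts all 8 SPEs plus the PPE; with one SPE fused off it works out closer to ~205:

```c
#include <stdio.h>

/* Rough peak single-precision FLOPS estimate for Cell, using the
 * commonly cited figures: 3.2 GHz clock, 128-bit (4 x float) SIMD
 * with fused multiply-add (2 flops per lane per cycle) on each SPE
 * and on the PPE's VMX unit. Treat all of this as approximate. */
int main(void) {
    const double clock_ghz = 3.2;
    const double lanes     = 4.0;  /* 4 floats per 128-bit vector */
    const double fma       = 2.0;  /* multiply + add per lane     */

    double per_core = clock_ghz * lanes * fma;  /* ~25.6 GFLOPS         */
    double full     = 8 * per_core + per_core;  /* 8 SPEs + PPE, ~230.4 */
    double ps3      = 7 * per_core + per_core;  /* 1 SPE disabled, ~204.8 */

    printf("per core : %.1f GFLOPS\n", per_core);
    printf("full Cell: %.1f GFLOPS\n", full);
    printf("PS3 Cell : %.1f GFLOPS\n", ps3);
    return 0;
}
```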

Never understood why console manufacturers haven't pursued PowerPC CPUs, which have always been better (and used in professional devices, and Macs) than the x86/x64 architecture, which is too old and full of over-patched instructions.
Yeah, APUs are a great idea, but are they worth it?
Probably because IBM changed focus... they stopped researching PPC.
 
I don't understand this post. PPC had its day, but it doesn't cut it anymore. Macs use x86 like normal PCs and have for quite some time now.
PPC had its day, but WHY doesn't it cut it anymore IF its performance is so much better than x86? Power consumption is one reason... but why has nobody looked for a solution?
Macs switched to x86 recently (10 years ago or less), but they were born with the PPC architecture and were always superior to their x86 counterparts of the time (plus more optimized software in general).
 
AFAIK Cell was ~250 GFLOPS... which is really good for a CPU. The problem is that it is very wonky and you have to code stuff specifically to run well on it. And not everything you try to run on it will run well. So there's that. It's like a Ferrari with a really shitty gearbox and steering wheel.
 
Well, it can still do close to double the FLOPS of any modern multi-core CPU.
But it has really specific uses... for general-purpose computing, current CPUs are infinitely better.

I think (some) octa core CPUs are past it now, but for the year and the die area (and power) Cell could certainly pack in the matrix multiplications and then some. As I understand it, this, combined with the predictability of data accesses of many graphics related workloads, made it an ideal CPU (probably the best of its day) to help RSX out with its numerous shortcomings.

The areas where Cell excelled seem to have largely moved in the direction of GPGPU and compute shaders.

Definitely a bold and forward looking move creating Cell though, even if evolution ultimately favoured a different path.

Would be awesome if PS5's solution to ray tracing and backwards compatibility was to get AMD to integrate some Cell BEs into the APU. :messenger_grinning_sweat: (Joke!)

PPC had its day, but WHY doesn't it cut it anymore IF its performance is so much better than x86? Power consumption is one reason... but why has nobody looked for a solution?
Macs switched to x86 recently (10 years ago or less), but they were born with the PPC architecture and were always superior to their x86 counterparts of the time (plus more optimized software in general).

I think Apple ditched PowerPC about 13 years ago. IBM couldn't keep up with Intel in terms of desktop performance while keeping within a reasonable power budget. Last I read IBM were going the route of huge chips and stupid amounts of threads. Probably good for some scientific stuff, but not a great fit for games consoles or desktop computers.
 

Alexios

Cores, shaders and BIOS oh my!
Meh, if the end-result experience was better on PCs of that era (which it was), never mind this era (lol), I don't see how that claim holds up outside of theories nobody, not even first-party studios, applied in practice (even the 360's traditional, far-from-top-of-the-line architecture kept up decently well).
 
Never understood why console manufacturers haven't pursued PowerPC CPUs, which have always been better (and used in professional devices, and Macs) than the x86/x64 architecture, which is too old and full of over-patched instructions.
Yeah, APUs are a great idea, but are they worth it?
Well.. They have? Many times?


Or did you just mean "not anymore"?
 
If the PS3 had been as successful as the PS2, people would probably be praising Cell as the greatest thing of its era, because developers would have had to use it in the best way possible. But since the PS3 remained the 3rd-place console for years, there was no point for 3rd parties to program for it when they had an easy target in the X360, and it failed at typical CPU tasks.
 

JohnnyFootball

GerAlt-Right. Ciriously.
If the PS3 had been as successful as the PS2, people would probably be praising Cell as the greatest thing of its era, because developers would have had to use it in the best way possible. But since the PS3 remained the 3rd-place console for years, there was no point for 3rd parties to program for it when they had an easy target in the X360, and it failed at typical CPU tasks.
Eventually the PS3 became the lead platform in development as it was easier to port from PS3 to 360 than vice versa.
 

Sophist

Member
Cell had a theoretical peak of 250 GFLOPS, but in practice it was more like ~200 GFLOPS on matrix multiplication (about the same as a Ryzen 3700X) and ~45 GFLOPS on fast Fourier transform. Still much better than any CPU at the time, true, but CPUs are not intended for this kind of computation. The GeForce 8800 GTX, released a few days before the PlayStation 3, had a theoretical peak of... 450 GFLOPS!
 

scalman

Member
Always wondering what it could do with a modern GPU. It is a beast. Still sits near me now... silently, for now.
 

McCheese

Member
I thought the Dreamcast was also based on a PowerPC CPU.

SEGA were going through a weird time where the US and JPN branches were competing with one another. Both branches had early Dreamcast prototypes: the US branch was going for a 3dfx / PowerPC combination that was killed off by Sega Japan, which went with its own PowerVR2 / RISC combination.

As with most business decisions in Japan, it came down to who they were golfing with during that period.
 

Romulus

Member
Of course it is, that's why most top-tier AAA linear PS3 games ran at sub-30fps 720p, even after years of experience on the hardware from the best devs in the world.

But maybe he's right.


I hate these sorts of comments because they're impossible to disprove, but usually bullshit. It's like the guy claiming he had "potential" to be a top-tier athlete but never got a fair shot. Lol. It's just exploiting the unknown to the fullest.
 

PhoenixTank

Member
PPC had its day, but WHY doesn't it cut it anymore IF its performance is so much better than x86? Power consumption is one reason... but why has nobody looked for a solution?
Macs switched to x86 recently (10 years ago or less), but they were born with the PPC architecture and were always superior to their x86 counterparts of the time (plus more optimized software in general).
2005 (or Jan 2006 for products) was the start of Apple's x86 transition. A decade is an eternity given the way the tech industry moves, and we're way beyond that.
I can't give you a single good answer as to why, but it seems to be a combination of reasons: starved research, failed inroads against Wintel, and the difficulty of turning the theoretical performance advantage into general-purpose computing gains.
 
I very much doubt that it's faster than modern Ryzens and i9s.
But even if it is, it doesn't matter, because the architecture is extremely complicated and x86 is so much more widespread.
 
Now ask a Hellpoint Dev

And let them fight

 
Cell had a theoretical peak of 250 GFLOPS, but in practice it was more like ~200 GFLOPS on matrix multiplication (about the same as a Ryzen 3700X) and ~45 GFLOPS on fast Fourier transform. Still much better than any CPU at the time, true, but CPUs are not intended for this kind of computation. The GeForce 8800 GTX, released a few days before the PlayStation 3, had a theoretical peak of... 450 GFLOPS!

Looking at it, these numbers seem misleading.
The 400 GFLOPS figure is possible, but only at 8-bit precision. At 32-bit it's 25 GFLOPS, at 64-bit it's 20 GFLOPS. So in practice the Cell was just better than the PC CPUs of that time, but nothing that revolutionary.
 

Trimesh

Banned
PPC had its day, but WHY doesn't it cut it anymore IF its performance is so much better than x86? Power consumption is one reason... but why has nobody looked for a solution?
Macs switched to x86 recently (10 years ago or less), but they were born with the PPC architecture and were always superior to their x86 counterparts of the time (plus more optimized software in general).

Lots of reasons - the most basic one is that x86 had the sales volume to justify spending large amounts of money on R&D and PowerPC didn't. On top of this, as x86 developed and the ISA was extended, most of the architectural deficits of the original design were mitigated - sure, PPC still has a much more orthogonal architecture than even x86-64, but the wide use of x86 CPUs has resulted in large amounts of development effort being put into compilers that can generate excellent code for x86.

In a similar way, PPC had some very nice vector extensions (VMX/Altivec), which were much better than the x86 equivalent (SSE) - but since PPC development stalled and x86 development continued, the latest iteration of x86 SIMD (AVX) is superior to Altivec.
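To make that concrete, here's a minimal sketch (mine, not from the post above) of the bread-and-butter multiply-add that Altivec's vec_madd handles 4 floats at a time, written with AVX + FMA intrinsics so it chews through 8 floats per register. Assumes n is a multiple of 8; build with something like gcc -O2 -march=native on a CPU with AVX2/FMA:

```c
#include <stdio.h>
#include <immintrin.h>

/* a[i] = a[i] * b[i] + c[i], 8 floats per iteration with AVX + FMA.
 * Altivec's vec_madd did the same job on PPC, 4 floats at a time.
 * n is assumed to be a multiple of 8 to keep the sketch short. */
static void madd_avx(float *a, const float *b, const float *c, int n) {
    for (int i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        __m256 vc = _mm256_loadu_ps(c + i);
        _mm256_storeu_ps(a + i, _mm256_fmadd_ps(va, vb, vc));
    }
}

int main(void) {
    float a[8], b[8], c[8];
    for (int i = 0; i < 8; i++) { a[i] = (float)i; b[i] = 2.0f; c[i] = 1.0f; }
    madd_avx(a, b, c, 8);               /* a[i] becomes i*2 + 1 */
    printf("%.1f %.1f\n", a[0], a[7]);  /* prints 1.0 15.0      */
    return 0;
}
```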
 
Cell is very good at the kinds of computations which GPUs are good at. This is because the original design of the PS3 didn't have a dedicated GPU - Cell was supposed to do both CPU and GPU functions.

That didn't work out at all, which is why the final shipping PS3 had an Nvidia GPU bolted on. But Cell is still pretty amazing for what it was designed to do, and while modern GPUs have since surpassed Cell, it's still a very unique design in its own right.
 
Having worked in the industry during the PS3 years, I firmly believe that if we had never had that, the aggregate engineering discipline wouldn't be anywhere near what it is today.

Cell forced a generation of console developers to learn to code to the metal (for those that hadn't learned it previously), to imbibe a powerful understanding of how game engine architecture could radically exploit execution parallelism and asynchronous logic flows, and how to make the most of SIMD concurrency.

So many of these lessons have led the way in informing the performance code quality that's pervasive in the games we play today.

We all owe a lot to our pal Ken Kutaragi...
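For anyone curious what that discipline looks like in code, here's a generic sketch of my own (not from any shipped engine, and the particle data is made up): keep fields in structure-of-arrays form so the SIMD lanes stay full, and write kernels over independent ranges so any slice can be handed to a job system and run in parallel.

```c
#include <stdio.h>
#include <stdlib.h>
#include <xmmintrin.h>

/* Structure-of-arrays layout: each field in its own contiguous array,
 * so one 128-bit load grabs 4 useful values and any [begin, end) slice
 * is an independent job with no shared state. Hypothetical data. */
typedef struct {
    float *x, *y;    /* positions  */
    float *vx, *vy;  /* velocities */
    int    count;    /* multiple of 4 in this sketch */
} Particles;

/* Integrate one slice; separate slices can run on separate worker threads. */
static void integrate_job(Particles *p, int begin, int end, float dt) {
    __m128 vdt = _mm_set1_ps(dt);
    for (int i = begin; i < end; i += 4) {
        __m128 x  = _mm_loadu_ps(p->x  + i);
        __m128 y  = _mm_loadu_ps(p->y  + i);
        __m128 vx = _mm_loadu_ps(p->vx + i);
        __m128 vy = _mm_loadu_ps(p->vy + i);
        _mm_storeu_ps(p->x + i, _mm_add_ps(x, _mm_mul_ps(vx, vdt)));
        _mm_storeu_ps(p->y + i, _mm_add_ps(y, _mm_mul_ps(vy, vdt)));
    }
}

int main(void) {
    enum { COUNT = 1024 };
    Particles p = { malloc(COUNT * sizeof(float)), malloc(COUNT * sizeof(float)),
                    malloc(COUNT * sizeof(float)), malloc(COUNT * sizeof(float)),
                    COUNT };
    for (int i = 0; i < COUNT; i++) {
        p.x[i] = p.y[i] = 0.0f;
        p.vx[i] = 1.0f;
        p.vy[i] = 2.0f;
    }
    /* Two "jobs" over independent halves; a real engine would hand these
     * to worker threads instead of calling them back to back. */
    integrate_job(&p, 0, COUNT / 2, 1.0f / 60.0f);
    integrate_job(&p, COUNT / 2, COUNT, 1.0f / 60.0f);
    printf("x[0]=%f y[0]=%f\n", p.x[0], p.y[0]);
    free(p.x); free(p.y); free(p.vx); free(p.vy);
    return 0;
}
```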
 

Kagey K

Banned
Having worked in the industry during the PS3 years, I firmly believe that if we had never had that, the aggregate engineering discipline wouldn't be anywhere near what it is today.

Cell forced a generation of console developers to learn to code to the metal (for those that hadn't learned it previously), to imbibe a powerful understanding of how game engine architecture could radically exploit execution parallelism and asynchronous logic flows, and how to make the most of SIMD concurrency.

So many of these lessons have led the way in informing the performance code quality that's pervasive in the games we play today.

We all owe a lot to our pal Ken Kutaragi...
I would say so did the size limits on XBLA games. Lots of devs said they had a hard time fitting the file size restrictions. They had to learn to code and recompile games that last gen they made to fit in a gig. This gen they just toss in bloated code and we get to download it all. When indie games are over 10 gigs you know the code isn't optimal.
 

Knightime_X

Member
Doesn't matter how strong something is if you can barely get anything out of it.
It's like bragging about the power of the sun to some nuclear reactor site but the best you can do is sunshine. lol
 

Hudo

Member
I think it rather depends on what you want to do. The Cell was apparently a master at floating point stuff, which a "regular" x64-based CPU can't compete with, as the latter has/had a focus on providing a solid base for more use cases. For example, providing more integer units, if I recall correctly, because outside of graphics and ML stuff, there's still a lot of integer math happening. That's why GPUs became a thing, because general x86 architecture just wasn't suitable for what graphics people wanted to do. And that's also why some companies throw in specialized FPGAs/ASICs on their hardware for specific tasks or "AI/ML Cores" for neural network stuff, because ML stuff deals with tensor math.

So this dev's claim is probably true only for a specific use case.
 

Sophist

Member
Looking at it, these numbers seem misleading.
The 400 GFLOPS figure is possible, but only at 8-bit precision. At 32-bit it's 25 GFLOPS, at 64-bit it's 20 GFLOPS. So in practice the Cell was just better than the PC CPUs of that time, but nothing that revolutionary.


The performance they were able to achieve (in GFLOPS):

Precision | Matrix multiplication | FFT
Single    | 204.7                 | 41.8
Double    | 14.6                  | 6.75
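For context on how numbers like these are usually produced (this is my own generic sketch, not the methodology behind those specific figures): count 2·N³ floating-point operations for an N×N matrix multiply and divide by the measured wall-clock time. Real benchmarks use heavily tuned kernels; a naive loop just shows the bookkeeping:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Naive N x N single-precision matrix multiply, timed, reporting
 * GFLOPS as 2*N^3 / seconds. Only meant to show how such figures
 * are derived; real benchmarks use tuned BLAS-style kernels. */
#define N 512

static float A[N][N], B[N][N], C[N][N];

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            A[i][j] = (float)rand() / RAND_MAX;
            B[i][j] = (float)rand() / RAND_MAX;
        }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++)
        for (int k = 0; k < N; k++)
            for (int j = 0; j < N; j++)
                C[i][j] += A[i][k] * B[k][j];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.2f GFLOPS\n", 2.0 * N * N * N / secs / 1e9);
    return 0;
}
```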
 
Cell is very good at the kinds of computations which GPUs are good at. This is because the original design of the PS3 didn't have a dedicated GPU - Cell was supposed to do both CPU and GPU functions.

That didn't work out at all, which is why the final shipping PS3 had an Nvidia GPU bolted on. But Cell is still pretty amazing for what it was designed to do, and while modern GPUs have since surpassed Cell, it's still a very unique design in its own right.

As far as I know, the original plan was to ship the PS3 with a Cell CPU and a Toshiba GPU. The latter failed to materialize and NVidia supplied a modified GPU instead. Correct me if I'm wrong.
 

nemiroff

Gold Member
So it was SO much more powerful, yet it wasn't.. Well, it's not useful, but I guess it's something, whatever it is..
 
So it was SO much more powerful, yet it wasn't.. Well, it's not useful, but I guess it's something, whatever it is..
I think what you're saying is: if something is underutilized, is it really more powerful? Does it matter?

I do remember the Air Force put together an array of over a thousand PS3s to do some satellite imagery tasks. There are lots of writeups about it.
 

Trimesh

Banned
As far as I remember, the original plan was to render all the GPU stuff on the Cell.

That's my understanding too - the initial design was just going to have a frame buffer and the Cell would render directly to it. That was the approach Toshiba used on their Cell-based streaming multimedia platform.
 

Ar¢tos

Member
I would say so did the size limits on XBLA games. Lots of devs said they had a hard time fitting the file size restrictions. They had to learn to code and recompile games that last gen they made to fit in a gig. This gen they just toss in bloated code and we get to download it all. When indie games are over 10 gigs you know the code isn't optimal.
I recently downloaded Seasons After Fall and it's 9GB!
A small indie game with only 4 areas, not much speech and very basic graphics.
There are indie games with a ton more content, a lot more audio and better graphics that are only 5GB or less.
There must be duplicate data all over the place in that game and, surprisingly, it still has plenty of load screens every time you move between areas.
 