
PS3 Cell made Sony's first party what they are today.

PaintTinJr

Member
Stop misrepresenting what I say.
My point was that the SPE is a precursor to the modern CU. Sony even had the intention of having just the Cell to render graphics, before realizing the issues with yields.
Had Sony managed to get decent yields on the Cell, those SPEs would have been very similar in concept to the CUs we have today in a console SoC.
Your point that Cell led to Alder Lake is complete nonsense. Cell was a dead end that no one has copied since.
And just because Alder Lake has one similarity in a very broad technical sense, it does not mean it's related to Cell.
And this seems to distil our different views, because even now - as has been the case with needing a special CU for the Tempest engine in the PS5, or special co-processors in the IO complex - the SPUs can subsume all those algorithms and more with custom software solutions. It was the ideal type of processor to send on a long space mission, where what you needed 5 years after launch from a CU/ASIC of the time wasn't right anymore, and you needed DSP/GPU-level performance with CPU versatility, at the expense of leaning on the human programming ingenuity gained in those flight years. So being a precursor to something less versatile and less general purpose isn't how I would describe the SPUs, even if I see what you are driving at.
The page you posted has only one mention of hUMA and it refers to AMD's Heterogeneous Unified Memory Access. You continue to use the wrong term for Heterogeneous Computing.
I'm not arguing that I can locate the resources from nearly 20 years ago to prove I'm right (the switch from http to https makes old info sparse), and clearly you don't want to accept that the Cell BE's EIB and FlexIO were designed for hUMA operation, so we reach an impasse. I could try to point you to it being a ring bus with a token access mechanism, and to why such an uncommon communication topology for the processor fits the Unified Memory Access paradigm like a glove, but it seems that unless I can use a time machine to get the pages I've read/seen to show you, you aren't interested.

I had hoped that me enlightening you on issues like the Cell BE's (Roadrunner) power efficiency and the nature of SPUs being autonomous after being kicked off, and SPUs - split across different Cell BE processors like they are in the Sony ZEGO, Fixstars boards, etc - being able to access unified XDR with equal priority via the EIB ring bus would have earned me a bit of goodwill from you, enough to trust that what I was recounting from the time was correct... but sadly not, it seems.
 

winjer

Gold Member
And this seems to distil our different views, because even now - as has been the case with needing a special CU for the Tempest engine in the PS5, or special co-processors in the IO complex - the SPUs can subsume all those algorithms and more with custom software solutions. It was the ideal type of processor to send on a long space mission, where what you needed 5 years after launch from a CU/ASIC of the time wasn't right anymore, and you needed DSP/GPU-level performance with CPU versatility, at the expense of leaning on the human programming ingenuity gained in those flight years. So being a precursor to something less versatile and less general purpose isn't how I would describe the SPUs, even if I see what you are driving at.

The Cell arch made a tradeoff. By removing things like the branch predictor from its SPEs and by using a simple in-order arch, it saved a lot of die space to spend on more computational power.
But this put the onus of optimization squarely on the developers' side. Yes, there were a few examples of impressive use of the Cell CPU. But most devs had no time, budget, or technical know-how.
The Cell is great at parallel workloads with few dependencies. But because it lacks OoO execution and only the PPE has hardware branch prediction, it became very difficult to keep the execution pipeline full, especially when running code with many dependencies and branches, as games have.
SMT probably helped assign work to unused execution slots at any given point. And a good compiler also helps. But these things only go so far in games.
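To illustrate what that optimisation burden looked like in practice, here's a minimal, hypothetical C sketch (plain C, not actual SPU intrinsics) of the branch-elimination style the SPEs pushed developers towards - turning control flow into data flow so an in-order core with no branch predictor never has to guess:

#include <stddef.h>

/* Branchy version: fine on an OoO CPU with a predictor, painful on an
 * SPE-style in-order core where every unhinted branch risks a stall. */
void clamp_branchy(float *v, size_t n, float lo, float hi)
{
    for (size_t i = 0; i < n; ++i) {
        if (v[i] < lo)      v[i] = lo;
        else if (v[i] > hi) v[i] = hi;
    }
}

/* Branch-free version: the comparisons become selects, which optimising
 * compilers typically lower to compare+select / min-max instructions
 * rather than branches, keeping the pipeline full. */
void clamp_select(float *v, size_t n, float lo, float hi)
{
    for (size_t i = 0; i < n; ++i) {
        float x = v[i];
        x = (x < lo) ? lo : x;
        x = (x > hi) ? hi : x;
        v[i] = x;
    }
}
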
I remember when Cell dominated the Folding@Home rankings, due to its code being very parallel with few dependencies.
But in games it never pulled away that far from the competition.

I'm not arguing that I can locate the resources from nearly 20 years ago to prove I'm right (the switch from http to https makes old info sparse), and clearly you don't want to accept that the Cell BE's EIB and FlexIO were designed for hUMA operation, so we reach an impasse. I could try to point you to it being a ring bus with a token access mechanism, and to why such an uncommon communication topology for the processor fits the Unified Memory Access paradigm like a glove, but it seems that unless I can use a time machine to get the pages I've read/seen to show you, you aren't interested.

I had hoped that me enlightening you on issues like the Cell BE's (Roadrunner) power efficiency and the nature of SPUs being autonomous after being kicked off, and SPUs - split across different Cell BE processors like they are in the Sony ZEGO, Fixstars boards, etc - being able to access unified XDR with equal priority via the EIB ring bus would have earned me a bit of goodwill from you, enough to trust that what I was recounting from the time was correct... but sadly not, it seems.

The issue I'm pointing at is that hUMA is not the same as Heterogeneous Computing.
At best, it's a subset of that definition, but focused on memory interfaces.
 
Keep in mind also that AAA games (and games in general) were made using 512MB of RAM. To make it worse:

It's not unified memory: 256 MB main RAM and 256 MB VRAM. And worse still:

The OS footprint when the console was first released hogged, I believe, close to 100MB of RAM; a few years later the OS footprint was slashed by 74MB.
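Back-of-the-envelope, using the figures above: with the OS reserving ~100MB at launch, a developer effectively had roughly 256 - 100 ≈ 156MB of main RAM plus the 256MB of VRAM to work with, and only later firmware reclaimed some of that.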

Developers always complaining they don't have enough RAM. Waahhh, crybabies. I dare you to make a Gran Turismo game run at 60fps on less than 512 MB of RAM!!
 

mckmas8808

Banned
As everyone knows by now, the infamous Cell CPU in the PS3 was really hard and time-consuming to code for. There is a YT video by Modern Vintage Gamer who goes into detail about what was involved. The amount of code required just to send one command was a lot more than a typical core would need.

We saw just how this affected the multiplatform games that were released, which ran a lot worse on the PS3 than on the 360 for the majority of the generation.
In response to the trouble developers were having with the Cell, Sony put a lot of effort into the ICE team to build the absolute best tools for taking advantage of the Cell and to help development of third-party games on the platform. From my understanding, the ICE team was drawn from Sony first-party teams such as Naughty Dog, GG and Santa Monica Studio.
By the end of the generation Sony's internal teams were putting out games that were amongst the most impressive of the generation.
Each Sony studio developed their own internal game engines, built from the ground up to take advantage of the parallel processing that the Cell offered.
As a result their current and recent projects are extremely well coded and efficient on multicore processors and their engines keep up with the best of them including Idtech and Unreal Engine.
The hard graft that these studios had to do when stuck with the Cell has given them a skill set and coding tools that are benefiting them today.

As someone who loves the tech side of things, I wonder what it could have been if Sony had stuck with the Cell and improved its shortcomings, like making it out-of-order and streamlining the command requirements. No doubt it would have been more powerful than the Jaguar cores in the PS4.

While I understand why both Sony and MS moved to PC parts for their new consoles, I really miss the days of proprietary processors from Sony, Sega etc.

This is my first thread on GAF, so go easy on me.

You think you miss the bolded... but trust me, you don't!!!
 

damiank

Member
I guess there is a big reason why nVidia only made one console for MS and only one for Sony.
But AMD/ATI have already made several of them. Being a good partner in a venture like this is very important.
Nintendo switched from ArtX/AMD (GC->Wii->WiiU) to Nvidia. I wonder if Nintendo will switch back to AMD, as the Steam Deck is emulating Switch just fine.

Also, that hypothetical Cell 2 for PS4 could have been PowerXCell-based with architecture upgrades, but in a 4+8 combination instead of 1+8. With the Radeon from the actual PS4 and 8GB of RAM, devs could have used their existing 360 engines plus optional stuff from PS3 that could run on the SPEs to offload the GPU here and there. But that's just hypothetical. Also, Sony itself could have used its existing PSone and PStwo emus, moving the PS2 Classics emu to a full-blown emulator with support for every PS2 controller and accessory (which the actual PS2_netemu can't do). For PS5 it could be an 8+8 combination and the end of the line for this crap XD.
 
Last edited:

Fafalada

Fafracer forever
The Cell arch made a tradeoff. By removing things like the branch predictor from its SPEs and by using a simple in-order arch, it saved a lot of die space to spend on more computational power.
SPEs don't have a branch predictor by design - it wasn't 'removed' - and as I pointed out earlier, they really didn't need one.
But the rest is not really specific to Cell or the SPEs. From 1995 to 2012, 8 out of 10 consoles released had CPUs (and other processors) using in-order architectures; 12 out of 15 if counting handhelds. It's only in the most recent decade that we've finally seen the switch to OOOE proper.
So yes - it was a trade-off, but it was one that almost every console made in pursuit of power-efficiency.
 
I can't understand where this narrative that they stumbled in the last few years is coming from. It's been hit after hit pretty much, not only since the console released but even in the months that preceded the release. Critically and commercially successful games, a sold-out console, huge service numbers, etc.

The last R&C was likely more successful than ever; R&C games didn't use to get this much attention or promotion before. GT7 did exceptionally well too. Returnal did great for a game like that.

You seem to be underselling a lot of things. Horizon was a massively successful new IP and I think it's way too soon to assume the sequel didn't do as well as they hoped for. Releasing close to Elden Ring didn't help, but it's nothing that can't be overcome over time.

TLoU2 is probably the only recent game that might've sold below expectations (given that Sony doesn't update the numbers), but even that was still massively successful anyway, and the multiplayer component hasn't even been released yet.

Horizon clearly hasn't sold as well as its predecessor.

It nose-dived on the charts immediately. The only question is whether it can have legs as the drought of big releases continues.
 
To provide some context, during Cell development their budget was ~O(200M transistors). When you look back and compare it to the PS5, we're approaching 20,000M transistors. So the design paradigm to fit in the performance envelope is quite a bit different, and when you actually do the regression analyses and look at prediction and OOOE and branching, etc., these things aren't winners. Especially coming off the PS2, Cell was a more approachable design. Hofstee was outspoken about Cell being easier to approach.


For what it's worth - the original targets had Cell 4x more powerful, and the GPU substantially faster at non-compute workloads (but not much else), so what we got was not even all that exotic in the end.

Agreed. There is some ambiguity in what they had planned, but there was significant early interest in having a 1TFlop/sec Cell processor around the time the patents were drawn up. It's likely they had a more aggressive lithography roadmap in mind - remember they were already producing EE+GS's on CMOS4 @90nm in FY2004. And then IBM's influences are a mixed bag: 90nm SOI, a guTS/Rivina-derived PPE, and the EIB, which was an elegant solution over a crossbar considering the performance/space trade-offs.


I remember when Cell dominated the Folding@Home rankings, due to its code being very parallel with few dependencies.
But in games it never pulled away that far from the competition.

It was never a fair competition. Use the same GPU and tell us the games would be remotely close if the only variable was Cell versus Xenon. Once they moved from the Toshiba design, they lost the memory architecture and programming paradigm that would have been interesting. It wasn't even a G8x-class GPU. As Jim Kahle has admitted, the nVidia interconnect 'problem' was a late addition for them.

Interesting thought experiment: if the PS3 had had a GPU of similar caliber (easy to imagine, as GPUs were discrete back then), so that the GPU was a negligible difference to developers, what do you think they could have done with ~200GFlop/sec of sustained and pretty general FP computation?
 
Horizon clearly hasn't sold as well as its predecessor.

It nose-dived on the charts immediately. The only question is whether it can have legs as the drought of big releases continues.
If you say so, let's see how long it takes Sony to make the numbers public.

If they take too long to talk about it then it is likely it underperformed (but there is no doubt that it sold extremely well regardless; it's a very successful IP already).
 
Last edited:

SlimySnake

Flashless at the Golden Globes
If you say so, let's see how long it takes Sony to make the numbers public.

If they take too long to talk about it then it is likely it underperformed (but there is no doubt that it sold extremely well regardless; it's a very successful IP already).
The first one sold what, 2.7 million copies in the first month and 20 million overall? You would expect it to shatter those numbers, especially considering it's cross-gen. I would've expected them to release those numbers by now.

What's even more weird is that we haven't seen GT7 numbers either.

We know Demon's Souls only sold 1.2 million in the first year. Ratchet, 1 million in the first month. Returnal, only 500k in the first three months. The new DLC only has 10,000 players on the leaderboards apparently. What's up with these 1:20 attach ratios for a console so popular?

Spiderman Miles is pretty much the only PS5 game that continues to chart every month. So everyone who picks up a PS5 buys Miles and nothing else. Why? Could it be because it's only $50 whereas everything else is $70? Last gen, Sony's first party had an incredible run of games selling way better than their predecessors. KZ2 sold 2.1 million in 6 weeks. Infamous and BB sold 1 million in a month. Even DriveClub sold 2 million in 8 months. Then Uncharted 4, Horizon, GOW, Spiderman, Days Gone, TLOU2 and Ghosts just continued to outsell each other. Massive first-week sales. Incredible legs. It was insane. Then PS5 hits and they all barely sell a million? What's going on here?
 
Last edited:
The first one sold what, 2.7 million copies in the first month and 20 million overall? It's definitely curious that a game that sold 20 million wouldn't sell more than 2.7 million in its first three months.

What's even more weird is that we haven't seen GT7 numbers either.

We know Demon's Souls only sold 1.2 million in the first year. Ratchet, 1 million in the first month. Returnal, only 500k in the first three months. The new DLC only has 10,000 players on the leaderboards apparently. What's up with these 1:20 attach ratios for a console so popular?

Spiderman Miles is pretty much the only PS5 game that continues to chart every month. So everyone who picks up a PS5 buys Miles and nothing else. Why? Could it be because it's only $50 whereas everything else is $70? Last gen, Sony's first party had an incredible run of games selling way better than their predecessors. KZ2 sold 2.1 million in 6 weeks. Infamous and BB sold 1 million in a month. Even DriveClub sold 2 million in 8 months. Then Uncharted 4, Horizon, GOW, Spiderman, Days Gone, TLOU2 and Ghosts just continued to outsell each other. Massive first-week sales. Incredible legs. It was insane. Then PS5 hits and they all barely sell a million? What's going on here?
Wait, do you know how much Horizon Forbidden West sold? I don't, and I don't know how someone could tell just from looking at its position on the charts.

GT7 just released; they said it had the best launch in the franchise's history, if I'm not mistaken. That alone is great news.

You honestly think Returnal is a flop? Sony even bought the studio after the game released; they clearly liked what they saw. It's not like there were big expectations for the game.

Honestly you seem to be jumping to conclusions way too soon.
 
Last edited:

LordOfChaos

Member
As someone who loves the tech side of things, I wonder what it could have been if Sony had stuck with the Cell and improved its shortcomings, like making it out-of-order and streamlining the command requirements. No doubt it would have been more powerful than the Jaguar cores in the PS4.


Cell was in the end a pre-GPGPU design that aimed to make CPUs much more SIMDy per transistor, and when taken advantage of it did deliver that, with SIMD flops unparalleled for years after its release. However, GPUs quickly became better at the things Cell did better than general CPUs.


Nowadays if you want to rain a bunch of particles with real physics on the screen for instance, you want to do that on the GPU.
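To make the "SPE job then, GPU kernel now" point concrete, here's a minimal, hypothetical C sketch of the kind of per-particle update being described. Every particle is independent, which is why this sort of loop was farmed out to SPEs in batches back then and is dispatched as a GPU compute kernel (one thread per particle) today:

#include <stddef.h>

typedef struct {
    float px, py, pz;   /* position */
    float vx, vy, vz;   /* velocity */
} Particle;

/* One simulation step under gravity; each particle is updated
 * independently of every other, so the loop parallelises trivially. */
void step_particles(Particle *p, size_t n, float dt)
{
    const float g = -9.81f;              /* gravity along y */
    for (size_t i = 0; i < n; ++i) {
        p[i].vy += g * dt;               /* integrate velocity */
        p[i].px += p[i].vx * dt;         /* integrate position */
        p[i].py += p[i].vy * dt;
        p[i].pz += p[i].vz * dt;
        if (p[i].py < 0.0f) {            /* crude bounce off the floor */
            p[i].py = 0.0f;
            p[i].vy = -p[i].vy * 0.5f;
        }
    }
}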


 
Cell was in the end a pre-GPGPU design that aimed to make CPUs much more SIMDy per transistor, and when taken advantage of it did deliver that, with SIMD flops unparalleled for years after its release. However, GPUs quickly became better at the things Cell did better than general CPUs.

The obvious disclaimer is that, you're right, there has been an alignment of computation to dedicated substrate that has made Cell less needed today. But, consider:


I would suggest it was a potential platform, an architecture, that could have been extended and yielded interesting benefits.

If Sony's economics weren't a factor, PlayStation3 dedicated 258mm^2 (RSX) and 235mm^2 (Cell) to computation on their platform (~500mm^2 total). We're now getting ~300mm^2 with PlayStation5. Yet, we praise Cerny.

If we still had that area, we'd be talking the mid-point between an RTX 2070 and 2080, so on the order of 10B transistors. 75% of that dedicated to graphics, the other ~3B is free.

But, while on the Cell theme -- there were plans to extend the design in width (4 PPEs) and length (up to 32 SPEs). Obviously the PPEs wouldn't be the same design; we could afford to use a more accommodating Power core. And there was talk that an SPE didn't have to be exactly as we saw it in Cell. They discussed having an APU that was basically additional PPEs. So you would have a heterogeneous processing environment linked by the EIB (which could be replaced by a crossbar, too) that could be tailor-made.

Would this have a niche and find utility in today's computational landscape? Maybe, maybe not. But it would be a lot more interesting from a theoretical standpoint than what Cerny has given us.
 

PaintTinJr

Member
To provide some context, during Cell development their budget was ~O(200M transistors). When you look back and compare it to the PS5, we're approaching 20,000M transistors. So the design paradigm to fit in the performance envelope is quite a bit different, and when you actually do the regression analyses and look at prediction and OOOE and branching, etc., these things aren't winners. Especially coming off the PS2, Cell was a more approachable design. Hofstee was outspoken about Cell being easier to approach.




Agreed. There is some ambiguity in what they had planned, but there was significant early interest in having a 1TFlop/sec Cell processor around the time the patents were drawn up. It's likely they had a more aggressive lithography roadmap in mind - remember they were already producing EE+GS's on CMOS4 @90nm in FY2004. And then IBM's influences are a mixed bag: 90nm SOI, a guTS/Rivina-derived PPE, and the EIB, which was an elegant solution over a crossbar considering the performance/space trade-offs.




It was never a fair competition. Use the same GPU and tell us the games would be remotely close if the only variable was Cell versus Xenon. Once they moved from the Toshiba design, they lost the memory architecture and programming paradigm that would have been interesting. It wasn't even a G8x-class GPU. As Jim Kahle has admitted, the nVidia interconnect 'problem' was a late addition for them.

Interesting thought experiment: if the PS3 had had a GPU of similar caliber (easy to imagine, as GPUs were discrete back then), so that the GPU was a negligible difference to developers, what do you think they could have done with ~200GFlop/sec of sustained and pretty general FP computation?
As much as I agree with your wider points about the Cell BE, the shoehorned 11th-hour RSX was still more capable than the Xenos by about 40% more quads on screen - as the headline figure - with optimised wound quadrilateral strip geometry (1.1 billion polys/vertices versus about 400-700M polygons, going by ATI 980 Pro/X1600(?) tech specs plus a boost).

The 360 had the same technical nightmare first year as the PS3; it just had no competition beyond the PS2 to make its screen tearing, frame pacing without full-screen AA, or sub-1280x720 native resolutions an issue we all remember - unlike the PS3. And because the ATI Xenos had microarchitecture fast-path features such as Early-Z, non-optimised screen processing favoured the 360. Polygon processing on the Xenos was almost as fast unoptimised as with optimised wound quad strips - probably because some batching in hardware made the one extra vertex for an extra polygon automatic, while still being constrained by the max polygon count. On Nvidia cards, by contrast, you could get an extra polygon per additional vertex, so the difference between optimised and non-optimised geometry was 3 vertices per polygon versus 1 vertex per polygon, changing the max polygon throughput from about 340M polygons/sec to about 1 billion polygons/sec.
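To illustrate the vertex-per-polygon arithmetic above, here's a rough back-of-the-envelope sketch in C. The 1e9 vertices/sec ceiling is an assumed figure chosen only to line up with the headline numbers quoted, not an official spec for either GPU:

#include <stdio.h>

int main(void)
{
    /* Assumed vertex-rate ceiling, in vertices per second. */
    const double vertex_rate = 1.0e9;

    /* Triangle list: every triangle submits 3 fresh vertices.
     * Long optimised strip: each new triangle reuses 2 previous
     * vertices, so the cost amortises to ~1 vertex per triangle. */
    const double tris_per_sec_list  = vertex_rate / 3.0;
    const double tris_per_sec_strip = vertex_rate / 1.0;

    printf("triangle list  : ~%.0fM polygons/sec\n", tris_per_sec_list / 1e6);
    printf("optimised strip: ~%.0fM polygons/sec\n", tris_per_sec_strip / 1e6);
    return 0;
}

That prints ~333M versus ~1000M, which is the gap between the ~340M and ~1 billion polygons/sec figures quoted above.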

Then you had the RSX supporting HD Ready and Full HD triple buffering with full-precision Z-buffering and hardware-accelerated sRGB gamma correction - a proper Standard Dynamic Range colour gamut - plus support for 10-bit-per-channel RGB framebuffers, etc.

The only real weaknesses were that the alpha blending couldn't match the Xenos, thanks to the latter's higher-performance (if too small) eDRAM, and that, because the Xenon and Xenos shared unified RAM, the split 256MB of GDDR3 and 256MB of XDR in the PS3 became another problem to solve when moving data around. The ATI Early-Z feature also provided an early out for shading too, IIRC - like a precursor to variable rate shading - which was an additional automatic saving over the RSX for non-optimised rendering.

Overall the RSX was stronger, but even so it was far more work for equal results (in vertex + fragment throughput) compared to the competition.
 
Last edited:

LordOfChaos

Member
The obvious disclaimer is that, you're right, there has been an alignment of computation to dedicated substrate that has made Cell less needed today. But, consider:


I would suggest it was a potential platform, an architecture, that could have been extended and yielded interesting benefits.

If Sony's economics weren't a factor, PlayStation3 dedicated 258mm^2 (RSX) and 235mm^2 (Cell) to computation on their platform (~500mm^2 total). We're now getting ~300mm^2 with PlayStation5. Yet, we praise Cerny.

If we still had that area, we'd be talking the mid-point between an RTX 2070 and 2080, so on the order of 10B transistors. 75% of that dedicated to graphics, the other ~3B is free.

But, while on the Cell theme -- there were plans to extend the design in width (4 PPEs) and length (up to 32 SPEs). Obviously the PPEs wouldn't be the same design; we could afford to use a more accommodating Power core. And there was talk that an SPE didn't have to be exactly as we saw it in Cell. They discussed having an APU that was basically additional PPEs. So you would have a heterogeneous processing environment linked by the EIB (which could be replaced by a crossbar, too) that could be tailor-made.

Would this have a niche and find utility in today's computational landscape? Maybe, maybe not. But it would be a lot more interesting from a theoretical standpoint than what Cerny has given us.


Absolutely no denying that the 7th gen was a lot more interesting than what we have now, given that we're still talking about and dissecting Cell to this day.

But it must be said that the get-up-and-get-going utility of PC-like hardware today is a big boon to developers. As for the argument that more transistors thrown at the problem would yield more power: of course, yeah, but that is a matter of them shifting away from the loss-making battle tanks of the 7th gen going at each other to more hybrid sedans, not a mark against the architecture or Cerny.

Imo this argument also had more appeal in the 8th gen. The Jaguar CPUs were single-core dogs and not impressive even using all 7 (available) cores for SIMD. I would definitely like to see the what-if universe simulation of an extended Cell 2 with 4 PPEs/32 SPEs in there, but I've not been unhappy with the 9th-gen move to easily accessible, 7-core Zen 2 power, with unified memory making GPGPU even more viable than standalone PC cards and likely subsuming most of what a theoretical Cell could have done here.


Actually, before it launched I was quite taken with the idea of a "Cell assist engine" - a full Cell processor in the PS5 for PS3 BC that developers could also tap for PS5 titles however they wanted - but alas, it was only fantasy. 230 million transistors would be a pretty small addition.
 
Last edited:
Overall the RSX was stronger, but even so it was far more work for equal results (in vertex + fragment throughput) compared to the competition.

Sony, Toshiba and IBM formed STI in 2001 for a product in 2005. If a similarly successful project with Toshiba had worked out, things would have been very different. I would even suggest that if a similar period of time had been invested with nVidia, we could have seen a G8x derivative that didn't have the shitty memory, bandwidth or command processor issues, which would have made these discussions moot. You make great arguments, but let's be honest, it was a hack job...


Absolutely no denying that the 7th gen was a lot more interesting than what we have now, given that we're still talking about and dissecting Cell to this day.

But it must be said that the get-up-and-get-going utility of PC-like hardware today is a big boon to developers. As for the argument that more transistors thrown at the problem would yield more power: of course, yeah, but that is a matter of them shifting away from the loss-making battle tanks of the 7th gen going at each other to more hybrid sedans, not a mark against the architecture or Cerny.

I agree! This discussion held more water in the 8th generation.

Also, Sony's high-dimensional design space is overwhelmingly dominated by economic concerns, which I have the luxury of not paying attention to.

PS. Perhaps Panajev remembers this, but I think there was a post-launch interview years later with Hofstee or someone who said that in hindsight, if they had realized OpenCL would come about, they would have developed a processor dedicated to that?! Sorry, I'm getting old and have forgotten so much; I work on computation that is much wetter now...
 
Last edited:
No, it was garbage and no amount of nostalgia will change that.
Yeah it wasn't suited to a games console and in hindsight Sony would choose a different path.
However, it made Sony's studios mad good at parallel compute and their engines are really efficient at it. With Sony first-party games looking so good, rarely do they have major performance issues related to poor coding.
They tend to get every bit of performance and efficiency out of their hardware. A skill born out of fire lol
 
Last edited:

LordOfChaos

Member
Member the PowerXCell 8i? This is two of them in an IBM Blade

[Image: two PowerXCell 8i processors on an IBM blade]


'Member the SpursEngine accelerator cards?

[Image: Toshiba SpursEngine PxVC1100 accelerator card]


There were even a few laptops with Cell accelerators.



I 'member
 

BlackTron

Member
Meh. It's too bad we couldn't have a machine with the easy architecture of PS4 during the time period of PS3. The variety, quality and sheer number of games would have beaten both the PS3 and PS4 libraries we got.

Could have been like PS2 ver 2.
 

Lysandros

Member
I was all in on the Saturn. When a developer knew how to use it, it could outdo the PS1. Look at how good Sega Rally, VF2 and the revised Daytona were.
The most annoying thing from my point of view was the lack of transparency effects. Mesh smoke effects and shadows sucked lol.
No. Not true at all as a general rule in pure 3D workloads. Saturn suffered from substantially lower framerates, aside from the missing effects, compared to the PlayStation, due to the lack of a proper hardware-accelerated 3D architecture and significantly lower effective fill rate/bandwidth. I think you are unaware of Ezra Dreisbach's (of Lobotomy Software, mainly a Saturn developer at the time), John Carmack's, and nearly every other developer's comments on the respective machines. There was simply no way for Saturn to reach PS1's level of performance and visual fidelity in fully 3D games, besides a few exceptions unrelated to hardware. No need to cite the famous examples; nearly everyone is fairly familiar with the PS1 library.

Edit: By the way, PS1 was pretty far from simplistic in design; in many ways it was very similar to PS2, which was built upon its foundation. As an example, the PS1 CPU contained two on-chip co-processors (one of them being the famous GTE) besides the proper core, plus a pretty beefy MDEC decompressor. So it was complex, just like the PS2's EE. Yes, the architecture was certainly more streamlined compared to Saturn, but it still required a lot of effort to extract optimum performance. In fact, a former Saturn developer recently posted on this site that by 1997 it was actually easier to make 3D games on Saturn than on PlayStation.
 
Last edited:

SkylineRKR

Member
Whatever benefit the Cell processor provided, the underpowered RSX took it away, with the Cell SPEs having to assist with graphics rendering in order to get parity with the 360.

I wonder what the original Toshiba-designed GPU would have looked like? It was believed to be an iteration of the Graphics Synthesizer used in the PS2, with a few SPEs for rendering and a graphics core for rasterization, paired with eDRAM. My guess is they found its performance to be very poor compared with the ATI cores that Microsoft was going to use at the time, and shelved it. Another issue with the PS2 graphics system was that it was too different from how other GPUs functioned, and Sony was aware of how poor some of the PS2 ports were, and of the issues gathered from developer feedback.
Then again, had they delayed the PS3 and used a GPU like the GeForce 8800GT, the PS3 would have wiped the floor with the 360 in the graphics department. But how much would such a console cost? They were already pushing it with the stock PS3.

I've said it before: a PS3 with a GF8, no Rambus XDR but unified DDR, no Blu-ray and perhaps no hardware BC would probably have wiped the floor with the 360, and could've been sold at a competitive price too. The question is parity from third parties, but yes, it would've been more of a powerhouse with a more logical RAM solution.

No Blu-ray would probably sting, but it wasn't really needed, as by then everything already pointed towards VOD and streaming. I think most PS3 discs had duplicated data to cut down on seek times anyway, because the games were much smaller in reality. And even if a game did need the space for some reason, Sony could offer full installations from disc like the 360 eventually did and the PS4 and X1 required. Sony had an edge with the HDD from the get-go, delivering more space and the option to upgrade with off-the-shelf parts. As we know, most 360 games played on newer systems are still fine, despite originally being on 'just' DVD. And the small download size is actually a benefit nowadays.
 

AGRacing

Gold Member
Most of these points also apply to the Sega Saturn; back then the narrative was hard to code for = bad. But with the PS3 it's now hard to code for = good.
The Saturn had lots of untapped potential... but I believe there were so many design "mistakes" in the Saturn architecture due to the system being radically re-engineered late in development. It had 2 CPUs, but they had to share the same bus and couldn't operate in a way that would truly justify choosing 2 in the first place. 2 VDPs as well, but they weren't identical in capability... they had to split VRAM and split tasks in a way that made it difficult to push effects like transparency. No hardware Z-sorting.
If they had the same hardware parts budget and different goals for what the hardware should do at the beginning of development that system would have been MUCH better.

The Cell chip itself was VERY purpose built and I think the OP is right to infer that "multi threaded thinking" was a must for 1st party and that Sony probably did benefit from that in the unfortunate "jaguar core" years later. I think that's a smart take.

The PS3 could have been better as well though if they didn't have to fall back on the Nvidia GPU. That's sort of the Saturn element of the systems design... but not as bad of a situation. I think the split RAM situation was less than ideal as well. Those systems were RAM starved to begin with and could have benefitted incredibly from double the RAM or for the PS3 even 50% more RAM.

Oh well. It's always fun to play "what if" with those systems. Today's hardware truly does feel boring in comparison.
 
Last edited:

John Wick

Member
Wasn't the PS3 the worst console ever in terms of performance per cost? $900 to make at launch, while the Xbox 360 cost half as much at the time.
Clearly you have no idea what you're talking about. The 360 was never half the price.
The PS3 was cutting-edge tech at the time. Sony were selling it at a massive loss. Blu-ray, hard drive, wireless, HDMI, and backwards compatibility straight out of the box.
 

winjer

Gold Member
Clearly you have no idea what you're talking about. The 360 was never half the price.
The PS3 was cutting-edge tech at the time. Sony were selling it at a massive loss. Blu-ray, hard drive, wireless, HDMI, and backwards compatibility straight out of the box.

The PS3 had some things that were cutting edge at the time. And others that were not so much.
Like the GPU, which still used a non-unified shader arrangement. Or the CPU, which had a dumb front-end, being an in-order architecture. Or having two separate pools of memory in a console. Or using an HDD when SSDs were common.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
The Saturn had lots of untapped potential... but I believe there were so many design "mistakes" in the Saturn architecture due to the system being radically re-engineered late in development. It had 2 CPUs, but they had to share the same bus and couldn't operate in a way that would truly justify choosing 2 in the first place. 2 VDPs as well, but they weren't identical in capability... they had to split VRAM and split tasks in a way that made it difficult to push effects like transparency. No hardware Z-sorting.
If they had the same hardware parts budget and different goals for what the hardware should do at the beginning of development that system would have been MUCH better.

The Cell chip itself was VERY purpose built and I think the OP is right to infer that "multi threaded thinking" was a must for 1st party and that Sony probably did benefit from that in the unfortunate "jaguar core" years later. I think that's a smart take.

The PS3 could have been better as well though if they didn't have to fall back on the Nvidia GPU. That's sort of the Saturn element of the systems design... but not as bad of a situation. I think the split RAM situation was less than ideal as well. Those systems were RAM starved to begin with and could have benefitted incredibly from double the RAM or for the PS3 even 50% more RAM.

Oh well. It's always fun to play "what if" with those systems. Today's hardware truly does feel boring in comparison.
The PPE had some genuine foot guns, and in some corner cases you could apparently get SPEs beating it at general-purpose code, but aside from those I do not see glaring architectural mistakes in CELL (CBEA: PPE + ring bus + SPEs + XDR + FlexIO; the latter had issues on the nVIDIA implementation side because it was a rush job).

Could SPE’s have had bigger cache with a portion they could lock and use as scratchpad in an update? Yes. Could you have had more than one PPE with a modern OoOE implementation and other quirks removed (LHS with lack of forwarding paths requiring an otherwise unnecessary round trip to memory). Yes good.

Saturn was hard to develop for, but also a more locked-down and, some could say, flawed architecture; CELL was hard to get to grips with, but not necessarily bad.

In time the SIMD power balance was always going to shift to GPUs, with much, much wider vector units and a programming model to match (scalar/SIMT), but you still need strong, predictable FP performance on the CPU side; that model was a bit ahead of its time for console developers, but not bad.

Proof? Performance improved across the board as titles started to be made with PS3 as the lead platform (it did not just make it easier to make PS3 versions)… I have not heard of Xbox 360 CPU performance regressions as titles started to become more and more CELL-optimised.
 

Panajev2001a

GAF's Pleasant Genius
The PS3 had some things that were cutting edge at the time. And others that were not so much.
Like the GPU, which still used a non-unified shader arrangement. Or the CPU, which had a dumb front-end, being an in-order architecture. Or having two separate pools of memory in a console. Or using an HDD when SSDs were common.
Small, large-capacity, user-replaceable, and reliable SSDs were common in the PS3 days (the PS3 launched in November 2006)?

In December 2006 Advanced Media announced that its 32GB 2.5" SATA SSD cost $1,000.

In another pricing case study, "SSD Speeds Up Eve Online", a SAN-based SSD from Texas Memory Systems provided a 40x speedup in a system running on 150 IBM servers with 17,000 concurrent users. The system which TMS supplied for this application has a list price (Q405) of $142,000.
2006: http://www.storagesearch.com/2006-archived-ssdbyuersguide.html

GPU granted, it was a last-minute rush job when the Toshiba GPU was deemed not fit to launch with, despite there being working samples (just not something they could mass produce)… the FlexIO issues, with the CPU and GPU having uneven access to each other's memory, were also a result of this (Toshiba's RS had gobs of eDRAM and was built around FlexIO from day 1; it was the "super PS2" part of the PS3, not the innovation side of things).

CPU’s adopted SMT/hyper threading which was still advanced for the time, but their bet on very high frequency and low power forced some constraints like the in order front end (and some issues like LHS and some microcodes instructions which developers hated in both consoles).
 
Last edited:

winjer

Gold Member
Small, large-capacity, user-replaceable, and reliable SSDs were common in the PS3 days?

It's the time when they were starting to hit mainstream in the PC consumer space.

GPU granted, it was a last-minute rush job when the Toshiba GPU was deemed not fit to launch with, despite there being working samples (just not something they could mass produce)… the FlexIO issues, with the CPU and GPU having uneven access to each other's memory, were also a result of this (Toshiba's RS had gobs of eDRAM and was built around FlexIO from day 1; it was the "super PS2" part of the PS3, not the innovation side of things).

CPU’s adopted SMT/hyper threading which was still advanced for the time, but their bet on very high frequency and low power forced some constraints like the in order front end (and some issues like LHS and some microcodes instructions which developers hated in both consoles).

I wasn't talking about SMT, but about OoO execution.
But SMT had been mainstream on PC since 2002, when first implemented in the Pentium 4. Almost half a decade before the PS3.
 

Panajev2001a

GAF's Pleasant Genius
It's the time when they were starting to hit mainstream in the PC consumer space.
The mass-market launch date being November 2006, we are talking about buying them in volume in what… January or February 2006? I think that was way, way too early for a console to use… nobody introduced SSDs in consoles until 2020…

I wasn't talking about SMT, but about OoO execution.
But SMT had been mainstream on PC since 2002, when first implemented in the Pentium 4. Almost half a decade before the PS3.
Ironically, another speed-demon design whose innovations tried to salvage an architecture where the sacrifices (pipeline length, for example) were dragging it down hard.
 

John Wick

Member
The PS3 had some things that were cutting edge at the time. And others that were not so much.
Like the GPU, which still used a non-unified shader arrangement. Or the CPU, which had a dumb front-end, being an in-order architecture. Or having two separate pools of memory in a console. Or using an HDD when SSDs were common.
Lol! I meant as in for a console. Wasn't Sony losing about $200 per console? Also, weren't Blu-ray players about $600+ at the time? I think Sony kinda used the PS3 as a Trojan horse for Blu-ray adoption and the standard.
 

winjer

Gold Member
The mass-market launch date being November 2006, we are talking about buying them in volume in what… January or February 2006? I think that was way, way too early for a console to use… nobody introduced SSDs in consoles until 2020…

It could have been used in a high-end SKU, just like later PS3 models with bigger HDDs.

Ironically, another speed-demon design whose innovations tried to salvage an architecture where the sacrifices (pipeline length, for example) were dragging it down hard.

Yet some people still think the Cell was the best thing ever.
 

winjer

Gold Member
Lol! I meant as in for a console. Wasn't Sony losing about $200 per console? Also, weren't Blu-ray players about $600+ at the time? I think Sony kinda used the PS3 as a Trojan horse for Blu-ray adoption and the standard.

The X360 used a GPU with unified shaders.
And the PS4, PS5, X1 and Series S/X all use OoO CPUs, while being very affordable. And even an SSD.
 
Last edited:

Sosokrates

Report me if I continue to console war
Technically there was nothing special about PS3 exclusives; they just had great devs working on them and polishing the hell out of them. They would have looked just as good if Sony had gone with a more standard PPC CPU design like the 360's.
 

Fafalada

Fafracer forever
As someone who loves the tech side of things I wonder if Sony had of stuck with the Cell and improved its shortcomings like making it Out of order, streamlining the command requirements what it could have been. No doubt it would have been more powerful than the jaguar cores in the PS4.
While I understand why both Sony and MS moved to PC parts for their new consoles, I really miss the days of proprietary processors from Sony, Sega etc.
The 'dream' of proprietary, built-for-purpose console tech died when the PS3 switched to an NVidia GPU at the 11th hour. Nintendo hung on for a bit longer with the Wii U, but the writing was on the wall before the last decade started.
As for what could have been - there was a genuinely interesting path towards a dedicated deferred rasterizer that Sony could have pursued (and Cell paired really well with that concept), but looking at market realities it could have really hurt the platform to be even more different than it already was. In addition to R&D becoming unsustainable for anyone but the 3 major CPU/GPU players, prioritizing developers meant prioritizing a less diverse approach to hardware as well, and that's what played out in the last 15 years.

Technically there was nothing special about PS3 exclusives; they just had great devs working on them and polishing the hell out of them.
You can say that for every big console exclusive ever - none of them are exempt from running on a more standard computing platform.
It's also a myopic perspective on things - closed-box optimization has always been about what said closed box could do, not the hypothetical of writing a portable version of your software.

Also, let's not pretend the PPCs in the 360 were standard; they had their own assortment of potholes that no one was happy to be working with. The real reason they are viewed (marginally) more favorably is that the 360 was the lead platform for most developers that gen. And adapting to two painful CPU architectures at once was too much to ask of most.
 
Last edited:

Sosokrates

Report me if I continue to console war
The 'dream' of proprietary, built-for-purpose console tech died when the PS3 switched to an NVidia GPU at the 11th hour. Nintendo hung on for a bit longer with the Wii U, but the writing was on the wall before the last decade started.
As for what could have been - there was a genuinely interesting path towards a dedicated deferred rasterizer that Sony could have pursued (and Cell paired really well with that concept), but looking at market realities it could have really hurt the platform to be even more different than it already was. In addition to R&D becoming unsustainable for anyone but the 3 major CPU/GPU players, prioritizing developers meant prioritizing a less diverse approach to hardware as well, and that's what played out in the last 15 years.


You can say that for every big console exclusive ever - none of them are exempt from running on a more standard computing platform.
It's also a myopic perspective on things - closed-box optimization has always been about what said closed box could do, not the hypothetical of writing a portable version of your software.

Also, let's not pretend the PPCs in the 360 were standard; they had their own assortment of potholes that no one was happy to be working with. The real reason they are viewed (marginally) more favorably is that the 360 was the lead platform for most developers that gen. And adapting to two painful CPU architectures at once was too much to ask of most.
Which is my point... I mean look at the thread title.
 

Romulus

Member
The 'dream' of proprietary, built-for-purpose console tech died when the PS3 switched to an NVidia GPU at the 11th hour. Nintendo hung on for a bit longer with the Wii U, but the writing was on the wall before the last decade started.
As for what could have been - there was a genuinely interesting path towards a dedicated deferred rasterizer that Sony could have pursued (and Cell paired really well with that concept), but looking at market realities it could have really hurt the platform to be even more different than it already was. In addition to R&D becoming unsustainable for anyone but the 3 major CPU/GPU players, prioritizing developers meant prioritizing a less diverse approach to hardware as well, and that's what played out in the last 15 years.


You can say that for every big console exclusive ever - none of them are exempt from running on a more standard computing platform.
It's also a myopic perspective on things - closed-box optimization has always been about what said closed box could do, not the hypothetical of writing a portable version of your software.

Also, let's not pretend the PPCs in the 360 were standard; they had their own assortment of potholes that no one was happy to be working with. The real reason they are viewed (marginally) more favorably is that the 360 was the lead platform for most developers that gen. And adapting to two painful CPU architectures at once was too much to ask of most.


The difference is Sony had arguably the best devs, at their prime. Just looking at the 360, the developer prowess wasn't close. I think that's the only reason some people at the time thought the PS3 had an edge. We know better now. Sony devs would put the PS3 into very confined circumstances (mostly linear or fixed camera angles) and showcase elements that have nothing to do with power, like great animations, to sell it. But looking back now, even those ultra-confined games struggled on PS3 hardware. You couldn't even rotate the camera in God of War 3, lol, and that game had an atrocious framerate.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
It could have been used in a high-end SKU, just like later PS3 models with bigger HDDs.
Come on :), how much would it have cost? The current model already caused $200+ losses on each unit at $599. For a 32-64 GB model it would have been prohibitively expensive: not even MS thought about it for the Xbox One, Xbox One X, etc…

SATA bandwidth would have still limited you anyway.

Yet some people still think the Cell was the best thing ever.
CELL is not just the PPE's downsides (those downsides are the same as the PPX cores in Xenon).
 

Panajev2001a

GAF's Pleasant Genius
The X360 used a GPU with unified shaders.
And the PS4, PS5, X1 and Series S/X all use OoO CPUs, while being very affordable. And even an SSD.
Only XSX|S and PS5 ship with an SSD and they are 2020 consoles vs a 2006 one…
 
Last edited:

HYDE

Banned
Sony's mantra to make cinematic video games from the beginning of the PS3 era is why Sony is what it is today.
This started on PS1 and has been there since their inception. ICO was a PS1-developed game; SOTC was a PS2-developed game. God of War I & II were both cinematic also, and those are just the best examples, to name a few. Sony first party is fucking amazing. There's a reason Microsoft is buying other developers…
 

winjer

Gold Member
Come on :), how much would it have cost? The current model already caused $200+ losses on each unit at $599. For a 32-64 GB model it would have been prohibitively expensive: not even MS thought about it for the Xbox One, Xbox One X, etc…

SATA bandwidth would have still limited you anyway.

CELL is not just the PPE's downsides (those downsides are the same as the PPX cores in Xenon).

Let's imagine Sony went with a similar approach to the PS4.
They went to AMD and got some CPUs. Maybe two Athlon 64 X2s, bound by an interposer. At the time, these CPUs would have been cheap, as they were near the end of their life. But they have almost identical IPC to a Jaguar core.
This alone would mean much greater yields and much lower prices.
Then, add a Xenos GPU, similar to the X360's, from ATI. Chances are it would be cheaper than the 7900GT from nVidia.
Instead of going for the PS3's expensive memory from Rambus, use normal GDDR3 in a unified pool.
We can keep the Blu-ray drive and add a small SSD of 20-60GB, depending on SKU.

The result is a cheaper console to produce, using a CPU with OoO. Much easier to program.
With a much greater number of programmers around the world who can code for x86, meaning faster and cheaper game development.
Fewer broken games because of an overcomplicated CPU. Better performance because of a more advanced GPU.
More ports from PC to the PS3, and vice versa.
 

PaintTinJr

Member
The 'dream' of proprietary, built-for-purpose console tech died when the PS3 switched to an NVidia GPU at the 11th hour. Nintendo hung on for a bit longer with the Wii U, but the writing was on the wall before the last decade started.
As for what could have been - there was a genuinely interesting path towards a dedicated deferred rasterizer that Sony could have pursued (and Cell paired really well with that concept), but looking at market realities it could have really hurt the platform to be even more different than it already was. In addition to R&D becoming unsustainable for anyone but the 3 major CPU/GPU players, prioritizing developers meant prioritizing a less diverse approach to hardware as well, and that's what played out in the last 15 years.
...
I agree with your overall take that it could have hurt the market 10 years ago - although Apple is effectively copying the Cell idea 20 years after the STI group offered it to them as a roadmap for their PowerPC Macs. However, it was really the advent of the Nvidia GTX 200 series GPUs that killed the Cell BE's future IMHO - the GPUs PlayStation used after they acquired Gaikai, which eventually became PS Now - because they were able to handle most of the Cell BE's workloads (probably measured by IBM Roadrunner-type usage rather than PS3) and were slightly more power efficient IIRC.

Had Nvidia failed to deliver a GPU at that level, there's every possibility that the next-gen Cell BE would have been the heart of the PS4, and all the hard work of that generation would have carried over in a more literal way - rather than the general migration to heterogeneous compute and unified memory that the PS3/360 endured and that paved the way for the X1/PS4 - leading to a fork in graphics R&D between the STI group and Nvidia/AMD/Intel.
 

Fafalada

Fafracer forever
I think that's the only reason some people at the time thought the PS3 had an edge.
The PS3 did have an edge though - I know devs who had to scale back the PS3 version of certain multiplatform titles because the publisher was worried about how it would reflect on the 360 version (which was generally seen as the lead SKU from a sales perspective, regardless of the order of development).

I agree with your overall take that it could have hurt the market
I meant the market for the console itself. The chip market at large would not have been negatively affected by different CPU paradigms; that'd be silly.

Had Nvidia failed to deliver a GPU at that level, there's every possibility that the next-gen Cell BE would have been the heart of the PS4
As mentioned earlier in this very thread - the original Reality Synthesizer was a completely different paradigm from 'more SPEs everywhere'. It wasn't a bad approach either - but the engineering realities (or the capabilities of the teams building it) did not result in a realistic path to a commercial product. And like I said - the moment Sony killed that project, the doors to future custom-built GPUs were pretty much closed, especially given how disastrous the PS3 turned out to be financially.
I also wouldn't ascribe too much to what Cell did - we eventually got vast amounts of general-purpose compute anyway with the GPU-centric approach. It's the approach to rasterization that could have been radically different (or at least more diverse).
 