There's no way that any choice of CPU is going to reduce costs or shorten project timelines. Dream on; game dev doesn't work like that. The biggest culprits for projects overrunning are always things like management flip-flopping on direction.
Perhaps he, Mr. tinfoilhatman, should have a look at the book Masters of Doom. It's striking.
So Cell had nothing to do with increased development costs and decreased performance for many PS3 multi-platform games? ...
There are some initial costs for adapting to the architecture, but these costs are rather low compared to the cost of content development. And I can't see how Cell's architecture has brought any shortage of games; on the contrary, it brought many high-quality games.
... Weird. From what I understand, it's far easier/faster to develop for, say, a multi-core x86 CPU with OOO processing (or even the Xbox's IBM multi-core) than for anything exotic like Cell, where parallel processing is not an option but a requirement.
So, forgetting about performance, programming for processors that have to process data in parallel (and in order) is just as easy as conventional development? I'm not a developer (I work closely with them and QA teams), but that goes against everything I've ever heard.
There was a problem at the start of this gen when coders were largely indoctrinated into the Wintel thread-based programming model rather than the "job"-based approach that is vastly more efficient in a multi-core environment. Parallelization is the future, not super-fast single cores, so this is/was a necessary transition.
Also, you need to remember that on a coding team there are people working at a whole range of levels in relation to the hardware. Engine guys will be looking at low-level optimization, but more will just be pumping out largely platform-agnostic mid-level C, and some will be using higher-level scripting solutions like Lua or some bespoke equivalent for maximum portability.
Point is, the bulk of the code-base is going to be serving the needs of the design/content rather than the hardware it's running on.
What affects things most on a day-to-day basis are the quality of tools and debug systems because errors in implementation are inevitable regardless of hardware. The more complex the hardware the more important this aspect becomes, but the key thing to remember is that once these issues are solved, they are solved. Once you've got the tools and a staff experienced at using them, you shouldn't need to be constantly revisiting this stuff even on systems which are really picky about memory/data alignment like PS3.
Code is maths; it's a pyramid of knowledge which is constantly built up layer upon layer. So "difficulties" tend to be of decreasing significance the longer the platform lasts, which is why shifting CPU families is kind of a double-edged sword: you still have the initial "bump" where existing staff need to retrain and acquaint themselves with the new tools, tech, and their peculiarities.
More than anything else, though, managers want results, not excuses. Coders are paid problem-solvers, so whinging about the difficulty of a particular piece of hardware really doesn't cut it. If coder X is struggling, you simply replace him with someone who can handle it and, ideally, relishes the challenge.
QFT
Dumb question, but with the 360 was parallel processing ever used (with the CPU)? I was under the impression it wasn't, and that this was something unique to GPUs and processors like Cell? ...
In terms of parallel processing, everything that applies to Cell also applies to the 360's Xenon processor. However, the programming model is different. The Xenon processor, consisting of three modified PowerPC cores, is based on the threading model, working out of a uniform memory system where each memory call is served by a fixed memory controller. Such systems can and have served many applications and have increased processor efficiency over the last decade. However, this threading model has its limits in terms of scalability: it becomes inefficient as the number of threads increases. Managing all the threads needs more fine-grained control. To get this, threads are broken up into jobs. The difference is that the communication is also broken up, i.e. the communication (the memory transfer) is made explicit and user-driven, and this allows for greater utilization of multi-core processors. In a threading model, all the communication happens implicitly, done automatically by the processor's control logic guessing what's best when scheduling all the threads and utilizing its buses. But this logic has its limits: usually, if you have more threads than processors in a system, bus contention and over-utilization of certain units will arise and lead to a degradation in performance.
This has all been known for ages. As the workload increases, a threading model becomes less and less efficient at utilizing system resources, and since the workload for games increases at a huge rate, the programming model has to change as well. What makes the adaptation so difficult is that the dominant x86 architecture dictates the threading model. Intel will tell you how well their processors cope with your threads, esp. IF YOU BUY THEIR COMPILERS! That's where their knowledge is. Their entire compiler suite is optimized for the threading model, there are hundreds of patents in it, and they sell books and throw threading libraries at you like nothing else. So no reason to change the x86 architecture; they just cash in. Btw, there is a thread library on Cell that abstracts the SPEs into pthreads (see IBM's libspe2 on PS3 Linux), so you can bombard the SPE cores with threads as well. But soon you will find out that it is better to utilize the communication unit (the MFC, memory flow controller) within each SPE to enhance your game's performance many times over, as is done with Sony's SPURS library, a job-scheduling library for the SPEs. Quite a bunch of PS3 games use that.
But don't get me wrong on the thread model: it has its uses, especially in the casual world, and I wouldn't shoot for a job model or explicit DMA programming if I just wanted to cut a small but cool game together. However, there was a reason Cell was created, and there is also a reason why it differs from x86.
I can't wait for x86 to die off. These 3D transistors (and wafer tricks) are only a temporary slowdown of Moore's law for this architecture type. As many bottlenecks as Cell had, it is far more impressive than any x86 I've seen (not saying it's the most powerful, just for what it is). If people invested more in this, maybe even a Cell with more than a measly 256 KB of local store per SPE, better memory controllers, etc., we'd get some ridiculous breakthroughs. By far the biggest bottleneck was memory: not being able to access RAM fast enough, which is why XDR was needed.
It won't die any time soon, if ever. Some of the graphics guys would need to come down to making a new processor that emulates x86 for compatibility. Anyhow, AMD missed the chance to transform the architecture with its AMD64/x86_64 ISA. In any case, I hope the PS4 pwns.