I'm saying there are good use cases for GPGPU in very specific, limited workloads, but you can't expect to offload everything from the CPU to the GPU, because in order to even hope to match the performance of the 360 and PS3 CPUs you would have to reduce the actual graphics workload to almost nothing. In essence I'm saying what a number of us have been saying all along: GPGPU is not a magic solution to all the Wii U's shortcomings. It's not even a half-measure solution.
I agree, GPGPU is not a magical solution to all situations.
IMHO, however, you are not giving enough credit to the capabilities of modern GPU architecture to handle many of these tasks more efficiently and faster than Xenon and Cell. Also, there seems to be some confusion about what defines a GPGPU task.
Xenon and Cell were frequently used by developers to assist with graphics-related processing: lighting, SIMD work, depth of field, AA, post-processing effects, shadowing, the list goes on. These tasks are not necessarily GPGPU, many absolutely are not, but rather standard GPU tasks that developers offloaded to Cell and Xenon. So Nintendo bringing these tasks back onto the GPU does not mean it's being done via GPGPU, or that there will be a significant performance hit in doing so.
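To make it concrete what one of those "standard GPU tasks" looks like when it's expressed as compute work, here's a minimal CUDA sketch of a per-pixel post-processing pass (a simple exposure tone-map). Purely illustrative: the resolution, the curve, and CUDA itself are my assumptions for the example; the Wii U's AMD part obviously wouldn't run CUDA, this is just the shape of the work.

```cuda
// Minimal sketch of a per-pixel post-processing pass as a compute kernel.
// Illustrative only: resolution and tone-map curve are arbitrary choices.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void tonemap(const float* in, float* out, int n, float exposure) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // Simple exposure curve applied per colour channel.
        out[i] = 1.0f - expf(-in[i] * exposure);
    }
}

int main() {
    const int n = 1280 * 720 * 3;   // one 720p RGB framebuffer
    float *d_in, *d_out;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemset(d_in, 0, n * sizeof(float)); // a real renderer would write the scene here
    int threads = 256, blocks = (n + threads - 1) / threads;
    tonemap<<<blocks, threads>>>(d_in, d_out, n, 1.5f);
    cudaDeviceSynchronize();
    printf("tone-mapped %d channel values in one pass\n", n);
    cudaFree(d_in); cudaFree(d_out);
    return 0;
}
```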
Modern GPU architecture has been designed to allow the GPU to simultaneously handle diverse workloads like those above. Yes, the GPU would take a hit having to do all these tasks that Cell and Xenon traditionally assisted with, but Nintendo and ATi can easily offset that by utilizing modern architecture: increased SIMD and shader cores, SRFs, ROPs, etc. Modern GPU architecture is quite complex, with multiple pipelines, multiple cores for various tasks, and increased programmability to tap the raw power. It's not as simple as saying that if the Wii U's GPU needs to do the SIMD processing that Cell and Xenon did, the GPU is going to take a performance hit to its graphical abilities. It all depends on the Wii U's GPU architecture and how well it's able to handle the diverse range of tasks it's been loaded with. All we know is that the architecture and technology exist to build a GPU that is more than capable of handling a diverse workload without sacrificing X to process Y. Whether Nintendo and ATi implemented it in the Wii U is something I cannot answer.
Modern GPU architecture is designed around the idea of the GPU being able to load itself up with multiple different tasks: to handle physics, lighting, depth of field, and SIMD work, all while doing the traditional GPU grunt work. Go look at modern PC games and you'll find that, for the most part, the GPUs are already handling this stuff and more; there's a quick sketch of the physics case below.
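Here's another illustrative CUDA sketch of that "physics alongside graphics" idea: a naive Euler integration step for a particle system, the sort of job Cell's SPEs often got handed. Particle count, gravity, and timestep are arbitrary assumptions on my part, not anything from a real engine.

```cuda
// Naive per-particle physics step as a compute kernel. Illustrative only:
// the particle count, gravity constant, and 60Hz timestep are made up.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void integrate(float3* pos, float3* vel, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    vel[i].y -= 9.81f * dt;       // apply gravity
    pos[i].x += vel[i].x * dt;    // advance position by velocity
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

int main() {
    const int n = 65536;          // 64k particles
    float3 *d_pos, *d_vel;
    cudaMalloc(&d_pos, n * sizeof(float3));
    cudaMalloc(&d_vel, n * sizeof(float3));
    cudaMemset(d_pos, 0, n * sizeof(float3));
    cudaMemset(d_vel, 0, n * sizeof(float3));
    int threads = 256, blocks = (n + threads - 1) / threads;
    integrate<<<blocks, threads>>>(d_pos, d_vel, n, 1.0f / 60.0f);
    cudaDeviceSynchronize();
    printf("integrated %d particles for one 60Hz frame\n", n);
    cudaFree(d_pos); cudaFree(d_vel);
    return 0;
}
```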
The real question is how well ATi and Nintendo have designed the GPU architecture of the Wii U. If designed correctly, the Wii U's GPU should easily be able to handle all the GPU-assist tasks Cell and Xenon did, as well as the traditional GPU processing. There's no question that the Wii U's CPU is not going to match Cell and Xenon at these tasks, but we do NOT know how capable the GPU is at them. We can, however, look at modern GPU architecture in PCs from Nvidia and ATi to gauge the likelihood. It tells us that modern GPUs are more than capable of handling all the GPU-related tasks Cell and Xenon assist with.
The architecture Nintendo appear to have gone with for the Wii U is all about efficiency. The Xbox 360's and PS3's processors handled I/O, audio, security, and SIMD, assisted the GPU with graphical processing, and then did 'typical' processing on top. With the Wii U, Nintendo have offloaded sound, I/O, security, and the operating system to their own dedicated processors/silicon. They also appear to have offloaded SIMD and graphics-related tasks back onto the GPU. As such, does the Wii U's CPU need to be as beefy as Cell and Xenon when it seems like it's not doing anywhere near the same level of work?
Teardowns have shown the Wii U's CPU is around 1/3rd the transistor count of the Xenon CPU in the Xbox 360. Does anyone know the transistor count of the Wii U's GPU vs the Xbox 360's and PS3's? That's something I'd be very keen to find out.
If we combined the transistor counts of the Wii U's CPU, DSP, I/O processor, ARM security processor, and ARM OS processor, I wonder how it would compare to Xenon and Cell in pure transistor count. Simply put, the Wii U's architecture is radically different from the Xbox 360's and PS3's: both of those systems favoured using the CPU heavily to assist with everything from security, I/O, sound, and the operating system through to graphics. Nintendo have left the CPU to do very specific tasks, with other sub-processors or silicon to do the rest.
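Just to show the back-of-envelope I have in mind (and how incomplete it is), here's a plain C sketch. Xenon at ~165M transistors and Cell at ~234M are the commonly cited figures (sources vary a bit on Cell); the Wii U CPU number simply applies the ~1/3rd ratio from the teardowns above, and the sub-processor counts are genuinely not public, so they sit at zero as placeholders.

```c
// Back-of-envelope transistor comparison. Xenon/Cell figures are commonly
// cited public numbers; the Wii U CPU figure is just the ~1/3rd-of-Xenon
// teardown estimate; sub-processor counts are unknown placeholders (zero).
#include <stdio.h>

int main(void) {
    double xenon_mt    = 165.0;            // Xenon, millions of transistors
    double cell_mt     = 234.0;            // Cell BE (sources vary slightly)
    double wiiu_cpu_mt = xenon_mt / 3.0;   // ~1/3rd of Xenon per teardowns
    double dsp_mt = 0, io_mt = 0, arm_sec_mt = 0, arm_os_mt = 0; // not public
    double wiiu_total = wiiu_cpu_mt + dsp_mt + io_mt + arm_sec_mt + arm_os_mt;
    printf("Wii U CPU + sub-processors: >= %.0fM (sub-processors unknown)\n", wiiu_total);
    printf("Xenon: %.0fM, Cell: %.0fM\n", xenon_mt, cell_mt);
    return 0;
}
```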
I think docking 50-100Gflops for "GPGPU" functionality is fair enough, particularly when comparing to PS3, as the SPE tasks are the type of things most likely to be offloaded to the GPU on Wii U. That said, from my reading on SPE usage by studios like Guerrilla and Naughty Dog, it seems most of it's dedicated to tasks which would normally fall under plain old GPU functionality, particularly lighting.
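For reference, the paper math behind that, as a quick C sketch: 25.6 GFLOPS per SPE (4-wide single-precision SIMD with fused multiply-add at 3.2 GHz) and 6 game-available SPEs are the standard public figures. These are theoretical peaks, so treat the total as an upper bound on what could even be moved.

```c
// Peak single-precision throughput of the PS3's game-available SPEs, for
// context on the 50-100 GFLOPS figure above. Peak numbers only.
#include <stdio.h>

int main(void) {
    double ghz = 3.2;                        // SPE clock
    double flops_per_cycle = 4.0 * 2.0;      // 4-wide SIMD x FMA (mul + add)
    double per_spe = ghz * flops_per_cycle;  // = 25.6 GFLOPS per SPE
    int game_spes = 6;                       // 8 on die, 7 active, 1 OS-reserved
    printf("per SPE: %.1f GFLOPS peak\n", per_spe);
    printf("6 game SPEs: %.1f GFLOPS peak\n", per_spe * game_spes);
    return 0;
}
```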
I suppose I'd be interested to get a quote from a developer saying "we're using about X% of our GPU power for physics, etc.", to give us a better idea of the real-world efficiency for these things.
Exactly my point. Some of the heaviest use of Cell we've seen in games has been for GPU-related tasks, tasks that a GPU with modern architecture would handle on its own.