There are plenty of examples of PC games with an option to shift physics to the GPU, and they deliver the same graphics with better physics than when using the CPU. Of course, spending FLOPS on physics leaves fewer for rendering; that has to be balanced by the developer, and nobody's claiming it's a magic bullet. But neither is physics on a GPU as pointless as you're making it sound. GPGPU functionality can absolutely take a significant burden off a CPU during rendering, and as you know there is no fixed amount by which it slows rendering down. Just because you're seeing that level of slowdown with your code doesn't make it a guideline for all GPGPU work on all GPUs.
Well, that's not really the same thing. PCs generally have a lot more spare resources than a console. When you flip the switch to turn on PhysX or whatever, depending on the card you have, you'll notice a fair amount of framerate drop. More or less.
On consoles, developers have to budget resources, and generally pretty graphics win out. Other than this generation's relative lack of logical hardware power, I don't know that CPUs were too burdened anyway. Most developers were racking their brains figuring out what else they could move to the CPU.
If you want to get deeper into it we can, but GPUs aren't the godsend for physics that some make them out to be. There are some aspects where a good CPU would best them.
I don't recall anybody claiming that the GPGPU comes for free.
It's a simple fact of the matter, though, that the GPU will handle some GP tasks better than the CPU. All next-gen GPUs will be doing that. Will their framerates 'universally suffer' from it? It really depends on what the games are trying to do - what their GP tasks and rendering tasks involve.
Last but not least, games don't necessarily shoot for uncapped framerates, whereas CUDA tasks do - they try to get maximum sustained performance. Console games don't try to do that - they aim for a fixed performance target (whether that target is close to the GPU's max is another matter).
I really don't agree with this. But I've seen "GP" cover so many different aspects that I really don't have a clear idea what you may be alluding to.
GPGPUs are pretty much a huge collection of SIMD units - a lot like the Cell SPEs, except with vastly more horsepower. They share the same drawbacks, though.
1. Memory latency. The hardest one to shake, because GPUs are built to tolerate latency rather than combat it.
2. Wide SIMD lanes, so they generally struggle with anything branchy or logic-heavy.
3. A lack of branch hardware. This was the case this gen too, so it won't be that big of a change next generation, but the few developers I know aren't fond of it at all.
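To make points 2 and 3 concrete: on a wide SIMD machine, when lanes in one SIMD group disagree on a branch, the hardware typically executes both paths and masks lanes off, so divergent code pays for both sides. Here's a toy cost model in Python (the 32-lane width and cycle counts are made-up illustration numbers, not any real GPU's behavior):

```python
WARP_WIDTH = 32  # hypothetical SIMD group width

def warp_branch_cost(conditions, cost_if_true, cost_if_false):
    """Cycles a toy 32-wide SIMD group spends on an if/else,
    given each lane's branch condition."""
    any_true = any(conditions)
    any_false = not all(conditions)
    cost = 0
    if any_true:
        cost += cost_if_true   # run the taken path, other lanes masked off
    if any_false:
        cost += cost_if_false  # run the other path, taken lanes masked off
    return cost

# Uniform branch: every lane agrees, so only one path executes.
uniform = warp_branch_cost([True] * WARP_WIDTH, 10, 40)       # -> 10

# Divergent branch: one lane disagrees, so BOTH paths execute.
divergent = warp_branch_cost([True] * 31 + [False], 10, 40)   # -> 50

print(uniform, divergent)
```

A scalar CPU with real branch hardware would just predict and take one path per element, which is why branchy "logical" workloads can favor the CPU even when the GPU has far more raw FLOPS.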
Personally, I would rather just integrate a few decent SIMD units into the CPU and let my GPU do its thing, but perspective is always interesting.
What would you rather see?