I agree.
Back on topic, there are still some ongoing questions about Latte:
eDRAM: Bobblehead from the Beyond3D forum stated that devs have access to the 32MB of eDRAM at 31.7GB/s, and that the 2+1MB of eDRAM/1T-SRAM is not accessible to devs but is somehow used by the system when running Wii U games. We still don't know how those extra banks are being used, and something odd mentioned in Latte's docs (I'll leave BG to clarify if he wants) implies there is more going on with mem1.
That's way too low. I mean, even if Nintendo went with a 512-bit bus, at 550 MHz that's 35.2 GB/s.
Does that mean the memory inside the GPU die is only running at around 490 MHz? If that's the case, then Nintendo has screwed up to a point that's almost unbelievable. I mean, the point of having that eDRAM is the near-zero latency, but with the memory at ~490 MHz and the GPU at 550 MHz there will always be added latency because the memory is that much slower.
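The arithmetic behind those numbers is easy to sanity-check. A quick sketch (the 512-bit bus width is an assumption for the sake of the example; only the 31.7 GB/s figure and the 550 MHz GPU clock come from the discussion):

```python
# Peak bandwidth of a memory pool = (bus width in bytes) * clock.
# 512-bit bus is assumed here, not confirmed; 31.7 GB/s and 550 MHz
# are the figures quoted in this thread.

def bandwidth_gb_s(bus_width_bits, clock_mhz):
    """Peak bandwidth in GB/s for a given bus width and clock."""
    return (bus_width_bits / 8) * clock_mhz * 1e6 / 1e9

def implied_clock_mhz(bw_gb_s, bus_width_bits):
    """Clock (MHz) needed to hit a given bandwidth on a given bus."""
    return bw_gb_s * 1e9 / (bus_width_bits / 8) / 1e6

print(bandwidth_gb_s(512, 550))      # 35.2 GB/s at the full 550 MHz clock
print(implied_clock_mhz(31.7, 512))  # ~495 MHz implied by 31.7 GB/s
```

On a 512-bit bus, 31.7 GB/s actually implies roughly 495 MHz rather than 490, but either way it's below the 550 MHz GPU clock.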
Why the hell did Nintendo waste a third of the big die's area on something that:
1. Has less memory bandwidth than it should have. The GC's 1MB of texture memory had 10.4 GB/s of theoretical bandwidth. Now that we're talking about 720p resolutions, that 1MB can't be used the same way and won't be enough.
What's more, if backwards compatibility works like it should, that 1MB should have 15.6 GB/s of bandwidth in Wii mode, and if its speed scales proportionally in Wii U mode, it should be faster than the 32MB bank.
2. Still adds latency to the GPU logic, because it runs slower than the rest of the chip.
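The scaling argument in point 1 checks out. A quick sketch, using the known GPU clocks of the three consoles (Flipper at 162 MHz, Hollywood at 243 MHz, Latte at 550 MHz); the proportional-scaling assumption is the thread's, not a confirmed spec:

```python
# Scale the GC's 1MB texture memory bandwidth by GPU clock ratio.
# 10.4 GB/s at 162 MHz (GameCube) -> 243 MHz (Wii) -> 550 MHz (Wii U).
# Proportional scaling in Wii U mode is an assumption.

GC_BW, GC_CLK = 10.4, 162.0   # GB/s, MHz
WII_CLK, WIIU_CLK = 243.0, 550.0

wii_bw  = GC_BW * WII_CLK / GC_CLK    # Wii mode
wiiu_bw = GC_BW * WIIU_CLK / GC_CLK   # hypothetical Wii U mode

print(round(wii_bw, 1), round(wiiu_bw, 1))  # 15.6 35.3
```

So if that 1MB really scales with the clock, it would sit at ~35.3 GB/s, above the 31.7 GB/s quoted for the 32MB bank.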
I don't know how Nintendo could screw this up that badly. I mean, they seemed to know how to design memory architectures these past years, but with this information I'm starting to think the Wii U is as inefficient a design as the Xbox 360 or the PS3 were, and even less powerful than them.