Kenka mentioned me a few pages back, so I might as well give my two cents.
First, it's worth keeping in mind that the general expectation until very recently was a CPU clocked around 2GHz (many estimates hovered around the 1.8GHz mark) and a GPU at 500MHz or under (my guess was 480MHz).
The main take-home from the real clock speeds (a higher-clocked GPU and a lower-clocked CPU than anticipated) is that the console is even more GPU-centric than expected. And from the sheer die-size difference between the CPU and the GPU, we already knew it was going to be seriously GPU-centric.
Basically, Nintendo's philosophy with the Wii U hardware is to have all Gflop-limited code (i.e. code which consists largely of raw computational grunt work, like physics) offloaded to the GPU, and to keep the CPU dedicated to latency-limited code like AI. The reason for this is simply that GPUs offer much better Gflop-per-watt and Gflop-per-mm² characteristics, and when you've got a finite budget and thermal envelope, those things are important (even to MS and Sony, although their budgets and thermal envelopes may be much higher). With out-of-order execution, a short pipeline, and a large cache, the CPU should be well suited to handling latency-limited code, and I wouldn't be surprised if it could actually handle pathfinding routines significantly better than Xenon or Cell (even with the much lower clock speed). Of course, if you were to try to run physics code on the Wii U's CPU it would likely get trounced, but that's not how the console is designed to operate.
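To make that split concrete, here's a minimal C++ sketch of the two kinds of workload (the function and type names are my own, purely for illustration):

```cpp
#include <cstddef>
#include <vector>

// Gflop-limited: straight-line arithmetic over big arrays. The bottleneck is
// raw ALU/SIMD throughput, so a wide GPU runs this far faster than a narrow
// CPU core, almost regardless of clock speed.
void integrate_particles(std::vector<float>& pos,
                         const std::vector<float>& vel, float dt) {
    for (std::size_t i = 0; i < pos.size(); ++i)
        pos[i] += vel[i] * dt;  // independent iterations, trivially parallel
}

// Latency-limited: each step depends on the result of the previous one
// (pointer chasing, unpredictable branches). Raw flops barely matter here;
// what matters is how quickly a single core resolves cache misses and
// branches, which is where out-of-order execution, a short pipeline, and a
// large cache pay off.
struct NavNode {
    NavNode* next_toward_goal;  // hypothetical navmesh link
};

int count_hops(const NavNode* n) {
    int hops = 0;
    while (n != nullptr) {      // serial dependency chain: can't be vectorised
        n = n->next_toward_goal;
        ++hops;
    }
    return hops;
}
```

The first loop is exactly the kind of thing the GPU is meant to soak up; the second is why a lower-clocked out-of-order core can plausibly beat a much higher-clocked in-order one like Xenon on code like pathfinding.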
The thing is that, by all indications, MS and Sony's next consoles will operate on the same principle. The same factors that make GPUs better than CPUs at many tasks these days apply to them too, and it looks like they'll combine Jaguar CPUs (which would be very similar to Wii U's CPU in performance, although clocked higher) with big, beefy GPUs (obviously much more powerful than Wii U's).