Lol, good, because that is the actual difference in raw performance.
Not quite. It represents a difference in raw floating-point throughput; how that throughput translates to performance is a whole different ball game.
And that's not even getting into efficiency, differences in architecture... Your workload simply might not need the extra operations. Say, for example, that all you want to do is draw big black images as fast as you can. Performance won't be determined by the FLOP rate at all.
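To put some toy numbers on that (all figures here are made up for illustration, not real console specs): filling a frame with flat black is bound by how fast pixels can be written out, so the ALU rate drops out of the frame-time equation entirely.

```python
# Toy model: frame time for drawing a flat black 1080p image.
# Hypothetical fill rate; real hardware numbers would differ.

PIXELS = 1920 * 1080      # pixels per frame
FILL_RATE = 4e9           # hypothetical pixels/second the ROPs can write

def frame_time(flops_per_sec):
    # A flat black fill needs ~0 ALU work per pixel, so the FLOP
    # rate contributes nothing to the frame time.
    alu_time = 0 / flops_per_sec
    fill_time = PIXELS / FILL_RATE
    return alu_time + fill_time

# A GPU with 50% more FLOPs draws the black frame no faster.
print(frame_time(1.2e12) == frame_time(1.8e12))  # True
```

Obviously no real workload is that degenerate, but the same logic applies in softer form whenever a game is fill-rate or bandwidth bound rather than ALU bound.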
Of course, there's not much use for big black images, but not every game is going to be held back by ALU performance. Heck, even in some fairly high-profile games this generation, developers have come out and said that they still have processing power sitting unused on 7-year-old hardware (360 and PS3) and that they can't put it to good use because they are being held back by memory.
My point is: not knowing all the details of the architectures means we can't say precisely where each of them is going to have an advantage over the other, and not knowing which kinds of games these consoles are going to run means we can't say much of anything about their final performance. Say, in a hypothetical scenario, Orbis's massive bandwidth gives it an immense edge over Durango in deferred rendering. But for some reason developers decide to stick to forward rendering (be it the lowest common denominator, GPGPU being used in a way that gets them the advantages of forward rendering and the deferred ones too, etc.). In forward rendering Orbis's extra bandwidth doesn't make much of a difference, while Durango's memory setup lets it compensate for the floating-point advantage and then some, but by a smaller margin than DF would yield, so developers stick with that for parity's sake.
Even if a company designs a game console that excels at current games, that doesn't mean it will hold true for games 2-3 years or more down the line. That kind of happened with the 360. Its eDRAM setup was less than ideal for deferred rendering, and even its biggest advantage over the PS3 (lower-cost MSAA) was pretty much nullified; with MLAA there were actually some cases where the PS3 turned the tables in its favor...
RSX = 400GFLOPs, Xenos = 250GFLOPs... yes, flops flops everywhere...
We don't know that.
Not to enter into a PS3 vs 360 debate this late in the generation, but those FLOP figures for RSX are simply not true.
RSX is in fact very close to Xenos in theoretical floating-point performance (as far as pixel shading goes).
The "theoretical" performance gets inflated because people assume they can simply add up all the units that execute operations, but by the design of the architecture those numbers can't just be added together.
An oversimplified quick example: RSX has 8 vertex shading units, and those FLOP figures usually add them to the pixel-shading ones. But during a frame you usually do vertex processing before any pixel processing, so in reality, while a game is drawing its geometry, the weak (compared to the pixel shaders) vertex units are actually stalling the pixel shader units, which have to sit there waiting for the vertex work to finish before they can start their own. On Xenos that's not a problem, because all of its execution units can be dedicated to vertex processing, so Xenos finishes that job more quickly and gets more time to do pixel shading work. So even though its units have theoretical performance close to RSX's pixel shaders, they can achieve a higher throughput.
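To make that oversimplified example concrete, here's a toy throughput model (all unit counts and workload numbers are made up for illustration, not real RSX/Xenos specs). It compares a split vertex/pixel design, where the pixel units idle while the weaker vertex units finish, against a unified design that points every unit at whichever stage is currently running:

```python
# Toy model of one frame: vertex work must finish before pixel work starts.
# Work is measured in arbitrary units; each execution unit retires
# 1 unit of work per unit of time.

def frame_time_split(vertex_work, pixel_work, vtx_units, pix_units):
    # Split design: vertex stage runs on the vertex units alone,
    # while the pixel units sit idle; then the pixel stage runs.
    return vertex_work / vtx_units + pixel_work / pix_units

def frame_time_unified(vertex_work, pixel_work, total_units):
    # Unified design: every unit does vertex work first, then every
    # unit switches over to pixel work.
    return (vertex_work + pixel_work) / total_units

# Hypothetical workload: 80 units of vertex work, 400 of pixel work.
split = frame_time_split(80, 400, vtx_units=8, pix_units=24)   # 10 + ~16.7
unified = frame_time_unified(80, 400, total_units=24)          # 480 / 24 = 20

print(split, unified)  # the unified design finishes the frame sooner
```

Note that the split design here nominally has more units in total (8 + 24 = 32 vs 24), so naively adding them up would make it look faster on paper, yet the unified design still finishes the frame sooner because nothing ever stalls waiting on the vertex stage.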