On PCs you can observe that performance decreases roughly linearly with resolution (as long as you don't run out of memory) and is rarely limited by ROP count.
For example: from the GTX 580 to the GTX 680, the ROP count decreased (48 -> 32) while the clock didn't rise by a corresponding amount. If you look at benchmarks at high resolutions and/or with SSAA, you'll see that the GTX 680's performance advantage over the GTX 580 stays the same.
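A quick sanity check on that comparison. Using the reference clocks (772 MHz for the GTX 580, 1006 MHz base for the GTX 680) and the nominal ROP counts, the theoretical peak fillrates come out fairly close, which fits the observation that the performance gap doesn't widen at high resolutions:

```python
# Theoretical peak colour fillrate = ROP count * core clock.
# Clocks are the published reference specs; real throughput can be
# lower still (rasterizer limits, bandwidth, blending costs).

def peak_fillrate_gpix(rops: int, clock_mhz: int) -> float:
    """Peak pixels written per second, in GPix/s."""
    return rops * clock_mhz / 1000.0

gtx580 = peak_fillrate_gpix(48, 772)   # ~37.1 GPix/s
gtx680 = peak_fillrate_gpix(32, 1006)  # ~32.2 GPix/s
print(f"GTX 580: {gtx580:.1f} GPix/s, GTX 680: {gtx680:.1f} GPix/s")
```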
Well, you're talking about ~8x the raw fillrate of games designed around current console limitations, so it's not likely to be ROP limited (i.e. games now are pretty conservative with transparencies and overdraw, and shadow resolutions are also pretty poop). With MSAA enabled, PC cards are in an even better situation because they sport single-cycle 4x sampling and much higher Z-rates (depth - shadows, overdraw).
32 ROPs are probably going to be enough for a long while. The question is whether they'll spend the transistors on more ROPs vs other features (ALUs, caches, rasterizers, TMUs etc). In 2005, 8 ROPs was half of the max that high-end PCs were sporting. The standard ROP has fluctuated somewhat in capabilities since then (e.g. sampling rates, render target support, blending capabilities, floating point rates etc). Ultimately, they'll want to balance it against the target bandwidth being provided (and shared with texturing/filtering etc.).
32 ROPs may be a given for mid-range PC hardware, but I wouldn't necessarily assume it to be so for consoles given the conservative rumours (we know very little as it is anyway). 32 ROPs @ 1 GHz certainly makes more sense for >1080p or multi-monitor resolutions that do demand such obscene fillrates (aside from raw math per pixel). There are probably better things to have given a more limited transistor & power budget vs a 200W+ GPU though.
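To put that in perspective, a rough demand-side estimate (resolution × framerate × average overdraw; the 4x overdraw factor and 60 fps target here are purely illustrative assumptions, and real frames add blending, shadow passes and MSAA samples on top):

```python
# Rough colour-fill demand for a frame: resolution * fps * overdraw.
# The 4x overdraw factor is an illustrative assumption, not a measurement.

def fill_demand_gpix(width: int, height: int, fps: int, overdraw: float) -> float:
    """Pixels per second that must be written, in GPix/s."""
    return width * height * fps * overdraw / 1e9

demand_1080p = fill_demand_gpix(1920, 1080, 60, 4.0)  # ~0.50 GPix/s
demand_1600p = fill_demand_gpix(2560, 1600, 60, 4.0)  # ~0.98 GPix/s
rop_supply = 32 * 1000 / 1000.0  # 32 ROPs @ 1 GHz -> 32 GPix/s peak

print(demand_1080p, demand_1600p, rop_supply)
```

Even with generous overdraw, the raw numbers leave a lot of headroom, which is why the transistors may be better spent elsewhere.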
----
Btw, while the GTX 580 had 48 ROPs, it could still only process 32 colour pixels (fragments) per clock. The extra ROPs did give extra Z-fill, though, but again, games are hardly pushing things on that front at the moment. The 48 ROPs had to do with how they were tied to the memory controllers: a 384-bit bus demanded that many ROPs be connected in hardware. Since each of the four rasterizers could only handle 8 fragments/pixels per clock, you'd only get 32 pixels/clock (non-Z).
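In other words, colour throughput per clock is capped by whichever stage is narrower, the ROPs or the rasterizers feeding them. A minimal sketch of that bottleneck, using the GF110 figures from above (4 rasterizers × 8 pixels/clock vs 48 ROPs):

```python
# Colour pixels per clock = min(ROP width, rasterizer output width).
# Unit rates match the GTX 580 description: 4 rasterizers * 8 pixels/clock.

def colour_pixels_per_clock(rops: int, rasterizers: int, pix_per_raster: int) -> int:
    """Effective colour pixels emitted per clock, limited by the narrower stage."""
    return min(rops, rasterizers * pix_per_raster)

# GTX 580: 48 ROPs, but only 32 pixels/clock can be fed to them.
print(colour_pixels_per_clock(48, 4, 8))  # 32
```

So the extra 16 ROPs never see colour work; they only help on Z-only passes, which run at higher rates.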
----
Even the texture rates are significantly higher for PC, and with texture resolutions not that much higher than console, the demand on filtering isn't going to be significant until you start raising quality settings on certain texture-heavy shaders, but still... we're not doing anything particularly heavy yet due to console considerations.
(Btw., RSX has 8 ROPs, clocked at 550 MHz.)
500MHz.