Thanks for the insight
If they dropped the res because they spent the GPU power they needed for native res on rounding the scopes and adding some tessellated rocks and bricks, that is by far the silliest choice.
Don't suppose you want to elaborate on the ESRAM?
Also, you said you assume it uses a forward renderer (it does; COD games always have proper MSAA support).
Well. It's all guesses and speculation
You know, internet expert
The problems you might encounter with ESRAM would relate to how you use (and reuse) render targets within a frame. There is no doubt that certain things would benefit greatly from being in ESRAM compared to DDR, especially temporary buffers that get overwritten entirely and then can be thrown away (depth buffers often fall into this category).
Most modern games will easily use more than 32MB of render targets during a single frame (remember the SF presentation where they had 800MB at one point?!). The difficulty would be choosing which RTs sit in ESRAM, and when. If you aren't clearing the RT and regenerating it from scratch, then that would potentially mean copying the RT into ESRAM (from DDR at 68GB/s max) and then potentially copying it back out to DDR to make room once you finish - which would call into question whether it should even go into ESRAM at all. In a situation like that, you'd need the average number of cache-missing reads/writes per pixel while in ESRAM to be pretty high to get past 'break even' on saved bandwidth and time.
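A back-of-envelope sketch of that break-even point, if you like numbers. The 68GB/s DDR3 figure is the published Xbox One spec; the usable ESRAM bandwidth here is an assumed 109GB/s, and the model ignores that the copy itself also eats ESRAM write bandwidth - so treat it as a rough illustration, not a measurement:

```python
# Hedged back-of-envelope: when does moving a render target into ESRAM pay off?
# Bandwidth figures are illustrative assumptions, not measured numbers.
DDR_BW = 68e9      # DDR3 peak bandwidth, bytes/s (published Xbox One figure)
ESRAM_BW = 109e9   # assumed usable ESRAM bandwidth, bytes/s

def break_even_accesses(rt_bytes, copy_back=True):
    """Whole-RT accesses needed before ESRAM residency beats leaving the RT in DDR.

    copy_back=True models copying the RT in from DDR *and* evicting it back out;
    a cleared-and-discarded RT (depth buffer style) would pay neither copy.
    """
    copies = 2 if copy_back else 1
    copy_cost = copies * rt_bytes / DDR_BW           # time spent shuffling the RT
    # Each full pass over the RT in ESRAM saves the DDR-vs-ESRAM time difference:
    saving_per_access = rt_bytes / DDR_BW - rt_bytes / ESRAM_BW
    return copy_cost / saving_per_access

rt_bytes = 1920 * 1080 * 4   # a 1080p 32-bit render target, ~8.3 MB
print(break_even_accesses(rt_bytes))   # roughly 5.3 full-RT accesses to break even
```

Note the RT size cancels out entirely - the break-even count depends only on the bandwidth ratio and whether you pay the copy both ways, which is why the access pattern matters more than the buffer size.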
It's a little hard to explain, but it'll be a very difficult balancing act (haha!) to get right, especially if an engine likes to keep repeatedly accessing lots of big render targets over the course of a frame (e.g. shadow maps). This is where deferred would help - you could generate the shadow map just before rendering the deferred light, then throw it away. In a forward renderer, you'd likely need to keep it around for most of the frame.
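To make that balancing act concrete, here's a toy sketch: some hypothetical render targets with per-frame lifetimes, and the peak footprint if every one of them were ESRAM-resident for its whole lifetime. All the names, sizes, and pass numbers are made up for illustration - the point is just that overlapping lifetimes blow past 32MB even for a modest RT set, so something has to spill or be scheduled around:

```python
# Toy sketch: peak ESRAM footprint of hypothetical render targets with
# per-frame lifetimes. Names, sizes, and pass indices are invented.
ESRAM_SIZE = 32 * 1024 * 1024

# (name, size_bytes, first_use_pass, last_use_pass)
targets = [
    ("depth",      8_294_400, 0, 5),   # lives most of the frame
    ("shadow_map", 4_194_304, 1, 2),   # deferred style: used briefly, then discarded
    ("gbuffer0",   8_294_400, 1, 3),
    ("gbuffer1",   8_294_400, 1, 3),
    ("hdr_light", 16_588_800, 3, 5),
]

def peak_usage(targets):
    """Max simultaneous footprint if every RT stayed resident for its lifetime."""
    last_pass = max(t[3] for t in targets)
    return max(sum(size for _, size, first, last in targets if first <= p <= last)
               for p in range(last_pass + 1))

print(peak_usage(targets) / 2**20, "MiB peak")   # overflows the 32 MiB pool
```

Shortening the shadow map's lifetime (the deferred trick above) is exactly the kind of change that pulls the peak back down; a forward renderer holding it from pass 1 to 5 would make the overlap worse.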
In some ways it's a more complex problem than the EDRAM on the 360 - that could only hold the current RT and was always copied back out to GDDR when you were done, so the usage pattern was pretty simple and easy to understand.