Funny how some people think you can optimize things without limits
and at the same time keep all the graphical fidelity and deep, resource-hungry subsystems you had before you started optimizing.
Of course some code isn't optimal when first written, and some things can be optimized without losing any quality whatsoever, but the vast majority of performance-related work is about compromises: something has to go to make room for the extra performance. The goal, of course, is to remove or scale down things without anyone noticing, but that isn't always possible, or not in all cases. So a compromise has to be made: keep the feature and live with the extra perf hit, or remove it or scale it down and gain the performance.
And there's no end to it; it's a slippery slope: there's always something left to optimize by taking it away or scaling it down. How far you go depends on the quality bar you want to stick to, as that's the other side of the compromise.
Gamers, of course, are never happy with the compromises taken: they want it all:
all the graphical fidelity and richness,
and all the performance. But that's not possible: to gain one, you have to sacrifice the other. It's a balance. With tricks you can get very far (as in: the gamer won't notice they're being tricked), but there's obviously an end to that. With settings you can mitigate the burden a bit: the gamer decides where to compromise, and that's perhaps the best way to do it, but it's still a balance. You can't have everything; you can only choose where your pain will be, less graphical fidelity or less performance, given the only constant in the equation: the hardware.
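To make the settings point concrete, here's a minimal sketch of that trade-off. Everything in it is invented for illustration (preset names, resolutions, millisecond costs): each preset buys fidelity at a higher per-frame GPU cost, and that cost directly caps the frame rate you can reach.

```python
# Hypothetical quality presets: fidelity up, frame-time cost up, fps ceiling down.
# All names and numbers below are made up for the example.
PRESETS = {
    # preset: (shadow_resolution, draw_distance_m, est_gpu_ms_per_frame)
    "low":    (512,   500,  8.0),
    "medium": (1024, 1000, 12.0),
    "high":   (2048, 2000, 16.0),
    "ultra":  (4096, 4000, 22.0),
}

def max_fps(preset: str) -> float:
    """Frame-rate ceiling implied by the preset's estimated GPU cost."""
    _, _, gpu_ms = PRESETS[preset]
    return 1000.0 / gpu_ms

for name in PRESETS:
    print(f"{name:>6}: ~{max_fps(name):.0f} fps ceiling")
```

The point of exposing this as settings is that the player, not the developer, picks the spot on the curve: "ultra" here can never exceed ~45 fps on this imaginary hardware, no matter how clever the code is.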
This whole 'downgrade!' debate is purely the result of a lack of understanding of what it actually takes to render a full frame on common hardware
while at the same time running a massive number of subsystems concurrently that make up the game world and all its elements, combined with over-eager marketing people who think it's wise to show the world imagery from pre-alpha code in which maybe half of the final game's subsystems aren't even running yet (so all the available hardware is put to work running only half the code that will make up the full game). Once the full game code and assets are complete, the hardware has to be shared by all these subsystems, which obviously leaves less room for the systems that were previously running the pre-fabricated demo for, say, E3. Getting near those results with final code is often impossible, as it would mean all the subsystems and assets added on top of the demo's elements would have to run at infinite speed.
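A back-of-the-envelope sketch of that frame-budget arithmetic, with subsystem names and costs invented purely for illustration: a scripted pre-alpha demo running only a few subsystems fits comfortably in a 60 fps frame, while the full game's roster of subsystems blows the very same budget.

```python
# Hypothetical frame-budget arithmetic. At 60 fps every frame has ~16.7 ms;
# all subsystem names and millisecond costs are invented for illustration.
FRAME_BUDGET_MS = 1000.0 / 60.0   # ~16.67 ms per frame at 60 fps

demo_subsystems = {               # a scripted pre-alpha demo might run only these
    "rendering": 9.0,
    "animation": 2.0,
    "scripted_sequence": 1.0,
}

full_game_subsystems = {          # the shipped game must also fit all of these
    **demo_subsystems,
    "ai": 2.5,
    "physics": 2.0,
    "streaming": 1.5,
    "audio": 1.0,
    "ui": 0.5,
}

def headroom(subsystems: dict) -> float:
    """Milliseconds left in the frame after all subsystems have run."""
    return FRAME_BUDGET_MS - sum(subsystems.values())

print(f"demo headroom:      {headroom(demo_subsystems):+.1f} ms")
print(f"full game headroom: {headroom(full_game_subsystems):+.1f} ms")
```

With these made-up numbers the demo has milliseconds to spare that the renderer can spend on fidelity, and the full game is in the red: something has to be scaled down to get back under budget, which is exactly the "downgrade" the gamer later notices.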
So you compromise: something has to give here and there to make room for them. It then comes down to this, from the gamer's PoV: do I trust this dev to have made the right compromises, and to have done everything possible within those compromises?