There are too many factors at play here.
- Software at the game-development level is almost never written directly for bare metal (I'd guess never for modern-gen consoles), i.e. it does not directly interface with the hardware components, and the developers don't deal with stuff like writing their own drivers for hardware modules or optimizing existing ones.
- The operating system and middleware (engines, third-party libraries) add complexity (stuff that can bug out) and additional layers for interfacing with the hardware. They make the developer's life easier (less to deal with, more focus on gameplay code) but inevitably introduce things beyond the developer's control.
- Things like portability, backwards compatibility, maybe even intentional parity between platforms (for marketing reasons) factor into development. Developer teams don't have the resources (time, money, or expert know-how) to deal with each target system in a dedicated manner and implement all the optimizations specific to it. Especially when the producer's/publisher's goal is a simultaneous multiplatform release, and if that includes PC (which is hard to optimize for, since there's so much variety in hardware), or even worse multiple PC OSes (Linux, Windows, Mac), then no one in their right mind would even consider putting in the time to push each different target hardware to its limits.
- Engine/tech demos are often non-viable to translate as-is into a proper game. For the sake of pretty visuals in a scripted scenario or a limited gameplay scope, demos are allowed to waste precious system resources on textures and reflections, and to not care about framerate issues, bugs, broken/non-existent AI, game loops, battle choreography, online gameplay coordination, or whatever else is needed for a smooth gameplay experience. A demo's goal is to impress and sell an engine (or maybe the concept of a game). Which is why "downgrading" in the final game product has been a thing since forever in the industry.
- Game development 101 typically starts with a lesson on how to abstract appropriately, cut corners, and seek/invent workarounds to solve problems efficiently. Full realism or "as realistic as possible" is rarely a programmer's/developer's goal -- it's a marketing goal for sure, but the weight of cutting corners still falls on the developer. If a thing doesn't have to be (fully) rendered, then it probably should not be; if it doesn't matter that an object isn't reflected 100% accurately, it shouldn't be; and so on (see the first sketch below this list).
- For various reasons, it's not even desirable to have a system operating at the limits of its maximum performance for long periods of time: overheating, system halts triggered by edge cases or bad interrupt and I/O handling, or fast battery depletion (for portables or controllers). That's one reason frame-rate caps exist (see the second sketch below this list).
- Edit: Yes, of course there are also the cases of lazy developers, project mismanagement by higher-ups, and cookie-cutter games released with minimal effort to optimize or do much different, really...
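
To make the "don't render what doesn't matter" point concrete, here's a minimal distance-culling sketch in C++. Everything in it is hypothetical (the `Object` struct, the names, the bounding-sphere idea); real engines use much more sophisticated frustum/occlusion culling, but the mindset is the same: skip work the player will never see.

```cpp
#include <vector>

// Hypothetical scene object: a position plus a bounding-sphere radius.
struct Object {
    float x, y, z;
    float radius;
};

// Squared distance from the object to the camera; skipping the sqrt per
// object is itself a tiny example of not paying for what you don't need.
static float distSq(const Object& o, float cx, float cy, float cz) {
    float dx = o.x - cx, dy = o.y - cy, dz = o.z - cz;
    return dx * dx + dy * dy + dz * dz;
}

// Collect only the objects worth drawing this frame; everything beyond
// maxDrawDistance is skipped outright instead of rendered "just in case".
std::vector<const Object*> visibleObjects(const std::vector<Object>& scene,
                                          float camX, float camY, float camZ,
                                          float maxDrawDistance) {
    std::vector<const Object*> result;
    for (const Object& o : scene) {
        // Pad by the object's radius so big objects don't pop out too early.
        float reach = maxDrawDistance + o.radius;
        if (distSq(o, camX, camY, camZ) <= reach * reach) {
            result.push_back(&o);
        }
    }
    return result;
}

int main() {
    std::vector<Object> scene = {{0, 0, 5, 1}, {0, 0, 500, 1}, {0, 0, 90, 20}};
    auto visible = visibleObjects(scene, 0, 0, 0, 100.0f);
    // visible now holds the first and third objects; the second is culled.
}
```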
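And for the point about not running hardware flat out: a rough sketch of a frame-rate cap, again with hypothetical names. Sleeping off the unused frame time lets the CPU/GPU idle instead of burning power (and generating heat) on frames nobody asked for.

```cpp
#include <chrono>
#include <thread>

// Stand-in for a real game's per-frame update + render work.
void updateAndRender() { /* ... */ }

// Run the main loop capped at targetFps for a fixed number of frames.
void runCapped(int targetFps, int frames) {
    using clock = std::chrono::steady_clock;
    const auto frameBudget = std::chrono::duration_cast<clock::duration>(
        std::chrono::duration<double>(1.0 / targetFps));
    auto nextFrame = clock::now();
    for (int i = 0; i < frames; ++i) {  // a real game loops until quit
        updateAndRender();
        nextFrame += frameBudget;
        // If the frame finished early, wait out the rest of the budget;
        // if it ran long, this returns immediately and we fall behind.
        std::this_thread::sleep_until(nextFrame);
    }
}

int main() { runCapped(60, 600); }  // ~10 seconds at 60 fps
```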