IQ is weird. What gets people (or me, at least) most, I feel, is the density and complexity of objects on screen. As in, huge scale, long draw distances, and a whole lot of shit going on while it all still looks good.
When you start cutting stuff out of the image for a closer, more personal shot, a lot of the impact of graphical leaps is lost. Take those FF XIII CGI clips. I know they're technically superior, and I can see they're technically superior, but my jaw does not drop at a close-up of Lightning, because a similar level of IQ can be, and has been, achieved in modern games.
Honestly, there's some stuff in The Witcher 2 that evokes similar feelings to that Tri-Ace demo. Objects aren't as smooth, the lighting and shadows are rougher, and textures aren't quite as detailed, but we're getting there. We're not comparing an N64 game anymore. We can achieve incredible graphical fidelity that, in my opinion, borders on high-end CG, when the cards are played right (mostly the art, the hardware, and the angle of the shot).
Pull the camera back for those wide views and then we start seeing the big weaknesses of all graphics processors, and the huge difference between a high-end, technically impressive modern game and pre-rendered CG. Same goes for density of intricate detail, like shots of forests. Any scene with a lot of flora is a good benchmark for how powerful the hardware is, and how realistic we can make our images.
Then there's animation, but that's a bit of a grey area, because though it significantly improves the presentation of a game, you can't really brute force it like post-processing or graphical effects. Animators must meticulously animate everything themselves, and that comes down to time, money, and skill. Unless we dig into procedural, physics-based animation, but that only covers a small portion of what games require.
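For anyone wondering what "procedural, physics-based animation" actually means, here's a minimal sketch, not taken from any real engine: every name in it (boneOffset, stiffness, damping) is made up for illustration. Instead of an animator hand-keying frames, a spring-damper pulls a displaced bone back to its rest pose each frame:

```cpp
// Minimal procedural animation sketch: a spring-damper drives a single
// bone's offset instead of hand-keyed frames. All names are hypothetical.
#include <cstdio>

int main() {
    float boneOffset = 0.3f;        // displaced from rest pose (e.g. after a hit)
    float velocity   = 0.0f;
    const float stiffness = 40.0f;  // pulls the bone back toward rest
    const float damping   = 4.0f;   // bleeds off energy so it settles
    const float dt = 1.0f / 60.0f;  // one frame at 60 fps

    // Semi-implicit Euler: cheap and stable enough for cosmetic motion.
    for (int frame = 0; frame < 120; ++frame) {
        float accel = -stiffness * boneOffset - damping * velocity;
        velocity   += accel * dt;
        boneOffset += velocity * dt;
        if (frame % 20 == 0)
            std::printf("frame %3d  offset %+.4f\n", frame, boneOffset);
    }
    return 0;
}
```

Stuff like this is great for secondary motion (hair, pouches, capes, ragdolls), which is exactly why it only covers a small slice of what games need: deliberate acting and locomotion still come down to animators.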