One thing that could bring AA to the next level is, how should I put it, programmable supersampling. From what I remember, hardware AA samples each pixel at the same fixed positions, I guess right in the middle. So with a camera standing still and no animation going on you'd have a perfectly stable image. If you could skew the sample positions per pixel and per frame, you'd get something closer to real life or a physical camera: a stream of light with slight variations over time. This would also give you temporal AA, which could help greatly with image quality in motion. People obviously already do this in offline rendering, but last I checked it's impossible on current hardware, isn't it?
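For what it's worth, the common workaround engines use is to jitter the whole frame's sample position a tiny amount each frame (by nudging the projection matrix) and blend frames over time, rather than truly reprogramming per-pixel sample positions. Here's a rough sketch of the idea using a Halton low-discrepancy sequence for the jitter; `jitter_offset` is a hypothetical helper, not any real engine's API:

```python
def halton(index, base):
    """Radical inverse of `index` in the given base; returns a value in [0, 1)."""
    result = 0.0
    f = 1.0
    i = index
    while i > 0:
        f /= base
        result += f * (i % base)
        i //= base
    return result

def jitter_offset(frame, width, height):
    """Sub-pixel jitter for this frame, in clip-space units (hypothetical helper).

    Halton bases 2 and 3 give well-distributed 2D points; an 8-frame
    cycle is a typical choice. Shift from [0, 1) to [-0.5, 0.5) pixels,
    then scale to clip space so it can be folded into the projection matrix.
    """
    jx = halton(frame % 8 + 1, 2) - 0.5
    jy = halton(frame % 8 + 1, 3) - 0.5
    return (2.0 * jx / width, 2.0 * jy / height)
```

Each frame you'd add that offset to the projection matrix before rendering, then resolve by blending the jittered frames (with motion-vector reprojection so moving objects don't smear).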
EDIT:
Also, you might want to add this to the first post: Shadertoy is a great tool for learning about shaders and computer graphics. The code is often really simple, so even beginners can look at and modify examples (in the browser) to get an understanding of how things work. It's different from the pipeline in a game, but it's still a really cool app that anyone interested should take a look at. It's made by the same guy linked to above.