Supersampling doesn't do anything with AF/texture filtering
That's completely wrong.
The reason stuff like mipmapping gets used is that sampling from the full-res texture at long distances would undersample it and cause shimmering. Supersampling is the simplest and most direct fix for undersampling.
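To make the undersampling point concrete, here's a toy sketch (not real hardware, names made up) of how a mip level gets picked from the pixel's texel footprint — once more texels fall under a pixel than get sampled, you skip texels and shimmer, so the hardware drops to a smaller mip:

```python
import math

def mip_lod(texels_per_pixel: float) -> float:
    """Toy mip selection: log2 of how many texels project into one pixel.
    LOD 0 is the full-res texture; higher LODs are smaller mips."""
    return max(0.0, math.log2(texels_per_pixel))

# Distant surface where 8 texels project into each pixel: sampling mip 0
# would skip 7 of every 8 texels (shimmering), so select LOD 3 instead,
# where one texel covers roughly one pixel.
print(mip_lod(8.0))  # 3.0
print(mip_lod(1.0))  # 0.0 (texel ~ pixel: full-res texture is fine)
```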
Where basic mipmapping breaks down is at oblique angles. If you sample from a low LOD, you get shimmering from undersampling on the short axis. But if you sample from a high LOD, you get blurring because you're not sampling from a low enough LOD for the long axis. The solution? Use a low LOD, but take more samples from it so that you don't wind up with shimmering from undersampling (basically what AF does).
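The oblique-angle trade-off can be sketched the same way (again a toy model, not what any actual TMU does): pick the LOD from the *short* axis of the footprint so the texture stays sharp, then take extra taps along the *long* axis so that axis isn't undersampled:

```python
import math

def aniso_params(du: float, dv: float, max_aniso: int = 16):
    """Toy anisotropic filtering setup. du/dv are the pixel's texel
    footprints along the two screen axes. LOD comes from the short axis
    (sharpness); the long/short ratio sets how many taps to take along
    the long axis (no undersampling), clamped to e.g. 16xAF."""
    major, minor = max(du, dv), min(du, dv)
    ratio = min(major / minor, float(max_aniso))
    num_taps = max(1, math.ceil(ratio))
    lod = max(0.0, math.log2(minor))  # LOD chosen for the short axis
    return lod, num_taps

# Surface viewed at a steep angle: 8 texels/pixel one way, 1 the other.
# Basic mipmapping must pick either LOD 0 (shimmer) or LOD 3 (blur);
# this instead uses LOD 0 with 8 taps along the long axis.
print(aniso_params(8.0, 1.0))  # (0.0, 8)
```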
Again, supersampling accomplishes the same thing. When you supersample on top of basic mipmapping, the GPU selects lower LODs, because at the higher sampling resolution it can do so without undersampling (and shimmering). Hence even at oblique angles, you get sharper textures.
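Extending the same toy model: with S×S supersampling, each sub-sample covers 1/S of the pixel per axis, so the footprint shrinks and the selected LOD drops by log2(S) — a sharper mip, for free, as a side effect of the extra samples:

```python
import math

def lod_with_ssaa(texels_per_pixel: float, ss_factor: int) -> float:
    """Toy model: with ss_factor x ss_factor supersampling, each
    sub-sample's texel footprint shrinks by ss_factor per axis, so the
    selected LOD drops by log2(ss_factor)."""
    per_sample = texels_per_pixel / ss_factor
    return max(0.0, math.log2(per_sample))

# Oblique surface, 8 texels per pixel along the long axis:
print(lod_with_ssaa(8.0, 1))  # 3.0  (no SSAA: the blurry mip)
print(lod_with_ssaa(8.0, 4))  # 1.0  (4x4 SSAA: a sharper mip)
```

Which also shows why this is so inefficient: you paid for 16 samples per pixel and only moved down two mip levels, where 16xAF would handle the same footprint with far less work.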
Granted, supersampling doesn't do AF particularly well; 16xAF is a bajillion times more efficient than trying to get the same results through ridiculously high-order supersampling, which is why we turn on AF and do this stuff strictly in the TMUs.
//================
Actually, supersampling has EVERYTHING to do with texture filtering, and also with techniques like MSAA. Texture filtering and MSAA are just efficient ways to target supersampling-esque results for various aspects of an image. Texture filtering attempts to get supersampling-like results for textures, and MSAA attempts to get supersampling-like results for geometric edges.