I see, nice that they are doing proper stuff in game rather than faking it
Screen-space reflections are hardly a step up from faking it. Aside from being more versatile than planar reflections in one or two respects (e.g. they're a lot easier to apply to curved surfaces), they're about as faked as reflections get. They attempt to determine what should be reflected in surfaces based only on what's currently on screen, which results in about as much visual stability as a Halo 2 cutscene running on an original PlayStation.
The extent to which they're fake is why devs are interested in them, as opposed to expensive (but far more stable) methods like planar reflections which require rendering entire extra images, or full geometry ray tracing which takes tons of stuff into account.
AF is an extension of mipmapping, to deal with problems created by the basic mipmapping technique.
If you load up a 3D game from the early-to-mid '90s and rotate the camera, any detailed textures in the distance will visibly shimmer. Suppose that the red dots represent the center points of pixels on your screen, and the black and white stripes are the actual texture on a far-off object:
At this moment, this part of your screen will be solid black, because all of the pixel centers fall on black stripes in the texture. But if you rotate the camera a bit to the right, the red dots might all slip onto the white, and suddenly that whole part of your screen looks white. This is very incorrect, shimmery behaviour. In the real world, if you viewed an object made of thin black and white stripes from a distance, the stripes would blend to look grey, and it wouldn't shimmer in your vision as you look around.
The issue happens because you're sampling with a small number of sample points (the pixel centers) from a thing with lots of high-frequency detail (the stripes). One solution is to sample from a lower-frequency image, or, in simpler terms, a scaled-down image. So, in addition to having your big texture with black and white stripes, you might also have a scaled-down copy of that texture, which in this case might just look like a grey blob:
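The scaled-down copy can be built by repeatedly averaging blocks of texels. A toy sketch (`downsample` is a hypothetical helper, not from any real engine), assuming a square greyscale texture stored as nested lists, where black is 0.0 and white is 1.0:

```python
# Build one mip level from the one above it by averaging each 2x2 block of
# texels (a box filter). Each level has half the resolution of the last.
def downsample(texture):
    """Halve the resolution of a square texture by 2x2 box filtering."""
    n = len(texture)
    return [
        [
            (texture[2*y][2*x] + texture[2*y][2*x+1]
             + texture[2*y+1][2*x] + texture[2*y+1][2*x+1]) / 4
            for x in range(n // 2)
        ]
        for y in range(n // 2)
    ]

# 4x4 texture of vertical black (0.0) and white (1.0) stripes:
stripes = [[0.0, 1.0, 0.0, 1.0] for _ in range(4)]

mip1 = downsample(stripes)  # 2x2: every texel averages out to 0.5 -- grey
mip2 = downsample(mip1)     # 1x1: still 0.5
print(mip2)                 # [[0.5]]
```

Just as in the picture, the high-frequency stripes are gone after one averaging step: every texel in the smaller copies is the same mid-grey.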
(Pretend that the stripes are already only like a 10x10 texture for this to look correct, so that it actually does become a grey blob when you scale down. Obviously a ~200x200 texture like this wouldn't turn into a grey blob when scaled to ~70x70 like in this picture.)
Now that we have this scaled-down version of the texture on hand, we can use it when the surface is viewed from a large distance to get correct results:
Yay, we have now implemented mipmapping!
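The "viewed from a large distance" part boils down to a simple rule of thumb: if one screen pixel covers N texels of the full-size texture, use the mip level that is N times smaller. A hypothetical sketch of that selection (real hardware computes the footprint from screen-space derivatives, but the log2 idea is the same):

```python
import math

def mip_level(texels_per_pixel, num_levels):
    """Pick the mip whose texel size best matches the pixel footprint."""
    if texels_per_pixel <= 1.0:
        return 0  # magnification: the full-res texture is fine
    # Each mip level halves the resolution, so the right level is the
    # base-2 log of the footprint, clamped to the levels we actually have.
    level = math.log2(texels_per_pixel)
    return min(round(level), num_levels - 1)

print(mip_level(1.0, 8))   # 0 -- up close, full detail
print(mip_level(4.0, 8))   # 2 -- one pixel spans 4 texels: 4x smaller mip
print(mip_level(64.0, 8))  # 6 -- far away: nearly the grey blob
```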
HOWEVER! There's a problem. While this technique works great when viewing surfaces head-on, it isn't quite right for viewing surfaces at oblique angles.
Suppose that the surface with this striped texture is suspended off the ground, maybe like a freeway sign. Suppose that we walk toward its location so that we're just about standing beneath it. And then suppose we look up at it. Because we're viewing it at an oblique angle, it's filling only a small slice of our view vertically, even though it's still filling a pretty good chunk of our vision horizontally.
So... which mip map level do we use? The stripes that are appropriate for viewing a texture up close, or the grey blob that we use when we view a texture from a distance?
In general, there isn't a good answer. If the texture has lots of high-frequency detail in the vertical direction (this one's stripes only have high-frequency detail in the horizontal direction, but we can't build a technique based on the quirks of our single specific texture), using the big version can cause shimmering when the camera is rotated up and down. But using the small one obviously obscures horizontal detailing (the stripes, in this case) that we really should still be able to see!
In games with regular mipmapping, choosing the smaller texture is standard, since it ensures image stability. This is why in console games today, textures are almost always fairly blurry when viewed at oblique angles.
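The "choose the smaller texture" rule can be made concrete. Suppose the hardware measures how many texels one pixel covers horizontally (du) and vertically (dv); plain mipmapping picks the level for the larger of the two, because that's the safe, non-shimmering choice. A hypothetical sketch (the names `du`/`dv` are my own, standing in for the screen-space derivatives real GPUs use):

```python
import math

def isotropic_mip_level(du, dv):
    """Plain mipmapping: mip level from the LARGER footprint axis."""
    return max(0.0, math.log2(max(du, dv)))

# Freeway-sign case: head-on horizontally (1 texel per pixel) but very
# oblique vertically (16 texels per pixel):
print(isotropic_mip_level(1.0, 16.0))  # 4.0 -- forced to a 16x-smaller mip,
                                       # blurring horizontal detail we could
                                       # still have resolved
```

That forced jump to the 16x-smaller mip is exactly the oblique-angle blur described above.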
Anisotropic filtering solves the problem by (in effect) adding "stretched" versions of the texture to our mip maps. Look back at the image that contains both the stripes and the scaled-down grey texture. Notice the empty space in the upper right and lower left? We can put additional, and very useful, scaled textures there.
The lower-left spot can hold a vertically-squished texture that we can use in situations like looking up at a surface. And the upper-right spot can hold a horizontally-squished texture that we could use in situations like looking at a wall from an oblique angle.
In the case of this texture, because the high-frequency detail is entirely in the horizontal direction, these extra textures aren't very interesting...
...but again, we need to account for the general case where we might have vertical details as well.
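One common way to think about what AF does (a simplified sketch of my own, not any specific GPU's algorithm): pick the mip level from the *smaller* footprint axis, which keeps the detail, then take several samples spread along the *larger* axis and average them, which kills the shimmer that the detailed mip would otherwise cause:

```python
import math

def aniso_plan(du, dv, max_aniso=16):
    """Return (sample_count, mip_level) for an anisotropic lookup.

    du/dv: texels covered per pixel along each axis (hypothetical inputs
    standing in for the screen-space derivatives a GPU would compute).
    """
    minor, major = sorted((max(du, 1.0), max(dv, 1.0)))
    # Degree of anisotropy, clamped like the "16x AF" setting in games:
    samples = min(math.ceil(major / minor), max_aniso)
    # Each extra sample covers part of the long axis, so the mip only has
    # to account for whatever footprint the samples don't soak up.
    level = math.log2(major / samples)
    return samples, level

# Freeway-sign case again: with 16 samples along the squished axis we can
# stay on the full-detail mip instead of dropping to a 16x-smaller one.
print(aniso_plan(1.0, 16.0))  # (16, 0.0)
```

This also shows where the cost comes from: those extra samples per pixel mean a lot of extra texture reads.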
And that's the gist of AF (its motivation and results, anyway; exact implementations don't usually work quite like this these days, but it's the idea). It's awesome, but PS360 games haven't seen a whole lot of it because it taxes memory bandwidth. A PS4 with 8GB of blazing-fast RAM should be able to handle it much more easily.