Dude, FP16 has been around for a while. I used it in college back in 2007; in fact, I think Microsoft and Nvidia came up with it. It isn't a breakthrough of this generation, it's an option. The reason it got skipped previously is that there were very few uses for it: 16-bit floating-point precision is hard to deal with, and no current engine uses it AFAIK. Like I mentioned, you have to design your shader to take advantage of it.
It is a forward-thinking feature, and if it becomes common, perhaps some engines will start using it, but I doubt anyone besides Sony will make the effort this gen.
FP16 has existed almost since computing began (edit: to be accurate, since 1982).
GPUs in the past worked only with FP16... the Radeon 9000 came with FP24, and later nVidia and ATI moved to FP32, which became the standard.
That is nothing new.
FP16 running twice as fast as FP32 hasn't happened since the GeForce FX5200, which got crushed due to its low FP32 performance.
That is the new feature found in Vega and the Pro... they can run 2x FP16 instructions instead of 1x FP32, delivering twice the flops... you are confusing GPUs that support FP16 (all of them do) with GPUs that run FP16 faster than FP32 (only the upcoming Vega, the Pro, mobile GPUs and Tesla).
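To make the "2x FP16 instead of 1x FP32" point concrete: two FP16 values fit in the same 32-bit register as one FP32 value, which is what lets packed-math hardware issue two half-precision ops per lane. Here's a quick NumPy sketch of the packing itself (my own illustration, not anything from the thread):

```python
import numpy as np

# Two FP16 values occupy the same 4 bytes as a single FP32 value --
# this is the register packing that "double rate" FP16 hardware exploits.
pair = np.array([1.5, -2.25], dtype=np.float16)
packed = pair.view(np.uint32)      # both halves reinterpreted as one 32-bit word

print(pair.nbytes)                 # 4 -- same footprint as one float32
print(packed.size)                 # 1 -- a single packed word

# Reinterpreting the word recovers both values unchanged.
print(packed.view(np.float16))
```

Obviously NumPy isn't doing GPU packed math here, it's just showing the layout: the hardware trick is executing an operation on both halves of that word at once.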
That doesn't mean FP16 is better for games than FP32... it is twice as fast with half the memory/bandwidth use, but in the end few cases can take advantage of FP16 without loss of quality.
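To see why few cases survive FP16 without quality loss: half precision only has a 10-bit mantissa, so it runs out of precision fast. A quick NumPy demo (my own example, not from the thread):

```python
import numpy as np

# FP16 uses half the storage of FP32...
assert np.dtype(np.float16).itemsize == 2   # 2 bytes vs 4 for float32

# ...but its 10-bit mantissa means that above 2048 it can no longer
# represent every integer, so small increments just vanish:
x = np.float16(2048) + np.float16(1)
print(x)    # 2048.0 -- the +1 is lost to rounding

# FP32 handles the same sum exactly:
y = np.float32(2048) + np.float32(1)
print(y)    # 2049.0
```

That's fine for things like color math or normalized vectors, but it falls apart for world-space positions and anything accumulating over many operations, which is exactly why shaders have to be designed around it.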
BTW, it was never skipped... games in the past were coded in FP16 only... tech evolves, and it was replaced by a better-quality option (FP32), which in the distant future will likely be replaced by FP64.