FP32 gives better graphics than FP16... it affects quality... some simple examples:
In simple terms, more precision = more accurate graphics with fewer artifacts.
But these cases use either fully FP16 or fully FP32... there are parts of the code where you don't need FP32, and there you can use FP16 without losing quality... you can have a scenario where 80% of your code uses FP32 while the other 20% uses FP16, giving an image exactly like one rendered fully in FP32.
I think these are, without proper context, somewhere between irrelevant and deceptive. These are images produced by calculations that noticeably differ in their results depending on whether you use FP16 or FP32, but, without knowing what the calculations are, it is impossible to know whether it's a case of FP16 being insufficient, or FP16 being incorrectly used. Moreover, as you indicate, these are cases where one is purely using FP32 and one is purely using FP16, which wouldn't be the case with the PS4 Pro.
Your summary is correct though: the ultimate point with this whole debate is that sometimes you can get away with lower precision without noticeably lowering quality. There are two really relevant questions that stem from this: how often can you get away with it, and how often will developers go the extra mile to do so? I can only speak from my perspective as someone who works on mobile games, where bothering to exploit lower precision is absolutely necessary, but I'd suggest you can use FP16 a surprising amount without anyone being able to tell the difference.
Okay, so as an example, FP32 could be used close up and FP16 everywhere else? If so, then it would seem like the gap could close, or even more than close. But I really know nothing about this stuff; I just listen to the discussions. Nonetheless, could we agree that this is a dumb oversight from Microsoft? I mean, if it's already there and nothing new, why not include FP16 if it can be twice as fast in certain scenarios? If I were them, I'd be 100% sure that it did everything the PS4 Pro could do before even thinking about improving stuff.
We don't know why Microsoft have decided not to include FP16 support. There may well be a very good reason they didn't want this particular customisation. These kinds of improvements, after all, are reflected in the physical hardware; they're not necessarily cost-free, or downside-free. We'll probably never know. On the face of it, it does seem like an oversight, and it is certainly one area where the PS4 Pro's GPU one-ups Scorpio's.
Either way, now we'll have these discussions for the rest of the generation. And some people will convince themselves that PS4 Pro is more powerful no matter what evidence is laid out, and shout from the hills to make Scorpio seem like old news not worth the money.
I guess proof will be in the pudding, or rather DF FaceOffs. Can't wait!
I think you're wrong on two counts really. First, no-one is arguing the PS4 Pro is more powerful in here. Second, I doubt these discussions will continue much beyond November.
I'm not going to pretend like I understand how full precision really works. It seems like FP32 is the route to go for better quality textures etc., going off those images posted by Ethomaz.
I'm going to stick to what I can physically see, i.e. comparisons in 4K.
Games I really want to see patched are Witcher 3 and Fallout 4 with the ultra textures.
As I said in my response to Ethomaz, I'd be careful in reading too much into those images. You certainly won't experience any differences of that nature between a game on Scorpio and the same game on PS4 Pro. Developers will only use FP16 when they don't believe there is an appreciable difference in quality, and where there is a non-negligible difference in performance. It's important to remember that both consoles support FP32 and FP16 - but only the Pro supports performing two FP16 operations at the same speed as one FP32 operation.
Can you provide any examples of items that would need FP32 and could benefit from FP16 instead?
I'm guessing you mean "wouldn't" rather than "would" here. There are various examples that come to my mind.
1) Colour. Even with HDR, we don't use more than 16 bits per channel to store colour. Calculations involving colour can thus often be done at FP16.
2) Texture coordinates. FP16 is typically sufficient for the position at which you want to perform a texture lookup. I'm not certain how well that holds up on a 4K texture - it's possible there is noticeable precision loss near 1. Certainly, I've never noticed any precision issues with FP16 for textures smaller than 4K. Worth noting that a game being rendered at 4K doesn't mean all of its textures are going to be 4K.
3) Directions. A lot of shader mathematics revolves around normalised vectors, which is to say vectors that represent directions and have a magnitude of one. Floating point numbers are, by design, increasingly precise at smaller magnitudes, so FP16 is still quite precise for a normalised vector, since none of its components exceeds one in magnitude. A basic example of where you could get away with FP16 is the common diffuse lighting calculation N dot L, where you determine how bright an object's surface is by taking the dot product of the incoming light direction (L) and the surface normal (N). N dot L is equivalent to N.x * L.x + N.y * L.y + N.z * L.z. My general experience is that, in this calculation, FP16 produces results that are indistinguishable from FP32. Other lighting calculations, like those involved in computing specular reflection, are a lot more sensitive to precision issues because the relevant equations involve powers. Practically, in the PBR era, most lighting calculations require precision above that of FP16.
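To make point 1 concrete, here's a quick Python sketch (the `struct` module's `'e'` format gives IEEE 754 half-precision, so we can emulate FP16 rounding without a GPU). It checks that every 8-bit colour channel value survives being stored as FP16:

```python
import struct

def f16(x):
    """Round a Python float to the nearest IEEE 754 half-precision value."""
    return struct.unpack('e', struct.pack('e', x))[0]

# Store every 8-bit colour channel value k/255 as FP16, then convert
# back to 8-bit: the round trip recovers k exactly, for every k.
lossless = all(round(f16(k / 255) * 255) == k for k in range(256))
print(lossless)  # True
```

FP16 keeps roughly 11 bits of precision across each power-of-two range, which is comfortably finer than the 8 bits an SDR colour channel needs.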
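On point 2, the worry about precision near 1 can be checked numerically. The spacing between representable FP16 values just below 1.0 is 2^-11, while adjacent texel centres in a 4096-wide texture are only 2^-12 apart - so some neighbouring 4K texels genuinely collapse to the same FP16 coordinate (the specific texel indices here are just an illustration):

```python
import struct

def f16(x):
    """Round a Python float to the nearest IEEE 754 half-precision value."""
    return struct.unpack('e', struct.pack('e', x))[0]

# Gap between FP16 values just below 1.0 (the largest FP16 < 1.0 is 1 - 2^-11).
spacing = 1.0 - f16(0.9995)
texel_4k = 1 / 4096            # width of one texel in a 4096px texture

# The centres of texels 4093 and 4094 round to the same FP16 coordinate.
u_a, u_b = (4093 + 0.5) / 4096, (4094 + 0.5) / 4096
collide = f16(u_a) == f16(u_b)
print(spacing, texel_4k)   # 0.00048828125 0.000244140625
print(collide)             # True
```

For a 1024px texture the texel spacing is 2^-10, wider than the FP16 gap, which fits the observation that textures below 4K behave fine.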
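And a sketch of point 3, comparing the N dot L calculation at full precision against an emulation that rounds every operand and intermediate result to FP16. The normal and light direction are hypothetical values chosen just for illustration:

```python
import math
import struct

def f16(x):
    """Round a Python float to the nearest IEEE 754 half-precision value."""
    return struct.unpack('e', struct.pack('e', x))[0]

def dot_f16(n, l):
    # Emulate FP16 arithmetic: round every input, product, and running sum.
    acc = 0.0
    for a, b in zip(n, l):
        acc = f16(acc + f16(f16(a) * f16(b)))
    return acc

def normalise(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

# Hypothetical unit-length surface normal and light direction.
n = normalise((0.3, 0.5, 0.8))
l = normalise((0.2, 0.6, 0.75))

exact = sum(a * b for a, b in zip(n, l))   # full-precision reference
approx = dot_f16(n, l)
# The two results typically agree to about three decimal places.
print(round(exact, 3), round(approx, 3))
```

With all components at magnitude one or below, each FP16 rounding costs at most about 2^-12 in absolute terms, far below anything visible once the result becomes an 8-bit or 10-bit output colour.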
The general point with FP16 is that, ultimately, our intention with graphics is to output colours. Those colours are not stored at FP32, and, in fact, don't even use the full range of FP16, even in the HDR era. It follows that there will be areas prior to the output where you can use FP16 to store or calculate values without any difference at all, and an even wider range of areas where you can do so without a noticeable or appreciable difference (i.e. you wouldn't prefer one output over the other). How well FP16 holds up in any given situation depends hugely on the magnitudes of the inputs and on the operation in question. Each operation adds potential for imprecision, so, for example, repeated multiplications quite quickly lose precision.
As a basic example, you can expect a lot more precision when adding than when multiplying. I mentioned earlier that floating point precision is higher at smaller magnitudes; the reverse is that precision is lower at higher magnitudes. The highest value (other than infinity) that you can store in an FP16 is 65,504, which is obviously big enough for a great many things, but the next highest representable number is 65,472, and the one below that is 65,440. Hence, if I use FP16 to multiply two numbers together, say 255.7 * 256.0 (= 65,459.2), I'm going to lose a lot of precision when the result gets rounded (to 65,472, an error of 12.8). With adding, the precision loss is far smaller: 255.7 + 256.0 = 511.7, and FP16 is capable of storing 511.75.
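You can reproduce exactly this with Python's `struct` module, which supports IEEE 754 half-precision via the `'e'` format (note that 255.7 itself isn't representable in FP16 and gets stored as 255.75 before the arithmetic even starts):

```python
import struct

def f16(x):
    """Round a Python float to the nearest IEEE 754 half-precision value."""
    return struct.unpack('e', struct.pack('e', x))[0]

a, b = 255.7, 256.0

# Multiplication lands near FP16's ceiling, where representable values
# are 32 apart, so the result is pulled a long way from the true answer.
product = f16(f16(a) * f16(b))
print(product)   # 65472.0 (true answer: 65459.2)

# Addition of the same inputs stays in a range where the spacing is 0.25.
total = f16(f16(a) + f16(b))
print(total)     # 511.75 (true answer: 511.7)
```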