My understanding is that a 'float16' floating-point type is 16 bits long (for example, NVIDIA's APIs allow declaring it in C/C++ code).
A regular single-precision floating-point type is 32 bits long: sign (1) + exponent (8) + mantissa (23). The 23 stored mantissa bits, combined with the implicit leading 1 bit, give what is known as 24-bit precision. It could be called 'float32' instead of just 'float'.
Anyway, thanks for these web links; I'll take a look at Intel's docs.
PS: Just found some additional information on: