Experimental extension of bfloat16
The IEEE 754 float16 format is already part of waLBerla's datatypes.
However, this format has only 5 bits to express the exponent, hence normal values are limited to magnitudes between $2^{-14} \approx 6.10 \cdot 10^{-5}$ and $(2 - 2^{-10}) \cdot 2^{15} = 65504$.
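These limits can be verified directly; a minimal sketch, assuming a C++23 compiler that provides `std::float16_t` via `<stdfloat>` (waLBerla's own float16 alias may be defined differently):

```cpp
#include <stdfloat>  // std::float16_t (C++23)
#include <limits>
#include <iostream>

int main()
{
    using f16 = std::float16_t;
    // Smallest positive normal value: 2^-14, roughly 6.10e-5.
    std::cout << "min normal: " << static_cast<float>(std::numeric_limits<f16>::min()) << '\n';
    // Largest finite value: (2 - 2^-10) * 2^15 = 65504.
    std::cout << "max finite: " << static_cast<float>(std::numeric_limits<f16>::max()) << '\n';
    // Anything beyond the range overflows to infinity.
    f16 overflow = static_cast<f16>(70000.0f);
    std::cout << "70000 in float16: " << static_cast<float>(overflow) << '\n';
}
```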
Another possible 16-bit datatype is bfloat16, developed by Google Brain.
This format has an 8-bit exponent, the same as float32, while reducing the mantissa from 10 bits (IEEE float16) to 7 bits.
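Because bfloat16 shares float32's 8-bit exponent, a bfloat16 value is effectively the upper 16 bits of a float32. The following conversion sketch illustrates this; the type alias and function names are illustrative and not part of waLBerla (NaN handling is omitted for brevity):

```cpp
#include <cstdint>
#include <cstring>

// Illustrative bfloat16 storage type: 1 sign, 8 exponent, 7 mantissa bits.
using bfloat16_t = std::uint16_t;

// Convert float32 -> bfloat16, rounding to nearest even before
// truncating to the upper 16 bits.
bfloat16_t float_to_bfloat16(float f)
{
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    // Add half of the discarded lower 16 bits, plus the LSB of the
    // kept part so that ties round toward even.
    bits += 0x7FFFu + ((bits >> 16) & 1u);
    return static_cast<bfloat16_t>(bits >> 16);
}

// Convert back by zero-filling the truncated mantissa bits.
float bfloat16_to_float(bfloat16_t b)
{
    std::uint32_t bits = static_cast<std::uint32_t>(b) << 16;
    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}
```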
This format is widely used in machine learning and might be usable for mixed-precision simulations as well.
The bfloat16 format is supported by some GPU architectures as well as some Intel CPUs.
Compare the Intel white paper intel-bf16-hardware-numerics-definition-white-paper.pdf and the Wikipedia page on bfloat16.
Edited by Michael Zikeli