Deep Learning and bfloat16 (BF16)




Deep learning has spurred interest in novel floating point formats. Algorithms often don’t need as much precision as standard IEEE-754 doubles or even single precision floats. Lower precision makes it possible to hold more numbers in memory, reducing the time spent swapping numbers in and out of memory. Also, low-precision circuits are far less complex. Together these benefits can give a significant speedup.

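As a rough illustration of the memory point, here is a sketch using NumPy; FP16 stands in for a generic 16-bit format, since plain NumPy ships no bfloat16 dtype. Halving the element width halves a tensor's footprint:

import numpy as np

n = 1_000_000
fp32 = np.zeros(n, dtype=np.float32)
fp16 = np.zeros(n, dtype=np.float16)   # stand-in for any 16-bit format
print(fp32.nbytes)   # 4000000 bytes
print(fp16.nbytes)   # 2000000 bytes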

BF16 (bfloat16) is becoming a de facto standard for deep learning. It is supported by several deep learning accelerators (such as Google’s TPU), and will be supported in Intel processors two generations from now.


The BF16 format is sort of a cross between FP16 and FP32, the 16- and 32-bit formats defined in the IEEE 754-2008 standard, also known as half precision and single precision.


Format    Bits    Exponent    Fraction    Sign
FP32      32      8           23          1
FP16      16      5           10          1
BF16      16      8           7           1
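To see the layout concretely, here is a minimal sketch in plain Python (standard library only; the helper names are illustrative, not from any particular library) that converts an FP32 value to its BF16 bit pattern by keeping only the upper 16 bits: the sign, the full 8-bit exponent, and the top 7 fraction bits. Real hardware usually rounds to nearest rather than simply truncating as this does.

import struct

def fp32_bits(x):
    # 32-bit IEEE-754 pattern of a Python float, as an unsigned int
    return struct.unpack(">I", struct.pack(">f", x))[0]

def fp32_to_bf16_bits(x):
    # BF16 keeps the top 16 bits of FP32: 1 sign + 8 exponent + 7 fraction
    return fp32_bits(x) >> 16

def bf16_to_fp32(bits16):
    # Re-expand a BF16 pattern to FP32 by zero-filling the low 16 bits
    return struct.unpack(">f", struct.pack(">I", bits16 << 16))[0]

x = 3.14159265
b = fp32_to_bf16_bits(x)
print(f"FP32 bits : {fp32_bits(x):032b}")
print(f"BF16 bits : {b:016b}")
print(f"round trip: {bf16_to_fp32(b)}")   # 3.140625 -- the low fraction bits are gone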

BF16 has more exponent bits than FP16 (the same number as FP32) but fewer fraction bits. This design shows that, within a 16-bit budget, the designers chose to trade precision (even lower than FP16's) for a larger dynamic range.
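To make the trade-off concrete, the short sketch below (plain Python; the exponent and fraction widths are taken from the table above) derives each format's largest finite value, smallest positive normal value, and machine epsilon from the standard IEEE-754 formulas. BF16 keeps essentially the same range as FP32 (about 3.4e38, versus 65504 for FP16), but its epsilon of 2^-7 is eight times coarser than FP16's 2^-10.

# (exponent bits, fraction bits) per the table above
FORMATS = {"FP32": (8, 23), "FP16": (5, 10), "BF16": (8, 7)}

for name, (exp_bits, frac_bits) in FORMATS.items():
    bias = 2 ** (exp_bits - 1) - 1            # 127 for an 8-bit exponent, 15 for 5 bits
    max_finite = (2 - 2.0 ** -frac_bits) * 2.0 ** bias
    min_normal = 2.0 ** (1 - bias)
    epsilon = 2.0 ** -frac_bits               # spacing just above 1.0
    print(f"{name}: max={max_finite:.3e}  min_normal={min_normal:.3e}  eps={epsilon:.3e}")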



