Why are Eight Bits Enough for Deep Neural Networks?
Deep learning is a very weird technology. It evolved over decades on a very different track than the mainstream of AI, kept alive by the efforts of a handful of believers. When I started using it a few years ago, it reminded me of the first time I played with an iPhone – it felt like I’d been handed something that had been sent back to us from the future, or alien technology.
One of the consequences of that is that my engineering intuitions about it are often wrong. When I came across im2col, the memory redundancy seemed crazy, based on my experience with image processing, but it turns out it’s an efficient way to tackle the problem. While there are more complex approaches that can yield better results, they’re not the ones my graphics background would have predicted.
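To make the redundancy concrete, here is a minimal im2col sketch in Python with NumPy (the function name and shapes are my own, not from any particular library). Every K×K patch gets copied into its own column, so each input pixel is duplicated up to K² times; in exchange, the convolution collapses into a single big matrix multiply that hardware handles very efficiently.

```python
import numpy as np

def im2col(image, k):
    """Unroll every k x k patch of a 2D image into its own column.

    Each input pixel ends up copied into as many as k * k columns,
    which is the memory redundancy that looked so wasteful to me,
    but the convolution then collapses into one big matrix multiply.
    """
    h, w = image.shape
    out_h, out_w = h - k + 1, w - k + 1
    cols = np.empty((k * k, out_h * out_w), dtype=image.dtype)
    for y in range(out_h):
        for x in range(out_w):
            cols[:, y * out_w + x] = image[y:y + k, x:x + k].ravel()
    return cols

image = np.arange(36, dtype=np.float32).reshape(6, 6)
cols = im2col(image, 3)
print(image.size, "input values expand to", cols.size)  # 36 -> 144
```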
Another key area that seems to throw a lot of people off is how much precision you need for the calculations inside neural networks. For most of my career, precision loss has been a fairly easy thing to estimate. I almost never needed more than 32-bit floats, and if I did it was because I’d screwed up my numerical design and I had a fragile algorithm that would go wrong pretty soon even with 64 bits. 16-bit floats were good for a lot of graphics operations, as long as they weren’t chained together too deeply. I could use 8-bit values for a final output for display, or at the end of an algorithm, but they weren’t useful for much else.
It turns out that neural networks are different. You can run them with eight-bit parameters and intermediate buffers, and suffer no noticeable loss in the final results. This was astonishing to me, but it’s something that’s been re-discovered over and over again. My colleague Vincent Vanhoucke has the only paper I’ve found covering this result for deep networks, but I’ve seen with my own eyes how it holds true across every application I’ve tried it on. I’ve also had to convince almost every engineer I’ve told that I’m not crazy, and then watch them prove it to themselves by running their own tests, so this post is an attempt to short-circuit some of that!
How does it work?
You can see an example of a low-precision approach in the Jetpac mobile framework, though to keep things simple I keep the intermediate calculations in float and just use eight bits to compress the weights. Nervana’s NEON library also supports fp16, though not eight-bit yet. As long as you accumulate to 32 bits when you’re doing the long dot products that are the heart of the fully-connected and convolution operations (and that take up the vast majority of the time), you don’t need float: you can keep all your inputs and outputs as eight bit. I’ve even seen evidence that you can drop a bit or two below eight without too much loss! The pooling layers are fine at eight bits too. I’ve generally seen the bias addition and activation functions (other than the trivial relu) done at higher precision, but 16 bits seems fine even for those.
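To make the accumulation point concrete, here is a minimal NumPy sketch of an eight-bit dot product with 32-bit accumulation (my own toy version, not code from Jetpac or NEON; I use symmetric int8 weights with a zero point of zero to keep it short):

```python
import numpy as np

def quantized_dot(x_q, w_q, scale):
    """Dot product of eight-bit vectors, accumulated in 32 bits."""
    # Widen to int32 before multiplying so the long running sum
    # never overflows; this is the one place 32 bits is needed.
    acc = np.dot(x_q.astype(np.int32), w_q.astype(np.int32))
    return acc * scale  # a single float rescale at the very end

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 1024).astype(np.float32)   # activations
w = rng.uniform(-1.0, 1.0, 1024).astype(np.float32)  # weights

x_scale = x.max() / 255.0          # uint8 activations
w_scale = np.abs(w).max() / 127.0  # int8 weights, zero point 0
x_q = np.round(x / x_scale).astype(np.uint8)
w_q = np.round(w / w_scale).astype(np.int8)

print(np.dot(x, w))                                # float reference
print(quantized_dot(x_q, w_q, x_scale * w_scale))  # eight-bit version
```

The two printed values should agree closely, since the individual rounding errors largely cancel across a long sum.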
I’ve generally taken networks that have been trained in full float and down-converted them afterwards, since I’m focused on inference, but training can also be done at low precision. Knowing that you’re aiming at a lower-precision deployment can make life easier too, even if you train in float, since you can do things like place limits on the ranges of the activation layers.
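The simplest down-conversion is a linear one: find the min and max of each weight array and map that range onto 0 to 255, keeping the two floats needed to undo the mapping. A minimal sketch of the idea (my own version, not taken from any framework):

```python
import numpy as np

def quantize_weights(w):
    """Linearly map a float weight array onto the 0..255 uint8 range.

    Returns the quantized bytes plus the (min, scale) pair needed to
    reconstruct approximate float weights at inference time.
    """
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 or 1.0  # guard against a constant array
    w_q = np.round((w - w_min) / scale).astype(np.uint8)
    return w_q, w_min, scale

def dequantize_weights(w_q, w_min, scale):
    """Recover approximate float weights from the eight-bit form."""
    return w_q.astype(np.float32) * scale + w_min

w = np.random.default_rng(1).normal(0.0, 0.1, 4096).astype(np.float32)
w_q, w_min, scale = quantize_weights(w)
w_hat = dequantize_weights(w_q, w_min, scale)
print("max error:", np.abs(w - w_hat).max())  # bounded by scale / 2
```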
Why does it work?
I can’t see any fundamental mathematical reason why the results should hold up so well with low precision, so I’ve come to believe that it emerges as a side-effect of a successful training process. When we are trying to teach a network, the aim is to have it understand the patterns that are useful evidence and discard the meaningless variations and irrelevant details. That means we expect the network to be able to produce good results despite a lot of noise. Dropout is a good example of synthetic grit being thrown into the machinery, so that the final network can function even with very adverse data.
The networks that emerge from this process have to be very robust numerically, with a lot of redundancy in their calculations so that small differences in input samples don’t affect the results. Compared to differences in pose, position, and orientation, the noise in images is actually a comparatively small problem to deal with. All of the layers are affected by those small input changes to some extent, so they all develop a tolerance to minor variations. That means that the differences introduced by low-precision calculations are well within the tolerances a network has learned to deal with. Intuitively, they feel like weebles that won’t fall down no matter how much you push them, thanks to an inherently stable structure.
At heart I’m an engineer, so I’ve been happy to see that it works in practice without worrying too much about why; I don’t want to look a gift horse in the mouth! What I’ve laid out here is my best guess at the cause of this property, but I would love to see a more principled explanation if any researchers want to investigate more thoroughly. [Update – here’s a related paper from Matthieu Courbariaux, thanks Scott!]
What does this mean?
This is very good news for anyone trying to optimize deep neural networks. On the general CPU side, modern SIMD instruction sets are often geared towards float, and so eight bit calculations don’t offer a massive computational advantage on recent x86 or ARM chips. DRAM access takes a lot of electrical power though, and is slow too, so just reducing the bandwidth by 75% can be a very big help. Being able to squeeze more values into fast, low-power SRAM cache and registers is a win too.
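A quick back-of-the-envelope calculation shows the scale of that saving (the layer size below is made up, purely for illustration):

```python
# Rough DRAM traffic for the weights of one fully-connected layer.
rows, cols = 4096, 4096
float32_mb = rows * cols * 4 / 2**20  # 32-bit weights
uint8_mb = rows * cols * 1 / 2**20    # eight-bit weights
print(f"{float32_mb:.0f} MB of float weights become {uint8_mb:.0f} MB")
```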
GPUs were originally designed to take eight bit texture values, perform calculations on them at higher precisions, and then write them back out at eight bits again, so they’re a perfect fit for our needs. They generally have very wide pipes to DRAM, so the gains aren’t quite as straightforward to achieve, but can be exploited with a bit of work. I’ve learned to appreciate DSPs as great low-power solutions too, and their instruction sets are geared towards the sort of fixed-point operations we need. Custom vision chips like Movidius’ Myriad are good fits too.
Deep networks’ robustness means that they can be implemented efficiently across a very wide range of hardware. Combine this flexibility with their almost-magical effectiveness at a lot of AI tasks that have eluded us for decades, and you can see why I’m so excited about how they will alter our world over the next few years!