[Survey] Deep Compression / Acceleration / Quantization
Survey
- Recent Advances in Efficient Computation of Deep Convolutional Neural Networks [arXiv '18]
- A Survey of Model Compression and Acceleration for Deep Neural Networks [arXiv '17]
Quantization
- The ZipML Framework for Training Models with End-to-End Low Precision: The Cans, the Cannots, and a Little Bit of Deep Learning [ICML '17]
- Compressing Deep Convolutional Networks using Vector Quantization [arXiv '14]
- Quantized Convolutional Neural Networks for Mobile Devices [CVPR '16]
- Fixed-Point Performance Analysis of Recurrent Neural Networks [ICASSP '16]
- Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations [arXiv '16]
- Loss-aware Binarization of Deep Networks [ICLR '17]
- Towards the Limit of Network Quantization [ICLR '17]
- Deep Learning with Low Precision by Half-wave Gaussian Quantization [CVPR '17]
- ShiftCNN: Generalized Low-Precision Architecture for Inference of Convolutional Neural Networks [arXiv '17]
- Training and Inference with Integers in Deep Neural Networks [ICLR '18]
- Deep Learning with Limited Numerical Precision [ICML '15]
- Model compression via distillation and quantization [ICLR '18]
- Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy [ICLR '18]
- On the Universal Approximability of Quantized ReLU Neural Networks [arXiv '18]
- Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference [CVPR '18]
- Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1 [NIPS '16]
- XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks [ECCV '16]
- Binarized Convolutional Neural Networks with Separable Filters for Efficient Hardware Acceleration [CVPR '17]
- Maxout Networks
- BinaryConnect: Training Deep Neural Networks with binary weights during propagations
- Ternary weight networks
- From Hashing to CNNs: Training Binary Weight Networks via Hashing
- Learning Accurate Low-Bit Deep Neural Networks with Stochastic Quantization
- Trained Ternary Quantization
- DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
- Two-Step Quantization for Low-bit Neural Networks
- LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks
- Fixed-point Factorized Networks
- Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
- Network Sketching: Exploiting Binary Structure in Deep CNNs
- Towards Effective Low-bitwidth Convolutional Neural Networks
- SYQ: Learning Symmetric Quantization For Efficient Deep Neural Networks
- Very deep convolutional networks for large-scale image recognition
- Towards Accurate Binary Convolutional Neural Network
- Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation
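The papers above cover many quantization schemes, but most share one core step: mapping float weights onto a small integer grid with a per-tensor scale. A minimal sketch of symmetric uniform quantization (illustrative only; the function name and details are mine, not the exact scheme of any single paper above):

```python
def quantize_symmetric(weights, num_bits=8):
    """Symmetric uniform quantization of a weight list onto signed
    integers in [-qmax, qmax] (an illustrative sketch, not the exact
    scheme of any single paper above; assumes at least one nonzero weight)."""
    qmax = 2 ** (num_bits - 1) - 1                      # e.g. 127 for 8 bits
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / qmax                              # one float scale per tensor
    q = [round(w * qmax / max_abs) for w in weights]    # integer codes
    deq = [qi * scale for qi in q]                      # dequantized approximation
    return q, deq, scale

q, deq, scale = quantize_symmetric([0.3, -1.0, 0.25, 0.0])
```

At inference time only the integer codes and the single scale need to be stored; binary and ternary methods in the list push `num_bits` down to 1 or 2 and recover accuracy through training.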
Pruning
- Learning both Weights and Connections for Efficient Neural Networks [NIPS '15]
- Pruning Filters for Efficient ConvNets [ICLR '17]
- Pruning Convolutional Neural Networks for Resource Efficient Inference [ICLR '17]
- Soft Weight-Sharing for Neural Network Compression [ICLR '17]
- Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding [ICLR '16]
- Dynamic Network Surgery for Efficient DNNs [NIPS '16]
- Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning [CVPR '17]
- ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression [ICCV '17]
- To prune, or not to prune: exploring the efficacy of pruning for model compression [ICLR '18]
- Data-Driven Sparse Structure Selection for Deep Neural Networks [arXiv '17]
- Learning Structured Sparsity in Deep Neural Networks [NIPS '16]
- Scalpel: Customizing DNN Pruning to the Underlying Hardware Parallelism [ISCA '17]
- Channel Pruning for Accelerating Very Deep Neural Networks [ICCV '17]
- Learning Efficient Convolutional Networks through Network Slimming [ICCV '17]
- NISP: Pruning Networks using Neuron Importance Score Propagation [CVPR '18]
- Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers [ICLR '18]
- MorphNet: Fast & Simple Resource-Constrained Structure Learning of Deep Networks [arXiv '17]
- Efficient Sparse-Winograd Convolutional Neural Networks [ICLR '18]
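Much of the pruning literature above starts from one simple criterion: small-magnitude weights contribute little and can be zeroed. A minimal unstructured magnitude-pruning sketch (a simplification; the papers combine this with iterative retraining, structured/channel granularity, or learned importance scores):

```python
def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights
    (unstructured magnitude pruning; a simplified sketch without
    the retraining step the papers above rely on)."""
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # threshold = magnitude of the n_prune-th smallest weight
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

pruned = magnitude_prune([0.1, -0.9, 0.05, 0.7, -0.2, 0.4], sparsity=0.5)
```

Channel- and filter-level methods (ThiNet, Network Slimming, Channel Pruning) apply the same idea at coarser granularity so the resulting network stays dense and hardware-friendly.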
Low-rank Approximation
- Efficient and Accurate Approximations of Nonlinear Convolutional Networks [CVPR '15]
- Accelerating Very Deep Convolutional Networks for Classification and Detection (extended version of the above)
- Convolutional Neural Networks with Low-rank Regularization [arXiv '15]
- Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation [NIPS '14]
- Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications [ICLR '16]
- High Performance Ultra-low-precision Convolutions on Mobile Devices [NIPS '17]
- Speeding up Convolutional Neural Networks with Low Rank Expansions
- Coordinating Filters for Faster Deep Neural Networks [ICCV '17]
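The common idea in this section is to replace an m×n weight matrix W with a rank-r factorization U·V (U is m×r, V is r×n), trading a small approximation error for fewer parameters and multiply-adds. The parameter arithmetic, with illustrative layer sizes of my choosing:

```python
def lowrank_savings(m, n, r):
    """Parameter counts for a dense m x n layer vs. its rank-r
    factorization into an m x r and an r x n matrix."""
    full = m * n
    factored = r * (m + n)
    return full, factored, full / factored

# e.g. a hypothetical 4096 x 4096 fully connected layer factored at rank 256
full, factored, ratio = lowrank_savings(4096, 4096, 256)
```

The compute saving follows the same formula, since a matrix-vector product costs one multiply-add per parameter; the papers above extend this to 4-D convolution kernels via tensor (CP/Tucker) decompositions.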
Knowledge Distillation
- Dark knowledge
- FitNets: Hints for Thin Deep Nets [ICLR '15]
- Net2net: Accelerating learning via knowledge transfer [ICLR '16]
- Distilling the Knowledge in a Neural Network [NIPS '15]
- MobileID: Face Model Compression by Distilling Knowledge from Neurons [AAAI '16]
- DarkRank: Accelerating Deep Metric Learning via Cross Sample Similarities Transfer [arXiv '17]
- Deep Model Compression: Distilling Knowledge from Noisy Teachers [arXiv '16]
- Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer [ICLR '17]
- Like What You Like: Knowledge Distill via Neuron Selectivity Transfer [arXiv '17]
- Learning Efficient Object Detection Models with Knowledge Distillation [NIPS '17]
- Data-Free Knowledge Distillation For Deep Neural Networks [NIPS '17]
- A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning [CVPR '17]
- Moonshine: Distilling with Cheap Convolutions [arXiv '17]
- Model compression via distillation and quantization [ICLR '18]
- Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy [ICLR '18]
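The distillation papers above all build on Hinton et al.'s soft-target objective: the student is trained to match the teacher's temperature-softened output distribution. A minimal sketch of that KL term (the hard-label cross-entropy term and the T² gradient scaling from the paper are omitted here):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T spreads probability mass
    over the non-argmax classes ("dark knowledge")."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) between the softened distributions."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# a student that matches its teacher exactly incurs zero loss
zero = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
gap = distillation_loss([0.0, 0.0, 0.0], [2.0, 1.0, 0.1])
```

The attention-transfer and neuron-selectivity papers in this section replace the output-distribution match with losses on intermediate feature maps, but the teacher-student structure is the same.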
Miscellaneous
- Beyond Filters: Compact Feature Map for Portable Deep Model [ICML '17]
- SplitNet: Learning to Semantically Split Deep Networks for Parameter Reduction and Model Parallelization [ICML '17]
Reference
- [1] http://chenrudan.github.io/blog/2018/10/02/networkquantization.html
- [2] https://github.com/TerryLoveMl/Model-Compression-Papers