[Survey] Deep Compression / Acceleration / Quantization
Survey
- Recent Advances in Efficient Computation of Deep Convolutional Neural Networks [arXiv '18]
- A Survey of Model Compression and Acceleration for Deep Neural Networks [arXiv '17]
Quantization
- The ZipML Framework for Training Models with End-to-End Low Precision: The Cans, the Cannots, and a Little Bit of Deep Learning [ICML'17]
- Compressing Deep Convolutional Networks using Vector Quantization [arXiv'14]
- Quantized Convolutional Neural Networks for Mobile Devices [CVPR '16]
- Fixed-Point Performance Analysis of Recurrent Neural Networks [ICASSP'16]
- Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations [arXiv'16]
- Loss-aware Binarization of Deep Networks [ICLR'17]
- Towards the Limit of Network Quantization [ICLR'17]
- Deep Learning with Low Precision by Half-wave Gaussian Quantization [CVPR'17]
- ShiftCNN: Generalized Low-Precision Architecture for Inference of Convolutional Neural Networks [arXiv'17]
- Training and Inference with Integers in Deep Neural Networks [ICLR'18]
- Deep Learning with Limited Numerical Precision [ICML '15]
- Model compression via distillation and quantization [ICLR '18]
- Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy [ICLR '18]
- On the Universal Approximability of Quantized ReLU Neural Networks [arXiv '18]
- Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference [CVPR '18]
- Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1 [NIPS '16]
- XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks [ECCV '16]
- Binarized Convolutional Neural Networks with Separable Filters for Efficient Hardware Acceleration [CVPR '17]
- Maxout Networks
- BinaryConnect: Training Deep Neural Networks with binary weights during propagations
- Ternary weight networks
- From Hashing to CNNs: Training Binary Weight Networks via Hashing
- Learning Accurate Low-Bit Deep Neural Networks with Stochastic Quantization
- Trained Ternary Quantization
- DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
- Two-Step Quantization for Low-bit Neural Networks
- LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks
- Fixed-point Factorized Networks
- Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
- Network Sketching: Exploiting Binary Structure in Deep CNNs
- Towards Effective Low-bitwidth Convolutional Neural Networks
- SYQ: Learning Symmetric Quantization For Efficient Deep Neural Networks
- Very Deep Convolutional Networks for Large-Scale Image Recognition
- Towards Accurate Binary Convolutional Neural Network
- Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation
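
The papers above span binary, ternary, and general low-bit schemes. As a minimal illustration of the common starting point they refine, the NumPy sketch below fake-quantizes a weight tensor with symmetric uniform k-bit quantization; it is not taken from any specific paper, and the function name, per-tensor scale, and toy shapes are illustrative assumptions.

```python
import numpy as np

def quantize_weights(w, num_bits=8):
    """Symmetric uniform quantization: map floats onto a k-bit signed integer grid."""
    qmax = 2 ** (num_bits - 1) - 1              # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax            # one scale per tensor; per-channel is also common
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)
    return q, scale

if __name__ == "__main__":
    w = np.random.randn(64, 3, 3, 3).astype(np.float32)   # toy conv weights
    q, scale = quantize_weights(w, num_bits=8)
    w_hat = q.astype(np.float32) * scale                   # dequantized ("fake quantized") weights
    print("max abs quantization error:", np.max(np.abs(w - w_hat)))
```

Binary/ternary methods in the list push num_bits down to 1 or 2 and add training tricks (straight-through estimators, learned scales) to recover accuracy.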
Pruning
- Learning both Weights and Connections for Efficient Neural Networks [NIPS'15]
- Pruning Filters for Efficient ConvNets [ICLR'17]
- Pruning Convolutional Neural Networks for Resource Efficient Inference [ICLR'17]
- Soft Weight-Sharing for Neural Network Compression [ICLR'17]
- Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding [ICLR'16]
- Dynamic Network Surgery for Efficient DNNs [NIPS'16]
- Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning [CVPR'17]
- ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression [ICCV'17]
- To prune, or not to prune: exploring the efficacy of pruning for model compression [ICLR'18]
- Data-Driven Sparse Structure Selection for Deep Neural Networks [arXiv '17]
- Learning Structured Sparsity in Deep Neural Networks [NIPS '16]
- Scalpel: Customizing DNN Pruning to the Underlying Hardware Parallelism [ISCA '17]
- Channel Pruning for Accelerating Very Deep Neural Networks [ICCV '17]
- Learning Efficient Convolutional Networks through Network Slimming [ICCV '17]
- NISP: Pruning Networks using Neuron Importance Score Propagation [CVPR '18]
- Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers [ICLR '18]
- MorphNet: Fast & Simple Resource-Constrained Structure Learning of Deep Networks [arXiv '17]
- Efficient Sparse-Winograd Convolutional Neural Networks [ICLR '18]
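
Most of the pruning papers above remove low-saliency weights, filters, or channels and then fine-tune. The sketch below shows only the simplest unstructured variant, magnitude pruning with a binary mask; it is a hedged illustration in NumPy (names and sparsity level are illustrative), not a reimplementation of any listed method.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.7):
    """Zero out the smallest-magnitude weights; return pruned weights and the keep-mask."""
    k = int(sparsity * w.size)                       # number of weights to remove
    threshold = np.sort(np.abs(w), axis=None)[k]     # magnitude at the pruning boundary
    mask = (np.abs(w) >= threshold).astype(w.dtype)
    return w * mask, mask

if __name__ == "__main__":
    w = np.random.randn(256, 512).astype(np.float32)    # toy fully connected layer
    w_pruned, mask = magnitude_prune(w, sparsity=0.7)
    print("kept fraction:", mask.mean())                # roughly 0.3
```

The filter/channel pruning papers in the list instead remove whole rows or filters so the resulting network stays dense and needs no sparse kernels.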
Low-rank Approximation
- Efficient and Accurate Approximations of Nonlinear Convolutional Networks [CVPR'15]
- Accelerating Very Deep Convolutional Networks for Classification and Detection (extended version of the paper above)
- Convolutional neural networks with low-rank regularization [arXiv'15]
- Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation [NIPS'14]
- Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications [ICLR'16]
- High performance ultra-low-precision convolutions on mobile devices [NIPS'17]
- Speeding up convolutional neural networks with low rank expansions
- Coordinating Filters for Faster Deep Neural Networks [ICCV '17]
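
The papers above factorize convolutional and fully connected layers in more refined ways (channel-wise decompositions, nonlinear-response reconstruction, Tucker decomposition). The sketch below shows only the generic truncated-SVD idea for a fully connected weight matrix; the shapes and rank are illustrative assumptions.

```python
import numpy as np

def low_rank_factorize(w, rank):
    """Truncated SVD: approximate W (m x n) by A (m x r) @ B (r x n)."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]          # fold singular values into the left factor
    b = vt[:rank, :]
    return a, b

if __name__ == "__main__":
    w = np.random.randn(1024, 4096).astype(np.float32)
    a, b = low_rank_factorize(w, rank=128)
    # Two thin layers now cost 128*(1024+4096) multiply-adds per input instead of 1024*4096.
    print("relative error:", np.linalg.norm(w - a @ b) / np.linalg.norm(w))
    print("param ratio:", (a.size + b.size) / w.size)
```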
Knowledge Distillation
- Dark knowledge
- FitNets: Hints for Thin Deep Nets [ICLR '15]
- Net2net: Accelerating learning via knowledge transfer [ICLR '16]
- Distilling the Knowledge in a Neural Network [NIPS '15]
- MobileID: Face Model Compression by Distilling Knowledge from Neurons [AAAI '16]
- DarkRank: Accelerating Deep Metric Learning via Cross Sample Similarities Transfer [arXiv '17]
- Deep Model Compression: Distilling Knowledge from Noisy Teachers [arXiv '16]
- Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer [ICLR '17]
- Like What You Like: Knowledge Distill via Neuron Selectivity Transfer [arXiv '17]
- Learning Efficient Object Detection Models with Knowledge Distillation [NIPS '17]
- Data-Free Knowledge Distillation For Deep Neural Networks [NIPS '17]
- A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning [CVPR '17]
- Moonshine: Distilling with Cheap Convolutions [arXiv '17]
- Model compression via distillation and quantization [ICLR '18]
- Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy [ICLR '18]
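
The common thread in the distillation papers above is training a small student to match a large teacher's softened outputs (or intermediate features). The sketch below computes the temperature-scaled distillation loss in the spirit of "Distilling the Knowledge in a Neural Network" on toy logits; the NumPy setup, batch size, class count, and temperature are illustrative, and in practice this term is added to the standard cross-entropy on ground-truth labels.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)            # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions, scaled by T^2."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()                      # T^2 keeps the gradient scale comparable

if __name__ == "__main__":
    teacher = np.random.randn(8, 10)                 # toy logits: batch of 8, 10 classes
    student = teacher + 0.5 * np.random.randn(8, 10)
    print("distillation loss:", distillation_loss(student, teacher, T=4.0))
```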
Miscellaneous
- Beyond Filters: Compact Feature Map for Portable Deep Model [ICML '17]
- SplitNet: Learning to Semantically Split Deep Networks for Parameter Reduction and Model Parallelization [ICML '17]
Reference
- [1] http://chenrudan.github.io/blog/2018/10/02/networkquantization.html
- [2] https://github.com/TerryLoveMl/Model-Compression-Papers