Network Compression Paper Collection (network compression)
Convolutional Neural Networks
- ImageNet Models
- Architecture Design
- Activation Functions
- Visualization
- Fast Convolution
- Low-Rank Filter Approximation
- Low Precision
- Parameter Pruning
- Transfer Learning
- Theory
- 3D Data
- Hardware
ImageNet Models
- 2017 CVPR Xception: Deep Learning with Depthwise Separable Convolutions (Xception)
- 2017 CVPR Aggregated Residual Transformations for Deep Neural Networks (ResNeXt)
- 2016 ECCV Identity Mappings in Deep Residual Networks (Pre-ResNet)
- 2016 arXiv Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning (Inception V4)
- 2016 CVPR Deep Residual Learning for Image Recognition (ResNet)
- 2015 arXiv Rethinking the Inception Architecture for Computer Vision (Inception V3)
- 2015 ICML Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (Inception V2)
- 2015 ICCV Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (PReLU)
- 2015 ICLR Very Deep Convolutional Networks For Large-scale Image Recognition (VGG)
- 2015 CVPR Going Deeper with Convolutions (GoogLeNet/Inception V1)
- 2012 NIPS ImageNet Classification with Deep Convolutional Neural Networks (AlexNet)
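
For a concrete sense of the building block that Xception (and MobileNets, listed under Architecture Design) relies on, a minimal PyTorch sketch of a depthwise separable convolution follows; the channel sizes and the BN/ReLU placement are illustrative, not either paper's exact configuration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel 3x3 spatial convolution
    followed by a 1x1 pointwise convolution. Channel sizes are illustrative."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 32, 56, 56)           # dummy feature map
y = DepthwiseSeparableConv(32, 64)(x)     # -> (1, 64, 56, 56)
```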
Architecture Design
- 2017 arXiv One Model To Learn Them All
- 2017 arXiv MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
- 2017 ICML AdaNet: Adaptive Structural Learning of Artificial Neural Networks
- 2017 ICML Large-Scale Evolution of Image Classifiers
- 2017 CVPR Aggregated Residual Transformations for Deep Neural Networks
- 2017 CVPR Densely Connected Convolutional Networks
- 2017 ICLR Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
- 2017 ICLR Neural Architecture Search with Reinforcement Learning
- 2017 ICLR Designing Neural Network Architectures using Reinforcement Learning
- 2017 ICLR Do Deep Convolutional Nets Really Need to be Deep and Convolutional?
- 2017 ICLR Highway and Residual Networks learn Unrolled Iterative Estimation
- 2016 NIPS Residual Networks Behave Like Ensembles of Relatively Shallow Networks
- 2016 BMVC Wide Residual Networks
- 2016 arXiv Benefits of depth in neural networks
- 2016 AAAI On the Depth of Deep Neural Networks: A Theoretical View
- 2016 arXiv SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
- 2015 ICMLW Highway Networks
- 2015 CVPR Convolutional Neural Networks at Constrained Time Cost
- 2015 CVPR Fully Convolutional Networks for Semantic Segmentation
- 2014 NIPS Do Deep Nets Really Need to be Deep?
- 2014 ICLRW Understanding Deep Architectures using a Recursive Convolutional Network
- 2013 ICML Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures
- 2009 ICCV What is the Best Multi-Stage Architecture for Object Recognition?
- 1995 NIPS Simplifying Neural Nets by Discovering Flat Minima
- 1994 T-NN SVD-NET: An Algorithm that Automatically Selects Network Structure
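
SqueezeNet (listed above) reaches AlexNet-level accuracy with far fewer parameters by stacking Fire modules; a minimal PyTorch sketch follows, with illustrative channel counts rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """SqueezeNet-style Fire module: a 1x1 'squeeze' layer reduces channels,
    then parallel 1x1 and 3x3 'expand' layers restore them."""
    def __init__(self, in_ch, squeeze_ch, expand_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        s = self.relu(self.squeeze(x))
        return torch.cat([self.relu(self.expand1x1(s)),
                          self.relu(self.expand3x3(s))], dim=1)

y = Fire(96, 16, 64)(torch.randn(1, 96, 55, 55))   # -> (1, 128, 55, 55)
```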
Activation Functions
- 2017 arXiv Self-Normalizing Neural Networks (SELU)
- 2016 ICLR Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) (ELU)
- 2015 arXiv Empirical Evaluation of Rectified Activations in Convolutional Network (RReLU)
- 2015 ICCV Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (PReLU)
- 2013 ICML Rectifier Nonlinearities Improve Neural Network Acoustic Models
- 2010 ICML Rectified Linear Units Improve Restricted Boltzmann Machines (ReLU)
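
The activations above reduce to one-line formulas; a small NumPy sketch collecting them is given below. The SELU constants are the values reported in the Self-Normalizing Neural Networks paper; the other defaults are illustrative.

```python
import numpy as np

def relu(x):                      # Nair & Hinton, 2010
    return np.maximum(0.0, x)

def leaky_relu(x, slope=0.01):    # Maas et al., 2013
    return np.where(x > 0, x, slope * x)

def prelu(x, a):                  # He et al., 2015: the slope a is learned
    return np.where(x > 0, x, a * x)

def elu(x, alpha=1.0):            # Clevert et al., 2016
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def selu(x):                      # Klambauer et al., 2017 (constants from the paper)
    alpha, scale = 1.6732632423543772, 1.0507009873554805
    return scale * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

x = np.linspace(-3, 3, 7)
print(relu(x), selu(x), sep="\n")
```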
Visualization
- 2017 CVPR Network Dissection: Quantifying Interpretability of Deep Visual Representations
- 2015 ICMLW Understanding Neural Networks Through Deep Visualization
- 2014 ECCV Visualizing and Understanding Convolutional Networks
Fast Convolution
- 2017 ICML Warped Convolutions: Efficient Invariance to Spatial Transformations
- 2017 ICLR Faster CNNs with Direct Sparse Convolutions and Guided Pruning
- 2016 NIPS PerforatedCNNs: Acceleration through Elimination of Redundant Convolutions
- 2016 CVPR Fast Algorithms for Convolutional Neural Networks (Winograd)
- 2015 CVPR Sparse Convolutional Neural Networks
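
Winograd minimal filtering (Lavin & Gray, above) trades multiplications for additions. Below is a tiny NumPy sketch of the 1-D case F(2,3), which produces two outputs of a 3-tap correlation with 4 multiplications instead of 6; the transform matrices are the standard F(2,3) ones, and real implementations tile the 2-D case F(2x2,3x3) over feature maps.

```python
import numpy as np

# Winograd F(2,3): 2 outputs of a 3-tap correlation with 4 multiplies.
BT = np.array([[1, 0, -1, 0],
               [0, 1,  1, 0],
               [0, -1, 1, 0],
               [0, 1,  0, -1]], dtype=float)
G = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0, 0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """d: input tile of length 4, g: filter of length 3 -> 2 outputs."""
    return AT @ ((G @ g) * (BT @ d))   # elementwise product = 4 multiplies

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, 1.0, -1.0])
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
print(winograd_f23(d, g), direct)      # the two results agree
```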
Low-Rank Filter Approximation
- 2016 ICLR Convolutional Neural Networks with Low-rank Regularization
- 2016 ICLR Training CNNs with Low-Rank Filters for Efficient Image Classification
- 2016 TPAMI Accelerating Very Deep Convolutional Networks for Classification and Detection
- 2015 CVPR Efficient and Accurate Approximations of Nonlinear Convolutional Networks
- 2015 ICLR Speeding-up convolutional neural networks using fine-tuned cp-decomposition
- 2014 NIPS Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation
- 2014 BMVC Speeding up Convolutional Neural Networks with Low Rank Expansions
- 2013 NIPS Predicting Parameters in Deep Learning
- 2013 CVPR Learning Separable Filters
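
Most of these papers factor a convolutional or fully-connected weight tensor into a product of low-rank pieces. A minimal NumPy sketch of the basic idea, using truncated SVD on a (reshaped) weight matrix, is shown below; the layer size and rank are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))   # weights of a dense (or reshaped conv) layer

def low_rank_factorize(W, k):
    """Split W (m x n) into A (m x k) and B (k x n) via truncated SVD,
    so W @ x ~ A @ (B @ x) costs k*(m+n) multiply-adds instead of m*n."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :k] * S[:k]              # absorb singular values into A
    B = Vt[:k, :]
    return A, B

A, B = low_rank_factorize(W, k=32)
x = rng.standard_normal(512)
err = np.linalg.norm(W @ x - A @ (B @ x)) / np.linalg.norm(W @ x)
print(A.shape, B.shape, f"relative error ~ {err:.3f}")
```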
Low Precision
- 2017 arXiv BitNet: Bit-Regularized Deep Neural Networks
- 2017 arXiv Gradient Descent for Spiking Neural Networks
- 2017 arXiv ShiftCNN: Generalized Low-Precision Architecture for Inference of Convolutional Neural Networks
- 2017 arXiv Gated XNOR Networks: Deep Neural Networks with Ternary Weights and Activations under a Unified Discretization Framework
- 2017 arXiv The High-Dimensional Geometry of Binary Neural Networks
- 2017 NIPS Training Quantized Nets: A Deeper Understanding
- 2017 NIPS TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning
- 2017 ICML Analytical Guarantees on Numerical Precision of Deep Neural Networks
- 2017 CVPR Deep Learning with Low Precision by Half-wave Gaussian Quantization
- 2017 CVPR Network Sketching: Exploiting Binary Structure in Deep CNNs
- 2017 CVPR Local Binary Convolutional Neural Networks
- 2017 ICLR Towards the Limit of Network Quantization
- 2017 ICLR Loss-aware Binarization of Deep Networks
- 2017 ICLR Trained Ternary Quantization
- 2017 ICLR Incremental Network Quantization: Towards Lossless CNNs with Low-precision Weights
- 2016 arXiv Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations
- 2016 arXiv Accelerating Deep Convolutional Networks using low-precision and sparsity
- 2016 arXiv Deep neural networks are robust to weight binarization and other non-linear distortions
- 2016 ECCV XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
- 2016 ICMLW Overcoming Challenges in Fixed Point Training of Deep Convolutional Networks
- 2016 ICML Fixed Point Quantization of Deep Convolutional Networks
- 2016 NIPS Binarized Neural Networks
- 2016 arXiv Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1
- 2016 CVPR Quantized Convolutional Neural Networks for Mobile Devices
- 2016 ICLR Neural Networks with Few Multiplications
- 2015 arXiv Resiliency of Deep Neural Networks under Quantization
- 2015 arXiv Rounding Methods for Neural Networks with Low Resolution Synaptic Weights
- 2015 NIPS Backpropagation for Energy-Efficient Neuromorphic Computing
- 2015 NIPS BinaryConnect: Training Deep Neural Networks with Binary Weights during Propagations
- 2015 ICMLW Bitwise Neural Networks
- 2015 ICML Deep Learning with Limited Numerical Precision
- 2015 ICLRW Training deep neural networks with low precision multiplications
- 2015 arXiv Training Binary Multilayer Neural Networks for Image Classification using Expectation Backpropagation
- 2014 NIPS Expectation Backpropagation: Parameter-Free Training of Multilayer Neural Networks with Continuous or Discrete Weights
- 2013 arXiv Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation
- 2011 NIPSW Improving the speed of neural networks on CPUs
- 1987 Combinatorica Randomized rounding: A technique for provably good algorithms and algorithmic proofs
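
A recurring building block in the binary/ternary papers above is to binarize weights in the forward pass while letting gradients flow to the underlying real-valued weights (BinaryConnect combined with a straight-through estimator). A minimal PyTorch sketch of that pattern follows; the gradient clipping range is a common but optional choice.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Deterministic weight binarization with a straight-through estimator:
    forward uses sign-like binarization to {-1, +1}, backward passes the
    gradient through to the real-valued weights (BinaryConnect-style)."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Straight-through: pass gradients, zeroed where |w| > 1 (optional clipping).
        return grad_out * (w.abs() <= 1).to(grad_out.dtype)

w = torch.randn(4, 4, requires_grad=True)
wb = BinarizeSTE.apply(w)              # binary weights used in the forward pass
loss = (wb.sum() - 1.0) ** 2
loss.backward()                        # gradient flows to the real-valued w
print(wb, w.grad, sep="\n")
```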
Parameter Pruning
- 2017 ICML Beyond Filters: Compact Feature Map for Portable Deep Model
- 2017 ICLR Soft Weight-Sharing for Neural Network Compression
- 2017 ICLR Pruning Convolutional Neural Networks for Resource Efficient Inference
- 2017 ICLR Pruning Filters for Efficient ConvNets
- 2016 arXiv Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning
- 2016 arXiv Network Trimming: A Data-Driven Neuron Pruning Approach towards Efficient Deep Architectures
- 2016 NIPS Learning the Number of Neurons in Deep Networks
- 2016 NIPS Learning Structured Sparsity in Deep Neural Networks
- 2016 NIPS Dynamic Network Surgery for Efficient DNNs
- 2016 ECCV Less is More: Towards Compact CNNs
- 2016 CVPR Fast ConvNets Using Group-wise Brain Damage
- 2016 ICLR Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
- 2016 ICLR Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications
- 2015 arXiv Structured Pruning of Deep Convolutional Neural Networks
- 2015 IEEE Access Channel-Level Acceleration of Deep Face Representations
- 2015 BMVC Data-free parameter pruning for Deep Neural Networks
- 2015 ICML Compressing Neural Networks with the Hashing Trick
- 2015 ICCV Deep Fried Convnets
- 2015 ICCV An Exploration of Parameter Redundancy in Deep Networks with Circulant Projections
- 2015 NIPS Learning both Weights and Connections for Efficient Neural Networks
- 2015 ICLR FitNets: Hints for Thin Deep Nets
- 2014 arXiv Compressing Deep Convolutional Networks using Vector Quantization
- 2014 NIPSW Distilling the Knowledge in a Neural Network
- 1995 ISANN Evaluating Pruning Methods
- 1993 T-NN Pruning Algorithms--A Survey
- 1989 NIPS Optimal Brain Damage
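
The classic pipeline in several of these papers (e.g. Learning both Weights and Connections, Deep Compression) is magnitude pruning followed by fine-tuning under a fixed sparsity mask. A minimal PyTorch sketch of the pruning step is given below; the layer size and sparsity level are illustrative.

```python
import torch
import torch.nn as nn

def magnitude_prune(layer: nn.Linear, sparsity: float):
    """Zero out the smallest-magnitude weights of a layer and return the
    binary mask; re-applying the mask after each update keeps pruned
    connections at zero during fine-tuning. `sparsity` is the fraction removed."""
    w = layer.weight.data
    k = int(sparsity * w.numel())
    threshold = w.abs().flatten().kthvalue(k).values if k > 0 else w.abs().min() - 1
    mask = (w.abs() > threshold).to(w.dtype)
    layer.weight.data.mul_(mask)
    return mask

layer = nn.Linear(512, 256)
mask = magnitude_prune(layer, sparsity=0.9)
print(f"kept {int(mask.sum())} of {mask.numel()} weights")
```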
Transfer Learning
- 2016 arXiv What makes ImageNet good for transfer learning?
- 2014 NIPS How transferable are features in deep neural networks?
- 2014 CVPRW CNN Features off-the-shelf: an Astounding Baseline for Recognition
- 2014 ICML DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition
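
The standard recipe these papers study is to reuse ImageNet features: freeze a pretrained backbone and train only a new task-specific head. A minimal PyTorch/torchvision sketch follows; ResNet-18, the `pretrained` flag (older torchvision API), and the 10-class head are illustrative choices.

```python
import torch.nn as nn
from torchvision import models

# Freeze an ImageNet-pretrained backbone and train only a new classifier head.
backbone = models.resnet18(pretrained=True)   # older torchvision API
for p in backbone.parameters():
    p.requires_grad = False                   # freeze the pretrained features

num_classes = 10                              # illustrative target task
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # new head, trainable

trainable = [n for n, p in backbone.named_parameters() if p.requires_grad]
print(trainable)                              # only the new fc layer
```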
Theory
- 2017 ICML On the Expressive Power of Deep Neural Networks
- 2017 ICML A Closer Look at Memorization in Deep Networks
- 2017 ICML An Analytical Formula of Population Gradient for two-layered ReLU network and its Applications in Convergence and Critical Point Analysis
- 2016 NIPS Exponential expressivity in deep neural networks through transient chaos
- 2016 arXiv Understanding Deep Convolutional Networks
- 2014 NIPS On the number of linear regions of deep neural networks
- 2014 ICML Provable Bounds for Learning Some Deep Representations
- 2014 ICLR On the number of response regions of deep feed forward networks with piece-wise linear activations
- 2014 ICLR Revisiting natural gradient for deep networks
3D Data
- 2017 NIPS PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space
- 2017 ICCV Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs
- 2017 SIGGRAPH O-CNN: Octree-based Convolutional Neural Network for Understanding 3D Shapes
- 2017 CVPR PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
- 2017 CVPR OctNet: Learning Deep 3D Representations at High Resolutions
- 2016 NIPS FPNN: Field Probing Neural Networks for 3D Data
- 2016 NIPS Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling
- 2015 ICCV Multi-view Convolutional Neural Networks for 3D Shape Recognition
- 2015 BMVC Sparse 3D convolutional neural networks
- 2015 CVPR 3D ShapeNets: A Deep Representation for Volumetric Shapes
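
Volumetric methods such as 3D ShapeNets feed the network a binary occupancy grid rather than a raw point cloud; a minimal NumPy sketch of that voxelization step is shown below, with an illustrative grid resolution.

```python
import numpy as np

def voxelize(points, grid=32):
    """Convert an (N, 3) point cloud into a binary occupancy grid, the
    volumetric input used by 3D ShapeNets-style networks."""
    mins, maxs = points.min(0), points.max(0)
    idx = ((points - mins) / (maxs - mins + 1e-9) * (grid - 1)).astype(int)
    vol = np.zeros((grid, grid, grid), dtype=np.float32)
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vol

pts = np.random.rand(1024, 3)        # dummy point cloud
print(voxelize(pts).sum(), "occupied voxels")
```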
Hardware
- 2016 ISVLSI YodaNN: An ultra-low power convolutional neural network accelerator based on binary weights
- 2017 ASPLOS SC-DCNN: Highly-Scalable Deep Convolutional Neural Network using Stochastic Computing
- 2017 FPGA Can FPGAs Beat GPUs in Accelerating Next-Generation Deep Neural Networks?
- 2015 NIPS Tutorial High-Performance Hardware for Machine Learning