2019-ICLR-DARTS: Differentiable Architecture Search - Paper Reading Notes
DARTS
2019-ICLR-DARTS Differentiable Architecture Search
- Hanxiao Liu, Karen Simonyan, Yiming Yang
- GitHub: 2.8k stars
- Citations: 557
Motivation
Current NAS methods:
- Computationally expensive: on the order of 2000-3150 GPU days (e.g., ~2000 for NASNet, ~3150 for AmoebaNet).
- The search space is discrete, so a large number of candidate architectures must be trained and evaluated.
Contribution
- A differentiable NAS method based on gradient descent.
- Applicable to both CNNs (vision) and RNNs (language modeling).
- SOTA results on CIFAR-10 and PTB.
- Efficiency: about 4 GPU days vs. about 2000 GPU days for prior methods.
- Transferable: CIFAR-10 → ImageNet, and PTB → WikiText-2.
Method
Search Space
Search for a cell as the building block of the final architecture.
The learned cell can either be stacked to form a CNN or recursively connected to form an RNN.
A cell is a DAG consisting of an ordered sequence of N nodes, where each node \(x^{(i)}\) is a latent representation (e.g., a feature map) and each directed edge (i, j) is associated with an operation \(o^{(i,j)}\). To make the search space continuous, the categorical choice of operation on each edge is relaxed to a softmax over all candidate operations:
\(\bar{o}^{(i, j)}(x)=\sum_{o \in \mathcal{O}} \frac{\exp \left(\alpha_{o}^{(i, j)}\right)}{\sum_{o^{\prime} \in \mathcal{O}} \exp \left(\alpha_{o^{\prime}}^{(i, j)}\right)} o(x)\)
Each intermediate node is computed based on all of its predecessors (during search, each \(o^{(i,j)}\) is replaced by the mixed operation \(\bar{o}^{(i,j)}\)):
\(x^{(j)}=\sum_{i<j} o^{(i, j)}\left(x^{(i)}\right)\)
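To make the relaxation concrete, below is a minimal PyTorch sketch of the mixed operation on a single edge. The class name `MixedOp` and its interface are illustrative assumptions, not taken verbatim from the paper:

```python
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Mixed operation on one edge: a softmax-weighted sum of all candidates."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)  # the candidate operations o in O

    def forward(self, x, alpha):
        # alpha: raw architecture parameters for this edge, one entry per op
        weights = F.softmax(alpha, dim=-1)
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```

Because the softmax weights are differentiable in α, architecture search reduces to learning the continuous variables α jointly with the weights w.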
Optimization Objective
Our goal is to jointly learn the architecture α and the weights w within all the mixed operations (e.g. weights of the convolution filters).
\(\min _{\alpha} \mathcal{L}_{val}\left(w^{*}(\alpha), \alpha\right)\)  (3)
s.t. \(w^{*}(\alpha)=\operatorname{argmin}_{w} \mathcal{L}_{train}(w, \alpha)\)  (4)
The idea is to approximate w∗(α) by adapting w using only a single training step, without solving the inner optimization (equation 4) completely by training until convergence.
\(\nabla_{\alpha} \mathcal{L}_{val}\left(w^{*}(\alpha), \alpha\right)\)  (5)
\(\approx \nabla_{\alpha} \mathcal{L}_{val}\left(w-\xi \nabla_{w} \mathcal{L}_{train}(w, \alpha), \alpha\right)\)  (6)
Applying the chain rule to equation 6 yields
\(\nabla_{\alpha} \mathcal{L}_{val}\left(w^{\prime}, \alpha\right)-\xi \nabla_{\alpha, w}^{2} \mathcal{L}_{train}(w, \alpha) \nabla_{w^{\prime}} \mathcal{L}_{val}\left(w^{\prime}, \alpha\right)\)  (7)
where \(w^{\prime}=w-\xi \nabla_{w} \mathcal{L}_{train}(w, \alpha)\) denotes the weights after one step of gradient descent.
- When ξ = 0, the second-order term in equation 7 vanishes.
- ξ = 0 gives the first-order approximation (a training-loop sketch follows below);
- ξ > 0 gives the second-order approximation.
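Below is a hedged sketch of the resulting alternating optimization with the first-order approximation (ξ = 0). Here `model`, `criterion`, `train_loader`, and `val_loader` are assumed to exist, and `model.alphas()` / `model.weights()` are hypothetical accessors returning the architecture parameters α and the network weights w:

```python
import torch

# Each optimizer is constructed over a disjoint parameter set, so
# alpha_optimizer.step() only moves α and w_optimizer.step() only moves w.
alpha_optimizer = torch.optim.Adam(model.alphas(), lr=3e-4)
w_optimizer = torch.optim.SGD(model.weights(), lr=0.025, momentum=0.9)

for (x_train, y_train), (x_val, y_val) in zip(train_loader, val_loader):
    # 1) Update α by descending the validation loss (w is left unchanged).
    alpha_optimizer.zero_grad()
    criterion(model(x_val), y_val).backward()
    alpha_optimizer.step()

    # 2) Update w by descending the training loss.
    w_optimizer.zero_grad()
    criterion(model(x_train), y_train).backward()
    w_optimizer.step()
```

With ξ > 0, step 1 would instead differentiate through a virtual gradient step on w, which brings in the second-order term of equation 7.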
Discrete Architecture
To form each node in the discrete architecture, we retain the top-k strongest operations (from distinct nodes) among all non-zero candidate operations collected from all the previous nodes.
We use k = 2 for convolutional cells and k = 1 for recurrent cells. The strength of an operation is defined as \(\frac{\exp \left(\alpha_{o}^{(i, j)}\right)}{\sum_{o^{\prime} \in \mathcal{O}} \exp \left(\alpha_{o^{\prime}}^{(i, j)}\right)}\).
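A small sketch of this discretization rule for one node; the function name and data layout are our assumptions, not the paper's code:

```python
import torch.nn.functional as F

def derive_node(edge_alphas, k=2):
    """Retain the k strongest incoming edges for one node.

    edge_alphas: list of (predecessor_id, alpha) pairs, where alpha holds
    the raw architecture parameters over candidate ops, with the 'zero'
    op excluded. Returns the retained (predecessor_id, op_index) pairs.
    """
    scored = []
    for i, alpha in edge_alphas:
        weights = F.softmax(alpha, dim=-1)      # operation strengths
        strength, op_idx = weights.max(dim=-1)  # strongest non-zero op on edge
        scored.append((strength.item(), i, op_idx.item()))
    scored.sort(reverse=True)                   # strongest edges first
    return [(i, op) for _, i, op in scored[:k]]
```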
Experiments
We include the following operations in O:
- 3 × 3 and 5 × 5 separable convolutions,
- 3 × 3 and 5 × 5 dilated separable convolutions,
- 3 × 3 max pooling,
- 3 × 3 average pooling,
- identity (skip connection),
- zero.
All operations are of stride one (if applicable), and the feature maps are padded to preserve their spatial resolution, as in the sketch below.
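For illustration, the non-convolutional candidates might be instantiated as follows; the `OPS` dictionary and the `Zero` class are our naming, not the paper's released code:

```python
import torch
import torch.nn as nn

class Zero(nn.Module):
    """The 'zero' op: outputs all zeros, i.e. the absence of a connection."""
    def forward(self, x):
        return torch.zeros_like(x)

# Stride-1 candidates; padding keeps the spatial resolution unchanged.
# (The separable and dilated separable convolutions are sketched below.)
OPS = {
    "max_pool_3x3": nn.MaxPool2d(3, stride=1, padding=1),
    "avg_pool_3x3": nn.AvgPool2d(3, stride=1, padding=1),
    "skip_connect": nn.Identity(),
    "zero": Zero(),
}
```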
We use the ReLU-Conv-BN order for convolutional operations, and each separable convolution is always applied twice, as in the sketch below.
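A sketch of such a separable convolution, assuming channel-preserving operations of width C:

```python
import torch.nn as nn

def sep_conv(C, kernel_size):
    """Separable convolution in ReLU-Conv-BN order, applied twice."""
    padding = kernel_size // 2  # preserves spatial resolution at stride 1
    def block():
        return nn.Sequential(
            nn.ReLU(inplace=False),
            nn.Conv2d(C, C, kernel_size, padding=padding,
                      groups=C, bias=False),  # depthwise
            nn.Conv2d(C, C, 1, bias=False),   # pointwise
            nn.BatchNorm2d(C),
        )
    return nn.Sequential(block(), block())
```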
Our convolutional cell consists of N = 7 nodes, where the output node is defined as the depthwise concatenation of all the intermediate nodes (input nodes excluded).
The first and second nodes of cell k are set equal to the outputs of cell k−2 and cell k−1, respectively.
Cells located at 1/3 and 2/3 of the total depth of the network are reduction cells, in which all the operations adjacent to the input nodes are of stride two.
The architecture encoding therefore is (αnormal, αreduce),
where αnormal is shared by all the normal cells
and αreduce is shared by all the reduction cells.
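In code, this encoding amounts to two small matrices of architecture parameters. The shapes below follow from N = 7 (two input nodes, four intermediate nodes, one output node) and the eight candidate operations; the variable names are ours:

```python
import torch

num_ops = 8  # |O|: the eight candidate operations listed above
n_inter = 4  # intermediate nodes: N = 7 minus 2 inputs and 1 output
n_edges = sum(i + 2 for i in range(n_inter))  # 2 + 3 + 4 + 5 = 14 edges per cell

# One alpha matrix per cell type, shared by every cell of that type.
alpha_normal = torch.nn.Parameter(1e-3 * torch.randn(n_edges, num_ops))
alpha_reduce = torch.nn.Parameter(1e-3 * torch.randn(n_edges, num_ops))
```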
To determine the architecture for final evaluation, we run DARTS four times with different random seeds and pick the best cell based on its validation performance obtained by training from scratch for a short period (100 epochs on CIFAR-10 and 300 epochs on PTB).
This is particularly important for recurrent cells, as the optimization outcomes can be initialization-sensitive (Fig. 3)
Architecture Evaluation
- To evaluate the selected architecture, we randomly initialize its weights (weights learned during the search process are discarded), train it from scratch, and report its performance on the test set.
Result Analysis
- DARTS achieved results comparable with the state of the art while using three orders of magnitude less computational resources.
- (i.e. 1.5 or 4 GPU days vs 2000 GPU days for NASNet and 3150 GPU days for AmoebaNet)
- The longer search time is due to the fact that the search process is repeated four times for cell selection. This practice is less important for convolutional cells, however, because the performance of discovered architectures does not strongly depend on initialization (Fig. 3).
- It is also interesting to note that random search is competitive for both convolutional and recurrent models, which reflects the importance of the search space design.
Results in Table 3 show that the cell learned on CIFAR-10 is indeed transferable to ImageNet.
The weaker transferability between PTB and WT2 (as compared to that between CIFAR-10 and ImageNet) could be explained by the relatively small size of the source dataset (PTB) for architecture search.
The issue of transferability could potentially be circumvented by directly optimizing the architecture on the task of interest.
Conclusion
- We presented DARTS, a simple yet efficient NAS algorithm for both CNNs and RNNs.
- State-of-the-art results on CIFAR-10 and PTB.
- Efficiency improved by several orders of magnitude over non-differentiable NAS methods.
Possible Improvements
- There are discrepancies between the continuous architecture encoding and the derived discrete architecture (e.g., introduced by the softmax relaxation and the subsequent discretization step).
- It would also be interesting to investigate performance-aware architecture derivation schemes based on the shared parameters learned during the search process.