To start, a recommendation: the book Distributed Machine Learning (《分布式机器学习》) by 刘铁岩 (Tie-Yan Liu).

And two excellent blog posts:

https://zhuanlan.zhihu.com/p/29032307

https://zhuanlan.zhihu.com/p/30976469


Principles of Distributed Deep Learning

The principles of DL training are covered in many tutorials, so here is only a brief review:
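
As a quick refresher (and a baseline for the distributed variants below), here is a minimal NumPy sketch of the standard single-machine mini-batch SGD loop; the linear-regression setup is purely illustrative:

```python
import numpy as np

# Toy problem: fit w in y = X @ w with mini-batch SGD.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.01 * rng.normal(size=1000)

w = np.zeros(10)
lr, batch_size = 0.1, 32
for step in range(200):
    idx = rng.choice(len(X), size=batch_size, replace=False)
    Xb, yb = X[idx], y[idx]
    grad = 2 * Xb.T @ (Xb @ w - yb) / batch_size  # MSE gradient on the mini-batch
    w -= lr * grad                                # one SGD update
```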

So what if the scale is too large and training needs to be distributed? Distributed machine learning roughly follows these approaches:

  1. When the computation is too heavy (compute parallelism), use multiple threads/nodes to compute in parallel. A common algorithm is synchronous stochastic gradient descent (synchronous SGD), which is roughly equivalent to running mini-batch SGD on K nodes at once (K being the number of nodes); see the sketch after this list.        【ch6.2】
  2. When there is too much training data (data parallelism, also the most common scenario), the data must be partitioned across nodes. Each node first trains a sub-model on its local data while staying in communication with the other nodes (e.g., exchanging parameter updates), so that the results from all nodes can ultimately be combined into an effective global ML model.        【ch6.3】
  3. When the model is too large (model parallelism), the model itself (e.g., different layers of a NN) must be partitioned across nodes for training. The nodes may then need to synchronize frequently.        【ch6.4】
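
To make approaches 1 and 2 concrete, here is a minimal sketch that simulates synchronous SGD with K workers in a single process; in a real system each worker would be a separate node and the averaging would go through a communication primitive such as AllReduce:

```python
import numpy as np

# Simulated synchronous SGD: K workers each compute a gradient on their
# own mini-batch; the gradients are averaged into one global update.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
w_true = rng.normal(size=10)
y = X @ w_true

K, lr, batch_size = 4, 0.1, 32
w = np.zeros(10)
for step in range(100):
    grads = []
    for k in range(K):  # in reality, these run in parallel on K nodes
        idx = rng.choice(len(X), size=batch_size, replace=False)
        Xb, yb = X[idx], y[idx]
        grads.append(2 * Xb.T @ (Xb @ w - yb) / batch_size)
    w -= lr * np.mean(grads, axis=0)  # equivalent to one step on a K*batch_size mini-batch
```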

They can be summarized in the figure below:

Taking data parallelism as an example, the whole pipeline is:

  1. Partition the data across nodes
  2. Train locally on each node
  3. Design the inter-node communication and the overall topology    【ch7】
  4. Aggregate the multiple trained sub-models (a minimal sketch follows this list)    【ch8】
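
For step 4, the simplest aggregation rule is parameter averaging. A minimal sketch, assuming the K sub-models share an identical structure (ch8 of the book covers more sophisticated aggregation methods):

```python
import numpy as np

def average_models(models):
    # models: list of K sub-models, each a list of parameter arrays
    # with identical shapes; returns their element-wise average.
    return [np.mean(layers, axis=0) for layers in zip(*models)]

# Example: 3 sub-models, each with one weight matrix and one bias vector.
models = [[np.full((2, 2), float(k)), np.full(2, float(k))] for k in range(3)]
global_model = average_models(models)  # every entry equals 1.0, the mean of 0, 1, 2
```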

Distributed DL Models

There are currently three common Distributed DL approaches in industry: 【ch7.3】

1. PyTorch: AllReduce Model
MPI is a common distributed computing framework for implementing distributed machine learning systems. The main idea is to use the AllReduce API to synchronize messages; AllReduce supports any operation that satisfies the reduction rules. Since the usual way to aggregate machine learning models is summation and averaging, the AllReduce logic fits well. The standard AllReduce API has several implementations.
The AllReduce mode is simple and convenient, which makes it a good fit for parallel training with synchronous algorithms. To this day, many deep learning systems still use it for the communication in distributed training, such as Caffe2's gloo communication library, Baidu's DeepSpeech system, and Nvidia's NCCL communication library.
However, AllReduce only supports synchronous communication, and all worker nodes run identical logic, meaning every worker must hold the complete model. It is therefore unsuitable for very large models.
Limitations of AllReduce:
As the number of worker nodes grows and the computation becomes unbalanced, the training speed is determined by the slowest node; and once any worker fails, the whole system has to stop.
Also, when a model has too many parameters, it can exceed the memory capacity of a single machine.
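
A minimal sketch of the AllReduce pattern using PyTorch's torch.distributed with the gloo backend; the address, port, and the choice of four local processes are placeholder values for a single-machine demo:

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank, world_size):
    # Every worker joins the same process group (gloo runs on CPU).
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    grad = torch.ones(4) * (rank + 1)        # stand-in for a local gradient
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)
    grad /= world_size                       # SUM then divide = the usual averaging
    print(f"rank {rank}: {grad.tolist()}")   # all ranks now hold the same tensor
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(4,), nprocs=4)
```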

2. MXNet: Parameter Server Model
In the parameter server framework, all nodes in the system are logically divided into workers and servers. Each worker is in charge of its local training task and communicates with the parameter server through the server interface: it pulls the latest model parameters from the parameter server and pushes its latest local training results back. With a parameter server, training can be synchronous, asynchronous, or even a mix of both.
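
A toy in-process sketch of the push/pull interaction (a real parameter server, such as MXNet's kvstore, shards the parameters across many server nodes and serves many workers concurrently):

```python
import numpy as np

class ParameterServer:
    def __init__(self, init_params, lr=0.1):
        self.params = {k: v.copy() for k, v in init_params.items()}
        self.lr = lr

    def pull(self, key):            # worker fetches the latest parameters
        return self.params[key].copy()

    def push(self, key, grad):      # worker sends a gradient; server applies it
        self.params[key] -= self.lr * grad

server = ParameterServer({"w": np.zeros(4)})
for step in range(50):              # one worker's loop; with several workers,
    w = server.pull("w")            # these pull/push calls can interleave,
    grad = 2 * (w - np.ones(4))     # either synchronously or asynchronously
    server.push("w", grad)          # (toy gradient pulls w toward all-ones)
print(server.params["w"])           # close to [1, 1, 1, 1]
```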

3. TensorFlow: Dataflow Model
In TensorFlow's computational graph model, a computation is described as a directed acyclic dataflow graph: the nodes represent operations on the data, and the edges represent the tensors flowing between them, i.e., the dependencies between operations.
Distributed machine learning systems based on dataflow borrow this flexibility from DAG-based big-data processing systems. The system automatically handles distributed execution of the dataflow graph, so the user only needs to design a dataflow graph that correctly expresses the algorithm's logic.
As an example, consider a typical dataflow graph in TensorFlow, such as the one built below.
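
A minimal sketch using tf.function, which traces Python code into exactly this kind of dataflow graph; the dense-layer computation is illustrative:

```python
import tensorflow as tf

# A tiny dataflow graph: matmul -> add -> relu are the nodes,
# and the tensors flowing between them are the edges.
@tf.function
def dense_layer(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal((2, 3))
w = tf.random.normal((3, 4))
b = tf.zeros((4,))
y = dense_layer(x, w, b)

# Inspect the traced graph: every operation is a node in the DAG.
graph = dense_layer.get_concrete_function(x, w, b).graph
print([op.name for op in graph.get_operations()])
```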

Distributed Machine Learning Algorithms

【ch9】
