http://blog.revolutionanalytics.com/2016/08/deep-learning-part-1.html

Deep Learning Part 1: Comparison of Symbolic Deep Learning Frameworks

by Anusua Trivedi, Microsoft Data Scientist

Background and Approach

This blog series is based on my upcoming talk on the re-usability of deep learning models at the Strata + Hadoop World conference in Singapore. The series will run in several parts, in which I describe my experiences and go deep into the reasons behind my choices.

Deep learning is an emerging field of research with applications across multiple domains. I try to show how a transfer learning and fine-tuning strategy makes the same Convolutional Neural Network model re-usable across disjoint domains, and how applying the model to different domains adds value to the fine-tuned model.

In this post (Part 1), I describe and compare the commonly used open-source deep learning frameworks. I dive deep into the pros and cons of each framework, and discuss why I chose Theano for my work.

Please feel free to email me at trivedianusua23@gmail.com if you have questions.

Symbolic Frameworks

In symbolic computation frameworks (such as CNTK, MXNET, TensorFlow, and Theano), a model is specified as a symbolic graph of vector operations, such as matrix add/multiply or convolution. A layer is just a composition of those operations. The fine granularity of the building blocks (operations) lets users invent new complex layer types without implementing them in a low-level language (as in Caffe).
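To make "a layer is just a composition of operations" concrete, here is a minimal Theano sketch (the shapes and variable names are illustrative only): declare symbolic tensors, compose operations into a graph, then compile the graph into a callable function.

```python
import numpy as np
import theano
import theano.tensor as T

X = T.matrix('X')                                   # symbolic input batch
W = theano.shared(np.random.randn(784, 10).astype('float32'), name='W')
b = theano.shared(np.zeros(10, dtype='float32'), name='b')

# A "layer" is just a composition of tensor operations in the symbolic graph.
y = T.nnet.softmax(T.dot(X, W) + b)

# Compile the symbolic graph into an executable function (optimized, optionally on GPU).
predict = theano.function(inputs=[X], outputs=y)
print(predict(np.random.randn(2, 784).astype('float32')).shape)   # (2, 10)
```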

I've used different symbolic computation frameworks in my work. However, each of them has pros and cons in its design and current implementation, and none of them perfectly satisfies every need. For my problem, I decided to work with Theano.

Here we compare the following symbolic computation frameworks:

Theano

  • Software: Theano
  • Creator: Université de Montréal
  • Software license: BSD license
  • Open source: Yes
  • Platform: Cross-platform
  • Written in: Python
  • Interface: Python
  • CUDA support: Yes
  • Automatic differentiation: Yes
  • Has pre-trained models: Through Lasagne's model zoo
  • Recurrent Nets: Yes
  • Convolutional Nets: Yes
  • RBM/DBNs: Yes

TensorFlow

  • Software: TensorFlow
  • Creator: Google Brain Team
  • Software license: Apache 2.0
  • Open source: Yes
  • Platform: Linux, Mac OS X; Windows support on roadmap
  • Written in: C++, Python
  • Interface: Python, C/C++
  • CUDA support: Yes
  • Automatic differentiation: Yes
  • Has pre-trained models: No
  • Recurrent Nets: Yes
  • Convolutional Nets: Yes
  • RBM/DBNs: Yes

MXNET

  • Software: MXNET
  • Creator: Distributed (Deep) Machine Learning Community
  • Software license: Apache 2.0
  • Open source: Yes
  • Platform: Ubuntu, OS X, Windows, AWS, Android, iOS, JavaScript
  • Written in: C++, Python, Julia, Matlab, R, Scala
  • Interface: C++, Python, Julia, Matlab, JavaScript, R, Scala
  • CUDA support: Yes
  • Automatic differentiation: Yes
  • Has pre-trained models: Yes
  • Recurrent Nets: Yes
  • Convolutional Nets: Yes
  • RBM/DBNs: Yes

Non-symbolic frameworks

PROS:

  • Non-symbolic (imperative) neural network frameworks like Torch, Caffe, etc. tend to have a very similar design in their computation part.
  • In terms of expressiveness, a well-designed imperative framework can also expose a graph-like interface (e.g. torch/nngraph).

CONS:

  • The main drawback of imperative frameworks is that optimization must be done manually. For example, in-place operations have to be implemented by hand.
  • Most imperative frameworks are not designed well enough to match the expressiveness of symbolic frameworks.

Symbolic frameworks

PROS:

  • Symbolic frameworks can infer optimizations automatically from the dependency graph.
  • A symbolic framework can exploit many more memory-reuse opportunities, as MXNET does well.
  • Symbolic frameworks can automatically compute an optimal schedule, as explained in the TensorFlow whitepaper.

CONS:

  • The open-source symbolic frameworks available today are still not good enough to beat imperative frameworks in performance.

Adding New Operations

In all of these frameworks, adding a new Operation with reasonable performance is not easy.

  • Theano / MXNET: Can add an Operation in Python with inline C support.
  • TensorFlow: Forward pass in C++, symbolic gradient in Python.
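As a rough illustration of the Python path in Theano, a toy Op follows the pattern below (adapted from the pattern shown in the Theano documentation; a real Op would add C code or a GPU kernel for performance):

```python
import theano
import theano.tensor as T

class DoubleOp(theano.Op):
    """Toy Op that returns 2 * x, implemented purely in Python."""
    __props__ = ()

    def make_node(self, x):
        x = T.as_tensor_variable(x)
        return theano.Apply(self, [x], [x.type()])

    def perform(self, node, inputs, output_storage):
        x, = inputs
        output_storage[0][0] = x * 2

    def grad(self, inputs, output_grads):
        # Symbolic gradient: d(2x)/dx = 2
        return [output_grads[0] * 2]

x = T.matrix('x')
f = theano.function([x], DoubleOp()(x))
```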

Code Re-usability

Training deep networks is time-consuming. So, Caffe has released pre-trained models/weights (its model zoo) that can be used as initial weights when transfer learning or fine-tuning deep networks on domain-specific or custom images.

  • Theano
    Lasagne is a high-level framework built on top of Theano. It’s very easy to use Caffe pre-trained model weights in Lasagne (see the sketch after this list).
  • TensorFlow
    No support for pre-trained models.
  • MXNET
    MXNET has a caffe_converter tool that converts pre-trained Caffe model weights into MXNET format.
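As an illustration of the Theano/Lasagne route, loading converted Caffe weights typically looks like the sketch below. The `build_model` function, the pickle file name, and the 'param values' key follow the Lasagne Recipes model-zoo convention and are assumptions here:

```python
import pickle
import lasagne

# Hypothetical: build_model() returns a dict of Lasagne layers for, e.g., VGG-16,
# and vgg16.pkl holds Caffe weights converted for the Lasagne model zoo.
net = build_model()
with open('vgg16.pkl', 'rb') as f:
    model = pickle.load(f)

# Copy the pre-trained weights into the freshly built network.
lasagne.layers.set_all_param_values(net['prob'], model['param values'])
```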

Low-level Tensor Operators

A reasonably efficient implementation of low-level operators can serve as an ingredient when writing new models, saving the effort of writing new Operations (a small sketch follows the comparison below).

  • Theano: A lot of basic Operations
  • TensorFlow: Fairly good
  • MXNET: Very few
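For instance, a numerically stable log-softmax can be assembled from basic Theano tensor operators, with no new Operation required:

```python
import theano.tensor as T

def log_softmax(x):
    # Built only from low-level tensor operators: max, exp, sum, log.
    x_shifted = x - x.max(axis=1, keepdims=True)
    return x_shifted - T.log(T.sum(T.exp(x_shifted), axis=1, keepdims=True))
```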

Control Flow Operator

Control flow operators make the symbolic engine more expressive and generic (see the scan sketch after the comparison below).

  • Theano: Supported
  • TensorFlow: Experimental
  • MXNET: Not supported
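Theano's control-flow operator is `scan`; the classic example from the Theano tutorial computes A**k elementwise by looping a multiplication k times:

```python
import theano
import theano.tensor as T

k = T.iscalar('k')
A = T.vector('A')

# scan repeatedly applies fn, feeding the previous result back in as prior_result.
result, updates = theano.scan(
    fn=lambda prior_result, A: prior_result * A,
    outputs_info=T.ones_like(A),
    non_sequences=A,
    n_steps=k)

power = theano.function(inputs=[A, k], outputs=result[-1], updates=updates)
print(power([1., 2., 3.], 3))   # [ 1.  8.  27.]
```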

High-level Support

  • Theano
    A pure symbolic computation framework. High-level frameworks can be built on top of it to fit the desired means of use. Successful examples include Keras, Lasagne, and Blocks.
  • TensorFlow
    Has good design considerations for neural network training, while avoiding being purely a neural network framework, which is a nice balance. Graph collections, queues, image augmenters, etc. can be useful building blocks for a higher-level wrapper.
  • MXNET
    Apart from the symbolic part, MXNET also comes with all the components needed for image classification, from data loading all the way to building a model with a method to start training.

Performance

Benchmarking Using Single-GPU

I benchmarked the LeNet model on the MNIST dataset using a single GPU (an NVIDIA Quadro K1200).

  • Theano: Great
  • TensorFlow: Not so good
  • MXNET: Excellent
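For reference, a LeNet-style CNN for 28x28 MNIST digits can be written in Lasagne roughly as follows; the exact filter counts and training setup used in the benchmark are assumptions here, shown only to fix ideas:

```python
from lasagne.layers import InputLayer, Conv2DLayer, MaxPool2DLayer, DenseLayer
from lasagne.nonlinearities import rectify, softmax

def build_lenet(input_var=None):
    # Two conv/pool stages followed by a fully connected classifier.
    net = InputLayer((None, 1, 28, 28), input_var=input_var)
    net = Conv2DLayer(net, num_filters=20, filter_size=5, nonlinearity=rectify)
    net = MaxPool2DLayer(net, pool_size=2)
    net = Conv2DLayer(net, num_filters=50, filter_size=5, nonlinearity=rectify)
    net = MaxPool2DLayer(net, pool_size=2)
    net = DenseLayer(net, num_units=500, nonlinearity=rectify)
    return DenseLayer(net, num_units=10, nonlinearity=softmax)
```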

Memory

GPU memory is limited and can often be a problem for large models.

  • Theano: Great
  • TensorFlow: Not so good
  • MXNET: Excellent

Single-GPU Speed

Theano takes a long time to compile a graph, especially with complex models. TensorFlow's run-time speed is a bit slower.

  • Theano / MXNET: comparable to cuDNNv4
  • TensorFlow: about 0.5x slower

Parallel/Distributed Support

  • Theano: experimental multi-GPU
  • TensorFlow: multi-GPU
  • MXNET: distributed

Conclusion

Theano (with the higher-level Lasagne and Keras libraries) is a great choice for deep learning models. It’s very easy to implement new networks and modify existing ones using Lasagne/Keras. I prefer Python, and thus prefer Lasagne/Keras for their very mature Python interfaces; however, they do not support R. I have tried transfer learning and fine-tuning in Lasagne/Keras, and it’s very easy to modify an existing network and customize it with domain-specific custom data.
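To give a flavour of that workflow, here is a sketch of feature-extraction-style fine-tuning in Keras: freeze a pre-trained convolutional base and train a new classifier head on domain-specific data. The base network, layer sizes, and class count are assumptions, not the exact setup from Part 2:

```python
from keras.applications.vgg16 import VGG16
from keras.layers import Dense, Flatten
from keras.models import Model

num_classes = 5                          # hypothetical number of domain-specific classes

# Start from ImageNet weights and drop the original classifier head.
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False              # freeze the pre-trained convolutional base

x = Flatten()(base.output)
x = Dense(256, activation='relu')(x)
preds = Dense(num_classes, activation='softmax')(x)

model = Model(base.input, preds)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(...) on the custom images then trains only the new head.
```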

The comparisons above show that MXNET is the best choice in terms of performance and memory use. Moreover, it has great R support; in fact, it is the only framework among these that supports all of this functionality in R. Transfer learning and fine-tuning networks are possible in MXNET, but not as easy as in Lasagne/Keras. This makes modifying existing trained networks more difficult, and thus it is a bit harder to use domain-specific custom data.

Continued in Deep Learning Part 2: Transfer Learning and Fine-tuning Deep Convolutional Neural Networks

Comments


It’s worth noting that H2O is another DL framework as well, but without GPU support for now.
Also, there is a tradeoff between performance and flexibility in DL frameworks.
The blog posts below show an example of native R DL code with GPU backend acceleration.

http://www.parallelr.com/r-deep-neural-network-from-scratch/
http://www.parallelr.com/r-dnn-parallel-acceleration/
http://www.parallelr.com/r-dnn-cuda-multigpu/

Posted by: daisy | August 10, 2016 at 20:18

You are using some old TensorFlow release. It is no longer slow and it supports multi machine training. Operations can also be easily defined in Python and control ops are no longer experimental.

Posted by: Andrew | August 25, 2016 at 23:00

Would you please put your benchmarking code on GitHub and link back here in a comment?

Posted by: Dale Smith | August 26, 2016 at 05:21

Excellent post, Anusua. Very informative. Do you mind if I re-post this along with Part 2 on my platform www.gladwinanalytics.com? It would be greatly useful to tens of thousands of Gladwin Analytics users.

Thanks,
Anandh Shanmugaraj

Posted by: Big Data Jobs | August 27, 2016 at 05:35

Thanks for the comments.

Daisy - I tried to compare open-source frameworks only. I haven't played much with H2O, thanks for posting the links.

Andrew - Ahh! Thanks for pointing that out. I benchmarked TensorFlow some time back; I need to update to the new version.

Dale Smith - The plan is to make all the code available through GitHub. It's a work in progress, and I'll make it available once I have results for the newer versions.

Anandh Shanmugaraj - Feel free to re-post.

Posted by: Anusua Trivedi | August 29, 2016 at 07:12

