FastML

Machine learning made easy

Deep learning architecture diagrams

2016-09-30

Just as a wild stream after a wet season in the African savanna diverges into many smaller streams, forming lakes and puddles, deep learning has diverged into a myriad of specialized architectures. Each architecture has a diagram. Here are some of them.

Neural networks are conceptually simple, and that’s their beauty. A bunch of homogeneous, uniform units, arranged in layers, weighted connections between them, and that’s all. At least in theory. Practice turned out to be a bit different. Instead of feature engineering, we now have architecture engineering, as described by Stephen Merity:

The romanticized description of deep learning usually promises that the days of hand crafted feature engineering are gone - that the models are advanced enough to work this out themselves. Like most advertising, this is simultaneously true and misleading.

Whilst deep learning has simplified feature engineering in many cases, it certainly hasn’t removed it. As feature engineering has decreased, the architectures of the machine learning models themselves have become increasingly more complex. Most of the time, these model architectures are as specific to a given task as feature engineering used to be.

To clarify, this is still an important step. Architecture engineering is more general than feature engineering and provides many new opportunities. Having said that, however, we shouldn’t be oblivious to the fact that where we are is still far from where we intended to be.

Not quite as bad as the doings of architecture astronauts, but not too good either.


An example of an architecture specific to a given task

LSTM diagrams

How to explain those architectures? Naturally, with a diagram. A diagram will make it all crystal clear.

Let’s first inspect the two most popular types of networks these days, CNN and LSTM. You’ve already seen a convnet diagram, so let’s turn to the iconic LSTM:

It’s easy, just take a closer look:

As they say, in mathematics you don’t understand things, you just get used to them.

Fortunately, there are good explanations, for example Understanding LSTM Networks and Written Memories: Understanding, Deriving and Extending the LSTM.
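
If prose is not enough, the math behind the diagram is actually quite short. Here is a minimal numpy sketch of a single LSTM step, following the standard formulation covered in those articles (the weight and gate names are ours, chosen for readability):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM cell.

    x: input vector; h_prev, c_prev: previous hidden and cell state.
    W, U, b: dicts of weight matrices and biases for the input, forget
    and output gates and the candidate update (keys 'i', 'f', 'o', 'g').
    """
    i = sigmoid(W['i'] @ x + U['i'] @ h_prev + b['i'])   # input gate
    f = sigmoid(W['f'] @ x + U['f'] @ h_prev + b['f'])   # forget gate
    o = sigmoid(W['o'] @ x + U['o'] @ h_prev + b['o'])   # output gate
    g = np.tanh(W['g'] @ x + U['g'] @ h_prev + b['g'])   # candidate cell update
    c = f * c_prev + i * g       # new cell state: forget some, write some
    h = o * np.tanh(c)           # new hidden state
    return h, c
```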

LSTM still too complex? Let’s try a simplified version, GRU (Gated Recurrent Unit). Trivial, really.

Especially this one, called minimal GRU.
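
For comparison, here is the same kind of sketch for a standard GRU and for one common formulation of the minimal variant, which merges the update and reset gates into a single gate (again, the weight names are made up for illustration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, W, U, b):
    """One step of a standard GRU: two gates, no separate cell state."""
    z = sigmoid(W['z'] @ x + U['z'] @ h_prev + b['z'])             # update gate
    r = sigmoid(W['r'] @ x + U['r'] @ h_prev + b['r'])             # reset gate
    h_cand = np.tanh(W['h'] @ x + U['h'] @ (r * h_prev) + b['h'])  # candidate state
    return (1.0 - z) * h_prev + z * h_cand

def minimal_gru_step(x, h_prev, W, U, b):
    """One step of a minimal GRU: update and reset merged into one gate."""
    f = sigmoid(W['f'] @ x + U['f'] @ h_prev + b['f'])             # single gate
    h_cand = np.tanh(W['h'] @ x + U['h'] @ (f * h_prev) + b['h'])
    return (1.0 - f) * h_prev + f * h_cand
```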

More diagrams

Various modifications of LSTM are now common. Here’s one, called deep bidirectional LSTM:


DB-LSTM, PDF
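
“Bidirectional” sounds more exotic than it is: the same recurrence runs once left-to-right and once right-to-left, and the two hidden states are concatenated at each time step. A rough, generic sketch, with step_fn standing in for any recurrent cell such as an LSTM or GRU step:

```python
import numpy as np

def bidirectional_layer(xs, step_fn, h0_fwd, h0_bwd):
    """Run a recurrent cell over a sequence in both directions and
    concatenate the two hidden states at each time step.

    xs: list of input vectors; step_fn(x, h) -> new hidden state,
    e.g. a closure around an LSTM or GRU step.
    """
    fwd, h = [], h0_fwd
    for x in xs:                   # left to right
        h = step_fn(x, h)
        fwd.append(h)
    bwd, h = [], h0_bwd
    for x in reversed(xs):         # right to left
        h = step_fn(x, h)
        bwd.append(h)
    bwd.reverse()                  # align with the forward pass
    # stacking several such layers, each reading the previous one's
    # outputs, gives a deep bidirectional RNN
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]
```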

The rest are pretty self-explanatory, too. Let’s start with a combination of CNN and LSTM, since you have both under your belt now:


Convolutional Residual Memory Network, 1606.05262


Dynamic NTM, 1607.00036


Evolvable Neural Turing Machines, PDF


Recurrent Model Of Visual Attention, 1406.6247


Unsupervised Domain Adaptation By Backpropagation, 1409.7495


Deeply Recursive CNN For Image Super-Resolution, 1511.04491

This diagram of a multilayer perceptron with synthetic gradients scores high on clarity:


MLP with synthetic gradients, 1608.05343
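
Synthetic gradients are easier to grasp in code than in a diagram: a small auxiliary model predicts the gradient of the loss with respect to a layer’s output, so that layer can update without waiting for the full backward pass, and the auxiliary model is then trained to match the true gradient once it arrives. A very stripped-down sketch, with one hidden layer and a linear synthetic-gradient module conditioned on activations only (all sizes and names invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out, lr = 4, 8, 2, 0.01

W1 = rng.normal(0, 0.1, (d_hid, d_in))    # first layer
W2 = rng.normal(0, 0.1, (d_out, d_hid))   # output layer
M = np.zeros((d_hid, d_hid))              # linear synthetic-gradient module

def train_step(x, y):
    global W1, W2, M
    h = np.tanh(W1 @ x)                    # forward through the first layer
    # the first layer updates immediately, using the *predicted* gradient
    g_hat = M @ h                          # synthetic dL/dh
    W1 -= lr * np.outer(g_hat * (1 - h ** 2), x)
    # meanwhile the rest of the network computes the true gradient
    y_hat = W2 @ h
    dy = y_hat - y                         # grad of 0.5 * ||y_hat - y||^2
    g_true = W2.T @ dy                     # true dL/dh
    W2 -= lr * np.outer(dy, h)
    # the synthetic-gradient module is trained to match the true gradient
    M -= lr * np.outer(g_hat - g_true, h)

# toy usage: fit random inputs to simple targets
for _ in range(1000):
    x = rng.normal(size=d_in)
    train_step(x, np.array([x[:2].sum(), x[2:].sum()]))
```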

Every day brings more. Here’s a fresh one, again from Google:


Google’s Neural Machine Translation System, 1609.08144

And Now for Something Completely Different

Drawings from the Neural Network Zoo are pleasantly simple, but, unfortunately, serve mostly as eye candy. For example:

  
LSM, ESN and ELM

These look like not-fully-connected perceptrons, but are supposed to represent a Liquid State Machine, an Echo State Network, and an Extreme Learning Machine.

How does LSM differ from ESN? That’s easy, it has green neurons with triangles. But how does ESN differ from ELM? Both have blue neurons.

Seriously, while similar, ESN is a recurrent network and ELM is not. And this kind of thing should probably be visible in an architecture diagram.
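
To make the difference concrete, here is a rough numpy sketch of both, with sizes and weight names invented for illustration. In an ESN the reservoir state feeds back into itself across time steps; in an ELM the random hidden layer is a plain feedforward projection. In both, only the linear readout on top gets trained:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_res = 3, 50

W_in = rng.normal(0, 1.0, (d_res, d_in))     # fixed random input weights
W_res = rng.normal(0, 0.1, (d_res, d_res))   # fixed random recurrent weights
                                             # (kept small so the state stays bounded)

def esn_states(xs):
    """Echo State Network: the reservoir state is recurrent,
    it carries information across time steps."""
    h = np.zeros(d_res)
    states = []
    for x in xs:
        h = np.tanh(W_in @ x + W_res @ h)    # depends on the previous h
        states.append(h)
    return np.array(states)

def elm_features(xs):
    """Extreme Learning Machine: a feedforward random projection,
    each input is mapped independently of the others."""
    return np.tanh(np.array(xs) @ W_in.T)    # no recurrence

# in both cases only a linear readout on top of these features is trained,
# e.g. by least squares: np.linalg.lstsq(features, targets, rcond=None)
```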

Posted by Zygmunt Z. 2016-09-30 in basics, neural-networks
