[Repost] Tensorflow Guide: Batch Normalization (Batch Normalization in TensorFlow)
Original article:
http://ruishu.io/2016/12/27/batchnorm/
------------------------------------------------------------------------------
Update [11-21-2017]: Please see this code snippet for my current preferred implementation.
I recently made the switch to TensorFlow and am very happy with how easy it was to get things done using this awesome library. TensorFlow has come a long way since I first experimented with it in 2015, and I am happy to be back.
Since I am getting myself re-acquainted with TensorFlow, I decided that I should write a post about how to do batch normalization in TensorFlow. It’s kind of weird that batch normalization still presents such a challenge for new TensorFlow users, especially since TensorFlow comes with invaluable functions like tf.nn.moments, tf.nn.batch_normalization, and even tf.contrib.layers.batch_norm. One would think that using batch normalization in TensorFlow would be a cinch. But alas, confusion still crops up from time to time, and the devil really lies in the details.
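(For reference, the lower-level route goes through tf.nn.moments and tf.nn.batch_normalization directly. The snippet below is only a minimal sketch of that route for a single fully-connected activation at training time; the feature size of 100 and the epsilon are illustrative assumptions, and it deliberately omits the moving averages you would need at test time.)
import tensorflow as tf

x = tf.placeholder('float32', (None, 100), name='x_sketch')
mean, var = tf.nn.moments(x, axes=[0])    # per-feature mini-batch statistics
beta = tf.Variable(tf.zeros([100]))       # learnable shift (center)
gamma = tf.Variable(tf.ones([100]))       # learnable scale
x_bn = tf.nn.batch_normalization(x, mean, var, beta, gamma, variance_epsilon=1e-3)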
Batch Normalization The Easy Way
Perhaps the easiest way to use batch normalization would be to simply use the tf.contrib.layers.batch_norm layer. So let’s give that a go! Let’s get some imports and data loading out of the way first.
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
from utils import show_graph
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
Next, we define our typical fully-connected + batch normalization + nonlinearity set-up:
def dense(x, size, scope):
    return tf.contrib.layers.fully_connected(x, size,
                                             activation_fn=None,
                                             scope=scope)

def dense_batch_relu(x, phase, scope):
    with tf.variable_scope(scope):
        h1 = tf.contrib.layers.fully_connected(x, 100,
                                               activation_fn=None,
                                               scope='dense')
        h2 = tf.contrib.layers.batch_norm(h1,
                                          center=True, scale=True,
                                          is_training=phase,
                                          scope='bn')
        return tf.nn.relu(h2, 'relu')
One thing that might stand out is the phase term. We are going to use it as a placeholder for a boolean which we will insert into feed_dict. It will serve as a binary indicator for whether we are in training (phase=True) or testing (phase=False) mode. Recall that batch normalization has distinct behaviors during training versus test time:
Training
Normalize layer activations according to mini-batch statistics.
During the training step, update population statistics approximation via moving average of mini-batch statistics.
Testing
Normalize layer activations according to estimated population statistics.
Do not update population statistics according to mini-batch statistics from test data (a minimal sketch of both behaviors follows below).
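To make the two modes concrete, here is a minimal NumPy sketch of what a batch normalization layer does in each case. The decay of 0.99 and the epsilon of 1e-3 are illustrative assumptions, not the tf.contrib.layers.batch_norm defaults:
import numpy as np

def batch_norm_sketch(x, gamma, beta, state, training, decay=0.99, eps=1e-3):
    # state holds the running population estimates: {'mean': ..., 'var': ...}
    if training:
        mean, var = x.mean(axis=0), x.var(axis=0)    # mini-batch statistics
        # update the population statistics via a moving average
        state['mean'] = decay * state['mean'] + (1 - decay) * mean
        state['var'] = decay * state['var'] + (1 - decay) * var
    else:
        # test time: use the stored population statistics and leave them untouched
        mean, var = state['mean'], state['var']
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta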
Now we can define our very simple neural network for MNIST classification.
tf.reset_default_graph()
x = tf.placeholder('float32', (None, 784), name='x')
y = tf.placeholder('float32', (None, 10), name='y')
phase = tf.placeholder(tf.bool, name='phase')

h1 = dense_batch_relu(x, phase, 'layer1')
h2 = dense_batch_relu(h1, phase, 'layer2')
logits = dense(h2, 10, 'logits')

with tf.name_scope('accuracy'):
    accuracy = tf.reduce_mean(tf.cast(
        tf.equal(tf.argmax(y, 1), tf.argmax(logits, 1)),
        'float32'))

with tf.name_scope('loss'):
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
Now that we have defined our data and computational graph (a.k.a. model), we can train the model! Here is where we need to notice a very important note in the tf.contrib.layers.batch_norm documentation:
Note: When is_training is True the moving_mean and moving_variance need to be updated, by default the update_ops are placed in tf.GraphKeys.UPDATE_OPS so they need to be added as a dependency to the train_op, example:
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
if update_ops:
    updates = tf.group(*update_ops)
    total_loss = control_flow_ops.with_dependencies([updates], total_loss)
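(If you want to verify that the batch_norm layers actually registered their moving-average updates, a quick sketch is to print the collection after building the graph; the exact op names depend on the variable scopes you used.)
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
for op in update_ops:
    print(op.name)    # the assign ops that update moving_mean / moving_variance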
If you are comfortable with TensorFlow’s underlying graph/ops mechanism, the note is fairly straight-forward. If not, here’s a simple way to think of it: when you execute an operation (such as train_step), only the subgraph components relevant to train_step will be executed. Unfortunately, the update_moving_averages operation is not a parent of train_step in the computational graph, so we will never update the moving averages! To get around this, we have to explicitly tell the graph:
Hey graph, update the moving averages before you finish the training step!
Unfortunately, the instructions in the documentation are a little out of date. Furthermore, if you think about it a little more, you may conclude that attaching the update ops to total_loss may not be desirable if you wish to compute the total_loss of the test set during test time. Personally, I think it makes more sense to attach the update ops to the train_step itself. So I modified the code a little and created the following training function
def train():
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        # Ensures that we execute the update_ops before performing the train_step
        train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
    sess = tf.Session()
    sess.run(tf.global_variables_initializer())
    history = []
    iterep = 500
    for i in range(iterep * 30):
        x_train, y_train = mnist.train.next_batch(100)
        sess.run(train_step,
                 feed_dict={'x:0': x_train,
                            'y:0': y_train,
                            'phase:0': 1})
        if (i + 1) % iterep == 0:
            epoch = (i + 1) / iterep
            tr = sess.run([loss, accuracy],
                          feed_dict={'x:0': mnist.train.images,
                                     'y:0': mnist.train.labels,
                                     'phase:0': 1})
            t = sess.run([loss, accuracy],
                         feed_dict={'x:0': mnist.test.images,
                                    'y:0': mnist.test.labels,
                                    'phase:0': 0})
            history += [[epoch] + tr + t]
            print(history[-1])
    return history
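Each row of history is [epoch, train_loss, train_accuracy, test_loss, test_accuracy], so a quick way to look at the curves is something like the following (a usage sketch only; it assumes matplotlib is available, which the article does not import):
import numpy as np
import matplotlib.pyplot as plt

history = np.array(train())    # columns: epoch, train loss, train acc, test loss, test acc
plt.plot(history[:, 0], history[:, 2], label='train accuracy')
plt.plot(history[:, 0], history[:, 4], label='test accuracy')
plt.xlabel('epoch')
plt.legend()
plt.show()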
And we’re done! We can now train our model and see what happens. Below, I provide a comparison of the model without batch normalization, the model with pre-activation batch normalization, and the model with post-activation batch normalization.
[Figure: training comparison of the three models (no batch normalization, pre-activation batch normalization, post-activation batch normalization).]
As you can see, batch normalization really does help with training (not always, but it certainly did in this simple example).
Additional Remarks
You have the choice of applying batch normalization either before or after the non-linearity, depending on your definition of the “activation distribution of interest” that you wish to normalize. It will probably end up being a hyperparameter that you’ll just have to tinker with.
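For concreteness, the two orderings differ only in where the batch_norm call sits relative to the ReLU. Here is a sketch reusing the dense helper defined earlier (the hidden size of 100 matches the example above); dense_batch_relu above is the pre-activation variant, and the appendix code below uses the post-activation arrangement:
def dense_bn_relu(x, phase, scope):
    # pre-activation: dense -> batch norm -> ReLU
    with tf.variable_scope(scope):
        h = dense(x, 100, 'dense')
        h = tf.contrib.layers.batch_norm(h, center=True, scale=True,
                                         is_training=phase, scope='bn')
        return tf.nn.relu(h, 'relu')

def dense_relu_bn(x, phase, scope):
    # post-activation: dense -> ReLU -> batch norm
    with tf.variable_scope(scope):
        h = tf.nn.relu(dense(x, 100, 'dense'), 'relu')
        return tf.contrib.layers.batch_norm(h, center=True, scale=True,
                                            is_training=phase, scope='bn')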
There is also the question of what it means to share the same batch normalization layer across multiple models. I think this question should be treated with some care, and it really depends on what you think you’ll gain out of sharing the batch normalization layer. In particular, we should be careful about sharing the mini-batch statistics across multiple data streams if we expect the data streams to have distinct distributions, as is the case when using batch normalization in recurrent neural networks. At the moment, the tf.contrib.layers.batch_norm function does not allow this level of control.
But I do!… Kind of. It’s a fairly short piece of code, so it should be easy to modify it to fit your own purposes. *runs away*
-----------------------------------------------------------------------------
Appendix (code using batch_normalization):
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

def dense(x, size, scope):
    return tf.contrib.layers.fully_connected(x, size,
                                             activation_fn=None, scope=scope)

def dense_relu(x, size, scope):
    with tf.variable_scope(scope):
        h1 = dense(x, size, 'dense')
        return tf.nn.relu(h1, 'relu')

tf.reset_default_graph()
x = tf.placeholder('float32', (None, 784), name='x')
y = tf.placeholder('float32', (None, 10), name='y')
phase = tf.placeholder(tf.bool, name='phase')

h1 = dense_relu(x, 100, 'layer1')
h1 = tf.contrib.layers.batch_norm(h1,
                                  center=True, scale=True,
                                  is_training=phase,
                                  scope='bn_1')

h2 = dense_relu(h1, 100, 'layer2')
h2 = tf.contrib.layers.batch_norm(h2,
                                  center=True, scale=True,
                                  is_training=phase,
                                  scope='bn_2')

logits = dense(h2, 10, scope='logits')

with tf.name_scope('accuracy'):
    accuracy = tf.reduce_mean(tf.cast(
        tf.equal(tf.argmax(y, 1), tf.argmax(logits, 1)),
        'float32'))

with tf.name_scope('loss'):
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))

def train():
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
    sess = tf.Session()
    sess.run(tf.global_variables_initializer())
    history = []
    iterep = 500
    for i in range(iterep * 30):
        x_train, y_train = mnist.train.next_batch(100)
        sess.run(train_step,
                 feed_dict={'x:0': x_train,
                            'y:0': y_train,
                            'phase:0': 1})
        if (i + 1) % iterep == 0:
            epoch = (i + 1) / iterep
            tr = sess.run([loss, accuracy],
                          feed_dict={'x:0': mnist.train.images,
                                     'y:0': mnist.train.labels,
                                     'phase:0': 1})
            t = sess.run([loss, accuracy],
                         feed_dict={'x:0': mnist.test.images,
                                    'y:0': mnist.test.labels,
                                    'phase:0': 0})
            history += [[epoch] + tr + t]
            print(history[-1])
    return history

train()