[Repost] TensorFlow Guide: Batch Normalization (Batch Normalization in TensorFlow)
Original article:
http://ruishu.io/2016/12/27/batchnorm/
------------------------------------------------------------------------------
Update [11-21-2017]: Please see this code snippet for my current preferred implementation.
I recently made the switch to TensorFlow and am very happy with how easy it was to get things done using this awesome library. TensorFlow has come a long way since I first experimented with it in 2015, and I am happy to be back.
Since I am getting myself re-acquainted with TensorFlow, I decided that I should write a post about how to do batch normalization in TensorFlow. It’s kind of weird that batch normalization still presents such a challenge for new TensorFlow users, especially since TensorFlow comes with invaluable functions like tf.nn.moments, tf.nn.batch_normalization, and even tf.contrib.layers.batch_norm. One would think that using batch normalization in TensorFlow would be a cinch. But alas, confusion still crops up from time to time, and the devil really lies in the details.
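For orientation, here is a minimal sketch of what the two lower-level primitives do on a batch of 2-D activations. This is my own illustration, not a complete layer: h, beta, and gamma are assumed to be an activation tensor and learnable shift/scale variables created elsewhere.

# Per-feature mean/variance over the batch dimension, then the normalized, affinely-transformed output.
mean, variance = tf.nn.moments(h, axes=[0])
h_bn = tf.nn.batch_normalization(h, mean, variance,
                                 offset=beta, scale=gamma,
                                 variance_epsilon=1e-3)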
Batch Normalization The Easy Way
Perhaps the easiest way to use batch normalization would be to simply use the tf.contrib.layers.batch_norm layer. So let’s give that a go! Let’s get some imports and data loading out of the way first.
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
from utils import show_graph
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
Next, we define our typical fully-connected + batch normalization + nonlinearity set-up:
def dense(x, size, scope):
    return tf.contrib.layers.fully_connected(x, size,
                                             activation_fn=None,
                                             scope=scope)

def dense_batch_relu(x, phase, scope):
    with tf.variable_scope(scope):
        h1 = tf.contrib.layers.fully_connected(x, 100,
                                               activation_fn=None,
                                               scope='dense')
        h2 = tf.contrib.layers.batch_norm(h1,
                                          center=True, scale=True,
                                          is_training=phase,
                                          scope='bn')
        return tf.nn.relu(h2, 'relu')
One thing that might stand out is the phase term. We are going to use it as a placeholder for a boolean, which we will insert into feed_dict. It will serve as a binary indicator for whether we are in training (phase=True) or testing (phase=False) mode. Recall that batch normalization has distinct behaviors during training versus test time (a sketch illustrating this split follows the list below):
Training
Normalize layer activations according to mini-batch statistics.
During the training step, update population statistics approximation via moving average of mini-batch statistics.
Testing
Normalize layer activations according to estimated population statistics.
Do not update population statistics according to mini-batch statistics from test data.
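To make the two modes concrete, here is a rough sketch of how that split is typically implemented by hand in TensorFlow 1.x. This is an illustration only, not the internals of tf.contrib.layers.batch_norm; the function name, decay, and epsilon values are assumptions for the example, and a 1-D feature axis is assumed.

def manual_batch_norm(x, phase, scope, decay=0.99, eps=1e-3):
    with tf.variable_scope(scope):
        size = x.get_shape().as_list()[-1]
        beta = tf.get_variable('beta', [size], initializer=tf.zeros_initializer())
        gamma = tf.get_variable('gamma', [size], initializer=tf.ones_initializer())
        pop_mean = tf.get_variable('pop_mean', [size],
                                   initializer=tf.zeros_initializer(), trainable=False)
        pop_var = tf.get_variable('pop_var', [size],
                                  initializer=tf.ones_initializer(), trainable=False)

        def train_branch():
            # Normalize with mini-batch statistics and update the population estimates.
            batch_mean, batch_var = tf.nn.moments(x, axes=[0])
            update_mean = tf.assign(pop_mean, decay * pop_mean + (1 - decay) * batch_mean)
            update_var = tf.assign(pop_var, decay * pop_var + (1 - decay) * batch_var)
            with tf.control_dependencies([update_mean, update_var]):
                return tf.nn.batch_normalization(x, batch_mean, batch_var, beta, gamma, eps)

        def test_branch():
            # Normalize with the frozen population statistics; no updates.
            return tf.nn.batch_normalization(x, pop_mean, pop_var, beta, gamma, eps)

        return tf.cond(phase, train_branch, test_branch)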
Now we can define our very simple neural network for MNIST classification.
tf.reset_default_graph()
x = tf.placeholder('float32', (None, 784), name='x')
y = tf.placeholder('float32', (None, 10), name='y')
phase = tf.placeholder(tf.bool, name='phase')

h1 = dense_batch_relu(x, phase, 'layer1')
h2 = dense_batch_relu(h1, phase, 'layer2')
logits = dense(h2, 10, 'logits')

with tf.name_scope('accuracy'):
    accuracy = tf.reduce_mean(tf.cast(
        tf.equal(tf.argmax(y, 1), tf.argmax(logits, 1)),
        'float32'))

with tf.name_scope('loss'):
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
Now that we have defined our data and computational graph (a.k.a. model), we can train the model! Here is where we need to notice a very important note in the tf.contrib.layers.batch_norm documentation:
Note: When is_training is True the moving_mean and moving_variance need to be updated, by default the update_ops are placed in tf.GraphKeys.UPDATE_OPS so they need to be added as a dependency to the train_op, example:
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
if update_ops:
    updates = tf.group(*update_ops)
    total_loss = control_flow_ops.with_dependencies([updates], total_loss)
If you are comfortable with TensorFlow’s underlying graph/ops mechanism, the note is fairly straight-forward. If not, here’s a simple way to think of it: when you execute an operation (such as train_step), only the subgraph components relevant to train_step will be executed. Unfortunately, the update_moving_averages operation is not a parent of train_step in the computational graph, so we will never update the moving averages! To get around this, we have to explicitly tell the graph:
Hey graph, update the moving averages before you finish the training step!
Unfortunately, the instructions in the documentation are a little out of date. Furthermore, if you think about it a little more, you may conclude that attaching the update ops to total_loss may not be desirable if you wish to compute the total_loss of the test set during test time. Personally, I think it makes more sense to attach the update ops to the train_step itself. So I modified the code a little and created the following training function:
def train():
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        # Ensures that we execute the update_ops before performing the train_step
        train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
    sess = tf.Session()
    sess.run(tf.global_variables_initializer())

    history = []
    iterep = 500
    for i in range(iterep * 30):
        x_train, y_train = mnist.train.next_batch(100)
        sess.run(train_step,
                 feed_dict={'x:0': x_train,
                            'y:0': y_train,
                            'phase:0': 1})
        if (i + 1) % iterep == 0:
            epoch = (i + 1) / iterep
            tr = sess.run([loss, accuracy],
                          feed_dict={'x:0': mnist.train.images,
                                     'y:0': mnist.train.labels,
                                     'phase:0': 1})
            t = sess.run([loss, accuracy],
                         feed_dict={'x:0': mnist.test.images,
                                    'y:0': mnist.test.labels,
                                    'phase:0': 0})
            history += [[epoch] + tr + t]
            print(history[-1])
    return history
And we’re done! We can now train our model and see what happens. Below, I provide a comparison of the model without batch normalization, the model with pre-activation batch normalization, and the model with post-activation batch normalization.
[Figure: training and test curves comparing no batch normalization, pre-activation batch normalization, and post-activation batch normalization]
As you can see, batch normalization really does help with training (not always, but it certainly did in this simple example).
Additional Remarks
You have the choice of applying batch normalization either before or after the non-linearity, depending on your definition of the “activation distribution of interest” that you wish to normalize. It will probably end up being a hyperparameter that you’ll just have to tinker with.
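As an illustration of the two orderings, here is a sketch written against the helpers defined earlier in this post; the function names are my own, not from the original code.

# Pre-activation batch norm: dense -> batch norm -> relu (what dense_batch_relu above does).
def dense_bn_relu(x, phase, scope):
    with tf.variable_scope(scope):
        h = dense(x, 100, 'dense')
        h = tf.contrib.layers.batch_norm(h, center=True, scale=True,
                                         is_training=phase, scope='bn')
        return tf.nn.relu(h, 'relu')

# Post-activation batch norm: dense -> relu -> batch norm (what the appendix code below does).
def dense_relu_bn(x, phase, scope):
    with tf.variable_scope(scope):
        h = tf.nn.relu(dense(x, 100, 'dense'), 'relu')
        return tf.contrib.layers.batch_norm(h, center=True, scale=True,
                                            is_training=phase, scope='bn')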
There is also the question of what it means to share the same batch normalization layer across multiple models. I think this question should be treated with some care, and it really depends on what you think you’ll gain out of sharing the batch normalization layer. In particular, we should be careful about sharing the mini-batch statistics across multiple data streams if we expect the data streams to have distinct distributions, as is the case when using batch normalization in recurrent neural networks. At the moment, the tf.contrib.layers.batch_norm function does not allow this level of control.
But I do!… Kind of. It’s a fairly short piece of code, so it should be easy to modify it to fit your own purposes. *runs away*
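For what it’s worth, here is a rough sketch of what “sharing” a batch norm layer across two data streams looks like with tf.contrib.layers.batch_norm and variable reuse; x_a and x_b are hypothetical input tensors and the scope names are my own. Note that the moving mean/variance are shared along with beta/gamma, which is precisely the behavior you may not want when the streams have different distributions.

# Hypothetical example of sharing a batch norm layer via variable reuse.
# Both calls use the same beta/gamma AND the same moving statistics, so the
# population estimates get updated from both streams' mini-batches.
def shared_bn(x, phase, reuse):
    with tf.variable_scope('shared_bn_layer', reuse=reuse):
        return tf.contrib.layers.batch_norm(x, center=True, scale=True,
                                            is_training=phase, scope='bn')

stream_a = shared_bn(x_a, phase, reuse=False)  # creates the variables
stream_b = shared_bn(x_b, phase, reuse=True)   # reuses them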
-----------------------------------------------------------------------------
Appendix (the code using batch_normalization):
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

def dense(x, size, scope):
    return tf.contrib.layers.fully_connected(x, size,
                                             activation_fn=None, scope=scope)

def dense_relu(x, size, scope):
    with tf.variable_scope(scope):
        h1 = dense(x, size, 'dense')
        return tf.nn.relu(h1, 'relu')

tf.reset_default_graph()
x = tf.placeholder('float32', (None, 784), name='x')
y = tf.placeholder('float32', (None, 10), name='y')
phase = tf.placeholder(tf.bool, name='phase')

h1 = dense_relu(x, 100, 'layer1')
h1 = tf.contrib.layers.batch_norm(h1,
                                  center=True, scale=True,
                                  is_training=phase,
                                  scope='bn_1')

h2 = dense_relu(h1, 100, 'layer2')
h2 = tf.contrib.layers.batch_norm(h2,
                                  center=True, scale=True,
                                  is_training=phase,
                                  scope='bn_2')

logits = dense(h2, 10, scope='logits')

with tf.name_scope('accuracy'):
    accuracy = tf.reduce_mean(tf.cast(
        tf.equal(tf.argmax(y, 1), tf.argmax(logits, 1)),
        'float32'))

with tf.name_scope('loss'):
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))

def train():
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
    sess = tf.Session()
    sess.run(tf.global_variables_initializer())

    history = []
    iterep = 500
    for i in range(iterep * 30):
        x_train, y_train = mnist.train.next_batch(100)
        sess.run(train_step,
                 feed_dict={'x:0': x_train,
                            'y:0': y_train,
                            'phase:0': 1})
        if (i + 1) % iterep == 0:
            epoch = (i + 1) / iterep
            tr = sess.run([loss, accuracy],
                          feed_dict={'x:0': mnist.train.images,
                                     'y:0': mnist.train.labels,
                                     'phase:0': 1})
            t = sess.run([loss, accuracy],
                         feed_dict={'x:0': mnist.test.images,
                                    'y:0': mnist.test.labels,
                                    'phase:0': 0})
            history += [[epoch] + tr + t]
            print(history[-1])
    return history

train()