TF 2.0 defaults to eager execution (dynamic graph). This means TF can now, like PyTorch, print intermediate values without running them inside a session. Dynamic and static graphs do differ, though, so TF 2.0 also brings changes in how code is written. One gripe: TF 2.0 still starts up much more slowly than PyTorch.
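
A minimal sketch of what eager execution means in practice: ops run immediately and return concrete values, no session required (the tensor values here are just illustrative):

```python
import tensorflow as tf

# Ops execute immediately and return concrete values.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.matmul(a, a)
print(b)           # a tf.Tensor with actual numbers, no Session.run() needed
print(b.numpy())   # and it converts straight to a NumPy array
```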

Operations are recorded on a tape (GradientTape)

This is a key change. In the TF 0.x to TF 1.x era, operations were added to a Graph. Now, operations are recorded by a gradient tape; all we have to do is make the forward pass and the loss computation happen inside the tape's context manager.

```python
with tf.GradientTape() as tape:
    logits = mnist_model(images, training=True)
    # loss_value must be computed inside the tape context
    loss_value = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
```

Note that tape.gradient here computes the derivatives of the loss with respect to the model's parameters. In earlier versions we either used the optimizer's minimize method or called tf.gradients to compute derivatives; in eager mode, tf.gradients cannot be used.
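
A minimal, self-contained sketch of tape.gradient on a scalar function (the variable and its value are just for illustration):

```python
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x  # recorded on the tape

# dy/dx = 2x = 6.0
print(tape.gradient(y, x))  # tf.Tensor(6.0, shape=(), dtype=float32)
```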

```python
# coding: utf-8

# PyTorch: loss.backward() and optimizer.step() compute gradients and update parameters;
# TF 2.0:  grads = tape.gradient() and optimizer.apply_gradients() do the same!
# Reference: https://github.com/aymericdamien/TensorFlow-Examples/blob/master/tensorflow_v2/notebooks/3_NeuralNetworks/convolutional_network.ipynb
from __future__ import absolute_import, division, print_function

import tensorflow as tf
from tensorflow.keras import Model, layers
import numpy as np

# MNIST dataset parameters.
num_classes = 10  # total classes (0-9 digits).

# Training parameters.
learning_rate = 0.001
training_steps = 200
batch_size = 128
display_step = 10

# Network parameters.
conv1_filters = 32  # number of filters for 1st conv layer.
conv2_filters = 64  # number of filters for 2nd conv layer.
fc1_units = 1024    # number of neurons for 1st fully-connected layer.

# Prepare MNIST data.
from tensorflow.keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Convert to float32.
x_train, x_test = np.array(x_train, np.float32), np.array(x_test, np.float32)
# Normalize image values from [0, 255] to [0, 1].
x_train, x_test = x_train / 255., x_test / 255.

# Use the tf.data API to shuffle and batch data.
train_data = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_data = train_data.repeat().shuffle(5000).batch(batch_size).prefetch(1)

# Create TF Model.
class ConvNet(Model):
    # Set layers.
    def __init__(self):
        super(ConvNet, self).__init__()
        # Convolution layer with 32 filters and a kernel size of 5.
        self.conv1 = layers.Conv2D(conv1_filters, kernel_size=5, activation=tf.nn.relu)
        # Max pooling (down-sampling) with kernel size of 2 and strides of 2.
        self.maxpool1 = layers.MaxPool2D(2, strides=2)
        # Convolution layer with 64 filters and a kernel size of 3.
        self.conv2 = layers.Conv2D(conv2_filters, kernel_size=3, activation=tf.nn.relu)
        # Max pooling (down-sampling) with kernel size of 2 and strides of 2.
        self.maxpool2 = layers.MaxPool2D(2, strides=2)
        # Flatten the data to a 1-D vector for the fully connected layer.
        self.flatten = layers.Flatten()
        # Fully connected layer.
        self.fc1 = layers.Dense(fc1_units)
        # Apply dropout (if is_training is False, dropout is not applied).
        self.dropout = layers.Dropout(rate=0.5)
        # Output layer, class prediction.
        self.out = layers.Dense(num_classes)

    # Set forward pass.
    def call(self, x, is_training=False):
        x = tf.reshape(x, [-1, 28, 28, 1])
        x = self.conv1(x)
        x = self.maxpool1(x)
        x = self.conv2(x)
        x = self.maxpool2(x)
        x = self.flatten(x)
        x = self.fc1(x)
        x = self.dropout(x, training=is_training)
        x = self.out(x)
        if not is_training:
            # tf cross-entropy expects logits without softmax, so only
            # apply softmax when not training.
            x = tf.nn.softmax(x)
        return x

# Build the neural network model.
conv_net = ConvNet()

# Cross-entropy loss.
# Note that this will apply 'softmax' to the logits.
def cross_entropy_loss(x, y):
    # Convert labels to int64 for the tf cross-entropy function.
    y = tf.cast(y, tf.int64)
    # Apply softmax to logits and compute cross-entropy.
    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=x)
    # Average loss across the batch.
    return tf.reduce_mean(loss)

# Accuracy metric.
def accuracy(y_pred, y_true):
    # Predicted class is the index of the highest score in the prediction vector (i.e. argmax).
    correct_prediction = tf.equal(tf.argmax(y_pred, 1), tf.cast(y_true, tf.int64))
    return tf.reduce_mean(tf.cast(correct_prediction, tf.float32), axis=-1)

# Adam optimizer.
optimizer = tf.optimizers.Adam(learning_rate)

# Optimization process.
def run_optimization(x, y):
    # Wrap the computation inside a GradientTape for automatic differentiation.
    with tf.GradientTape() as g:
        # Forward pass.
        pred = conv_net(x, is_training=True)
        # Compute loss.
        loss = cross_entropy_loss(pred, y)

    # Variables to update, i.e. trainable variables.
    trainable_variables = conv_net.trainable_variables
    # Compute gradients.
    gradients = g.gradient(loss, trainable_variables)
    # Update W and b following the gradients.
    optimizer.apply_gradients(zip(gradients, trainable_variables))

# Run training for the given number of steps.
for step, (batch_x, batch_y) in enumerate(train_data.take(training_steps), 1):
    # Run the optimization to update W and b values.
    run_optimization(batch_x, batch_y)

    if step % display_step == 0:
        pred = conv_net(batch_x)
        loss = cross_entropy_loss(pred, batch_y)
        acc = accuracy(pred, batch_y)
        print("step: %i, loss: %f, accuracy: %f" % (step, loss, acc))

# Test the model on the validation set.
pred = conv_net(x_test)
print("Test Accuracy: %f" % accuracy(pred, y_test))
```

Notes:

- TF 2.0 defaults to eager execution; there is no Session anymore;

- Note the use of `for step, (batch_x, batch_y) in enumerate(train_data.take(training_steps), 1):` in the code (see the first sketch below);

- In PyCharm, note that `from tensorflow.keras import Model, layers` cannot be stepped into to inspect the internal implementation; write network structures in an object-oriented style, implementing functions such as `__init__`, `build`, and `call` (see the second sketch below);
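
First, a small sketch (with a made-up toy dataset) of what `take()` plus `enumerate(..., 1)` does: `take(n)` limits the endlessly repeated dataset to n batches, and the start argument of 1 makes `step` begin at 1 so that `step % display_step` behaves as expected:

```python
import tensorflow as tf

ds = tf.data.Dataset.range(5).repeat()    # an endless stream: 0,1,2,3,4,0,1,...
for step, x in enumerate(ds.take(3), 1):  # take 3 elements, count steps from 1
    print(step, x.numpy())
# 1 0
# 2 1
# 3 2
```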

 
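Second, a minimal sketch of the `__init__` / `build` / `call` subclassing pattern mentioned above, shown here as a hypothetical custom layer (the class and variable names are illustrative): `__init__` stores configuration, `build` creates weights lazily once the input shape is known, and `call` defines the forward pass:

```python
import tensorflow as tf
from tensorflow.keras import layers

class MyDense(layers.Layer):
    def __init__(self, units):
        super(MyDense, self).__init__()
        self.units = units  # store configuration only; no weights yet

    def build(self, input_shape):
        # Called once, on first use, when the input shape is known.
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer="random_normal", trainable=True)
        self.b = self.add_weight(shape=(self.units,),
                                 initializer="zeros", trainable=True)

    def call(self, inputs):
        # Forward pass.
        return tf.matmul(inputs, self.w) + self.b
```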

Reference:

- https://github.com/aymericdamien/TensorFlow-Examples/blob/master/tensorflow_v2/notebooks/3_NeuralNetworks/convolutional_network.ipynb
