This copy is purely for my own study. Original article: https://www.jianshu.com/p/dc24e54aec81

This article draws on many blog posts and is intended as a summary of the slim library.

Importing the slim module

import tensorflow.contrib.slim as slim

Defining slim variables

# Model variables
weights = slim.model_variable('weights', shape=[10, 10, 3, 3],
                              initializer=tf.truncated_normal_initializer(stddev=0.1),
                              regularizer=slim.l2_regularizer(0.05),
                              device='/CPU:0')
model_variables = slim.get_model_variables()  # get all model variables

# Regular variables
my_var = slim.variable('my_var', shape=[20, 1],
                       initializer=tf.zeros_initializer())
regular_variables_and_model_variables = slim.get_variables()

# model_variable creates a variable that is saved as a model parameter; variable creates an ordinary variable that is not treated as part of the model.
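
A minimal sketch (the variable names here are my own) showing that the two kinds of variables land in different collections, and that slim.add_model_variable can promote a regular variable to a model variable:

import tensorflow as tf
import tensorflow.contrib.slim as slim

w = slim.model_variable('w', shape=[3, 3])  # registered in MODEL_VARIABLES and GLOBAL_VARIABLES
v = slim.variable('v', shape=[2])           # registered in GLOBAL_VARIABLES only
assert w in slim.get_model_variables()
assert v not in slim.get_model_variables()
assert v in slim.get_variables()

slim.add_model_variable(v)                  # explicitly mark v as a model variable
assert v in slim.get_model_variables()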

Implementing layers in Slim

input = ...
net = slim.conv2d(input, 128, [3, 3], scope='conv1_1')

# Code reuse with repeat:
net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')

# Handling layers with different arguments -- the plain way:
x = slim.fully_connected(x, 32, scope='fc/fc_1')
x = slim.fully_connected(x, 64, scope='fc/fc_2')
x = slim.fully_connected(x, 128, scope='fc/fc_3')
# or
x = slim.stack(x, slim.fully_connected, [32, 64, 128], scope='fc')

# The plain way:
x = slim.conv2d(x, 32, [3, 3], scope='core/core_1')
x = slim.conv2d(x, 32, [1, 1], scope='core/core_2')
x = slim.conv2d(x, 64, [3, 3], scope='core/core_3')
x = slim.conv2d(x, 64, [1, 1], scope='core/core_4')
# The concise way:
x = slim.stack(x, slim.conv2d, [(32, [3, 3]), (32, [1, 1]), (64, [3, 3]), (64, [1, 1])], scope='core')
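
For comparison, a single slim.conv2d call stands in for roughly the following vanilla TensorFlow code (a sketch in the spirit of the TF-Slim README; the shapes here are illustrative):

input = ...
with tf.name_scope('conv1_1') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32, stddev=1e-1),
                         name='weights')
    conv = tf.nn.conv2d(input, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv1 = tf.nn.relu(bias, name=scope)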

Simplifying shared arguments with arg_scope

with slim.arg_scope([slim.conv2d], padding='SAME',
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=slim.l2_regularizer(0.0005)):
    net = slim.conv2d(inputs, 64, [11, 11], scope='conv1')
    net = slim.conv2d(net, 128, [11, 11], padding='VALID', scope='conv2')
    net = slim.conv2d(net, 256, [11, 11], scope='conv3')

# Nesting arg_scopes:
with slim.arg_scope([slim.conv2d, slim.fully_connected],
                    activation_fn=tf.nn.relu,
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=slim.l2_regularizer(0.0005)):
    with slim.arg_scope([slim.conv2d], stride=1, padding='SAME'):
        net = slim.conv2d(inputs, 64, [11, 11], 4, padding='VALID', scope='conv1')
        net = slim.conv2d(net, 256, [5, 5],
                          weights_initializer=tf.truncated_normal_initializer(stddev=0.03),
                          scope='conv2')
        net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc')
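
An arg_scope can also be packaged in a helper function and reused across model definitions, the way the bundled slim nets do (e.g. vgg_arg_scope in slim/nets/vgg.py). A minimal sketch, with the function name being my own choice:

def my_conv_arg_scope(weight_decay=0.0005):
    # Entering the arg_scope yields a dict of the captured defaults,
    # which can later be re-entered with slim.arg_scope(...).
    with slim.arg_scope([slim.conv2d, slim.fully_connected],
                        activation_fn=tf.nn.relu,
                        weights_regularizer=slim.l2_regularizer(weight_decay)) as arg_sc:
        return arg_sc

with slim.arg_scope(my_conv_arg_scope()):
    net = slim.conv2d(inputs, 64, [3, 3], scope='conv1')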

Defining losses

loss = slim.losses.softmax_cross_entropy(predictions, labels)

# A custom multi-task loss:
# Define the loss functions and get the total loss.
classification_loss = slim.losses.softmax_cross_entropy(scene_predictions, scene_labels)
sum_of_squares_loss = slim.losses.sum_of_squares(depth_predictions, depth_labels)
pose_loss = MyCustomLossFunction(pose_predictions, pose_labels)
slim.losses.add_loss(pose_loss)  # Letting TF-Slim know about the additional loss.

# The following two ways to compute the total loss are equivalent:
regularization_loss = tf.add_n(slim.losses.get_regularization_losses())
total_loss1 = classification_loss + sum_of_squares_loss + pose_loss + regularization_loss
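
The comment above promises a second, equivalent way; in the TF-Slim README it is computed from the loss collection, which also includes the custom loss registered via add_loss:

total_loss2 = slim.losses.get_total_loss(add_regularization_losses=True)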

How slim restores and saves models

# Create some variables.
v1 = slim.variable(name='v1', ...)
v2 = slim.variable(name='nested/v2', ...)
...

# Get the list of variables to restore (which contains only 'v2').
variables_to_restore = slim.get_variables_by_name("v2")
# Create the saver which will be used to restore the variables.
restorer = tf.train.Saver(variables_to_restore)
with tf.Session() as sess:
    # Restore variables from disk.
    restorer.restore(sess, "/tmp/model.ckpt")
    print("Model restored.")

# Adding a variable-name prefix for the model:
# Suppose the variables we define in the network are named conv1/weights, while the
# checkpoint loaded from VGG names them vgg16/conv1/weights; a plain load would fail.
def name_in_checkpoint(var):
    return 'vgg16/' + var.op.name

variables_to_restore = slim.get_model_variables()
variables_to_restore = {name_in_checkpoint(var): var for var in variables_to_restore}
restorer = tf.train.Saver(variables_to_restore)
with tf.Session() as sess:
    # Restore variables from disk.
    restorer.restore(sess, "/tmp/model.ckpt")

Training the model

In this example, slim.learning.train computes the loss from train_op and applies the gradient steps. logdir specifies the directory where checkpoints and event files are stored. The number of gradient steps can be capped at any value; here we use 1000 steps. Finally, save_summaries_secs=300 computes summaries every 5 minutes, and save_interval_secs=600 saves a model checkpoint every 10 minutes.

g = tf.Graph()

# Create the model and specify the losses...
...

total_loss = slim.losses.get_total_loss()
optimizer = tf.train.GradientDescentOptimizer(learning_rate)

# create_train_op ensures that each time we ask for the loss, the update_ops
# are run and the gradients being computed are applied too.
train_op = slim.learning.create_train_op(total_loss, optimizer)
logdir = ...  # Where checkpoints are stored.

slim.learning.train(
train_op,
logdir,
number_of_steps=1000,
save_summaries_secs=300,
save_interval_secs=600)

Fine-tuning a model on a different task

Suppose we have a pretrained VGG16 model, trained on the 1000-class ImageNet dataset. We now want to apply it to the Pascal VOC dataset, which has only 20 classes. To do so, we can initialize the new model from the pretrained weights, excluding the final fully connected layers:

# Load the Pascal VOC data.
image, label = MyPascalVocDataLoader(...)
images, labels = tf.train.batch([image, label], batch_size=32)

# Create the model.
predictions = vgg.vgg_16(images)
train_op = slim.learning.create_train_op(...)

# Specify where the model, trained on ImageNet, was saved.
model_path = '/path/to/pre_trained_on_imagenet.checkpoint'

# Specify where the new model will live:
log_dir = '/path/to/my_pascal_model_dir/'

# Restore only the convolutional layers:
variables_to_restore = slim.get_variables_to_restore(exclude=['fc6', 'fc7', 'fc8'])
init_fn = slim.assign_from_checkpoint_fn(model_path, variables_to_restore)

# Start training.
slim.learning.train(train_op, log_dir, init_fn=init_fn)

Evaluation loop

import math

import tensorflow as tf

slim = tf.contrib.slim

# Load the data
images, labels = load_data(...)

# Define the network.
predictions = MyModel(images)

# Choose the metrics to compute:
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
    'accuracy': slim.metrics.accuracy(predictions, labels),
    'precision': slim.metrics.precision(predictions, labels),
    'recall': slim.metrics.recall(mean_relative_errors, 0.3),
})

# Create the summary ops such that they also print out to std output:
summary_ops = []
for metric_name, metric_value in names_to_values.items():
    op = tf.summary.scalar(metric_name, metric_value)
    op = tf.Print(op, [metric_value], metric_name)
    summary_ops.append(op)

num_examples = 10000
batch_size = 32
num_batches = int(math.ceil(num_examples / float(batch_size)))

# Setup the global step.
slim.get_or_create_global_step()

checkpoint_dir = ...  # Where the model checkpoints live.
log_dir = ...  # Where the summaries are stored.
eval_interval_secs = ...  # How often to run the evaluation.
slim.evaluation.evaluation_loop(
    'local',
    checkpoint_dir,
    log_dir,
    num_evals=num_batches,
    eval_op=list(names_to_updates.values()),
    summary_op=tf.summary.merge(summary_ops),
    eval_interval_secs=eval_interval_secs)
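
For a one-off evaluation instead of a loop, slim also provides evaluate_once. A minimal sketch reusing the ops defined above (checkpoint_path, which points at a specific checkpoint file, is assumed):

metric_values = slim.evaluation.evaluate_once(
    'local',                                  # master
    checkpoint_path,                          # e.g. '/tmp/model.ckpt-1000'
    log_dir,
    num_evals=num_batches,
    eval_op=list(names_to_updates.values()),
    final_op=list(names_to_values.values()))  # returns the final metric values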
