Usage notes: TF helper tool -- tensorflow slim (TF-Slim)
Setting aside Keras, TensorLayer, and TFLearn, can you still write concise code in plain tensorflow? You can! The slim module was released in 2016, and its main purpose is so-called "code slimming": cutting down boilerplate.
1. Introduction
slim lives under tensorflow.contrib and is imported as follows:
import tensorflow.contrib.slim as slim
As everyone knows, tensorflow.contrib is officially described like this: none of the code in this directory is officially supported, and it may change or be removed at any time; each subdirectory has designated owners; it is meant for additional functionality and contributions that may eventually be merged into core TensorFlow, but whose interfaces may still change or which need more testing to see whether they gain wider acceptance. So slim is still not part of native tensorflow.
slim is a library that makes building, training, and evaluating neural networks simple. It eliminates much of the repetitive boilerplate of raw tensorflow, making code more compact and more readable. It also provides many well-known computer-vision models (VGG, AlexNet, and so on) that we can not only use directly but also extend in all sorts of ways.
slim's submodules and what they do:
arg_scope: provides a new scope named arg_scope that allows a user to define default arguments for specific operations within that scope.
On top of the basic name_scope and variable_scope, slim adds arg_scope, which controls the default hyperparameters of each layer (covered in detail below).
data: contains TF-slim's dataset definition, data providers, parallel_reader, and decoding utilities.
slim apparently also carries its own dataset-definition machinery; we skip it here since we rarely use it.
evaluation: contains routines for evaluating models.
Routines for evaluating models; also not used much.
layers: contains high level layers for building models using tensorflow.
This one matters: it is the core and essence of slim, with definitions for the more complex layers.
learning: contains routines for training models.
Routines for training models.
losses: contains commonly used loss functions.
Common loss functions.
metrics: contains popular evaluation metrics.
Metrics for evaluating models (see the sketch after this list).
nets: contains popular network definitions such as VGG and AlexNet models.
Classic network definitions, VGG and the like; used quite a lot.
queues: provides a context manager for easily and safely starting and closing QueueRunners.
Manages input queues (QueueRunners); quite useful.
regularizers: contains weight regularizers.
Weight regularizers.
variables: provides convenience wrappers for variable creation and manipulation.
slim's mechanism for managing variables.
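As a quick taste of the metrics submodule, here is a minimal sketch modeled on the TF-Slim README; predictions and labels stand in for tensors from your own graph, and the exact streaming_* names should be checked against your TF version:

# Each metric is a (value_op, update_op) pair; aggregate_metric_map groups them.
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
    'eval/mean_absolute_error': slim.metrics.streaming_mean_absolute_error(predictions, labels),
    'eval/mean_squared_error': slim.metrics.streaming_mean_squared_error(predictions, labels),
})
# Running the update ops over batches accumulates streaming statistics;
# evaluating the value ops afterwards yields the final metric values.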
2. Defining models with slim
An example of defining variables in slim:
# Model Variables
weights = slim.model_variable('weights',
                              shape=[10, 10, 3, 3],
                              initializer=tf.truncated_normal_initializer(stddev=0.1),
                              regularizer=slim.l2_regularizer(0.05),
                              device='/CPU:0')
model_variables = slim.get_model_variables()

# Regular variables
my_var = slim.variable('my_var',
                       shape=[20, 1],
                       initializer=tf.zeros_initializer())
regular_variables_and_model_variables = slim.get_variables()
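One detail from the TF-Slim README worth repeating: a variable created outside slim can still be registered as a model variable. CreateViaCustomCode below is a hypothetical helper standing in for your own variable-creation code:

my_model_variable = CreateViaCustomCode()  # hypothetical: returns a tf.Variable
# Letting TF-Slim know about the additional variable.
slim.add_model_variable(my_model_variable)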
Implementing a layer in slim: first, let's look at how a layer, say a convolutional layer, is written in raw tensorflow:
input = ...
with tf.name_scope('conv1_1') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32, stddev=1e-1),
                         name='weights')
    conv = tf.nn.conv2d(input, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv1 = tf.nn.relu(bias, name=scope)

In slim, the same layer is a single call:

input = ...
net = slim.conv2d(input, 128, [3, 3], scope='conv1_1')

slim.repeat goes further and collapses a run of identical layers. The following:

net = ...
net = slim.conv2d(net, 256, [3, 3], scope='conv3_1')
net = slim.conv2d(net, 256, [3, 3], scope='conv3_2')
net = slim.conv2d(net, 256, [3, 3], scope='conv3_3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')

is equivalent to:

net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')

slim.repeat applies the same op three times and unrolls the scopes automatically, naming them conv3/conv3_1 through conv3/conv3_3. Suppose we define three fully connected layers:
# Verbose way:
x = slim.fully_connected(x, 32, scope='fc/fc_1')
x = slim.fully_connected(x, 64, scope='fc/fc_2')
x = slim.fully_connected(x, 128, scope='fc/fc_3')

# Equivalent, using slim.stack:
x = slim.stack(x, slim.fully_connected, [32, 64, 128], scope='fc')

slim.stack handles layers whose arguments vary from call to call, and it works for convolutions too:

# Verbose way:
x = slim.conv2d(x, 32, [3, 3], scope='core/core_1')
x = slim.conv2d(x, 32, [1, 1], scope='core/core_2')
x = slim.conv2d(x, 64, [3, 3], scope='core/core_3')
x = slim.conv2d(x, 64, [1, 1], scope='core/core_4')

# Equivalent, using slim.stack:
x = slim.stack(x, slim.conv2d, [(32, [3, 3]), (32, [1, 1]), (64, [3, 3]), (64, [1, 1])], scope='core')

arg_scope in slim:
If your network has many layers that take the same arguments, like this:
net = slim.conv2d(inputs, 64, [11, 11], 4, padding='SAME',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv1')
net = slim.conv2d(net, 128, [11, 11], padding='VALID',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv2')
net = slim.conv2d(net, 256, [11, 11], padding='SAME',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv3')

then an arg_scope lets you declare those defaults once:

with slim.arg_scope([slim.conv2d], padding='SAME',
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=slim.l2_regularizer(0.0005)):
    net = slim.conv2d(inputs, 64, [11, 11], scope='conv1')
    net = slim.conv2d(net, 128, [11, 11], padding='VALID', scope='conv2')
    net = slim.conv2d(net, 256, [11, 11], scope='conv3')

arg_scopes can also be nested, and an argument passed to a layer directly still overrides the scope default:

with slim.arg_scope([slim.conv2d, slim.fully_connected],
                    activation_fn=tf.nn.relu,
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=slim.l2_regularizer(0.0005)):
    with slim.arg_scope([slim.conv2d], stride=1, padding='SAME'):
        net = slim.conv2d(inputs, 64, [11, 11], 4, padding='VALID', scope='conv1')
        net = slim.conv2d(net, 256, [5, 5],
                          weights_initializer=tf.truncated_normal_initializer(stddev=0.03),
                          scope='conv2')
        net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc')

Putting all of this together, VGG-16:
def vgg16(inputs):
    with slim.arg_scope([slim.conv2d, slim.fully_connected],
                        activation_fn=tf.nn.relu,
                        weights_initializer=tf.truncated_normal_initializer(0.0, 0.01),
                        weights_regularizer=slim.l2_regularizer(0.0005)):
        net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')
        net = slim.max_pool2d(net, [2, 2], scope='pool1')
        net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
        net = slim.max_pool2d(net, [2, 2], scope='pool2')
        net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
        net = slim.max_pool2d(net, [2, 2], scope='pool3')
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4')
        net = slim.max_pool2d(net, [2, 2], scope='pool4')
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5')
        net = slim.max_pool2d(net, [2, 2], scope='pool5')
        net = slim.fully_connected(net, 4096, scope='fc6')
        net = slim.dropout(net, 0.5, scope='dropout6')
        net = slim.fully_connected(net, 4096, scope='fc7')
        net = slim.dropout(net, 0.5, scope='dropout7')
        net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc8')
    return net
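A quick sanity check of how the function above might be called; this is only a sketch, and note that this simplified vgg16 (it follows the TF-Slim README) applies fully_connected directly to a 4-D tensor, so in practice you would likely insert slim.flatten(net) before fc6 to get one logit vector per image:

# Assumed VGG-16 input size; reuses the tf/slim imports from above.
images = tf.placeholder(tf.float32, [None, 224, 224, 3])
logits = vgg16(images)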
3. Training models

import tensorflow as tf
vgg = tf.contrib.slim.nets.vgg

# Load the images and labels.
images, labels = ...

# Create the model.
predictions, _ = vgg.vgg_16(images)

# Define the loss functions and get the total loss.
loss = slim.losses.softmax_cross_entropy(predictions, labels)
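With the loss defined, slim.learning can drive the optimization loop. A minimal sketch following the TF-Slim README; the learning rate, log directory, and step count are placeholder values:

total_loss = slim.losses.get_total_loss()
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)

# create_train_op computes the loss and applies the gradients in a single op.
train_op = slim.learning.create_train_op(total_loss, optimizer)

# Runs the training loop, periodically writing checkpoints and summaries to logdir.
slim.learning.train(train_op, logdir='/tmp/vgg_train', number_of_steps=1000)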
About losses: it is worth knowing how to define your own loss, and remembering to add it to slim's loss collection so that slim can see it.
Regularization terms likewise have to be added to the total loss by hand; otherwise the regularization objective is simply never optimized.
# Load the images and labels.
images, scene_labels, depth_labels, pose_labels = ...

# Create the model.
scene_predictions, depth_predictions, pose_predictions = CreateMultiTaskModel(images)

# Define the loss functions and get the total loss.
classification_loss = slim.losses.softmax_cross_entropy(scene_predictions, scene_labels)
sum_of_squares_loss = slim.losses.sum_of_squares(depth_predictions, depth_labels)
pose_loss = MyCustomLossFunction(pose_predictions, pose_labels)
slim.losses.add_loss(pose_loss)  # Letting TF-Slim know about the additional loss.

# The following two ways to compute the total loss are equivalent:
regularization_loss = tf.add_n(slim.losses.get_regularization_losses())
total_loss1 = classification_loss + sum_of_squares_loss + pose_loss + regularization_loss

# (Regularization loss is included in the total loss by default.)
total_loss2 = slim.losses.get_total_loss()

4. Restoring and saving model variables
The following lets us restore only some of a model's variables:
# Create some variables.
v1 = slim.variable(name="v1", ...)
v2 = slim.variable(name="nested/v2", ...)
...

# Get the list of variables to restore (which contains only 'v2').
variables_to_restore = slim.get_variables_by_name("v2")

# Create the saver which will be used to restore the variables.
restorer = tf.train.Saver(variables_to_restore)

with tf.Session() as sess:
    # Restore variables from disk.
    restorer.restore(sess, "/tmp/model.ckpt")
    print("Model restored.")

Suppose the variable in our graph is named conv1/weights while the VGG checkpoint stores it as vgg16/conv1/weights. A plain restore will fail (variable name not found), but we can remap the names:
def name_in_checkpoint(var):
    return 'vgg16/' + var.op.name

variables_to_restore = slim.get_model_variables()
variables_to_restore = {name_in_checkpoint(var): var for var in variables_to_restore}
restorer = tf.train.Saver(variables_to_restore)

with tf.Session() as sess:
    # Restore variables from disk.
    restorer.restore(sess, "/tmp/model.ckpt")

In this way we can load variables whose names differ between the graph and the checkpoint.
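slim also wraps this restore pattern in a convenience function. A minimal sketch, assuming the same checkpoint path as above; ignore_missing_vars=True makes the restore tolerate variables that are absent from the checkpoint:

# Build an init function that restores the model variables from the checkpoint.
init_fn = slim.assign_from_checkpoint_fn(
    '/tmp/model.ckpt',
    slim.get_model_variables(),
    ignore_missing_vars=True)

with tf.Session() as sess:
    init_fn(sess)  # performs the restore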