SciTech-BigDataAIML-Tensorflow-Variables
import tensorflow as tf

tf.config.run_functions_eagerly(True)
tf.data.experimental.enable_debug_mode()
tf.debugging.set_log_device_placement(True)
tf.config.set_soft_device_placement(True)

tf_cpus         = tf.config.list_physical_devices('CPU')
tf_gpus         = tf.config.list_physical_devices('GPU')
tf_logical_cpus = tf.config.list_logical_devices('CPU')
tf_logical_gpus = tf.config.list_logical_devices('GPU')
print("TF_PHYS_CPUs:%s" % (', '.join('%r' % c.name for c in tf_cpus) or 'None'))
print("TF_LOGI_CPUs:%s" % (', '.join('%r' % c.name for c in tf_logical_cpus) or 'None'))
print("TF_PHYS_GPUs:%s" % ('\n '.join('%r' % g.name for g in tf_gpus) or 'None'))
print("TF_LOGI_GPUs:%s\n" % ('\n '.join('%r' % g.name for g in tf_logical_gpus) or 'None'))
# tf_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
# tf.config.experimental_connect_to_cluster(tf_resolver)
# tf.tpu.experimental.initialize_tpu_system(tf_resolver)
# tf_logi_tpus = tf.config.list_logical_devices('TPU')
Lifecycles, naming, and watching
A tf.Variable instance has the same lifecycle as any other Python object in Python-based TensorFlow:
when there are no more references to a variable, it is automatically deallocated. Variables can also be named, which can help you track and debug them.
You can even give two variables the same name.
# my_tensor as defined earlier in the TensorFlow Variables guide
my_tensor = tf.constant([[1.0, 2.0], [3.0, 4.0]])

# Create a and b; they will have the same name but will be backed by different tensors.
a = tf.Variable(my_tensor, name="Mark")
# A new variable with the same name but a different value.
# Note that the scalar add is broadcast.
b = tf.Variable(my_tensor + 1, name="Mark")
# These are elementwise-unequal, despite having the same name.
print(a == b)
Variable names are preserved when saving and loading models.
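As a quick sketch of that round trip (the checkpoint path /tmp/ckpt-demo is illustrative, not part of the guide): a variable's value and name both survive a save/restore via tf.train.Checkpoint.

```python
import tensorflow as tf

v = tf.Variable(3.0, name="Mark")
ckpt = tf.train.Checkpoint(v=v)
save_path = ckpt.save('/tmp/ckpt-demo')  # illustrative path

v.assign(0.0)                 # clobber the in-memory value
ckpt.restore(save_path)       # value comes back from disk
print(v.name, v.numpy())      # Mark:0 3.0
```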
By default, variables in models acquire unique variable names automatically, so you don't need to assign them yourself unless you want to.
You can turn off gradients for a variable by setting trainable to False at creation. Although variables are important for differentiation, some variables will not need to be differentiated. An example of a variable that does not need gradients is a training step counter:
step_counter = tf.Variable(1, trainable=False)
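A minimal sketch of how this interacts with gradient tracking (the variables here are illustrative): a variable created with trainable=False is not watched by tf.GradientTape by default, so no gradient flows to it even if it appears in the loss.

```python
import tensorflow as tf

w = tf.Variable(2.0)                              # trainable by default
step_counter = tf.Variable(1.0, trainable=False)  # not watched by the tape

with tf.GradientTape() as tape:
    loss = w * w + step_counter                   # counter appears in the loss

grads = tape.gradient(loss, [w, step_counter])
print(grads[0].numpy())  # 4.0, since d(w^2)/dw = 2w
print(grads[1])          # None: not watched, so no gradient
```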
Placing variables and tensors
For better performance, TensorFlow will attempt to place tensors and variables on the fastest device compatible with their dtype. This means most variables are placed on a GPU if one is available.
However, you can override this. In the following snippet, a float tensor and a variable are placed on the CPU even if a GPU is available. By turning on device placement logging (see Setup), you can see where the variable is placed.
Note: Although manual placement works, using distribution strategies can be a more convenient and scalable way to optimize your computation.
If you run this notebook on different backends with and without a GPU you will see different logging. Note that logging device placement must be turned on at the start of the session.
tf.debugging.set_log_device_placement(True)

with tf.device('CPU:0'):
    a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.Variable([[1.0, 2.0, 3.0]])

with tf.device('GPU:0'):
    # Element-wise multiply
    k = a * b

print(k)
Note: Because tf.config.set_soft_device_placement is turned on by default, this code will still run even on a device without a GPU; the multiplication step will simply happen on the CPU.
print("Tensorflow CPUs:%s%s\n" % (
    '\n  phys:' + (''.join('\n    %r' % c.name for c in tf_cpus) if tf_cpus else ' None'),
    '\n  logi:' + (''.join('\n    %r' % c.name for c in tf_logical_cpus) if tf_logical_cpus else ' None'),
))
print("Tensorflow GPUs:%s%s\n" % (
    '\n  phys:' + (''.join('\n    %r' % g.name for g in tf_gpus) if tf_gpus else ' None'),
    '\n  logi:' + (''.join('\n    %r' % g.name for g in tf_logical_gpus) if tf_logical_gpus else ' None'),
))
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # Create 2 virtual GPUs with 1GB memory each
    try:
        tf.config.set_logical_device_configuration(
            gpus[0],
            [tf.config.LogicalDeviceConfiguration(memory_limit=1024),
             tf.config.LogicalDeviceConfiguration(memory_limit=1024)])
    except RuntimeError as e:
        # Virtual devices must be set before GPUs have been initialized
        print(e)
    logical_gpus = tf.config.list_logical_devices('GPU')
    print(len(gpus), "Physical GPU,", len(logical_gpus), "Logical GPUs")
# MirroredStrategy expects logical devices (or device name strings), not
# the PhysicalDevice objects listed above.
strategy = tf.distribute.MirroredStrategy(tf.config.list_logical_devices('GPU'))
with strategy.scope():
    inputs = tf.keras.layers.Input(shape=(1,))
    predictions = tf.keras.layers.Dense(1)(inputs)
    model = tf.keras.models.Model(inputs=inputs, outputs=predictions)
    model.compile(loss='mse', optimizer=tf.keras.optimizers.SGD(learning_rate=0.2))
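A self-contained usage sketch of the same pattern (the synthetic data and hyperparameters are illustrative): calling MirroredStrategy() with no argument uses all visible devices, falling back to a single replica on CPU-only machines, so this also runs without a GPU.

```python
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # all visible devices
with strategy.scope():
    inputs = tf.keras.layers.Input(shape=(1,))
    predictions = tf.keras.layers.Dense(1)(inputs)
    model = tf.keras.models.Model(inputs=inputs, outputs=predictions)
    model.compile(loss='mse', optimizer=tf.keras.optimizers.SGD(learning_rate=0.2))

# Illustrative synthetic data for y = 3x + 2
x = np.linspace(0.0, 1.0, 32, dtype='float32').reshape(-1, 1)
y = 3.0 * x + 2.0
history = model.fit(x, y, epochs=5, batch_size=8, verbose=0)
print(history.history['loss'][-1])  # loss after 5 epochs
```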