import tensorflow as tf

tf.config.run_functions_eagerly(True)
tf.data.experimental.enable_debug_mode()
tf.debugging.set_log_device_placement(True)
tf.config.set_soft_device_placement(True)

tf_gpus = tf.config.list_physical_devices('GPU')
tf_cpus = tf.config.list_physical_devices('CPU')
tf_logi_gpus = tf.config.list_logical_devices('GPU')
tf_logi_cpus = tf.config.list_logical_devices('CPU')

print("TF_PHYS_CPUs: %s" % (', '.join('%r' % c.name for c in tf_cpus) or 'None'))
print("TF_LOGI_CPUs: %s" % (', '.join('%r' % c.name for c in tf_logi_cpus) or 'None'))
print("TF_PHYS_GPUs: %s" % ('\n  '.join('%r' % g.name for g in tf_gpus) or 'None'))
print("TF_LOGI_GPUs: %s\n" % ('\n  '.join('%r' % g.name for g in tf_logi_gpus) or 'None'))

# tf_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
# tf.config.experimental_connect_to_cluster(tf_resolver)
# tf.tpu.experimental.initialize_tpu_system(tf_resolver)
# tf_logi_tpus = tf.config.list_logical_devices('TPU')


Lifecycles, naming, and watching

  • A tf.Variable instance has the same lifecycle as any other Python object in Python-based TensorFlow:

    when there are no references to a variable, it is automatically deallocated.

  • Variables can also be named, which can help you track and debug them.

    You can give two variables the same name.

# Create a and b; they will have the same name but will be backed by different tensors.
a = tf.Variable(my_tensor, name="Mark")
# A new variable with the same name, but a different value.
# Note that the scalar add is broadcast.
b = tf.Variable(my_tensor + 1, name="Mark")

# These are elementwise-unequal, despite having the same name.
print(a == b)
  • Variable names are preserved when saving and loading models.

    By default, variables in models acquire unique names automatically, so you don't need to assign them yourself unless you want to.
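As a quick sketch of that preservation, a variable's name survives a `tf.train.Checkpoint` round trip (the checkpoint path and variable name below are arbitrary choices for illustration):

```python
import tensorflow as tf

# A variable's name is part of the object, and it is unchanged by save/restore.
v = tf.Variable(3.0, name="my_weight")
path = tf.train.Checkpoint(v=v).write("/tmp/ckpt_demo")

# Restoring into a fresh variable recovers the saved value; the name
# stays whatever the new variable was created with.
restored = tf.Variable(0.0, name="my_weight")
tf.train.Checkpoint(v=restored).read(path)
print(restored.name, restored.numpy())
```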

  • You can turn off gradients for a variable by setting trainable to False at creation. Although variables are important for differentiation, some variables do not need to be differentiated; a training step counter is one example.

    step_counter = tf.Variable(1, trainable=False)
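To see the effect: a tf.GradientTape only watches trainable variables automatically, so the counter stays out of gradient computation. A minimal check (variable names here are illustrative):

```python
import tensorflow as tf

step_counter = tf.Variable(1, trainable=False)
w = tf.Variable(2.0)  # trainable=True by default

with tf.GradientTape() as tape:
    loss = w * w

# Only the trainable variable is watched; step_counter is ignored.
watched = [v.ref() for v in tape.watched_variables()]
print(len(watched))
```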

Placing variables and tensors

For better performance, TensorFlow will attempt to place tensors and variables on the fastest device compatible with its dtype. This means most variables are placed on a GPU if one is available.
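You can check where a variable actually landed through its .device attribute; the exact string depends on the machine, so a small probe like this (an illustrative sketch, not part of the original guide) is handy:

```python
import tensorflow as tf

v = tf.Variable([1.0, 2.0])
# On a machine with a visible GPU this typically reports device:GPU:0;
# on a CPU-only machine it reports device:CPU:0.
print(v.device)
```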

However, you can override this. The following snippet places a float tensor and a variable on the CPU, even if a GPU is available. By turning on device placement logging (see Setup), you can see where the variable is placed.

Note: Although manual placement works, using distribution strategies can be a more convenient and scalable way to optimize your computation.

If you run this notebook on different backends with and without a GPU you will see different logging. Note that logging device placement must be turned on at the start of the session.


tf.debugging.set_log_device_placement(True)

with tf.device('CPU:0'):
    a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.Variable([[1.0, 2.0, 3.0]])

with tf.device('GPU:0'):
    # Element-wise multiply
    k = a * b

print(k)

Note: Because tf.config.set_soft_device_placement is turned on by default, this code will still run even on a device without a GPU; the multiplication step will simply happen on the CPU.
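A minimal illustration of that fallback, runnable on a CPU-only machine:

```python
import tensorflow as tf

tf.config.set_soft_device_placement(True)

# Requesting GPU:0 on a machine without one does not raise an error;
# TensorFlow silently places the op on an available device instead.
with tf.device('GPU:0'):
    x = tf.constant([1.0, 2.0]) * 2.0

print(x.numpy())
```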

print("Tensorflow CPUs:%s%s\n" % (
    '\n phys:' + (', '.join('\n  %r' % c.name for c in tf_cpus) if tf_cpus else ' None'),
    '\n logi:' + (', '.join('\n  %r' % c.name for c in tf_logi_cpus) if tf_logi_cpus else ' None'),
))
print("Tensorflow GPUs:%s%s\n" % (
    '\n phys:' + (', '.join('\n  %r' % g.name for g in tf_gpus) if tf_gpus else ' None'),
    '\n logi:' + (', '.join('\n  %r' % g.name for g in tf_logi_gpus) if tf_logi_gpus else ' None'),
))

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # Create 2 virtual GPUs with 1GB memory each.
    # Virtual devices must be set before GPUs have been initialized.
    try:
        tf.config.set_logical_device_configuration(
            gpus[0],
            [tf.config.LogicalDeviceConfiguration(memory_limit=1024),
             tf.config.LogicalDeviceConfiguration(memory_limit=1024)])
    except RuntimeError as e:
        print(e)
    logical_gpus = tf.config.list_logical_devices('GPU')
    print(len(gpus), "Physical GPU,", len(logical_gpus), "Logical GPUs")

# MirroredStrategy picks up all visible GPUs by default (and falls back
# to the CPU when none are available).
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    inputs = tf.keras.layers.Input(shape=(1,))
    predictions = tf.keras.layers.Dense(1)(inputs)
    model = tf.keras.models.Model(inputs=inputs, outputs=predictions)
    model.compile(loss='mse', optimizer=tf.keras.optimizers.SGD(learning_rate=0.2))
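To exercise the mirrored model end to end, here is a self-contained variant fit on hypothetical toy data (the data, epoch count, and batch size are illustrative choices; the strategy falls back to the CPU when no GPU is visible):

```python
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    inputs = tf.keras.layers.Input(shape=(1,))
    predictions = tf.keras.layers.Dense(1)(inputs)
    model = tf.keras.models.Model(inputs=inputs, outputs=predictions)
    model.compile(loss='mse',
                  optimizer=tf.keras.optimizers.SGD(learning_rate=0.2))

# Hypothetical data following y = 3x + 1.
x = np.arange(10, dtype=np.float32).reshape(-1, 1)
y = 3.0 * x + 1.0

model.fit(x, y, epochs=1, batch_size=5, verbose=0)
print(model.predict(x, verbose=0).shape)
```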

