import tensorflow as tf

tf.config.run_functions_eagerly(True)
tf.data.experimental.enable_debug_mode()
tf.debugging.set_log_device_placement(True)
tf.config.set_soft_device_placement(True)

tf_cpus = tf.config.list_physical_devices('CPU')
tf_gpus = tf.config.list_physical_devices('GPU')
tf_logical_cpus = tf.config.list_logical_devices('CPU')
tf_logical_gpus = tf.config.list_logical_devices('GPU')

print("TF_PHYS_CPUs:%s" % (', '.join(['%r' % c.name for c in tf_cpus]) or 'None'))
print("TF_LOGI_CPUs:%s" % (', '.join(['%r' % c.name for c in tf_logical_cpus]) or 'None'))
print("TF_PHYS_GPUs:%s" % ('\n '.join(['%r' % g.name for g in tf_gpus]) or 'None'))
print("TF_LOGI_GPUs:%s\n" % ('\n '.join(['%r' % g.name for g in tf_logical_gpus]) or 'None'))

# TPU setup (only applies on a TPU runtime):
# tf_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
# tf.config.experimental_connect_to_cluster(tf_resolver)
# tf.tpu.experimental.initialize_tpu_system(tf_resolver)
# tf_logi_tpus = tf.config.list_logical_devices('TPU')

Lifecycles, naming, and watching

  • A tf.Variable instance has the same lifecycle as other Python objects in Python-based TensorFlow:

    when there are no references to a variable, it is automatically deallocated.
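The deallocation behavior can be checked directly; here is a minimal sketch using Python's weakref module (illustrative only, not part of the original guide):

```python
import weakref

import tensorflow as tf

v = tf.Variable([1.0, 2.0])
ref = weakref.ref(v)   # observe the variable without keeping it alive
assert ref() is v

del v                  # drop the last reference...
deallocated = ref() is None  # ...and the variable object is gone
print(deallocated)
```

In eager mode no global collection holds on to the variable, so dropping the last Python reference frees its backing tensor as well.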

  • Variables can also be named, which can help you track and debug them.

    You can give two variables the same name.

# Create a and b; they will have the same name but will be backed by different tensors.
a = tf.Variable(my_tensor, name="Mark")

# A new variable with the same name, but a different value.
# Note that the scalar add is broadcast.
b = tf.Variable(my_tensor + 1, name="Mark")

# These are elementwise-unequal, despite having the same name.
print(a == b)
  • Variable names are preserved when saving and loading models.

    By default, variables in models will acquire unique variable names automatically, so you don't need to assign them yourself unless you want to.
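As a small sketch of name preservation, round-trip a named variable through a checkpoint (the tf.train.Checkpoint attribute name counter is an assumption for illustration):

```python
import os
import tempfile

import tensorflow as tf

v = tf.Variable(5.0, name="my_counter")
path = tf.train.Checkpoint(counter=v).save(
    os.path.join(tempfile.mkdtemp(), "ckpt"))

# Restoring into a fresh variable of the same structure brings the
# value back; the new variable keeps whatever name it was given.
w = tf.Variable(0.0, name="my_counter")
tf.train.Checkpoint(counter=w).restore(path)
print(w.name, w.numpy())  # my_counter:0 5.0
```

Checkpoints match variables by the attribute structure of the Checkpoint object, so the restored variable carries both the saved value and the name it was created with.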

  • You can turn off gradients for a variable by setting trainable to false at creation. Although variables are important for differentiation, some variables will not need to be differentiated. An example of a variable that would not need gradients is a training step counter.

    step_counter = tf.Variable(1, trainable=False)
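As a quick sketch of what trainable=False means for differentiation (the example values are assumptions for illustration):

```python
import tensorflow as tf

x = tf.Variable(3.0)                            # trainable by default
step_counter = tf.Variable(1, trainable=False)  # no gradients needed

with tf.GradientTape() as tape:
    y = x * x
    step_counter.assign_add(1)  # bookkeeping only

# The tape watched only the trainable variable.
watched = {v.ref() for v in tape.watched_variables()}
grad = tape.gradient(y, x)
print(step_counter.ref() in watched)  # False
print(grad.numpy())                   # 6.0
```

The counter can still be updated inside the tape; it simply never enters the gradient computation.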

Placing variables and tensors

For better performance, TensorFlow will attempt to place tensors and variables on the fastest device compatible with its dtype. This means most variables are placed on a GPU if one is available.

However, you can override this. In this snippet, place a float tensor and a variable on the CPU, even if a GPU is available. By turning on device placement logging (see Setup), you can see where the variable is placed.

Note: Although manual placement works, using distribution strategies can be a more convenient and scalable way to optimize your computation.

If you run this notebook on different backends with and without a GPU you will see different logging. Note that logging device placement must be turned on at the start of the session.


tf.debugging.set_log_device_placement(True)

with tf.device('CPU:0'):
    a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.Variable([[1.0, 2.0, 3.0]])

with tf.device('GPU:0'):
    # Element-wise multiply
    k = a * b

print(k)

Note: Because tf.config.set_soft_device_placement is turned on by default, even if you run this code on a device without a GPU it will still run; the multiplication step will simply happen on the CPU.
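A minimal sketch of that fallback, runnable with or without a GPU:

```python
import tensorflow as tf

tf.config.set_soft_device_placement(True)

with tf.device('GPU:0'):  # the requested device may not exist
    k = tf.constant([1.0, 2.0]) * tf.constant([3.0, 4.0])

# With soft placement on, the op still runs; .device reports where
# it actually executed (the CPU when no GPU is available).
print(k.device)
print(k.numpy())  # [3. 8.]
```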

print("Tensorflow CPUs:%s%s\n" % (
    '\n phys:' + (', '.join(['\n  %r' % (c.name,) for c in tf_cpus]) if tf_cpus else ' None'),
    '\n logi:' + (', '.join(['\n  %r' % (c.name,) for c in tf_logical_cpus]) if tf_logical_cpus else ' None'),
))
print("Tensorflow GPUs:%s%s\n" % (
    '\n phys:' + (', '.join(['\n  %r' % (g.name,) for g in tf_gpus]) if tf_gpus else ' None'),
    '\n logi:' + (', '.join(['\n  %r' % (g.name,) for g in tf_logical_gpus]) if tf_logical_gpus else ' None'),
))

with tf.device('CPU:0'):
    a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.Variable([[1.0, 2.0, 3.0]])

with tf.device('GPU:0'):
    # Element-wise multiply
    k = a * b

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    try:
        # Create 2 virtual GPUs with 1GB memory each.
        # Virtual devices must be configured before the GPUs are initialized.
        tf.config.set_logical_device_configuration(
            gpus[0],
            [tf.config.LogicalDeviceConfiguration(memory_limit=1024),
             tf.config.LogicalDeviceConfiguration(memory_limit=1024)])
    except RuntimeError as e:
        print(e)
    logical_gpus = tf.config.list_logical_devices('GPU')
    print(len(gpus), "Physical GPU,", len(logical_gpus), "Logical GPUs")

# MirroredStrategy expects device name strings (or no argument, to use
# all visible GPUs), not PhysicalDevice objects.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    inputs = tf.keras.layers.Input(shape=(1,))
    predictions = tf.keras.layers.Dense(1)(inputs)
    model = tf.keras.models.Model(inputs=inputs, outputs=predictions)
    model.compile(loss='mse',
                  optimizer=tf.keras.optimizers.SGD(learning_rate=0.2))
