https://github.com/tensorflow/tensorflow/issues/3212

NaNs usually indicate that something is wrong with your training. Perhaps your learning rate is too high, perhaps you have invalid data, or perhaps you have an invalid operation such as a divide by zero. TensorFlow refusing to write any NaNs is a warning that something has gone wrong with your training.

If you still suspect there is an underlying bug, you need to provide us with a reproducible test case (as small as possible), plus information about your environment (please see the issue submission template).
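One way to localize which op produced the first NaN or Inf (a minimal sketch, assuming the TF1 graph/session API used throughout this thread) is tf.check_numerics, which passes its input through unchanged but raises an error as soon as that tensor contains a NaN or Inf:

import tensorflow as tf

x = tf.constant([1.0, 0.0])
y = tf.log(x)                          # log(0) = -inf
y = tf.check_numerics(y, "y blew up")  # raises InvalidArgumentError on NaN/Inf

with tf.Session() as sess:
    sess.run(y)  # fails with: y blew up : Tensor had Inf values

TF1 also provides tf.add_check_numerics_ops(), which adds such a check for every floating-point tensor in the graph; running the op it returns alongside your train op pinpoints the first bad tensor.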

https://stackoverflow.com/questions/33712178/tensorflow-nan-bug?newreg=c7e31a867765444280ba3ca50b657a07

Actually, it turned out to be something stupid. I'm posting this in case anyone else runs into a similar error.

cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))

is actually a horrible way of computing the cross-entropy. In some samples, certain classes can be excluded with certainty after a while, resulting in y_conv=0 for that sample. That's normally not a problem, since you're not interested in those, but with cross_entropy written as above it yields 0*log(0) for that particular sample/class: log(0) is -inf, and 0 * -inf evaluates to NaN in floating-point arithmetic. Hence the NaN.

Replacing it with the clipped version below solved the problem:

cross_entropy = -tf.reduce_sum(y_*tf.log(tf.clip_by_value(y_conv,1e-10,1.0)))
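Clipping works, but the gradient of tf.clip_by_value is zero wherever the clip is active. A more numerically stable alternative (a sketch, assuming a hypothetical logits tensor holding the pre-softmax output from which y_conv is computed) is to let TensorFlow fuse the softmax and the cross-entropy:

# `logits` is the raw, pre-softmax network output (hypothetical name)
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))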

https://stackoverflow.com/questions/33922937/why-does-tensorflow-return-nan-nan-instead-of-probabilities-from-a-csv-file

Try throwing in a few tf.Print ops. Instead of this line:

tf_softmax = tf.nn.softmax(tf.matmul(tf_in,tf_weight) + tf_bias)

Try:

tf_bias = tf.Print(tf_bias, [tf_bias], "Bias: ")
tf_weight = tf.Print(tf_weight, [tf_weight], "Weight: ")
tf_in = tf.Print(tf_in, [tf_in], "TF_in: ")
matmul_result = tf.matmul(tf_in, tf_weight)
matmul_result = tf.Print(matmul_result, [matmul_result], "Matmul: ")
tf_softmax = tf.nn.softmax(matmul_result + tf_bias)

to see what TensorFlow thinks the intermediate values are. If the NaNs are showing up earlier in the pipeline, it should give you a better idea of where the problem lies. Good luck! If you get some data out of this, feel free to follow up and we'll see if we can get you further.

Updated to add:  Here's a stripped-down debugging version to try, where I got rid of the input functions and just generated some random data:
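(The original stripped-down script is not reproduced here; below is a minimal sketch in the same spirit, with hypothetical shapes, that feeds random data through the instrumented pipeline.)

import numpy as np
import tensorflow as tf

# keep the raw placeholder separate so feed_dict targets it, not a Print output
x = tf.placeholder(tf.float32, [None, 5])  # hypothetical input shape
w = tf.Variable(tf.truncated_normal([5, 3], stddev=0.1))
b = tf.Variable(tf.zeros([3]))

tf_in = tf.Print(x, [x], "TF_in: ")
tf_weight = tf.Print(w, [w], "Weight: ")
tf_bias = tf.Print(b, [b], "Bias: ")
matmul_result = tf.matmul(tf_in, tf_weight)
matmul_result = tf.Print(matmul_result, [matmul_result], "Matmul: ")
tf_softmax = tf.nn.softmax(matmul_result + tf_bias)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(tf_softmax, feed_dict={x: np.random.rand(4, 5).astype(np.float32)}))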

https://stackoverflow.com/questions/38810424/how-does-one-debug-nan-values-in-tensorflow

There are a couple of reasons WHY you can get a NaN result. Often it is because the learning rate is too high, but plenty of other causes are possible, such as corrupt data in your input queue or a log(0) calculation.

Anyhow, debugging this as you describe cannot be done with a simple Python print, as that would only print the tensor's static information inside the graph (its name, shape and dtype), not any actual values.
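For example (a sketch, with Z standing for any tensor in the graph):

print(Z)  # prints something like: Tensor("Sqrt:0", dtype=float32) -- no actual values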

However, if you use tf.Print as an op when building the graph, then when the graph gets executed you will get the actual values printed (and it IS a good exercise to watch these values to debug and understand the behavior of your net).

However, you are not using tf.Print in quite the correct manner. It is an op, so you need to pass it a tensor, and it returns a result tensor that you must use later on in the executed graph. If the returned tensor is never used, the op is not going to be executed and no printing occurs. Try this:

Z = tf.sqrt(Delta_tilde)
Z = tf.Print(Z,[Z], message="my Z-values:") # <-------- TF PRINT STATEMENT
Z = Transform(Z) # potentially some transform, currently I have it to return Z for debugging (the identity)
Z = tf.pow(Z, 2.0)
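The message then goes to standard error each time the graph is run with a fetch that depends on the returned Z (a sketch; Delta_tilde and the rest of the graph are assumed to be defined as above):

with tf.Session() as sess:
    sess.run(Z)  # prints "my Z-values: [...]" before returning the result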
