3.tensorflow——NN
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# hyperparameters
numClasses = 10
inputSize = 784
numHiddenUnits = 50
trainingIterations = 50000  # total training steps
batchSize = 64

# 1. dataset
mnist = input_data.read_data_sets('data/', one_hot=True)

############################################################
# 2. train
X = tf.placeholder(tf.float32, shape=[None, inputSize])
y = tf.placeholder(tf.float32, shape=[None, numClasses])

# 2.1 initialize parameters
# hidden = relu(X * W1 + B1)
W1 = tf.Variable(tf.truncated_normal([inputSize, numHiddenUnits], stddev=0.1))
B1 = tf.Variable(tf.constant(0.1, shape=[numHiddenUnits]))
# logits = hidden * W2 + B2
W2 = tf.Variable(tf.truncated_normal([numHiddenUnits, numClasses], stddev=0.1))
B2 = tf.Variable(tf.constant(0.1, shape=[numClasses]))

# layers
hiddenLayerOutput = tf.nn.relu(tf.matmul(X, W1) + B1)
# the output layer should emit raw logits: softmax_cross_entropy_with_logits
# applies the softmax itself, so no ReLU (or other activation) here
finalOutput = tf.matmul(hiddenLayerOutput, W2) + B2

# 2.2 training set-up
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=finalOutput))
opt = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)
correct_prediction = tf.equal(tf.argmax(finalOutput, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# 2.3 run training
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
for i in range(trainingIterations):
    batchInput, batchLabels = mnist.train.next_batch(batchSize)
    sess.run(opt, feed_dict={X: batchInput, y: batchLabels})
    if i % 1000 == 0:
        train_accuracy = sess.run(accuracy, feed_dict={X: batchInput, y: batchLabels})
        print("step %d, training accuracy %g" % (i, train_accuracy))

# 2.4 evaluate on a single test batch
batch = mnist.test.next_batch(batchSize)
testAccuracy = sess.run(accuracy, feed_dict={X: batch[0], y: batch[1]})
print("test accuracy %g" % testAccuracy)
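What the graph above wires together — a ReLU hidden layer, a raw-logits output layer, and the mean softmax cross-entropy — can be sketched in plain NumPy to make the math explicit. The shapes (784 inputs, 50 hidden units, 10 classes) and the 0.1 bias initializer come from the code; the random toy batch is made up for illustration:

```python
import numpy as np

def forward(X, W1, B1, W2, B2):
    """Two-layer MLP forward pass: ReLU hidden layer, raw logits out."""
    hidden = np.maximum(0.0, X @ W1 + B1)   # like tf.nn.relu(tf.matmul(X, W1) + B1)
    logits = hidden @ W2 + B2               # output layer: no activation on logits
    return logits

def softmax_cross_entropy(labels, logits):
    """Mean cross-entropy over the batch, as softmax_cross_entropy_with_logits computes."""
    z = logits - logits.max(axis=1, keepdims=True)          # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(labels * log_probs).sum(axis=1).mean()

# toy batch: 4 examples, 784-dim inputs, 50 hidden units, 10 classes
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 784))
W1 = rng.standard_normal((784, 50)) * 0.1
B1 = np.full(50, 0.1)
W2 = rng.standard_normal((50, 10)) * 0.1
B2 = np.full(10, 0.1)
labels = np.eye(10)[[3, 1, 4, 1]]            # one-hot labels

logits = forward(X, W1, B1, W2, B2)
print(logits.shape)                          # (4, 10)
loss = softmax_cross_entropy(labels, logits)
print(loss > 0)                              # True: cross-entropy is always positive
```

Gradient descent then nudges W1, B1, W2, B2 to reduce this loss, which is exactly what the `GradientDescentOptimizer(learning_rate=0.1).minimize(loss)` op does each step.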
Output:
step 0, training accuracy 0.171875
step 1000, training accuracy 0.84375
step 2000, training accuracy 0.953125
step 3000, training accuracy 0.84375
step 4000, training accuracy 0.953125
step 5000, training accuracy 1
step 6000, training accuracy 0.984375
step 7000, training accuracy 1
step 8000, training accuracy 0.984375
step 9000, training accuracy 1
step 10000, training accuracy 1
step 11000, training accuracy 0.96875
step 12000, training accuracy 1
step 13000, training accuracy 0.96875
step 14000, training accuracy 1
step 15000, training accuracy 0.984375
step 16000, training accuracy 0.953125
step 17000, training accuracy 1
step 18000, training accuracy 1
step 19000, training accuracy 1
step 20000, training accuracy 1
step 21000, training accuracy 1
step 22000, training accuracy 1
step 23000, training accuracy 1
step 24000, training accuracy 1
step 25000, training accuracy 1
step 26000, training accuracy 1
step 27000, training accuracy 1
step 28000, training accuracy 1
step 29000, training accuracy 1
step 30000, training accuracy 1
step 31000, training accuracy 1
step 32000, training accuracy 1
step 33000, training accuracy 1
step 34000, training accuracy 1
step 35000, training accuracy 1
step 36000, training accuracy 1
step 37000, training accuracy 1
step 38000, training accuracy 1
step 39000, training accuracy 1
step 40000, training accuracy 0.984375
step 41000, training accuracy 1
step 42000, training accuracy 1
step 43000, training accuracy 1
step 44000, training accuracy 1
step 45000, training accuracy 1
step 46000, training accuracy 1
step 47000, training accuracy 1
step 48000, training accuracy 1
step 49000, training accuracy 1
test accuracy 0.984375
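The accuracy figures above come from the argmax-comparison ops in the graph: take the most probable class per row, compare it to the true class, and average the matches. The same metric in plain NumPy, with a made-up batch of 4 predictions for illustration:

```python
import numpy as np

def batch_accuracy(logits, one_hot_labels):
    """Fraction of rows whose argmax prediction matches the label,
    mirroring tf.reduce_mean(tf.cast(tf.equal(argmax, argmax), float32))."""
    pred = np.argmax(logits, axis=1)
    true = np.argmax(one_hot_labels, axis=1)
    return float(np.mean(pred == true))

# hypothetical batch: 4 examples over 10 classes
logits = np.zeros((4, 10))
logits[0, 3] = logits[1, 1] = logits[2, 4] = logits[3, 7] = 1.0
labels = np.eye(10)[[3, 1, 4, 1]]        # the last prediction (7 vs 1) is wrong

print(batch_accuracy(logits, labels))    # 0.75
```

Note that the script evaluates on only one test batch of 64 images, which is why the reported test accuracy is so coarse (multiples of 1/64); averaging over the full 10,000-image test set would give a more reliable estimate.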