import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load the MNIST dataset
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

# Size of each batch and the number of batches per epoch
batch_size = 64
n_batch = mnist.train.num_examples // batch_size

# Define three placeholders: images, one-hot labels, and the dropout keep probability
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
keep_prob = tf.placeholder(tf.float32)

# Network structure: 784-1000-500-10
W1 = tf.Variable(tf.truncated_normal([784, 1000], stddev=0.1))
b1 = tf.Variable(tf.zeros([1000]) + 0.1)
L1 = tf.nn.tanh(tf.matmul(x, W1) + b1)
L1_drop = tf.nn.dropout(L1, keep_prob)

W2 = tf.Variable(tf.truncated_normal([1000, 500], stddev=0.1))
b2 = tf.Variable(tf.zeros([500]) + 0.1)
L2 = tf.nn.tanh(tf.matmul(L1_drop, W2) + b2)
L2_drop = tf.nn.dropout(L2, keep_prob)

W3 = tf.Variable(tf.truncated_normal([500, 10], stddev=0.1))
b3 = tf.Variable(tf.zeros([10]) + 0.1)
prediction = tf.nn.softmax(tf.matmul(L2_drop, W3) + b3)

# Cross-entropy loss
loss = tf.losses.softmax_cross_entropy(y, prediction)

# Train with gradient descent
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

# Variable initializer
init = tf.global_variables_initializer()

# correct_prediction is a boolean tensor; argmax returns the index of the largest value in each row
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
# Accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(31):
        for batch in range(n_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            # Train with dropout enabled (keep_prob = 0.5)
            sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys, keep_prob: 0.5})
        # Evaluate with dropout disabled (keep_prob = 1.0)
        test_acc = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels, keep_prob: 1.0})
        train_acc = sess.run(accuracy, feed_dict={x: mnist.train.images, y: mnist.train.labels, keep_prob: 1.0})
        print("Iter " + str(epoch) + ",Testing Accuracy " + str(test_acc) + ",Training Accuracy " + str(train_acc))
Extracting MNIST_data\train-images-idx3-ubyte.gz
Extracting MNIST_data\train-labels-idx1-ubyte.gz
Extracting MNIST_data\t10k-images-idx3-ubyte.gz
Extracting MNIST_data\t10k-labels-idx1-ubyte.gz
Iter 0,Testing Accuracy 0.9201,Training Accuracy 0.91234547
Iter 1,Testing Accuracy 0.9256,Training Accuracy 0.9229636
Iter 2,Testing Accuracy 0.9359,Training Accuracy 0.9328182
Iter 3,Testing Accuracy 0.9375,Training Accuracy 0.93716365
Iter 4,Testing Accuracy 0.9408,Training Accuracy 0.9411273
Iter 5,Testing Accuracy 0.9407,Training Accuracy 0.94365454
Iter 6,Testing Accuracy 0.9472,Training Accuracy 0.9484909
Iter 7,Testing Accuracy 0.9472,Training Accuracy 0.9502
Iter 8,Testing Accuracy 0.9516,Training Accuracy 0.95336366
Iter 9,Testing Accuracy 0.9522,Training Accuracy 0.95552725
Iter 10,Testing Accuracy 0.9525,Training Accuracy 0.95632726
Iter 11,Testing Accuracy 0.9566,Training Accuracy 0.9578909
Iter 12,Testing Accuracy 0.9574,Training Accuracy 0.9606182
Iter 13,Testing Accuracy 0.9573,Training Accuracy 0.96107274
Iter 14,Testing Accuracy 0.9587,Training Accuracy 0.9614546
Iter 15,Testing Accuracy 0.9581,Training Accuracy 0.9616727
Iter 16,Testing Accuracy 0.9599,Training Accuracy 0.96369094
Iter 17,Testing Accuracy 0.9601,Training Accuracy 0.96403635
Iter 18,Testing Accuracy 0.9618,Training Accuracy 0.9658909
Iter 19,Testing Accuracy 0.9608,Training Accuracy 0.9652
Iter 20,Testing Accuracy 0.9618,Training Accuracy 0.96607274
Iter 21,Testing Accuracy 0.9634,Training Accuracy 0.96794546
Iter 22,Testing Accuracy 0.9639,Training Accuracy 0.96836364
Iter 23,Testing Accuracy 0.964,Training Accuracy 0.96965456
Iter 24,Testing Accuracy 0.9644,Training Accuracy 0.9693091
Iter 25,Testing Accuracy 0.9647,Training Accuracy 0.9703818
Iter 26,Testing Accuracy 0.9639,Training Accuracy 0.9702
Iter 27,Testing Accuracy 0.9651,Training Accuracy 0.9708909
Iter 28,Testing Accuracy 0.9666,Training Accuracy 0.9711818
Iter 29,Testing Accuracy 0.9644,Training Accuracy 0.9710364
Iter 30,Testing Accuracy 0.9659,Training Accuracy 0.97205454
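With keep_prob = 0.5 during training and 1.0 during evaluation, the training and testing accuracies stay close (about 0.972 vs 0.966 after 30 epochs), which is the point of dropout: randomly dropping hidden units discourages co-adaptation and narrows the gap between training and test performance. For intuition, here is a minimal NumPy sketch of the inverted-dropout behaviour that tf.nn.dropout implements (the function and variable names are illustrative, not part of the code above): each unit is kept with probability keep_prob and the survivors are scaled by 1/keep_prob, so at test time the layer can simply be left unchanged.

import numpy as np

def inverted_dropout(activations, keep_prob, training=True):
    # Keep each unit with probability keep_prob and rescale the survivors
    # by 1/keep_prob so the expected activation stays the same.
    if not training or keep_prob >= 1.0:
        return activations  # test time: identity, matching keep_prob = 1.0 above
    mask = (np.random.rand(*activations.shape) < keep_prob).astype(activations.dtype)
    return activations * mask / keep_prob

h = np.ones((4, 5), dtype=np.float32)            # stand-in for a hidden-layer output
print(inverted_dropout(h, 0.5))                  # roughly half zeros, survivors become 2.0
print(inverted_dropout(h, 0.5, training=False))  # unchanged at evaluation time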
