Without Dropout, training accuracy is high, approaching nearly 100%, but performance on the test set is noticeably worse.

With Dropout, training accuracy may be lower, but test-set performance improves, which is why Dropout is said to prevent overfitting.
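Before the full script, here is a minimal sketch of what tf.nn.dropout does (TensorFlow 1.x API assumed): each element is zeroed with probability 1 - keep_prob and the survivors are scaled by 1 / keep_prob, so the expected activation is unchanged and no extra rescaling is needed at test time.

import tensorflow as tf

x = tf.ones([1, 10])                    # a toy activation vector
keep_prob = tf.placeholder(tf.float32)
# tf.nn.dropout zeros each element with probability 1 - keep_prob
# and scales the kept elements by 1 / keep_prob
dropped = tf.nn.dropout(x, keep_prob)

with tf.Session() as sess:
    # keep_prob=0.5: kept entries become 2.0, roughly half are zeroed
    print(sess.run(dropped, feed_dict={keep_prob: 0.5}))
    # keep_prob=1.0: dropout is a no-op -- the test-time setting
    print(sess.run(dropped, feed_dict={keep_prob: 1.0}))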

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load the dataset
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

# Size of each batch
batch_size = 100
# Total number of batches
n_batch = mnist.train.num_examples // batch_size

# Define the placeholders
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
keep_prob = tf.placeholder(tf.float32)

# Build a simple neural network
W1 = tf.Variable(tf.truncated_normal([784, 2000], stddev=0.1))
b1 = tf.Variable(tf.zeros([2000]) + 0.1)
L1 = tf.nn.tanh(tf.matmul(x, W1) + b1)
L1_drop = tf.nn.dropout(L1, keep_prob)

W2 = tf.Variable(tf.truncated_normal([2000, 2000], stddev=0.1))
b2 = tf.Variable(tf.zeros([2000]) + 0.1)
L2 = tf.nn.tanh(tf.matmul(L1_drop, W2) + b2)
L2_drop = tf.nn.dropout(L2, keep_prob)

W3 = tf.Variable(tf.truncated_normal([2000, 1000], stddev=0.1))
b3 = tf.Variable(tf.zeros([1000]) + 0.1)
L3 = tf.nn.tanh(tf.matmul(L2_drop, W3) + b3)
L3_drop = tf.nn.dropout(L3, keep_prob)

W4 = tf.Variable(tf.truncated_normal([1000, 10], stddev=0.1))
b4 = tf.Variable(tf.zeros([10]) + 0.1)
logits = tf.matmul(L3_drop, W4) + b4
prediction = tf.nn.softmax(logits)

# Quadratic cost function
# loss = tf.reduce_mean(tf.square(y - prediction))
# The cross-entropy op expects raw logits, not the softmax output
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits))

# Train with gradient descent
train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)

# Initialize the variables
init = tf.global_variables_initializer()

# Store the results in a list of booleans; argmax returns the index of the
# largest value in a 1-D tensor
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
# Compute the accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(31):
        for batch in range(n_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys, keep_prob: 0.7})
        test_acc = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels, keep_prob: 1.0})
        train_acc = sess.run(accuracy, feed_dict={x: mnist.train.images, y: mnist.train.labels, keep_prob: 1.0})
        print("Iter " + str(epoch) + ", Testing Accuracy " + str(test_acc) + ", Training Accuracy " + str(train_acc))
