TensorFlow: Handwritten Digit Recognition with MNIST
MNIST For ML Beginners - https://www.tensorflow.org/get_started/mnist/beginners
Deep MNIST for Experts - https://www.tensorflow.org/get_started/mnist/pros
Versions:
TensorFlow 1.2.0 + Flask 0.12 + Gunicorn 19.6
Related posts: 
TensorFlow: Getting Started 
TensorFlow: Handwritten Digit Recognition with MNIST 
TensorFlow: Object Detection 
TensorFlow: Building a Person Recognition System
MNIST is the Hello World of machine learning.
In this post, a digit is drawn on an HTML Canvas in the page and sent to TensorFlow for recognition; the predictions of both a Softmax regression model and a multilayer convolutional network are returned.
(1) File structure
│  main.py 
│  requirements.txt 
│  runtime.txt 
├─mnist 
│  │  convolutional.py 
│  │  model.py 
│  │  regression.py 
│  │  __init__.py 
│  └─data 
│          convolutional.ckpt.data-00000-of-00001 
│          convolutional.ckpt.index 
│          regression.ckpt.data-00000-of-00001 
│          regression.ckpt.index 
├─src 
│  └─js 
│          main.js 
├─static 
│  ├─css 
│  │      bootstrap.min.css 
│  └─js 
│          jquery.min.js 
│          main.js 
└─templates 
        index.html
(2) Training data
Download the following files into /tmp/data/. They do not need to be unpacked; the training code decompresses them automatically.
http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz
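Before a long training run it is worth confirming that the files in /tmp/data/ load correctly. The sketch below is only a quick check, not part of the project; it calls the same input_data.read_data_sets helper the training scripts use, which with default settings splits the 60000 training images into 55000 for training and 5000 for validation:

```python
# Quick sanity check that the downloaded .gz files in /tmp/data/ are readable.
# Uses the same loader as regression.py / convolutional.py below.
from tensorflow.examples.tutorials.mnist import input_data

data = input_data.read_data_sets("/tmp/data/", one_hot=True)
print(data.train.num_examples)       # 55000
print(data.validation.num_examples)  # 5000
print(data.test.num_examples)        # 10000
print(data.train.images.shape)       # (55000, 784), pixel values scaled to [0, 1]
```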
Run the following commands to train the two models (the Softmax regression model and the multilayer convolutional network):
- # python regression.py
- # python convolutional.py
 
When training finishes, the following files are generated in mnist/data/. Delete them before retraining.
convolutional.ckpt.data-00000-of-00001
convolutional.ckpt.index
regression.ckpt.data-00000-of-00001
regression.ckpt.index
(3) Start the web service and test
- # cd /usr/local/tensorflow2/tensorflow-models/tf-mnist
- # pip install -r requirements.txt
- # gunicorn main:app --log-file=- --bind=localhost:8000
 
Open http://localhost:8000 in a browser and draw a digit on the canvas. 
*** Note: the TensorFlow version, the trained model files, and the Canvas-to-image conversion all affect recognition accuracy to some extent.
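The API can also be exercised without the browser UI, which helps separate model issues from Canvas-conversion issues. The sketch below assumes the requests package is installed (it is not part of the project) and that the gunicorn server above is running; as main.py in the next section shows, the endpoint expects a JSON array of 784 grayscale values in 0–255 (255 = white background) and returns one probability vector per model:

```python
# Sketch: POST a 28x28 grayscale image (flattened to 784 values) to the service
# and print the digit predicted by each model. Assumes `pip install requests`.
import requests

pixels = [255] * 784  # all-white canvas; replace with real 0-255 grayscale data
resp = requests.post("http://localhost:8000/api/mnist", json=pixels)
regression_probs, convolutional_probs = resp.json()["results"]

print("regression:   ", regression_probs.index(max(regression_probs)))
print("convolutional:", convolutional_probs.index(max(convolutional_probs)))
```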
(4) Source code
The web part is simple: the page contains a Canvas element; on mouseup, the Canvas image is sent to the back-end API via Ajax and the API's results are displayed.
templates/index.html
main.py
```python
import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, render_template, request

from mnist import model

x = tf.placeholder("float", [None, 784])
sess = tf.Session()

# restore trained data
with tf.variable_scope("regression"):
    y1, variables = model.regression(x)
saver = tf.train.Saver(variables)
saver.restore(sess, "mnist/data/regression.ckpt")

with tf.variable_scope("convolutional"):
    keep_prob = tf.placeholder("float")
    y2, variables = model.convolutional(x, keep_prob)
saver = tf.train.Saver(variables)
saver.restore(sess, "mnist/data/convolutional.ckpt")


def regression(input):
    return sess.run(y1, feed_dict={x: input}).flatten().tolist()


def convolutional(input):
    return sess.run(y2, feed_dict={x: input, keep_prob: 1.0}).flatten().tolist()


# webapp
app = Flask(__name__)


@app.route('/api/mnist', methods=['POST'])
def mnist():
    input = ((255 - np.array(request.json, dtype=np.uint8)) / 255.0).reshape(1, 784)
    output1 = regression(input)
    output2 = convolutional(input)
    print(output1)
    print(output2)
    return jsonify(results=[output1, output2])


@app.route('/')
def main():
    return render_template('index.html')


if __name__ == '__main__':
    app.run()
```
 
mnist/model.py
```python
import tensorflow as tf


# Softmax Regression Model
def regression(x):
    W = tf.Variable(tf.zeros([784, 10]), name="W")
    b = tf.Variable(tf.zeros([10]), name="b")
    y = tf.nn.softmax(tf.matmul(x, W) + b)
    return y, [W, b]


# Multilayer Convolutional Network
def convolutional(x, keep_prob):
    def conv2d(x, W):
        return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

    def max_pool_2x2(x):
        return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

    def weight_variable(shape):
        initial = tf.truncated_normal(shape, stddev=0.1)
        return tf.Variable(initial)

    def bias_variable(shape):
        initial = tf.constant(0.1, shape=shape)
        return tf.Variable(initial)

    # First Convolutional Layer
    x_image = tf.reshape(x, [-1, 28, 28, 1])
    W_conv1 = weight_variable([5, 5, 1, 32])
    b_conv1 = bias_variable([32])
    h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
    h_pool1 = max_pool_2x2(h_conv1)

    # Second Convolutional Layer
    W_conv2 = weight_variable([5, 5, 32, 64])
    b_conv2 = bias_variable([64])
    h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
    h_pool2 = max_pool_2x2(h_conv2)

    # Densely Connected Layer
    W_fc1 = weight_variable([7 * 7 * 64, 1024])
    b_fc1 = bias_variable([1024])
    h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
    h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

    # Dropout
    h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

    # Readout Layer
    W_fc2 = weight_variable([1024, 10])
    b_fc2 = bias_variable([10])
    y = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

    return y, [W_conv1, b_conv1, W_conv2, b_conv2, W_fc1, b_fc1, W_fc2, b_fc2]
```
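The 7 * 7 * 64 in the densely connected layer comes from two rounds of 2x2 max pooling (28x28 → 14x14 → 7x7) with 64 feature maps after the second convolution. As a quick sanity check, the graph can be built on a dummy batch to confirm the shapes; this is only an illustrative sketch run from the mnist/ directory, not part of the project:

```python
# Sketch: check that model.convolutional maps [batch, 784] to [batch, 10].
import numpy as np
import tensorflow as tf

import model

with tf.variable_scope("shape_check"):
    x = tf.placeholder(tf.float32, [None, 784])
    keep_prob = tf.placeholder(tf.float32)
    y, variables = model.convolutional(x, keep_prob)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    dummy = np.zeros((2, 784), dtype=np.float32)
    print(sess.run(y, feed_dict={x: dummy, keep_prob: 1.0}).shape)  # (2, 10)
    print(len(variables))  # 8 weight/bias tensors go into the checkpoint
```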
 
mnist/convolutional.py
```python
import os

import model
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

data = input_data.read_data_sets("/tmp/data/", one_hot=True)

# model
with tf.variable_scope("convolutional"):
    x = tf.placeholder(tf.float32, [None, 784])
    keep_prob = tf.placeholder(tf.float32)
    y, variables = model.convolutional(x, keep_prob)

# train
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

saver = tf.train.Saver(variables)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(20000):
        batch = data.train.next_batch(50)
        if i % 100 == 0:
            train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
            print("step %d, training accuracy %g" % (i, train_accuracy))
        sess.run(train_step, feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

    print(sess.run(accuracy, feed_dict={x: data.test.images, y_: data.test.labels, keep_prob: 1.0}))

    path = saver.save(
        sess, os.path.join(os.path.dirname(__file__), 'data', 'convolutional.ckpt'),
        write_meta_graph=False, write_state=False)
    print("Saved:", path)
```
 
mnist/regression.py
```python
import os

import model
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

data = input_data.read_data_sets("/tmp/data/", one_hot=True)

# model
with tf.variable_scope("regression"):
    x = tf.placeholder(tf.float32, [None, 784])
    y, variables = model.regression(x)

# train
y_ = tf.placeholder("float", [None, 10])
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

saver = tf.train.Saver(variables)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        batch_xs, batch_ys = data.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

    print(sess.run(accuracy, feed_dict={x: data.test.images, y_: data.test.labels}))

    path = saver.save(
        sess, os.path.join(os.path.dirname(__file__), 'data', 'regression.ckpt'),
        write_meta_graph=False, write_state=False)
    print("Saved:", path)
```
 
Reference: 
http://memo.sugyan.com/entry/20151124/1448292129