Comprehensive handwritten digit recognition with MXNet: from data preprocessing to network training, and saving the model and training log

import numpy as np
import mxnet as mx
import logging

logging.getLogger().setLevel(logging.DEBUG)

batch_size = 100
mnist = mx.test_utils.get_mnist()
train_iter = mx.io.NDArrayIter(mnist['train_data'], mnist['train_label'], batch_size, shuffle=True)
val_iter = mx.io.NDArrayIter(mnist['test_data'], mnist['test_label'], batch_size)
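get_mnist() downloads MNIST on first use and returns the images as float32 arrays of shape (N, 1, 28, 28), with pixel values already scaled to [0, 1]. A quick, optional sketch to confirm what each iterator batch carries before training:

batch = train_iter.next()
print(batch.data[0].shape)   # (100, 1, 28, 28): one channel per 28x28 image
print(batch.label[0].shape)  # (100,): one digit label per image
train_iter.reset()           # rewind so fit() sees the data from the start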
# input placeholder for a batch of images
data = mx.sym.var('data')

# first convolutional layer
conv1 = mx.sym.Convolution(data=data, kernel=(5, 5), num_filter=20)
tanh1 = mx.sym.Activation(data=conv1, act_type="tanh")
pool1 = mx.sym.Pooling(data=tanh1, pool_type="max", kernel=(2, 2), stride=(2, 2))

# second convolutional layer
conv2 = mx.sym.Convolution(data=pool1, kernel=(5, 5), num_filter=50)
tanh2 = mx.sym.Activation(data=conv2, act_type="tanh")
pool2 = mx.sym.Pooling(data=tanh2, pool_type="max", kernel=(2, 2), stride=(2, 2))

# first fully connected layer
flatten = mx.sym.Flatten(data=pool2)
fc1 = mx.sym.FullyConnected(data=flatten, num_hidden=500)
tanh3 = mx.sym.Activation(data=fc1, act_type="tanh")

# second fully connected layer: 10 outputs, one per digit class
fc2 = mx.sym.FullyConnected(data=tanh3, num_hidden=10)

# softmax loss
lenet = mx.sym.SoftmaxOutput(data=fc2, name='softmax')
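Before creating a module, the symbol can be sanity-checked by inferring shapes end to end; a minimal sketch, assuming the same (batch, 1, 28, 28) input layout as above:

arg_shapes, out_shapes, aux_shapes = lenet.infer_shape(data=(batch_size, 1, 28, 28))
print(out_shapes)  # [(100, 10)]: a 10-way softmax vector per image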
# create a trainable module on the CPU (use mx.gpu(0) instead if one is available)
lenet_model = mx.mod.Module(symbol=lenet, context=mx.cpu())

# train with the iterators and metric defined above
lenet_model.fit(train_iter,
                eval_data=val_iter,
                optimizer='sgd',
                optimizer_params={'learning_rate': 0.1},
                eval_metric='acc',
                batch_end_callback=mx.callback.Speedometer(batch_size, 100),
                num_epoch=10)
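The title also covers saving the model and the training log, and neither happens automatically. A minimal sketch of both, set up before training (the 'lenet' prefix and log file name are illustrative): attach a FileHandler so the INFO messages below also land in a file, and pass an epoch_end_callback so the parameters are checkpointed every epoch.

# also write the training log to a file
logging.getLogger().addHandler(logging.FileHandler('train_mnist.log'))

# checkpoint at the end of every epoch:
# writes lenet-symbol.json once, plus lenet-00xx.params per epoch
checkpoint = mx.callback.do_checkpoint('lenet', period=1)

lenet_model.fit(train_iter,
                eval_data=val_iter,
                optimizer='sgd',
                optimizer_params={'learning_rate': 0.1},
                eval_metric='acc',
                batch_end_callback=mx.callback.Speedometer(batch_size, 100),
                epoch_end_callback=checkpoint,
                num_epoch=10)

# or take a single snapshot once training has finished
lenet_model.save_checkpoint('lenet', 10)

With DEBUG logging enabled, the run prints output like the following: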

INFO:root:Epoch[0] Batch [100] Speed: 1504.57 samples/sec accuracy=0.113564
INFO:root:Epoch[0] Batch [200] Speed: 1516.40 samples/sec accuracy=0.118100
INFO:root:Epoch[0] Batch [300] Speed: 1515.71 samples/sec accuracy=0.116600
INFO:root:Epoch[0] Batch [400] Speed: 1505.61 samples/sec accuracy=0.110200
INFO:root:Epoch[0] Batch [500] Speed: 1406.21 samples/sec accuracy=0.107600
INFO:root:Epoch[0] Train-accuracy=0.108081
INFO:root:Epoch[0] Time cost=40.572
INFO:root:Epoch[0] Validation-accuracy=0.102800
INFO:root:Epoch[1] Batch [100] Speed: 1451.87 samples/sec accuracy=0.115050
INFO:root:Epoch[1] Batch [200] Speed: 1476.86 samples/sec accuracy=0.179600
INFO:root:Epoch[1] Batch [300] Speed: 1409.67 samples/sec accuracy=0.697100
INFO:root:Epoch[1] Batch [400] Speed: 1379.52 samples/sec accuracy=0.871900
INFO:root:Epoch[1] Batch [500] Speed: 1374.88 samples/sec accuracy=0.901000
INFO:root:Epoch[1] Train-accuracy=0.925556
INFO:root:Epoch[1] Time cost=42.527
INFO:root:Epoch[1] Validation-accuracy=0.936900
INFO:root:Epoch[2] Batch [100] Speed: 1376.59 samples/sec accuracy=0.936436
INFO:root:Epoch[2] Batch [200] Speed: 1379.29 samples/sec accuracy=0.948100
INFO:root:Epoch[2] Batch [300] Speed: 1375.07 samples/sec accuracy=0.953400
INFO:root:Epoch[2] Batch [400] Speed: 1369.65 samples/sec accuracy=0.958600
INFO:root:Epoch[2] Batch [500] Speed: 1371.79 samples/sec accuracy=0.960900
INFO:root:Epoch[2] Train-accuracy=0.966667
INFO:root:Epoch[2] Time cost=43.660
INFO:root:Epoch[2] Validation-accuracy=0.972900
INFO:root:Epoch[3] Batch [100] Speed: 1230.74 samples/sec accuracy=0.969505
INFO:root:Epoch[3] Batch [200] Speed: 1335.27 samples/sec accuracy=0.970800
INFO:root:Epoch[3] Batch [300] Speed: 1264.43 samples/sec accuracy=0.972600
INFO:root:Epoch[3] Batch [400] Speed: 1242.03 samples/sec accuracy=0.974100
INFO:root:Epoch[3] Batch [500] Speed: 1322.77 samples/sec accuracy=0.974600
INFO:root:Epoch[3] Train-accuracy=0.976465
INFO:root:Epoch[3] Time cost=46.860
INFO:root:Epoch[3] Validation-accuracy=0.980700
INFO:root:Epoch[4] Batch [100] Speed: 1342.42 samples/sec accuracy=0.978020
INFO:root:Epoch[4] Batch [200] Speed: 1339.98 samples/sec accuracy=0.980600
INFO:root:Epoch[4] Batch [300] Speed: 1344.36 samples/sec accuracy=0.981000
INFO:root:Epoch[4] Batch [400] Speed: 1338.13 samples/sec accuracy=0.980000
INFO:root:Epoch[4] Batch [500] Speed: 1343.76 samples/sec accuracy=0.979000
INFO:root:Epoch[4] Train-accuracy=0.983535
INFO:root:Epoch[4] Time cost=44.694
INFO:root:Epoch[4] Validation-accuracy=0.985700
INFO:root:Epoch[5] Batch [100] Speed: 1333.50 samples/sec accuracy=0.981584
INFO:root:Epoch[5] Batch [200] Speed: 1342.07 samples/sec accuracy=0.985400
INFO:root:Epoch[5] Batch [300] Speed: 1339.04 samples/sec accuracy=0.984300
INFO:root:Epoch[5] Batch [400] Speed: 1323.42 samples/sec accuracy=0.983500
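
The log shows validation accuracy climbing from near-chance (about 0.10 in epoch 0) to 98.6% by epoch 4. Once a checkpoint exists, it can be reloaded and scored against the validation set without retraining; a minimal sketch, assuming the 'lenet' prefix and epoch 10 from the saving step above:

sym, arg_params, aux_params = mx.model.load_checkpoint('lenet', 10)
model = mx.mod.Module(symbol=sym, context=mx.cpu())
model.bind(for_training=False,
           data_shapes=val_iter.provide_data,
           label_shapes=val_iter.provide_label)
model.set_params(arg_params, aux_params)

val_iter.reset()
print(model.score(val_iter, mx.metric.Accuracy()))  # list of (metric name, value) pairs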
