Comprehensive handwritten digit recognition with MXNet: from data preprocessing to network training, and saving the model and logs
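
This post walks through handwritten digit recognition in MXNet end to end: building MNIST data iterators, defining a LeNet-style convolutional network with the symbol API, training it with the Module API, and saving the resulting model and training log.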

import logging

import numpy as np
import mxnet as mx

# Print training progress: fit() logs at INFO level and above.
logging.getLogger().setLevel(logging.DEBUG)

batch_size = 100

# Download MNIST and wrap it in iterators; images are 1x28x28 floats in [0, 1].
mnist = mx.test_utils.get_mnist()
train_iter = mx.io.NDArrayIter(mnist['train_data'], mnist['train_label'], batch_size, shuffle=True)
val_iter = mx.io.NDArrayIter(mnist['test_data'], mnist['test_label'], batch_size)

# Symbolic input placeholder for the network.
data = mx.sym.var('data')
# First conv layer: 20 5x5 filters, tanh, then 2x2 max pooling.
conv1 = mx.sym.Convolution(data=data, kernel=(5, 5), num_filter=20)
tanh1 = mx.sym.Activation(data=conv1, act_type="tanh")
pool1 = mx.sym.Pooling(data=tanh1, pool_type="max", kernel=(2, 2), stride=(2, 2))
# Second conv layer: 50 5x5 filters, tanh, then 2x2 max pooling.
conv2 = mx.sym.Convolution(data=pool1, kernel=(5, 5), num_filter=50)
tanh2 = mx.sym.Activation(data=conv2, act_type="tanh")
pool2 = mx.sym.Pooling(data=tanh2, pool_type="max", kernel=(2, 2), stride=(2, 2))
# First fully connected layer, on the flattened feature maps.
flatten = mx.sym.Flatten(data=pool2)
fc1 = mx.sym.FullyConnected(data=flatten, num_hidden=500)
tanh3 = mx.sym.Activation(data=fc1, act_type="tanh")
# Second fully connected layer: one output per digit class.
fc2 = mx.sym.FullyConnected(data=tanh3, num_hidden=10)
# Softmax cross-entropy loss over the 10 classes.
lenet = mx.sym.SoftmaxOutput(data=fc2, name='softmax')

# Create a trainable module on the CPU (use context=mx.gpu(0) to train on GPU 0).
lenet_model = mx.mod.Module(symbol=lenet, context=mx.cpu())
# Train with plain SGD; the Speedometer callback reports accuracy and
# throughput every 100 batches.
lenet_model.fit(train_iter,
                eval_data=val_iter,
                optimizer='sgd',
                optimizer_params={'learning_rate': 0.1},
                eval_metric='acc',
                batch_end_callback=mx.callback.Speedometer(batch_size, 100),
                num_epoch=10)
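
The title also promises saving the model and the training log. A minimal sketch of both, assuming an arbitrary checkpoint prefix 'lenet' and log file name 'train.log' (neither appears in the run above): send the root logger's records to a file as well, and repeat the fit() call with one extra epoch_end_callback so the parameters are written to disk after every epoch.

# Mirror the console log into a file; the name 'train.log' is an arbitrary choice.
logging.getLogger().addHandler(logging.FileHandler('train.log'))

# mx.callback.do_checkpoint saves 'lenet-symbol.json' once and
# 'lenet-00NN.params' at the end of each epoch NN.
checkpoint = mx.callback.do_checkpoint('lenet')
lenet_model.fit(train_iter,
                eval_data=val_iter,
                optimizer='sgd',
                optimizer_params={'learning_rate': 0.1},
                eval_metric='acc',
                batch_end_callback=mx.callback.Speedometer(batch_size, 100),
                epoch_end_callback=checkpoint,
                num_epoch=10)

With DEBUG logging enabled, the run prints progress like the following: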

INFO:root:Epoch[0] Batch [100] Speed: 1504.57 samples/sec accuracy=0.113564
INFO:root:Epoch[0] Batch [200] Speed: 1516.40 samples/sec accuracy=0.118100
INFO:root:Epoch[0] Batch [300] Speed: 1515.71 samples/sec accuracy=0.116600
INFO:root:Epoch[0] Batch [400] Speed: 1505.61 samples/sec accuracy=0.110200
INFO:root:Epoch[0] Batch [500] Speed: 1406.21 samples/sec accuracy=0.107600
INFO:root:Epoch[0] Train-accuracy=0.108081
INFO:root:Epoch[0] Time cost=40.572
INFO:root:Epoch[0] Validation-accuracy=0.102800
INFO:root:Epoch[1] Batch [100] Speed: 1451.87 samples/sec accuracy=0.115050
INFO:root:Epoch[1] Batch [200] Speed: 1476.86 samples/sec accuracy=0.179600
INFO:root:Epoch[1] Batch [300] Speed: 1409.67 samples/sec accuracy=0.697100
INFO:root:Epoch[1] Batch [400] Speed: 1379.52 samples/sec accuracy=0.871900
INFO:root:Epoch[1] Batch [500] Speed: 1374.88 samples/sec accuracy=0.901000
INFO:root:Epoch[1] Train-accuracy=0.925556
INFO:root:Epoch[1] Time cost=42.527
INFO:root:Epoch[1] Validation-accuracy=0.936900
INFO:root:Epoch[2] Batch [100] Speed: 1376.59 samples/sec accuracy=0.936436
INFO:root:Epoch[2] Batch [200] Speed: 1379.29 samples/sec accuracy=0.948100
INFO:root:Epoch[2] Batch [300] Speed: 1375.07 samples/sec accuracy=0.953400
INFO:root:Epoch[2] Batch [400] Speed: 1369.65 samples/sec accuracy=0.958600
INFO:root:Epoch[2] Batch [500] Speed: 1371.79 samples/sec accuracy=0.960900
INFO:root:Epoch[2] Train-accuracy=0.966667
INFO:root:Epoch[2] Time cost=43.660
INFO:root:Epoch[2] Validation-accuracy=0.972900
INFO:root:Epoch[3] Batch [100] Speed: 1230.74 samples/sec accuracy=0.969505
INFO:root:Epoch[3] Batch [200] Speed: 1335.27 samples/sec accuracy=0.970800
INFO:root:Epoch[3] Batch [300] Speed: 1264.43 samples/sec accuracy=0.972600
INFO:root:Epoch[3] Batch [400] Speed: 1242.03 samples/sec accuracy=0.974100
INFO:root:Epoch[3] Batch [500] Speed: 1322.77 samples/sec accuracy=0.974600
INFO:root:Epoch[3] Train-accuracy=0.976465
INFO:root:Epoch[3] Time cost=46.860
INFO:root:Epoch[3] Validation-accuracy=0.980700
INFO:root:Epoch[4] Batch [100] Speed: 1342.42 samples/sec accuracy=0.978020
INFO:root:Epoch[4] Batch [200] Speed: 1339.98 samples/sec accuracy=0.980600
INFO:root:Epoch[4] Batch [300] Speed: 1344.36 samples/sec accuracy=0.981000
INFO:root:Epoch[4] Batch [400] Speed: 1338.13 samples/sec accuracy=0.980000
INFO:root:Epoch[4] Batch [500] Speed: 1343.76 samples/sec accuracy=0.979000
INFO:root:Epoch[4] Train-accuracy=0.983535
INFO:root:Epoch[4] Time cost=44.694
INFO:root:Epoch[4] Validation-accuracy=0.985700
INFO:root:Epoch[5] Batch [100] Speed: 1333.50 samples/sec accuracy=0.981584
INFO:root:Epoch[5] Batch [200] Speed: 1342.07 samples/sec accuracy=0.985400
INFO:root:Epoch[5] Batch [300] Speed: 1339.04 samples/sec accuracy=0.984300
INFO:root:Epoch[5] Batch [400] Speed: 1323.42 samples/sec accuracy=0.983500
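
Once checkpoints exist, the saved network can be restored and evaluated without retraining. A sketch assuming the hypothetical 'lenet' prefix above and the files written after epoch 10:

# Restore the symbol and the weights saved at the end of epoch 10.
sym, arg_params, aux_params = mx.model.load_checkpoint('lenet', 10)

# Rebuild a module for inference and plug the saved parameters back in.
model = mx.mod.Module(symbol=sym, context=mx.cpu())
model.bind(for_training=False,
           data_shapes=val_iter.provide_data,
           label_shapes=val_iter.provide_label)
model.set_params(arg_params, aux_params)

# Accuracy on the held-out 10,000-image MNIST test set.
acc = mx.metric.Accuracy()
model.score(val_iter, acc)
print(acc)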
