A NumPy-Based Neural Network for Handwritten Digit Recognition
The code in this article is taken from 《Python神经网络编程》 (Make Your Own Neural Network) by Tariq Rashid.
The code is organized into three parts; the overall skeleton looks like this:
# neural network class definition
class neuralNetwork:

    # initialise the neural network
    def __init__(self):
        pass

    # train the neural network
    def train(self):
        pass

    # query the neural network
    def query(self):
        pass
This is a solid skeleton, and the details of how the neural network actually works can be fleshed out on top of it.
import numpy as np
import scipy.special
import matplotlib.pyplot as plt


# neural network class definition
class neuralNetwork:

    # initialise the neural network
    def __init__(self, inputNodes, hiddenNodes, outputNodes, learningrate):
        # set number of nodes in each input, hidden, output layer
        self.inodes = inputNodes
        self.hnodes = hiddenNodes
        self.onodes = outputNodes
        # learning rate
        self.lr = learningrate
        # link weight matrices, wih and who
        self.wih = np.random.normal(0.0, pow(self.hnodes, -0.5), (self.hnodes, self.inodes))
        self.who = np.random.normal(0.0, pow(self.onodes, -0.5), (self.onodes, self.hnodes))
        # activation function is the sigmoid function
        self.activation_function = lambda x: scipy.special.expit(x)

    # train the neural network
    def train(self, inputs_list, targets_list):
        # convert inputs_list, targets_list to 2d array
        inputs = np.array(inputs_list, ndmin=2).T
        targets = np.array(targets_list, ndmin=2).T
        # calculate signals into hidden layer
        hidden_inputs = np.dot(self.wih, inputs)
        # calculate the signals emerging from hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)
        # calculate signals into final output layer
        final_inputs = np.dot(self.who, hidden_outputs)
        # calculate the signals emerging from final output layer
        final_outputs = self.activation_function(final_inputs)
        # output layer error is the (target - actual)
        output_errors = targets - final_outputs
        # hidden layer error is the output_errors, split by weights, recombined at hidden nodes
        hidden_errors = np.dot(self.who.T, output_errors)
        # update the weights for the links between the hidden and output layers
        self.who += self.lr * np.dot((output_errors * final_outputs * (1.0 - final_outputs)), np.transpose(hidden_outputs))
        # update the weights for the links between the input and hidden layers
        self.wih += self.lr * np.dot((hidden_errors * hidden_outputs * (1.0 - hidden_outputs)), np.transpose(inputs))

    # query the neural network
    def query(self, inputs_list):
        # convert inputs_list to 2d array
        inputs = np.array(inputs_list, ndmin=2).T
        # calculate signals into hidden layer
        hidden_inputs = np.dot(self.wih, inputs)
        # calculate the signals emerging from hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)
        # calculate signals into final output layer
        final_inputs = np.dot(self.who, hidden_outputs)
        # calculate the signals emerging from final output layer
        final_outputs = self.activation_function(final_inputs)
        return final_outputs
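Before doing any real training, the class can be given a quick sanity check. A minimal sketch (the 3-3-3 layer sizes, 0.3 learning rate and dummy input here are arbitrary throwaway values, not part of the MNIST code): querying a freshly created network should return one sigmoid output per output node, each between 0 and 1.

# throwaway sanity check with arbitrary sizes and a dummy input
n = neuralNetwork(3, 3, 3, 0.3)
print(n.query([1.0, 0.5, -1.5]))    # a 3x1 array of values in (0, 1)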
Using the neural network class defined above:
# number of input, hidden and output nodes
input_nodes = 784
hidden_nodes = 200
output_nodes = 10

# learning rate is 0.1
learning_rate = 0.1

# create instance of neural network
n = neuralNetwork(input_nodes, hidden_nodes, output_nodes, learning_rate)

# load the mnist training data CSV file into a list
training_data_file = open("mnist_dataset/mnist_train.csv", "r")
training_data_list = training_data_file.readlines()
training_data_file.close()

# train the neural network
# epochs is the number of times the training data set is used for training
epochs = 5
for e in range(epochs):
    # go through all records in the training data set
    for record in training_data_list:
        # split the record by the ',' commas
        all_values = record.split(',')
        # scale and shift the inputs
        inputs = (np.asfarray(all_values[1:]) / 255.0 * 0.99) + 0.01
        # create the target output values (all 0.01, except the desired label which is 0.99)
        targets = np.zeros(output_nodes) + 0.01
        # all_values[0] is the target label for this record
        targets[int(all_values[0])] = 0.99
        n.train(inputs, targets)

# load the mnist test data CSV file into a list
test_data_file = open("mnist_dataset/mnist_test.csv", "r")
test_data_list = test_data_file.readlines()
test_data_file.close()

# test the neural network
# scorecard for how well the network performs, initially empty
scorecard = []

# go through all the records in the test data set
for record in test_data_list:
    # split the record by the ',' commas
    all_values = record.split(',')
    # correct answer is the first value
    correct_label = int(all_values[0])
    # scale and shift the inputs
    inputs = (np.asfarray(all_values[1:]) / 255.0 * 0.99) + 0.01
    # query the network
    outputs = n.query(inputs)
    # the index of the highest value corresponds to the label
    label = np.argmax(outputs)
    # append correct or incorrect to list
    if label == correct_label:
        # network's answer matches correct answer, add 1 to scorecard
        scorecard.append(1)
    else:
        # network's answer doesn't match correct answer, add 0 to scorecard
        scorecard.append(0)

# calculate the performance score, the fraction of correct answers
scorecard_array = np.asarray(scorecard)
print("performance = ", scorecard_array.sum() / scorecard_array.size)
The dataset used in the training above is the MNIST handwritten-digit dataset in CSV form (mnist_train.csv and mnist_test.csv): each line is one record, where the first value is the label (0 to 9) and the remaining 784 values are the pixel intensities of a 28x28 image.
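As a quick check that a record has been read correctly, it can be turned back into a picture. A minimal sketch, reusing training_data_list and the matplotlib import from the code above:

# take the first training record and show it as a grey-scale 28x28 image
all_values = training_data_list[0].split(',')
image_array = np.asfarray(all_values[1:]).reshape((28, 28))
plt.imshow(image_array, cmap='Greys')
plt.show()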