A NumPy-Based Neural Network for Handwritten Digit Recognition
The code in this post comes from Tariq Rashid's Make Your Own Neural Network (published in Chinese as 《Python神经网络编程》).
The code is organized in three parts; the skeleton looks like this:
# neural network class definition
class neuralNetwork:

    # initialise the neural network
    def __init__(self):
        pass

    # train the neural network
    def train(self):
        pass

    # query the neural network
    def query(self):
        pass
This is a solid skeleton on which the details of how the network actually works can be fleshed out.
import numpy as np
import scipy.special
import matplotlib.pyplot as plt


# neural network class definition
class neuralNetwork:

    # initialise the neural network
    def __init__(self, inputNodes, hiddenNodes, outputNodes, learningrate):
        # set number of nodes in each input, hidden, output layer
        self.inodes = inputNodes
        self.hnodes = hiddenNodes
        self.onodes = outputNodes

        # learning rate
        self.lr = learningrate

        # link weight matrices, wih and who
        self.wih = np.random.normal(0.0, pow(self.hnodes, -0.5), (self.hnodes, self.inodes))
        self.who = np.random.normal(0.0, pow(self.onodes, -0.5), (self.onodes, self.hnodes))

        # activation function is the sigmoid function
        self.activation_function = lambda x: scipy.special.expit(x)
        pass

    # train the neural network
    def train(self, inputs_list, targets_list):
        # convert inputs_list, targets_list to 2d arrays
        inputs = np.array(inputs_list, ndmin=2).T
        targets = np.array(targets_list, ndmin=2).T

        # calculate signals into hidden layer
        hidden_inputs = np.dot(self.wih, inputs)
        # calculate the signals emerging from hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)

        # calculate signals into final output layer
        final_inputs = np.dot(self.who, hidden_outputs)
        # calculate the signals emerging from final output layer
        final_outputs = self.activation_function(final_inputs)

        # output layer error is the (target - actual)
        output_errors = targets - final_outputs
        # hidden layer error is the output_errors, split by weights, recombined at hidden nodes
        hidden_errors = np.dot(self.who.T, output_errors)

        # update the weights for the links between the hidden and output layers
        self.who += self.lr * np.dot((output_errors * final_outputs * (1.0 - final_outputs)), np.transpose(hidden_outputs))
        # update the weights for the links between the input and hidden layers
        self.wih += self.lr * np.dot((hidden_errors * hidden_outputs * (1.0 - hidden_outputs)), np.transpose(inputs))
        pass

    # query the neural network
    def query(self, inputs_list):
        # convert inputs_list to 2d array
        inputs = np.array(inputs_list, ndmin=2).T

        # calculate signals into hidden layer
        hidden_inputs = np.dot(self.wih, inputs)
        # calculate the signals emerging from hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)

        # calculate signals into final output layer
        final_inputs = np.dot(self.who, hidden_outputs)
        # calculate the signals emerging from final output layer
        final_outputs = self.activation_function(final_inputs)

        return final_outputs
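Before wiring the class to MNIST, a quick sanity check helps confirm that the matrix shapes line up. This is a minimal sketch, not part of the book's code: the tiny 3-3-3 network, the 0.3 learning rate and the sample input are arbitrary illustrative values, and an untrained network just returns three numbers somewhere around 0.5.

# sanity check: a tiny 3-3-3 network with an illustrative learning rate of 0.3
tiny = neuralNetwork(3, 3, 3, 0.3)
# query with an arbitrary 3-value input; expect a (3, 1) array of values roughly near 0.5
print(tiny.query([1.0, 0.5, -1.5]))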
Now use the neural network class defined above:
# number of input, hidden and output nodes
input_nodes = 784
hidden_nodes = 200
output_nodes = 10

# learning rate is 0.1
learning_rate = 0.1

# create instance of neural network
n = neuralNetwork(input_nodes, hidden_nodes, output_nodes, learning_rate)

# load the mnist training data CSV file into a list
training_data_file = open("mnist_dataset/mnist_train.csv", "r")
training_data_list = training_data_file.readlines()
training_data_file.close()

# train the neural network
# epochs is the number of times the training data set is used for training
epochs = 5
for e in range(epochs):
    # go through all records in the training data set
    for record in training_data_list:
        # split the record by the ',' commas
        all_values = record.split(',')
        # scale and shift the inputs from 0..255 to 0.01..1.00
        inputs = (np.asfarray(all_values[1:]) / 255.0 * 0.99) + 0.01
        # create the target output values (all 0.01, except the desired label which is 0.99)
        targets = np.zeros(output_nodes) + 0.01
        # all_values[0] is the target label for this record
        targets[int(all_values[0])] = 0.99
        n.train(inputs, targets)
        pass
    pass
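matplotlib is imported above but never used in the training loop; it is handy for eyeballing what the network is actually being fed. A minimal sketch, assuming training_data_list has already been loaded as above (the choice of the first record is arbitrary):

# take one record and turn its 784 pixel values into a 28x28 image
all_values = training_data_list[0].split(',')
image_array = np.asfarray(all_values[1:]).reshape((28, 28))
# the label is the first value of the record
print("label:", all_values[0])
plt.imshow(image_array, cmap='Greys', interpolation='None')
plt.show()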
# load the mnist test data CSV file into a list
test_data_file = open("mnist_dataset/mnist_test.csv", 'r')
test_data_list = test_data_file.readlines()
test_data_file.close()

# test the neural network

# scorecard for how well the network performs, initially empty
scorecard = []

# go through all the records in the test data set
for record in test_data_list:
    # split the record by the ',' commas
    all_values = record.split(',')
    # correct answer is the first value
    correct_label = int(all_values[0])
    # scale and shift the inputs
    inputs = (np.asfarray(all_values[1:]) / 255.0 * 0.99) + 0.01
    # query the network
    outputs = n.query(inputs)
    # the index of the highest value corresponds to the label
    label = np.argmax(outputs)
    # append correct or incorrect to list
    if label == correct_label:
        # network's answer matches correct answer, add 1 to scorecard
        scorecard.append(1)
    else:
        # network's answer doesn't match correct answer, add 0 to scorecard
        scorecard.append(0)
        pass
    pass

# calculate the performance score, the fraction of correct answers
scorecard_array = np.asarray(scorecard)
print("performance = ", scorecard_array.sum() / scorecard_array.size)
The training above uses the MNIST dataset in CSV form: each record is a label followed by 784 pixel values (0-255), and the code expects mnist_dataset/mnist_train.csv and mnist_dataset/mnist_test.csv relative to the working directory.