Deep Learning in Practice (3): Building a Neural Network for notMNIST with Keras
Previously in this series:
Deep Learning in Practice (1): Building a notMNIST Logistic Regression Model from Scratch
Deep Learning in Practice (2): Building a Deep Neural Network for notMNIST
In the second post of this series we used TensorFlow to build our first deep neural network and tried a number of optimizations to make training more efficient and more accurate. In this post we will rebuild a deep neural network using Keras, a powerful high-level neural network library, running on top of TensorFlow.
What is Keras?
The official description of Keras reads:
“Keras: Deep Learning library for Theano and TensorFlow
You have just found Keras.
Keras is a high-level neural networks API, written in Python and capable of running on top of either TensorFlow or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research.
Use Keras if you need a deep learning library that:
- Allows for easy and fast prototyping (through user friendliness, modularity, and extensibility).
- Supports both convolutional networks and recurrent networks, as well as combinations of the two.
- Runs seamlessly on CPU and GPU.”
There is also a discussion of it on Zhihu: "How would you rate the deep learning framework Keras?"
In January this year it was announced that Keras will be integrated into TensorFlow as its default API: "Keras will be added to Google's TensorFlow as the default API".
To summarize:
1. Keras is a higher-level framework that wraps TensorFlow and Theano, so it offers many ready-made building blocks and makes common models easier to implement.
2. It is less flexible, and because the wrapper is largely a black box, deep customization is harder.
3. The community is very active and the project has TensorFlow's endorsement, so TensorFlow + Keras is a great platform for beginners to get started on.
Building a neural network with Keras
Environment setup
Install TensorFlow: see its installation instructions.
Install Keras:
sudo pip install keras
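Optionally, you can confirm the installation and the Keras version from the command line (a quick sanity check, assuming a standard pip install; older Keras versions also print "Using TensorFlow backend." on import):
python -c "import keras; print(keras.__version__)"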
Importing dependencies
We pull in numpy, tensorflow, the Keras building blocks, and a few helpers from six.moves:
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
np.random.seed(1337)  # for reproducibility
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
from keras.utils import np_utils
from keras.optimizers import RMSprop
from keras.optimizers import Adam
from keras.regularizers import l2
Loading the data
nb_classes = 10
pickle_file = 'notMNIST.pickle'

with open(pickle_file, 'rb') as f:
    save = pickle.load(f)
    X_train = save['train_dataset']
    y_train = save['train_labels']
    X_valid = save['valid_dataset']
    y_valid = save['valid_labels']
    X_test = save['test_dataset']
    y_test = save['test_labels']
    del save  # hint to help gc free up memory

print('Training set', X_train.shape, y_train.shape)
print('Validation set', X_valid.shape, y_valid.shape)
print('Test set', X_test.shape, y_test.shape)

X_train = X_train.reshape(200000, 784)
X_valid = X_valid.reshape(10000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype('float32')
X_valid = X_valid.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_valid /= 255
X_test /= 255
print(X_train.shape[0], 'train samples')
print(X_valid.shape[0], 'valid samples')
print(X_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_valid = np_utils.to_categorical(y_valid, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
The data is loaded from the "notMNIST.pickle" file produced earlier in this series and split into three sets: training, validation, and test.
The raw X_train has shape (200000, 28, 28); we reshape it to (200000, 784) so that each 28x28 image becomes a flat vector usable as training input.
X_train = X_train.reshape(200000, 784)
Convert the data to float32:
X_train = X_train.astype('float32')
Normalize the data so that every value falls within [0, 1]:
X_train /= 255
Convert the labels into a binary class matrix for training, i.e. convert a class vector of integers into a one-hot matrix:
Y_train = np_utils.to_categorical(y_train, nb_classes)
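For intuition, here is a tiny illustration of what to_categorical does (the label values are made up for the example):
from keras.utils import np_utils
# labels 2 and 0 with 3 classes become one-hot rows
print(np_utils.to_categorical([2, 0], 3))
# roughly: [[0, 0, 1],
#           [1, 0, 0]]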
Designing the network model
model = Sequential()
model.add(Dense(512, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))

model.summary()
Sequential is Keras's linear stack-of-layers model. The model above reads as follows (a rough parameter count is sketched right after this list):
1. Input layer: vectors of size 784
2. Hidden layer 1: 512 nodes, ReLU activation, 0.5 dropout
3. Hidden layer 2: 512 nodes, ReLU activation, 0.5 dropout
4. Output layer: 10 nodes, softmax activation
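As a sanity check, the trainable parameter counts that model.summary() reports can be reproduced by hand (a minimal sketch; the variable names below are ours, not Keras's):
# each Dense layer has (inputs * units + units) trainable parameters
hidden1 = 784 * 512 + 512   # 401,920
hidden2 = 512 * 512 + 512   # 262,656
output = 512 * 10 + 10      # 5,130
print(hidden1 + hidden2 + output)  # 669,706 parameters in total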
Training the model
batch_size = 128
nb_epoch = 20
model.compile(loss='categorical_crossentropy',
              # optimizer=SGD(lr=0.01),
              optimizer=Adam(),
              metrics=['accuracy'])

history = model.fit(X_train, Y_train,
                    batch_size=batch_size, nb_epoch=nb_epoch,
                    verbose=1, validation_data=(X_valid, Y_valid))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
A few of the arguments to compile (a sketch of swapping the optimizer back to plain SGD follows this list):
1. loss='categorical_crossentropy': use cross-entropy as the loss function
2. optimizer=Adam(): a more advanced relative of SGD that adapts the learning rate on the fly and incorporates momentum
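If you want to experiment with SGD explicitly, the commented-out line can be swapped back in along these lines (the learning rate, momentum, and decay values below are illustrative, not tuned):
from keras.optimizers import SGD
# plain SGD with momentum; lr / momentum / decay are example values
sgd = SGD(lr=0.01, momentum=0.9, decay=1e-6, nesterov=True)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])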
model.fit trains the model on the training data while monitoring the validation data; it returns a History object holding the per-epoch metrics.
model.evaluate computes the final loss and accuracy on the test data.
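As a small aside, you can read the per-epoch curves back out of the returned History object for plotting or logging (the 'acc' / 'val_acc' key names below follow the Keras 1.x convention):
# per-epoch metrics recorded by model.fit
print(history.history['acc'][-1])      # final training accuracy
print(history.history['val_acc'][-1])  # final validation accuracy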
The training output looks like this:
Epoch 1/20
200000/200000 [==============================] - 19s - loss: 0.6951 - acc: 0.7995 - val_loss: 0.5094 - val_acc: 0.8473
Epoch 2/20
200000/200000 [==============================] - 19s - loss: 0.5073 - acc: 0.8486 - val_loss: 0.4573 - val_acc: 0.8596
Epoch 3/20
200000/200000 [==============================] - 22s - loss: 0.4646 - acc: 0.8601 - val_loss: 0.4253 - val_acc: 0.8668
Epoch 4/20
200000/200000 [==============================] - 22s - loss: 0.4356 - acc: 0.8680 - val_loss: 0.3980 - val_acc: 0.8784
Epoch 5/20
200000/200000 [==============================] - 20s - loss: 0.4159 - acc: 0.8736 - val_loss: 0.3851 - val_acc: 0.8810
Epoch 6/20
200000/200000 [==============================] - 18s - loss: 0.3990 - acc: 0.8788 - val_loss: 0.3735 - val_acc: 0.8850
Epoch 7/20
200000/200000 [==============================] - 19s - loss: 0.3868 - acc: 0.8819 - val_loss: 0.3615 - val_acc: 0.8869
Epoch 8/20
200000/200000 [==============================] - 19s - loss: 0.3768 - acc: 0.8846 - val_loss: 0.3576 - val_acc: 0.8872
Epoch 9/20
200000/200000 [==============================] - 19s - loss: 0.3674 - acc: 0.8875 - val_loss: 0.3506 - val_acc: 0.8929
Epoch 10/20
200000/200000 [==============================] - 18s - loss: 0.3610 - acc: 0.8889 - val_loss: 0.3417 - val_acc: 0.8939
Epoch 11/20
200000/200000 [==============================] - 19s - loss: 0.3542 - acc: 0.8911 - val_loss: 0.3392 - val_acc: 0.8967
Epoch 12/20
200000/200000 [==============================] - 20s - loss: 0.3476 - acc: 0.8928 - val_loss: 0.3350 - val_acc: 0.8966
Epoch 13/20
200000/200000 [==============================] - 19s - loss: 0.3419 - acc: 0.8940 - val_loss: 0.3334 - val_acc: 0.8977
Epoch 14/20
200000/200000 [==============================] - 19s - loss: 0.3381 - acc: 0.8952 - val_loss: 0.3288 - val_acc: 0.9008
Epoch 15/20
200000/200000 [==============================] - 20s - loss: 0.3326 - acc: 0.8971 - val_loss: 0.3286 - val_acc: 0.8994
Epoch 16/20
200000/200000 [==============================] - 20s - loss: 0.3273 - acc: 0.8989 - val_loss: 0.3248 - val_acc: 0.9001
Epoch 17/20
200000/200000 [==============================] - 19s - loss: 0.3237 - acc: 0.8996 - val_loss: 0.3246 - val_acc: 0.8998
Epoch 18/20
200000/200000 [==============================] - 20s - loss: 0.3198 - acc: 0.9003 - val_loss: 0.3180 - val_acc: 0.9028
Epoch 19/20
200000/200000 [==============================] - 18s - loss: 0.3181 - acc: 0.9009 - val_loss: 0.3209 - val_acc: 0.9015
Epoch 20/20
200000/200000 [==============================] - 18s - loss: 0.3131 - acc: 0.9022 - val_loss: 0.3155 - val_acc: 0.9028
Test score: 0.139541323698
Test accuracy: 0.957
We end up with roughly 95.7% test accuracy. Feel free to keep adjusting the network structure to see whether you can push the accuracy higher; one possible direction is sketched below. Have fun.
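For example, you could add L2 weight regularization to the Dense layers (the l2 import above is unused as written). A minimal sketch, assuming Keras 1.x argument names (in Keras 2 the argument is kernel_regularizer instead of W_regularizer), with an untuned penalty of 0.001:
from keras.regularizers import l2
# same two hidden layers, but with an L2 penalty on the weights
model = Sequential()
model.add(Dense(512, input_shape=(784,), W_regularizer=l2(0.001)))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(512, W_regularizer=l2(0.001)))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))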
Finally, the complete code:
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range

np.random.seed(1337)  # for reproducibility

from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
from keras.utils import np_utils
from keras.optimizers import RMSprop
from keras.optimizers import Adam
from keras.regularizers import l2

batch_size = 128
nb_classes = 10
nb_epoch = 20

pickle_file = 'notMNIST.pickle'

with open(pickle_file, 'rb') as f:
    save = pickle.load(f)
    X_train = save['train_dataset']
    y_train = save['train_labels']
    X_valid = save['valid_dataset']
    y_valid = save['valid_labels']
    X_test = save['test_dataset']
    y_test = save['test_labels']
    del save  # hint to help gc free up memory

print('Training set', X_train.shape, y_train.shape)
print('Validation set', X_valid.shape, y_valid.shape)
print('Test set', X_test.shape, y_test.shape)

X_train = X_train.reshape(200000, 784)
X_valid = X_valid.reshape(10000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype('float32')
X_valid = X_valid.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_valid /= 255
X_test /= 255
print(X_train.shape[0], 'train samples')
print(X_valid.shape[0], 'valid samples')
print(X_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_valid = np_utils.to_categorical(y_valid, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)

model = Sequential()
model.add(Dense(512, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))

model.summary()

model.compile(loss='categorical_crossentropy',
              # optimizer=SGD(lr=0.01),
              optimizer=Adam(),
              metrics=['accuracy'])

history = model.fit(X_train, Y_train,
                    batch_size=batch_size, nb_epoch=nb_epoch,
                    verbose=1, validation_data=(X_valid, Y_valid))

score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])