Previous posts in this series:

Deep Learning in Practice (1): Building a notMNIST Logistic Regression Model from Scratch

Deep Learning in Practice (2): Building a Deep Neural Network for notMNIST

In the second post of this series, we used TensorFlow to build our first deep neural network and tried a number of optimizations to make training more efficient and more accurate. In this post we will rebuild that deep neural network with Keras, a powerful high-level neural network library, running on top of TensorFlow.

What is Keras?

The official description of Keras:

Keras: Deep Learning library for Theano and TensorFlow

You have just found Keras.

Keras is a high-level neural networks API, written in Python and capable of running on top of either TensorFlow or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research.

Use Keras if you need a deep learning library that:

  • Allows for easy and fast prototyping (through user friendliness, modularity, and extensibility).
  • Supports both convolutional networks and recurrent networks, as well as combinations of the two.
  • Runs seamlessly on CPU and GPU.

There is also a discussion of it on Zhihu: "How would you evaluate the deep learning framework Keras?"

In January this year, Keras was added to TensorFlow as its default API: "Keras will be added to Google TensorFlow as the default API".

To summarize:

1. Keras is a higher-level wrapper built on top of TensorFlow and Theano, so it offers many ready-made components and makes models easier to implement.

2. It is less flexible, and because the wrapper is a black box to the outside, customization is harder.

3. The community is very active and Keras has TensorFlow's endorsement, so TensorFlow + Keras is a very good platform for beginners to get started on.

Building a Neural Network with Keras

Environment setup

Install TensorFlow: see the official installation instructions.

Install Keras:

sudo pip install keras
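To confirm that the installation picked up the TensorFlow backend, here is a quick sanity check (the version numbers printed will of course depend on your environment):

# Minimal sanity check: both libraries import, and Keras reports its backend.
import tensorflow as tf
import keras

print('TensorFlow:', tf.__version__)
print('Keras:', keras.__version__)
print('Backend:', keras.backend.backend())  # expected to print 'tensorflow'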

Importing dependencies

We import numpy, tensorflow, keras, and six.moves:

from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
np.random.seed(1337)  # for reproducibility

from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
from keras.utils import np_utils
from keras.optimizers import RMSprop
from keras.optimizers import Adam
from keras.regularizers import l2

Loading the data

nb_classes = 10

pickle_file = 'notMNIST.pickle'

with open(pickle_file, 'rb') as f:
    save = pickle.load(f)
    X_train = save['train_dataset']
    y_train = save['train_labels']
    X_valid = save['valid_dataset']
    y_valid = save['valid_labels']
    X_test = save['test_dataset']
    y_test = save['test_labels']
    del save  # hint to help gc free up memory
    print('Training set', X_train.shape, y_train.shape)
    print('Validation set', X_valid.shape, y_valid.shape)
    print('Test set', X_test.shape, y_test.shape)

X_train = X_train.reshape(200000, 784)
X_valid = X_valid.reshape(10000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype('float32')
X_valid = X_valid.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_valid /= 255
X_test /= 255
print(X_train.shape[0], 'train samples')
print(X_valid.shape[0], 'valid samples')
print(X_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_valid = np_utils.to_categorical(y_valid, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)

The data is read from the "notMNIST.pickle" file produced in the earlier posts of this series and split into three sets: training, validation, and test.

The raw X_train array has shape (200000, 28, 28); we reshape it to (200000, 784) so it can be used as training input.

X_train = X_train.reshape(200000, 784)

Convert the data to float32:

X_train = X_train.astype('float32')

Normalize the data so that all values fall within [0, 1]:

X_train /= 255

Convert the labels into a binary class matrix (one-hot encoding) for the later training step. (Converts a class vector of integers to a binary class matrix.)

Y_train = np_utils.to_categorical(y_train, nb_classes)
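To make the one-hot encoding concrete, here is a tiny illustration (the three label values are made up purely for the example):

# Illustration: to_categorical turns integer class labels into one-hot rows.
import numpy as np
from keras.utils import np_utils

labels = np.array([0, 2, 9])                # made-up class indices
print(np_utils.to_categorical(labels, 10))  # a (3, 10) matrix with a single 1 in columns 0, 2 and 9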

Designing the model

model = Sequential()
model.add(Dense(512, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))

model.summary()

Sequential is Keras's linear stack-of-layers model. The model above breaks down as follows (an equivalent, more compact formulation is sketched after the list):

1. Input layer: 784-dimensional input

2. Hidden layer 1: 512 nodes, ReLU activation, dropout of 0.5

3. Hidden layer 2: 512 nodes, ReLU activation, dropout of 0.5

4. Output layer: 10 nodes, softmax activation
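For reference, the same architecture can also be written more compactly by passing the activation directly to Dense. This is just an equivalent way of expressing the model above, not a change to it:

from keras.models import Sequential
from keras.layers.core import Dense, Dropout

# Same network as above, with the activations folded into the Dense layers.
model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dropout(0.5))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
model.summary()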

Training the model

batch_size = 128
nb_epoch = 20
model.compile(loss='categorical_crossentropy',
              # optimizer=SGD(lr=0.01),
              optimizer=Adam(),
              metrics=['accuracy'])

history = model.fit(X_train, Y_train,
                    batch_size=batch_size, nb_epoch=nb_epoch,
                    verbose=1, validation_data=(X_valid, Y_valid))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])

The parameters passed to compile:

1. loss='categorical_crossentropy': use categorical cross-entropy as the loss function

2. optimizer=Adam(): a more advanced variant of SGD that adapts the learning rate during training and incorporates momentum

model.fit trains the model on the training data, using the validation data to monitor progress after each epoch.

model.evaluate computes the final loss and accuracy on the test data.
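The History object returned by model.fit also records the per-epoch metrics, which makes it easy to plot learning curves. A minimal sketch, assuming matplotlib is installed and that this Keras version uses the 'acc'/'val_acc' keys that appear in the training log below:

# Sketch: plot training vs. validation accuracy from the History object.
import matplotlib.pyplot as plt

plt.plot(history.history['acc'], label='train acc')
plt.plot(history.history['val_acc'], label='valid acc')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()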

The output during training looks like this:

Epoch 1/20
200000/200000 [==============================] - 19s - loss: 0.6951 - acc: 0.7995 - val_loss: 0.5094 - val_acc: 0.8473
Epoch 2/20
200000/200000 [==============================] - 19s - loss: 0.5073 - acc: 0.8486 - val_loss: 0.4573 - val_acc: 0.8596
Epoch 3/20
200000/200000 [==============================] - 22s - loss: 0.4646 - acc: 0.8601 - val_loss: 0.4253 - val_acc: 0.8668
Epoch 4/20
200000/200000 [==============================] - 22s - loss: 0.4356 - acc: 0.8680 - val_loss: 0.3980 - val_acc: 0.8784
Epoch 5/20
200000/200000 [==============================] - 20s - loss: 0.4159 - acc: 0.8736 - val_loss: 0.3851 - val_acc: 0.8810
Epoch 6/20
200000/200000 [==============================] - 18s - loss: 0.3990 - acc: 0.8788 - val_loss: 0.3735 - val_acc: 0.8850
Epoch 7/20
200000/200000 [==============================] - 19s - loss: 0.3868 - acc: 0.8819 - val_loss: 0.3615 - val_acc: 0.8869
Epoch 8/20
200000/200000 [==============================] - 19s - loss: 0.3768 - acc: 0.8846 - val_loss: 0.3576 - val_acc: 0.8872
Epoch 9/20
200000/200000 [==============================] - 19s - loss: 0.3674 - acc: 0.8875 - val_loss: 0.3506 - val_acc: 0.8929
Epoch 10/20
200000/200000 [==============================] - 18s - loss: 0.3610 - acc: 0.8889 - val_loss: 0.3417 - val_acc: 0.8939
Epoch 11/20
200000/200000 [==============================] - 19s - loss: 0.3542 - acc: 0.8911 - val_loss: 0.3392 - val_acc: 0.8967
Epoch 12/20
200000/200000 [==============================] - 20s - loss: 0.3476 - acc: 0.8928 - val_loss: 0.3350 - val_acc: 0.8966
Epoch 13/20
200000/200000 [==============================] - 19s - loss: 0.3419 - acc: 0.8940 - val_loss: 0.3334 - val_acc: 0.8977
Epoch 14/20
200000/200000 [==============================] - 19s - loss: 0.3381 - acc: 0.8952 - val_loss: 0.3288 - val_acc: 0.9008
Epoch 15/20
200000/200000 [==============================] - 20s - loss: 0.3326 - acc: 0.8971 - val_loss: 0.3286 - val_acc: 0.8994
Epoch 16/20
200000/200000 [==============================] - 20s - loss: 0.3273 - acc: 0.8989 - val_loss: 0.3248 - val_acc: 0.9001
Epoch 17/20
200000/200000 [==============================] - 19s - loss: 0.3237 - acc: 0.8996 - val_loss: 0.3246 - val_acc: 0.8998
Epoch 18/20
200000/200000 [==============================] - 20s - loss: 0.3198 - acc: 0.9003 - val_loss: 0.3180 - val_acc: 0.9028
Epoch 19/20
200000/200000 [==============================] - 18s - loss: 0.3181 - acc: 0.9009 - val_loss: 0.3209 - val_acc: 0.9015
Epoch 20/20
200000/200000 [==============================] - 18s - loss: 0.3131 - acc: 0.9022 - val_loss: 0.3155 - val_acc: 0.9028
Test score: 0.139541323698
Test accuracy: 0.957

The final test accuracy is roughly 95.7%. Feel free to keep adjusting the network architecture to see whether you can push the accuracy higher (one possible starting point is sketched below). Have fun!
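One variant to experiment with, purely as a sketch (the extra layer size and the 0.3 dropout rate are arbitrary, untested choices; the l2 regularizer imported earlier could also be applied to the Dense layers):

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation

# Hypothetical variant: one extra hidden layer and lighter dropout.
# The layer sizes and the 0.3 rate are arbitrary starting points, not tuned values.
model = Sequential()
model.add(Dense(1024, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dropout(0.3))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.3))
model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dropout(0.3))
model.add(Dense(10))
model.add(Activation('softmax'))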

The complete code, for reference:

from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range

np.random.seed(1337)  # for reproducibility

from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
from keras.utils import np_utils
from keras.optimizers import RMSprop
from keras.optimizers import Adam
from keras.regularizers import l2

batch_size = 128
nb_classes = 10
nb_epoch = 20

pickle_file = 'notMNIST.pickle'

with open(pickle_file, 'rb') as f:
    save = pickle.load(f)
    X_train = save['train_dataset']
    y_train = save['train_labels']
    X_valid = save['valid_dataset']
    y_valid = save['valid_labels']
    X_test = save['test_dataset']
    y_test = save['test_labels']
    del save  # hint to help gc free up memory
    print('Training set', X_train.shape, y_train.shape)
    print('Validation set', X_valid.shape, y_valid.shape)
    print('Test set', X_test.shape, y_test.shape)

X_train = X_train.reshape(200000, 784)
X_valid = X_valid.reshape(10000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype('float32')
X_valid = X_valid.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_valid /= 255
X_test /= 255
print(X_train.shape[0], 'train samples')
print(X_valid.shape[0], 'valid samples')
print(X_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_valid = np_utils.to_categorical(y_valid, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)

model = Sequential()
model.add(Dense(512, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))

model.summary()

model.compile(loss='categorical_crossentropy',
              # optimizer=SGD(lr=0.01),
              optimizer=Adam(),
              metrics=['accuracy'])

history = model.fit(X_train, Y_train,
                    batch_size=batch_size, nb_epoch=nb_epoch,
                    verbose=1, validation_data=(X_valid, Y_valid))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
