[Implementing a Convolutional Neural Network in Python] Defining the Training and Testing Process
Code source: https://github.com/eriklindernoren/ML-From-Scratch
Implementation of the convolutional layer Conv2D (with stride and padding): https://www.cnblogs.com/xiximayou/p/12706576.html
Implementation of the activation functions (sigmoid, softmax, tanh, relu, leakyrelu, elu, selu, softplus): https://www.cnblogs.com/xiximayou/p/12713081.html
Definition of the loss functions (mean squared error, cross-entropy): https://www.cnblogs.com/xiximayou/p/12713198.html
Implementation of the optimizers (SGD, Nesterov, Adagrad, Adadelta, RMSprop, Adam): https://www.cnblogs.com/xiximayou/p/12713594.html
Backward pass of the convolutional layer Conv2D: https://www.cnblogs.com/xiximayou/p/12713930.html
Fully connected layer implementation: https://www.cnblogs.com/xiximayou/p/12720017.html
Batch normalization layer implementation: https://www.cnblogs.com/xiximayou/p/12720211.html
Pooling layer implementation: https://www.cnblogs.com/xiximayou/p/12720324.html
padding2D implementation: https://www.cnblogs.com/xiximayou/p/12720454.html
Flatten layer implementation: https://www.cnblogs.com/xiximayou/p/12720518.html
Upsampling layer UpSampling2D implementation: https://www.cnblogs.com/xiximayou/p/12720558.html
Dropout layer implementation: https://www.cnblogs.com/xiximayou/p/12720589.html
Activation layer implementation: https://www.cnblogs.com/xiximayou/p/12720622.html
First, the complete code:
from __future__ import print_function, division
from terminaltables import AsciiTable
import numpy as np
import progressbar
from mlfromscratch.utils import batch_iterator
from mlfromscratch.utils.misc import bar_widgets


class NeuralNetwork():
    """Neural Network. Deep Learning base model.

    Parameters:
    -----------
    optimizer: class
        The weight optimizer that will be used to tune the weights in order to minimize
        the loss.
    loss: class
        Loss function used to measure the model's performance. SquareLoss or CrossEntropy.
    validation_data: tuple
        A tuple containing validation data and labels (X, y)
    """
    def __init__(self, optimizer, loss, validation_data=None):
        self.optimizer = optimizer
        self.layers = []
        self.errors = {"training": [], "validation": []}
        self.loss_function = loss()
        self.progressbar = progressbar.ProgressBar(widgets=bar_widgets)

        self.val_set = None
        if validation_data:
            X, y = validation_data
            self.val_set = {"X": X, "y": y}

    def set_trainable(self, trainable):
        """ Method which enables freezing of the weights of the network's layers. """
        for layer in self.layers:
            layer.trainable = trainable

    def add(self, layer):
        """ Method which adds a layer to the neural network """
        # If this is not the first layer added then set the input shape
        # to the output shape of the last added layer
        if self.layers:
            layer.set_input_shape(shape=self.layers[-1].output_shape())

        # If the layer has weights that need to be initialized
        if hasattr(layer, 'initialize'):
            layer.initialize(optimizer=self.optimizer)

        # Add layer to the network
        self.layers.append(layer)

    def test_on_batch(self, X, y):
        """ Evaluates the model over a single batch of samples """
        y_pred = self._forward_pass(X, training=False)
        loss = np.mean(self.loss_function.loss(y, y_pred))
        acc = self.loss_function.acc(y, y_pred)

        return loss, acc

    def train_on_batch(self, X, y):
        """ Single gradient update over one batch of samples """
        y_pred = self._forward_pass(X)
        loss = np.mean(self.loss_function.loss(y, y_pred))
        acc = self.loss_function.acc(y, y_pred)
        # Calculate the gradient of the loss function wrt y_pred
        loss_grad = self.loss_function.gradient(y, y_pred)
        # Backpropagate. Update weights
        self._backward_pass(loss_grad=loss_grad)

        return loss, acc

    def fit(self, X, y, n_epochs, batch_size):
        """ Trains the model for a fixed number of epochs """
        for _ in self.progressbar(range(n_epochs)):

            batch_error = []
            for X_batch, y_batch in batch_iterator(X, y, batch_size=batch_size):
                loss, _ = self.train_on_batch(X_batch, y_batch)
                batch_error.append(loss)

            self.errors["training"].append(np.mean(batch_error))

            if self.val_set is not None:
                val_loss, _ = self.test_on_batch(self.val_set["X"], self.val_set["y"])
                self.errors["validation"].append(val_loss)

        return self.errors["training"], self.errors["validation"]

    def _forward_pass(self, X, training=True):
        """ Calculate the output of the NN """
        layer_output = X
        for layer in self.layers:
            layer_output = layer.forward_pass(layer_output, training)

        return layer_output

    def _backward_pass(self, loss_grad):
        """ Propagate the gradient 'backwards' and update the weights in each layer """
        for layer in reversed(self.layers):
            loss_grad = layer.backward_pass(loss_grad)

    def summary(self, name="Model Summary"):
        # Print model name
        print(AsciiTable([[name]]).table)
        # Network input shape (first layer's input shape)
        print("Input Shape: %s" % str(self.layers[0].input_shape))
        # Iterate through network and get each layer's configuration
        table_data = [["Layer Type", "Parameters", "Output Shape"]]
        tot_params = 0
        for layer in self.layers:
            layer_name = layer.layer_name()
            params = layer.parameters()
            out_shape = layer.output_shape()
            table_data.append([layer_name, str(params), str(out_shape)])
            tot_params += params
        # Print network configuration table
        print(AsciiTable(table_data).table)
        print("Total Parameters: %d\n" % tot_params)

    def predict(self, X):
        """ Use the trained model to predict labels of X """
        return self._forward_pass(X, training=False)
Now let's go through the functions one by one:
1. __init__: this sets up the optimizer, the layers list, the errors dict, the loss function loss_function, and a progress bar for displaying training progress. The bar_widgets imported from mlfromscratch.utils.misc look like this:
bar_widgets = [
'Training: ', progressbar.Percentage(), ' ', progressbar.Bar(marker="-", left="[", right="]"),
' ', progressbar.ETA()
]
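A quick demo of how these widgets behave (a minimal sketch, assuming the progressbar2 package that the repo depends on); fit() wraps range(n_epochs) the same way, so the bar advances once per epoch:

import time
import progressbar

bar_widgets = [
    'Training: ', progressbar.Percentage(), ' ', progressbar.Bar(marker="-", left="[", right="]"),
    ' ', progressbar.ETA()
]
bar = progressbar.ProgressBar(widgets=bar_widgets)
for _ in bar(range(10)):   # same pattern as self.progressbar(range(n_epochs)) in fit()
    time.sleep(0.1)        # stand-in for one epoch of work
# Renders a single updating line such as:
# Training:  50% [----------          ] ETA:   0:00:01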
2. set_trainable(): toggles the trainable flag on every layer, i.e. freezes or unfreezes the parameters of the whole network.
3. add(): appends one module to the network, e.g. a convolutional layer, a pooling layer, or an activation layer, wiring its input shape to the previous layer's output shape and initializing its weights with the shared optimizer; a construction sketch follows.
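As an illustration, here is how a small ConvNet could be assembled (a sketch assuming the repo's module layout and the layer/optimizer classes from the posts linked above; only the first layer needs an explicit input_shape, since add() propagates shapes forward):

from mlfromscratch.deep_learning import NeuralNetwork
from mlfromscratch.deep_learning.layers import Conv2D, Activation, Flatten, Dense
from mlfromscratch.deep_learning.loss_functions import CrossEntropy
from mlfromscratch.deep_learning.optimizers import Adam

model = NeuralNetwork(optimizer=Adam(), loss=CrossEntropy)
# Only the first layer specifies input_shape = (channels, height, width);
# for every later layer, add() calls set_input_shape() with the previous
# layer's output_shape() and initialize() with the shared optimizer.
model.add(Conv2D(n_filters=16, filter_shape=(3, 3), stride=1,
                 input_shape=(1, 8, 8), padding='same'))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(10))
model.add(Activation('softmax'))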
4. test_on_batch(): evaluates the model on one batch; only a forward pass is needed, no backpropagation. Its accuracy comes from the loss class's acc() method, sketched below.
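For a classification loss, acc() amounts to comparing predicted and true class indices; a minimal equivalent of what CrossEntropy.acc computes (assuming one-hot labels and per-class probabilities, as in the loss-function post above):

import numpy as np

def accuracy(y_true, y_pred):
    # y_true: one-hot labels, y_pred: predicted probabilities,
    # both of shape (batch_size, n_classes)
    return np.mean(np.argmax(y_true, axis=1) == np.argmax(y_pred, axis=1))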
5. train_on_batch(): trains on one batch: a forward pass computes the loss and accuracy, then backpropagation updates the parameters; the layer interface this relies on is sketched below.
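This works because every layer obeys the same interface: forward_pass() maps inputs to outputs, and backward_pass() receives dL/d(output), updates its own weights, and returns dL/d(input) for the layer before it. A hypothetical no-op layer that satisfies the contract:

class IdentityLayer:
    """ Hypothetical minimal layer illustrating the interface that
    _forward_pass() and _backward_pass() rely on. """
    trainable = True

    def set_input_shape(self, shape):
        self.input_shape = shape

    def output_shape(self):
        return self.input_shape

    def parameters(self):
        return 0  # no weights

    def forward_pass(self, X, training=True):
        return X  # a real layer transforms X here

    def backward_pass(self, accum_grad):
        # A real layer would also compute its weight gradients here and
        # let its optimizer apply the update.
        return accum_grad  # dL/d(input), handed to the previous layer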
6. fit(): feeds in the data for training (and validation, if a validation set was supplied); n_epochs and batch_size must be specified. Batches are produced by batch_iterator(), defined in data_manipulation.py under mlfromscratch.utils:
def batch_iterator(X, y=None, batch_size=64):
    """ Simple batch generator """
    n_samples = X.shape[0]
    for i in np.arange(0, n_samples, batch_size):
        begin, end = i, min(i + batch_size, n_samples)
        if y is not None:
            yield X[begin:end], y[begin:end]
        else:
            yield X[begin:end]
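A quick sanity check of the generator: 5 samples with batch_size=2 yield batches of 2, 2 and 1 samples, i.e. the last batch may be smaller than the rest:

import numpy as np

X = np.arange(10).reshape(5, 2)
y = np.arange(5)
for X_batch, y_batch in batch_iterator(X, y, batch_size=2):
    print(X_batch.shape, y_batch.shape)
# (2, 2) (2,)
# (2, 2) (2,)
# (1, 2) (1,)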
7. _forward_pass(): the forward pass through all layers.
8. _backward_pass(): the backward pass through all layers.
9. summary(): prints each layer's type, number of parameters, and output shape, plus the total parameter count.
10. predict(): returns the model's predictions for X; an end-to-end usage sketch follows.
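Putting it all together, here is a sketch of a complete run on sklearn's 8x8 digits, using a small fully connected model to keep the snippet short (the module paths and the one-hot encoding are assumptions based on the repo layout; a ConvNet would instead reshape X to (n_samples, 1, 8, 8) and start from Conv2D as in the construction sketch above):

import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split

from mlfromscratch.deep_learning import NeuralNetwork
from mlfromscratch.deep_learning.layers import Dense, Activation
from mlfromscratch.deep_learning.loss_functions import CrossEntropy
from mlfromscratch.deep_learning.optimizers import Adam

data = datasets.load_digits()
X = data.data                      # (1797, 64), flat 8x8 images
y = np.eye(10)[data.target]        # one-hot labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4)

model = NeuralNetwork(optimizer=Adam(), loss=CrossEntropy,
                      validation_data=(X_test, y_test))
model.add(Dense(64, input_shape=(64,)))
model.add(Activation('relu'))
model.add(Dense(10))
model.add(Activation('softmax'))

model.summary(name="MLP")          # layer table plus total parameter count
train_err, val_err = model.fit(X_train, y_train, n_epochs=50, batch_size=256)
loss, acc = model.test_on_batch(X_test, y_test)
print("Test accuracy: %.4f" % acc)
y_pred = np.argmax(model.predict(X_test), axis=1)   # predicted class indices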
It is not hard to see that this code borrows design ideas from TensorFlow: add, fit, train_on_batch, summary, and predict all mirror the Keras-style model API.