Recurrent neural network (RNN) - PyTorch version
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms

# Device configuration: use the GPU if one is available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Hyper-parameters
sequence_length = 28
input_size = 28
hidden_size = 128
num_layers = 2
num_classes = 10
batch_size = 100
num_epochs = 2
learning_rate = 0.01
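Why sequence_length = 28 and input_size = 28: each 28x28 MNIST image is fed to the recurrent network as a sequence of 28 rows, one row per time step, with the 28 pixel values of that row as the input features. A minimal illustration of that view (a stand-in tensor, not part of the original script):

img = torch.rand(1, 28, 28)   # stand-in for one ToTensor()'d MNIST image: (channels, height, width)
seq = img.squeeze(0)          # drop the channel dimension -> (28, 28) == (sequence_length, input_size)
print(seq.shape)              # torch.Size([28, 28]): 28 time steps, each a 28-dimensional row vector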
# MNIST dataset
# transforms.ToTensor() converts a PIL Image or an ndarray to a tensor and normalizes it to [0, 1]
# (the normalization is simply a division by 255)
train_dataset = torchvision.datasets.MNIST(root='./data/',
                                           train=True,
                                           transform=transforms.ToTensor(),
                                           download=True)

test_dataset = torchvision.datasets.MNIST(root='./data/',
                                          train=False,
                                          transform=transforms.ToTensor())
# Training data loader: serves batches of batch_size in shuffled order
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)

# Test data loader: serves batches of batch_size without shuffling
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=False)
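A quick sanity check of what the loaders produce (not in the original post, just for inspection): ToTensor() yields float images of shape (batch, 1, 28, 28) with values in [0, 1], and the training loop below reshapes them to (batch, 28, 28) before handing them to the LSTM.

images, labels = next(iter(train_loader))
print(images.shape, images.min().item(), images.max().item())  # torch.Size([100, 1, 28, 28]), usually 0.0 and 1.0
print(labels.shape, labels.dtype)                               # torch.Size([100]) torch.int64
print(images.reshape(-1, sequence_length, input_size).shape)    # torch.Size([100, 28, 28])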
# Recurrent neural network (many-to-one)
class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, num_classes):
        super(RNN, self).__init__()  # inherit nn.Module's __init__
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        # nn.LSTM learns this task far better than a plain nn.RNN(), which hardly learns at all
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # Set initial hidden and cell states
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)

        # Forward propagate the LSTM
        # out: tensor of shape (batch_size, seq_length, hidden_size)
        out, _ = self.lstm(x, (h0, c0))

        # Decode the hidden state of the last time step
        out = self.fc(out[:, -1, :])
        return out

model = RNN(input_size, hidden_size, num_layers, num_classes).to(device)
print(model)
# RNN(
#   (lstm): LSTM(28, 128, num_layers=2, batch_first=True)
#   (fc): Linear(in_features=128, out_features=10, bias=True)
# )
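Two side notes on the model above. First, nn.LSTM defaults the initial hidden and cell states to zeros when they are not passed, so forward() could simply call self.lstm(x). Second, a GRU works as a drop-in alternative here; the only difference is that it carries a single hidden state instead of a (hidden, cell) pair. A minimal sketch of such a variant (a hypothetical GRUNet class, not part of the original post):

class GRUNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, num_classes):
        super(GRUNet, self).__init__()
        self.gru = nn.GRU(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # No explicit h0: it defaults to zeros of shape (num_layers, batch, hidden_size)
        out, _ = self.gru(x)
        # Classify from the hidden state of the last time step
        return self.fc(out[:, -1, :])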
# Loss function
criterion = nn.CrossEntropyLoss()
# Optimizer: Adam over the model's parameters with the chosen learning rate
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # Reshape (batch, 1, 28, 28) images into (batch, sequence_length, input_size) sequences
        images = images.reshape(-1, sequence_length, input_size).to(device)
        labels = labels.to(device)

        # Forward pass
        outputs = model(images)
        # Compute the loss
        loss = criterion(outputs, labels)

        # Backward pass and optimization
        # Clear the gradients left over from the previous step
        optimizer.zero_grad()
        # Back-propagate
        loss.backward()
        # Apply the updates to the model's parameters
        optimizer.step()

        # Print progress every 100 steps
        if (i + 1) % 100 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                  .format(epoch + 1, num_epochs, i + 1, total_step, loss.item()))
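The test loop below runs under torch.no_grad(), which disables gradient tracking during evaluation. This particular model has no dropout or batch-norm layers, so the original code is fine as-is, but switching to evaluation mode first is a good habit (an optional addition, not in the original script):

model.eval()   # puts layers such as dropout/batchnorm into inference behaviour; effectively a no-op for this model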
# Test the model
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.reshape(-1, sequence_length, input_size).to(device)
        labels = labels.to(device)
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

    print('Test Accuracy of the model on the 10000 test images: {} %'.format(100 * correct / total))
# Save the model checkpoint
torch.save(model.state_dict(), 'model.ckpt')
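To reuse the saved checkpoint later, build the same RNN, load the state dict back, and run inference. A minimal sketch (not part of the original post):

# Rebuild the model and restore the trained weights
loaded_model = RNN(input_size, hidden_size, num_layers, num_classes).to(device)
loaded_model.load_state_dict(torch.load('model.ckpt', map_location=device))
loaded_model.eval()

# Predict the digits of one test batch with the restored model
with torch.no_grad():
    images, labels = next(iter(test_loader))
    images = images.reshape(-1, sequence_length, input_size).to(device)
    predicted = loaded_model(images).argmax(dim=1)
    print(predicted[:10].cpu().tolist(), labels[:10].tolist())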