Recurrent Neural Networks (RNNs) with TensorFlow
Where RNNs apply
Recurrent Neural Networks are well suited to processing and predicting time-series (sequential) data.
Characteristics of RNNs
The nodes of an RNN's hidden layer are connected across time steps: the input to the recurrent cell is the output vector of the input layer concatenated with the hidden-state vector from the previous time step.
Demo: an RNN whose recurrent cell is a single fully connected layer
Input layer dimension: x
Hidden layer dimension: h
Input size of each recurrent cell: x + h
Output size of each recurrent cell: h
The cell's output serves two purposes (see the sketch after this list):
- it becomes part of the input to the cell at the next time step
- it is fed through another fully connected network to produce the output at the current time step
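To make these shapes concrete, here is a minimal NumPy sketch of one time step of such a cell; the concrete dimensions, the tanh activation, and the two-unit output layer are illustrative assumptions, not taken from the demo below.
import numpy as np

x_dim, h_dim, out_dim = 3, 4, 2                        # illustrative sizes
W_cell = np.random.randn(x_dim + h_dim, h_dim) * 0.1   # the single fully connected recurrent cell
b_cell = np.zeros(h_dim)
W_out = np.random.randn(h_dim, out_dim) * 0.1          # the second network producing the current output
b_out = np.zeros(out_dim)

def rnn_step(x_t, h_prev):
    concat = np.concatenate([x_t, h_prev])             # cell input has size x + h
    h_t = np.tanh(concat.dot(W_cell) + b_cell)         # cell output (new state) has size h
    y_t = h_t.dot(W_out) + b_out                       # output at the current time step
    return h_t, y_t

h = np.zeros(h_dim)
for x_t in np.random.randn(5, x_dim):                  # unroll over a length-5 sequence
    h, y = rnn_step(x_t, h)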
Sequence length
In theory an RNN can handle sequences of arbitrary length, but overly long sequences cause vanishing gradients during optimization, so in practice a maximum length is set and longer sequences are truncated.
Paper: On the difficulty of training Recurrent Neural Networks
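As a tiny illustration of this truncate-and-pad policy (plain Python; the helper name is hypothetical):
def pad_or_truncate(seq, max_len, pad=0):
    # cut sequences longer than max_len, pad shorter ones with a pad index
    return seq[:max_len] + [pad] * max(0, max_len - len(seq))

pad_or_truncate([5, 2, 9], 4)        # -> [5, 2, 9, 0]
pad_or_truncate([5, 2, 9, 7, 1], 4)  # -> [5, 2, 9, 7]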
Long short-term memory (LSTM)
Paper: Long Short-Term Memory
Recurrent cell: a special structure with an input gate, a forget gate, and an output gate
Forget gate: decides which parts of the current input, the previous state, and the previous output to forget
Input gate: decides which parts of the current input, the previous state, and the previous output enter the current state
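In TensorFlow 1.x an LSTM cell is a drop-in replacement for the basic cell used in the demo below; a minimal sketch, reusing that demo's rnn_size and embedding_output:
lstm_cell = tf.contrib.rnn.BasicLSTMCell(num_units=rnn_size)  # the gates are built internally
output, state = tf.nn.dynamic_rnn(lstm_cell, embedding_output, dtype=tf.float32)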
RNN variants (a sketch of both follows the list):
- bidirectional RNN
- deep (stacked) RNN
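A minimal TF 1.x sketch of both variants, again reusing the demo's names; the layer count of 3 is an arbitrary assumption:
# Bidirectional RNN: one cell reads the sequence forwards, one backwards
fw_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
bw_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
(out_fw, out_bw), _ = tf.nn.bidirectional_dynamic_rnn(fw_cell, bw_cell,
                                                      embedding_output, dtype=tf.float32)
bi_output = tf.concat([out_fw, out_bw], axis=2)  # concatenate the two directions
# Deep RNN: stack several recurrent layers
deep_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(3)])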
Dropout in RNNs
Apply dropout between stacked recurrent layers; do not apply it between time steps within the same layer (see the sketch below).
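A minimal TF 1.x sketch, assuming the DropoutWrapper API and three stacked layers:
def dropped_cell():
    cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
    # dropout is applied to the cell's output, i.e. between stacked layers;
    # the state passed along within the same layer is untouched
    return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=dropout_keep_prob)

stacked_cell = tf.contrib.rnn.MultiRNNCell([dropped_cell() for _ in range(3)])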
Demo: classifying SMS spam with an RNN
import os
import re
import io
import requests
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from zipfile import ZipFile
from tensorflow.python.framework import ops
ops.reset_default_graph()
1. Start a graph session and set RNN parameters
sess = tf.Session()
epochs = 20 # run 20 epochs; an epoch is one full pass over the training set
batch_size = 250
max_sequence_length = 25
rnn_size = 10 # The RNN will be of size 10 units.
embedding_size = 50 # every word will be embedded in a trainable vector of size 50
min_word_frequency = 10 # We will only consider words that appear at least 10 times in our vocabulary
learning_rate = 0.0005
dropout_keep_prob = tf.placeholder(tf.float32)
2. Download or open the data
Check whether it has already been downloaded and, if so, read the file in.
Otherwise, download the data and save it.
# Download or open data
data_dir = 'data'
data_file = 'text_data.txt'
if not os.path.exists(data_dir):
    os.makedirs(data_dir)
if not os.path.isfile(os.path.join(data_dir, data_file)):
    zip_url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip'
    r = requests.get(zip_url)
    z = ZipFile(io.BytesIO(r.content))
    file = z.read('SMSSpamCollection')
    # Format Data
    text_data = file.decode()
    text_data = text_data.encode('ascii', errors='ignore')
    text_data = text_data.decode().split('\n')
    # Save data to text file
    with open(os.path.join(data_dir, data_file), 'w') as file_conn:
        for text in text_data:
            file_conn.write("{}\n".format(text))  # str.format appends "\n" to each row
else:
    # Open data from text file
    text_data = []
    with open(os.path.join(data_dir, data_file), 'r') as file_conn:
        for row in file_conn:
            text_data.append(row)
    text_data = text_data[:-1]  # drop the trailing empty line
# Each row is "<label>\t<text>"; split into labels and texts
text_data = [x.split('\t') for x in text_data if len(x) >= 1]
[text_data_target, text_data_train] = [list(x) for x in zip(*text_data)]
3. Create a text cleaning function, then clean the data
def clean_text(text_string):
    # \w matches any word character (including underscore); [^\s\w] matches any
    # character that is neither whitespace nor a word character, i.e. punctuation.
    # Digits are stripped as well.
    text_string = re.sub(r'([^\s\w]|_|[0-9])+', '', text_string)
    text_string = " ".join(text_string.split())
    text_string = text_string.lower()
    return text_string
# Clean texts
text_data_train = [clean_text(x) for x in text_data_train]
4. Change texts into numeric vectors
This converts each text into a fixed-length list of word indices. The vectorization and shuffling lines below are a reconstruction (a sketch assuming the TF 1.x tf.contrib.learn VocabularyProcessor API) of the step that must define vocab_processor, text_processed, and shuffled_ix, all of which are used afterwards.
# Build the vocabulary and map texts to index sequences; words rarer than
# min_word_frequency are dropped, sequences are padded/cut to max_sequence_length
vocab_processor = tf.contrib.learn.preprocessing.VocabularyProcessor(max_sequence_length,
                                                                     min_frequency=min_word_frequency)
text_processed = np.array(list(vocab_processor.fit_transform(text_data_train)))
# Map labels to integers: 'ham' -> 1, 'spam' -> 0
text_data_target = np.array([1 if x == 'ham' else 0 for x in text_data_target])
# Shuffle the data
shuffled_ix = np.random.permutation(np.arange(len(text_data_target)))
x_shuffled = text_processed[shuffled_ix]
y_shuffled = text_data_target[shuffled_ix]
# Split train/test set
ix_cutoff = int(len(y_shuffled)*0.80)
x_train, x_test = x_shuffled[:ix_cutoff], x_shuffled[ix_cutoff:]
y_train, y_test = y_shuffled[:ix_cutoff], y_shuffled[ix_cutoff:]
vocab_size = len(vocab_processor.vocabulary_)
print("Vocabulary Size: {:d}".format(vocab_size))
print("80-20 Train Test split: {:d} -- {:d}".format(len(y_train), len(y_test)))
# Create placeholders
x_data = tf.placeholder(tf.int32, [None, max_sequence_length])
y_output = tf.placeholder(tf.int32, [None])
# Create embedding
embedding_mat = tf.Variable(tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0))
embedding_output = tf.nn.embedding_lookup(embedding_mat, x_data)
#embedding_output_expanded = tf.expand_dims(embedding_output, -1)
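# embedding_output has shape [batch_size, max_sequence_length, embedding_size]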
# Define the RNN cell
# In TensorFlow >= 1.0 the RNN cells live in tf.contrib.rnn; earlier versions are untested
if tf.__version__[0] >= '1':
    cell = tf.contrib.rnn.BasicRNNCell(num_units=rnn_size)
else:
    cell = tf.nn.rnn_cell.BasicRNNCell(num_units=rnn_size)
output, state = tf.nn.dynamic_rnn(cell, embedding_output, dtype=tf.float32)
output = tf.nn.dropout(output, dropout_keep_prob)
# Get output of RNN sequence
output = tf.transpose(output, [1, 0, 2])
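# after the transpose, output has shape [max_sequence_length, batch_size, rnn_size];
# gathering index max_sequence_length - 1 takes the last time step as the sequence representation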
last = tf.gather(output, int(output.get_shape()[0]) - 1)
weight = tf.Variable(tf.truncated_normal([rnn_size, 2], stddev=0.1))
bias = tf.Variable(tf.constant(0.1, shape=[2]))
logits_out = tf.matmul(last, weight) + bias
# Loss function
losses = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_out, labels=y_output) # logits=float32, labels=int32
loss = tf.reduce_mean(losses)
accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(logits_out, 1), tf.cast(y_output, tf.int64)), tf.float32))
optimizer = tf.train.RMSPropOptimizer(learning_rate)
train_step = optimizer.minimize(loss)
init = tf.global_variables_initializer()
sess.run(init)
train_loss = []
test_loss = []
train_accuracy = []
test_accuracy = []
# Start training
for epoch in range(epochs):
    # Shuffle training data
    shuffled_ix = np.random.permutation(np.arange(len(x_train)))
    x_train = x_train[shuffled_ix]
    y_train = y_train[shuffled_ix]
    num_batches = int(len(x_train) / batch_size) + 1
    # TODO: calculate the number of batches exactly
    for i in range(num_batches):
        # Select train data
        min_ix = i * batch_size
        max_ix = np.min([len(x_train), ((i + 1) * batch_size)])
        x_train_batch = x_train[min_ix:max_ix]
        y_train_batch = y_train[min_ix:max_ix]
        # Run train step
        train_dict = {x_data: x_train_batch, y_output: y_train_batch, dropout_keep_prob: 0.5}
        sess.run(train_step, feed_dict=train_dict)
    # Run loss and accuracy for training
    temp_train_loss, temp_train_acc = sess.run([loss, accuracy], feed_dict=train_dict)
    train_loss.append(temp_train_loss)
    train_accuracy.append(temp_train_acc)
    # Run Eval Step
    test_dict = {x_data: x_test, y_output: y_test, dropout_keep_prob: 1.0}
    temp_test_loss, temp_test_acc = sess.run([loss, accuracy], feed_dict=test_dict)
    test_loss.append(temp_test_loss)
    test_accuracy.append(temp_test_acc)
    print('Epoch: {}, Test Loss: {:.2}, Test Acc: {:.2}'.format(epoch + 1, temp_test_loss, temp_test_acc))
# Plot loss over time
epoch_seq = np.arange(1, epochs+1)
plt.plot(epoch_seq, train_loss, 'k--', label='Train Set')
plt.plot(epoch_seq, test_loss, 'r-', label='Test Set')
plt.title('Softmax Loss')
plt.xlabel('Epochs')
plt.ylabel('Softmax Loss')
plt.legend(loc='upper left')
plt.show()
# Plot accuracy over time
plt.plot(epoch_seq, train_accuracy, 'k--', label='Train Set')
plt.plot(epoch_seq, test_accuracy, 'r-', label='Test Set')
plt.title('Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='upper left')
plt.show()
Vocabulary Size: 1124
80-20 Train Test split: 4459 -- 1115