Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. As a simple example, our batches would look something like the sketch below.
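The diagram from the original notebook isn't reproduced here, so here is a rough stand-in: a minimal sketch assuming a toy text of 12 encoded characters, 2 sequences per batch, and 3 steps per batch (these numbers are made up purely for illustration).

import numpy as np

encoded_toy = np.arange(12)            # pretend these 12 integers are our encoded text
arr = encoded_toy.reshape((2, -1))     # 2 sequences of 6 characters each
# arr = [[ 0  1  2  3  4  5]
#        [ 6  7  8  9 10 11]]
batch_1 = arr[:, 0:3]                  # [[0 1 2], [6 7 8]]    -> first 2 x 3 window
batch_2 = arr[:, 3:6]                  # [[3 4 5], [9 10 11]]  -> the window slides over by 3 steps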

We have our text encoded as integers in one long array called encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains N × M characters, where N is the batch size (the number of sequences) and M is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the number of characters per batch, N × M. Once you know the number of batches and the number of characters per batch, you can get the total number of characters to keep.
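As a quick worked example with made-up numbers (not the actual corpus length):

n_seqs, n_steps = 10, 50
characters_per_batch = n_seqs * n_steps           # 500 characters per batch
total_chars = 1_000_005                           # hypothetical length of the encoded array
n_batches = total_chars // characters_per_batch   # 2000 full batches
chars_to_keep = n_batches * characters_per_batch  # keep 1,000,000 characters, drop the last 5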
After that, we need to split arr into N sequences. You can do this using arr.reshape(size), where size is a tuple containing the dimension sizes of the reshaped array. We know we want N sequences (n_seqs below), so let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder; NumPy will fill in the appropriate size for you. After this, you should have an array that is N × (M * K), where K is the number of batches.
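Here's a minimal sketch of that reshape with the -1 placeholder, again using toy numbers:

import numpy as np

arr = np.arange(20)          # pretend these are 20 encoded characters
arr = arr.reshape((4, -1))   # 4 sequences; NumPy infers the second dimension
print(arr.shape)             # (4, 5)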
Now that we have this array, we can iterate through it to get our batches. The idea is that each batch is an N × M window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over by one character. You'll usually see the first input character used as the last target character, so something like this:
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
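Here is a tiny sketch of what that shift produces, using made-up values for x:

import numpy as np

x = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8]])            # a toy 2 x 4 input batch
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
print(y)
# [[2 3 4 1]
#  [6 7 8 5]]   <- each row shifted left by one, first character wrapped to the end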
The way I like to do this window is to use range to take steps of size n_steps from 0 to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
import numpy as np

def get_batches(arr, n_seqs, n_steps):
    '''Create a generator that returns batches of size
       n_seqs x n_steps from arr.

       Arguments
       ---------
       arr: Array you want to make batches from
       n_seqs: Batch size, the number of sequences per batch
       n_steps: Number of sequence steps per batch
    '''
    # Get the number of characters per batch and the number of batches we can make
    characters_per_batch = n_seqs * n_steps
    n_batches = len(arr) // characters_per_batch

    # Keep only enough characters to make full batches
    arr = arr[:n_batches * characters_per_batch]

    # Reshape into n_seqs rows
    arr = arr.reshape((n_seqs, -1))

    for n in range(0, arr.shape[1], n_steps):
        # The features
        x = arr[:, n:n+n_steps]
        # The targets, shifted by one
        y = np.zeros_like(x)
        y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
        yield x, y

batches = get_batches(encoded, 10, 50)
x, y = next(batches)

print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])