Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
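The original diagram isn't reproduced here, but a small stand-in example (with made-up numbers) conveys the same idea: with 12 encoded characters, a batch size of 2 sequences, and 3 steps per sequence, the data splits like this:

    encoded = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]

    # Reshaped into 2 sequences (rows):
    # [[ 0  1  2  3  4  5]
    #  [ 6  7  8  9 10 11]]

    # First 2x3 batch:    Second 2x3 batch:
    # [[0 1 2]            [[ 3  4  5]
    #  [6 7 8]]            [ 9 10 11]]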

Our text is encoded as integers in one long array called encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.

The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains N×M characters, where N is the batch size (the number of sequences) and M is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the number of characters per batch. Once you know the number of batches and the characters per batch, you can get the total number of characters to keep.
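For example (hypothetical numbers), if arr holds 1,000,005 characters and we want batches of 10 sequences × 50 steps:

    characters_per_batch = 10 * 50          # 500
    n_batches = 1000005 // 500              # 2000 full batches
    keep = n_batches * characters_per_batch # 1,000,000 characters; the last 5 are dropped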

After that, we need to split arr into N sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimension sizes of the reshaped array. We know we want N sequences (n_seqs below), so let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size; NumPy will infer the appropriate length for you. After this, you should have an array that is N×(M∗K), where K is the number of batches.
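A quick sketch of the reshape step, again with toy numbers rather than the real text:

    import numpy as np

    arr = np.arange(1000000)       # pretend this is the trimmed, encoded text
    arr = arr.reshape((10, -1))    # 10 sequences; NumPy infers the second dimension
    print(arr.shape)               # (10, 100000), i.e. N x (M*K)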

Now that we have this array, we can iterate through it to get our batches. The idea is that each batch is an N×M window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over by one character. You'll usually see the first input character used as the last target character, so something like this:

y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]

where x is the input batch and y is the target batch.
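To see the shift concretely, here is a hypothetical 1×5 batch:

    import numpy as np

    x = np.array([[2, 7, 1, 8, 3]])
    y = np.zeros_like(x)
    y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
    print(y)    # [[7 1 8 3 2]] -- each target is the next input, wrapping around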

The way I like to create this window is to use range to take steps of size n_steps from 0 up to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
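For instance (made-up numbers), with 150 total steps per sequence and n_steps = 50:

    list(range(0, 150, 50))    # [0, 50, 100] -- the starting column of each batch

Putting all of these pieces together gives the full generator: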

import numpy as np

def get_batches(arr, n_seqs, n_steps):
    '''Create a generator that returns batches of size
       n_seqs x n_steps from arr.

       Arguments
       ---------
       arr: Array you want to make batches from
       n_seqs: Batch size, the number of sequences per batch
       n_steps: Number of sequence steps per batch
    '''
    # Get the number of characters per batch and number of batches we can make
    characters_per_batch = n_seqs * n_steps
    n_batches = len(arr) // characters_per_batch

    # Keep only enough characters to make full batches
    arr = arr[:n_batches * characters_per_batch]

    # Reshape into n_seqs rows
    arr = arr.reshape((n_seqs, -1))

    for n in range(0, arr.shape[1], n_steps):
        # The features
        x = arr[:, n:n+n_steps]
        # The targets, shifted by one
        y = np.zeros_like(x)
        y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
        yield x, y

batches = get_batches(encoded, 10, 50)
x, y = next(batches)

print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
