Temporal-Difference Learning for Prediction
In Monte Carlo learning, the value function is estimated with the incremental update:

$$V(S_t) \leftarrow V(S_t) + \alpha \left[ G_t - V(S_t) \right]$$
$G_t$ is the return of the episode from time $t$ onward, which can be calculated by:

$$G_t = R_{t+1} + \gamma R_{t+2} + \cdots + \gamma^{T-t-1} R_T$$
Recall that $G_t$ can only be calculated once an episode has finished. This reveals a disadvantage of Monte Carlo learning: we have to wait until the end of the episode before making any update.
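To make this concrete, here is a minimal Python sketch (my own illustration, not from the original text) that computes $G_t$ for every step of a finished episode with one backward sweep; note that it cannot run until all rewards have been collected:

```python
def episode_returns(rewards, gamma=0.99):
    """Compute G_t for each step t of a *finished* episode.

    Uses the recursion G_t = R_{t+1} + gamma * G_{t+1}, so nothing
    can be computed before the final reward R_T is known.
    """
    G = 0.0
    returns = []
    for r in reversed(rewards):
        G = r + gamma * G
        returns.append(G)
    returns.reverse()
    return returns
```

For example, `episode_returns([0.0, 0.0, 1.0], gamma=0.9)` gives `[0.81, 0.9, 1.0]`.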
The TD(0) algorithm replaces $G_t$ in that equation with the immediate reward plus the discounted value estimate of the next state:

$$V(S_t) \leftarrow V(S_t) + \alpha \left[ R_{t+1} + \gamma V(S_{t+1}) - V(S_t) \right]$$
The algorithm applies this update at time $t+1$, when everything in the equation is determined: we wait until the agent reaches the next state, so that it has received the immediate reward $R_{t+1}$ and observed which state the environment transitioned to.
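A minimal sketch of this one-step update, assuming a tabular value function stored in a dict (the function and variable names here are my own):

```python
def td0_update(V, s, r_next, s_next, alpha=0.1, gamma=0.99):
    """Apply one TD(0) update after observing the transition (s, r_next, s_next)."""
    td_target = r_next + gamma * V[s_next]  # bootstraps on the estimate of the next state
    td_error = td_target - V[s]
    V[s] += alpha * td_error
    return td_error
```

Unlike the Monte Carlo update, this can run immediately after every single step.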
Compare this with the state-value function as written in Dynamic Programming, where the whole environment model is known:

$$v_\pi(s) = \mathbb{E}_\pi[G_t \mid S_t = s] \tag{6.3}$$
$$v_\pi(s) = \mathbb{E}_\pi[R_{t+1} + \gamma v_\pi(S_{t+1}) \mid S_t = s] \tag{6.4}$$
The TD algorithm is quite like the Bellman equation (6.4), but it does not take the expectation. Instead, it uses the experience collected so far to estimate how much reward the agent is going to get from this state onward. The whole algorithm can be summarized as:

Tabular TD(0) for estimating $v_\pi$:
    Input: the policy $\pi$ to be evaluated
    Initialize $V(s)$ arbitrarily (e.g., $V(s) = 0$ for all states)
    Repeat (for each episode):
        Initialize $S$
        Repeat (for each step of the episode):
            $A \leftarrow$ action given by $\pi$ for $S$
            Take action $A$, observe $R$, $S'$
            $V(S) \leftarrow V(S) + \alpha \left[ R + \gamma V(S') - V(S) \right]$
            $S \leftarrow S'$
        until $S$ is terminal
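Putting the pieces together, here is a sketch of tabular TD(0) prediction on a small random-walk chain; the environment and all names are illustrative assumptions, not part of the original text:

```python
import random

def td0_prediction(num_episodes=1000, alpha=0.1, gamma=1.0):
    """Estimate V for a 7-state random walk; states 0 and 6 are terminal."""
    V = {s: 0.0 for s in range(7)}               # V stays 0 at the terminal states
    for _ in range(num_episodes):
        s = 3                                     # every episode starts in the middle
        while s not in (0, 6):                    # repeat for each step of the episode
            s_next = s + random.choice((-1, 1))   # evaluated policy: uniform left/right
            r = 1.0 if s_next == 6 else 0.0       # reward only for exiting on the right
            V[s] += alpha * (r + gamma * V[s_next] - V[s])  # the TD(0) update
            s = s_next
    return V
```

For this classic chain the true values are $v_\pi(s) = s/6$, so the learned $V$ should approach $1/6, \ldots, 5/6$ for states 1 through 5.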
TD target and TD error: the TD target is $R_{t+1} + \gamma V(S_{t+1})$, the quantity that replaces $G_t$; the TD error is $\delta_t = R_{t+1} + \gamma V(S_{t+1}) - V(S_t)$, the gap between the target and the current estimate.
Bias/variance trade-off: the TD target is a biased estimate of $v_\pi(S_t)$, since it leans on the current estimate $V(S_{t+1})$, but it has much lower variance than the full return $G_t$, which depends on many random actions, transitions and rewards (see the sketch below).
Bootstrapping: TD updates one estimate from another estimate, $V(S_{t+1})$, instead of waiting for a final outcome; this is why it can learn online, before the episode ends.
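To make the variance point concrete, here is a toy comparison (entirely my own construction, reusing the random walk above): sample both targets for the start state many times and compare their spread. The MC target should show a much larger standard deviation than the TD target:

```python
import random
import statistics

def sample_targets(V, n=10000, gamma=1.0):
    """Sample the MC target G_0 and the TD target R_1 + gamma*V(S_1) for the start state."""
    mc_targets, td_targets = [], []
    for _ in range(n):
        s, rewards, td = 3, [], None
        while s not in (0, 6):
            s_next = s + random.choice((-1, 1))
            r = 1.0 if s_next == 6 else 0.0
            if td is None:
                td = r + gamma * V[s_next]        # TD target observed at the first step
            rewards.append(r)
            s = s_next
        G = 0.0
        for r in reversed(rewards):
            G = r + gamma * G                     # MC target: the full episode return
        mc_targets.append(G)
        td_targets.append(td)
    return statistics.pstdev(mc_targets), statistics.pstdev(td_targets)
```

Calling `sample_targets(td0_prediction())` uses the values learned above; the first number (the MC spread) comes out clearly larger than the second.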