Temporal-Difference Control: SARSA and Q-Learning
SARSA
The SARSA algorithm also estimates action-value functions rather than the state-value function. The difference between SARSA and Monte Carlo is that SARSA does not need to wait for the actual return until the end of the episode; instead, it learns at every time step from an estimate of the return.
At every step, the agent takes an action A from state S, receives a reward R, and lands in a new state S'. Following the policy π, it then picks the next action A'. So we now have the tuple S, A, R, S', A', and the task is to estimate the Q-function of the pair (S, A).

We borrow the idea used for TD estimation of state-value functions and apply it to action-value estimation, which gives the SARSA update:

Q(S, A) ← Q(S, A) + α [ R + γ Q(S', A') − Q(S, A) ]

where α is the learning rate and γ is the discount factor.
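As a quick numeric illustration (the numbers are made up): with α = 0.1, γ = 0.9, Q(S, A) = 2, R = 1, and Q(S', A') = 3, the TD target is 1 + 0.9 × 3 = 3.7, the TD error is 3.7 − 2 = 1.7, and the updated value is Q(S, A) = 2 + 0.1 × 1.7 = 2.17.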
Here is the pseudocode for SARSA:
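The loop can be sketched in Python roughly as follows. This is a minimal tabular sketch, not a definitive implementation: the Gym-style environment interface (env.reset(), env.step() returning next_state, reward, done) and the epsilon_greedy helper are assumptions made for illustration.

```python
import numpy as np

def epsilon_greedy(Q, state, n_actions, epsilon):
    # The policy π: explore with probability epsilon, otherwise act greedily on Q.
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

def sarsa(env, n_states, n_actions, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        state = env.reset()
        action = epsilon_greedy(Q, state, n_actions, epsilon)
        done = False
        while not done:
            next_state, reward, done = env.step(action)
            # Pick A' with the same policy π that chose A (this is what makes SARSA on-policy).
            next_action = epsilon_greedy(Q, next_state, n_actions, epsilon)
            # TD update: Q(S,A) ← Q(S,A) + α [ R + γ Q(S',A') − Q(S,A) ]
            td_target = reward + gamma * Q[next_state, next_action] * (not done)
            Q[state, action] += alpha * (td_target - Q[state, action])
            state, action = next_state, next_action
    return Q
```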

On-Policy vs Off-Policy
If we look into the learning process, there are actually two steps. In the first step, the agent takes an action A from state S according to some policy, gets the reward R, and the next state S' follows. In the second step, the update uses the Q-function of a next action A', chosen here by the same policy π. Both steps use the same policy π in SARSA, but in general they can be different. The policy used in the first step, the one that actually selects the actions the agent executes, is called the behavior policy. The policy used in the second step, the one whose action A' we evaluate to form the update target, is the target policy; it is the policy we are learning. Q-Learning uses different policies for the two steps, which makes it off-policy, while SARSA is on-policy.
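The difference shows up only in how the TD target is formed. A small self-contained sketch with toy numbers (all values are made up for illustration):

```python
import numpy as np

gamma = 0.9
reward = 1.0
Q_next = np.array([0.2, 3.0, 1.5])  # Q(S', a) for three actions
next_action = 2                     # the action A' the behavior policy actually picked

# SARSA (on-policy): bootstrap from the action A' chosen by the same policy that acts.
sarsa_target = reward + gamma * Q_next[next_action]   # 1 + 0.9 * 1.5 = 2.35

# Q-Learning (off-policy): bootstrap from the greedy (target-policy) action in S',
# even though the behavior policy may act differently at the next step.
q_learning_target = reward + gamma * np.max(Q_next)   # 1 + 0.9 * 3.0 = 3.7
```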
Q-Learning
From state S', the Q-Learning algorithm picks the action that maximizes the Q-function: it stands at S', looks at all possible actions, and bootstraps from the best one, regardless of which action the behavior policy will actually take next. The update is:

Q(S, A) ← Q(S, A) + α [ R + γ max_a Q(S', a) − Q(S, A) ]
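As with the SARSA sketch above, here is a minimal tabular Python sketch of Q-Learning, under the same assumed Gym-style environment interface and reusing the epsilon_greedy helper and numpy import defined earlier; it is an illustration rather than a definitive implementation.

```python
def q_learning(env, n_states, n_actions, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Behavior policy (ε-greedy) picks the action that is actually executed.
            action = epsilon_greedy(Q, state, n_actions, epsilon)
            next_state, reward, done = env.step(action)
            # Target policy (greedy) forms the bootstrap target: max over actions in S'.
            td_target = reward + gamma * np.max(Q[next_state]) * (not done)
            Q[state, action] += alpha * (td_target - Q[state, action])
            state = next_state
    return Q
```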


