Reinforcement Learning: An Introduction — Reading Notes (3): Finite MDPs
> Contents <
- Agent–Environment Interface
- Goals and Rewards
- Returns and Episodes
- Policies and Value Functions
- Optimal Policies and Optimal Value Functions
> Notes <
Agent–Environment Interface
MDPs are meant to be a straightforward framing of the problem of learning from interaction to achieve a goal. The learner and decision maker is called the agent. The thing it interacts with, comprising everything outside the agent, is called the environment. These interact continually, the agent selecting actions and the environment responding to these actions and presenting new situations to the agent. The environment also gives rise to rewards, special numerical values that the agent seeks to maximize over time through its choice of actions.
More specifically, the agent and environment interact at each of a sequence of discrete time steps, t = 0, 1, 2, .... At each time step t, the agent receives some representation of the environment's state, $S_{t}\in S$, where $S$ is the set of possible states, and on that basis selects an action, $A_{t}\in A(S_{t})$, where $A(S_{t})$ is the set of actions available in state $S_{t}$. One time step later, in part as a consequence of its action, the agent receives a numerical reward, $R_{t+1}\in R \subset \mathbb{R}$, and finds itself in a new state, $S_{t+1}$.
At each time step, the agent implements a mapping from states to probabilities of selecting each possible action. This mapping is called the agent's policy and is denoted $\pi_{t}$, where $\pi_{t}(a|s)$ is the probability that $A_{t}=a$ if $S_{t}=s$. Reinforcement learning methods specify how the agent changes its policy as a result of its experience. The agent's goal, roughly speaking, is to maximize the total amount of reward it receives over the long run.
The actions are the choices made by the agent; the states are the basis for making the choices; and the rewards are the basis for evaluating the choices.

Figure 1. The agent–environment interaction in an MDP
Markov property: if the state signal has the Markov property, then the current state depends only on the previous state and summarizes all relevant information from past experience. The Markov property matters for RL because decisions and values are typically assumed to be functions of the current state only.
The dynamics of the MDP: $p(s',r|s,a)=\Pr\left \{ S_{t}=s',R_{t}=r \mid S_{t-1}=s,A_{t-1}=a \right \}$,
where $\sum_{s'\in S}\sum_{r\in R}p(s',r|s,a)=1$, for all $s\in S$, $a\in A(s)$.
Based on the dynamics of the MDP, we can easily derive the state-transition probabilities $p(s'|s,a)$, the expected rewards for state–action pairs $r(s,a)$, and the expected rewards for state–action–next-state triples $r(s,a,s')$, as listed below.
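For reference, these derived quantities all follow directly from the four-argument dynamics function $p(s',r|s,a)$ (standard identities in the book):
- $p(s'|s,a)=\Pr\left\{S_{t}=s' \mid S_{t-1}=s,A_{t-1}=a\right\}=\sum_{r\in R}p(s',r|s,a)$
- $r(s,a)=\mathbb{E}\left[R_{t} \mid S_{t-1}=s,A_{t-1}=a\right]=\sum_{r\in R}r\sum_{s'\in S}p(s',r|s,a)$
- $r(s,a,s')=\mathbb{E}\left[R_{t} \mid S_{t-1}=s,A_{t-1}=a,S_{t}=s'\right]=\sum_{r\in R}r\,\frac{p(s',r|s,a)}{p(s'|s,a)}$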
Goals and Rewards
The agent's goal exists in the form of a reward signal passed from the environment to the agent. By defining the values of the reward signal, we can communicate with the agent and tell it what we want it to achieve, not how we want it achieved.
The agent's objective is to maximize the total reward. Hence what is maximized is not the immediate reward but the cumulative reward in the long run.
Returns and Episodes
The return is the function of future rewards that the agent seeks to maximize (in expected value). The return takes different forms depending on the nature of the task and on whether or not one wishes to discount future rewards.
Expected return: $G_{t}=R_{t+1}+R_{t+2}+...+R_{T}$, where $T$ is a final time step. Suited to episodic tasks.
Episodic tasks: each episode ends in the terminal state, followed by a reset to a standard starting state or a sample from a standard distribution of starting states.
Continuing tasks: the agent–environment interaction doesn’t break naturally into identifiable episodes, but goes on continually without limit.
Expected discounted return: $G_{t}=R_{t+1}+\gamma R_{t+2}+ \gamma ^{2}R_{t+3}+...=\sum_{k=0}^{\infty}\gamma ^{k}R_{t+k+1}$, where the discount rate $\gamma$ ($0 \leq \gamma \leq 1$) determines the present value of future rewards. Suited to continuing tasks.
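As a quick sanity check (a minimal Python sketch, not from the book), the discounted return over a finished episode can be computed backwards with the recursion $G_{t}=R_{t+1}+\gamma G_{t+1}$:

```python
# Minimal sketch: compute every G_t of a finished episode backwards
# via the recursion G_t = R_{t+1} + gamma * G_{t+1}.
def discounted_returns(rewards, gamma):
    """rewards[k] holds R_{k+1}; returns a list where out[t] = G_t."""
    returns = [0.0] * len(rewards)
    g = 0.0
    for t in reversed(range(len(rewards))):
        g = rewards[t] + gamma * g
        returns[t] = g
    return returns

print(discounted_returns([1.0, 1.0, 1.0], gamma=0.9))
# -> roughly [2.71, 1.9, 1.0], since G_0 = 1 + 0.9*1 + 0.81*1
```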

Policies and Value Functions
Value functions: functions of states (or state-action pairs) that estimate how good it is for the agent to be in a given state (or how good it is to perform a given action in a given state). The notion of "how good" here is defined in terms of future rewards that can be expected, or, to be precise, in terms of the expected return from that state (or state-action pair).
Policy: a mapping from states to probabilities of selecting each possible action. If the agent is following policy $\pi$ at time t, then $\pi(a|s)$ is the probability that $A_{t} = a$ if $S_{t} = s$.
The value function of a state $s$ under a policy $\pi$ (i.e. the expected return when starting in $s$ and following $\pi$ thereafter): $v_{\pi}(s)=\mathbb{E}_{\pi}\left[G_{t} \mid S_{t}=s\right]=\mathbb{E}_{\pi}\left[\sum_{k=0}^{\infty}\gamma^{k}R_{t+k+1} \mid S_{t}=s\right]$, for all $s \in S$.
We call $v_{\pi}$ the state-value function for policy $\pi$.
The value of taking action $a$ in state $s$ under a policy $\pi$ (i.e. the expected return starting from $s$, taking the action $a$, and thereafter following policy $\pi$): $q_{\pi}(s,a)=\mathbb{E}_{\pi}\left[G_{t} \mid S_{t}=s, A_{t}=a\right]=\mathbb{E}_{\pi}\left[\sum_{k=0}^{\infty}\gamma^{k}R_{t+k+1} \mid S_{t}=s, A_{t}=a\right]$.
We call $q_{\pi}$ the action-value function for policy $\pi$.
Bellman equation for $v_{\pi}$: it expresses a relationship between the value of a state and the values of its successor states: $v_{\pi}(s)=\sum_{a}\pi(a|s)\sum_{s',r}p(s',r|s,a)\left[r+\gamma v_{\pi}(s')\right]$, for all $s \in S$.
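To make the recursion concrete, here is a minimal Python sketch that iterates the Bellman equation as an update rule (this is the iterative policy evaluation method of the next chapter) on a tiny hypothetical two-state MDP; the state names, actions, dynamics, and policy below are made up purely for illustration:

```python
# Hypothetical toy MDP. p[(s, a)] lists (next_state, reward, probability)
# triples; pi[s] maps each action to its selection probability.
gamma = 0.9
states = ["s0", "s1"]
p = {
    ("s0", "stay"): [("s0", 0.0, 1.0)],
    ("s0", "go"):   [("s1", 1.0, 1.0)],
    ("s1", "stay"): [("s1", 2.0, 1.0)],
    ("s1", "go"):   [("s0", 0.0, 1.0)],
}
pi = {"s0": {"stay": 0.5, "go": 0.5}, "s1": {"stay": 1.0, "go": 0.0}}

v = {s: 0.0 for s in states}
for _ in range(1000):  # sweep the Bellman update until it settles
    v = {
        s: sum(
            prob_a * prob * (r + gamma * v[s2])
            for a, prob_a in pi[s].items()
            for (s2, r, prob) in p[(s, a)]
        )
        for s in states
    }
print(v)  # the fixed point is v_pi, which satisfies the Bellman equation
```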

Optimal Policies and Optimal Value Functions
Value functions define a partial ordering over policies: $\pi \geq \pi'$ if and only if $v_{\pi}(s) \geq v_{\pi'}(s)$, for all $s \in S$. The optimal value functions assign to each state, or state–action pair, the largest expected return achievable by any policy.
Optimal policy $\pi_{*}$: a policy whose value functions are optimal. There is always at least one policy (there may be many) that is better than or equal to all other policies.
Optimal state-value function: $v_{*}(s)=\max_{\pi}v_{\pi}(s)$, for all $s \in S$.
Optimal action-value function: $q_{*}(s,a)=\max_{\pi}q_{\pi}(s,a)$, for all $s \in S$ and $a \in A(s)$.
Writing $q_{*}$ in terms of $v_{*}$: $q_{*}(s,a)=\mathbb{E}\left[R_{t+1}+\gamma v_{*}(S_{t+1}) \mid S_{t}=s, A_{t}=a\right]$.
Any policy that is greedy with respect to the optimal value functions must be an optimal policy. The Bellman optimality equations are special consistency conditions that the optimal value functions must satisfy and that can, in principle, be solved for the optimal value functions.
Bellman optimality equation for $v_{*}$: $v_{*}(s)=\max_{a}\sum_{s',r}p(s',r|s,a)\left[r+\gamma v_{*}(s')\right]$.

Bellman optimality equation for $q_{*}$: $q_{*}(s,a)=\sum_{s',r}p(s',r|s,a)\left[r+\gamma \max_{a'}q_{*}(s',a')\right]$.
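And a matching sketch (same hypothetical toy MDP as above, again just an illustration) that solves the Bellman optimality equation by value iteration and then reads off a greedy, hence optimal, policy:

```python
# Minimal sketch: value iteration v(s) <- max_a q(s, a) on the same
# hypothetical toy MDP, followed by greedy policy extraction.
gamma = 0.9
states = ["s0", "s1"]
actions = ["stay", "go"]
p = {  # p[(s, a)] -> list of (next_state, reward, probability)
    ("s0", "stay"): [("s0", 0.0, 1.0)],
    ("s0", "go"):   [("s1", 1.0, 1.0)],
    ("s1", "stay"): [("s1", 2.0, 1.0)],
    ("s1", "go"):   [("s0", 0.0, 1.0)],
}

def q_from_v(s, a, v):
    # q(s, a) = sum over (s', r) of p(s', r | s, a) * [r + gamma * v(s')]
    return sum(prob * (r + gamma * v[s2]) for (s2, r, prob) in p[(s, a)])

v_star = {s: 0.0 for s in states}
for _ in range(1000):  # iterate the optimality backup to a fixed point
    v_star = {s: max(q_from_v(s, a, v_star) for a in actions) for s in states}

# Any policy greedy with respect to v_star is an optimal policy.
pi_star = {s: max(actions, key=lambda a: q_from_v(s, a, v_star)) for s in states}
print(v_star, pi_star)  # here: go in s0, then stay in s1 forever
```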

Backup diagrams for $v_{*}$ and $q_{*}$: (diagrams not reproduced in these notes)
