Policy Improvement and Policy Iteration
From the last post, we know how to evaluate a policy. But that is not enough: the purpose of policy evaluation is to improve policies, so that we eventually reach the optimal policy. So in this post, we will discuss how to improve a given policy, and how to get from a given policy to the optimal one.
Firstly, when you have an evaluated policy, the Action-Value function can be computed for every state. That is, at a certain state s, we know which action gives the system the largest expected return.
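Concretely, once the State-Value function v_π is known from evaluation, every action value follows from a one-step lookahead. The symbols below (transition probabilities P, rewards R, discount γ) are my own shorthand for the usual MDP quantities, not taken from the original figures:

```latex
q_\pi(s, a) = \sum_{s'} P(s' \mid s, a)\,\bigl[ R(s, a, s') + \gamma\, v_\pi(s') \bigr]
```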

In the puzzle wandering example, we evaluated the random policy. The resulting State-Value function can now be used for policy improvement. After one step of calculation, we can already conclude that at the circled location, moving left is better than randomly picking a direction, because the left side offers more reward.

After three steps, we have a much better intuition about the map, and we can change the random policy into a new, better one.
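As a stand-in for the puzzle map, here is a tiny hypothetical 4-state chain (terminal state on the left, every move costs -1) that shows the same effect: after k = 3 sweeps of evaluation under the random policy, reading the values greedily already points every state toward the goal.

```python
import numpy as np

# Hypothetical 4-state chain: state 0 is terminal, actions are LEFT (0) / RIGHT (1).
# Every move costs -1, mirroring the classic gridworld setup.
n_states, gamma = 4, 1.0
V = np.zeros(n_states)

def next_state(s, a):
    return max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)

for sweep in range(3):  # k = 3 sweeps of evaluation under the random policy
    V_new = V.copy()
    for s in range(1, n_states):  # the terminal state keeps value 0
        V_new[s] = np.mean([-1 + gamma * V[next_state(s, a)] for a in (0, 1)])
    V = V_new

# Greedy read-out: in every non-terminal state, LEFT is now the best action.
greedy = [int(np.argmax([-1 + gamma * V[next_state(s, a)] for a in (0, 1)]))
          for s in range(1, n_states)]
print(V)       # [ 0.   -2.   -2.75 -3.  ] -- values fall off with distance from the goal
print(greedy)  # [0, 0, 0] -- every state picks LEFT, toward the terminal
```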

The way to improve the current policy is to greedily pick an action for every state. It is worth noting that greedily picking actions does not mean the algorithm only considers one step (too greedy to look further ahead). Instead, when k = 3, the evaluated values already encode three steps of lookahead, so the greedy pick selects the best action over those k steps.
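A minimal Python sketch of this greedy step (the array layout of `P` and `R`, and the names themselves, are assumptions for illustration, not from the original post):

```python
import numpy as np

def greedy_improvement(P, R, V, gamma):
    """Return the policy that is greedy with respect to the value function V.

    Assumed layout:
      P[s, a] -- vector of transition probabilities to every next state s'
      R[s, a] -- expected immediate reward for taking action a in state s
      V[s]    -- state values from the most recent evaluation sweeps
    """
    n_states, n_actions = R.shape
    policy = np.zeros(n_states, dtype=int)
    for s in range(n_states):
        # One-step lookahead over values that already encode k sweeps of evaluation.
        q = np.array([R[s, a] + gamma * P[s, a] @ V for a in range(n_actions)])
        policy[s] = int(np.argmax(q))  # pick the action with the largest return
    return policy
```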

The Policy Iteration Algorithm keeps alternating the evaluation and improvement steps until the policy becomes stable.


This process means that the Action-Value function of the improved policy picks the best return obtainable from a single action:
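In symbols (standard policy-improvement notation; π is the old policy, π' the improved one):

```latex
\pi'(s) = \arg\max_{a} q_\pi(s, a),
\qquad
q_\pi\bigl(s, \pi'(s)\bigr) = \max_{a} q_\pi(s, a) \ge q_\pi\bigl(s, \pi(s)\bigr) = v_\pi(s)
```

The inequality is the policy improvement theorem: acting greedily for one step and following π afterwards can never do worse than π itself.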

The algorithm is:
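A minimal runnable sketch of the loop, reusing the `greedy_improvement` helper from above and the same hypothetical `P`/`R` layout (it assumes γ < 1, or a proper terminal state, so that evaluation converges):

```python
import numpy as np

def policy_evaluation(P, R, policy, gamma, theta=1e-8):
    """Iterative policy evaluation: sweep the states until values stop changing."""
    n_states = R.shape[0]
    V = np.zeros(n_states)
    while True:
        delta = 0.0
        for s in range(n_states):
            a = policy[s]
            v_new = R[s, a] + gamma * P[s, a] @ V  # backup for a deterministic policy
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < theta:
            return V

def policy_iteration(P, R, gamma):
    """Alternate evaluation and greedy improvement until the policy is stable."""
    n_states, n_actions = R.shape
    policy = np.zeros(n_states, dtype=int)      # start from an arbitrary policy
    while True:
        V = policy_evaluation(P, R, policy, gamma)
        new_policy = greedy_improvement(P, R, V, gamma)
        if np.array_equal(new_policy, policy):  # stable: no state changed its action
            return policy, V
        policy = new_policy
```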
