By Chapter 3 the RL book has an unbelievable number of exercises.

The first two chapters are fairly easy, so I am just jotting the answers down here on the blog. Later chapters will be posted as PDF updates.

1.1: Self-play will produce different moves, even from the very first step, due to the randomization in action selection. The method should then learn two sets of value functions, one for playing first and one for playing second. In general, I believe self-play would improve the ability to win in the long run, but it converges more slowly than playing against a knowledgeable opponent. Indeed, self-play sets no learning objective with respect to future opponents, which may make exploiting an opponent's weaknesses a harder job.

1.2: Mirror positions should be bound to the same state in the value function: either create four images for each state or perform the rotation at play time. However, if the opponent does not take advantage of symmetry and holds some strange beliefs about particular patterns, the value function should treat each state differently in order to exploit that difference. If trained against a well-playing opponent, though, such amendments are no longer necessary. In any case, the agent has no prior information about its opponent.

1.3: Under the assumption that the agent still explores, greedy may be fine. For example, in the 10-armed bandit problem the traditional solution is indeed greedy. However, less greedy algorithms such as softmax show better performance and faster convergence, since they quickly learn the outcomes of all actions instead of only the seemingly great ones. Of course, if the opponent changes its policy, a greedy agent will be very slow to react, and a different action-selection method has to be considered.
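
To make this concrete, here is a minimal sketch comparing the two selection rules on a stationary 10-armed Gaussian testbed. The settings (ε = 0.1, temperature τ = 0.5, 1000 steps, 200 runs) are my own arbitrary choices, not anything from the book:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_bandit(select, steps=1000, k=10):
    """Run one 10-armed bandit episode with the given action-selection rule."""
    q_true = rng.normal(0, 1, k)      # true action values
    Q = np.zeros(k)                   # sample-average estimates
    N = np.zeros(k)                   # pull counts
    rewards = np.empty(steps)
    for t in range(steps):
        a = select(Q)
        r = rng.normal(q_true[a], 1)  # noisy reward
        N[a] += 1
        Q[a] += (r - Q[a]) / N[a]     # incremental sample average
        rewards[t] = r
    return rewards.mean()

def eps_greedy(Q, eps=0.1):
    """Pick the greedy arm, except explore uniformly with probability eps."""
    if rng.random() < eps:
        return int(rng.integers(len(Q)))
    return int(np.argmax(Q))

def softmax(Q, tau=0.5):
    """Sample an arm in proportion to exp(Q / tau), numerically stabilized."""
    p = np.exp((Q - Q.max()) / tau)
    return int(rng.choice(len(Q), p=p / p.sum()))

print("eps-greedy:", np.mean([run_bandit(eps_greedy) for _ in range(200)]))
print("softmax:   ", np.mean([run_bandit(softmax) for _ in range(200)]))
```

On a stationary testbed the gap is modest; the difference matters more when poor early estimates would otherwise lock a near-greedy method onto a suboptimal arm for a long time.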

1.4: Skipped; the problem statement is unclear. Briefly: if we explore but also back up values through the exploratory moves, the earlier moves get wrongly credited with an outcome that only exploration could trigger, raising or lowering the evaluation of all those actions, even though the action sequence may well be unrepeatable, since it was just a random exploration.

1.5: Skipped; far too open-ended. Many of the improvements are in fact covered later in the book, e.g., combining RL with deep learning.

2.1: 75% (ε governs exploration over the entire action space, not just the non-optimal actions, so an exploratory step can re-select the greedy action).
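
The arithmetic behind the 75%, for two actions and ε = 0.5:

$$ P(\text{greedy}) = (1-\varepsilon) + \frac{\varepsilon}{2} = 0.5 + 0.25 = 0.75 $$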

2.2: Exploration definitely occurred at steps 4 and 5, and could possibly have occurred at every step. At step 4 the sample averages are Q(1) = −1, Q(2) = −0.5, Q(3) = Q(4) = 0, so A4 = 2 is not greedy; at step 5, Q(2) = 1/3 is the unique maximum, so A5 = 3 is not greedy either. Every step could have been exploratory because, as in 2.1, the ε case draws uniformly over all actions, including the greedy one.

2.3: The method with ε = 0.01. Its limiting probability of choosing the optimal action is just above 0.99, higher than that of the ε = 0.1 method. Given enough time steps, the method that explores less will indeed always end up with the higher cumulative reward.
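
Concretely, once the value estimates have converged, an ε-greedy method on the 10-armed testbed selects the optimal action with limiting probability

$$ (1-\varepsilon) + \frac{\varepsilon}{10} = \begin{cases} 0.991 & \varepsilon = 0.01 \\ 0.91 & \varepsilon = 0.1 \end{cases} $$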

2.4: Skipping the detailed math; the weights get tangled up with n. Roughly: the division (sample-average) form puts relatively more weight toward the front, while the multiplicative (constant-α) form emphasizes the recent end; see the unrolled formula below.
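
For the record, since the math is skipped above: unrolling the update $Q_{n+1} = Q_n + \alpha_n (R_n - Q_n)$ gives the general weighting the exercise asks for,

$$ Q_{n+1} = \Bigl[\prod_{j=1}^{n} (1-\alpha_j)\Bigr] Q_1 + \sum_{i=1}^{n} \alpha_i \Bigl[\prod_{j=i+1}^{n} (1-\alpha_j)\Bigr] R_i $$

With the sample average $\alpha_n = 1/n$ every reward gets the equal weight $1/n$, while a constant $\alpha$ yields the recency weighting $\alpha(1-\alpha)^{n-i}$, which is what the front/back contrast above is pointing at.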

2.5: TODO

2.6: This one is genuinely nasty. My conjecture: the initialization to 5 is simply too optimistic, so whatever action gets selected, its value estimate immediately drops; every round of picks knocks another estimate down, until the estimates are near the true values. By then t has been growing but is still not large, so the algorithm very briefly exploits the few actions that might be optimal (with so few samples, picking the best action from the value estimates is only about 40% accurate), which forms the spike in the figure. But right after that, t increases, the second (exploration-bonus) term of the algorithm grows and drives a new round of exploration, so the reward rate drops again, until the algorithm enters a stable phase: n is large enough that the second term has all but vanished, and the algorithm gets arbitrarily close to optimal. Hence the curve first rises, then dips, then rises again. A quick simulation (below) supports the first half of this story.
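
A minimal sketch under the Figure 2.3 setting (Q₁ = 5, α = 0.1, purely greedy); the run count, horizon, and seed are my own choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def optimistic_run(steps=60, k=10, q0=5.0, alpha=0.1):
    """Purely greedy agent with optimistic initial values and constant step size."""
    q_true = rng.normal(0, 1, k)
    Q = np.full(k, q0)
    best = int(np.argmax(q_true))
    hits = np.zeros(steps)
    for t in range(steps):
        a = int(np.argmax(Q))       # ties break toward the lowest index
        r = rng.normal(q_true[a], 1)
        Q[a] += alpha * (r - Q[a])  # each pick drags the inflated estimate down
        hits[t] = float(a == best)
    return hits

# Averaged over many runs, the %-optimal curve shows a brief spike right
# after step k, once every arm has been tried and knocked off its optimism.
curve = np.mean([optimistic_run() for _ in range(2000)], axis=0)
print(np.round(curve, 2))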

2.7: Skipping the math, but the result is roughly an exponential (recency-weighted) average; to put it grandly, somewhat like a kernel method?
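
For reference, and as my reconstruction of the exercise's setup: the unbiased trick uses step sizes

$$ \beta_n \doteq \alpha / \bar{o}_n, \qquad \bar{o}_n \doteq \bar{o}_{n-1} + \alpha (1-\bar{o}_{n-1}), \quad \bar{o}_0 \doteq 0 $$

Since $\bar{o}_1 = \alpha$, the first step size is $\beta_1 = 1$, so $Q_2 = R_1$ and the initial estimate $Q_1$ drops out after one step, removing the initial bias; unrolling as in 2.4 then shows the remaining weights still decay roughly geometrically, i.e., an exponential recency-weighted average without initial bias.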

2.8: Much the same as 2.6; I actually discussed the two together there.

2.9: This question is a bit cheeky, since the book never actually covered the sigmoid. Yes, the two are obviously equivalent, because the sigmoid can itself be written as e^{2z}/(e^z + e^{2z}), which is exactly a softmax choice between two actions.
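
Spelling the equivalence out, with action preferences $H_a, H_b$ and $z \doteq H_a - H_b$:

$$ \pi(a) = \frac{e^{H_a}}{e^{H_a}+e^{H_b}} = \frac{1}{1+e^{-(H_a-H_b)}} = \sigma(z) $$

The form quoted above is the special case with preferences $z$ and $2z$: $e^{2z}/(e^z+e^{2z}) = 1/(1+e^{-z}) = \sigma(z)$.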

2.10: Under these conditions, any algorithm claiming an average reward above 0.5 is cheating. A traditional constant-α method might work, but by my eyeball the best result comes from pure greedy, even when the labels are given.
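
Recalling the exercise's numbers (my recollection: case A has values 0.1 and 0.2, case B has 0.9 and 0.8, each case with probability 1/2): without the labels every fixed choice has the same expectation, so 0.5 is a hard ceiling, while with the labels the best achievable is 0.55:

$$ \tfrac{1}{2}(0.1+0.9) = \tfrac{1}{2}(0.2+0.8) = 0.5, \qquad \tfrac{1}{2}(0.2+0.9) = 0.55 $$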

2.11: TODO
