Reinforcement Learning Solutions Ed2, Chapters 1-2: Exercise Solutions
By Chapter 3 the RL book has an unbelievable number of exercises.
The first two chapters are fairly simple, so I am just jotting the answers down here on the blog; later chapters will be posted as PDF updates.
1.1: Self-play will produce different moves, even from the very first step, due to the randomization of action choice. The method should then learn two sets of value functions, one for moving first and one for moving second. In general, I believe self-play would improve the ability to win over the long run, but it converges more slowly than playing against someone with knowledge. Indeed, self-play sets no learning objective with respect to future opponents, which may make exploiting an opponent's weaknesses a harder job.
1.2: Mirror positions should be bound to the same state in the value function: either store the symmetric images of each state, or perform the rotation at play time. However, if the opponent does not take advantage of symmetry and holds some strange belief about particular patterns, the value function should treat each state separately in order to exploit that difference. If trained against a well-playing opponent, though, such amendments are no longer necessary. In any case the agent has no a priori information about its opponent. A minimal sketch of the binding idea is given below.
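A minimal sketch of the play-time canonicalization idea. The `canonical` helper and the numeric 3x3 board encoding (e.g. 0 empty, 1 X, 2 O) are my own illustration, not from the book: each board is mapped through its 8 rotations/reflections to one fixed representative, so all symmetric positions share a single value-table entry.

```python
import numpy as np

def canonical(board):
    """Map a 3x3 board to a canonical representative over its 8
    symmetries (4 rotations x optional mirror), so every symmetric
    position shares one key in the value table."""
    b = np.asarray(board).reshape(3, 3)
    variants = []
    for k in range(4):
        r = np.rot90(b, k)
        variants.append(r.tobytes())
        variants.append(np.fliplr(r).tobytes())
    return min(variants)  # any fixed ordering of the 8 images works

# usage: look up values[canonical(board)] instead of values[board]
```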
1.3: Under the assumption that the agent already explores, greedy may be good; for example, in the 10-armed bandit problem the traditional solution is indeed greedy. However, less greedy algorithms such as softmax show better performance and convergence speed, since they quickly learn the outcomes of all actions instead of only the seemingly great ones. Of course, if the opponent changes its policy, a greedy player will be very slow to react, and a different action-selection method has to be considered. A small comparison sketch follows.
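A quick toy comparison (my own testbed sketch, not the book's figure code): sample-average ε-greedy on one 10-armed Gaussian bandit, where eps=0 recovers the pure greedy player. Running both on the same seed shows the exploring agent typically earning more on average.

```python
import numpy as np

def run_bandit(eps, steps=1000, k=10, seed=0):
    """Sample-average epsilon-greedy on one k-armed Gaussian bandit;
    eps=0 is the pure greedy player. Returns the mean reward."""
    rng = np.random.default_rng(seed)
    q_true = rng.normal(0.0, 1.0, k)       # true action values
    Q = np.zeros(k)                        # value estimates
    N = np.zeros(k)                        # action counts
    total = 0.0
    for _ in range(steps):
        if rng.random() < eps:
            a = int(rng.integers(k))       # explore uniformly
        else:
            a = int(np.argmax(Q))          # exploit current estimates
        r = rng.normal(q_true[a], 1.0)
        N[a] += 1
        Q[a] += (r - Q[a]) / N[a]          # incremental sample average
        total += r
    return total / steps

print(run_bandit(0.0), run_bandit(0.1))    # greedy vs epsilon-greedy
```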
1.4: Skipped; the problem statement is unclear. Briefly: if we explore but also back values up through the exploratory moves, earlier moves are wrongly credited with an outcome that only exploration could have produced. This raises or lowers the evaluation of whole runs of actions, even though that action sequence may not be repeatable, since it arose from nothing more than random exploration.
1.5: Skipped; too open-ended. Many of the possible improvements are actually covered later in the book, for example combining RL with deep learning.
2.1: 75% (ε reflects the probability of exploring over the entire action space, including the greedy action, rather than only the non-optimal one).
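The arithmetic, written out for two actions with ε = 0.5 and exploration sampling uniformly over all actions:

\[
P(\text{greedy}) = (1-\varepsilon) + \frac{\varepsilon}{2} = 0.5 + 0.25 = 0.75 .
\]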
2.2: A4, and possibly all of them. A4 is not the greedy choice at that point, so it must be exploratory; see 2.1 for the reason why every selection, including the apparently greedy ones, can be an exploratory draw.
2.3: the one with probability 0.01 of exploring. Its limiting probability of selecting the best action is just above 0.99, which is higher than that of the agent exploring with probability 0.1. Given enough time steps, the one that explores less will indeed always end up with the higher cumulative reward.
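Concretely, once the value estimates have converged, an ε-greedy agent on the 10-armed testbed selects the optimal action with limiting probability

\[
\lim_{t\to\infty} P(A_t = a^*) = (1-\varepsilon) + \frac{\varepsilon}{10},
\]

which is 0.991 for ε = 0.01 versus 0.91 for ε = 0.1.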
2.4: The math is skipped; the weights become entangled with n. Roughly, the division-style factors emphasize the earlier rewards, while the multiplicative factors emphasize the later ones. The general formula is sketched below.
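For reference, unrolling the update \(Q_{n+1} = Q_n + \alpha_n (R_n - Q_n)\) with a per-step step size \(\alpha_n\) gives the weighting the answer alludes to:

\[
Q_{n+1} \;=\; \Big(\prod_{i=1}^{n}(1-\alpha_i)\Big) Q_1 \;+\; \sum_{i=1}^{n} \alpha_i \Big(\prod_{j=i+1}^{n}(1-\alpha_j)\Big) R_i .
\]

With \(\alpha_i = 1/i\) the weight of every \(R_i\) telescopes to exactly \(1/n\) (the sample average), while with constant \(\alpha\) the trailing \((1-\alpha)\) factors shrink old rewards geometrically, favoring recent ones.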
2.5: TODO
2.6: This one is genuinely nasty. My guess: initializing at 5 is simply far too optimistic, so whatever gets selected, its value immediately drops; each full round of selections pushes the values down another notch. By the time the values are near their true levels, t has been growing but is still not large, so the algorithm very briefly exploits the few actions that look optimal (with so few samples, picking the best action by value is only about 40% accurate), which forms the spike in the figure. Right after that, t keeps increasing, the second term of the algorithm grows and drives a new round of exploration, so the reward rate falls again, until the algorithm enters a stable period: n is by then large enough that the second term has nominally all but vanished, and the algorithm gets arbitrarily close to optimal. Hence the rise, fall, rise pattern. A simulation sketch of the optimistic-start transient follows.
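A toy reproduction of just the optimistic-initial-values part of this story (the UCB "second term" is omitted). The testbed setup follows the book's figure parameters (Q1 = 5, α = 0.1, 10 Gaussian arms), but the function and its defaults are my own sketch:

```python
import numpy as np

def optimistic_run(q0=5.0, alpha=0.1, steps=200, k=10, runs=500, seed=0):
    """Pure greedy agent with optimistic initial values Q1 = q0 and a
    constant step size; averaging %-optimal-action over many runs
    reproduces the early spike discussed above."""
    rng = np.random.default_rng(seed)
    optimal = np.zeros(steps)
    for _ in range(runs):
        q_true = rng.normal(0.0, 1.0, k)
        best = int(np.argmax(q_true))
        Q = np.full(k, q0)
        for t in range(steps):
            a = int(np.argmax(Q))          # greedy, but optimism forces a sweep
            r = rng.normal(q_true[a], 1.0)
            Q[a] += alpha * (r - Q[a])     # constant-step-size update
            optimal[t] += (a == best)
    return optimal / runs                  # fraction of runs picking the best arm

curve = optimistic_run()
print(curve[8:16].round(2))  # the transient spike shows up shortly after step k
```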
2.7: The math is skipped, but the result is roughly an exponential average; putting it boldly, it resembles a kernel method?
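For reference, the step size the exercise defines is, with \(\bar{o}_0 \doteq 0\),

\[
\beta_n \doteq \frac{\alpha}{\bar{o}_n}, \qquad \bar{o}_n \doteq \bar{o}_{n-1} + \alpha\,(1 - \bar{o}_{n-1}).
\]

Since \(\bar{o}_1 = \alpha\), we get \(\beta_1 = 1\): the first update overwrites \(Q_1\) entirely, so the initial estimate leaves no bias. As \(n \to \infty\), \(\bar{o}_n \to 1\) and \(\beta_n \to \alpha\), recovering the ordinary exponential recency-weighted average. That supports the "exponential average" reading: each reward's weight decays geometrically with its age, loosely like a fixed-bandwidth kernel over time.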
2.8: Much the same as 2.6; I in fact discussed the two together there.
2.9: This question is a bit mean, since the book never actually introduced the sigmoid. Yes, they are obviously equivalent, because the sigmoid can itself be written as e^{2z}/(e^z + e^{2z}), which is exactly a softmax choice between two actions; see the derivation below.
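Written out: for two actions with preferences \(z_1\) and \(z_2\), the softmax probability of the first is

\[
\frac{e^{z_1}}{e^{z_1}+e^{z_2}} \;=\; \frac{1}{1+e^{-(z_1-z_2)}} \;=\; \sigma(z_1 - z_2),
\]

the logistic sigmoid of the preference difference, so only that difference matters.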
2.10: Under these conditions, any algorithm claiming an expected reward above 0.5 is cheating. The traditional constant-α approach might work, but by inspection the best result comes from pure greedy, even when the case label is given.
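Recalling the numbers from the exercise (action values 0.1 and 0.2 in case A, 0.9 and 0.8 in case B, each case occurring with probability 0.5): when the case is unknown, both actions are worth the same,

\[
\mathbb{E}[R \mid a_1] = 0.5\cdot 0.1 + 0.5\cdot 0.9 = 0.5, \qquad
\mathbb{E}[R \mid a_2] = 0.5\cdot 0.2 + 0.5\cdot 0.8 = 0.5,
\]

so no selection scheme can expect more than 0.5. When the label is given, acting greedily per case (a_2 in A, a_1 in B) achieves \(0.5\cdot 0.2 + 0.5\cdot 0.9 = 0.55\).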
2.11: TODO