Awesome Reinforcement Learning

A curated list of resources dedicated to reinforcement learning.

We have pages for other topics: awesome-rnn, awesome-deep-vision, awesome-random-forest

Maintainers: Hyunsoo Kim, Jiwon Kim

We are looking for more contributors and maintainers!

Contributing

Please feel free to send pull requests.

Table of Contents

  • Codes
  • Theory
    • Lectures
    • Books
    • Surveys
    • Papers / Thesis
  • Applications
  • Tutorials / Websites
  • Online Demos

Books

  • Richard Sutton and Andrew Barto, Reinforcement Learning: An Introduction [Book] [Code]
  • Csaba Szepesvari, Algorithms for Reinforcement Learning [Book]
  • David Poole and Alan Mackworth, Artificial Intelligence: Foundations of Computational Agents [Book Chapter]
  • Dimitri P. Bertsekas and John N. Tsitsiklis, Neuro-Dynamic Programming [Book (Amazon)] [Summary]
  • Mykel J. Kochenderfer, Decision Making Under Uncertainty: Theory and Application [Book (Amazon)]

Surveys

  • Leslie Pack Kaelbling, Michael L. Littman, Andrew W. Moore, Reinforcement Learning: A Survey, JAIR, 1996. [Paper]
  • S. S. Keerthi and B. Ravindran, A Tutorial Survey of Reinforcement Learning, Sadhana, 1994. [Paper]
  • Matthew E. Taylor, Peter Stone, Transfer Learning for Reinforcement Learning Domains: A Survey, JMLR, 2009. [Paper]
  • Jens Kober, J. Andrew Bagnell, Jan Peters, Reinforcement Learning in Robotics, A Survey, IJRR, 2013. [Paper]
  • Michael L. Littman, "Reinforcement learning improves behaviour from evaluative feedback." Nature 521.7553 (2015): 445-451. [Paper]
  • Marc P. Deisenroth, Gerhard Neumann, Jan Peters, A Survey on Policy Search for Robotics, Foundations and Trends in Robotics, 2014. [Book]

Papers / Thesis

  • Foundational Papers

    • Marvin Minsky, Steps toward Artificial Intelligence, Proceedings of the IRE, 1961. [Paper]

      • discusses issues in RL such as the "credit assignment problem"
    • Ian H. Witten, An Adaptive Optimal Controller for Discrete-Time Markov Environments, Information and Control, 1977. [Paper]
      • the earliest publication on the temporal-difference (TD) learning rule.
  • Methods

    • Dynamic Programming (DP):

      • Christopher J. C. H. Watkins, Learning from Delayed Rewards, Ph.D. Thesis, Cambridge University, 1989. [Thesis]
    • Monte Carlo:
      • Andrew Barto, Michael Duff, Monte Carlo Inversion and Reinforcement Learning, NIPS, 1994. [Paper]
      • Satinder P. Singh, Richard S. Sutton, Reinforcement Learning with Replacing Eligibility Traces, Machine Learning, 1996. [Paper]
    • Temporal-Difference:
      • Richard S. Sutton, Learning to predict by the methods of temporal differences. Machine Learning 3: 9-44, 1988. [Paper]
    • Q-Learning (Off-policy TD algorithm; contrasted with Sarsa in the update-rule sketch after this list):
      • Chris Watkins, Learning from Delayed Rewards, Cambridge, 1989. [Thesis]
    • Sarsa (On-policy TD algorithm):
      • G.A. Rummery, M. Niranjan, On-line Q-learning using connectionist systems, Technical Report, Cambridge Univ., 1994. [Report]
      • Richard S. Sutton, Generalization in Reinforcement Learning: Successful examples using sparse coding, NIPS, 1996. [Paper]
    • R-Learning (learning of relative values)
      • Andrew Schwartz, A Reinforcement Learning Method for Maximizing Undiscounted Rewards, ICML, 1993. [Paper-Google Scholar]
    • Function Approximation methods (Least-Squares Temporal Difference, Least-Squares Policy Iteration)
      • Steven J. Bradtke, Andrew G. Barto, Linear Least-Squares Algorithms for Temporal Difference Learning, Machine Learning, 1996. [Paper]
      • Michail G. Lagoudakis, Ronald Parr, Model-Free Least Squares Policy Iteration, NIPS, 2001. [Paper] [Code]
    • Policy Search / Policy Gradient
      • Richard Sutton, David McAllester, Satinder Singh, Yishay Mansour, Policy Gradient Methods for Reinforcement Learning with Function Approximation, NIPS, 1999. [Paper]
      • Jan Peters, Sethu Vijayakumar, Stefan Schaal, Natural Actor-Critic, ECML, 2005. [Paper]
      • Jens Kober, Jan Peters, Policy Search for Motor Primitives in Robotics, NIPS, 2009. [Paper]
      • Jan Peters, Katharina Mulling, Yasemin Altun, Relative Entropy Policy Search, AAAI, 2010. [Paper]
      • Freek Stulp, Olivier Sigaud, Path Integral Policy Improvement with Covariance Matrix Adaptation, ICML, 2012. [Paper]
      • Nate Kohl, Peter Stone, Policy Gradient Reinforcement Learning for Fast Quadrupedal Locomotion, ICRA, 2004. [Paper]
      • Marc Deisenroth, Carl Rasmussen, PILCO: A Model-Based and Data-Efficient Approach to Policy Search, ICML, 2011. [Paper]
      • Scott Kuindersma, Roderic Grupen, Andrew Barto, Learning Dynamic Arm Motions for Postural Recovery, Humanoids, 2011. [Paper]
    • Hierarchical RL
      • Richard Sutton, Doina Precup, Satinder Singh, Between MDPs and Semi-MDPs: A Framework for Temporal Abstraction in Reinforcement Learning, Artificial Intelligence, 1999. [Paper]
      • George Konidaris, Andrew Barto, Building Portable Options: Skill Transfer in Reinforcement Learning, IJCAI, 2007. [Paper]
    • Deep Learning + Reinforcement Learning (A sample of recent works on DL+RL)
      • V. Mnih et al., Human-level Control through Deep Reinforcement Learning, Nature, 2015. [Paper]
      • Xiaoxiao Guo, Satinder Singh, Honglak Lee, Richard Lewis, Xiaoshi Wang, Deep Learning for Real-Time Atari Game Play Using Offline Monte-Carlo Tree Search Planning, NIPS, 2014. [Paper]
      • Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel, End-to-End Training of Deep Visuomotor Policies. ArXiv, 16 Oct 2015. [ArXiv]
      • Tom Schaul, John Quan, Ioannis Antonoglou, David Silver, Prioritized Experience Replay, ArXiv, 18 Nov 2015. [ArXiv]
      • Hado van Hasselt, Arthur Guez, David Silver, Deep Reinforcement Learning with Double Q-Learning, ArXiv, 22 Sep 2015. [ArXiv]
      • Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu, Asynchronous Methods for Deep Reinforcement Learning, ArXiv, 4 Feb 2016. [ArXiv]
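
To make the tabular TD-control entries above concrete, here is a minimal sketch contrasting the Q-learning (off-policy) and Sarsa (on-policy) update rules. It is not taken from any of the listed papers; the toy chain environment, epsilon-greedy policy, and hyperparameters are illustrative assumptions.

```python
# Minimal tabular TD control: Q-learning vs. Sarsa on a toy 5-state chain.
# Environment and hyperparameters are illustrative, not from the papers above.
import random
from collections import defaultdict

class ChainEnv:
    """Move left/right along a chain; reward +1 on reaching the rightmost state."""
    def __init__(self, n=5):
        self.n = n
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):  # a: 0 = left, 1 = right
        self.s = max(0, self.s - 1) if a == 0 else min(self.n - 1, self.s + 1)
        done = self.s == self.n - 1
        return self.s, (1.0 if done else 0.0), done

def eps_greedy(Q, s, n_actions, eps):
    if random.random() < eps:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: Q[(s, a)])

def td_control(env, episodes=500, alpha=0.1, gamma=0.95, eps=0.1, sarsa=False):
    Q = defaultdict(float)
    for _ in range(episodes):
        s = env.reset()
        a = eps_greedy(Q, s, 2, eps)
        done = False
        while not done:
            s2, r, done = env.step(a)
            a2 = eps_greedy(Q, s2, 2, eps)
            if sarsa:
                # On-policy: bootstrap from the action the behaviour policy actually takes next.
                target = r + (0.0 if done else gamma * Q[(s2, a2)])
            else:
                # Off-policy Q-learning: bootstrap from the greedy action, whatever is taken next.
                target = r + (0.0 if done else gamma * max(Q[(s2, b)] for b in range(2)))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s, a = s2, a2
    return Q

if __name__ == "__main__":
    env = ChainEnv()
    Q_ql = td_control(env, sarsa=False)
    Q_sa = td_control(env, sarsa=True)
    print("Q-learning value of start state:", max(Q_ql[(0, a)] for a in range(2)))
    print("Sarsa value of start state:     ", max(Q_sa[(0, a)] for a in range(2)))
```

The two methods differ only in the bootstrapping target, which is why Watkins' thesis and the Rummery & Niranjan report listed above are usually read together.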

Applications

Game Playing

  • Traditional Games

    • Backgammon - "TD-Gammon" game play using TD(λ) (Tesauro, ACM 1995) [Paper] (a brief TD(λ) update sketch follows this list)
    • Chess - "KnightCap" program using TD(λ) (Baxter, arXiv 1999) [arXiv]
    • Chess - Giraffe: Using deep reinforcement learning to play chess (Lai, arXiv 2015) [arXiv]
  • Computer Games
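
To make the TD(λ) references above concrete (TD-Gammon in particular), here is a minimal tabular TD(λ) prediction sketch with accumulating eligibility traces. TD-Gammon itself combined this update with a neural-network value function and self-play; the toy chain, random policy, and hyperparameters below are illustrative assumptions.

```python
# Minimal TD(lambda) value prediction with accumulating eligibility traces
# on a toy 5-state chain under a random policy. Illustrative only.
import random

def td_lambda(n_states=5, episodes=500, alpha=0.1, gamma=1.0, lam=0.8):
    V = [0.0] * n_states
    for _ in range(episodes):
        e = [0.0] * n_states              # eligibility traces, reset each episode
        s = 0
        while s != n_states - 1:          # rightmost state is terminal
            s2 = max(0, s - 1) if random.random() < 0.5 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            bootstrap = 0.0 if s2 == n_states - 1 else gamma * V[s2]
            delta = r + bootstrap - V[s]  # one-step TD error
            e[s] += 1.0                   # accumulate trace for the visited state
            for i in range(n_states):     # spread credit over recently visited states
                V[i] += alpha * delta * e[i]
                e[i] *= gamma * lam
            s = s2
    return V

print(td_lambda())
```

Setting lam=0 recovers one-step TD(0), while lam close to 1 behaves more like a Monte Carlo update.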

Robotics

  • Policy Gradient Reinforcement Learning for Fast Quadrupedal Locomotion (Kohl, ICRA 2004) [Paper]
  • Robot Motor Skill Coordination with EM-based Reinforcement Learning (Kormushev, IROS 2010) [Paper] [Video]
  • Generalized Model Learning for Reinforcement Learning on a Humanoid Robot (Hester, ICRA 2010) [Paper] [Video]
  • Autonomous Skill Acquisition on a Mobile Manipulator (Konidaris, AAAI 2011) [Paper] [Video]
  • PILCO: A Model-Based and Data-Efficient Approach to Policy Search (Deisenroth, ICML 2011) [Paper]
  • Incremental Semantically Grounded Learning from Demonstration (Niekum, RSS 2013) [Paper]
  • Efficient Reinforcement Learning for Robots using Informative Simulated Priors (Cutler, ICRA 2015) [Paper] [Video]

Control

  • An Application of Reinforcement Learning to Aerobatic Helicopter Flight (Abbeel, NIPS 2006) [Paper] [Video]
  • Autonomous helicopter control using Reinforcement Learning Policy Search Methods (Bagnell, ICRA 2001) [Paper]

Operations Research

  • Scaling Average-reward Reinforcement Learning for Product Delivery (Proper, AAAI 2004) [Paper]
  • Cross Channel Optimized Marketing by Reinforcement Learning (Abe, KDD 2004) [Paper]

Human Computer Interaction

  • Optimizing Dialogue Management with Reinforcement Learning: Experiments with the NJFun System (Singh, JAIR 2002) [Paper]

Tutorials / Websites

Online Demos
