Reinforcement Learning: Getting Started
https://www.zhihu.com/question/277325426
https://github.com/jinglescode/reinforcement-learning-tic-tac-toe/blob/master/README.md
Intuition
After a long day at work, you are deciding between two options: heading home to write a Medium article, or hanging out with friends at a bar. If you hang out with friends, you will feel happy; if you head home to write an article, you will end up feeling tired after a long day at work. In this example, enjoying yourself is a reward and feeling tired is a negative reward, so why write articles at all?
Because in life we don’t think only about immediate rewards; we plan a course of actions and weigh the future rewards that may follow. Writing an article may sharpen your understanding of a topic, get you recognised, and ultimately land you the dream job you’ve always wanted. In this scenario, getting your dream job is a delayed reward for a sequence of actions you took, so we want to assign some value to being in each of those states (for example, “going home to write an article”). The function that assigns a value to a state is called the “value function”.
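As a side note, the usual textbook way to make “the value of a state” precise (this is the standard definition, not something specific to this story) is the expected discounted sum of future rewards:

V^{\pi}(s) = \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t} r_{t+1} \;\middle|\; s_0 = s \right]

where \pi is the policy being followed and \gamma \in [0, 1) is the discount factor that trades off immediate reward against delayed reward (such as the dream job).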
So how do we learn from our past? Say you made some great decisions and are now in the best state of your life. Look back at the decisions that got you here: what do you attribute your success to? Which previous states led to this success? Which actions you took in the past led you to the state in which you receive this reward? And how is the action you are taking now related to the reward you may receive in the future?
Reinforcement Learning — Implement TicTacToe
How to use reinforcement learning to play tic-tac-toe
https://github.com/MJeremy2017/reinforcement-learning-implementation/tree/master/TicTacToe
Go straight to this tic-tac-toe code and read it alongside the articles below, repeatedly, until you gradually understand what Q-Learning actually is and what each of its parameters means (a hedged sketch of the core update appears after the “What’s ‘Q’?” note below).
https://github.com/ZuzooVn/machine-learning-for-software-engineers
https://machinelearningmastery.com/machine-learning-for-programmers/#comment-358985
https://towardsdatascience.com/simple-reinforcement-learning-q-learning-fcddc4b6fe56
What’s ‘Q’?
The ‘q’ in q-learning stands for quality. Quality in this case represents how useful a given action is in gaining some future reward.
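To make that concrete, here is a minimal, self-contained sketch of tabular Q-learning in Python. This is not the TicTacToe code linked above: the toy chain environment, the +1 reward, and the hyperparameter values (alpha, gamma, epsilon) are illustrative assumptions. The update rule itself, Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)), is the standard Q-learning update that the articles above describe.

# Minimal tabular Q-learning sketch (illustrative toy environment, not the TicTacToe repo).
import random
from collections import defaultdict

N_STATES = 6          # states 0..5 on a 1-D chain; reaching state 5 ends the episode with reward +1
ACTIONS = [-1, +1]    # move left or move right

alpha = 0.1           # learning rate: how strongly each new sample overwrites the old estimate
gamma = 0.9           # discount factor: how much delayed reward counts versus immediate reward
epsilon = 0.1         # exploration rate: probability of trying a random action

Q = defaultdict(float)  # Q[(state, action)] -> estimated quality of taking `action` in `state`

def choose_action(state):
    """Epsilon-greedy: mostly exploit the best known action, sometimes explore.
    Ties are broken at random so the untrained agent does not get stuck."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    values = [Q[(state, a)] for a in ACTIONS]
    best = max(values)
    return random.choice([a for a, v in zip(ACTIONS, values) if v == best])

def step(state, action):
    """One environment transition; only the right end of the chain pays a reward."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

for episode in range(500):
    state, done = 0, False
    while not done:
        action = choose_action(state)
        next_state, reward, done = step(state, action)
        # Core Q-learning update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = 0.0 if done else max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy should point right (+1) in every non-terminal state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})

These are the three knobs worth tracing through any tabular implementation, including the TicTacToe one: alpha controls how quickly new experience overwrites old estimates, gamma controls how much future reward counts, and epsilon controls how often the agent explores instead of exploiting.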
How Does Learning Rate Decay Help Modern Neural Networks?
https://smartlabai.medium.com/reinforcement-learning-algorithms-an-intuitive-overview-904e2dff5bbc
RL algorithms fall into two broad categories: model-free and model-based.
Q-Learning is model-free: it learns action values directly from experience, without learning a model of the environment's transition dynamics.
http://incompleteideas.net/book/the-book-2nd.html
http://incompleteideas.net/book/code/code2nd.html