Original post:

https://hub.baai.ac.cn/view/4029

========================================================

【Title】Dynamic allocation of limited memory resources in reinforcement learning
【Authors】Nisheet Patel, Luigi Acerbi, Alexandre Pouget
【Published】2020/11/12
【Paper】https://proceedings.neurips.cc//paper/2020/file/c4fac8fb3c9e17a2f4553a001f631975-Paper.pdf
【Code】https://github.com/nisheetpatel/DynamicResourceAllocator
【Why recommended】Published at NeurIPS 2020, this paper by researchers at the University of Geneva bridges reinforcement learning and neuroscience, proposing a dynamic framework for allocating limited memory resources.
Biological brains are inherently limited in their ability to process and store information, yet they solve complex tasks with ease.
The authors propose the Dynamic Resource Allocator (DRA), apply it to two standard tasks in reinforcement learning and a model-based planning task, and find that it allocates more resources to memory items that have a higher impact on cumulative reward.
Moreover, DRA learns faster when it starts with a higher resource budget than the one it eventually settles on for the task, which may explain why frontal cortical areas of biological brains appear more engaged in the early stages of learning before settling to lower asymptotic levels of activity.
This work provides a normative solution to the problem of learning how to allocate costly resources across a collection of uncertain memories in a way that adapts to changes in the environment.
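
To make the objective concrete, below is a minimal, hypothetical Python sketch of the core idea, not the authors' implementation (see the linked repository for that). It assumes memory items are q-values stored as Gaussians whose standard deviations are tuned by a REINFORCE-style gradient on expected reward, minus a KL-divergence cost to a fixed low-precision base distribution; the toy values q_true, the base precision sigma_base, and the cost weight lam are all illustrative choices, not the paper's settings.

import numpy as np

rng = np.random.default_rng(0)

# Four hypothetical memory slots (e.g., q-values of four actions).
n_items = 4
q_true = np.array([1.0, 0.8, 0.3, 0.1])  # illustrative true action values
q_bar = q_true.copy()                    # assume the means are already learned
sigma = np.full(n_items, 2.0)            # start with cheap, imprecise memories
sigma_base = 5.0   # precision of the zero-cost "base" distribution
lam = 0.05         # price per nat of stored information
lr = 0.01
baseline = 0.0     # running baseline to reduce gradient variance

def kl_grad(s):
    # d/d_sigma of KL( N(q_bar, sigma^2) || N(q_bar, sigma_base^2) )
    return -1.0 / s + s / sigma_base**2

for step in range(5000):
    # Act by sampling from the noisy memories (Thompson-style choice).
    noise = rng.standard_normal(n_items)
    q_tilde = q_bar + sigma * noise
    a = int(np.argmax(q_tilde))
    reward = q_true[a]

    # REINFORCE gradient of expected reward w.r.t. each sigma_i, using
    # d/d_sigma log N(x; mu, sigma^2) = (noise^2 - 1) / sigma.
    baseline += 0.05 * (reward - baseline)
    grad_reward = (reward - baseline) * (noise**2 - 1.0) / sigma

    # Ascend reward, descend the information cost.
    sigma += lr * (grad_reward - lam * kl_grad(sigma))
    sigma = np.clip(sigma, 0.05, sigma_base)

print(np.round(sigma, 2))

In this toy run the cost term pulls every sigma toward the cheap base precision, while the reward term sharpens the memories whose precise recall changes the chosen action (here, the two close top values), mirroring the paper's finding that resources concentrate on high-impact items.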

Code repository:

https://github.com/nisheetpatel/DynamicResourceAllocator

======================================================

Official paper page:

https://archive-ouverte.unige.ch/unige:149081

============================================

Review comments (meta review):

https://proceedings.neurips.cc/paper/2020/file/c4fac8fb3c9e17a2f4553a001f631975-MetaReview.html

Dynamic allocation of limited memory resources in reinforcement learning


Meta Review

This paper nicely bridges between neuroscience and RL, and considers the important topic of limited memory resources in RL agents. The topic is well-suited for NeurIPS (R2) as it has broader applicability toward e.g. model-based RL and planning, although this is not extensively discussed or shown in the paper itself. All reviewers agreed that it is well-motivated and well-written (R1, R2, R3, R4), although R3 did ask for a bit more explanation of some methodological details. It is also appropriately situated with respect to related work (R1, R2, R3), although R2 suggests a separate related-work section, and R4 wanted to see more discussion of work outside of neuroscience focused on optimizing RL with limited capacity.

R1 pointed out that there is perhaps a bit of confusion between memory precision and use of memory resources: the former is more accurate for agents, the latter perhaps for real brains (i.e., more precise representations require more resources to encode in the brain), but this seems to be a minor point. R1 also asked to include standard baseline implementations to test for issues such as how the model scales compared to other methods. R4 was the least positive, expressing that the contribution to AI is unclear and that the tasks are too easy and would not be expected to challenge memory resources; R4 also found the connection to neuroscience a bit tenuous, as the implementation does not seem particularly biologically plausible.

In the rebuttal, the authors argue that this approach will allow them to generate testable predictions regarding neural representations during learning, some of which are already included in the discussion. I find this adequate, but these predictions should perhaps be foregrounded more to make the neuroscientific contribution clearer. I am overall quite impressed with how responsive the authors were in their response, which included almost all of the requested analyses. I think the final paper, with all of these changes incorporated, is likely to be much stronger, and so I recommend acceptance.

======================================

Video walkthrough of the paper (YouTube):

https://www.youtube.com/watch?v=1MJJkJd_umA
