【Repost】 University of Geneva & NeurIPS 2020 | Dynamic Allocation of Limited Memory Resources in Reinforcement Learning
Original post:
https://hub.baai.ac.cn/view/4029
========================================================
【Paper Title】Dynamic allocation of limited memory resources in reinforcement learning
【Authors】Nisheet Patel, Luigi Acerbi, Alexandre Pouget
【Published】2020/11/12
【Paper Link】https://proceedings.neurips.cc//paper/2020/file/c4fac8fb3c9e17a2f4553a001f631975-Paper.pdf
【Code】https://github.com/nisheetpatel/DynamicResourceAllocator
【Why Recommended】Accepted at NeurIPS 2020, this work from researchers at the University of Geneva sits at the intersection of reinforcement learning and neuroscience, proposing a dynamical framework for allocating limited memory resources.
Biological brains are inherently limited in their capacity to process and store information, yet they solve complex tasks with apparent ease.
In this paper, the researchers propose the Dynamic Resource Allocator (DRA) and apply it to two standard reinforcement-learning tasks and a model-based planning task,
finding that it allocates more resources to memory items that have a higher impact on cumulative reward.
Moreover, DRA learns faster when it starts with a higher resource budget than it ultimately allocates for performing the task,
which may explain why frontal cortical areas of the biological brain appear more engaged in early stages of learning before settling to lower asymptotic levels of activity.
This work provides a normative solution to the problem of learning how to allocate costly resources to a collection of uncertain memories, in a way that adapts to changes in the environment.
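To make the mechanism concrete, below is a minimal Python sketch of the idea described above. It is an illustrative reconstruction, not the authors' released implementation (see the repository linked above for that): each stored action-value carries its own encoding noise sigma, precision is charged through a KL cost against a cheap base distribution, and a stochastic gradient step on sigma trades expected return against that cost. All constants and helper names here (sigma_base, lam, beta, the clipping bounds) are assumptions chosen for illustration.

import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 5, 2
q_mean = np.zeros((n_states, n_actions))      # action-value means stored in memory
sigma = np.full((n_states, n_actions), 3.0)   # per-item encoding noise (smaller = more resources)
sigma_base = 5.0                              # "free" base noise level (assumed)
lam = 0.1                                     # price per unit of KL cost (assumed)
alpha = 0.1                                   # TD learning rate (assumed)

def act(state):
    # Thompson-style choice: draw one noisy sample of each value from
    # memory and act greedily on the samples.
    eps = rng.standard_normal(n_actions)
    q_tilde = q_mean[state] + sigma[state] * eps
    return int(np.argmax(q_tilde)), eps

def td_update(s, a, r, s_next, gamma=0.9):
    # Ordinary TD(0) update of the value means.
    target = r + gamma * q_mean[s_next].max()
    q_mean[s, a] += alpha * (target - q_mean[s, a])

def kl_grad(sig):
    # d/d(sigma) of KL( N(q, sigma^2) || N(q, sigma_base^2) ), where
    # KL = ln(sigma_base/sigma) + (sigma^2 - sigma_base^2) / (2 sigma_base^2).
    return -1.0 / sig + sig / sigma_base**2

def allocate_step(s, eps, G, beta=0.01):
    # Crude score-function gradient step on the encoding noise at state s:
    # nudge sigma to increase E[return] - lam * KL-cost. G is the return
    # observed after sampling memories with noise eps at this state.
    grad_logp = (eps**2 - 1.0) / sigma[s]   # d log N(q_tilde; q_mean, sigma) / d sigma
    sigma[s] += beta * (G * grad_logp - lam * kl_grad(sigma[s]))
    np.clip(sigma[s], 0.3, sigma_base, out=sigma[s])  # keep noise bounded

In a training loop, one would call act to choose actions, td_update after each transition, and allocate_step with the observed return G at the end of an episode. Under this scheme, sigma shrinks (precision grows) for the state-action memories whose accuracy most affects reward, while the cost term pushes the noise of unimportant items back toward sigma_base, which is the qualitative behavior the paper reports.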
======================================================
Official paper page:
https://archive-ouverte.unige.ch/unige:149081
============================================
Meta review of the paper:
https://proceedings.neurips.cc/paper/2020/file/c4fac8fb3c9e17a2f4553a001f631975-MetaReview.html
Dynamic allocation of limited memory resources in reinforcement learning
Meta Review
This paper nicely bridges between neuroscience and RL, and considers the important topic of limited memory resources in RL agents. The topic is well-suited for NeurIPS (R2) as it has broader applicability toward e.g. model-based RL and planning, although this is not extensively discussed or shown in the paper itself. All reviewers agreed that it is well-motivated and written (R1, R2, R3, R4), although R3 did ask for a bit more explanation on some methodological details. It is also appropriately situated with respect to related work (R1, R2, R3), although R2 suggests a separate related-works section, and R4 wanted to see more discussion of work outside of neuroscience focused on optimizing RL with limited capacity.

R1 pointed out that perhaps there is a bit of confusion between memory precision and use of memory resources, as the former is more accurate for agents and the latter perhaps for real brains, i.e. more precise representations require more resources to encode in the brain, but this seems to be a minor point. R1 also asked the authors to include standard baseline implementations to test for issues such as how their model scales compared to other methods.

R4 was the least positive, expressing that the contribution to AI is unclear and that the tasks are too easy and would not be expected to challenge memory resources. Also, the connection to neuroscience is a bit tenuous, as the implementation does not seem particularly biologically plausible. In the rebuttal, the authors argue that this approach will allow them to generate testable predictions regarding neural representations during learning, some of which are already included in the discussion. I find this adequate, but these predictions should maybe be foregrounded more so as to make the neuroscientific contribution clearer.

I am overall quite impressed with how responsive the authors were in their response, including almost all of the requested analyses. I think the final paper, with all of these changes incorporated, is likely to be much stronger, and so I recommend accept.
======================================
Video walkthrough of the paper (YouTube):
https://www.youtube.com/watch?v=1MJJkJd_umA
===================================