Paper List of Meta Learning / Learning to Learn / One Shot Learning / Lifelong Learning

2018-08-03 19:16:56

Reposted from: https://github.com/floodsung/Meta-Learning-Papers

1 Legacy Papers

[1] Nicolas Schweighofer and Kenji Doya. Meta-learning in reinforcement learning. Neural Networks, 16(1):5–9, 2003.

[2] Sepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pages 87–94. Springer, 2001.

[3] Kunikazu Kobayashi, Hiroyuki Mizoue, Takashi Kuremoto, and Masanao Obayashi. A meta-learning method based on temporal difference error. In International Conference on Neural Information Processing, pages 530–537. Springer, 2009.

[4] Sebastian Thrun and Lorien Pratt. Learning to learn: Introduction and overview. In Learning to learn, pages 3–17. Springer, 1998.

[5] A Steven Younger, Sepp Hochreiter, and Peter R Conwell. Meta-learning with backpropagation. In Proceedings of the International Joint Conference on Neural Networks (IJCNN 2001), volume 3. IEEE, 2001.

[6] Ricardo Vilalta and Youssef Drissi. A perspective view and survey of meta-learning. Artificial Intelligence Review, 18(2):77–95, 2002.

[7] Hugo Larochelle, Dumitru Erhan, and Yoshua Bengio. Zero-data learning of new tasks. In AAAI, volume 1, pp. 3, 2008.

[8] Brenden M Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua B Tenenbaum. One shot learning of simple visual concepts. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society, volume 172, pp. 2, 2011.

[9] Li Fei-Fei, Rob Fergus, and Pietro Perona. One-shot learning of object categories. IEEE transactions on pattern analysis and machine intelligence, 28(4):594–611, 2006.

[10] Jürgen Schmidhuber. A neural network that embeds its own meta-levels. In Proceedings of the IEEE International Conference on Neural Networks, pp. 407–412. IEEE, 1993.

[11] Sebastian Thrun. Lifelong learning algorithms. In Learning to learn, pp. 181–209. Springer, 1998.

[12] Yoshua Bengio, Samy Bengio, and Jocelyn Cloutier. Learning a synaptic learning rule. Université de Montréal, Département d'informatique et de recherche opérationnelle, 1990.

[13] Samy Bengio, Yoshua Bengio, and Jocelyn Cloutier. On the search for new learning rules for ANNs. Neural Processing Letters, 2(4):26–30, 1995.

[14] Rich Caruana. Learning many related tasks at the same time with backpropagation. Advances in neural information processing systems, pp. 657–664, 1995.

[15] Giraud-Carrier, Christophe, Vilalta, Ricardo, and Brazdil, Pavel. Introduction to the special issue on meta-learning. Machine learning, 54(3):187–193, 2004.

[16] Jankowski, Norbert, Duch, Włodzisław, and Grabczewski, Krzysztof. Meta-learning in computational intelligence, volume 358. Springer Science & Business Media, 2011.

[17] N. E. Cotter and P. R. Conwell. Fixed-weight networks can learn. In International Joint Conference on Neural Networks, pages 553–559, 1990.

[18] J. Schmidhuber. Evolutionary principles in self-referential learning; On learning how to learn: The meta-meta-... hook. PhD thesis, Institut f. Informatik, Tech. Univ. Munich, 1987.

[19] J. Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131–139, 1992.

[20] Jürgen Schmidhuber, Jieyu Zhao, and Marco Wiering. Simple principles of metalearning. Technical report, SEE, 1996.

[21] Thrun, Sebastian and Pratt, Lorien. Learning to learn. Springer Science & Business Media, 1998.

2 Recent Papers

[1] Andrychowicz, Marcin, Denil, Misha, Gomez, Sergio, Hoffman, Matthew W, Pfau, David, Schaul, Tom, and de Freitas, Nando. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems, pp. 3981–3989, 2016.

[2] Ba, Jimmy, Hinton, Geoffrey E, Mnih, Volodymyr, Leibo, Joel Z, and Ionescu, Catalin. Using fast weights to attend to the recent past. In Advances in Neural Information Processing Systems, pp. 4331–4339, 2016.

[3] David Ha, Andrew Dai, and Quoc V. Le. HyperNetworks. In ICLR 2017, 2017.

[4] Koch, Gregory. Siamese neural networks for one-shot image recognition. PhD thesis, University of Toronto, 2015.

[5] Lake, Brenden M, Salakhutdinov, Ruslan R, and Tenenbaum, Josh. One-shot learning by inverting a compositional causal process. In Advances in neural information processing systems, pp. 2526–2534, 2013.

[6] Santoro, Adam, Bartunov, Sergey, Botvinick, Matthew, Wierstra, Daan, and Lillicrap, Timothy. Meta-learning with memory-augmented neural networks. In Proceedings of The 33rd International Conference on Machine Learning, pp. 1842–1850, 2016.

[7] Vinyals, Oriol, Blundell, Charles, Lillicrap, Tim, Wierstra, Daan, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pp. 3630–3638, 2016.

[8] Kaiser, Lukasz, Nachum, Ofir, Roy, Aurko, and Bengio, Samy. Learning to remember rare events. In ICLR 2017, 2017.

[9] P. Mirowski, R. Pascanu, F. Viola, H. Soyer, A. Ballard, A. Banino, M. Denil, R. Goroshin, L. Sifre, K. Kavukcuoglu, D. Kumaran, and R. Hadsell. Learning to navigate in complex environments. Technical report, DeepMind, 2016.

[10] B. Zoph and Q. V. Le. Neural architecture search with reinforcement learning. Technical report, submitted to ICLR 2017, 2016.

[11] Y. Duan, J. Schulman, X. Chen, P. Bartlett, I. Sutskever, and P. Abbeel. RL^2: Fast reinforcement learning via slow reinforcement learning. Technical report, UC Berkeley and OpenAI, 2016.

[12] Li, Ke and Malik, Jitendra. Learning to optimize. International Conference on Learning Representations (ICLR), 2017.

[13] Edwards, Harrison and Storkey, Amos. Towards a neural statistician. International Conference on Learning Representations (ICLR), 2017.

[14] Parisotto, Emilio, Ba, Jimmy Lei, and Salakhutdinov, Ruslan. Actor-mimic: Deep multitask and transfer reinforcement learning. International Conference on Learning Representations (ICLR), 2016.

[15] Ravi, Sachin and Larochelle, Hugo. Optimization as a model for few-shot learning. In International Conference on Learning Representations (ICLR), 2017.

[16] Finn, C., Abbeel, P., & Levine, S. (2017). Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. arXiv preprint arXiv:1703.03400.

[17] Chen, Y., Hoffman, M. W., Colmenarejo, S. G., Denil, M., Lillicrap, T. P., & de Freitas, N. (2016). Learning to Learn for Global Optimization of Black Box Functions. arXiv preprint arXiv:1611.03824.

[18] Munkhdalai T, Yu H. Meta Networks. arXiv preprint arXiv:1703.00837, 2017.

[19] Duan Y, Andrychowicz M, Stadie B, et al. One-Shot Imitation Learning. arXiv preprint arXiv:1703.07326, 2017.

[20] Woodward M, Finn C. Active One-shot Learning. arXiv preprint arXiv:1702.06559, 2017.

[21] Wichrowska O, Maheswaranathan N, Hoffman M W, et al. Learned Optimizers that Scale and Generalize. arXiv preprint arXiv:1703.04813, 2017.

[22] Hariharan, Bharath, and Ross Girshick. Low-shot visual object recognition. arXiv preprint arXiv:1606.02819 (2016).

[23] Wang J X, Kurth-Nelson Z, Tirumala D, et al. Learning to reinforcement learn. arXiv preprint arXiv:1611.05763, 2016.

[24] Flood Sung, Zhang L, Xiang T, Hospedales T, et al. Learning to Learn: Meta-Critic Networks for Sample Efficient Learning. arXiv preprint arXiv:1706.09529, 2017.

[25] Li Z, Zhou F, Chen F, et al. Meta-SGD: Learning to Learn Quickly for Few Shot Learning. arXiv preprint arXiv:1707.09835, 2017.

[26] Mishra N, Rohaninejad M, Chen X, et al. Meta-Learning with Temporal Convolutions. arXiv preprint arXiv:1707.03141, 2017.

[27] Frans K, Ho J, Chen X, et al. Meta Learning Shared Hierarchies. arXiv preprint arXiv:1710.09767, 2017.

[28] Finn C, Yu T, Zhang T, et al. One-shot visual imitation learning via meta-learning. arXiv preprint arXiv:1709.04905, 2017.

[29] Flood Sung, Yongxin Yang, Zhang Li, Xiang T, Philip Torr, Hospedales T, et al. Learning to Compare: Relation Network for Few Shot Learning. arXiv preprint arXiv:1711.06025, 2017.

[30] Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. In Science, volume 350, pp. 1332–1338, 2015.

[32] Xu D, Nair S, Zhu Y, et al. Neural task programming: Learning to generalize across hierarchical tasks. arXiv preprint arXiv:1710.01813, 2017.

[33] Bertinetto, L., Henriques, J. F., Valmadre, J., Torr, P., & Vedaldi, A. (2016). Learning feed-forward one-shot learners. In Advances in Neural Information Processing Systems (pp. 523-531).

[34] Wang, Yu-Xiong, and Martial Hebert. Learning to learn: Model regression networks for easy small sample learning. European Conference on Computer Vision. Springer International Publishing, 2016.

[35] Triantafillou, Eleni, Hugo Larochelle, Jake Snell, Josh Tenenbaum, Kevin Jordan Swersky, Mengye Ren, Richard Zemel, and Sachin Ravi. Meta-Learning for Semi-Supervised Few-Shot Classification. ICLR 2018.

[36] Rabinowitz, Neil C., Frank Perbet, H. Francis Song, Chiyuan Zhang, S. M. Eslami, and Matthew Botvinick. Machine Theory of Mind. arXiv preprint arXiv:1802.07740 (2018).

[37] Reed, Scott, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Eslami, Danilo Rezende, Oriol Vinyals, and Nando de Freitas. Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions. arXiv preprint arXiv:1710.10304 (2017).

[38] Xu, Zhongwen, Hado van Hasselt, and David Silver. Meta-Gradient Reinforcement Learning. arXiv preprint arXiv:1805.09801 (2018).

[39] Xu, Kelvin, Ellis Ratner, Anca Dragan, Sergey Levine, and Chelsea Finn. Learning a Prior over Intent via Meta-Inverse Reinforcement Learning. arXiv preprint arXiv:1805.12573 (2018).

[40] Finn, Chelsea, Kelvin Xu, and Sergey Levine. Probabilistic Model-Agnostic Meta-Learning. arXiv preprint arXiv:1806.02817 (2018).

[41] Gupta, Abhishek, Benjamin Eysenbach, Chelsea Finn, and Sergey Levine. Unsupervised Meta-Learning for Reinforcement Learning. arXiv preprint arXiv:1806.04640 (2018).

[42] Yoon, Sung Whan, Jun Seo, and Jaekyun Moon. Meta Learner with Linear Nulling. arXiv preprint arXiv:1806.01010 (2018).

[43] Kim, Taesup, Jaesik Yoon, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, and Sungjin Ahn. Bayesian Model-Agnostic Meta-Learning. arXiv preprint arXiv:1806.03836 (2018).

[44] Gupta, Abhishek, Russell Mendonca, YuXuan Liu, Pieter Abbeel, and Sergey Levine. Meta-Reinforcement Learning of Structured Exploration Strategies. arXiv preprint arXiv:1802.07245 (2018).

[45] Clavera, Ignasi, Anusha Nagabandi, Ronald S. Fearing, Pieter Abbeel, Sergey Levine, and Chelsea Finn. Learning to Adapt: Meta-Learning for Model-Based Control. arXiv preprint arXiv:1803.11347 (2018).

[46] Houthooft, Rein, Richard Y. Chen, Phillip Isola, Bradly C. Stadie, Filip Wolski, Jonathan Ho, and Pieter Abbeel. Evolved policy gradients. arXiv preprint arXiv:1802.04821 (2018).

[47] Xu, Tianbing, Qiang Liu, Liang Zhao, Wei Xu, and Jian Peng. Learning to Explore with Meta-Policy Gradient. arXiv preprint arXiv:1803.05044 (2018).

[48] Stadie, Bradly C., Ge Yang, Rein Houthooft, Xi Chen, Yan Duan, Yuhuai Wu, Pieter Abbeel, and Ilya Sutskever. Some considerations on learning to explore via meta-reinforcement learning. arXiv preprint arXiv:1803.01118 (2018).
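
3 Code Sketches for Selected Entries

The sketches below are illustrative only, not the authors' reference implementations; task setups, hyperparameters, and function names are assumptions made for the examples.

The core of Matching Networks [7] is a non-parametric classification rule: a query point is labelled by a softmax-weighted vote over the support set, with weights given by cosine similarity in a learned embedding space. A minimal sketch, assuming precomputed embeddings (the real model learns the embedding functions for query and support points):

```python
# Hedged sketch of the attention-based matching rule in Matching Networks [7].
# Embeddings are taken as given here; in the paper they come from learned networks.
import jax.numpy as jnp
from jax.nn import softmax

def cosine_similarity(q, S):
    # q: (d,) query embedding; S: (k, d) support embeddings.
    q_norm = q / (jnp.linalg.norm(q) + 1e-8)
    S_norm = S / (jnp.linalg.norm(S, axis=1, keepdims=True) + 1e-8)
    return S_norm @ q_norm  # (k,) similarities

def matching_net_predict(query_emb, support_emb, support_labels, num_classes):
    # Softmax attention over the support set, then a weighted vote on labels.
    attn = softmax(cosine_similarity(query_emb, support_emb))  # (k,)
    one_hot = jnp.eye(num_classes)[support_labels]             # (k, C)
    return attn @ one_hot                                      # (C,) label distribution

# Toy 3-way, 2-shot example with hand-made "embeddings".
support_emb = jnp.array([[1.0, 0.0], [0.9, 0.1],
                         [0.0, 1.0], [0.1, 0.9],
                         [0.7, 0.7], [0.6, 0.8]])
support_labels = jnp.array([0, 0, 1, 1, 2, 2])
query_emb = jnp.array([0.95, 0.05])
print(matching_net_predict(query_emb, support_emb, support_labels, num_classes=3))
```

MAML [16] optimizes an initialization such that one (or a few) gradient steps on a new task's support set already give low loss on that task's query set; the meta-gradient is taken through the inner adaptation step. A minimal sketch on a toy one-parameter regression family, written with JAX so the second-order term is handled automatically:

```python
# Hedged sketch of the MAML bi-level update [16] on a toy task family y = a * x.
# The scalar model, task distribution, and learning rates are illustrative choices.
import jax
import jax.numpy as jnp

def predict(w, x):
    return w * x  # tiny one-parameter model, purely for illustration

def task_loss(w, x, y):
    return jnp.mean((predict(w, x) - y) ** 2)

def inner_adapt(w, x, y, alpha=0.01):
    # One gradient step on the task's support set (the inner loop).
    return w - alpha * jax.grad(task_loss)(w, x, y)

def maml_objective(w, support, query):
    # Meta-objective: query loss evaluated at the adapted parameters.
    (xs, ys), (xq, yq) = support, query
    return task_loss(inner_adapt(w, xs, ys), xq, yq)

key = jax.random.PRNGKey(0)
w = jnp.array(0.0)   # meta-learned initialization
beta = 0.1           # outer-loop (meta) learning rate
for step in range(100):
    key, k1, k2 = jax.random.split(key, 3)
    a = jax.random.uniform(k1, minval=-2.0, maxval=2.0)  # sample a task (slope)
    x = jax.random.normal(k2, (10,))
    support, query = (x[:5], a * x[:5]), (x[5:], a * x[5:])
    # jax.grad differentiates through inner_adapt, giving the full MAML meta-gradient.
    w = w - beta * jax.grad(maml_objective)(w, support, query)
```

Meta-SGD [25] keeps MAML's outer loop but replaces the fixed scalar inner learning rate with a meta-learned vector of per-parameter step sizes, trained jointly with the initialization. A sketch of just that inner step (names are illustrative):

```python
# Hedged sketch of the Meta-SGD inner update [25]: alpha has the same shape as w
# and is itself a meta-parameter, updated in the outer loop together with w.
def meta_sgd_inner_adapt(w, alpha, grad_on_support):
    return w - alpha * grad_on_support  # elementwise learned learning rates
```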
