Brief History of Machine Learning

My subjective ML timeline

Since the earliest days of science and technology, scientists following Blaise Pascal and Leibniz have pondered a machine as intellectually capable as a human. Famous writers like Jules Verne imagined such machines in their fiction.

Pascal’s machine, performing addition and subtraction – 1642

Machine Learning is one of the important branches of AI and a red-hot subject in both research and industry. Companies and universities devote many resources to advancing it. Recent advances in the field have produced very solid results on different tasks, comparable to human performance (e.g. 98.98% accuracy on traffic-sign recognition, higher than humans).

Here I would like to share a crude timeline of Machine Learning and flag some of its milestones; it is by no means complete. In addition, you should prepend “to the best of my knowledge” to any claim in the text.

The first step toward prevalent ML was taken by Hebb in 1949, based on a neuropsychological learning formulation now called Hebbian Learning theory. Explained simply, it strengthens the correlations between nodes of a Recurrent Neural Network (RNN): it memorizes any commonalities on the network and serves as a memory later. Formally, the argument states:

Let us assume that the persistence or repetition of a reverberatory activity (or “trace”) tends to induce lasting cellular changes that add to its stability.… When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.[1]
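Hebb’s rule can be sketched in a few lines of code: whenever two units are active together, the weight between them grows, so repeated co-activations get memorized in the weights. This is an illustrative sketch, not Hebb’s own formulation; the learning rate `eta` and the two-unit network are hypothetical choices.

```python
def hebbian_update(weights, activations, eta=0.1):
    """One Hebbian step: w_ij += eta * x_i * x_j (no self-connections)."""
    n = len(activations)
    for i in range(n):
        for j in range(n):
            if i != j:
                weights[i][j] += eta * activations[i] * activations[j]
    return weights

# Two units that repeatedly fire together end up strongly connected,
# the "cells that fire together wire together" reading of Hebb's rule.
w = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(10):
    hebbian_update(w, [1.0, 1.0])
print(w[0][1])  # grows with each co-activation (about 1.0 here)
```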

Arthur Samuel

In 1952, Arthur Samuel at IBM developed a program that played Checkers. The program was able to observe board positions and learn an implicit model that suggested better moves in later games. Samuel played many games against the program and observed that it played better and better over time.

With that program, Samuel refuted the general belief that machines cannot go beyond their written code and learn patterns like human beings. He coined the term “machine learning,” which he defined as:

a field of study that gives computers the ability to learn without being explicitly programmed.

F. Rosenblatt

In 1957, Rosenblatt proposed the Perceptron, a second model that again had a neuroscientific background but is more similar to today’s ML models. It was a very exciting discovery at the time, and it was practically more applicable than Hebb’s idea. Rosenblatt introduced the Perceptron with the following lines:

The perceptron is designed to illustrate some of the fundamental properties of intelligent systems in general, without becoming too deeply enmeshed in the special, and frequently unknown, conditions which hold for particular biological organisms.[2]

Three years later, Widrow [4] introduced the Delta Learning rule, which was then used as a practical procedure for Perceptron training. It is also known as the Least Squares problem. The combination of these two ideas yields a good linear classifier. However, the excitement around the Perceptron was dampened by Minsky [3] in 1969. He posed the famous XOR problem and showed that Perceptrons are unable to handle such linearly inseparable data distributions. It was Minsky’s blow to the NN community. Thereafter, NN research would lie dormant until the 1980s.

The XOR problem: a data distribution that is not linearly separable
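Both sides of this story fit in a short sketch: a Rosenblatt-style perceptron with a delta-style error-correction update learns AND (linearly separable) but keeps making mistakes on XOR no matter how long it trains. The update rule and hyperparameters below are illustrative choices, not the historical implementations.

```python
def train_perceptron(data, epochs=100, lr=0.1):
    """Single-layer perceptron; returns (weights, bias, errors in last epoch)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred            # delta-style error correction
            if err != 0:
                errors += 1
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
    return w, b, errors

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print(train_perceptron(AND)[2])  # 0: AND is linearly separable
print(train_perceptron(XOR)[2])  # > 0: XOR never converges
```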

There was not much further effort until the Multi-Layer Perceptron (MLP) was suggested by Werbos [6] in 1981, together with the NN-specific Backpropagation (BP) algorithm, although the core idea of BP had already been proposed by Linnainmaa [5] in 1970 under the name “reverse mode of automatic differentiation”. BP is still a key ingredient of today’s NN architectures. With these new ideas, NN research accelerated again. In 1985–1986, NN researchers successively presented the idea of the MLP with practical BP training (Rumelhart, Hinton, Williams [7]; Hecht-Nielsen [8]).

From Hecht-Nielsen [8]

At the other end of the spectrum, a very well-known ML algorithm was proposed by J. R. Quinlan [9] in 1986: Decision Trees, more specifically the ID3 algorithm. This was the spark of another mainstream of ML. Moreover, ID3 was released as software, and its simple rules and clear inference found more real-life use cases, in contrast to the still black-box NN models.

After ID3, many different alternatives and improvements were explored by the community (e.g. ID4, Regression Trees, CART, …), and it remains one of the active topics in ML.

From Quinlan [9]
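ID3’s core idea is easy to state in code: at each node, pick the attribute whose split yields the largest reduction in label entropy (the information gain). The toy data below is a hypothetical illustration, not Quinlan’s original example.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction from splitting on attribute index `attr` (ID3's criterion)."""
    subsets = {}
    for row, label in zip(rows, labels):
        subsets.setdefault(row[attr], []).append(label)
    remainder = sum(len(s) / len(labels) * entropy(s) for s in subsets.values())
    return entropy(labels) - remainder

# Toy weather data: (outlook, windy) -> play?
rows   = [("sunny", True), ("sunny", False), ("rain", True), ("rain", False)]
labels = ["no", "no", "yes", "yes"]
print(information_gain(rows, labels, 0))  # outlook decides the label: gain 1.0
print(information_gain(rows, labels, 1))  # windy is uninformative: gain 0.0
```

ID3 would split on `outlook` here, since it has the higher gain, and recurse on each branch.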

One of the most important ML breakthroughs was the Support Vector Machine (SVM, originally “Support-Vector Networks”), proposed by Vapnik and Cortes [10] in 1995, with very strong theoretical standing and empirical results. That was the time the ML community split into two crowds: NN advocates and SVM advocates. The competition was not easy for the NN side after the kernelized version of SVM appeared around 2000 (I was not able to find the first paper on the topic); SVM took over many tasks that had previously been occupied by NN models. In addition, SVM could draw on the profound knowledge of convex optimization, generalization-margin theory, and kernels. It therefore received a large push from different disciplines, leading to very rapid theoretical and practical improvements.

From Vapnik and Cortes [10]
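The kernel trick behind SVM’s rise can be illustrated without any solver: a kernel k(x, z) acts as an inner product in an implicit feature space, so a classifier that only uses pairwise similarities can separate data that is hopeless for a linear model in the input space. Below is a deliberately simplified sketch, with uniform coefficients instead of learned support-vector weights, on the XOR configuration; a real SVM would learn the coefficients by convex optimization.

```python
import math

def rbf_kernel(x, z, gamma=1.0):
    """RBF kernel k(x, z) = exp(-gamma * ||x - z||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

# The XOR configuration: not linearly separable in the input space.
train = [((0, 0), -1), ((1, 1), -1), ((0, 1), 1), ((1, 0), 1)]

def score(x):
    """Kernel decision value f(x) = sum_i y_i k(x_i, x), with uniform weights."""
    return sum(y * rbf_kernel(xi, x) for xi, y in train)

# Every training point gets the correct sign, despite no linear separator.
for x, y in train:
    print(x, score(x) > 0, y > 0)
```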

NN took another hit from Hochreiter’s thesis [40] in 1991 and from Hochreiter et al. [11] in 2001, which showed that the gradient vanishes as NN units saturate under BP learning. Simply put, it becomes redundant to train NN units after a certain number of epochs owing to saturated units; hence NNs are very inclined to over-fit within a small number of epochs.

A little earlier, another solid ML model was proposed by Freund and Schapire in 1997: a boosted ensemble of weak classifiers called AdaBoost. This work also won its authors the Gödel Prize. AdaBoost trains a set of weak classifiers, which are easy to train individually, by giving more weight to hard instances. The model is still the basis of many different tasks, such as face recognition and detection. It is also a realization of PAC (Probably Approximately Correct) learning theory. In general, the so-called weak classifiers are chosen as simple decision stumps (single decision-tree nodes). They introduced AdaBoost as:

The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting…[11]
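The reweighting loop at the heart of AdaBoost is compact enough to sketch directly. Each round fits the best threshold stump under the current instance weights, then up-weights the misclassified (hard) instances so the next stump focuses on them. The 1-D dataset and the numerical details (e.g. the epsilon guard) are illustrative choices, not the paper’s setup.

```python
import math

def adaboost(xs, ys, rounds=3):
    """AdaBoost with 1-D threshold stumps; ys are +1/-1 labels.
    Returns a list of weak hypotheses (alpha, threshold, direction)."""
    n = len(xs)
    w = [1.0 / n] * n                       # instance weights
    ensemble = []
    for _ in range(rounds):
        best = None
        for thr in xs:                      # candidate stump thresholds
            for direction in (1, -1):
                preds = [direction if x > thr else -direction for x in xs]
                err = sum(wi for wi, p, y in zip(w, preds, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, thr, direction, preds)
        err, thr, direction, preds = best
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-10))
        ensemble.append((alpha, thr, direction))
        # up-weight the hard (misclassified) instances, then renormalize
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, ys, preds)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    s = sum(a * (d if x > thr else -d) for a, thr, d in ensemble)
    return 1 if s > 0 else -1

# No single stump classifies this perfectly, but three boosted stumps do.
xs = [0, 1, 2, 3, 4]
ys = [-1, -1, 1, 1, -1]
ens = adaboost(xs, ys, rounds=3)
print([predict(ens, x) for x in xs])  # matches ys
```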

Another ensemble model was explored by Breiman [12] in 2001; it ensembles multiple decision trees, where each tree is grown on a random subset of the instances and each node split is selected from a random subset of the features. Owing to this nature, it is called Random Forests (RF). RF also has theoretical and empirical proofs of resilience against over-fitting. Whereas AdaBoost shows weakness against over-fitting and outlier instances in the data, RF is a more robust model against these caveats. (For more detail about RF, refer to my old post.) RF also shows its success in many different settings, such as Kaggle competitions.

Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large.[12]
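The two randomization tricks Breiman combined, bootstrap sampling of the instances (bagging) and a random feature subset at each split, can be sketched with one-split trees standing in for full decision trees. Everything here (the stump stand-in, the toy data, the forest size) is a simplified illustration rather than Breiman’s exact algorithm.

```python
import random
from collections import Counter

def fit_random_stump(rows, labels, n_feats=1):
    """Stand-in for a full tree: one split, chosen from a random feature
    subset, trained on a bootstrap sample of the instances."""
    sample = [random.randrange(len(rows)) for _ in rows]     # bootstrap rows
    feats = random.sample(range(len(rows[0])), n_feats)      # feature bagging
    best = None
    for f in feats:
        for thr in set(rows[i][f] for i in sample):
            left  = [labels[i] for i in sample if rows[i][f] <= thr]
            right = [labels[i] for i in sample if rows[i][f] >  thr]
            if not left or not right:
                continue
            lmaj = Counter(left).most_common(1)[0][0]        # majority label
            rmaj = Counter(right).most_common(1)[0][0]
            err = sum(y != lmaj for y in left) + sum(y != rmaj for y in right)
            if best is None or err < best[0]:
                best = (err, f, thr, lmaj, rmaj)
    if best is None:                                         # degenerate sample
        return (feats[0], rows[0][feats[0]], labels[0], labels[0])
    return best[1:]

def forest_predict(forest, row):
    votes = [lmaj if row[f] <= thr else rmaj for f, thr, lmaj, rmaj in forest]
    return Counter(votes).most_common(1)[0][0]

random.seed(0)
rows   = [(0, 5), (1, 4), (2, 6), (7, 1), (8, 2), (9, 0)]   # two clear clusters
labels = ["a", "a", "a", "b", "b", "b"]
forest = [fit_random_stump(rows, labels) for _ in range(25)]
print(forest_predict(forest, (0, 5)), forest_predict(forest, (9, 0)))
```

Individual randomized stumps can be wrong; the majority vote over the forest smooths these errors out, which is exactly the robustness argument for RF.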

As we come closer to the present day, a new era of NN called Deep Learning has commenced. This phrase simply refers to NN models with many wide, successive layers. The third rise of NN began roughly in 2005 with the conjunction of many different discoveries, past and present, by recent mavens Hinton, LeCun, Bengio, Andrew Ng, and other valued senior researchers. I have listed some of the important headings below (I expect to dedicate a complete post to Deep Learning specifically):

  • GPU programming
  • Convolutional NNs [18][20][40]
    • Deconvolutional Networks [21]
  • Optimization algorithms
    • Stochastic Gradient Descent [19][22]
    • BFGS and L-BFGS [23]
    • Conjugate Gradient Descent [24]
    • Backpropagation [40][19]
  • Rectifier Units
  • Sparsity [15][16]
  • Dropout Nets [26]
    • Maxout Nets  [25]
  • Unsupervised NN models [14]
    • Deep Belief Networks [13]
    • Stacked Auto-Encoders [16][39]
    • Denoising NN models [17]
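Of the listed ingredients, Stochastic Gradient Descent is the easiest to demystify in code: instead of computing the gradient over the full dataset, take a small noisy step after every single example. A minimal sketch on a linear model follows; the learning rate, epoch count, and toy data are arbitrary illustrative choices.

```python
import random

def sgd_fit_line(data, lr=0.01, epochs=200, seed=0):
    """Fit y = w*x + b by SGD on the squared loss, one example at a time."""
    random.seed(seed)
    w, b = 0.0, 0.0
    data = list(data)
    for _ in range(epochs):
        random.shuffle(data)                # visit examples in random order
        for x, y in data:
            grad = (w * x + b) - y          # d/dpred of 0.5 * (pred - y)^2
            w -= lr * grad * x              # chain rule through the model
            b -= lr * grad
    return w, b

# Recover the line y = 2x + 1 from exact samples.
w, b = sgd_fit_line((x, 2 * x + 1) for x in range(-5, 6))
print(round(w, 3), round(b, 3))  # converges to about 2 and 1
```

The same per-example update, pushed through many layers by backpropagation, is what makes training deep models on large datasets tractable.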

With the combination of all these ideas, and others not listed, NN models became able to beat the state of the art at very different tasks, such as object recognition, speech recognition, and NLP. However, it should be noted that this absolutely does not mean the end of the other ML streams. Even as Deep Learning success stories pile up rapidly, there are many criticisms directed at the training cost of these models and at tuning their exogenous parameters. Moreover, SVM is still used more commonly owing to its simplicity. (So I say, though it may cause a huge debate.)

Before finishing, I need to touch on one more relatively young ML trend. With the growth of the WWW and social media, a new term, BigData, emerged and affected ML research wildly. Because of the large problems arising from BigData, many strong ML algorithms became useless for reasonably sized systems (though not for the giant tech companies, of course). Hence, researchers came up with a new set of simple models, dubbed Bandit Algorithms [27–38] (formally grounded in Online Learning), that make learning easier and more adaptable for large-scale problems.
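A flavor of these bandit/online methods in a few lines: the epsilon-greedy strategy keeps only a running mean per arm (nothing is stored or re-scanned, which is the point for large-scale problems) and spends a small fraction of steps exploring. The arm payoffs and parameters below are made up for illustration.

```python
import random

def epsilon_greedy(true_means, steps=10000, eps=0.1, seed=42):
    """Epsilon-greedy bandit: explore with probability eps, else exploit."""
    random.seed(seed)
    n = len(true_means)
    counts, estimates = [0] * n, [0.0] * n
    for _ in range(steps):
        if random.random() < eps:
            arm = random.randrange(n)                        # explore
        else:
            arm = max(range(n), key=lambda a: estimates[a])  # exploit
        reward = random.gauss(true_means[arm], 1.0)          # noisy payoff
        counts[arm] += 1
        # incremental mean update: constant memory, fully online
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return counts, estimates

counts, estimates = epsilon_greedy([0.1, 0.5, 0.9])
print(counts.index(max(counts)))  # the best arm (index 2) dominates the pulls
```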

I would like to conclude this infant sketch of ML history here. If you found something wrong (you should have), insufficient, or unreferenced, please don’t hesitate to warn me in any manner.

References

[1] Hebb, D. O. The Organization of Behavior. New York: Wiley & Sons, 1949.

[2] Rosenblatt, Frank. “The perceptron: a probabilistic model for information storage and organization in the brain.”  Psychological review 65.6 (1958): 386.

[3] Minsky, Marvin, and Papert Seymour. “Perceptrons.” (1969).

[4] Widrow, B., and Hoff, M. E. “Adaptive switching circuits.” (1960): 96-104.

[5] Linnainmaa, S. The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master’s thesis, Univ. Helsinki, 1970.

[6] Werbos, P. J. Applications of advances in nonlinear sensitivity analysis. In Proceedings of the 10th IFIP Conference, 31.8–4.9, NYC, pages 762-770, 1981.

[7] Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. Learning internal representations by error propagation. No. ICS-8506. Institute for Cognitive Science, University of California San Diego, 1985.

[8] Hecht-Nielsen, Robert. “Theory of the backpropagation neural network.” International Joint Conference on Neural Networks (IJCNN), IEEE, 1989.

[9]  Quinlan, J. Ross. “Induction of decision trees.”  Machine learning 1.1 (1986): 81-106.

[10]  Cortes, Corinna, and Vladimir Vapnik. “Support-vector networks.”  Machine learning 20.3 (1995): 273-297.

[11]  Freund, Yoav, Robert Schapire, and N. Abe. “A short introduction to boosting.” Journal-Japanese Society For Artificial Intelligence 14.771-780 (1999): 1612.

[12]  Breiman, Leo. “Random forests.”  Machine learning 45.1 (2001): 5-32.

[13]  Hinton, Geoffrey E., Simon Osindero, and Yee-Whye Teh. “A fast learning algorithm for deep belief nets.”  Neural computation 18.7 (2006): 1527-1554.

[14] Bengio, Y., Lamblin, P., Popovici, D., and Larochelle, H. “Greedy Layer-Wise Training of Deep Networks”, NIPS 2006.

[15] Ranzato, M., Poultney, C., Chopra, S., and LeCun, Y. “Efficient Learning of Sparse Representations with an Energy-Based Model”, NIPS 2006.

[16] Olshausen, B. A., and Field, D. J. “Sparse coding with an overcomplete basis set: a strategy employed by V1?” Vision Research 37.23 (1997): 3311-3325. Available at: http://www.ncbi.nlm.nih.gov/pubmed/9425546.

[17] Vincent, P., Larochelle, H., Bengio, Y., and Manzagol, P.-A. “Extracting and Composing Robust Features with Denoising Autoencoders”, Proceedings of the Twenty-fifth International Conference on Machine Learning (ICML ’08), pages 1096-1103, ACM, 2008.

[18]  Fukushima, K. (1980). Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36, 193–202.

[19]  LeCun, Yann, et al. “Gradient-based learning applied to document recognition.” Proceedings of the IEEE 86.11 (1998): 2278-2324.

[20] LeCun, Yann, and Yoshua Bengio. “Convolutional networks for images, speech, and time series.” The Handbook of Brain Theory and Neural Networks 3361 (1995).

[21]  Zeiler, Matthew D., et al. “Deconvolutional networks.”  Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010.

[22] S. Vishwanathan, N. Schraudolph, M. Schmidt, and K. Mur- phy. Accelerated training of conditional random fields with stochastic meta-descent. In International Conference on Ma- chine Learning (ICML ’06), 2006.

[23] Nocedal, J. (1980). “Updating Quasi-Newton Matrices with Limited Storage.” Mathematics of Computation 35 (151): 773-782. doi:10.1090/S0025-5718-1980-0572855-

[24] S. Yun and K.-C. Toh, “A coordinate gradient descent method for l1- regularized convex minimization,” Computational Optimizations and Applications, vol. 48, no. 2, pp. 273–307, 2011.

[25] Goodfellow, I., Warde-Farley, D., et al. “Maxout networks.” arXiv preprint arXiv:1302.4389 (2013). Available at: http://arxiv.org/abs/1302.4389.

[26] Wan, L., Zeiler, M., et al. “Regularization of neural networks using DropConnect.” ICML 2013. Available at: http://machinelearning.wustl.edu/mlpapers/papers/icml2013_wan13.

[27]  Alekh Agarwal ,  Olivier Chapelle ,  Miroslav Dudik ,  John Langford ,  A Reliable Effective Terascale Linear Learning System , 2011

[28]  M. Hoffman ,  D. Blei ,  F. Bach ,  Online Learning for Latent Dirichlet Allocation , in Neural Information Processing Systems (NIPS) 2010.

[29]  Alina Beygelzimer ,  Daniel Hsu ,  John Langford , and  Tong Zhang   Agnostic Active Learning Without Constraints  NIPS 2010.

[30]  John Duchi ,  Elad Hazan , and  Yoram Singer ,  Adaptive Subgradient Methods for Online Learning and Stochastic Optimization , JMLR 2011 & COLT 2010.

[31]  H. Brendan McMahan ,  Matthew Streeter ,  Adaptive Bound Optimization for Online Convex Optimization , COLT 2010.

[32]  Nikos Karampatziakis  and  John Langford ,  Importance Weight Aware Gradient Updates  UAI 2010.

[33]  Kilian Weinberger ,  Anirban Dasgupta ,  John Langford ,  Alex Smola ,  Josh Attenberg ,  Feature Hashing for Large Scale Multitask Learning , ICML 2009.

[34]  Qinfeng Shi ,  James Petterson ,  Gideon Dror ,  John Langford ,  Alex Smola , and  SVN Vishwanathan , Hash Kernels for Structured Data , AISTAT 2009.

[35]  John Langford ,  Lihong Li , and  Tong Zhang ,  Sparse Online Learning via Truncated Gradient , NIPS 2008.

[36]  Leon Bottou ,  Stochastic Gradient Descent , 2007.

[37]  Avrim Blum ,  Adam Kalai , and  John Langford   Beating the Holdout: Bounds for KFold and Progressive Cross-Validation . COLT99 pages 203-208.

[38]  Nocedal, J.  (1980). “Updating Quasi-Newton Matrices with Limited Storage”. Mathematics of Computation 35: 773–782.

[39] D. H. Ballard. Modular learning in neural networks. In AAAI, pages 279–284, 1987.

[40] Hochreiter, S. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Institut für Informatik, Lehrstuhl Prof. Brauer, Technische Universität München, 1991. Advisor: J. Schmidhuber.
