Introduction to Machine Learning
Chapter 1 Introduction
1.1 What Is Machine Learning?
To solve a problem on a computer, we need an algorithm. An algorithm is a sequence of instructions that should be carried out to transform the input to output. For example, one can devise an algorithm for sorting. The input is a set of numbers and the output is their ordered list. For the same task, there may be various algorithms and we may be interested in finding the most efficient one, requiring the least number of instructions or memory or both.
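As a concrete illustration of this point, here is a minimal sketch in Python (the choice of language and of insertion sort is ours, not the text's): the same input/output specification can be met by different algorithms with different costs.

```python
def insertion_sort(numbers):
    """Sort a list of numbers by repeated insertion.

    Simple but O(n^2) comparisons in the worst case; Python's built-in
    sorted() meets the same input/output specification with a more
    efficient O(n log n) algorithm.
    """
    result = list(numbers)              # work on a copy
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]   # shift larger elements right
            j -= 1
        result[j + 1] = key
    return result

print(insertion_sort([5, 2, 9, 1]))   # [1, 2, 5, 9]
print(sorted([5, 2, 9, 1]))           # same output, different algorithm
```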
For some tasks, however, we do not have an algorithm - for example, to tell spam emails from legitimate emails. We know what the input is: an email document that in the simplest case is a file of characters. We know what the output should be: a yes/no output indicating whether the message is spam or not. We do not know how to transform the input to the output. What can be considered spam changes in time and from individual to individual.
What we lack in knowledge, we make up for in data. We can easily compile thousands of example messages, some of which we know to be spam, and what we want is to “learn” what constitutes spam from them. In other words, we would like the computer (machine) to extract automatically the algorithm for this task. There is no need to learn to sort numbers; we already have algorithms for that. But there are many applications for which we do not have an algorithm but do have example data.
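The text does not give an algorithm here; as a toy illustration of learning from labeled examples (our own sketch, with invented messages, not the book's method), one could count how often each word appears in spam versus legitimate messages and score new messages with those counts — a crude stand-in for the learning methods developed later.

```python
from collections import Counter

# Toy labeled examples (invented for illustration): 1 = spam, 0 = legitimate.
examples = [
    ("win a free prize now", 1),
    ("free money click now", 1),
    ("meeting agenda for monday", 0),
    ("lunch with the project team", 0),
]

spam_counts, ham_counts = Counter(), Counter()
for text, label in examples:
    (spam_counts if label == 1 else ham_counts).update(text.split())

def spam_score(text):
    """Score a message by how many of its words are more frequent in spam."""
    words = text.split()
    votes = sum(1 if spam_counts[w] > ham_counts[w] else -1 for w in words)
    return votes / len(words)

print(spam_score("free prize inside"))        # positive: looks like spam
print(spam_score("team meeting on monday"))   # negative: looks legitimate
```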
With advances in computer technology, we currently have the ability to store and process large amounts of data, as well as to access it from physically distant locations over a computer network. Most data acquisition devices are digital now and record reliable data. Think, for example, of a supermarket chain that has hundreds of stores all over a country selling thousands of goods to millions of customers. The point of sale terminals record the details of each transaction: date, customer identification code, goods bought and their amount, total money spent, and so forth. This typically amounts to gigabytes of data every day. What the supermarket chain wants is to be able to predict who the likely customers for a product are. Again, the algorithm for this is not evident; it changes in time and by geographic location. The stored data becomes useful only when it is analyzed and turned into information that we can make use of, for example, to make predictions.
We may not be able to identify the process completely, but we believe we can construct a good and useful approximation. That approximation may not explain everything, but may still be able to account for some part of the data. We believe that though identifying the complete process may not be possible, we can still detect certain patterns or regularities. This is the niche of machine learning. Such patterns may help us understand the process, or we can use those patterns to make predictions: Assuming that the future, at least the near future, will not be much different from the past when the sample data was collected, the future predictions can also be expected to be right.
Application of machine learning methods to large databases is called data mining. The analogy is that a large volume of earth and raw material is extracted from a mine, which when processed leads to a small amount of very precious material; similarly, in data mining, a large volume of data is processed to construct a simple model with valuable use, for example, having high predictive accuracy. Its application areas are abundant: In addition to retail, in finance banks analyze their past data to build models to use in credit applications, fraud detection, and the stock market.
1.2.5 Reinforcement Learning
In some applications, the output of the system is a sequence of actions. In such a case, a single action is not important; what is important is the policy, that is, the sequence of correct actions to reach the goal. There is no such thing as the best action in any intermediate state; an action is good if it is part of a good policy. In such a case, the machine learning program should be able to assess the goodness of policies and learn from past good action sequences to be able to generate a policy. Such learning methods are called reinforcement learning algorithms.
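As a minimal sketch of the idea (the tiny chain environment and the tabular Q-learning rule are our own illustrative choices, not the book's): no single step is rewarded except reaching the goal, yet the learned policy moves toward the goal in every state.

```python
import random

# Toy chain environment: states 0..4, goal at state 4 (invented for illustration).
N_STATES, GOAL, ACTIONS = 5, 4, [-1, +1]   # move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    s = 0
    while s != GOAL:
        if random.random() < epsilon:
            a = random.choice(ACTIONS)                      # explore
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])   # exploit
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0                  # reward only at the goal
        # Q-learning update: an action's value reflects the whole policy that follows it
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, b)] for b in ACTIONS) - Q[(s, a)])
        s = s_next

# The learned policy: the best action in each non-goal state points toward the goal.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
```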
Chapter 2 Supervised Learning
We discuss supervised learning starting from the simplest case, which is learning a class from its positive and negative examples. We generalize and discuss the case of multiple classes, then regression, where the outputs are continuous.
2.1 Learning a Class from Examples
Let us say we want to learn the class, C, of a “family car”. We have a set of examples of cars, and we have a group of people that we survey to whom we show these cars. The people look at the cars and label them; the cars that they believe are family cars are positive examples, and the other cars are negative examples. Class learning is finding a description that is shared by all positive examples and none of the negative examples. Doing this, we can make a prediction: Given a car that we have not seen before, by checking with the description learned, we will be able to say whether it is a family car or not. Or we can do knowledge extraction: This study may be sponsored by a car company, and the aim may be to understand what people expect from a family car.
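Here is a minimal sketch of one way to do this (the two numeric features and the axis-aligned-rectangle description are our own simplification for illustration, and the data are invented): take the tightest rectangle that contains all the positive examples and predict “family car” for anything that falls inside it.

```python
# Each car is described by two numeric features (e.g. price, engine power);
# the feature choice and the labeled data are invented for illustration.
positives = [(12.0, 90), (15.0, 110), (18.0, 100)]   # labeled family cars
negatives = [(40.0, 220), (8.0, 60)]                 # labeled not family cars

# Candidate description: the tightest axis-aligned rectangle around the positives.
x_lo = min(p[0] for p in positives); x_hi = max(p[0] for p in positives)
y_lo = min(p[1] for p in positives); y_hi = max(p[1] for p in positives)

def predict(car):
    """Return True if the car falls inside the learned rectangle."""
    x, y = car
    return x_lo <= x <= x_hi and y_lo <= y <= y_hi

print(predict((14.0, 95)))   # True: consistent with the positive examples
print(predict((40.0, 220)))  # False: outside the learned description
```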
Chapter 3 Bayesian Decision Theory
We discuss probability theory as the framework for making decisions under uncertainty. In classification, Bayes' rule is used to calculate the probabilities of the classes. We generalize to discuss how we can make rational decisions among multiple actions to minimize expected risk. We also discuss learning association rules from data.
3.1 Introduction
Programming computers to make inference from data is a cross between statistics and computer science, where statisticians provide the mathematical framework of making inference from data and computer scientists work on the efficient implementation of the inference methods.
Data comes from a process that is not completely known. This lack of knowledge is indicated by modeling the process as a random process. Maybe the process is actually deterministic, but because we do not have access to complete knowledge about it, we model it as random and use probability theory to analyze it. At this point, it may be a good idea to jump to the appendix and review basic probability theory before continuing with this chapter.
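As a minimal numeric sketch of the Bayes' rule calculation mentioned in the chapter summary (the priors and likelihoods below are made-up numbers, not from the text): the posterior probability of each class given an observation is the prior times the likelihood, normalized by the evidence.

```python
# Two classes with made-up priors P(C) and likelihoods p(x | C)
# evaluated at a single observed x; the numbers are purely illustrative.
prior = {"C1": 0.6, "C2": 0.4}
likelihood = {"C1": 0.2, "C2": 0.5}

# Bayes' rule: P(C | x) = p(x | C) P(C) / p(x)
evidence = sum(prior[c] * likelihood[c] for c in prior)             # p(x)
posterior = {c: prior[c] * likelihood[c] / evidence for c in prior}

print(posterior)                           # {'C1': 0.375, 'C2': 0.625}
print(max(posterior, key=posterior.get))   # choose the most probable class
```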
Chapter 4 Parametric Methods
Having discussed how to make optimal decisions when the uncertainty is modeled using probabilities, we now see how we can estimate these probabilities from a given training set. We start with the parametric approach for classification and regression. We discuss the semiparametric and nonparametric approaches in later chapters. We introduce the bias/variance dilemma and model selection methods for trading off model complexity and empirical error.
4.1 Introduction
A statistic is any value that is calculated from a given sample. In statistical inference, we make a decision using the information provided by a sample.
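For example (a minimal sketch assuming a Gaussian model for the sample; the data are invented): the sample mean and sample variance are statistics computed from the sample, and under the parametric approach they serve as maximum likelihood estimates of the distribution's parameters.

```python
import math

sample = [2.1, 1.9, 2.4, 2.0, 2.6]   # invented training sample

# Maximum likelihood estimates of the Gaussian parameters from the sample.
n = len(sample)
mean = sum(sample) / n
variance = sum((x - mean) ** 2 for x in sample) / n   # ML uses 1/n, not 1/(n-1)

print(f"estimated mean = {mean:.3f}, estimated std = {math.sqrt(variance):.3f}")
```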