Contents

Generalization: Peril of Overfitting

Low loss, but still a bad model?

How Do We Know If Our Model Is Good?

The ML Fine Print

Three basic assumptions

Summary

Glossary

Training and Test Sets

What If We Only Have One Data Set?

Splitting Data

Validation

Check Your Intuition

Validation

Another Partition

Tip

Programming Exercise


Generalization: Peril of Overfitting

  • Develop intuition about overfitting.
  • Determine whether a model is good or not.
  • Divide a data set into a training set and a test set.

Low loss, but still a bad model?

The model overfits the peculiarities of the data it trained on.

An overfit model gets a low loss during training but does a poor job predicting new data.

Overfitting is caused by making a model more complex than necessary.

The fundamental tension of machine learning is between fitting our data well and fitting the data as simply as possible.
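To see the peril concretely, here is a minimal sketch, assuming NumPy and scikit-learn are available; the synthetic data and the choice of polynomial degrees are illustrative, not part of the course material:

```python
# Compare a simple model with a needlessly complex one on the same data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=30)       # noisy training data

X_new = rng.uniform(-3, 3, size=(30, 1))                     # a fresh sample
y_new = np.sin(X_new).ravel() + rng.normal(scale=0.3, size=30)

for degree in (1, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X, y)
    train_mse = mean_squared_error(y, model.predict(X))
    new_mse = mean_squared_error(y_new, model.predict(X_new))
    print(f"degree={degree:2d}  train MSE={train_mse:.3f}  new-data MSE={new_mse:.3f}")
```

On a typical run, the degree-15 model reports a much lower training MSE than the degree-1 model yet a higher MSE on the fresh sample: low loss, but still a bad model.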


How Do We Know If Our Model Is Good?

  • Theoretically:

    • Interesting field: generalization theory
    • Based on ideas of measuring model simplicity / complexity
  • Intuition: formalization of Occam's Razor principle
    • The less complex a model is, the more likely that a good empirical result is not just due to the peculiarities of our sample
  • Empirically:
    • Asking: will our model do well on a new sample of data?
    • Evaluate: get a new sample of data, called the test set
    • Good performance on the test set is a useful indicator of good performance on the new data in general:
      • If the test set is large enough
      • If we don't cheat by using the test set over and over

The ML Fine Print

Three basic assumptions

  1. We draw examples independently and identically (i.i.d.) at random from the distribution
  2. The distribution is stationary: It doesn't change over time
  3. We always pull from the same distribution: Including training, validation, and test sets

In practice, we sometimes violate these assumptions. For example:

  • Consider a model that chooses ads to display. The i.i.d. assumption would be violated if the model bases its choice of ads, in part, on what ads the user has previously seen.
  • Consider a data set that contains retail sales information for a year. Users' purchases change seasonally, which would violate stationarity.

When we know that any of the preceding three basic assumptions is violated, we must pay careful attention to metrics.
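One practical check when these assumptions are in doubt is to compare a feature's distribution across splits. A hedged sketch, assuming SciPy is available; the arrays here are synthetic stand-ins for a real feature:

```python
# Two-sample Kolmogorov-Smirnov test: do the training and test splits look
# like draws from the same distribution? The shifted sample simulates a violation.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
train_feature = rng.normal(loc=0.0, scale=1.0, size=1000)
test_feature = rng.normal(loc=0.5, scale=1.0, size=1000)   # shifted on purpose

stat, p_value = ks_2samp(train_feature, test_feature)
if p_value < 0.01:
    print(f"Distributions differ (KS={stat:.3f}, p={p_value:.1e}); "
          "the same-distribution assumption may be violated.")
```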


Summary

  • Overfitting occurs when a model tries to fit the training data so closely that it does not generalize well to new data.

  • If the key assumptions of supervised ML are not met, then we lose important theoretical guarantees on our ability to predict on new data.

Glossary

  • generalization: Refers to your model's ability to adapt properly to new, previously unseen data, drawn from the same distribution as the one used to create the model.

  • overfitting: Occurs when a model tries to fit the training data so closely that it does not generalize well to new data.

  • stationarity: A property of data in a data set, in which the data distribution stays constant across one or more dimensions. Most commonly, that dimension is time, meaning that data exhibiting stationarity doesn't change over time. For example, data that exhibits stationarity doesn't change from September to December.

  • training set: The subset of the data set used to train a model.
  • validation set: A subset of the data set, disjoint from the training set, that you use to adjust hyperparameters.
  • test set: The subset of the data set that you use to test your model after the model has gone through initial vetting by the validation set.

Training and Test Sets

  • Examine the benefits of dividing a data set into a training set and a test set.

What If We Only Have One Data Set?

  • Divide into two sets:

    • training set
    • test set
  • Classic gotcha: do not train on test data
    • Getting surprisingly low loss?
    • Before celebrating, check if you're accidentally training on test data

Splitting Data

Make sure that your test set meets the following two conditions:

  • Is large enough to yield statistically meaningful results.
  • Is representative of the data set as a whole. In other words, don't pick a test set with different characteristics than the training set.

Assuming that your test set meets the preceding two conditions, your goal is to create a model that generalizes well to new data. The test set serves as a proxy for new data.
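A minimal sketch of such a split, assuming pandas and scikit-learn; the file name and the `label` column are hypothetical:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("examples.csv")   # hypothetical input file

train_df, test_df = train_test_split(
    df,
    test_size=0.2,         # keep the test set large enough (condition 1)
    stratify=df["label"],  # keep class proportions representative (condition 2)
    random_state=42,
)
```

Stratifying on the label is one way to keep a classification test set representative of the data set as a whole.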

For example, suppose the model learned for the training data is very simple. It doesn't do a perfect job: a few predictions are wrong. However, this model does about as well on the test data as it does on the training data. In other words, this simple model does not overfit the training data.

Never train on test data. If you are seeing surprisingly good results on your evaluation metrics, it might be a sign that you are accidentally training on the test set. For example, high accuracy might indicate that test data has leaked into the training set.

For example, consider a model that predicts whether an email is spam, using the subject line, email body, and sender's email address as features. We apportion the data into training and test sets, with an 80-20 split. After training, the model achieves 99% precision on both the training set and the test set. We'd expect a lower precision on the test set, so we take another look at the data and discover that many of the examples in the test set are duplicates of examples in the training set (we neglected to scrub duplicate entries for the same spam email from our input database before splitting the data). We've inadvertently trained on some of our test data, and as a result, we're no longer accurately measuring how well our model generalizes to new data.
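A hedged sketch of the fix, scrubbing duplicates before splitting; the file and column names are hypothetical:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

emails = pd.read_csv("emails.csv")  # hypothetical spam data set

# Scrub duplicate entries for the same email before splitting.
emails = emails.drop_duplicates(subset=["subject", "body", "sender"])
train_df, test_df = train_test_split(emails, test_size=0.2, random_state=42)

# Sanity check: no example should appear in both splits.
overlap = pd.merge(train_df, test_df, on=["subject", "body", "sender"])
assert overlap.empty, "test data has leaked into the training set"
```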


Validation

Check Your Intuition

We looked at a process of using a test set and a training set to drive iterations of model development. On each iteration, we'd train on the training data and evaluate on the test data, using the evaluation results on test data to guide choices of and changes to various model hyperparameters like learning rate and features. Is there anything wrong with this approach? (Pick only one answer.)

  • (F) This is computationally inefficient. We should just pick a default set of hyperparameters and live with them to save resources.

Although these sorts of iterations are expensive, they are a critical part of model development. Hyperparameter settings can make an enormous difference in model quality, and we should always budget some amount of time and computational resources to ensure we're getting the best quality we can.

  • (T) Doing many rounds of this procedure might cause us to implicitly fit to the peculiarities of our specific test set.

Yes indeed! The more often we evaluate on a given test set, the more we risk implicitly overfitting to that one test set. We'll look at a better protocol next.

  • (F) Totally fine; we're training on training data and evaluating on separate, held-out test data.

Actually, there's a subtle issue here. Think about what might happen if we did many, many iterations of this form.


Validation

  • Understand the importance of a validation set in a partitioning scheme.

Partitioning a data set into a training set and test set lets you judge whether a given model will generalize well to new data. However, using only two partitions may be insufficient when doing many rounds of hyperparameter tuning.


Another Partition

The previous module introduced partitioning a data set into a training set and a test set. This partitioning enabled you to train on one set of examples and then to test the model against a different set of examples.

With two partitions, the workflow looks like this: train the model on the training set, evaluate it on the test set, tweak the model according to the results on the test set, and repeat.

Here, "tweak model" means adjusting anything about the model you can dream up, from changing the learning rate, to adding or removing features, to designing a completely new model from scratch. At the end of this workflow, you pick the model that does best on the test set.

Dividing the data set into two sets is a good idea, but not a panacea. You can greatly reduce your chances of overfitting by partitioning the data set into three subsets: a training set, a validation set, and a test set.

Use the validation set to evaluate results from the training set. Then, use the test set to double-check your evaluation after the model has "passed" the validation set.

In this improved workflow:

  1. Pick the model that does best on the validation set.
  2. Double-check that model against the test set.

This is a better workflow because it creates fewer exposures to the test set.
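A sketch of this workflow, assuming scikit-learn; the synthetic data and the candidate values of `C` are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)

# 60% training, 20% validation, 20% test.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

best_model, best_score = None, -1.0
for c in (0.01, 0.1, 1.0, 10.0):                       # "tweak model"
    model = LogisticRegression(C=c, max_iter=1000).fit(X_train, y_train)
    score = model.score(X_val, y_val)                  # 1. pick best on validation
    if score > best_score:
        best_model, best_score = model, score

print("validation accuracy:", best_score)
print("test accuracy:", best_model.score(X_test, y_test))  # 2. double-check once
```

The test set is touched exactly once, after the validation set has picked the winner, so it stays a trustworthy proxy for new data.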

Tip

  • Test sets and validation sets "wear out" with repeated use. That is, the more you use the same data to make decisions about hyperparameter settings or other model improvements, the less confidence you'll have that these results actually generalize to new, unseen data. Note that validation sets typically wear out more slowly than test sets.
  • If possible, it's a good idea to collect more data to "refresh" the test set and validation set. Starting anew is a great reset.

Programming Exercise

Learning Objectives (a rough sketch follows this list):

  • Use multiple features, instead of a single feature, to further improve the effectiveness of a model
  • Debug issues in model input data
  • Use a test data set to check if a model is overfitting the validation data
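A rough sketch in the spirit of these objectives, assuming pandas and scikit-learn; the file name, column names, and features are hypothetical:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("housing.csv")               # hypothetical data set
print(df.describe())                          # debug: scan for odd mins, maxes, counts
df = df.dropna()                              # debug: drop rows with missing values

features = ["rooms", "income", "latitude"]    # multiple features, not just one
train_df, rest = train_test_split(df, test_size=0.4, random_state=0)
val_df, test_df = train_test_split(rest, test_size=0.5, random_state=0)

model = LinearRegression().fit(train_df[features], train_df["price"])
for name, part in (("train", train_df), ("validation", val_df), ("test", test_df)):
    mse = mean_squared_error(part["price"], model.predict(part[features]))
    print(f"{name} MSE: {mse:.2f}")
# If validation error drifts well below test error after many tuning rounds,
# the model may be overfitting the validation data.
```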
