Original post: http://googleresearch.blogspot.jp/2010/04/lessons-learned-developing-practical.html

Lessons learned developing a practical large scale machine learning system

Tuesday, April 06, 2010
Posted by Simon Tong, Google Research

When faced with a hard prediction problem, one possible approach is to attempt to perform statistical miracles on a small training set. If data is abundant then often a more fruitful approach is to design a highly scalable learning system and use several orders of magnitude more training data.

This general notion recurs in many other fields as well. For example, processing large quantities of data helps immensely for information retrieval and machine translation.

Several years ago we began developing a large scale machine learning system, and have been refining it over time. We gave it the codename “Seti” because it searches for signals in a large space. It scales to massive data sets and has become one of the most broadly used classification systems at Google.

After building a few initial prototypes, we quickly settled on a system with the following properties:

    • Binary classification (produces a probability estimate of the class label)
    • Parallelized
    • Scales to process hundreds of billions of instances and beyond
    • Scales to billions of features and beyond
    • Automatically identifies useful combinations of features
    • Accuracy is competitive with state-of-the-art classifiers
    • Reacts to new data within minutes
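
To make the shape of such a system concrete, here is a minimal sketch of the kind of learner this list describes: an online logistic-regression binary classifier that hashes pairwise feature crosses into a fixed-size weight table. The post never reveals Seti's actual algorithm, so everything below (the class name, the hashing trick, the cross generation) is an illustrative assumption, not a description of Seti.

    import math

    class OnlineBinaryClassifier:
        """Illustrative only: an online logistic-regression learner with
        hashed pairwise feature crosses. Not Seti's actual algorithm."""

        def __init__(self, num_buckets=2**20, learning_rate=0.05):
            self.num_buckets = num_buckets        # fixed-size weight table bounds memory
            self.learning_rate = learning_rate
            self.weights = [0.0] * num_buckets

        def _indices(self, features):
            # Base features plus pairwise crosses ("combinations of features"),
            # all hashed into the weight table. Python's hash() is randomized
            # per process, which is fine for a sketch.
            feats = sorted(features)
            keys = list(feats)
            for i in range(len(feats)):
                for j in range(i + 1, len(feats)):
                    keys.append(feats[i] + "&" + feats[j])
            return [hash(k) % self.num_buckets for k in keys]

        def predict(self, features):
            # A probability estimate of the positive class label.
            score = sum(self.weights[i] for i in self._indices(features))
            return 1.0 / (1.0 + math.exp(-max(min(score, 35.0), -35.0)))

        def update(self, features, label):
            # One stochastic-gradient step per instance; updating online is
            # what lets a learner react to new data within minutes.
            step = self.learning_rate * (label - self.predict(features))
            for i in self._indices(features):
                self.weights[i] += step

    clf = OnlineBinaryClassifier()
    clf.update({"query:camera", "country:jp"}, 1)
    print(clf.predict({"query:camera", "country:jp"}))  # probability, now above 0.5

Scaling such a learner to hundreds of billions of instances (sharding the data and the weight table across many machines) is where the real engineering effort goes, and the post does not describe how Seti parallelizes it.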

Seti’s accuracy appears to be pretty decent. For example, tests on standard smaller datasets indicate that it is comparable with modern classifiers.

Seti has the flexibility to be used on a broad range of training set and feature set sizes. These sizes are substantially larger than those typically used in academia (e.g., the largest UCI dataset has 4 million instances). A sample of the data sets used with Seti gives the following statistics:

           Training set size    Unique features
  Mean     100 Billion          1 Billion
  Median   1 Billion            10 Million

A good machine learning system is all about accuracy, right?

In the process of designing Seti we made plenty of mistakes. However, we made some good key decisions as well. Here are a few of the practical lessons that we learned. Some are obvious in hindsight, but we did not necessarily realize their importance at the time.

Lesson: Keep it simple (even at the expense of a little accuracy).

Having good accuracy across a variety of domains is very important, and we were tempted to focus exclusively on this aspect of the algorithm. However, in a practical system there are several other aspects of an algorithm that are equally critical:

    • Ease of use: Teams are more willing to experiment with a machine learning system that is simple to set up and use. Those teams are not necessarily die-hard machine learning experts, and so they do not want to waste much time figuring out how to get a system up and running.
    • System reliability: Teams are much more willing to deploy a reliable machine learning system in a live environment. They want a system that is dependable and unlikely to crash or need constant attention. Early versions of Seti had marginally better accuracy on large data sets, but were complex, stressed the network and GFS architecture considerably, and needed constant babysitting. The number of teams willing to deploy these versions was low.

Seti is typically used in places where a machine learning system will provide a significant improvement in accuracy over the existing system. The gains are usually large enough that most teams do not care about the small differences in accuracy between different flavors of algorithms. And, in practice, the small differences are often washed out by other effects such as better data filtering, adding another useful feature, parameter tuning, etc. Teams much prefer having a stable, scalable and easy-to-use classification system. We found that these other aspects can be the difference between a deployable system and one that gets abandoned.

It is perhaps less academically interesting to design an algorithm that is slightly worse in accuracy, but that has greater ease of use and system reliability. However, in our experience, it is very valuable in practice.

Lesson: Start with a few specific applications in mind.

It was tempting to build a learning system without focusing on any particular application. After all, our goal was to create a large scale system that would be useful on a wide variety of present and future classification tasks. Nevertheless, we decided to focus primarily on a small handful of initial applications. We believe this decision was useful in several ways:

    • We could examine what the small number of domains had in common. By building something that worked well for a few different domains, we made it more likely that the resulting system would be useful for others.
    • More importantly, it helped us quickly decide what aspects were unnecessary. We noticed that it was surprisingly easy to over-generalize or over-engineer a machine learning system. The domains grounded our project in reality and drove our decision making. Without them, even deciding how broad to make the input file format would have been harder (e.g., is it important to permit binary/categorical/real-valued features? Multiple classes? Fractional labels? Weighted instances?). A hypothetical format illustrating these choices is sketched after this list.
    • Working with a few different teams as initial guinea pigs allowed us to learn about common teething problems, and helped us smooth the process of deployment for future teams.
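
As one example of how a concrete domain pins down such decisions, below is a hypothetical instance format (not Seti's, which the post never shows) that answers some of the questions above: categorical features as bare tokens, real-valued features as name=value pairs, plus a fractional label and a per-instance weight.

    def parse_instance(line):
        """Parse 'label weight | feature[=value] ...' into (label, weight, features)."""
        header, _, body = line.partition("|")
        label_str, weight_str = header.split()
        features = {}
        for token in body.split():
            name, _, value = token.partition("=")
            # Bare tokens are binary/categorical features (implicit value 1.0);
            # 'name=number' tokens carry a real value.
            features[name] = float(value) if value else 1.0
        return float(label_str), float(weight_str), features

    # A fractional label (0.8) and an instance weight (2.5):
    print(parse_instance("0.8 2.5 | query:camera country:jp clicks=3.0"))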

Lesson: Know when to say “no”.

We have a hammer, but we don't want to end up with bent screws. As machine learning practitioners, we found it very tempting to recommend machine learning for every problem. We saw very early on that, despite its many significant benefits, machine learning typically adds complexity, opacity and unpredictability to a system. In reality, simpler techniques are sometimes good enough for the task at hand. And in the long run, the extra effort that would have been spent integrating, maintaining and diagnosing issues with a live machine learning system could be spent on other ways of improving the system instead.

Seti is often used in places where there is a good chance of significantly improving predictive accuracy over the incumbent system. And we usually advise teams against trying the system when we believe there is likely to be only a small improvement.

Large-scale machine learning is an important and exciting area of research. It can be applied to many real world problems. We hope that we have given a flavor of the challenges that we face, and some of the practical lessons that we have learned.
