Source: https://databricks.com/blog/2016/02/08/auto-scaling-scikit-learn-with-apache-spark.html

Data scientists often spend hours or days tuning models to get the highest accuracy. This tuning typically involves running a large number of independent Machine Learning (ML) tasks coded in Python or R. Following some work presented at Spark Summit Europe 2015, we are excited to release the scikit-learn integration package for Apache Spark, which dramatically simplifies the life of data scientists using Python. This package automatically distributes the most repetitive tasks of model tuning across a Spark cluster, without impacting the data scientist's workflow:

  • When used on a single machine, Spark can serve as a substitute for the default multithreading framework used by scikit-learn (Joblib).
  • When the work needs to be spread across multiple machines, no change to the code is required between the single-machine case and the cluster case.

Scale data science effortlessly

Python is one of the most popular programming languages for data exploration and data science, and this is in no small part due to high-quality libraries such as Pandas for data exploration and scikit-learn for machine learning. Scikit-learn provides fast and robust implementations of standard ML algorithms such as clustering, classification, and regression.

Scikit-learn's strength, however, has typically been limited to single-node computation. Yet some common scenarios, such as parameter tuning, consist of a large number of small, independent tasks that can run in parallel, which makes them perfect use cases for Spark.

We explored how to integrate Spark with scikit-learn, and the result is the Scikit-learn integration package for Spark. It combines the strengths of Spark and scikit-learn with no changes to users’ code. It re-implements some components of scikit-learn that benefit the most from distributed computing. Users will find a Spark-based cross-validator class that is fully compatible with scikit-learn’s cross-validation tools. By swapping out a single class import, users can distribute cross-validation for their existing scikit-learn workflows.

Distribute tuning of Random Forests

Consider the classical example of identifying digits in images. The original post shows a few example images from the popular digits dataset together with their labels (the dataset ships with scikit-learn: 1,797 labeled 8x8 grayscale images of handwritten digits).
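Since the figure does not reproduce here, the following is a small matplotlib sketch (not part of the original post) that renders the first few digits with their labels:

import matplotlib.pyplot as plt
from sklearn import datasets

digits = datasets.load_digits()
# Plot the first five 8x8 images with their labels as titles.
fig, axes = plt.subplots(1, 5, figsize=(8, 2))
for ax, image, label in zip(axes, digits.images, digits.target):
    ax.imshow(image, cmap="gray_r", interpolation="nearest")
    ax.set_title(str(label))
    ax.axis("off")
plt.show()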

We are going to train a random forest classifier to recognize the digits. This classifier has a number of parameters to adjust, and there is no easy way to know which values work best other than trying out many different combinations. Scikit-learn provides GridSearchCV, a search algorithm that explores many parameter settings automatically, selecting by cross-validation: each parameter setting produces one model, and the best-performing model is kept.
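To make "selection by cross-validation" concrete, here is a minimal hand-rolled sketch (standing in for the figure in the original post) that cross-validates two settings of a single parameter and reports their scores. The import path sklearn.cross_validation matches scikit-learn releases of that era; modern releases use sklearn.model_selection.

from sklearn import datasets
from sklearn.cross_validation import cross_val_score  # sklearn.model_selection in modern releases
from sklearn.ensemble import RandomForestClassifier

digits = datasets.load_digits()
X, y = digits.data, digits.target
# Score each candidate setting with 3-fold cross-validation; GridSearchCV
# does this for every combination in the grid and keeps the best one.
for max_depth in [3, None]:
    model = RandomForestClassifier(max_depth=max_depth, n_estimators=10)
    print(max_depth, cross_val_score(model, X, y, cv=3).mean())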

The original code, using only scikit-learn, is as follows:

from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
# sklearn.grid_search was the module name at the time of this post;
# modern scikit-learn releases use sklearn.model_selection instead.
from sklearn.grid_search import GridSearchCV

digits = datasets.load_digits()
X, y = digits.data, digits.target
param_grid = {"max_depth": [3, None],
              "max_features": [1, 3, 10],
              "min_samples_split": [1, 3, 10],
              "min_samples_leaf": [1, 3, 10],
              "bootstrap": [True, False],
              "criterion": ["gini", "entropy"],
              "n_estimators": [10, 20, 40, 80]}
gs = GridSearchCV(RandomForestClassifier(), param_grid=param_grid)
gs.fit(X, y)
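It is worth pausing on the size of that search space: every combination of the values above is trained and cross-validated, which is what makes the search expensive. A quick count, assuming the 3-fold default that GridSearchCV used in scikit-learn releases of that era:

from itertools import product

# 2 * 3 * 3 * 3 * 2 * 2 * 4 = 864 parameter settings
n_settings = len(list(product(*param_grid.values())))
# Each setting is trained and scored once per cross-validation fold.
n_fits = n_settings * 3
print(n_settings, n_fits)  # 864 2592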

The dataset is small (in the hundreds of kilobytes), but exploring all the combinations takes about 5 minutes on a single core. The scikit-learn package for Spark provides an alternative implementation of the cross-validation algorithm that distributes the workload on a Spark cluster. Each node runs the training algorithm using a local copy of the scikit-learn library, and reports the best model back to the master.

The code is the same as before, except that the import is swapped for spark_sklearn's version and the active SparkContext is passed to the grid search:

from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
# Use spark_sklearn's grid search instead:
from spark_sklearn import GridSearchCV

digits = datasets.load_digits()
X, y = digits.data, digits.target
param_grid = {"max_depth": [3, None],
              "max_features": [1, 3, 10],
              "min_samples_split": [1, 3, 10],
              "min_samples_leaf": [1, 3, 10],
              "bootstrap": [True, False],
              "criterion": ["gini", "entropy"],
              "n_estimators": [10, 20, 40, 80]}
# spark_sklearn's GridSearchCV takes the active SparkContext (sc, as
# provided in a Databricks notebook or pyspark shell) as its first argument.
gs = GridSearchCV(sc, RandomForestClassifier(), param_grid=param_grid)
gs.fit(X, y)

This example runs in under 30 seconds on a 4-node cluster (which has 16 CPUs). For larger datasets and more parameter settings, the difference is even more dramatic.

Get started

If you would like to try out this package yourself, it is available as a Spark package and as a PyPI library (the PyPI project is named spark-sklearn, so pip install spark-sklearn should fetch it). To get started, check out this example notebook on Databricks.

In addition to distributing ML tasks in Python across a cluster, the scikit-learn integration package for Spark provides additional tools to export data from Spark to Python and vice versa. You can find methods to convert Spark DataFrames to Pandas DataFrames and NumPy arrays. More details can be found in this Spark Summit Europe presentation and in the API documentation.
