Machine and statistical learning wizards are becoming more and more eager to perform analysis with Spark's MLlib whenever it is possible. It's trendy, posh, spicy and gives the feeling of doing state-of-the-art machine learning and being up to date with the newest computational trends. It is even more sexy and powerful when computations can be performed on an extraordinarily enormous computation cluster - let's say 100 machines on a YARN Hadoop cluster makes you the real data cruncher! In this post I present the sparklyr package (by RStudio), the connector that will transform you from a regular R user into a supa! data scientist who can invoke Scala code to perform machine learning algorithms on a YARN cluster right from RStudio! Moreover, I present how I have extended the interface to the K-means procedure, so that it is now also possible to compute the cost for that model, which might be beneficial in determining the number of clusters in segmentation problems. Thought about learning Scala? Leave it - use sparklyr!

If you don't know much about Spark yet, you can read my April post Answers to FAQ about SparkR for R users - where I explained how we could use the SparkR package that is distributed with Spark. Many things (code) might have changed since that time, due to the rapid development driven by Spark's great popularity. Now we can use version 2.0.0 of Spark. If you are migrating from previous versions, I suggest you look at Migration Guide - Upgrading From SparkR 1.6.x to 2.0.

sparklyr basics

This package is based on the sparkapi package, which enables running Spark applications locally or on a YARN cluster directly from R. It translates R code into invocations of spark-shell. Its biggest advantages are the dplyr interface for working with Spark DataFrames (which might be Hive tables) and the possibility to invoke algorithms from the Spark ML library.

Installation of sparklyr, then of Spark itself, and a simple application initiation are shown in this code

library(devtools)
install_github('rstudio/sparklyr')
library(sparklyr)
spark_install(version = "2.0.0")
sc <-
  spark_connect(master = "yarn",
                config = list(
                  default = list(
                    spark.submit.deployMode = "client",
                    spark.executor.instances = 20,
                    spark.executor.memory = "2G",
                    spark.executor.cores = 4,
                    spark.driver.memory = "4G")))
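
Once the connection is established, sc is the handle used by all subsequent operations; when you are finished, close the Spark application with

spark_disconnect(sc)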

You don't have to specify the config yourself, but if you do, remember that you can also specify parameters for the Spark application in config.yml files, so that you can benefit from multiple profiles (development, production). In version 2.0.0 you should name the master yarn instead of yarn-client and pass the deploy mode in the spark.submit.deployMode parameter, which differs from version 1.6.x. All available parameters can be found on the Running Spark on YARN documentation page.
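
A minimal sketch of such a setup - assuming spark_config() reads a config.yml following the config package conventions, with the active profile selected through the R_CONFIG_ACTIVE environment variable (the profile contents below are purely illustrative):

# config.yml (hypothetical):
# development:
#   spark.executor.instances: 2
# production:
#   spark.executor.instances: 20
Sys.setenv(R_CONFIG_ACTIVE = "production") # pick the profile
sc <- spark_connect(master = "yarn",
                    config = spark_config(file = "config.yml"))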

dplyr and DBI interface on Spark

When connecting to YARN, it is most probable that you would like to use data tables that are stored in Hive. Remember that

Configuration of Hive is done by placing your hive-site.xml, core-site.xml (for security configuration), and hdfs-site.xml (for HDFS configuration) file in conf/.

where conf/ is the directory set as HADOOP_CONF_DIR. Read more about using Hive tables from Spark.
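
If your cluster keeps the client configuration in a non-standard place, one option (a sketch - the path below is cluster-specific and purely illustrative) is to point the environment variable at it before connecting:

# tell Spark where the Hadoop/Hive client configuration files live
Sys.setenv(HADOOP_CONF_DIR = "/etc/hadoop/conf")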

If everything is set up and the application runs properly, you can use the dplyr interface, which provides lazy evaluation for data manipulations. Data are stored in Hive, the Spark application runs on the YARN cluster, and the code is invoked from R in the simple language of data transformations (dplyr) - everything thanks to the sparklyr team's great job! An easy example is below

library(dplyr)
# give the list of tables
src_tbls(sc)
# copies iris from R to Hive
iris_tbl <- copy_to(sc, iris, "iris")
# create a hook for data stored on Hive
data_tbl <- tbl(sc, "table_name")
data_tbl2 <- tbl(sc, sql("SELECT * from table_name"))
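
Because the evaluation is lazy, such pipelines are only translated to Spark SQL; nothing is computed until you ask for the results. A minimal sketch of materializing them in R with dplyr's collect():

# collect() triggers the actual Spark computation
# and returns the result as a local R data frame
local_df <- iris_tbl %>%
  filter(Petal_Width > 1) %>%
  collect()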

You can also perform any dplyr operation on the datasets used by Spark

iris_tbl %>%
  select(Petal_Length, Petal_Width) %>%
  top_n(40, Petal_Width) %>%
  arrange(Petal_Length)

Note that the original dots in iris column names have been translated to _.

This package also provides an interface to the functions defined in the DBI package

library(DBI)
# list tables available on the connection
dbListTables(sc)
# switch the active Hive database
dbGetQuery(sc, "use database_name")
# run a query and pull the result into R as a data frame
data_tbl3 <- dbGetQuery(sc, "SELECT * from table_name")
# list the columns of a table (dbListFields expects the table's name)
dbListFields(sc, "table_name")

Running Spark ML Machine Learning K-means Algorithm from R

The basic example of how sparklyr invokes Scala code from Spark ML will be presented with the K-means algorithm. If you check the code of the sparklyr::ml_kmeans function, you will see that for an input tbl_spark object named x and a character vector of feature names (features) it starts with

# a fresh environment that will hold the model and its arguments
envir <- new.env(parent = emptyenv())
df <- spark_dataframe(x)
sc <- spark_connection(df)
# cast the feature columns to the types Spark ML expects
df <- ml_prepare_features(df, features)
# assemble the features into the single vector column used by Spark ML
tdf <- ml_prepare_dataframe(df, features, ml.options = ml.options, envir = envir)

sparklyr ensures that you have a proper connection to the Spark DataFrame and prepares the features in a convenient form and naming convention. At the end it prepares a Spark DataFrame for the Spark ML routines.

This is done in a new environment, so that we can store the arguments for the future ML algorithm, and the model itself, in its own environment. This is a safe and clean solution. You can construct a simple model calling a Spark ML class like this

envir$model <- "org.apache.spark.ml.clustering.KMeans"
kmeans <- invoke_new(sc, envir$model)

which creates a new object of the KMeans class, on which we can invoke parameter setters to change the default parameters, like this

model <- kmeans %>%
  invoke("setK", centers) %>%
  invoke("setMaxIter", iter.max) %>%
  invoke("setTol", tolerance) %>%
  invoke("setFeaturesCol", envir$features)
# features were set in ml_prepare_dataframe
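
If you are not sure which setters a given class provides, you can ask the object itself - a small sketch relying on the explainParams method that Spark ML estimators expose:

# print every parameter of the KMeans estimator with its documentation
invoke(kmeans, "explainParams") %>% cat()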

For an existing object of the KMeans class we can invoke its method fit, which is responsible for running the K-means clustering algorithm

fit <- model %>%
  invoke("fit", tdf)

which returns a new object on which we can compute, e.g., the centers of the resulting clustering

kmmCenters <- invoke(fit, "clusterCenters")
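
Each returned center is a Spark ML Vector (a spark_jobj on the R side). A minimal sketch for pulling them into plain R, assuming the standard toArray method of Spark's Vector class:

# convert each Spark ML Vector into an R numeric vector
centers_r <- lapply(kmmCenters, function(v) invoke(v, "toArray"))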

or the Within Set Sum of Squared Errors (called cost), which is my small contribution (#173)

kmmCost <- invoke(fit, "computeCost", tdf)

This sometimes helps to decide how many clusters we should specify for a clustering problem.
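
For instance, a sketch of the elbow heuristic built on top of it - assuming the fitted ml_model_kmeans object exposes the value as model$cost (the exact slot name may differ):

# fit K-means for several values of k and record the cost of each fit
ks <- 2:10
costs <- sapply(ks, function(k) {
  iris_tbl %>%
    select(Petal_Width, Petal_Length) %>%
    ml_kmeans(centers = k, compute.cost = TRUE) %>%
    .$cost
})
# look for the "elbow" where adding clusters stops paying off
plot(ks, costs, type = "b", xlab = "number of clusters k", ylab = "cost")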

The cost is also presented in the print method for the ml_model_kmeans object

iris_tbl %>%
  select(Petal_Width, Petal_Length) %>%
  ml_kmeans(centers = 3, compute.cost = TRUE) %>%
  print()

K-means clustering with 3 clusters

Cluster centers:
  Petal_Width Petal_Length
1    1.359259     4.292593
2    2.047826     5.626087
3    0.246000     1.462000

Within Set Sum of Squared Errors = 31.41289

All of that can be better understood if we have a look at the Spark ML documentation for KMeans (be careful not to confuse it with Spark MLlib, where methods and parameters have different names than those in Spark ML). This enabled me to provide a simple update for ml_kmeans() (#179), so that we can specify the tol (tolerance) parameter in ml_kmeans() to control the convergence tolerance.
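
For example (a sketch - assuming the parameter is exposed as the tolerance argument, matching the internals shown above):

iris_tbl %>%
  select(Petal_Width, Petal_Length) %>%
  ml_kmeans(centers = 3, tolerance = 1e-05) # tighter convergence criterion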

 


