python spark kmeans demo
The official demo:
from numpy import array
from math import sqrt

from pyspark import SparkContext
from pyspark.mllib.clustering import KMeans, KMeansModel

sc = SparkContext(appName="clusteringExample")

# Load and parse the data
data = sc.textFile("/root/spark-2.1.1-bin-hadoop2.6/data/mllib/kmeans_data.txt")
parsedData = data.map(lambda line: array([float(x) for x in line.split(' ')]))

# Build the model (cluster the data)
clusters = KMeans.train(parsedData, 2, maxIterations=10, initializationMode="random")

# Evaluate clustering by computing Within Set Sum of Squared Errors
def error(point):
    center = clusters.centers[clusters.predict(point)]
    return sqrt(sum([x**2 for x in (point - center)]))

WSSSE = parsedData.map(lambda point: error(point)).reduce(lambda x, y: x + y)
print("Within Set Sum of Squared Error = " + str(WSSSE))

# Save and load model
#clusters.save(sc, "target/org/apache/spark/PythonKMeansExample/KMeansModel")
#sameModel = KMeansModel.load(sc, "target/org/apache/spark/PythonKMeansExample/KMeansModel")
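Not part of the official demo, but as a quick follow-up: a minimal sketch, using only the sc, parsedData and clusters objects created above, that prints the learned cluster centers and attaches a cluster id to every parsed point. predict() also accepts a whole RDD, in which case it is simply a map over the points.

# Cluster centers learned above (a list of numpy arrays)
for i, center in enumerate(clusters.centers):
    print("center %d: %s" % (i, center))

# predict() on an RDD is a map, so each point can be zipped with its cluster id
assignments = parsedData.zip(clusters.predict(parsedData))
print(assignments.take(5))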
An example with normalization:
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.sql.functions.{col, udf}

case class DataRow(label: Double, x1: Double, x2: Double)
val data = sqlContext.createDataFrame(sc.parallelize(Seq(
DataRow(3, 1, 2),
DataRow(5, 3, 4),
DataRow(7, 5, 6),
DataRow(6, 0, 0)
)))

val parsedData = data.rdd.map(s => Vectors.dense(s.getDouble(1), s.getDouble(2))).cache()
val clusters = KMeans.train(parsedData, 3, 20)
val t = udf { (x1: Double, x2: Double) => clusters.predict(Vectors.dense(x1, x2)) }
val result = data.select(col("label"), t(col("x1"), col("x2")))

The important parts are the last two lines. The first creates a UDF (user-defined function) that can be applied directly to DataFrame columns (here, the two columns x1 and x2). The second selects the label column along with the UDF applied to the x1 and x2 columns. Since the UDF predicts the closest cluster, result is a DataFrame consisting of (label, closestCluster).
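The same UDF idea also works from PySpark, in keeping with the Python focus of this post. A minimal sketch, assuming a DataFrame df with columns label, x1 and x2; the names df, train_rdd, bc_model and predict_cluster are made up for illustration:

from pyspark.sql.functions import col, udf
from pyspark.sql.types import IntegerType
from pyspark.mllib.clustering import KMeans
from pyspark.mllib.linalg import Vectors

# Train on the two (assumed) feature columns of df
train_rdd = df.rdd.map(lambda r: Vectors.dense(r["x1"], r["x2"])).cache()
model = KMeans.train(train_rdd, 3, maxIterations=20)

# Broadcast the trained model so every executor reuses the same copy inside the UDF
bc_model = sc.broadcast(model)
predict_cluster = udf(lambda x1, x2: int(bc_model.value.predict(Vectors.dense(x1, x2))),
                      IntegerType())

result = df.select(col("label"), predict_cluster(col("x1"), col("x2")).alias("cluster"))
result.show()

Broadcasting the model is optional for a model this small, but it avoids re-serializing it with every task.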
Reference: https://stackoverflow.com/questions/31447141/spark-mllib-kmeans-from-dataframe-and-back-again
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.clustering._
val rows = data.rdd.map(r => (r.getDouble(1),r.getDouble(2))).cache()
val vectors = rows.map(r => Vectors.dense(r._1, r._2))
val kMeansModel = KMeans.train(vectors, 3, 20)
val predictions = rows.map{r => (r._1, kMeansModel.predict(Vectors.dense(r._1, r._2)))}
val df = predictions.toDF("id", "cluster")
df.show
Create column from RDD
It's very easy to obtain pairs of ids and clusters in the form of an RDD:
val idPointRDD = data.rdd.map(s => (s.getInt(0), Vectors.dense(s.getDouble(1),s.getDouble(2)))).cache()
val clusters = KMeans.train(idPointRDD.map(_._2), 3, 20)
val clustersRDD = clusters.predict(idPointRDD.map(_._2))
val idClusterRDD = idPointRDD.map(_._1).zip(clustersRDD)
Then you create a DataFrame from that:
val idCluster = idClusterRDD.toDF("id", "cluster")
This works because map doesn't change the order of the data in an RDD, which is why you can simply zip the ids with the results of the prediction.
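The zip trick carries over directly to PySpark, where predict() applied to an RDD is itself just a map. A rough sketch, assuming (as in the Scala snippet) a DataFrame data whose first column is an integer id and whose next two columns are double features:

from pyspark.mllib.linalg import Vectors
from pyspark.mllib.clustering import KMeans

# (id, feature-vector) pairs; column positions mirror the Scala example
id_point_rdd = data.rdd.map(lambda r: (r[0], Vectors.dense(r[1], r[2]))).cache()

model = KMeans.train(id_point_rdd.map(lambda p: p[1]), 3, maxIterations=20)

# predict() over an RDD is a map, so ordering is preserved and zip is safe
clusters_rdd = model.predict(id_point_rdd.map(lambda p: p[1]))
id_cluster_df = id_point_rdd.map(lambda p: p[0]).zip(clusters_rdd).toDF(["id", "cluster"])
id_cluster_df.show()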
Use UDF (User Defined Function)
The second method involves using the clusters.predict method as a UDF:
val bcClusters = sc.broadcast(clusters)
def predict(x: Double, y: Double): Int = {
bcClusters.value.predict(Vectors.dense(x, y))
}
sqlContext.udf.register("predict", predict _)
Now we can use it to add predictions to data:
val idCluster = data.selectExpr("id", "predict(x, y) as cluster")
Keep in mind that the Spark API doesn't allow UDF deregistration, which means the closure data will be kept in memory.
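For completeness, a rough PySpark counterpart of the registered-UDF variant. It assumes an already trained mllib KMeansModel named model and a DataFrame data with columns id, x and y; sqlContext.registerFunction is the Python equivalent (on a SQLContext) of the Scala sqlContext.udf.register used above:

from pyspark.mllib.linalg import Vectors
from pyspark.sql.types import IntegerType

# Broadcast the trained model so the registered function can use it on executors
bc = sc.broadcast(model)

# Register the prediction function so it can be used inside SQL expressions
sqlContext.registerFunction("predict",
                            lambda x, y: int(bc.value.predict(Vectors.dense(x, y))),
                            IntegerType())

id_cluster = data.selectExpr("id", "predict(x, y) as cluster")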