The official demo

from numpy import array
from math import sqrt

from pyspark import SparkContext
from pyspark.mllib.clustering import KMeans, KMeansModel

sc = SparkContext(appName="clusteringExample")

# Load and parse the data
data = sc.textFile("/root/spark-2.1.1-bin-hadoop2.6/data/mllib/kmeans_data.txt")
parsedData = data.map(lambda line: array([float(x) for x in line.split(' ')]))

# Build the model (cluster the data)
clusters = KMeans.train(parsedData, 2, maxIterations=10, initializationMode="random")

# Evaluate clustering by computing Within Set Sum of Squared Errors
def error(point):
    center = clusters.centers[clusters.predict(point)]
    return sqrt(sum([x**2 for x in (point - center)]))

WSSSE = parsedData.map(lambda point: error(point)).reduce(lambda x, y: x + y)
print("Within Set Sum of Squared Error = " + str(WSSSE))

# Save and load model
#clusters.save(sc, "target/org/apache/spark/PythonKMeansExample/KMeansModel")
#sameModel = KMeansModel.load(sc, "target/org/apache/spark/PythonKMeansExample/KMeansModel")
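Once trained, the same model object can assign a cluster to any new point directly. A minimal sketch reusing the clusters model above (the sample point is made up and assumes three-dimensional data like kmeans_data.txt):

from numpy import array

# Print the learned centroids
for i, center in enumerate(clusters.clusterCenters):
    print("center %d: %s" % (i, center))

# Assign a hypothetical new point to its nearest cluster
new_point = array([0.2, 0.2, 0.2])
print("new point -> cluster %d" % clusters.predict(new_point))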

An example with normalization:

import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.sql.functions.{col, udf}

case class DataRow(label: Double, x1: Double, x2: Double)
val data = sqlContext.createDataFrame(sc.parallelize(Seq(
DataRow(3, 1, 2),
DataRow(5, 3, 4),
DataRow(7, 5, 6),
DataRow(6, 0, 0)
)))

val parsedData = data.rdd.map(s => Vectors.dense(s.getDouble(1), s.getDouble(2))).cache()
val clusters = KMeans.train(parsedData, 3, 20)
val t = udf { (x1: Double, x2: Double) => clusters.predict(Vectors.dense(x1, x2)) }
val result = data.select(col("label"), t(col("x1"), col("x2")))

The important parts are the last two lines. The first creates a UDF (user-defined function) that can be applied directly to DataFrame columns (in this case, the columns x1 and x2). The second selects the label column along with the UDF applied to x1 and x2. Since the UDF predicts the closest cluster, result ends up as a DataFrame of (label, closestCluster) pairs.
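The snippet itself trains on the raw features; for the normalization the heading mentions, MLlib's StandardScaler can be fit on the feature RDD before training. A minimal PySpark sketch under that assumption, reusing the parsedData RDD of dense vectors from the Python demo (variable names are illustrative):

from pyspark.mllib.feature import StandardScaler
from pyspark.mllib.clustering import KMeans

# withMean=True centers each feature, withStd=True scales it to unit variance;
# centering requires dense vectors, which parsedData already contains
scaler = StandardScaler(withMean=True, withStd=True).fit(parsedData)
scaledData = scaler.transform(parsedData).cache()

# Train on the normalized features instead of the raw ones
scaledClusters = KMeans.train(scaledData, 3, maxIterations=20)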

Reference: https://stackoverflow.com/questions/31447141/spark-mllib-kmeans-from-dataframe-and-back-again

import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.clustering._
import sqlContext.implicits._  // needed for the toDF call below

val rows = data.rdd.map(r => (r.getDouble(1), r.getDouble(2))).cache()
val vectors = rows.map(r => Vectors.dense(r._1, r._2))
val kMeansModel = KMeans.train(vectors, 3, 20)
val predictions = rows.map{r => (r._1, kMeansModel.predict(Vectors.dense(r._1, r._2)))}
val df = predictions.toDF("id", "cluster")
df.show
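The same map-then-toDF pattern carries over to PySpark almost verbatim. A rough analogue, assuming data is a DataFrame whose second and third columns hold the features (column positions and names are illustrative):

from pyspark.mllib.clustering import KMeans
from pyspark.mllib.linalg import Vectors

rows = data.rdd.map(lambda r: (r[1], r[2])).cache()
vectors = rows.map(lambda r: Vectors.dense(r[0], r[1]))
kMeansModel = KMeans.train(vectors, 3, maxIterations=20)
# mllib's KMeansModel keeps its centroids locally, so predict can run inside a map
predictions = rows.map(lambda r: (r[0], kMeansModel.predict(Vectors.dense(r[0], r[1]))))
df = sqlContext.createDataFrame(predictions, ["id", "cluster"])
df.show()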

Create column from RDD

It's very easy to obtain pairs of ids and clusters in the form of an RDD:

val idPointRDD = data.rdd.map(s => (s.getInt(0), Vectors.dense(s.getDouble(1), s.getDouble(2)))).cache()
val clusters = KMeans.train(idPointRDD.map(_._2), 3, 20)
val clustersRDD = clusters.predict(idPointRDD.map(_._2))
val idClusterRDD = idPointRDD.map(_._1).zip(clustersRDD)

Then you create a DataFrame from that:

val idCluster = idClusterRDD.toDF("id", "cluster")

This works because map doesn't change the order of the data in an RDD, which is why you can simply zip the ids with the results of the prediction.
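The zip trick also works in PySpark, where KMeansModel.predict accepts a whole RDD of vectors at once. A rough sketch, again assuming an integer id column followed by two feature columns:

from pyspark.mllib.clustering import KMeans
from pyspark.mllib.linalg import Vectors

idPointRDD = data.rdd.map(lambda s: (s[0], Vectors.dense(s[1], s[2]))).cache()
clusters = KMeans.train(idPointRDD.map(lambda p: p[1]), 3, maxIterations=20)
# predict on an RDD returns an RDD of cluster indices in the same order
clustersRDD = clusters.predict(idPointRDD.map(lambda p: p[1]))
idClusterRDD = idPointRDD.map(lambda p: p[0]).zip(clustersRDD)
idCluster = sqlContext.createDataFrame(idClusterRDD, ["id", "cluster"])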

Use UDF (User Defined Function)

The second method involves using the clusters.predict method as a UDF:

val bcClusters = sc.broadcast(clusters)

def predict(x: Double, y: Double): Int = {
  bcClusters.value.predict(Vectors.dense(x, y))
}

sqlContext.udf.register("predict", predict _)

Now we can use it to add predictions to data:

val idCluster = data.selectExpr("id", "predict(x, y) as cluster")

Keep in mind that the Spark API doesn't allow UDF deregistration, which means the closure data (here, the broadcast model) stays in memory.
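A rough PySpark counterpart of the same approach; the mllib KMeansModel stores its centroids on the Python side, so broadcasting it and calling predict inside a registered UDF should work (names mirror the Scala snippet and the id/x/y schema is assumed):

from pyspark.mllib.linalg import Vectors
from pyspark.sql.types import IntegerType

bcClusters = sc.broadcast(clusters)

def predict(x, y):
    # look up the nearest centroid via the broadcast model
    return bcClusters.value.predict(Vectors.dense(x, y))

sqlContext.udf.register("predict", predict, IntegerType())
idCluster = data.selectExpr("id", "predict(x, y) as cluster")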
