Spark MLlib PrefixSpan demo

Run the script with spark-submit:

./bin/spark-submit ~/src_test/prefix_span_test.py

Source code:
import os
import sys

from pyspark import SparkConf, SparkContext
from pyspark.mllib.fpm import PrefixSpan

sc = SparkContext("local", "testing")
print(sc)

# Each element of `data` is one sequence; each inner list is an itemset within that sequence.
data = [
    [["a"], ["a", "b", "c"], ["a", "c"], ["d"], ["c", "f"]],
    [["a", "d"], ["c"], ["b", "c"], ["a", "e"]],
    [["e", "f"], ["a", "b"], ["d", "f"], ["c"], ["b"]],
    [["e"], ["g"], ["a", "f"], ["c"], ["b"], ["c"]]
]
rdd = sc.parallelize(data, 2)

# minSupport=0.5: a pattern must occur in at least half of the sequences (2 of the 4 here).
# maxPatternLength=4: patterns longer than 4 itemsets are not mined.
model = PrefixSpan.train(rdd, minSupport=0.5, maxPatternLength=4)
result = sorted(model.freqSequences().collect())
print("*" * 88)
print(result)
print("*" * 88)
Output:
****************************************************************************************
[FreqSequence(sequence=[['a']], freq=4), FreqSequence(sequence=[['a'], ['a']], freq=2), FreqSequence(sequence=[['a'], ['b']], freq=4), FreqSequence(sequence=[['a'], ['b'], ['a']], freq=2), FreqSequence(sequence=[['a'], ['b'], ['c']], freq=2), FreqSequence(sequence=[['a'], ['b', 'c']], freq=2), FreqSequence(sequence=[['a'], ['b', 'c'], ['a']], freq=2), FreqSequence(sequence=[['a'], ['c']], freq=4), FreqSequence(sequence=[['a'], ['c'], ['a']], freq=2), FreqSequence(sequence=[['a'], ['c'], ['b']], freq=3), FreqSequence(sequence=[['a'], ['c'], ['c']], freq=3), FreqSequence(sequence=[['a'], ['d']], freq=2), FreqSequence(sequence=[['a'], ['d'], ['c']], freq=2), FreqSequence(sequence=[['a'], ['f']], freq=2), FreqSequence(sequence=[['b']], freq=4), FreqSequence(sequence=[['b'], ['a']], freq=2), FreqSequence(sequence=[['b'], ['c']], freq=3), FreqSequence(sequence=[['b'], ['d']], freq=2), FreqSequence(sequence=[['b'], ['d'], ['c']], freq=2), FreqSequence(sequence=[['b'], ['f']], freq=2), FreqSequence(sequence=[['b', 'a']], freq=2), FreqSequence(sequence=[['b', 'a'], ['c']], freq=2), FreqSequence(sequence=[['b', 'a'], ['d']], freq=2), FreqSequence(sequence=[['b', 'a'], ['d'], ['c']], freq=2), FreqSequence(sequence=[['b', 'a'], ['f']], freq=2), FreqSequence(sequence=[['b', 'c']], freq=2), FreqSequence(sequence=[['b', 'c'], ['a']], freq=2), FreqSequence(sequence=[['c']], freq=4), FreqSequence(sequence=[['c'], ['a']], freq=2), FreqSequence(sequence=[['c'], ['b']], freq=3), FreqSequence(sequence=[['c'], ['c']], freq=3), FreqSequence(sequence=[['d']], freq=3), FreqSequence(sequence=[['d'], ['b']], freq=2), FreqSequence(sequence=[['d'], ['c']], freq=3), FreqSequence(sequence=[['d'], ['c'], ['b']], freq=2), FreqSequence(sequence=[['e']], freq=3), FreqSequence(sequence=[['e'], ['a']], freq=2), FreqSequence(sequence=[['e'], ['a'], ['b']], freq=2), FreqSequence(sequence=[['e'], ['a'], ['c']], freq=2), FreqSequence(sequence=[['e'], ['a'], ['c'], ['b']], freq=2), FreqSequence(sequence=[['e'], ['b']], freq=2), FreqSequence(sequence=[['e'], ['b'], ['c']], freq=2), FreqSequence(sequence=[['e'], ['c']], freq=2), FreqSequence(sequence=[['e'], ['c'], ['b']], freq=2), FreqSequence(sequence=[['e'], ['f']], freq=2), FreqSequence(sequence=[['e'], ['f'], ['b']], freq=2), FreqSequence(sequence=[['e'], ['f'], ['c']], freq=2), FreqSequence(sequence=[['e'], ['f'], ['c'], ['b']], freq=2), FreqSequence(sequence=[['f']], freq=3), FreqSequence(sequence=[['f'], ['b']], freq=2), FreqSequence(sequence=[['f'], ['b'], ['c']], freq=2), FreqSequence(sequence=[['f'], ['c']], freq=2), FreqSequence(sequence=[['f'], ['c'], ['b']], freq=2)]
****************************************************************************************
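The collected `result` is an ordinary Python list of FreqSequence namedtuples (fields `sequence` and `freq`), so it can be post-processed on the driver with plain Python. Below is a minimal sketch, not part of the original script; the frequency threshold of 3 and the ['a']-prefix filter are only illustrative:

# Keep patterns supported by at least 3 of the 4 input sequences (illustrative threshold).
frequent = [fs for fs in result if fs.freq >= 3]

# Keep patterns whose first itemset is exactly ['a'] (illustrative filter).
starting_with_a = [fs for fs in result if fs.sequence[0] == ["a"]]

print(frequent)
print(starting_with_a)

Note that `freq` counts how many input sequences contain the pattern, which is why every pattern in the output has freq >= 2: with minSupport=0.5 and 4 input sequences, a pattern needs support from at least 2 of them.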