# We will also standardise our data, as we have done so far when performing distance-based clustering.

from time import time
from pyspark.mllib.feature import StandardScaler

standardizer = StandardScaler(True, True)
t0 = time()
standardizer_model = standardizer.fit(parsed_data_values)
tt = time() - t0
standardized_data_values = standardizer_model.transform(parsed_data_values)
print "Data standardized in {} seconds".format(round(tt, 3))

Data standardized in 9.54 seconds

We can now perform k-means clustering.

from pyspark.mllib.clustering import KMeans

t0 = time()
clusters = KMeans.train(standardized_data_values, 80,
                        maxIterations=10, runs=5,
                        initializationMode="random")
tt = time() - t0
print "Data clustered in {} seconds".format(round(tt, 3))

Data clustered in 137.496 seconds

kmeans demo
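With a trained model in hand, a natural sanity check is the clustering cost. The snippet below is a minimal sketch, not part of the original demo: it assumes clusters and standardized_data_values from the code above are still in scope, and uses KMeansModel.computeCost (available since Spark 1.4) to report the within-set sum of squared errors (WSSSE).

# Sketch only: assumes `clusters` and `standardized_data_values` from the demo above.
# computeCost returns the WSSSE, i.e. the sum of squared distances from each point
# to its nearest cluster centre; comparing it across values of k helps choose k.
wssse = clusters.computeCost(standardized_data_values)
print "Within Set Sum of Squared Errors = {}".format(round(wssse, 3))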

Excerpted from: http://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#module-pyspark.mllib.feature

pyspark.mllib.feature module

Python package for feature in MLlib.

class pyspark.mllib.feature.Normalizer(p=2.0)[source]

Bases: pyspark.mllib.feature.VectorTransformer

Normalizes samples individually to unit Lp norm

For any 1 <= p < float('inf'), normalizes samples using sum(abs(vector)^p)^(1/p) as the norm.

For p = float('inf'), max(abs(vector)) will be used as the norm for normalization.

Parameters: p – Normalization in L^p space, p = 2 by default.
>>> v = Vectors.dense(range(3))
>>> nor = Normalizer(1)
>>> nor.transform(v)
DenseVector([0.0, 0.3333, 0.6667])
>>> rdd = sc.parallelize([v])
>>> nor.transform(rdd).collect()
[DenseVector([0.0, 0.3333, 0.6667])]
>>> nor2 = Normalizer(float("inf"))
>>> nor2.transform(v)
DenseVector([0.0, 0.5, 1.0])

New in version 1.2.0.

transform(vector)[source]

Applies unit length normalization on a vector.

Parameters: vector – vector or RDD of vector to be normalized.
Returns: normalized vector. If the norm of the input is zero, it will return the input vector.

New in version 1.2.0.

class pyspark.mllib.feature.StandardScalerModel(java_model)[source]

Bases: pyspark.mllib.feature.JavaVectorTransformer

Represents a StandardScaler model that can transform vectors.

New in version 1.2.0.

mean[source]

Return the column mean values.

New in version 2.0.0.

setWithMean(withMean)[source]

Setter of the boolean which decides whether it uses mean or not

New in version 1.4.0.

setWithStd(withStd)[source]

Setter of the boolean which decides whether it uses std or not

New in version 1.4.0.

std[source]

Return the column standard deviation values.

New in version 2.0.0.

transform(vector)[source]

Applies standardization transformation on a vector.

Note

In Python, transform cannot currently be used within an RDD transformation or action. Call transform directly on the RDD instead.

Parameters: vector – Vector or RDD of Vector to be standardized.
Returns: Standardized vector. If the variance of a column is zero, it will return default 0.0 for the column with zero variance.

New in version 1.2.0.
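As an illustration of the Note above (the data and variable names here are hypothetical, and an active SparkContext sc is assumed, as in the doctests): fit the scaler, then hand the whole RDD to transform rather than mapping the model over the RDD yourself.

from pyspark.mllib.feature import StandardScaler
from pyspark.mllib.linalg import Vectors

rdd = sc.parallelize([Vectors.dense([1.0, 2.0]), Vectors.dense([3.0, 4.0])])
model = StandardScaler(withMean=True, withStd=True).fit(rdd)

# Not supported from Python: referencing the model inside an RDD closure,
# e.g. rdd.map(lambda v: model.transform(v)).
# Supported: call transform directly on the RDD (or on a single local Vector).
scaled = model.transform(rdd)
print scaled.collect()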

withMean[source]

Returns if the model centers the data before scaling.

New in version 2.0.0.

withStd[source]

Returns if the model scales the data to unit standard deviation.

New in version 2.0.0.

class pyspark.mllib.feature.StandardScaler(withMean=False, withStd=True)[source]

Bases: object

Standardizes features by removing the mean and scaling to unit variance using column summary statistics on the samples in the training set.

Parameters:
  • withMean – False by default. Centers the data with mean before scaling. It will build a dense output, so take care when applying to sparse input.
  • withStd – True by default. Scales the data to unit standard deviation.
>>> vs = [Vectors.dense([-2.0, 2.3, 0]), Vectors.dense([3.8, 0.0, 1.9])]
>>> dataset = sc.parallelize(vs)
>>> standardizer = StandardScaler(True, True)
>>> model = standardizer.fit(dataset)
>>> result = model.transform(dataset)
>>> for r in result.collect(): r
DenseVector([-0.7071, 0.7071, -0.7071])
DenseVector([0.7071, -0.7071, 0.7071])
>>> int(model.std[0])
4
>>> int(model.mean[0]*10)
9
>>> model.withStd
True
>>> model.withMean
True

New in version 1.2.0.

fit(dataset)[source]

Computes the mean and variance and stores as a model to be used for later scaling.

Parameters: dataset – The data used to compute the mean and variance to build the transformation model.
Returns: a StandardScalerModel

New in version 1.2.0.

class pyspark.mllib.feature.HashingTF(numFeatures=1048576)[source]

Bases: object

Maps a sequence of terms to their term frequencies using the hashing trick.

Note

The terms must be hashable (can not be dict/set/list...).

Parameters: numFeatures – number of features (default: 2^20)
>>> htf = HashingTF(100)
>>> doc = "a a b b c d".split(" ")
>>> htf.transform(doc)
SparseVector(100, {...})

New in version 1.2.0.

indexOf(term)[source]

Returns the index of the input term.

New in version 1.2.0.

setBinary(value)[source]

If True, term frequency vector will be binary such that non-zero term counts will be set to 1 (default: False)

New in version 2.0.0.

transform(document)[source]

Transforms the input document (list of terms) to term frequency vectors, or transform the RDD of document to RDD of term frequency vectors.

New in version 1.2.0.

class pyspark.mllib.feature.IDFModel(java_model)[source]

Bases: pyspark.mllib.feature.JavaVectorTransformer

Represents an IDF model that can transform term frequency vectors.

New in version 1.2.0.

idf()[source]

Returns the current IDF vector.

New in version 1.4.0.

transform(x)[source]

Transforms term frequency (TF) vectors to TF-IDF vectors.

If minDocFreq was set for the IDF calculation, the terms which occur in fewer than minDocFreq documents will have an entry of 0.

Note

In Python, transform cannot currently be used within an RDD transformation or action. Call transform directly on the RDD instead.

Parameters: x – an RDD of term frequency vectors or a term frequency vector
Returns: an RDD of TF-IDF vectors or a TF-IDF vector

New in version 1.2.0.

class pyspark.mllib.feature.IDF(minDocFreq=0)[source]

Bases: object

Inverse document frequency (IDF).

The standard formulation is used: idf = log((m + 1) / (d(t) + 1)), where m is the total number of documents and d(t) is the number of documents that contain term t.

This implementation supports filtering out terms which do not appear in a minimum number of documents (controlled by the variable minDocFreq). For terms that are not in at least minDocFreq documents, the IDF is found as 0, resulting in TF-IDFs of 0.

Parameters: minDocFreq – minimum number of documents in which a term must appear; terms below this threshold receive an IDF of 0.
>>> n = 4
>>> freqs = [Vectors.sparse(n, (1, 3), (1.0, 2.0)),
...          Vectors.dense([0.0, 1.0, 2.0, 3.0]),
...          Vectors.sparse(n, [1], [1.0])]
>>> data = sc.parallelize(freqs)
>>> idf = IDF()
>>> model = idf.fit(data)
>>> tfidf = model.transform(data)
>>> for r in tfidf.collect(): r
SparseVector(4, {1: 0.0, 3: 0.5754})
DenseVector([0.0, 0.0, 1.3863, 0.863])
SparseVector(4, {1: 0.0})
>>> model.transform(Vectors.dense([0.0, 1.0, 2.0, 3.0]))
DenseVector([0.0, 0.0, 1.3863, 0.863])
>>> model.transform([0.0, 1.0, 2.0, 3.0])
DenseVector([0.0, 0.0, 1.3863, 0.863])
>>> model.transform(Vectors.sparse(n, (1, 3), (1.0, 2.0)))
SparseVector(4, {1: 0.0, 3: 0.5754})

New in version 1.2.0.
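To make the doctest output above concrete: the corpus has m = 3 documents, and term index 3 appears in d(t) = 2 of them, so idf = log((3 + 1) / (2 + 1)) ≈ 0.2877; multiplied by that term's frequency of 2.0 in the first vector, this gives the 0.5754 entry. A quick check of the arithmetic (values taken from the doctest above):

import math
m, dt, tf = 3, 2, 2.0
print round(tf * math.log((m + 1.0) / (dt + 1.0)), 4)  # 0.5754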

fit(dataset)[source]

Computes the inverse document frequency.

Parameters: dataset – an RDD of term frequency vectors

New in version 1.2.0.
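HashingTF and IDF are typically chained into a TF-IDF pipeline. The following is a minimal sketch with a made-up two-document corpus; the variable names and data are illustrative only, and an active SparkContext sc is assumed.

from pyspark.mllib.feature import HashingTF, IDF

# Toy corpus: each document is a list of terms.
documents = sc.parallelize([
    "spark mllib feature hashing".split(" "),
    "spark standard scaler".split(" ")
])
hashingTF = HashingTF(numFeatures=1000)
tf = hashingTF.transform(documents)   # RDD of term-frequency SparseVectors
tf.cache()                            # fit() and transform() each pass over tf
tfidf = IDF(minDocFreq=1).fit(tf).transform(tf)
print tfidf.collect()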

class pyspark.mllib.feature.Word2Vec[source]

Bases: object

Word2Vec creates vector representation of words in a text corpus. The algorithm first constructs a vocabulary from the corpus and then learns vector representation of words in the vocabulary. The vector representation can be used as features in natural language processing and machine learning algorithms.

The skip-gram model is used in this implementation, with hierarchical softmax to train the model. The variable names in the implementation match the original C implementation.

For the original C implementation, see https://code.google.com/p/word2vec/. For the research papers, see Efficient Estimation of Word Representations in Vector Space and Distributed Representations of Words and Phrases and their Compositionality.

>>> sentence = "a b " * 100 + "a c " * 10
>>> localDoc = [sentence, sentence]
>>> doc = sc.parallelize(localDoc).map(lambda line: line.split(" "))
>>> model = Word2Vec().setVectorSize(10).setSeed(42).fit(doc)

Querying for synonyms of a word will not return that word:

>>> syms = model.findSynonyms("a", 2)
>>> [s[0] for s in syms]
[u'b', u'c']

But querying for synonyms of a vector may return the word whose representation is that vector:

>>> vec = model.transform("a")
>>> syms = model.findSynonyms(vec, 2)
>>> [s[0] for s in syms]
[u'a', u'b']
>>> import os, tempfile
>>> path = tempfile.mkdtemp()
>>> model.save(sc, path)
>>> sameModel = Word2VecModel.load(sc, path)
>>> model.transform("a") == sameModel.transform("a")
True
>>> syms = sameModel.findSynonyms("a", 2)
>>> [s[0] for s in syms]
[u'b', u'c']
>>> from shutil import rmtree
>>> try:
...     rmtree(path)
... except OSError:
...     pass

New in version 1.2.0.

fit(data)[source]

Computes the vector representation of each word in vocabulary.

Parameters: data – training data. RDD of list of string
Returns: Word2VecModel instance

New in version 1.2.0.

setLearningRate(learningRate)[source]

Sets initial learning rate (default: 0.025).

New in version 1.2.0.

setMinCount(minCount)[source]

Sets minCount, the minimum number of times a token must appear to be included in the word2vec model’s vocabulary (default: 5).

New in version 1.4.0.

setNumIterations(numIterations)[source]

Sets number of iterations (default: 1), which should be smaller than or equal to number of partitions.

New in version 1.2.0.

setNumPartitions(numPartitions)[source]

Sets number of partitions (default: 1). Use a small number for accuracy.

New in version 1.2.0.

setSeed(seed)[source]

Sets random seed.

New in version 1.2.0.

setVectorSize(vectorSize)[source]

Sets vector size (default: 100).

New in version 1.2.0.

setWindowSize(windowSize)[source]

Sets window size (default: 5).

New in version 2.0.0.

class pyspark.mllib.feature.Word2VecModel(java_model)[source]

Bases: pyspark.mllib.feature.JavaVectorTransformer, pyspark.mllib.util.JavaSaveable, pyspark.mllib.util.JavaLoader

Class for the Word2Vec model.

New in version 1.2.0.

findSynonyms(word, num)[source]

Find synonyms of a word

Parameters:
  • word – a word or a vector representation of word
  • num – number of synonyms to find
Returns:

array of (word, cosineSimilarity)

Note

Local use only

New in version 1.2.0.

getVectors()[source]

Returns a map of words to their vector representations.

New in version 1.4.0.

classmethod load(sc, path)[source]

Load a model from the given path.

New in version 1.5.0.

transform(word)[source]

Transforms a word to its vector representation

Note

Local use only

Parameters: word – a word
Returns: vector representation of word(s)

New in version 1.2.0.

class pyspark.mllib.feature.ChiSqSelector(numTopFeatures=50, selectorType='numTopFeatures', percentile=0.1, fpr=0.05, fdr=0.05, fwe=0.05)[source]

Bases: object

Creates a ChiSquared feature selector. The selector supports different selection methods: numTopFeatures, percentile, fpr, fdr, fwe.

  • numTopFeatures chooses a fixed number of top features according to a chi-squared test.
  • percentile is similar but chooses a fraction of all features instead of a fixed number.
  • fpr chooses all features whose p-values are below a threshold, thus controlling the false positive rate of selection.
  • fdr uses the Benjamini-Hochberg procedure to choose all features whose false discovery rate is below a threshold.
  • fwe chooses all features whose p-values are below a threshold. The threshold is scaled by 1/numFeatures, thus controlling the family-wise error rate of selection.

By default, the selection method is numTopFeatures, with the default number of top features set to 50.

>>> data = sc.parallelize([
...     LabeledPoint(0.0, SparseVector(3, {0: 8.0, 1: 7.0})),
...     LabeledPoint(1.0, SparseVector(3, {1: 9.0, 2: 6.0})),
...     LabeledPoint(1.0, [0.0, 9.0, 8.0]),
...     LabeledPoint(2.0, [7.0, 9.0, 5.0]),
...     LabeledPoint(2.0, [8.0, 7.0, 3.0])
... ])
>>> model = ChiSqSelector(numTopFeatures=1).fit(data)
>>> model.transform(SparseVector(3, {1: 9.0, 2: 6.0}))
SparseVector(1, {})
>>> model.transform(DenseVector([7.0, 9.0, 5.0]))
DenseVector([7.0])
>>> model = ChiSqSelector(selectorType="fpr", fpr=0.2).fit(data)
>>> model.transform(SparseVector(3, {1: 9.0, 2: 6.0}))
SparseVector(1, {})
>>> model.transform(DenseVector([7.0, 9.0, 5.0]))
DenseVector([7.0])
>>> model = ChiSqSelector(selectorType="percentile", percentile=0.34).fit(data)
>>> model.transform(DenseVector([7.0, 9.0, 5.0]))
DenseVector([7.0])

New in version 1.4.0.
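The doctest above covers numTopFeatures, fpr, and percentile; the remaining selector types follow the same pattern. A hedged sketch, reusing the data RDD and DenseVector import from the doctest, with illustrative thresholds:

# fdr: Benjamini-Hochberg control of the false discovery rate.
model_fdr = ChiSqSelector(selectorType="fdr", fdr=0.2).fit(data)
# fwe: p-value threshold scaled by 1/numFeatures (family-wise error rate).
model_fwe = ChiSqSelector(selectorType="fwe", fwe=0.2).fit(data)
print model_fdr.transform(DenseVector([7.0, 9.0, 5.0]))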

fit(data)[source]

Returns a ChiSquared feature selector.

Parameters: data – an RDD[LabeledPoint] containing the labeled dataset with categorical features. Real-valued features will be treated as categorical for each distinct value. Apply feature discretizer before using this function.

New in version 1.4.0.

setFdr(fdr)[source]

set FDR [0.0, 1.0] for feature selection by FDR. Only applicable when selectorType = “fdr”.

New in version 2.2.0.

setFpr(fpr)[source]

set FPR [0.0, 1.0] for feature selection by FPR. Only applicable when selectorType = “fpr”.

New in version 2.1.0.

setFwe(fwe)[source]

set FWE [0.0, 1.0] for feature selection by FWE. Only applicable when selectorType = “fwe”.

New in version 2.2.0.

setNumTopFeatures(numTopFeatures)[source]

set numTopFeature for feature selection by number of top features. Only applicable when selectorType = “numTopFeatures”.

New in version 2.1.0.

setPercentile(percentile)[source]

set percentile [0.0, 1.0] for feature selection by percentile. Only applicable when selectorType = “percentile”.

New in version 2.1.0.

setSelectorType(selectorType)[source]

set the selector type of the ChisqSelector. Supported options: “numTopFeatures” (default), “percentile”, “fpr”, “fdr”, “fwe”.

New in version 2.1.0.

class pyspark.mllib.feature.ChiSqSelectorModel(java_model)[source]

Bases: pyspark.mllib.feature.JavaVectorTransformer

Represents a Chi Squared selector model.

New in version 1.4.0.

transform(vector)[source]

Applies transformation on a vector.

Parameters: vector – Vector or RDD of Vector to be transformed.
Returns: transformed vector.

New in version 1.4.0.

class pyspark.mllib.feature.ElementwiseProduct(scalingVector)[source]

Bases: pyspark.mllib.feature.VectorTransformer

Scales each column of the vector with the supplied weight vector, i.e. the element-wise (Hadamard) product.

>>> weight = Vectors.dense([1.0, 2.0, 3.0])
>>> eprod = ElementwiseProduct(weight)
>>> a = Vectors.dense([2.0, 1.0, 3.0])
>>> eprod.transform(a)
DenseVector([2.0, 2.0, 9.0])
>>> b = Vectors.dense([9.0, 3.0, 4.0])
>>> rdd = sc.parallelize([a, b])
>>> eprod.transform(rdd).collect()
[DenseVector([2.0, 2.0, 9.0]), DenseVector([9.0, 6.0, 12.0])]

New in version 1.5.0.

transform(vector)[source]

Computes the Hadamard product of the vector.

New in version 1.5.0.
