sklearn.neighbors.LocalOutlierFactor

class sklearn.neighbors.LocalOutlierFactor(n_neighbors=20, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, contamination='legacy', novelty=False, n_jobs=None)[source]

Unsupervised Outlier Detection using Local Outlier Factor (LOF)

The anomaly score of each sample is called Local Outlier Factor.
It measures the local deviation of the density of a given sample with
respect to its neighbors.
It is local in that the anomaly score depends on how isolated the object
is with respect to the surrounding neighborhood.
More precisely, locality is given by k-nearest neighbors, whose distance
is used to estimate the local density.
By comparing the local density of a sample to the local densities of
its neighbors, one can identify samples that have a substantially lower
density than their neighbors. These are considered outliers.
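
A minimal usage sketch in the default outlier detection mode (the toy data
are illustrative; score values are shown approximately):

>>> import numpy as np
>>> from sklearn.neighbors import LocalOutlierFactor
>>> X = np.array([[-1.1], [0.2], [101.1], [0.3]])
>>> clf = LocalOutlierFactor(n_neighbors=2, contamination=0.1)
>>> clf.fit_predict(X)
array([ 1,  1, -1,  1])
>>> clf.negative_outlier_factor_  # inliers score close to -1
array([ -0.98...,  -1.03..., -73.36...,  -0.98...])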

Parameters:
n_neighbors : int, optional (default=20)

Number of neighbors to use by default for kneighbors queries.
If n_neighbors is larger than the number of samples provided,
all samples will be used.

algorithm : {‘auto’, ‘ball_tree’, ‘kd_tree’, ‘brute’}, optional

Algorithm used to compute the nearest neighbors:

  • ‘ball_tree’ will use BallTree
  • ‘kd_tree’ will use KDTree
  • ‘brute’ will use a brute-force search.
  • ‘auto’ will attempt to decide the most appropriate algorithm
    based on the values passed to the fit method.

Note: fitting on sparse input will override the setting of
this parameter, using brute force.

leaf_size : int, optional (default=30)

Leaf size passed to BallTree or KDTree. This can
affect the speed of the construction and query, as well as the memory
required to store the tree. The optimal value depends on the
nature of the problem.

metric : string or callable, default ‘minkowski’

Metric used for the distance computation. Any metric from scikit-learn
or scipy.spatial.distance can be used.

If ‘precomputed’, the training input X is expected to be a distance
matrix.

If metric is a callable function, it is called on each
pair of instances (rows) and the resulting value recorded. The callable
should take two arrays as input and return one value indicating the
distance between them. This works for Scipy’s metrics, but is less
efficient than passing the metric name as a string.

Valid values for metric are:

  • from scikit-learn: [‘cityblock’, ‘cosine’, ‘euclidean’, ‘l1’, ‘l2’,
    ‘manhattan’]
  • from scipy.spatial.distance: [‘braycurtis’, ‘canberra’, ‘chebyshev’,
    ‘correlation’, ‘dice’, ‘hamming’, ‘jaccard’, ‘kulsinski’,
    ‘mahalanobis’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’,
    ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’,
    ‘yule’]

See the documentation for scipy.spatial.distance for details on these
metrics:
http://docs.scipy.org/doc/scipy/reference/spatial.distance.html
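
For example, a sketch of the ‘precomputed’ option, building the square
distance matrix with sklearn.metrics.pairwise_distances (data and output
are illustrative):

>>> import numpy as np
>>> from sklearn.metrics import pairwise_distances
>>> from sklearn.neighbors import LocalOutlierFactor
>>> X = np.array([[0., 0.], [0., 1.], [1., 1.], [10., 10.]])
>>> D = pairwise_distances(X)  # shape (n_samples, n_samples)
>>> clf = LocalOutlierFactor(n_neighbors=2, metric='precomputed',
...                          contamination=0.25)
>>> clf.fit_predict(D)
array([ 1,  1,  1, -1])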

p : integer, optional (default=2)

Parameter for the Minkowski metric from
sklearn.metrics.pairwise.pairwise_distances. When p = 1, this
is equivalent to using manhattan_distance (l1), and euclidean_distance
(l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used.
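
As a sketch, p selects the norm without changing the metric name
(parameter values are illustrative):

>>> from sklearn.neighbors import LocalOutlierFactor
>>> lof_l1 = LocalOutlierFactor(n_neighbors=20, metric='minkowski', p=1)  # Manhattan / l1
>>> lof_l2 = LocalOutlierFactor(n_neighbors=20, metric='minkowski', p=2)  # Euclidean / l2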

metric_params : dict, optional (default=None)

Additional keyword arguments for the metric function.

contamination : float in (0., 0.5) or ‘auto’, optional (default=’legacy’)

The amount of contamination of the data set, i.e. the proportion
of outliers in the data set. When fitting, this is used to define the
threshold on the decision function. If ‘auto’, the decision function
threshold is determined as in the original paper [1]. The default
‘legacy’ currently behaves like 0.1.

Changed in version 0.20: The default value of contamination will change from 0.1 in 0.20
to ‘auto’ in 0.22.

novelty : boolean, default False

By default, LocalOutlierFactor is only meant to be used for outlier
detection (novelty=False). Set novelty to True if you want to use
LocalOutlierFactor for novelty detection. In this case be aware that
you should only use predict, decision_function and score_samples
on new unseen data and not on the training set.

n_jobs : int or None, optional (default=None)

The number of parallel jobs to run for neighbors search.
None means 1 unless in a joblib.parallel_backend context.
-1 means using all processors. See Glossary
for more details.
Affects only kneighbors and kneighbors_graph methods.

Attributes:
negative_outlier_factor_ : numpy array, shape (n_samples,)

The opposite LOF of the training samples. The higher, the more normal.
Inliers tend to have a LOF score close to 1 (negative_outlier_factor_
close to -1), while outliers tend to have a larger LOF score.

The local outlier factor (LOF) of a sample captures its
supposed ‘degree of abnormality’.
It is the average of the ratio of the local reachability density of
a sample and those of its k-nearest neighbors.

n_neighbors_ : integer

The actual number of neighbors used for kneighbors queries.

offset_ : float

Offset used to obtain binary labels from the raw scores.
Observations having a negative_outlier_factor_ smaller than offset_
are detected as abnormal.
The offset is set to -1.5 (inliers score around -1), except when a
contamination parameter different from “auto” is provided. In that
case, the offset is defined in such a way that we obtain the expected
number of outliers in training.
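
A sketch of this relation, assuming (as in the current implementation)
that a float contamination sets offset_ to the corresponding percentile
of negative_outlier_factor_ (data illustrative):

>>> import numpy as np
>>> from sklearn.neighbors import LocalOutlierFactor
>>> X = np.array([[-1.1], [0.2], [101.1], [0.3]])
>>> clf = LocalOutlierFactor(n_neighbors=2, contamination=0.25)
>>> clf.fit(X)
LocalOutlierFactor(algorithm='auto', contamination=0.25, ...)
>>> np.isclose(clf.offset_,
...            np.percentile(clf.negative_outlier_factor_, 100 * 0.25))
True
>>> # abnormal iff negative_outlier_factor_ < offset_
>>> np.where(clf.negative_outlier_factor_ < clf.offset_, -1, 1)
array([ 1,  1, -1,  1])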

References

[1] Breunig, M. M., Kriegel, H.-P., Ng, R. T., & Sander, J. (2000, May).
LOF: identifying density-based local outliers. In ACM SIGMOD Record
(Vol. 29, No. 2, pp. 93-104).

Methods

fit(X[, y]) Fit the model using X as training data.
get_params([deep]) Get parameters for this estimator.
kneighbors([X, n_neighbors, return_distance]) Finds the K-neighbors of a point.
kneighbors_graph([X, n_neighbors, mode]) Computes the (weighted) graph of k-Neighbors for points in X
set_params(**params) Set the parameters of this estimator.
__init__(n_neighbors=20, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, contamination='legacy', novelty=False, n_jobs=None)[source]
decision_function

Shifted opposite of the Local Outlier Factor of X.

Bigger is better, i.e. large values correspond to inliers.

The shift by offset_ allows a zero threshold for flagging outliers.
Only available for novelty detection (when novelty is set to True).
The argument X is supposed to contain new data: if X contains a
point from training, it considers the latter in its own neighborhood.
Also, the samples in X are not considered in the neighborhood of any
point.

Parameters:
X : array-like, shape (n_samples, n_features)

The query sample or samples to compute the Local Outlier Factor
w.r.t. the training samples.

Returns:
shifted_opposite_lof_scores : array, shape (n_samples,)

The shifted opposite of the Local Outlier Factor of each input
sample. The lower, the more abnormal. Negative scores represent
outliers, positive scores represent inliers.
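
A minimal novelty-detection sketch (data illustrative, outputs
indicative); note that decision_function(X) equals score_samples(X)
minus offset_:

>>> import numpy as np
>>> from sklearn.neighbors import LocalOutlierFactor
>>> X_train = np.array([[0.0], [0.1], [0.2], [0.3], [10.0]])
>>> lof = LocalOutlierFactor(n_neighbors=2, novelty=True, contamination=0.2)
>>> lof.fit(X_train)
LocalOutlierFactor(algorithm='auto', contamination=0.2, ...)
>>> X_new = np.array([[0.15], [50.0]])
>>> lof.predict(X_new)                 # +1 inlier, -1 outlier
array([ 1, -1])
>>> lof.decision_function(X_new) < 0   # negative scores flag outliers
array([False,  True])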

fit(X, y=None)[source]

Fit the model using X as training data.

Parameters:
X : {array-like, sparse matrix, BallTree, KDTree}

Training data. If array or matrix, shape [n_samples, n_features],
or [n_samples, n_samples] if metric=’precomputed’.

y : Ignored

not used, present for API consistency by convention.

Returns:
self : object
fit_predict

Fits the model to the training set X and returns the labels.

Label is 1 for an inlier and -1 for an outlier according to the LOF
score and the contamination parameter.

Parameters:
X : array-like, shape (n_samples, n_features), default=None

The query sample or samples to compute the Local Outlier Factor
w.r.t. the training samples.

y : Ignored

not used, present for API consistency by convention.

Returns:
is_inlier : array, shape (n_samples,)

Returns -1 for anomalies/outliers and 1 for inliers.

get_params(deep=True)[source]

Get parameters for this estimator.

Parameters:
deep : boolean, optional

If True, will return the parameters for this estimator and
contained subobjects that are estimators.

Returns:
params : mapping of string to any

Parameter names mapped to their values.

kneighbors(X=None, n_neighbors=None, return_distance=True)[source]

Finds the K-neighbors of a point.
Returns indices of and distances to the neighbors of each point.

Parameters:
X : array-like, shape (n_query, n_features), or (n_query, n_indexed) if metric == ‘precomputed’

The query point or points.
If not provided, neighbors of each indexed point are returned.
In this case, the query point is not considered its own neighbor.

n_neighbors : int

Number of neighbors to get (default is the value
passed to the constructor).

return_distance : boolean, optional. Defaults to True.

If False, distances will not be returned.

Returns:
dist : array

Array representing the lengths to points, only present if
return_distance=True

ind : array

Indices of the nearest points in the population matrix.

Examples

In the following example, we construct a NearestNeighbors
class from an array representing our data set and ask which is
the closest point to [1, 1, 1]:

>>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=1)
>>> neigh.fit(samples)
NearestNeighbors(algorithm='auto', leaf_size=30, ...)
>>> print(neigh.kneighbors([[1., 1., 1.]]))
(array([[0.5]]), array([[2]]))

As you can see, it returns [[0.5]], and [[2]], which means that the
element is at distance 0.5 and is the third element of samples
(indexes start at 0). You can also query for multiple points:

>>> X = [[0., 1., 0.], [1., 0., 1.]]
>>> neigh.kneighbors(X, return_distance=False)
array([[1],
       [2]]...)
kneighbors_graph(X=None, n_neighbors=None, mode='connectivity')[source]

Computes the (weighted) graph of k-Neighbors for points in X

Parameters:
X : array-like, shape (n_query, n_features), or (n_query, n_indexed) if metric == ‘precomputed’

The query point or points.
If not provided, neighbors of each indexed point are returned.
In this case, the query point is not considered its own neighbor.

n_neighbors : int

Number of neighbors for each sample.
(default is value passed to the constructor).

mode : {‘connectivity’, ‘distance’}, optional

Type of returned matrix: ‘connectivity’ will return the
connectivity matrix with ones and zeros; with ‘distance’ the
edges are Euclidean distances between points.

Returns:
A : sparse matrix in CSR format, shape = [n_samples, n_samples_fit]

n_samples_fit is the number of samples in the fitted data.
A[i, j] is assigned the weight of the edge that connects i to j.

Examples

>>> X = [[0], [3], [1]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=2)
>>> neigh.fit(X)
NearestNeighbors(algorithm='auto', leaf_size=30, ...)
>>> A = neigh.kneighbors_graph(X)
>>> A.toarray()
array([[1., 0., 1.],
       [0., 1., 1.],
       [1., 0., 1.]])
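
As a follow-up sketch on the same fitted estimator, mode=’distance’
stores the Euclidean distances as edge weights (output indicative):

>>> B = neigh.kneighbors_graph(X, mode='distance')
>>> B.toarray()
array([[0., 0., 1.],
       [0., 0., 2.],
       [1., 0., 0.]])
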
predict

Predict the labels (1 inlier, -1 outlier) of X according to LOF.

This method makes it possible to generalize prediction to new
observations (not in the training set). It is only available for
novelty detection (when novelty is set to True).

Parameters:
X : array-like, shape (n_samples, n_features)

The query sample or samples to compute the Local Outlier Factor
w.r.t. the training samples.

Returns:
is_inlier : array, shape (n_samples,)

Returns -1 for anomalies/outliers and +1 for inliers.

score_samples

Opposite of the Local Outlier Factor of X.

It is the opposite, as bigger is better, i.e. large values correspond
to inliers.

Only available for novelty detection (when novelty is set to True).
The argument X is supposed to contain new data: if X contains a
point from training, it considers the latter in its own neighborhood.
Also, the samples in X are not considered in the neighborhood of any
point.
The score_samples on training data is available through the
negative_outlier_factor_ attribute.

Parameters:
X : array-like, shape (n_samples, n_features)

The query sample or samples to compute the Local Outlier Factor
w.r.t. the training samples.

Returns:
opposite_lof_scores : array, shape (n_samples,)

The opposite of the Local Outlier Factor of each input sample.
The lower, the more abnormal.

set_params(**params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects
(such as pipelines). The latter have parameters of the form
<component>__<parameter> so that it’s possible to update each
component of a nested object.

Returns:
self