Curse of Dimensionality
Curse of Dimensionality refers to the non-intuitive properties of data observed when working in high-dimensional space*, specifically related to the usability and interpretation of distances and volumes. This is one of my favourite topics in machine learning and statistics: it has broad applications (it is not specific to any one machine learning method), it is very counter-intuitive and hence awe-inspiring, it has profound implications for all analytics techniques, and it has a 'cool' scary name, like some Egyptian curse!
For a quick grasp, consider this example: say you dropped a coin somewhere along a 100-meter line. How do you find it? Simple, just walk along the line and search. But what if it's a 100 x 100 sq. m field? It's already getting tough, trying to search a (roughly) football-ground-sized area for a single coin. But what if it's a 100 x 100 x 100 cu. m space?! The football ground now has a thirty-storey height. Good luck finding a coin there! That, in essence, is the "curse of dimensionality".
Many ML methods use distance measures
Most segmentation and clustering methods rely on computing distances between observations. The well-known k-Means algorithm assigns points to the nearest center. DBSCAN and hierarchical clustering also require distance metrics. Distribution- and density-based outlier detection algorithms likewise use distances relative to other distances to flag outliers.
Supervised classification solutions like the k-Nearest Neighbours method also use the distance between observations to assign a class to an unknown observation. The Support Vector Machine method involves transforming observations around selected kernels, based on the distance between each observation and the kernel.
A common form of recommendation system involves distance-based similarity between user and item attribute vectors. Even when other forms of distance are used, the number of dimensions plays a role in analytic design.
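To make this concrete, here is a minimal sketch of the kind of similarity computation a recommender might perform. The vectors and attribute counts below are made up for illustration, and cosine similarity is just one common choice of similarity measure:

```python
import numpy as np

# Hypothetical attribute vectors: a user's preference weights and two
# items' feature scores over the same four attributes.
user = np.array([0.9, 0.1, 0.4, 0.7])
item_a = np.array([0.8, 0.2, 0.5, 0.6])
item_b = np.array([0.1, 0.9, 0.2, 0.3])

def cosine_similarity(u, v):
    """Cosine of the angle between u and v; 1.0 means identical direction."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine_similarity(user, item_a))  # high: item_a aligns with the user
print(cosine_similarity(user, item_b))  # lower: item_b does not
```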
One of the most common distance metrics is the Euclidean distance, which is simply the linear distance between two points in multi-dimensional hyper-space. The Euclidean distance between points $i$ and $j$ in $n$-dimensional space can be computed as:

$$d(i,j) = \sqrt{\sum_{k=1}^{n} (x_{ik} - x_{jk})^2}$$
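As a quick illustration, a minimal numpy sketch of this formula:

```python
import numpy as np

def euclidean_distance(p, q):
    """Straight-line distance between points p and q in n-dimensional space."""
    p, q = np.asarray(p), np.asarray(q)
    return np.sqrt(np.sum((p - q) ** 2))

# Two points in 3-dimensional space
print(euclidean_distance([0, 0, 0], [1, 2, 2]))  # 3.0
```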
Distance plays havoc in high dimensions
Consider the simple process of data sampling. Suppose the outer black box in Fig. 1 is the data universe, with data points uniformly distributed across its whole volume, and that we want to sample 1% of the observations, as enclosed by the inner red box. The black box is a hyper-cube in multi-dimensional space, with each side representing the range of values in that dimension, for example x∈(0,100) and y∈(0,500) in the simple 3-dimensional example of Fig. 1.
Figure 1: Sampling
What proportion of each range should we sample to obtain that 1% sample? In 2 dimensions, sampling 10% of each range achieves the overall 1% sample, so we may select x∈(0,10) and y∈(0,50) and expect to capture 1% of all observations. This is because $0.10^2 = 0.01$. Do you expect this proportion to be higher or lower in 3 dimensions?
Even though our search now extends in an additional direction, the proportion actually increases to 21.5% (since $0.215^3 \approx 0.01$). Not only does it increase, for just one additional dimension it more than doubles! We now have to cover more than one-fifth of each dimension just to capture one-hundredth of the data. In general, to capture a fraction $p$ of uniformly distributed data in $n$ dimensions, one must cover $p^{1/n}$ of each dimension's range. In 10 dimensions this proportion is 63%, and in 100 dimensions (not an uncommon number of dimensions in real-life machine learning) one has to sample 95% of the range along each dimension to capture 1% of the observations! This mind-bending result happens because in high dimensions the data points spread out over an exponentially larger volume, even when they are uniformly distributed.
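A minimal sketch that reproduces the numbers quoted here, using the $p^{1/n}$ formula above:

```python
# Fraction of each dimension's range needed to capture 1% of the data,
# assuming points are uniformly distributed over the hyper-cube.
target = 0.01
for n in (1, 2, 3, 10, 100):
    edge_fraction = target ** (1 / n)
    print(f"{n:>3} dimensions: sample {edge_fraction:.1%} of each range")
# 1 -> 1.0%, 2 -> 10.0%, 3 -> 21.5%, 10 -> 63.1%, 100 -> 95.5%
```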
This has consequences for the design of experiments and sampling. The process becomes very computationally expensive, to the extent that the sample asymptotically approaches the population along each dimension even while the sample size remains much smaller than the population.
Consider another huge consequence of high dimensionality. Many algorithms measure the distance between two data points to define some sort of nearness (DBSCAN, kernels, k-Nearest Neighbours) with reference to some pre-defined distance threshold. In 2 dimensions, we can imagine that two points are near if one falls within a certain radius of the other. Consider the left image in Fig. 2: what share of uniformly spaced points within the black square falls inside the red circle? That is about $\pi/4 \approx 78.5\%$.
Figure 2: Nearness
So if you fit the biggest circle possible inside the square, you cover about 78% of the square's area. Yet the biggest sphere possible inside a cube covers only $\pi/6 \approx 52.4\%$ of the volume. This ratio shrinks exponentially, down to about 0.24% in just 10 dimensions! What it essentially means is that in the high-dimensional world almost every data point sits at a corner and nothing really lies at the center of the volume; in other words, the center's share of the volume shrinks to (almost) nothing. This has huge consequences for distance-based clustering algorithms: all distances start looking the same, and any one distance being more or less than another reflects random fluctuation in the data rather than any real measure of dissimilarity!
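These ratios can be checked directly from the standard n-ball volume formula $V_n(r) = \pi^{n/2} r^n / \Gamma(n/2 + 1)$; a small sketch using only Python's standard library:

```python
from math import pi, gamma

def inscribed_sphere_share(n):
    """Volume of the largest n-ball inside a unit n-cube (radius 0.5)."""
    r = 0.5
    return pi ** (n / 2) * r ** n / gamma(n / 2 + 1)

for n in (2, 3, 10):
    print(f"{n:>2}-D: {inscribed_sphere_share(n):.2%}")
# prints roughly 78.54%, 52.36%, and 0.25% respectively
```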
Fig. 3 shows randomly generated 2-D data and the corresponding all-to-all distances. The coefficient of variation of the distances, computed as the standard deviation divided by the mean, is 45.9%. The corresponding number for similarly generated 5-D data is 26.5%, and for 10-D data it is 19.1%. Admittedly this is one sample, but the trend supports the conclusion that in high dimensions every distance is about the same, and nothing is near or far!
Figure 3: Distance Clustering
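A minimal sketch of a similar experiment (the sample size and random seed below are arbitrary, so the exact percentages will differ somewhat from the figure):

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(42)
for dim in (2, 5, 10):
    points = rng.uniform(size=(500, dim))      # 500 uniform random points
    distances = pdist(points)                  # all pairwise Euclidean distances
    cv = distances.std() / distances.mean()    # coefficient of variation
    print(f"{dim:>2}-D: CV of pairwise distances = {cv:.1%}")
```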
High dimensionality affects other things too
Apart from distances and volumes, the number of dimensions creates other practical problems. Solution run-time and system-memory requirements often escalate non-linearly with an increase in the number of dimensions. Due to the exponential increase in feasible solutions, many optimization methods cannot reach the global optimum and have to make do with local optima. Further, instead of a closed-form solution, optimization must use search-based algorithms like gradient descent, genetic algorithms, and simulated annealing. More dimensions also introduce the possibility of correlation among variables, and parameter estimation can become difficult in regression approaches.
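As a toy illustration of such a search-based method, here is a minimal gradient descent sketch on a simple quadratic bowl; real high-dimensional objectives are far less friendly, which is exactly where local optima become a problem:

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient; no closed-form solution needed."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = sum(x_i^2), whose gradient is 2x; the minimum is the origin.
x_min = gradient_descent(lambda x: 2 * x, x0=np.ones(10))
print(x_min)  # close to the zero vector
```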
Dealing with High-dimension
This will be a separate blog post in itself, but correlation analysis, clustering, information value, variance inflation factor, and principal component analysis are some of the ways in which the number of dimensions can be reduced.
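As a small taste of what that looks like in practice, here is a minimal principal component analysis sketch using scikit-learn; the synthetic data and its shapes are arbitrary, chosen only to show the reduction:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 200 observations of 50 correlated features, driven by 5 hidden factors
latent = rng.normal(size=(200, 5))
X = latent @ rng.normal(size=(5, 50)) + 0.1 * rng.normal(size=(200, 50))

pca = PCA(n_components=0.95)   # keep enough components for 95% of variance
X_reduced = pca.fit_transform(X)
print(X.shape, "->", X_reduced.shape)  # e.g. (200, 50) -> (200, 5)
```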
* The number of variables, attributes, or features that a data point is made up of is called the dimension of the data. For instance, any point in space can be represented using the 3 co-ordinates of length, breadth, and height, and hence has 3 dimensions.
Other Articles by the same author:
Understanding and Creating Decision Tree
Decision Trees: Development & Scoring