Seven Techniques for Data Dimensionality Reduction
The recent explosion of data set size, in number of records as well as attributes, has triggered the development of a number of big data platforms and parallel data analytics algorithms. At the same time, it has pushed the adoption of data dimensionality reduction procedures. Indeed, more is not always better: large amounts of data can sometimes produce worse performance in data analytics applications.
One of my most recent projects happened to be about churn prediction and to use the 2009 KDD Challenge large data set. The peculiarity of this data set is its very high dimensionality, with 15K data columns. Most data mining algorithms are implemented column-wise, which makes them slower and slower as the number of data columns grows. The first milestone of the project was therefore to reduce the number of columns in the data set while losing as little information as possible.
Using the project as an excuse, we started exploring the state of the art in dimensionality reduction techniques currently available and accepted in the data analytics landscape. The seven techniques are listed below; for illustration, a short Python sketch of each one follows the list.
- Missing Values Ratio. Data columns with too many missing values are unlikely to carry much useful information. Thus, data columns whose ratio of missing values exceeds a given threshold can be removed. The lower the threshold, the more aggressive the reduction.
- Low Variance Filter. Similar to the previous technique, data columns with little variation in their values carry little information. Thus, all data columns with variance lower than a given threshold are removed. A word of caution: variance is range dependent; therefore normalization is required before applying this technique.
- High Correlation Filter. Data columns with very similar trends are also likely to carry very similar information, in which case only one of them is needed to feed the machine learning model. Here we calculate the correlation coefficient between numerical columns and between nominal columns as the Pearson's product-moment coefficient and the Pearson's chi-square value, respectively. Pairs of columns with a correlation coefficient higher than a threshold are reduced to only one. A word of caution: correlation is scale sensitive; therefore column normalization is required for a meaningful correlation comparison.
- Random Forests / Ensemble Trees. Decision tree ensembles, also referred to as random forests, are useful for feature selection in addition to being effective classifiers. One approach to dimensionality reduction is to generate a large and carefully constructed set of trees against a target attribute and then use each attribute's usage statistics to find the most informative subset of features. Specifically, we can generate a large set (2000) of very shallow trees (2 levels), with each tree being trained on a small fraction (3 attributes) of the total number of attributes. If an attribute is often selected as the best split, it is most likely an informative feature to retain. A score calculated on the attribute usage statistics in the random forest tells us ‒ relative to the other attributes ‒ which are the most predictive attributes.
- Principal Component Analysis (PCA). Principal Component Analysis (PCA) is a statistical procedure that orthogonally transforms the original n coordinates of a data set into a new set of n coordinates called principal components. As a result of the transformation, the first principal component has the largest possible variance; each succeeding component has the highest possible variance under the constraint that it is orthogonal to (i.e., uncorrelated with) the preceding components. Keeping only the first m < n components reduces the data dimensionality while retaining most of the data information, i.e. the variation in the data. Notice that the PCA transformation is sensitive to the relative scaling of the original variables, so data column ranges need to be normalized before applying PCA. Also notice that the new coordinates (the PCs) are no longer the original, system-produced variables: after applying PCA, your data set loses its interpretability. If interpretability of the results is important for your analysis, PCA is not the transformation for your project.
- Backward Feature Elimination. In this technique, at a given iteration, the selected classification algorithm is trained on n input features. Then we remove one input feature at a time and train the same model on n-1 input features n times. The input feature whose removal has produced the smallest increase in the error rate is removed, leaving us with n-1 input features. The classification is then repeated using n-2 features, and so on. Each iteration k produces a model trained on n-k features and an error rate e(k). Selecting the maximum tolerable error rate, we define the smallest number of features necessary to reach that classification performance with the selected machine learning algorithm.
- Forward Feature Construction. This is the inverse process to Backward Feature Elimination. We start with one feature only, progressively adding one feature at a time, i.e. the feature that produces the highest increase in performance. Both algorithms, Backward Feature Elimination and Forward Feature Construction, are quite expensive in time and computation. They are practically applicable only to data sets with an already relatively low number of input columns.
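The original analysis was built as KNIME workflows, but the ideas above are easy to sketch in a few lines of Python. The snippets that follow are only illustrative: function names, thresholds, and library choices (pandas / scikit-learn) are our own assumptions, not the project's implementation. A minimal sketch of the missing values ratio filter:

```python
import pandas as pd

def drop_sparse_columns(df: pd.DataFrame, max_missing_ratio: float = 0.4) -> pd.DataFrame:
    """Drop every column whose fraction of missing values exceeds the threshold."""
    missing_ratio = df.isna().mean()  # per-column fraction of missing cells
    keep = missing_ratio[missing_ratio <= max_missing_ratio].index
    return df[keep]
```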
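A similar sketch for the low variance filter, including the normalization step the technique requires (the 0.03 threshold is just a placeholder):

```python
import pandas as pd

def drop_low_variance_columns(df: pd.DataFrame, min_variance: float = 0.03) -> pd.DataFrame:
    """Normalize numeric columns to [0, 1], then drop those whose variance falls below the threshold."""
    numeric = df.select_dtypes(include="number")
    ranges = (numeric.max() - numeric.min()).replace(0, 1)  # avoid division by zero for constant columns
    normalized = (numeric - numeric.min()) / ranges
    variances = normalized.var()
    low_variance = variances[variances < min_variance].index
    return df.drop(columns=list(low_variance))
```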
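For the high correlation filter, the sketch below covers only the numerical columns with Pearson's correlation; nominal columns would need the chi-square measure mentioned above. The 0.2 threshold is again a placeholder:

```python
import numpy as np
import pandas as pd

def drop_correlated_columns(df: pd.DataFrame, max_correlation: float = 0.2) -> pd.DataFrame:
    """For every pair of numeric columns correlated above the threshold, keep only one of the two."""
    numeric = df.select_dtypes(include="number")
    corr = numeric.corr().abs()                                        # Pearson correlation matrix
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))  # upper triangle, diagonal excluded
    to_drop = [col for col in upper.columns if (upper[col] > max_correlation).any()]
    return df.drop(columns=to_drop)
```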
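The attribute-usage idea behind the random forest technique can be approximated with scikit-learn's impurity-based feature importances. Note that scikit-learn samples the candidate attributes at each split rather than per tree, so this is an approximation of the setup described above, not a faithful reproduction:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def rank_features_with_forest(X: pd.DataFrame, y, top_k: int = 50):
    """Train a large set of very shallow trees and rank attributes by how much they contribute to the splits."""
    forest = RandomForestClassifier(
        n_estimators=2000,   # a large set of trees
        max_depth=2,         # very shallow trees (2 levels)
        max_features=3,      # only a few candidate attributes examined at each split
        n_jobs=-1,
        random_state=0,
    )
    forest.fit(X, y)
    importances = pd.Series(forest.feature_importances_, index=X.columns)
    return importances.sort_values(ascending=False).head(top_k).index.tolist()
```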
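A PCA sketch with the required normalization of the column ranges; here the number of components is chosen indirectly, by asking for enough components to retain 95% of the variance (an arbitrary choice for illustration):

```python
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

def project_onto_principal_components(X, variance_to_keep: float = 0.95):
    """Normalize the column ranges, then keep the first principal components explaining the requested variance."""
    pipeline = make_pipeline(MinMaxScaler(), PCA(n_components=variance_to_keep))
    return pipeline.fit_transform(X)
```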
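Finally, Backward Feature Elimination and Forward Feature Construction are plain greedy loops around whatever classifier is being evaluated. The sketch below uses a decision tree and cross-validated accuracy, with placeholder stopping criteria, and assumes numeric, non-missing inputs:

```python
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def backward_feature_elimination(X, y, min_accuracy: float = 0.70):
    """Repeatedly drop the feature whose removal hurts cross-validated accuracy the least."""
    features = list(X.columns)
    model = DecisionTreeClassifier(random_state=0)
    while len(features) > 1:
        # score the data set once per candidate feature, with that feature left out
        scores = {
            f: cross_val_score(model, X[[c for c in features if c != f]], y, cv=3).mean()
            for f in features
        }
        best_feature, best_score = max(scores.items(), key=lambda item: item[1])
        if best_score < min_accuracy:  # removing any further feature would degrade accuracy too much
            break
        features.remove(best_feature)
    return features

def forward_feature_construction(X, y, n_features: int = 20):
    """Greedily add the feature whose inclusion raises cross-validated accuracy the most."""
    selected, remaining = [], list(X.columns)
    model = DecisionTreeClassifier(random_state=0)
    while remaining and len(selected) < n_features:
        scores = {f: cross_val_score(model, X[selected + [f]], y, cv=3).mean() for f in remaining}
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected
```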
We took this chance to compare those techniques on the smaller data set of the 2009 KDD challenge in terms of reduction ratio, accuracy degradation, and speed. The final accuracy and its degradation depend, of course, on the model selected for the analysis. Thus, the compromise between reduction ratio and final accuracy is optimized against a bag of three specific models: decision tree, neural networks, and naïve Bayes.
Running the optimization loop, the best cutoffs, in terms of lowest number of columns and best accuracy, were determined for each one of the seven dimensionality reduction techniques and for the best performing model. The final best model performance, in terms of accuracy and area under the ROC curve (AuC), was compared with the performance of the baseline algorithm using all input features. Results of this comparison are reported in the table below.
| Dimensionality Reduction | Reduction Rate | Accuracy on validation set | Best Threshold | AuC | Notes |
|---|---|---|---|---|---|
| Baseline | 0% | 73% | - | 81% | Baseline models are using all input features |
| Missing Values Ratio | 71% | 76% | 0.4 | 82% | - |
| Low Variance Filter | 73% | 82% | 0.03 | 82% | Only for numerical columns |
| High Correlation Filter | 74% | 79% | 0.2 | 82% | No correlation available between numerical and nominal columns |
| PCA | 62% | 74% | - | 72% | Only for numerical columns |
| Random Forest / Ensemble Trees | 86% | 76% | - | 82% | - |
| Backward Feature Elimination + missing values ratio | 99% | 94% | - | 78% | Backward Feature Elimination and Forward Feature Construction are prohibitively slow on high-dimensional data sets. They become practical only after another dimensionality reduction technique has been applied first, here the one based on the missing values ratio. |
| Forward Feature Construction + missing values ratio | 91% | 83% | - | 63% | - |
Notice that the highest reduction ratio without performance degradation is obtained by analyzing the decision cuts in many random forests (Random Forests/Ensemble Trees). However, even just counting the number of missing values, measuring the column variance, and measuring the correlation of pairs of columns can lead to a satisfactory reduction rate while keeping performance unaltered with respect to the baseline models.
What we have learned from this little review exercise is that dimensionality reduction is not only useful to speed up algorithm execution, but also to improve model performance. The area under the curve (AuC) in the table shows a slight increase on the test data when the missing values ratio, the low variance filter, the high correlation filter, or the random forests are applied.
Indeed, in the era of big data, when more is axiomatically better, we have re-discovered that too many noisy or even faulty input data columns often lead to less than desirable algorithm performance. Removing un-informative or, even worse, dis-informative input attributes might help build a model on more extensive data regions, with more general classification rules, and overall with better performance on new, unseen data.
Recently, we asked data analysts on a LinkedIn group (https://www.linkedin.com/grp/post/35222-5998794653007171586) for the most used dimensionality reduction techniques, besides the seven described in this blog post. The answers involved Random Projections, NMF, (Stacked) Auto-encoders, Chi-square or Information Gain, Multidimensional Scaling, Correspondence Analysis, Factor Analysis, Clustering, and Bayesian Models. Thanks to Asterios Stergioudis, Raoul Savos, and Michael Will who provided the suggestions on the LinkedIn group.
The workflows described in this blog post are available on the KNIME EXAMPLES server under 003_Preprocessing/003005_dimensionality_reduction.
Both small and large data sets from the 2009 KDD Challenge can be downloaded from http://www.sigkdd.org/kdd-cup-2009-customer-relationship-prediction.
This is just a brief summary of the whole project. If you are interested in all the tiny details, you can always read the related whitepaper in the Whitepapers section on the KNIME web site: https://www.knime.org/files/knime_seventechniquesdatadimreduction.pdf

Below are the ROC curves for all the evaluated dimensionality reduction techniques and the best performing machine learning algorithm. The value of the area under the curve is shown in the legend.
