Machine Learning Done Wrong
Statistical modeling is a lot like engineering.
In engineering, there are various ways to build a key-value store, and each design makes a different set of assumptions about the usage pattern. In statistical modeling, there are various algorithms to build a classifier, and each algorithm makes a different set of assumptions about the data.
When dealing with small amounts of data, it’s reasonable to try as many algorithms as possible and to pick the best one since the cost of experimentation is low. But as we hit “big data”, it pays off to analyze the data upfront and then design the modeling pipeline (pre-processing, modeling, optimization algorithm, evaluation, productionization) accordingly.
As pointed out in my previous post, there are dozens of ways to solve a given modeling problem. Each model assumes something different, and it's not obvious how to navigate and identify which assumptions are reasonable. In industry, most practitioners pick the modeling algorithm they are most familiar with rather than the one that best suits the data. In this post, I would like to share some common mistakes (the don'ts). I'll save the best practices (the dos) for a future post.
1. Take the default loss function for granted
Many practitioners train and pick the best model using the default loss function (e.g., squared error). In practice, an off-the-shelf loss function rarely aligns with the business objective. Take fraud detection as an example. When trying to detect fraudulent transactions, the business objective is to minimize the fraud loss. The default loss function of binary classifiers weighs false positives and false negatives equally. To align with the business objective, the loss function should not only penalize false negatives more than false positives, but also penalize each false negative in proportion to the dollar amount. Also, data sets in fraud detection usually contain highly imbalanced labels. In these cases, bias the loss function in favor of the rare class (e.g., through up/down sampling).
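As a minimal sketch of one way to encode this (the post names no library, so scikit-learn and the synthetic data here are my assumptions), you can up-weight the rare fraud class and weight each fraudulent transaction by its dollar amount:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
amount = rng.exponential(scale=50.0, size=n)        # transaction amount in dollars
X = np.column_stack([amount, rng.normal(size=n)])   # toy feature matrix
y = (rng.random(n) < 0.01).astype(int)              # ~1% fraud: highly imbalanced labels

# Penalize each false negative in proportion to the dollars at risk,
# and bias the loss toward the rare fraud class (weights are illustrative).
sample_weight = np.where(y == 1, amount, 1.0)
clf = LogisticRegression(class_weight={0: 1.0, 1: 10.0})
clf.fit(X, y, sample_weight=sample_weight)
```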
2. Use plain linear models for non-linear interactions
When building a binary classifier, many practitioners immediately jump to logistic regression because it's simple. But many also forget that logistic regression is a linear model, so non-linear interactions among predictors need to be encoded manually. Returning to fraud detection, high-order interaction features like "billing address = shipping address and transaction amount < $50" are required for good model performance. So one should prefer non-linear models such as an SVM with a kernel or tree-based classifiers, which bake in higher-order interaction features.
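Here is a hedged sketch of the contrast (scikit-learn assumed; the data and the ground-truth rule are hypothetical): either hand the linear model the interaction explicitly, or let a tree ensemble discover it on its own.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000
same_address = rng.integers(0, 2, size=n)           # billing address == shipping address
amount = rng.exponential(scale=60.0, size=n)        # transaction amount in dollars
# Hypothetical ground truth: risk spikes only for the combination, plus label noise.
y = (((same_address == 1) & (amount < 50)) ^ (rng.random(n) < 0.05)).astype(int)

X = np.column_stack([same_address, amount])
X_interact = np.column_stack([same_address, amount,
                              same_address * (amount < 50)])  # hand-encoded interaction

LogisticRegression().fit(X, y)                      # underfits: linear in raw inputs
LogisticRegression().fit(X_interact, y)             # fits: interaction encoded manually
GradientBoostingClassifier().fit(X, y)              # fits: trees learn the split themselves
```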
3. Forget about outliers
Outliers are interesting. Depending on the context, they either deserve special attention or should be completely ignored. Take the example of revenue forecasting. If unusual spikes of revenue are observed, it's probably a good idea to pay extra attention to them and figure out what caused the spike. But if the outliers are due to mechanical error, measurement error or anything else that’s not generalizable, it’s a good idea to filter out these outliers before feeding the data to the modeling algorithm.
Some models are more sensitive to outliers than others. For instance, AdaBoost might treat outliers as "hard" cases and put tremendous weight on them, while a decision tree might simply count each outlier as one misclassification. If the data set contains a fair amount of outliers, it's important to either use a modeling algorithm that is robust to outliers or filter them out.
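As a minimal sketch of a robust alternative (scikit-learn assumed; the corrupted points are synthetic), a Huber loss shrugs off outliers that drag ordinary least squares far from the true slope:

```python
import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

rng = np.random.default_rng(2)
X = rng.uniform(0, 10, size=(200, 1))
y = 3.0 * X.ravel() + rng.normal(scale=1.0, size=200)
y[:5] += 100.0                                      # a few measurement-error outliers

print(LinearRegression().fit(X, y).coef_)           # dragged away from the true slope 3
print(HuberRegressor().fit(X, y).coef_)             # robust loss stays near 3
```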
4. Use a high-variance model when n << p
SVM is one of the most popular off-the-shelf modeling algorithms, and one of its most powerful features is the ability to fit the model with different kernels. SVM kernels can be thought of as a way to automatically combine existing features to form a richer feature space. Since this powerful feature comes almost for free, most practitioners use a kernel by default when training an SVM model. However, when n << p (the number of samples is far smaller than the number of features) -- common in fields like medicine -- the richer feature space implies a much higher risk of overfitting the data. In fact, high-variance models should be avoided entirely when n << p.
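A hedged sketch of the effect (scikit-learn assumed; the n << p data is synthetic, with signal in a single feature): cross-validated accuracy for a linear kernel versus an RBF kernel will typically favor the lower-variance linear model here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n, p = 50, 5_000                                    # far fewer samples than features
X = rng.normal(size=(n, p))
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)  # signal lives in one feature

for kernel in ("linear", "rbf"):
    scores = cross_val_score(SVC(kernel=kernel), X, y, cv=5)
    print(kernel, round(scores.mean(), 3))          # the richer RBF space tends to overfit
```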
5. L1/L2/... regularization without standardization
Applying an L1 or L2 penalty to large coefficients is a common way to regularize linear or logistic regression. However, many practitioners are not aware of the importance of standardizing features before applying regularization.
Returning to fraud detection, imagine a linear regression model with a transaction amount feature. Without regularization, if the unit of transaction amount is dollars, the fitted coefficient will be around 100 times larger than it would be if the unit were cents. With regularization, since L1/L2 penalize larger coefficients more, the transaction amount gets penalized more when the unit is dollars. Hence, the regularization is biased and tends to penalize features measured on smaller scales. To mitigate the problem, standardize all the features as a preprocessing step to put them on an equal footing.
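A minimal sketch of the bias and the fix (scikit-learn assumed; data synthetic): the same feature expressed in dollars versus cents draws very different L2 shrinkage until standardization puts the two encodings on equal footing.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
amount_dollars = rng.exponential(scale=50.0, size=500)
other = rng.normal(size=500)
y = 0.02 * amount_dollars + other + rng.normal(scale=0.1, size=500)

X_dollars = np.column_stack([amount_dollars, other])
X_cents = np.column_stack([amount_dollars * 100, other])  # same information, new unit

print(Ridge(alpha=10.0).fit(X_dollars, y).coef_)
print(Ridge(alpha=10.0).fit(X_cents, y).coef_)            # shrunk very differently

# Standardizing first makes the fitted (standardized) coefficients identical:
model = make_pipeline(StandardScaler(), Ridge(alpha=10.0))
print(model.fit(X_dollars, y).named_steps["ridge"].coef_)
print(model.fit(X_cents, y).named_steps["ridge"].coef_)
```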
6. Use a linear model without considering multi-collinear predictors
Imagine building a linear model with two variables X1 and X2, and suppose the ground truth model is Y = X1 + X2. Ideally, if the data is observed with only a small amount of noise, the linear regression solution recovers the ground truth. However, if X1 and X2 are collinear, then as far as most optimization algorithms are concerned, Y = 2*X1, Y = 3*X1 - X2, and Y = 100*X1 - 99*X2 are all equally good. The problem might not be detrimental, as it doesn't bias the estimation; however, it does make the problem ill-conditioned and the coefficient weights uninterpretable.
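Here is a minimal sketch of that instability (scikit-learn assumed; data synthetic): refitting on different noise draws swings the individual coefficients wildly, even though their sum, and hence the predictions, stay stable near the truth.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

for seed in range(3):
    rng = np.random.default_rng(seed)
    x1 = rng.normal(size=200)
    x2 = x1 + rng.normal(scale=1e-4, size=200)      # X2 nearly collinear with X1
    y = x1 + x2 + rng.normal(scale=0.1, size=200)   # ground truth: Y = X1 + X2
    coef = LinearRegression().fit(np.column_stack([x1, x2]), y).coef_
    print(coef, round(coef.sum(), 3))               # unstable pair, stable sum ~= 2
```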
7. Interpret the absolute value of coefficients from linear or logistic regression as feature importance
Because many off-the-shelf linear regression packages return a p-value for each coefficient, many practitioners believe that for linear models, the bigger the absolute value of a coefficient, the more important the corresponding feature. This is rarely true, because (a) changing the scale of a variable changes the absolute value of its coefficient, and (b) if features are multi-collinear, coefficients can shift from one feature to another. Moreover, the more features a data set has, the more likely the features are to be multi-collinear, and the less reliable coefficients are as indicators of feature importance.
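A minimal sketch of point (a) (scikit-learn assumed; data synthetic, with two equally important features): rescaling one feature rescales its coefficient without changing the model's predictions at all, so |coefficient| is not importance.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
x1, x2 = rng.normal(size=300), rng.normal(size=300)
y = 2.0 * x1 + 2.0 * x2 + rng.normal(scale=0.1, size=300)   # equally important features

m1 = LinearRegression().fit(np.column_stack([x1, x2]), y)
m2 = LinearRegression().fit(np.column_stack([x1 * 100, x2]), y)  # x1 in different units
print(m1.coef_)   # ~[2.0, 2.0]
print(m2.coef_)   # ~[0.02, 2.0] -- same feature, 100x smaller coefficient
```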
If you like the post, you can follow me (@chengtao_chu) on Twitter or subscribe to my blog "ML in the Valley". Also, special thanks to Ian Wong (@ihat) for reading a draft of this.
Cheng-Tao Chu
Director of Analytics at Codecademy. Specialties: data engineering and machine learning. Formerly: Google, LinkedIn and Square.