Machine Learning Done Wrong [Repost]
1. Take default loss function for granted
Many practitioners train and pick the best model using the default loss function (e.g., squared error). In practice, an off-the-shelf loss function rarely aligns with the business objective. Take fraud detection as an example. When trying to detect fraudulent transactions, the business objective is to minimize the fraud loss. The off-the-shelf loss function of a binary classifier weighs false positives and false negatives equally. To align with the business objective, the loss function should not only penalize false negatives more heavily than false positives, but also penalize each false negative in proportion to the dollar amount involved. In addition, data sets in fraud detection usually contain highly imbalanced labels. In these cases, bias the loss function in favor of the rare class (e.g., through up/down sampling).
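As a minimal sketch of one way to encode this (assuming scikit-learn; the synthetic data, column meanings, and weighting scheme are illustrative assumptions, not from the original article), per-row sample weights proportional to the dollar amount make missing a large fraudulent transaction cost the model more than missing a small one:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: X holds transaction features, y marks fraud (1) vs. legit (0),
# and `amount` is the dollar value of each transaction.
rng = np.random.default_rng(0)
n = 10_000
X = rng.normal(size=(n, 5))
y = (rng.random(n) < 0.02).astype(int)                 # highly imbalanced labels
amount = rng.lognormal(mean=4.0, sigma=1.0, size=n)

# Weight fraudulent rows by their dollar amount so a missed $5,000 fraud is
# penalized far more than a missed $5 fraud; legitimate rows keep weight 1.
sample_weight = np.where(y == 1, amount, 1.0)

clf = LogisticRegression(max_iter=1000)
clf.fit(X, y, sample_weight=sample_weight)
```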
2. Use plain linear models for non-linear interaction
When building a binary classifier, many practitioners immediately jump to logistic regression because it's simple. But many forget that logistic regression is a linear model, so non-linear interactions among predictors need to be encoded manually. Returning to fraud detection, high-order interaction features like "billing address = shipping address and transaction amount < $50" are required for good model performance. So one should prefer non-linear models such as SVMs with kernels or tree-based classifiers that bake in higher-order interaction features.
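A small sketch of the contrast (again assuming scikit-learn, with invented feature names): the linear model only sees the interaction if it is encoded by hand, while a tree-based model can pick it up from the raw features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical features: whether billing address matches shipping address, and the amount.
rng = np.random.default_rng(1)
n = 5_000
addr_match = rng.integers(0, 2, size=n)
amount = rng.uniform(1, 500, size=n)
# Label driven by the interaction "addr_match and amount < 50" (purely illustrative).
y = ((addr_match == 1) & (amount < 50)).astype(int)

# Linear model: encode the interaction manually as an extra column.
interaction = (addr_match * (amount < 50)).astype(float)
X_linear = np.column_stack([addr_match, amount, interaction])
LogisticRegression(max_iter=1000).fit(X_linear, y)

# Tree-based model: feed the raw features and let splits capture the interaction.
X_raw = np.column_stack([addr_match, amount])
GradientBoostingClassifier().fit(X_raw, y)
```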
3. Forget about outliers
Outliers are interesting. Depending on the context, they either deserve special attention or should be completely ignored. Take the example of revenue forecasting. If unusual spikes of revenue are observed, it's probably a good idea to pay extra attention to them and figure out what caused the spike. But if the outliers are due to mechanical error, measurement error or anything else that’s not generalizable, it’s a good idea to filter out these outliers before feeding the data to the modeling algorithm.
Some models are more sensitive to outliers than others. For instance, AdaBoost might treat outliers as "hard" cases and put tremendous weight on them, while a decision tree might simply count each outlier as one misclassification. If the data set contains a fair number of outliers, it's important to either use a modeling algorithm that is robust to outliers or filter the outliers out.
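One simple way to filter gross outliers before fitting is an interquartile-range rule; the 1.5 × IQR cutoff below is a common heuristic rather than anything prescribed by the article, and the revenue numbers are made up:

```python
import numpy as np

def filter_iqr(x, k=1.5):
    """Drop points more than k * IQR outside the interquartile range."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return x[(x >= lo) & (x <= hi)]

revenue = np.array([100, 102, 98, 105, 99, 5000, 101])  # 5000 is a spurious spike
clean = filter_iqr(revenue)
print(clean)  # the 5000 outlier is removed before modeling
```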
4. Use high variance model when n<<p
SVM is one of the most popular off-the-shelf modeling algorithms, and one of its most powerful features is the ability to fit the model with different kernels. SVM kernels can be thought of as a way to automatically combine existing features to form a richer feature space. Since this powerful feature comes almost for free, most practitioners use a kernel by default when training an SVM model. However, when the data has n << p (number of samples << number of features) -- common in domains such as medical data -- the richer feature space implies a much higher risk of overfitting the data. In fact, high variance models should be avoided entirely when n << p.
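A rough sketch of the effect on synthetic data with far more features than samples (assuming scikit-learn; the exact scores depend on the random data and are not from the article): an RBF-kernel SVM will typically generalize worse in this regime than a plain, regularized linear SVM.

```python
import numpy as np
from sklearn.svm import SVC, LinearSVC
from sklearn.model_selection import cross_val_score

# n << p: 50 samples, 5,000 features, only a couple of which carry signal.
rng = np.random.default_rng(2)
X = rng.normal(size=(50, 5000))
y = (X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=50) > 0).astype(int)

rbf_scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
lin_scores = cross_val_score(LinearSVC(C=0.1, max_iter=5000), X, y, cv=5)
print("RBF kernel CV accuracy:   ", rbf_scores.mean())
print("Linear SVM CV accuracy:   ", lin_scores.mean())
```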
5. L1/L2/... regularization without standardization
Applying an L1 or L2 penalty to large coefficients is a common way to regularize linear or logistic regression. However, many practitioners are not aware of the importance of standardizing features before applying such regularization.
Returning to fraud detection, imagine a linear regression model with a transaction amount feature. Without regularization, if the unit of transaction amount is dollars, the fitted coefficient will be roughly 100 times larger than if the unit were cents. With regularization, since L1/L2 penalize larger coefficients more, the transaction amount gets penalized more when the unit is dollars. The regularization is therefore biased and tends to penalize features measured on smaller scales. To mitigate the problem, standardize all the features and put them on equal footing as a preprocessing step.
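A minimal sketch of the fix (assuming scikit-learn): wrap the scaler and the regularized model in a pipeline so standardization is always applied before the penalty, regardless of the units the raw features happen to be in. The feature names and numbers are illustrative.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n = 1_000
amount_dollars = rng.uniform(1, 500, size=n)
other_feature = rng.normal(size=n)
X = np.column_stack([amount_dollars, other_feature])
y = 0.01 * amount_dollars + other_feature + 0.1 * rng.normal(size=n)

# Standardize first so the L1 penalty treats both features on equal footing,
# whether the amount is recorded in dollars or in cents.
model = make_pipeline(StandardScaler(), Lasso(alpha=0.1))
model.fit(X, y)
print(model.named_steps["lasso"].coef_)
```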
6. Use linear model without considering multi-collinear predictors
Imagine building a linear model with two variables X1 and X2, and suppose the ground-truth model is Y = X1 + X2. Ideally, if the data is observed with a small amount of noise, the linear regression solution would recover the ground truth. However, if X1 and X2 are collinear, then as far as most optimization algorithms are concerned, Y = 2*X1, Y = 3*X1 - X2, or Y = 100*X1 - 99*X2 are all equally good. The problem might not be detrimental, as it doesn't bias the estimation. However, it does make the problem ill-conditioned and the coefficient weights uninterpretable.
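The instability is easy to see in a small simulation (a sketch on made-up data, not from the article): with X2 nearly identical to X1, rerunning the fit with different noise swings the individual coefficients wildly even though their sum, and the predictions, stay sensible.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 1e-6 * rng.normal(size=n)    # nearly perfectly collinear with x1

for seed in range(3):
    noise = np.random.default_rng(seed).normal(scale=0.1, size=n)
    y = x1 + x2 + noise                 # ground truth: Y = X1 + X2
    coef = LinearRegression().fit(np.column_stack([x1, x2]), y).coef_
    print(coef)  # individual coefficients jump around; their sum stays near 2
```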
7. Interpreting absolute value of coefficients from linear or logistic regression as feature importance
Because many off-the-shelf linear regression packages return a p-value for each coefficient, many practitioners believe that for linear models, the bigger the absolute value of the coefficient, the more important the corresponding feature is. This is rarely true because (a) changing the scale of a variable changes the absolute value of its coefficient, and (b) if features are multi-collinear, coefficients can shift from one feature to another. Also, the more features the data set has, the more likely the features are to be multi-collinear and the less reliable it is to read feature importance from the coefficients.
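A quick sketch of point (a) (assuming scikit-learn, with made-up data): re-expressing the same feature in different units rescales its coefficient by the same factor without changing how important the feature actually is.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 1_000
amount_dollars = rng.uniform(1, 500, size=n)
y = 0.02 * amount_dollars + rng.normal(scale=0.1, size=n)

coef_dollars = LinearRegression().fit(amount_dollars.reshape(-1, 1), y).coef_[0]
coef_cents = LinearRegression().fit((amount_dollars * 100).reshape(-1, 1), y).coef_[0]
print(coef_dollars, coef_cents)  # same feature, coefficients differ by a factor of 100
```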
So there you go: 7 common mistakes when doing ML in practice. This list is not meant to be exhaustive but merely to provoke the reader to consider modeling assumptions that may not be applicable to the data at hand. To achieve the best model performance, it is important to pick the modeling algorithm that makes the most fitting assumptions -- not just the one you’re most familiar with.
Original article: http://ml.posthaven.com/machine-learning-done-wrong