Data Transformation / Learning with Counts
Handling discrete features in machine learning
Updated: August 25, 2016
Learning with counts is an efficient way to create a compact set of features for a dataset, based on counts of the values. You can use the modules in this section to build a set of counts and features, and later update the counts and the features to take advantage of new data, or merge two sets of count data.
The basic idea underlying count-based featurization is simple: by calculating counts, you can quickly and easily get a summary of what columns contain the most important information. The module counts the number of times a value appears, and then provides that information as a feature for input to a model.
Example of Count-Based Learning
Imagine you’re trying to validate a credit card transaction. One crucial piece of information is where this transaction came from, and one of the most common encodings of that location is the postal code. However, there might be as many as 40,000 postal codes, zip codes, and geographical codes to account for. Does your model have the capacity to learn 40,000 more parameters? If you give it that capacity, do you now have enough training data to prevent it from overfitting?
If you had really good data with lots of samples, such fine-grained location information could be quite powerful. However, if you have only one sample of a fraudulent transaction from a small locality, does it mean that all of the transactions from that place are bad, or that you don’t have enough data?
One solution to this conundrum is to learn with counts. That is, rather than introduce 40,000 more features, you can observe the counts and proportions of fraud for each postal code. By using these counts as features, you gain a notion of the strength of the evidence for each value. Moreover, by encoding the relevant statistics of the counts, the learner can use the statistics to decide when to back off and use other features.
Count-based learning is very attractive for many reasons: You have fewer features, requiring fewer parameters, which makes for faster learning, faster prediction, smaller predictors, and less potential to overfit.
How Counts are Created
An example might help to demonstrate how count-based features are created and applied. This example is highly simplified, to give you an idea of the overall process, and how to use and interpret count-based features.
Suppose you have a table like this, with labels and inputs:
| Label column | Input value |
|---|---|
| 0 | A |
| 0 | A |
| 1 | A |
| 0 | B |
| 1 | B |
| 1 | B |
| 1 | B |
Here is how count-based features are created:

1. Each case (or row, or sample) has a set of values in columns. Here, the values are A, B, and so forth.
2. For a particular set of values, you find all the other cases in that dataset that have the same value. In this case, there are three instances of A and four of B.
3. Next, you count their class memberships as features in themselves. In this case, you get a small matrix in which there are 2 cases where A = 0, 1 case where A = 1, 1 case where B = 0, and 3 cases where B = 1.
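To make the counting step concrete, here is a minimal Python sketch (an illustration, not the module itself) that builds the same small count matrix from the example table:

```python
# A minimal sketch: tally class memberships for each input value.
from collections import Counter, defaultdict

# The example table: (label, input value)
rows = [(0, "A"), (0, "A"), (1, "A"), (0, "B"), (1, "B"), (1, "B"), (1, "B")]

counts = defaultdict(Counter)
for label, value in rows:
    counts[value][label] += 1

print(dict(counts["A"]))  # {0: 2, 1: 1} -> 2 cases where A = 0, 1 case where A = 1
print(dict(counts["B"]))  # {0: 1, 1: 3} -> 1 case where B = 0, 3 cases where B = 1
```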
When you create features based on this matrix, you get a variety of count-based features, including a calculation of the log-odds ratio as well as the counts for each target class:
| Label | 0_0_Class000_Count | 0_0_Class001_Count | 0_0_Class000_LogOdds | 0_0_IsBackoff |
|---|---|---|---|---|
| 0 | 2 | 1 | 0.510826 | 0 |
| 0 | 2 | 1 | 0.510826 | 0 |
| 1 | 2 | 1 | 0.510826 | 0 |
| 0 | 1 | 3 | -0.8473 | 0 |
| 1 | 1 | 3 | -0.8473 | 0 |
| 1 | 1 | 3 | -0.8473 | 0 |
| 1 | 1 | 3 | -0.8473 | 0 |
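The feature columns above can be derived with a minimal sketch like the one below. The smoothing used for the LogOdds column is described under Technical Notes; the prior coefficient of 1 and the uniform prior of 0.5 per class used here are assumptions chosen for this illustration, and they happen to reproduce the values in the table:

```python
# A minimal sketch of attaching count-based features to each row.
# Assumed smoothing parameters (not taken from the module's defaults):
# prior coefficient c = 1 and a uniform prior of 0.5 per class.
import math
from collections import Counter, defaultdict

rows = [(0, "A"), (0, "A"), (1, "A"), (0, "B"), (1, "B"), (1, "B"), (1, "B")]

counts = defaultdict(Counter)
for label, value in rows:
    counts[value][label] += 1

c, p0, p1 = 1.0, 0.5, 0.5
features = []
for label, value in rows:
    x0, x1 = counts[value][0], counts[value][1]
    log_odds = math.log((x0 + c * p0) / (x1 + c * p1))
    features.append((label, x0, x1, round(log_odds, 6), 0))  # last field: IsBackoff

for row in features:
    print(row)  # e.g. (0, 2, 1, 0.510826, 0)
```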
Examples
An article from the Microsoft Machine Learning team provides a detailed walkthrough of how to use counts in machine learning, and compares the efficacy of count-based modeling with other methods.
Technical Notes
How is the LogOdds value calculated?
The LogOdds value is not the plain log-odds of the raw counts; the prior distribution is used to smooth the log-odds computation.
Suppose you have a data set used for binary classification. In this dataset, the prior frequency for class 0 is p_0, and the prior frequency for class 1 is p_1 = 1 – p_0. For a certain training example feature, the count for class 0 is x_0, and the count for class 1 is x_1.
Under these assumptions, the log-odds is computed as:
LogOdds = Log(x_0 + c * p_0) - Log(x_1 + c * p_1)
Where:
- c is the prior coefficient, which can be set by the user.
- Log is the natural logarithm.
In other words, for each class i:
LogOdds[i] = Log( (count[i] + prior_coefficient * prior_frequency[i]) / (sum_of_counts - count[i] + prior_coefficient * (1 - prior_frequency[i])) )
If the prior coefficient is positive, the log odds can be different from Log(count[i] / (sum_of_counts - count[i])).
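For example, plugging the counts for value A from the earlier table (x_0 = 2, x_1 = 1) into the binary formula, and assuming a prior coefficient of 1 with a uniform prior of 0.5 per class, gives LogOdds = Log(2.5 / 1.5) ≈ 0.5108, while value B (x_0 = 1, x_1 = 3) gives Log(1.5 / 3.5) ≈ -0.8473, matching the LogOdds column shown earlier; these smoothing parameters are assumptions used only for this check.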
Why are the log odds not computed for some items?
By default, all items with a count less than 10 are collected in a single bucket called the "garbage bin". You can change this behavior by using the Garbage bin threshold option in the Modify Count Table Parameters module.
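As a rough illustration of the garbage-bin behavior (the helper below is hypothetical; only the default threshold of 10 comes from the text):

```python
# Hypothetical sketch of the garbage-bin idea: values whose total count falls
# below the threshold are pooled into one shared bucket before featurization.
GARBAGE_BIN_THRESHOLD = 10  # default from the text; configurable in the module

def bucket_for(value: str, total_count: int) -> str:
    """Keep frequent values as-is; map rare values to a single shared bin."""
    return value if total_count >= GARBAGE_BIN_THRESHOLD else "__garbage_bin__"

print(bucket_for("90210", 57))  # '90210'
print(bucket_for("A", 3))       # '__garbage_bin__'
```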
List of Modules
The Learning with Counts category includes the following modules:
| Module | Description |
|---|---|
| Build Counting Transform | Creates a count table and count-based features from a dataset, and saves it as a transformation |
| Export Count Table | Exports a count table from a counting transform. This module supports backward compatibility with experiments that create count-based features using Build Count Table (deprecated) and Count Featurizer (deprecated). |
| Import Count Table | Imports an existing count table. This module supports backward compatibility with experiments that create count-based features using Build Count Table (deprecated) and Count Featurizer (deprecated). It supports conversion of count tables to count transformations. |
| Merge Count Transform | Merges two sets of count-based features |
| Modify Count Table Parameters | Modifies count-based features derived from an existing count table |