Analysis Guidelines
This section describes some best practices for analysis. These practices come from the experience of analysts on the Data Mining Team. We list a few things you should do (or at least consider doing) and some pitfalls to avoid. We also provide a list of issues to keep in mind that could affect the quality of your results. Finally, we reference a list of tools and data sets that might help in your analysis.
Analysis Quality
- Did you spend time thinking about what question you were answering?
- Did you engage potential users of your analysis to ensure you address the right questions?
- How much effort did you put into checking the quality of the data?
- How reproducible is your analysis? If you were to pick up your project 6 months from now, could you reuse anything?
- Did you review your write-up to your satisfaction?
- Did you have others review your analysis artifacts (scripts, code, etc.)?
- Is your write-up something you would be proud to publish?
- Do you think readers of your analysis summary can understand the key points easily and benefit from them?
Analysis Do's
- Look at the distribution of your data. Always look at histograms (values and counts) for the key fields in your analysis and see what pops out. In most cases, you will find some surprises that need further investigation before you dive into your real analysis (see the first sketch after this list).
- Skewed distributions. Most of the data distributions we see in our work are very skewed ("heavy" or long-tailed). For example, if you are analyzing queries, there may be a handful of queries that dominate (e.g., "google"). The metrics computed for a particular feature or vertical may be heavily skewed because of those few queries.
- Segmentation. Metrics are more useful when segmented appropriately — not all segments are necessarily useful, but almost always some kind of segmentation can provide more useful insights, e.g. segmenting by dominant/non-dominant query (head vs. tail, "super-head" vs. rest); see the second sketch after this list. For more on this, see the section on Segmentation. See also a good blog post on segmentation from the Web Analytics expert Avinash Kaushik: http://www.kaushik.net/avinash/2010/05/web-analytics-segments-three-category-recommendations.html
- Deep dive: always look at some unaggregated data as part of your analysis -- especially for results that are surprising (either positively or negatively). One good approach is to use Magic Mirror to get a few sample sessions and see what users are doing in detail. That will not answer the questions you have, but it may raise questions that had not been considered, or show that some assumptions you made are false.
- Make sure the data is correct. Talk to the people who generated the data to verify that every field you are using means what you think it means. Don't trust your intuition; always check. For example, when using the DQ field from one of the databases, it is good to verify which verticals are included in the DQ computation. Not all are included, and the list of those that are included differs between the Competitive and Live Metrics databases.
- Think about baselines. Make sure that the numbers you are comparing are meaningful in their comparison. Often some subset of the population cannot be meaningfully compared to the population as a whole. For example, it isn't terribly meaningful to compare IE entry point Bing users to the global Bing user population in terms of value, because the global Bing user population will be biased by low-value marketing users, have different demographics, etc. It may be that you will simply demonstrate that marketing users are less likely to return than IE and Toolbar users, which is expected, and not what you set out to prove at all.
- Think ahead about possible shortfalls of your methods. Build specific experiments to test whether these shortcomings are real. The beginning of any analysis project should include an active brainstorm of possible reasons the analysis method could be flawed. The project should specifically build in experiments and data sets to attempt to prove or disprove those possible shortcomings. For example, when developing Session Success Rate, we realized there were concerns that success due to answers would not be properly measured, invalidating the metric for answers-related experiments. To shed light on this, we made sure to test on data from a known-good answers ranker flight, to verify that Session Success Rate didn't tell the wrong story in that case.
- Ensure your metric can find both good and bad. Sometimes your tools will have biases which can be found by testing both good and bad examples. If your metric always says that things are good, it probably isn't useful. This can sometimes be accomplished by having some prior knowledge about good cases and bad cases, and ensuring both are included in your set. For example, imagine that your analysis intends to find the impact of exposure to various Bing features on usage of Bing. In this case, the analysis should include both features like Instant Answers, which we believe are a positive experience for our users, and features like no-results pages, which we believe aren't a good experience for our users. If our analysis says that both are really good things, or both are really bad things, then we know our analysis hasn't produced reliable results.
- Communicate the analysis results. Allocate time and put some effort into communicating the results of your analysis to your customers as well as to anyone who may potentially be interested. Don't wait for them to contact you. Contact them first and ask if they are interested.
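Below is a minimal sketch of the distribution check described in the first two items above. It assumes a pandas DataFrame loaded from a hypothetical log extract with a `query` column; the file and column names are placeholders, not an actual team data set.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical extract of the data you are about to analyze.
df = pd.read_csv("sample_log.csv")

# Impressions per distinct query; skewed ("heavy/long-tailed") fields look
# very different from well-behaved ones on a histogram.
counts = df["query"].value_counts()
counts.plot(kind="hist", bins=50, logy=True, title="Impressions per query")
plt.xlabel("impressions per query")
plt.show()

# Quick skew check: what share of all impressions comes from the top 10 queries?
top_share = counts.head(10).sum() / counts.sum()
print(f"Top 10 queries account for {top_share:.1%} of all impressions")
```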
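And a minimal sketch of the head-vs-tail segmentation mentioned in the segmentation item. The 100-query cutoff and the `query`/`clicked` columns are illustrative assumptions, not a prescribed definition of the head segment.

```python
import pandas as pd

# Hypothetical log extract with one row per impression.
df = pd.read_csv("sample_log.csv")

# Crude "head" definition for illustration: the 100 most frequent queries.
counts = df["query"].value_counts()
head_queries = set(counts.head(100).index)
df["segment"] = df["query"].isin(head_queries).map({True: "head", False: "tail"})

# The overall metric often hides large differences between segments.
print(df.groupby("segment")["clicked"].mean())   # CTR per segment
print("overall CTR:", df["clicked"].mean())
```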
Analysis Don'ts
- Don't go too broad in the analysis. When trying to look at everything it's very easy to drown in data.
- Don't use a page view-level quantity to determine a cohort of users without extreme care. This can introduce unexpected biases due to coverage effects, which can influence broad features of the cohort.
- Don't be afraid to turn away from some analysis method which is proving unproductive. Just because you've written up a plan and scheduled time for a project doesn't mean you should be afraid to fail fast if that's the right thing to do.
Analysis Issues
- Precision: add error bars (e.g. 95% confidence intervals). This is especially important when working with sampled data (sampled NIF streams or Magic Mirror). For example, if we compare two estimates (e.g. CTR) that are different but whose 95% confidence intervals overlap, we can't say that they are different (though we can't say that they're equal either); see the first sketch after this list.
- Accuracy: depending on the "ground truth" and data set used for the analysis, there may be a bias that needs to be understood to put the analysis results in perspective. For example, when using a particular flight for the analysis, there is a mechanism for selecting users to be in that flight — i.e. the users in the flight may not be a true random sample from the population your analysis is interested in, in which case a bias is introduced into the analysis. There can also be temporal bias, e.g. due to seasonal effects: browsing patterns may be different during the weeks before Christmas than, say, in February. Day-of-the-week effects can also be an issue (it is best to use multiples of 7 days of analysis data, e.g. 35 days). Also, unless there is a very good reason for it, don't aggregate over very long periods of time, as the signal will likely change over a long period. This presents a trade-off: aggregating over the short term gives less data and larger error, while aggregating over the long term gives more data and better precision but less sensitivity to temporal effects. In general, a four or five week period best balances this trade-off.
- Weighted aggregation: when computing aggregate values, one can choose to give different weights to different data points. Currently Foray (flight analysis) and LiveMetrics compute aggregate metrics in different ways: LiveMetrics gives each impression equal weight, whereas Foray gives each user equal weight (by first computing aggregates per user and then aggregating those values over all users). As a result, the metric values in LiveMetrics represent heavy users more than light users. The results obtained from these two methods can differ both quantitatively and qualitatively; see the second sketch after this list. Depending on the analysis, one or the other (or neither) may be most appropriate.
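A minimal sketch of the error-bar point above, using a normal approximation to the binomial for two CTR estimates. The click and impression counts are made-up illustrative numbers.

```python
import math

def ctr_with_ci(clicks, impressions, z=1.96):
    """CTR point estimate plus a 95% confidence interval (normal approximation)."""
    p = clicks / impressions
    half_width = z * math.sqrt(p * (1 - p) / impressions)
    return p, p - half_width, p + half_width

# Made-up counts for two variants being compared.
ctr_a = ctr_with_ci(clicks=480, impressions=10_000)
ctr_b = ctr_with_ci(clicks=520, impressions=10_000)

print("A: %.4f [%.4f, %.4f]" % ctr_a)
print("B: %.4f [%.4f, %.4f]" % ctr_b)
# If the intervals overlap, the data does not let us call the two CTRs different
# (nor does it prove that they are equal).
```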
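And a minimal sketch contrasting the two aggregation schemes in the last item. The tiny synthetic data set (one heavy user, two light users) and the column names are invented for illustration; the real Foray and LiveMetrics pipelines are of course more involved.

```python
import pandas as pd

# Synthetic data: one heavy user (u1) and two light users (u2, u3).
df = pd.DataFrame({
    "user_id": ["u1"] * 90 + ["u2"] * 5 + ["u3"] * 5,
    "clicked": [1] * 9 + [0] * 81 + [1] * 5 + [0] * 5,   # u1: 10% CTR, u2: 100%, u3: 0%
})

# Impression-weighted (LiveMetrics-style): every impression counts equally.
impression_weighted = df["clicked"].mean()

# User-weighted (Foray-style): aggregate per user first, then average the per-user values.
user_weighted = df.groupby("user_id")["clicked"].mean().mean()

print("impression-weighted CTR:", impression_weighted)   # dominated by the heavy user
print("user-weighted CTR:", user_weighted)
```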