This section describes best practices for analysis, drawn from the experience of analysts on the Data Mining Team. We list things you should do (or at least consider doing) and some pitfalls to avoid. We also provide a list of issues to keep in mind that could affect the quality of your results. Finally, we reference tools and data sets that might help in your analysis.

Analysis Quality

  • Did you spend time thinking about what question you were answering?
  • Did you engage potential users of your analysis to ensure you address the right questions?
  • How much effort did you put into checking the quality of the data?
  • How reproducible is your analysis? If you were to pick up your project 6 months from now could you reuse anything?
  • Did you review your write up to your satisfaction?
  • Did you have others review your analysis artifacts (scripts, code, etc.)?
  • Is your write up something you would be proud to publish?
  • Do you think readers of your analysis summary can understand the key points easily and benefit from them?

Analysis Do's

  • Look at the distribution of your data. Always look at histograms (values and counts) for key fields in your analysis and see what pops out. In most cases, you will find some surprises that need further investigation before you dive into your real analysis.
  • Skewed distributions. Most of the data distributions we see in our work are very skewed ("heavy-" or "long-tailed"). For example, if you are analyzing queries, there may be a handful of queries that dominate (e.g., "google"). The metrics computed for a particular feature or vertical may be heavily skewed because of those few queries.
  • Segmentation. Metrics are more useful when segmented appropriately — not all segments are necessarily useful, but almost always some kind of segmentation can provide more useful insights. E.g. segmenting by dominant/not-dominant query (head vs. tail, "super-head" vs. rest). For more on this see section on Segmentation. See also a good blog: http://www.kaushik.net/avinash/2010/05/web-analytics-segments-three-category-recommendations.html on segmentation from the Web Analytics expert Avinash Kaushik.
  • Deep dive: Always look at some unaggregated data as part of your analysis, especially for results that are surprising (either positively or negatively). One good approach is to use Magic Mirror to get a few sample sessions and see what users are doing in detail. While that will not answer the questions you have, it may raise questions that had not been considered, or reveal that some assumptions you made are false.
  • Make sure the data is correct. Talk to the people who generated the data to verify that every field you are using means what you think it means. Don't trust your intuition; always check. For example, when using the DQ field from one of the databases, it is good to verify which verticals are included in the DQ computation. Not all are included, and the list of the ones that are included differs between the Competitive and Live Metrics databases.
  • Think about baselines. Make sure that the numbers you are comparing are meaningful in their comparison. Often some subset of the population cannot be meaningfully compared to the population as a whole. For example, it isn't terribly meaningful to compare IE entry point Bing users to the global Bing user population in terms of value, because the global Bing user population will be biased by low-value marketing users, have different demographics, etc. It may be that you will simply demonstrate that marketing users are less likely to return than IE and Toolbar users, which is expected, and not what you set out to prove at all.
  • Think ahead about possible shortfalls of your methods. Build specific experiments to test whether these shortcomings are real. The beginning of any analysis project should include an active brainstorm of possible reasons the analysis method could be flawed. The project should specifically build in experiments and data sets to attempt to prove or disprove those possible shortcomings. For example, when developing Session Success Rate, we realized there were concerns that success due to answers would not be properly measured, invalidating the metric for answers-related experiments. To help shed light on this, we made sure to test on data from a known-good answers ranker flight, to verify that Session Success Rate didn't tell the wrong story in that case.
  • Ensure your metric can find both good and bad. Sometimes your tools will have biases which can be found by testing both good and bad examples. If your metric always says that things are good, it probably isn't useful. This can sometimes be accomplished by having some prior knowledge about good cases and bad cases, and ensuring both are included in your set. For example, imagine that your analysis intends to find the impact of exposure to various Bing features on usage of Bing. In this case, the analysis should include both features like Instant Answers, which we believe are a positive experience for our users, and features like no-results pages, which we believe aren't a good experience for our users. If our analysis says that both are really good things, or both are really bad things, then we know our analysis hasn't produced reliable results.
  • Communicate the analysis results. Allocate time and put some effort into communicating the results of your analysis to your customers as well as to anyone who may potentially be interested. Don't wait for them to contact you. Contact them first and ask if they are interested.
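To make the first few points above concrete, here is a minimal sketch of looking at a value histogram and the head-vs-tail split for a query field. The query log here is made-up illustration data; in practice you would read it from your actual data source.

```python
from collections import Counter

# Hypothetical query log; in a real analysis this comes from your data source.
queries = (
    ["google"] * 50                                # a single dominant "head" query
    + ["facebook"] * 20
    + ["weather"] * 10
    + ["rare query %d" % i for i in range(20)]     # long tail of one-off queries
)

counts = Counter(queries)
total = sum(counts.values())

# Value histogram: how often each query occurs.
for query, n in counts.most_common(3):
    print(f"{query:10s} {n:4d} ({n / total:.0%})")

# Head vs. tail segmentation: share of traffic from the top query alone.
head_share = counts.most_common(1)[0][1] / total
print(f"top query accounts for {head_share:.0%} of all impressions")
```

Even this crude histogram immediately shows that a single query accounts for half the impressions, so any metric aggregated over this field will be dominated by that one query unless you segment head from tail.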

Analysis Don'ts

  • Don't go too broad in the analysis. When trying to look at everything it's very easy to drown in data.
  • Don't use a page view-level quantity to determine a cohort of users without extreme care. This can introduce unexpected biases due to coverage effects, which can influence broad features of the cohort.
  • Don't be afraid to turn away from some analysis method which is proving unproductive. Just because you've written up a plan and scheduled time for a project doesn't mean you should be afraid to fail fast if that's the right thing to do.

Analysis Issues

  • Precision: add error bars (e.g. 95% confidence intervals). This is especially important when working with sampled data (sampled NIF streams or Magic Mirror). For example, if we compare two estimates (e.g. CTR) that are different, but the 95% confidence intervals overlap, we can't say that they are different (though we can't say that they're equal either).
  • Accuracy: depending on the "ground truth" and data set used for the analysis, there may be a bias that needs to be understood to put the analysis results in perspective. For example, when using a particular flight for the analysis, there is a mechanism for selecting users to be in that flight — i.e. the users in the flight may not be a true random sample from the population your analysis is interested in, in which case a bias is introduced into the analysis. There can also be temporal bias, e.g. due to seasonal effects: browsing patterns may be different during the weeks before Christmas than, say, in February. Day-of-the-week effects can also be an issue (it is best to use multiples of 7 days for analysis data, e.g. 35 days). Also, unless there is a very good reason for it, don't aggregate over very long periods of time, as the signal will likely change over such periods. This presents a trade-off: aggregating over the short term means less data and larger error, while aggregating over the long term means more data and better precision but less sensitivity to temporal effects. In general, a four- or five-week period best balances this trade-off.
  • Weighted aggregation: When computing aggregate values, one can choose to add different weights to different data points. Currently Foray (flight analysis) and LiveMetrics compute aggregate metrics in different ways: LiveMetrics gives each impression equal weight, whereas Foray gives each user equal weight (by first computing aggregates per user and then aggregating these values over all users). As a result the metrics values in LiveMetrics represent heavy users more than light users. The results obtained from these two methods can differ both quantitatively and qualitatively. Depending on the analysis one or the other (or neither) may be most appropriate.
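The precision point above can be sketched with a standard normal-approximation confidence interval for a proportion such as CTR. The click and impression counts below are invented for illustration.

```python
import math

def ctr_confidence_interval(clicks, impressions, z=1.96):
    """95% confidence interval for a CTR, using the normal
    approximation to the binomial proportion."""
    p = clicks / impressions
    half_width = z * math.sqrt(p * (1 - p) / impressions)
    return p - half_width, p + half_width

# Two hypothetical flights with different observed CTRs.
lo_a, hi_a = ctr_confidence_interval(120, 1000)   # CTR 12.0%
lo_b, hi_b = ctr_confidence_interval(135, 1000)   # CTR 13.5%

# If the intervals overlap, the observed difference alone is not
# enough to conclude the two CTRs actually differ.
overlap = lo_b <= hi_a
print(f"A: [{lo_a:.3f}, {hi_a:.3f}]  B: [{lo_b:.3f}, {hi_b:.3f}]  overlap={overlap}")
```

Here the point estimates differ by 1.5 percentage points, yet the intervals overlap, so by the rule above we can't claim a real difference (nor equality); a proper significance test or more data would be needed.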
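The weighted-aggregation point can likewise be shown with a tiny made-up data set: the same impressions produce different CTRs depending on whether each impression or each user gets equal weight (the two styles attributed above to LiveMetrics and Foray, respectively).

```python
from collections import defaultdict

# Hypothetical per-impression click data: (user_id, clicked).
impressions = [
    ("heavy_user", 1), ("heavy_user", 1), ("heavy_user", 1),
    ("heavy_user", 1), ("heavy_user", 0), ("heavy_user", 0),
    ("light_user", 0), ("light_user", 0),
]

# Impression-weighted CTR: every impression counts equally,
# so heavy users dominate the aggregate.
impression_ctr = sum(c for _, c in impressions) / len(impressions)

# User-weighted CTR: aggregate per user first, then average over users,
# so every user counts equally regardless of activity level.
per_user = defaultdict(list)
for user, clicked in impressions:
    per_user[user].append(clicked)
user_ctr = sum(sum(v) / len(v) for v in per_user.values()) / len(per_user)

print(f"impression-weighted CTR: {impression_ctr:.2f}")   # dominated by the heavy user
print(f"user-weighted CTR:       {user_ctr:.2f}")         # heavy and light user count equally
```

With one heavy user clicking often and one light user never clicking, the impression-weighted CTR is 0.50 while the user-weighted CTR is only 0.33, illustrating how the two methods can tell qualitatively different stories.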
