This section describes some best practices for analysis, drawn from the experience of analysts on the Data Mining Team. We list a few things you should do (or at least consider doing), some pitfalls to avoid, and issues to keep in mind that could affect the quality of your results. Finally, we reference tools and data sets that might help in your analysis.

Analysis Quality

  • Did you spend time thinking about what question you were answering?
  • Did you engage potential users of your analysis to ensure you address the right questions?
  • How much effort did you put into checking the quality of the data?
  • How reproducible is your analysis? If you were to pick up your project 6 months from now, could you reuse anything?
  • Did you review your write up to your satisfaction?
  • Did you have others review your analysis artifacts (scripts, code, etc.)?
  • Is your write up something you would be proud to publish?
  • Do you think readers of your analysis summary can understand the key points easily and benefit from them?

Analysis Do's

  • Look at the distribution of your data. Always look at histograms (of values and of counts) for key fields in your analysis and see what pops out. In most cases, you will find some surprises that need further investigation before you dive into your real analysis (a short sketch of this check follows this list).
  • Skewed Distributions. Most of the data distributions we see in our work are very skewed ("heavy" or "long tailed"). For example, if you are analyzing queries, a handful of queries may dominate (e.g., "google"). The metrics computed for a particular feature or vertical may be heavily skewed because of those few queries.
  • Segmentation. Metrics are more useful when segmented appropriately. Not all segments are necessarily useful, but some kind of segmentation almost always provides additional insight, e.g. segmenting by dominant vs. non-dominant query (head vs. tail, "super-head" vs. rest); the sketch after this list includes a simple head/tail split. For more on this, see the section on Segmentation. See also a good blog post on segmentation from the Web Analytics expert Avinash Kaushik: http://www.kaushik.net/avinash/2010/05/web-analytics-segments-three-category-recommendations.html
  • Deep dive: Always look at some unaggregated data as part of your analysis, especially for results that are surprising (either positively or negatively). One good approach is to use Magic Mirror to pull a few sample sessions and see what users are doing in detail. That alone will not answer your questions, but it may raise questions that had not been considered, or reveal that some of the assumptions you made are false.
  • Make sure the data is correct. Talk to the people who generated the data to verify that every field you are using means what you think it means. Don't trust your intuition; always check. For example, when using the DQ field from one of the databases, it is good to verify which verticals are included in the DQ computation. Not all verticals are included, and the list of those that are differs between the Competitive and Live Metrics databases.
  • Think about baselines. Make sure the numbers you are comparing can be compared meaningfully. Often some subset of the population cannot be meaningfully compared to the population as a whole. For example, it isn't terribly meaningful to compare IE entry point Bing users to the global Bing user population in terms of value, because the global population will be biased by low-value marketing users, have different demographics, etc. You may simply end up demonstrating that marketing users are less likely to return than IE and Toolbar users, which is expected, and not what you set out to prove at all.
  • Think ahead about possible shortfalls of your methods. Build specific experiments to test whether these shortcomings are real. The beginning of any analysis project should include an active brainstorm of possible reasons the analysis method might be flawed, and the project should specifically build in experiments and data sets to prove or disprove those possible shortcomings. For example, when developing Session Success Rate, we realized there were concerns that success due to answers would not be properly measured, invalidating the metric for answers-related experiments. To shed light on this, we made sure to test on data from a known-good answers ranker flight and verify that Session Success Rate didn't tell the wrong story in that case.
  • Ensure your metric can find both good and bad. Sometimes your tools will have biases, which can be found by testing both good and bad examples. If your metric always says that things are good, it probably isn't useful. This can sometimes be accomplished by having some prior knowledge about good cases and bad cases, and ensuring both are included in your set. For example, imagine your analysis intends to find the impact of exposure to various Bing features on usage of Bing. The analysis should then include both features like Instant Answers, which we believe are a positive experience for our users, and features like no-results pages, which we believe are not. If the analysis says that both are really good things, or both are really bad things, then we know it hasn't produced reliable results.
  • Communicate the analysis results. Allocate time and put some effort into communicating the results of your analysis to your customers as well as to anyone else who may be interested. Don't wait for them to contact you; contact them first and ask if they are interested.
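The sketch below illustrates the distribution check and a simple head/tail query segmentation on query-log-like data. It is a minimal example under stated assumptions, not a prescribed workflow: the file name and the "query" and "clicks" columns are hypothetical placeholders, and pandas/matplotlib are assumed to be available.

    # Minimal sketch: look at value and count distributions, then split head vs. tail.
    # File name and column names ("query", "clicks") are hypothetical placeholders.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("query_log_sample.csv")

    # Value distribution of a key numeric field: summary stats plus a histogram.
    print(df["clicks"].describe())
    df["clicks"].hist(bins=50)
    plt.show()  # eyeball for outliers, spikes, and heavy tails

    # Count distribution: how concentrated is the traffic on a few queries?
    query_counts = df["query"].value_counts()
    top_share = query_counts.head(20).sum() / query_counts.sum()
    print(f"Top 20 queries account for {top_share:.1%} of all rows")

    # Crude head/tail segmentation so metrics can be reported per segment.
    head_queries = set(query_counts.head(20).index)
    df["segment"] = df["query"].map(lambda q: "head" if q in head_queries else "tail")
    print(df.groupby("segment")["clicks"].mean())

If the head segment dominates the aggregate numbers, report the segmented metrics alongside the overall ones rather than letting a handful of queries drive the conclusion.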

Analysis Don'ts

  • Don't go too broad in the analysis. When trying to look at everything it's very easy to drown in data.
  • Don't use a page view-level quantity to determine a cohort of users without extreme care. This can introduce unexpected biases due to coverage effects, which can influence broad features of the cohort.
  • Don't be afraid to turn away from some analysis method which is proving unproductive. Just because you've written up a plan and scheduled time for a project doesn't mean you should be afraid to fail fast if that's the right thing to do.

Analysis Issues

  • Precision: add error bars (e.g. 95% confidence intervals). This is especially important when working with sampled data (sampled NIF streams or Magic Mirror). For example, if we compare two estimates (e.g. CTR) that are different but whose 95% confidence intervals overlap, we can't say that they are different (though we can't say that they're equal either); a small sketch of this check follows this list.
  • Accuracy: depending on the "ground truth" and data set used for the analysis, there may be a bias that needs to be understood to put the analysis results in perspective. For example, when using a particular flight for the analysis, there is a mechanism for selecting users into that flight; the users in the flight may not be a true random sample from the population your analysis is interested in, in which case a bias is introduced into the analysis. There can also be temporal bias, e.g. due to seasonal effects: browsing patterns may be different during the weeks before Christmas than, say, in February. Day-of-week effects can also be an issue, so it is best to use a multiple of 7 days for the analysis period (e.g. 35 days). Also, unless there is a very good reason for it, don't aggregate over very long periods of time, as the signal will likely change. This presents a trade-off: aggregating over a short period gives less data and larger error, while aggregating over a long period gives more data and better precision but less sensitivity to temporal effects. In general, a four- or five-week period balances this trade-off well.
  • Weighted aggregation: when computing aggregate values, one can assign different weights to different data points. Currently Foray (flight analysis) and LiveMetrics compute aggregate metrics in different ways: LiveMetrics gives each impression equal weight, whereas Foray gives each user equal weight (by first computing aggregates per user and then aggregating those values over all users). As a result, the metric values in LiveMetrics represent heavy users more than light users. The results obtained from these two methods can differ both quantitatively and qualitatively; depending on the analysis, one or the other (or neither) may be most appropriate. A sketch comparing the two weightings follows this list.
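The following sketch shows one simple way to attach 95% confidence intervals to two CTR estimates before claiming a difference. It uses the normal approximation to a binomial proportion; the click and impression counts are placeholder numbers for illustration only, not real data.

    # Sketch: 95% confidence intervals on two CTR estimates (normal approximation).
    # The counts below are placeholders, not real data.
    import math

    def ctr_with_ci(clicks, impressions, z=1.96):
        """Return (ctr, lower, upper) for a binomial proportion."""
        p = clicks / impressions
        half_width = z * math.sqrt(p * (1 - p) / impressions)
        return p, p - half_width, p + half_width

    ctr_a = ctr_with_ci(clicks=1200, impressions=40000)
    ctr_b = ctr_with_ci(clicks=1290, impressions=41000)
    print("A: %.4f [%.4f, %.4f]" % ctr_a)
    print("B: %.4f [%.4f, %.4f]" % ctr_b)

    # If the intervals overlap, we cannot claim a difference (nor equality);
    # for a sharper answer, compute an interval on the difference or run a
    # two-proportion z-test.
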
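The second sketch contrasts per-impression and per-user weighting on the same data, in the spirit of the LiveMetrics vs. Foray difference described above. The input file and the "user_id" and "clicked" columns are hypothetical placeholders, and pandas is assumed to be available.

    # Sketch: per-impression vs. per-user aggregation of a click metric.
    # File name and column names ("user_id", "clicked") are hypothetical placeholders.
    import pandas as pd

    df = pd.read_csv("impressions_sample.csv")

    # Per-impression weighting: every impression counts equally,
    # so heavy users contribute more to the aggregate.
    per_impression_ctr = df["clicked"].mean()

    # Per-user weighting: aggregate per user first, then average across users,
    # so heavy and light users count equally.
    per_user_ctr = df.groupby("user_id")["clicked"].mean().mean()

    print(f"per-impression CTR: {per_impression_ctr:.4f}")
    print(f"per-user CTR:       {per_user_ctr:.4f}")

A large gap between the two numbers usually means heavy users behave differently from light users; pick the weighting that matches the question your analysis is actually asking.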
