Recruit Coupon Purchase Winner's Interview: 2nd place, Halla Yang

Recruit Ponpare is Japan's leading joint coupon site, offering huge discounts on everything from hot yoga, to gourmet sushi, to a summer concert bonanza. The Recruit Coupon Purchase Prediction challenge asked the community to predict which coupons a customer would buy in a given period of time using past purchase and browsing behavior.

Halla Yang finished 2nd ahead of 1,191 other data scientists. His experience working with time series data helped him use unsupervised methods effectively in conjunction with gradient boosting. In this blog, Halla walks through his approach and shares key visualizations that helped him better understand and work with the dataset.

The Basics

What was your background prior to entering this challenge?

I've worked almost a decade in finance as a quantitative researcher and portfolio manager. I've also competed in several Kaggle contests, placing first in the Pfizer Volume Prediction Masters competition, sixth in the Merck Molecular Activity Challenge, and ninth in the Diabetic Retinopathy Detection competition.

Halla Yang's profile on Kaggle

Do you have any prior experience or domain knowledge that helped you succeed in this competition?

Predicting prices for thousands of stocks and predicting purchases by thousands of Japanese internet users are loosely similar problems. You can forecast stock returns by looking at time series data such as past returns and cross-sectional data such as industry averages. You can forecast coupon purchases by looking at time series features based on past purchases and cross-sectional features based on peer group averages.

Let's Get Technical

What preprocessing and supervised learning methods did you use?

For each (user, coupon) pair, I calculated the probability that the user would purchase that coupon during the test period using a gradient boosting classifier. I then sorted each user's coupons by probability and included the ten highest-probability coupons in my submission.
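As a rough illustration, the ranking step could look like the sketch below. It assumes a pandas DataFrame of predicted probabilities with one row per (user, coupon) pair; the column names and the space-separated submission format follow the competition files, while the data and variable names are placeholders rather than the author's code.

```python
import pandas as pd

# Hypothetical frame of classifier outputs: one row per (user, coupon) pair.
preds = pd.DataFrame({
    "USER_ID_hash":   ["u1", "u1", "u1", "u2", "u2"],
    "COUPON_ID_hash": ["c1", "c2", "c3", "c1", "c3"],
    "prob":           [0.02, 0.31, 0.05, 0.10, 0.01],
})

# Keep each user's ten highest-probability coupons, highest first.
top10 = (preds.sort_values(["USER_ID_hash", "prob"], ascending=[True, False])
              .groupby("USER_ID_hash", sort=False)
              .head(10))

# Join the coupon IDs into the space-separated submission format.
submission = (top10.groupby("USER_ID_hash")["COUPON_ID_hash"]
                   .agg(" ".join)
                   .rename("PURCHASED_COUPONS")
                   .reset_index())
submission.to_csv("submission.csv", index=False)
```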

To train my classifier, I constructed training data for 24 "train periods" that simulated the test period. Train period 1 is the week from 2012-01-08 through 2012-01-14, and includes all coupons with a DISPFROM date - the date on which they're supposed to be first displayed - in that week. Train period 2 is the week from 2012-01-15 through 2012-01-21, and includes all coupons with a DISPFROM date in that week. Train period 24 is the week from 2012-06-17 through 2012-06-23, and includes all coupons with a DISPFROM date in that week.
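A minimal sketch of how those weekly windows could be generated is below, assuming `coupon_list_train.csv` has been loaded and its `DISPFROM` column can be parsed as dates; the helper function is hypothetical.

```python
import pandas as pd

# 24 weekly train periods; period 1 starts 2012-01-08 and each spans 7 days,
# so period 24 starts 2012-06-17, matching the description above.
period_starts = pd.date_range("2012-01-08", periods=24, freq="7D")

def coupons_in_period(coupon_list: pd.DataFrame, start: pd.Timestamp) -> pd.DataFrame:
    """Coupons whose DISPFROM falls within [start, start + 7 days)."""
    end = start + pd.Timedelta(days=7)
    dispfrom = pd.to_datetime(coupon_list["DISPFROM"])
    return coupon_list[(dispfrom >= start) & (dispfrom < end)]

# Example usage:
# coupon_list = pd.read_csv("coupon_list_train.csv")
# week1_coupons = coupons_in_period(coupon_list, period_starts[0])
```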

For each of these training periods, I built a set of features for each (user, relevant coupon) pair. This set of features includes user-specific data, e.g. gender, days on site, and age; coupon-specific data, e.g. catalog price, genre, and price rate; as well as user-coupon interaction data, e.g. how often the user has viewed coupons of the same genre. The target for each observation is set to 1 if the user purchased that coupon during the training week, and 0 otherwise.
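The pair construction and labeling for a single train period might be sketched as follows. The column names loosely follow the competition files (user_list.csv, coupon_list_train.csv, coupon_detail_train.csv), date columns are assumed to be parsed as datetimes, and the particular features shown are placeholders rather than the author's exact feature set.

```python
import pandas as pd

def build_period_rows(users, coupons, purchases, start, end):
    """One row per (user, coupon displayed this period) with a 0/1 target.
    `users`, `coupons`, `purchases` are assumed to be DataFrames loaded from
    user_list.csv, coupon_list_train.csv, and coupon_detail_train.csv."""
    # Cross join every user with every coupon relevant to this period.
    pairs = users[["USER_ID_hash", "SEX_ID", "AGE"]].merge(
        coupons[["COUPON_ID_hash", "GENRE_NAME", "CATALOG_PRICE", "PRICE_RATE"]],
        how="cross")

    # Target = 1 if the user bought that coupon during the training week.
    bought = purchases.loc[
        (purchases["I_DATE"] >= start) & (purchases["I_DATE"] < end),
        ["USER_ID_hash", "COUPON_ID_hash"]].drop_duplicates()
    bought["target"] = 1

    pairs = pairs.merge(bought, on=["USER_ID_hash", "COUPON_ID_hash"], how="left")
    pairs["target"] = pairs["target"].fillna(0).astype(int)
    return pairs
```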

To calibrate the parameters of my model, I first trained a model on the first twenty-three weeks of data, and estimated my log loss and confusion matrix on the twenty-fourth week. I then trained a model on the full twenty-four weeks of data to generate my competition submission.

The only supervised learning method I used was gradient boosting, as implemented in the excellent xgboost package. I cycled through other algorithms at the start of my analysis to get a feel for their relative performance - logistic regressions, random forests, SVMs, as well as deep neural networks - but found that gradient boosting was the single best classifier for my approach.
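A minimal xgboost sketch of the two-stage fit described above is shown here, run on synthetic data standing in for the real feature matrices; the parameter values are illustrative rather than the author's tuned settings.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
# Synthetic stand-ins: weeks 1-23 for fitting, week 24 held out for validation.
X_hist, y_hist = rng.normal(size=(5000, 20)), rng.integers(0, 2, size=5000)
X_week24, y_week24 = rng.normal(size=(1000, 20)), rng.integers(0, 2, size=1000)

params = {"objective": "binary:logistic", "eval_metric": "logloss",
          "eta": 0.05, "max_depth": 6}

# Stage 1: calibrate on weeks 1-23, monitoring log loss on the held-out week 24.
dtrain = xgb.DMatrix(X_hist, label=y_hist)
dvalid = xgb.DMatrix(X_week24, label=y_week24)
model = xgb.train(params, dtrain, num_boost_round=300,
                  evals=[(dvalid, "week24")], early_stopping_rounds=25)

# Stage 2: refit on all 24 weeks with the chosen number of rounds, then score
# the test-period (user, coupon) pairs with predict().
dall = xgb.DMatrix(np.vstack([X_hist, X_week24]),
                   label=np.concatenate([y_hist, y_week24]))
final_model = xgb.train(params, dall, num_boost_round=model.best_iteration + 1)
```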

What was your most important insight into the data?

First, many test set and training set coupons were viewed prior to their DISPFROM, the date on which they're supposed to be first displayed, and so one could use direct views as a forecasting variable. The violin plot below shows the distribution of first view times relative to DISPFROM. A negative x-value indicates the coupon was viewed prior to its DISPFROM. Over a quarter of coupons are first viewed more than twelve hours before their DISPFROM, and five percent of coupons are first viewed more than ninety hours before their DISPFROM.



Simply counting the number of times a user has viewed a test set coupon is tremendously helpful in forecasting test set purchases. As shown in the left panel of the figure below, users have a 2.5% probability of buying a coupon they've viewed exactly once prior to its DISPFROM, but that probability rises to 32% if they've viewed the coupon four or more times.

Second, users tend to buy the same coupons over and over. As shown in the middle panel of the above figure, a user who has purchased a coupon with a given prefecture, genre, and catalog price four or more times has a 38% chance of buying a matched coupon again in the next week if it is offered for sale.

Third, peer group averages can help forecast the behavior of users with little or no history. The right panel of the above figure shows that a user's probability of buying a coupon increases from less than 0.1% to above 0.6% if more than ten percent of age, sex, and geography-matched peers have bought a coupon with the same characteristics.
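The sketch below puts rough pandas versions of the three interaction features just described side by side. It assumes the visit log, purchase log, user list, and coupon list have been loaded with date columns parsed; the column names follow the competition files, and the exact groupings are illustrative rather than the author's.

```python
import pandas as pd

def interaction_features(visits, purchases, users, coupons):
    """Rough versions of the three interaction features described above.
    Assumed columns (following the competition files):
      visits    - USER_ID_hash, VIEW_COUPON_ID_hash, I_DATE   (coupon_visit_train.csv)
      purchases - USER_ID_hash, ken_name, GENRE_NAME, CATALOG_PRICE
                  (purchase log already joined with coupon attributes)
      users     - USER_ID_hash, SEX_ID, AGE, PREF_NAME        (user_list.csv)
      coupons   - COUPON_ID_hash, DISPFROM"""
    # 1) Pre-DISPFROM view counts per (user, coupon).
    v = visits.merge(coupons[["COUPON_ID_hash", "DISPFROM"]],
                     left_on="VIEW_COUPON_ID_hash", right_on="COUPON_ID_hash")
    early_views = (v[v["I_DATE"] < v["DISPFROM"]]
                   .groupby(["USER_ID_hash", "COUPON_ID_hash"])
                   .size().rename("n_early_views").reset_index())

    # 2) Repeat purchases of the same prefecture / genre / catalog price.
    repeat_buys = (purchases
                   .groupby(["USER_ID_hash", "ken_name", "GENRE_NAME", "CATALOG_PRICE"])
                   .size().rename("n_past_same_type").reset_index())

    # 3) Peer-group rate: share of age/sex/prefecture-matched peers who bought
    #    a coupon of the same genre and prefecture.
    u = users.assign(age_band=(users["AGE"] // 10) * 10)
    p = purchases.merge(u[["USER_ID_hash", "age_band", "SEX_ID", "PREF_NAME"]],
                        on="USER_ID_hash")
    peer_cols = ["age_band", "SEX_ID", "PREF_NAME"]
    buyers = (p.groupby(peer_cols + ["GENRE_NAME", "ken_name"])["USER_ID_hash"]
               .nunique().rename("n_buyers").reset_index())
    peers = (u.groupby(peer_cols)["USER_ID_hash"]
              .nunique().rename("n_peers").reset_index())
    peer_rate = buyers.merge(peers, on=peer_cols)
    peer_rate["peer_buy_rate"] = peer_rate["n_buyers"] / peer_rate["n_peers"]

    return early_views, repeat_buys, peer_rate
```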

Fourth, it's important to consider the geographic coverage of each coupon. To be specific, a coupon is relevant for the multiple prefectures listed in coupon_area_train.csv, not just the single prefecture listed for that coupon in coupon_list_train.csv. In the kernel density plots below, I show the purchase intensity for users based in four prefectures: Tokyo, Kanagawa, Osaka, and Aichi, using the geographic data in coupon_list_train.csv. The purchases for Osaka and Aichi users appear strongly bimodal, with an unusually large number of purchases occurring in the Tokyo region.

On the other hand, if we look at all the prefectures that map to a given coupon, we find that Osaka users purchased Tokyo coupons not because they planned to travel to Tokyo, but because these coupons were also local to Osaka. If we plot the geographic intensity of "nearest-to-user" prefecture rather than a coupon's primary listing prefecture, we see much more localized purchase behavior.
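A simplified stand-in for that geographic handling is sketched below: it expands each coupon to all of its listed prefectures via coupon_area_train.csv and flags whether a coupon is offered in the user's home prefecture. The author's actual "nearest-to-user" prefecture feature would additionally use distances between prefectures; the flag here is only an assumption-level illustration.

```python
import pandas as pd

# File and column names follow the competition data; the local-coupon flag is a
# simplified stand-in for the author's nearest-to-user prefecture feature.
areas = pd.read_csv("coupon_area_train.csv")   # COUPON_ID_hash, PREF_NAME, ...
users = pd.read_csv("user_list.csv")           # USER_ID_hash, PREF_NAME, ...

# Every prefecture a coupon is listed for, not just its primary ken_name.
coupon_prefs = areas.groupby("COUPON_ID_hash")["PREF_NAME"].apply(set).to_dict()

def coupon_is_local(user_pref: str, coupon_id: str) -> bool:
    """True if the coupon is listed for the user's home prefecture."""
    return user_pref in coupon_prefs.get(coupon_id, set())

# Example: flag, for one user, which candidate coupons are local to them.
# user_pref = users.loc[users["USER_ID_hash"] == some_user, "PREF_NAME"].iloc[0]
# local_flags = {c: coupon_is_local(user_pref, c) for c in candidate_coupon_ids}
```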

Words of Wisdom

Do you have any advice for those just getting started in data science?

Focus on understanding the problem. Without understanding the problem, it's impossible to develop a solution.

Start with simple approaches and models. A fast development cycle is key to testing out ideas and learning what works. Don't start building computationally expensive ensembles until you have iterated through most of your best ideas.

Bio

Halla Yang has worked as a quantitative researcher, portfolio manager, and trader at Goldman Sachs Asset Management, Jump Trading, and Arrowstreet Capital. He holds a Ph.D. in Business Economics from Harvard and a B.A. in Physics, summa cum laude, also from Harvard. He is about to start a new position as a data scientist at a management consulting firm.
