[Repost] Recommendations with Thompson Sampling (Part II)
[Original article: http://engineering.richrelevance.com/recommendations-thompson-sampling/]
[Repost source: http://www.cnblogs.com/breezedeus/p/3775339.html; please credit the source when reposting.]
Recommendations with Thompson Sampling
06/05/2014 • Topics: Bayesian, Big data, Data Science
This is the second in a series of three blog posts on bandits for recommendation systems.
If you read the last blog post, you should now have a good idea of the challenges in building a good algorithm for dishing out recommendations in the bandit setting. The most important challenge is to balance exploitation with exploration. That is, we have two somewhat conflicting goals: (a) quickly find the best arm to pull and (b) pull the best arm as often as possible. What I dubbed the naive algorithm in the preceding blog post fulfilled these two goals in a direct way: explore for a while, and then exploit forever. It was an OK approach, but we found that more sophisticated approaches, like the UCB family of bandit algorithms, had significantly better performance and no parameters to tune.
In this post, we'll introduce another technique: Thompson sampling (also known as probability matching). This has been well covered elsewhere, but mostly for the binary reward case (zeros and ones). I'll also go over Thompson sampling in the log-normal reward case, and offer some approximations that work for any reward distribution.
Before I define Thompson sampling, let's build up some intuition. If we had an infinite number of pulls, we would know exactly what the expected rewards are, and there would be no reason to ever explore. With a finite number of pulls, we have to explore, and the reason is that we are uncertain which arm is best. The right machinery for quantifying uncertainty is the probability distribution. The UCB algorithms implicitly use a probability distribution, but only one number from it: the upper confidence bound. In Bayesian thinking, we want to use the entire probability distribution. In the preceding post I defined \(p_a(r)\), the probability distribution from which rewards are drawn. That's what controls the bandit. It would be great if we could estimate this entire distribution, but we don't need to. Why? Because all we care about is its mean \( \mu_a \). What we will do is encode our uncertainty about \( \mu_a \) in the probability distribution \( p(\mu_a | \text{data}_a ) \), and then use that distribution to decide when to explore and when to exploit.
You may be confused by now because there are two related probability distributions floating around, so let's review:
- \( p_a(r) \) is the unknown distribution from which arm a's rewards are drawn; all we care about is its mean \( \mu_a \), which we can only estimate.
- \( p(\mu_a | \text{data}_a) \) encodes our uncertainty about where that mean lies, given the data observed so far; this is the distribution Thompson sampling works with.
With Thompson sampling you keep around a probability distribution \( p(\mu_a | \text{data}_a ) \) that encodes your belief about where the expected reward \( \mu_a \) is for arm a. For the simple coin-flip case, we can use the convenient Beta-Bernoulli model, and the distribution at round t, after seeing \(S_{a,t}\) successes and \(F_{a,t}\) failures for arm a, is simply:
\( p(\mu_a|\text{data}_a) = \text{Beta}(S_{a,t} + 1,F_{a,t} + 1) \),
where the added 1s correspond to a convenient uniform prior.
So now we have a distribution that encodes our uncertainty of where the true expected reward \( \mu_a \) is. What's the actual algorithm? Here it is, in all its simple glory:
- Draw a random sample from \( p(\mu_a | \text{data}_a ) \) for each arm a.
- Pull the arm which has the largest drawn sample.
That's it! It turns out that this approach is a very natural way to balance exploration and exploitation. Running the same simulation from last time, with Thompson sampling added to the algorithms from the preceding blog post, shows it performing competitively with the UCB family.
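To make the two-step recipe concrete, here is a minimal sketch of Beta-Bernoulli Thompson sampling against a simulated bandit. The click-through rates, horizon, and variable names are made up for illustration; this is not the simulation code from the post.

```python
import numpy as np

rng = np.random.default_rng(0)
true_ctrs = [0.04, 0.05, 0.02]        # unknown click-through rates, one per arm
n_arms = len(true_ctrs)
successes = np.zeros(n_arms)          # S_{a,t}
failures = np.zeros(n_arms)           # F_{a,t}

for t in range(10_000):
    # Draw one sample from Beta(S + 1, F + 1) for each arm...
    samples = rng.beta(successes + 1, failures + 1)
    # ...and pull the arm with the largest sampled value.
    arm = int(np.argmax(samples))
    reward = float(rng.random() < true_ctrs[arm])   # simulated click / no click
    successes[arm] += reward
    failures[arm] += 1.0 - reward

pulls = successes + failures
print(successes / np.maximum(pulls, 1))   # empirical CTR per arm
```

Note that exploration happens automatically: arms whose posteriors are still wide occasionally produce the largest sample, so they keep getting pulled until the data rules them out.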
Normal Approximation
The Bernoulli case is well known and well understood. But what happens when you want to maximize, say, revenue instead of click-through rate? To find out, I coded up a log-normal bandit where each arm pays out strictly positive rewards drawn from a log-normal distribution (the code is messy, so I won't be posting it). For Thompson sampling, I used a full posterior with priors over the log-normal parameters μ and σ as described here (note that these are not the mean and standard deviation of the log-normal itself), and for UCB I used the modified Cox method of computing confidence bounds for the log-normal distribution from here. The normal approximation to exact Thompson sampling is (using Central Limit Theorem arguments):
\( p(\mu_a|\text{data}_a) = \mathcal{N}\left(\hat{\mu}_{a,t}, \frac{\hat{\sigma}^2_{a,t}}{N_{a,t}} \right) \),
where \(\hat{\mu}_{a,t}\) and \(\hat{\sigma}^2_{a,t}\) are the sample mean and sample variance, respectively, of the rewards observed from arm a at round t, and \(N_{a,t}\) is the number of times arm a has been pulled up to round t.
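As a sketch of how this approximation gets used, the snippet below draws one sample per arm from \( \mathcal{N}(\hat{\mu}_{a,t}, \hat{\sigma}^2_{a,t}/N_{a,t}) \) and picks the argmax. The per-arm statistics are made-up numbers, not values from the simulation above.

```python
import numpy as np

rng = np.random.default_rng(1)

def choose_arm(means, variances, counts):
    """Sample mu_a ~ N(mean_a, var_a / N_a) for each arm and pick the argmax."""
    samples = rng.normal(means, np.sqrt(variances / counts))
    return int(np.argmax(samples))

# Made-up per-arm statistics of the (log) rewards seen so far.
means = np.array([1.2, 1.5, 0.9])       # sample means, one per arm
variances = np.array([0.4, 0.6, 0.3])   # sample variances
counts = np.array([20, 12, 30])         # number of pulls N_{a,t}

print(choose_arm(means, variances, counts))
```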
The UCB normal approximation is identical to that in the previous simulation. For all algorithms, I used the log of the observed rewards to compute the sufficient statistics. The results are summarized below.
Observations:
- Epsilon-Greedy is still the worst performer.
- The normal approximations are slightly worse than their hand-designed counterparts - green is worse than orange and gray is worse than purple.
- UCB is doing better than Thompson sampling over this horizon, but Thompson sampling may be poised to do better in the long run.
The Trouble with UCB
In the above experiments UCB beat out Thompson sampling. It sounds like a great algorithm and performs well in simulations, but it has a key weakness when you actually get down to productionizing your bandit algorithm. Let's say that you aren't Google and you have limited computational resources, which means that you can only update your observed data in batches, say every 2 hours. In this delayed-batch case, UCB will pull the same arm every time during those 2 hours, because it is deterministic in the absence of new data. Thompson sampling, by contrast, relies on random samples, which will be different every round even if the distributions aren't updated for a while; UCB needs the distributions to be updated every single round to work properly.
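A toy illustration of that point: with the sufficient statistics frozen for a batch window, a UCB1-style score picks the same arm every time, while Thompson sampling still spreads its pulls across arms. The counts below are invented for the example, and UCB1 is used as a stand-in for whichever UCB variant you actually run.

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up frozen statistics for three arms at the start of a 2-hour batch window.
successes = np.array([40.0, 40.0, 10.0])
failures = np.array([960.0, 970.0, 400.0])
pulls = successes + failures
means = successes / pulls

# A UCB1-style score computed from frozen statistics is a fixed number per arm,
# so the argmax never changes until the next batch update.
ucb_scores = means + np.sqrt(2.0 * np.log(pulls.sum()) / pulls)
ucb_choices = [int(np.argmax(ucb_scores)) for _ in range(10)]

# Thompson sampling redraws from the (frozen) posteriors each round,
# so different arms still get pulled within the window.
ts_choices = [int(np.argmax(rng.beta(successes + 1, failures + 1)))
              for _ in range(10)]

print(ucb_choices)   # the same arm, ten times
print(ts_choices)    # typically a mix of arms
```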
Given that the simulated performance differences between Thompson sampling and UCB are small, I heartily recommend Thompson sampling over UCB; it will work in a larger variety of practical cases.
Avoiding Trouble with Thompson Sampling
RichRelevance sees gobs of data. This is usually great, but for Thompson sampling it hides a subtle pitfall. To understand this point, first note that the variance of a Beta distribution with parameters α and β is:
\( \frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)} \).
For our recommender system, α is the number of successes (clicks) and β is the number of failures (non-clicks). As the amount of total data α+β goes up, the variance shrinks, and quickly. After a while, the posteriors will be so narrow that exploration will effectively cease. This may sound good - after all, didn't we learn which lever is the best one to pull? In practice, we're dealing with a moving target, so it is a good idea to put an upper bound on α+β so that exploration can continue indefinitely. For details, see Section 7.2.3 here.
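One simple way to enforce such a bound is to rescale the Beta parameters whenever their sum exceeds a cap, so the posterior never gets narrower than the cap allows. The cap value and the rescaling scheme below are illustrative assumptions, not the exact recipe from the reference.

```python
import numpy as np

def capped_update(alpha, beta, reward, cap=1000.0):
    """Add one Bernoulli observation, then rescale so alpha + beta <= cap."""
    alpha += reward
    beta += 1.0 - reward
    total = alpha + beta
    if total > cap:
        alpha *= cap / total
        beta *= cap / total
    return alpha, beta

rng = np.random.default_rng(3)
alpha, beta = 1.0, 1.0
for _ in range(5000):
    reward = float(rng.random() < 0.05)   # simulated click stream for one arm
    alpha, beta = capped_update(alpha, beta, reward)

# alpha + beta is pinned at the cap, so the posterior variance (and hence
# the amount of exploration) never shrinks to zero.
print(alpha, beta, alpha + beta)
```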
Analogously, if you're optimizing for revenue instead of click-through rate and using a Normal approximation, you can compute sample means and sample variances in an incremental fashion, using decay as per the last page here. This will ensure that older samples have less influence than newer ones and allow you to track changing means and variances.
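For concreteness, here is one common way to maintain a decayed incremental mean and variance (a Welford-style update with exponential forgetting). The decay scheme and the class below are illustrative, not necessarily the exact recipe from the linked reference.

```python
class DecayedStats:
    """Exponentially decayed running mean and variance for one arm."""

    def __init__(self, decay=0.999):
        self.decay = decay    # weight applied to older observations
        self.weight = 0.0     # decayed effective sample size (plays the role of N)
        self.mean = 0.0
        self.var = 0.0

    def update(self, x):
        self.weight = self.decay * self.weight + 1.0
        delta = x - self.mean
        self.mean += delta / self.weight
        # Welford-style variance update using the decayed weight.
        self.var += (delta * (x - self.mean) - self.var) / self.weight

stats = DecayedStats(decay=0.99)
for reward in [2.0, 3.0, 2.5, 10.0, 2.2]:   # e.g. log-revenues from one arm
    stats.update(reward)
print(stats.mean, stats.var, stats.weight)
```

These decayed statistics can then be plugged into the normal approximation above in place of the ordinary sample mean, sample variance, and pull count.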
Coming up next: contextual bandits!
About Sergey Feldman:
Sergey Feldman is a data scientist & machine learning cowboy with the RichRelevance Analytics team. He was born in Ukraine, moved with his family to Skokie, Illinois at age 10, and now lives in Seattle. In 2012 he obtained his machine learning PhD from the University of Washington. Sergey loves random forests and thinks the Fourier transform is pure magic.