Bayesian optimisation for smart hyperparameter search
Fitting a single classifier does not take long, but fitting hundreds takes a while. To find the best hyperparameters you need to fit a lot of classifiers. What to do?
This post explores the inner workings of an algorithm you can use to reduce the number of hyperparameter sets you need to try before finding the best set. The algorithm goes under the name of Bayesian optimisation. If you are looking for a production-ready implementation, check out MOE, the Metric Optimization Engine developed by Yelp.
Gaussian process regression is a useful tool in general and is used heavily here. Check out my post on Gaussian processes with george for a short introduction.
This post starts with an example where we know the true form of the scoring function, and then pits random grid search against Bayesian optimisation to find the best hyper-parameter for a real classifier.
As usual first some setup and importing:
%matplotlib inline
import random

import numpy as np
np.random.seed(9)

from scipy.stats import randint as sp_randint
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
sns.set_context("talk")
By George!
Bayesian optimisation uses Gaussian processes to fit a regression model to the previously evaluated points in hyper-parameter space. This model is then used to suggest the next (best) point in hyper-parameter space at which to evaluate the model.
To choose the best point we need to define a criterion; in this case we use "expected improvement". As we only know the score to a certain precision, we do not want to simply choose the point with the best score. Instead we pick the point that promises the largest expected improvement. This lets us incorporate the uncertainty of our estimate of the scoring function into the procedure, and leads to a mixture of exploitation and exploration of the parameter space.
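Concretely (this is just the standard closed form, matching the expected_improvement code later in the post): write μ(x) and σ(x) for the GP's predicted mean and standard deviation at a candidate point x, and f_best for the best score sampled so far. For a score we want to maximise, the expected improvement is

$$Z = \frac{\mu(x) - f_{\mathrm{best}}}{\sigma(x)}, \qquad \mathrm{EI}(x) = \bigl(\mu(x) - f_{\mathrm{best}}\bigr)\,\Phi(Z) + \sigma(x)\,\phi(Z),$$

where Φ and φ are the standard normal CDF and PDF. For a quantity we want to minimise the sign flips, Z = (f_best − μ(x)) / σ(x). Points with a high predicted score or a large uncertainty (or both) get a large expected improvement, which is where the exploitation/exploration trade-off comes from.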
Below we set up a toy scoring function (−x·sin(x)), sample two points from it, and fit our Gaussian process model to it.
import george
from george.kernels import ExpSquaredKernel

score_func = lambda x: -x*np.sin(x)
x = np.arange(0, 10, 0.1)

# Generate some fake, noisy data. These represent
# the points in hyper-parameter space for which
# we already trained our classifier and evaluated its score
xp = 10 * np.sort(np.random.rand(2))
yerr = 0.2 * np.ones_like(xp)
yp = score_func(xp) + yerr * np.random.randn(len(xp))

# Set up a Gaussian process
kernel = ExpSquaredKernel(1)
gp = george.GP(kernel)
gp.compute(xp, yerr)

mu, cov = gp.predict(yp, x)
std = np.sqrt(np.diag(cov))

def basic_plot():
    fig, ax = plt.subplots()
    ax.plot(x, mu, label="GP median")
    ax.fill_between(x, mu-std, mu+std, alpha=0.5)
    ax.plot(x, score_func(x), '--', label="True score function (unknown)")
    # explicit zorder to draw points and errorbars on top of everything
    ax.errorbar(xp, yp, yerr=yerr, fmt='ok', zorder=3, label="samples")
    ax.set_ylim(-9, 6)
    ax.set_ylabel("score")
    ax.set_xlabel('hyper-parameter X')
    ax.legend(loc='best')
    return fig, ax

basic_plot()
The dashed green line represents the true value of the scoring function as a function of our hypothetical hyper-parameter X. The black dots (and their errorbars) represent points at which we evaluated our classifier and calculated the score. The solid blue line is our regression model's prediction of the score function, and the shaded area represents the uncertainty on that predicted (median) value.
Next let's calculate the expected improvement at every value of the hyper-parameter X. We also build a multistart optimisation routine (next_sample) which uses the expected improvement to suggest which point to sample next.
from scipy.optimize import minimize
from scipy import stats

def expected_improvement(points, gp, samples, bigger_better=False):
    # are we trying to maximise a score or minimise an error?
    if bigger_better:
        best_sample = samples[np.argmax(samples)]
        mu, cov = gp.predict(samples, points)
        sigma = np.sqrt(cov.diagonal())
        Z = (mu-best_sample)/sigma
        ei = ((mu-best_sample) * stats.norm.cdf(Z) + sigma*stats.norm.pdf(Z))
        # want to use this as objective function in a minimiser so multiply by -1
        return -ei
    else:
        best_sample = samples[np.argmin(samples)]
        mu, cov = gp.predict(samples, points)
        sigma = np.sqrt(cov.diagonal())
        Z = (best_sample-mu)/sigma
        ei = ((best_sample-mu) * stats.norm.cdf(Z) + sigma*stats.norm.pdf(Z))
        # want to use this as objective function in a minimiser so multiply by -1
        return -ei

def next_sample(gp, samples, bounds=(0,10), bigger_better=False):
    """Find point with largest expected improvement"""
    best_x = None
    best_ei = 0
    # EI is zero at most values -> often get trapped
    # in a local maximum -> multistarting to increase
    # our chances to find the global maximum
    for rand_x in np.random.uniform(bounds[0], bounds[1], size=30):
        res = minimize(expected_improvement, rand_x,
                       bounds=[bounds],
                       method='L-BFGS-B',
                       args=(gp, samples, bigger_better))
        if res.fun < best_ei:
            best_ei = res.fun
            best_x = res.x[0]
    return best_x

fig, ax = basic_plot()
# expected improvement would need its own y axis, so just multiply by ten
ax.plot(x, 10*np.abs(expected_improvement(x, gp, yp)),
        label='expected improvement')
ax.legend(loc='best')
print "The algorithm suggests sampling at X=%.4f"%(next_sample(gp, yp))
The algorithm suggests sampling at X=1.5833
The red line shows the expected improvement. Comparing the solid blue line and shaded area with where the expected improvement is largest, it makes sense that the optimisation suggests X=1.58 as the next point at which to evaluate our scoring function.
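Before moving on, it is worth sketching how this suggestion step turns into a full optimisation loop. The snippet below is a minimal sketch, not part of the original analysis: it reuses score_func, ExpSquaredKernel and next_sample from above, evaluates the noisy toy score at each suggested point, re-fits the GP and repeats. The names ending in _loop, the budget of five extra evaluations and the 0.2 noise level are arbitrary choices for illustration.

# Minimal sketch of iterating the toy example: suggest, evaluate, re-fit, repeat.
xp_loop = list(xp)
yp_loop = list(yp)
yerr_loop = list(yerr)

for _ in range(5):  # arbitrary extra evaluation budget
    gp_loop = george.GP(ExpSquaredKernel(1))
    gp_loop.compute(np.array(xp_loop), np.array(yerr_loop))
    # point with the largest expected improvement given everything seen so far
    x_next = next_sample(gp_loop, np.array(yp_loop), bounds=(0, 10))
    if x_next is None:
        # no point promised any improvement
        break
    # "train the classifier", i.e. evaluate the noisy toy score function
    y_next = score_func(x_next) + 0.2 * np.random.randn()
    xp_loop.append(x_next)
    yp_loop.append(y_next)
    yerr_loop.append(0.2)

# with the default bigger_better=False the samples are treated as an
# error to be minimised, so the best point is the lowest one
print "Best X found: %.3f" % xp_loop[np.argmin(yp_loop)]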
This concludes the toy example part. Let's get moving with something real!
Random Grid Search as Benchmark
To make this more interesting than a complete toy example, let's use a regression problem (Friedman1) and a single DecisionTreeRegressor, even though it is fairly fast to fit lots of models on this dataset. Replace both with your own setup for your actual problem.
To judge how much more quickly we find the best set of hyperparameters we will pit bayesian optimisation against random grid search. Random grid search is already a big improvement over an exhaustive grid search. I have taken the particular regression problem from Gilles Louppe's PhD thesis: Understanding Random Forests: From Theory to Practice.
from sklearn.grid_search import GridSearchCV
from sklearn.grid_search import RandomizedSearchCV
from sklearn.datasets import make_friedman1
from sklearn.tree import DecisionTreeRegressor
from operator import itemgetter

# Load the data
X, y = make_friedman1(n_samples=5000)

clf = DecisionTreeRegressor()
param_dist = {"min_samples_split": sp_randint(1, 101)}

# run randomized search
n_iterations = 8
random_grid = RandomizedSearchCV(clf,
                                 param_distributions=param_dist,
                                 n_iter=n_iterations,
                                 scoring='mean_squared_error')
random_grid = random_grid.fit(X, y)
from scipy.stats import sem

params_ = []
scores_ = []
yerr_ = []
for g in random_grid.grid_scores_:
    params_.append(g.parameters.values()[0])
    scores_.append(g.mean_validation_score)
    yerr_.append(sem(g.cv_validation_scores))

fig, ax = plt.subplots()
ax.errorbar(params_, scores_, yerr=yerr_, fmt='ok', label='samples')
ax.set_ylabel("score")
ax.set_xlabel('min samples split')
ax.legend(loc='best')
With eight evaluations we get a fairly good idea of what the score function looks like for this problem: the best region is potentially at small values of min_samples_split, with the score falling off steeply beyond that. The best hyper-parameter setting found in this run is eight.
You can see that the search explores all values of min_samples_split with equal probability.
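That behaviour comes directly from the parameter distribution we passed to RandomizedSearchCV: sp_randint(1, 101) is a discrete uniform distribution over 1 to 100, so every candidate value is equally likely to be drawn. A quick standalone check (illustrative only, not part of the original post):

from scipy.stats import randint as sp_randint

dist = sp_randint(1, 101)         # discrete uniform on {1, ..., 100}
draws = dist.rvs(size=10000)      # this is what RandomizedSearchCV samples from
print draws.min(), draws.max()    # 1 100 (with very high probability)
print dist.pmf(1), dist.pmf(100)  # 0.01 0.01 -- every value equally likely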
def top_parameters(random_grid_cv):
    top_score = sorted(random_grid_cv.grid_scores_,
                       key=itemgetter(1),
                       reverse=True)[0]
    print "Mean validation score: {0:.3f} +- {1:.3f}".format(
        top_score.mean_validation_score,
        np.std(top_score.cv_validation_scores))
    print random_grid_cv.best_params_

top_parameters(random_grid)
Mean validation score: -4.322 +- 0.127
{'min_samples_split': 8}
The top scoring parameter is around eight. Let's see what we can do with a bayesian approach.
Bayesian optimisation
Do you have your priors ready? Let's get Bayesian! The question is: can we find a value of min_samples_split that is at least as good, or better, in eight or fewer attempts at training a model?
To get things started we evaluate the model at three values of the hyper-parameter. These are used for the first fit of our Gaussian process model. The next point at which to evaluate the model is then the point where the expected improvement is largest.
The two plots below show the state of the Bayesian optimisation after the first three points have been tried, and then after the five further points chosen according to the expected improvement.
from sklearn.cross_validation import cross_val_score

def plot_optimisation(gp, x, params, scores, yerr):
    mu, cov = gp.predict(scores, x)
    std = np.sqrt(np.diag(cov))

    fig, ax = plt.subplots()
    ax.plot(x, mu, label="GP median")
    ax.fill_between(x, mu-std, mu+std, alpha=0.5)

    ax_r = ax.twinx()
    ax_r.grid(False)
    ax_r.plot(x,
              np.abs(expected_improvement(x, gp, scores, bigger_better=True)),
              label='expected improvement',
              c=sns.color_palette()[2])
    ax_r.set_ylabel("expected improvement")

    # explicit zorder to draw points and errorbars on top of everything
    ax.errorbar(params, scores, yerr=yerr,
                fmt='ok', zorder=3, label='samples')
    ax.set_ylabel("score")
    ax.set_xlabel('min samples split')
    ax.legend(loc='best')
    return gp

def bayes_optimise(clf, X, y, parameter, n_iterations, bounds):
    x = range(bounds[0], bounds[1]+1)
    params = []
    scores = []
    yerr = []
    for param in np.linspace(bounds[0], bounds[1], 3, dtype=int):
        clf.set_params(**{parameter: param})
        cv_scores = cross_val_score(clf, X, y, scoring='mean_squared_error')
        params.append(param)
        scores.append(np.mean(cv_scores))
        yerr.append(sem(cv_scores))

    # Some cheating here, tuning the GP hyperparameters is something
    # we skip in this post
    kernel = ExpSquaredKernel(1000)
    gp = george.GP(kernel, mean=np.mean(scores))
    gp.compute(params, yerr)
    plot_optimisation(gp, x, params, scores, yerr)

    for n in range(n_iterations-3):
        gp.compute(params, yerr)
        param = next_sample(gp, scores, bounds=bounds, bigger_better=True)
        clf.set_params(**{parameter: param})
        cv_scores = cross_val_score(clf, X, y, scoring='mean_squared_error')
        params.append(param)
        scores.append(np.mean(cv_scores))
        yerr.append(sem(cv_scores))

    plot_optimisation(gp, x, params, scores, yerr)
    return params, scores, yerr, clf

params, scores, yerr, clf = bayes_optimise(DecisionTreeRegressor(),
                                           X, y,
                                           'min_samples_split',
                                           8, (1, 100))
print "Best parameter:"
print params[np.argmax(scores)], 'scores', scores[np.argmax(scores)]
Best parameter:
17.721702255 scores -4.29572348033
You can see that the points are all sampled close to the maximum. Whereas the random grid search samples points far away from the peak (above 40 and beyond), the Bayesian optimisation concentrates on the region close to the maximum (around 20). This vastly improves the efficiency of finding the true maximum. We could even have stopped before evaluating all five of the additional points, as they are all pretty close to each other.
The real deal --- MOE
While it is quite straightforward to build yourself a small Bayesian optimisation procedure, I would recommend you check out MOE. This is a production-quality setup for doing global, black-box optimisation, developed by the good guys at Yelp, and therefore much more robust than our home-made solution.
Conclusions
Bayesian optimisation is not scary. With the two examples here you should be convinced that using a smart approach like this is faster than a random grid search (especially in higher dimensions) and that there is nothing magic going on.
If you find a mistake or want to tell me something else, get in touch on Twitter: @betatim.
This post started life as an IPython notebook; download it or view it online.