How to Evaluate Machine Learning Models, Part 4: Hyperparameter Tuning

In the realm of machine learning, hyperparameter tuning is a “meta” learning task. It happens to be one of my favorite subjects because it can look like black magic, yet its secrets are not impenetrable. In this post, I'll walk through what hyperparameter tuning is, why it's hard, and what kinds of smart tuning methods are being developed to tackle it.

First, let’s clarify some basic concepts. Machine learning models are basically mathematical functions that represent the relationship between different aspects of data. For instance, a linear regression model uses a line to represent the relationship between “features” and “target.” The formula looks like this:

w^T x = y,

where x is a vector that represents features of the data and y is a scalar variable that represents the target (some numeric quantity that we wish to learn to predict).

This model assumes that the relationship between x and y is linear. The variable w is a weight vector that represents the normal vector of the line; it specifies the slope of the line. This is what’s known as a model parameter, and it is learned during the training phase. “Training a model” involves using an optimization procedure to determine the best model parameter that “fits” the data. (Krishna’s blog post on Parallel SGD gives a great introduction on optimization methods for model training.)
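To make this concrete, here is a minimal sketch (with made-up data) of learning the model parameter w by ordinary least squares:

import numpy as np

# Toy dataset: 100 examples, 3 features, generated only for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

# "Training" here is just least squares: it recovers the weight vector w.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)  # close to [1.5, -2.0, 0.5]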

What is a hyperparameter? Why is it important?

There is another set of parameters known as hyperparameters, sometimes also known as “nuisance parameters.” These are values that must be specified outside of the training procedure. Vanilla linear regression doesn’t have any hyperparameters, but variants of linear regression do. Ridge regression and lasso both add a regularization term to linear regression; the weight of the regularization term is called the regularization parameter. Decision trees have hyperparameters such as the desired depth and number of leaves in the tree. Support Vector Machines require setting a misclassification penalty term. Kernelized SVMs require setting kernel parameters such as the width of an RBF kernel. The list goes on.
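In scikit-learn, for instance, these hyperparameters are arguments you fix at construction time, before fit() ever sees the data (the values below are arbitrary, just to show where they go):

from sklearn.linear_model import Ridge, Lasso
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

# Hyperparameters are fixed up front; fit() then learns the model parameters.
ridge = Ridge(alpha=1.0)                    # regularization weight
lasso = Lasso(alpha=0.1)                    # regularization weight
tree = DecisionTreeClassifier(max_depth=5, max_leaf_nodes=32)
svm = SVC(kernel="rbf", C=1.0, gamma=0.5)   # C: misclassification penalty; gamma: RBF width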

This type of hyperparameter controls the capacity of the model, i.e., how flexible the model is, how many degrees of freedom it has in fitting the data. Proper control of model capacity can prevent overfitting, which happens when the model is too flexible and the training process adapts too much to the training data, thereby losing predictive accuracy on new test data. So a proper setting of the hyperparameters is important.

Another type of hyperparameter comes from the training process itself. For instance, stochastic gradient descent optimization requires a learning rate or a learning schedule. Some optimization methods require a convergence threshold. Random forests and boosted decision trees require specifying the total number of trees. (Though this could also be classified as a type of regularization hyperparameter.) These also need to be set to reasonable values in order for the training process to find a good model.
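Again in scikit-learn terms (illustrative values), these show up as arguments that shape the optimizer rather than the model:

from sklearn.linear_model import SGDRegressor
from sklearn.ensemble import GradientBoostingRegressor

# eta0 (initial learning rate), learning_rate (the schedule), and tol
# (convergence threshold) govern the optimization, not the model itself.
sgd = SGDRegressor(learning_rate="invscaling", eta0=0.01, tol=1e-3)

# n_estimators sets the total number of trees (arguably also a
# regularization hyperparameter, as noted above).
gbt = GradientBoostingRegressor(n_estimators=100)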

Hyperparameter tuning

Hyperparameter settings could have a big impact on the prediction accuracy of the trained model. Optimal hyperparameter settings often differ for different datasets. Therefore they should be tuned for each dataset. Since the training process doesn’t set the hyperparameters, there needs to be a meta process that tunes the hyperparameters. This is what we mean by hyperparameter tuning.

Hyperparameter tuning is a meta-optimization task. As the figure shows, each trial of a particular hyperparameter setting involves training a model--an inner optimization process. The outcome of hyperparameter tuning is the best hyperparameter setting, and the outcome of model training is the best model parameter setting.

[Figure: Illustration of the hyperparameter tuning machine.]

For each proposed hyperparameter setting, the inner model training process comes up with a model for the dataset and outputs evaluation results on hold-out or cross-validation datasets. After evaluating a number of hyperparameter settings, the hyperparameter tuner outputs the setting that yields the best-performing model. The last step is to train a new model on the entire dataset (training and validation) under the best hyperparameter setting. Here it is as Python-style pseudocode. (The training/validation split can be conceptually replaced with a cross-validation step.)

def hyperparameter_tuning(training_data, validation_data, hp_list):
    # train_model and eval_model are placeholders for your learner and
    # your evaluation metric (higher is better).
    hp_perf = []
    for hp_setting in hp_list:
        m = train_model(training_data, hp_setting)
        hp_perf.append(eval_model(m, validation_data))
    # Keep the setting whose model scored best on the validation set.
    best_hp_setting = hp_list[hp_perf.index(max(hp_perf))]
    # Retrain on all available data under the winning setting.
    best_m = train_model(training_data + validation_data, best_hp_setting)
    return best_hp_setting, best_m

Hyperparameter tuning algorithms

Conceptually, hyperparameter tuning is an optimization task, just like model training.

However, these two tasks are quite different in practice. When training a model, the quality of a proposed set of model parameters can be written as a mathematical formula (usually called the loss function). When tuning hyperparameters, however, the quality of those hyperparameters cannot be written down in a closed-form formula, because it depends on the outcome of a black box (the model training process).

This is why hyperparameter tuning is much harder. Up until a few years ago, the only available methods were grid search and random search. In the last few years, there’s been increased interest in auto-tuning. Several research groups have worked on the problem, published papers, and released new tools.

Grid search

Grid search, true to its name, picks out a grid of hyperparameter values, evaluates every one of them, and returns the winner. For example, if the hyperparameter is the number of leaves in a decision tree, then the grid could be 10, 20, 30, …, 100. For regularization parameters, it’s common to use an exponential scale: 1e-5, 1e-4, 1e-3, …, 1. Some guesswork is necessary to specify the minimum and maximum values, so people sometimes run a small grid, check whether the optimum lies at an end point, and then expand the grid in that direction. This is called manual grid search.

Grid search is dead simple to set up and trivial to parallelize. It is the most expensive method in terms of total computation time. However, if run in parallel, it is fast in terms of wall clock time.
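A sketch with scikit-learn’s GridSearchCV, continuing the decision-tree example (the grid values are illustrative, not recommendations):

from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

param_grid = {
    "max_leaf_nodes": [10, 20, 30, 40, 50, 60, 70, 80, 90, 100],
    "min_samples_leaf": [1, 5, 10],
}
X, y = make_classification(n_samples=500, random_state=0)  # toy data
# n_jobs=-1 evaluates grid points in parallel across all cores.
search = GridSearchCV(DecisionTreeClassifier(), param_grid, cv=5, n_jobs=-1)
search.fit(X, y)
print(search.best_params_)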

Random search

I love movies where the underdog wins, and I love machine learning papers where simple solutions are shown to be surprisingly effective. This is the storyline of “Random search for hyperparameter optimization” by Bergstra and Bengio. Random search is a slight variation on grid search: instead of evaluating the entire grid, it evaluates only a random sample of points on the grid. This makes random search much cheaper than grid search. Random search wasn’t taken very seriously before, because it doesn’t search over all the grid points, so it cannot possibly beat the optimum found by an exhaustive grid search. But then Bergstra and Bengio came along and showed that, in surprisingly many instances, random search performs about as well as grid search. All in all, trying 60 random points sampled from the grid seems to be good enough.

In hindsight, there is a simple probabilistic explanation for this result: for any distribution over a sample space with a finite maximum, the maximum of 60 random observations lies within the top 5% of the true maximum with 95% probability. That may sound complicated, but it’s not. Imagine the 5% interval around the true maximum. Now imagine sampling points from the space and checking whether any of them land in that interval. Each random draw has a 5% chance of landing there, so if we draw n points independently, the probability that all of them miss the desired interval is (1 − 0.05)^n. The probability that at least one of them hits the interval is therefore 1 minus that quantity. We want at least a 0.95 probability of success. To figure out the number of draws we need, just solve for n in the inequality:

1 − (1 − 0.05)^n > 0.95.

We get n ≥ 60. Ta-da!
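A quick numeric sanity check in plain Python:

# Probability that at least one of 60 independent draws lands in the
# top-5% interval around the true maximum.
p = 1 - (1 - 0.05) ** 60
print(round(p, 3))  # 0.954, which exceeds 0.95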

The moral of the story is: if the close-to-optimal region of hyperparameters occupies at least 5% of the grid surface, then random search with 60 trials will find that region with high probability.

With its utter simplicity and surprisingly reasonable performance, random search is my go-to method for hyperparameter tuning. It’s trivially parallelizable, just like grid search, but it takes far fewer trials and performs almost as well most of the time.
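Here is the same decision-tree search as before, but sampling 60 settings at random per the rule of thumb above (again a sketch with illustrative values):

from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.tree import DecisionTreeClassifier

param_distributions = {
    "max_leaf_nodes": list(range(10, 101, 10)),
    "min_samples_leaf": [1, 2, 5, 10],
    "criterion": ["gini", "entropy"],
}
X, y = make_classification(n_samples=500, random_state=0)  # toy data
# n_iter=60: evaluate 60 random samples from the grid, in parallel.
search = RandomizedSearchCV(DecisionTreeClassifier(), param_distributions,
                            n_iter=60, cv=5, n_jobs=-1, random_state=0)
search.fit(X, y)
print(search.best_params_)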

Smart hyperparameter tuning

Smarter tuning methods are available. Unlike the “dumb” alternatives of grid search and random search, smart hyperparameter tuning methods are much less parallelizable. Instead of generating all the candidate points up front and evaluating the batch in parallel, smart tuning techniques pick a few hyperparameter settings, evaluate their quality, and then decide where to sample next. This is an inherently iterative and sequential process, and it is not very parallelizable. The goal is to make fewer evaluations overall and save on total computation time. If wall clock time is your goal, and you can afford multiple machines, then I suggest sticking to random search.

Buyer beware: Smart search algorithms require computation time to figure out where to place the next set of samples. Some algorithms require much more time than others. Hence smart search only makes sense if the evaluation procedure--the inner optimization box--takes much longer than the process of deciding where to sample next. Smart search algorithms also contain parameters of their own that need to be tuned. (Hyper-hyperparameters?) Sometimes tuning the hyper-hyperparameters is crucial to make smart search faster than random search.

Recall that hyperparameter tuning is difficult because we cannot write down the actual mathematical formula for the function we’re optimizing. (The technical term for the function that is being optimized is response surface.) Consequently, we don’t have the derivative of that function, and therefore most of the mathematical optimization tools that we know and love, such as the Newton method or SGD, cannot be applied.

I will highlight three smart tuning methods proposed in recent years: derivative-free optimization, Bayesian optimization, and random forest smart tuning. Derivative-free methods employ heuristics to determine where to sample next. Bayesian optimization and random forest smart tuning both model the response surface with another function, then sample more points based on what the model says.

Jasper Snoek, Hugo Larochelle, and Ryan P. Adams used Gaussian processes to model the response function and something called Expected Improvement to determine the next proposals. Gaussian processes are trippy; they specify distributions over *functions*. When one samples from a Gaussian process, one generates an entire function. Training a Gaussian process adapts this distribution over the data at hand, so that it generates functions that are more likely to model all of the data at once. Given the current estimate of the Gaussian process, one can compute the amount of expected improvement of any point over the current optimum--the Expected Improvement. They showed that this procedure of modeling the hyperparameter response surface and generating the next set of proposed hyperparameter settings can beat the evaluation cost of manual tuning.
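Here is a minimal sketch of that loop for a single hyperparameter, using scikit-learn’s GaussianProcessRegressor and the standard Expected Improvement formula. The toy response function and every helper name in it are illustrative stand-ins, not the authors’ Spearmint implementation:

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(mu, sigma, best_so_far):
    # EI for maximization: expected amount by which a point beats the incumbent.
    z = (mu - best_so_far) / np.maximum(sigma, 1e-9)
    return (mu - best_so_far) * norm.cdf(z) + sigma * norm.pdf(z)

def toy_response(hp):
    # Stand-in for the expensive inner box: "train a model with this
    # hyperparameter setting and return its validation score."
    return -(hp - 0.3) ** 2

candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
tried, scores = [[0.0], [1.0]], [toy_response(0.0), toy_response(1.0)]

for _ in range(10):
    # alpha adds jitter so close-by evaluations stay numerically well-conditioned.
    gp = GaussianProcessRegressor(alpha=1e-6).fit(np.array(tried), np.array(scores))
    mu, sigma = gp.predict(candidates, return_std=True)
    # Propose the candidate with the highest Expected Improvement.
    best_idx = int(np.argmax(expected_improvement(mu, sigma, max(scores))))
    tried.append([candidates[best_idx, 0]])
    scores.append(toy_response(candidates[best_idx, 0]))

print(tried[int(np.argmax(scores))])  # close to [0.3]

Each iteration refits the GP to all evaluations so far, so its uncertainty shrinks near tried points and Expected Improvement steers new proposals toward promising, under-explored regions.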

Frank Hutter, Holger H. Hoos and Kevin Leyton-Brown suggested training a random forest of regression trees to approximate the response surface. New points are sampled based on where the random forest considers to be the optimal regions. They call this SMAC (Sequential Model-based Algorithm Configuration). Word on the street is that this method works better than Gaussian processes for categorical hyperparameters.

Derivative-free optimization, as the name suggests, is a branch of mathematical optimization for situations where there is no derivative information. Notable derivative-free methods include genetic algorithms and Nelder-Mead. Essentially, the algorithms boil down to: try a bunch of random points, approximate the gradient, find the most likely search direction, and go there. A few years ago, Misha Bilenko and I tried Nelder-Mead for hyperparameter tuning. We found the algorithm delightfully easy to implement and no less efficient than Bayesian optimization.
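To give the flavor, here is a sketch of Nelder-Mead via scipy on a made-up stand-in for the validation objective (scipy minimizes, so in practice you would negate the validation score); this is not our original implementation:

import numpy as np
from scipy.optimize import minimize

def neg_validation_score(hp):
    # Placeholder for "train with hyperparameters hp and return the
    # negated validation score" -- the expensive inner box.
    return (hp[0] - 0.3) ** 2 + (hp[1] - 0.7) ** 2

result = minimize(neg_validation_score, x0=np.array([0.5, 0.5]),
                  method="Nelder-Mead")
print(result.x)  # near [0.3, 0.7]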

Other posts in this series

Part 1: Orientation

Part 2a: Classification metrics

Part 2b: Ranking and regression metrics

Part 3: Validation and offline testing

Software packages

Grid search and random search: GraphLab Create, scikit-learn.

Bayesian optimization using Gaussian processes: Spearmint (from Snoek et al.)

Random forest tuning: SMAC (from Hutter et al.)

Hyper gradient: hypergrad (from Maclaurin et al.)

Further reading

Random search for hyper-parameter optimization, by James Bergstra and Yoshua Bengio. Journal of Machine Learning Research, 2012.

Algorithms for hyper-parameter optimization, by James Bergstra, Rémi Bardenet, Yoshua Bengio and Balázs Kégl. Neural Information Processing Systems, 2011.

Practical Bayesian Optimization of Machine Learning Algorithms, by Jasper Snoek, Hugo Larochelle and Ryan P. Adams. Neural Information Processing Systems, 2012.

Sequential Model-Based Optimization for General Algorithm Configuration, by Frank Hutter, Holger H. Hoos and Kevin Leyton-Brown. Learning and Intelligent Optimization, 2011.

Lazy paired hyper-parameter tuning, by Alice Zheng and Misha Bilenko. International Joint Conference on Artificial Intelligence, 2013.

Introduction to derivative-free optimization, by Andrew R. Conn, Katya Scheinberg and Luis N. Vicente. MPS-SIAM Series on Optimization, 2009.

Gradient-based Hyperparameter Optimization through Reversible Learning, by Dougal Maclaurin, David Duvenaud and Ryan P. Adams. Arxiv, 2015.
