Open Access
Fast Bayesian hyperparameter optimization on large datasets
Aaron Klein, Stefan Falkner, Simon Bartels, Philipp Hennig, Frank Hutter
Electron. J. Statist. 11(2): 4945-4968 (2017). DOI: 10.1214/17-EJS1335SI

Abstract

Bayesian optimization has become a successful tool for optimizing the hyperparameters of machine learning algorithms, such as support vector machines or deep neural networks. Despite its success, for large datasets, training and validating a single configuration often takes hours, days, or even weeks, which limits the achievable performance. To accelerate hyperparameter optimization, we propose a generative model for the validation error as a function of training set size, which is learned during the optimization process and allows exploration of preliminary configurations on small subsets by extrapolating to the full dataset. We construct a Bayesian optimization procedure, dubbed FABOLAS, which models loss and training time as a function of dataset size and automatically trades off high information gain about the global optimum against computational cost. Experiments optimizing support vector machines and deep neural networks show that FABOLAS often finds high-quality solutions 10 to 100 times faster than other state-of-the-art Bayesian optimization methods or the recently proposed bandit strategy Hyperband.

Citation

Aaron Klein, Stefan Falkner, Simon Bartels, Philipp Hennig, Frank Hutter. "Fast Bayesian hyperparameter optimization on large datasets." Electron. J. Statist. 11(2): 4945-4968, 2017. https://doi.org/10.1214/17-EJS1335SI

Information

Received: 1 June 2017; Published: 2017
First available in Project Euclid: 15 December 2017

zbMATH: 06825037
MathSciNet: MR3738202
Digital Object Identifier: 10.1214/17-EJS1335SI
