New paper: Making a Science of Model Search

24 Feb 2013 - Waterloo ON

A new paper by myself, Dan Yamins, and David D. Cox about hyperparameter search for large convolutional networks shows that the TPE algorithm introduced at NIPS 2011 can configure a null image-processing model with 238 hyperparameters as well as or better than domain experts. Our null model was the set of convolutional networks with contrast normalization and spatial pooling operators, with SVM classification. We stuck to fast filter-learning algorithms: standard normal random filters and random projections of PCA components. The search space includes the one used by Pinto et al. in their face recognition work and part of the space used by Coates et al. to study different encoder/decoder combinations. Starting from a single prior over vision architectures, TPE finds better-performing models (on cross-validation data) than either of these previous works found on the data sets they studied, within 24 hours on 5 GPUs.
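If you want to run this kind of search yourself, TPE is available through the hyperopt library. The snippet below is a minimal sketch, not the paper's 238-hyperparameter space: the hyperparameter names, the ranges, and the `train_and_score` helper are invented for illustration.

```python
# Minimal sketch of a TPE search with hyperopt.
# The search space and train_and_score() are illustrative placeholders;
# the paper's actual space covers full convolutional architectures.
from hyperopt import fmin, tpe, hp, Trials, STATUS_OK

space = {
    'n_filters': hp.quniform('n_filters', 16, 256, 16),  # filter bank size
    'pool_size': hp.choice('pool_size', [2, 3, 4]),      # spatial pooling extent
    'svm_C': hp.loguniform('svm_C', -5, 5),              # SVM regularization
}

def objective(params):
    # Train a model under `params` and return its cross-validation error.
    err = train_and_score(params)  # hypothetical evaluation function
    return {'loss': err, 'status': STATUS_OK}

trials = Trials()
best = fmin(fn=objective,
            space=space,
            algo=tpe.suggest,  # the TPE algorithm
            max_evals=200,
            trials=trials)
print(best)
```

Each call to `objective` trains and scores one candidate model, so the evaluations parallelize naturally across machines.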

Citation:
J. Bergstra, D. Yamins, and D. D. Cox (2013).
Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures.
Proc. 30th International Conference on Machine Learning (ICML-13).

Abstract:
Many computer vision algorithms depend on configuration settings that are typically hand-tuned in the course of evaluating the algorithm for a particular data set. While such parameter tuning is often presented as being incidental to the algorithm, correctly setting these parameters is frequently critical to realizing a method’s full potential. Compounding matters, these parameters often must be re-tuned when the algorithm is applied to a new problem domain, and the tuning process itself often depends on personal experience and intuition in ways that are hard to quantify or describe. Since the performance of a given technique depends on both the fundamental quality of the algorithm and the details of its tuning, it is sometimes difficult to know whether a given technique is genuinely better, or simply better tuned.

In this work, we propose a meta-modeling approach to support automated hyperparameter optimization, with the goal of providing practical tools that replace hand-tuning with a reproducible and unbiased optimization process. Our approach is to expose the underlying expression graph of how a performance metric (e.g., classification accuracy on validation examples) is computed from hyperparameters that govern not only how individual processing steps are applied, but even which processing steps are included. A hyperparameter optimization algorithm transforms this graph into a program for optimizing that performance metric. Our approach yields state-of-the-art results on three disparate computer vision problems: a face-matching verification task (LFW), a face identification task (PubFig83) and an object recognition task (CIFAR-10), using a single broad class of feed-forward vision architectures. More broadly, we argue that the formalization of a meta-model supports more objective, reproducible, and quantitative evaluation of computer vision algorithms, and that it can serve as a valuable tool for guiding algorithm development.
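The "expression graph" idea from the abstract is easiest to see in code: in hyperopt, a hyperparameter can choose among entire sub-graphs, so some hyperparameters only exist when the processing step they configure is included. Below is a small sketch of such a conditional space; the step names and ranges are invented for illustration and are not the paper's actual pipeline.

```python
# Sketch of a conditional (tree-structured) search space in hyperopt.
# One hyperparameter decides whether a normalization step is included;
# its sub-hyperparameters exist only on that branch.
# Step names and ranges are illustrative, not the paper's actual space.
from hyperopt import hp

space = {
    'preproc': hp.choice('preproc', [
        {'kind': 'none'},  # branch 1: skip normalization entirely
        {'kind': 'lcn',    # branch 2: local contrast normalization
         'kernel_size': hp.choice('lcn_kernel', [3, 5, 7, 9]),
         'threshold': hp.loguniform('lcn_thresh', -4, 0)},
    ]),
    'pool': hp.choice('pool', ['max', 'mean']),
}
```

This tree structure is what the "T" in TPE refers to: the optimizer models the hyperparameters of each branch separately, so settings for a step are only proposed when that step is actually part of the candidate model.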