SigOpt is designed to accelerate hyperparameter optimization by intelligently trading off exploration and exploitation with an ensemble of Bayesian and global optimization algorithms that require no prior insight into a model or its parameters. But our users often have deep insight into their models and parameters that could guide this tuning process.
Join this talk to learn how Mike and Harvey from our research team built Prior Beliefs to incorporate this type of prior knowledge into SigOpt’s hyperparameter optimization process.
In this talk, they will explain the intuition behind Prior Beliefs and share best practices for using it with SigOpt to fold experience gained on a modeling problem into the initialization of a hyperparameter optimization experiment.
The hope is that this yields parameter suggestions, and ultimately an optimal configuration, better aligned with users' particular needs. And by warm starting the process, users can reach that optimal configuration faster than before. In the course of discussing this feature, we will also cover some initial use cases from our community.
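To make the idea concrete, here is a minimal sketch of how prior knowledge might be encoded as a prior belief on a parameter definition. It assumes SigOpt's documented `prior` field for double-valued parameters (normal and beta priors); the parameter names and values are illustrative, not taken from the talk.

```python
# Sketch: parameter definitions for a SigOpt experiment, with prior beliefs
# attached. The `prior` field expresses where a practitioner expects good
# values to lie, warm starting the optimizer's search.
parameters = [
    {
        "name": "learning_rate",  # hypothetical tunable parameter
        "type": "double",
        "bounds": {"min": 1e-5, "max": 1e-1},
        # Belief from past runs: values near 1e-3 tend to work well,
        # encoded as a normal prior centered there.
        "prior": {"name": "normal", "mean": 1e-3, "scale": 5e-3},
    },
    {
        "name": "dropout",  # hypothetical tunable parameter
        "type": "double",
        "bounds": {"min": 0.0, "max": 0.8},
        # Beta prior expressing a belief that moderate dropout is most likely.
        "prior": {"name": "beta", "shape_a": 2, "shape_b": 3},
    },
]

# These definitions would then be passed when creating the experiment, e.g.
# via the SigOpt Python client:
#   conn.experiments().create(name="warm-started tuning",
#                             parameters=parameters, ...)
```

Parameters without a `prior` field are treated as before, so prior beliefs can be added only where genuine domain knowledge exists.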
Michael is a member of the research engineering team at SigOpt, with interests in Bayesian optimization, kernel-based approximation theory, spatial statistics, and matrix computations. At SigOpt, he applies his expertise in leading the development of an enterprise-grade Bayesian optimization platform and facilitating collaborations with academic partners. Prior to joining SigOpt, he spent time in the math and computer science division at Argonne National Laboratory and was a visiting assistant professor at the University of Colorado Denver and the Illinois Institute of Technology. Mike holds a Ph.D. from Cornell. In his spare time, he enjoys cheering for the Cleveland Cavaliers and traveling to Hong Kong.