SigOpt works with algorithmic trading firms that represent over $600B in assets under management. These partnerships have given us unique insight into how model optimization and experimentation can be made most useful for teams modeling at scale to generate revenue or differentiate their products in a competitive marketplace. To share these lessons, tune in on the third Tuesday of each month, when we focus on best practices for experiment management, model optimization and parameter tuning.
Last month, we focused on efficiently scaling tuning jobs with parallelism and multitask optimization. This month, we turn to getting the most out of every run, with an emphasis on deep learning model development.
In deep learning, it can be particularly tough to select the right metric and know when a model has converged during training. In this talk, we discuss ways to monitor convergence, automate early stopping and set the right metric strategy for deep learning training and tuning jobs. The result is a more efficient approach to iterating on these models in the development process.
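To make the early stopping idea concrete, here is a minimal sketch of patience-based convergence monitoring on a validation metric. This is an illustration only, not SigOpt's implementation; the class name `ConvergenceMonitor` and the training-loop functions referenced in the comments are hypothetical.

```python
# Minimal sketch of patience-based early stopping on a validation metric.
# Names (ConvergenceMonitor, train_one_epoch, evaluate) are illustrative only.

class ConvergenceMonitor:
    """Signals a stop when the tracked metric has not improved for `patience` epochs."""

    def __init__(self, patience: int = 5, min_delta: float = 1e-4):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.epochs_without_improvement = 0

    def update(self, metric: float) -> bool:
        """Record a new validation metric; return True if training should stop."""
        if metric < self.best - self.min_delta:
            self.best = metric
            self.epochs_without_improvement = 0
        else:
            self.epochs_without_improvement += 1
        return self.epochs_without_improvement >= self.patience


# Example usage inside a (hypothetical) training loop:
# monitor = ConvergenceMonitor(patience=5)
# for epoch in range(max_epochs):
#     train_one_epoch(model, train_loader)
#     val_loss = evaluate(model, val_loader)
#     if monitor.update(val_loss):
#         break  # validation loss has plateaued; stop early
```

The `patience` and `min_delta` thresholds trade off compute savings against the risk of stopping a run that would still improve, which is exactly the kind of decision the talk addresses.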
In particular, we will focus on: