Come Join SigOpt at ICML

From July 18-24, the International Conference on Machine Learning (ICML) will be held virtually (hosted from Austria). SigOpt has participated in and published research at ICML since 2016. Along with our Academic and Internship Programs, conferences like ICML are exciting ways in which SigOpt’s research team continues to invest in and collaborate with the broader machine learning and Bayesian optimization communities.

We are, of course, disappointed that this conference is happening remotely; we look forward to the opportunity to meet everyone in person at future conferences. Until then, please visit us at the Intel booth, where we will be present during parts of the booth session (exact times will be posted as we learn them). If you cannot make that session but are still interested in talking, please contact us at research@sigopt.com and we will get back to you soon!


Poster Session

This year, SigOpt will be presenting our research on a new methodology to power model tuning. In discussions with our customers, we have learned that many production circumstances benefit from having a broad set of high-performing models from which to choose. Multi-armed bandits, model ensembling, and incorporating expert opinion all require a small set of candidate models that meet a developer’s minimum performance standards.

Beyond the Pareto Efficient Frontier: Constraint Active Search for Multiobjective Experimental Design was accepted into the proceedings. You can read more about it in an upcoming blog post by Gustavo Malkomes. We also have a five-minute spotlight presentation and a poster that explain the research in more detail.

Spotlight Presentation: Bayesian Learning 1
Thu 22 July 6:20 - 6:25 AM PDT

Poster Session 5
Thu 22 July 9:00 - 11:00 AM PDT

We look forward to seeing you!


Featured ICML Articles

We are fortunate to be part of an awesome community, and we would like to highlight the work of former collaborators and colleagues below and congratulate them on their achievements. Be sure to check out their talks, posters, blogs, and papers.

Accurate Post Training Quantization With Small Calibration Sets
Itay Hubara (Habana Labs) · Yury Nahshan (Intel Corp) · Yair Hanani (Habana Labs) · Ron Banner (Habana Labs) · Daniel Soudry (Technion)

Learning Binary Decision Trees by Argmin Differentiation
Valentina Zantedeschi (INRIA, UCL) · Matt J. Kusner (University College London) · Vlad Niculae (Instituto de Telecomunicações)

Nonmyopic Multifidelity Active Search
Quan Nguyen (Washington University in St. Louis) · Arghavan Modiri (University of Toronto) · Roman Garnett (Washington University in St. Louis)

Operationalizing Complex Causes: A Pragmatic View of Mediation
Limor Gultchin (University of Oxford) · David Watson (University College London) · Matt J. Kusner (University College London) · Ricardo Silva (University College London)

Proximal Causal Learning with Kernels: Two-Stage Estimation and Moment Restriction
Afsaneh Mastouri (University College London) · Yuchen Zhu (University College London) · Limor Gultchin (University of Oxford) · Anna Korba (CREST/ENSAE) · Ricardo Silva (University College London) · Matt J. Kusner (University College London) · Arthur Gretton (Gatsby Computational Neuroscience Unit) · Krikamol Muandet (Max Planck Institute for Intelligent Systems)