GERAD seminar

Learning Optimization Proxies: Fast Convergence and Robustness Guarantees


Apr 30, 2026   11:00 AM — 12:00 PM

Paul Grigas, University of California, Berkeley, United States


Many applications require repeatedly solving a family of optimization problems as parameters vary, which can be computationally prohibitive in real time. This motivates the use of optimization proxies: learned mappings from problem parameters to approximate solutions that can be evaluated rapidly. Optimization proxies are learned by parameterizing the solution map within a function class and training via a single data-driven stochastic optimization problem, an approach that has seen strong empirical success in a variety of domains. Still, relatively little is known about the interplay between architecture design, training distributions, training algorithms, and robustness guarantees. We theoretically investigate this interplay by first demonstrating that the learning problem satisfies a weak interpolation property, implying that constant step-size stochastic gradient methods converge linearly to a neighborhood determined by the approximation error, and revealing a tradeoff between model expressiveness and optimization speed. We leverage this insight to propose an adaptive training strategy that progressively increases model complexity while reducing the step-size, achieving both fast initial convergence and improved final accuracy. We further establish a novel robustness guarantee, showing that the worst-case (out-of-distribution) error of a learned optimization proxy is controlled by the approximation error of the function class. We illustrate our theoretical results using polynomial basis function proxies, demonstrating how the structure of the underlying optimization family can be leveraged to improve learning. Finally, we discuss extensions of our analysis to stochastic mirror descent with non-strongly convex reference functions, and demonstrate strong empirical performance on binary classification and portfolio optimization instances.
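As a toy illustration of the setup described above, the sketch below learns a proxy for a parametric family of quadratic programs min_x ½xᵀQx − θᵀx, whose exact solution map x*(θ) = Q⁻¹θ is linear in θ. Because a linear (degree-1 polynomial basis) proxy class contains the true map, the learning problem interpolates, and constant step-size SGD on the decision loss converges to the exact solution map, consistent with the linear-convergence claim in the abstract. All names and parameter choices here are illustrative assumptions, not the speaker's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parametric family: min_x 0.5 x'Qx - theta'x, with exact solution x*(theta) = Q^{-1} theta.
d = 5
Q = np.diag(rng.uniform(1.0, 4.0, d))
Qinv = np.linalg.inv(Q)

def solve(theta):
    # Exact solver, used only to evaluate the learned proxy.
    return Qinv @ theta

# Proxy: a linear map x_hat(theta) = W theta (degree-1 polynomial basis functions).
W = np.zeros((d, d))

# Train with constant step-size SGD on the decision loss f(x; theta) = 0.5 x'Qx - theta'x,
# sampling theta from the training distribution. Interpolation holds (the true map is in
# the proxy class), so the gradient noise vanishes at the optimum and SGD converges.
step = 0.01
for _ in range(20000):
    theta = rng.normal(size=d)
    x = W @ theta
    grad_x = Q @ x - theta               # gradient of f with respect to the decision x
    W -= step * np.outer(grad_x, theta)  # chain rule through x = W theta

# In-distribution check: proxy decisions should be near-optimal.
theta_test = rng.normal(size=d)
err = np.linalg.norm(W @ theta_test - solve(theta_test))
print(f"proxy error on a test parameter: {err:.4f}")
```

Swapping the linear proxy for a less expressive class (e.g., a coarser basis) breaks interpolation: SGD then converges only to a neighborhood whose radius is set by the approximation error, which is the expressiveness/speed tradeoff the talk analyzes.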


Biography: Paul Grigas is an associate professor of Industrial Engineering and Operations Research (IEOR) at the University of California, Berkeley. His research interests are broadly in optimization, machine learning, and data-driven decision making. He enjoys thinking about new ways to build machine learning models for decision-making problems (usually modeled with optimization), and about how to design and analyze optimization algorithms for problems in machine learning and operations research. He has worked on applications in online advertising, among other areas.

His work has been funded by the National Science Foundation (NSF), and he is affiliated with the NSF Artificial Intelligence (AI) Research Institute for Advances in Optimization (AI4OPT).

He received his B.S. in Operations Research and Information Engineering (ORIE) from Cornell University in 2011, and his Ph.D. in Operations Research from MIT in 2016.

Okan Arslan (organizer)

Location

HEC Montréal
Budapest room
(1st floor, green section)
3000, ch. de la Côte-Sainte-Catherine

Montréal Québec H3T 2A7
Canada
