DS4DM Coffee Break Discussion

Extragradient Methods: O(1/K) Last-Iterate Convergence for Monotone Variational Inequalities


June 2, 2022   11:00–12:00

Gauthier Gidel, Université de Montréal, Canada

Hybrid seminar on Zoom and in the GERAD seminar room.

The Extragradient [Korpelevich, 1976] and Past Extragradient [Popov, 1980] methods are among the most popular methods for solving saddle-point and variational inequality problems (VIPs). Recently, they have seen a surge of interest with the emergence of variational inequality formulations in machine learning, such as Generative Adversarial Networks. Despite their long history and the significant attention they have received in the optimization community, important open problems remain about their convergence in the monotone setting. In this talk, we present how we resolved some of these open questions and derived the first last-iterate O(1/K) convergence rates for monotone and Lipschitz VIPs without any additional assumptions on the operator.
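As a minimal illustration of the extragradient update discussed in the abstract, the sketch below applies it to a toy bilinear saddle-point problem. The operator, step size, and iteration count are illustrative choices, not taken from the talk:

```python
import numpy as np

# Monotone operator of the bilinear saddle-point problem min_x max_y x*y:
# F(x, y) = (y, -x). Plain simultaneous gradient descent-ascent diverges
# on this problem, while the extragradient method converges to the
# unique saddle point (0, 0).
def F(z):
    x, y = z
    return np.array([y, -x])

def extragradient(z0, step=0.1, iters=1000):
    z = np.array(z0, dtype=float)
    for _ in range(iters):
        z_half = z - step * F(z)     # extrapolation (lookahead) step
        z = z - step * F(z_half)     # update using the lookahead operator value
    return z

z_last = extragradient([1.0, 1.0])
# The distance to the solution (0, 0) shrinks toward zero.
print(np.linalg.norm(z_last))
```

The key design point is the lookahead: the actual update uses the operator evaluated at the extrapolated point `z_half` rather than at the current iterate, which is what tames the rotational dynamics of monotone operators.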

The central part of our analysis is based on Performance Estimation Problems and computer-aided proofs. In the presentation, I will pay special attention to this approach and explain the main non-trivial issues we faced in obtaining the final proofs via numerical computations.

References

[Korpelevich, 1976] Korpelevich, G. M. (1976). The extragradient method for finding saddle points and other problems. Matecon, 12:747–756.

[Popov, 1980] Popov, L. D. (1980). A modification of the Arrow-Hurwicz method for search of saddle points. Mathematical Notes of the Academy of Sciences of the USSR, 28(5):845–848.

Federico Bobbio, organizer
Gabriele Dragotto, organizer

Location

Hybrid seminar
Zoom and room 4488
Pavillon André-Aisenstadt
Campus de l'Université de Montréal
2920, chemin de la Tour
Montréal Québec H3T 1J4 Canada
