GERAD seminar

An \(l_1\)-augmented Lagrangian algorithm and why, at least sometimes, it is a very good idea


Jul 6, 2017   10:45 AM — 12:00 PM

Andrew R. Conn, IBM TJ Watson Research Center, United States

\(l_2\)-augmented Lagrangian algorithms have been around for almost 50 years and are still frequently used today. One way of looking at them is as a modification of the inexact quadratic penalty function (which requires that the penalty parameter become unbounded), obtained by adding Lagrangian terms. The resulting advantage is that, by updating the Lagrange multipliers, one can solve the original constrained problem whilst keeping the penalty parameter bounded.
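For concreteness: for an equality-constrained problem \(\min_x f(x)\) subject to \(c(x) = 0\), a standard form of the \(l_2\)-augmented Lagrangian (the notation here is a common textbook convention, not taken from the talk) is

\[ \mathcal{L}_A(x, \lambda; \mu) \;=\; f(x) \;+\; \lambda^T c(x) \;+\; \frac{1}{2\mu} \|c(x)\|_2^2, \]

where \(\lambda\) is the current multiplier estimate and \(\mu > 0\) is the penalty parameter. Updating \(\lambda\) between subproblem solves is what allows \(\mu\) to remain bounded while still driving \(c(x)\) to zero.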

In this talk I will describe an \(l_1\)-Augmented Lagrangian which one could consider as a modification of the exact \(l_1\)-penalty function by adding Lagrangian terms. Since the penalty parameter for exact penalty functions remains bounded anyway and furthermore the \(l_1\) exact penalty function is not differentiable, this does not sound like a good idea. I hope to convince you otherwise.
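By analogy with the above, one plausible form of such an \(l_1\)-augmented Lagrangian (a reconstruction from the description in this abstract, not necessarily the speaker's exact definition) is

\[ \mathcal{L}_{A,1}(x, \lambda; \mu) \;=\; f(x) \;+\; \lambda^T c(x) \;+\; \frac{1}{\mu} \|c(x)\|_1, \]

which replaces the smooth quadratic penalty with the nonsmooth exact \(l_1\) penalty while retaining the multiplier terms.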

I will include motivation, theory, context and some provisional numerical results.

Free entrance.
Welcome to everyone!


Room 4488
André-Aisenstadt Building
Université de Montréal Campus
2920, chemin de la Tour
Montréal QC H3T 1J4
