We develop a trust-region method for minimizing the sum of a smooth term
\(f\) and a nonsmooth term
\(h\), both of which can be nonconvex.
Each iteration of our method minimizes a possibly nonconvex model of
\(f + h\) in a trust region.
The model coincides with
\(f + h\) in value and subdifferential at the center.
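Concretely, writing \(\varphi(\cdot; x_k)\) for the model of \(f\) and \(\psi(\cdot; x_k)\) for the model of \(h\) about the iterate \(x_k\) (notation chosen here for illustration), the subproblem takes the form
\[
  \min_{s} \; \varphi(s; x_k) + \psi(s; x_k) \quad \text{subject to} \quad \|s\| \le \Delta_k,
\]
where \(\Delta_k > 0\) is the trust-region radius, and the matching conditions at the center read \(\varphi(0; x_k) + \psi(0; x_k) = (f + h)(x_k)\) and \(\partial(\varphi + \psi)(0; x_k) = \partial(f + h)(x_k)\).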
We establish global convergence to a first-order stationary point when
\(f\) satisfies a smoothness condition that holds, in particular, when it has a Lipschitz-continuous gradient, and
\(h\) is proper and lower semi-continuous.
The model of
\(h\) is required to be proper, lower semi-continuous, and prox-bounded.
Under these weak assumptions, we establish a worst-case
\(O(1/\epsilon^2)\) iteration complexity bound that matches the best known complexity bound of standard trust-region methods for smooth optimization.
We detail a special instance in which we use a limited-memory quasi-Newton model of
\(f\) and compute a step with the proximal gradient method, resulting in a practical proximal quasi-Newton method.
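As an illustration of the step computation (a minimal sketch, not our implementation: here \(h = \lambda\|\cdot\|_1\), the trust region is measured in the \(\ell_\infty\) norm, and all names below are chosen for exposition), a proximal gradient loop on the model could look as follows.
\begin{verbatim}
# Illustrative sketch only: proximal gradient on the model
#   q(s) = ∇f(xₖ)ᵀs + ½ sᵀBs + λ‖xₖ + s‖₁  subject to  ‖s‖∞ ≤ Δ,
# with B a quasi-Newton approximation (a dense matrix here for simplicity).
using LinearAlgebra

soft_threshold(z, τ) = sign.(z) .* max.(abs.(z) .- τ, 0)

function model_step(g, B, x, λ, Δ; maxiter = 100, tol = 1e-8)
    s = zero(x)
    ν = 1 / (opnorm(B) + eps())          # prox-gradient step size ≤ 1/‖B‖
    for _ in 1:maxiter
        z = x + s - ν * (g + B * s)      # gradient step on the smooth part of q
        # exact prox of ν(λ‖·‖₁ + indicator of the ℓ∞ ball): threshold, then clamp
        s_new = clamp.(soft_threshold(z, ν * λ), x .- Δ, x .+ Δ) .- x
        done = norm(s_new - s) ≤ tol
        s = s_new
        done && break
    end
    return s
end
\end{verbatim}
Because both the \(\ell_1\) term and the \(\ell_\infty\) ball are separable, the threshold-then-clamp update is the exact proximal step, which keeps each inner iteration cheap.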
We establish similar convergence properties and a similar complexity bound for a quadratic regularization variant, and interpret it as a proximal gradient method with adaptive step size for nonconvex problems.
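Under the simplest model choice (a first-order model of \(f\) and \(\psi = h\); the symbol \(\sigma_k\) below is illustrative), the step of this variant reduces to
\[
  x_{k+1} \in \operatorname{prox}_{\nu_k h}\big(x_k - \nu_k \nabla f(x_k)\big), \qquad \nu_k = 1/\sigma_k,
\]
where \(\sigma_k > 0\) is the regularization parameter updated by the usual acceptance test, so that \(\nu_k\) acts as an adaptive step size.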
We describe our Julia implementations and report numerical results on inverse problems from sparse optimization and signal processing.
Our trust-region algorithm exhibits promising performance and compares favorably with linesearch proximal quasi-Newton methods based on convex models.
Published in March 2021, 30 pages