LNLQ: an iterative method for least-norm problems with an error minimization property



We describe LNLQ for solving the least-norm problem \(\min\ \|x\|\) subject to \(Ax=b\). Craig's method is known to be equivalent to applying the conjugate gradient method to the normal equations of the second kind (\(AA^T\!y = b, \ x = A^T\!y\)). LNLQ is equivalent to applying SYMMLQ to the same normal equations of the second kind. If an underestimate of the smallest singular value is available, upper bounds on the errors in both \(x\) and \(y\) can be computed at each iteration. LNLQ is a companion method to the least-squares solver LSLQ (Estrin, Orban, and Saunders, 2017), which is equivalent to SYMMLQ applied to the conventional normal equations. We show that the error upper bounds are tight and compare them with the bounds suggested by Arioli (2013) for CRAIG. A sliding-window technique allows us to tighten the error upper bound on \(y\) at the expense of a few additional scalar operations per iteration.
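The equivalence mentioned above can be illustrated numerically. The sketch below (a plain CG loop, not the specialized CRAIG or LNLQ recurrences described in the paper) solves the least-norm problem by applying CG to the normal equations of the second kind, \(AA^T y = b\), and recovering \(x = A^T y\); the function name `craig_via_cg` and the random test problem are illustrative choices, not part of the paper.

```python
import numpy as np

def craig_via_cg(A, b, tol=1e-10, maxiter=500):
    """Illustrative sketch: solve min ||x|| s.t. Ax = b by running plain CG
    on the normal equations of the second kind, A A^T y = b, then x = A^T y.
    (This is the equivalence underlying Craig's method, not LNLQ itself.)"""
    m = A.shape[0]
    y = np.zeros(m)
    r = b - A @ (A.T @ y)          # residual of A A^T y = b
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ (A.T @ p)         # apply A A^T without forming it explicitly
        alpha = rs / (p @ Ap)
        y += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return A.T @ y                 # minimum-norm solution x = A^T y

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))  # underdetermined system, full row rank
b = rng.standard_normal(20)
x = craig_via_cg(A, b)
x_ref = np.linalg.pinv(A) @ b      # minimum-norm solution via pseudoinverse
print(np.allclose(x, x_ref, atol=1e-8))  # prints True
```

In practice, methods such as CRAIG and LNLQ work with the Golub-Kahan bidiagonalization of \(A\) rather than an explicit CG loop on \(AA^T\), which avoids the squared conditioning of the operator.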

24 pages

SIAM Journal on Matrix Analysis and Applications, 40(3), 1102–1124, 2019