We propose an iterative method named LSLQ for solving linear least-squares problems
\(A x \approx b\) of any shape.
The method is based on the Golub and Kahan (1965) process, where the dominant cost is products with
\(A\) and its transpose.
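As a concrete illustration (not the authors' code), the Golub-Kahan process can be sketched in a few lines of NumPy: starting from \(b\), each step performs exactly one product with \(A\) and one with \(A^T\), and produces orthonormal bases \(U_{k+1}\), \(V_k\) and a lower-bidiagonal \(B_k\) with \(A V_k = U_{k+1} B_k\).

```python
import numpy as np

def golub_kahan(A, b, k):
    """Run k steps of Golub-Kahan bidiagonalization of A, started with b.

    Returns U (m x (k+1)), V (n x k), and the lower-bidiagonal B ((k+1) x k)
    satisfying A @ V == U @ B.  Each step costs one product with A and one
    with A.T, the dominant cost noted in the abstract.
    """
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))

    beta = np.linalg.norm(b)
    U[:, 0] = b / beta
    for j in range(k):
        # alpha_j v_j = A^T u_j - beta_j v_{j-1}
        v = A.T @ U[:, j]
        if j > 0:
            v -= B[j, j - 1] * V[:, j - 1]
        alpha = np.linalg.norm(v)
        V[:, j] = v / alpha
        B[j, j] = alpha
        # beta_{j+1} u_{j+1} = A v_j - alpha_j u_j
        u = A @ V[:, j] - alpha * U[:, j]
        beta = np.linalg.norm(u)
        U[:, j + 1] = u / beta
        B[j + 1, j] = beta
    return U, V, B
```

Methods in this family (LSQR, LSMR, LSLQ) then solve small subproblems with \(B_k\) instead of forming \(A^T A\) explicitly.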
In the rank-deficient case, LSLQ identifies the minimum-length least-squares solution.
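LSLQ itself is not available in standard libraries, but the minimum-length property can be illustrated with LSQR, a related method built on the same Golub-Kahan process: started from zero, its iterates stay in the range of \(A^T\), so on a rank-deficient problem it converges to the pseudoinverse (minimum-length) solution. A minimal sketch with SciPy:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# A deliberately rank-deficient 6x4 system (rank 2).
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))
b = rng.standard_normal(6)

# Started from zero, LSQR's iterates lie in range(A^T), so it converges
# to the minimum-length least-squares solution, i.e. pinv(A) @ b.
x_lsqr = lsqr(A, b, atol=1e-12, btol=1e-12)[0]
x_min = np.linalg.pinv(A) @ b
print(np.linalg.norm(x_lsqr - x_min))
```

The two solutions agree to high accuracy; a generic least-squares solution would differ from `x_min` by an arbitrary null-space component.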
LSLQ is formally equivalent to SYMMLQ
applied to the normal equations, so that the Euclidean norm of the iterates increases monotonically while the associated error norm decreases monotonically.
We provide lower and upper bounds on the error in the Euclidean norm along the LSLQ iterations.
The upper bound translates to an upper bound on the error norm along the LSQR iterations, which was previously unavailable, and provides an error-based stopping criterion involving a transition to the LSQR point.
We report numerical experiments on standard test problems and on a full-wave inversion problem from geophysics, in which an approximate least-squares solution corresponds to an approximate gradient of the penalty function to be minimized.
Published January 2017, 24 pages