Session TB6 - Large-Scale Nonlinear Optimization

Day: Tuesday, May 10, 2005
Location: Marie-Husny
Chair: Dominique Orban

Presentations

01:30 PM A Modified Newton Interior Point Method for Nonlinear Programming
  Germain Tanoh, Universidad de Chile, Centro de Modelamiento Matematico, 2120, Blanco Encalada, Piso 7, Santiago, Chile

We analyze an iterative interior-point method for minimizing a general nonlinear function subject to equality and inequality constraints. Applying the barrier penalty technique, we solve a sequence of unconstrained minimization problems with a primal-dual merit function. We develop an iterative method to compute the Newton steps: a descent direction and a direction of negative curvature are found by alternately using the conjugate gradient and Lanczos methods. Various computational enhancements to improve these directions are provided. Our strategy is well suited to dealing with indefiniteness and can be applied to large-scale nonlinear optimization problems. Promising preliminary numerical results are presented.
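
A minimal sketch of the direction-finding idea, in plain NumPy (illustrative, not the authors' implementation): run conjugate gradient on the Newton system and stop with the current search direction as soon as nonpositive curvature is detected, so each pass yields either an approximate Newton descent direction or a direction of negative curvature.

    import numpy as np

    def cg_with_negative_curvature(H, g, tol=1e-8, max_iter=200):
        # Truncated CG on H x = -g.  Returns (d, 'neg_curv') if a
        # direction with d'Hd <= 0 is found, else (x, 'descent') with
        # an (approximate) Newton step.
        x = np.zeros_like(g)
        r = -g.copy()                  # residual of H x = -g at x = 0
        d = r.copy()
        rs_old = r @ r
        for _ in range(max_iter):
            Hd = H @ d
            curv = d @ Hd
            if curv <= 0.0:            # nonpositive curvature detected
                return d, 'neg_curv'
            alpha = rs_old / curv
            x = x + alpha * d
            r = r - alpha * Hd
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            d = r + (rs_new / rs_old) * d
            rs_old = rs_new
        return x, 'descent'

    # An indefinite Hessian triggers the negative-curvature exit:
    H = np.diag([2.0, 1.0, -0.5])
    g = np.ones(3)
    print(cg_with_negative_curvature(H, g))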


01:55 PM Adaptive Barrier Procedures for Nonlinear Interior Methods
  Richard A. Waltz, Northwestern University, Electrical & Computer Engineering, Sheridan Road, Evanston, Illinois, USA

We consider strategies for selecting the barrier parameter at every iteration of an interior-point method for nonlinear programming. Numerical experiments suggest that adaptive choices outperform static strategies that hold the barrier parameter fixed until a barrier optimality test is satisfied. A new adaptive strategy is proposed, based on the minimization of a quality function that measures the violation of the KKT conditions. We also propose a globalization framework that ensures the convergence of adaptive interior methods. Numerical results comparing the various strategies will be presented.
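
To make the static/adaptive contrast concrete, here is a small sketch of the two kinds of update rule, assuming a primal-dual method with complementarity pair (x, s); the names and the fixed sigma are illustrative, and the talk's quality-function choice of sigma is not reproduced here.

    import numpy as np

    def mu_static(mu, barrier_kkt_error, tol=1e-6, shrink=0.2):
        # Static rule: hold mu fixed until the current barrier
        # subproblem is solved to tolerance, then shrink it.
        return shrink * mu if barrier_kkt_error <= tol else mu

    def mu_adaptive(x, s, sigma=0.1):
        # Adaptive rule: retarget mu at every iteration from the current
        # average complementarity x's/n.  In the talk sigma itself is
        # chosen adaptively, by minimizing a quality function.
        return sigma * (x @ s) / x.size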


02:20 PM Gradient Projection for Quadratic Programs
  Michael P. Friedlander, University of British Columbia, Computer Science, Vancouver, British Columbia, Canada, V6T 1Z4

Quadratic programs (QPs) play a fundamental role in optimization. They arise as subproblems in algorithms for nonlinear optimization, and in their own right represent a rich class of applications. I discuss an active-set approach for solving large-scale QPs based on gradient projection and augmented Lagrangian methods. Inexpensive gradient-projection iterations identify an active constraint set; they are followed by an approximate solution of a saddle-point system in a subspace. I present numerical results and comment on the suitability of the approach for use in sequential quadratic programming algorithms for nonlinearly constrained optimization.
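
A minimal sketch of the gradient-projection phase for the bound-constrained case (plain NumPy; the subspace saddle-point solve that follows these iterations in the talk is omitted):

    import numpy as np

    def gradient_projection_qp(H, c, lo, hi, x0, max_iter=500, tol=1e-8):
        # min 0.5 x'Hx + c'x  subject to  lo <= x <= hi.
        x = np.clip(x0, lo, hi)
        for _ in range(max_iter):
            g = H @ x + c                       # gradient of the quadratic
            # projected gradient: zero out components pinned at a bound
            pg = np.where((x <= lo) & (g > 0), 0.0,
                 np.where((x >= hi) & (g < 0), 0.0, g))
            if np.linalg.norm(pg) < tol:
                break
            denom = pg @ (H @ pg)               # curvature along -pg
            alpha = (pg @ pg) / denom if denom > 0 else 1.0
            x = np.clip(x - alpha * pg, lo, hi) # step, then project
        active = (x <= lo) | (x >= hi)          # identified active set
        return x, active

Each iteration costs one Hessian-vector product, which is what makes the identification phase inexpensive; the returned active set is the one on which a subspace solve would then be performed.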


02:45 PM On Polynomial Complexity of Mehrotra-Type Predictor-Corrector Algorithms
  Maziar Salahi, McMaster University, Department of Mathematics and Statistics, Hamilton Hall 218, Hamilton, ON, Canada, L8S 4K1
  Tamas Terlaky, McMaster University, Department of Computing and Software, Hamilton, ON, Canada, L8S 4K1
  Jiming Peng, McMaster University, Department of Computing and Software, Hamilton, ON, Canada, L8S 4K1

In this paper we discuss the polynomiality of Mehrotra-type predictor-corrector algorithms. We consider a variant of the original prototype of the algorithm that has been widely used in several IPM-based optimization packages, and for which no complexity result is known to date. By an example we show that in this variant the usual Mehrotra-type adaptive choice of the parameter $\mu$ might force the algorithm to take many small steps to keep the iterates in a certain neighborhood of the central path, which is essential for proving the polynomiality of the algorithm. This example motivated us to incorporate a safeguard in the algorithm that keeps the iterates in the prescribed neighborhood while allowing us to guarantee a minimal step size. The safeguard strategy is also used when the affine scaling step performs so poorly that the algorithm is forced to take pure centering steps. To ensure the asymptotic convergence of the algorithm, we slightly modify Mehrotra's heuristic. The new algorithms have the same iteration complexity as the previous one, but also enjoy a superlinear convergence rate.
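
As an illustration of the two ingredients discussed above, the sketch below combines Mehrotra's adaptive centering heuristic with a simple lower-bound safeguard on sigma (the paper's actual safeguard, tied to the central-path neighborhood, is more refined; the names and sigma_min are illustrative):

    import numpy as np

    def mehrotra_sigma_safeguarded(x, s, dx_aff, ds_aff, alpha_aff,
                                   sigma_min=0.1):
        # Mehrotra's heuristic: sigma = (mu_aff / mu)^3, where mu_aff is
        # the complementarity reached by the affine-scaling step.
        n = x.size
        mu = (x @ s) / n
        mu_aff = ((x + alpha_aff * dx_aff) @ (s + alpha_aff * ds_aff)) / n
        sigma = (mu_aff / mu) ** 3
        # Safeguard: bound sigma away from zero so that the corrector
        # cannot be forced into arbitrarily small steps.
        return max(sigma, sigma_min)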