
Session TB11 - Nonlinear Optimization Approaches for Discrete Optimization

Day Tuesday, May 5, 2009
Room Demers Beaulne
Chair Miguel F. Anjos

Presentations

13h30 - 13h55
Recent Progress in the Application of Semidefinite Programming to Discrete Optimization
  Miguel F. Anjos, University of Waterloo, Management Sciences, 200 University Avenue West, Waterloo, Ontario, Canada, N2L 3G1

The max-cut algorithm of Goemans and Williamson turns 15 years old in 2009, and its impact on the area of semidefinite programming has been remarkable. In this talk, I will survey some of the main modelling and algorithmic developments since 1994 in the application of semidefinite programming to discrete optimization problems. I will also highlight promising directions for research in this area.
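
For context (this formulation is standard and not part of the original abstract), the semidefinite relaxation at the heart of the Goemans-Williamson algorithm can be stated as follows, where L is the weighted graph Laplacian and the exact max-cut value equals the maximum of (1/4) x'Lx over x in {-1,+1}^n:

% Lift xx' to a positive semidefinite matrix X with unit diagonal:
\begin{align*}
  \max_{X \in \mathbb{S}^n} \quad & \tfrac{1}{4}\,\langle L, X \rangle \\
  \text{s.t.} \quad & X_{ii} = 1, \qquad i = 1, \dots, n, \\
                    & X \succeq 0.
\end{align*}

Rounding an optimal X with a random hyperplane yields a cut whose expected weight is at least roughly 0.878 times the maximum.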


13h55 - 14h20
HIPCUT - A Hybrid Interior-Point Cutting-Plane Method for Discrete Optimization
  Alexander Engau, University of Waterloo, Management Sciences, 200 University Avenue West, Waterloo, Ontario, Canada, N2L 3G1

We describe a new technique for using an interior-point method to solve linear and semidefinite programming relaxations of discrete optimization problems. The approach combines an infeasible interior-point algorithm with a cutting-plane scheme that does not require solving successive relaxations to optimality, but instead adds and removes cuts at intermediate iterates based on feasibility indicators of cut violation and warm starts of the new slack variables. Computational results for several max-cut instances and single-row facility-layout problems are encouraging, and we conclude that our method is robust and can find optimal solutions faster than traditional cutting-plane methods.
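
To make the contrast with traditional cutting-plane methods concrete, below is a minimal Python sketch (with made-up data) of the classical baseline the abstract refers to: each relaxation is solved to optimality before violated cuts are separated and added. HIPCUT instead interleaves cut management with intermediate interior-point iterates, which off-the-shelf solvers such as the one used here do not expose; this illustrates only the baseline loop, not the authors' method.

import itertools

import numpy as np
from scipy.optimize import linprog

# Max-cut LP relaxation in edge variables x_e in [0,1], tightened by
# triangle inequalities separated on the fly. Graph and weights are
# synthetic, for illustration only.
n = 5
rng = np.random.default_rng(0)
W = np.triu(rng.random((n, n)), k=1)            # random edge weights
pairs = list(itertools.combinations(range(n), 2))
idx = {p: k for k, p in enumerate(pairs)}       # edge -> variable index
c = -np.array([W[i, j] for i, j in pairs])      # linprog minimizes

cuts_A, cuts_b = [], []
for it in range(50):
    # Classical scheme: solve the current relaxation to optimality.
    res = linprog(c,
                  A_ub=np.array(cuts_A) if cuts_A else None,
                  b_ub=np.array(cuts_b) if cuts_b else None,
                  bounds=(0, 1), method="highs")
    x = res.x
    # Separation: the four triangle inequalities per vertex triple,
    # e.g. x_ij + x_jk + x_ik <= 2 and x_ij - x_jk - x_ik <= 0.
    new_cuts = 0
    for i, j, k in itertools.combinations(range(n), 3):
        e = [idx[(i, j)], idx[(j, k)], idx[(i, k)]]
        for signs, rhs in [((1, 1, 1), 2), ((1, -1, -1), 0),
                           ((-1, 1, -1), 0), ((-1, -1, 1), 0)]:
            if sum(s * x[v] for s, v in zip(signs, e)) > rhs + 1e-6:
                row = np.zeros(len(pairs))
                for s, v in zip(signs, e):
                    row[v] = s
                cuts_A.append(row)
                cuts_b.append(rhs)
                new_cuts += 1
    if new_cuts == 0:                            # no violated cuts remain
        break

print(f"LP bound: {-res.fun:.4f} with {len(cuts_b)} triangle cuts")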


14h20 - 14h45
A First-Order Smoothing Technique for a Class of Large-Scale LPs
  Jieqiu Chen, University of Iowa, Management Sciences
Samuel Burer, University of Iowa, Management Sciences

We investigate a smoothing technique for solving large-scale instances of a class of LPs that arise in a machine learning context. When applied to our problem, Nesterov’s smoothing technique requires a parameter that bounds the optimal value from above. We design a scheme that dynamically updates this parameter to speed up convergence.
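
As a rough illustration of the ingredients involved (not the authors' scheme, whose parameter bounds the optimal value and whose update rule is the contribution of the talk), the Python sketch below smooths a piecewise-linear objective f(x) = max_i (a_i'x + b_i) with the log-sum-exp surrogate f_mu(x) = mu log sum_i exp((a_i'x + b_i)/mu), which satisfies f <= f_mu <= f + mu log m and whose gradient is Lipschitz with constant ||A||^2 / mu. Plain gradient descent is used for brevity where Nesterov's optimal method would add momentum, the halving schedule for mu is a generic placeholder, and all data is synthetic.

import numpy as np

rng = np.random.default_rng(1)
m, d = 200, 20
A = rng.standard_normal((m, d))                 # synthetic data
b = rng.standard_normal(m)

def f(x):
    # True nonsmooth objective: pointwise max of affine functions.
    return np.max(A @ x + b)

def grad_smooth(x, mu):
    # Gradient of the log-sum-exp surrogate f_mu at x.
    z = (A @ x + b) / mu
    z -= z.max()                                # stabilize exponentials
    p = np.exp(z)
    p /= p.sum()
    return A.T @ p

x = np.zeros(d)
mu = 1.0
opnorm2 = np.linalg.norm(A, 2) ** 2             # spectral norm squared
for it in range(2000):
    x -= (mu / opnorm2) * grad_smooth(x, mu)    # step = 1 / L_mu
    if it % 500 == 499:
        mu *= 0.5                               # generic tightening rule
print(f"f(x) = {f(x):.4f}; surrogate gap <= {mu * np.log(m):.4f}")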

