A modified subgradient algorithm for Lagrangean relaxation
Dipartimento di Economia e Produzione, Politecnico di Milano, P.za Leonardo da Vinci 32, I-20133 Milano, Italy. Received 1 May 1998; received in revised form 1 July 1999.
Abstract

Despite its nonmonotonic properties, weak convergence performance and possible erratic behavior, the standard subgradient optimization method is still one of the most widely adopted algorithms for solving the Lagrangean dual problem in practical applications. Several attempts have been made recently to improve the algorithm's performance. In this paper we present a modified algorithm which employs a variable target value for the step length determination and considers a direction given by a conic combination of possibly all previously generated subgradients. Computational experience with the proposed algorithm on Traveling Salesman and Assignment problems of different sizes is reported.

Scope and purpose

In this paper, the problem of maximizing a concave and continuous function which is not differentiable everywhere is studied. Such a problem is quite common in the field of mathematical programming: for instance, when considering the Lagrangean relaxation of a linear programming problem, in order to solve the Lagrangean dual problem one must optimize a dual objective function which is not differentiable everywhere. One of the most widely adopted methods for solving the dual problem is subgradient optimization. Although many other procedures with stronger convergence properties have been developed (among others, bundle methods and the Shor space dilation method), subgradient optimization still enjoys wide acceptance among researchers and remains one of the most effective and useful techniques for solving dual problems, especially large problems with complex structure. In an attempt to improve the performance of the subgradient algorithm, researchers have tried to develop modifications and extensions of the basic scheme, aimed at reducing its main weaknesses (nonmonotonicity, weak convergence properties and the lack of a definite termination criterion).
* Fax: +39-2-2399-2720. E-mail address: firstname.lastname@example.org (F. Fumero). 0305-0548/00/$ - see front matter © 2000 Elsevier Science Ltd. All rights reserved. PII: S0305-0548(99)00085-4

F. Fumero / Computers & Operations Research 28 (2001) 33-52

In this paper, in order to further improve the computational efficiency of the procedure, we propose a modified subgradient algorithm which unifies some of the most promising results in the field of subgradient optimization. The proposed solution scheme is applied to classical problems of the Operations Research area (Assignment and Traveling Salesman): the computational results obtained show a significant improvement over similar algorithms previously proposed. © 2000 Elsevier Science Ltd. All rights reserved.
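To fix ideas, the following sketch illustrates the kind of scheme the paper builds on: subgradient ascent for a concave, piecewise-linear function with a Polyak-type step toward a target value and a deflected direction formed as a conic combination of the current subgradient and the previous direction. This is an illustrative example only: the objective, the fixed target, and the fixed deflection weight `gamma` are assumptions for the demo, and the paper's actual target-update and deflection rules differ.

```python
# Illustrative deflected-subgradient ascent (NOT the paper's exact algorithm).
# Example objective: L(u) = -|u[0] - 1| - 2*|u[1] + 3|, concave and piecewise
# linear, maximized at u* = (1, -3) with optimal value L(u*) = 0.

def L(u):
    return -abs(u[0] - 1.0) - 2.0 * abs(u[1] + 3.0)

def subgrad(u):
    """A valid subgradient of L at u (sign convention for ascent)."""
    s = lambda t: 1.0 if t > 0 else (-1.0 if t < 0 else 0.0)
    return [-s(u[0] - 1.0), -2.0 * s(u[1] + 3.0)]

def deflected_subgradient(u0, target=0.0, gamma=0.5, lam=1.0, iters=500):
    u, d = list(u0), [0.0, 0.0]
    best = L(u)
    for _ in range(iters):
        g = subgrad(u)
        # conic combination of the current subgradient and the previous direction
        d = [gi + gamma * di for gi, di in zip(g, d)]
        norm2 = sum(di * di for di in d)
        if norm2 == 0.0:
            break
        # Polyak-type step toward the target value (here the known optimum, 0)
        step = lam * (target - L(u)) / norm2
        u = [ui + step * di for ui, di in zip(u, d)]
        best = max(best, L(u))
    return u, best

u, best = deflected_subgradient([10.0, 10.0])
print(best)  # approaches the optimal value 0 from below
```

In practice the optimal value is unknown, which is precisely why the paper replaces the fixed `target` with a variable target value updated as the iterations proceed.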
Keywords: Subgradient optimization; Nondifferentiable optimization; Lagrangean relaxation
1. Introduction

The problem of maximizing a concave and continuous function L(u): R^n → R which is not necessarily differentiable has been long studied: although the set of nondifferentiable points may have null measure, traditional techniques employed for smooth optimization cannot be adopted in this situation, for two main reasons. First, the optimum is usually a point with no continuous derivative, hence using local approximations of the objective function may not be significant; second, the proposed optimality criteria and stopping conditions (such as ‖∇f(x)‖ < ε) often do not make sense for nondifferentiable functions: consider for instance the minimization of the scalar function f(x) = |x|, where ‖∇f(x)‖ = 1 for all x ≠ 0. The problem of optimizing a nondifferentiable function is quite common in the field of mathematical programming: for instance, in the context of Lagrangean relaxation of discrete...
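The f(x) = |x| example can be made concrete with a small numerical sketch (an illustrative addition, not from the paper): plain subgradient descent with diminishing steps drives the iterate toward the minimizer x* = 0, yet the subgradient norm stays at 1 at every x ≠ 0, so a stopping test of the form ‖g‖ < ε would never fire.

```python
# Subgradient descent on f(x) = |x|, illustrating why the gradient-norm
# stopping test fails for nondifferentiable functions.

def subgrad_abs(x):
    """A subgradient of f(x) = |x|: sign(x) for x != 0, and 0 in [-1, 1] at x = 0."""
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

def subgradient_descent(x0, iters=200):
    x = x0
    for k in range(1, iters + 1):
        g = subgrad_abs(x)
        step = 1.0 / k          # diminishing, non-summable step sizes
        x = x - step * g
    return x

x = subgradient_descent(x0=5.0)
print(abs(x))                   # close to the minimizer x* = 0
print(abs(subgrad_abs(x)))      # yet ||g|| = 1 at any x != 0: ||g|| < eps never triggers
```

Note also that the method is nonmonotonic: once the iterate crosses zero it oscillates around the minimizer, with amplitude shrinking only as fast as the step size.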