
# Some new properties of the Lagrange function and its applications

## Abstract

Using a dual problem of Wolfe type, the Lagrange function of an inequality constrained nonconvex programming problem is proved to be constant not only on its optimal solution set but also on a wider set. In addition, it is constant on the set of Lagrange multipliers corresponding to solutions of the dual problem.

MSC:90C46, 49N15, 49K30.

## 1 Introduction

In mathematical programming, Lagrange functions play a key role in finding maxima or minima of problems subject to constraints. In several papers, to establish characterizations of solution sets of inequality constrained programming problems, the Lagrange functions associated with the problems were proved to be constant on their optimal solution sets [1–5]. The aim of this paper is to show some further properties of Lagrange functions. Concretely, we will show that such Lagrange functions can be constant not only on optimal solution sets but also on wider sets.

Let us consider the following nonconvex problem:

$(P)\qquad \min f(x)\quad\text{subject to}\quad f_{t}(x)\le 0,\ t\in T,\ x\in C,$

where $f, f_{t}:X\to\mathbb{R}$, $t\in T$, are locally Lipschitz functions on a Banach space X, T is an arbitrary (possibly infinite) index set, and C is a closed convex subset of X. Our new results on the Lagrange function of (P) will be obtained via its dual problem (D) of Wolfe type:

$(D)\qquad \max L(y,\lambda)\quad\text{subject to}\quad 0\in\partial_{c}f(y)+\sum_{t\in T}\lambda_{t}\,\partial_{c}f_{t}(y)+N(C,y),\ (y,\lambda)\in C\times\mathbb{R}_{+}^{(T)},$

where the Lagrange function L is formulated by

$L(y,\lambda)=\begin{cases} f(y)+\sum_{t\in T}\lambda_{t}f_{t}(y), & (y,\lambda)\in C\times\mathbb{R}_{+}^{(T)},\\ +\infty, & \text{otherwise}.\end{cases}$

We denote by G the feasible set of (D). Let $( y ∗ , λ ∗ )$ be a solution of (D). We will prove that the function $L(⋅, λ ∗ )$ is constant on a subset of X which is wider than the solution set of (P) and the function $L( y ∗ ,⋅)$ is constant on the set of Lagrange multipliers corresponding to solutions of (P).
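As a toy illustration (our own sketch, not part of the paper's development), the Lagrange function above can be written down for a finite index set T; all function names and data below are assumptions chosen for the example:

```python
# Sketch of L(y, lam) = f(y) + sum_t lam_t * f_t(y) on C x R_+^(T),
# for a finite index set T.  Everything here is illustrative.

def lagrange(f, constraints, y, lam):
    """Evaluate the Lagrange function at (y, lam); lam must lie in R_+^(T)."""
    assert all(v >= 0 for v in lam.values()), "multipliers must be nonnegative"
    return f(y) + sum(lam[t] * g_t(y) for t, g_t in constraints.items())

# Toy data: f(x) = x^2 with a single constraint f_1(x) = 1 - x <= 0.
f = lambda x: x * x
constraints = {1: lambda x: 1.0 - x}

value = lagrange(f, constraints, 1.0, {1: 2.0})  # f(1) + 2*(1-1) = 1.0
```

When T is infinite, only the finitely many nonzero $\lambda_{t}$ contribute, so a dictionary of nonzero entries suffices (see Section 2).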

Our main results are divided into two parts. In the first one, we present some new properties of a Lagrange function. The second one is devoted to finding saddle points. Some remarks and further developments will be given.

## 2 Preliminaries

Let $\mathbb{R}^{(T)}$ be the linear space of generalized finite sequences $\lambda=(\lambda_{t})_{t\in T}$ such that $\lambda_{t}\in\mathbb{R}$ for all $t\in T$, but only finitely many $\lambda_{t}\neq 0$. For each $\lambda\in\mathbb{R}^{(T)}$, the corresponding supporting set $T(\lambda):=\{t\in T\mid \lambda_{t}\neq 0\}$ is a finite subset of T. We denote by $\mathbb{R}_{+}^{(T)}:=\{\lambda=(\lambda_{t})_{t\in T}\in\mathbb{R}^{(T)}\mid \lambda_{t}\ge 0,\ t\in T\}$ the nonnegative cone of $\mathbb{R}^{(T)}$. For $\lambda\in\mathbb{R}^{(T)}$ and $\{z_{t}\}_{t\in T}\subset Z$, Z being a real linear space, we understand that

$\sum_{t\in T}\lambda_{t}z_{t}:=\sum_{t\in T(\lambda)}\lambda_{t}z_{t}.$
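A minimal sketch of this construction (ours, purely illustrative): a generalized finite sequence can be stored as a dictionary of its finitely many nonzero entries, so the sum above is always finite even when T is infinite.

```python
# R^(T): lam = (lam_t) with only finitely many nonzero entries, stored as a
# dict; the ambient index set T may be infinite.  Names are illustrative.

def support(lam):
    """The supporting set T(lam) = {t in T : lam_t != 0} (always finite)."""
    return {t for t, v in lam.items() if v != 0}

def weighted_sum(lam, z):
    """sum_{t in T} lam_t * z_t, understood as the finite sum over T(lam)."""
    return sum(lam[t] * z[t] for t in support(lam))

lam = {"a": 2.0, "b": 0.0, "c": -1.0}   # nonzero only at "a" and "c"
z = {"a": 1.0, "b": 5.0, "c": 3.0}
total = weighted_sum(lam, z)             # 2*1 + (-1)*3 = -1
```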

The following concepts can be found in Clarke’s book [6, 7]. Let $C⊂X$ be a convex set, and let $z∈C$. The normal cone to C at z, denoted by $N(C,z)$, is defined by

$N(C,z):=\{v\in X^{*}\mid v(x-z)\le 0,\ \forall x\in C\},$

where $X^{*}$ is the dual space of X. Let $g:X\to\mathbb{R}$ be a locally Lipschitz function. The directional derivative and the Clarke generalized directional derivative of g at $z\in X$ in direction $d\in X$ are defined respectively by

$g'(z;d):=\lim_{s\downarrow 0}\frac{g(z+sd)-g(z)}{s},\qquad g^{c}(z;d):=\limsup_{x\to z,\ s\downarrow 0}\frac{g(x+sd)-g(x)}{s}.$

The Clarke subdifferential of g at $z∈X$, denoted by $∂ c g(z)$, is defined by

$\partial_{c}g(z):=\{v\in X^{*}\mid v(d)\le g^{c}(z;d),\ \forall d\in X\}.$

A locally Lipschitz function g is said to be quasidifferentiable (or regular in the sense of Clarke) at $z∈X$ if the directional derivative $g ′ (z;d)$ exists and

$g^{c}(z;d)=g'(z;d),\quad \forall d\in X.$
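A standard example may make the definition concrete (the example is ours, not taken from this paper): the convex function $g(x)=|x|$ on $X=\mathbb{R}$ is regular at $z=0$, while $h(x)=-|x|$ is not:

```latex
g'(0;d)=\lim_{s\downarrow 0}\frac{|sd|}{s}=|d|
       =\limsup_{x\to 0,\ s\downarrow 0}\frac{|x+sd|-|x|}{s}=g^{c}(0;d),
\qquad
h'(0;d)=-|d|\ \neq\ |d|=h^{c}(0;d)\quad(d\neq 0).
```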

Definition 2.1 [8]

Let C be a subset of X. A function $g:X→R$ is said to be semiconvex at $z∈C$ if g is locally Lipschitz, regular at z, and

$d\in X,\ z+d\in C,\ g'(z;d)\ge 0\ \Rightarrow\ g(z+d)\ge g(z).$

The function g is said to be semiconvex on C if g is semiconvex at every $z∈C$.
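The implication can be sanity-checked numerically on a simple convex function (a sketch under our own assumptions; convex locally Lipschitz functions are semiconvex): for $g(x)=x^{2}$ on $C=\mathbb{R}$, the directional derivative is $g'(z;d)=2zd$, and whenever it is nonnegative we must have $g(z+d)\ge g(z)$.

```python
# Check the semiconvexity implication g'(z; d) >= 0  =>  g(z+d) >= g(z)
# for the convex function g(x) = x**2 (illustrative example, not from paper).

def g(x):
    return x * x

def g_dir(z, d):
    return 2.0 * z * d  # exact directional derivative of x**2

ok = all(
    g(z + d) >= g(z)
    for z in [-2.0, -0.5, 0.0, 1.0, 3.0]
    for d in [-1.5, -0.1, 0.0, 0.2, 2.0]
    if g_dir(z, d) >= 0.0
)
# g(z+d) - g(z) = 2*z*d + d**2 >= 0 whenever 2*z*d >= 0, so ok holds.
```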

## 3 Main results

Let us denote by $Sol(P)$ the solution set of (P) and by A the feasible set of (P). Suppose that $Sol(P)≠∅$. For $z∈Sol(P)$, we assume that, under some constraint qualification condition (see [9]), there exists $λ∈ R + ( T )$ such that

$0\in\partial_{c}f(z)+\sum_{t\in T}\lambda_{t}\,\partial_{c}f_{t}(z)+N(C,z),\qquad \lambda_{t}f_{t}(z)=0,\ \forall t\in T.$
(3.1)

Note that in [9], T is a compact topological space. We denote by $V(P)$ and $V(D)$ the optimal values of (P) and (D), respectively. The following lemma is needed for our further research.
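Condition (3.1) can be seen on a smooth toy instance (our own, hypothetical, not from the paper): minimize $f(x)=x^{2}$ subject to $f_{1}(x)=1-x\le 0$ with $C=\mathbb{R}$, so $N(C,z)=\{0\}$ and the Clarke subdifferentials reduce to singleton gradients. At the solution $z=1$ with multiplier $\lambda_{1}=2$:

```python
# Verify (3.1) for: min x**2  s.t.  1 - x <= 0,  C = R (toy smooth instance).
z, lam1 = 1.0, 2.0

df = 2.0 * z        # f'(z),   f(x) = x**2
df1 = -1.0          # f_1'(z), f_1(x) = 1 - x
f1 = 1.0 - z        # f_1(z)

stationarity = df + lam1 * df1   # 0 in  partial f + lam * partial f_1  (N = {0})
complementarity = lam1 * f1      # lam_1 * f_1(z) = 0
```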

Lemma 3.1 For the problem (P), suppose that $f, f t ,t∈T$, are regular on C and the function $L(⋅,λ)$ is semiconvex on C for every $λ∈ R + ( T )$. Let z be a solution of (P) and $λ ¯$ be such that (3.1) holds. Then $(z, λ ¯ )$ is a solution of (D) and $V(P)=V(D)$.

Proof Suppose that z is a solution of (P) and $λ ¯$ is such that (3.1) holds. We get

$0\in\partial_{c}f(z)+\sum_{t\in T}\bar{\lambda}_{t}\,\partial_{c}f_{t}(z)+N(C,z),\qquad \bar{\lambda}_{t}f_{t}(z)=0,\ \forall t\in T.$
(3.2)

Hence, $(z, λ ¯ )$ is a feasible solution of (D). Since $λ ¯ t f t (z)=0$, for all $t∈T$,

$L(z,\bar{\lambda})=f(z).$

On the other hand, since $λ ¯ t f t (z)=0$, for all $t∈T$,

$L(z,\bar{\lambda})-L(x,\lambda)=f(z)-L(x,\lambda).$

By the weak duality between (P) and (D), $f(z)-L(x,\lambda)\ge 0$ for every feasible point $(x,\lambda)$ of (D). Consequently, $L(z,\bar{\lambda})\ge L(x,\lambda)$ for every feasible point $(x,\lambda)$ of (D). The desired results follow. □

### 3.1 Some new results of the Lagrange function

Theorem 3.2 Suppose that $f, f t ,t∈T$, are regular on C and $( y ∗ , λ ∗ )$ is a solution of (D). Suppose further that the function $L(⋅, λ ∗ )$ is semiconvex on C. The following holds:

$L(y,\lambda^{*})=L(y^{*},\lambda^{*}),\quad \forall y\in G_{1}:=\{y\in C\mid (y,\lambda^{*})\in G\}.$
(3.3)

Proof Let $( y ∗ , λ ∗ )$ be a solution of (D). We obtain $( y ∗ , λ ∗ )∈C× R + ( T )$ and

$0\in\partial_{c}f(y^{*})+\sum_{t\in T}\lambda^{*}_{t}\,\partial_{c}f_{t}(y^{*})+N(C,y^{*}).$

Thus, there exist $u∈ ∂ c f( y ∗ )$, $u t ∈ ∂ c f t ( y ∗ )$, $t∈T$, and $w∈N(C, y ∗ )$ such that

$u(y-y^{*})+\sum_{t\in T}\lambda^{*}_{t}u_{t}(y-y^{*})=-w(y-y^{*})\ge 0,\quad \forall y\in C.$

Since $f, f t ,t∈T$, are regular on C and $L(⋅, λ ∗ )$ is semiconvex on C, it follows that $L( y ∗ , λ ∗ )≤L(y, λ ∗ )$ for all $y∈C$. Hence,

$L(y^{*},\lambda^{*})\le L(y,\lambda^{*}),\quad \forall y\in G_{1}.$
(3.4)

On the other hand, we have $\inf_{y\in G_{1}}L(y,\lambda^{*})\le \sup_{y\in G_{1}}L(y,\lambda^{*})\le \sup_{(x,\lambda)\in G}L(x,\lambda)$. Combining this with (3.4), we get

$L(y^{*},\lambda^{*})\le \inf_{y\in G_{1}}L(y,\lambda^{*})\le \sup_{y\in G_{1}}L(y,\lambda^{*})\le \sup_{(x,\lambda)\in G}L(x,\lambda)=L(y^{*},\lambda^{*}).$

We obtain the desired result. □
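The point of Theorem 3.2, that $L(\cdot,\lambda^{*})$ is constant on $G_{1}$, which may be strictly larger than $Sol(P)$, can be seen on a linear toy problem (our own construction, not from the paper): minimize $f(x)=x$ subject to $f_{1}(x)=-x\le 0$ on $C=\mathbb{R}$. Here $Sol(P)=\{0\}$ with multiplier $\lambda^{*}=1$, but the stationarity condition $f'(y)+\lambda^{*}f_{1}'(y)=1-1=0$ holds at every $y$, so $G_{1}=\mathbb{R}$.

```python
# Toy check of Theorem 3.2: L(., lam*) is constant on G_1 even though
# Sol(P) = {0}.  Problem: min x  s.t.  -x <= 0  on C = R; lam* = 1.

def L(y, lam):
    return y + lam * (-y)   # f(y) + lam * f_1(y), with f(y) = y, f_1(y) = -y

lam_star = 1.0
# (y, lam*) is feasible for (D) at every y, since f'(y) + lam* * f_1'(y) = 0,
# so G_1 = R here; sample it:
values = {L(y, lam_star) for y in [-10.0, -1.0, 0.0, 2.5, 100.0]}
# values == {0.0}: L(., lam*) is constant on G_1, strictly wider than Sol(P).
```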

Corollary 3.3 Suppose that $f, f_{t}, t\in T$, are regular on C, $y^{*}$ is a solution of (P), and there exists $\lambda^{*}$ such that (3.1) holds for $(y^{*},\lambda^{*})$. If the function $L(\cdot,\lambda)$ is semiconvex on C for every $\lambda\in\mathbb{R}_{+}^{(T)}$, then

$L(y,\lambda^{*})=f(y^{*}),\quad \forall y\in G_{1}.$

In addition, $\lambda^{*}_{t}f_{t}(y)=0$ for all $t\in T$ and all $y\in Sol(P)$.

Proof Suppose that $y ∗$ is a solution of (P) and the condition (3.1) holds for $( y ∗ , λ ∗ )$. Then by Lemma 3.1, $( y ∗ , λ ∗ )$ is a solution of (D). Note that $λ t ∗ f t ( y ∗ )=0$ for all $t∈T$. By Theorem 3.2, we obtain $L(y, λ ∗ )=f( y ∗ )$ for all $y∈ G 1$. If $y∈Sol(P)$, then $f(y)=f( y ∗ )$. From the equality above, we can deduce that $λ t ∗ f t (y)=0$ for all $t∈T$. □

Remark 3.4

1. (1)

Corollary 3.3 covers Lemma 3.1 in [3]. It also shows that the Lagrange function can be constant on a subset of X which is wider than a solution set.

2. (2)

If the involved functions of (P) are convex, Corollary 3.3 covers Lemma 3.1 in [5].

3. (3)

Using the same method as above, we can establish results which cover Theorem 2.1 in [2] and Theorem 3.2 in [4].

A natural question arises: how does the function $L(y^{*},\cdot)$ behave for $y^{*}\in G_{1}$? This question is addressed below.

Theorem 3.5 Let $( y ∗ , λ ∗ )$ be a solution of (D). Suppose that $f, f t ,t∈T$, are regular on C and the function $L(⋅, λ ∗ )$ is semiconvex on C. If $f( y ∗ )≥V(P)$, then the function $L( y ∗ ,⋅)$ is constant on $G 2$, where

$G_{2}=\{\lambda\in\mathbb{R}_{+}^{(T)}\mid (y^{*},\lambda)\in G,\ \lambda_{t}f_{t}(y^{*})\ge 0,\ \forall t\in T\}.$

Proof Since $( y ∗ , λ ∗ )$ is a solution of (D) and $f( y ∗ )≥V(P)$,

$L(y^{*},\lambda^{*})\ge L(y^{*},\lambda)\ge f(y^{*})\ge V(P),\quad \forall\lambda\in G_{2}.$

On the other hand, since $( y ∗ , λ ∗ )∈G$, using an argument as in the proof of Theorem 3.2, we get $L( y ∗ , λ ∗ )≤L(y, λ ∗ )$ for all $y∈C$. This implies that $L( y ∗ , λ ∗ )≤V(P)$. The desired result follows. □

Corollary 3.6 Assume that $f, f_{t}, t\in T$, are regular on C, $y^{*}$ is a solution of (P), and there exists $\lambda^{*}$ such that (3.1) holds. If $L(\cdot,\lambda)$ is semiconvex on C for every $\lambda\in\mathbb{R}_{+}^{(T)}$, then the function $L(y^{*},\cdot)$ is constant on $G_{2}$.

Proof If $y ∗$ is a solution of (P) and there exists $λ ∗$ such that (3.1) holds, then by Lemma 3.1, $( y ∗ , λ ∗ )$ is a solution of (D). By Theorem 3.5, we have that $L( y ∗ ,⋅)$ is constant on $G 2$. □

### 3.2 Finding saddle points

In this part, by applying the results above, we determine saddle points of the function L.

Definition 3.7 For the problem (P), a point $( z ¯ , λ ¯ )∈C× R + ( T )$ is said to be a saddle point of the function L if

$L(\bar{z},\lambda)\le L(\bar{z},\bar{\lambda})\le L(x,\bar{\lambda}),\quad \forall (x,\lambda)\in C\times\mathbb{R}_{+}^{(T)}.$
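Definition 3.7 can be checked numerically on a linear toy problem (ours, illustrative, not an example from the paper): for $f(x)=x$, $f_{1}(x)=-x$, $C=\mathbb{R}$, the point $(\bar{z},\bar{\lambda})=(0,1)$ is a saddle point of L.

```python
# Check the saddle-point inequalities of Definition 3.7 on a toy problem:
# f(x) = x, f_1(x) = -x, C = R, candidate (z_bar, lam_bar) = (0, 1).

def L(y, lam):
    return y + lam * (-y)   # f(y) + lam * f_1(y)

z_bar, lam_bar = 0.0, 1.0
is_saddle = all(
    L(z_bar, lam) <= L(z_bar, lam_bar) <= L(x, lam_bar)
    for lam in [0.0, 0.5, 1.0, 3.0]       # sampled lam in R_+^(T)
    for x in [-5.0, -0.2, 0.0, 1.0, 4.0]  # sampled x in C
)
# All three values equal 0 for this instance, so the inequalities hold.
```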

We need the following lemma.

Lemma 3.8 Let $(z,\bar{\lambda})\in G$ be a saddle point of the function L. Suppose that the function $L(\cdot,\bar{\lambda})$ is semiconvex on C. Then z is a solution of (P), $\bar{\lambda}_{t}f_{t}(z)=0$ for all $t\in T$, and $(z,\bar{\lambda})$ is a solution of (D). Moreover, $V(P)=V(D)$.

Theorem 3.9 Assume that $f, f t ,t∈T$, are regular on C and $L(⋅,λ)$ is semiconvex on C for every $λ∈ R + ( T )$. Let $( y ∗ , λ ∗ )∈G$ be a saddle point of the function L. Then,

1. (i)

For every $λ ¯ ∈ G 2$, if $( y ∗ , λ ¯ )∈G$, then it is a saddle point for L, and

2. (ii)

For every $z∈Sol(P)$, if $(z, λ ∗ )∈G$, then it is a saddle point for L.

Proof Suppose that $( y ∗ , λ ∗ )∈G$ is a saddle point of the function L. We get

$L(y^{*},\lambda)\le L(y^{*},\lambda^{*})\le L(x,\lambda^{*}),\quad \forall (x,\lambda)\in C\times\mathbb{R}_{+}^{(T)}.$
(3.5)

Since $L(\cdot,\lambda^{*})$ is semiconvex on C, by Lemma 3.8, $y^{*}$ is a solution of (P), $\lambda^{*}_{t}f_{t}(y^{*})=0$ for all $t\in T$, $(y^{*},\lambda^{*})$ is a solution of (D), and $V(P)=V(D)$.

1. (i)

$( y ∗ , λ ¯ )$ is a saddle point. For $y ∗$ above, by Corollary 3.6, we obtain $L( y ∗ , λ ¯ )=L( y ∗ , λ ∗ )$ for all $λ ¯ ∈ G 2$. Note that by (3.5), $L( y ∗ ,λ)≤L( y ∗ , λ ∗ )$ for all $λ∈ R + ( T )$. Hence,

$L(y^{*},\lambda)\le L(y^{*},\bar{\lambda}),\quad \forall\lambda\in\mathbb{R}_{+}^{(T)}.$
(3.6)

Since $L( y ∗ , λ ∗ )=L( y ∗ , λ ¯ )$ and $( y ∗ , λ ¯ )∈G$, it is a solution of (D). So,

$0\in\partial_{c}f(y^{*})+\sum_{t\in T}\bar{\lambda}_{t}\,\partial_{c}f_{t}(y^{*})+N(C,y^{*}).$

From this, it is easy to deduce that $L( y ∗ , λ ¯ )≤L(x, λ ¯ )$ for all $x∈C$. We obtain

$L(y^{*},\lambda)\le L(y^{*},\bar{\lambda})\le L(x,\bar{\lambda}),\quad \forall (x,\lambda)\in C\times\mathbb{R}_{+}^{(T)}.$

2. (ii)

$(z,\lambda^{*})$ is a saddle point. For the $\lambda^{*}$ above, by Corollary 3.3, we get $L(z,\lambda^{*})=L(y^{*},\lambda^{*})$ for all $z\in Sol(P)$. Then, by (3.5), we obtain $L(z,\lambda^{*})\le L(x,\lambda^{*})$ for all $x\in C$ and all $z\in Sol(P)$. It remains to prove that $L(z,\lambda)\le L(z,\lambda^{*})$ for all $\lambda\in\mathbb{R}_{+}^{(T)}$. Indeed, since $z\in Sol(P)$, z is feasible, so $f_{t}(z)\le 0$ for all $t\in T$ and hence $L(z,\lambda)\le f(z)$ for every $\lambda\in\mathbb{R}_{+}^{(T)}$. By Lemma 3.8, $V(P)=V(D)$; hence $f(z)=L(y^{*},\lambda^{*})=L(z,\lambda^{*})$. Thus, $L(z,\lambda)\le L(z,\lambda^{*})$ for all $\lambda\in\mathbb{R}_{+}^{(T)}$. □

The following corollary can be deduced directly from the theorem above.

Corollary 3.10 Assume that $f, f_{t}, t\in T$, are regular on C. If there exists a feasible point $(y^{*},\lambda^{*})$ of (D) that is a saddle point of the function L, and $L(\cdot,\lambda)$ is semiconvex on C for every $\lambda\in\mathbb{R}_{+}^{(T)}$, then every point $(z,\bar{\lambda})\in Sol(P)\times G_{2}$ is also a saddle point of the function L.

## 4 Further developments

1. (1)

Using Theorem 3.2, we can re-establish the characterizations of the solution set of the problem given in [3] via its dual problem of Wolfe type.

2. (2)

The characterizations of solution sets of the problem considered in [5] can be rebuilt via its dual problem.

## References

1. Dinh N, Jeyakumar V, Lee GM: Lagrange multiplier characterizations of solution sets of constrained pseudolinear optimization problems. Optimization 2006, 55: 241–250. 10.1080/02331930600662849

2. Jeyakumar V, Lee GM, Dinh N: Lagrange multiplier conditions characterizing the optimal solution sets of cone-constrained convex programs. J. Optim. Theory Appl. 2004, 123: 83–103.

3. Kim DS, Son TQ: Characterizations of solution sets of a class of nonconvex semi-infinite programming problems. J. Nonlinear Convex Anal. 2011, 12: 429–440.

4. Lalitha CS, Mehta M: Characterizations of solution sets of mathematical programs in terms of Lagrange multipliers. Optimization 2009, 58: 995–1007. 10.1080/02331930701763272

5. Son TQ, Dinh N: Characterizations of optimal solution sets of convex infinite programs. Top 2008, 16: 147–163. 10.1007/s11750-008-0039-2

6. Clarke FH: Optimization and Nonsmooth Analysis. Wiley, New York; 1983.

7. Clarke FH, Ledyaev YS, Stern RJ, Wolenski PR: Nonsmooth Analysis and Control Theory. Springer, Berlin; 1998.

8. Mifflin R: Semismooth and semiconvex functions in constrained optimization. SIAM J. Control Optim. 1977, 15: 959–972. 10.1137/0315061

9. Son TQ, Strodiot JJ, Nguyen VH: ε-optimality and ε-Lagrangian duality for a nonconvex programming problem with an infinite number of constraints. J. Optim. Theory Appl. 2009, 141: 389–409. 10.1007/s10957-008-9475-2

## Acknowledgements

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (no.2010-0012780) and the National Foundation for Science and Technology Development (NAFOSTED), Vietnam. The authors are thankful to the anonymous referees whose suggestions have enhanced the presentation of the paper.

## Author information


### Corresponding author

Correspondence to Do Sang Kim.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All authors read and approved the final manuscript.


Kim, D.S., Son, T.Q. Some new properties of the Lagrange function and its applications. Fixed Point Theory Appl 2012, 192 (2012). https://doi.org/10.1186/1687-1812-2012-192