
Optimality and Duality Theorems in Nonsmooth Multiobjective Optimization

An Erratum to this article was published on 27 February 2012

Abstract

In this paper, we consider a class of nonsmooth multiobjective programming problems. Necessary and sufficient optimality conditions are obtained under higher-order strong convexity for Lipschitz functions. We formulate a Mond-Weir type dual problem and establish weak and strong duality theorems for a strict minimizer of order m.

1 Introduction

Nonlinear analysis is an important area in the mathematical sciences and has become a fundamental research tool in contemporary mathematical analysis. Many nonlinear analysis problems arise from optimization theory, game theory, differential equations, mathematical physics, convex analysis and nonlinear functional analysis. Park [1-3] has devoted much work to the study of nonlinear analysis, and his results have had a strong influence on research in equilibrium, complementarity and optimization problems. Nonsmooth phenomena occur naturally and frequently in mathematics and optimization. Rockafellar [4] has pointed out that in many practical applications of applied mathematics the functions involved are not necessarily differentiable. Thus it is important to deal with nondifferentiable mathematical programming problems.

The field of multiobjective programming has grown remarkably in different directions in the setting of optimality conditions and duality theory since the 1980s. In 1983, Vial [5] studied a class of functions depending on the sign of a constant ρ, provided characteristic properties of this class, and related it to strongly and weakly convex sets.

Auslender [6] obtained necessary and sufficient conditions for a strict local minimizer of first and second order, supposing that the objective function f is locally Lipschitz and that the feasible set S is closed. Studniarski [7] extended Auslender's results to any extended real-valued function f and any subset S, encompassing strict minimizers of order greater than 2. Necessary and sufficient conditions for a strict minimizer of order m in nondifferentiable scalar programs were studied by Ward [8]. Based on this result, Jimenez [9] extended the notion of strict minimum of order m from real optimization problems to vector optimization. Jimenez and Novo [10, 11] presented first and second order sufficient conditions for strict local Pareto minima and strict local minima of first and second order for multiobjective and vector optimization problems. Subsequently, Bhatia [12] considered the notion of strict minimizer of order m for a multiobjective optimization problem and established optimality conditions for this concept under higher-order strong convexity for Lipschitz functions.

In 2008, Kim and Bae [13] formulated nondifferentiable multiobjective programs involving the support functions of compact convex sets. Also, Bae et al. [14] established duality theorems for nondifferentiable multiobjective programming problems under generalized convexity assumptions.

Very recently, Kim and Lee [15] introduced nonsmooth multiobjective programming problems involving locally Lipschitz functions and support functions. They obtained Karush-Kuhn-Tucker optimality conditions with support functions and established duality theorems for (weak) Pareto-optimal solutions.

In this paper, we consider a nonsmooth multiobjective programming problem involving the support function of a compact convex set. In Section 2, we introduce the concept of a strict minimizer of order m and of higher-order strong convexity for Lipschitz functions. In Section 3, necessary and sufficient optimality theorems are established for a strict minimizer of order m under generalized strong convexity assumptions. In Section 4, we formulate a Mond-Weir type dual problem and obtain weak and strong duality theorems for a strict minimizer of order m.

2 Preliminaries

Let $\mathbb{R}^n$ be the n-dimensional Euclidean space and let $\mathbb{R}^n_+$ be its nonnegative orthant.

Let $x, y \in \mathbb{R}^n$. The following notation will be used for vectors in $\mathbb{R}^n$:

x < y \iff x_i < y_i, \quad i = 1, 2, \dots, n;
x \leqq y \iff x_i \le y_i, \quad i = 1, 2, \dots, n;
x \le y \iff x_i \le y_i, \quad i = 1, 2, \dots, n, \ \text{but } x \ne y;
x \nleq y \ \text{is the negation of } x \le y;
x \nless y \ \text{is the negation of } x < y.

For $x, u \in \mathbb{R}$, $x \le u$ and $x < u$ have the usual meaning.

Definition 2.1 [16] Let D be a compact convex set in $\mathbb{R}^n$. The support function $s(\cdot \mid D)$ is defined by

s(x \mid D) := \max \{ x^T y : y \in D \}.

The support function $s(\cdot \mid D)$ has a subdifferential. The subdifferential of $s(\cdot \mid D)$ at x is given by

\partial s(x \mid D) := \{ z \in D : z^T x = s(x \mid D) \}.

The support function $s(\cdot \mid D)$, being convex and everywhere finite, is subdifferentiable everywhere; that is, there exists $z \in D$ such that

s(y \mid D) \ge s(x \mid D) + z^T (y - x) \quad \text{for all } y \in D.

Equivalently,

z^T x = s(x \mid D).
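For instance, if D is the closed Euclidean unit ball $\{ y \in \mathbb{R}^n : \|y\| \le 1 \}$, then

s(x \mid D) = \max_{\|y\| \le 1} x^T y = \|x\|,

and $\partial s(x \mid D) = \{ x / \|x\| \}$ for $x \ne 0$, while $\partial s(0 \mid D) = D$; the subdifferential collapses to a single point exactly where $s(\cdot \mid D)$ is differentiable.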

We consider the following multiobjective programming problem:

(\mathrm{MOP}) \qquad \text{Minimize } \big( f_1(x) + s(x \mid D_1), \dots, f_p(x) + s(x \mid D_p) \big)
\qquad\qquad\ \ \text{subject to } g(x) \leqq 0,

where f and g are locally Lipschitz functions from $\mathbb{R}^n$ to $\mathbb{R}^p$ and from $\mathbb{R}^n$ to $\mathbb{R}^q$, respectively, and $D_i$, for each $i \in P = \{1, 2, \dots, p\}$, is a compact convex set in $\mathbb{R}^n$. Further, let $S := \{ x \in X \mid g_j(x) \le 0,\ j = 1, \dots, q \}$ be the feasible set of (MOP), and let $B(x^0, \varepsilon) = \{ x \in \mathbb{R}^n \mid \|x - x^0\| < \varepsilon \}$ denote the open ball with center $x^0$ and radius $\varepsilon$. Set $I(x^0) := \{ j \mid g_j(x^0) = 0,\ j = 1, \dots, q \}$.
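For instance, if each $D_i$ is the box $\{ w \in \mathbb{R}^n : \|w\|_\infty \le 1 \}$, then $s(x \mid D_i) = \max_{\|w\|_\infty \le 1} x^T w = \sum_{j=1}^{n} |x_j| = \|x\|_1$, so each component objective takes the nondifferentiable form $f_i(x) + \|x\|_1$; this illustrates how the support function terms generate nonsmoothness even when the $f_i$ are smooth.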

We introduce the following definitions due to Jimenez [9].

Definition 2.2 A point $x^0 \in S$ is called a strict local minimizer for (MOP) if there exists an $\varepsilon > 0$ such that, for $i \in \{1, 2, \dots, p\}$,

f_i(x) + s(x \mid D_i) \nless f_i(x^0) + s(x^0 \mid D_i) \quad \text{for all } x \in B(x^0, \varepsilon) \cap S.

Definition 2.3 Let $m \ge 1$ be an integer. A point $x^0 \in S$ is called a strict local minimizer of order m for (MOP) if there exist an $\varepsilon > 0$ and a constant $c \in \operatorname{int} \mathbb{R}^p_+$ such that, for $i \in \{1, 2, \dots, p\}$,

f_i(x) + s(x \mid D_i) \nless f_i(x^0) + s(x^0 \mid D_i) + c_i \|x - x^0\|^m \quad \text{for all } x \in B(x^0, \varepsilon) \cap S.

Definition 2.4 Let $m \ge 1$ be an integer. A point $x^0 \in S$ is called a strict minimizer of order m for (MOP) if there exists a constant $c \in \operatorname{int} \mathbb{R}^p_+$ such that, for $i \in \{1, 2, \dots, p\}$,

f_i(x) + s(x \mid D_i) \nless f_i(x^0) + s(x^0 \mid D_i) + c_i \|x - x^0\|^m \quad \text{for all } x \in S.
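As a simple illustration with a single objective, take $p = 1$, $D_1 = \{0\}$, $f_1(x) = \|x\|^2$ and feasible set $S = \mathbb{R}^n$. Then $x^0 = 0$ is a strict minimizer of order 2 (take $c_1 = 1$, since $\|x\|^2 \ge c_1 \|x - 0\|^2$), but it is not a strict minimizer of order 1, because for every $c_1 > 0$ the inequality $\|x\|^2 \ge c_1 \|x\|$ fails for all sufficiently small $x \ne 0$.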

Definition 2.5 [16] Suppose that $h: X \to \mathbb{R}$ is Lipschitz on X. The Clarke generalized directional derivative of h at $x \in X$ in the direction $v \in \mathbb{R}^n$, denoted by $h^0(x, v)$, is defined as

h^0(x, v) = \limsup_{y \to x,\ t \downarrow 0} \frac{h(y + tv) - h(y)}{t}.

Definition 2.6 [16] The Clarke generalized gradient of h at $x \in X$, denoted by $\partial h(x)$, is defined as

\partial h(x) = \{ \xi \in \mathbb{R}^n : h^0(x, v) \ge \langle \xi, v \rangle \ \text{for all } v \in \mathbb{R}^n \}.
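For instance, for $h(x) = |x|$ on $\mathbb{R}$ one computes $h^0(0, v) = \limsup_{y \to 0,\, t \downarrow 0} (|y + tv| - |y|)/t = |v|$, so that $\partial h(0) = \{ \xi \in \mathbb{R} : |v| \ge \xi v \ \text{for all } v \} = [-1, 1]$, while $\partial h(x) = \{ \operatorname{sgn}(x) \}$ for $x \ne 0$.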

We recall the notion of strong convexity of order m introduced by Lin and Fukushima in [17].

Definition 2.7 A function $h: X \to \mathbb{R}$ is said to be strongly convex of order m if there exists a constant $c > 0$ such that for all $x^1, x^2 \in X$ and $t \in [0, 1]$,

h(t x^1 + (1 - t) x^2) \le t\, h(x^1) + (1 - t)\, h(x^2) - c\, t (1 - t) \|x^1 - x^2\|^m.

For m = 2, the function h is referred to as strongly convex [5].
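For instance, $h(x) = \|x\|^2$ on $\mathbb{R}^n$ satisfies the identity $t\|x^1\|^2 + (1 - t)\|x^2\|^2 - \|t x^1 + (1 - t) x^2\|^2 = t(1 - t)\|x^1 - x^2\|^2$, so it is strongly convex of order 2 with $c = 1$; it is not strongly convex of order 1 on any X containing points arbitrarily close together, since the required inequality would force $\|x^1 - x^2\| \ge c$.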

Proposition 2.1 [17] If each $h_i$, $i = 1, \dots, p$, is strongly convex of order m on a convex set X, then $\sum_{i=1}^{p} t_i h_i$ and $\max_{1 \le i \le p} h_i$ are also strongly convex of order m on X, where $t_i \ge 0$, $i = 1, \dots, p$.

Theorem 2.1 Let X and S be nonempty convex subsets of $\mathbb{R}^n$ and X, respectively. Suppose that $x^0 \in S$ is a strict local minimizer of order m for (MOP) and the functions $f_i: X \to \mathbb{R}$, $i = 1, \dots, p$, are strongly convex of order m on X. Then $x^0$ is a strict minimizer of order m for (MOP).

Proof. Since $x^0 \in S$ is a strict local minimizer of order m for (MOP), there exist an $\varepsilon > 0$ and constants $c_i > 0$, $i = 1, \dots, p$, such that

f_i(x) + s(x \mid D_i) \nless f_i(x^0) + s(x^0 \mid D_i) + c_i \|x - x^0\|^m \quad \text{for all } x \in B(x^0, \varepsilon) \cap S,

that is, there exists no $x \in B(x^0, \varepsilon) \cap S$ such that

f_i(x) + s(x \mid D_i) < f_i(x^0) + s(x^0 \mid D_i) + c_i \|x - x^0\|^m, \quad i = 1, \dots, p.

If $x^0$ is not a strict minimizer of order m for (MOP), then there exists some $z \in S$ such that

f_i(z) + s(z \mid D_i) < f_i(x^0) + s(x^0 \mid D_i) + c_i \|z - x^0\|^m, \quad i = 1, \dots, p.
(2.1)

Since S is convex, $\lambda z + (1 - \lambda) x^0 \in B(x^0, \varepsilon) \cap S$ for sufficiently small $\lambda \in (0, 1)$. As $f_i$, $i = 1, \dots, p$, are strongly convex of order m on X, we have, for $z, x^0 \in S$,

f_i(\lambda z + (1 - \lambda) x^0) \le \lambda f_i(z) + (1 - \lambda) f_i(x^0) - c_i \lambda (1 - \lambda) \|z - x^0\|^m,

so that

f_i(\lambda z + (1 - \lambda) x^0) - f_i(x^0)
\le \lambda [ f_i(z) - f_i(x^0) ] - c_i \lambda (1 - \lambda) \|z - x^0\|^m
< \lambda [ -s(z \mid D_i) + s(x^0 \mid D_i) + c_i \|z - x^0\|^m ] - c_i \lambda (1 - \lambda) \|z - x^0\|^m, \quad \text{using (2.1)},
= -\lambda s(z \mid D_i) + \lambda s(x^0 \mid D_i) + \lambda^2 c_i \|z - x^0\|^m
< -\lambda s(z \mid D_i) + \lambda s(x^0 \mid D_i) + c_i \|z - x^0\|^m.

Hence

f_i(\lambda z + (1 - \lambda) x^0) + \lambda s(z \mid D_i) < f_i(x^0) + \lambda s(x^0 \mid D_i) - s(x^0 \mid D_i) + s(x^0 \mid D_i) + c_i \|z - x^0\|^m,

or

f_i(\lambda z + (1 - \lambda) x^0) + \lambda s(z \mid D_i) + (1 - \lambda) s(x^0 \mid D_i) < f_i(x^0) + s(x^0 \mid D_i) + c_i \|z - x^0\|^m.

Since $s(\lambda z + (1 - \lambda) x^0 \mid D_i) \le \lambda s(z \mid D_i) + (1 - \lambda) s(x^0 \mid D_i)$, $i = 1, \dots, p$, we have

f_i(\lambda z + (1 - \lambda) x^0) + s(\lambda z + (1 - \lambda) x^0 \mid D_i) < f_i(x^0) + s(x^0 \mid D_i) + c_i \|z - x^0\|^m,

which implies that $x^0$ is not a strict local minimizer of order m, a contradiction. Hence, $x^0$ is a strict minimizer of order m for (MOP). □

Motivated by the above result, we give two obvious generalizations of strong convexity of order m which will be used to derive the optimality conditions for a strict minimizer of order m.

Definition 2.8 The function h is said to be strongly pseudoconvex of order m and Lipschitz on X if there exists a constant $c > 0$ such that for $x^1, x^2 \in X$,

\langle \xi, x^1 - x^2 \rangle + c \|x^1 - x^2\|^m \ge 0 \ \text{for all } \xi \in \partial h(x^2) \quad \text{implies} \quad h(x^1) \ge h(x^2).

Definition 2.9 The function h is said to be strongly quasiconvex of order m and Lipschitz on X if there exists a constant $c > 0$ such that for $x^1, x^2 \in X$,

h(x^1) \le h(x^2) \quad \text{implies} \quad \langle \xi, x^1 - x^2 \rangle + c \|x^1 - x^2\|^m \le 0 \ \text{for all } \xi \in \partial h(x^2).
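Both notions do generalize strong convexity of order m, provided a strongly convex function of order m that is Lipschitz on X satisfies the order-m subgradient inequality $h(x^1) - h(x^2) \ge \langle \xi, x^1 - x^2 \rangle + c \|x^1 - x^2\|^m$ for all $\xi \in \partial h(x^2)$ (the form used in (3.5) and (4.5) below). Indeed, under that inequality, $\langle \xi, x^1 - x^2 \rangle + c \|x^1 - x^2\|^m \ge 0$ gives $h(x^1) \ge h(x^2)$, which is Definition 2.8, while $h(x^1) \le h(x^2)$ gives $\langle \xi, x^1 - x^2 \rangle + c \|x^1 - x^2\|^m \le h(x^1) - h(x^2) \le 0$, which is Definition 2.9.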

We obtain the following lemma from Theorem 4.1 of Chankong and Haimes [18].

Lemma 2.1 $x^0$ is an efficient point for (MOP) if and only if $x^0$ solves

(\mathrm{MOP}_k(x^0)) \qquad \text{Minimize } f_k(x) + s(x \mid D_k)
\qquad\qquad \text{subject to } f_i(x) + s(x \mid D_i) \le f_i(x^0) + s(x^0 \mid D_i) \ \text{for all } i \ne k,
\qquad\qquad\qquad\qquad\ \ g_j(x) \le 0, \quad j = 1, \dots, q,

for every k = 1, ... , p.

We introduce the following definition for (MOP) based on the idea of Chandra et al. [19].

Definition 2.10 Let $x^0$ be a feasible solution of (MOP). We say that the basic regularity condition (BRC) is satisfied at $x^0$ if there exists $r \in \{1, 2, \dots, p\}$ such that the only scalars $\lambda_i^0 \ge 0$, $w_i \in D_i$, $i = 1, \dots, p$, $i \ne r$, $\mu_j^0 \ge 0$, $j \in I(x^0)$, $\mu_j^0 = 0$, $j \notin I(x^0)$, where $I(x^0) = \{ j \mid g_j(x^0) = 0,\ j = 1, \dots, q \}$, which satisfy

0 \in \sum_{i=1,\, i \ne r}^{p} \lambda_i^0 (\partial f_i(x^0) + w_i) + \sum_{j=1}^{q} \mu_j^0 \partial g_j(x^0)

are $\lambda_i^0 = 0$ for all $i = 1, \dots, p$, $i \ne r$, and $\mu_j^0 = 0$, $j = 1, \dots, q$.

3 Optimality Conditions

In this section, we establish Fritz John and Karush-Kuhn-Tucker necessary conditions and a Karush-Kuhn-Tucker sufficient condition for a strict minimizer of (MOP).

Theorem 3.1 (Fritz John Necessary Optimality Conditions) Suppose that $x^0$ is a strict minimizer of order m for (MOP) and the functions $f_i$, $i = 1, \dots, p$, and $g_j$, $j = 1, \dots, q$, are Lipschitz at $x^0$. Then there exist $\lambda^0 \in \mathbb{R}^p_+$, $w_i^0 \in D_i$, $i = 1, \dots, p$, and $\mu^0 \in \mathbb{R}^q_+$ such that

0 \in \sum_{i=1}^{p} \lambda_i^0 (\partial f_i(x^0) + w_i^0) + \sum_{j=1}^{q} \mu_j^0 \partial g_j(x^0),
\langle w_i^0, x^0 \rangle = s(x^0 \mid D_i), \quad i = 1, \dots, p,
\mu_j^0 g_j(x^0) = 0, \quad j = 1, \dots, q,
(\lambda_1^0, \dots, \lambda_p^0, \mu_1^0, \dots, \mu_q^0) \ne (0, \dots, 0).

Proof. Since $x^0$ is a strict minimizer of order m for (MOP), it is a strict minimizer. It can be seen that $x^0$ solves the following unconstrained scalar problem:

\text{minimize } F(x)

where

F(x) = \max \{ f_1(x) + s(x \mid D_1) - (f_1(x^0) + s(x^0 \mid D_1)), \dots, f_p(x) + s(x \mid D_p) - (f_p(x^0) + s(x^0 \mid D_p)), g_1(x), \dots, g_q(x) \}.

If this were not so, then there would exist $x^1 \in \mathbb{R}^n$ such that $F(x^1) < F(x^0)$. Since $x^0$ is a strict minimizer of (MOP), it is feasible, so $g_j(x^0) \le 0$ for all $j = 1, \dots, q$. Thus $F(x^0) = 0$ and hence $F(x^1) < 0$. This implies that $x^1$ is a feasible solution of (MOP) and contradicts the fact that $x^0$ is a strict minimizer of (MOP).

Since $x^0$ minimizes F(x), it follows from Proposition 2.3.2 in Clarke [16] that $0 \in \partial F(x^0)$. Using Proposition 2.3.12 of [16], it follows that

\partial F(x^0) \subseteq \operatorname{co} \Big\{ \Big( \bigcup_{i=1}^{p} [\partial f_i(x^0) + \partial s(x^0 \mid D_i)] \Big) \cup \Big( \bigcup_{j=1}^{q} \partial g_j(x^0) \Big) \Big\}.

Thus,

0 \in \operatorname{co} \Big\{ \Big( \bigcup_{i=1}^{p} [\partial f_i(x^0) + \partial s(x^0 \mid D_i)] \Big) \cup \Big( \bigcup_{j=1}^{q} \partial g_j(x^0) \Big) \Big\}.

Hence there exist $\lambda_i^0 \ge 0$, $w_i^0 \in D_i$, $i = 1, \dots, p$, and $\mu_j^0 \ge 0$, $j = 1, \dots, q$, such that

0 \in \sum_{i=1}^{p} \lambda_i^0 (\partial f_i(x^0) + w_i^0) + \sum_{j=1}^{q} \mu_j^0 \partial g_j(x^0),
\langle w_i^0, x^0 \rangle = s(x^0 \mid D_i), \quad i = 1, \dots, p,
\mu_j^0 g_j(x^0) = 0, \quad j = 1, \dots, q,
(\lambda_1^0, \dots, \lambda_p^0, \mu_1^0, \dots, \mu_q^0) \ne (0, \dots, 0).

Theorem 3.2 (Karush-Kuhn-Tucker Necessary Optimality Conditions) Suppose that $x^0$ is a strict minimizer of order m for (MOP) and the functions $f_i$, $i = 1, \dots, p$, and $g_j$, $j = 1, \dots, q$, are Lipschitz at $x^0$. Assume that the basic regularity condition (BRC) holds at $x^0$. Then there exist $\lambda^0 \in \mathbb{R}^p_+$, $w_i^0 \in D_i$, $i = 1, \dots, p$, and $\mu^0 \in \mathbb{R}^q_+$ such that

0 \in \sum_{i=1}^{p} \lambda_i^0 \partial f_i(x^0) + \sum_{i=1}^{p} \lambda_i^0 w_i^0 + \sum_{j=1}^{q} \mu_j^0 \partial g_j(x^0),
(3.1)
\langle w_i^0, x^0 \rangle = s(x^0 \mid D_i), \quad i = 1, \dots, p,
(3.2)
\mu_j^0 g_j(x^0) = 0, \quad j = 1, \dots, q,
(3.3)
(\lambda_1^0, \dots, \lambda_p^0) \ne (0, \dots, 0).
(3.4)

Proof. Since $x^0$ is a strict minimizer of order m for (MOP), by Theorem 3.1 there exist $\lambda^0 \in \mathbb{R}^p_+$, $w_i^0 \in D_i$, $i = 1, \dots, p$, and $\mu^0 \in \mathbb{R}^q_+$ such that

0 \in \sum_{i=1}^{p} \lambda_i^0 (\partial f_i(x^0) + w_i^0) + \sum_{j=1}^{q} \mu_j^0 \partial g_j(x^0),
\langle w_i^0, x^0 \rangle = s(x^0 \mid D_i), \quad i = 1, \dots, p,
\mu_j^0 g_j(x^0) = 0, \quad j = 1, \dots, q,
(\lambda_1^0, \dots, \lambda_p^0, \mu_1^0, \dots, \mu_q^0) \ne (0, \dots, 0).

We claim that $(\lambda_1^0, \dots, \lambda_p^0) \ne (0, \dots, 0)$. Suppose, on the contrary, that $\lambda_i^0 = 0$ for all $i = 1, \dots, p$. Since $\mu_j^0 g_j(x^0) = 0$ gives $\mu_j^0 = 0$ for $j \notin I(x^0)$, the inclusion above reduces to

0 \in \sum_{k \in P,\, k \ne r} \lambda_k^0 (\partial f_k(x^0) + w_k^0) + \sum_{j \in I(x^0)} \mu_j^0 \partial g_j(x^0),

where $r$ is the index from the basic regularity condition, which holds at $x^0$ by assumption. By the BRC, $\lambda_k^0 = 0$, $k \in P$, $k \ne r$, and $\mu_j^0 = 0$, $j \in I(x^0)$. Hence all the multipliers $\lambda_i^0$, $i = 1, \dots, p$, and $\mu_j^0$, $j = 1, \dots, q$, vanish simultaneously, which contradicts Theorem 3.1. Therefore $(\lambda_1^0, \dots, \lambda_p^0) \ne (0, \dots, 0)$.

Theorem 3.3 (Karush-Kuhn-Tucker Sufficient Optimality Conditions) Let the Karush-Kuhn-Tucker necessary optimality conditions be satisfied at $x^0 \in S$. Suppose that $f_i(\cdot) + (\cdot)^T w_i$, $i = 1, \dots, p$, are strongly convex of order m on X and $g_j(\cdot)$, $j \in I(x^0)$, are strongly quasiconvex of order m on X. Then $x^0$ is a strict minimizer of order m for (MOP).

Proof. As $f_i(\cdot) + (\cdot)^T w_i$, $i = 1, \dots, p$, are strongly convex of order m on X, there exist constants $c_i > 0$, $i = 1, \dots, p$, such that for all $x \in S$, $\xi_i \in \partial f_i(x^0)$ and $w_i \in D_i$, $i = 1, \dots, p$,

(f_i(x) + x^T w_i) - (f_i(x^0) + (x^0)^T w_i) \ge \langle \xi_i + w_i, x - x^0 \rangle + c_i \|x - x^0\|^m.
(3.5)

For $\lambda_i^0 \ge 0$, $i = 1, \dots, p$, we obtain

\sum_{i=1}^{p} \lambda_i^0 (f_i(x) + x^T w_i) - \sum_{i=1}^{p} \lambda_i^0 (f_i(x^0) + (x^0)^T w_i) \ge \sum_{i=1}^{p} \lambda_i^0 \langle \xi_i + w_i, x - x^0 \rangle + \sum_{i=1}^{p} \lambda_i^0 c_i \|x - x^0\|^m.
(3.6)

Now, for $x \in S$,

g_j(x) \le g_j(x^0), \quad j \in I(x^0).

As $g_j(\cdot)$, $j \in I(x^0)$, are strongly quasiconvex of order m on X, it follows that there exist constants $c_j > 0$ and $\eta_j \in \partial g_j(x^0)$, $j \in I(x^0)$, such that

\langle \eta_j, x - x^0 \rangle + c_j \|x - x^0\|^m \le 0.

For $\mu_j^0 \ge 0$, $j \in I(x^0)$, we obtain

\sum_{j \in I(x^0)} \mu_j^0 \langle \eta_j, x - x^0 \rangle + \sum_{j \in I(x^0)} \mu_j^0 c_j \|x - x^0\|^m \le 0.

As $\mu_j^0 = 0$ for $j \notin I(x^0)$, we have

\sum_{j=1}^{q} \mu_j^0 \langle \eta_j, x - x^0 \rangle + \sum_{j \in I(x^0)} \mu_j^0 c_j \|x - x^0\|^m \le 0.
(3.7)

By (3.6), (3.7) and (3.1), we get

\sum_{i=1}^{p} \lambda_i^0 (f_i(x) + x^T w_i) - \sum_{i=1}^{p} \lambda_i^0 (f_i(x^0) + (x^0)^T w_i) \ge a \|x - x^0\|^m,

where $a = \sum_{i=1}^{p} \lambda_i^0 c_i + \sum_{j \in I(x^0)} \mu_j^0 c_j$. This implies that

\sum_{i=1}^{p} \lambda_i^0 \big[ (f_i(x) + x^T w_i) - (f_i(x^0) + (x^0)^T w_i) - c_i \|x - x^0\|^m \big] \ge 0,
(3.8)

where $c = a e$. It follows from (3.8) that there exists $c \in \operatorname{int} \mathbb{R}^p_+$ such that for all $x \in S$,

f_i(x) + x^T w_i \ge f_i(x^0) + (x^0)^T w_i + c_i \|x - x^0\|^m, \quad i = 1, \dots, p.

Since $(x^0)^T w_i = s(x^0 \mid D_i)$ and $x^T w_i \le s(x \mid D_i)$, $i = 1, \dots, p$, we have

f_i(x) + s(x \mid D_i) \ge f_i(x^0) + s(x^0 \mid D_i) + c_i \|x - x^0\|^m, \quad i = 1, \dots, p,

i.e.,

f_i(x) + s(x \mid D_i) \nless f_i(x^0) + s(x^0 \mid D_i) + c_i \|x - x^0\|^m \quad \text{for all } x \in S,

thereby implying that $x^0$ is a strict minimizer of order m for (MOP). □
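To illustrate Theorem 3.3 with a minimal example (constructed here for illustration only), take $n = p = q = 1$, $X = [-1, 1]$, $f_1(x) = x^2$, $D_1 = [-1, 1]$ (so that $s(x \mid D_1) = |x|$) and $g_1(x) = -x$, giving $S = [0, 1]$. At $x^0 = 0$, the choice $\lambda_1^0 = 1$, $w_1^0 = 0 \in D_1$, $\mu_1^0 = 0$ satisfies (3.1)-(3.4), since $\partial f_1(0) = \{0\}$ and $\partial g_1(0) = \{-1\}$. Moreover, $f_1(\cdot) + (\cdot) w_1^0 = (\cdot)^2$ is strongly convex of order 2 on X with $c = 1$, and $g_1$ is strongly quasiconvex of order 2 on X with $c = 1/2$, because $-(x^1 - x^2) + \tfrac{1}{2} |x^1 - x^2|^2 \le 0$ whenever $-x^1 \le -x^2$ and $|x^1 - x^2| \le 2$. Theorem 3.3 then asserts that $x^0 = 0$ is a strict minimizer of order 2, which is confirmed directly: $x^2 + |x| \ge x^2$ for all $x \in S$.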

Remark 3.1 If $D_i = \{0\}$, $i = 1, \dots, p$, then our results on optimality reduce to those of Bhatia [12].

4 Duality Theorems

In this section, we formulate a Mond-Weir type dual problem and establish duality theorems for a strict minimizer of order m. We propose the following Mond-Weir type dual (MOD) to (MOP):

(\mathrm{MOD}) \qquad \text{Maximize } (f_1(u) + u^T w_1, \dots, f_p(u) + u^T w_p)
\qquad\qquad\ \ \text{subject to } 0 \in \sum_{i=1}^{p} \lambda_i (\partial f_i(u) + w_i) + \sum_{j=1}^{q} \mu_j \partial g_j(u),
(4.1)
\sum_{j=1}^{q} \mu_j g_j(u) \ge 0, \quad \mu \ge 0, \quad w_i \in D_i, \ i = 1, \dots, p, \quad \lambda = (\lambda_1, \dots, \lambda_p) \in \Lambda^+, \quad u \in X,
(4.2)

where $\Lambda^+ = \{ \lambda \in \mathbb{R}^p : \lambda \ge 0,\ \lambda^T e = 1,\ e = (1, \dots, 1) \in \mathbb{R}^p \}$.
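As a quick check on the formulation, consider again the illustrative example following Theorem 3.3 ($f_1(x) = x^2$, $D_1 = [-1, 1]$, $g_1(x) = -x$, $X = [-1, 1]$, $p = q = 1$, so $\lambda_1 = 1$). Constraint (4.1) becomes $0 = 2u + w_1 - \mu_1$, i.e. $w_1 = \mu_1 - 2u$, since $\partial f_1(u) = \{2u\}$ and $\partial g_1(u) = \{-1\}$, and (4.2) becomes $-\mu_1 u \ge 0$ with $\mu_1 \ge 0$ and $w_1 \in [-1, 1]$. Every dual-feasible point therefore has objective value $u^2 + u w_1 = -u^2 + \mu_1 u \le 0$, while every primal-feasible $x \in S = [0, 1]$ has $x^2 + |x| \ge 0$; thus (4.3) cannot hold, and the common value 0 is attained at $x = u = 0$ (with $w_1 = \mu_1 = 0$), in line with Theorems 4.1 and 4.2.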

Theorem 4.1 (Weak Duality) Let x and $(u, w, \lambda, \mu)$ be feasible solutions of (MOP) and (MOD), respectively. Assume that $f_i(\cdot) + (\cdot)^T w_i$, $i = 1, \dots, p$, are strongly convex of order m on X and $g_j(\cdot)$, $j \in I(u)$, where $I(u) = \{ j \mid g_j(u) = 0 \}$, are strongly quasiconvex of order m on X. Then the following cannot hold:

f(x) + s(x \mid D) < f(u) + u^T w.
(4.3)

Proof. Since x is a feasible solution for (MOP) and $(u, w, \lambda, \mu)$ is feasible for (MOD), we have

g_j(x) \le g_j(u), \quad j \in I(u).

As $g_j$, $j \in I(u)$, are strongly quasiconvex of order m on X, it follows that there exist constants $c_j > 0$ and $\eta_j \in \partial g_j(u)$, $j \in I(u)$, such that

\langle \eta_j, x - u \rangle + c_j \|x - u\|^m \le 0.

This, together with $\mu_j \ge 0$, $j \in I(u)$, implies

\sum_{j \in I(u)} \mu_j \langle \eta_j, x - u \rangle + \sum_{j \in I(u)} \mu_j c_j \|x - u\|^m \le 0.

As $\mu_j = 0$ for $j \notin I(u)$, we have

\sum_{j=1}^{q} \mu_j \langle \eta_j, x - u \rangle + \sum_{j \in I(u)} \mu_j c_j \|x - u\|^m \le 0.
(4.4)

Now, suppose contrary to the result that (4.3) holds. Since $x^T w_i \le s(x \mid D_i)$, $i = 1, \dots, p$, we obtain

f_i(x) + x^T w_i < f_i(u) + u^T w_i, \quad i = 1, \dots, p.

As $f_i(\cdot) + (\cdot)^T w_i$, $i = 1, \dots, p$, are strongly convex of order m on X, there exist constants $c_i > 0$, $i = 1, \dots, p$, such that for all $x \in S$ and $\xi_i \in \partial f_i(u)$, $i = 1, \dots, p$,

(f_i(x) + x^T w_i) - (f_i(u) + u^T w_i) \ge \langle \xi_i + w_i, x - u \rangle + c_i \|x - u\|^m.
(4.5)

For $\lambda_i \ge 0$, $i = 1, \dots, p$, (4.5) yields

\sum_{i=1}^{p} \lambda_i (f_i(x) + x^T w_i) - \sum_{i=1}^{p} \lambda_i (f_i(u) + u^T w_i) \ge \sum_{i=1}^{p} \lambda_i \langle \xi_i + w_i, x - u \rangle + \sum_{i=1}^{p} \lambda_i c_i \|x - u\|^m.
(4.6)

By (4.4), (4.6) and (4.1), we get

\sum_{i=1}^{p} \lambda_i (f_i(x) + x^T w_i) - \sum_{i=1}^{p} \lambda_i (f_i(u) + u^T w_i) \ge a \|x - u\|^m,
(4.7)

where $a = \sum_{i=1}^{p} \lambda_i c_i + \sum_{j \in I(u)} \mu_j c_j$. This implies that

\sum_{i=1}^{p} \lambda_i \big[ (f_i(x) + x^T w_i) - (f_i(u) + u^T w_i) - c_i \|x - u\|^m \big] \ge 0,
(4.8)

where $c = a e$, since $\lambda^T e = 1$. It follows from (4.8) that there exists $c \in \operatorname{int} \mathbb{R}^p_+$ such that for all $x \in S$,

f_i(x) + x^T w_i \ge f_i(u) + u^T w_i + c_i \|x - u\|^m, \quad i = 1, \dots, p.

Since $x^T w_i \le s(x \mid D_i)$, $i = 1, \dots, p$, and $c \in \operatorname{int} \mathbb{R}^p_+$, we have

f_i(x) + s(x \mid D_i) \ge f_i(x) + x^T w_i \ge f_i(u) + u^T w_i + c_i \|x - u\|^m > f_i(u) + u^T w_i, \quad i = 1, \dots, p,

which contradicts the assumption that (4.3) holds. □

Theorem 4.2 (Strong Duality) If $x^0$ is a strict minimizer of order m for (MOP) and the basic regularity condition (BRC) holds at $x^0$, then there exist $\lambda^0 \in \mathbb{R}^p_+$, $w_i^0 \in D_i$, $i = 1, \dots, p$, and $\mu^0 \in \mathbb{R}^q_+$ such that $(x^0, w^0, \lambda^0, \mu^0)$ is a feasible solution for (MOD) and $(x^0)^T w_i^0 = s(x^0 \mid D_i)$, $i = 1, \dots, p$. Moreover, if the assumptions of weak duality are satisfied, then $(x^0, w^0, \lambda^0, \mu^0)$ is a strict minimizer of order m for (MOD).

Proof. By Theorem 3.2, there exist $\lambda^0 \in \mathbb{R}^p_+$, $w_i^0 \in D_i$, $i = 1, \dots, p$, and $\mu^0 \in \mathbb{R}^q_+$ such that

0 \in \sum_{i=1}^{p} \lambda_i^0 (\partial f_i(x^0) + w_i^0) + \sum_{j=1}^{q} \mu_j^0 \partial g_j(x^0),
\langle w_i^0, x^0 \rangle = s(x^0 \mid D_i), \quad i = 1, \dots, p,
\mu_j^0 g_j(x^0) = 0, \quad j = 1, \dots, q,
(\lambda_1^0, \dots, \lambda_p^0) \ne (0, \dots, 0).

Thus $(x^0, w^0, \lambda^0, \mu^0)$ is feasible for (MOD) and $(x^0)^T w_i^0 = s(x^0 \mid D_i)$, $i = 1, \dots, p$. By Theorem 4.1, the following cannot hold:

f_i(x^0) + (x^0)^T w_i^0 = f_i(x^0) + s(x^0 \mid D_i) < f_i(u) + u^T w_i, \quad i = 1, \dots, p,

where $(u, w, \lambda, \mu)$ is any feasible solution of (MOD). Hence there exists $c \in \operatorname{int} \mathbb{R}^p_+$ such that, for every such feasible solution,

f_i(x^0) + (x^0)^T w_i^0 + c_i \|u - x^0\|^m \nless f_i(u) + u^T w_i, \quad i = 1, \dots, p.

Thus $(x^0, w^0, \lambda^0, \mu^0)$ is a strict minimizer of order m for (MOD). Hence the result holds. □

References

  1. Park S: Generalized equilibrium problems and generalized complementarity problems. Journal of Optimization Theory and Applications 1997, 95(2):409–417. doi:10.1023/A:1022643407038

  2. Park S: Remarks on equilibria for g-monotone maps on generalized convex spaces. Journal of Mathematical Analysis and Applications 2002, 269:244–255. doi:10.1016/S0022-247X(02)00019-7

  3. Park S: Generalizations of the Nash equilibrium theorem in the KKM theory. Fixed Point Theory and Applications 2010, Article ID 234706, 23 pp.

  4. Rockafellar RT: Convex Analysis. Princeton University Press, Princeton, NJ; 1970.

  5. Vial JP: Strong and weak convexity of sets and functions. Mathematics of Operations Research 1983, 8:231–259. doi:10.1287/moor.8.2.231

  6. Auslender A: Stability in mathematical programming with nondifferentiable data. SIAM Journal on Control and Optimization 1984, 22:239–254. doi:10.1137/0322017

  7. Studniarski M: Necessary and sufficient conditions for isolated local minima of nonsmooth functions. SIAM Journal on Control and Optimization 1986, 24:1044–1049. doi:10.1137/0324061

  8. Ward DE: Characterizations of strict local minima and necessary conditions for weak sharp minima. Journal of Optimization Theory and Applications 1994, 80:551–571. doi:10.1007/BF02207780

  9. Jimenez B: Strict efficiency in vector optimization. Journal of Mathematical Analysis and Applications 2002, 265:264–284. doi:10.1006/jmaa.2001.7588

  10. Jimenez B, Novo V: First and second order sufficient conditions for strict minimality in multiobjective programming. Numerical Functional Analysis and Optimization 2002, 23:303–322. doi:10.1081/NFA-120006695

  11. Jimenez B, Novo V: First and second order sufficient conditions for strict minimality in nonsmooth vector optimization. Journal of Mathematical Analysis and Applications 2003, 284:496–510. doi:10.1016/S0022-247X(03)00337-8

  12. Bhatia G: Optimality and mixed saddle point criteria in multiobjective optimization. Journal of Mathematical Analysis and Applications 2008, 342:135–145. doi:10.1016/j.jmaa.2007.11.042

  13. Kim DS, Bae KD: Optimality conditions and duality for a class of nondifferentiable multiobjective programming problems. Taiwanese Journal of Mathematics 2009, 13(2B):789–804.

  14. Bae KD, Kang YM, Kim DS: Efficiency and generalized convex duality for nondifferentiable multiobjective programs. Journal of Inequalities and Applications 2010, Article ID 930457, 10 pp.

  15. Kim DS, Lee HJ: Optimality conditions and duality in nonsmooth multiobjective programs. Journal of Inequalities and Applications 2010, Article ID 939537, 12 pp.

  16. Clarke FH: Optimization and Nonsmooth Analysis. Wiley-Interscience, New York; 1983.

  17. Lin GH, Fukushima M: Some exact penalty results for nonlinear programs and mathematical programs with equilibrium constraints. Journal of Optimization Theory and Applications 2003, 118:67–80. doi:10.1023/A:1024787424532

  18. Chankong V, Haimes YY: Multiobjective Decision Making: Theory and Methodology. North-Holland, New York; 1983.

  19. Chandra S, Dutta J, Lalitha CS: Regularity conditions and optimality in vector optimization. Numerical Functional Analysis and Optimization 2004, 25:479–501. doi:10.1081/NFA-200042637


Acknowledgements

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (No. 2010-0012780). The authors are indebted to the referee for valuable comments and suggestions which helped to improve the presentation.

Author information

Correspondence to Do Sang Kim.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

DSK presented necessary and sufficient optimality conditions, formulated the Mond-Weir type dual problem and established weak and strong duality theorems for a strict minimizer of order m. KDB carried out the optimality and duality studies and drafted the manuscript. All authors read and approved the final manuscript.

An erratum to this article is available at http://dx.doi.org/10.1186/1687-1812-2012-28.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Bae, K.D., Kim, D.S. Optimality and Duality Theorems in Nonsmooth Multiobjective Optimization. Fixed Point Theory Appl 2011, 42 (2011). https://doi.org/10.1186/1687-1812-2011-42
