
On the hierarchical variational inclusion problems in Hilbert spaces

Abstract

The purpose of this paper is to use Maingé's approach to study the existence and approximation of solutions for a class of hierarchical variational inclusion problems in the setting of Hilbert spaces. As applications, we solve convex programming problems and quadratic minimization problems by using the main theorems. Our results extend and improve the corresponding recent results announced by many authors.

MSC: 47J05, 47H09, 49J25.

1 Introduction

Throughout this paper, we assume that H is a real Hilbert space, C is a nonempty closed and convex subset of H, and Fix(T) denotes the set of fixed points of a mapping T: C → C.

Let A: H → H be a single-valued nonlinear mapping and let M: H → 2^H be a multi-valued mapping. The so-called quasi-variational inclusion problem (see [1–3]) is to find a point u ∈ H such that

θ ∈ A(u) + M(u).
(1.1)

A number of problems arising in structural analysis, mechanics and economics can be considered in the framework of this kind of variational inclusions (see, for example, [4]).

The set of solutions of the variational inclusion (1.1) is denoted by Ω.

Special cases

(I) If M = ∂ϕ: H → 2^H, where ϕ: H → R ∪ {+∞} is a proper convex and lower semi-continuous function and ∂ϕ is the sub-differential of ϕ, then variational inclusion problem (1.1) is equivalent to finding u ∈ H such that

⟨A(u), v - u⟩ + ϕ(v) - ϕ(u) ≥ 0,  ∀v ∈ H,
(1.2)

which is called the mixed quasi-variational inequality.

Especially, if A = 0, then (1.2) is equivalent to the minimization problem of ϕ on H, i.e., to find u ∈ H such that ϕ(u) = inf_{y∈H} ϕ(y).

(II) If M = ∂δ_C, where C is a nonempty closed and convex subset of H and δ_C: H → [0, ∞] is the indicator function of C, i.e.,

δ_C(x) = 0 if x ∈ C, and δ_C(x) = +∞ if x ∉ C,

then variational inclusion problem (1.1) is equivalent to finding u ∈ C such that

⟨A(u), v - u⟩ ≥ 0,  ∀v ∈ C.
(1.3)

This problem is called the Hartman-Stampacchia variational inequality problem.

(III) If M = 0 and A = I - T, where I is the identity mapping and T: H → H is a nonlinear mapping, then problem (1.1) is equivalent to the fixed point problem of T, that is, to find u ∈ H such that

u = Tu.
(1.4)

Recently, hierarchical fixed point problems, hierarchical optimization problems and hierarchical minimization problems have attracted many authors' attention due to their links with convex programming problems, optimization problems and monotone variational inequality problems, etc. (see [5–21] and others).

The purpose of this paper is to introduce and study the following bi-level hierarchical variational inclusion problem in the setting of Hilbert spaces:

Find (x*, y*) ∈ Ω_1 × Ω_2 such that, for given positive real numbers ρ and η, the following inequalities hold:

⟨ρF(y*) + x* - y*, x - x*⟩ ≥ 0,  ∀x ∈ Ω_1,
⟨ηF(x*) + y* - x*, y - y*⟩ ≥ 0,  ∀y ∈ Ω_2,
(1.5)

where F, A_1, A_2: H → H are mappings, M_1, M_2: H → 2^H are multi-valued mappings, and Ω_i is the set of solutions to variational inclusion problem (1.1) with A = A_i, M = M_i for i = 1, 2.

Special examples

(I) If M_i = 0 and A_i = I - T_i, where T_i: H → H is a nonlinear mapping for each i = 1, 2, then Ω_i = Fix(T_i) and bi-level hierarchical variational inclusion problem (1.5) is equivalent to finding (x*, y*) ∈ Fix(T_1) × Fix(T_2) such that

⟨ρF(y*) + x* - y*, x - x*⟩ ≥ 0,  ∀x ∈ Fix(T_1),
⟨ηF(x*) + y* - x*, y - y*⟩ ≥ 0,  ∀y ∈ Fix(T_2).
(1.6)

This problem, which is called the bi-level hierarchical optimization problem, was studied by Maingé [20] and Kraikaew et al. [21].

(II) In (1.6), if T_i = P_{K_i} for each i = 1, 2, where P_{K_i} is the metric projection from H onto a nonempty closed convex subset K_i, then it is clear that Ω_i = Fix(T_i) = K_i and bi-level hierarchical optimization problem (1.6) is equivalent to finding (x*, y*) ∈ K_1 × K_2 such that

⟨ρF(y*) + x* - y*, x - x*⟩ ≥ 0,  ∀x ∈ K_1,
⟨ηF(x*) + y* - x*, y - y*⟩ ≥ 0,  ∀y ∈ K_2.
(1.7)

This system forms a more general problem originating from Nash equilibrium points; it was treated from a theoretical viewpoint in [22–24].

(III) If η = 0, ρ > 0 and both sets Ω_1 and Ω_2 are nonempty closed and convex subsets of H, then bi-level hierarchical variational inclusion problem (1.5) reduces to the following (one-level) hierarchical variational inclusion problem:

Find x* ∈ Ω_1 such that, for a given positive real number ρ, the following inequality holds:

⟨ρF(y*), x - x*⟩ ≥ 0,  ∀x ∈ Ω_1.
(1.8)

(IV) If K_1 = K_2 = K and η = 0, ρ > 0, then (1.7) reduces to the classic variational inequality, i.e., the problem of finding x* ∈ K such that

⟨F(x*), x - x*⟩ ≥ 0,  ∀x ∈ K.
(1.9)

In (1.5), it is worth noting that if Ω_1, Ω_2 are nonempty closed convex subsets of H, then the metric projections P_{Ω_1} and P_{Ω_2} from H onto Ω_1 and Ω_2, respectively, are well defined and problem (1.5) is equivalent to the problem of finding (x*, y*) ∈ Ω_1 × Ω_2 such that

x* = P_{Ω_1}[y* - ρF(y*)],
y* = P_{Ω_2}[x* - ηF(x*)].
(1.10)

However, in practice, both solution sets Ω 1 and Ω 2 (and hence the two projections) are not given explicitly.
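When Ω_1 and Ω_2 are known explicitly, system (1.10) can be attacked by directly iterating the two projection equations. The following sketch is our own toy illustration (not the paper's scheme; all data assumed): H = R, Ω_1 = Ω_2 = [0, 1] and F(t) = t, for which the unique solution of (1.10) is (x*, y*) = (0, 0):

```python
def proj(x, lo, hi):
    """Metric projection of x onto the interval [lo, hi]."""
    return max(lo, min(hi, x))

def solve_system(rho, eta, F, iters=200):
    # Fixed-point iteration on (1.10):
    #   x = P_{Omega_1}[y - rho*F(y)],  y = P_{Omega_2}[x - eta*F(x)]
    x, y = 0.7, 0.9
    for _ in range(iters):
        x = proj(y - rho * F(y), 0.0, 1.0)
        y = proj(x - eta * F(x), 0.0, 1.0)
    return x, y

x, y = solve_system(rho=0.5, eta=0.5, F=lambda t: t)
print(x, y)  # both tend to 0, the unique solution of (1.10) in this toy case
```

In general such a naive iteration need not converge, and this is precisely the drawback addressed next: the projections P_{Ω_i} are replaced by computable mappings with the same fixed point sets.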

To overcome this drawback, inspired by the methods studied by Yamada et al. [25, 26], Maingé [20] and Kraikaew et al. [21], we investigate a more general variant of the scheme proposed by Maingé [20] and Kraikaew et al. [21], replacing the projections by suitable mappings with nice fixed point sets. This strategy also suggests an effective approximation process. Our analysis and method allow us to prove the existence and approximation of solutions to problem (1.5). As applications, we utilize the main results to study quadratic minimization problems and convex programming problems in Hilbert spaces. The results presented in the paper extend and improve the corresponding results in [20, 21, 25, 26] and others.

2 Preliminaries

For the sake of convenience, we first recall some definitions and lemmas for our main results.

Definition 2.1 A mapping A: H → H is said to be α-inverse-strongly monotone if there exists α > 0 such that

⟨Ax - Ay, x - y⟩ ≥ α‖Ax - Ay‖²,  ∀x, y ∈ H.

A multi-valued mapping M: H → 2^H is called monotone if, for all x, y ∈ H, u ∈ Mx and v ∈ My imply that

⟨u - v, x - y⟩ ≥ 0.

A multi-valued mapping M: H → 2^H is said to be maximal monotone if it is monotone and, for any (x, u) ∈ H × H,

⟨u - v, x - y⟩ ≥ 0

for every (y, v) ∈ Graph(M) (the graph of the mapping M) implies that u ∈ Mx.

Lemma 2.2 [19]

Let A: H → H be an α-inverse-strongly monotone mapping. Then

  1. (i)

    A is a (1/α)-Lipschitz continuous and monotone mapping;

  2. (ii)

    For any constant λ > 0, we have

    ‖(I - λA)x - (I - λA)y‖² ≤ ‖x - y‖² + λ(λ - 2α)‖Ax - Ay‖²;
    (2.1)
  3. (iii)

    If λ ∈ (0, 2α], then I - λA is a nonexpansive mapping, where I is the identity mapping on H.

Let H be a real Hilbert space and C be a nonempty closed convex subset of H. For each x ∈ H, there exists a unique nearest point in C, denoted by P_C(x), such that

‖x - P_C x‖ ≤ ‖x - y‖,  ∀y ∈ C.

Such a mapping P_C from H onto C is called the metric projection.

Remark 2.3 It is well known that the metric projection P_C has the following properties:

  1. (i)

    P_C: H → C is nonexpansive;

  2. (ii)

    P_C is firmly nonexpansive, i.e.,

    ‖P_C x - P_C y‖² ≤ ⟨P_C x - P_C y, x - y⟩,  ∀x, y ∈ H;
  3. (iii)

    For each x ∈ H,

    z = P_C(x)  ⟺  ⟨x - z, z - y⟩ ≥ 0,  ∀y ∈ C.
    (2.2)

Definition 2.4 Let M: H → 2^H be a multi-valued maximal monotone mapping. Then the mapping J_{M,λ}: H → H defined by

J_{M,λ}(u) = (I + λM)^{-1}(u),  ∀u ∈ H,

is called the resolvent operator associated with M, where λ is any positive number and I is the identity mapping.
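For intuition, when M = ∂ϕ for a proper convex lower semi-continuous ϕ, the resolvent J_{M,λ} is exactly the proximal mapping of λϕ. A minimal sketch (our own illustration, with H = R and ϕ = |·|): the resolvent is then the soft-thresholding map, and the defining inclusion u - J_{M,λ}(u) ∈ λM(J_{M,λ}(u)) can be checked directly:

```python
def resolvent_abs(u, lam):
    """Resolvent J_{M,lam}(u) = (I + lam*M)^{-1}(u) for M = subdifferential
    of the absolute value, i.e. the soft-thresholding operator."""
    if u > lam:
        return u - lam
    if u < -lam:
        return u + lam
    return 0.0

# Check the inclusion u - v in lam * d|v| for a few inputs.
for u in (2.0, -0.3, 0.7, -5.0):
    lam = 1.0
    v = resolvent_abs(u, lam)
    w = u - v                       # must lie in lam * d|v|
    if v != 0.0:
        assert abs(w - lam * (1.0 if v > 0 else -1.0)) < 1e-12
    else:
        assert abs(w) <= lam + 1e-12  # d|0| = [-1, 1]
print(resolvent_abs(2.0, 1.0))  # prints 1.0
```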

Proposition 2.5 [19]

Let M: H → 2^H be a multi-valued maximal monotone mapping, and let A: H → H be an α-inverse-strongly monotone mapping. Then the following conclusions hold.

  1. (i)

    The resolvent operator J_{M,λ} associated with M is single-valued and nonexpansive for all λ > 0.

  2. (ii)

    The resolvent operator J_{M,λ} is 1-inverse-strongly monotone, i.e.,

    ‖J_{M,λ}(x) - J_{M,λ}(y)‖² ≤ ⟨x - y, J_{M,λ}(x) - J_{M,λ}(y)⟩,  ∀x, y ∈ H.
  3. (iii)

    u ∈ H is a solution of variational inclusion (1.1) if and only if u = J_{M,λ}(u - λAu) for all λ > 0, i.e., u is a fixed point of the mapping J_{M,λ}(I - λA). Therefore we have

    Ω = Fix(J_{M,λ}(I - λA)),  ∀λ > 0,
    (2.3)

    where Ω is the set of solutions of variational inclusion problem (1.1).

  4. (iv)

    If λ ∈ (0, 2α], then Ω is a closed convex subset of H.
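The characterization (2.3) underlies the classical forward-backward iteration x ← J_{M,λ}(x - λAx). A minimal sketch under assumed toy data (not from the paper): H = R, M = ∂δ_{[0,1]} so that J_{M,λ} = P_{[0,1]}, and A(x) = x - 2, which is 1-inverse-strongly monotone as the gradient of ½(x - 2)²; the solution set Ω here is the singleton {1}:

```python
def proj01(x):
    # Resolvent of M = subdifferential of the indicator of C = [0, 1].
    return max(0.0, min(1.0, x))

def forward_backward(x0, lam=0.5, iters=100):
    """Iterate x <- J_{M,lam}(x - lam*A(x)) with A(x) = x - 2, lam in (0, 2]."""
    x = x0
    for _ in range(iters):
        x = proj01(x - lam * (x - 2.0))
    return x

print(forward_backward(-3.0))  # prints 1.0, the unique point of Omega
```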

In the sequel, we denote the strong and weak convergence of a sequence {x_n} in H to an element x ∈ H by x_n → x and x_n ⇀ x, respectively.

Lemma 2.6 [27]

For x, y ∈ H and ω ∈ (0, 1), the following statements hold:

  1. (i)

    ‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩;

  2. (ii)

    ‖(1 - ω)x + ωy‖² = (1 - ω)‖x‖² + ω‖y‖² - ω(1 - ω)‖x - y‖².

Lemma 2.7 [28]

Let {a_n} be a sequence of real numbers such that there exists a subsequence {a_{m_j}} of {a_n} with a_{m_j} < a_{m_j+1} for all j ∈ N, where N is the set of all positive integers. Then there exists a nondecreasing sequence {n_k} ⊂ N such that lim_{k→∞} n_k = ∞ and the following properties are satisfied for all (sufficiently large) k ∈ N:

a_{n_k} ≤ a_{n_k+1} and a_k ≤ a_{n_k+1}.

In fact, n_k is the largest number n in the set {1, 2, …, k} such that a_n < a_{n+1} holds.

Lemma 2.8 [21]

Let {a_n} ⊂ [0, ∞), {α_n} ⊂ [0, 1), {b_n} ⊂ (-∞, +∞) and α̂ ∈ [0, 1) be such that

  1. (i)

    {a_n} is a bounded sequence;

  2. (ii)

    a_{n+1} ≤ (1 - α_n)² a_n + 2α_n α̂ (a_n a_{n+1})^{1/2} + α_n b_n, ∀n ≥ 1;

  3. (iii)

    whenever {a_{n_k}} is a subsequence of {a_n} satisfying

    lim inf_{k→∞} (a_{n_k+1} - a_{n_k}) ≥ 0,

    it follows that lim sup_{k→∞} b_{n_k} ≤ 0;

  4. (iv)

    lim_{n→∞} α_n = 0 and ∑_{n=1}^∞ α_n = ∞.

Then lim_{n→∞} a_n = 0.

Definition 2.9

  1. (i)

    A mapping T: H → H is said to be nonexpansive if

    ‖Tx - Ty‖ ≤ ‖x - y‖,  ∀x, y ∈ H.
  2. (ii)

    A mapping T: H → H is said to be quasi-nonexpansive if Fix(T) ≠ ∅ and

    ‖Tx - p‖ ≤ ‖x - p‖,  ∀x ∈ H, ∀p ∈ Fix(T).

    It should be noted that T is quasi-nonexpansive if and only if, for all x ∈ H and p ∈ Fix(T),

    ⟨x - Tx, x - p⟩ ≥ (1/2)‖x - Tx‖².
    (2.4)
  3. (iii)

    A mapping T: H → H is said to be strongly quasi-nonexpansive if T is quasi-nonexpansive and

    ‖x_n - Tx_n‖ → 0
    (2.5)

    whenever {x_n} is a bounded sequence in H and ‖x_n - p‖ - ‖Tx_n - p‖ → 0 for some p ∈ Fix(T).

Lemma 2.10 Let M: H → 2^H be a multi-valued maximal monotone mapping, A: H → H be an α-inverse-strongly monotone mapping, and let Ω be the set of solutions of variational inclusion problem (1.1) with Ω ≠ ∅. Then the following statements hold.

  1. (i)

    If λ ∈ (0, 2α], then the mapping K: H → H defined by

    K := J_{M,λ}(I - λA)
    (2.6)

    is quasi-nonexpansive, where I is the identity mapping and J_{M,λ} is the resolvent operator associated with M.

  2. (ii)

    The mapping I - K: H → H is demiclosed at zero, i.e., for any sequence {x_n} ⊂ H, if x_n ⇀ x and (I - K)x_n → 0, then x = Kx.

  3. (iii)

    For any β ∈ (0, 1), the mapping K_β defined by

    K_β = (1 - β)I + βK
    (2.7)

    is a strongly quasi-nonexpansive mapping and Fix(K_β) = Fix(K).

  4. (iv)

    I - K_β, β ∈ (0, 1), is demiclosed at zero.

Proof (i) Since λ ∈ (0, 2α], it follows from Lemma 2.2(iii) and Proposition 2.5 that the mapping K is nonexpansive and Ω = Fix(K). This implies that K is quasi-nonexpansive.

(ii) Since K is a nonexpansive mapping on H, I - K is demiclosed at zero.

(iii) It is obvious that Fix(K_β) = Fix(K) and that K_β is quasi-nonexpansive.

Next we prove that K_β, β ∈ (0, 1), is a strongly quasi-nonexpansive mapping.

In fact, let {x_n} be any bounded sequence in H and let p ∈ Fix(K_β) be a given point such that

‖x_n - p‖ - ‖K_β x_n - p‖ → 0.
(2.8)

Now we prove that ‖K_β x_n - x_n‖ → 0.

In fact, it follows from (2.4) that

‖K_β x_n - p‖² = ‖x_n - p - β(x_n - Kx_n)‖² = ‖x_n - p‖² - 2β⟨x_n - p, x_n - Kx_n⟩ + β²‖x_n - Kx_n‖² ≤ ‖x_n - p‖² - β(1 - β)‖x_n - Kx_n‖².

Hence from (2.8) we have

β(1 - β)‖Kx_n - x_n‖² ≤ ‖x_n - p‖² - ‖K_β x_n - p‖² → 0.

Since β(1 - β) > 0, ‖Kx_n - x_n‖ → 0, and so

‖K_β x_n - x_n‖ = β‖Kx_n - x_n‖ → 0.

(iv) Since I - K_β = β(I - K) and I - K is demiclosed at zero, I - K_β is demiclosed at zero. This completes the proof. □
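The key estimate in part (iii), ‖K_β x - p‖² ≤ ‖x - p‖² - β(1 - β)‖x - Kx‖², is easy to probe numerically. The sketch below (our own toy data) takes H = R and K = P_{[0,1]}, which is quasi-nonexpansive with Fix(K) = [0, 1], and checks the inequality at random points:

```python
import random

def K(x):              # K = metric projection onto [0, 1]
    return max(0.0, min(1.0, x))

def K_beta(x, beta):   # K_beta = (1 - beta)*I + beta*K
    return (1.0 - beta) * x + beta * K(x)

random.seed(0)
beta, p = 0.3, 0.5     # p is a fixed point of K (and of K_beta)
for _ in range(1000):
    x = random.uniform(-10.0, 10.0)
    lhs = (K_beta(x, beta) - p) ** 2
    rhs = (x - p) ** 2 - beta * (1.0 - beta) * (x - K(x)) ** 2
    assert lhs <= rhs + 1e-12  # the estimate from the proof of (iii)
print("inequality verified on 1000 samples")
```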

3 Main results

Throughout this section we always assume that the following conditions are satisfied:

  1. (C1)

    M_i: H → 2^H, i = 1, 2, is a multi-valued maximal monotone mapping, A_i: H → H is an α-inverse-strongly monotone mapping, and Ω_i is the set of solutions to variational inclusion problem (1.1) with A = A_i, M = M_i and Ω_i ≠ ∅;

  2. (C2)

    K_i and K_{i,β}, β ∈ (0, 1), i = 1, 2, are the mappings defined by

    K_i := J_{M_i,λ}(I - λA_i),  λ ∈ (0, 2α],
    K_{i,β} := (1 - β)I + βK_i,  β ∈ (0, 1),
    (3.1)

    respectively.

We have the following result.

Theorem 3.1 Let A_i, M_i, Ω_i, K_i, K_{i,β}, i = 1, 2, satisfy conditions (C1) and (C2), and let f, g: H → H be contractions with contractive constant h ∈ (0, 1). Let {x_n} and {y_n} be two sequences defined by

x_0, y_0 ∈ H,
x_{n+1} = (1 - α_n)K_{1,β} x_n + α_n f(K_{2,β} y_n),
y_{n+1} = (1 - α_n)K_{2,β} y_n + α_n g(K_{1,β} x_n),  n = 0, 1, 2, …,
(3.2)

where {α_n} is a sequence in (0, 1) satisfying α_n → 0 and ∑_{n=0}^∞ α_n = ∞. Then the sequences {x_n} and {y_n} converge strongly to x* and y*, respectively, where (x*, y*) ∈ Ω_1 × Ω_2 is the unique solution of the following (bi-level) hierarchical optimization problem:

⟨x* - f(y*), x - x*⟩ ≥ 0,  ∀x ∈ Ω_1,
⟨y* - g(x*), y - y*⟩ ≥ 0,  ∀y ∈ Ω_2.
(3.3)

Proof (I) First we prove that (3.3) has a unique solution (x*, y*) ∈ Ω_1 × Ω_2.

Indeed, it follows from Proposition 2.5 and Lemma 2.10 that both sets Ω_1, Ω_2 are nonempty closed and convex and Ω_i = Fix(K_i) for each i = 1, 2. Hence the metric projection P_{Ω_i}, i = 1, 2, is well defined. It is clear that the mapping

P_{Ω_1} ∘ f ∘ P_{Ω_2} ∘ g: H → H

is a contraction. By the Banach contraction mapping principle, there exists a unique element x* ∈ H such that

x* = (P_{Ω_1} ∘ f ∘ P_{Ω_2} ∘ g)(x*).

Letting y* = P_{Ω_2} g(x*), it is easy to see (by property (2.2) of the metric projection) that

x* = (P_{Ω_1} ∘ f)(y*),  y* = (P_{Ω_2} ∘ g)(x*)

give the unique solution of (3.3).

(II) Now we prove that {x_n} and {y_n} are bounded.

In fact, it follows from Lemma 2.10 that K_{i,β}, i = 1, 2, is strongly quasi-nonexpansive and Fix(K_{i,β}) = Fix(K_i) = Ω_i. Since f is h-contractive and x* ∈ Fix(K_{1,β}), y* ∈ Fix(K_{2,β}), we have

‖x_{n+1} - x*‖ ≤ (1 - α_n)‖K_{1,β} x_n - x*‖ + α_n‖f(K_{2,β} y_n) - x*‖
≤ (1 - α_n)‖x_n - x*‖ + α_n‖f(K_{2,β} y_n) - f(y*)‖ + α_n‖f(y*) - x*‖
≤ (1 - α_n)‖x_n - x*‖ + α_n h‖K_{2,β} y_n - y*‖ + α_n‖f(y*) - x*‖
≤ (1 - α_n)‖x_n - x*‖ + α_n h‖y_n - y*‖ + α_n‖f(y*) - x*‖.

Similarly, we can also prove that

‖y_{n+1} - y*‖ ≤ (1 - α_n)‖y_n - y*‖ + α_n h‖x_n - x*‖ + α_n‖g(x*) - y*‖.

This implies that

‖x_{n+1} - x*‖ + ‖y_{n+1} - y*‖ ≤ (1 - α_n(1 - h))(‖x_n - x*‖ + ‖y_n - y*‖) + α_n(1 - h) · (‖f(y*) - x*‖ + ‖g(x*) - y*‖)/(1 - h)
≤ max{‖x_n - x*‖ + ‖y_n - y*‖, (‖f(y*) - x*‖ + ‖g(x*) - y*‖)/(1 - h)}.

By induction, we have

‖x_n - x*‖ + ‖y_n - y*‖ ≤ max{‖x_0 - x*‖ + ‖y_0 - y*‖, (‖f(y*) - x*‖ + ‖g(x*) - y*‖)/(1 - h)},  ∀n ≥ 1.

This implies that {x_n} and {y_n} are bounded. Consequently, the sequences {K_{1,β} x_n} and {K_{2,β} y_n} are both bounded.

(III) Next we prove that, for each n ≥ 1, the following inequality holds:

‖x_{n+1} - x*‖² + ‖y_{n+1} - y*‖² ≤ (1 - α_n)²(‖x_n - x*‖² + ‖y_n - y*‖²) + 2α_n h(‖x_{n+1} - x*‖‖y_n - y*‖ + ‖x_n - x*‖‖y_{n+1} - y*‖) + 2α_n(⟨f(y*) - x*, x_{n+1} - x*⟩ + ⟨g(x*) - y*, y_{n+1} - y*⟩).
(3.4)

In fact, it follows from (3.2) and Lemma 2.6(i) that

‖x_{n+1} - x*‖² = ‖(1 - α_n)(K_{1,β} x_n - x*) + α_n(f(K_{2,β} y_n) - x*)‖²
≤ (1 - α_n)²‖K_{1,β} x_n - x*‖² + 2α_n⟨f(K_{2,β} y_n) - x*, x_{n+1} - x*⟩
= (1 - α_n)²‖K_{1,β} x_n - x*‖² + 2α_n⟨f(K_{2,β} y_n) - f(y*), x_{n+1} - x*⟩ + 2α_n⟨f(y*) - x*, x_{n+1} - x*⟩
≤ (1 - α_n)²‖x_n - x*‖² + 2α_n‖f(K_{2,β} y_n) - f(y*)‖‖x_{n+1} - x*‖ + 2α_n⟨f(y*) - x*, x_{n+1} - x*⟩
≤ (1 - α_n)²‖x_n - x*‖² + 2α_n h‖y_n - y*‖‖x_{n+1} - x*‖ + 2α_n⟨f(y*) - x*, x_{n+1} - x*⟩.

Similarly, we have

‖y_{n+1} - y*‖² ≤ (1 - α_n)²‖y_n - y*‖² + 2α_n h‖x_n - x*‖‖y_{n+1} - y*‖ + 2α_n⟨g(x*) - y*, y_{n+1} - y*⟩.

Adding up the last two inequalities, inequality (3.4) is proved.

(IV) Next we prove the following fact:

If there exists a subsequence {n_k} ⊂ {n} such that

lim inf_{k→∞} (‖x_{n_k+1} - x*‖² + ‖y_{n_k+1} - y*‖² - (‖x_{n_k} - x*‖² + ‖y_{n_k} - y*‖²)) ≥ 0,

then

lim sup_{k→∞} (⟨f(y*) - x*, x_{n_k+1} - x*⟩ + ⟨g(x*) - y*, y_{n_k+1} - y*⟩) ≤ 0.

In fact, since ‖·‖² is convex and lim_{n→∞} α_n = 0, from (3.2) we have

0 ≤ lim inf_{k→∞} {‖x_{n_k+1} - x*‖² + ‖y_{n_k+1} - y*‖² - (‖x_{n_k} - x*‖² + ‖y_{n_k} - y*‖²)}
≤ lim inf_{k→∞} {(1 - α_{n_k})‖K_{1,β} x_{n_k} - x*‖² + α_{n_k}‖f(K_{2,β} y_{n_k}) - x*‖² + (1 - α_{n_k})‖K_{2,β} y_{n_k} - y*‖² + α_{n_k}‖g(K_{1,β} x_{n_k}) - y*‖² - ‖x_{n_k} - x*‖² - ‖y_{n_k} - y*‖²}
= lim inf_{k→∞} {(‖K_{1,β} x_{n_k} - x*‖² - ‖x_{n_k} - x*‖²) + (‖K_{2,β} y_{n_k} - y*‖² - ‖y_{n_k} - y*‖²)}
≤ lim sup_{k→∞} {(‖K_{1,β} x_{n_k} - x*‖² - ‖x_{n_k} - x*‖²) + (‖K_{2,β} y_{n_k} - y*‖² - ‖y_{n_k} - y*‖²)}
= lim sup_{k→∞} {(‖K_{1,β} x_{n_k} - x*‖ + ‖x_{n_k} - x*‖)(‖K_{1,β} x_{n_k} - x*‖ - ‖x_{n_k} - x*‖) + (‖K_{2,β} y_{n_k} - y*‖ + ‖y_{n_k} - y*‖)(‖K_{2,β} y_{n_k} - y*‖ - ‖y_{n_k} - y*‖)}
≤ 0.

The last inequality can be seen as follows. Since the sequences {‖K_{1,β} x_{n_k} - x*‖ + ‖x_{n_k} - x*‖} and {‖K_{2,β} y_{n_k} - y*‖ + ‖y_{n_k} - y*‖} are bounded, and K_{i,β}, i = 1, 2, is quasi-nonexpansive, we have

‖K_{1,β} x_{n_k} - x*‖ ≤ ‖x_{n_k} - x*‖ and ‖K_{2,β} y_{n_k} - y*‖ ≤ ‖y_{n_k} - y*‖.

Therefore we have

lim_{k→∞} (‖K_{1,β} x_{n_k} - x*‖ - ‖x_{n_k} - x*‖) = lim_{k→∞} (‖K_{2,β} y_{n_k} - y*‖ - ‖y_{n_k} - y*‖) = 0.
(3.5)

By Lemma 2.10(iii), the mapping K_{i,β}, i = 1, 2, is strongly quasi-nonexpansive. Hence from (3.5) we have

‖K_{1,β} x_{n_k} - x_{n_k}‖ → 0 and ‖K_{2,β} y_{n_k} - y_{n_k}‖ → 0.
(3.6)

This together with (3.2) shows that

‖x_{n_k+1} - x_{n_k}‖ → 0 and ‖y_{n_k+1} - y_{n_k}‖ → 0.

Since {x_{n_k}} is bounded and H is reflexive, there exists a subsequence {x_{n_{k_l}}} of {x_{n_k}} such that x_{n_{k_l}} ⇀ u and

lim_{l→∞} ⟨f(y*) - x*, x_{n_{k_l}} - x*⟩ = lim sup_{k→∞} ⟨f(y*) - x*, x_{n_k} - x*⟩ = lim sup_{k→∞} ⟨f(y*) - x*, x_{n_k+1} - x*⟩.

On the other hand, by virtue of Lemma 2.10(iv), I - K_{1,β} is demiclosed at zero, and so u ∈ Fix(K_{1,β}) = Ω_1. Hence from (3.3) we have

lim_{l→∞} ⟨f(y*) - x*, x_{n_{k_l}} - x*⟩ = ⟨f(y*) - x*, u - x*⟩ ≤ 0.

Consequently,

lim sup_{k→∞} ⟨f(y*) - x*, x_{n_k+1} - x*⟩ ≤ 0.

Similarly, by the same argument, we have

lim sup_{k→∞} ⟨g(x*) - y*, y_{n_k+1} - y*⟩ ≤ 0.

The desired inequality is proved.

(V) Finally we prove that the sequences {x_n} and {y_n} defined by (3.2) converge to x* and y*, respectively.

By the Cauchy-Schwarz inequality, it is easy to see that

‖x_{n+1} - x*‖‖y_n - y*‖ + ‖x_n - x*‖‖y_{n+1} - y*‖ ≤ (‖y_n - y*‖² + ‖x_n - x*‖²)^{1/2}(‖y_{n+1} - y*‖² + ‖x_{n+1} - x*‖²)^{1/2}.
(3.7)

Substituting (3.7) into (3.4), simplifying and putting

a_n := ‖x_n - x*‖² + ‖y_n - y*‖²,
b_n := 2(⟨f(y*) - x*, x_{n+1} - x*⟩ + ⟨g(x*) - y*, y_{n+1} - y*⟩),

we have the following conclusions:

  1. (i)

    By (II), {a_n} is a bounded sequence;

  2. (ii)

    From (3.4), a_{n+1} ≤ (1 - α_n)² a_n + 2α_n h (a_n a_{n+1})^{1/2} + α_n b_n, ∀n ≥ 1;

  3. (iii)

    By (IV), for any subsequence {a_{n_k}} ⊂ {a_n} satisfying

    lim inf_{k→∞} (a_{n_k+1} - a_{n_k}) ≥ 0,

    it follows that lim sup_{k→∞} b_{n_k} ≤ 0.

Hence it follows from Lemma 2.8 that x_n → x* and y_n → y*. This completes the proof of Theorem 3.1. □
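To see scheme (3.2) in action, the following sketch runs a toy instance with assumed data (not taken from the paper): H = R, A_i = 0 and M_i = ∂δ_{Ω_i} with Ω_1 = [0, 1], Ω_2 = [2, 3] (so K_i = P_{Ω_i}), f(t) = g(t) = 0.5t (h = 0.5), β = 0.5 and α_n = 1/(n + 1). For these data the unique solution of (3.3) is x* = P_{Ω_1} f(y*) = 1, y* = P_{Ω_2} g(x*) = 2:

```python
def proj(x, lo, hi):
    return max(lo, min(hi, x))

def K1b(x):  # K_{1,beta} = (1 - beta)I + beta*P_{[0,1]} with beta = 0.5
    return 0.5 * x + 0.5 * proj(x, 0.0, 1.0)

def K2b(y):  # K_{2,beta} = (1 - beta)I + beta*P_{[2,3]} with beta = 0.5
    return 0.5 * y + 0.5 * proj(y, 2.0, 3.0)

f = g = lambda t: 0.5 * t  # contractions with constant h = 0.5

x, y = 5.0, -1.0
for n in range(20000):
    a = 1.0 / (n + 1)      # alpha_n -> 0 with divergent sum
    x, y = ((1 - a) * K1b(x) + a * f(K2b(y)),
            (1 - a) * K2b(y) + a * g(K1b(x)))  # scheme (3.2)
print(x, y)  # approaches (1, 2), the unique solution of (3.3) here
```

The viscosity parameter α_n = 1/(n + 1) makes the convergence slow but guaranteed by Theorem 3.1; the iterates only need the easy mappings K_{i,β}, never the projections onto Ω_i themselves.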

Definition 3.2 A mapping F: H → H is said to be μ-Lipschitzian and r-strongly monotone if there exist constants μ > 0 and r > 0 such that

‖Fx - Fy‖ ≤ μ‖x - y‖ and ⟨Fx - Fy, x - y⟩ ≥ r‖x - y‖²,  ∀x, y ∈ H.

Remark 3.3 It is easy to prove that if F: H → H is a μ-Lipschitzian and r-strongly monotone mapping and if ρ ∈ (0, 2r/μ²), then the mapping f := I - ρF: H → H is a contraction.
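Remark 3.3 can be made quantitative: expanding ‖(I - ρF)x - (I - ρF)y‖² and using the two defining inequalities gives the bound (1 - 2ρr + ρ²μ²)‖x - y‖², and the factor is < 1 exactly when ρ ∈ (0, 2r/μ²). A quick numerical sanity check on an assumed toy example, F(x) = diag(2, 3)·x on R² (so μ = 3, r = 2):

```python
import math, random

mu, r = 3.0, 2.0          # Lipschitz and strong monotonicity constants of F
rho = 0.3                 # any rho in (0, 2r/mu^2) = (0, 4/9)
h = math.sqrt(1 - 2 * rho * r + rho ** 2 * mu ** 2)  # contraction bound
assert h < 1.0

def F(v):                 # F(x) = A x with A = diag(2, 3)
    return (2.0 * v[0], 3.0 * v[1])

def f(v):                 # f = I - rho*F
    Fv = F(v)
    return (v[0] - rho * Fv[0], v[1] - rho * Fv[1])

random.seed(1)
for _ in range(1000):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    y = (random.uniform(-5, 5), random.uniform(-5, 5))
    dxy = math.hypot(x[0] - y[0], x[1] - y[1])
    fx, fy = f(x), f(y)
    dfxy = math.hypot(fx[0] - fy[0], fx[1] - fy[1])
    assert dfxy <= h * dxy + 1e-12  # f is a contraction with constant <= h
print("contraction bound h =", round(h, 3))
```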

Now we are in a position to prove the following main result.

Theorem 3.4 Let A_i, M_i, Ω_i, K_i, K_{i,β}, i = 1, 2, be the same as in Theorem 3.1. Let F: H → H be a μ-Lipschitzian and r-strongly monotone mapping. Let {x_n} and {y_n} be the sequences defined by

x_0, y_0 ∈ H,
x_{n+1} = (1 - α_n)K_{1,β} x_n + α_n f(K_{2,β} y_n),
y_{n+1} = (1 - α_n)K_{2,β} y_n + α_n g(K_{1,β} x_n),
(3.8)

where f := I - ρF, g := I - ηF with ρ, η ∈ (0, 2r/μ²), β ∈ (0, 1), and {α_n} is a sequence in (0, 1) satisfying the following conditions:

lim_{n→∞} α_n = 0,  ∑_{n=0}^∞ α_n = ∞.

Then the sequence ({x_n}, {y_n}) converges strongly to the unique solution (x*, y*) of bi-level hierarchical variational inclusion problem (1.5).

Proof Indeed, it follows from Remark 3.3 that both mappings f, g: H → H are contractions. Therefore all the conditions in Theorem 3.1 are satisfied. By Theorem 3.1, the sequence ({x_n}, {y_n}) converges strongly to (x*, y*) ∈ Ω_1 × Ω_2, which is the unique solution of the following bi-level hierarchical optimization problem:

⟨x* - f(y*), x - x*⟩ ≥ 0,  ∀x ∈ Ω_1,
⟨y* - g(x*), y - y*⟩ ≥ 0,  ∀y ∈ Ω_2.
(3.9)

Since f = I - ρF and g = I - ηF, this becomes

⟨ρF(y*) + x* - y*, x - x*⟩ ≥ 0,  ∀x ∈ Ω_1,
⟨ηF(x*) + y* - x*, y - y*⟩ ≥ 0,  ∀y ∈ Ω_2.
(3.10)

This implies that the sequence ({x_n}, {y_n}) converges strongly to (x*, y*) ∈ Ω_1 × Ω_2, which is the unique solution of bi-level hierarchical variational inclusion problem (1.5). This completes the proof of Theorem 3.4. □

4 Some applications

In this section, we shall utilize Theorem 3.1 and Theorem 3.4 to study the convex mathematical programming problem and quadratic minimization problem.

(I) Applications to convex mathematical programming problems.

Let ψ: H → R be a convex and lower semi-continuous function with ∇ψ being μ-Lipschitzian and r-strongly monotone, i.e., satisfying the following conditions:

‖∇ψ(x) - ∇ψ(y)‖ ≤ μ‖x - y‖,  ∀x, y ∈ H, μ > 0,
(4.1)

and

⟨∇ψ(x) - ∇ψ(y), x - y⟩ ≥ r‖x - y‖²,  ∀x, y ∈ H, r > 0.
(4.2)
(4.2)

Taking η = 0, ρ ∈ (0, 2r/μ²) and F = ∇ψ in (1.5), hierarchical variational inclusion problem (1.5) reduces to the following problem:

Find a point x* ∈ Ω_1 such that

⟨∇ψ(x*), x - x*⟩ ≥ 0,  ∀x ∈ Ω_1.
(4.3)

By the subdifferential inequality, this implies that

ψ(x) - ψ(x*) ≥ ⟨∇ψ(x*), x - x*⟩ ≥ 0,  ∀x ∈ Ω_1.

Therefore we have

ψ(x) - ψ(x*) ≥ 0,  ∀x ∈ Ω_1.
(4.4)

Thus problem (4.3) reduces to the convex programming problem on Ω_1:

Find a point x* ∈ Ω_1 such that

ψ(x*) = min_{x ∈ Ω_1} ψ(x).
(4.5)

Hence, we have the following result.

Theorem 4.1 Let A_1, M_1, Ω_1, K_1, K_{1,β}, {α_n} be the same as in Theorem 3.4. Let {x_n} be the iterative sequence defined by

x_0 ∈ H,
x_{n+1} = (1 - α_n)K_{1,β} x_n + α_n(I - ρF)(K_{1,β} x_n),
(4.6)

where ρ ∈ (0, 2r/μ²), β ∈ (0, 1) and F = ∇ψ. Then {x_n} converges strongly to x* ∈ Ω_1, which is the unique solution of convex programming problem (4.5).
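As a concrete sketch of scheme (4.6) on assumed toy data (our own illustration): H = R, A_1 = 0 and M_1 = ∂δ_{[0,1]} (so K_1 = P_{[0,1]} and Ω_1 = [0, 1]), ψ(x) = ½(x - 2)² with F = ∇ψ(x) = x - 2 (μ = r = 1), β = 0.5, ρ = 0.5 ∈ (0, 2) and α_n = 1/(n + 1). The solution of (4.5) is x* = 1:

```python
def proj01(x):
    return max(0.0, min(1.0, x))

def K1b(x):  # K_{1,beta} = (1 - beta)I + beta*P_{[0,1]} with beta = 0.5
    return 0.5 * x + 0.5 * proj01(x)

grad_psi = lambda x: x - 2.0  # gradient of psi(x) = 0.5*(x - 2)^2
rho = 0.5                     # rho in (0, 2r/mu^2) = (0, 2)

x = 4.0
for n in range(20000):
    a = 1.0 / (n + 1)
    z = K1b(x)
    x = (1 - a) * z + a * (z - rho * grad_psi(z))  # scheme (4.6)
print(x)  # approaches 1, the minimizer of psi over [0, 1]
```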

(II) Applications to quadratic minimization problems.

Recall that a bounded linear operator T: H → H is said to be ξ-strongly positive if there exists a positive constant ξ such that

⟨Tx, x⟩ ≥ ξ‖x‖²,  ∀x ∈ H.

Lemma 4.2 Let T: H → H be a ξ-strongly positive linear operator and let γ be a positive number with γ < 1/‖T‖, where ‖T‖ is the norm of T defined by

‖T‖ = sup{⟨Tu, u⟩ : u ∈ H, ‖u‖ = 1}.

Then we have

  1. (1)

The linear operator F := I + γT: H → H is μ-Lipschitzian and r-strongly monotone, where μ = 1 + γ‖T‖ and r = 1 + γξ.

  2. (2)

If ρ ∈ (0, 1/(1 + γξ)), then the linear operator I - ρ(I + γT) is contractive with contractive constant h := 1 - ρ(1 + γξ).

Proof (1) In fact, for any x, y ∈ H, we have

‖(I + γT)(x - y)‖ ≤ (1 + γ‖T‖)‖x - y‖ = μ‖x - y‖.

Again, since T: H → H is a ξ-strongly positive linear operator, we have

⟨(I + γT)(x - y), x - y⟩ ≥ (1 + γξ)‖x - y‖² = r‖x - y‖².

Conclusion (1) is proved.

(2) By the definition of the norm of the bounded linear operator I - ρ(I + γT), we have

‖I - ρ(I + γT)‖ = sup{⟨(I - ρ(I + γT))u, u⟩ : u ∈ H, ‖u‖ = 1}
= sup{1 - ρ - ργ⟨Tu, u⟩ : u ∈ H, ‖u‖ = 1}
≤ 1 - ρ - ργξ,  ∀ρ ∈ (0, 1/(1 + γξ)).

Therefore, I - ρ(I + γT) is contractive with contractive constant 1 - ρ(1 + γξ). This completes the proof. □

From Theorem 3.4 and Lemma 4.2 we have the following result.

Theorem 4.3 Let A, M, K, K_β, Ω and {α_n} satisfy the same conditions as given in Theorem 3.4. Let the linear mappings T and F satisfy the same conditions as in Lemma 4.2. Then the sequence {x_n} defined by

x_0 ∈ H,
x_{n+1} = (1 - α_n)K_β x_n + α_n(I - ρF)(K_β x_n),
(4.7)

where ρ ∈ (0, 1/(1 + γξ)) and β ∈ (0, 1), converges strongly to x* ∈ Ω, which is the unique solution of the hierarchical variational inclusion problem

⟨ρ(I + γT)x*, x - x*⟩ ≥ 0,  ∀x ∈ Ω,

that is,

⟨(I + γT)x*, x - x*⟩ ≥ 0,  ∀x ∈ Ω.
(4.8)

Letting g(x) := (γ/2)⟨Tx, x⟩ + (1/2)‖x‖², it is easy to see that g: H → R_+ is a continuous and convex functional and ∇g(x*) = (I + γT)(x*). By the subdifferential inequality for g, we have

g(x) - g(x*) ≥ ⟨(I + γT)(x*), x - x*⟩ ≥ 0,  ∀x ∈ Ω.

This implies that x* solves the following quadratic minimization problem:

min_{x ∈ Ω} {(γ/2)⟨Tx, x⟩ + (1/2)‖x‖²},
(4.9)

and x_n → x*. This completes the proof.
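A minimal numerical sketch of scheme (4.7) and problem (4.9), on assumed toy data (our own illustration): H = R, Tx = 2x (so ξ = 2 and ‖T‖ = 2), γ = 0.25 < 1/‖T‖, F = I + γT = 1.5·I, ρ = 0.5 ∈ (0, 1/(1 + γξ)) = (0, 2/3), and A = 0, M = ∂δ_{[1,2]} so that K = P_{[1,2]} and Ω = [1, 2]. The objective (γ/2)⟨Tx, x⟩ + ½‖x‖² = 0.75x² attains its minimum over Ω at x* = 1:

```python
def proj(x, lo=1.0, hi=2.0):
    return max(lo, min(hi, x))

def Kb(x):  # K_beta = (1 - beta)I + beta*P_{[1,2]} with beta = 0.5
    return 0.5 * x + 0.5 * proj(x)

gamma, rho = 0.25, 0.5
F = lambda x: x + gamma * (2.0 * x)   # F = I + gamma*T with T x = 2x

x = 10.0
for n in range(20000):
    a = 1.0 / (n + 1)
    z = Kb(x)
    x = (1 - a) * z + a * (z - rho * F(z))  # scheme (4.7)
print(x)  # approaches 1, the minimizer of 0.75*x^2 over [1, 2]
```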

References

  1. Noor MA, Noor KI: Sensitivity analysis of quasi variational inclusions. J. Math. Anal. Appl. 1999, 236: 290–299. 10.1006/jmaa.1999.6424

  2. Chang SS: Set-valued variational inclusions in Banach spaces. J. Math. Anal. Appl. 2000, 248: 438–454. 10.1006/jmaa.2000.6919

  3. Chang SS: Existence and approximation of solutions of set-valued variational inclusions in Banach spaces. Nonlinear Anal. 2001, 47: 583–594. 10.1016/S0362-546X(01)00203-6

  4. Demyanov VF, Stavroulakis GE, Polyakova LN, Panagiotopoulos PD: Quasidifferentiability and Nonsmooth Modeling in Mechanics, Engineering and Economics. Kluwer Academic, Dordrecht; 1996.

  5. Zhang SS, Wang XR, Lee HW, Chan CK: Viscosity method for hierarchical fixed point and variational inequalities with applications. Appl. Math. Mech. 2011, 32: 241–250.

  6. Yao YH, Liou YC: An implicit extra-gradient method for hierarchical variational inequalities. Fixed Point Theory Appl. 2011, 2011: Article ID 697248. 10.1155/2011/697248

  7. Chang SS, Cho YJ, Kim JK: Hierarchical variational inclusion problems in Hilbert spaces with applications. J. Nonlinear Convex Anal. 2012, 13: 503–513.

  8. Xu HK: Viscosity methods for hierarchical fixed point approach to variational inequalities. Taiwan. J. Math. 2010, 14: 463–478.

  9. Cianciaruso F, Colao V, Muglia L, Xu HK: On an implicit hierarchical fixed point approach to variational inequalities. Bull. Aust. Math. Soc. 2009, 80: 117–124. 10.1017/S0004972709000082

  10. Cianciaruso F, Marino G, Muglia L, Yao Y: On a two-steps algorithm for hierarchical fixed points and variational inequalities. J. Inequal. Appl. 2009, 2009: Article ID 208692. 10.1155/2009/208692

  11. Kim JK, Kim KS: A new system of generalized nonlinear mixed quasivariational inequalities and iterative algorithms in Hilbert spaces. J. Korean Math. Soc. 2007, 44: 823–834. 10.4134/JKMS.2007.44.4.823

  12. Kim JK, Kim KS: New system of generalized mixed variational inequalities with nonlinear mappings in Hilbert spaces. J. Comput. Anal. Appl. 2010, 12: 601–612.

  13. Kim JK, Kim KS: On new system of generalized nonlinear mixed quasivariational inequalities with two-variable operators. Taiwan. J. Math. 2007, 11: 867–881.

  14. Chang SS, Cho YJ, Kim JK: On the two-step projection methods and applications to variational inequalities. Math. Inequal. Appl. 2007, 10: 755–760.

  15. Maingé PE, Moudafi A: Strong convergence of an iterative method for hierarchical fixed point problems. Pac. J. Optim. 2007, 3: 529–538.

  16. Guo G, Wang S, Cho YJ: Strong convergence algorithms for hierarchical fixed point problems and variational inequalities. J. Appl. Math. 2011, 2011: Article ID 164978. 10.1155/2011/164978

  17. Yao Y, Cho YJ, Liou YC: Hierarchical convergence of an implicit double-net algorithm for nonexpansive semigroups and variational inequality problems. Fixed Point Theory Appl. 2011, 2011: Article ID 101.

  18. Yao Y, Cho YJ, Yang P: An iterative algorithm for a hierarchical problem. J. Appl. Math. 2012, 2012: Article ID 320421. 10.1155/2012/320421

  19. Chang SS, Joseph Lee HW, Chan CK: Algorithms of common solutions for quasi variational inclusion and fixed point problems. Appl. Math. Mech. 2008, 29: 1–11.

  20. Maingé PE: New approach to solving a system of variational inequalities and hierarchical problems. J. Optim. Theory Appl. 2008, 138: 459–477. 10.1007/s10957-008-9433-z

  21. Kraikaew R, Saejung S: On Maingé's approach for hierarchical optimization problem. J. Optim. Theory Appl. 2012. 10.1007/s10957-011-9982-4

  22. Kassay G, Kolumbán J, Páles Z: Factorization of Minty and Stampacchia variational inequality systems. Eur. J. Oper. Res. 2002, 143: 377–389. 10.1016/S0377-2217(02)00290-4

  23. Kassay G, Kolumbán J: System of multi-valued variational inequalities. Publ. Math. (Debr.) 2000, 56: 185–195.

  24. Verma RU: Projection methods, algorithms, and a new system of nonlinear variational inequalities. Comput. Math. Appl. 2001, 41: 1025–1031. 10.1016/S0898-1221(00)00336-9

  25. Yamada I, Ogura N: Hybrid steepest descent method for variational inequality problem over the fixed point set of certain quasi-nonexpansive mappings. Numer. Funct. Anal. Optim. 2004, 25: 619–655.

  26. Yamada I, Ogura N, Shirakawa N: A numerically robust hybrid steepest descent method for the convexly constrained generalized inverse problems. Contemporary Mathematics. In Inverse Problems, Image Analysis, and Medical Imaging. Edited by Nashed MZ, Scherzer O. Am. Math. Soc., Providence; 2002: 269–305.

  27. Takahashi W: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama; 2009.

  28. Maingé PE: The viscosity approximation process for quasi-nonexpansive mappings in Hilbert spaces. Comput. Math. Appl. 2010, 59: 74–79. 10.1016/j.camwa.2009.09.003


Acknowledgements

The second author was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2012R1A1A2042138).

Author information

Correspondence to Jong Kyu Kim.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The main idea of this paper was proposed by JKK. JKK and SC prepared the manuscript initially and performed all the steps of the proof in this research. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Chang, Ss., Kim, J.K., Joseph Lee, H. et al. On the hierarchical variational inclusion problems in Hilbert spaces. Fixed Point Theory Appl 2013, 179 (2013). https://doi.org/10.1186/1687-1812-2013-179
