
Iterative algorithms for a system of generalized variational inequalities in Hilbert spaces

Abstract

In this paper, a new system of generalized nonlinear variational inequalities involving three operators is introduced. A three-step iterative algorithm is considered for the system of generalized nonlinear variational inequalities. Strong convergence theorems of the three-step iterative algorithm are established.

MSC:47H05, 47H09, 47J25.

1 Introduction

Variational inequalities are among the most interesting and intensively studied classes of mathematical problems and have wide applications in the fields of optimization and control, economics, transportation equilibrium and engineering sciences. There exists a vast amount of literature (see, for instance, [1–26]) on the approximation solvability of nonlinear variational inequalities as well as operator equations.

Iterative algorithms have played a central role in the approximation solvability, especially of nonlinear variational inequalities as well as of nonlinear equations, in several fields such as applied mathematics, mathematical programming, mathematical finance, control theory and optimization, engineering sciences and others. Projection methods, in particular, have played a significant role in the numerical resolution of variational inequalities. However, their convergence analysis typically requires some form of strong monotonicity in addition to Lipschitz continuity. There have been some recent developments in which convergence of projection methods is established under somewhat weaker conditions, such as cocoercivity [28] and partial relaxed monotonicity [24].

Recently, Chang et al. [17] introduced a two-step iterative algorithm for a system of nonlinear variational inequalities and established strong convergence theorems. Huang and Noor [16] introduced the so-called explicit two-step iterative algorithm for a system of nonlinear variational inequalities involving two different nonlinear operators and established strong convergence theorems.

In this paper, we consider, based on the projection method, the approximate solvability of a new system of generalized nonlinear variational inequalities involving three different nonlinear operators in the framework of Hilbert spaces. The results presented in this paper extend and improve the corresponding results announced in Huang and Noor [16], Chang et al. [17], Verma [24–26] and many others.

Let $H$ be a real Hilbert space whose inner product and norm are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$ and let $P_C$ be the metric projection from $H$ onto $C$.

Given nonlinear operators $T, f: C \to H$ and $g: C \to C$, we consider the problem of finding $u \in C$ such that

$$\big\langle g(u) - f(u) + \lambda Tu,\ v - g(u)\big\rangle \ge 0, \quad \forall v \in C,$$
(1.1)

where λ>0 is a constant. The variational inequality (1.1) is called the generalized variational inequality involving three operators.

We see that an element $u \in C$ is a solution to the generalized variational inequality (1.1) if and only if $u$ is a fixed point of the mapping

$$I - g + P_C(f - \lambda T),$$

where I is the identity mapping. This equivalence plays an important role in this work.
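The equivalence follows from the standard characterization of the metric projection: for $w \in H$, $z = P_C(w)$ if and only if $z \in C$ and $\langle w - z,\ v - z\rangle \le 0$ for all $v \in C$. Indeed, since $g(u) \in C$,

$$u = u - g(u) + P_C\big(f(u) - \lambda Tu\big) \iff g(u) = P_C\big(f(u) - \lambda Tu\big) \iff \big\langle g(u) - f(u) + \lambda Tu,\ v - g(u)\big\rangle \ge 0, \quad \forall v \in C,$$

which is precisely (1.1).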

If f=g, then the generalized variational inequality (1.1) is equivalent to the following.

Find $u \in C$ such that

$$\langle Tu,\ v - g(u)\rangle \ge 0, \quad \forall v \in C.$$
(1.2)

Further, if $g = I$, then the problem (1.2) reduces to finding $u \in C$ such that

$$\langle Tu,\ v - u\rangle \ge 0, \quad \forall v \in C,$$
(1.3)

which is known as the classical variational inequality originally introduced and studied by Stampacchia [27].

Let $T: C \to H$ be a mapping. Recall the following definitions.

  (1) $T$ is said to be monotone if

$$\langle Tu - Tv,\ u - v\rangle \ge 0, \quad \forall u, v \in C.$$

  (2) $T$ is called $\delta$-strongly monotone if there exists a constant $\delta > 0$ such that

$$\langle Tx - Ty,\ x - y\rangle \ge \delta\|x - y\|^2, \quad \forall x, y \in C.$$

This implies that

$$\|Tx - Ty\| \ge \delta\|x - y\|, \quad \forall x, y \in C,$$

that is, $T$ is $\delta$-expansive.

  (3) $T$ is said to be $\gamma$-cocoercive if there exists a constant $\gamma > 0$ such that

$$\langle Tx - Ty,\ x - y\rangle \ge \gamma\|Tx - Ty\|^2, \quad \forall x, y \in C.$$

Clearly, every $\gamma$-cocoercive mapping is $\frac{1}{\gamma}$-Lipschitz continuous (a short verification is given after this list).

  (4) $T$ is said to be relaxed $\gamma$-cocoercive if there exists a constant $\gamma > 0$ such that

$$\langle Tx - Ty,\ x - y\rangle \ge (-\gamma)\|Tx - Ty\|^2, \quad \forall x, y \in C.$$

  (5) $T$ is said to be relaxed $(\gamma, \delta)$-cocoercive if there exist two constants $\gamma, \delta > 0$ such that

$$\langle Tx - Ty,\ x - y\rangle \ge (-\gamma)\|Tx - Ty\|^2 + \delta\|x - y\|^2, \quad \forall x, y \in C.$$
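Both implications noted above follow from the Cauchy–Schwarz inequality. For a $\delta$-strongly monotone mapping,

$$\delta\|x - y\|^2 \le \langle Tx - Ty,\ x - y\rangle \le \|Tx - Ty\|\,\|x - y\|,$$

so $\|Tx - Ty\| \ge \delta\|x - y\|$; and for a $\gamma$-cocoercive mapping,

$$\gamma\|Tx - Ty\|^2 \le \langle Tx - Ty,\ x - y\rangle \le \|Tx - Ty\|\,\|x - y\|,$$

so $\|Tx - Ty\| \le \frac{1}{\gamma}\|x - y\|$.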

Let $T_i: C \times C \times C \to H$, $f_i: C \to H$ and $g_i: C \to C$ be nonlinear mappings for each $i = 1, 2, 3$. Consider the following system of generalized nonlinear variational inequalities (SGNVI).

Find $(x^*, y^*, z^*) \in C \times C \times C$ such that for all $s, t, r > 0$,

$$
\begin{cases}
\big\langle sT_1(y^*, z^*, x^*) + g_1(x^*) - f_1(y^*),\ x - g_1(x^*)\big\rangle \ge 0, & \forall x \in C,\\
\big\langle tT_2(z^*, x^*, y^*) + g_2(y^*) - f_2(z^*),\ x - g_2(y^*)\big\rangle \ge 0, & \forall x \in C,\\
\big\langle rT_3(x^*, y^*, z^*) + g_3(z^*) - f_3(x^*),\ x - g_3(z^*)\big\rangle \ge 0, & \forall x \in C.
\end{cases}
$$
(1.4)

One can easily see (by applying the projection characterization used for (1.1) to each inequality) that SGNVI (1.4) is equivalent to the following projection problem:

$$
\begin{cases}
g_1(x^*) = P_C\big(f_1(y^*) - sT_1(y^*, z^*, x^*)\big), & s > 0,\\
g_2(y^*) = P_C\big(f_2(z^*) - tT_2(z^*, x^*, y^*)\big), & t > 0,\\
g_3(z^*) = P_C\big(f_3(x^*) - rT_3(x^*, y^*, z^*)\big), & r > 0.
\end{cases}
$$
(1.5)

Next, we consider some special classes of SGNVI (1.4) as follows.

(I) If $g_1 = g_2 = g_3 = I$, then SGNVI (1.4) is reduced to the following.

Find $(x^*, y^*, z^*) \in C \times C \times C$ such that for all $s, t, r > 0$,

$$
\begin{cases}
\big\langle sT_1(y^*, z^*, x^*) + x^* - f_1(y^*),\ x - x^*\big\rangle \ge 0, & \forall x \in C,\\
\big\langle tT_2(z^*, x^*, y^*) + y^* - f_2(z^*),\ x - y^*\big\rangle \ge 0, & \forall x \in C,\\
\big\langle rT_3(x^*, y^*, z^*) + z^* - f_3(x^*),\ x - z^*\big\rangle \ge 0, & \forall x \in C.
\end{cases}
$$
(1.6)

We see that the problem (1.6) is equivalent to the following projection problem:

$$
\begin{cases}
x^* = P_C\big(f_1(y^*) - sT_1(y^*, z^*, x^*)\big), & s > 0,\\
y^* = P_C\big(f_2(z^*) - tT_2(z^*, x^*, y^*)\big), & t > 0,\\
z^* = P_C\big(f_3(x^*) - rT_3(x^*, y^*, z^*)\big), & r > 0.
\end{cases}
$$
(1.7)
(II) If $f_1 = f_2 = f_3 = I$, then SGNVI (1.4) is reduced to the following.

Find $(x^*, y^*, z^*) \in C \times C \times C$ such that for all $s, t, r > 0$,

$$
\begin{cases}
\big\langle sT_1(y^*, z^*, x^*) + g_1(x^*) - y^*,\ x - g_1(x^*)\big\rangle \ge 0, & \forall x \in C,\\
\big\langle tT_2(z^*, x^*, y^*) + g_2(y^*) - z^*,\ x - g_2(y^*)\big\rangle \ge 0, & \forall x \in C,\\
\big\langle rT_3(x^*, y^*, z^*) + g_3(z^*) - x^*,\ x - g_3(z^*)\big\rangle \ge 0, & \forall x \in C.
\end{cases}
$$
(1.8)

We see that the problem (1.8) is equivalent to the following projection problem:

$$
\begin{cases}
g_1(x^*) = P_C\big(y^* - sT_1(y^*, z^*, x^*)\big), & s > 0,\\
g_2(y^*) = P_C\big(z^* - tT_2(z^*, x^*, y^*)\big), & t > 0,\\
g_3(z^*) = P_C\big(x^* - rT_3(x^*, y^*, z^*)\big), & r > 0.
\end{cases}
$$
(1.9)
(III) If $g_1 = g_2 = g_3 = f_1 = f_2 = f_3 = I$, then SGNVI (1.4) is reduced to the following.

Find $(x^*, y^*, z^*) \in C \times C \times C$ such that for all $s, t, r > 0$,

$$
\begin{cases}
\big\langle sT_1(y^*, z^*, x^*) + x^* - y^*,\ x - x^*\big\rangle \ge 0, & \forall x \in C,\\
\big\langle tT_2(z^*, x^*, y^*) + y^* - z^*,\ x - y^*\big\rangle \ge 0, & \forall x \in C,\\
\big\langle rT_3(x^*, y^*, z^*) + z^* - x^*,\ x - z^*\big\rangle \ge 0, & \forall x \in C.
\end{cases}
$$
(1.10)

One can easily see that the problem (1.10) is equivalent to the following projection problem:

$$
\begin{cases}
x^* = P_C\big(y^* - sT_1(y^*, z^*, x^*)\big), & s > 0,\\
y^* = P_C\big(z^* - tT_2(z^*, x^*, y^*)\big), & t > 0,\\
z^* = P_C\big(x^* - rT_3(x^*, y^*, z^*)\big), & r > 0.
\end{cases}
$$
(1.11)
(IV) If $T_1$, $T_2$ and $T_3$ are univariate mappings, then SGNVI (1.4) is reduced to the following.

Find $(x^*, y^*, z^*) \in C \times C \times C$ such that for all $s, t, r > 0$,

$$
\begin{cases}
\big\langle sT_1 y^* + g_1(x^*) - f_1(y^*),\ x - g_1(x^*)\big\rangle \ge 0, & \forall x \in C,\\
\big\langle tT_2 z^* + g_2(y^*) - f_2(z^*),\ x - g_2(y^*)\big\rangle \ge 0, & \forall x \in C,\\
\big\langle rT_3 x^* + g_3(z^*) - f_3(x^*),\ x - g_3(z^*)\big\rangle \ge 0, & \forall x \in C.
\end{cases}
$$
(1.12)

One can easily see that the problem (1.12) is equivalent to the following projection problem:

$$
\begin{cases}
g_1(x^*) = P_C\big(f_1(y^*) - sT_1 y^*\big), & s > 0,\\
g_2(y^*) = P_C\big(f_2(z^*) - tT_2 z^*\big), & t > 0,\\
g_3(z^*) = P_C\big(f_3(x^*) - rT_3 x^*\big), & r > 0.
\end{cases}
$$
(1.13)

2 Preliminaries

In this section, to study the approximate solvability of the problems (1.4), (1.6), (1.8), (1.10) and (1.12), we introduce the following three-step algorithms.

Algorithm 2.1 For any $(x_0, y_0, z_0) \in C \times C \times C$, compute the sequences $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ by the following iterative process:

$$
\begin{cases}
x_{n+1} = (1 - \alpha_n)x_n + \alpha_n\big(x_n - g_1(x_n) + P_C\big(f_1(y_n) - sT_1(y_n, z_n, x_n)\big)\big), & n \ge 0,\\
g_3(z_{n+1}) = P_C\big(f_3(x_{n+1}) - rT_3(x_{n+1}, y_n, z_n)\big), & n \ge 0,\\
g_2(y_{n+1}) = P_C\big(f_2(z_{n+1}) - tT_2(z_{n+1}, x_n, y_n)\big), & n \ge 0,
\end{cases}
$$

where $r, s, t > 0$ are three constants and $\{\alpha_n\}$ is a sequence in $[0, 1]$.

If $g_1 = g_2 = g_3 = I$, then Algorithm 2.1 is reduced to the following.

Algorithm 2.2 For any $(x_0, y_0, z_0) \in C \times C \times C$, compute the sequences $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ by the following iterative process:

$$
\begin{cases}
x_{n+1} = (1 - \alpha_n)x_n + \alpha_n P_C\big(f_1(y_n) - sT_1(y_n, z_n, x_n)\big), & n \ge 0,\\
z_{n+1} = P_C\big(f_3(x_{n+1}) - rT_3(x_{n+1}, y_n, z_n)\big), & n \ge 0,\\
y_{n+1} = P_C\big(f_2(z_{n+1}) - tT_2(z_{n+1}, x_n, y_n)\big), & n \ge 0,
\end{cases}
$$

where $r, s, t > 0$ are three constants and $\{\alpha_n\}$ is a sequence in $[0, 1]$.

If $f_1 = f_2 = f_3 = I$, the identity mapping, then Algorithm 2.1 is reduced to the following.

Algorithm 2.3 For any $(x_0, y_0, z_0) \in C \times C \times C$, compute the sequences $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ by the following iterative process:

$$
\begin{cases}
x_{n+1} = (1 - \alpha_n)x_n + \alpha_n\big(x_n - g_1(x_n) + P_C\big(y_n - sT_1(y_n, z_n, x_n)\big)\big), & n \ge 0,\\
g_3(z_{n+1}) = P_C\big(x_{n+1} - rT_3(x_{n+1}, y_n, z_n)\big), & n \ge 0,\\
g_2(y_{n+1}) = P_C\big(z_{n+1} - tT_2(z_{n+1}, x_n, y_n)\big), & n \ge 0,
\end{cases}
$$

where $r, s, t > 0$ are three constants and $\{\alpha_n\}$ is a sequence in $[0, 1]$.

If $f_1 = f_2 = f_3 = g_1 = g_2 = g_3 = I$, the identity mapping, then Algorithm 2.1 is reduced to the following.

Algorithm 2.4 For any $(x_0, y_0, z_0) \in C \times C \times C$, compute the sequences $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ by the following iterative process:

$$
\begin{cases}
x_{n+1} = (1 - \alpha_n)x_n + \alpha_n P_C\big(y_n - sT_1(y_n, z_n, x_n)\big), & n \ge 0,\\
z_{n+1} = P_C\big(x_{n+1} - rT_3(x_{n+1}, y_n, z_n)\big), & n \ge 0,\\
y_{n+1} = P_C\big(z_{n+1} - tT_2(z_{n+1}, x_n, y_n)\big), & n \ge 0,
\end{cases}
$$

where $r, s, t > 0$ are three constants and $\{\alpha_n\}$ is a sequence in $[0, 1]$.
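As a purely illustrative aside, the explicit scheme in Algorithm 2.4 is straightforward to run numerically. The following minimal sketch takes $H = \mathbb{R}^d$ and $C$ a box, so that $P_C$ is componentwise clipping; the function names, the operators $T_i$ and all parameter values below are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Metric projection P_C onto the box C = [lo, hi]^d (componentwise clipping)."""
    return np.clip(x, lo, hi)

def algorithm_2_4(T1, T2, T3, x0, y0, z0, s, t, r, alphas, n_iter=200):
    """Explicit three-step scheme of Algorithm 2.4 (g_i = f_i = I):
    x_{n+1} = (1 - a_n) x_n + a_n P_C(y_n - s T1(y_n, z_n, x_n)),
    z_{n+1} = P_C(x_{n+1} - r T3(x_{n+1}, y_n, z_n)),
    y_{n+1} = P_C(z_{n+1} - t T2(z_{n+1}, x_n, y_n))."""
    x, y, z = map(np.asarray, (x0, y0, z0))
    for n in range(n_iter):
        a = alphas(n)
        x_new = (1 - a) * x + a * project_box(y - s * T1(y, z, x))
        z_new = project_box(x_new - r * T3(x_new, y, z))
        y_new = project_box(z_new - t * T2(z_new, x, y))
        x, y, z = x_new, y_new, z_new
    return x, y, z

if __name__ == "__main__":
    # Toy strongly monotone (hence relaxed cocoercive) operators on R^2,
    # chosen only to exercise the iteration; these are illustrative assumptions.
    A = np.array([[2.0, 0.3], [0.3, 2.0]])
    T1 = lambda u, v, w: A @ u   # depends on its first argument only
    T2 = lambda u, v, w: A @ u
    T3 = lambda u, v, w: A @ u
    x, y, z = algorithm_2_4(T1, T2, T3,
                            x0=np.ones(2), y0=np.ones(2), z0=np.ones(2),
                            s=0.2, t=0.2, r=0.2,
                            alphas=lambda n: 0.5)
    print(x, y, z)  # with these choices the iterates approach the solution (0, 0, 0) of (1.10)
```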

If $T_1$, $T_2$ and $T_3$ are univariate mappings, then Algorithm 2.1 is reduced to the following.

Algorithm 2.5 For any $(x_0, y_0, z_0) \in C \times C \times C$, compute the sequences $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ by the following iterative process:

$$
\begin{cases}
x_{n+1} = (1 - \alpha_n)x_n + \alpha_n\big(x_n - g_1(x_n) + P_C\big(f_1(y_n) - sT_1 y_n\big)\big), & n \ge 0,\\
g_3(z_{n+1}) = P_C\big(f_3(x_{n+1}) - rT_3 x_{n+1}\big), & n \ge 0,\\
g_2(y_{n+1}) = P_C\big(f_2(z_{n+1}) - tT_2 z_{n+1}\big), & n \ge 0,
\end{cases}
$$

where $r, s, t > 0$ are three constants and $\{\alpha_n\}$ is a sequence in $[0, 1]$.

In order to prove our main results, we also need the following lemma and definitions.

Lemma 2.6 [29]

Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that

$$a_{n+1} \le (1 - \lambda_n)a_n + b_n, \quad \forall n \ge n_0,$$

where $n_0$ is a nonnegative integer, $\{\lambda_n\}$ is a sequence in $(0, 1)$ with $\sum_{n=1}^{\infty}\lambda_n = \infty$ and $b_n = o(\lambda_n)$. Then $\lim_{n\to\infty} a_n = 0$.
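In the proof of Theorem 3.1 below, Lemma 2.6 is used with $b_n \equiv 0$. In that special case the conclusion can also be seen directly: iterating the recursion and using $1 - \lambda \le e^{-\lambda}$ gives

$$a_{n+1} \le a_{n_0}\prod_{j=n_0}^{n}(1 - \lambda_j) \le a_{n_0}\exp\Big(-\sum_{j=n_0}^{n}\lambda_j\Big) \to 0 \quad (n \to \infty),$$

since $\sum_{n=1}^{\infty}\lambda_n = \infty$.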

Definition 2.7 A mapping $T: C \times C \times C \to H$ is said to be relaxed $(\gamma, \delta)$-cocoercive in the first variable if there exist constants $\gamma, \delta > 0$ such that for all $x, x' \in C$,

$$\big\langle T(x, y, z) - T(x', y', z'),\ x - x'\big\rangle \ge (-\gamma)\big\|T(x, y, z) - T(x', y', z')\big\|^2 + \delta\|x - x'\|^2, \quad \forall y, y', z, z' \in C.$$

Definition 2.8 A mapping $T: C \times C \times C \to H$ is said to be $\beta$-Lipschitz continuous in the first variable if there exists a constant $\beta > 0$ such that for all $x, x' \in C$,

$$\big\|T(x, y, z) - T(x', y', z')\big\| \le \beta\|x - x'\|, \quad \forall y, y', z, z' \in C.$$

3 Main results

Theorem 3.1 Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$. Let $T_i: C \times C \times C \to H$ be a relaxed $(\gamma_i, \delta_i)$-cocoercive and $\beta_i$-Lipschitz continuous mapping in the first variable, $f_i: C \to H$ be a relaxed $(\eta_i, \rho_i)$-cocoercive and $\lambda_i$-Lipschitz continuous mapping and $g_i: C \to C$ be a relaxed $(\bar\eta_i, \bar\rho_i)$-cocoercive and $\bar\lambda_i$-Lipschitz continuous mapping for each $i = 1, 2, 3$. Suppose that $(x^*, y^*, z^*) \in C \times C \times C$ is a solution to the problem (1.4). Let $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ be the sequences generated by Algorithm 2.1. Assume that the following conditions are satisfied:

  (a) $\sum_{n=0}^{\infty}\alpha_n = \infty$;

  (b) $0 \le \theta_6, \theta_9 < 1$;

  (c) $(\theta_1 + \theta_2)(\theta_4 + \theta_5)(\theta_7 + \theta_8) < (1 - \theta_3)(1 - \theta_6)(1 - \theta_9)$,

where

$$\theta_1 = \sqrt{1 - 2s\delta_1 + 2s\gamma_1\beta_1^2 + s^2\beta_1^2}, \qquad \theta_2 = \sqrt{1 - 2\rho_1 + \lambda_1^2 + 2\eta_1\lambda_1^2}, \qquad \theta_3 = \sqrt{1 - 2\bar\rho_1 + \bar\lambda_1^2 + 2\bar\eta_1\bar\lambda_1^2},$$

$$\theta_4 = \sqrt{1 - 2t\delta_2 + 2t\gamma_2\beta_2^2 + t^2\beta_2^2}, \qquad \theta_5 = \sqrt{1 - 2\rho_2 + \lambda_2^2 + 2\eta_2\lambda_2^2}, \qquad \theta_6 = \sqrt{1 - 2\bar\rho_2 + \bar\lambda_2^2 + 2\bar\eta_2\bar\lambda_2^2},$$

and

$$\theta_7 = \sqrt{1 - 2r\delta_3 + 2r\gamma_3\beta_3^2 + r^2\beta_3^2}, \qquad \theta_8 = \sqrt{1 - 2\rho_3 + \lambda_3^2 + 2\eta_3\lambda_3^2}, \qquad \theta_9 = \sqrt{1 - 2\bar\rho_3 + \bar\lambda_3^2 + 2\bar\eta_3\bar\lambda_3^2}.$$

Then the sequences $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ converge strongly to $x^*$, $y^*$ and $z^*$, respectively.
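As an illustration of how the conditions can be met, note that $\theta_1 < 1$ if and only if (assuming $\delta_1 > \gamma_1\beta_1^2$, so that the range below is nonempty)

$$1 - 2s\delta_1 + 2s\gamma_1\beta_1^2 + s^2\beta_1^2 < 1 \iff 0 < s < \frac{2(\delta_1 - \gamma_1\beta_1^2)}{\beta_1^2},$$

and the analogous restrictions $0 < t < 2(\delta_2 - \gamma_2\beta_2^2)/\beta_2^2$ and $0 < r < 2(\delta_3 - \gamma_3\beta_3^2)/\beta_3^2$ keep $\theta_4$ and $\theta_7$ in $[0, 1)$.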

Proof In view of $(x^*, y^*, z^*)$ being a solution to the problem (1.4), we see that

$$
\begin{cases}
x^* = (1 - \alpha_n)x^* + \alpha_n\big(x^* - g_1(x^*) + P_C\big(f_1(y^*) - sT_1(y^*, z^*, x^*)\big)\big), & n \ge 0,\\
g_3(z^*) = P_C\big(f_3(x^*) - rT_3(x^*, y^*, z^*)\big), & n \ge 0,\\
g_2(y^*) = P_C\big(f_2(z^*) - tT_2(z^*, x^*, y^*)\big), & n \ge 0.
\end{cases}
$$

It follows from Algorithm 2.1, the nonexpansiveness of $P_C$ and the triangle inequality that

$$
\begin{aligned}
\|x_{n+1} - x^*\| \le{} & (1 - \alpha_n)\|x_n - x^*\| + \alpha_n\big\|x_n - x^* - \big(g_1(x_n) - g_1(x^*)\big)\big\| \\
& + \alpha_n\big\|(y_n - y^*) - s\big(T_1(y_n, z_n, x_n) - T_1(y^*, z^*, x^*)\big)\big\| \\
& + \alpha_n\big\|(y_n - y^*) - \big(f_1(y_n) - f_1(y^*)\big)\big\|.
\end{aligned}
$$
(3.1)

By the assumption that $T_1$ is relaxed $(\gamma_1, \delta_1)$-cocoercive and $\beta_1$-Lipschitz continuous in the first variable, we obtain that

$$\big\|(y_n - y^*) - s\big(T_1(y_n, z_n, x_n) - T_1(y^*, z^*, x^*)\big)\big\| \le \theta_1\|y_n - y^*\|,$$
(3.2)

where $\theta_1 = \sqrt{1 - 2s\delta_1 + 2s\gamma_1\beta_1^2 + s^2\beta_1^2}$. On the other hand, it follows from the assumption that $f_1$ is relaxed $(\eta_1, \rho_1)$-cocoercive and $\lambda_1$-Lipschitz continuous that

$$\big\|(y_n - y^*) - \big(f_1(y_n) - f_1(y^*)\big)\big\| \le \theta_2\|y_n - y^*\|,$$
(3.3)

where $\theta_2 = \sqrt{1 - 2\rho_1 + \lambda_1^2 + 2\eta_1\lambda_1^2}$. In a similar way, we can obtain that

$$\big\|x_n - x^* - \big(g_1(x_n) - g_1(x^*)\big)\big\| \le \theta_3\|x_n - x^*\|,$$
(3.4)

where $\theta_3 = \sqrt{1 - 2\bar\rho_1 + \bar\lambda_1^2 + 2\bar\eta_1\bar\lambda_1^2}$. Substituting (3.2), (3.3) and (3.4) into (3.1), we arrive at

$$\|x_{n+1} - x^*\| \le \big[1 - \alpha_n(1 - \theta_3)\big]\|x_n - x^*\| + \alpha_n(\theta_1 + \theta_2)\|y_n - y^*\|.$$
(3.5)

Next, we estimate $\|y_n - y^*\|$. From Algorithm 2.1, we see that

$$
\begin{aligned}
\big\|g_2(y_{n+1}) - g_2(y^*)\big\| &= \big\|P_C\big(f_2(z_{n+1}) - tT_2(z_{n+1}, x_n, y_n)\big) - P_C\big(f_2(z^*) - tT_2(z^*, x^*, y^*)\big)\big\| \\
&\le \big\|\big(f_2(z_{n+1}) - tT_2(z_{n+1}, x_n, y_n)\big) - \big(f_2(z^*) - tT_2(z^*, x^*, y^*)\big)\big\| \\
&\le \big\|(z_{n+1} - z^*) - \big(f_2(z_{n+1}) - f_2(z^*)\big)\big\| + \big\|(z_{n+1} - z^*) - t\big(T_2(z_{n+1}, x_n, y_n) - T_2(z^*, x^*, y^*)\big)\big\|.
\end{aligned}
$$
(3.6)

By the assumption that $T_2$ is relaxed $(\gamma_2, \delta_2)$-cocoercive and $\beta_2$-Lipschitz continuous in the first variable, we obtain that

$$\big\|(z_{n+1} - z^*) - t\big(T_2(z_{n+1}, x_n, y_n) - T_2(z^*, x^*, y^*)\big)\big\| \le \theta_4\|z_{n+1} - z^*\|,$$
(3.7)

where $\theta_4 = \sqrt{1 - 2t\delta_2 + 2t\gamma_2\beta_2^2 + t^2\beta_2^2}$. It follows from the assumption that $f_2$ is relaxed $(\eta_2, \rho_2)$-cocoercive and $\lambda_2$-Lipschitz continuous that

$$\big\|(z_{n+1} - z^*) - \big(f_2(z_{n+1}) - f_2(z^*)\big)\big\| \le \theta_5\|z_{n+1} - z^*\|,$$
(3.8)

where $\theta_5 = \sqrt{1 - 2\rho_2 + \lambda_2^2 + 2\eta_2\lambda_2^2}$. Substituting (3.7) and (3.8) into (3.6), we see that

$$\big\|g_2(y_{n+1}) - g_2(y^*)\big\| \le (\theta_4 + \theta_5)\|z_{n+1} - z^*\|.$$
(3.9)

On the other hand, we have

$$\|y_{n+1} - y^*\| \le \big\|y_{n+1} - y^* - \big(g_2(y_{n+1}) - g_2(y^*)\big)\big\| + \big\|g_2(y_{n+1}) - g_2(y^*)\big\|.$$
(3.10)

By the same argument as in the proof of (3.8), now using the assumptions on $g_2$, we arrive at

$$\big\|(y_{n+1} - y^*) - \big(g_2(y_{n+1}) - g_2(y^*)\big)\big\| \le \theta_6\|y_{n+1} - y^*\|,$$
(3.11)

where $\theta_6 = \sqrt{1 - 2\bar\rho_2 + \bar\lambda_2^2 + 2\bar\eta_2\bar\lambda_2^2}$. Substituting (3.9) and (3.11) into (3.10), we see that

$$\|y_{n+1} - y^*\| \le \theta_6\|y_{n+1} - y^*\| + (\theta_4 + \theta_5)\|z_{n+1} - z^*\|.$$

It follows from the condition (b) that

$$\|y_{n+1} - y^*\| \le \frac{\theta_4 + \theta_5}{1 - \theta_6}\|z_{n+1} - z^*\|.$$

That is,

$$\|y_n - y^*\| \le \frac{\theta_4 + \theta_5}{1 - \theta_6}\|z_n - z^*\|.$$
(3.12)

Finally, we estimate $\|z_n - z^*\|$. It follows from Algorithm 2.1 that

$$
\begin{aligned}
\big\|g_3(z_{n+1}) - g_3(z^*)\big\| &= \big\|P_C\big(f_3(x_{n+1}) - rT_3(x_{n+1}, y_n, z_n)\big) - P_C\big(f_3(x^*) - rT_3(x^*, y^*, z^*)\big)\big\| \\
&\le \big\|(x_{n+1} - x^*) - r\big(T_3(x_{n+1}, y_n, z_n) - T_3(x^*, y^*, z^*)\big)\big\| + \big\|(x_{n+1} - x^*) - \big(f_3(x_{n+1}) - f_3(x^*)\big)\big\|.
\end{aligned}
$$
(3.13)

In a similar way, we can show that

$$\big\|(x_{n+1} - x^*) - r\big(T_3(x_{n+1}, y_n, z_n) - T_3(x^*, y^*, z^*)\big)\big\| \le \theta_7\|x_{n+1} - x^*\|$$
(3.14)

and

$$\big\|(x_{n+1} - x^*) - \big(f_3(x_{n+1}) - f_3(x^*)\big)\big\| \le \theta_8\|x_{n+1} - x^*\|,$$
(3.15)

where $\theta_7 = \sqrt{1 - 2r\delta_3 + 2r\gamma_3\beta_3^2 + r^2\beta_3^2}$ and $\theta_8 = \sqrt{1 - 2\rho_3 + \lambda_3^2 + 2\eta_3\lambda_3^2}$. Substituting (3.14) and (3.15) into (3.13), we arrive at

$$\big\|g_3(z_{n+1}) - g_3(z^*)\big\| \le (\theta_7 + \theta_8)\|x_{n+1} - x^*\|.$$
(3.16)

Note that

$$\|z_{n+1} - z^*\| \le \big\|z_{n+1} - z^* - \big(g_3(z_{n+1}) - g_3(z^*)\big)\big\| + \big\|g_3(z_{n+1}) - g_3(z^*)\big\|.$$
(3.17)

On the other hand, we have

$$\big\|(z_{n+1} - z^*) - \big(g_3(z_{n+1}) - g_3(z^*)\big)\big\| \le \theta_9\|z_{n+1} - z^*\|,$$
(3.18)

where $\theta_9 = \sqrt{1 - 2\bar\rho_3 + \bar\lambda_3^2 + 2\bar\eta_3\bar\lambda_3^2}$. Substituting (3.16) and (3.18) into (3.17), we arrive at

$$\|z_{n+1} - z^*\| \le \theta_9\|z_{n+1} - z^*\| + (\theta_7 + \theta_8)\|x_{n+1} - x^*\|.$$

It follows from the condition (b) that

$$\|z_{n+1} - z^*\| \le \frac{\theta_7 + \theta_8}{1 - \theta_9}\|x_{n+1} - x^*\|.$$

That is,

$$\|z_n - z^*\| \le \frac{\theta_7 + \theta_8}{1 - \theta_9}\|x_n - x^*\|.$$
(3.19)

Combining (3.5), (3.12) and (3.19), we obtain that

$$\|x_{n+1} - x^*\| \le \Big(1 - \alpha_n\Big(1 - \theta_3 - (\theta_1 + \theta_2)\,\frac{\theta_4 + \theta_5}{1 - \theta_6}\cdot\frac{\theta_7 + \theta_8}{1 - \theta_9}\Big)\Big)\|x_n - x^*\|.$$

Put $k = \theta_3 + (\theta_1 + \theta_2)\frac{\theta_4 + \theta_5}{1 - \theta_6}\cdot\frac{\theta_7 + \theta_8}{1 - \theta_9}$; the conditions (b) and (c) guarantee that $0 \le k < 1$. Since $\sum_{n=0}^{\infty}\alpha_n = \infty$, Lemma 2.6 (applied with $a_n = \|x_n - x^*\|$, $\lambda_n = \alpha_n(1 - k)$ and $b_n = 0$) yields $x_n \to x^*$. The strong convergence of $\{z_n\}$ and $\{y_n\}$ to $z^*$ and $y^*$ then follows from (3.19) and (3.12), respectively. This completes the proof. □

Remark 3.2 Theorem 3.1 includes the corresponding results of Huang and Noor [16], Chang et al. [17] and Verma [24–26] as special cases.

From Theorem 3.1, we can get the following results immediately.

Corollary 3.3 Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$. Let $T_i: C \times C \times C \to H$ be a relaxed $(\gamma_i, \delta_i)$-cocoercive and $\beta_i$-Lipschitz continuous mapping in the first variable and $f_i: C \to H$ be a relaxed $(\eta_i, \rho_i)$-cocoercive and $\lambda_i$-Lipschitz continuous mapping for each $i = 1, 2, 3$. Suppose that $(x^*, y^*, z^*) \in C \times C \times C$ is a solution to the problem (1.6). Let $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ be the sequences generated by Algorithm 2.2. Assume that the following conditions are satisfied:

  (a) $\sum_{n=0}^{\infty}\alpha_n = \infty$;

  (b) $(\theta_1 + \theta_2)(\theta_4 + \theta_5)(\theta_7 + \theta_8) < 1$,

where

$$\theta_1 = \sqrt{1 - 2s\delta_1 + 2s\gamma_1\beta_1^2 + s^2\beta_1^2}, \qquad \theta_2 = \sqrt{1 - 2\rho_1 + \lambda_1^2 + 2\eta_1\lambda_1^2},$$

$$\theta_4 = \sqrt{1 - 2t\delta_2 + 2t\gamma_2\beta_2^2 + t^2\beta_2^2}, \qquad \theta_5 = \sqrt{1 - 2\rho_2 + \lambda_2^2 + 2\eta_2\lambda_2^2},$$

and

$$\theta_7 = \sqrt{1 - 2r\delta_3 + 2r\gamma_3\beta_3^2 + r^2\beta_3^2}, \qquad \theta_8 = \sqrt{1 - 2\rho_3 + \lambda_3^2 + 2\eta_3\lambda_3^2}.$$

Then the sequences $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ converge strongly to $x^*$, $y^*$ and $z^*$, respectively.

Corollary 3.4 Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$. Let $T_i: C \times C \times C \to H$ be a relaxed $(\gamma_i, \delta_i)$-cocoercive and $\beta_i$-Lipschitz continuous mapping in the first variable and $g_i: C \to C$ be a relaxed $(\bar\eta_i, \bar\rho_i)$-cocoercive and $\bar\lambda_i$-Lipschitz continuous mapping for each $i = 1, 2, 3$. Suppose that $(x^*, y^*, z^*) \in C \times C \times C$ is a solution to the problem (1.8). Let $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ be the sequences generated by Algorithm 2.3. Assume that the following conditions are satisfied:

  (a) $\sum_{n=0}^{\infty}\alpha_n = \infty$;

  (b) $0 \le \theta_6, \theta_9 < 1$;

  (c) $\theta_1\theta_4\theta_7 < (1 - \theta_3)(1 - \theta_6)(1 - \theta_9)$,

where

$$\theta_1 = \sqrt{1 - 2s\delta_1 + 2s\gamma_1\beta_1^2 + s^2\beta_1^2}, \qquad \theta_3 = \sqrt{1 - 2\bar\rho_1 + \bar\lambda_1^2 + 2\bar\eta_1\bar\lambda_1^2},$$

$$\theta_4 = \sqrt{1 - 2t\delta_2 + 2t\gamma_2\beta_2^2 + t^2\beta_2^2}, \qquad \theta_6 = \sqrt{1 - 2\bar\rho_2 + \bar\lambda_2^2 + 2\bar\eta_2\bar\lambda_2^2},$$

and

$$\theta_7 = \sqrt{1 - 2r\delta_3 + 2r\gamma_3\beta_3^2 + r^2\beta_3^2}, \qquad \theta_9 = \sqrt{1 - 2\bar\rho_3 + \bar\lambda_3^2 + 2\bar\eta_3\bar\lambda_3^2}.$$

Then the sequences $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ converge strongly to $x^*$, $y^*$ and $z^*$, respectively.

Corollary 3.5 Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$. Let $T_i: C \times C \times C \to H$ be a relaxed $(\gamma_i, \delta_i)$-cocoercive and $\beta_i$-Lipschitz continuous mapping in the first variable for each $i = 1, 2, 3$. Suppose that $(x^*, y^*, z^*) \in C \times C \times C$ is a solution to the problem (1.10). Let $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ be the sequences generated by Algorithm 2.4. Assume that the following conditions are satisfied:

  (a) $\sum_{n=0}^{\infty}\alpha_n = \infty$;

  (b) $\theta_1\theta_4\theta_7 < 1$,

where

$$\theta_1 = \sqrt{1 - 2s\delta_1 + 2s\gamma_1\beta_1^2 + s^2\beta_1^2}, \qquad \theta_4 = \sqrt{1 - 2t\delta_2 + 2t\gamma_2\beta_2^2 + t^2\beta_2^2},$$

and

$$\theta_7 = \sqrt{1 - 2r\delta_3 + 2r\gamma_3\beta_3^2 + r^2\beta_3^2}.$$

Then the sequences $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ converge strongly to $x^*$, $y^*$ and $z^*$, respectively.

Corollary 3.6 Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$. Let $T_i: C \to H$ be a relaxed $(\gamma_i, \delta_i)$-cocoercive and $\beta_i$-Lipschitz continuous mapping, $f_i: C \to H$ be a relaxed $(\eta_i, \rho_i)$-cocoercive and $\lambda_i$-Lipschitz continuous mapping and $g_i: C \to C$ be a relaxed $(\bar\eta_i, \bar\rho_i)$-cocoercive and $\bar\lambda_i$-Lipschitz continuous mapping for each $i = 1, 2, 3$. Suppose that $(x^*, y^*, z^*) \in C \times C \times C$ is a solution to the problem (1.12). Let $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ be the sequences generated by Algorithm 2.5. Assume that the following conditions are satisfied:

  (a) $\sum_{n=0}^{\infty}\alpha_n = \infty$;

  (b) $0 \le \theta_6, \theta_9 < 1$;

  (c) $(\theta_1 + \theta_2)(\theta_4 + \theta_5)(\theta_7 + \theta_8) < (1 - \theta_3)(1 - \theta_6)(1 - \theta_9)$,

where

$$\theta_1 = \sqrt{1 - 2s\delta_1 + 2s\gamma_1\beta_1^2 + s^2\beta_1^2}, \qquad \theta_2 = \sqrt{1 - 2\rho_1 + \lambda_1^2 + 2\eta_1\lambda_1^2}, \qquad \theta_3 = \sqrt{1 - 2\bar\rho_1 + \bar\lambda_1^2 + 2\bar\eta_1\bar\lambda_1^2},$$

$$\theta_4 = \sqrt{1 - 2t\delta_2 + 2t\gamma_2\beta_2^2 + t^2\beta_2^2}, \qquad \theta_5 = \sqrt{1 - 2\rho_2 + \lambda_2^2 + 2\eta_2\lambda_2^2}, \qquad \theta_6 = \sqrt{1 - 2\bar\rho_2 + \bar\lambda_2^2 + 2\bar\eta_2\bar\lambda_2^2},$$

and

$$\theta_7 = \sqrt{1 - 2r\delta_3 + 2r\gamma_3\beta_3^2 + r^2\beta_3^2}, \qquad \theta_8 = \sqrt{1 - 2\rho_3 + \lambda_3^2 + 2\eta_3\lambda_3^2}, \qquad \theta_9 = \sqrt{1 - 2\bar\rho_3 + \bar\lambda_3^2 + 2\bar\eta_3\bar\lambda_3^2}.$$

Then the sequences $\{x_n\}$, $\{y_n\}$ and $\{z_n\}$ converge strongly to $x^*$, $y^*$ and $z^*$, respectively.
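For concreteness, the constants $\theta_1, \ldots, \theta_9$ of Theorem 3.1 and its conditions (b) and (c) can be checked numerically for given parameter values. The following minimal Python sketch encodes the formulas stated in the theorem; the function and argument names, as well as the example numbers, are illustrative assumptions.

```python
import math

def theta_T(delta, gamma, beta, step):
    """Constant for a relaxed (gamma, delta)-cocoercive, beta-Lipschitz T_i with step size s, t or r:
    sqrt(1 - 2*step*delta + 2*step*gamma*beta^2 + step^2*beta^2)."""
    return math.sqrt(1 - 2 * step * delta + 2 * step * gamma * beta ** 2 + step ** 2 * beta ** 2)

def theta_fg(rho, eta, lam):
    """Constant for a relaxed (eta, rho)-cocoercive, lam-Lipschitz f_i or g_i:
    sqrt(1 - 2*rho + lam^2 + 2*eta*lam^2)."""
    return math.sqrt(1 - 2 * rho + lam ** 2 + 2 * eta * lam ** 2)

def check_theorem_3_1(T, f, g, s, t, r):
    """T, f, g are length-3 lists of parameter tuples for i = 1, 2, 3:
    T[i] = (delta, gamma, beta), f[i] = (rho, eta, lambda), g[i] = (rho_bar, eta_bar, lambda_bar).
    Returns the theta's and whether conditions (b) and (c) of Theorem 3.1 hold."""
    th = {}
    th[1], th[4], th[7] = theta_T(*T[0], s), theta_T(*T[1], t), theta_T(*T[2], r)
    th[2], th[5], th[8] = (theta_fg(*f[i]) for i in range(3))
    th[3], th[6], th[9] = (theta_fg(*g[i]) for i in range(3))
    cond_b = 0 <= th[6] < 1 and 0 <= th[9] < 1
    cond_c = (th[1] + th[2]) * (th[4] + th[5]) * (th[7] + th[8]) < (1 - th[3]) * (1 - th[6]) * (1 - th[9])
    return th, cond_b and cond_c

if __name__ == "__main__":
    # Illustrative numbers only: strongly monotone T_i and near-identity f_i, g_i.
    T = [(2.0, 0.01, 2.0)] * 3       # (delta, gamma, beta)
    f = [(1.0, 0.005, 1.0)] * 3      # (rho, eta, lambda)
    g = [(1.0, 0.005, 1.0)] * 3      # (rho_bar, eta_bar, lambda_bar)
    print(check_theorem_3_1(T, f, g, s=0.45, t=0.45, r=0.45))  # both conditions hold here
```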

References

  1. Chang SS, Yao JC, Kim JK, Yang L: Iterative approximation to convex feasibility problems in Banach space. Fixed Point Theory Appl. 2007, 2007: Article ID 046797.

  2. Zegeye H, Shahzad N: Strong convergence theorem for a common point of solution of variational inequality and fixed point problem. Adv. Fixed Point Theory 2012, 2: 374–397.

  3. Cho SY, Kang SM: Approximation of common solutions of variational inequalities via strict pseudocontractions. Acta Math. Sci. 2012, 32: 1607–1618.

  4. Husain S, Gupta S: A resolvent operator technique for solving generalized system of nonlinear relaxed cocoercive mixed variational inequalities. Adv. Fixed Point Theory 2012, 2: 18–28.

  5. Kim JK, Cho SY, Qin X: Hybrid projection algorithms for generalized equilibrium problems and strictly pseudocontractive mappings. J. Inequal. Appl. 2010, 2010: Article ID 312602.

  6. Ye J, Huang J: Strong convergence theorems for fixed point problems and generalized equilibrium problems of three relatively quasi-nonexpansive mappings in Banach spaces. J. Math. Comput. Sci. 2011, 1: 1–18.

  7. Qing Y, Kim JK: Weak convergence of algorithms for asymptotically strict pseudocontractions in the intermediate sense and equilibrium problems. Fixed Point Theory Appl. 2012, 2012: 132. 10.1186/1687-1812-2012-132

  8. Kang SM, Cho SY, Liu Z: Convergence of iterative sequences for generalized equilibrium problems involving inverse-strongly monotone mappings. J. Inequal. Appl. 2010, 2010: Article ID 827082.

  9. Lv S: Generalized systems of variational inclusions involving (A,η)-monotone mappings. Adv. Fixed Point Theory 2011, 1: 1–14.

  10. Cho SY, Kang SM: Approximation of fixed points of pseudocontraction semigroups based on a viscosity iterative process. Appl. Math. Lett. 2011, 24: 224–228. 10.1016/j.aml.2010.09.008

  11. Qin X, Cho YJ, Kang SM: Convergence theorems of common elements for equilibrium problems and fixed point problems in Banach spaces. J. Comput. Appl. Math. 2009, 225: 20–30. 10.1016/j.cam.2008.06.011

  12. Abdel-Salam HS, Al-Khaled K: Variational iteration method for solving optimization problems. J. Math. Comput. Sci. 2012, 2: 1457–1497.

  13. Liu M, Chang SS: A new iterative method for solving a system of general variational inequalities. Adv. Nonlinear Var. Inequal. 2009, 12: 51–60.

  14. Qin X, Cho SY, Kang SM: Strong convergence of shrinking projection methods for quasi-ϕ-nonexpansive mappings and equilibrium problems. J. Comput. Appl. Math. 2010, 234: 750–760. 10.1016/j.cam.2010.01.015

  15. Kim JK: Strong convergence theorems by hybrid projection methods for equilibrium problems and fixed point problems of the asymptotically quasi-ϕ-nonexpansive mappings. Fixed Point Theory Appl. 2011, 2011: 10. 10.1186/1687-1812-2011-10

  16. Huang Z, Noor MA: An explicit projection method for a system of nonlinear variational inequalities with different (γ,r)-cocoercive mappings. Appl. Math. Comput. 2007, 190: 356–361. 10.1016/j.amc.2007.01.032

  17. Chang SS, Joseph Lee HW, Chan CK: Generalized system for relaxed cocoercive variational inequalities in Hilbert spaces. Appl. Math. Lett. 2007, 20: 329–334. 10.1016/j.aml.2006.04.017

  18. Noor MA, Noor KI: Projection algorithms for solving a system of general variational inequalities. Nonlinear Anal. 2009, 70: 2700–2706. 10.1016/j.na.2008.03.057

  19. Qin X, Cho SY, Kang SM: On hybrid projection methods for asymptotically quasi-ϕ-nonexpansive mappings. Appl. Math. Comput. 2010, 215: 3874–3883. 10.1016/j.amc.2009.11.031

  20. Lv S, Wu C: Convergence of iterative algorithms for a generalized variational inequality and a nonexpansive mapping. Eng. Math. Lett. 2012, 1: 44–57.

  21. Noor MA: General variational inequalities. Appl. Math. Lett. 1988, 1: 119–121. 10.1016/0893-9659(88)90054-7

  22. Qin X, Chang SS, Cho YJ: Iterative methods for generalized equilibrium problems and fixed point problems with applications. Nonlinear Anal. 2010, 11: 2963–2972. 10.1016/j.nonrwa.2009.10.017

  23. Zhang M: Iterative algorithms for common elements in fixed point sets and zero point sets with applications. Fixed Point Theory Appl. 2012, 2012: 21. 10.1186/1687-1812-2012-21

  24. Verma RU: Generalized system for relaxed cocoercive variational inequalities and its projection methods. J. Optim. Theory Appl. 2004, 121: 203–210.

  25. Verma RU: General convergence analysis for two-step projection methods and application to variational problems. Appl. Math. Lett. 2005, 18: 1286–1292. 10.1016/j.aml.2005.02.026

  26. Verma RU: Projection methods, algorithms, and a new system of nonlinear variational inequalities. Comput. Math. Appl. 2001, 41: 1025–1031. 10.1016/S0898-1221(00)00336-9

  27. Stampacchia G: Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 1964, 258: 4413–4416.

  28. Dunn JC: Convexity, monotonicity and gradient processes in Hilbert spaces. J. Math. Anal. Appl. 1976, 53: 145–158. 10.1016/0022-247X(76)90152-9

  29. Reich S: Constructive techniques for accretive and monotone operators. In Applied Nonlinear Analysis (Proceedings of the Third International Conference, University of Texas, Arlington, TX, 1978). Academic Press, New York; 1979: 335–345.


Acknowledgements

The author is grateful to the editors. This work was supported by the Kejifazhan grant (No. 112400430123) and the Jichuheqianyanjishuyanjiu grant (No. 112400430123), Henan.

Author information


Corresponding author

Correspondence to Mingliang Zhang.

Additional information

Competing interests

The author declares that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Zhang, M. Iterative algorithms for a system of generalized variational inequalities in Hilbert spaces. Fixed Point Theory Appl 2012, 232 (2012). https://doi.org/10.1186/1687-1812-2012-232
