  • Research
  • Open Access

The convergence analysis of the projection methods for a system of generalized relaxed cocoercive variational inequalities in Hilbert spaces

Fixed Point Theory and Applications 2013, 2013:189

https://doi.org/10.1186/1687-1812-2013-189

  • Received: 4 May 2013
  • Accepted: 5 July 2013
  • Published:

Abstract

In this paper, the approximate solvability of a system of generalized relaxed cocoercive nonlinear variational inequalities in Hilbert spaces is studied, based on the convergence of projection methods. The results presented in this paper extend and improve the main results of Verma (Comput. Math. Appl. 41:1025-1031, 2001; Int. J. Differ. Equ. Appl. 6:359-367, 2002; J. Optim. Theory Appl. 121(1):203-210, 2004; Appl. Math. Lett. 18(11):1286-1292, 2005) and Chang, Lee and Chan (Appl. Math. Lett. 20:329-334, 2007).

MSC: 90C33, 65K15, 58E36.

Keywords

  • generalized relaxed cocoercive variational inequalities
  • projection methods
  • convergence analysis

1 Introduction

Variational inequalities form one of the most interesting and intensively studied classes of mathematical problems, and there is a considerable literature [1–15] on the approximate solvability of nonlinear variational inequalities. In this paper, based on projection methods, we consider the approximate solvability of a system of generalized relaxed cocoercive nonlinear variational inequalities in Hilbert spaces. The results presented in this paper extend and improve the main results in [1–5].

Throughout this paper, we assume that $H$ is a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$. Let $C$ be a nonempty closed convex subset of $H$, and let $P_C$ be the metric projection of $H$ onto $C$. Let $T_i : C \times C \to H$, $g_i : C \to C$ and $f_i : C \to H$ be relaxed cocoercive mappings for each $i = 1, 2$. We consider the following system of generalized nonlinear variational inequalities (SGNVI): find an element $(x^*, y^*) \in C \times C$ such that
$$\begin{cases} \langle \lambda T_1(y^*, x^*) + g_1(x^*) - f_1(y^*),\; x - g_1(x^*) \rangle \ge 0, & \forall x \in C \text{ and } \lambda > 0,\\ \langle \mu T_2(x^*, y^*) + g_2(y^*) - f_2(x^*),\; x - g_2(y^*) \rangle \ge 0, & \forall x \in C \text{ and } \mu > 0. \end{cases} \tag{1.1}$$
SGNVI problem (1.1) is equivalent to the following projection problem:
$$\begin{cases} g_1(x^*) = P_C\bigl(f_1(y^*) - \lambda T_1(y^*, x^*)\bigr), & \lambda > 0,\\ g_2(y^*) = P_C\bigl(f_2(x^*) - \mu T_2(x^*, y^*)\bigr), & \mu > 0. \end{cases} \tag{1.2}$$
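To see the projection characterization (1.2) at work, consider its simplest instance: a single variational inequality $\langle T(x^*), x - x^* \rangle \ge 0$ for all $x \in C$, with every other mapping the identity, so that $x^* = P_C(x^* - \lambda T(x^*))$. The sketch below uses an assumed toy instance (a box $C$, an affine strongly monotone $T$; all data are hypothetical) and verifies both the fixed-point equation and the inequality itself:

```python
import numpy as np

# Hypothetical toy instance: C = [0,1]^2, so P_C is componentwise clipping.
def proj_box(x, lo=0.0, hi=1.0):
    """Metric projection onto the box C = [lo, hi]^n."""
    return np.clip(x, lo, hi)

# T(x) = A x + b with A symmetric positive definite, hence T is strongly
# monotone and Lipschitz continuous (assumed data, for illustration only).
A = np.array([[2.0, 0.5], [0.5, 1.5]])
b = np.array([-1.0, 0.5])
T = lambda x: A @ x + b

lam = 0.1                      # step size lambda > 0, small enough to contract
x = np.zeros(2)
for _ in range(2000):          # fixed-point iteration x <- P_C(x - lam*T(x))
    x = proj_box(x - lam * T(x))

# The limit satisfies the projection equation, i.e. (1.2) in this special case...
assert np.allclose(x, proj_box(x - lam * T(x)), atol=1e-10)

# ...and the variational inequality itself, checked on random points of C.
rng = np.random.default_rng(0)
for y in rng.uniform(0.0, 1.0, size=(1000, 2)):
    assert T(x) @ (y - x) >= -1e-8
```

Here the unconstrained root of $T$ lies outside $C$, so the solution sits on the boundary of the box; the fixed-point iteration finds it without any active-set logic.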
Next we consider some special cases of SGNVI problem (1.1), where $I$ denotes the identity mapping.

(1) If $g_1 = g_2 = I$, then SGNVI problem (1.1) reduces to the following problem: find an element $(x^*, y^*) \in C \times C$ such that
$$\begin{cases} \langle \lambda T_1(y^*, x^*) + x^* - f_1(y^*),\; x - x^* \rangle \ge 0, & \forall x \in C \text{ and } \lambda > 0,\\ \langle \mu T_2(x^*, y^*) + y^* - f_2(x^*),\; x - y^* \rangle \ge 0, & \forall x \in C \text{ and } \mu > 0. \end{cases} \tag{1.3}$$

(2) If $f_1 = f_2 = I$, then SGNVI problem (1.1) reduces to the following problem: find an element $(x^*, y^*) \in C \times C$ such that
$$\begin{cases} \langle \lambda T_1(y^*, x^*) + g_1(x^*) - y^*,\; x - g_1(x^*) \rangle \ge 0, & \forall x \in C \text{ and } \lambda > 0,\\ \langle \mu T_2(x^*, y^*) + g_2(y^*) - x^*,\; x - g_2(y^*) \rangle \ge 0, & \forall x \in C \text{ and } \mu > 0. \end{cases} \tag{1.4}$$

(3) If $g_1 = g_2 = f_1 = f_2 = I$, then SGNVI problem (1.1) reduces to the following problem: find an element $(x^*, y^*) \in C \times C$ such that
$$\begin{cases} \langle \lambda T_1(y^*, x^*) + x^* - y^*,\; x - x^* \rangle \ge 0, & \forall x \in C \text{ and } \lambda > 0,\\ \langle \mu T_2(x^*, y^*) + y^* - x^*,\; x - y^* \rangle \ge 0, & \forall x \in C \text{ and } \mu > 0. \end{cases} \tag{1.5}$$

(4) If $T_1$ and $T_2$ are univariate mappings, then SGNVI problem (1.1) reduces to the following problem: find an element $(x^*, y^*) \in C \times C$ such that
$$\begin{cases} \langle \lambda T_1(y^*) + g_1(x^*) - f_1(y^*),\; x - g_1(x^*) \rangle \ge 0, & \forall x \in C \text{ and } \lambda > 0,\\ \langle \mu T_2(x^*) + g_2(y^*) - f_2(x^*),\; x - g_2(y^*) \rangle \ge 0, & \forall x \in C \text{ and } \mu > 0. \end{cases} \tag{1.6}$$

(5) If $g_1 = g_2 = f_1 = f_2 = I$, and $T_1$ and $T_2$ are univariate mappings, then SGNVI problem (1.1) reduces to the following problem: find an element $(x^*, y^*) \in C \times C$ such that
$$\begin{cases} \langle \lambda T_1(y^*) + x^* - y^*,\; x - x^* \rangle \ge 0, & \forall x \in C \text{ and } \lambda > 0,\\ \langle \mu T_2(x^*) + y^* - x^*,\; x - y^* \rangle \ge 0, & \forall x \in C \text{ and } \mu > 0. \end{cases} \tag{1.7}$$

(6) If $g_1 = g_2 = f_1 = f_2 = I$ and $\mu = 0$, then SGNVI problem (1.1) reduces to the following problem: find an element $x^* \in C$ such that
$$\langle T_1(x^*, x^*), x - x^* \rangle \ge 0, \quad \forall x \in C. \tag{1.8}$$

2 Preliminaries

In order to prove our main results in the next section, we recall several definitions and lemmas.

Definition 2.1 Let $T : C \to H$ be a mapping.

(1) $T$ is said to be $\beta$-Lipschitz continuous if there exists a constant $\beta > 0$ such that
$$\|Tx - Ty\| \le \beta \|x - y\|, \quad \forall x, y \in C.$$

(2) $T$ is said to be monotone if
$$\langle Tx - Ty, x - y \rangle \ge 0, \quad \forall x, y \in C.$$

(3) $T$ is said to be $\delta$-strongly monotone if there exists a constant $\delta > 0$ such that
$$\langle Tx - Ty, x - y \rangle \ge \delta \|x - y\|^2, \quad \forall x, y \in C.$$
This implies that
$$\|Tx - Ty\| \ge \delta \|x - y\|, \quad \forall x, y \in C,$$
that is, $T$ is $\delta$-expansive.

(4) $T$ is said to be $\gamma$-cocoercive if there exists a constant $\gamma > 0$ such that
$$\langle Tx - Ty, x - y \rangle \ge \gamma \|Tx - Ty\|^2, \quad \forall x, y \in C.$$
Clearly, every $\gamma$-cocoercive mapping $T$ is $\frac{1}{\gamma}$-Lipschitz continuous.

(5) $T$ is said to be relaxed $\gamma$-cocoercive if there exists a constant $\gamma > 0$ such that
$$\langle Tx - Ty, x - y \rangle \ge (-\gamma) \|Tx - Ty\|^2, \quad \forall x, y \in C.$$

(6) $T$ is said to be relaxed $(\gamma, \delta)$-cocoercive if there exist two constants $\gamma, \delta > 0$ such that
$$\langle Tx - Ty, x - y \rangle \ge (-\gamma) \|Tx - Ty\|^2 + \delta \|x - y\|^2, \quad \forall x, y \in C.$$
Definition 2.2 A mapping $T : C \times C \to H$ is said to be relaxed $(\gamma, \delta)$-cocoercive if there exist two constants $\gamma, \delta > 0$ such that for all $x, x' \in C$,
$$\langle T(x, y) - T(x', y'), x - x' \rangle \ge (-\gamma) \|T(x, y) - T(x', y')\|^2 + \delta \|x - x'\|^2, \quad \forall y, y' \in C.$$

Definition 2.3 A mapping $T : C \times C \to H$ is said to be $\beta$-Lipschitz continuous in the first variable if there exists a constant $\beta > 0$ such that for all $x, x' \in C$,
$$\|T(x, y) - T(x', y')\| \le \beta \|x - x'\|, \quad \forall y, y' \in C.$$

Definition 2.4 $P_C : H \to C$ is called a metric projection if for every point $x \in H$ there exists a unique nearest point in $C$, denoted by $P_C x$, such that
$$\|x - P_C x\| \le \|x - y\|, \quad \forall y \in C.$$

Lemma 2.1 If $P_C : H \to C$ is a metric projection, then $P_C$ is a nonexpansive mapping, i.e.,
$$\|P_C x - P_C y\| \le \|x - y\|, \quad \forall x, y \in H.$$
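The nearest-point property of Definition 2.4 and the nonexpansiveness of Lemma 2.1 are easy to check numerically for a concrete convex set. A small sketch, where the closed unit ball in $\mathbb{R}^3$ is an assumed example set and `proj_ball` a hypothetical helper name:

```python
import numpy as np

def proj_ball(x):
    """P_C for C = {x : ||x|| <= 1}: scale x back to the ball if outside."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.normal(size=3) * 3.0
    y = rng.normal(size=3) * 3.0
    px, py = proj_ball(x), proj_ball(y)

    # Definition 2.4: P_C x is at least as close to x as any other point of C.
    z = proj_ball(rng.normal(size=3) * 3.0)   # an arbitrary point of C
    assert np.linalg.norm(x - px) <= np.linalg.norm(x - z) + 1e-12

    # Lemma 2.1: P_C is nonexpansive.
    assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-12
```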

Lemma 2.2 [5]

Let $\{a_n\}$, $\{b_n\}$ and $\{c_n\}$ be three nonnegative real sequences such that
$$a_{n+1} \le (1 - \lambda_n) a_n + b_n + c_n, \quad \forall n \ge n_0,$$

where $n_0$ is some nonnegative integer, $\{\lambda_n\}$ is a sequence in $(0, 1)$ with $\sum_{n=1}^{\infty} \lambda_n = \infty$, $b_n = o(\lambda_n)$ and $\sum_{n=1}^{\infty} c_n < \infty$. Then $\lim_{n \to \infty} a_n = 0$.
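Lemma 2.2 can be illustrated numerically. In the sketch below, the choices $\lambda_n = 1/\sqrt{n+1}$, $b_n = (n+1)^{-3/2}$, $c_n = (n+1)^{-2}$ and the starting value $a_1 = 5$ are assumptions made purely for the demonstration; taking equality in the recursion is the worst admissible case:

```python
import math

# A sequence satisfying a_{n+1} <= (1 - lam_n) a_n + b_n + c_n, where
# lam_n is in (0,1) with divergent sum, b_n = o(lam_n), and c_n is summable,
# must tend to 0 by Lemma 2.2.
a = 5.0
for n in range(1, 200_000):
    lam = 1.0 / math.sqrt(n + 1)   # in (0, 1), sum diverges
    b = 1.0 / (n + 1) ** 1.5       # b_n = lam_n / (n+1), hence o(lam_n)
    c = 1.0 / (n + 1) ** 2         # summable perturbation
    a = (1.0 - lam) * a + b + c    # equality: the worst admissible case

assert 0.0 <= a < 1e-3             # a_n -> 0, as the lemma guarantees
```

Informally, at large $n$ the recursion balances at $a_n \approx (b_n + c_n)/\lambda_n \to 0$, which is why the perturbation conditions on $b_n$ and $c_n$ are exactly what is needed.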

3 Main results

In this section, we present the projection methods and give the convergence analysis for SGNVI problem (1.1) involving relaxed $(\gamma, \delta)$-cocoercive and $\beta$-Lipschitz continuous mappings in Hilbert spaces.

Theorem 3.1 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $T_i : C \times C \to H$ be a relaxed $(\gamma_i, \delta_i)$-cocoercive and $\alpha_i$-Lipschitz continuous mapping in the first variable, let $g_i : C \to C$ be a relaxed $(\eta_i, \rho_i)$-cocoercive and $\beta_i$-Lipschitz continuous mapping, and let $f_i : C \to H$ be a relaxed $(\bar\eta_i, \bar\rho_i)$-cocoercive and $\bar\beta_i$-Lipschitz continuous mapping for each $i = 1, 2$. Suppose that $(x^*, y^*) \in C \times C$ is a solution to SGNVI problem (1.1). For any $(x_0, y_0) \in C \times C$, the sequences $\{x_n\}$ and $\{y_n\}$ are generated by
$$\begin{cases} t_{n+1} = (1 - a_n)x_n + a_n\bigl(x_n - g_1(x_n) + P_C(f_1(y_n) - \lambda T_1(y_n, x_n))\bigr), & n \ge 0,\\ x_{n+1} = P_C t_{n+1}, & n \ge 0,\\ z_n = y_n - g_2(y_n) + P_C(f_2(x_n) - \mu T_2(x_n, y_n)), & n \ge 1,\\ y_n = (1 - b_n)x_n + b_n P_C z_n, & n \ge 1, \end{cases} \tag{3.1}$$
where $\lambda, \mu > 0$ and the following conditions are satisfied:
(1) $0 < a_n, b_n \le 1$; (2) $\sum_{n=0}^{\infty} a_n = \infty$; (3) $\sum_{n=1}^{\infty}(1 - b_n) < \infty$; (4) $0 \le \theta_2, \theta_5 < 1$; (5) $\theta_4 + \theta_6 \ge 1$; (6) $0 < (\theta_1 + \theta_3)(\theta_4 + \theta_6) < (1 - \theta_2)(1 - \theta_5)$;
where
$$\begin{aligned} \theta_1 &= \sqrt{1 - 2\lambda\delta_1 + 2\lambda\gamma_1\alpha_1^2 + \lambda^2\alpha_1^2}, & \theta_2 &= \sqrt{1 - 2\rho_1 + \beta_1^2 + 2\eta_1\beta_1^2}, & \theta_3 &= \sqrt{1 - 2\bar\rho_1 + \bar\beta_1^2 + 2\bar\eta_1\bar\beta_1^2},\\ \theta_4 &= \sqrt{1 - 2\mu\delta_2 + 2\mu\gamma_2\alpha_2^2 + \mu^2\alpha_2^2}, & \theta_5 &= \sqrt{1 - 2\rho_2 + \beta_2^2 + 2\eta_2\beta_2^2}, & \theta_6 &= \sqrt{1 - 2\bar\rho_2 + \bar\beta_2^2 + 2\bar\eta_2\bar\beta_2^2}. \end{aligned}$$

Then the sequences $\{x_n\}$ and $\{y_n\}$ converge strongly to $x^*$ and $y^*$, respectively.

Proof Since $(x^*, y^*) \in C \times C$ is a solution to SGNVI problem (1.1), it follows from (1.2) that
$$\begin{cases} x^* = x^* - g_1(x^*) + P_C\bigl(f_1(y^*) - \lambda T_1(y^*, x^*)\bigr),\\ y^* = y^* - g_2(y^*) + P_C\bigl(f_2(x^*) - \mu T_2(x^*, y^*)\bigr). \end{cases}$$
For $n \ge 1$, we have
$$\begin{aligned} \|x_{n+1} - x^*\| &= \|P_C t_{n+1} - P_C x^*\| \le \|t_{n+1} - x^*\|\\ &= \bigl\|(1 - a_n)x_n + a_n\bigl(x_n - g_1(x_n) + P_C(f_1(y_n) - \lambda T_1(y_n, x_n))\bigr) - x^*\bigr\|\\ &\le (1 - a_n)\|x_n - x^*\| + a_n\bigl\|\bigl(x_n - g_1(x_n) + P_C(f_1(y_n) - \lambda T_1(y_n, x_n))\bigr) - \bigl(x^* - g_1(x^*) + P_C(f_1(y^*) - \lambda T_1(y^*, x^*))\bigr)\bigr\|\\ &\le (1 - a_n)\|x_n - x^*\| + a_n\|x_n - x^* - (g_1(x_n) - g_1(x^*))\| + a_n\bigl\|P_C(f_1(y_n) - \lambda T_1(y_n, x_n)) - P_C(f_1(y^*) - \lambda T_1(y^*, x^*))\bigr\|\\ &\le (1 - a_n)\|x_n - x^*\| + a_n\|x_n - x^* - (g_1(x_n) - g_1(x^*))\| + a_n\bigl\|f_1(y_n) - f_1(y^*) - \lambda(T_1(y_n, x_n) - T_1(y^*, x^*))\bigr\|\\ &\le (1 - a_n)\|x_n - x^*\| + a_n\|x_n - x^* - (g_1(x_n) - g_1(x^*))\| + a_n\|y_n - y^* - (f_1(y_n) - f_1(y^*))\| + a_n\|y_n - y^* - \lambda(T_1(y_n, x_n) - T_1(y^*, x^*))\|. \end{aligned} \tag{3.2}$$
Since $g_1$ is a relaxed $(\eta_1, \rho_1)$-cocoercive and $\beta_1$-Lipschitz continuous mapping, we have
$$\begin{aligned} \|x_n - x^* - (g_1(x_n) - g_1(x^*))\|^2 &= \|x_n - x^*\|^2 - 2\langle g_1(x_n) - g_1(x^*), x_n - x^* \rangle + \|g_1(x_n) - g_1(x^*)\|^2\\ &\le \|x_n - x^*\|^2 - 2\bigl((-\eta_1)\|g_1(x_n) - g_1(x^*)\|^2 + \rho_1\|x_n - x^*\|^2\bigr) + \beta_1^2\|x_n - x^*\|^2\\ &= (1 - 2\rho_1 + \beta_1^2)\|x_n - x^*\|^2 + 2\eta_1\|g_1(x_n) - g_1(x^*)\|^2\\ &\le (1 - 2\rho_1 + \beta_1^2)\|x_n - x^*\|^2 + 2\eta_1\beta_1^2\|x_n - x^*\|^2\\ &= \theta_2^2\|x_n - x^*\|^2, \end{aligned} \tag{3.3}$$
where $\theta_2 = \sqrt{1 - 2\rho_1 + \beta_1^2 + 2\eta_1\beta_1^2}$. In a similar way, we can obtain
$$\|y_n - y^* - (f_1(y_n) - f_1(y^*))\| \le \theta_3\|y_n - y^*\|, \tag{3.4}$$
where $\theta_3 = \sqrt{1 - 2\bar\rho_1 + \bar\beta_1^2 + 2\bar\eta_1\bar\beta_1^2}$. By the assumption that $T_1$ is a relaxed $(\gamma_1, \delta_1)$-cocoercive and $\alpha_1$-Lipschitz continuous mapping in the first variable, we have
$$\begin{aligned} \|y_n - y^* - \lambda(T_1(y_n, x_n) - T_1(y^*, x^*))\|^2 &= \|y_n - y^*\|^2 - 2\lambda\langle T_1(y_n, x_n) - T_1(y^*, x^*), y_n - y^* \rangle + \lambda^2\|T_1(y_n, x_n) - T_1(y^*, x^*)\|^2\\ &\le \|y_n - y^*\|^2 - 2\lambda\bigl((-\gamma_1)\|T_1(y_n, x_n) - T_1(y^*, x^*)\|^2 + \delta_1\|y_n - y^*\|^2\bigr) + \lambda^2\|T_1(y_n, x_n) - T_1(y^*, x^*)\|^2\\ &= (1 - 2\lambda\delta_1)\|y_n - y^*\|^2 + (2\lambda\gamma_1 + \lambda^2)\|T_1(y_n, x_n) - T_1(y^*, x^*)\|^2\\ &\le (1 - 2\lambda\delta_1)\|y_n - y^*\|^2 + (2\lambda\gamma_1 + \lambda^2)\alpha_1^2\|y_n - y^*\|^2\\ &= \theta_1^2\|y_n - y^*\|^2, \end{aligned} \tag{3.5}$$
where $\theta_1 = \sqrt{1 - 2\lambda\delta_1 + 2\lambda\gamma_1\alpha_1^2 + \lambda^2\alpha_1^2}$. Combining (3.2), (3.3), (3.4) and (3.5), we obtain
$$\|x_{n+1} - x^*\| \le [1 - a_n(1 - \theta_2)]\|x_n - x^*\| + a_n(\theta_1 + \theta_3)\|y_n - y^*\|. \tag{3.6}$$
Next, we estimate $\|y_n - y^*\|$. From (3.1) and the nonexpansiveness of $P_C$, we see that
$$\begin{aligned} \|y_n - y^*\| &= \|(1 - b_n)x_n + b_n P_C z_n - y^*\|\\ &\le (1 - b_n)\|x_n - y^*\| + b_n\|P_C z_n - P_C y^*\|\\ &\le (1 - b_n)\|x_n - y^*\| + b_n\|z_n - y^*\|\\ &= (1 - b_n)\|x_n - y^*\| + b_n\bigl\|y_n - g_2(y_n) + P_C(f_2(x_n) - \mu T_2(x_n, y_n)) - \bigl(y^* - g_2(y^*) + P_C(f_2(x^*) - \mu T_2(x^*, y^*))\bigr)\bigr\|\\ &\le (1 - b_n)\|x_n - y^*\| + b_n\|y_n - y^* - (g_2(y_n) - g_2(y^*))\| + b_n\bigl\|P_C(f_2(x_n) - \mu T_2(x_n, y_n)) - P_C(f_2(x^*) - \mu T_2(x^*, y^*))\bigr\|\\ &\le (1 - b_n)\|x_n - y^*\| + b_n\|y_n - y^* - (g_2(y_n) - g_2(y^*))\| + b_n\|x_n - x^* - (f_2(x_n) - f_2(x^*))\| + b_n\|x_n - x^* - \mu(T_2(x_n, y_n) - T_2(x^*, y^*))\|. \end{aligned} \tag{3.7}$$
Similarly, we obtain
$$\|y_n - y^* - (g_2(y_n) - g_2(y^*))\| \le \theta_5\|y_n - y^*\|, \tag{3.8}$$
where $\theta_5 = \sqrt{1 - 2\rho_2 + \beta_2^2 + 2\eta_2\beta_2^2}$,
$$\|x_n - x^* - (f_2(x_n) - f_2(x^*))\| \le \theta_6\|x_n - x^*\|, \tag{3.9}$$
where $\theta_6 = \sqrt{1 - 2\bar\rho_2 + \bar\beta_2^2 + 2\bar\eta_2\bar\beta_2^2}$, and
$$\|x_n - x^* - \mu(T_2(x_n, y_n) - T_2(x^*, y^*))\| \le \theta_4\|x_n - x^*\|, \tag{3.10}$$
where $\theta_4 = \sqrt{1 - 2\mu\delta_2 + 2\mu\gamma_2\alpha_2^2 + \mu^2\alpha_2^2}$. Combining (3.7), (3.8), (3.9) and (3.10), we obtain
$$\begin{aligned} \|y_n - y^*\| &\le (1 - b_n)\|x_n - y^*\| + b_n\theta_5\|y_n - y^*\| + b_n\theta_6\|x_n - x^*\| + b_n\theta_4\|x_n - x^*\|\\ &\le (1 - b_n)\|x_n - x^*\| + (1 - b_n)\|x^* - y^*\| + b_n\theta_5\|y_n - y^*\| + b_n(\theta_4 + \theta_6)\|x_n - x^*\|, \end{aligned} \tag{3.11}$$
that is,
$$(1 - b_n\theta_5)\|y_n - y^*\| \le (1 - b_n)\|x^* - y^*\| + [1 - b_n + b_n(\theta_4 + \theta_6)]\|x_n - x^*\|. \tag{3.12}$$
By conditions (1), (4) and (5), we have
$$\frac{1}{1 - b_n\theta_5} \le \frac{1}{1 - \theta_5} \quad \text{and} \quad 1 - b_n + b_n(\theta_4 + \theta_6) \le \theta_4 + \theta_6. \tag{3.13}$$
Substituting (3.13) into (3.12), we have
$$\|y_n - y^*\| \le \frac{1 - b_n}{1 - \theta_5}\|x^* - y^*\| + \frac{\theta_4 + \theta_6}{1 - \theta_5}\|x_n - x^*\|. \tag{3.14}$$
According to (3.6) and (3.14), for $n \ge 1$, we have
$$\begin{aligned} \|x_{n+1} - x^*\| &\le \Bigl[1 - a_n\Bigl(1 - \theta_2 - \frac{(\theta_1 + \theta_3)(\theta_4 + \theta_6)}{1 - \theta_5}\Bigr)\Bigr]\|x_n - x^*\| + \frac{a_n(\theta_1 + \theta_3)}{1 - \theta_5}(1 - b_n)\|x^* - y^*\|\\ &\le \Bigl[1 - a_n\Bigl(1 - \theta_2 - \frac{(\theta_1 + \theta_3)(\theta_4 + \theta_6)}{1 - \theta_5}\Bigr)\Bigr]\|x_n - x^*\| + \frac{\theta_1 + \theta_3}{1 - \theta_5}(1 - b_n)\|x^* - y^*\|. \end{aligned} \tag{3.15}$$
From conditions (1), (2), (3) and (6), we get
$$a_n\Bigl(1 - \theta_2 - \frac{(\theta_1 + \theta_3)(\theta_4 + \theta_6)}{1 - \theta_5}\Bigr) \in (0, 1), \qquad \sum_{n=1}^{\infty} a_n\Bigl(1 - \theta_2 - \frac{(\theta_1 + \theta_3)(\theta_4 + \theta_6)}{1 - \theta_5}\Bigr) = \infty$$
and
$$\sum_{n=1}^{\infty} \frac{\theta_1 + \theta_3}{1 - \theta_5}(1 - b_n)\|x^* - y^*\| < \infty.$$

Hence the conditions of Lemma 2.2 are satisfied, and therefore $\|x_n - x^*\| \to 0$ as $n \to \infty$, i.e., $x_n \to x^*$ as $n \to \infty$.

On the one hand, from condition (3) we know that $1 - b_n \to 0$ as $n \to \infty$. On the other hand, from (3.14) and the fact that $x_n \to x^*$ as $n \to \infty$, we get $\|y_n - y^*\| \to 0$ as $n \to \infty$, i.e., $y_n \to y^*$ as $n \to \infty$.

This completes the proof of Theorem 3.1. □
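To make scheme (3.1) concrete, the following sketch runs it on an assumed toy instance: $C = [0,1]^2$, a designed solution $x^* = y^* = p$, mappings $T_1 = T_2$ depending only on their first argument, $g_1$ a contraction fixing $p$, and $g_2 = f_1 = f_2 = I$ (which makes $z_n$ explicit). All data and parameter values are hypothetical, and the cocoercivity constants of such mappings are not unique, so they can be chosen to match conditions (1)–(6):

```python
import numpy as np

proj = lambda v: np.clip(v, 0.0, 1.0)   # P_C for the box C = [0, 1]^2
p = np.array([0.3, 0.6])                # designed solution, interior to C

T1 = T2 = lambda u, v: u - p            # Lipschitz in the first variable only
g1 = lambda x: 0.5 * (x + p)            # contraction with g1(p) = p, maps C to C
g2 = f1 = f2 = lambda x: x              # identities, so z_n is explicit in x_n
lam = mu = 0.9

x = np.ones(2)                          # x_0 in C
y = x.copy()
for n in range(300):
    a_n = 0.5                           # 0 < a_n <= 1, sum a_n = infinity
    b_n = 1.0 - 1.0 / (n + 2) ** 2      # sum (1 - b_n) < infinity
    # scheme (3.1): with g2 = I the term y - g2(y) vanishes
    z = y - g2(y) + proj(f2(x) - mu * T2(x, y))
    y = (1 - b_n) * x + b_n * proj(z)
    t = (1 - a_n) * x + a_n * (x - g1(x) + proj(f1(y) - lam * T1(y, x)))
    x = proj(t)

# The limit pair satisfies the projection characterization (1.2) ...
assert np.allclose(g1(x), proj(f1(y) - lam * T1(y, x)), atol=1e-8)
assert np.allclose(g2(y), proj(f2(x) - mu * T2(x, y)), atol=1e-8)
# ... and equals the designed solution (x*, y*) = (p, p).
assert np.allclose(x, p, atol=1e-8) and np.allclose(y, p, atol=1e-8)
```

Note that, as printed, the third and fourth lines of (3.1) couple $z_n$ and $y_n$; the choice $g_2 = I$ together with the first-variable dependence of $T_2$ is what makes the update explicit in this sketch.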

When $g_1 = g_2 = I$ or $f_1 = f_2 = I$, we have $\theta_2 = \theta_5 = 0$ or $\theta_3 = \theta_6 = 0$, respectively. From Theorem 3.1 we then obtain the following results immediately.

Corollary 3.1 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $T_i : C \times C \to H$ be a relaxed $(\gamma_i, \delta_i)$-cocoercive and $\alpha_i$-Lipschitz continuous mapping in the first variable, and let $f_i : C \to H$ be a relaxed $(\bar\eta_i, \bar\rho_i)$-cocoercive and $\bar\beta_i$-Lipschitz continuous mapping for each $i = 1, 2$. Suppose that $(x^*, y^*) \in C \times C$ is a solution to problem (1.3). For any $(x_0, y_0) \in C \times C$, the sequences $\{x_n\}$ and $\{y_n\}$ are generated by
$$\begin{cases} x_{n+1} = (1 - a_n)x_n + a_n P_C(f_1(y_n) - \lambda T_1(y_n, x_n)), & n \ge 0,\\ y_n = (1 - b_n)x_n + b_n P_C(f_2(x_n) - \mu T_2(x_n, y_n)), & n \ge 1, \end{cases} \tag{3.16}$$
where $\lambda, \mu > 0$ and the following conditions are satisfied:
(1) $0 < a_n, b_n \le 1$; (2) $\sum_{n=0}^{\infty} a_n = \infty$; (3) $\sum_{n=1}^{\infty}(1 - b_n) < \infty$; (4) $\theta_4 + \theta_6 \ge 1$; (5) $0 < (\theta_1 + \theta_3)(\theta_4 + \theta_6) < 1$;
where
$$\theta_1 = \sqrt{1 - 2\lambda\delta_1 + 2\lambda\gamma_1\alpha_1^2 + \lambda^2\alpha_1^2}, \qquad \theta_3 = \sqrt{1 - 2\bar\rho_1 + \bar\beta_1^2 + 2\bar\eta_1\bar\beta_1^2},$$
$$\theta_4 = \sqrt{1 - 2\mu\delta_2 + 2\mu\gamma_2\alpha_2^2 + \mu^2\alpha_2^2}, \qquad \theta_6 = \sqrt{1 - 2\bar\rho_2 + \bar\beta_2^2 + 2\bar\eta_2\bar\beta_2^2}.$$

Corollary 3.2 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $T_i : C \times C \to H$ be a relaxed $(\gamma_i, \delta_i)$-cocoercive and $\alpha_i$-Lipschitz continuous mapping in the first variable, and let $g_i : C \to C$ be a relaxed $(\eta_i, \rho_i)$-cocoercive and $\beta_i$-Lipschitz continuous mapping for each $i = 1, 2$. Suppose that $(x^*, y^*) \in C \times C$ is a solution to problem (1.4). For any $(x_0, y_0) \in C \times C$, the sequences $\{x_n\}$ and $\{y_n\}$ are generated by
$$\begin{cases} t_{n+1} = (1 - a_n)x_n + a_n\bigl(x_n - g_1(x_n) + P_C(y_n - \lambda T_1(y_n, x_n))\bigr), & n \ge 0,\\ x_{n+1} = P_C t_{n+1}, & n \ge 0,\\ z_n = y_n - g_2(y_n) + P_C(x_n - \mu T_2(x_n, y_n)), & n \ge 1,\\ y_n = (1 - b_n)x_n + b_n P_C z_n, & n \ge 1, \end{cases} \tag{3.17}$$
where $\lambda, \mu > 0$ and the following conditions are satisfied:
(1) $0 < a_n, b_n \le 1$; (2) $\sum_{n=0}^{\infty} a_n = \infty$; (3) $\sum_{n=1}^{\infty}(1 - b_n) < \infty$; (4) $0 \le \theta_2, \theta_5 < 1$; (5) $\theta_4 \ge 1$; (6) $0 < \theta_1\theta_4 < (1 - \theta_2)(1 - \theta_5)$;
where
$$\theta_1 = \sqrt{1 - 2\lambda\delta_1 + 2\lambda\gamma_1\alpha_1^2 + \lambda^2\alpha_1^2}, \qquad \theta_2 = \sqrt{1 - 2\rho_1 + \beta_1^2 + 2\eta_1\beta_1^2},$$
$$\theta_4 = \sqrt{1 - 2\mu\delta_2 + 2\mu\gamma_2\alpha_2^2 + \mu^2\alpha_2^2}, \qquad \theta_5 = \sqrt{1 - 2\rho_2 + \beta_2^2 + 2\eta_2\beta_2^2}.$$

Corollary 3.3 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $T_i : C \times C \to H$ be a relaxed $(\gamma_i, \delta_i)$-cocoercive and $\alpha_i$-Lipschitz continuous mapping in the first variable for each $i = 1, 2$. Suppose that $(x^*, y^*) \in C \times C$ is a solution to problem (1.5). For any $(x_0, y_0) \in C \times C$, the sequences $\{x_n\}$ and $\{y_n\}$ are generated by
$$\begin{cases} x_{n+1} = (1 - a_n)x_n + a_n P_C(y_n - \lambda T_1(y_n, x_n)), & n \ge 0,\\ y_n = (1 - b_n)x_n + b_n P_C(x_n - \mu T_2(x_n, y_n)), & n \ge 1, \end{cases} \tag{3.18}$$
where $\lambda, \mu > 0$ and the following conditions are satisfied:
(1) $0 < a_n, b_n \le 1$; (2) $\sum_{n=0}^{\infty} a_n = \infty$; (3) $\sum_{n=1}^{\infty}(1 - b_n) < \infty$; (4) $\theta_4 \ge 1$; (5) $0 < \theta_1\theta_4 < 1$;
where
$$\theta_1 = \sqrt{1 - 2\lambda\delta_1 + 2\lambda\gamma_1\alpha_1^2 + \lambda^2\alpha_1^2}, \qquad \theta_4 = \sqrt{1 - 2\mu\delta_2 + 2\mu\gamma_2\alpha_2^2 + \mu^2\alpha_2^2}.$$

Corollary 3.4 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $T_i : C \to H$ be a relaxed $(\gamma_i, \delta_i)$-cocoercive and $\alpha_i$-Lipschitz continuous mapping, let $g_i : C \to C$ be a relaxed $(\eta_i, \rho_i)$-cocoercive and $\beta_i$-Lipschitz continuous mapping, and let $f_i : C \to H$ be a relaxed $(\bar\eta_i, \bar\rho_i)$-cocoercive and $\bar\beta_i$-Lipschitz continuous mapping for each $i = 1, 2$. Suppose that $(x^*, y^*) \in C \times C$ is a solution to problem (1.6). For any $(x_0, y_0) \in C \times C$, the sequences $\{x_n\}$ and $\{y_n\}$ are generated by
$$\begin{cases} t_{n+1} = (1 - a_n)x_n + a_n\bigl(x_n - g_1(x_n) + P_C(f_1(y_n) - \lambda T_1(y_n))\bigr), & n \ge 0,\\ x_{n+1} = P_C t_{n+1}, & n \ge 0,\\ z_n = y_n - g_2(y_n) + P_C(f_2(x_n) - \mu T_2(x_n)), & n \ge 1,\\ y_n = (1 - b_n)x_n + b_n P_C z_n, & n \ge 1, \end{cases} \tag{3.19}$$
where $\lambda, \mu > 0$ and the following conditions are satisfied:
(1) $0 < a_n, b_n \le 1$; (2) $\sum_{n=0}^{\infty} a_n = \infty$; (3) $\sum_{n=1}^{\infty}(1 - b_n) < \infty$; (4) $0 \le \theta_2, \theta_5 < 1$; (5) $\theta_4 + \theta_6 \ge 1$; (6) $0 < (\theta_1 + \theta_3)(\theta_4 + \theta_6) < (1 - \theta_2)(1 - \theta_5)$;
where
$$\begin{aligned} \theta_1 &= \sqrt{1 - 2\lambda\delta_1 + 2\lambda\gamma_1\alpha_1^2 + \lambda^2\alpha_1^2}, & \theta_2 &= \sqrt{1 - 2\rho_1 + \beta_1^2 + 2\eta_1\beta_1^2}, & \theta_3 &= \sqrt{1 - 2\bar\rho_1 + \bar\beta_1^2 + 2\bar\eta_1\bar\beta_1^2},\\ \theta_4 &= \sqrt{1 - 2\mu\delta_2 + 2\mu\gamma_2\alpha_2^2 + \mu^2\alpha_2^2}, & \theta_5 &= \sqrt{1 - 2\rho_2 + \beta_2^2 + 2\eta_2\beta_2^2}, & \theta_6 &= \sqrt{1 - 2\bar\rho_2 + \bar\beta_2^2 + 2\bar\eta_2\bar\beta_2^2}. \end{aligned}$$

Corollary 3.5 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $T_i : C \to H$ be a relaxed $(\gamma_i, \delta_i)$-cocoercive and $\alpha_i$-Lipschitz continuous mapping for each $i = 1, 2$. Suppose that $(x^*, y^*) \in C \times C$ is a solution to problem (1.7). For any $(x_0, y_0) \in C \times C$, the sequences $\{x_n\}$ and $\{y_n\}$ are generated by
$$\begin{cases} x_{n+1} = (1 - a_n)x_n + a_n P_C(y_n - \lambda T_1(y_n)), & n \ge 0,\\ y_n = (1 - b_n)x_n + b_n P_C(x_n - \mu T_2(x_n)), & n \ge 1, \end{cases} \tag{3.20}$$
where $\lambda, \mu > 0$ and the following conditions are satisfied:
(1) $0 < a_n, b_n \le 1$; (2) $\sum_{n=0}^{\infty} a_n = \infty$; (3) $\sum_{n=1}^{\infty}(1 - b_n) < \infty$; (4) $\theta_4 \ge 1$; (5) $0 < \theta_1\theta_4 < 1$;
where
$$\theta_1 = \sqrt{1 - 2\lambda\delta_1 + 2\lambda\gamma_1\alpha_1^2 + \lambda^2\alpha_1^2}, \qquad \theta_4 = \sqrt{1 - 2\mu\delta_2 + 2\mu\gamma_2\alpha_2^2 + \mu^2\alpha_2^2}.$$

Letting $\mu = 0$ and $b_n = 1$ in Corollary 3.3, we arrive at the following result.

Corollary 3.6 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $T_1 : C \times C \to H$ be a relaxed $(\gamma_1, \delta_1)$-cocoercive and $\alpha_1$-Lipschitz continuous mapping in the first variable. Suppose that $x^* \in C$ is a solution to problem (1.8). For any $x_0 \in C$, the sequence $\{x_n\}$ is generated by
$$x_{n+1} = (1 - a_n)x_n + a_n P_C(x_n - \lambda T_1(x_n, x_n)), \quad n \ge 0, \tag{3.21}$$
where $\lambda > 0$ and the following conditions are satisfied:
(1) $0 < a_n \le 1$; (2) $\sum_{n=0}^{\infty} a_n = \infty$; (3) $0 < \theta_1 < 1$;
where
$$\theta_1 = \sqrt{1 - 2\lambda\delta_1 + 2\lambda\gamma_1\alpha_1^2 + \lambda^2\alpha_1^2}.$$

Then the sequence $\{x_n\}$ converges strongly to $x^*$.
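Scheme (3.21) is the classical relaxed projection method for a single variational inequality. A minimal sketch on an assumed toy instance, where $B$ has symmetric part $I$ (so $T_1$ is strongly monotone) but is not symmetric, so the problem is not the optimality condition of any minimization; all data are hypothetical:

```python
import numpy as np

# Iteration (3.21): x_{n+1} = (1 - a_n) x_n + a_n P_C(x_n - lam * T1(x_n, x_n))
proj = lambda v: np.clip(v, 0.0, 1.0)      # P_C for C = [0, 1]^2
B = np.array([[1.0, 1.0], [-1.0, 1.0]])    # nonsymmetric, symmetric part = I
c = np.array([-0.5, -0.5])
T1 = lambda x, y: B @ x + c                # depends only on its first argument

lam = 0.5                                  # then ||I - lam*B|| = sqrt(0.5) < 1
x = np.array([1.0, 1.0])                   # x_0 in C
for n in range(500):
    a_n = 0.5                              # 0 < a_n <= 1, sum a_n = infinity
    x = (1 - a_n) * x + a_n * proj(x - lam * T1(x, x))

# The limit solves <T1(x*, x*), z - x*> >= 0 for all z in C.
assert np.allclose(x, proj(x - lam * T1(x, x)), atol=1e-8)
```

The relaxation factor $a_n$ only averages each projection step with the current iterate; since the projected step is already a contraction here, the method converges for any admissible constant $a_n$.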

Declarations

Acknowledgements

Changfeng Ma is grateful for the hospitality and support during his research at the Chern Mathematics Institute of Nankai University in June 2013. The project is supported by the National Natural Science Foundation of China (11071041, 11201074), the Fujian Natural Science Foundation (2013J01006) and the Scientific Research Special Fund Project of Fujian University (JK2013060).

Authors’ Affiliations

(1)
School of Mathematics and Computer Science, Fujian Normal University, Fuzhou, 350007, P.R. China

References

  1. Verma RU: Projection methods, algorithms, and a new system of nonlinear variational inequalities. Comput. Math. Appl. 2001, 41: 1025–1031. doi:10.1016/S0898-1221(00)00336-9
  2. Verma RU: Projection methods and a new system of cocoercive variational inequality problems. Int. J. Differ. Equ. Appl. 2002, 6: 359–367.
  3. Verma RU: Generalized system for relaxed cocoercive variational inequalities and its projection methods. J. Optim. Theory Appl. 2004, 121(1): 203–210.
  4. Verma RU: General convergence analysis for two-step projection methods and applications to variational problems. Appl. Math. Lett. 2005, 18(11): 1286–1292. doi:10.1016/j.aml.2005.02.026
  5. Chang SS, Lee JHW, Chan CK: Generalized system for relaxed cocoercive variational inequalities in Hilbert spaces. Appl. Math. Lett. 2007, 20: 329–334. doi:10.1016/j.aml.2006.04.017
  6. Verma RU: A class of quasivariational inequalities involving cocoercive mappings. Adv. Nonlinear Var. Inequal. 1999, 2(2): 1–12.
  7. Nie H, Liu Z, Kim KH, Kang SM: A system of nonlinear variational inequalities involving strongly monotone and pseudocontractive mappings. Adv. Nonlinear Var. Inequal. 2003, 6(2): 91–99.
  8. He BS: A new method for a class of linear variational inequalities. Math. Program. 1994, 66: 137–144. doi:10.1007/BF01581141
  9. Xiu NH, Zhang JZ: Local convergence analysis of projection type algorithms: unified approach. J. Optim. Theory Appl. 2002, 115: 211–230. doi:10.1023/A:1019637315803
  10. Huang Z, Noor MA: An explicit projection method for a system of nonlinear variational inequalities with different (γ, r)-cocoercive mappings. Appl. Math. Comput. 2007, 190: 356–361. doi:10.1016/j.amc.2007.01.032
  11. Noor MA, Noor KI: Projection algorithms for solving a system of general variational inequalities. Nonlinear Anal. 2009, 70: 2700–2706. doi:10.1016/j.na.2008.03.057
  12. Lü S, Wu C: Convergence of iterative algorithms for a generalized inequality and a nonexpansive mapping. Eng. Math. Lett. 2012, 1: 44–57.
  13. Huang N, Ma C: A new extragradient-like method for solving variational inequality problems. Fixed Point Theory Appl. 2012, 2012: Article ID 223.
  14. Ke Y, Ma C: A new relaxed extragradient-like algorithm for approaching common solutions of generalized mixed equilibrium problems, a more general system of variational inequalities and a fixed point problem. Fixed Point Theory Appl. 2013, 2013: Article ID 126.
  15. Cheng QQ, Su YF, Zhang JL: Convergence theorems of a three-step iteration method for a countable family of pseudocontractive mappings. Fixed Point Theory Appl. 2013, 2013: Article ID 100.

Copyright

© Ke and Ma; licensee Springer 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
