  • Research
  • Open Access

Variant extragradient-type method for monotone variational inequalities

Fixed Point Theory and Applications 2013, 2013:185

https://doi.org/10.1186/1687-1812-2013-185

  • Received: 25 May 2013
  • Accepted: 26 June 2013
  • Published:

Abstract

Korpelevich’s extragradient method has been studied and extended extensively due to its applicability to the whole class of monotone variational inequalities. In the present paper, we propose a variant extragradient-type method for solving monotone variational inequalities. Convergence analysis of the method is presented under reasonable assumptions on the problem data.

MSC: 47H05, 47J05, 47J25.

Keywords

  • variational inequality
  • extragradient-type method
  • nonexpansive mapping
  • projection
  • strong convergence

1 Introduction

Let H be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. Let C be a nonempty, closed and convex subset of H and let $A:C\to H$ be a nonlinear operator. The variational inequality problem for A and C, denoted by $VI(C,A)$, is the problem of finding a point $x^*\in C$ satisfying
$$\langle Ax^*,\,x-x^*\rangle\ge 0,\quad \forall x\in C.\tag{1}$$

We denote the solution set of this problem by $S_{VI(C,A)}$. Under the monotonicity assumption on A, the set $S_{VI(C,A)}$ is always closed and convex.

The variational inequality problem is a fundamental problem in variational analysis and, in particular, in optimization theory. There are several iterative methods for solving it; see, e.g., [1–38]. The basic idea consists of extending the projected gradient method for constrained optimization, i.e., for the problem of minimizing $f(x)$ subject to $x\in C$. For $x_0\in C$, compute the sequence $\{x_n\}$ in the following manner:
$$x_{n+1}=P_C[x_n-\alpha_n\nabla f(x_n)],\quad n\ge 0,\tag{2}$$
where $\alpha_n>0$ is the stepsize and $P_C$ is the metric projection onto C. See [1] for convergence properties of this method, related to the results in this article, for the case in which $f:\mathbb{R}^n\to\mathbb{R}$ is convex and differentiable. An immediate extension of method (2) to $VI(C,A)$ is the iterative procedure given by
$$x_{n+1}=P_C[x_n-\alpha_nAx_n],\quad n\ge 0.\tag{3}$$
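To make the projected-type iterations concrete, the following minimal sketch (not from the paper) implements the direct extension (3); the ball projection and the operator handed to it are illustrative placeholders.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Metric projection P_C onto the closed ball of the given radius centered at the origin."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def direct_projection_method(A, x0, alpha=0.1, steps=100, project=project_ball):
    """Iteration (3): x_{n+1} = P_C[x_n - alpha * A(x_n)] with a constant stepsize."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = project(x - alpha * A(x))
    return x
```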

Convergence results for this method require some monotonicity properties of A. Note that for the method given by (3) there is no chance of relaxing the assumption on A to plain monotonicity. The typical example consists of taking $C=\mathbb{R}^2$ and A a rotation through the angle $\pi/2$. Then A is monotone and the unique solution of $VI(C,A)$ is $x^*=0$. However, it is easy to check that $\|x_n-\alpha_nAx_n\|>\|x_n\|$ for all $x_n\ne 0$ and all $\alpha_n>0$, therefore the sequence generated by (3) moves away from the solution, independently of the choice of the stepsize $\alpha_n$.
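A quick numerical check of this counterexample (illustrative, not part of the paper): for the $\pi/2$ rotation $A(x_1,x_2)=(-x_2,x_1)$ one has $\langle Ax,x\rangle=0$ and $\|Ax\|=\|x\|$, so every step of (3) multiplies the norm by $\sqrt{1+\alpha_n^2}>1$.

```python
import numpy as np

A = lambda x: np.array([-x[1], x[0]])  # rotation by pi/2; monotone since <Ax, x> = 0

x, alpha = np.array([1.0, 0.0]), 0.1
for n in range(5):
    x = x - alpha * A(x)               # iteration (3) with C = R^2, so P_C is the identity
    print(n, np.linalg.norm(x))        # the norm grows by the factor sqrt(1 + alpha**2) each step
```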

To overcome this weakness of the method defined by (3), Korpelevich [20] proposed a modification of the method, called the extragradient algorithm. It generates iterates using the following formulae:
$$y_n=P_C[x_n-\lambda Ax_n],\qquad x_{n+1}=P_C[x_n-\lambda Ay_n],\quad n\ge 0,\tag{4}$$
where $\lambda>0$ is a fixed number.
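Under the same illustrative conventions, the extragradient iteration (4) can be sketched as follows; the requirement that $\lambda$ be small relative to the Lipschitz constant of A is left as an assumption on the caller.

```python
import numpy as np

def extragradient(A, project, x0, lam, steps=100):
    """Korpelevich's method (4): predictor y_n, then corrector x_{n+1}, both with the fixed stepsize lam."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        y = project(x - lam * A(x))   # y_n     = P_C[x_n - lam * A x_n]
        x = project(x - lam * A(y))   # x_{n+1} = P_C[x_n - lam * A y_n]
    return x
```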

The difference in (4) is that A is evaluated twice and the projection is computed twice at each iteration, but the benefit is significant, because the resulting algorithm is applicable to the whole class of monotone variational inequalities. However, we note that Korpelevich assumed that A is Lipschitz continuous and that an estimate of the Lipschitz constant is available. When A is not Lipschitz continuous, or it is Lipschitz but the constant is not known, the fixed parameter $\lambda$ must be replaced by stepsizes computed through an Armijo-type search, as in the following method, presented in [39] (see also [18] for another related approach).

Let $\delta\in(0,1)$, $\{\beta_n\}\subset[\hat\beta,\bar\beta]$ and $x_0\in C$. Given $x_n$, define
$$z_n=x_n-\beta_nAx_n.$$
If $x_n=P_C[z_n]$, then stop. Otherwise, let
$$j(n)=\min\Big\{j\ge 0:\Big\langle A\big(2^{-j}P_C[z_n]+(1-2^{-j})x_n\big),\,x_n-P_C[z_n]\Big\rangle\ge\frac{\delta}{\beta_n}\big\|x_n-P_C[z_n]\big\|^2\Big\}\tag{5}$$
and
$$\alpha_n=2^{-j(n)},\qquad y_n=\alpha_nP_C[z_n]+(1-\alpha_n)x_n.$$
Define
$$H_n=\{z\in H:\langle Ay_n,z-y_n\rangle\le 0\},\qquad W_n=\{z\in H:\langle x_0-x_n,z-x_n\rangle\le 0\},\qquad x_{n+1}=P_{H_n\cap W_n\cap C}\,x_0.\tag{6}$$

It is proved that if A is maximal monotone, point-to-point and uniformly continuous on bounded sets, and if $S_{VI(C,A)}$ is nonempty, then $\{x_n\}$ converges strongly to $P_{S_{VI(C,A)}}x_0$.

The main difficulty with this method is computational. First, in order to obtain $\alpha_n$, we have to compute $j(n)$, which may be time-consuming. Moreover, (6) involves the two half-spaces $H_n$ and $W_n$. If the sets C, $H_n$ and $W_n$ are simple enough, then $P_C$, $P_{H_n}$ and $P_{W_n}$ are easy to compute, but the intersection $H_n\cap W_n\cap C$ may be complicated, so that the projection $P_{H_n\cap W_n\cap C}$ is not easily executed. This may seriously affect the efficiency of the method.
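For illustration only, the stepsize search (5) could be coded as a backtracking loop like the one below (all names and the cutoff j_max are hypothetical, the inequality follows the reconstruction of (5) given above, and the halfspace projections of (6) are not shown).

```python
import numpy as np

def armijo_exponent(A, x, pcz, beta, delta=0.5, j_max=50):
    """Find j(n) in (5): the smallest j >= 0 with
    <A(2^-j * P_C[z_n] + (1 - 2^-j) * x_n), x_n - P_C[z_n]> >= (delta / beta) * ||x_n - P_C[z_n]||^2."""
    d = x - pcz
    threshold = (delta / beta) * np.dot(d, d)
    for j in range(j_max):
        t = 2.0 ** (-j)
        if np.dot(A(t * pcz + (1.0 - t) * x), d) >= threshold:
            return j
    raise RuntimeError("Armijo-type search did not terminate within j_max trials")
```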

The literature on $VI(C,A)$ is vast, and Korpelevich’s method has received great attention from many authors, who improved it in various ways; see, e.g., [33, 39–44] and the references therein. It is known that Korpelevich’s method (4) has only weak convergence in infinite-dimensional Hilbert spaces (please refer to the recent results of Censor et al. [40] and [41]). So, to obtain strong convergence, the original method was modified by several authors. For example, in [4, 43] it was proved that some very interesting Korpelevich-type algorithms converge strongly to a solution of $VI(C,A)$. Very recently, Yao et al. [33] suggested a modified Korpelevich method which converges strongly to the minimum-norm solution of variational inequality (1) in infinite-dimensional Hilbert spaces.

Motivated by the works mentioned above, in the present paper we propose a variant extragradient-type method for solving monotone variational inequalities. Strong convergence of the method is established under reasonable assumptions on the problem data in infinite-dimensional Hilbert spaces.

2 Preliminaries

In this section, we present some definitions and results that are needed for the convergence analysis of the proposed method. Let C be a closed convex subset of a real Hilbert space H.

A mapping $F:C\to H$ is said to be Lipschitz if there exists a positive real number L such that
$$\|F(x)-F(y)\|\le L\|x-y\|$$
for all $x,y\in C$. In the case $L\in(0,1)$, F is called L-contractive. A mapping $A:C\to H$ is called $\alpha$-inverse-strongly monotone if there exists a positive real number $\alpha$ such that
$$\langle Au-Av,\,u-v\rangle\ge\alpha\|Au-Av\|^2,\quad \forall u,v\in C.$$
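As a concrete finite-dimensional example (not taken from the paper): a symmetric positive semidefinite matrix M defines an operator $Ax=Mx$ that is $\alpha$-inverse-strongly monotone with $\alpha=1/\lambda_{\max}(M)$, which the following check illustrates.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
M = B.T @ B                                   # symmetric positive semidefinite matrix
alpha = 1.0 / np.linalg.eigvalsh(M).max()     # inverse-strong-monotonicity constant 1 / lambda_max(M)

u, v = rng.standard_normal(4), rng.standard_normal(4)
lhs = np.dot(M @ u - M @ v, u - v)            # <Au - Av, u - v>
rhs = alpha * np.linalg.norm(M @ u - M @ v) ** 2
print(lhs >= rhs - 1e-12)                     # True: the inverse-strong-monotonicity inequality holds
```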

The following result is well known.

Proposition 1 [45]

Let C be a bounded, closed and convex subset of a real Hilbert space H and let A be an $\alpha$-inverse-strongly monotone operator of C into H. Then $S_{VI(C,A)}$ is nonempty.

For any $u\in H$, there exists a unique $u_0\in C$ such that
$$\|u-u_0\|=\inf\{\|u-x\|:x\in C\}.$$

We denote u 0 by P C u , where P C is called the metric projection of H onto C. The following is a useful characterization of projections.

Proposition 2 Given $x\in H$, we have
$$\langle x-P_Cx,\,y-P_Cx\rangle\le 0,\quad \forall y\in C,$$
which is equivalent to
$$\langle x-y,\,P_Cx-P_Cy\rangle\ge\|P_Cx-P_Cy\|^2,\quad \forall x,y\in H.$$
Consequently, we deduce immediately that $P_C$ is nonexpansive, that is,
$$\|P_Cx-P_Cy\|\le\|x-y\|$$
for all $x,y\in H$.

It is well known that $2P_C-I$ is nonexpansive.
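A small numerical illustration of Proposition 2 and of nonexpansivity, using the box $[0,1]^3$ as an example of a simple closed convex set (all choices below are illustrative):

```python
import numpy as np

project = lambda x: np.clip(x, 0.0, 1.0)      # metric projection onto the box [0, 1]^3

rng = np.random.default_rng(1)
x, y = 3.0 * rng.standard_normal(3), 3.0 * rng.standard_normal(3)
px, py = project(x), project(y)
z = rng.uniform(0.0, 1.0, size=3)             # an arbitrary point of C

print(np.dot(x - px, z - px) <= 1e-12)                           # <x - P_C x, z - P_C x> <= 0
print(np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-12)  # ||P_C x - P_C y|| <= ||x - y||
```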

Lemma 1 [45]

Let C be a nonempty closed convex subset of a real Hilbert space H. Let the mapping $A:C\to H$ be $\alpha$-inverse-strongly monotone and let $r>0$ be a constant. Then we have
$$\|(I-rA)x-(I-rA)y\|^2\le\|x-y\|^2+r(r-2\alpha)\|Ax-Ay\|^2,\quad \forall x,y\in C.$$

In particular, if $0\le r\le 2\alpha$, then $I-rA$ is nonexpansive.
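For the reader's convenience, the estimate in Lemma 1 follows from expanding the square and applying the inverse strong monotonicity of A:
$$\|(I-rA)x-(I-rA)y\|^2=\|x-y\|^2-2r\langle Ax-Ay,\,x-y\rangle+r^2\|Ax-Ay\|^2\le\|x-y\|^2+r(r-2\alpha)\|Ax-Ay\|^2.$$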

Lemma 2 [46]

Let $\{x_n\}$ and $\{y_n\}$ be bounded sequences in a Banach space X and let $\{\beta_n\}$ be a sequence in $[0,1]$ with $0<\liminf_{n\to\infty}\beta_n\le\limsup_{n\to\infty}\beta_n<1$.

Suppose that
  1. $x_{n+1}=(1-\beta_n)y_n+\beta_nx_n$ for all $n\ge 0$;
  2. $\limsup_{n\to\infty}\big(\|y_{n+1}-y_n\|-\|x_{n+1}-x_n\|\big)\le 0$.

Then $\lim_{n\to\infty}\|y_n-x_n\|=0$.

Lemma 3 [47]

Assume that $\{a_n\}$ is a sequence of nonnegative real numbers which satisfies
$$a_{n+1}\le(1-\gamma_n)a_n+\delta_n,\quad n\ge 0,$$
where $\{\gamma_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence such that
  1. $\sum_{n=1}^{\infty}\gamma_n=\infty$;
  2. $\limsup_{n\to\infty}\delta_n/\gamma_n\le 0$ or $\sum_{n=1}^{\infty}|\delta_n|<\infty$.

Then $\lim_{n\to\infty}a_n=0$.

3 Algorithm and its convergence analysis

In this section, we present the formal statement of our proposal for the algorithm.

Variant extragradient-type method

Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let $A:C\to H$ be an $\alpha$-inverse-strongly monotone mapping and let $F:C\to H$ be a $\rho$-contractive mapping. Consider sequences $\{\alpha_n\}\subset[0,1]$, $\{\lambda_n\}\subset[0,2\alpha]$, $\{\mu_n\}\subset[0,2\alpha]$ and $\{\gamma_n\}\subset[0,1]$.
  1. Initialization: $x_0\in C$.
  2. Iterative step: Given $x_n$, define
     $$\begin{cases}y_n=P_C[x_n-\lambda_nAx_n+\alpha_n(Fx_n-x_n)],\\ x_{n+1}=P_C[x_n-\mu_nAy_n+\gamma_n(y_n-x_n)],\end{cases}\quad n\ge 0.\tag{7}$$

Remark 1 Note that algorithm (7) includes Korpelevich’s method (4) as a special case.
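A minimal numerical sketch of iteration (7) (illustrative only; the projection, the operator A, the contraction F and the parameter rules are placeholders that the caller must choose so that the conditions of Theorem 1 below hold):

```python
import numpy as np

def variant_extragradient(A, F, project, x0, alpha_n, lam_n, mu_n, gamma_n, steps=200):
    """Iteration (7):
       y_n     = P_C[x_n - lam_n * A(x_n) + alpha_n * (F(x_n) - x_n)]
       x_{n+1} = P_C[x_n - mu_n * A(y_n) + gamma_n * (y_n - x_n)]."""
    x = np.asarray(x0, dtype=float)
    for n in range(steps):
        a, lam, mu, g = alpha_n(n), lam_n(n), mu_n(n), gamma_n(n)
        y = project(x - lam * A(x) + a * (F(x) - x))
        x = project(x - mu * A(y) + g * (y - x))
    return x

# Illustrative parameter rules for an operator that is alpha_ism-inverse-strongly monotone:
#   alpha_n(n) = 1 / (n + 2),  lam_n(n) = alpha_ism,  gamma_n(n) = 0.5,  mu_n(n) = 0.5 * alpha_ism.
```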

Next, we study the convergence of the proposed algorithm (7).

Theorem 1 Suppose that $S_{VI(C,A)}\ne\emptyset$. Assume that the algorithm parameters $\{\alpha_n\}$, $\{\lambda_n\}$, $\{\mu_n\}$ and $\{\gamma_n\}$ satisfy the following conditions:
  (C1) $\lim_{n\to\infty}\alpha_n=0$ and $\sum_{n=1}^{\infty}\alpha_n=\infty$;
  (C2) $\lambda_n\in[a,b]\subset(0,2\alpha)$ and $\lim_{n\to\infty}(\lambda_{n+1}-\lambda_n)=0$;
  (C3) $\gamma_n\in(0,1)$, $\mu_n\le 2\alpha\gamma_n$ and $\lim_{n\to\infty}(\gamma_{n+1}-\gamma_n)=\lim_{n\to\infty}(\mu_{n+1}-\mu_n)=0$.

Then the sequence $\{x_n\}$ generated by (7) converges strongly to $\tilde{x}\in S_{VI(C,A)}$, which solves the following variational inequality:
$$\langle\tilde{x}-F\tilde{x},\,\tilde{x}-x\rangle\le 0\quad\text{for all }x\in S_{VI(C,A)}.$$

We shall prove our main result in several steps, contained in the propositions given below.

Proposition 3 The sequences $\{x_n\}$ and $\{y_n\}$ are bounded. Consequently, the sequences $\{Fx_n\}$, $\{Ax_n\}$ and $\{Ay_n\}$ are also bounded.

Proof From conditions (C1) and (C2), since $\alpha_n\to 0$ and $\lambda_n\in[a,b]\subset(0,2\alpha)$, we have $\alpha_n<1-\frac{\lambda_n}{2\alpha}$ for all n large enough. Without loss of generality, we may assume that $\alpha_n<1-\frac{\lambda_n}{2\alpha}$ for all $n\in\mathbb{N}$. Hence $\frac{\lambda_n}{1-\alpha_n}\in(0,2\alpha)$.

Consider any $x^*\in S_{VI(C,A)}$. By the property of the metric projection, we know that $x^*=P_C[x^*-\delta Ax^*]$ for any $\delta>0$. Hence,
$$x^*=P_C\Big[x^*-\tfrac{\lambda_n}{1-\alpha_n}Ax^*\Big]=P_C[x^*-\lambda_nAx^*]=P_C\Big[\alpha_nx^*+(1-\alpha_n)\Big(x^*-\tfrac{\lambda_n}{1-\alpha_n}Ax^*\Big)\Big],\quad n\ge 0.\tag{8}$$
Thus, by (7) and (8), we have
$$\begin{aligned}\|y_n-x^*\|&=\big\|P_C[\alpha_nFx_n+(1-\alpha_n)x_n-\lambda_nAx_n]-x^*\big\|\\&=\Big\|P_C\Big[\alpha_nFx_n+(1-\alpha_n)\Big(x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n\Big)\Big]-P_C\Big[\alpha_nx^*+(1-\alpha_n)\Big(x^*-\tfrac{\lambda_n}{1-\alpha_n}Ax^*\Big)\Big]\Big\|\\&\le\Big\|\alpha_n(Fx_n-x^*)+(1-\alpha_n)\Big[\Big(x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n\Big)-\Big(x^*-\tfrac{\lambda_n}{1-\alpha_n}Ax^*\Big)\Big]\Big\|.\end{aligned}\tag{9}$$
Since $\frac{\lambda_n}{1-\alpha_n}\in(0,2\alpha)$, from Lemma 1 we know that $I-\frac{\lambda_n}{1-\alpha_n}A$ is nonexpansive. From (9), we get
$$\begin{aligned}\|y_n-x^*\|&\le\alpha_n\|Fx_n-x^*\|+(1-\alpha_n)\Big\|\Big(I-\tfrac{\lambda_n}{1-\alpha_n}A\Big)x_n-\Big(I-\tfrac{\lambda_n}{1-\alpha_n}A\Big)x^*\Big\|\\&\le\alpha_n\|Fx_n-Fx^*\|+\alpha_n\|Fx^*-x^*\|+(1-\alpha_n)\|x_n-x^*\|\\&\le\alpha_n\rho\|x_n-x^*\|+\alpha_n\|Fx^*-x^*\|+(1-\alpha_n)\|x_n-x^*\|\\&=[1-(1-\rho)\alpha_n]\|x_n-x^*\|+\alpha_n\|Fx^*-x^*\|.\end{aligned}$$
By (C3), we obtain $\frac{\mu_n}{\gamma_n}\le 2\alpha$, so $I-\frac{\mu_n}{\gamma_n}A$ is also nonexpansive. Therefore,
$$\begin{aligned}\|x_{n+1}-x^*\|&=\big\|P_C[x_n-\mu_nAy_n+\gamma_n(y_n-x_n)]-x^*\big\|\\&=\Big\|P_C\Big[(1-\gamma_n)x_n+\gamma_n\Big(y_n-\tfrac{\mu_n}{\gamma_n}Ay_n\Big)\Big]-P_C\Big[(1-\gamma_n)x^*+\gamma_n\Big(x^*-\tfrac{\mu_n}{\gamma_n}Ax^*\Big)\Big]\Big\|\\&\le(1-\gamma_n)\|x_n-x^*\|+\gamma_n\Big\|\Big(y_n-\tfrac{\mu_n}{\gamma_n}Ay_n\Big)-\Big(x^*-\tfrac{\mu_n}{\gamma_n}Ax^*\Big)\Big\|\\&\le(1-\gamma_n)\|x_n-x^*\|+\gamma_n\|y_n-x^*\|\\&\le(1-\gamma_n)\|x_n-x^*\|+\gamma_n\alpha_n\|Fx^*-x^*\|+\gamma_n[1-(1-\rho)\alpha_n]\|x_n-x^*\|\\&=[1-(1-\rho)\gamma_n\alpha_n]\|x_n-x^*\|+\gamma_n\alpha_n\|Fx^*-x^*\|\\&\le\max\Big\{\|x_n-x^*\|,\tfrac{\|Fx^*-x^*\|}{1-\rho}\Big\}.\end{aligned}\tag{10}$$
By induction, we get
$$\|x_{n+1}-x^*\|\le\max\Big\{\|x_0-x^*\|,\tfrac{\|Fx^*-x^*\|}{1-\rho}\Big\}.$$

Then $\{x_n\}$ is bounded, and so are $\{y_n\}$, $\{Fx_n\}$, $\{Ax_n\}$ and $\{Ay_n\}$. Therefore, the proof is complete. □

Proposition 4 The following two properties hold:
$$\lim_{n\to\infty}\|x_{n+1}-x_n\|=0,\qquad\lim_{n\to\infty}\|x_n-y_n\|=0.$$
Proof Let $S=2P_C-I$. From the properties of the metric projection, we know that S is nonexpansive. Therefore, we can rewrite $x_{n+1}$ in (7) as
$$\begin{aligned}x_{n+1}&=\frac{I+S}{2}\Big[(1-\gamma_n)x_n+\gamma_n\Big(y_n-\tfrac{\mu_n}{\gamma_n}Ay_n\Big)\Big]\\&=\frac{1-\gamma_n}{2}x_n+\frac{\gamma_n}{2}\Big(y_n-\tfrac{\mu_n}{\gamma_n}Ay_n\Big)+\frac12S\Big[(1-\gamma_n)x_n+\gamma_n\Big(y_n-\tfrac{\mu_n}{\gamma_n}Ay_n\Big)\Big]\\&=\frac{1-\gamma_n}{2}x_n+\frac{1+\gamma_n}{2}z_n,\end{aligned}$$
where
$$z_n=\frac{\frac{\gamma_n}{2}\big(y_n-\frac{\mu_n}{\gamma_n}Ay_n\big)+\frac12S\big[(1-\gamma_n)x_n+\gamma_n\big(y_n-\frac{\mu_n}{\gamma_n}Ay_n\big)\big]}{\frac{1+\gamma_n}{2}}=\frac{\gamma_n\big(y_n-\frac{\mu_n}{\gamma_n}Ay_n\big)+S\big[(1-\gamma_n)x_n+\gamma_n\big(y_n-\frac{\mu_n}{\gamma_n}Ay_n\big)\big]}{1+\gamma_n}.$$
It follows that
$$z_{n+1}-z_n=\frac{\gamma_{n+1}\big(y_{n+1}-\frac{\mu_{n+1}}{\gamma_{n+1}}Ay_{n+1}\big)+S\big[(1-\gamma_{n+1})x_{n+1}+\gamma_{n+1}\big(y_{n+1}-\frac{\mu_{n+1}}{\gamma_{n+1}}Ay_{n+1}\big)\big]}{1+\gamma_{n+1}}-\frac{\gamma_n\big(y_n-\frac{\mu_n}{\gamma_n}Ay_n\big)+S\big[(1-\gamma_n)x_n+\gamma_n\big(y_n-\frac{\mu_n}{\gamma_n}Ay_n\big)\big]}{1+\gamma_n}.$$
So,
$$\begin{aligned}\|z_{n+1}-z_n\|&\le\frac{\gamma_{n+1}}{1+\gamma_{n+1}}\Big\|\Big(y_{n+1}-\tfrac{\mu_{n+1}}{\gamma_{n+1}}Ay_{n+1}\Big)-\Big(y_n-\tfrac{\mu_n}{\gamma_n}Ay_n\Big)\Big\|+\Big|\frac{\gamma_{n+1}}{1+\gamma_{n+1}}-\frac{\gamma_n}{1+\gamma_n}\Big|\,\Big\|y_n-\tfrac{\mu_n}{\gamma_n}Ay_n\Big\|\\&\quad+\frac{1}{1+\gamma_{n+1}}\Big\|S\Big[(1-\gamma_{n+1})x_{n+1}+\gamma_{n+1}\Big(y_{n+1}-\tfrac{\mu_{n+1}}{\gamma_{n+1}}Ay_{n+1}\Big)\Big]-S\Big[(1-\gamma_n)x_n+\gamma_n\Big(y_n-\tfrac{\mu_n}{\gamma_n}Ay_n\Big)\Big]\Big\|\\&\quad+\Big|\frac{1}{1+\gamma_{n+1}}-\frac{1}{1+\gamma_n}\Big|\,\Big\|S\Big[(1-\gamma_n)x_n+\gamma_n\Big(y_n-\tfrac{\mu_n}{\gamma_n}Ay_n\Big)\Big]\Big\|\\&\le\frac{\gamma_{n+1}}{1+\gamma_{n+1}}\Big\|\Big(I-\tfrac{\mu_{n+1}}{\gamma_{n+1}}A\Big)y_{n+1}-\Big(I-\tfrac{\mu_{n+1}}{\gamma_{n+1}}A\Big)y_n\Big\|+\frac{\gamma_{n+1}}{1+\gamma_{n+1}}\Big|\tfrac{\mu_{n+1}}{\gamma_{n+1}}-\tfrac{\mu_n}{\gamma_n}\Big|\,\|Ay_n\|\\&\quad+\Big|\frac{\gamma_{n+1}}{1+\gamma_{n+1}}-\frac{\gamma_n}{1+\gamma_n}\Big|\,\Big\|y_n-\tfrac{\mu_n}{\gamma_n}Ay_n\Big\|\\&\quad+\frac{1}{1+\gamma_{n+1}}\Big\|S\Big[(1-\gamma_{n+1})x_{n+1}+\gamma_{n+1}\Big(y_{n+1}-\tfrac{\mu_{n+1}}{\gamma_{n+1}}Ay_{n+1}\Big)\Big]-S\Big[(1-\gamma_n)x_n+\gamma_n\Big(y_n-\tfrac{\mu_n}{\gamma_n}Ay_n\Big)\Big]\Big\|\\&\quad+\Big|\frac{1}{1+\gamma_{n+1}}-\frac{1}{1+\gamma_n}\Big|\,\Big\|S\Big[(1-\gamma_n)x_n+\gamma_n\Big(y_n-\tfrac{\mu_n}{\gamma_n}Ay_n\Big)\Big]\Big\|.\end{aligned}$$
Again, by using the nonexpansivity of $I-\frac{\mu_n}{\gamma_n}A$ and S, we have
$$\begin{aligned}\|z_{n+1}-z_n\|&\le\frac{\gamma_{n+1}}{1+\gamma_{n+1}}\|y_{n+1}-y_n\|+\Big|\frac{\gamma_{n+1}}{1+\gamma_{n+1}}-\frac{\gamma_n}{1+\gamma_n}\Big|\,\Big\|y_n-\tfrac{\mu_n}{\gamma_n}Ay_n\Big\|+\frac{\gamma_{n+1}}{1+\gamma_{n+1}}\Big|\tfrac{\mu_{n+1}}{\gamma_{n+1}}-\tfrac{\mu_n}{\gamma_n}\Big|\,\|Ay_n\|\\&\quad+\frac{1}{1+\gamma_{n+1}}\Big\|(1-\gamma_{n+1})(x_{n+1}-x_n)+\gamma_{n+1}\Big[\Big(I-\tfrac{\mu_{n+1}}{\gamma_{n+1}}A\Big)y_{n+1}-\Big(I-\tfrac{\mu_{n+1}}{\gamma_{n+1}}A\Big)y_n\Big]\\&\qquad\qquad+(\gamma_{n+1}-\gamma_n)(y_n-x_n)+(\mu_n-\mu_{n+1})Ay_n\Big\|\\&\quad+\Big|\frac{1}{1+\gamma_{n+1}}-\frac{1}{1+\gamma_n}\Big|\,\Big\|S\Big[(1-\gamma_n)x_n+\gamma_n\Big(y_n-\tfrac{\mu_n}{\gamma_n}Ay_n\Big)\Big]\Big\|\\&\le\frac{\gamma_{n+1}}{1+\gamma_{n+1}}\|y_{n+1}-y_n\|+\Big|\frac{\gamma_{n+1}}{1+\gamma_{n+1}}-\frac{\gamma_n}{1+\gamma_n}\Big|\,\Big\|y_n-\tfrac{\mu_n}{\gamma_n}Ay_n\Big\|+\frac{\gamma_{n+1}}{1+\gamma_{n+1}}\Big|\tfrac{\mu_{n+1}}{\gamma_{n+1}}-\tfrac{\mu_n}{\gamma_n}\Big|\,\|Ay_n\|\\&\quad+\frac{1-\gamma_{n+1}}{1+\gamma_{n+1}}\|x_{n+1}-x_n\|+\frac{\gamma_{n+1}}{1+\gamma_{n+1}}\|y_{n+1}-y_n\|+\frac{|\gamma_{n+1}-\gamma_n|}{1+\gamma_{n+1}}\big(\|x_n\|+\|y_n\|\big)+\frac{|\mu_{n+1}-\mu_n|}{1+\gamma_{n+1}}\|Ay_n\|\\&\quad+\Big|\frac{1}{1+\gamma_{n+1}}-\frac{1}{1+\gamma_n}\Big|\,\Big\|S\Big[(1-\gamma_n)x_n+\gamma_n\Big(y_n-\tfrac{\mu_n}{\gamma_n}Ay_n\Big)\Big]\Big\|.\end{aligned}$$

Next, we estimate $\|y_{n+1}-y_n\|$.

By (7), we have
$$\begin{aligned}\|y_{n+1}-y_n\|&=\big\|P_C[x_{n+1}-\lambda_{n+1}Ax_{n+1}+\alpha_{n+1}(Fx_{n+1}-x_{n+1})]-P_C[x_n-\lambda_nAx_n+\alpha_n(Fx_n-x_n)]\big\|\\&\le\big\|[x_{n+1}-\lambda_{n+1}Ax_{n+1}]-[x_n-\lambda_nAx_n]\big\|+\alpha_{n+1}\|Fx_{n+1}-x_{n+1}\|+\alpha_n\|Fx_n-x_n\|\\&=\big\|(I-\lambda_{n+1}A)x_{n+1}-(I-\lambda_{n+1}A)x_n+(\lambda_n-\lambda_{n+1})Ax_n\big\|+\alpha_{n+1}\|Fx_{n+1}-x_{n+1}\|+\alpha_n\|Fx_n-x_n\|\\&\le\|x_{n+1}-x_n\|+|\lambda_{n+1}-\lambda_n|\,\|Ax_n\|+\alpha_{n+1}\|Fx_{n+1}-x_{n+1}\|+\alpha_n\|Fx_n-x_n\|.\end{aligned}$$
So, we deduce
$$\begin{aligned}\|z_{n+1}-z_n\|&\le\Big|\frac{\gamma_{n+1}}{1+\gamma_{n+1}}-\frac{\gamma_n}{1+\gamma_n}\Big|\,\Big\|y_n-\tfrac{\mu_n}{\gamma_n}Ay_n\Big\|+\frac{\gamma_{n+1}}{1+\gamma_{n+1}}\Big|\tfrac{\mu_{n+1}}{\gamma_{n+1}}-\tfrac{\mu_n}{\gamma_n}\Big|\,\|Ay_n\|\\&\quad+\frac{|\gamma_{n+1}-\gamma_n|}{1+\gamma_{n+1}}\big(\|x_n\|+\|y_n\|\big)+\frac{|\mu_{n+1}-\mu_n|}{1+\gamma_{n+1}}\|Ay_n\|+\|x_{n+1}-x_n\|\\&\quad+\Big|\frac{1}{1+\gamma_{n+1}}-\frac{1}{1+\gamma_n}\Big|\,\Big\|S\Big[(1-\gamma_n)x_n+\gamma_n\Big(y_n-\tfrac{\mu_n}{\gamma_n}Ay_n\Big)\Big]\Big\|\\&\quad+|\lambda_{n+1}-\lambda_n|\,\|Ax_n\|+\alpha_{n+1}\|Fx_{n+1}-x_{n+1}\|+\alpha_n\|Fx_n-x_n\|.\end{aligned}$$
Since $\lim_{n\to\infty}(\gamma_{n+1}-\gamma_n)=0$ and $\lim_{n\to\infty}(\mu_{n+1}-\mu_n)=0$, we derive that
$$\lim_{n\to\infty}\Big|\frac{\gamma_{n+1}}{1+\gamma_{n+1}}-\frac{\gamma_n}{1+\gamma_n}\Big|=0,\qquad\lim_{n\to\infty}\Big|\frac{\mu_{n+1}}{\gamma_{n+1}}-\frac{\mu_n}{\gamma_n}\Big|=0,\qquad\lim_{n\to\infty}\Big|\frac{1}{1+\gamma_{n+1}}-\frac{1}{1+\gamma_n}\Big|=0.$$
At the same time, note that $\{x_n\}$, $\{Fx_n\}$, $\{y_n\}$ and $\{Ay_n\}$ are bounded. Therefore,
$$\limsup_{n\to\infty}\big(\|z_{n+1}-z_n\|-\|x_{n+1}-x_n\|\big)\le 0.$$
By Lemma 2, we obtain
$$\lim_{n\to\infty}\|z_n-x_n\|=0.$$
Hence,
$$\lim_{n\to\infty}\|x_{n+1}-x_n\|=\lim_{n\to\infty}\frac{1+\gamma_n}{2}\|z_n-x_n\|=0.$$
From (9), (10), Lemma 1 and the convexity of $\|\cdot\|^2$, we deduce
$$\begin{aligned}\|x_{n+1}-x^*\|^2&\le(1-\gamma_n)\|x_n-x^*\|^2+\gamma_n\|y_n-x^*\|^2\\&\le\gamma_n\Big\|\alpha_n(Fx_n-x^*)+(1-\alpha_n)\Big[\Big(x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n\Big)-\Big(x^*-\tfrac{\lambda_n}{1-\alpha_n}Ax^*\Big)\Big]\Big\|^2+(1-\gamma_n)\|x_n-x^*\|^2\\&\le\gamma_n\Big[\alpha_n\|Fx_n-x^*\|^2+(1-\alpha_n)\Big\|\Big(I-\tfrac{\lambda_n}{1-\alpha_n}A\Big)x_n-\Big(I-\tfrac{\lambda_n}{1-\alpha_n}A\Big)x^*\Big\|^2\Big]+(1-\gamma_n)\|x_n-x^*\|^2\\&\le(1-\alpha_n)\gamma_n\Big[\|x_n-x^*\|^2+\tfrac{\lambda_n}{1-\alpha_n}\Big(\tfrac{\lambda_n}{1-\alpha_n}-2\alpha\Big)\|Ax_n-Ax^*\|^2\Big]+(1-\gamma_n)\|x_n-x^*\|^2+\alpha_n\gamma_n\|Fx_n-x^*\|^2\\&\le\alpha_n\gamma_n\|Fx_n-x^*\|^2+\|x_n-x^*\|^2+\gamma_na\Big(\tfrac{b}{1-\alpha_n}-2\alpha\Big)\|Ax_n-Ax^*\|^2.\end{aligned}$$
Therefore, we have
$$\begin{aligned}\gamma_na\Big(2\alpha-\tfrac{b}{1-\alpha_n}\Big)\|Ax_n-Ax^*\|^2&\le\alpha_n\gamma_n\|Fx_n-x^*\|^2+\|x_n-x^*\|^2-\|x_{n+1}-x^*\|^2\\&\le\alpha_n\gamma_n\|Fx_n-x^*\|^2+\big(\|x_n-x^*\|+\|x_{n+1}-x^*\|\big)\|x_n-x_{n+1}\|.\end{aligned}$$
Since $\lim_{n\to\infty}\alpha_n=0$, $\lim_{n\to\infty}\|x_n-x_{n+1}\|=0$ and $\liminf_{n\to\infty}\gamma_na\big(2\alpha-\tfrac{b}{1-\alpha_n}\big)>0$, we deduce
$$\lim_{n\to\infty}\|Ax_n-Ax^*\|=0.$$
By the second property of the metric projection $P_C$ in Proposition 2, we have
$$\begin{aligned}\|y_n-x^*\|^2&=\big\|P_C[\alpha_nFx_n+(1-\alpha_n)x_n-\lambda_nAx_n]-P_C[x^*-\lambda_nAx^*]\big\|^2\\&\le\big\langle\alpha_nFx_n+(1-\alpha_n)x_n-\lambda_nAx_n-(x^*-\lambda_nAx^*),\,y_n-x^*\big\rangle\\&=\frac12\Big\{\big\|x_n-\lambda_nAx_n-(x^*-\lambda_nAx^*)+\alpha_n(Fx_n-x_n)\big\|^2+\|y_n-x^*\|^2\\&\qquad-\big\|\alpha_nFx_n+(1-\alpha_n)x_n-\lambda_nAx_n-(x^*-\lambda_nAx^*)-(y_n-x^*)\big\|^2\Big\}\\&\le\frac12\Big\{\big\|(x_n-\lambda_nAx_n)-(x^*-\lambda_nAx^*)\big\|^2+2\alpha_n\|Fx_n-x_n\|\,\big\|x_n-\lambda_nAx_n-(x^*-\lambda_nAx^*)+\alpha_n(Fx_n-x_n)\big\|\\&\qquad+\|y_n-x^*\|^2-\big\|(x_n-y_n)-\lambda_n(Ax_n-Ax^*)+\alpha_n(Fx_n-x_n)\big\|^2\Big\}\\&\le\frac12\Big\{\big\|(x_n-\lambda_nAx_n)-(x^*-\lambda_nAx^*)\big\|^2+\alpha_nM+\|y_n-x^*\|^2-\big\|(x_n-y_n)-\lambda_n(Ax_n-Ax^*)+\alpha_n(Fx_n-x_n)\big\|^2\Big\}\\&\le\frac12\Big\{\|x_n-x^*\|^2+\alpha_nM+\|y_n-x^*\|^2-\|x_n-y_n\|^2+2\lambda_n\langle x_n-y_n,Ax_n-Ax^*\rangle\\&\qquad-2\alpha_n\langle Fx_n-x_n,x_n-y_n\rangle-\big\|\lambda_n(Ax_n-Ax^*)-\alpha_n(Fx_n-x_n)\big\|^2\Big\}\\&\le\frac12\Big\{\|x_n-x^*\|^2+\alpha_nM+\|y_n-x^*\|^2-\|x_n-y_n\|^2+2\lambda_n\|x_n-y_n\|\,\|Ax_n-Ax^*\|+2\alpha_n\|Fx_n-x_n\|\,\|x_n-y_n\|\Big\},\end{aligned}$$
where $M>0$ is some constant satisfying
$$\sup_n\big\{2\|Fx_n-x_n\|\,\big\|x_n-\lambda_nAx_n-(x^*-\lambda_nAx^*)+\alpha_n(Fx_n-x_n)\big\|\big\}\le M.$$
It follows that
$$\|y_n-x^*\|^2\le\|x_n-x^*\|^2+\alpha_nM-\|x_n-y_n\|^2+2\lambda_n\|x_n-y_n\|\,\|Ax_n-Ax^*\|+2\alpha_n\|Fx_n-x_n\|\,\|x_n-y_n\|,$$
and hence
$$\begin{aligned}\|x_{n+1}-x^*\|^2&\le(1-\gamma_n)\|x_n-x^*\|^2+\gamma_n\|y_n-x^*\|^2\\&\le\|x_n-x^*\|^2+\alpha_nM-\gamma_n\|x_n-y_n\|^2+2\lambda_n\|x_n-y_n\|\,\|Ax_n-Ax^*\|+2\alpha_n\|Fx_n-x_n\|\,\|x_n-y_n\|,\end{aligned}$$
which implies that
$$\gamma_n\|x_n-y_n\|^2\le\big(\|x_n-x^*\|+\|x_{n+1}-x^*\|\big)\|x_{n+1}-x_n\|+2\lambda_n\|x_n-y_n\|\,\|Ax_n-Ax^*\|+\alpha_nM+2\alpha_n\|Fx_n-x_n\|\,\|x_n-y_n\|.$$
Since $\lim_{n\to\infty}\alpha_n=0$, $\lim_{n\to\infty}\|x_n-x_{n+1}\|=0$ and $\lim_{n\to\infty}\|Ax_n-Ax^*\|=0$, we derive
$$\lim_{n\to\infty}\|x_n-y_n\|=0,$$

and this concludes the proof. □

Proposition 5 $\limsup_{n\to\infty}\langle\tilde{x}-F\tilde{x},\,\tilde{x}-y_n\rangle\le 0$, where $\tilde{x}=P_{S_{VI(C,A)}}F\tilde{x}$.

Proof In order to show that $\limsup_{n\to\infty}\langle\tilde{x}-F\tilde{x},\,\tilde{x}-y_n\rangle\le 0$, we choose a subsequence $\{y_{n_i}\}$ of $\{y_n\}$ such that
$$\limsup_{n\to\infty}\langle\tilde{x}-F\tilde{x},\,\tilde{x}-y_n\rangle=\lim_{i\to\infty}\langle\tilde{x}-F\tilde{x},\,\tilde{x}-y_{n_i}\rangle.$$

As $\{y_{n_i}\}$ is bounded, a subsequence $\{y_{n_{i_j}}\}$ of $\{y_{n_i}\}$ converges weakly to some point z; without loss of generality, we may assume that $y_{n_i}\rightharpoonup z$.

Next, we show that $z\in S_{VI(C,A)}$. The argument is similar to that in [45]; since the algorithms involved are not identical, we still give the details. Define a mapping T by
$$Tv=\begin{cases}Av+N_Cv,& v\in C,\\ \emptyset,& v\notin C,\end{cases}$$
where $N_Cv$ denotes the normal cone to C at v.

Then T is maximal monotone.

Let $(v,w)\in G(T)$. Since $w-Av\in N_Cv$ and $y_n\in C$, we have $\langle v-y_n,\,w-Av\rangle\ge 0$. On the other hand, from $y_n=P_C[\alpha_nFx_n+(1-\alpha_n)x_n-\lambda_nAx_n]$, we obtain
$$\langle v-y_n,\,y_n-\alpha_nFx_n-(1-\alpha_n)x_n+\lambda_nAx_n\rangle\ge 0,$$
that is,
$$\Big\langle v-y_n,\,\frac{y_n-x_n}{\lambda_n}+Ax_n-\frac{\alpha_n}{\lambda_n}(Fx_n-x_n)\Big\rangle\ge 0.$$
Therefore, we have
$$\begin{aligned}\langle v-y_{n_i},\,w\rangle&\ge\langle v-y_{n_i},\,Av\rangle\\&\ge\langle v-y_{n_i},\,Av\rangle-\Big\langle v-y_{n_i},\,\frac{y_{n_i}-x_{n_i}}{\lambda_{n_i}}+Ax_{n_i}-\frac{\alpha_{n_i}}{\lambda_{n_i}}(Fx_{n_i}-x_{n_i})\Big\rangle\\&=\Big\langle v-y_{n_i},\,Av-Ax_{n_i}-\frac{y_{n_i}-x_{n_i}}{\lambda_{n_i}}+\frac{\alpha_{n_i}}{\lambda_{n_i}}(Fx_{n_i}-x_{n_i})\Big\rangle\\&=\langle v-y_{n_i},\,Av-Ay_{n_i}\rangle+\langle v-y_{n_i},\,Ay_{n_i}-Ax_{n_i}\rangle-\Big\langle v-y_{n_i},\,\frac{y_{n_i}-x_{n_i}}{\lambda_{n_i}}-\frac{\alpha_{n_i}}{\lambda_{n_i}}(Fx_{n_i}-x_{n_i})\Big\rangle\\&\ge\langle v-y_{n_i},\,Ay_{n_i}-Ax_{n_i}\rangle-\Big\langle v-y_{n_i},\,\frac{y_{n_i}-x_{n_i}}{\lambda_{n_i}}-\frac{\alpha_{n_i}}{\lambda_{n_i}}(Fx_{n_i}-x_{n_i})\Big\rangle.\end{aligned}$$
Noting that $\alpha_{n_i}\to 0$, $\|y_{n_i}-x_{n_i}\|\to 0$ and A is Lipschitz continuous, we obtain $\langle v-z,\,w\rangle\ge 0$. Since T is maximal monotone, we have $z\in T^{-1}(0)$ and hence $z\in S_{VI(C,A)}$. Therefore,
$$\limsup_{n\to\infty}\langle\tilde{x}-F\tilde{x},\,\tilde{x}-y_n\rangle=\lim_{i\to\infty}\langle\tilde{x}-F\tilde{x},\,\tilde{x}-y_{n_i}\rangle=\langle\tilde{x}-F\tilde{x},\,\tilde{x}-z\rangle\le 0.$$

The proof of this proposition is now complete. □

Finally, by using Propositions 3-5, we prove Theorem 1.

Proof By the property of the metric projection $P_C$, we have
$$\begin{aligned}\|y_n-\tilde{x}\|^2&=\Big\|P_C\Big[\alpha_nFx_n+(1-\alpha_n)\Big(x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n\Big)\Big]-P_C\Big[\alpha_n\tilde{x}+(1-\alpha_n)\Big(\tilde{x}-\tfrac{\lambda_n}{1-\alpha_n}A\tilde{x}\Big)\Big]\Big\|^2\\&\le\Big\langle\alpha_n(Fx_n-\tilde{x})+(1-\alpha_n)\Big[\Big(x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n\Big)-\Big(\tilde{x}-\tfrac{\lambda_n}{1-\alpha_n}A\tilde{x}\Big)\Big],\,y_n-\tilde{x}\Big\rangle\\&\le\alpha_n\langle\tilde{x}-F\tilde{x},\,\tilde{x}-y_n\rangle+\alpha_n\langle F\tilde{x}-Fx_n,\,\tilde{x}-y_n\rangle+(1-\alpha_n)\Big\|\Big(x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n\Big)-\Big(\tilde{x}-\tfrac{\lambda_n}{1-\alpha_n}A\tilde{x}\Big)\Big\|\,\|y_n-\tilde{x}\|\\&\le\alpha_n\langle\tilde{x}-F\tilde{x},\,\tilde{x}-y_n\rangle+\alpha_n\|F\tilde{x}-Fx_n\|\,\|\tilde{x}-y_n\|+(1-\alpha_n)\|x_n-\tilde{x}\|\,\|y_n-\tilde{x}\|\\&\le\alpha_n\langle\tilde{x}-F\tilde{x},\,\tilde{x}-y_n\rangle+[1-(1-\rho)\alpha_n]\|x_n-\tilde{x}\|\,\|\tilde{x}-y_n\|\\&\le\alpha_n\langle\tilde{x}-F\tilde{x},\,\tilde{x}-y_n\rangle+\frac{1-(1-\rho)\alpha_n}{2}\|x_n-\tilde{x}\|^2+\frac12\|y_n-\tilde{x}\|^2.\end{aligned}$$
Hence,
$$\|y_n-\tilde{x}\|^2\le[1-(1-\rho)\alpha_n]\|x_n-\tilde{x}\|^2+2\alpha_n\langle\tilde{x}-F\tilde{x},\,\tilde{x}-y_n\rangle.$$
Therefore,
$$\|x_{n+1}-\tilde{x}\|^2\le(1-\gamma_n)\|x_n-\tilde{x}\|^2+\gamma_n\|y_n-\tilde{x}\|^2\le[1-(1-\rho)\alpha_n\gamma_n]\|x_n-\tilde{x}\|^2+2\alpha_n\gamma_n\langle\tilde{x}-F\tilde{x},\,\tilde{x}-y_n\rangle.$$

We apply Lemma 3 to the last inequality to deduce that $x_n\to\tilde{x}$.

The proof of our main result is completed. □

Remark 2 Our algorithm (7) includes Korpelevich’s method (4) as a special case. However, it is well known that Korpelevich’s algorithm (4) has only weak convergence in the setting of infinite-dimensional Hilbert spaces, whereas our algorithm (7) converges strongly in that setting.

If we take $F\equiv 0$, then we obtain the following algorithm:
  1. Initialization: $x_0\in C$.
  2. Iterative step: Given $x_n$, define
     $$\begin{cases}y_n=P_C[x_n-\lambda_nAx_n-\alpha_nx_n],\\ x_{n+1}=P_C[x_n-\mu_nAy_n+\gamma_n(y_n-x_n)],\end{cases}\quad n\ge 0.\tag{11}$$
     
Corollary 1 Suppose that $S_{VI(C,A)}\ne\emptyset$. Assume that the algorithm parameters $\{\alpha_n\}$, $\{\lambda_n\}$, $\{\mu_n\}$ and $\{\gamma_n\}$ satisfy the following conditions:
  (C1) $\lim_{n\to\infty}\alpha_n=0$ and $\sum_{n=1}^{\infty}\alpha_n=\infty$;
  (C2) $\lambda_n\in[a,b]\subset(0,2\alpha)$ and $\lim_{n\to\infty}(\lambda_{n+1}-\lambda_n)=0$;
  (C3) $\gamma_n\in(0,1)$, $\mu_n\le 2\alpha\gamma_n$ and $\lim_{n\to\infty}(\gamma_{n+1}-\gamma_n)=\lim_{n\to\infty}(\mu_{n+1}-\mu_n)=0$.

Then the sequence $\{x_n\}$ generated by (11) converges strongly to the minimum-norm element $\tilde{x}$ of $S_{VI(C,A)}$.

Proof It is clear that algorithm (11) is a special case of algorithm (7). So, from Theorem 1, the sequence $\{x_n\}$ defined by (11) converges strongly to $\tilde{x}\in S_{VI(C,A)}$, which solves
$$\langle\tilde{x},\,\tilde{x}-x\rangle\le 0\quad\text{for all }x\in S_{VI(C,A)}.\tag{12}$$
Applying the characterization of the metric projection, we deduce from (12) that
$$\tilde{x}=P_{S_{VI(C,A)}}(0).$$

This shows that $\tilde{x}$ is the minimum-norm element of $S_{VI(C,A)}$. This completes the proof. □
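In code terms, the minimum-norm scheme (11) amounts to calling the earlier sketch of iteration (7) with the zero mapping for F; the data below (operator, constraint set, parameter rules) are purely illustrative.

```python
import numpy as np

# Reuses variant_extragradient from the sketch of (7); F = 0 gives scheme (11),
# whose strong limit is the minimum-norm element of the solution set (Corollary 1).
x_min_norm = variant_extragradient(
    A=lambda x: x - np.array([1.0, 2.0]),    # A is 1-inverse-strongly monotone (identity minus a constant)
    F=lambda x: np.zeros_like(x),            # F = 0
    project=lambda x: np.clip(x, 0.0, 5.0),  # projection onto the box C = [0, 5]^2
    x0=np.zeros(2),
    alpha_n=lambda n: 1.0 / (n + 2),
    lam_n=lambda n: 0.5,
    mu_n=lambda n: 0.25,
    gamma_n=lambda n: 0.5,
)
```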

Remark 3 Corollary 1 includes the main result in [1] as a special case.

Declarations

Acknowledgements

Yonghong Yao was supported in part by NSFC 11071279 and NSFC 71161001-G0105. Yeong-Cheng Liou was partially supported by NSC 100-2221-E-230-012.

Authors’ Affiliations

(1)
Department of Mathematics, Tianjin Polytechnic University, Tianjin, China
(2)
Faculty of Applied Sciences, University ‘Politehnica’ of Bucharest, Bucharest, Romania
(3)
Department of Information Management, Cheng Shiu University, Kaohsiung, Taiwan

References

  1. Alber YI, Iusem AN: Extension of subgradient techniques for nonsmooth optimization in Banach spaces. Set-Valued Anal. 2001, 9: 315–335. 10.1023/A:1012665832688
  2. Bao TQ, Khanh PQ: A projection-type algorithm for pseudomonotone non-Lipschitzian multivalued variational inequalities. Nonconvex Optim. Appl. 2005, 77: 113–129. 10.1007/0-387-23639-2_6
  3. Bauschke HH: The approximation of fixed points of composition of nonexpansive mapping in Hilbert space. J. Math. Anal. Appl. 1996, 202: 150–159. 10.1006/jmaa.1996.0308
  4. Bello Cruz JY, Iusem AN: A strongly convergent direct method for monotone variational inequalities in Hilbert space. Numer. Funct. Anal. Optim. 2009, 30: 23–36. 10.1080/01630560902735223
  5. Bello Cruz JY, Iusem AN: Convergence of direct methods for paramonotone variational inequalities. Comput. Optim. Appl. 2010, 46: 247–263. 10.1007/s10589-009-9246-5
  6. Bnouhachem A, Noor MA, Hao Z: Some new extragradient iterative methods for variational inequalities. Nonlinear Anal. 2009, 70: 1321–1329. 10.1016/j.na.2008.02.014
  7. Bruck RE: On the weak convergence of an ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space. J. Math. Anal. Appl. 1977, 61: 159–164. 10.1016/0022-247X(77)90152-4
  8. Cegielski A, Zalas R: Methods for variational inequality problem over the intersection of fixed point sets of quasi-nonexpansive operators. Numer. Funct. Anal. Optim. 2013, 34: 255–283. 10.1080/01630563.2012.716807
  9. Censor Y, Gibali A, Reich S: Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Methods Softw. 2011, 26: 827–845. 10.1080/10556788.2010.551536
  10. Censor Y, Gibali A, Reich S: Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59: 301–323. 10.1007/s11075-011-9490-5
  11. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems, vols. I and II. Springer, New York; 2003.
  12. Glowinski R: Numerical Methods for Nonlinear Variational Problems. Springer, New York; 1984.
  13. He BS: A new method for a class of variational inequalities. Math. Program. 1994, 66: 137–144. 10.1007/BF01581141
  14. He BS, Yang ZH, Yuan XM: An approximate proximal-extragradient type method for monotone variational inequalities. J. Math. Anal. Appl. 2004, 300: 362–374. 10.1016/j.jmaa.2004.04.068
  15. Hirstoaga SA: Iterative selection methods for common fixed point problems. J. Math. Anal. Appl. 2006, 324: 1020–1035. 10.1016/j.jmaa.2005.12.064
  16. Iiduka H, Takahashi W: Weak convergence of a projection algorithm for variational inequalities in a Banach space. J. Math. Anal. Appl. 2008, 339: 668–679. 10.1016/j.jmaa.2007.07.019
  17. Iusem AN: An iterative algorithm for the variational inequality problem. Comput. Appl. Math. 1994, 13: 103–114.
  18. Khobotov EN: Modification of the extra-gradient method for solving variational inequalities and certain optimization problems. U.S.S.R. Comput. Math. Math. Phys. 1989, 27: 120–127.
  19. Kinderlehrer D, Stampacchia G: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York; 1980.
  20. Korpelevich GM: An extragradient method for finding saddle points and for other problems. Èkon. Mat. Metody 1976, 12: 747–756.
  21. Lions JL, Stampacchia G: Variational inequalities. Commun. Pure Appl. Math. 1967, 20: 493–517. 10.1002/cpa.3160200302
  22. Lu X, Xu HK, Yin X: Hybrid methods for a class of monotone variational inequalities. Nonlinear Anal. 2009, 71: 1032–1041. 10.1016/j.na.2008.11.067
  23. Sabharwal A, Potter LC: Convexly constrained linear inverse problems: iterative least-squares and regularization. IEEE Trans. Signal Process. 1998, 46: 2345–2352. 10.1109/78.709518
  24. Solodov MV, Svaiter BF: A new projection method for variational inequality problems. SIAM J. Control Optim. 1999, 37: 765–776. 10.1137/S0363012997317475
  25. Solodov MV, Tseng P: Modified projection-type methods for monotone variational inequalities. SIAM J. Control Optim. 1996, 34: 1814–1830. 10.1137/S0363012994268655
  26. Verma RU: General convergence analysis for two-step projection methods and applications to variational problems. Appl. Math. Lett. 2005, 18: 1286–1292. 10.1016/j.aml.2005.02.026
  27. Xu HK, Kim TH: Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 2003, 119: 185–201.
  28. Yamada I, Ogura N: Hybrid steepest descent method for the variational inequality problem over the fixed point set of certain quasi-nonexpansive mappings. Numer. Funct. Anal. Optim. 2004, 25: 619–655.
  29. Yao JC: Variational inequalities with generalized monotone operators. Math. Oper. Res. 1994, 19: 691–705. 10.1287/moor.19.3.691
  30. Yao Y, Chen R, Xu HK: Schemes for finding minimum-norm solutions of variational inequalities. Nonlinear Anal. 2010, 72: 3447–3456. 10.1016/j.na.2009.12.029
  31. Yao Y, Liou YC, Kang SM: Two-step projection methods for a system of variational inequality problems in Banach spaces. J. Glob. Optim. 2013, 55(4): 801–811. 10.1007/s10898-011-9804-0
  32. Yao Y, Liou YC, Shahzad N: Construction of iterative methods for variational inequality and fixed point problems. Numer. Funct. Anal. Optim. 2012, 33: 1250–1267. 10.1080/01630563.2012.660796
  33. Yao Y, Marino G, Muglia L: A modified Korpelevich’s method convergent to the minimum norm solution of a variational inequality. Optimization 2012. 10.1080/02331934.2012.674947
  34. Yao Y, Noor MA, Liou YC, Kang SM: Iterative algorithms for general multi-valued variational inequalities. Abstr. Appl. Anal. 2012, 2012: Article ID 768272
  35. Yao Y, Noor MA: On viscosity iterative methods for variational inequalities. J. Math. Anal. Appl. 2007, 325: 776–787. 10.1016/j.jmaa.2006.01.091
  36. Zeng LC, Hadjisavvas N, Wong NC: Strong convergence theorem by a hybrid extragradient-like approximation method for variational inequalities and fixed point problems. J. Glob. Optim. 2010, 46: 635–646. 10.1007/s10898-009-9454-7
  37. Zeng LC, Teboulle M, Yao JC: Weak convergence of an iterative method for pseudomonotone variational inequalities and fixed point problems. J. Optim. Theory Appl. 2010, 146: 19–31. 10.1007/s10957-010-9650-0
  38. Zhu D, Marcotte P: New classes of generalized monotonicity. J. Optim. Theory Appl. 1995, 87: 457–471. 10.1007/BF02192574
  39. Iusem AN, Svaiter BF: A variant of Korpelevich’s method for variational inequalities with a new search strategy. Optimization 1997, 42: 309–321. 10.1080/02331939708844365
  40. Censor Y, Gibali A, Reich S: Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 2010. 10.1080/02331934.2010.539689
  41. Censor Y, Gibali A, Reich S: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148: 318–335. 10.1007/s10957-010-9757-3
  42. Iusem AN, Lucambio Pérez LR: An extragradient-type algorithm for non-smooth variational inequalities. Optimization 2000, 48: 309–332. 10.1080/02331930008844508
  43. Mashreghi J, Nasri M: Forcing strong convergence of Korpelevich’s method in Banach spaces with its applications in game theory. Nonlinear Anal. 2010, 72: 2086–2099. 10.1016/j.na.2009.10.009
  44. Noor MA: New extragradient-type methods for general variational inequalities. J. Math. Anal. Appl. 2003, 277: 379–394. 10.1016/S0022-247X(03)00023-4
  45. Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428. 10.1023/A:1025407607560
  46. Suzuki T: Strong convergence theorems for infinite families of nonexpansive mappings in general Banach spaces. Fixed Point Theory Appl. 2005, 2005: 103–123.
  47. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256. 10.1112/S0024610702003332

Copyright

© Yao et al.; licensee Springer 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
