Open Access

The subgradient double projection method for variational inequalities in a Hilbert space

Fixed Point Theory and Applications 2013, 2013:136

https://doi.org/10.1186/1687-1812-2013-136

Received: 25 December 2012

Accepted: 10 May 2013

Published: 27 May 2013

Abstract

We present a modification of the double projection algorithm proposed by Solodov and Svaiter for solving variational inequalities (VI) in a Hilbert space. The main modification is to use the subgradient of a convex function to obtain a hyperplane, and to replace the second projection, onto the intersection of the feasible set and a halfspace, with a projection onto the intersection of two halfspaces. In addition, we propose a modified version of our algorithm that finds a solution of the VI which is also a fixed point of a given nonexpansive mapping. We establish weak convergence theorems for our algorithms.

MSC:90C25, 90C30.

Keywords

variational inequalities; nonexpansive mapping; subgradient; double projection algorithm; weak convergence

1 Introduction

Let $H$ be a real Hilbert space, let $C \subseteq H$ be a nonempty, closed and convex set, and let $f : C \to H$ be a mapping. The inner product and norm are denoted by $\langle \cdot , \cdot \rangle$ and $\| \cdot \|$, respectively. We write $x^k \rightharpoonup x$ to indicate that the sequence $\{x^k\}$ converges weakly to $x$ and $x^k \to x$ to indicate that the sequence $\{x^k\}$ converges strongly to $x$. For each point $x \in H$, there exists a unique nearest point in $C$, which is called the projection of $x$ onto $C$ and is denoted by $P_C(x)$. That is,
$$P_C(x) = \arg\min\{\|y - x\| \mid y \in C\}.$$

It is well known that the projection operator is nonexpansive (i.e., Lipschitz continuous with a Lipschitz constant 1), and hence is also a bounded mapping.
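As a concrete illustration (ours, not part of the original paper), the following minimal Python sketch takes the closed unit ball as a simple feasible set $C$, evaluates $P_C$ in closed form, and numerically checks the nonexpansiveness of the projection; all function names here are purely illustrative.

```python
import numpy as np

def project_ball(x, center=None, radius=1.0):
    """Projection onto the closed ball {y : ||y - center|| <= radius},
    used as a stand-in for a 'simple' feasible set C."""
    center = np.zeros_like(x) if center is None else center
    d = x - center
    nd = np.linalg.norm(d)
    return x.copy() if nd <= radius else center + (radius / nd) * d

# Nonexpansiveness check: ||P_C(x) - P_C(y)|| <= ||x - y|| for random points.
rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)
assert np.linalg.norm(project_ball(x) - project_ball(y)) <= np.linalg.norm(x - y) + 1e-12
```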

We consider the following variational inequality problem, denoted by $\mathrm{VI}(C, f)$: find a vector $x^* \in C$ such that
$$\langle f(x^*), y - x^* \rangle \ge 0 \quad \text{for all } y \in C.$$
(1.1)

The variational inequality problem was first introduced by Hartman and Stampacchia [1] in 1966. In recent years, many iterative projection-type algorithms have been proposed and analyzed for solving the variational inequality problem; see [2] and the references therein. To implement these algorithms, one has to compute the projection onto the feasible set C, or onto some related set.

In 1999, Solodov and Svaiter [3] proposed an algorithm for solving $\mathrm{VI}(C, f)$ in Euclidean space, known as the double projection algorithm because two projections must be computed at each iteration: one onto the feasible set $C$, and the other onto the intersection of $C$ and a halfspace. More precisely, they presented the following algorithm.

Algorithm 1.1 Choose an initial point $x^0$, parameters $\mu > 0$, $\sigma, \gamma \in (0, 1)$, and set $k = 0$.

Step 1. Having $x^k$, compute
$$y^k = P_C[x^k - \mu f(x^k)].$$

Stop if $x^k = y^k$; otherwise, go to Step 2.

Step 2. Compute $z^k = x^k - \eta_k(x^k - y^k)$, where $\eta_k = \gamma^{m_k}$ with $m_k$ being the smallest nonnegative integer $m$ such that
$$\langle f(x^k - \gamma^m(x^k - y^k)), x^k - y^k \rangle \ge \sigma\|x^k - y^k\|^2.$$
(1.2)
Step 3. Compute
$$x^{k+1} = P_{C \cap H_k}(x^k),$$
where
$$H_k = \{x \in \mathbb{R}^n \mid \langle f(z^k), x - z^k \rangle \le 0\}.$$
(1.3)

Let $k := k + 1$ and return to Step 1.
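The following Python sketch (ours, not taken from [3]) mirrors the structure of Algorithm 1.1 in a finite-dimensional setting. The mapping $f$ and the two projections are supplied by the caller as oracles; for a general closed convex $C$ the projection onto $C \cap H_k$ is itself an optimization problem, which is exactly the difficulty discussed below.

```python
import numpy as np

def solodov_svaiter(x0, f, proj_C, proj_C_cap_H, mu=1.0, sigma=0.5, gamma=0.5,
                    tol=1e-8, max_iter=1000):
    """Sketch of Algorithm 1.1 (double projection method).

    f            : the mapping of the VI
    proj_C       : x -> P_C(x)
    proj_C_cap_H : (x, w, z) -> projection of x onto C ∩ {v : <w, v - z> <= 0};
                   for a general C this oracle must solve a subproblem.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = proj_C(x - mu * f(x))                       # Step 1
        if np.linalg.norm(x - y) <= tol:
            return x                                    # x approximately solves the VI
        eta = 1.0                                       # Step 2: Armijo-type search (1.2)
        while np.dot(f(x - eta * (x - y)), x - y) < sigma * np.dot(x - y, x - y):
            eta *= gamma
        z = x - eta * (x - y)
        x = proj_C_cap_H(x, f(z), z)                    # Step 3: project onto C ∩ H_k
    return x
```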

Although [4] shows that, in theory, Algorithm 1.1 allows a longer stepsize and is therefore a better algorithm than the extragradient method proposed by Korpelevich [5], one still needs to calculate two projections at each iteration: one onto the feasible set $C$ and one onto the related set $C \cap H_k$. If the set $C$ is simple enough (e.g., $C$ is a halfspace or a ball) so that projections onto it and onto the related set are easy to execute, then Algorithm 1.1 is particularly useful. But if $C$ is a general closed convex set, one has to solve the two problems $\min_{x \in C}\|x - (x^k - \mu f(x^k))\|$ and $\min_{x \in C \cap H_k}\|x - x^k\|$ at each iteration to obtain the next iterate $x^{k+1}$. This might seriously affect the efficiency of Algorithm 1.1.

Recently, Censor et al. [6, 7] presented a subgradient extragradient projection method for solving $\mathrm{VI}(C, f)$. Inspired by the above works, in this paper we present a modification of Algorithm 1.1 in a Hilbert space. Our algorithm replaces the projection $P_{C \cap H_k}$ by $P_{C_k \cap H_k}$, where $C_k$ is a halfspace constructed from a subgradient and containing the feasible set $C$, so that $C_k \cap H_k$ is the intersection of two halfspaces. Observe that the projection onto the intersection of two halfspaces is easily computable. In addition, we propose a modified version of our algorithm that finds a solution of the VI which is also a fixed point of a given nonexpansive mapping. We establish weak convergence theorems for our algorithms.

2 Preliminaries

In this section, we recall some useful definitions and results which will be used in this paper.

We have the following properties of the projection operator; see [8].

Lemma 2.1 Let $\Omega \subseteq H$ be a closed and convex set. Then for any $x \in H$ and $z \in \Omega$,
$$(1)\ \ \|P_\Omega(x) - z\|^2 \le \|x - z\|^2 - \|P_\Omega(x) - x\|^2; \qquad (2)\ \ \langle P_\Omega(x) - x, z - P_\Omega(x) \rangle \ge 0.$$

The next property is known as the Opial condition [9]. Any Hilbert space has this property.

Condition 2.1 (Opial)

For any sequence $\{x^k\}$ in $H$ that converges weakly to $x$ ($x^k \rightharpoonup x$),
$$\liminf_{k\to\infty}\|x^k - x\| < \liminf_{k\to\infty}\|x^k - y\|, \quad \forall y \ne x.$$
Lemma 2.2 Let $H$ be a real Hilbert space and let $D$ be a nonempty, closed and convex subset of $H$. Let the sequence $\{x^k\} \subseteq H$ be Fejér-monotone with respect to $D$, i.e., for every $u \in D$,
$$\|x^{k+1} - u\| \le \|x^k - u\|, \quad \forall k \ge 0.$$

Then $\{P_D(x^k)\}$ converges strongly to some $z \in D$.

Proof See [[10], Lemma 3.2]. □

Lemma 2.3 Let $H$ be a real Hilbert space, let $\{\alpha_k\}$ be a real sequence satisfying $0 < a \le \alpha_k \le b < 1$ for all $k \ge 0$, and let $\{\nu^k\}$ and $\{\omega^k\}$ be two sequences in $H$ such that, for some $\sigma \ge 0$,
$$\limsup_{k\to\infty}\|\nu^k\| \le \sigma, \qquad \limsup_{k\to\infty}\|\omega^k\| \le \sigma$$
and
$$\lim_{k\to\infty}\|\alpha_k\nu^k + (1 - \alpha_k)\omega^k\| = \sigma.$$
Then
$$\lim_{k\to\infty}\|\nu^k - \omega^k\| = 0.$$

Proof See [[11], Lemma 3.1]. □

The next fact is known as the demiclosedness principle [12].

Lemma 2.4 Let $H$ be a real Hilbert space, let $D$ be a closed and convex subset of $H$, and let $S : D \to H$ be a nonexpansive mapping. Then $I - S$ (where $I$ is the identity operator on $H$) is demiclosed at $y \in H$; i.e., for any sequence $\{x^k\}$ in $D$ such that $x^k \rightharpoonup \bar{x} \in D$ and $(I - S)x^k \to y$, we have $(I - S)\bar{x} = y$.

Lemma 2.5 Let $H$ be a real Hilbert space, let $h$ be a real-valued function on $H$, and let $K$ be the set $\{x \in H : h(x) \le 0\}$. If $K$ is nonempty and $h$ is Lipschitz continuous with modulus $\theta > 0$, then
$$\operatorname{dist}(x, K) \ge \theta^{-1}h(x), \quad \forall x \in H,$$
(2.1)

where $\operatorname{dist}(x, K)$ denotes the distance from $x$ to $K$.

Proof It is clear that (2.1) holds for all $x \in K$. Hence, it suffices to show that (2.1) holds for all $x \in H \setminus K$. Let $x \in H$ with $x \notin K$. By the definition of the distance, for an arbitrary positive number $\varepsilon$, there exists $z \in K$ such that
$$\|x - z\| \le \operatorname{dist}(x, K) + \varepsilon.$$
Since $h$ is Lipschitz continuous with modulus $\theta$ on $H$, we have
$$|h(x) - h(z)| \le \theta\|x - z\| \le \theta\operatorname{dist}(x, K) + \theta\varepsilon.$$
It follows from $z \in K$ that $h(z) \le 0$. Thus we have
$$h(x) \le h(x) - h(z) \le |h(x) - h(z)| \le \theta\operatorname{dist}(x, K) + \theta\varepsilon.$$

Letting $\varepsilon \to 0^{+}$, we obtain the desired result. □

Remark 2.1 The idea of Lemma 2.5 comes from Lemma 2.3 of [13]. In Lemma 2.5, if we take $K := K \cap C$, where $C$ is a closed subset of $H$ and $K \cap C \ne \emptyset$, then (2.1) still holds. In fact, for each $x \in H$, since $K \cap C \subseteq K$, we have
$$\inf_{y \in K \cap C}\|x - y\| \ge \inf_{y \in K}\|x - y\|,$$

that is, $\operatorname{dist}(x, C \cap K) \ge \operatorname{dist}(x, K)$.

In this paper, we assume that the convex set C satisfies the following assumptions:

(1) The set $C$ is given by
$$C = \{x \in H \mid c(x) \le 0\},$$
(2.2)

where $c : H \to \mathbb{R}$ is a convex (not necessarily differentiable) function and $C$ is nonempty.

Note that the differentiability of $c(x)$ is not assumed; therefore the set $C$ is quite general. For example, any system of inequalities $c_j(x) \le 0$, $j \in J$, where each $c_j(x)$ is convex and $J$ is an arbitrary index set, is the same as the single inequality $c(x) \le 0$ with $c(x) = \sup\{c_j(x) \mid j \in J\}$. In fact, every closed convex set can be represented in this way, e.g., take $c(x) = \operatorname{dist}(x, C)$, where dist is the distance function; see [[7], Section 1.3(c)].

(2) For any $x \in H$, at least one subgradient $\xi \in \partial c(x)$ can be calculated, where $\partial c(x)$ is the subdifferential of $c$ at $x$, defined as follows:
$$\partial c(x) = \{\xi \in H \mid c(z) \ge c(x) + \langle \xi, z - x \rangle \text{ for all } z \in H\}.$$
Denote
$$C_k = \{x \in H \mid c(x^k) + \langle \xi^k, x - x^k \rangle \le 0\},$$
(2.3)

where $\xi^k \in \partial c(x^k)$.

Proposition 2.1 For every nonnegative integer $k$, let $x^k \in H$ and let $C_k$ be defined as in (2.3). Then for any $x \in H$, we have
$$P_{C_k}(x) = \begin{cases} x - \dfrac{c(x^k) + \langle \xi^k, x - x^k \rangle}{\|\xi^k\|^2}\,\xi^k & \text{if } c(x^k) + \langle \xi^k, x - x^k \rangle > 0, \\ x & \text{otherwise}. \end{cases}$$
(2.4)

Proof See [14]. □

Remark 2.2 (1) From the definition of the subdifferential, we have $C \subseteq C_k$ for all $k$. In fact, for any $x \in C$ and $\xi^k \in \partial c(x^k)$, we have
$$c(x^k) + \langle \xi^k, x - x^k \rangle \le c(x) \le 0,$$

i.e., $x \in C_k$, and hence $C \subseteq C_k$.

(2) From Proposition 2.1, we observe that $P_{C_k}$ can be represented explicitly, without resorting to a projection subroutine, so its computation is easy. Recently, $C_k$ has often been used as the projection region in algorithms for the split feasibility problem; see [15–18].
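To make the construction of $C_k$ concrete, here is a small Python sketch (ours, purely illustrative). It assumes that $c$ is the pointwise maximum of finitely many differentiable convex functions, so that a subgradient at $x^k$ is the gradient of any maximizing piece, and then applies formula (2.4).

```python
import numpy as np

def subgradient_of_max(pieces, grads, x):
    """For c(x) = max_j c_j(x), return c(x) and the gradient of one maximizing
    piece, which is a subgradient of c at x (illustrative assumption)."""
    vals = [c(x) for c in pieces]
    j = int(np.argmax(vals))
    return vals[j], grads[j](x)

def project_onto_Ck(x, xk, c_xk, xi_k):
    """Projection (2.4) onto C_k = {x : c(x^k) + <xi^k, x - x^k> <= 0}."""
    t = c_xk + np.dot(xi_k, x - xk)
    if t > 0:
        return x - (t / np.dot(xi_k, xi_k)) * xi_k
    return x

# Example: C = {x in R^2 : max(x1 - 1, x2 - 1) <= 0}, linearized at xk = (2, 0).
pieces = [lambda x: x[0] - 1.0, lambda x: x[1] - 1.0]
grads  = [lambda x: np.array([1.0, 0.0]), lambda x: np.array([0.0, 1.0])]
xk = np.array([2.0, 0.0])
c_xk, xi_k = subgradient_of_max(pieces, grads, xk)
print(project_onto_Ck(np.array([3.0, 0.5]), xk, c_xk, xi_k))   # -> [1.  0.5]
```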

3 The subgradient double projection algorithm

Throughout the rest of the paper, the following assumptions are needed.

Assumption

(A1) The solution set of problem (1.1), denoted by $\mathrm{SOL}(C, f)$, is nonempty.

(A2) For all $x \in H$, let $y = P_C[x - \mu f(x)]$ and let $[x, y]$ denote the closed line segment joining $x$ and $y$; then $f$ satisfies
$$\langle f(z), z - x^* \rangle \ge 0, \quad \forall z \in [x, y],\ \forall x^* \in \mathrm{SOL}(C, f).$$

(A3) The mapping $f$ is continuous and bounded on bounded subsets of $H$.
Remark 3.1 (1) If $f$ is Lipschitz continuous, then $f$ is bounded on bounded subsets of $H$, but continuity together with boundedness on bounded sets does not imply Lipschitz continuity. For example, let $H = \mathbb{R} = (-\infty, +\infty)$ and $f(x) = x^2$, $x \in H$. It is easy to see that $f$ is continuous and bounded on bounded subsets of $H$, but $f$ is not Lipschitz continuous. So our assumption (A3) is weaker than Lipschitz continuity. Recently, [6] proposed two subgradient extragradient algorithms for $\mathrm{VI}(C, f)$ in a Hilbert space and established weak convergence theorems for them under the assumptions of Lipschitz continuity and monotonicity of $f$.

(2) Here we give a concrete example satisfying condition (A2). Let $f : \mathbb{R} \to \mathbb{R}$ be defined by $f(x) = 1 - e^{-x}$, and let $C = [0, 1]$. Indeed, $\mathrm{SOL}(C, f) = \{0\}$, and for every $z \in \mathbb{R}$ we have $\langle f(z), z - 0 \rangle = (1 - e^{-z})z \ge 0$, since $1 - e^{-z}$ and $z$ always have the same sign.

In this paper, we establish weak convergence theorems for subgradient double projection methods for $\mathrm{VI}(C, f)$ in a Hilbert space under assumptions (A1)-(A3).

Algorithm 3.1 Select $x^0 \in C$, $\alpha > 0$, $\beta \ge 0$, $\sigma > 0$, $\mu \in (0, \sigma^{-1})$, $\gamma \in (0, 1)$. Set $k = 0$.

Step 1. For the current iterate $x^k$, compute
$$y^k = P_C(x^k - \mu f(x^k)).$$

If $x^k = y^k$, stop; else go to Step 2.

Step 2. Compute $z^k = (1 - \eta_k)x^k + \eta_k y^k$, where $\eta_k = \gamma^{m_k}$ with $m_k$ being the smallest nonnegative integer $m$ satisfying
$$\langle f(x^k) - f(x^k - \gamma^m(x^k - y^k)), x^k - y^k \rangle \le \sigma\|x^k - y^k\|^2.$$
(3.1)
Step 3. Compute $x^{k+1} = P_{C_k \cap H_k}(x^k)$, where
$$C_k = \{x \in H \mid c(x^k) + \langle \xi^k, x - x^k \rangle \le 0\},$$
$H_k = \{v \in H : h_k(v) \le 0\}$ is the halfspace defined by the function
$$h_k(v) = \langle \alpha\eta_k(x^k - y^k) + \beta f(x^k) + \alpha\mu f(z^k), v - x^k \rangle + \alpha\eta_k(1 - \mu\sigma)\|x^k - y^k\|^2,$$
(3.2)

and $\xi^k \in \partial c(x^k)$.

Let $k := k + 1$ and return to Step 1.

Remark 3.2 (1) Since $C_k$ and $H_k$ are halfspaces, the projection onto $C_k \cap H_k$ can be calculated directly (cf. Proposition 2.1), so our Algorithm 3.1 can be implemented more easily than Algorithm 1.1.

(2) If $y^k = x^k$ for some $k$, then $x^k$ is a solution of problem (1.1). In fact, suppose $y^k = x^k$. Since $y^k = P_C(x^k - \mu f(x^k))$, it follows from Lemma 2.1(2) that
$$\langle x^k - \mu f(x^k) - x^k, x - x^k \rangle \le 0, \quad \forall x \in C,$$

this means that $x^k$ is a solution of problem (1.1).
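To close this section, the following Python sketch (ours, not a verbatim implementation from the paper) assembles Algorithm 3.1 in $\mathbb{R}^n$. The mapping $f$, the function $c$ with a subgradient oracle, and the projection $P_C$ needed in Step 1 are assumed to be supplied by the caller; the projection onto $C_k \cap H_k$ is computed by the standard closed-form case analysis for two halfspaces, assuming in the last case that their normals are linearly independent.

```python
import numpy as np

def project_two_halfspaces(x, a, ca, b, cb, tol=1e-12):
    """Project x onto {v : <a, v> <= ca} ∩ {v : <b, v> <= cb} (assumed nonempty).
    Case analysis from the KKT conditions of the 2-constraint least-squares problem."""
    ra, rb = np.dot(a, x) - ca, np.dot(b, x) - cb
    if ra <= tol and rb <= tol:
        return x                                        # already feasible
    if ra > 0:                                          # only the first constraint active
        z = x - (ra / np.dot(a, a)) * a
        if np.dot(b, z) - cb <= tol:
            return z
    if rb > 0:                                          # only the second constraint active
        z = x - (rb / np.dot(b, b)) * b
        if np.dot(a, z) - ca <= tol:
            return z
    G = np.array([[np.dot(a, a), np.dot(a, b)],         # both constraints active:
                  [np.dot(a, b), np.dot(b, b)]])        # solve 2x2 system for multipliers
    lam = np.linalg.solve(G, np.array([ra, rb]))
    return x - lam[0] * a - lam[1] * b

def sdp_method(x0, f, proj_C, c, subgrad_c, alpha=1.0, beta=0.0, sigma=0.5,
               mu=1.0, gamma=0.5, tol=1e-8, max_iter=1000):
    """Sketch of Algorithm 3.1 (subgradient double projection method)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = proj_C(x - mu * f(x))                       # Step 1
        d = x - y
        if np.linalg.norm(d) <= tol:
            return x
        eta = 1.0                                       # Step 2: Armijo search (3.1)
        while np.dot(f(x) - f(x - eta * d), d) > sigma * np.dot(d, d):
            eta *= gamma
        z = (1 - eta) * x + eta * y
        xi = subgrad_c(x)                               # C_k from (2.3): <xi, v> <= <xi, x> - c(x)
        w = alpha * eta * d + beta * f(x) + alpha * mu * f(z)
        # H_k from (3.2): <w, v> <= <w, x> - alpha*eta*(1 - mu*sigma)*||d||^2
        x = project_two_halfspaces(
            x, xi, np.dot(xi, x) - c(x),
            w, np.dot(w, x) - alpha * eta * (1 - mu * sigma) * np.dot(d, d))
    return x
```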

4 Convergence of the subgradient double projection algorithm

Now we consider the convergence of Algorithm 3.1. Certainly, if the algorithm terminates after finitely many steps, say at step $k$, then $x^k$ is a solution of problem (1.1). So, in the following analysis, we assume that the algorithm always generates an infinite sequence.

Lemma 4.1 Let $x^* \in \mathrm{SOL}(C, f)$ and let the function $h_k$ be defined by (3.2). Then
$$h_k(x^k) \ge \alpha\eta_k(1 - \mu\sigma)\|x^k - y^k\|^2 \quad \text{and} \quad h_k(x^*) \le 0.$$
(4.1)

In particular, if $x^k \ne y^k$, then $h_k(x^k) > 0$.

Proof
$$h_k(x^k) = \bigl\langle \alpha\eta_k(x^k - y^k) + \beta f(x^k) + \alpha\mu f(z^k),\ x^k - x^k \bigr\rangle + \alpha\eta_k(1 - \mu\sigma)\|x^k - y^k\|^2 = \alpha\eta_k(1 - \mu\sigma)\|x^k - y^k\|^2.$$
If $x^k \ne y^k$, then $h_k(x^k) > 0$ because $\mu\sigma < 1$. In the following, we prove that $h_k(x^*) \le 0$. Since
$$y^k = P_C(x^k - \mu f(x^k)),$$
by (2) of Lemma 2.1 we have
$$\langle x^k - y^k - \mu f(x^k), y^k - x^* \rangle \ge 0.$$
(4.2)
By assumption (A2),
$$\langle \mu f(x^k), x^k - x^* \rangle \ge 0.$$
(4.3)
Adding inequalities (4.2) and (4.3), we obtain
$$\langle x^k - y^k - \mu f(x^k), y^k - x^k \rangle + \langle x^k - y^k, x^k - x^* \rangle \ge 0.$$
(4.4)
We have
$$\begin{aligned} \bigl\langle x^k - x^*,\ \alpha\eta_k(x^k - y^k) + \beta f(x^k) + \alpha\mu f(z^k) \bigr\rangle &= \alpha\eta_k\langle x^k - x^*, x^k - y^k \rangle + \beta\langle f(x^k), x^k - x^* \rangle + \alpha\mu\langle f(z^k), x^k - x^* \rangle \\ &\ge \alpha\eta_k\langle x^k - y^k - \mu f(x^k), x^k - y^k \rangle + \alpha\mu\eta_k\langle f(z^k), x^k - y^k \rangle \\ &= \alpha\eta_k\|x^k - y^k\|^2 - \alpha\mu\eta_k\langle f(x^k) - f(z^k), x^k - y^k \rangle \\ &\ge \alpha\eta_k\|x^k - y^k\|^2 - \alpha\mu\sigma\eta_k\|x^k - y^k\|^2 \\ &= \eta_k\alpha(1 - \mu\sigma)\|x^k - y^k\|^2, \end{aligned}$$

where the first inequality follows from (A2) and (4.4) and the last one follows from (3.1).

It follows that
$$\begin{aligned} h_k(x^*) &= \bigl\langle \alpha\eta_k(x^k - y^k) + \beta f(x^k) + \alpha\mu f(z^k),\ x^* - x^k \bigr\rangle + \eta_k\alpha(1 - \mu\sigma)\|x^k - y^k\|^2 \\ &= -\bigl\langle \alpha\eta_k(x^k - y^k) + \beta f(x^k) + \alpha\mu f(z^k),\ x^k - x^* \bigr\rangle + \eta_k\alpha(1 - \mu\sigma)\|x^k - y^k\|^2 \\ &\le -\eta_k\alpha(1 - \mu\sigma)\|x^k - y^k\|^2 + \eta_k\alpha(1 - \mu\sigma)\|x^k - y^k\|^2 = 0. \end{aligned}$$

The proof is completed. □

Lemma 4.2 Suppose that assumptions (A1)-(A3) hold and that the sequences $\{x^k\}$ and $\{y^k\}$ are generated by Algorithm 3.1. Then there exists a positive number $M$ such that
$$\|x^{k+1} - x^*\|^2 \le \|x^k - x^*\|^2 - M^{-2}(1 - \mu\sigma)^2\alpha^2\eta_k^2\|y^k - x^k\|^4, \quad \forall k,$$
(4.5)

where $x^* \in \mathrm{SOL}(C, f)$.

Proof From Lemma 4.1 and Remark 2.2(1), we get $\mathrm{SOL}(C, f) \subseteq H_k \cap C_k$, and hence $x^* \in H_k \cap C_k$. Since $x^{k+1} = P_{C_k \cap H_k}(x^k)$, it follows from Lemma 2.1(1) that
$$\|x^{k+1} - x^*\|^2 \le \|x^k - x^*\|^2 - \|x^{k+1} - x^k\|^2 = \|x^k - x^*\|^2 - \operatorname{dist}^2(x^k, C_k \cap H_k).$$
(4.6)
It follows that the sequence $\{\|x^k - x^*\|\}$ is nonincreasing, and hence convergent. Thus, $\{x^k\}$ is bounded and
$$\lim_{k\to\infty}\operatorname{dist}(x^k, C_k \cap H_k) = 0.$$
(4.7)
Since $f$ and the projection operator $P_C$ are continuous and bounded on bounded sets, the sequence $\{y^k\}$, and hence the sequences $\{f(x^k)\}$ and $\{f(z^k)\}$, are bounded; thus, for some $M > 0$,
$$\|\alpha\eta_k(x^k - y^k) + \beta f(x^k) + \alpha\mu f(z^k)\| \le M, \quad \forall k.$$
(4.8)
Clearly, each function $h_k$ is Lipschitz continuous on $H$ with modulus $M$. Applying Lemma 2.5, Remark 2.1 and Lemma 4.1, we obtain
$$\operatorname{dist}(x^k, C_k \cap H_k) \ge M^{-1}h_k(x^k) \ge M^{-1}\alpha\eta_k(1 - \mu\sigma)\|x^k - y^k\|^2,$$
which, together with (4.6), yields
$$\|x^{k+1} - x^*\|^2 \le \|x^k - x^*\|^2 - M^{-2}(1 - \mu\sigma)^2\alpha^2\eta_k^2\|y^k - x^k\|^4, \quad \forall k.$$

 □

Theorem 4.1 Suppose that assumptions (A1)-(A3) hold. Then any sequence $\{x^k\}$ generated by Algorithm 3.1 converges weakly to some solution $u \in \mathrm{SOL}(C, f)$.

Proof By Lemma 4.2, the sequence $\{x^k\}$ is bounded and
$$\lim_{k\to\infty}\eta_k\|x^k - y^k\|^2 = 0.$$
(4.9)
If $\limsup_{k\to\infty}\eta_k > 0$, then $\liminf_{k\to\infty}\|x^k - y^k\|^2 = 0$. Therefore there exist a weak accumulation point $\bar{x}$ of $\{x^k\}$ and subsequences $\{x^{k_j}\}$ and $\{y^{k_j}\}$ of $\{x^k\}$ and $\{y^k\}$, respectively, such that
$$\lim_{j\to\infty}\|x^{k_j} - y^{k_j}\| = 0$$
(4.10)
and
$$x^{k_j} \rightharpoonup \bar{x} \quad \text{as } j \to \infty.$$
(4.11)
Since $x^{k_j} - (x^{k_j} - y^{k_j}) = P_C(x^{k_j} - \mu f(x^{k_j}))$, it follows from Lemma 2.1(2) that
$$\bigl\langle x - x^{k_j} + (x^{k_j} - y^{k_j}),\ x^{k_j} - \mu f(x^{k_j}) - y^{k_j} \bigr\rangle \le 0, \quad \forall x \in C.$$
Letting $j \to \infty$ and taking into account (4.10), (4.11) and the continuity of $f$, we obtain
$$\langle f(\bar{x}), x - \bar{x} \rangle \ge 0, \quad \forall x \in C,$$
i.e., $\bar{x}$ is a solution of problem (1.1). In order to show that the entire sequence $\{x^k\}$ converges weakly to $\bar{x}$, assume that there exists another subsequence $\{x^{\bar{k}_j}\}$ of $\{x^k\}$ that converges weakly to some $\hat{x} \in \mathrm{SOL}(C, f)$ with $\hat{x} \ne \bar{x}$. It follows from Lemma 4.2 that the sequences $\{\|x^k - \bar{x}\|\}$ and $\{\|x^k - \hat{x}\|\}$ are convergent. By the Opial condition, we have
$$\begin{aligned} \lim_{k\to\infty}\|x^k - \bar{x}\| &= \liminf_{j\to\infty}\|x^{k_j} - \bar{x}\| < \liminf_{j\to\infty}\|x^{k_j} - \hat{x}\| = \lim_{k\to\infty}\|x^k - \hat{x}\| \\ &= \liminf_{j\to\infty}\|x^{\bar{k}_j} - \hat{x}\| < \liminf_{j\to\infty}\|x^{\bar{k}_j} - \bar{x}\| = \lim_{k\to\infty}\|x^k - \bar{x}\|, \end{aligned}$$

which is a contradiction. Thus $\hat{x} = \bar{x}$. This implies that the sequence $\{x^k\}$ converges weakly to $\bar{x} \in \mathrm{SOL}(C, f)$.

Suppose now that $\lim_{j\to\infty}\eta_{k_j} = 0$. By the choice of $\eta_k$, (3.1) implies that
$$\sigma\|x^{k_j} - y^{k_j}\|^2 < \bigl\langle f(x^{k_j}) - f\bigl(x^{k_j} - \gamma^{m_{k_j}-1}(x^{k_j} - y^{k_j})\bigr),\ x^{k_j} - y^{k_j} \bigr\rangle = \bigl\langle f(x^{k_j}) - f\bigl(x^{k_j} - \gamma^{-1}\eta_{k_j}(x^{k_j} - y^{k_j})\bigr),\ x^{k_j} - y^{k_j} \bigr\rangle.$$

Since $\{x^k\}$ and $\{y^k\}$ are bounded and $f$ is continuous, letting $j \to \infty$ we obtain $\lim_{j\to\infty}\|x^{k_j} - y^{k_j}\|^2 = 0$. Applying an argument similar to that of the previous case, we get the desired result. □

Remark 4.1 If the mapping $f$ is pseudomonotone on $C$, i.e.,
$$\langle f(y), x - y \rangle \ge 0 \ \Longrightarrow\ \langle f(x), x - y \rangle \ge 0, \quad \forall x, y \in C,$$
then $\mathrm{SOL}(C, f)$ is a closed and convex set. In fact, if $f$ is pseudomonotone on $C$, then for any $x^* \in \mathrm{SOL}(C, f)$, we have
$$\langle f(x), x - x^* \rangle \ge 0, \quad \forall x \in C.$$
Hence, it suffices to show that the solution set $\mathrm{SOL}(C, f)$ can be characterized as the intersection of a family of halfspaces. That is,
$$\mathrm{SOL}(C, f) = \bigcap_{x \in C}\{y \in C \mid \langle f(x), x - y \rangle \ge 0\},$$
since
$$\mathrm{SOL}(C, f) = \bigcap_{x \in C}\{y \in C \mid \langle f(y), x - y \rangle \ge 0\}.$$
From the pseudomonotonicity of $f$, we obtain
$$\bigcap_{x \in C}\{y \in C \mid \langle f(y), x - y \rangle \ge 0\} \subseteq \bigcap_{x \in C}\{y \in C \mid \langle f(x), y - x \rangle \le 0\}.$$
So, we need only to prove
$$\bigcap_{x \in C}\{y \in C \mid \langle f(y), x - y \rangle \ge 0\} \supseteq \bigcap_{x \in C}\{y \in C \mid \langle f(x), x - y \rangle \ge 0\}.$$
(4.12)
Suppose that conclusion (4.12) does not hold; then there exist $y_0$ and $x_0$ in $C$ such that
$$\langle f(x), x - y_0 \rangle \ge 0 \quad \text{for all } x \in C$$
(4.13)
and
$$\langle f(y_0), x_0 - y_0 \rangle < 0.$$
(4.14)
In (4.13), taking $x = (1 - t)y_0 + tx_0$, $t \in (0, 1)$, we obtain
$$\langle f((1 - t)y_0 + tx_0), x_0 - y_0 \rangle \ge 0.$$
Letting $t \to 0^{+}$, it follows from the continuity of $f$ that
$$\langle f(y_0), x_0 - y_0 \rangle \ge 0,$$

which contradicts (4.14). We obtain the desired conclusion (4.12). Therefore the solution set $\mathrm{SOL}(C, f)$ is closed and convex.

In Theorem 4.1, if $\mathrm{SOL}(C, f)$ is a closed and convex set, then we can furthermore obtain
$$u = \lim_{k\to\infty}P_{\mathrm{SOL}(C,f)}(x^k).$$
In fact, we take
$$u^k = P_{\mathrm{SOL}(C,f)}(x^k).$$
By Lemma 2.1(2), and noting that $\bar{x} \in \mathrm{SOL}(C, f)$, we have
$$\langle \bar{x} - u^k, u^k - x^k \rangle \ge 0.$$
By Lemma 2.2, $\{u^k\}$ converges strongly to some $u \in \mathrm{SOL}(C, f)$. Therefore
$$\langle \bar{x} - u, u - \bar{x} \rangle \ge 0,$$

and hence $u = \bar{x}$.

5 The modified subgradient double projection algorithm

In this section, we present a modified subgradient double projection algorithm which finds a solution of $\mathrm{VI}(C, f)$ that is also a fixed point of a given nonexpansive mapping. Let $S : H \to H$ be a nonexpansive mapping and denote by $\mathrm{Fix}(S)$ its fixed point set, i.e.,
$$\mathrm{Fix}(S) = \{x \in H \mid S(x) = x\}.$$

Let $\{\alpha_k\}_{k=0}^{\infty} \subseteq [c, d]$ for some $c, d \in (0, 1)$.

Algorithm 5.1 Select $x^0 \in C$, $\alpha > 0$, $\beta \ge 0$, $\sigma > 0$, $\mu \in (0, \sigma^{-1})$, $\gamma \in (0, 1)$. Set $k = 0$.

Step 1. For the current iterate $x^k$, compute
$$y^k = P_C(x^k - \mu f(x^k)).$$

If $x^k = y^k$, stop; else go to Step 2.

Step 2. Compute $z^k = (1 - \eta_k)x^k + \eta_k y^k$, where $\eta_k = \gamma^{m_k}$ with $m_k$ being the smallest nonnegative integer $m$ satisfying
$$\langle f(x^k) - f(x^k - \gamma^m(x^k - y^k)), x^k - y^k \rangle \le \sigma\|x^k - y^k\|^2.$$
(5.1)
Step 3. Compute
$$x^{k+1} = \alpha_k x^k + (1 - \alpha_k)S\bigl(P_{C_k \cap H_k}(x^k)\bigr),$$
where
$$C_k = \{x \in H \mid c(x^k) + \langle \xi^k, x - x^k \rangle \le 0\},$$
$H_k = \{v \in H : h_k(v) \le 0\}$ is the halfspace defined by the function
$$h_k(v) = \langle \alpha\eta_k(x^k - y^k) + \beta f(x^k) + \alpha\mu f(z^k), v - x^k \rangle + \alpha\eta_k(1 - \mu\sigma)\|x^k - y^k\|^2,$$
(5.2)

and $\xi^k \in \partial c(x^k)$. Let $k := k + 1$ and return to Step 1.
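As with Algorithm 3.1, the following Python sketch of Algorithm 5.1 is ours and purely illustrative; it reuses the project_two_halfspaces helper from the sketch at the end of Section 3, and the nonexpansive mapping $S$ (for example, the projection onto a closed convex set) and the sequence $\alpha_k \in [c, d] \subset (0, 1)$ are supplied by the caller.

```python
import numpy as np

def modified_sdp_method(x0, f, proj_C, c, subgrad_c, S, alpha_k=lambda k: 0.5,
                        alpha=1.0, beta=0.0, sigma=0.5, mu=1.0, gamma=0.5,
                        tol=1e-8, max_iter=1000):
    """Sketch of Algorithm 5.1; reuses project_two_halfspaces defined earlier.
    S is a user-supplied nonexpansive mapping; alpha_k(k) must lie in [c, d] ⊂ (0, 1)."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        y = proj_C(x - mu * f(x))                       # Step 1
        d = x - y
        if np.linalg.norm(d) <= tol:
            return x
        eta = 1.0                                       # Step 2: Armijo search (5.1)
        while np.dot(f(x) - f(x - eta * d), d) > sigma * np.dot(d, d):
            eta *= gamma
        z = (1 - eta) * x + eta * y
        xi = subgrad_c(x)
        w = alpha * eta * d + beta * f(x) + alpha * mu * f(z)
        t = project_two_halfspaces(                     # t^k = P_{C_k ∩ H_k}(x^k)
            x, xi, np.dot(xi, x) - c(x),
            w, np.dot(w, x) - alpha * eta * (1 - mu * sigma) * np.dot(d, d))
        a_k = alpha_k(k)
        x = a_k * x + (1 - a_k) * S(t)                  # Step 3: relaxed fixed-point step
    return x
```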

6 Convergence of the modified subgradient double projection algorithm

In this section, we establish a weak convergence theorem for Algorithm 5.1. We assume that the following condition holds:
$$\mathrm{Fix}(S) \cap \mathrm{SOL}(C, f) \ne \emptyset.$$
We also recall that in a Hilbert space $H$,
$$\|\lambda x + (1 - \lambda)y\|^2 = \lambda\|x\|^2 + (1 - \lambda)\|y\|^2 - \lambda(1 - \lambda)\|x - y\|^2$$
(6.1)

for all $x, y \in H$ and $\lambda \in [0, 1]$.

Theorem 6.1 Suppose that assumptions (A1)-(A3) hold. Then any sequence $\{x^k\}$ generated by Algorithm 5.1 converges weakly to some $u \in \mathrm{Fix}(S) \cap \mathrm{SOL}(C, f)$.

Proof Denote $t^k = P_{C_k \cap H_k}(x^k)$ for all $k$, and let $u \in \mathrm{Fix}(S) \cap \mathrm{SOL}(C, f)$. By the definition of $x^{k+1}$, we obtain
$$\|x^{k+1} - u\|^2 = \|\alpha_k x^k + (1 - \alpha_k)S(t^k) - u\|^2 = \|\alpha_k(x^k - u) + (1 - \alpha_k)(S(t^k) - u)\|^2$$
(6.2)
$$\begin{aligned} &= \alpha_k\|x^k - u\|^2 + (1 - \alpha_k)\|S(t^k) - u\|^2 - \alpha_k(1 - \alpha_k)\|x^k - S(t^k)\|^2 \\ &\le \alpha_k\|x^k - u\|^2 + (1 - \alpha_k)\|S(t^k) - S(u)\|^2 \\ &\le \alpha_k\|x^k - u\|^2 + (1 - \alpha_k)\|t^k - u\|^2 \\ &\le \alpha_k\|x^k - u\|^2 + (1 - \alpha_k)\bigl(\|x^k - u\|^2 - \|t^k - x^k\|^2\bigr) \\ &= \|x^k - u\|^2 - (1 - \alpha_k)\|t^k - x^k\|^2 \end{aligned}$$
(6.3)
$$= \|x^k - u\|^2 - (1 - \alpha_k)\operatorname{dist}^2(x^k, C_k \cap H_k),$$
(6.4)
where the third equality follows from (6.1), the second inequality follows from the nonexpansiveness of $S$, and the third inequality follows from Lemma 2.1(1). Using (6.4) and arguments similar to those in the proofs of Lemma 4.2 and Theorem 4.1, we obtain that there exist subsequences $\{x^{k_j}\}$ and $\{y^{k_j}\}$ of $\{x^k\}$ and $\{y^k\}$, respectively, such that
$$\lim_{j\to\infty}\|x^{k_j} - y^{k_j}\| = 0.$$
(6.5)
By (6.3), we obtain that $\{\|x^k - u\|\}$ is a convergent sequence, i.e., there exists some $\sigma \ge 0$ such that
$$\lim_{k\to\infty}\|x^k - u\| = \sigma$$
(6.6)
and
$$\lim_{k\to\infty}\|x^k - t^k\| = 0,$$
(6.7)
and the sequences $\{x^k\}$ and $\{t^k\}$ are bounded. Hence we may assume, without loss of generality, that $\{x^{k_j}\}$ converges weakly to some $\bar{x} \in H$. We now show that
$$\bar{x} \in \mathrm{Fix}(S) \cap \mathrm{SOL}(C, f).$$
By the triangle inequality,
$$\|t^k - y^k\| \le \|t^k - x^k\| + \|x^k - y^k\|,$$
(6.8)
so by (6.5) and (6.7), we obtain
$$\lim_{j\to\infty}\|t^{k_j} - y^{k_j}\| = 0.$$
(6.9)
Since $t^{k_j} - (t^{k_j} - y^{k_j}) = P_C[x^{k_j} - \mu f(x^{k_j})]$, it follows from Lemma 2.1(2) that
$$\bigl\langle x - t^{k_j} + (t^{k_j} - y^{k_j}),\ x^{k_j} - \mu f(x^{k_j}) - t^{k_j} + (t^{k_j} - y^{k_j}) \bigr\rangle \le 0 \quad \text{for all } x \in C.$$
Passing to the limit as $j \to \infty$ in the above inequality, taking into account (6.7), (6.9), $C \subseteq C_{k_j}$ and the continuity of $f$, we obtain
$$\langle f(\bar{x}), x - \bar{x} \rangle \ge 0 \quad \text{for all } x \in C,$$
i.e., $\bar{x} \in \mathrm{SOL}(C, f)$. It remains to show that $\bar{x} \in \mathrm{Fix}(S)$. Since $S$ is nonexpansive, we obtain
$$\|S(t^k) - u\| = \|S(t^k) - S(u)\| \le \|t^k - u\| \le \|x^k - u\|.$$
(6.10)
By (6.6),
$$\limsup_{k\to\infty}\|S(t^k) - u\| \le \sigma.$$
(6.11)
Furthermore,
$$\lim_{k\to\infty}\|\alpha_k(x^k - u) + (1 - \alpha_k)(S(t^k) - u)\| = \lim_{k\to\infty}\|\alpha_k x^k + (1 - \alpha_k)S(t^k) - u\| = \lim_{k\to\infty}\|x^{k+1} - u\| = \sigma.$$
(6.12)
So, applying Lemma 2.3, we obtain
$$\lim_{k\to\infty}\|S(t^k) - x^k\| = 0.$$
(6.13)
Note that
$$\|S(x^k) - x^k\| = \|S(x^k) - S(t^k) + S(t^k) - x^k\| \le \|S(x^k) - S(t^k)\| + \|S(t^k) - x^k\| \le \|x^k - t^k\| + \|S(t^k) - x^k\|.$$
It follows from (6.7) and (6.13) that
$$\lim_{k\to\infty}\|S(x^k) - x^k\| = 0.$$
(6.14)
Since $S$ is nonexpansive on $H$, $x^{k_j}$ converges weakly to $\bar{x}$ and
$$\lim_{j\to\infty}\|(I - S)(x^{k_j})\| = \lim_{j\to\infty}\|x^{k_j} - S(x^{k_j})\| = 0,$$
(6.15)

we obtain by Lemma 2.4 that $(I - S)(\bar{x}) = 0$, which means that $\bar{x} \in \mathrm{Fix}(S)$. Now, again by using arguments similar to those in the proof of Theorem 4.1, we obtain that the entire sequence $\{x^k\}$ converges weakly to $\bar{x}$. Therefore the sequence $\{x^k\}$ converges weakly to $\bar{x} \in \mathrm{Fix}(S) \cap \mathrm{SOL}(C, f)$. □

7 Conclusion

In this paper, a new double projection algorithm for the variational inequality problem has been presented. The main advantage of the proposed method is that the second projection at each iteration is onto the intersection of two halfspaces, which is very easy to implement. When the feasible set $C$ of the VI is a general closed convex set, our algorithm is more effective than the double projection method proposed by Solodov and Svaiter. It is natural to ask whether the first projection can likewise be replaced by a projection onto a halfspace or onto the intersection of halfspaces. This would be an interesting topic for further research.

Declarations

Acknowledgements

This work was supported by the Educational Science Foundation of Chongqing, Chongqing, China (grant KJ111309).

Authors’ Affiliations

(1)
Department of Mathematics and Computer Science, Yangtze Normal University

References

  1. Hartman P, Stampacchia G: On some non-linear elliptic differential-functional equations. Acta Math. 1966, 115: 271–310. doi:10.1007/BF02392210
  2. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, Berlin; 2003.
  3. Solodov MV, Svaiter BF: A new projection method for variational inequality problems. SIAM J. Control Optim. 1999, 37: 765–776. doi:10.1137/S0363012997317475
  4. Wang YJ, Xiu NH, Wang CY: Unified framework of extragradient-type methods for pseudomonotone variational inequalities. J. Optim. Theory Appl. 2001, 111: 641–656. doi:10.1023/A:1012606212823
  5. Korpelevich GM: The extragradient method for finding saddle points and other problems. Matecon 1976, 17: 747–756.
  6. Censor Y, Gibali A, Reich S: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148: 318–335. doi:10.1007/s10957-010-9757-3
  7. Censor Y, Gibali A, Reich S: Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61: 1119–1132. doi:10.1080/02331934.2010.539689
  8. Zarantonello EH: Projections on Convex Sets in Hilbert Spaces and Spectral Theory. Academic Press, New York; 1971.
  9. Opial Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73: 591–597. doi:10.1090/S0002-9904-1967-11761-0
  10. Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428. doi:10.1023/A:1025407607560
  11. Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128: 191–201. doi:10.1007/s10957-005-7564-z
  12. Browder FE: Fixed point theorems for noncompact mappings in Hilbert space. Proc. Natl. Acad. Sci. USA 1965, 53: 1272–1276. doi:10.1073/pnas.53.6.1272
  13. He YR: A new double projection algorithm for variational inequalities. J. Comput. Appl. Math. 2006, 185: 166–173. doi:10.1016/j.cam.2005.01.031
  14. Polyak BT: Minimization of unsmooth functionals. U.S.S.R. Comput. Math. Math. Phys. 1969, 9: 14–29.
  15. Yang QZ: On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302: 166–179. doi:10.1016/j.jmaa.2004.07.048
  16. Censor Y, Motova A, Segal A: Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327: 1244–1256. doi:10.1016/j.jmaa.2006.05.010
  17. Qu B, Xiu NH: A new halfspace-relaxation projection method for the split feasibility problem. Linear Algebra Appl. 2008, 428: 1218–1229. doi:10.1016/j.laa.2007.03.002
  18. Qu B, Xiu NH: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21: 1655–1665. doi:10.1088/0266-5611/21/5/009

Copyright

© Zheng; licensee Springer 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.