
A new extragradient method for generalized variational inequality in Euclidean space

Abstract

In this paper, we extend the extragradient projection method proposed in (Wang et al. in J. Optim. Theory Appl. 119:167-183, 2003) for classical variational inequalities to generalized variational inequalities. For this algorithm, we first prove the expansion property of the generated sequence with respect to the starting point, and then show that the existence of a solution to the problem can be verified through the behavior of the generated sequence. The global convergence of the method is also established under mild conditions.

MSC:90C30, 15A06.

1 Introduction

Let F be a multi-valued mapping from R^n into 2^{R^n} with nonempty values, and let X be a nonempty closed convex set in R^n. The problem of generalized variational inequalities (GVI) [1, 2] is to find x^* ∈ X such that there exists ω^* ∈ F(x^*) satisfying

〈ω^*, x − x^*〉 ≥ 0, ∀x ∈ X,
(1.1)

where 〈⋅,⋅〉 stands for the Euclidean inner product of vectors in R^n. The solution set of problem (1.1) is denoted by X^*. Clearly, the GVI reduces to the classical variational inequality (VI) when F is single-valued; the VI has been well studied over the past decades [3, 4].

For the GVI, theories and solution methods have been extensively considered in the literature [2, 5–12], and the existence of solutions is a well-known important topic for the GVI [1]. Generally, there are two main approaches to the solution existence problem of the GVI. The first is an analytic approach, which reformulates the GVI as a well-studied mathematical problem and then invokes an existence theorem for the latter problem [13]. The second is a constructive approach, in which existence is verified through the behavior of a proposed method. The algorithm considered in this paper belongs to the second approach.

First, we give a short summary of the constructive approach to the existence theory for the VI. For this approach, the equivalence between the existence of solutions to the VI and the boundedness of the sequence generated by certain modified extragradient methods was first established by Sun in [14]. Later, Wang et al. [4] established the same theory with a new type of extragradient method. Furthermore, the generated sequence possesses an expansion property with respect to the starting point and converges to a solution point if the solution set of the VI is nonempty. A question is now posed naturally: since the GVI problem is an extension of the VI, can this theory be extended to the GVI? This constitutes the main motivation of the paper.

In this paper, inspired by the work in [4], we propose a new type of extragradient projection method for the GVI. We first establish existence results for the GVI under pseudomonotonicity and continuity assumptions on the underlying mapping F, and then show the global convergence of the proposed method.

The rest of this paper is organized as follows. In Section 2, we give some related concepts and conclusions. In Section 3, we describe the algorithm and establish some of its properties; the global convergence of the generated sequence is also established there.

2 Preliminaries

For a nonempty closed convex set K ⊂ R^n and a vector x ∈ R^n, the orthogonal projection of x onto K, i.e.,

argmin {∥y − x∥ ∣ y ∈ K},

is denoted by P K (x). In what follows, we state some well-known properties of the projection operator which will be used in the sequel.

Lemma 2.1 [15]

Let K be a nonempty, closed and convex subset of R^n. Then, for any x, y ∈ R^n and z ∈ K, the following statements hold:

  (i) 〈P_K(x) − x, z − P_K(x)〉 ≥ 0;

  (ii) ∥P_K(x) − P_K(y)∥² ≤ ∥x − y∥² − ∥P_K(x) − x + y − P_K(y)∥².

Remark 2.1 In fact, (i) in Lemma 2.1 also provides a necessary and sufficient condition for a vector u ∈ K to be the projection of the vector x; i.e., u = P_K(x) if and only if

〈u − x, z − u〉 ≥ 0, ∀z ∈ K.
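To make the projection operator concrete, here is a small numerical sketch (an illustration of ours, not part of the paper): it projects onto a Euclidean ball, where P_K has a closed form, and spot-checks property (i) of Lemma 2.1, the characterization in Remark 2.1, and property (ii) on random points.

```python
import numpy as np

def project_ball(x, center, radius):
    """Orthogonal projection of x onto the closed ball B(center, radius)."""
    d = x - center
    dist = np.linalg.norm(d)
    if dist <= radius:
        return x.copy()
    return center + radius * d / dist

rng = np.random.default_rng(0)
c, R = np.zeros(3), 1.0
x = rng.normal(size=3) * 5.0      # a point (almost surely) outside the ball
u = project_ball(x, c, R)

# Lemma 2.1(i) / Remark 2.1: u = P_K(x) iff <u - x, z - u> >= 0 for all z in K.
for _ in range(1000):
    z = rng.normal(size=3)
    z = c + R * z / max(np.linalg.norm(z), 1.0)   # a point of B(c, R)
    assert np.dot(u - x, z - u) >= -1e-12

# Lemma 2.1(ii): nonexpansiveness with the correction term.
y = rng.normal(size=3) * 5.0
v = project_ball(y, c, R)
lhs = np.linalg.norm(u - v) ** 2
rhs = np.linalg.norm(x - y) ** 2 - np.linalg.norm(u - x + y - v) ** 2
assert lhs <= rhs + 1e-9
print("Lemma 2.1 (i), (ii) hold at all sampled points")
```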

Definition 2.1 Let K be a nonempty subset of R^n. A multi-valued mapping F: K → 2^{R^n} is said to be

  (i) monotone if and only if

    〈u − v, x − y〉 ≥ 0, ∀x, y ∈ K, u ∈ F(x), v ∈ F(y);

  (ii) pseudomonotone if and only if, for any x, y ∈ K, u ∈ F(x), v ∈ F(y),

    〈u, y − x〉 ≥ 0 ⟹ 〈v, y − x〉 ≥ 0.
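As a quick illustration (our example, not from the paper): any positive-valued single-valued map on the real line is pseudomonotone, since 〈u, y − x〉 ≥ 0 with u > 0 forces y ≥ x, and then 〈v, y − x〉 ≥ 0 for any v > 0; it need not be monotone. The sketch below checks both definitions on a grid for F(x) = 1/(1 + x²):

```python
import itertools

# Hypothetical example: F(x) = 1/(1 + x^2) is positive, hence pseudomonotone
# on R, but it is decreasing on x > 0, so it is not monotone.
def F(x):
    return 1.0 / (1.0 + x * x)

pts = [-3.0 + 0.1 * i for i in range(61)]
pairs = list(itertools.product(pts, pts))

# monotone: <F(x) - F(y), x - y> >= 0 for all x, y
monotone = all((F(x) - F(y)) * (x - y) >= 0 for x, y in pairs)

# pseudomonotone: <F(x), y - x> >= 0  implies  <F(y), y - x> >= 0
pseudo = all(F(y) * (y - x) >= 0 for x, y in pairs if F(x) * (y - x) >= 0)

print(monotone, pseudo)  # False True
```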

Now let us recall the definition of a continuous multi-valued mapping F.

Definition 2.2 Assume that F: X → 2^{R^n} is a multi-valued mapping. Then

  (i) F is said to be upper semicontinuous at x ∈ X if for every open set V containing F(x), there is an open set U containing x such that F(y) ⊂ V for all y ∈ X ∩ U;

  (ii) F is said to be lower semicontinuous at x ∈ X if, given any sequence {x^k} converging to x and any y ∈ F(x), there exists a sequence y^k ∈ F(x^k) that converges to y.

F is said to be continuous at x ∈ X if it is both upper semicontinuous and lower semicontinuous at x.

For simplicity of presentation, we list the assumptions needed in the sequel.

Assumption 2.1 Suppose that X is a nonempty closed convex set in R^n, and that the multi-valued mapping F: X → 2^{R^n} is pseudomonotone and continuous on X with nonempty compact convex values.

3 Main results

For x ∈ R^n and ξ ∈ F(x), we first define the projection residue

r(x, ξ) = x − P_X(x − ξ).

It is well known that the projection residue is intimately related to the solution set X^*.

Proposition 3.1 A point x ∈ X, together with ξ ∈ F(x), solves problem (1.1) if and only if r(x, ξ) = 0, i.e.,

x = P_X(x − ξ).
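Proposition 3.1 turns the residue into a computable optimality certificate. A minimal single-valued sketch (the instance X = R^n_+ with F(x) = {x − b} is our illustration, not from the paper):

```python
import numpy as np

P_X = lambda x: np.maximum(x, 0.0)       # projection onto X = R^n_+
residue = lambda x, xi: x - P_X(x - xi)  # r(x, xi) as defined above

b = np.array([1.5, -2.0, 0.3])
f = lambda x: x - b                      # single-valued F(x) = {x - b}

x_star = np.maximum(b, 0.0)              # the solution of this instance
print(residue(x_star, f(x_star)))        # zero vector, so x* solves (1.1)
print(residue(np.ones(3), f(np.ones(3))))  # nonzero at a non-solution
```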

The basic idea of the designed algorithm is as follows. At each step, compute the projection residue r(x^k, ξ^k) at the iterate x^k. If it is the zero vector, stop: x^k is a solution of the GVI. Otherwise, find a trial point y^k by a back-tracking search at x^k along the residue r(x^k, ξ^k), and obtain the new iterate by projecting x^0 onto the intersection of X with two halfspaces associated with y^k and x^k, respectively. Repeat this process until the projection residue is the zero vector.

Algorithm 3.1 Choose σ, γ ∈ (0,1), x^0 ∈ X, and set k = 0.

Step 1: Given the current iterate x^k ∈ X, if ∥r(x^k, ξ)∥ = 0 for some ξ ∈ F(x^k), stop; otherwise, take any ξ^k ∈ F(x^k) and compute

z^k = P_X(x^k − ξ^k).

Then let

y^k = (1 − η_k) x^k + η_k z^k,

where η_k = γ^{m_k}, with m_k being the smallest nonnegative integer m for which there exists ζ^k ∈ F(x^k − γ^m r(x^k, ξ^k)) such that

〈ζ^k, r(x^k, ξ^k)〉 ≥ σ ∥r(x^k, ξ^k)∥².
(3.1)

Step 2: Let x^{k+1} = P_{H_k^1 ∩ H_k^2 ∩ X}(x^0), where

H_k^1 = {x ∈ R^n ∣ 〈x − y^k, ζ^k〉 ≤ 0, ∀ζ^k ∈ F(y^k)},
H_k^2 = {x ∈ R^n ∣ 〈x − x^k, x^0 − x^k〉 ≤ 0}.

Set k = k + 1 and go to Step 1.
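The following sketch instantiates Algorithm 3.1 in a deliberately simple setting: F is single-valued (F(x) = {x − b}, which is monotone and hence pseudomonotone), X is the nonnegative orthant, and the projection onto H_k^1 ∩ H_k^2 ∩ X in Step 2 is computed with Dykstra's algorithm, which converges to the exact projection onto an intersection of closed convex sets. The concrete instance and all names here are our assumptions for illustration, not part of the paper.

```python
import numpy as np

def proj_orthant(x):
    """Projection onto X = R^n_+."""
    return np.maximum(x, 0.0)

def proj_halfspace(x, a, beta):
    """Projection onto the halfspace {z : <a, z> <= beta} (a = 0 gives R^n)."""
    v = np.dot(a, x) - beta
    return x if v <= 0 else x - v * a / np.dot(a, a)

def dykstra(x0, projs, iters=1000):
    """Dykstra's algorithm: projection of x0 onto the intersection of the sets."""
    x = x0.copy()
    incs = [np.zeros_like(x0) for _ in projs]
    for _ in range(iters):
        for i, P in enumerate(projs):
            y = P(x + incs[i])
            incs[i] = x + incs[i] - y
            x = y
    return x

def algorithm_3_1(f, x0, sigma=0.5, gamma=0.5, tol=1e-6, max_iter=80):
    """Single-valued sketch of Algorithm 3.1 on GVI(F, X) with X = R^n_+."""
    x = x0.copy()
    for _ in range(max_iter):
        xi = f(x)
        r = x - proj_orthant(x - xi)                # residue r(x^k, xi^k)
        if np.linalg.norm(r) <= tol:
            break
        m = 0                                       # back-tracking rule (3.1)
        while np.dot(f(x - gamma**m * r), r) < sigma * np.dot(r, r):
            m += 1
        y = x - gamma**m * r                        # y^k = x^k - eta_k r(x^k, xi^k)
        zeta = f(y)
        # H_k^1 = {z : <z - y^k, zeta> <= 0}, H_k^2 = {z : <z - x^k, x^0 - x^k> <= 0}
        h1 = lambda z, a=zeta, c=np.dot(zeta, y): proj_halfspace(z, a, c)
        h2 = lambda z, a=x0 - x, c=np.dot(x0 - x, x): proj_halfspace(z, a, c)
        x = dykstra(x0, [proj_orthant, h1, h2])     # x^{k+1} = P_{H1 ∩ H2 ∩ X}(x^0)
    return x

b = np.array([1.0, 2.0, 0.5])                       # b > 0, so x* = b is interior
x = algorithm_3_1(lambda z: z - b, np.ones(3))
print(np.round(x, 4))                               # close to x* = b = [1, 2, 0.5]
```

On this instance the residue at x^k equals x^k − b, so the stopping rule is an exact optimality certificate in the sense of Proposition 3.1; the inexactness of Dykstra's inner loop is the only numerical caveat of the sketch.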

Now, we first discuss the feasibility of the stepsize rule (3.1).

Lemma 3.1 If x^k is not a solution of problem (1.1), then there exists a smallest nonnegative integer m satisfying (3.1).

Proof By the definition of r(x^k, ξ^k) and Lemma 2.1, we know that

〈P_X(x^k − ξ^k) − (x^k − ξ^k), x^k − P_X(x^k − ξ^k)〉 ≥ 0,

which implies

〈ξ^k, r(x^k, ξ^k)〉 ≥ ∥r(x^k, ξ^k)∥² > 0.
(3.2)

Since γ ∈ (0,1), we get

lim_{m→∞} (x^k − γ^m r(x^k, ξ^k)) = x^k.

By this and the fact that F is lower semicontinuous, there exist ζ_m ∈ F(x^k − γ^m r(x^k, ξ^k)) such that

lim_{m→∞} ζ_m = ξ^k.

So,

lim_{m→∞} 〈ζ_m, r(x^k, ξ^k)〉 = 〈ξ^k, r(x^k, ξ^k)〉 ≥ ∥r(x^k, ξ^k)∥² > 0,

which implies the conclusion. □

The following lemma shows that the halfspace H_k^1 in Algorithm 3.1 strictly separates x^k from the solution set X^* whenever X^* is nonempty.

Lemma 3.2 If X^* ≠ ∅, then the halfspace H_k^1 in Algorithm 3.1 separates the point x^k from the set X^*. Moreover,

X^* ⊆ H_k^1 ∩ X, ∀k ≥ 0.

Proof By the definition of r(x^k, ξ^k) and Algorithm 3.1, we know

y^k = (1 − η_k) x^k + η_k z^k = x^k − η_k r(x^k, ξ^k),

which can be written as

η_k r(x^k, ξ^k) = x^k − y^k.

Then, by this and (3.1), we get

〈ζ^k, x^k − y^k〉 > 0,
(3.3)

where ζ^k is the vector in F(y^k) determined by (3.1). So, by the definition of H_k^1 and (3.3), we get x^k ∉ H_k^1.

On the other hand, for any x^* ∈ X^* and x ∈ X, we have

〈ω^*, x − x^*〉 ≥ 0 for some ω^* ∈ F(x^*).

Since F is pseudomonotone on X, we get

〈ω, x − x^*〉 ≥ 0, ∀ω ∈ F(x).
(3.4)

Letting x = y^k in (3.4), for any ζ^k ∈ F(y^k) we have

〈ζ^k, y^k − x^*〉 ≥ 0,

which implies x^* ∈ H_k^1. Moreover, it follows immediately that X^* ⊆ H_k^1 ∩ X, ∀k ≥ 0. □

The following lemma says that if the solution set is nonempty, then X^* ⊆ H_k^1 ∩ H_k^2 ∩ X, and thus H_k^1 ∩ H_k^2 ∩ X is a nonempty set.

Lemma 3.3 Under Assumption 2.1, if the solution set X^* ≠ ∅, then X^* ⊆ H_k^1 ∩ H_k^2 ∩ X for all k ≥ 0.

Proof From the analysis above, it suffices to prove that X^* ⊆ H_k^2 for all k ≥ 0. The proof is by induction. Obviously, if k = 0,

X^* ⊆ H_0^2 = R^n.

Now, suppose that

X^* ⊆ H_k^2

holds for k = l ≥ 0. Then

X^* ⊆ H_l^1 ∩ H_l^2 ∩ X.

For any x^* ∈ X^*, by Lemma 2.1 and the fact that

x^{l+1} = P_{H_l^1 ∩ H_l^2 ∩ X}(x^0),

it holds that

〈x^* − x^{l+1}, x^0 − x^{l+1}〉 ≤ 0.

Thus X^* ⊆ H_{l+1}^2. This shows that X^* ⊆ H_k^2 for all k ≥ 0, and the desired result follows. □

For the case that the solution set is empty, the following lemma shows that H_k^1 ∩ H_k^2 ∩ X is still nonempty, which implies the feasibility of Algorithm 3.1.

Lemma 3.4 Under Assumption 2.1, if X^* = ∅, then H_k^1 ∩ H_k^2 ∩ X ≠ ∅ for all k ≥ 0.

Before proving Lemma 3.4, we present a fundamental existence result for the GVI problem defined over a compact convex region [16]. For the sake of completeness, we include its proof.

Lemma 3.5 Let X ⊆ R^n be a nonempty bounded closed convex set, and let the multi-valued mapping F: X → 2^{R^n} be lower semicontinuous with nonempty closed convex values. Then the solution set X^* is nonempty.

Proof Since the multi-valued mapping F is lower semicontinuous with nonempty closed convex values, by Michael's selection theorem (see, for instance, Theorem 24.1 in [17]), it admits a continuous selection; that is, there exists a continuous mapping G: X → R^n such that G(x) ∈ F(x) for every x ∈ X. Since X is a nonempty bounded closed convex set, the problem VI(X, G), which consists of finding x ∈ X such that

〈G(x), y − x〉 ≥ 0, ∀y ∈ X,

has a solution (see Lemma 3.1 in [18]); i.e., the solution set X′ of VI(X, G) is nonempty. It follows from X′ ⊆ X^* that X^* is nonempty. □

Proof of Lemma 3.4 Suppose, on the contrary, that there exists k_0 ≥ 1 such that H_{k_0}^1 ∩ H_{k_0}^2 ∩ X = ∅. Then there exists a positive number M such that

{x^k ∣ 0 ≤ k ≤ k_0} ⊆ B(x^0, M)

and

{x^k − ξ^k ∣ 0 ≤ k ≤ k_0, ξ^k ∈ F(x^k)} ⊆ B(x^0, M),

where

B(x^0, M) = {x ∈ R^n ∣ ∥x − x^0∥ ≤ M}.

Let Y = X ∩ B(x^0, 2M) and consider the problem GVI(F, Y). From Lemma 3.5, we know that the solution set Y^* of GVI(F, Y) is nonempty. To avoid confusion with the sequences {H_k^1}, {H_k^2} and {x^k}, we denote the three corresponding sequences by {H̄_k^1}, {H̄_k^2} and {x̄^k}, respectively, when Algorithm 3.1 is applied to GVI(F, Y) with the starting point x^0. We claim that

  (i) the sequence {x̄^k} has at least k_0 + 1 elements: x̄^0, x̄^1, …, x̄^{k_0};

  (ii) x̄^k = x^k, H̄_k^1 = H_k^1, H̄_k^2 = H_k^2 for 0 ≤ k ≤ k_0;

  (iii) x^{k_0} is not a solution of GVI(F, Y).

Since Y^* ≠ ∅, using Lemma 3.3 we know that H̄_{k_0}^1 ∩ H̄_{k_0}^2 ∩ X ≠ ∅, so H_{k_0}^1 ∩ H_{k_0}^2 ∩ X ≠ ∅, which contradicts the supposition that H_{k_0}^1 ∩ H_{k_0}^2 ∩ X = ∅. □

From Lemma 3.4, if the solution set of problem (1.1) is empty, then Algorithm 3.1 generates an infinite sequence. More generally, we have the following conclusion.

Theorem 3.1 Suppose that Assumption 2.1 holds, and that Algorithm 3.1 generates an infinite sequence {x^k}. If the solution set X^* is nonempty, then the sequence {x^k} is bounded and all its cluster points belong to the solution set. Otherwise, i.e., if the solution set X^* is empty,

lim_{k→+∞} ∥x^k − x^0∥ = +∞.

Proof First, suppose that the solution set is nonempty. Since

x^{k+1} = P_{H_k^1 ∩ H_k^2 ∩ X}(x^0),

by Lemma 3.3 and the definition of the projection, it holds that

∥x^{k+1} − x^0∥ ≤ ∥x^* − x^0∥

for any x^* ∈ X^*. So {x^k} is a bounded sequence.

Since x^{k+1} ∈ H_k^2, it is obvious that

P_{H_k^2}(x^{k+1}) = x^{k+1}

from the definition of the projection operator. As for x^k, since

〈z − x^k, x^0 − x^k〉 ≤ 0, ∀z ∈ H_k^2,

it holds that x^k = P_{H_k^2}(x^0) by Remark 2.1. Thus, using Lemma 2.1, one has

∥P_{H_k^2}(x^{k+1}) − P_{H_k^2}(x^0)∥² ≤ ∥x^{k+1} − x^0∥² − ∥P_{H_k^2}(x^{k+1}) − x^{k+1} + x^0 − P_{H_k^2}(x^0)∥²;

i.e.,

∥x^{k+1} − x^k∥² ≤ ∥x^{k+1} − x^0∥² − ∥x^k − x^0∥²,

which can be written as

∥x^{k+1} − x^k∥² + ∥x^k − x^0∥² ≤ ∥x^{k+1} − x^0∥².

Thus the sequence {∥x^k − x^0∥} is nondecreasing and bounded, and hence convergent, which implies that

lim_{k→∞} ∥x^{k+1} − x^k∥² = 0.
(3.5)

On the other hand, since x^{k+1} ∈ H_k^1, we get

〈x^{k+1} − y^k, ζ^k〉 ≤ 0.
(3.6)

Since

y^k = (1 − η_k) x^k + η_k z^k = x^k − η_k r(x^k, ξ^k),

by (3.6) we have

〈x^{k+1} − y^k, ζ^k〉 = 〈x^{k+1} − x^k + η_k r(x^k, ξ^k), ζ^k〉 ≤ 0,

which implies

η_k 〈r(x^k, ξ^k), ζ^k〉 ≤ 〈x^k − x^{k+1}, ζ^k〉.

Using the Cauchy-Schwarz inequality and (3.1), we obtain

η_k σ ∥r(x^k, ξ^k)∥² ≤ η_k 〈r(x^k, ξ^k), ζ^k〉 ≤ ∥x^{k+1} − x^k∥ ∥ζ^k∥.
(3.7)

Since F is continuous with compact values, Proposition 3.11 in [19] implies that {F(y^k) : k ∈ N} is a bounded set, and so the sequence {ζ^k : ζ^k ∈ F(y^k)} is bounded. By (3.5) and (3.7), it follows that

lim_{k→∞} η_k ∥r(x^k, ξ^k)∥² = 0.

For any convergent subsequence {x^{k_j}} of {x^k}, denote its limit by x̄, i.e.,

lim_{j→∞} x^{k_j} = x̄.

Without loss of generality, suppose that {η_{k_j}} has a limit. Then either

lim_{j→∞} η_{k_j} = 0

or

lim_{j→∞} ∥r(x^{k_j}, ξ^{k_j})∥² = 0.

In the first case, by the choice of η_{k_j} in Algorithm 3.1 (the stepsize η_{k_j}/γ failed the test), we know that

〈ζ, r(x^{k_j}, ξ^{k_j})〉 < σ ∥r(x^{k_j}, ξ^{k_j})∥²
(3.8)

for all ζ ∈ F(x^{k_j} − (η_{k_j}/γ) r(x^{k_j}, ξ^{k_j})).

Since

lim_{j→∞} (x^{k_j} − (η_{k_j}/γ) r(x^{k_j}, ξ^{k_j})) = x̄

and F is lower semicontinuous on X, there exist ζ^{k_j} ∈ F(x^{k_j} − (η_{k_j}/γ) r(x^{k_j}, ξ^{k_j})) such that

lim_{j→∞} ζ^{k_j} = ξ,

where ξ is a vector in F(x̄). So, by (3.8) we obtain

lim_{j→∞} 〈ζ^{k_j}, r(x^{k_j}, ξ^{k_j})〉 = 〈ξ, r(x̄, ξ)〉 ≤ lim_{j→∞} σ ∥r(x^{k_j}, ξ^{k_j})∥² = σ ∥r(x̄, ξ)∥².
(3.9)

By arguments similar to those for (3.2), we have

〈ξ, r(x̄, ξ)〉 ≥ ∥r(x̄, ξ)∥².

Combining this with (3.9) and σ ∈ (0,1), we conclude that r(x̄, ξ) = 0, and thus x̄ is a solution of problem (1.1).

In the second case,

lim_{j→∞} ∥r(x^{k_j}, ξ^{k_j})∥² = 0,

and it is easy to see that the limit point x̄ of {x^{k_j}} is a solution of problem (1.1).

Now consider the case where the solution set is empty. Since the inequality

∥x^{k+1} − x^k∥² ≤ ∥x^{k+1} − x^0∥² − ∥x^k − x^0∥²

also holds in this case, the sequence {∥x^k − x^0∥} is still nondecreasing. Next, we claim that

lim_{k→+∞} ∥x^k − x^0∥ = +∞.

Otherwise, {∥x^k − x^0∥} would be a bounded sequence, and a discussion similar to the one above would lead to the conclusion that every cluster point of {x^k} is a solution of problem (1.1), which contradicts the emptiness of the solution set. □

Theorem 3.2 Under the assumptions of Theorem 3.1, if the solution set X^* is nonempty, then the sequence {x^k} converges globally to the solution x^* = P_{X^*}(x^0); otherwise, lim_{k→+∞} ∥x^k − x^0∥ = +∞. That is, the solution set of problem (1.1) is empty if and only if the generated sequence diverges to infinity.

Proof First, suppose that the solution set is nonempty. From Theorem 3.1, we know that the sequence {x^k} is bounded and that every cluster point x^* of {x^k} is a solution of problem (1.1). Suppose that the subsequence {x^{k_j}} converges to x^*, i.e.,

lim_{j→∞} x^{k_j} = x^*.

Let x̄ = P_{X^*}(x^0). Since x̄ ∈ X^*, by Lemma 3.3 we have

x̄ ∈ H_{k_j−1}^1 ∩ H_{k_j−1}^2 ∩ X

for all j. So, by the construction of the iterates in Algorithm 3.1, we have

∥x^{k_j} − x^0∥ ≤ ∥x̄ − x^0∥.

Thus,

∥x^{k_j} − x̄∥² = ∥x^{k_j} − x^0 + x^0 − x̄∥²
= ∥x^{k_j} − x^0∥² + ∥x^0 − x̄∥² + 2〈x^{k_j} − x^0, x^0 − x̄〉
≤ ∥x̄ − x^0∥² + ∥x^0 − x̄∥² + 2〈x^{k_j} − x^0, x^0 − x̄〉.

Letting j → ∞, we have

∥x^* − x̄∥² ≤ 2∥x̄ − x^0∥² + 2〈x^* − x^0, x^0 − x̄〉 = 2〈x^* − x̄, x^0 − x̄〉 ≤ 0,

where the last inequality is due to Lemma 2.1 and the facts that x̄ = P_{X^*}(x^0) and x^* ∈ X^*. So

x^* = x̄ = P_{X^*}(x^0).

Thus the sequence {x^k} has the unique cluster point P_{X^*}(x^0), which establishes the global convergence of {x^k}.

For the case that the solution set is empty, the conclusion follows directly from Theorem 3.1. □

4 Discussion

The extragradient method proposed in this paper for the GVI has good theoretical properties: the generated sequence not only has an expansion property with respect to the starting point, but the existence of a solution to the problem can also be verified through the behavior of the generated sequence. However, the algorithm is not easy to realize in practice, as the termination criterion is difficult to execute. This is an interesting topic for further research.

References

  1. Facchinei, F, Pang, JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, New York (2003)

  2. Xia, FQ, Huang, NJ: A projection-proximal point algorithm for solving generalized variational inequalities. J. Optim. Theory Appl. 150, 98-117 (2011). doi:10.1007/s10957-011-9825-3

  3. Wang, Y, Xiu, N, Wang, C: Unified framework of extragradient-type methods for pseudomonotone variational inequalities. J. Optim. Theory Appl. 111, 641-656 (2001). doi:10.1023/A:1012606212823

  4. Wang, Y, Xiu, N, Zhang, J: Modified extragradient method for variational inequalities and verification of solution existence. J. Optim. Theory Appl. 119, 167-183 (2003)

  5. Auslender, A, Teboulle, M: Lagrangian duality and related multiplier methods for variational inequality problems. SIAM J. Optim. 10, 1097-1115 (2000). doi:10.1137/S1052623499352656

  6. Censor, Y, Gibali, A, Reich, S: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148, 318-335 (2011). doi:10.1007/s10957-010-9757-3

  7. Fang, SC, Peterson, EL: Generalized variational inequalities. J. Optim. Theory Appl. 38, 363-383 (1982). doi:10.1007/BF00935344

  8. Fang, C, He, Y: A double projection algorithm for multi-valued variational inequalities and a unified framework of the method. Appl. Math. Comput. 217, 9543-9551 (2011)

  9. He, Y: Stable pseudomonotone variational inequality in reflexive Banach spaces. J. Math. Anal. Appl. 330, 352-363 (2007). doi:10.1016/j.jmaa.2006.07.063

  10. Li, F, He, Y: An algorithm for generalized variational inequality with pseudomonotone mapping. J. Comput. Appl. Math. 228, 212-218 (2009). doi:10.1016/j.cam.2008.09.014

  11. Saigal, R: Extension of the generalized complementarity problem. Math. Oper. Res. 1, 260-266 (1976). doi:10.1287/moor.1.3.260

  12. Salmon, G, Strodiot, JJ, Nguyen, VH: A bundle method for solving variational inequalities. SIAM J. Optim. 14, 869-893 (2003)

  13. He, Y: The Tikhonov regularization method for set-valued variational inequalities. Abstr. Appl. Anal. 2012, Article ID 172061 (2012). doi:10.1155/2012/172061

  14. Sun, D: A class of iterative methods for solving nonlinear projection equations. J. Optim. Theory Appl. 91, 123-140 (1996). doi:10.1007/BF02192286

  15. Polyak, BT: Introduction to Optimization. Optimization Software, New York (1987)

  16. Fang, C, He, Y: An extragradient method for generalized variational inequality. J. Optim. Theory Appl. (2012, submitted)

  17. Deimling, K: Nonlinear Functional Analysis. Springer, Berlin (1985)

  18. Hartman, P, Stampacchia, G: On some nonlinear elliptic differential functional equations. Acta Math. 115, 271-310 (1966). doi:10.1007/BF02392210

  19. Aubin, JP, Ekeland, I: Applied Nonlinear Analysis. Wiley, New York (1984)


Acknowledgements

The author would like to thank two anonymous reviewers for their insightful comments and constructive suggestions, and Professor Wang Yiju for his careful reading, which helped improve the presentation of the paper. This work was supported by the Natural Science Foundation of China (Grant Nos. 11171180, 11101303).

Author information


Correspondence to Haibin Chen.

Additional information

Competing interests

The author declares that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Chen, H. A new extragradient method for generalized variational inequality in Euclidean space. Fixed Point Theory Appl 2013, 139 (2013). https://doi.org/10.1186/1687-1812-2013-139
