A new extragradient-like method for solving variational inequality problems

Abstract

In this paper, we present a new extragradient-like method for the classical variational inequality problem based on our constructed novel descent direction. Furthermore, we show the global convergence and R-linear convergence rate of the new method under certain conditions. Numerical results also confirm the good theoretical properties of our approach.

MSC:90C33, 65K10.

1 Introduction

In this paper, we consider the classical variational inequality problem, which is to find a vector x ∗ ∈K such that

〈 F ( x ∗ ) , x − x ∗ 〉 ≥0,∀x∈K,
(1.1)

where F is a continuous mapping from R n into R n , K is a nonempty closed convex subset of R n , and 〈⋅,⋅〉 is the usual Euclidean inner product in R n . We denote problem (1.1) by VI(F,K) and its solution set by K ∗ . VI(F,K) was first introduced by Hartman and Stampacchia (see [1]) in 1966, primarily with the goal of computing stationary points for nonlinear programs. It provides a broad unifying setting for the study of optimization and equilibrium problems and serves as the main computational framework for the practical solution of a host of continuum problems in the mathematical sciences. It has a wide range of important applications in economics, engineering, operations research, etc.; we will not dwell further on these here. The problem we are interested in is how to find a vector x ∗ ∈ K ∗ .

Recently, many methods have been proposed in the literature to tackle this problem (see [2]), among which we regard the projection method as one of the most effective. The projection method for solving problem (1.1) originated from the Goldstein (see [3]) and Levitin-Polyak (see [4]) gradient projection method for box-constrained minimization and was studied by many researchers, such as Auslender (see [5]), Bakusinskii-Polyak (see [6]), Bruck (see [7]), Noor-Wang-Xiu (see [8]) and Xiu-Wang-Zhang (see [9]). Its original iterative scheme is: given x 0 ∈K, compute

x k + 1 = P K [ x k − α F ( x k ) ] ,k=0,1,2,…,
(1.2)

where P K [⋅] is the orthogonal projection from R n onto K, and α>0 is a fixed number. Korpelevich (see [10]) combined two neighboring iterations in (1.2) and then got a new projection method:

{ x ¯ k = P K [ x k − α F ( x k ) ] , x k + 1 = P K [ x k − α F ( x ¯ k ) ] .
(1.3)

This is the extragradient method, which has an R-linear convergence rate. Under certain conditions, the vector −F( x ¯ k ) in (1.3) is a descent direction of f(x)= ( 1 / 2 ) ∥ x − x ∗ ∥ 2 (x∈K, x ∗ ∈ K ∗ ) at the point x k , and this is the key to the convergence of the algorithm. There have been several studies on the descent direction; to our knowledge, only five such descent directions have been found so far (see [9, 11–28]). In this paper, we construct a novel descent direction and present a new extragradient-like method based on this direction. Furthermore, we prove that the new method has the same R-linear convergence rate as the extragradient method. Some numerical experiments are given to support our analysis.
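For readers who prefer code, a minimal sketch of the extragradient iteration (1.3) is given below. It is only an illustration and not part of the original development: the mapping F, the projection proj_K onto K and the fixed step size alpha are placeholders to be supplied by the user, and all names are ours.

```python
import numpy as np

def extragradient(F, proj_K, x0, alpha, tol=1e-6, max_iter=10_000):
    """Minimal sketch of the extragradient iteration (1.3)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_bar = proj_K(x - alpha * F(x))       # predictor step
        x_new = proj_K(x - alpha * F(x_bar))   # corrector step
        if np.linalg.norm(x_new - x) <= tol:   # simple stopping test
            return x_new
        x = x_new
    return x
```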

The rest of this article is organized as follows. In Section 2, some preliminaries are stated and an extragradient-like method is proposed. In Section 3, the global convergence and the local convergence rate of the algorithm are proved. The results of some preliminary experiments on a few test examples are reported in Section 4, and the conclusions are given in Section 5.

2 Preliminaries and algorithm

We first recall some necessary results from convex analysis and from related papers.

Definition 2.1 Let K be a nonempty closed convex subset of R n . A point x∈K is the projection of y∈ R n onto K if

x=argmin { ∥ z − y ∥ | z ∈ K } .

We then write x= P K [y].

Definition 2.2 Let C⊂ R n be a nonempty set and let F(x) be a mapping from C into R n . F(x) is pseudomonotone on C if, for all x,y∈C with x≠y, the following implication holds:

( x − y ) T F(y)≥0⇒ ( x − y ) T F(x)≥0.

Lemma 2.1 The variational inequality (1.1) has a solution x ∗ ∈ K ∗ if and only if x ∗ satisfies the relation

x ∗ = P K [ x ∗ − α F ( x ∗ ) ] ,

where α>0 is a constant and P K [⋅] is an orthogonal projection from R n onto K.

This alternative equivalent formulation has played an important role in studying the existence of solutions and in suggesting projection-type algorithms for solving variational inequalities. To prepare for the convergence and convergence-rate analysis of our algorithm, we present the following lemmas.
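For intuition, the fixed-point characterization of Lemma 2.1 translates directly into the numerical stopping test ∥e(x,α)∥≈0 used later. A minimal sketch is given below, assuming (only for illustration) that K is a box, so that the projection is a componentwise clipping; the paper itself treats a general closed convex K.

```python
import numpy as np

def proj_box(y, lo, hi):
    """Orthogonal projection onto the box prod_i [lo_i, hi_i] (componentwise clipping)."""
    return np.clip(y, lo, hi)

def residual(F, proj_K, x, alpha=1.0):
    """Natural residual e(x, alpha) = x - P_K[x - alpha F(x)]; zero exactly at solutions."""
    return x - proj_K(x - alpha * F(x))

# Toy check (assumed example): F(x) = x - 1 on K = [0, 5]^3 has the solution x* = (1, 1, 1).
F = lambda x: x - 1.0
lo, hi = np.zeros(3), 5.0 * np.ones(3)
print(np.linalg.norm(residual(F, lambda y: proj_box(y, lo, hi), np.ones(3))))  # prints 0.0
```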

Lemma 2.2 (Property 3.1.1 in [29])

Let P K [⋅] be the projection from R n onto K, then for y,z∈ R n ,

∥ P K [ y ] − P K [ z ] ∥ 2 ≤ 〈 y − z , P K [ y ] − P K [ z ] 〉 .

In particular, it follows from the Cauchy-Schwarz inequality that

∥ P K [ y ] − P K [ z ] ∥ ≤∥y−z∥.

Lemma 2.3 (Property 3.1.3 in [29])

Let P K [⋅] be the projection from R n onto K, take x∈K and d∈ R n arbitrarily, then

  1. (1)

    ∥ x − P K [ x − α d ] ∥ / α is nonincreasing in the variable α>0.

  2. (2)

    ∥x− P K [x−αd]∥ is nondecreasing in the variable α>0.

Lemma 2.4 For any α>0 and x∈ R n ,

min{1,α} ∥ e ( x , 1 ) ∥ ≤ ∥ e ( x , α ) ∥ ≤max{1,α} ∥ e ( x , 1 ) ∥ ,

where e(x,α)=x− P K [x−αF(x)].

Proof If α≤1, from Lemma 2.3 we know that ∥e(x,α)∥ is nondecreasing and ∥ e ( x , α ) ∥ / α is nonincreasing in the variable α>0. Then we have

∥ e ( x , α ) ∥ ≤ ∥ e ( x , 1 ) ∥ =max{1,α} ∥ e ( x , 1 ) ∥

and

∥ e ( x , α ) ∥ / min { 1 , α } = ∥ e ( x , α ) ∥ / α ≥ ∥ e ( x , 1 ) ∥ / 1 = ∥ e ( x , 1 ) ∥ .

Combining the above two inequalities gives the result for α≤1.

The same argument yields the result for α>1. Combining the two cases completes the proof. □
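As a quick numerical sanity check of Lemma 2.4 (not part of the proof), one can sample points of a box K and random step sizes and verify both bounds; the box and the mapping below are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
lo, hi = -np.ones(4), np.ones(4)                      # assumed box K = [-1, 1]^4
F = lambda x: np.array([x[0] ** 3, x[1] + 1.0, 2.0 * x[2], x[3] - 0.5])  # assumed mapping

def e(x, alpha):
    """Residual e(x, alpha) = x - P_K[x - alpha F(x)] for the box K above."""
    return x - np.clip(x - alpha * F(x), lo, hi)

for _ in range(1000):
    x = rng.uniform(lo, hi)                           # sample x in K
    alpha = rng.uniform(0.01, 5.0)
    n1, na = np.linalg.norm(e(x, 1.0)), np.linalg.norm(e(x, alpha))
    assert min(1.0, alpha) * n1 <= na + 1e-12 <= max(1.0, alpha) * n1 + 2e-12
print("Lemma 2.4 bounds hold at all sampled points")
```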

Now, we begin to establish the following iterative method for solving problem (1.1).

Algorithm 2.1 (A new extragradient-like method)

Step 0 (Initialization) Choose initial values x 0 ∈ R n , l∈(0,1), μ∈(0,1) and θ∈(0,1], and a stopping tolerance ϵ>0. Set k:=0.

Step 1 (The predictor step) Compute the predictor

x ¯ k = P K [ x k − α k F ( x k ) ] ,
(2.1)

where α k = l^{ m k } and m k is the smallest nonnegative integer m such that

∥ F ( x k ) − F ( x ¯ k ) ∥ ≤ μ ∥ x k − x ¯ k ∥ / α k .
(2.2)

Step 2 (The corrector step) Compute the corrector

x k + 1 = P K [ x k − β k d k ] ,
(2.3)

where

d k = α k [ ( 1 − θ ) F ( x k ) + θ F ( x ¯ k ) ]
(2.4)

and

β k = θ ( 1 − μ ) ∥ x k − x ¯ k ∥ 2 / ∥ d k ∥ 2 .
(2.5)

Step 3 If ∥ x k + 1 − x k ∥≤ϵ, then stop; otherwise, set k:=k+1 and go to Step 1.

Remark 2.1 To our knowledge, the search direction d k in Algorithm 2.1 is new; moreover, − d k is a descent direction of f(x)= ( 1 / 2 ) ∥ x − x ∗ ∥ 2 (x∈K, x ∗ ∈ K ∗ ) at the point x k under certain conditions. We will prove this in the next section.

Remark 2.2 If F( x k )=0, then 〈F( x k ),x− x k 〉=0≥0, ∀x∈K, namely x k ∈ K ∗ , so the algorithm stops. Therefore F( x k )≠0 while the algorithm is running, and by (2.2) and Lemma 2.2 we have

∥ ( 1 − θ ) F ( x k ) + θ F ( x ¯ k ) ∥ = ∥ F ( x k ) − θ ( F ( x k ) − F ( x ¯ k ) ) ∥ ≥ ∥ F ( x k ) ∥ − θ ∥ F ( x k ) − F ( x ¯ k ) ∥ ≥ ∥ F ( x k ) ∥ − ( θ μ / α k ) ∥ x k − x ¯ k ∥ ≥ ∥ F ( x k ) ∥ − θ μ ∥ F ( x k ) ∥ = ( 1 − θ μ ) ∥ F ( x k ) ∥ > 0 .

Thus by (2.4) we get ∥ d k ∥>0 in the algorithm; that is, the division by ∥ d k ∥ in Step 2 of Algorithm 2.1 is well defined.
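To make the method concrete, a minimal sketch of Algorithm 2.1 is given below. It is our own illustration, not the authors' MATLAB code: the projection proj_K is user-supplied, the default parameters are the values used in Section 4, and the direction d k and step size β k follow (2.4)-(2.5) as reconstructed above.

```python
import numpy as np

def algorithm_2_1(F, proj_K, x0, l=0.65, mu=0.95, theta=0.1,
                  eps=1e-6, max_iter=10_000):
    """Sketch of Algorithm 2.1: predictor (2.1)-(2.2), corrector (2.3)-(2.5)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) == 0.0:      # Remark 2.2: x already solves (1.1)
            return x
        # Step 1 (predictor): backtracking search for alpha_k = l**m_k satisfying (2.2)
        alpha = 1.0
        x_bar = proj_K(x - alpha * Fx)
        while np.linalg.norm(Fx - F(x_bar)) > mu * np.linalg.norm(x - x_bar) / alpha:
            alpha *= l
            x_bar = proj_K(x - alpha * Fx)
        # Step 2 (corrector): direction (2.4) and step size (2.5)
        d = alpha * ((1.0 - theta) * Fx + theta * F(x_bar))
        beta = theta * (1.0 - mu) * np.linalg.norm(x - x_bar) ** 2 / np.linalg.norm(d) ** 2
        x_new = proj_K(x - beta * d)
        # Step 3: stopping criterion
        if np.linalg.norm(x_new - x) <= eps:
            return x_new
        x = x_new
    return x
```

For the box-constrained Examples 4.5 and 4.6 below, proj_K can be taken as a componentwise clipping, e.g. proj_K = lambda y: np.clip(y, lo, hi).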

3 Convergence analysis

In this section, we discuss the convergence and convergence rate of Algorithm 2.1. Firstly, we prove an important lemma.

Lemma 3.1 Assume that F(x) is pseudomonotone on K and K ∗ is nonempty. If x k ∈K is not a solution to problem (1.1), then for any x ∗ ∈ K ∗ ,

〈 d k , x k − x ∗ 〉 ≥θ(1−μ) ∥ x k − x ¯ k ∥ 2 .
(3.1)

Proof Take x ∗ ∈ K ∗ arbitrarily. As x ∗ ∈ K ∗ , we have

〈 F ( x ∗ ) , x − x ∗ 〉 ≥0,∀x∈K.

Specially, for x k ∈K and x ¯ k ∈K, we can get

〈 F ( x ∗ ) , x k − x ∗ 〉 ≥0

and

〈 F ( x ∗ ) , x ¯ k − x ∗ 〉 ≥0.

From the pseudomonotonicity of F(x), we have

〈 F ( x k ) , x k − x ∗ 〉 ≥0
(3.2)

and

〈 F ( x ¯ k ) , x ¯ k − x ∗ 〉 ≥0.
(3.3)

From Lemma 2.2 we get

∥ x k − x ¯ k ∥ 2 ≤ 〈 x k − ( x k − α k F ( x k ) ) , x k − x ¯ k 〉 = 〈 α k F ( x k ) , x k − x ¯ k 〉 = α k 〈 F ( x k ) , x k − x ¯ k 〉 ,

namely

〈 F ( x k ) , x k − x ¯ k 〉 ≥ ∥ x k − x ¯ k ∥ 2 / α k .
(3.4)

By the Cauchy-Schwarz inequality and (2.2), we get

〈 F ( x k ) − F ( x ¯ k ) , x k − x ¯ k 〉 ≤ ∥ F ( x k ) − F ( x ¯ k ) ∥ ⋅ ∥ x k − x ¯ k ∥ ≤ μ ∥ x k − x ¯ k ∥ 2 / α k .
(3.5)

Combining (3.3), (3.4) and (3.5) yields

〈 F ( x ¯ k ) , x k − x ∗ 〉 ≥ 〈 F ( x ¯ k ) , x k − x ¯ k 〉 = 〈 F ( x k ) , x k − x ¯ k 〉 − 〈 F ( x k ) − F ( x ¯ k ) , x k − x ¯ k 〉 ≥ ∥ x k − x ¯ k ∥ 2 / α k − μ ∥ x k − x ¯ k ∥ 2 / α k = ( 1 − μ ) ∥ x k − x ¯ k ∥ 2 / α k .
(3.6)

Thus, from (2.4), (3.2), (3.6) and θ∈(0,1], we obtain

〈 d k , x k − x ∗ 〉 = α k 〈 ( 1 − θ ) F ( x k ) + θ F ( x ¯ k ) , x k − x ∗ 〉 = ( 1 − θ ) α k 〈 F ( x k ) , x k − x ∗ 〉 + θ α k 〈 F ( x ¯ k ) , x k − x ∗ 〉 ≥ θ ( 1 − μ ) ∥ x k − x ¯ k ∥ 2 ,

which completes the proof. □

Using Lemma 3.1 and the standard proof technique for projection-type methods, we can now establish the global convergence of Algorithm 2.1.

Theorem 3.1 Assume that F(x) is continuous and pseudomonotone on K and K ∗ is nonempty. If { x k } and { x ¯ k } are two infinite sequences produced by Algorithm  2.1, then

lim k → ∞ ∥ x k − x ¯ k ∥ =0,
(3.7)

and { x k } converges to a solution of problem (1.1).

Proof For any x ∗ ∈ K ∗ , it follows from (2.3), (2.5), Lemma 2.2 and Lemma 3.1 that for all k,

∥ x k + 1 − x ∗ ∥ 2 ≤ ∥ x k − β k d k − x ∗ ∥ 2 = ∥ x k − x ∗ ∥ 2 − 2 β k 〈 d k , x k − x ∗ 〉 + β k 2 ∥ d k ∥ 2 ≤ ∥ x k − x ∗ ∥ 2 − 2 θ ( 1 − μ ) β k ∥ x k − x ¯ k ∥ 2 + β k 2 ∥ d k ∥ 2 = ∥ x k − x ∗ ∥ 2 − θ 2 ( 1 − μ ) 2 ∥ x k − x ¯ k ∥ 4 / ∥ d k ∥ 2 .
(3.8)

Thus, the sequence { x k } generated by Algorithm 2.1 is bounded, and

θ 2 ( 1 − μ ) 2 ∑ k = 0 ∞ ∥ x k − x ¯ k ∥ 4 / ∥ d k ∥ 2 ≤ ∑ k = 0 ∞ ( ∥ x k − x ∗ ∥ 2 − ∥ x k + 1 − x ∗ ∥ 2 ) <∞.

So ∥ x k − x ¯ k ∥ 4 / ∥ d k ∥ 2 →0 as k→∞. Since F(x) is continuous on K, P K [⋅] is continuous on R n and { x k }⊆K is bounded, the sequence { x ¯ k } is bounded, and hence the sequence { d k } is bounded. Then we have

∥ x k − x ¯ k ∥ 4 = ( ∥ x k − x ¯ k ∥ 4 / ∥ d k ∥ 2 ) ⋅ ∥ d k ∥ 2 →0,k→∞,

which gives (3.7). Since { x k } is bounded, there is a convergent subsequence { x k i } with lim k i → ∞ x k i = x ∞ ; we will prove that x ∞ ∈ K ∗ .

If α k i ≥ α min >0 for all k i , then from Lemma 2.4 we get

∥ e ( x ∞ , 1 ) ∥ = lim k i → ∞ ∥ e ( x k i , 1 ) ∥ ≤ lim k i → ∞ ∥ x k i − x ¯ k i ∥ / min { 1 , α min } =0.

If α k i →0, then for all sufficiently large k i we have m k i ≥1, so the criterion (2.2) is violated at the trial step size α k i / l ; together with Lemma 2.3(1), this gives

μ ∥ e ( x k i , 1 ) ∥ ≤ μ ∥ e ( x k i , α k i / l ) ∥ / ( α k i / l ) < ∥ F ( x k i ) − F ( x k i ( α k i / l ) ) ∥ ,

where x k i ( α k i / l )= P K [ x k i − ( α k i / l ) F( x k i )], so

∥ e ( x ∞ , 1 ) ∥ = lim k i → ∞ ∥ e ( x k i , 1 ) ∥ ≤ lim k i → ∞ ∥ F ( x k i ) − F ( x k i ( α k i / l ) ) ∥ / μ =0.

In both cases we have ∥e( x ∞ ,1)∥=0, and hence x ∞ ∈ K ∗ by Lemma 2.1. Combining this with (3.8), we obtain

∥ x k + 1 − x ∞ ∥ 2 ≤ ∥ x k − x ∞ ∥ 2 − θ 2 ( 1 − μ ) 2 ∥ x k − x ¯ k ∥ 4 / ∥ d k ∥ 2 ≤ ∥ x k − x ∞ ∥ 2 .

Then, for every k=1,2,… , choosing k i j ∈{ k i } with k i j ≤k, we get

∥ x k + 1 − x ∞ ∥ ≤ ∥ x k − x ∞ ∥ ≤⋯≤ ∥ x k i j − x ∞ ∥ .

Since x k i → x ∞ , the right-hand side can be made arbitrarily small, and thus { x k } converges to x ∞ , a solution of problem (1.1). □

From Theorem 3.1, we can easily get the following result.

Corollary 3.1 Assume that F(x) is continuous and pseudomonotone on K and K ∗ is nonempty. If { x k } is an infinite sequence produced by Algorithm  2.1, then

lim k → ∞ ∥ x k + 1 − x k ∥ =0.

Proof From (2.3), (2.5) and Lemma 2.2, we have

∥ x k + 1 − x k ∥ = ∥ P K [ x k − β k d k ] − x k ∥ ≤ β k ∥ d k ∥ = θ ( 1 − μ ) ∥ x k − x ¯ k ∥ 2 / ∥ d k ∥ .

From the proof of Theorem 3.1, we have

lim k → ∞ ∥ x k − x ¯ k ∥ 4 / ∥ d k ∥ 2 =0,

which implies that

lim k → ∞ ∥ x k − x ¯ k ∥ 2 / ∥ d k ∥ =0.

That is, ∥ x k + 1 − x k ∥→0 as k→+∞. □

The above corollary shows that, for any ϵ>0, the stopping criterion in Step 3 of Algorithm 2.1 is satisfied after finitely many iterations. The following theorem implies that Algorithm 2.1 has an R-linear convergence rate.

Theorem 3.2 Assume that variational inequality problem VI(F,K) meets the following conditions:

  1. (a)

    F(x) is pseudomonotone on K and K ∗ is nonempty;

  2. (b)

    F(x) is Lipschitz continuous on K with Lipschitz constant L>0;

  3. (c)

    the local error bound holds, that is, there exist constants τ>1 and δ>0 such that

    dist ( x , K ∗ ) ≤τ ∥ e ( x , 1 ) ∥ ,∀x∈K, with ∥ e ( x , 1 ) ∥ ≤δ.
    (3.9)

If { x k } is an infinite sequence produced by Algorithm  2.1, then it converges to a solution of (1.1) R-linearly.

Proof From condition (b) and the line-search rule (2.2), whenever m k ≥1 the criterion (2.2) is violated at the trial step size α k / l , so

μ ∥ x k − x k ( α k / l ) ∥ / ( α k / l ) < ∥ F ( x k ) − F ( x k ( α k / l ) ) ∥ ≤ L ∥ x k − x k ( α k / l ) ∥ ,

where x k ( α k / l )= P K [ x k − ( α k / l ) F ( x k ) ]. Simplifying gives α k > l μ / L whenever m k ≥1; since α k =1 whenever m k =0, in all cases

α k ≥ min { 1 , l μ / L } :=α,∀k=0,1,2,….

Then by Lemma 2.4 and Theorem 3.1, we have

∥ e ( x k , 1 ) ∥ ≤ ∥ x k − x ¯ k ∥ / min { 1 , α k } ≤ ∥ x k − x ¯ k ∥ / min { 1 , α } →0.
(3.10)

So, there exists sufficiently large k 0 such that

∥ e ( x k , 1 ) ∥ ≤δ,∀k≥ k 0 .

Thus, from the condition (c), we get

dist ( x k , K ∗ ) ≤τ ∥ e ( x k , 1 ) ∥ ,∀k≥ k 0 .
(3.11)

From the proof of Theorem 3.1, we know that { d k } is bounded, so there exists a constant M>0 such that ∥ d k ∥≤M for all k. Choosing x ∗ ∈ K ∗ to be a point of K ∗ closest to x k , from (3.8), (3.10) and (3.11) we obtain, for all k≥ k 0 ,

[ dist ( x k + 1 , K ∗ ) ] 2 ≤ ∥ x k + 1 − x ∗ ∥ 2 ≤ ∥ x k − x ∗ ∥ 2 − θ 2 ( 1 − μ ) 2 ∥ x k − x ¯ k ∥ 4 / ∥ d k ∥ 2 ≤ [ dist ( x k , K ∗ ) ] 2 − θ 2 ( 1 − μ ) 2 ( min { 1 , α } ) 4 [ dist ( x k , K ∗ ) ] 4 / ( τ 4 ∥ d k ∥ 2 ) ≤ [ 1 − θ 2 ( 1 − μ ) 2 ( min { 1 , α } ) 4 [ dist ( x k , K ∗ ) ] 2 / ( τ 4 M 2 ) ] [ dist ( x k , K ∗ ) ] 2 .

Thus, {dist( x k , K ∗ )} converges to zero at a Q-linear rate, and the desired result follows. □

4 Numerical examples

In this section, we present some examples to illustrate the efficiency and performance of the newly developed method, Algorithm 2.1 (denoted by HMM). The new method is compared with the classical extragradient method (denoted by EGM) in terms of the number of iterations (Iter.), CPU time (CPU) and residual error (Err.). All computations were carried out on a PC with an Intel(R) Core(TM) i3 CPU M370 @ 2.40 GHz, and all programs were implemented in MATLAB R2011b.

Throughout the computational experiments, unless otherwise stated, the parameters in Algorithm 2.1 were set to l=0.65 and μ=0.95. Since the descent direction d k depends on the parameter θ, we use different values of θ in different experiments, which leads to some interesting observations.

Example 4.1 This test problem is from Ahn (see [30]). Let F(x)=Mx+q, where

M is the n×n tridiagonal matrix with 4 on the main diagonal, −2 on the superdiagonal and 1 on the subdiagonal, and q= ( − 1 , − 1 , … , − 1 ) T .

We test this problem by using x 0 = ( 0 , 0 , … , 0 ) T as a starting point and set the parameter θ=0.1 for different dimensions n. The test results are listed in Table 1.
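A sketch of how the data of Example 4.1 might be assembled is given below; the constraint set K of this test problem is not reproduced in the text above, so the projection onto K is left to the reader and only M, q and F are constructed here. The dimension 500 is an arbitrary illustrative choice.

```python
import numpy as np

def example_4_1_data(n):
    """Tridiagonal M (4 on the diagonal, -2 on the superdiagonal, 1 on the subdiagonal)
    and q = (-1, ..., -1)^T, as in Example 4.1."""
    M = 4.0 * np.eye(n) - 2.0 * np.eye(n, k=1) + np.eye(n, k=-1)
    q = -np.ones(n)
    return M, q

M, q = example_4_1_data(500)
F = lambda x: M @ x + q
x0 = np.zeros(500)          # starting point used in the paper
```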

Table 1 Numerical results for Example 4.1

Example 4.2 This test problem was used by Kanzow (see [31]); it has five variables and is defined by

F i (x)=2( x i −i+2)exp { ∑ j = 1 5 ( x j − j + 2 ) 2 } ,1≤i≤5.

This example has one degenerate solution x ∗ = ( 0 , 0 , 1 , 2 , 3 ) T . The numerical results are given in Table 2 using different starting points (SP). The parameter θ is set to 0.1 in this example as well.

Table 2 Numerical results for Example 4.2

Example 4.3 The Nash problem. This is a Nash equilibrium model with ten variables. The test function F(x)= ( F 1 ( x ) , … , F 10 ( x ) ) T is defined by

F i (x)= c i + ( L i x i ) ^{1/ β i } − [ 5000 / ∑ k = 1 10 x k ] ^{1/ γ } + ( x i / ( γ ∑ k = 1 10 x k ) ) [ 5000 / ∑ k = 1 10 x k ] ^{1/ γ } ,1≤i≤10,

where γ=1.2, c= ( 5.0 , 3.0 , 8.0 , 5.0 , 1.0 , 3.0 , 7.0 , 4.0 , 6.0 , 3.0 ) T , L i =10 (1≤i≤10) and β= ( 1.2 , 1.0 , 0.9 , 0.6 , 1.5 , 1.0 , 0.7 , 1.1 , 0.95 , 0.75 ) T . The test results for Example 4.3 are summarized in Table 3 using the following standard starting points (e denotes the vector of all ones): (1) e; (2) 4e; (3) 7e; (4) 10e; (5) ( 1.0 , 1.2 , 1.4 , 1.6 , 1.8 , 2.1 , 2.3 , 2.5 , 2.7 , 2.9 ) T ; (6) ( 7 , 4 , 3 , 1 , 8 , 4 , 1 , 6 , 3 , 2 ) T . This time we set θ=0.25.
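Since the formula for F i above had to be reconstructed from a garbled source, the following transcription is only a best-effort sketch of the test function and should be checked against the original reference before being relied upon.

```python
import numpy as np

gamma = 1.2
c = np.array([5.0, 3.0, 8.0, 5.0, 1.0, 3.0, 7.0, 4.0, 6.0, 3.0])
L = 10.0 * np.ones(10)
beta = np.array([1.2, 1.0, 0.9, 0.6, 1.5, 1.0, 0.7, 1.1, 0.95, 0.75])

def F_nash(x):
    """Example 4.3 mapping, componentwise, as reconstructed above (assumes x > 0)."""
    S = np.sum(x)                          # total quantity sum_k x_k
    p = (5000.0 / S) ** (1.0 / gamma)      # the term (5000 / sum_k x_k)^(1/gamma)
    return c + (L * x) ** (1.0 / beta) - p + x / (gamma * S) * p
```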

Table 3 Numerical results for Example 4.3

Example 4.4 The Kojshin problem. This example was used by Pang and Gabriel (see [32]), and Kanzow (see [31]) with four variables. Let

F(x)= ( 3 x 1 2 + 2 x 1 x 2 + 2 x 2 2 + x 3 + 3 x 4 − 6 , 2 x 1 2 + x 1 + x 2 2 + 10 x 3 + 2 x 4 − 2 , 3 x 1 2 + x 1 x 2 + 2 x 2 2 + 2 x 3 + 9 x 4 − 9 , x 1 2 + 3 x 2 2 + 2 x 3 + 3 x 4 − 3 ) T .

This problem has one degenerate solution ( √ 6 / 2 , 0 , 0 , 1 / 2 ) T and one nondegenerate solution ( 1 , 0 , 3 , 0 ) T . The numerical results are listed in Table 4 using different initial points. The asterisk (∗) denotes that the limit point generated by the algorithms is the degenerate solution; otherwise, it is the nondegenerate solution. We also set θ=0.1 in this example.

Table 4 Numerical results for Example 4.4

Example 4.5 This is a box-constrained variational inequality VI(F,K) with four variables, where the constraint set K= ∏ i = 1 n [ a i , b i ] is a box region. The function is given as follows:

F(x)= ( x 1 3 − 8 , x 2 − x 3 + x 2 3 + 3 , x 2 + x 3 + 2 x 3 3 − 3 , x 4 + 2 x 4 3 ) T .

We consider the following two cases:

  1. (1)

    K= [ 0 , 5 ] 4 . The solution x ∗ = ( 2 , 0 , 1 , 0 ) T , F( x ∗ )= ( 0 , 2 , 0 , 0 ) T is degenerate but not R-regular.

  2. (2)

    K= [ − 1 , 1 ] 4 . The solution x ∗ = ( 1 , − 1 , 1 , 0 ) T , F( x ∗ )= ( − 7 , 0 , − 1 , 0 ) T is also degenerate but not R-regular.

In the example, the parameter θ in Algorithm 2.1 is chosen as θ=0.2. The test results are listed in Table 5 and Table 6 using different starting points for K= [ 0 , 5 ] 4 and K= [ − 1 , 1 ] 4 , respectively.

Table 5 Numerical results for Example 4.5 with K= [ 0 , 5 ] 4
Table 6 Numerical results for Example 4.5 with K= [ − 1 , 1 ] 4

Example 4.6 This is a box-constrained affine variational inequality VI(F,K) with four variables, where the constraint set K= ∏ i = 1 n [ a i , b i ] is a box region. The function is given as follows:

F(x)=Mx+q,

where

M is the 4×4 matrix with rows ( 4 , 2 , 2 , 1 ) , ( 2 , 4 , 0 , 1 ) , ( 2 , 0 , 2 , 2 ) and ( − 1 , − 1 , − 2 , 0 ) , and q= ( − 8 , − 6 , − 4 , 3 ) T .

We consider the following two cases:

  1. (1)

    K= [ − 1 , 1 ] 4 . The solution x ∗ = ( 1 , 8 / 9 , 5 / 9 , 4 / 9 ) T , F( x ∗ )= ( − 2 / 3 , 0 , 0 , 0 ) T ;

  2. (2)

    K= [ − 5 , 5 ] 4 . The solution x ∗ = ( 4 / 3 , 7 / 9 , 4 / 9 , 2 / 9 ) T , F( x ∗ )= ( 0 , 0 , 0 , 0 ) T .

In the example, the parameter θ in Algorithm 2.1 is chosen as θ=0.55. The test results are listed in Table 7 and Table 8 using different starting points for K= [ − 1 , 1 ] 4 and K= [ − 5 , 5 ] 4 , respectively.
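Putting the pieces together, Example 4.6 is a small affine VI over a box and can be set up as follows; the call to algorithm_2_1 refers to the illustrative sketch given after Algorithm 2.1, not to the authors' original MATLAB implementation.

```python
import numpy as np

M = np.array([[ 4.0,  2.0,  2.0, 1.0],
              [ 2.0,  4.0,  0.0, 1.0],
              [ 2.0,  0.0,  2.0, 2.0],
              [-1.0, -1.0, -2.0, 0.0]])
q = np.array([-8.0, -6.0, -4.0, 3.0])
F = lambda x: M @ x + q

lo, hi = -np.ones(4), np.ones(4)                 # case (1): K = [-1, 1]^4
proj_K = lambda y: np.clip(y, lo, hi)

# x = algorithm_2_1(F, proj_K, x0=np.zeros(4), theta=0.55)
# expected solution in case (1): (1, 8/9, 5/9, 4/9)
```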

Table 7 Numerical results for Example 4.6 with K= [ − 1 , 1 ] 4
Table 8 Numerical results for Example 4.6 with K= [ − 5 , 5 ] 4

From the above experiments, we find that the newly developed method (Algorithm 2.1) enjoys obvious advantages in the number of iterations and CPU time. In Example 4.1, the number of iterations of our algorithm stays at 20 as the dimension increases, whereas that of the extragradient method keeps growing; moreover, in this example the CPU time of the extragradient method is about seven times that of our algorithm. In Example 4.2, although the error of our algorithm is sometimes larger than that of the extragradient method (when the starting point is (1,0,1,3,5)), our algorithm is clearly more stable (with the starting points ( 1 , 2 , 3 , 4 , 5 ) T and ( 10 , 9 , 8 , 7 , 6 ) T the extragradient method does not work), and the CPU time of our algorithm is only about one sixth of that of the extragradient method. The last two examples are box-constrained variational inequality problems, and in these two examples our algorithm is also clearly advantageous. In addition, the parameter θ used is very small in most experiments, which indicates the importance of F( x k ) in the descent direction d k ; however, if θ is chosen too small, Algorithm 2.1 can even perform worse than the extragradient method. In a word, our algorithm is promising.

5 Conclusion

In this work, we have presented a new extragradient-like method for the classical variational inequality problem based on a novel descent direction that we constructed. The numerical results show the excellent performance of our algorithm. In the paper we require 0<θ≤1, but Algorithm 2.1 sometimes also performs very well when θ=0, which suggests a direction for further study. In addition, the step size β k in Algorithm 2.1 and the established convergence rate still leave room for improvement and may be modified to some extent. Further progress on numerical methods for variational inequality problems remains to be made.

References

1. Hartman P, Stampacchia G: On some nonlinear elliptic differential functional equations. Acta Math. 1966, 115: 153–188.

2. Xiu N, Zhang J: Some recent advances in projection-type methods for variational inequalities. J. Comput. Appl. Math. 2003, 152: 559–585. 10.1016/S0377-0427(02)00730-6

3. Goldstein AA: Convex programming in Hilbert space. Bull. Am. Math. Soc. 1964, 70: 709–710. 10.1090/S0002-9904-1964-11178-2

4. Levitin ES, Polyak BT: Constrained minimization problems. U.S.S.R. Comput. Math. Math. Phys. 1966, 6: 1–50.

5. Auslender A: Optimisation: Méthodes Numériques. Masson, Paris; 1976.

6. Bakusinskii AB, Polyak BT: On the solution of variational inequalities. Sov. Math. Dokl. 1974, 15: 1705–1710.

7. Bruck RE: An iterative solution of a variational inequality for certain monotone operators in Hilbert space. Bull. Am. Math. Soc. 1975, 81: 890–892. 10.1090/S0002-9904-1975-13874-2

8. Aslam Noor M, Wang Y, Xiu N: Some new projection methods for variational inequalities. Appl. Math. Comput. 2003, 137: 423–435. 10.1016/S0096-3003(02)00148-0

9. Xiu N, Wang Y, Zhang X: Modified fixed-point equations and related iterative methods for variational inequalities. Comput. Math. Appl. 2004, 47: 913–920. 10.1016/S0898-1221(04)90075-2

10. Korpelevich GM: The extragradient method for finding saddle points and other problems. Matecon 1976, 12: 747–756.

11. He B, Stoer J: Solution of projection problems over polytopes. Numer. Math. 1992, 61: 73–90. 10.1007/BF01385498

12. He B: A new method for a class of linear variational inequalities. Math. Program. 1994, 66: 137–144. 10.1007/BF01581141

13. He B: A class of projection and contraction methods for monotone variational inequalities. Appl. Math. Optim. 1997, 35(1): 69–76.

14. Solodov MV, Tseng P: Modified projection-type methods for monotone variational inequalities. SIAM J. Control Optim. 1996, 34(5): 1814–1830. 10.1137/S0363012994268655

15. Sun D: A class of iterative methods for solving nonlinear projection equations. J. Optim. Theory Appl. 1996, 91(1): 123–140. 10.1007/BF02192286

16. Bello Cruz JY, Iusem AN: A strongly convergent direct method for monotone variational inequalities in Hilbert spaces. Numer. Funct. Anal. Optim. 2009, 30: 23–36. 10.1080/01630560902735223

17. Bello Cruz JY, Iusem AN: Convergence of direct methods for paramonotone variational inequalities. Comput. Optim. Appl. 2010, 46: 247–263. 10.1007/s10589-009-9246-5

18. Censor Y, Gibali A, Reich S: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148: 318–335. 10.1007/s10957-010-9757-3

19. Iusem AN: An iterative algorithm for the variational inequality problem. J. Comput. Appl. Math. 1994, 13: 103–114.

20. Iusem AN, Svaiter BF: A variant of Korpelevich’s method for variational inequalities with a new search strategy. Optimization 1997, 42: 309–321. Addendum: Optimization 1998, 43: 85. 10.1080/02331939708844365

21. Khobotov EN: Modifications of the extragradient method for solving variational inequalities and certain optimization problems. U.S.S.R. Comput. Math. Math. Phys. 1987, 27: 120–127.

22. Konnov IV: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin; 2001.

23. Konnov IV: A class of combined iterative methods for solving variational inequalities. J. Optim. Theory Appl. 1997, 94: 677–693. 10.1023/A:1022605117998

24. Konnov IV: A combined relaxation method for variational inequalities with nonlinear constraints. Math. Program. 1998, 80: 239–252.

25. Korpelevich GM: The extragradient method for finding saddle points and other problems. Èkon. Mat. Metody 1976, 12: 747–756.

26. Marcotte P: Application of Khobotov’s algorithm to variational inequalities and network equilibrium problems. Inf. Syst. Oper. Res. 1991, 29: 258–270.

27. Solodov MV, Svaiter BF: A new projection method for monotone variational inequality problems. SIAM J. Control Optim. 1999, 37: 765–776. 10.1137/S0363012997317475

28. Solodov MV, Tseng P: Modified projection-type methods for monotone variational inequalities. SIAM J. Control Optim. 1996, 34: 1814–1830. 10.1137/S0363012994268655

29. Han J, Xiu N, Qi H: Nonlinear Complementarity Theory and Algorithm. Shanghai Sci. Technol., Shanghai; 2006.

30. Ahn BH: Iterative methods for linear complementarity problem with upperbounds and lowerbounds. Math. Program. 1983, 26: 265–315.

31. Kanzow C: Some equation-based methods for the nonlinear complementarity problem. Optim. Methods Softw. 1994, 3: 327–340. 10.1080/10556789408805573

32. Pang J-S, Gabriel SA: NE/SQP: a robust algorithm for the nonlinear complementarity problem. Math. Program. 1993, 60: 295–337. 10.1007/BF01580617


Acknowledgements

The project was supported by the National Natural Science Foundation of China (Grant No. 11071041) and Fujian Natural Science Foundation (Grant No. 2009J01002).

Author information

Corresponding author

Correspondence to Changfeng Ma.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Huang, N., Ma, C. & Liu, Z. A new extragradient-like method for solving variational inequality problems. Fixed Point Theory Appl 2012, 223 (2012). https://doi.org/10.1186/1687-1812-2012-223
