A new extragradient-like method for solving variational inequality problems
© Huang et al.; licensee Springer. 2012
Received: 20 August 2012
Accepted: 16 November 2012
Published: 10 December 2012
In this paper, we present a new extragradient-like method for the classical variational inequality problem, based on a novel descent direction that we construct. Furthermore, we establish the global convergence and the R-linear convergence rate of the new method under certain conditions. Numerical results confirm the good theoretical properties of our approach.
1 Introduction
The classical variational inequality problem is to find a vector $x^* \in K$ such that
$$\langle F(x^*), x - x^* \rangle \ge 0 \quad \text{for all } x \in K, \qquad (1.1)$$
where $F$ is a continuous mapping from $\mathbb{R}^n$ into $\mathbb{R}^n$, $K$ is a nonempty closed convex subset of $\mathbb{R}^n$, and $\langle \cdot, \cdot \rangle$ is the usual Euclidean inner product in $\mathbb{R}^n$. We denote problem (1.1) by $\mathrm{VI}(F, K)$ and its solution set by $K^*$. Problem (1.1) was first introduced by Hartman and Stampacchia (see [1]) in 1966, primarily with the goal of computing stationary points for nonlinear programs. It provides a broad unifying setting for the study of optimization and equilibrium problems and serves as the main computational framework for the practical solution of a host of continuum problems in the mathematical sciences. It has a wide range of important applications in economics, engineering, operations research, etc.; we will not dwell further on this. The problem we are interested in is how to find such a vector $x^*$.
This is the extragradient method, which has an R-linear convergence rate. The vector in (1.3) is, under certain conditions, a descent direction of $\mathrm{VI}(F, K)$ at the current iterate, which is the key to the convergence of the algorithm. There are several studies on descent directions of this kind; to our knowledge, only five such descent directions have been found so far (see [9, 11–28]). In this paper, we construct a novel descent direction and present a new extragradient-like method based on it. Furthermore, we prove that the new method has the same R-linear convergence rate as the extragradient method. Some numerical experiments are reported to support our analysis.
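For reference, the classical extragradient iteration can be sketched in a few lines of MATLAB. This is only a minimal illustration under simplifying assumptions, not the new method of this paper: the feasible set $K$ is taken to be a box (so the projection has a closed form), the step size tau is fixed with $\tau L < 1$, and the function handle F and the bounds lo, hi are hypothetical inputs.

```matlab
% Minimal sketch of the classical extragradient method (Korpelevich) for
% VI(F, K); K is assumed to be a box so that P_K has a closed form.
function x = extragradient(F, lo, hi, x, tau, tol, maxit)
    PK = @(z) min(max(z, lo), hi);   % orthogonal projection onto the box K
    for k = 1:maxit
        y = PK(x - tau * F(x));      % predictor (extragradient) step
        if norm(x - y) <= tol        % residual-based stopping rule
            break;
        end
        x = PK(x - tau * F(y));      % corrector step
    end
end
```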
The rest of this article is organized as follows. In Section 2, some preliminaries are stated and an extragradient-like method is proposed. In Section 3, the global convergence and the local convergence rate of the algorithm are proved. The results of some preliminary experiments on a few test examples are reported in Section 4, and the conclusions are given in Section 5.
2 Preliminaries and algorithm
We first recall some necessary results from convex analysis and the related literature.
Lemma 2.1 A point $x \in K$ is a solution of problem (1.1) if and only if $x$ satisfies the projection equation
$$x = P_K\bigl(x - \beta F(x)\bigr),$$
where $\beta > 0$ is a constant and $P_K(\cdot)$ is the orthogonal projection from $\mathbb{R}^n$ onto $K$.
This alternative equivalent formulation has played an important role in studying the existence of solutions and in suggesting projection-type algorithms for solving variational inequalities. To prove the convergence and the convergence rate of our algorithm later, two further lemmas are presented here.
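To make the fixed-point characterization concrete, the following minimal MATLAB sketch evaluates the projection residual $x - P_K(x - \beta F(x))$, whose norm vanishes exactly at the solutions of (1.1). The affine mapping and the box set used here are hypothetical test data, not an example from this paper.

```matlab
% Fixed-point test from the lemma above: x solves (1.1) iff the projection
% residual e(x, beta) = x - P_K(x - beta*F(x)) is zero (hypothetical data).
M  = [4 -1; -1 2];  q = [-1; -2];     % affine mapping F(x) = M*x + q
lo = zeros(2, 1);   hi = ones(2, 1);  % box set K = [0, 1]^2
PK = @(z) min(max(z, lo), hi);        % orthogonal projection onto K
F  = @(x) M*x + q;
beta = 1;                             % any fixed beta > 0 serves for the test
e  = @(x) x - PK(x - beta * F(x));    % projection residual
x  = [0.5; 1];                        % candidate point
fprintf('residual norm = %g\n', norm(e(x)));
```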
Lemma 2.2 (Property 3.1.1 in )
Lemma 2.3 (Property 3.1.3 in )
the scaled residual norm $\|x - P_K(x - \beta F(x))\| / \beta$ is monotonically nonincreasing in the variable $\beta > 0$;
the residual norm $\|x - P_K(x - \beta F(x))\|$ is monotonically nondecreasing in the variable $\beta > 0$.
Combining the above two inequalities, we can easily get the result.
The same argument gives the result in the other case. Summing up the two cases completes the proof. □
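The two monotonicity properties are easy to observe numerically. The following check reuses the hypothetical affine/box data F and PK from the previous sketch; it verifies that $\|e(x,\beta)\|$ does not decrease and $\|e(x,\beta)\|/\beta$ does not increase along a grid of $\beta$ values.

```matlab
% Numerical check of the two monotonicity properties stated above, reusing
% the hypothetical F and PK defined in the previous sketch.
x     = randn(2, 1);
betas = logspace(-2, 2, 50);
r     = zeros(size(betas));
for i = 1:numel(betas)
    b    = betas(i);
    r(i) = norm(x - PK(x - b * F(x)));  % ||e(x, beta)||
end
assert(all(diff(r) >= -1e-12));         % ||e(x, beta)|| nondecreasing in beta
assert(all(diff(r ./ betas) <= 1e-12)); % ||e(x, beta)||/beta nonincreasing
```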
Now, we begin to establish the following iterative method for solving problem (1.1).
Algorithm 2.1 (A new extragradient-like method)
Step 0 (Initialization) Choose an initial point $x^0 \in K$ and the parameters of the algorithm, take the stopping tolerance $\epsilon > 0$, and set $k := 0$.
Step 3 If the stopping criterion is satisfied, then stop; otherwise, set $k := k + 1$ and go to Step 1.
Remark 2.1 To our knowledge, the search direction in Algorithm 2.1 is new; moreover, it is a descent direction of $\mathrm{VI}(F, K)$ at the current iterate under certain conditions. We will prove this in the next section.
Thus, by (2.4), the trial-step search in the algorithm terminates after finitely many reductions; that is, Step 2 of Algorithm 2.1 is well posed.
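The displays defining Steps 1 and 2 are not reproduced above, but the role of (2.4) is the standard one: it guarantees that the search in Step 2 accepts after finitely many reductions. Purely as a generic illustration, the sketch below uses the classical acceptance test from extragradient variants such as Khobotov's, not necessarily the exact test (2.4) of Algorithm 2.1, and reuses F, PK and the point x from the earlier sketch.

```matlab
% Generic backtracking search of the kind that a condition like (2.4) makes
% well posed; the acceptance test here is the classical extragradient rule,
% shown only for illustration. nu, gamma in (0, 1); since F is Lipschitz,
% the loop exits after finitely many reductions.
nu = 0.9;  gamma = 0.5;  beta = 1;
y  = PK(x - beta * F(x));
while beta * norm(F(x) - F(y)) > nu * norm(x - y)
    beta = gamma * beta;              % shrink the trial step size
    y    = PK(x - beta * F(x));       % recompute the trial point
end
```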
3 Convergence analysis
In this section, we discuss the convergence and convergence rate of Algorithm 2.1. Firstly, we prove an important lemma.
which completes the proof. □
By using Lemma 3.1 and the standard proof technique for projection-type methods, we can easily conclude the global convergence of Algorithm 2.1.
and the sequence $\{x^k\}$ converges to a solution of problem (1.1).
and then we obtain (3.7). Now suppose that $\bar{x}$ is an accumulation point of the sequence $\{x^k\}$; we will prove that $\bar{x} \in K^*$.
thus, $\{x^k\}$ converges to a solution of problem (1.1). □
From Theorem 3.1, we can easily get the following result.
That is, the residual tends to zero as $k \to \infty$. □
The above corollary shows that Algorithm 2.1 terminates after finitely many iterations for any tolerance $\epsilon > 0$. The following theorem shows that Algorithm 2.1 has an R-linear convergence rate.
- (a) $F$ is pseudomonotone on $K$ and the solution set $K^*$ is nonempty;
- (b) $F$ is Lipschitz continuous on $K$ with Lipschitz constant $L > 0$;
- (c) the local error bound holds; that is, there exist constants $c > 0$ and $\delta > 0$ such that
$$\operatorname{dist}(x, K^*) \le c \,\bigl\|x - P_K\bigl(x - F(x)\bigr)\bigr\| \quad \text{whenever } \bigl\|x - P_K\bigl(x - F(x)\bigr)\bigr\| \le \delta. \qquad (3.9)$$
If $\{x^k\}$ is an infinite sequence produced by Algorithm 2.1, then it converges to a solution of (1.1) R-linearly.
Thus, $\operatorname{dist}(x^k, K^*)$ converges to zero at a Q-linear rate, and the desired result follows. □
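For completeness, the passage from a Q-linear rate for the distances to an R-linear rate for the iterates is the standard argument sketched below; the constant $C$ is generic and absorbs the bound, typical of projection-type methods, of the step length $\|x^{k+1} - x^k\|$ by a multiple of $\operatorname{dist}(x^k, K^*)$. If $\operatorname{dist}(x^{k+1}, K^*) \le q \operatorname{dist}(x^k, K^*)$ with $q \in (0, 1)$, then $\operatorname{dist}(x^k, K^*) \le q^k \operatorname{dist}(x^0, K^*)$, and for $m > k$,
$$\|x^m - x^k\| \le \sum_{j=k}^{m-1} \|x^{j+1} - x^j\| \le C \sum_{j=k}^{\infty} q^j = \frac{C q^k}{1 - q},$$
so $\{x^k\}$ is a Cauchy sequence; its limit $\bar{x}$ lies in $K^*$, and letting $m \to \infty$ gives $\|x^k - \bar{x}\| \le \frac{C}{1-q}\, q^k$, which is exactly R-linear convergence.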
4 Numerical examples
In this section, we present some examples to illustrate the efficiency and performance of the newly developed method (Algorithm 2.1), denoted by HMM. The new method was compared with the classical extragradient method (denoted by EGM) in terms of the number of iterations (Iter.), CPU time (CPU), and residual error (Err.). All computations were carried out on a PC with an Intel(R) Core(TM) i3 CPU M370 @ 2.40 GHz, and all programs were implemented in MATLAB R2011b.
Throughout the computational experiments, unless otherwise stated, the parameters in Algorithm 2.1 were held fixed. Since the descent direction changes with the parameter θ, we used different values of θ in different experiments, which reveals some interesting behavior.
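For transparency about how the Iter., CPU, and Err. columns are produced, a minimal MATLAB measurement harness is sketched below. The problem data and the solver call are hypothetical placeholders (here the classical extragradient sketch from the introduction, saved as extragradient.m), not the actual test problems of this section.

```matlab
% Minimal measurement harness for the reported columns; hypothetical data.
n   = 100;
M   = eye(n) + diag(ones(n - 1, 1), 1);  % hypothetical test matrix
q   = -ones(n, 1);
F   = @(x) M*x + q;
lo  = zeros(n, 1);  hi = 10 * ones(n, 1);
x0  = zeros(n, 1);
tic;
x   = extragradient(F, lo, hi, x0, 0.1, 1e-7, 10000);  % solver under test
cpu = toc;                                             % CPU column
err = norm(x - min(max(x - F(x), lo), hi));            % Err. column (residual)
fprintf('CPU = %.3f s, Err = %.2e\n', cpu, err);
```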
Numerical results for Example 4.1
Numerical results for Example 4.2
Numerical results for Example 4.3
Numerical results for Example 4.4
The solution of this example is degenerate but not R-regular.
The solution of this example is also degenerate but not R-regular.
Numerical results for Example 4.5 with
Numerical results for Example 4.5 with
. The solution , ;
. The solution , .
Numerical results for Example 4.6 with
Numerical results for Example 4.6 with
From the above experiments, we find that the newly developed method (Algorithm 2.1) enjoys obvious advantages in the number of iterations and in CPU time. In Example 4.1, the iteration count of our algorithm stays at 20 as the dimension increases, whereas that of the extragradient method keeps growing; moreover, in this example the CPU time of the extragradient method is about seven times that of our algorithm. In Example 4.2, although the error of our algorithm is sometimes larger than that of the extragradient method (for one of the starting points), our algorithm is clearly more robust: for two of the starting points, the extragradient method fails to converge. Moreover, the CPU time of our algorithm is only about one sixth of that of the extragradient method. The last two examples are box-constrained variational inequality problems, and in these two examples our algorithm is also clearly advantageous. In addition, the best-performing values of the parameter θ are very small, which indicates the importance of the corresponding term in the descent direction. However, when θ is set too small in some examples, Algorithm 2.1 can even perform worse than the extragradient method. In a word, our algorithm is promising.
5 Conclusions
In this work, we have presented a new extragradient-like method for the classical variational inequality problem, based on a novel descent direction that we constructed. The numerical results show the strong performance of our algorithm. In the paper we impose a restriction on one of the constants, but Algorithm 2.1 sometimes also performs very well when this restriction is violated, which motivates further study. In addition, the construction in Algorithm 2.1 is not yet ideal, and the established convergence rate can likely be improved as well. Further progress is still needed in numerical methods for the variational inequality problem.
The project was supported by the National Natural Science Foundation of China (Grant No. 11071041) and Fujian Natural Science Foundation (Grant No. 2009J01002).
References
1. Hartman P, Stampacchia G: On some nonlinear elliptic differential functional equations. Acta Math. 1966, 115: 153–188.
2. Xiu N, Zhang J: Some recent advances in projection-type methods for variational inequalities. J. Comput. Appl. Math. 2003, 152: 559–585.
3. Goldstein AA: Convex programming in Hilbert space. Bull. Am. Math. Soc. 1964, 70: 709–710.
4. Levitin ES, Polyak BT: Constrained minimization problems. U.S.S.R. Comput. Math. Math. Phys. 1966, 6: 1–50.
5. Auslender A: Optimisation: Méthodes Numériques. Masson, Paris; 1976.
6. Bakusinskii AB, Polyak BT: On the solution of variational inequalities. Sov. Math. Dokl. 1974, 15: 1705–1710.
7. Bruck RE: An iterative solution of a variational inequality for certain monotone operators in Hilbert space. Bull. Am. Math. Soc. 1975, 81: 890–892.
8. Aslam Noor M, Wang Y, Xiu N: Some new projection methods for variational inequalities. Appl. Math. Comput. 2003, 137: 423–435.
9. Xiu N, Wang Y, Zhang X: Modified fixed-point equations and related iterative methods for variational inequalities. Comput. Math. Appl. 2004, 47: 913–920.
10. Korpelevich GM: The extragradient method for finding saddle points and other problems. Matecon 1976, 12: 747–756.
11. He B, Stoer J: Solution of projection problems over polytopes. Numer. Math. 1992, 61: 73–90.
12. He B: A new method for a class of linear variational inequalities. Math. Program. 1994, 66: 137–144.
13. He B: A class of projection and contraction methods for monotone variational inequalities. Appl. Math. Optim. 1997, 35(1): 69–76.
14. Solodov MV, Tseng P: Modified projection-type methods for monotone variational inequalities. SIAM J. Control Optim. 1996, 34(5): 1814–1830.
15. Sun D: A class of iterative methods for solving nonlinear projection equations. J. Optim. Theory Appl. 1996, 91(1): 123–140.
16. Bello Cruz JY, Iusem AN: A strongly convergent direct method for monotone variational inequalities in Hilbert spaces. Numer. Funct. Anal. Optim. 2009, 30: 23–36.
17. Bello Cruz JY, Iusem AN: Convergence of direct methods for paramonotone variational inequalities. Comput. Optim. Appl. 2010, 46: 247–263.
18. Censor Y, Gibali A, Reich S: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148: 318–335.
19. Iusem AN: An iterative algorithm for the variational inequality problem. J. Comput. Appl. Math. 1994, 13: 103–114.
20. Iusem AN, Svaiter BF: A variant of Korpelevich's method for variational inequalities with a new search strategy. Optimization 1997, 42: 309–321. Addendum: Optimization 1998, 43: 85.
21. Khobotov EN: Modifications of the extragradient method for solving variational inequalities and certain optimization problems. U.S.S.R. Comput. Math. Math. Phys. 1987, 27: 120–127.
22. Konnov IV: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin; 2001.
23. Konnov IV: A class of combined iterative methods for solving variational inequalities. J. Optim. Theory Appl. 1997, 94: 677–693.
24. Konnov IV: A combined relaxation method for variational inequalities with nonlinear constraints. Math. Program. 1998, 80: 239–252.
25. Korpelevich GM: The extragradient method for finding saddle points and other problems. Èkon. Mat. Metody 1976, 12: 747–756.
26. Marcotte P: Application of Khobotov's algorithm to variational inequalities and network equilibrium problems. Inf. Syst. Oper. Res. 1991, 29: 258–270.
27. Solodov MV, Svaiter BF: A new projection method for monotone variational inequality problems. SIAM J. Control Optim. 1999, 37: 765–776.
28. Solodov MV, Tseng P: Modified projection-type methods for monotone variational inequalities. SIAM J. Control Optim. 1996, 34: 1814–1830.
29. Han J, Xiu N, Qi H: Nonlinear Complementarity Theory and Algorithm. Shanghai Sci. Technol., Shanghai; 2006.
30. Ahn BH: Iterative methods for linear complementarity problems with upper bounds and lower bounds. Math. Program. 1983, 26: 265–315.
31. Kanzow C: Some equation-based methods for the nonlinear complementarity problem. Optim. Methods Softw. 1994, 3: 327–340.
32. Pang J-S, Gabriel SA: NE/SQP: a robust algorithm for the nonlinear complementarity problem. Math. Program. 1993, 60: 295–337.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.