The subgradient double projection method for variational inequalities in a Hilbert space
© Zheng; licensee Springer 2013
Received: 25 December 2012
Accepted: 10 May 2013
Published: 27 May 2013
We present a modification of the double projection algorithm proposed by Solodov and Svaiter for solving variational inequalities (VI) in a Hilbert space. The main modification is to use the subgradient of a convex function to obtain a hyperplane, and the second projection onto the intersection of the feasible set and a halfspace is replaced by a projection onto the intersection of two halfspaces. In addition, we propose a modified version of our algorithm that finds a solution of the VI which is also a fixed point of a given nonexpansive mapping. We establish weak convergence theorems for our algorithms.
Keywords: variational inequalities; nonexpansive mapping; subgradient; double projection algorithm; weak convergence
It is well known that the projection operator is nonexpansive (i.e., Lipschitz continuous with Lipschitz constant 1), and hence is also a bounded mapping.
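As a small numerical illustration (our own, not part of the original analysis), the sketch below projects onto the closed unit ball, a set whose metric projection has a closed form, and checks the nonexpansiveness inequality ‖P(x) − P(y)‖ ≤ ‖x − y‖ on random points; the function names are ours.

```python
import numpy as np

def proj_unit_ball(x):
    """Metric projection onto the closed unit ball {z : ||z|| <= 1}."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    # nonexpansiveness: ||Px - Py|| <= ||x - y||
    assert np.linalg.norm(proj_unit_ball(x) - proj_unit_ball(y)) \
           <= np.linalg.norm(x - y) + 1e-12
```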
The variational inequality problem was first introduced by Hartman and Stampacchia [1] in 1966. In recent years, many iterative projection-type algorithms have been proposed and analyzed for solving the variational inequality problem; see, e.g., [2, 4] and the references therein. To implement these algorithms, one has to compute the projection onto the feasible set C, or onto some related set.
In 1999, Solodov and Svaiter [3] proposed an algorithm for solving the VI in a Euclidean space, known as the double projection algorithm because one needs to implement two projections at each iteration: one onto the feasible set C, and the other onto the intersection of the feasible set C and a halfspace. More precisely, they presented the following algorithm.
Algorithm 1.1 Choose an initial point , parameters , and set .
Stop if ; otherwise, go to Step 2.
Let and return to Step 1.
Although it has been shown that Algorithm 1.1 can take a longer stepsize, and hence is in theory a better algorithm than the extragradient method proposed by Korpelevich [5], one still needs to calculate two projections, onto the feasible set C and onto a related set, at each iteration. If the set C is simple enough (e.g., C is a halfspace or a ball) so that projections onto it and the related set are easily executed, then Algorithm 1.1 is particularly useful. But if C is a general closed convex set, one has to solve two projection subproblems at each iteration to obtain the next iterate. This might seriously affect the efficiency of Algorithm 1.1.
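Korpelevich's extragradient method mentioned above takes a predictor step y = P_C(x − τf(x)) followed by a corrector step x⁺ = P_C(x − τf(y)). The sketch below is our own toy illustration, not an example from the paper: a monotone VI given by a skew-symmetric linear map over the unit ball, whose unique solution is the origin; the stepsize and iteration count are our choices.

```python
import numpy as np

def proj_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric => f is monotone
f = lambda x: A @ x                        # Lipschitz constant L = 1
tau = 0.5                                  # stepsize, must satisfy tau < 1/L

x = np.array([1.0, 0.5])
for _ in range(300):
    y = proj_ball(x - tau * f(x))          # predictor (extragradient) step
    x = proj_ball(x - tau * f(y))          # corrector step

# the unique solution of this toy VI is x* = 0
assert np.linalg.norm(x) < 1e-6
```

Note that both steps project onto C, which is exactly the cost the double projection and subgradient variants try to reduce when C is not simple.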
Recently, Censor et al. [6, 7] presented a subgradient extragradient projection method for solving the VI. Inspired by the above works, in this paper we present a modification of Algorithm 1.1 in a Hilbert space. Our algorithm replaces the projection onto C by a projection onto a halfspace that is constructed via a subgradient and contains the feasible set C, and the second projection is onto the intersection of two halfspaces. Observe that the projection onto the intersection of two halfspaces is easily computable. In addition, we propose a modified version of our algorithm that finds a solution of the VI which is also a fixed point of a given nonexpansive mapping. We establish weak convergence theorems for our algorithms.
In this section, we recall some definitions and results that will be used in this paper.
The projection operator has the following properties; see [8].
The next property is known as the Opial condition [9]. Every Hilbert space satisfies this condition.
Condition 2.1 (Opial)
Then converges strongly to some .
Proof See [, Lemma 3.2]. □
Proof See [, Lemma 3.1]. □
The next fact is known as the demiclosedness principle [12].
Lemma 2.4 Let H be a real Hilbert space, D be a closed and convex subset of H and be a nonexpansive mapping. Then (I is the identity operator on H) is demiclosed at , i.e., for any sequence in D such that and , we have .
where denotes the distance from x to K.
Let , we obtain the desired result. □
that is, .
In this paper, we assume that the convex set C satisfies the following assumptions:
where is a convex (not necessarily differentiable) function and C is nonempty.
Note that the differentiability of the defining function is not assumed; therefore, the set C is quite general. For example, any system of inequalities , , where each function is convex and J is an arbitrary index set, is equivalent to the single inequality with . In fact, every closed convex set can be represented in this way, e.g., take , where dist is the distance function; see [, Section 1.3(c)].
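The representation just described can be made concrete. The sketch below is our own illustration for the finitely generated affine case: a polyhedral set given by several inequalities is rewritten as the single inequality c(x) ≤ 0 with c the pointwise maximum, and a subgradient of c at a point is the gradient of any affine piece attaining the maximum; the data A, b are ours.

```python
import numpy as np

# C = {x : <a_j, x> - b_j <= 0, j in J}, written as the single
# inequality c(x) <= 0 with c(x) = max_j (<a_j, x> - b_j).
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 1.5])

def c(x):
    return np.max(A @ x - b)

def subgradient(x):
    """A subgradient of the max-function c at x: the gradient a_j of
    any affine piece attaining the maximum."""
    j = int(np.argmax(A @ x - b))
    return A[j]

x = np.array([2.0, 0.0])
print(c(x), subgradient(x))   # c(x) = 1.0 > 0, so x lies outside C
```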
Proof See . □
i.e., and hence .
(2) From Proposition 2.1, we can observe that the projection onto such a halfspace can be represented explicitly, without resorting to a general projection operator; thus its computation is easy. Recently, such halfspaces have often been used as the projection regions in algorithms for the split feasibility problem; see [15–18].
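The explicit formula behind this remark is the classical projection onto a halfspace H = {z : ⟨a, z⟩ ≤ b}, namely P_H(x) = x − max(0, ⟨a, x⟩ − b)/‖a‖² · a. A minimal sketch (function names ours):

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Closed-form projection onto the halfspace {z : <a, z> <= b}, a != 0."""
    viol = a @ x - b
    if viol <= 0:          # x already lies in the halfspace
        return x
    return x - (viol / (a @ a)) * a

# project (2, 2) onto {z : z_1 + z_2 <= 2}
p = proj_halfspace(np.array([2.0, 2.0]), np.array([1.0, 1.0]), 2.0)
print(p)   # -> [1. 1.]
```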
3 The subgradient double projection algorithm
To this end, the following assumptions are needed.
- (A1) The solution set of problem (1.1), denoted by , is nonempty.
- (A2) For all , let and be a closed line segment joining x and y; f satisfies
- (A3) The mapping f is continuous and bounded on bounded subsets of H.
Here, we give a concrete example satisfying condition (A2). Let be defined by , and .
In this paper, we establish a weak convergence theorem for subgradient double projection methods in a Hilbert space under assumptions (A1)-(A3).
Algorithm 3.1 Select , , , , , . Set .
If , stop; else go to Step 2.
Let and return to Step 1.
Remark 3.2 (1) Since the constructed sets are halfspaces, by Proposition 2.1 the projection onto their intersection can be calculated directly, so our Algorithm 3.1 can be implemented more easily than Algorithm 1.1.
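To illustrate why this second projection is cheap, the sketch below (our own illustration, not the paper's notation) projects onto the intersection of two halfspaces by enumerating the possible active constraint sets from the KKT conditions; when both constraints are active, a 2×2 linear system for the multipliers suffices. It assumes the two normals are linearly independent and the intersection is nonempty.

```python
import numpy as np

def proj_halfspace(x, a, b):
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def proj_two_halfspaces(x, a1, b1, a2, b2, tol=1e-12):
    """Projection onto {z : <a1,z> <= b1} and {z : <a2,z> <= b2}
    via KKT active-set enumeration (a1, a2 linearly independent)."""
    if a1 @ x <= b1 + tol and a2 @ x <= b2 + tol:
        return x                         # no constraint active
    p = proj_halfspace(x, a1, b1)
    if a2 @ p <= b2 + tol:
        return p                         # only the first constraint active
    p = proj_halfspace(x, a2, b2)
    if a1 @ p <= b1 + tol:
        return p                         # only the second constraint active
    # both constraints active: x - l1*a1 - l2*a2 lies on both hyperplanes
    G = np.array([[a1 @ a1, a1 @ a2], [a2 @ a1, a2 @ a2]])
    l = np.linalg.solve(G, np.array([a1 @ x - b1, a2 @ x - b2]))
    return x - l[0] * a1 - l[1] * a2

# project (1, 1) onto {z : z_1 <= 0} intersected with {z : z_2 <= 0}
print(proj_two_halfspaces(np.array([1.0, 1.0]),
                          np.array([1.0, 0.0]), 0.0,
                          np.array([0.0, 1.0]), 0.0))   # -> [0. 0.]
```

Every branch uses only inner products and one small linear solve, so the cost is negligible compared with a projection onto a general closed convex set.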
this means that is a solution of problem (1.1).
4 Convergence of the subgradient double projection algorithm
Now, we turn to the convergence of Algorithm 3.1. Certainly, if the algorithm terminates after finitely many steps, say at step k, then the last iterate is a solution of problem (1.1). So, in the following analysis, we assume that our algorithm always generates an infinite sequence.
In particular, if , then .
where the first inequality follows from (A2) and (4.4) and the last one follows from (3.1).
The proof is completed. □
Theorem 4.1 Suppose assumptions (A1)-(A3) hold. Then any sequence generated by Algorithm 3.1 converges weakly to some solution .
which is a contradiction. Thus . This implies that the sequence converges weakly to .
Since and are bounded and f is continuous, we obtain by letting that . Applying a similar argument to that of the previous case, we get the desired result. □
which contradicts (4.14). We obtain the desired conclusion. Therefore the solution set is closed and convex.
and hence .
5 The modified subgradient double projection algorithm
Let for some .
Algorithm 5.1 Select , , , , , . Set .
If , stop; else go to Step 2.
and . Let and return to Step 1.
6 Convergence of the modified subgradient double projection algorithm
for all and .
Theorem 6.1 Suppose that assumptions (A1)-(A3) hold. Then any sequence generated by Algorithm 5.1 converges weakly to some solution .
we obtain by Lemma 2.4 that , which means that . Now, again by using similar arguments to those used in the proof of Theorem 4.1, we obtain that the entire sequence weakly converges to . Therefore the sequence weakly converges to . □
In this paper, a new double projection algorithm for the variational inequality problem has been presented. The main advantage of the proposed method is that the second projection at each iteration is onto the intersection of two halfspaces, which can be implemented very easily. When the feasible set C of the VI is a quite general closed convex set, our algorithm is more effective than the double projection method proposed by Solodov and Svaiter [3]. It is natural to ask whether the first projection can also be replaced by a projection onto a halfspace or an intersection of halfspaces. This would be an interesting topic for further research.
This work was supported by the Educational Science Foundation of Chongqing, Chongqing of China (grant KJ111309).
1. Hartman P, Stampacchia G: On some non-linear elliptic differential-functional equations. Acta Math. 1966, 115: 271–310. doi:10.1007/BF02392210
2. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, Berlin; 2003.
3. Solodov MV, Svaiter BF: A new projection method for variational inequality problems. SIAM J. Control Optim. 1999, 37: 765–776. doi:10.1137/S0363012997317475
4. Wang YJ, Xiu NH, Wang CY: Unified framework of extragradient-type methods for pseudomonotone variational inequalities. J. Optim. Theory Appl. 2001, 111: 641–656. doi:10.1023/A:1012606212823
5. Korpelevich GM: The extragradient method for finding saddle points and other problems. Matecon 1976, 17: 747–756.
6. Censor Y, Gibali A, Reich S: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148: 318–335. doi:10.1007/s10957-010-9757-3
7. Censor Y, Gibali A, Reich S: Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61: 1119–1132. doi:10.1080/02331934.2010.539689
8. Zarantonello EH: Projections on Convex Sets in Hilbert Spaces and Spectral Theory. Academic Press, New York; 1971.
9. Opial Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73: 591–597. doi:10.1090/S0002-9904-1967-11761-0
10. Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428. doi:10.1023/A:1025407607560
11. Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128: 191–201. doi:10.1007/s10957-005-7564-z
12. Browder FE: Fixed point theorems for noncompact mappings in Hilbert space. Proc. Natl. Acad. Sci. USA 1965, 53: 1272–1276. doi:10.1073/pnas.53.6.1272
13. He YR: A new double projection algorithm for variational inequalities. J. Comput. Appl. Math. 2006, 185: 166–173. doi:10.1016/j.cam.2005.01.031
14. Polyak BT: Minimization of unsmooth functionals. U.S.S.R. Comput. Math. Math. Phys. 1969, 9: 14–29.
15. Yang QZ: On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302: 166–179. doi:10.1016/j.jmaa.2004.07.048
16. Censor Y, Motova A, Segal A: Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327: 1244–1256. doi:10.1016/j.jmaa.2006.05.010
17. Qu B, Xiu NH: A new halfspace-relaxation projection method for the split feasibility problem. Linear Algebra Appl. 2008, 428: 1218–1229. doi:10.1016/j.laa.2007.03.002
18. Qu B, Xiu NH: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21: 1655–1665. doi:10.1088/0266-5611/21/5/009
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.