# The subgradient double projection method for variational inequalities in a Hilbert space

## Abstract

We present a modification of the double projection algorithm proposed by Solodov and Svaiter for solving variational inequalities (VI) in a Hilbert space. The main modifications are the following: a subgradient of a convex function describing the feasible set is used to construct a hyperplane, and the second projection, onto the intersection of the feasible set and a halfspace, is replaced by a projection onto the intersection of two halfspaces. In addition, we propose a modified version of our algorithm that finds a solution of the VI which is also a fixed point of a given nonexpansive mapping. We establish weak convergence theorems for our algorithms.

MSC:90C25, 90C30.

## 1 Introduction

Let H be a real Hilbert space, $C\subset H$ be a nonempty, closed and convex set, and let $f:C\to H$ be a mapping. The inner product and norm are denoted by $〈\cdot ,\cdot 〉$ and $\parallel \cdot \parallel$, respectively. We write ${x}^{k}⇀x$ to indicate that the sequence $\left\{{x}^{k}\right\}$ converges weakly to x and ${x}^{k}\to x$ to indicate that the sequence $\left\{{x}^{k}\right\}$ converges strongly to x. For each point $x\in H$, there exists a unique nearest point in C, which is called the projection of x onto C and denoted by ${P}_{C}\left(x\right)$. That is

${P}_{C}\left(x\right)=arg min\left\{\parallel y-x\parallel |y\in C\right\}.$

It is well known that the projection operator is nonexpansive (i.e., Lipschitz continuous with a Lipschitz constant 1), and hence is also a bounded mapping.
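As a quick illustration (a minimal numeric sketch with an illustrative one-dimensional feasible set, not taken from the text), the projection onto the interval $C=\left[0,1\right]\subset R$ is the clamp operation, and the nonexpansiveness $|{P}_{C}\left(x\right)-{P}_{C}\left(y\right)|\le |x-y|$ can be checked directly:

```python
def proj_interval(x, lo=0.0, hi=1.0):
    """Projection onto the closed interval [lo, hi] (the clamp operation)."""
    return min(max(x, lo), hi)

# Nonexpansiveness: |P_C(x) - P_C(y)| <= |x - y| for a few sample pairs.
pairs = [(2.5, -1.0), (0.3, 0.9), (1.7, 0.4), (-3.0, 5.0)]
for x, y in pairs:
    assert abs(proj_interval(x) - proj_interval(y)) <= abs(x - y)

print(proj_interval(2.5))  # a point outside C is mapped to the nearest point of C
```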

We consider the following variational inequality problem, denoted by $VI\left(C,f\right)$: find a vector ${x}^{\ast }\in C$ such that

$〈f\left({x}^{\ast }\right),x-{x}^{\ast }〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in C.$
(1.1)

The variational inequality problem was first introduced by Hartman and Stampacchia [1] in 1966. In recent years, many iterative projection-type algorithms have been proposed and analyzed for solving the variational inequality problem; see [2] and the references therein. To implement these algorithms, one has to compute the projection onto the feasible set C, or onto some related set.

In 1999, Solodov and Svaiter [3] proposed an algorithm for solving the $VI\left(C,f\right)$ in a Euclidean space, known as the double projection algorithm due to the fact that one needs to implement two projections in each iteration. One is onto the feasible set C, and the other is onto the intersection of the feasible set C and a halfspace. More precisely, they presented the following algorithm.

Algorithm 1.1 Choose an initial point ${x}^{0}$, parameters $\mu >0$, $\sigma ,\gamma \in \left(0,1\right)$ and set $k=0$.

Step 1. Having ${x}^{k}$, compute

${y}^{k}={P}_{C}\left[{x}^{k}-\mu f\left({x}^{k}\right)\right].$

Stop if ${x}^{k}={y}^{k}$; otherwise, go to Step 2.

Step 2. Compute ${z}^{k}={x}^{k}-{\eta }_{k}\left({x}^{k}-{y}^{k}\right)$, where ${\eta }_{k}={\gamma }^{{m}_{k}}$ with ${m}_{k}$ being the smallest nonnegative integer m such that

$〈f\left({x}^{k}-{\gamma }^{m}\left({x}^{k}-{y}^{k}\right)\right),{x}^{k}-{y}^{k}〉\ge \sigma {\parallel {x}^{k}-{y}^{k}\parallel }^{2}.$
(1.2)

Step 3. Compute

${x}^{k+1}={P}_{C\cap {H}_{k}}\left({x}^{k}\right),$

where

${H}_{k}=\left\{x\in {\mathbb{R}}^{n}|〈f\left({z}^{k}\right),x-{z}^{k}〉\le 0\right\}.$
(1.3)

Let $k:=k+1$ and return to Step 1.

Although [4] shows that Algorithm 1.1 can get a longer stepsize, and hence is a better algorithm than the extragradient method proposed by Korpelevich [5] in theory, there is still the need to calculate two projections onto the feasible set C and onto a related set $C\cap {H}_{k}$ at each iteration. If the set C is simple enough (e.g., C is a halfspace or a ball) so that projections onto it and the related set are easily executed, then Algorithm 1.1 is particularly useful. But if C is a general closed and convex set, one has to solve the two problems ${min}_{x\in C}\parallel x-\left({x}^{k}-\mu f\left({x}^{k}\right)\right)\parallel$ and ${min}_{x\in C\cap {H}_{k}}\parallel x-{x}^{k}\parallel$ at each iteration to get the next iterate ${x}^{k+1}$. This might seriously affect the efficiency of Algorithm 1.1.

Recently, Censor et al. [6, 7] presented a subgradient extragradient projection method for solving $VI\left(C,f\right)$. Inspired by the above works, in this paper we present a modification of Algorithm 1.1 in a Hilbert space. Our algorithm replaces the projection ${P}_{C\cap {H}_{k}}$ at iteration k by ${P}_{{C}_{k}\cap {H}_{k}}$, where ${C}_{k}$ is a halfspace constructed by the subgradient and containing the feasible set C, so that ${C}_{k}\cap {H}_{k}$ is the intersection of two halfspaces. Observe that the projection onto the intersection of two halfspaces is easily computable. In addition, we propose a modified version of our algorithm that finds a solution of the VI which is also a fixed point of a given nonexpansive mapping. We establish weak convergence theorems for our algorithms.

## 2 Preliminaries

In this section, we recall some useful definitions and results which will be used in this paper.

We have the following properties on the projection operator, see [8].

Lemma 2.1 Let $\mathrm{\Omega }\subset H$ be a closed and convex set. Then for any $x\in H$ and $z\in \mathrm{\Omega }$,

$\begin{array}{r}\left(1\right)\phantom{\rule{1em}{0ex}}{\parallel {P}_{\mathrm{\Omega }}\left(x\right)-z\parallel }^{2}\le {\parallel x-z\parallel }^{2}-{\parallel {P}_{\mathrm{\Omega }}\left(x\right)-x\parallel }^{2};\\ \left(2\right)\phantom{\rule{1em}{0ex}}〈{P}_{\mathrm{\Omega }}\left(x\right)-x,z-{P}_{\mathrm{\Omega }}\left(x\right)〉\ge 0.\end{array}$

The next property is known as the Opial condition [9]. Any Hilbert space has this property.

Condition 2.1 (Opial)

For any sequence $\left\{{x}^{k}\right\}$ in H that converges weakly to x (${x}^{k}⇀x$),

$\underset{k\to \mathrm{\infty }}{lim inf}\parallel {x}^{k}-x\parallel <\underset{k\to \mathrm{\infty }}{lim inf}\parallel {x}^{k}-y\parallel ,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\ne x.$

Lemma 2.2 Let H be a real Hilbert space and D be a nonempty, closed and convex subset of H. Let the sequence $\left\{{x}^{k}\right\}\subset H$ be Fejér-monotone with respect to D, i.e., for every $u\in D$,

$\parallel {x}^{k+1}-u\parallel \le \parallel {x}^{k}-u\parallel ,\phantom{\rule{1em}{0ex}}\mathrm{\forall }k\ge 0.$

Then $\left\{{P}_{D}\left({x}^{k}\right)\right\}$ converges strongly to some $z\in D$.

Proof See [[10], Lemma 3.2]. □

Lemma 2.3 Let H be a real Hilbert space, $\left\{{\alpha }_{k}\right\}$ be a real sequence satisfying $0<c\le {\alpha }_{k}\le d<1$ for some $c,d\in \left(0,1\right)$ and all $k\ge 0$, and let $\left\{{\nu }^{k}\right\}$ and $\left\{{\omega }^{k}\right\}$ be two sequences in H such that for some $\sigma \ge 0$,

$\begin{array}{c}\underset{k\to \mathrm{\infty }}{lim sup}\parallel {\nu }^{k}\parallel \le \sigma ,\hfill \\ \underset{k\to \mathrm{\infty }}{lim sup}\parallel {\omega }^{k}\parallel \le \sigma \hfill \end{array}$

and

$\underset{k\to \mathrm{\infty }}{lim}\parallel {\alpha }_{k}{\nu }^{k}+\left(1-{\alpha }_{k}\right){\omega }^{k}\parallel =\sigma .$

Then

$\underset{k\to \mathrm{\infty }}{lim}\parallel {\nu }^{k}-{\omega }^{k}\parallel =0.$

Proof See [[11], Lemma 3.1]. □

The next fact is known as the demiclosedness principle [12].

Lemma 2.4 Let H be a real Hilbert space, D be a closed and convex subset of H and $S:D\to H$ be a nonexpansive mapping. Then $I-S$ (I is the identity operator on H) is demiclosed at $y\in H$, i.e., for any sequence $\left\{{x}^{k}\right\}$ in D such that ${x}^{k}⇀\overline{x}\in D$ and $\left(I-S\right){x}^{k}\to y$, we have $\left(I-S\right)\overline{x}=y$.

Lemma 2.5 Let H be a real Hilbert space, h be a real-valued function on H and K be the set $\left\{x\in H:h\left(x\right)\le 0\right\}$. If K is nonempty and h is Lipschitz continuous with modulus $\theta >0$, then

$dist\left(x,K\right)\ge {\theta }^{-1}h\left(x\right),\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in H,$
(2.1)

where $dist\left(x,K\right)$ denotes the distance from x to K.

Proof It is clear that (2.1) holds for all $x\in K$. Hence, it suffices to show that (2.1) holds for all $x\in H\mathrm{\setminus }K$. Let $x\in H$ but $x\notin K$. For an arbitrary positive number ε, there exists $z\in K$ such that

$\parallel x-z\parallel \le dist\left(x,K\right)+\epsilon .$

Since h is Lipschitz continuous with modulus θ on H, we have

$|h\left(x\right)-h\left(z\right)|\le \theta \parallel x-z\parallel \le \theta dist\left(x,K\right)+\theta \epsilon .$

It follows from $z\in K$ that $h\left(z\right)\le 0$. Thus we have

$h\left(x\right)\le h\left(x\right)-h\left(z\right)\le |h\left(x\right)-h\left(z\right)|\le \theta dist\left(x,K\right)+\theta \epsilon .$

Letting $\epsilon \to {0}^{+}$, we obtain the desired result. □

Remark 2.1 The idea of Lemma 2.5 is from Lemma 2.3 of [13]. In Lemma 2.5, if we replace K by $K\cap C$, where C is a closed subset of H and $K\cap C\ne \mathrm{\varnothing }$, then (2.1) still holds with $dist\left(x,K\cap C\right)$ in place of $dist\left(x,K\right)$. In fact, for each $x\in H$, since $K\cap C\subseteq K$, we have

$dist\left(x,K\cap C\right)=\underset{y\in K\cap C}{inf}\parallel x-y\parallel \ge \underset{y\in K}{inf}\parallel x-y\parallel =dist\left(x,K\right).$

In this paper, we assume that the convex set C satisfies the following assumptions:

(1) The set C is given by

$C=\left\{x\in H|c\left(x\right)\le 0\right\},$
(2.2)

where $c:H\to \mathbb{R}$ is a convex (not necessarily differentiable) function and C is nonempty.

Note that the differentiability of $c\left(x\right)$ is not assumed, therefore the set C is quite general. For example, any system of inequalities ${c}_{j}\left(x\right)\le 0$, $j\in J$, where ${c}_{j}\left(x\right)$ is convex and J is an arbitrary index set, is the same as the single inequality $c\left(x\right)\le 0$ with $c\left(x\right)=sup\left\{{c}_{j}\left(x\right)|j\in J\right\}$. In fact, every closed convex set can be represented in this way, e.g., take $c\left(x\right)=dist\left(x,C\right)$, where dist is the distance function; see [[7], Section 1.3(c)].
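To make this representation concrete, here is a minimal sketch (the two constraints and all names are illustrative assumptions, not from the text): C is described by a ball constraint and a linear constraint, merged into the single convex function $c\left(x\right)=max\left\{{c}_{1}\left(x\right),{c}_{2}\left(x\right)\right\}$, and a subgradient of c at x can be taken as the gradient of a maximizing (active) piece:

```python
def c1(x):                 # ball constraint: x1^2 + x2^2 - 4 <= 0
    return x[0] ** 2 + x[1] ** 2 - 4.0

def grad_c1(x):
    return (2.0 * x[0], 2.0 * x[1])

def c2(x):                 # linear constraint: x1 - 1 <= 0
    return x[0] - 1.0

def grad_c2(x):
    return (1.0, 0.0)

def c(x):
    """Single convex function describing C = {c1 <= 0} ∩ {c2 <= 0}."""
    return max(c1(x), c2(x))

def subgrad_c(x):
    """A subgradient of c at x: the gradient of one maximizing (active) piece."""
    return grad_c1(x) if c1(x) >= c2(x) else grad_c2(x)

x = (3.0, 0.0)
print(c(x), subgrad_c(x))  # here c1 attains the maximum, so its gradient is returned
```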

(2) For any $x\in H$, at least one subgradient $\xi \in \partial c\left(x\right)$ can be calculated, where $\partial c\left(x\right)$ is the subdifferential of c at x, defined by

$\partial c\left(x\right)=\left\{\xi \in H|c\left(y\right)\ge c\left(x\right)+〈\xi ,y-x〉,\mathrm{\forall }y\in H\right\}.$

Denote

${C}_{k}=\left\{x\in H|c\left({x}^{k}\right)+〈{\xi }^{k},x-{x}^{k}〉\le 0\right\},$
(2.3)

where ${\xi }^{k}\in \partial c\left({x}^{k}\right)$.

Proposition 2.1 For every nonnegative integer k, let ${x}^{k}\in H$ and ${C}_{k}$ be defined as in (2.3). Then for any $x\in H$, we have

${P}_{{C}_{k}}\left(x\right)=\left\{\begin{array}{cc}x-\frac{c\left({x}^{k}\right)+〈{\xi }^{k},x-{x}^{k}〉}{{\parallel {\xi }^{k}\parallel }^{2}}{\xi }^{k}\hfill & \text{if}\phantom{\rule{0.25em}{0ex}}c\left({x}^{k}\right)+〈{\xi }^{k},x-{x}^{k}〉>0,\hfill \\ x\hfill & \text{otherwise}.\hfill \end{array}$
(2.4)

Proof See [14]. □
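Formula (2.4) is straightforward to implement. The following sketch (the linear constraint is an illustrative choice, made so that ${C}_{k}$ coincides with C and the result can be checked by hand) computes ${P}_{{C}_{k}}\left(x\right)$ for vectors represented as plain tuples:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj_Ck(x, xk, c_xk, xi):
    """Projection onto C_k = {v : c(x^k) + <xi^k, v - x^k> <= 0}, formula (2.4)."""
    val = c_xk + dot(xi, tuple(a - b for a, b in zip(x, xk)))
    if val > 0:
        t = val / dot(xi, xi)
        return tuple(a - t * g for a, g in zip(x, xi))
    return x  # x already lies in C_k

# Illustration with c(x) = x1 + x2 - 1 (so C_k = C = {x : x1 + x2 <= 1}),
# linearized at x^k = (0, 0): c(x^k) = -1, xi^k = (1, 1).
print(proj_Ck((2.0, 2.0), (0.0, 0.0), -1.0, (1.0, 1.0)))  # -> (0.5, 0.5)
print(proj_Ck((0.0, 0.0), (0.0, 0.0), -1.0, (1.0, 1.0)))  # -> (0.0, 0.0)
```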

Remark 2.2 (1) From the definition of subdifferential, we have $C\subseteq {C}_{k}$ for all k. In fact, for any $x\in C$ and ${\xi }^{k}\in \partial c\left({x}^{k}\right)$, we have

$c\left({x}^{k}\right)+〈{\xi }^{k},x-{x}^{k}〉\le c\left(x\right)\le 0,$

i.e., $x\in {C}_{k}$ and hence $C\subseteq {C}_{k}$.

(2) From Proposition 2.1, we can observe that ${P}_{{C}_{k}}$ can be represented explicitly without resorting to a projection operator, thus its computation is easy. Recently, ${C}_{k}$ has often been used as the projection region in algorithms for the split feasibility problem, see [15–18].

## 3 The subgradient double projection algorithm

Throughout this section, the following assumptions are needed.

Assumption

(A1) The solution set of problem (1.1), denoted by $SOL\left(C,f\right)$, is nonempty.

(A2) For all $x\in H$, let $y={P}_{C}\left[x-\mu f\left(x\right)\right]$ and let $\left[x,y\right]$ be the closed line segment joining x and y; f satisfies

$〈f\left(z\right),z-{x}^{\ast }〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }z\in \left[x,y\right],{x}^{\ast }\in SOL\left(C,f\right).$

(A3) The mapping f is continuous and bounded on bounded subsets of H.

Remark 3.1 (1) If f is Lipschitz continuous, then f is bounded on bounded subsets of H, but continuity and boundedness do not ensure Lipschitz continuity. For example, let $H=R=\left(-\mathrm{\infty },+\mathrm{\infty }\right)$ and $f\left(x\right)={x}^{2}$, $x\in H$. It is easy to see that f is continuous and bounded on bounded subsets of H, but f is not Lipschitz continuous. So, our assumption (A3) is weaker than Lipschitz continuity. Recently, [6] proposed two subgradient extragradient algorithms for $VI\left(C,f\right)$ in a Hilbert space and established weak convergence theorems for them under the assumptions of Lipschitz continuity and monotonicity of f.

(2) Here, we give a concrete example satisfying condition (A2). Let $f:R\to R$ be defined by $f\left(x\right)=1-{e}^{-x}$, and $C=\left[0,1\right]$.

In this paper, we establish a weak convergence theorem for the subgradient double projection method for $VI\left(C,f\right)$ in a Hilbert space under assumptions (A1)-(A3).

Algorithm 3.1 Select ${x}^{0}\in C$, $\alpha >0$, $\beta \ge 0$, $\sigma >0$, $\mu \in \left(0,{\sigma }^{-1}\right)$, $\gamma \in \left(0,1\right)$. Set $k=0$.

Step 1. For ${x}^{k}\in C$, define

${y}^{k}={P}_{C}\left({x}^{k}-\mu f\left({x}^{k}\right)\right).$

If ${x}^{k}={y}^{k}$, stop; else go to Step 2.

Step 2. Compute ${z}^{k}=\left(1-{\eta }_{k}\right){x}^{k}+{\eta }_{k}{y}^{k}$, where ${\eta }_{k}={\gamma }^{{m}_{k}}$ with ${m}_{k}$ being the smallest nonnegative integer m satisfying

$〈f\left({x}^{k}\right)-f\left({x}^{k}-{\gamma }^{m}\left({x}^{k}-{y}^{k}\right)\right),{x}^{k}-{y}^{k}〉\le \sigma {\parallel {x}^{k}-{y}^{k}\parallel }^{2}.$
(3.1)

Step 3. Compute ${x}^{k+1}={P}_{{C}_{k}\cap {H}_{k}}\left({x}^{k}\right)$, where

${C}_{k}=\left\{x\in H|c\left({x}^{k}\right)+〈{\xi }^{k},x-{x}^{k}〉\le 0\right\},$

${H}_{k}=\left\{v\in H:{h}_{k}\left(v\right)\le 0\right\}$ is a halfspace defined by the function

${h}_{k}\left(v\right)=〈\alpha {\eta }_{k}\left({x}^{k}-{y}^{k}\right)+\beta f\left({x}^{k}\right)+\alpha \mu f\left({z}^{k}\right),v-{x}^{k}〉+\alpha {\eta }_{k}\left(1-\mu \sigma \right){\parallel {x}^{k}-{y}^{k}\parallel }^{2},$
(3.2)

and ${\xi }^{k}\in \partial c\left({x}^{k}\right)$.

Let $k=k+1$ and return to Step 1.
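To illustrate the mechanics of Algorithm 3.1, here is a minimal one-dimensional sketch using the example from Remark 3.1(2): $f\left(x\right)=1-{e}^{-x}$ and $C=\left[0,1\right]$, described by the convex function $c\left(x\right)=max\left\{-x,x-1\right\}$. In R the halfspaces ${C}_{k}$ and ${H}_{k}$ are rays, so the projection in Step 3 reduces to clamping onto an interval; the parameter values below are illustrative choices, not prescribed by the paper.

```python
import math

def f(x):
    return 1.0 - math.exp(-x)          # the solution of VI(C, f) on C = [0, 1] is x* = 0

def c(x):                              # C = [0, 1] = {x : c(x) <= 0}
    return max(-x, x - 1.0)

def subgrad_c(x):                      # a subgradient of c (gradient of an active piece)
    return -1.0 if -x >= x - 1.0 else 1.0

def halfspace_bounds(coef, rhs):
    """Return (lo, hi) describing the ray {v : coef * v <= rhs} in R."""
    if coef > 0:
        return (-math.inf, rhs / coef)
    if coef < 0:
        return (rhs / coef, math.inf)
    return (-math.inf, math.inf)       # coef == 0 never occurs in this run

alpha, beta, sigma, mu, gamma = 1.0, 0.0, 0.3, 1.0, 0.5   # illustrative parameters
x = 0.5
for _ in range(300):
    y = min(max(x - mu * f(x), 0.0), 1.0)       # Step 1: y^k = P_C(x^k - mu f(x^k))
    if x == y:
        break                                   # x^k solves the VI (Remark 3.2(2))
    eta = 1.0                                   # Step 2: linesearch (3.1), eta = gamma^m
    while (f(x) - f(x - eta * (x - y))) * (x - y) > sigma * (x - y) ** 2:
        eta *= gamma
    z = (1.0 - eta) * x + eta * y
    # Step 3: C_k = {v : c(x^k) + xi (v - x^k) <= 0} and H_k = {v : h_k(v) <= 0}, (3.2).
    xi = subgrad_c(x)
    lo1, hi1 = halfspace_bounds(xi, xi * x - c(x))
    a = alpha * eta * (x - y) + beta * f(x) + alpha * mu * f(z)
    b = alpha * eta * (1.0 - mu * sigma) * (x - y) ** 2
    lo2, hi2 = halfspace_bounds(a, a * x - b)
    lo, hi = max(lo1, lo2), min(hi1, hi2)
    x = min(max(x, lo), hi)                     # projection onto C_k ∩ H_k (an interval)

print(x)  # the iterates decrease monotonically toward the solution x* = 0
```

In this example the unique solution is ${x}^{\ast }=0$, and each clamp onto ${C}_{k}\cap {H}_{k}$ moves the iterate strictly toward it.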

Remark 3.2 (1) Since ${C}_{k}$ and ${H}_{k}$ are halfspaces, the projection onto ${C}_{k}\cap {H}_{k}$ can be calculated directly (cf. Proposition 2.1), so our Algorithm 3.1 can be implemented more easily than Algorithm 1.1.

(2) If ${y}^{k}={x}^{k}$ for some positive integer k, then ${x}^{k}$ is a solution of problem (1.1). In fact, suppose ${y}^{k}={x}^{k}$. Since ${y}^{k}={P}_{C}\left({x}^{k}-\mu f\left({x}^{k}\right)\right)$, it follows from Lemma 2.1(2) that

$〈{x}^{k}-\mu f\left({x}^{k}\right)-{x}^{k},x-{x}^{k}〉\le 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in C,$

i.e., $〈f\left({x}^{k}\right),x-{x}^{k}〉\ge 0$ for all $x\in C$ (recall $\mu >0$), which means that ${x}^{k}$ is a solution of problem (1.1).

## 4 Convergence of the subgradient double projection algorithm

Now, we turn to the convergence of Algorithm 3.1. Certainly, if the algorithm terminates in finitely many steps, say at step k, then ${x}^{k}$ is a solution of problem (1.1). So, in the following analysis, we assume that the algorithm always generates an infinite sequence.

Lemma 4.1 Let ${x}^{\ast }\in SOL\left(C,f\right)$ and the function ${h}_{k}$ be defined by (3.2). Then

${h}_{k}\left({x}^{k}\right)\ge \alpha {\eta }_{k}\left(1-\mu \sigma \right){\parallel {x}^{k}-{y}^{k}\parallel }^{2}\phantom{\rule{1em}{0ex}}\mathit{\text{and}}\phantom{\rule{1em}{0ex}}{h}_{k}\left({x}^{\ast }\right)\le 0.$
(4.1)

In particular, if ${x}^{k}\ne {y}^{k}$, then ${h}_{k}\left({x}^{k}\right)>0$.

Proof

$\begin{array}{rl}{h}_{k}\left({x}^{k}\right)& =〈\alpha {\eta }_{k}\left({x}^{k}-{y}^{k}\right)+\beta f\left({x}^{k}\right)+\alpha \mu f\left({z}^{k}\right),{x}^{k}-{x}^{k}〉+\alpha {\eta }_{k}\left(1-\mu \sigma \right){\parallel {x}^{k}-{y}^{k}\parallel }^{2}\\ =\alpha {\eta }_{k}\left(1-\mu \sigma \right){\parallel {x}^{k}-{y}^{k}\parallel }^{2}.\end{array}$

If ${x}^{k}\ne {y}^{k}$, then ${h}_{k}\left({x}^{k}\right)>0$ because $\mu \sigma <1$. In the following, we prove that ${h}_{k}\left({x}^{\ast }\right)\le 0$. Since

${y}^{k}={P}_{C}\left({x}^{k}-\mu f\left({x}^{k}\right)\right),$

by (2) of Lemma 2.1, we have

$〈{x}^{k}-{y}^{k}-\mu f\left({x}^{k}\right),{y}^{k}-{x}^{\ast }〉\ge 0.$
(4.2)

By assumption (A2),

$〈\mu f\left({x}^{k}\right),{x}^{k}-{x}^{\ast }〉\ge 0.$
(4.3)

Adding inequalities (4.2) and (4.3), we obtain

$〈{x}^{k}-{y}^{k}-\mu f\left({x}^{k}\right),{y}^{k}-{x}^{k}〉+〈{x}^{k}-{y}^{k},{x}^{k}-{x}^{\ast }〉\ge 0.$
(4.4)

We have

$\begin{array}{r}〈{x}^{k}-{x}^{\ast },\alpha {\eta }_{k}\left({x}^{k}-{y}^{k}\right)+\beta f\left({x}^{k}\right)+\alpha \mu f\left({z}^{k}\right)〉\\ \phantom{\rule{1em}{0ex}}=\alpha {\eta }_{k}〈{x}^{k}-{x}^{\ast },{x}^{k}-{y}^{k}〉+\beta 〈f\left({x}^{k}\right),{x}^{k}-{x}^{\ast }〉+\alpha \mu 〈f\left({z}^{k}\right),{x}^{k}-{x}^{\ast }〉\\ \phantom{\rule{1em}{0ex}}\ge \alpha {\eta }_{k}〈{x}^{k}-{y}^{k}-\mu f\left({x}^{k}\right),{x}^{k}-{y}^{k}〉+\alpha \mu {\eta }_{k}〈f\left({z}^{k}\right),{x}^{k}-{y}^{k}〉\\ \phantom{\rule{1em}{0ex}}=\alpha {\eta }_{k}{\parallel {x}^{k}-{y}^{k}\parallel }^{2}-\alpha \mu {\eta }_{k}〈f\left({x}^{k}\right)-f\left({z}^{k}\right),{x}^{k}-{y}^{k}〉\\ \phantom{\rule{1em}{0ex}}\ge \alpha {\eta }_{k}{\parallel {x}^{k}-{y}^{k}\parallel }^{2}-\alpha \mu \sigma {\eta }_{k}{\parallel {x}^{k}-{y}^{k}\parallel }^{2}\\ \phantom{\rule{1em}{0ex}}={\eta }_{k}\alpha \left(1-\mu \sigma \right){\parallel {x}^{k}-{y}^{k}\parallel }^{2},\end{array}$

where the first inequality follows from (A2) and (4.4) and the last one follows from (3.1).

It follows that

$\begin{array}{rl}{h}_{k}\left({x}^{\ast }\right)& =〈\alpha {\eta }_{k}\left({x}^{k}-{y}^{k}\right)+\beta f\left({x}^{k}\right)+\alpha \mu f\left({z}^{k}\right),{x}^{\ast }-{x}^{k}〉+{\eta }_{k}\alpha \left(1-\mu \sigma \right){\parallel {x}^{k}-{y}^{k}\parallel }^{2}\\ =-〈\alpha {\eta }_{k}\left({x}^{k}-{y}^{k}\right)+\beta f\left({x}^{k}\right)+\alpha \mu f\left({z}^{k}\right),{x}^{k}-{x}^{\ast }〉+{\eta }_{k}\alpha \left(1-\mu \sigma \right){\parallel {x}^{k}-{y}^{k}\parallel }^{2}\\ \le -{\eta }_{k}\alpha \left(1-\mu \sigma \right){\parallel {x}^{k}-{y}^{k}\parallel }^{2}+{\eta }_{k}\alpha \left(1-\mu \sigma \right){\parallel {x}^{k}-{y}^{k}\parallel }^{2}=0.\end{array}$

The proof is completed. □

Lemma 4.2 Suppose assumptions (A1)-(A3) hold and the sequences $\left\{{x}^{k}\right\}$ and $\left\{{y}^{k}\right\}$ are generated by Algorithm 3.1. Then there exists a positive number M such that

${\parallel {x}^{k+1}-{x}^{\ast }\parallel }^{2}\le {\parallel {x}^{k}-{x}^{\ast }\parallel }^{2}-{M}^{-2}{\left(1-\mu \sigma \right)}^{2}{\alpha }^{2}{\eta }_{k}^{2}{\parallel {y}^{k}-{x}^{k}\parallel }^{4},\phantom{\rule{1em}{0ex}}\mathrm{\forall }k,$
(4.5)

where ${x}^{\ast }\in SOL\left(C,f\right)$.

Proof From Lemma 4.1 and Remark 2.2(1), we get $SOL\left(C,f\right)\subseteq {H}_{k}\cap {C}_{k}$, and hence ${x}^{\ast }\in {H}_{k}\cap {C}_{k}$. Since ${x}^{k+1}={P}_{{C}_{k}\cap {H}_{k}}\left({x}^{k}\right)$, it follows from Lemma 2.1(1) that

$\begin{array}{rl}{\parallel {x}^{k+1}-{x}^{\ast }\parallel }^{2}& \le {\parallel {x}^{k}-{x}^{\ast }\parallel }^{2}-{\parallel {x}^{k+1}-{x}^{k}\parallel }^{2}\\ ={\parallel {x}^{k}-{x}^{\ast }\parallel }^{2}-{dist}^{2}\left({x}^{k},{C}_{k}\cap {H}_{k}\right).\end{array}$
(4.6)

It follows that the sequence $\left\{\parallel {x}^{k}-{x}^{\ast }\parallel \right\}$ is nonincreasing, and hence is a convergent sequence. Thus, $\left\{{x}^{k}\right\}$ is bounded and

$\underset{k\to \mathrm{\infty }}{lim}dist\left({x}^{k},{C}_{k}\cap {H}_{k}\right)=0.$
(4.7)

Since $\left\{{x}^{k}\right\}$ is bounded and f is bounded on bounded sets, the sequence $\left\{f\left({x}^{k}\right)\right\}$ is bounded; since ${P}_{C}$ is nonexpansive, $\left\{{y}^{k}\right\}$ is bounded, and hence so is $\left\{f\left({z}^{k}\right)\right\}$ (each ${z}^{k}$ lies on the bounded segment $\left[{x}^{k},{y}^{k}\right]$). Thus, for some $M>0$,

$\parallel \alpha {\eta }_{k}\left({x}^{k}-{y}^{k}\right)+\beta f\left({x}^{k}\right)+\alpha \mu f\left({z}^{k}\right)\parallel \le M,\phantom{\rule{1em}{0ex}}\mathrm{\forall }k.$
(4.8)

Clearly, each function ${h}_{k}$ is Lipschitz continuous on H with modulus M. Applying Lemma 4.1, Lemma 2.5 and Remark 2.1, we obtain that

$dist\left({x}^{k},{C}_{k}\cap {H}_{k}\right)\ge {M}^{-1}{h}_{k}\left({x}^{k}\right)\ge {M}^{-1}\alpha {\eta }_{k}\left(1-\mu \sigma \right){\parallel {x}^{k}-{y}^{k}\parallel }^{2},$

which, together with (4.6), yields that

${\parallel {x}^{k+1}-{x}^{\ast }\parallel }^{2}\le {\parallel {x}^{k}-{x}^{\ast }\parallel }^{2}-{M}^{-2}{\left(1-\mu \sigma \right)}^{2}{\alpha }^{2}{\eta }_{k}^{2}{\parallel {y}^{k}-{x}^{k}\parallel }^{4},\phantom{\rule{1em}{0ex}}\mathrm{\forall }k.$

□

Theorem 4.1 Suppose assumptions (A1)-(A3) hold. Then any sequence $\left\{{x}^{k}\right\}$ generated by Algorithm 3.1 converges weakly to some solution ${u}^{\ast }\in SOL\left(C,f\right)$.

Proof By Lemma 4.2, the sequence $\left\{{x}^{k}\right\}$ is bounded and

$\underset{k\to \mathrm{\infty }}{lim}{\eta }_{k}{\parallel {x}^{k}-{y}^{k}\parallel }^{2}=0.$
(4.9)

If ${lim sup}_{k\to \mathrm{\infty }}{\eta }_{k}>0$, then we have ${lim inf}_{k\to \mathrm{\infty }}{\parallel {x}^{k}-{y}^{k}\parallel }^{2}=0$. Therefore there exists a weak accumulation point $\overline{x}$ of $\left\{{x}^{k}\right\}$, and two subsequences $\left\{{x}^{{k}_{j}}\right\}$ and $\left\{{y}^{{k}_{j}}\right\}$ of $\left\{{x}^{k}\right\}$ and $\left\{{y}^{k}\right\}$, respectively, such that

$\underset{j\to \mathrm{\infty }}{lim}\parallel {x}^{{k}_{j}}-{y}^{{k}_{j}}\parallel =0$
(4.10)

and

${x}^{{k}_{j}}⇀\overline{x}.$
(4.11)

Since ${x}^{{k}_{j}}-\left({x}^{{k}_{j}}-{y}^{{k}_{j}}\right)={P}_{C}\left({x}^{{k}_{j}}-\mu f\left({x}^{{k}_{j}}\right)\right)$, it follows from Lemma 2.1(2) that

$〈x-{x}^{{k}_{j}}+\left({x}^{{k}_{j}}-{y}^{{k}_{j}}\right),{x}^{{k}_{j}}-\mu f\left({x}^{{k}_{j}}\right)-{y}^{{k}_{j}}〉\le 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in C.$

Letting $j\to \mathrm{\infty }$, taking into account (4.10), (4.11) and the continuity of f, we obtain

$〈f\left(\overline{x}\right),x-\overline{x}〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in C,$

i.e., $\overline{x}$ is a solution of problem (1.1). In order to show that the entire sequence $\left\{{x}^{k}\right\}$ weakly converges to $\overline{x}$, assume that there exists another subsequence $\left\{{x}^{{\overline{k}}_{j}}\right\}$ of $\left\{{x}^{k}\right\}$ that weakly converges to some ${\overline{x}}^{\prime }\ne \overline{x}$ and ${\overline{x}}^{\prime }\in SOL\left(C,f\right)$. It follows from Lemma 4.2 that the sequences $\left\{\parallel {x}^{k}-\overline{x}\parallel \right\}$ and $\left\{\parallel {x}^{k}-{\overline{x}}^{\prime }\parallel \right\}$ are decreasing. By the Opial condition, we have

$\begin{array}{rl}\underset{k\to \mathrm{\infty }}{lim}\parallel {x}^{k}-\overline{x}\parallel & =\underset{j\to \mathrm{\infty }}{lim inf}\parallel {x}^{{k}_{j}}-\overline{x}\parallel <\underset{j\to \mathrm{\infty }}{lim inf}\parallel {x}^{{k}_{j}}-{\overline{x}}^{\prime }\parallel \\ =\underset{k\to \mathrm{\infty }}{lim}\parallel {x}^{k}-{\overline{x}}^{\prime }\parallel =\underset{j\to \mathrm{\infty }}{lim inf}\parallel {x}^{{\overline{k}}_{j}}-{\overline{x}}^{\prime }\parallel \\ <\underset{j\to \mathrm{\infty }}{lim inf}\parallel {x}^{{\overline{k}}_{j}}-\overline{x}\parallel =\underset{k\to \mathrm{\infty }}{lim}\parallel {x}^{k}-\overline{x}\parallel ,\end{array}$

which is a contradiction. Thus ${\overline{x}}^{\prime }=\overline{x}$. This implies that the sequence $\left\{{x}^{k}\right\}$ converges weakly to $\overline{x}\in SOL\left(C,f\right)$.

Suppose now that ${lim}_{k\to \mathrm{\infty }}{\eta }_{k}=0$, and let $\left\{{x}^{{k}_{j}}\right\}$ be a subsequence of $\left\{{x}^{k}\right\}$ converging weakly to some $\overline{x}$. By the choice of ${\eta }_{k}$, the linesearch condition (3.1) is violated at ${m}_{{k}_{j}}-1$, that is,

$\begin{array}{rl}\sigma {\parallel {x}^{{k}_{j}}-{y}^{{k}_{j}}\parallel }^{2}& <〈f\left({x}^{{k}_{j}}\right)-f\left({x}^{{k}_{j}}-{\gamma }^{{m}_{{k}_{j}}-1}\left({x}^{{k}_{j}}-{y}^{{k}_{j}}\right)\right),{x}^{{k}_{j}}-{y}^{{k}_{j}}〉\\ =〈f\left({x}^{{k}_{j}}\right)-f\left({x}^{{k}_{j}}-{\gamma }^{-1}{\eta }_{{k}_{j}}\left({x}^{{k}_{j}}-{y}^{{k}_{j}}\right)\right),{x}^{{k}_{j}}-{y}^{{k}_{j}}〉.\end{array}$

Since $\left\{{x}^{k}\right\}$ and $\left\{{y}^{k}\right\}$ are bounded and f is continuous, letting $j\to \mathrm{\infty }$ yields ${lim}_{j\to \mathrm{\infty }}{\parallel {x}^{{k}_{j}}-{y}^{{k}_{j}}\parallel }^{2}=0$. Applying the same argument as in the previous case, we get the desired result. □

Remark 4.1 If the mapping f is pseudomonotone on C, i.e.,

$〈f\left(y\right),x-y〉\ge 0\phantom{\rule{1em}{0ex}}⟹\phantom{\rule{1em}{0ex}}〈f\left(x\right),x-y〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in C,$

then $SOL\left(C,f\right)$ is a closed and convex set. In fact, if f is pseudomonotone on C, then for any ${x}^{\ast }\in SOL\left(C,f\right)$, we have

$〈f\left(x\right),x-{x}^{\ast }〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in C.$

Hence, it suffices to show that the solution set $SOL\left(C,f\right)$ can be characterized as the intersection of a family of halfspaces. That is,

$SOL\left(C,f\right)=\bigcap _{x\in C}\left\{y\in C|〈f\left(x\right),x-y〉\ge 0\right\}$

since

$SOL\left(C,f\right)=\bigcap _{x\in C}\left\{y\in C|〈f\left(y\right),x-y〉\ge 0\right\}.$

From the pseudomonotonicity of f, we obtain

$\bigcap _{x\in C}\left\{y\in C|〈f\left(y\right),x-y〉\ge 0\right\}\subseteq \bigcap _{x\in C}\left\{y\in C|〈f\left(x\right),x-y〉\ge 0\right\}.$

So, we need only to prove

$\bigcap _{x\in C}\left\{y\in C|〈f\left(y\right),x-y〉\ge 0\right\}\supseteq \bigcap _{x\in C}\left\{y\in C|〈f\left(x\right),x-y〉\ge 0\right\}.$
(4.12)

We suppose that the conclusion (4.12) does not hold; then there exist ${y}_{0}$ and ${x}_{0}$ in C such that

$〈f\left(x\right),x-{y}_{0}〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in C,$
(4.13)

and

$〈f\left({y}_{0}\right),{x}_{0}-{y}_{0}〉<0.$
(4.14)

In (4.13), taking $x=\left(1-t\right){y}_{0}+t{x}_{0}$, $t\in \left(0,1\right)$, and dividing by $t>0$, we obtain

$〈f\left(\left(1-t\right){y}_{0}+t{x}_{0}\right),{x}_{0}-{y}_{0}〉\ge 0.$

Letting $t\to {0}^{+}$, it follows from the continuity of f that

$〈f\left({y}_{0}\right),{x}_{0}-{y}_{0}〉\ge 0,$

which contradicts (4.14). We obtain the desired conclusion. Therefore the solution set $SOL\left(C,f\right)$ is closed and convex.

In Theorem 4.1, if $SOL\left(C,f\right)$ is a closed and convex set, then we can furthermore obtain

${u}^{\ast }=\underset{k\to \mathrm{\infty }}{lim}{P}_{SOL\left(C,f\right)}\left({x}^{k}\right).$

In fact, we take

${u}^{k}={P}_{SOL\left(C,f\right)}\left({x}^{k}\right).$

By Lemma 2.1(2) and noting $\overline{x}\in SOL\left(C,f\right)$, we have

$〈\overline{x}-{u}^{k},{u}^{k}-{x}^{k}〉\ge 0.$

By Lemma 2.2, $\left\{{u}^{k}\right\}$ converges strongly to some ${u}^{\ast }\in SOL\left(C,f\right)$. Letting $k\to \mathrm{\infty }$ in the above inequality, using ${x}^{k}⇀\overline{x}$ and ${u}^{k}\to {u}^{\ast }$, we obtain

$〈\overline{x}-{u}^{\ast },{u}^{\ast }-\overline{x}〉\ge 0,$

and hence ${u}^{\ast }=\overline{x}$.

## 5 The modified subgradient double projection algorithm

In this section, we present a modified subgradient double projection algorithm which finds a solution of the $VI\left(C,f\right)$ which is also a fixed point of a given nonexpansive mapping. Let $S:H\to H$ be a nonexpansive mapping and denote by $Fix\left(S\right)$ its fixed point set, i.e.,

$Fix\left(S\right)=\left\{x\in H|S\left(x\right)=x\right\}.$

Let ${\left\{{\alpha }_{k}\right\}}_{k=0}^{\mathrm{\infty }}\subset \left[c,d\right]$ for some $c,d\in \left(0,1\right)$.

Algorithm 5.1 Select ${x}^{0}\in C$, $\alpha >0$, $\beta \ge 0$, $\sigma >0$, $\mu \in \left(0,{\sigma }^{-1}\right)$, $\gamma \in \left(0,1\right)$. Set $k=0$.

Step 1. For ${x}^{k}\in C$, define

${y}^{k}={P}_{C}\left({x}^{k}-\mu f\left({x}^{k}\right)\right).$

If ${x}^{k}={y}^{k}$, stop; else go to Step 2.

Step 2. Compute ${z}^{k}=\left(1-{\eta }_{k}\right){x}^{k}+{\eta }_{k}{y}^{k}$, where ${\eta }_{k}={\gamma }^{{m}_{k}}$ with ${m}_{k}$ being the smallest nonnegative integer m satisfying

$〈f\left({x}^{k}\right)-f\left({x}^{k}-{\gamma }^{m}\left({x}^{k}-{y}^{k}\right)\right),{x}^{k}-{y}^{k}〉\le \sigma {\parallel {x}^{k}-{y}^{k}\parallel }^{2}.$
(5.1)

Step 3. Compute

${x}^{k+1}={\alpha }_{k}{x}^{k}+\left(1-{\alpha }_{k}\right)S{P}_{{C}_{k}\cap {H}_{k}}\left({x}^{k}\right),$

where

${C}_{k}=\left\{x\in H|c\left({x}^{k}\right)+〈{\xi }^{k},x-{x}^{k}〉\le 0\right\},$

${H}_{k}=\left\{v\in H:{h}_{k}\left(v\right)\le 0\right\}$ is a halfspace defined by the function

${h}_{k}\left(v\right)=〈\alpha {\eta }_{k}\left({x}^{k}-{y}^{k}\right)+\beta f\left({x}^{k}\right)+\alpha \mu f\left({z}^{k}\right),v-{x}^{k}〉+\alpha {\eta }_{k}\left(1-\mu \sigma \right){\parallel {x}^{k}-{y}^{k}\parallel }^{2},$
(5.2)

and ${\xi }^{k}\in \partial c\left({x}^{k}\right)$. Let $k=k+1$ and return to Step 1.
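A minimal one-dimensional sketch of Algorithm 5.1, in the same illustrative setting as in Remark 3.1(2) ($f\left(x\right)=1-{e}^{-x}$, $C=\left[0,1\right]$); the nonexpansive mapping $S\left(x\right)=x/2$ with $Fix\left(S\right)=\left\{0\right\}$ and the choice ${\alpha }_{k}\equiv 1/2$ are assumptions made for this demonstration only:

```python
import math

def f(x):
    return 1.0 - math.exp(-x)       # SOL(C, f) = {0} on C = [0, 1]

def c(x):                           # C = [0, 1] = {x : c(x) <= 0}
    return max(-x, x - 1.0)

def subgrad_c(x):                   # a subgradient of c (gradient of an active piece)
    return -1.0 if -x >= x - 1.0 else 1.0

def halfspace_bounds(coef, rhs):    # the ray {v : coef * v <= rhs} in R
    if coef > 0:
        return (-math.inf, rhs / coef)
    if coef < 0:
        return (rhs / coef, math.inf)
    return (-math.inf, math.inf)

def S(x):                           # illustrative nonexpansive mapping, Fix(S) = {0}
    return 0.5 * x

alpha, beta, sigma, mu, gamma, alpha_k = 1.0, 0.0, 0.3, 1.0, 0.5, 0.5
x = 0.5
for _ in range(200):
    y = min(max(x - mu * f(x), 0.0), 1.0)       # Step 1
    if x == y:
        break
    eta = 1.0                                   # Step 2: linesearch (5.1)
    while (f(x) - f(x - eta * (x - y))) * (x - y) > sigma * (x - y) ** 2:
        eta *= gamma
    z = (1.0 - eta) * x + eta * y
    xi = subgrad_c(x)                           # Step 3: build C_k and H_k as in (5.2)
    lo1, hi1 = halfspace_bounds(xi, xi * x - c(x))
    a = alpha * eta * (x - y) + beta * f(x) + alpha * mu * f(z)
    b = alpha * eta * (1.0 - mu * sigma) * (x - y) ** 2
    lo2, hi2 = halfspace_bounds(a, a * x - b)
    t = min(max(x, max(lo1, lo2)), min(hi1, hi2))   # t^k = P_{C_k ∩ H_k}(x^k)
    x = alpha_k * x + (1.0 - alpha_k) * S(t)        # Krasnoselskii-Mann-type step

print(x)  # approaches the unique point of Fix(S) ∩ SOL(C, f), namely 0
```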

## 6 Convergence of the modified subgradient double projection algorithm

In this section, we establish a weak convergence theorem for Algorithm 5.1. We assume that the following condition holds:

$Fix\left(S\right)\cap SOL\left(C,f\right)\ne \mathrm{\varnothing }.$

We also recall that in a Hilbert space H,

${\parallel \lambda x+\left(1-\lambda \right)y\parallel }^{2}=\lambda {\parallel x\parallel }^{2}+\left(1-\lambda \right){\parallel y\parallel }^{2}-\lambda \left(1-\lambda \right){\parallel x-y\parallel }^{2}$
(6.1)

for all $x,y\in H$ and $\lambda \in \left[0,1\right]$.
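Identity (6.1) holds exactly in any Hilbert space; a quick numerical check in ${R}^{2}$ (purely illustrative) confirms it up to floating-point rounding:

```python
def check_identity(x, y, lam):
    """Verify (6.1) in R^2:
    ||lam x + (1-lam) y||^2 = lam ||x||^2 + (1-lam) ||y||^2 - lam (1-lam) ||x-y||^2."""
    sq = lambda v: v[0] ** 2 + v[1] ** 2
    lhs = sq((lam * x[0] + (1 - lam) * y[0], lam * x[1] + (1 - lam) * y[1]))
    rhs = (lam * sq(x) + (1 - lam) * sq(y)
           - lam * (1 - lam) * sq((x[0] - y[0], x[1] - y[1])))
    return abs(lhs - rhs) < 1e-12

print(check_identity((3.0, -1.0), (0.5, 2.0), 0.3))  # True
```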

Theorem 6.1 Suppose that assumptions (A1)-(A3) hold. Then any sequence $\left\{{x}^{k}\right\}$ generated by Algorithm 5.1 converges weakly to some solution ${u}^{\ast }\in Fix\left(S\right)\cap SOL\left(C,f\right)$.

Proof Denote ${t}^{k}={P}_{{C}_{k}\cap {H}_{k}}\left({x}^{k}\right)$ for all k. Let $u\in Fix\left(S\right)\cap SOL\left(C,f\right)$. By the definition of ${x}^{k+1}$, we obtain

$\begin{array}{rl}{\parallel {x}^{k+1}-u\parallel }^{2}& ={\parallel {\alpha }_{k}{x}^{k}+\left(1-{\alpha }_{k}\right)S\left({t}^{k}\right)-u\parallel }^{2}\\ ={\parallel {\alpha }_{k}\left({x}^{k}-u\right)+\left(1-{\alpha }_{k}\right)\left(S\left({t}^{k}\right)-u\right)\parallel }^{2}\end{array}$
(6.2)
$\begin{array}{r}={\alpha }_{k}{\parallel {x}^{k}-u\parallel }^{2}+\left(1-{\alpha }_{k}\right){\parallel S\left({t}^{k}\right)-u\parallel }^{2}-{\alpha }_{k}\left(1-{\alpha }_{k}\right){\parallel {x}^{k}-S\left({t}^{k}\right)\parallel }^{2}\\ \le {\alpha }_{k}{\parallel {x}^{k}-u\parallel }^{2}+\left(1-{\alpha }_{k}\right){\parallel S\left({t}^{k}\right)-S\left(u\right)\parallel }^{2}\\ \le {\alpha }_{k}{\parallel {x}^{k}-u\parallel }^{2}+\left(1-{\alpha }_{k}\right){\parallel {t}^{k}-u\parallel }^{2}\\ \le {\alpha }_{k}{\parallel {x}^{k}-u\parallel }^{2}+\left(1-{\alpha }_{k}\right)\left({\parallel {x}^{k}-u\parallel }^{2}-{\parallel {t}^{k}-{x}^{k}\parallel }^{2}\right)\\ ={\parallel {x}^{k}-u\parallel }^{2}-\left(1-{\alpha }_{k}\right){\parallel {t}^{k}-{x}^{k}\parallel }^{2}\end{array}$
(6.3)
$={\parallel {x}^{k}-u\parallel }^{2}-\left(1-{\alpha }_{k}\right){dist}^{2}\left({x}^{k},{C}_{k}\cap {H}_{k}\right),$
(6.4)

where the third equality follows from (6.1), the second inequality follows from the nonexpansiveness of S, and the third inequality follows from Lemma 2.1(1). In (6.4), using arguments similar to those in the proofs of Lemma 4.2 and Theorem 4.1, we obtain that there exist subsequences $\left\{{x}^{{k}_{j}}\right\}$ and $\left\{{y}^{{k}_{j}}\right\}$ of $\left\{{x}^{k}\right\}$ and $\left\{{y}^{k}\right\}$, respectively, such that

$\underset{j\to \mathrm{\infty }}{lim}\parallel {x}^{{k}_{j}}-{y}^{{k}_{j}}\parallel =0.$
(6.5)

By (6.3), we obtain that $\left\{\parallel {x}^{k}-u\parallel \right\}$ is a convergent sequence, i.e., there exists some $\sigma \ge 0$ such that

$\underset{k\to \mathrm{\infty }}{lim}\parallel {x}^{k}-u\parallel =\sigma$
(6.6)

and

$\underset{k\to \mathrm{\infty }}{lim}\parallel {x}^{k}-{t}^{k}\parallel =0$
(6.7)

and $\left\{{x}^{k}\right\}$ and $\left\{{t}^{k}\right\}$ are bounded. Hence we may assume, without loss of generality, that $\left\{{x}^{{k}_{j}}\right\}$ weakly converges to some $\overline{x}\in H$. We now show that

$\overline{x}\in Fix\left(S\right)\cap SOL\left(C,f\right).$
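For completeness, the passage from (6.3) to (6.6) and (6.7) can be made explicit. Assuming, as is standard for such Mann-type schemes, that the parameters satisfy $\alpha_k \le 1-\delta$ for some $\delta>0$ (a parameter condition not restated in this section), summing (6.3) over k gives

```latex
\delta \sum_{k=0}^{\infty} \| t^{k} - x^{k} \|^{2}
  \le \sum_{k=0}^{\infty} (1-\alpha_{k}) \, \| t^{k} - x^{k} \|^{2}
  \le \sum_{k=0}^{\infty} \bigl( \| x^{k} - u \|^{2} - \| x^{k+1} - u \|^{2} \bigr)
  \le \| x^{0} - u \|^{2} < \infty .
```

In particular the series converges, so $\parallel {t}^{k}-{x}^{k}\parallel \to 0$, which is (6.7); and since (6.3) shows that $\left\{\parallel {x}^{k}-u\parallel \right\}$ is nonincreasing and bounded below, its limit σ in (6.6) exists.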

By the triangle inequality,

$\parallel {t}^{k}-{y}^{k}\parallel \le \parallel {t}^{k}-{x}^{k}\parallel +\parallel {x}^{k}-{y}^{k}\parallel ,$
(6.8)

so by (6.5) and (6.7), we obtain

$\underset{j\to \mathrm{\infty }}{lim}\parallel {t}^{{k}_{j}}-{y}^{{k}_{j}}\parallel =0.$
(6.9)

Since ${t}^{{k}_{j}}-\left({t}^{{k}_{j}}-{y}^{{k}_{j}}\right)={y}^{{k}_{j}}={P}_{{C}_{{k}_{j}}}\left[{x}^{{k}_{j}}-\mu f\left({x}^{{k}_{j}}\right)\right]$, it follows from Lemma 2.1(2) that

$〈{x}^{{k}_{j}}-\mu f\left({x}^{{k}_{j}}\right)-{y}^{{k}_{j}},z-{y}^{{k}_{j}}〉\le 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }z\in {C}_{{k}_{j}}.$

Passing to the limit $j\to \mathrm{\infty }$ in the above inequality, taking into account (6.7), (6.9), $C\subset {C}_{{k}_{j}}$ and the continuity of f, we obtain

$〈f\left(\overline{x}\right),z-\overline{x}〉\ge 0\phantom{\rule{1em}{0ex}}\mathrm{\forall }z\in C,$

i.e., $\overline{x}\in SOL\left(C,f\right)$. It remains to show that $\overline{x}\in Fix\left(S\right)$. Since S is nonexpansive, we obtain

$\parallel S\left({t}^{k}\right)-u\parallel =\parallel S\left({t}^{k}\right)-S\left(u\right)\parallel \le \parallel {t}^{k}-u\parallel \le \parallel {x}^{k}-u\parallel .$
(6.10)

By (6.6),

$\underset{k\to \mathrm{\infty }}{lim sup}\parallel S\left({t}^{k}\right)-u\parallel \le \sigma .$
(6.11)

Furthermore,

$\begin{array}{r}\underset{k\to \mathrm{\infty }}{lim}\parallel {\alpha }_{k}\left({x}^{k}-u\right)+\left(1-{\alpha }_{k}\right)\left(S\left({t}^{k}\right)-u\right)\parallel \\ \phantom{\rule{1em}{0ex}}=\underset{k\to \mathrm{\infty }}{lim}\parallel {\alpha }_{k}{x}^{k}+\left(1-{\alpha }_{k}\right)S\left({t}^{k}\right)-u\parallel \\ \phantom{\rule{1em}{0ex}}=\underset{k\to \mathrm{\infty }}{lim}\parallel {x}^{k+1}-u\parallel =\sigma .\end{array}$
(6.12)
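The invocation of Lemma 2.3 below follows the standard pattern of Schu's lemma; assuming Lemma 2.3 has its usual form (if $\alpha_k$ stays in an interval $[\delta, 1-\delta]$, $\limsup_k \|v^k\| \le \sigma$, $\limsup_k \|w^k\| \le \sigma$, and $\lim_k \|\alpha_k v^k + (1-\alpha_k) w^k\| = \sigma$, then $\|v^k - w^k\| \to 0$), it is applied here with

```latex
v^{k} = x^{k} - u, \qquad
w^{k} = S(t^{k}) - u, \qquad
\| v^{k} - w^{k} \| = \| x^{k} - S(t^{k}) \| ,
```

so that (6.6), (6.11) and (6.12) supply exactly the required hypotheses.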

So, applying Lemma 2.3, we obtain

$\underset{k\to \mathrm{\infty }}{lim}\parallel S\left({t}^{k}\right)-{x}^{k}\parallel =0$
(6.13)

Moreover,

$\begin{array}{rl}\parallel S\left({x}^{k}\right)-{x}^{k}\parallel & =\parallel S\left({x}^{k}\right)-S\left({t}^{k}\right)+S\left({t}^{k}\right)-{x}^{k}\parallel \\ \le \parallel S\left({x}^{k}\right)-S\left({t}^{k}\right)\parallel +\parallel S\left({t}^{k}\right)-{x}^{k}\parallel \\ \le \parallel {x}^{k}-{t}^{k}\parallel +\parallel S\left({t}^{k}\right)-{x}^{k}\parallel .\end{array}$

It follows from (6.7) and (6.13) that

$\underset{k\to \mathrm{\infty }}{lim}\parallel S\left({x}^{k}\right)-{x}^{k}\parallel =0.$
(6.14)

Since S is nonexpansive on H, ${x}^{{k}_{j}}$ weakly converges to $\overline{x}$ and

$\underset{j\to \mathrm{\infty }}{lim}\parallel \left(I-S\right)\left({x}^{{k}_{j}}\right)\parallel =\underset{j\to \mathrm{\infty }}{lim}\parallel {x}^{{k}_{j}}-S\left({x}^{{k}_{j}}\right)\parallel =0,$
(6.15)

we obtain by Lemma 2.4 that $\left(I-S\right)\left(\overline{x}\right)=0$, which means that $\overline{x}\in Fix\left(S\right)$. Finally, again by arguments similar to those used in the proof of Theorem 4.1, the entire sequence $\left\{{x}^{k}\right\}$ weakly converges to $\overline{x}\in Fix\left(S\right)\cap SOL\left(C,f\right)$. □

## 7 Conclusion

In this paper, a new double projection algorithm for the variational inequality problem has been presented. The main advantage of the proposed method is that the second projection at each iteration is onto the intersection of two halfspaces, which can be computed explicitly and cheaply. When the feasible set C of the VI is a general closed convex set, our algorithm is therefore easier to implement than the double projection method proposed by Solodov and Svaiter [3]. It is natural to ask whether the first projection can likewise be replaced by a projection onto a halfspace or an intersection of halfspaces; this would be an interesting topic for further research.
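To illustrate why this second projection is inexpensive, here is a minimal numpy sketch (our own illustration, not code from the paper; function names and tolerances are ours) of the exact projection onto the intersection of two halfspaces $\{z : 〈a_i, z〉 \le b_i\}$, $i = 1, 2$, obtained by enumerating the possible active sets of the KKT conditions. The both-active case assumes $a_1$ and $a_2$ are linearly independent.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Project x onto the halfspace {z : <a, z> <= b}."""
    viol = a @ x - b
    if viol <= 0:
        return x.copy()          # x is already feasible
    return x - (viol / (a @ a)) * a

def project_two_halfspaces(x, a1, b1, a2, b2, tol=1e-12):
    """Project x onto {z : <a1,z> <= b1} ∩ {z : <a2,z> <= b2}
    by checking the possible KKT active sets in turn."""
    # No constraint active: x itself is feasible.
    if a1 @ x <= b1 + tol and a2 @ x <= b2 + tol:
        return x.copy()
    # One constraint active: the projection onto a single halfspace
    # is also the projection onto the intersection whenever it is
    # feasible for the other constraint (projection onto a superset
    # that lands in the subset).
    p1 = project_halfspace(x, a1, b1)
    if a2 @ p1 <= b2 + tol:
        return p1
    p2 = project_halfspace(x, a2, b2)
    if a1 @ p2 <= b1 + tol:
        return p2
    # Both constraints active: project onto the intersection of the
    # two bounding hyperplanes by solving a 2x2 system for the
    # KKT multipliers (requires a1, a2 linearly independent).
    G = np.array([[a1 @ a1, a1 @ a2],
                  [a2 @ a1, a2 @ a2]])
    r = np.array([a1 @ x - b1, a2 @ x - b2])
    lam = np.linalg.solve(G, r)
    return x - lam[0] * a1 - lam[1] * a2
```

Each candidate costs only a few inner products or one 2×2 linear solve, which is what makes the second projection step cheap compared with a projection onto a general closed convex set C.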

## References

1. Hartman P, Stampacchia G: On some non-linear elliptic differential-functional equations. Acta Math. 1966, 115: 271–310. 10.1007/BF02392210

2. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, Berlin; 2003.

3. Solodov MV, Svaiter BF: A new projection method for variational inequality problems. SIAM J. Control Optim. 1999, 37: 765–776. 10.1137/S0363012997317475

4. Wang YJ, Xiu NH, Wang CY: Unified framework of extragradient-type methods for pseudomonotone variational inequalities. J. Optim. Theory Appl. 2001, 111: 641–656. 10.1023/A:1012606212823

5. Korpelevich GM: The extragradient method for finding saddle points and other problems. Matecon 1976, 17: 747–756.

6. Censor Y, Gibali A, Reich S: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148: 318–335. 10.1007/s10957-010-9757-3

7. Censor Y, Gibali A, Reich S: Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61: 1119–1132. 10.1080/02331934.2010.539689

8. Zarantonello EH: Projections on Convex Sets in Hilbert Spaces and Spectral Theory. Academic Press, New York; 1971.

9. Opial Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73: 591–597. 10.1090/S0002-9904-1967-11761-0

10. Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428. 10.1023/A:1025407607560

11. Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128: 191–201. 10.1007/s10957-005-7564-z

12. Browder FE: Fixed point theorems for noncompact mappings in Hilbert space. Proc. Natl. Acad. Sci. USA 1965, 53: 1272–1276. 10.1073/pnas.53.6.1272

13. He YR: A new double projection algorithm for variational inequalities. J. Comput. Appl. Math. 2006, 185: 166–173. 10.1016/j.cam.2005.01.031

14. Polyak BT: Minimization of unsmooth functionals. U.S.S.R. Comput. Math. Math. Phys. 1969, 9: 14–29.

15. Yang QZ: On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302: 166–179. 10.1016/j.jmaa.2004.07.048

16. Censor Y, Motova A, Segal A: Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327: 1244–1256. 10.1016/j.jmaa.2006.05.010

17. Qu B, Xiu NH: A new halfspace-relaxation projection method for the split feasibility problem. Linear Algebra Appl. 2008, 428: 1218–1229. 10.1016/j.laa.2007.03.002

18. Qu B, Xiu NH: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21: 1655–1665. 10.1088/0266-5611/21/5/009

## Acknowledgements

This work was supported by the Educational Science Foundation of Chongqing, Chongqing of China (grant KJ111309).

## Author information


### Corresponding author

Correspondence to Lian Zheng.

### Competing interests

The author declares that he has no competing interests.


Zheng, L. The subgradient double projection method for variational inequalities in a Hilbert space. Fixed Point Theory Appl 2013, 136 (2013). https://doi.org/10.1186/1687-1812-2013-136


### Keywords

• variational inequalities
• nonexpansive mapping