
The generalized viscosity implicit rules of nonexpansive mappings in Hilbert spaces

Abstract

The generalized viscosity implicit rules of nonexpansive mappings in Hilbert spaces are established. The strong convergence theorems of the rules are proved under certain assumptions imposed on the sequences of parameters. The results presented in this paper extend and improve the main results of Refs. (Moudafi in J. Math. Anal. Appl. 241:46-55, 2000; Xu et al. in Fixed Point Theory Appl. 2015:41, 2015). Moreover, applications to a more general system of variational inequalities, the constrained convex minimization problem, and the K-mapping are included.

1 Introduction

In this paper, we assume that H is a real Hilbert space with the inner product \(\langle\cdot,\cdot\rangle\) and the induced norm \(\|\cdot\|\), and that C is a nonempty closed convex subset of H. Let \(T: H\rightarrow H\) be a mapping and let \(F(T)\) denote the set of fixed points of T, i.e., \(F(T)=\{x\in H:Tx=x\}\). A mapping \(T: H\rightarrow H\) is called nonexpansive if

$$ \| Tx-Ty \|\leqslant\| x-y \| $$

for all \(x, y\in H\). A mapping \(f: H\rightarrow H\) is called a contraction if

$$ \bigl\Vert f(x)-f(y) \bigr\Vert \leqslant\theta\| x-y \| $$

for all \(x, y\in H\) and some \(\theta\in[0,1)\).

In 2000, Moudafi [1] proved the following strong convergence theorem for nonexpansive mappings in real Hilbert spaces.

Theorem 1.1

[1]

Let C be a nonempty closed convex subset of the real Hilbert space H. Let T be a nonexpansive mapping of C into itself such that \(F(T)\) is nonempty. Let f be a contraction of C into itself with coefficient \(\theta\in[0,1)\). Pick any \(x_{0} \in C\), let \(\{x_{n}\}\) be a sequence generated by

$$x_{n+1}=\frac{\varepsilon_{n}}{1+\varepsilon_{n}}f(x_{n})+\frac {1}{1+\varepsilon_{n}}T(x_{n}), \quad n\geqslant0, $$

where \(\{\varepsilon_{n}\}\subset(0,1)\) satisfies

  1. (1)

    \(\lim_{n\rightarrow\infty}\varepsilon_{n}=0\);

  2. (2)

    \(\sum_{n=0}^{\infty}\varepsilon_{n}=\infty\);

  3. (3)

    \(\lim_{n\rightarrow\infty} |\frac{1}{\varepsilon _{n+1}}-\frac{1}{\varepsilon_{n}} |=0\).

Then \(\{x_{n}\}\) converges strongly to a fixed point \(x^{*}\) of the nonexpansive mapping T, which is also the unique solution of the variational inequality (VI)

$$ \bigl\langle (I-f)x,y-x\bigr\rangle \geqslant0, \quad \forall y\in F(T). $$
(1.1)

In other words, \(x^{*}\) is the unique fixed point of the contraction \(P_{F(T)}f\), that is, \(P_{F(T)}f(x^{*})=x^{*}\).
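Theorem 1.1 can be illustrated numerically. The following is a minimal sketch with our own hypothetical choices: \(T=\cos\) on the reals (nonexpansive, since \(|\cos'|\leqslant1\), with a unique fixed point, the Dottie number \(\approx0.739\)), the contraction \(f(x)=x/2\) with \(\theta=1/2\), and \(\varepsilon_{n}=1/\sqrt{n+2}\), which satisfies conditions (1)-(3):

```python
import math

def moudafi(T, f, x0, n_iter=20000):
    """Viscosity iteration x_{n+1} = eps/(1+eps)*f(x_n) + 1/(1+eps)*T(x_n)."""
    x = x0
    for n in range(n_iter):
        # eps_n = 1/sqrt(n+2): eps_n -> 0, sum eps_n = infinity,
        # and |1/eps_{n+1} - 1/eps_n| = sqrt(n+3) - sqrt(n+2) -> 0.
        eps = 1.0 / math.sqrt(n + 2)
        x = eps / (1 + eps) * f(x) + 1 / (1 + eps) * T(x)
    return x

# T = cos is nonexpansive on R; its unique fixed point is ~0.7390851.
x_star = moudafi(math.cos, lambda x: 0.5 * x, x0=2.0)
print(x_star)  # close to 0.739
```

Since \(F(\cos)\) is a singleton, the limit here does not depend on the choice of the contraction f.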

Such a method for approximation of fixed points is called the viscosity approximation method. In 2015, Xu et al. [2] applied the viscosity technique to the implicit midpoint rule for nonexpansive mappings and proposed the following viscosity implicit midpoint rule (VIMR):

$$x_{n+1}=\alpha_{n} f(x_{n})+(1- \alpha_{n})T \biggl(\frac {x_{n}+x_{n+1}}{2} \biggr),\quad \forall n\geqslant0. $$

The idea was to use contractions to regularize the implicit midpoint rule for nonexpansive mappings. They also proved that the VIMR converges strongly to a fixed point of T, which in turn solves VI (1.1).

In this paper, motivated and inspired by Xu et al. [2], we give the following generalized viscosity implicit rules:

$$ x_{n+1}=\alpha_{n} f(x_{n})+(1- \alpha_{n})T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr) $$
(1.2)

and

$$ x_{n+1}=\alpha_{n}x_{n}+ \beta_{n} f(x_{n})+\gamma_{n}T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr) $$
(1.3)

for \(n\geqslant0\). We will prove that the generalized viscosity implicit rules (1.2) and (1.3) converge strongly to a fixed point of T under certain assumptions imposed on the sequences of parameters, which also solve VI (1.1).
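Since (1.2) defines \(x_{n+1}\) only implicitly, each step requires solving a fixed-point equation. Because T is nonexpansive, the inner map \(x\mapsto\alpha_{n} f(x_{n})+(1-\alpha_{n})T (s_{n}x_{n}+(1-s_{n})x )\) is a contraction with constant \((1-\alpha_{n})(1-s_{n})<1\), so simple iteration solves it. A minimal sketch with our own hypothetical choices (\(T=\cos\), \(f(x)=x/2\), \(\alpha_{n}=1/\sqrt{n+2}\), \(s_{n}=1/2\), which recovers the VIMR of [2]):

```python
import math

def implicit_step(x_n, alpha, s, T, f, tol=1e-12):
    """Solve x = alpha*f(x_n) + (1-alpha)*T(s*x_n + (1-s)*x) by inner
    fixed-point iteration; the inner map is a contraction with constant
    (1-alpha)*(1-s) < 1, so this loop converges."""
    x = x_n
    while True:
        x_new = alpha * f(x_n) + (1 - alpha) * T(s * x_n + (1 - s) * x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new

def generalized_viscosity(T, f, x0, s=0.5, n_iter=5000):
    x = x0
    for n in range(n_iter):
        # alpha_n -> 0, sum alpha_n = infinity, sum |alpha_{n+1}-alpha_n| < infinity
        x = implicit_step(x, 1.0 / math.sqrt(n + 2), s, T, f)
    return x

x_star = generalized_viscosity(math.cos, lambda x: 0.5 * x, x0=2.0)
print(x_star)  # approaches the fixed point of cos (~0.739)
```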

The organization of this paper is as follows. In Section 2, we recall the notion of the metric projection, the demiclosedness principle of nonexpansive mappings and a convergence lemma. In Section 3, the strong convergence theorems of the generalized viscosity implicit rules (1.2) and (1.3) are proved under some conditions, respectively. Applications to a more general system of variational inequalities, the constrained convex minimization problem, and the K-mapping are presented in Section 4.

2 Preliminaries

Firstly, we recall the notion and some properties of the metric projection.

Definition 2.1

For every point \(x\in H\), there exists a unique nearest point in C, denoted by \(P_{C}x\), such that

$$\| x-P_{C}x \|\leqslant\| x-y \|, \quad \forall y\in C. $$

The mapping \(P_{C}: H \rightarrow C\) defined in this way is called the metric projection of H onto C.

Lemma 2.1

Let C be a nonempty closed convex subset of the real Hilbert space H and \(P_{C}: H \rightarrow C\) be a metric projection. Then

  1. (1)

    \(\| P_{C}x-P_{C}y\|^{2} \leqslant\langle x-y,P_{C}x-P_{C}y\rangle\), \(\forall x, y\in H\);

  2. (2)

    \(P_{C}\) is a nonexpansive mapping, i.e., \(\| P_{C}x-P_{C}y\|\leqslant\| x-y\|\), \(\forall x, y\in H\);

  3. (3)

    \(\langle x-P_{C}x,y-P_{C}x\rangle\leqslant0\), \(\forall x\in H, y\in C \).
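For instance, for the closed unit ball in \(\mathbb{R}^{2}\) the metric projection has the closed form \(P_{C}x=x/\max\{1,\|x\|\}\), and properties (1) and (3) of Lemma 2.1 can be checked numerically. A small sketch (the helper names are ours):

```python
import math

def proj_unit_ball(x):
    """Metric projection of x in R^2 onto the closed unit ball:
    P_C x = x / max(1, ||x||)."""
    r = max(1.0, math.hypot(*x))
    return (x[0] / r, x[1] / r)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def sub(u, v):
    return (u[0] - v[0], u[1] - v[1])

x, y = (3.0, 4.0), (0.2, -0.5)          # x outside C, y inside C
Px, Py = proj_unit_ball(x), proj_unit_ball(y)

# (1) firm nonexpansiveness: ||P_C x - P_C y||^2 <= <x - y, P_C x - P_C y>
lhs = dot(sub(Px, Py), sub(Px, Py))
rhs = dot(sub(x, y), sub(Px, Py))
print(lhs <= rhs)                        # True

# (3) variational characterization: <x - P_C x, y - P_C x> <= 0 for y in C
print(dot(sub(x, Px), sub(y, Px)) <= 0)  # True
```

Note that (2) follows from (1) by the Cauchy-Schwarz inequality.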

In order to prove our results, we need the demiclosedness principle of nonexpansive mappings, which is useful for showing that weak cluster points of an iterative sequence are fixed points of a nonexpansive mapping.

Lemma 2.2

(The demiclosedness principle)

Let C be a nonempty closed convex subset of the real Hilbert space H and \(T : C \rightarrow C\) be a nonexpansive mapping with \(F(T)\neq\emptyset\). If \(\{x_{n}\}\) is a sequence in C such that

$$x_{n} \rightharpoonup x^{*}\in C\quad \textit{and} \quad (I-T)x_{n}\rightarrow0, $$

then \(x^{*}=Tx^{*}\), where → (resp. ⇀) denotes strong (resp. weak) convergence.

In addition, we also need the following convergence lemma.

Lemma 2.3

[2]

Assume that \(\{a_{n}\}\) is a sequence of nonnegative real numbers such that

$$ a_{n+1}\leqslant(1-\gamma_{n})a_{n}+ \delta_{n},\quad \forall n\geqslant0, $$

where \(\{\gamma_{n}\}\) is a sequence in \((0,1)\) and \(\{\delta_{n}\}\) is a sequence such that:

  1. (1)

    \(\sum_{n=0}^{\infty}\gamma_{n}=\infty\);

  2. (2)

    \(\limsup_{n\rightarrow\infty}\frac{\delta_{n}}{\gamma _{n}}\leqslant0\) or \(\sum_{n=0}^{\infty}|\delta_{n}|<\infty\).

Then \(\lim_{n\rightarrow\infty}a_{n}=0\).
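Lemma 2.3 can be illustrated numerically with our own toy choices: \(\gamma_{n}=1/\sqrt{n+1}\) (so \(\sum\gamma_{n}=\infty\)) and \(\delta_{n}=1/(n+1)^{2}\) (so \(\sum|\delta_{n}|<\infty\)); the recursion is then driven to zero:

```python
import math

a = 1.0
for n in range(200_000):
    gamma = 1.0 / math.sqrt(n + 1)   # sum diverges, as condition (1) requires
    delta = 1.0 / (n + 1) ** 2       # absolutely summable, as in condition (2)
    a = (1 - gamma) * a + delta      # worst case of the lemma's inequality
print(a)  # essentially 0
```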

3 Main results

Theorem 3.1

Let C be a nonempty closed convex subset of the real Hilbert space H. Let \(T: C \rightarrow C\) be a nonexpansive mapping with \(F(T)\neq \emptyset\) and \(f: C \rightarrow C\) be a contraction with coefficient \(\theta\in[0,1)\). Pick any \(x_{0} \in C\), let \(\{x_{n}\}\) be a sequence generated by

$$ x_{n+1}=\alpha_{n} f(x_{n})+(1- \alpha_{n})T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr), $$
(3.1)

where \(\{\alpha_{n}\}, \{s_{n}\} \subset(0,1)\) satisfy the following conditions:

  1. (1)

    \(\lim_{n\rightarrow\infty}\alpha_{n}=0\);

  2. (2)

    \(\sum_{n=0}^{\infty}\alpha_{n}=\infty\);

  3. (3)

    \(\sum_{n=0}^{\infty}|\alpha_{n+1}-\alpha_{n}|<\infty\);

  4. (4)

    \(0<\varepsilon\leqslant s_{n}\leqslant s_{n+1}<1\) for all \(n\geqslant0\).

Then \(\{x_{n}\}\) converges strongly to a fixed point \(x^{*}\) of the nonexpansive mapping T, which is also the unique solution of the variational inequality

$$\bigl\langle (I-f)x,y-x\bigr\rangle \geqslant0,\quad \forall y\in F(T). $$

In other words, \(x^{*}\) is the unique fixed point of the contraction \(P_{F(T)}f\), that is, \(P_{F(T)}f(x^{*})=x^{*}\).

Proof

We divide the proof into five steps.

Step 1. Firstly, we show that \(\{x_{n}\}\) is bounded.

Indeed, taking \(p\in F(T)\) arbitrarily, we have

$$\begin{aligned} \Vert x_{n+1}-p\Vert =&\bigl\Vert \alpha_{n} f(x_{n})+(1-\alpha_{n})T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-p\bigr\Vert \\ \leqslant& \alpha_{n}\bigl\Vert f(x_{n})-p\bigr\Vert + (1-\alpha_{n})\bigl\Vert T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-p\bigr\Vert \\ \leqslant& \alpha_{n} \bigl\Vert f(x_{n})-f(p)\bigr\Vert +\alpha_{n} \bigl\Vert f(p)-p\bigr\Vert +(1-\alpha _{n})\bigl\Vert s_{n} x_{n}+(1-s_{n})x_{n+1}-p \bigr\Vert \\ \leqslant& \theta\alpha_{n}\Vert x_{n}-p\Vert + \alpha_{n} \bigl\Vert f(p)-p\bigr\Vert +(1-\alpha _{n}) \bigl[s_{n}\Vert x_{n}-p\Vert +(1-s_{n})\Vert x_{n+1}-p\Vert \bigr] \\ = & \bigl[\theta\alpha_{n}+(1-\alpha_{n})s_{n} \bigr]\Vert x_{n}-p\Vert +(1-\alpha _{n}) (1-s_{n})\Vert x_{n+1}-p\Vert +\alpha_{n} \bigl\Vert f(p)-p\bigr\Vert . \end{aligned}$$

It follows that

$$ \bigl[1-(1-\alpha_{n}) (1-s_{n}) \bigr] \|x_{n+1}-p\|\leqslant \bigl[\theta \alpha_{n}+(1- \alpha_{n})s_{n} \bigr]\|x_{n}-p\|+ \alpha_{n} \bigl\Vert f(p)-p\bigr\Vert . $$
(3.2)

Since \(\alpha_{n},s_{n}\in(0,1)\), we have \(1-(1-\alpha_{n})(1-s_{n})>0\). Moreover, by (3.2), we get

$$\begin{aligned} \Vert x_{n+1}-p\Vert \leqslant&\frac{\theta\alpha_{n}+(1-\alpha_{n})s_{n}}{1-(1-\alpha _{n})(1-s_{n})}\Vert x_{n}-p\Vert +\frac{\alpha_{n}}{1-(1-\alpha_{n})(1-s_{n})} \bigl\Vert f(p)-p\bigr\Vert \\ =& \biggl[1-\frac{\alpha_{n}(1-\theta)}{1-(1-\alpha_{n})(1-s_{n})} \biggr]\Vert x_{n}-p\Vert + \frac{\alpha_{n}}{1-(1-\alpha_{n})(1-s_{n})} \bigl\Vert f(p)-p\bigr\Vert \\ =& \biggl[1-\frac{\alpha_{n}(1-\theta)}{1-(1-\alpha_{n})(1-s_{n})} \biggr]\Vert x_{n}-p\Vert \\ &{}+ \frac{\alpha_{n}(1-\theta)}{1-(1-\alpha_{n})(1-s_{n})} \biggl(\frac {1}{1-\theta} \bigl\Vert f(p)-p\bigr\Vert \biggr). \end{aligned}$$

Thus, we have

$$\|x_{n+1}-p\|\leqslant\max \biggl\{ \|x_{n}-p\|, \frac{1}{1-\theta} \bigl\Vert f(p)-p\bigr\Vert \biggr\} . $$

By induction, we obtain

$$\|x_{n}-p\|\leqslant\max \biggl\{ \|x_{0}-p\|, \frac{1}{1-\theta} \bigl\Vert f(p)-p\bigr\Vert \biggr\} ,\quad \forall n \geqslant0. $$

Hence, it turns out that \(\{x_{n}\}\) is bounded. Consequently, we deduce immediately that \(\{f(x_{n}) \}\) and \(\{T (s_{n}x_{n}+(1-s_{n})x_{n+1} ) \}\) are bounded.

Step 2. Next, we prove that \(\lim_{n\rightarrow\infty} \| x_{n+1}-x_{n}\|=0\).

To see this, we apply (3.1) to get

$$\begin{aligned} \Vert x_{n+1}-x_{n}\Vert =&\bigl\Vert \alpha_{n} f(x_{n})+(1-\alpha_{n})T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr) \\ &{}- \bigl[\alpha_{n-1} f(x_{n-1})+(1-\alpha_{n-1})T \bigl(s_{n-1} x_{n-1}+(1-s_{n-1})x_{n} \bigr) \bigr]\bigr\Vert \\ =&\bigl\Vert \alpha_{n}\bigl[ f(x_{n})-f(x_{n-1}) \bigr]+(\alpha_{n}-\alpha_{n-1})f(x_{n-1}) \\ &{} +(1-\alpha_{n}) \bigl[T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-T \bigl(s_{n-1} x_{n-1}+(1-s_{n-1})x_{n} \bigr) \bigr] \\ &{} -(\alpha_{n}-\alpha_{n-1})T \bigl(s_{n-1} x_{n-1}+(1-s_{n-1})x_{n} \bigr)\bigr\Vert \\ \leqslant& \alpha_{n}\bigl\Vert f(x_{n})-f(x_{n-1}) \bigr\Vert +|\alpha_{n}-\alpha _{n-1}|\cdot\bigl\Vert f(x_{n-1})-T \bigl(s_{n-1} x_{n-1}+(1-s_{n-1})x_{n} \bigr)\bigr\Vert \\ &{} +(1-\alpha_{n})\bigl\Vert T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-T \bigl(s_{n-1} x_{n-1}+(1-s_{n-1})x_{n} \bigr)\bigr\Vert \\ \leqslant&\theta\alpha_{n}\Vert x_{n}-x_{n-1} \Vert +|\alpha_{n}-\alpha_{n-1}| M_{1} \\ &{} +(1-\alpha_{n})\bigl\Vert \bigl[s_{n} x_{n}+(1-s_{n})x_{n+1}\bigr]-\bigl[s_{n-1} x_{n-1}+(1-s_{n-1})x_{n}\bigr]\bigr\Vert \\ =&\theta\alpha_{n}\Vert x_{n}-x_{n-1}\Vert +| \alpha_{n}-\alpha_{n-1}| M_{1} \\ &{}+(1- \alpha_{n})\bigl\Vert (1-s_{n}) (x_{n+1}-x_{n})+s_{n-1}(x_{n}-x_{n-1}) \bigr\Vert \\ \leqslant&\theta\alpha_{n}\Vert x_{n}-x_{n-1} \Vert +|\alpha_{n}-\alpha_{n-1}| M_{1}+(1- \alpha_{n}) (1-s_{n})\Vert x_{n+1}-x_{n} \Vert \\ &{} +(1-\alpha_{n})s_{n-1}\Vert x_{n}-x_{n-1} \Vert \\ =&(1-\alpha_{n}) (1-s_{n})\Vert x_{n+1}-x_{n} \Vert \\ &{}+\bigl[\theta\alpha_{n}+(1-\alpha _{n})s_{n-1} \bigr]\Vert x_{n}-x_{n-1}\Vert +|\alpha_{n}- \alpha_{n-1}| M_{1}, \end{aligned}$$

where \(M_{1}>0\) is a constant such that

$$M_{1}\geqslant\sup_{n\geqslant0}\bigl\Vert f(x_{n})-T \bigl(s_{n}x_{n}+(1-s_{n})x_{n+1} \bigr)\bigr\Vert . $$

It turns out that

$$\bigl[1-(1-\alpha_{n}) (1-s_{n})\bigr]\|x_{n+1}-x_{n} \|\leqslant\bigl[\theta\alpha _{n}+(1-\alpha_{n})s_{n-1} \bigr]\|x_{n}-x_{n-1}\|+|\alpha_{n}- \alpha_{n-1}| M_{1}, $$

that is,

$$\begin{aligned} \|x_{n+1}-x_{n}\| \leqslant&\frac{\theta\alpha_{n}+(1-\alpha _{n})s_{n-1}}{1-(1-\alpha_{n})(1-s_{n})} \|x_{n}-x_{n-1}\|+\frac {M_{1}}{1-(1-\alpha_{n})(1-s_{n})}|\alpha_{n}- \alpha_{n-1}| \\ =& \biggl[1-\frac{\alpha_{n}(1-\theta)+(1-\alpha _{n})(s_{n}-s_{n-1})}{1-(1-\alpha_{n})(1-s_{n})} \biggr]\|x_{n}-x_{n-1}\| \\ &{}+ \frac {M_{1}}{1-(1-\alpha_{n})(1-s_{n})}|\alpha_{n}-\alpha_{n-1}|. \end{aligned}$$

Noting that \(0<\varepsilon\leqslant s_{n-1}\leqslant s_{n}<1\), we have

$$0< \varepsilon\leqslant s_{n}< 1-(1-\alpha_{n}) (1-s_{n})< 1 $$

and

$$\frac{\alpha_{n}(1-\theta)+(1-\alpha_{n})(s_{n}-s_{n-1})}{1-(1-\alpha _{n})(1-s_{n})}\geqslant\alpha_{n}(1-\theta). $$

Thus,

$$ \|x_{n+1}-x_{n}\|\leqslant \bigl[1-\alpha_{n}(1- \theta)\bigr]\|x_{n}-x_{n-1}\| +\frac{ M_{1}}{\varepsilon}| \alpha_{n}-\alpha_{n-1}| . $$

Since \(\sum_{n=0}^{\infty}\alpha_{n}=\infty\) and \(\sum_{n=0}^{\infty}|\alpha_{n+1}-\alpha_{n}|<\infty\), by Lemma 2.3, we can get \(\| x_{n+1}-x_{n}\|\rightarrow0\) as \(n\rightarrow\infty\).

Step 3. Now, we prove that \(\lim_{n\rightarrow\infty} \|x_{n}-Tx_{n}\|=0\).

In fact, we can see that

$$\begin{aligned} \Vert x_{n}-Tx_{n}\Vert \leqslant&\Vert x_{n}-x_{n+1}\Vert +\bigl\Vert x_{n+1}-T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr) \bigr\Vert \\ &{}+\bigl\Vert T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-Tx_{n}\bigr\Vert \\ \leqslant&\Vert x_{n}-x_{n+1}\Vert +\bigl\Vert \alpha_{n} \bigl[f(x_{n})-T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr) \bigr]\bigr\Vert \\ &{}+ \bigl\Vert \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-x_{n}\bigr\Vert \\ \leqslant&\Vert x_{n}-x_{n+1}\Vert + \alpha_{n}M_{1}+(1-s_{n})\Vert x_{n+1}-x_{n}\Vert \\ \leqslant&(2-s_{n})\Vert x_{n}-x_{n+1}\Vert + \alpha_{n}M_{1} \\ \leqslant&2\Vert x_{n}-x_{n+1}\Vert + \alpha_{n}M_{1}. \end{aligned}$$

Then, by \(\lim_{n\rightarrow\infty}\|x_{n+1}-x_{n}\|=0\) and \(\lim_{n\rightarrow\infty}\alpha_{n}=0\), we get \(\|x_{n}-Tx_{n}\|\rightarrow0\) as \(n\rightarrow\infty\). Moreover, we have

$$\begin{aligned}& \bigl\Vert T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-x_{n}\bigr\Vert \\& \quad \leqslant \bigl\Vert T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-Tx_{n}\bigr\Vert +\Vert Tx_{n}-x_{n} \Vert \\& \quad \leqslant \bigl\Vert \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-x_{n}\bigr\Vert +\Vert Tx_{n}-x_{n} \Vert \\& \quad = (1-s_{n})\Vert x_{n+1}-x_{n}\Vert +\Vert Tx_{n}-x_{n}\Vert \\& \quad \leqslant \Vert x_{n+1}-x_{n}\Vert +\Vert Tx_{n}-x_{n}\Vert \rightarrow0 \quad (\text{as } n \rightarrow\infty). \end{aligned}$$
(3.3)

Step 4. In this step, we claim that \(\limsup_{n\rightarrow\infty }\langle x^{*}-f(x^{*}), x^{*}-x_{n}\rangle\leqslant0\), where \(x^{*}=P_{F(T)}f(x^{*})\).

Indeed, take a subsequence \(\{x_{n_{i}}\}\) of \(\{x_{n}\}\) such that

$$ \limsup_{n\rightarrow\infty}\bigl\langle x^{*}-f\bigl(x^{*}\bigr), x^{*}-x_{n}\bigr\rangle =\lim_{i\rightarrow\infty}\bigl\langle x^{*}-f \bigl(x^{*}\bigr), x^{*}-x_{n_{i}}\bigr\rangle . $$

Since \(\{x_{n}\}\) is bounded, \(\{x_{n_{i}}\}\) has a subsequence which converges weakly to some point p. Without loss of generality, we may assume that \(x_{n_{i}}\rightharpoonup p\). From \(\lim_{n\rightarrow\infty}\|x_{n}-Tx_{n}\|=0\) and Lemma 2.2 we have \(p=Tp\), that is, \(p\in F(T)\). This together with property (3) of the metric projection in Lemma 2.1 implies that

$$ \limsup_{n\rightarrow\infty}\bigl\langle x^{*}-f\bigl(x^{*}\bigr), x^{*}-x_{n}\bigr\rangle =\lim_{i\rightarrow\infty}\bigl\langle x^{*}-f \bigl(x^{*}\bigr), x^{*}-x_{n_{i}}\bigr\rangle =\bigl\langle x^{*}-f\bigl(x^{*} \bigr), x^{*}-p\bigr\rangle \leqslant0. $$

Step 5. Finally, we show that \(x_{n}\rightarrow x^{*}\) as \(n\rightarrow\infty\). Here again \(x^{*}\in F(T)\) is the unique fixed point of the contraction \(P_{F(T)}f\) or in other words, \(x^{*}=P_{F(T)}f(x^{*})\).

In fact, we have

$$\begin{aligned} \bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} =&\bigl\Vert \alpha_{n} f(x_{n})+(1-\alpha_{n})T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-x^{*}\bigr\Vert ^{2} \\ =&\bigl\Vert \alpha_{n} \bigl[f(x_{n})-x^{*}\bigr]+(1- \alpha_{n})\bigl[T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-x^{*}\bigr]\bigr\Vert ^{2} \\ =&\alpha_{n}^{2}\bigl\Vert f(x_{n})-x^{*}\bigr\Vert ^{2}+(1-\alpha_{n})^{2}\bigl\Vert T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-x^{*}\bigr\Vert ^{2} \\ &{} +2\alpha_{n}(1-\alpha_{n})\bigl\langle f(x_{n})-x^{*},T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-x^{*}\bigr\rangle \\ \leqslant&\alpha_{n}^{2}\bigl\Vert f(x_{n})-x^{*} \bigr\Vert ^{2}+(1-\alpha_{n})^{2}\bigl\Vert s_{n} x_{n}+(1-s_{n})x_{n+1}-x^{*}\bigr\Vert ^{2} \\ &{} +2\alpha_{n}(1-\alpha_{n})\bigl\langle f(x_{n})-f\bigl(x^{*}\bigr),T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-x^{*}\bigr\rangle \\ &{} +2\alpha_{n}(1-\alpha_{n})\bigl\langle f\bigl(x^{*} \bigr)-x^{*},T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-x^{*}\bigr\rangle \\ \leqslant& (1-\alpha_{n})^{2}\bigl\Vert s_{n} x_{n}+(1-s_{n})x_{n+1}-x^{*}\bigr\Vert ^{2} \\ &{} +2\alpha_{n}(1-\alpha_{n})\bigl\Vert f(x_{n})-f\bigl(x^{*}\bigr)\bigr\Vert \cdot\bigl\Vert T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-x^{*}\bigr\Vert +L_{n} \\ \leqslant& (1-\alpha_{n})^{2}\bigl\Vert s_{n} x_{n}+(1-s_{n})x_{n+1}-x^{*}\bigr\Vert ^{2} \\ &{} +2\theta\alpha_{n}(1-\alpha_{n})\bigl\Vert x_{n}-x^{*}\bigr\Vert \cdot\bigl\Vert s_{n} x_{n}+(1-s_{n})x_{n+1}-x^{*}\bigr\Vert +L_{n}, \end{aligned}$$

where

$$L_{n}:=\alpha_{n}^{2}\bigl\Vert f(x_{n})-x^{*}\bigr\Vert ^{2}+2\alpha_{n}(1- \alpha_{n})\bigl\langle f\bigl(x^{*}\bigr)-x^{*},T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-x^{*}\bigr\rangle . $$

It turns out that

$$\begin{aligned}& (1-\alpha_{n})^{2}\bigl\Vert s_{n} x_{n}+(1-s_{n})x_{n+1}-x^{*}\bigr\Vert ^{2} \\& \quad {}+2\theta\alpha _{n}(1-\alpha_{n})\bigl\Vert x_{n}-x^{*}\bigr\Vert \cdot\bigl\Vert s_{n} x_{n}+(1-s_{n})x_{n+1}-x^{*}\bigr\Vert +L_{n}-\bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} \geqslant0. \end{aligned}$$

Solving this quadratic inequality for \(\|s_{n} x_{n}+(1-s_{n})x_{n+1}-x^{*}\|\) yields

$$\begin{aligned}& \bigl\Vert s_{n} x_{n}+(1-s_{n})x_{n+1}-x^{*} \bigr\Vert \\& \quad \geqslant \frac{1}{2(1-\alpha _{n})^{2}} \Bigl\{ -2\theta\alpha_{n}(1- \alpha_{n})\bigl\Vert x_{n}-x^{*}\bigr\Vert \\& \qquad {}+\sqrt{4\theta^{2}\alpha_{n}^{2}(1- \alpha_{n})^{2}\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-4(1-\alpha_{n})^{2} \bigl(L_{n}- \bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} \bigr)} \Bigr\} \\& \quad = \frac{-\theta\alpha_{n}\Vert x_{n}-x^{*}\Vert +\sqrt{\theta^{2}\alpha_{n}^{2}\Vert x_{n}-x^{*}\Vert ^{2}-L_{n}+\Vert x_{n+1}-x^{*}\Vert ^{2} }}{1-\alpha_{n}}. \end{aligned}$$

This implies that

$$\begin{aligned}& s_{n}\bigl\Vert x_{n}-x^{*}\bigr\Vert +(1-s_{n}) \bigl\Vert x_{n+1}-x^{*}\bigr\Vert \\& \quad \geqslant\frac{-\theta\alpha _{n}\Vert x_{n}-x^{*}\Vert +\sqrt{\theta^{2}\alpha_{n}^{2}\Vert x_{n}-x^{*}\Vert ^{2}-L_{n}+\Vert x_{n+1}-x^{*}\Vert ^{2} }}{1-\alpha_{n}}, \end{aligned}$$

namely,

$$\begin{aligned}& (s_{n}-s_{n}\alpha_{n}+\theta\alpha_{n} )\bigl\Vert x_{n}-x^{*}\bigr\Vert +(1-s_{n}) (1- \alpha_{n})\bigl\Vert x_{n+1}-x^{*}\bigr\Vert \\& \quad \geqslant\sqrt {\theta^{2}\alpha_{n}^{2}\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-L_{n}+\bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2}}. \end{aligned}$$

Then

$$\begin{aligned}& \theta^{2}\alpha_{n}^{2}\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-L_{n}+\bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} \\& \quad \leqslant (s_{n}-s_{n}\alpha_{n}+\theta \alpha_{n} )^{2}\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+(1-s_{n})^{2}(1-\alpha_{n})^{2} \bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} \\& \qquad {}+2 (s_{n}-s_{n}\alpha_{n}+\theta \alpha_{n} ) (1-s_{n}) (1-\alpha_{n}) \bigl\Vert x_{n}-x^{*}\bigr\Vert \cdot\bigl\Vert x_{n+1}-x^{*}\bigr\Vert \\& \quad \leqslant (s_{n}-s_{n}\alpha_{n}+\theta \alpha_{n} )^{2}\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+(1-s_{n})^{2}(1-\alpha_{n})^{2} \bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} \\& \qquad {}+(s_{n}-s_{n}\alpha_{n}+\theta \alpha_{n} ) (1-s_{n}) (1-\alpha_{n}) \bigl[\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+\bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} \bigr], \end{aligned}$$

which is reduced to the inequality

$$\begin{aligned}& \bigl[1-(1-s_{n})^{2}(1-\alpha_{n})^{2}-(s_{n}-s_{n} \alpha_{n}+\theta\alpha_{n} ) (1-s_{n}) (1- \alpha_{n}) \bigr] \bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} \\& \quad \leqslant \bigl[(s_{n}-s_{n}\alpha_{n}+ \theta\alpha_{n} )^{2}+(s_{n}-s_{n} \alpha_{n}+\theta\alpha_{n} ) (1-s_{n}) (1- \alpha_{n})-\theta ^{2}\alpha_{n}^{2} \bigr]\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+L_{n}, \end{aligned}$$

that is,

$$\begin{aligned}& \bigl[1-(1-s_{n}) (1-\alpha_{n}) \bigl(1+(\theta-1) \alpha_{n} \bigr) \bigr] \bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} \\& \quad \leqslant \bigl[(s_{n}-s_{n}\alpha_{n}+ \theta\alpha_{n} ) \bigl(1+(\theta-1)\alpha_{n} \bigr)- \theta^{2}\alpha_{n}^{2} \bigr]\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+L_{n}. \end{aligned}$$

It follows that

$$\begin{aligned} \bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} \leqslant& \frac{(s_{n}-s_{n}\alpha_{n}+\theta\alpha _{n} ) (1+(\theta-1)\alpha_{n} )-\theta^{2}\alpha _{n}^{2}}{1-(1-s_{n})(1-\alpha_{n}) (1+(\theta-1)\alpha_{n} )}\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2} \\ &{} +\frac{L_{n}}{1-(1-s_{n})(1-\alpha_{n}) (1+(\theta-1)\alpha _{n} )}. \end{aligned}$$
(3.4)

Let

$$\begin{aligned} w_{n} :=&\frac{1}{\alpha_{n}} \biggl\{ 1-\frac{(s_{n}-s_{n}\alpha_{n}+\theta \alpha_{n} ) (1+(\theta-1)\alpha_{n} )-\theta^{2}\alpha _{n}^{2}}{1-(1-s_{n})(1-\alpha_{n}) (1+(\theta-1)\alpha_{n} )} \biggr\} \\ =&\frac{2(1-\theta)+(2\theta-1)\alpha_{n}}{1-(1-s_{n})(1-\alpha _{n}) (1+(\theta-1)\alpha_{n} )}. \end{aligned}$$

Since the sequence \(\{s_{n}\}\) satisfies \(0<\varepsilon\leqslant s_{n}\leqslant s_{n+1}<1\) for all \(n\geqslant0\), it is nondecreasing and bounded above, so \(\lim_{n\rightarrow \infty}s_{n}\) exists; write

$$\lim_{n\rightarrow\infty} s_{n}=s^{*}>0. $$

Then

$$\lim_{n\rightarrow\infty}w_{n}=\frac{2(1-\theta)}{s^{*}}>0. $$

Let \(\rho_{1}\) satisfy

$$0< \rho_{1}< \frac{2(1-\theta)}{s^{*}}, $$

then there exists an integer \(N_{1}\) sufficiently large that \(w_{n}>\rho_{1}\) for all \(n\geqslant N_{1}\). Hence, we have

$$\frac{(s_{n}-s_{n}\alpha_{n}+\theta\alpha_{n} ) (1+(\theta-1)\alpha _{n} )-\theta^{2}\alpha_{n}^{2}}{1-(1-s_{n})(1-\alpha_{n}) (1+(\theta -1)\alpha_{n} )}\leqslant1-\rho_{1} \alpha_{n} $$

for all \(n\geqslant N_{1}\). It turns out from (3.4) that, for all \(n\geqslant N_{1}\),

$$ \bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} \leqslant(1-\rho_{1} \alpha_{n})\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+\frac {L_{n}}{1-(1-s_{n})(1-\alpha_{n}) (1+(\theta-1)\alpha_{n} )}. $$
(3.5)

By \(\lim_{n\rightarrow\infty}\alpha_{n}=0\), (3.3), and Step 4, we have

$$\begin{aligned}& \limsup_{n\rightarrow\infty}\frac{L_{n}}{\rho_{1}\alpha_{n} [1-(1-s_{n})(1-\alpha_{n}) (1+(\theta-1)\alpha_{n} ) ]} \\& \quad =\limsup_{n\rightarrow\infty}\frac{\alpha_{n} \|f(x_{n})-x^{*}\|^{2}+2 (1-\alpha_{n})\langle f(x^{*})-x^{*},T (s_{n} x_{n}+(1-s_{n})x_{n+1} )-x^{*}\rangle}{\rho_{1} [1-(1-s_{n})(1-\alpha_{n}) (1+(\theta -1)\alpha_{n} ) ]} \\& \quad \leqslant0. \end{aligned}$$
(3.6)

From (3.5), (3.6), and Lemma 2.3, we can obtain

$$\lim_{n\rightarrow\infty}\bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2}=0, $$

namely, \(x_{n}\rightarrow x^{*}\) as \(n\rightarrow\infty\). This completes the proof. □

Theorem 3.2

Let C be a nonempty closed convex subset of the real Hilbert space H. Let \(T: C \rightarrow C\) be a nonexpansive mapping with \(F(T)\neq \emptyset\) and \(f: C \rightarrow C\) be a contraction with coefficient \(\theta\in[0,1)\). Pick any \(x_{0} \in C\), let \(\{x_{n}\}\) be a sequence generated by

$$ x_{n+1}=\alpha_{n}x_{n}+ \beta_{n} f(x_{n})+\gamma_{n}T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr), $$
(3.7)

where \(\{\alpha_{n}\}, \{\beta_{n}\}, \{\gamma_{n}\}, \{s_{n}\} \subset(0,1)\) satisfy the following conditions:

  1. (1)

    \(\alpha_{n}+\beta_{n}+\gamma_{n}=1\) and \(\lim_{n\rightarrow\infty }\gamma_{n}=1\);

  2. (2)

    \(\sum_{n=0}^{\infty}\beta_{n}=\infty\);

  3. (3)

    \(\sum_{n=0}^{\infty}|\alpha_{n+1}-\alpha_{n}|<\infty\) and \(\sum_{n=0}^{\infty}|\beta_{n+1}-\beta_{n}|<\infty\);

  4. (4)

    \(0<\varepsilon\leqslant s_{n}\leqslant s_{n+1}<1\) for all \(n\geqslant0\).

Then \(\{x_{n}\}\) converges strongly to a fixed point \(x^{*}\) of the nonexpansive mapping T, which is also the unique solution of the variational inequality

$$\bigl\langle (I-f)x,y-x\bigr\rangle \geqslant0,\quad \forall y\in F(T). $$

In other words, \(x^{*}\) is the unique fixed point of the contraction \(P_{F(T)}f\), that is, \(P_{F(T)}f(x^{*})=x^{*}\).
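Before turning to the proof, note that scheme (3.7) can also be run numerically: since T is nonexpansive, the inner map \(x\mapsto\alpha_{n}x_{n}+\beta_{n}f(x_{n})+\gamma_{n}T (s_{n}x_{n}+(1-s_{n})x )\) is a contraction with constant \(\gamma_{n}(1-s_{n})<1\). A minimal sketch with our own hypothetical parameter choices (\(T=\cos\), \(f(x)=x/2\), \(\beta_{n}=1/\sqrt{n+4}\), \(\alpha_{n}=\beta_{n}/2\), \(\gamma_{n}=1-\alpha_{n}-\beta_{n}\), \(s_{n}=1/2\), which satisfy conditions (1)-(4)):

```python
import math

def implicit_step(x_n, alpha, beta, gamma, s, T, f, tol=1e-12):
    """Solve x = alpha*x_n + beta*f(x_n) + gamma*T(s*x_n + (1-s)*x);
    the inner map is a contraction with constant gamma*(1-s) < 1."""
    x = x_n
    while True:
        x_new = alpha * x_n + beta * f(x_n) + gamma * T(s * x_n + (1 - s) * x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new

def scheme_37(T, f, x0, s=0.5, n_iter=5000):
    x = x0
    for n in range(n_iter):
        beta = 1.0 / math.sqrt(n + 4)   # sum beta_n = infinity
        alpha = beta / 2                # differences absolutely summable
        gamma = 1.0 - alpha - beta      # gamma_n -> 1
        x = implicit_step(x, alpha, beta, gamma, s, T, f)
    return x

x_star = scheme_37(math.cos, lambda x: 0.5 * x, x0=2.0)
print(x_star)  # approaches the fixed point of cos (~0.739)
```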

Proof

We divide the proof into five steps.

Step 1. Firstly, we show that \(\{x_{n}\}\) is bounded.

Indeed, taking \(p\in F(T)\) arbitrarily, we have

$$\begin{aligned} \Vert x_{n+1}-p\Vert =&\bigl\Vert \alpha_{n}x_{n}+ \beta_{n} f(x_{n})+\gamma_{n}T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-p\bigr\Vert \\ \leqslant& \alpha_{n}\Vert x_{n}-p\Vert + \beta_{n} \bigl\Vert f(x_{n})-p\bigr\Vert + \gamma_{n}\bigl\Vert T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-p\bigr\Vert \\ \leqslant& \alpha_{n}\Vert x_{n}-p\Vert + \beta_{n} \bigl\Vert f(x_{n})-f(p)\bigr\Vert + \beta_{n} \bigl\Vert f(p)-p\bigr\Vert \\ &{}+\gamma_{n}\bigl\Vert s_{n} x_{n}+(1-s_{n})x_{n+1}-p\bigr\Vert \\ \leqslant& (\alpha_{n}+\theta\beta_{n})\Vert x_{n}-p\Vert +\beta_{n} \bigl\Vert f(p)-p\bigr\Vert + \gamma_{n} \bigl[s_{n}\Vert x_{n}-p\Vert +(1-s_{n})\Vert x_{n+1}-p\Vert \bigr] \\ = & (\alpha_{n}+\theta\beta_{n}+\gamma_{n} s_{n})\Vert x_{n}-p\Vert +\gamma _{n}(1-s_{n}) \Vert x_{n+1}-p\Vert +\beta_{n} \bigl\Vert f(p)-p\bigr\Vert . \end{aligned}$$

It follows that

$$ \bigl[1-\gamma_{n}(1-s_{n}) \bigr] \|x_{n+1}-p\|\leqslant(\alpha_{n}+\theta \beta_{n}+ \gamma_{n} s_{n})\|x_{n}-p\|+\beta_{n} \bigl\Vert f(p)-p\bigr\Vert . $$
(3.8)

Since \(\gamma_{n},s_{n}\in(0,1)\), we have \(1-\gamma_{n}(1-s_{n})>0\). Moreover, by (3.8) and \(\alpha_{n}+\beta_{n}+\gamma_{n}=1\), we get

$$\begin{aligned} \Vert x_{n+1}-p\Vert \leqslant&\frac{\alpha_{n}+\theta\beta_{n}+\gamma_{n} s_{n}}{1-\gamma _{n}(1-s_{n})}\Vert x_{n}-p\Vert +\frac{\beta_{n}}{1-\gamma_{n}(1-s_{n})} \bigl\Vert f(p)-p\bigr\Vert \\ =& \biggl[1-\frac{1-\alpha_{n}-\gamma_{n}-\theta\beta_{n}}{1-\gamma _{n}(1-s_{n})} \biggr]\Vert x_{n}-p\Vert + \frac{\beta_{n}}{1-\gamma_{n}(1-s_{n})} \bigl\Vert f(p)-p\bigr\Vert \\ =& \biggl[1-\frac{\beta_{n}-\theta\beta_{n}}{1-\gamma_{n}(1-s_{n})} \biggr]\Vert x_{n}-p\Vert + \frac{\beta_{n}}{1-\gamma_{n}(1-s_{n})} \bigl\Vert f(p)-p\bigr\Vert \\ =& \biggl[1-\frac{\beta_{n}(1-\theta)}{1-\gamma_{n}(1-s_{n})} \biggr]\Vert x_{n}-p\Vert + \frac{\beta_{n}(1-\theta)}{1-\gamma_{n}(1-s_{n})} \biggl( \frac{1}{1-\theta} \bigl\Vert f(p)-p\bigr\Vert \biggr). \end{aligned}$$

Thus, we have

$$\|x_{n+1}-p\|\leqslant\max \biggl\{ \|x_{n}-p\|, \frac{1}{1-\theta} \bigl\Vert f(p)-p\bigr\Vert \biggr\} . $$

By induction, we obtain

$$\|x_{n}-p\|\leqslant\max \biggl\{ \|x_{0}-p\|, \frac{1}{1-\theta} \bigl\Vert f(p)-p\bigr\Vert \biggr\} ,\quad \forall n \geqslant0. $$

Hence, it turns out that \(\{x_{n}\}\) is bounded. Consequently, we deduce immediately that \(\{f(x_{n}) \}\) and \(\{T (s_{n}x_{n}+(1-s_{n})x_{n+1} ) \}\) are bounded.

Step 2. Next, we prove that \(\lim_{n\rightarrow\infty} \| x_{n+1}-x_{n}\|=0\).

To see this, we apply (3.7) to get

$$\begin{aligned} \Vert x_{n+1}-x_{n}\Vert =&\bigl\Vert \alpha_{n}x_{n}+\beta_{n} f(x_{n})+ \gamma_{n}T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr) \\ &{} - \bigl[\alpha_{n-1}x_{n-1}+\beta_{n-1} f(x_{n-1})+\gamma _{n-1}T \bigl(s_{n-1} x_{n-1}+(1-s_{n-1})x_{n} \bigr) \bigr]\bigr\Vert \\ =&\bigl\Vert \alpha_{n}(x_{n}-x_{n-1})+( \alpha_{n}-\alpha_{n-1})x_{n-1}+\beta _{n} \bigl[f(x_{n})-f(x_{n-1})\bigr] \\ &{}+(\beta_{n}- \beta_{n-1})f(x_{n-1}) \\ &{} +\gamma_{n} \bigl[T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-T \bigl(s_{n-1} x_{n-1}+(1-s_{n-1})x_{n} \bigr) \bigr] \\ &{} +(\gamma_{n}-\gamma_{n-1})T \bigl(s_{n-1} x_{n-1}+(1-s_{n-1})x_{n} \bigr)\bigr\Vert \\ =&\bigl\Vert \alpha_{n}(x_{n}-x_{n-1})+( \alpha_{n}-\alpha_{n-1})x_{n-1}+\beta _{n} \bigl[f(x_{n})-f(x_{n-1})\bigr] \\ &{}+(\beta_{n}- \beta_{n-1})f(x_{n-1}) \\ &{} +\gamma_{n} \bigl[T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-T \bigl(s_{n-1} x_{n-1}+(1-s_{n-1})x_{n} \bigr) \bigr] \\ &{} - \bigl[(\alpha_{n}-\alpha_{n-1})+(\beta_{n}- \beta_{n-1}) \bigr]T \bigl(s_{n-1} x_{n-1}+(1-s_{n-1})x_{n} \bigr)\bigr\Vert \\ =&\alpha_{n}\Vert x_{n}-x_{n-1}\Vert +| \alpha_{n}-\alpha_{n-1}|\cdot\bigl\Vert x_{n-1}-T \bigl(s_{n-1} x_{n-1}+(1-s_{n-1})x_{n} \bigr)\bigr\Vert \\ &{} +\beta_{n}\bigl\Vert f(x_{n})-f(x_{n-1}) \bigr\Vert +|\beta_{n}-\beta_{n-1}| \\ &{}\cdot\bigl\Vert f(x_{n-1})-T \bigl(s_{n-1} x_{n-1}+(1-s_{n-1})x_{n} \bigr)\bigr\Vert \\ &{} +\gamma_{n}\bigl\Vert T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-T \bigl(s_{n-1} x_{n-1}+(1-s_{n-1})x_{n} \bigr)\bigr\Vert \\ \leqslant&\alpha_{n}\Vert x_{n}-x_{n-1}\Vert +|\alpha_{n}-\alpha_{n-1}| M_{2}+\theta \beta_{n}\Vert x_{n}-x_{n-1}\Vert +| \beta_{n}-\beta_{n-1}| M_{2} \\ &{} +\gamma_{n}\bigl\Vert \bigl[s_{n} x_{n}+(1-s_{n})x_{n+1}\bigr]-\bigl[s_{n-1} x_{n-1}+(1-s_{n-1})x_{n}\bigr]\bigr\Vert \\ =&\alpha_{n}\Vert x_{n}-x_{n-1}\Vert +| \alpha_{n}-\alpha_{n-1}| M_{2}+\theta\beta _{n}\Vert x_{n}-x_{n-1}\Vert +| \beta_{n}-\beta_{n-1}| M_{2} \\ &{} +\gamma_{n}\bigl\Vert (1-s_{n}) 
(x_{n+1}-x_{n})+s_{n-1}(x_{n}-x_{n-1}) \bigr\Vert \\ \leqslant&\alpha_{n}\Vert x_{n}-x_{n-1}\Vert +|\alpha_{n}-\alpha_{n-1}| M_{2}+\theta \beta_{n}\Vert x_{n}-x_{n-1}\Vert +| \beta_{n}-\beta_{n-1}| M_{2} \\ &{} +\gamma_{n}(1-s_{n})\Vert x_{n+1}-x_{n} \Vert +\gamma_{n}s_{n-1}\Vert x_{n}-x_{n-1} \Vert \\ =&\gamma_{n}(1-s_{n})\Vert x_{n+1}-x_{n} \Vert +(\alpha_{n}+\theta\beta_{n}+\gamma _{n}s_{n-1})\Vert x_{n}-x_{n-1}\Vert \\ &{} +\bigl( |\alpha_{n}-\alpha_{n-1}|+|\beta_{n}- \beta_{n-1}| \bigr)M_{2}, \end{aligned}$$

where \(M_{2}>0\) is a constant such that

$$M_{2}\geqslant\max \Bigl\{ \sup_{n\geqslant0}\bigl\Vert x_{n}-T \bigl(s_{n}x_{n}+(1-s_{n})x_{n+1} \bigr)\bigr\Vert , \sup_{n\geqslant0}\bigl\Vert f(x_{n})-T \bigl(s_{n}x_{n}+(1-s_{n})x_{n+1} \bigr)\bigr\Vert \Bigr\} . $$

It turns out that

$$\begin{aligned}& \bigl[1-\gamma_{n}(1-s_{n})\bigr]\|x_{n+1}-x_{n} \| \\& \quad \leqslant(\alpha_{n}+\theta\beta _{n}+\gamma_{n}s_{n-1}) \|x_{n}-x_{n-1}\| + \bigl(\vert \alpha_{n}- \alpha_{n-1}\vert +|\beta_{n}-\beta_{n-1}| \bigr)M_{2}, \end{aligned}$$

that is,

$$\begin{aligned} \Vert x_{n+1}-x_{n}\Vert \leqslant& \frac{\alpha_{n}+\theta\beta_{n}+\gamma _{n}s_{n-1}}{1-\gamma_{n}(1-s_{n})} \Vert x_{n}-x_{n-1}\Vert \\ &{}+\frac{M_{2}}{1-\gamma_{n}(1-s_{n})}\bigl(| \alpha_{n}-\alpha_{n-1}|+|\beta _{n}- \beta_{n-1}|\bigr) \\ =& \biggl[1-\frac{\beta_{n}(1-\theta)+\gamma_{n}(s_{n}-s_{n-1})}{1-\gamma _{n}(1-s_{n})} \biggr]\Vert x_{n}-x_{n-1} \Vert \\ &{}+\frac{ M_{2}}{1-\gamma_{n}(1-s_{n})}\bigl(|\alpha_{n}-\alpha_{n-1}|+|\beta _{n}-\beta_{n-1}|\bigr). \end{aligned}$$

Noting that \(0<\varepsilon\leqslant s_{n-1}\leqslant s_{n}<1\), we have

$$0< \varepsilon\leqslant s_{n}< 1-\gamma_{n}(1-s_{n})< 1 $$

and

$$\frac{\beta_{n}(1-\theta)+\gamma_{n}(s_{n}-s_{n-1})}{1-\gamma _{n}(1-s_{n})}\geqslant\beta_{n}(1-\theta). $$

Thus,

$$ \|x_{n+1}-x_{n}\|\leqslant \bigl[1-\beta_{n}(1- \theta)\bigr]\|x_{n}-x_{n-1}\| +\frac{ M_{2}}{\varepsilon}\bigl( \vert \alpha_{n}-\alpha_{n-1}\vert +|\beta_{n}- \beta_{n-1}|\bigr). $$

Since \(\sum_{n=0}^{\infty}\beta_{n}=\infty\), \(\sum_{n=0}^{\infty}|\alpha_{n+1}-\alpha_{n}|<\infty\), and \(\sum_{n=0}^{\infty}|\beta _{n+1}-\beta_{n}|<\infty\), by Lemma 2.3, we can get \(\| x_{n+1}-x_{n}\|\rightarrow0\) as \(n\rightarrow\infty\).

Step 3. Now, we prove that \(\lim_{n\rightarrow\infty} \|x_{n}-Tx_{n}\|=0\).

In fact, we can see that

$$\begin{aligned} \Vert x_{n}-Tx_{n}\Vert \leqslant&\Vert x_{n}-x_{n+1}\Vert +\bigl\Vert x_{n+1}-T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr) \bigr\Vert \\ &{}+\bigl\Vert T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-Tx_{n}\bigr\Vert \\ \leqslant&\Vert x_{n}-x_{n+1}\Vert +\bigl\Vert \alpha_{n} \bigl[x_{n}-T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr) \bigr] \\ &{}+ \beta_{n} \bigl[f(x_{n})-T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr) \bigr]\bigr\Vert +\bigl\Vert \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-x_{n}\bigr\Vert \\ \leqslant&\Vert x_{n}-x_{n+1}\Vert +\alpha_{n} \bigl\Vert x_{n}-T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)\bigr\Vert \\ &{} +\beta_{n}\bigl\Vert f(x_{n})-T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr) \bigr\Vert +(1-s_{n})\Vert x_{n+1}-x_{n}\Vert \\ \leqslant&(2-s_{n})\Vert x_{n}-x_{n+1}\Vert +( \alpha_{n}+\beta_{n})M_{2} \\ \leqslant&2\Vert x_{n}-x_{n+1}\Vert +(1- \gamma_{n})M_{2}. \end{aligned}$$

Then, by \(\lim_{n\rightarrow\infty}\|x_{n+1}-x_{n}\|=0\) and \(\lim_{n\rightarrow\infty}\gamma_{n}=1\), we get \(\|x_{n}-Tx_{n}\|\rightarrow0\) as \(n\rightarrow\infty\). Similarly to (3.3), we also have

$$ \bigl\Vert T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-x_{n}\bigr\Vert \rightarrow0\quad (\text{as } n\rightarrow \infty). $$
(3.9)

Step 4. In this step, we claim that \(\limsup_{n\rightarrow\infty }\langle x^{*}-f(x^{*}), x^{*}-x_{n}\rangle\leqslant0\), where \(x^{*}=P_{F(T)}f(x^{*})\).

The proof is the same as that of Step 4 in Theorem 3.1, so we omit it here.

Step 5. Finally, we show that \(x_{n}\rightarrow x^{*}\) as \(n\rightarrow\infty\). Here again \(x^{*}\in F(T)\) denotes the unique fixed point of the contraction \(P_{F(T)}f\); in other words, \(x^{*}=P_{F(T)}f(x^{*})\).

In fact, we have

$$\begin{aligned} \bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} =&\bigl\Vert \alpha_{n}x_{n}+\beta_{n} f(x_{n})+ \gamma_{n}T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-x^{*}\bigr\Vert ^{2} \\ =&\bigl\Vert \alpha_{n}\bigl[x_{n}-x^{*}\bigr]+ \beta_{n}\bigl[ f(x_{n})-x^{*}\bigr]+\gamma_{n}\bigl[T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-x^{*}\bigr]\bigr\Vert ^{2} \\ =&\alpha_{n}^{2}\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+\beta_{n}^{2}\bigl\Vert f(x_{n})-x^{*}\bigr\Vert ^{2}+\gamma_{n}^{2} \bigl\Vert T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-x^{*}\bigr\Vert ^{2} \\ &{} +2\alpha_{n}\beta_{n}\bigl\langle x_{n}-x^{*}, f(x_{n})-x^{*}\bigr\rangle \\ &{}+2\alpha _{n}\gamma_{n} \bigl\langle x_{n}-x^{*},T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-x^{*}\bigr\rangle \\ &{} +2\beta_{n}\gamma_{n}\bigl\langle f(x_{n})-x^{*},T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-x^{*}\bigr\rangle \\ \leqslant&\alpha_{n}^{2}\bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+\beta_{n}^{2}\bigl\Vert f(x_{n})-x^{*}\bigr\Vert ^{2}+\gamma _{n}^{2} \bigl\Vert s_{n} x_{n}+(1-s_{n})x_{n+1}-x^{*} \bigr\Vert ^{2} \\ &{} +2\alpha_{n}\beta_{n}\bigl\langle x_{n}-x^{*}, f(x_{n})-x^{*}\bigr\rangle \\ &{}+2\alpha _{n}\gamma_{n} \bigl\Vert x_{n}-x^{*}\bigr\Vert \cdot\bigl\Vert T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-x^{*}\bigr\Vert \\ &{} +2\beta_{n}\gamma_{n}\bigl\langle f(x_{n})-f \bigl(x^{*}\bigr),T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-x^{*}\bigr\rangle \\ &{} +2\beta_{n}\gamma_{n}\bigl\langle f\bigl(x^{*} \bigr)-x^{*},T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-x^{*}\bigr\rangle \\ \leqslant& \alpha_{n}^{2}\bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+\gamma_{n}^{2}\bigl\Vert s_{n} x_{n}+(1-s_{n})x_{n+1}-x^{*}\bigr\Vert ^{2} \\ &{} +2\alpha_{n}\gamma_{n} \bigl\Vert x_{n}-x^{*}\bigr\Vert \cdot\bigl\Vert s_{n} x_{n}+(1-s_{n})x_{n+1}-x^{*}\bigr\Vert \\ &{} +2\beta_{n}\gamma_{n}\bigl\Vert f(x_{n})-f \bigl(x^{*}\bigr)\bigr\Vert \cdot\bigl\Vert T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr) -x^{*}\bigr\Vert +K_{n} \\ \leqslant& 
\alpha_{n}^{2}\bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+\gamma_{n}^{2}\bigl\Vert s_{n} x_{n}+(1-s_{n})x_{n+1}-x^{*}\bigr\Vert ^{2} \\ &{} +2\alpha_{n}\gamma_{n} \bigl\Vert x_{n}-x^{*}\bigr\Vert \cdot\bigl\Vert s_{n} x_{n}+(1-s_{n})x_{n+1}-x^{*}\bigr\Vert \\ &{} +2\theta\beta_{n}\gamma_{n}\bigl\Vert x_{n}-x^{*}\bigr\Vert \cdot\bigl\Vert s_{n} x_{n}+(1-s_{n})x_{n+1} -x^{*}\bigr\Vert +K_{n} \\ =&\alpha_{n}^{2}\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+\gamma_{n}^{2}\bigl\Vert s_{n} x_{n}+(1-s_{n})x_{n+1}-x^{*}\bigr\Vert ^{2} \\ &{} +2\gamma_{n}(\alpha_{n}+\theta\beta_{n}) \bigl\Vert x_{n}-x^{*}\bigr\Vert \cdot\bigl\Vert s_{n} x_{n}+(1-s_{n})x_{n+1}-x^{*}\bigr\Vert +K_{n}, \end{aligned}$$

where

$$\begin{aligned} K_{n} :=&\beta_{n}^{2}\bigl\Vert f(x_{n})-x^{*}\bigr\Vert ^{2}+2\alpha_{n} \beta_{n}\bigl\langle x_{n}-x^{*}, f(x_{n})-x^{*}\bigr\rangle \\ &{}+2\beta_{n}\gamma_{n}\bigl\langle f\bigl(x^{*} \bigr)-x^{*},T \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr)-x^{*}\bigr\rangle . \end{aligned}$$

It turns out that

$$\begin{aligned}& \gamma_{n}^{2}\bigl\Vert s_{n} x_{n}+(1-s_{n})x_{n+1}-x^{*}\bigr\Vert ^{2} \\& \quad {}+2\gamma_{n}(\alpha _{n}+\theta \beta_{n}) \bigl\Vert x_{n}-x^{*}\bigr\Vert \cdot\bigl\Vert s_{n} x_{n}+(1-s_{n})x_{n+1}-x^{*} \bigr\Vert \\& \quad {}+K_{n}+\alpha_{n}^{2}\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-\bigl\Vert x_{n+1}-x^{*} \bigr\Vert ^{2}\geqslant0. \end{aligned}$$

Solving this quadratic inequality for \(\|s_{n} x_{n}+(1-s_{n})x_{n+1}-x^{*}\|\) yields

$$\begin{aligned}& \bigl\Vert s_{n} x_{n}+(1-s_{n})x_{n+1}-x^{*} \bigr\Vert \\& \quad \geqslant\frac{1}{2\gamma_{n}^{2}} \Bigl\{ -2\gamma_{n}(\alpha _{n}+\theta\beta_{n}) \bigl\Vert x_{n}-x^{*}\bigr\Vert \\& \qquad {}+\sqrt{4\gamma_{n}^{2}( \alpha_{n}+\theta\beta_{n})^{2} \bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-4\gamma_{n}^{2} \bigl(K_{n}+\alpha_{n}^{2}\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-\bigl\Vert x_{n+1}-x^{*} \bigr\Vert ^{2} \bigr)} \Bigr\} \\& \quad =\frac{1}{\gamma_{n}} \Bigl[-(\alpha_{n}+\theta \beta_{n}) \bigl\Vert x_{n}-x^{*}\bigr\Vert \\& \qquad {}+\sqrt {(\alpha_{n}+\theta\beta_{n})^{2} \bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-K_{n}- \alpha_{n}^{2}\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+\bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2}} \Bigr]. \end{aligned}$$

Since \(\Vert s_{n} x_{n}+(1-s_{n})x_{n+1}-x^{*}\Vert \leqslant s_{n}\Vert x_{n}-x^{*}\Vert +(1-s_{n})\Vert x_{n+1}-x^{*}\Vert\) by the convexity of the norm, this implies that

$$\begin{aligned}& s_{n}\bigl\Vert x_{n}-x^{*}\bigr\Vert +(1-s_{n}) \bigl\Vert x_{n+1}-x^{*}\bigr\Vert \\& \quad \geqslant\frac{1}{\gamma_{n}} \Bigl[-(\alpha_{n}+\theta \beta_{n}) \bigl\Vert x_{n}-x^{*}\bigr\Vert \\& \qquad {}+\sqrt {(\alpha_{n}+\theta\beta_{n})^{2} \bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-K_{n}- \alpha_{n}^{2}\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+\bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2}} \Bigr], \end{aligned}$$

namely,

$$\begin{aligned}& (s_{n}\gamma_{n}+\alpha_{n}+\theta \beta_{n})\bigl\Vert x_{n}-x^{*}\bigr\Vert +(1-s_{n})\gamma _{n}\bigl\Vert x_{n+1}-x^{*}\bigr\Vert \\& \quad \geqslant\sqrt{(\alpha_{n}+\theta\beta_{n})^{2} \bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-K_{n}- \alpha_{n}^{2}\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+\bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2}}. \end{aligned}$$

Then

$$\begin{aligned}& (\alpha_{n}+\theta\beta_{n})^{2} \bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-K_{n}- \alpha_{n}^{2}\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+\bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} \\& \quad \leqslant(s_{n}\gamma_{n}+\alpha_{n}+\theta \beta_{n})^{2}\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+(1-s_{n})^{2}\gamma_{n}^{2} \bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} \\& \qquad {}+2(s_{n}\gamma_{n}+\alpha_{n}+\theta \beta_{n}) (1-s_{n})\gamma_{n}\bigl\Vert x_{n}-x^{*}\bigr\Vert \cdot\bigl\Vert x_{n+1}-x^{*}\bigr\Vert \\& \quad \leqslant(s_{n}\gamma_{n}+\alpha_{n}+\theta \beta_{n})^{2}\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+(1-s_{n})^{2}\gamma_{n}^{2} \bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} \\& \qquad {}+(s_{n}\gamma_{n}+\alpha_{n}+\theta \beta_{n}) (1-s_{n})\gamma_{n} \bigl[\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+\bigl\Vert x_{n+1}-x^{*} \bigr\Vert ^{2} \bigr], \end{aligned}$$

which reduces to the inequality

$$\begin{aligned}& \bigl[1-(1-s_{n})^{2}\gamma_{n}^{2}-(s_{n} \gamma_{n}+\alpha_{n}+\theta\beta _{n}) (1-s_{n})\gamma_{n} \bigr] \bigl\Vert x_{n+1}-x^{*} \bigr\Vert ^{2} \\& \quad \leqslant \bigl[(s_{n}\gamma_{n}+\alpha_{n}+ \theta\beta_{n})^{2}+(s_{n}\gamma _{n}+ \alpha_{n}+\theta\beta_{n}) (1-s_{n}) \gamma_{n}+\alpha_{n}^{2}-(\alpha _{n}+ \theta\beta_{n})^{2} \bigr] \\& \qquad {}\times\bigl\Vert x_{n}-x^{*} \bigr\Vert ^{2}+K_{n}, \end{aligned}$$

that is,

$$\begin{aligned}& \bigl[1-(1-s_{n})\gamma_{n} \bigl(1+(\theta-1) \beta_{n} \bigr) \bigr] \bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} \\& \quad \leqslant \bigl[(s_{n}\gamma_{n}+\alpha_{n}+ \theta\beta_{n}) \bigl(1+(\theta-1)\beta_{n} \bigr) -2\theta \alpha_{n}\beta_{n}-\theta^{2} \beta _{n}^{2} \bigr]\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+K_{n}. \end{aligned}$$

It follows that

$$\begin{aligned} \bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} \leqslant& \frac{(s_{n}\gamma_{n}+\alpha_{n}+\theta \beta_{n}) (1+(\theta-1)\beta_{n} ) -2\theta\alpha_{n}\beta _{n}-\theta^{2} \beta_{n}^{2}}{1-(1-s_{n})\gamma_{n} (1+(\theta-1)\beta _{n} )}\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2} \\ &{} +\frac{K_{n}}{1-(1-s_{n})\gamma_{n} (1+(\theta-1)\beta_{n} )}. \end{aligned}$$
(3.10)

Let

$$\begin{aligned} y_{n} :=&\frac{1}{\beta_{n}} \biggl\{ 1-\frac{(s_{n}\gamma_{n}+\alpha _{n}+\theta\beta_{n}) (1+(\theta-1)\beta_{n} ) -2\theta\alpha _{n}\beta_{n}-\theta^{2} \beta_{n}^{2}}{1-(1-s_{n})\gamma_{n} (1+(\theta -1)\beta_{n} )} \biggr\} \\ =&\frac{2(1-\theta)+2\theta\alpha_{n}+(2\theta-1)\beta_{n}}{1-(1-s_{n})\gamma_{n} (1+(\theta-1)\beta_{n} )}, \end{aligned}$$

where the second equality uses the identity \(\gamma_{n}+\alpha_{n}+\theta\beta_{n}=1+(\theta-1)\beta_{n}\), which follows from \(\alpha_{n}+\beta_{n}+\gamma_{n}=1\).

Since the sequence \(\{s_{n}\}\) satisfies \(0<\varepsilon\leqslant s_{n}\leqslant s_{n+1}<1\) for all \(n\geqslant0\), the limit \(\lim_{n\rightarrow \infty}s_{n}\) exists; write

$$\lim_{n\rightarrow\infty} s_{n}=s^{*}\geqslant\varepsilon>0. $$

Then

$$\lim_{n\rightarrow\infty}y_{n}=\frac{2(1-\theta) }{s^{*}}>0. $$

Let \(\rho_{2}\) satisfy

$$0< \rho_{2}< \frac{2(1-\theta) }{s^{*}}, $$

then there exists a sufficiently large integer \(N_{2}\) such that \(y_{n}>\rho_{2}\) for all \(n\geqslant N_{2}\). Hence, we have

$$\frac{(s_{n}\gamma_{n}+\alpha_{n}+\theta\beta_{n}) (1+(\theta -1)\beta_{n} ) -2\theta\alpha_{n}\beta_{n}-\theta^{2} \beta _{n}^{2}}{1-(1-s_{n})\gamma_{n} (1+(\theta-1)\beta_{n} )}\leqslant 1-\rho_{2}\beta_{n} $$

for all \(n\geqslant N_{2}\). It then follows from (3.10) that, for all \(n\geqslant N_{2}\),

$$ \bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} \leqslant(1-\rho_{2} \beta_{n})\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+\frac {K_{n}}{1-(1-s_{n})\gamma_{n} (1+(\theta-1)\beta_{n} )}. $$
(3.11)

By \(\lim_{n\rightarrow\infty}\alpha_{n}=\lim_{n\rightarrow\infty }\beta_{n}=0\), \(\lim_{n\rightarrow\infty}\gamma_{n}=1\), (3.9), and Step 4, we have

$$\begin{aligned}& \limsup_{n\rightarrow\infty}\frac{K_{n}}{\rho_{2} \beta_{n} [1-(1-s_{n})\gamma_{n} (1+(\theta-1)\beta_{n} ) ]} \\& \quad =\limsup_{n\rightarrow\infty}\biggl(\frac{\beta_{n}\| f(x_{n})-x^{*}\| ^{2}+2\alpha_{n}\langle x_{n}-x^{*}, f(x_{n})-x^{*}\rangle}{\rho_{2} [1-(1-s_{n})\gamma_{n} (1+(\theta-1)\beta_{n} ) ]} \\& \qquad {} +\frac{2\gamma_{n}\langle f(x^{*})-x^{*},T (s_{n} x_{n}+(1-s_{n})x_{n+1} )-x^{*}\rangle}{\rho_{2} [1-(1-s_{n})\gamma_{n} (1+(\theta-1)\beta_{n} ) ]}\biggr) \\& \quad \leqslant0. \end{aligned}$$
(3.12)

From (3.11), (3.12), and Lemma 2.2, we obtain

$$\lim_{n\rightarrow\infty}\bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2}=0, $$

namely, \(x_{n}\rightarrow x^{*}\) as \(n\rightarrow\infty\). This completes the proof. □
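The iteration of Theorem 3.2 is implicit, since \(x_{n+1}\) appears on both sides; at each step the implicit equation can be solved by an inner Picard iteration, which converges because the inner map is a \(\gamma_{n}(1-s_{n})\)-contraction. The following Python sketch illustrates this in \(\mathbb{R}^{2}\) under purely illustrative assumptions (not taken from the paper): T is the metric projection onto the closed unit ball, a nonexpansive mapping whose fixed-point set is the ball itself, and \(f(x)=\frac{1}{2}x+c\) is a contraction with \(\theta=\frac{1}{2}\).

```python
import numpy as np

def T(x):
    """Metric projection onto the closed unit ball: a nonexpansive mapping
    whose fixed-point set F(T) is the ball itself."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

c = np.array([0.3, 0.1])

def f(x):
    """Contraction with coefficient theta = 1/2."""
    return 0.5 * x + c

def implicit_step(xn, alpha, beta, gamma, s, tol=1e-12):
    """Solve y = alpha*xn + beta*f(xn) + gamma*T(s*xn + (1 - s)*y) for y.
    The inner map is a gamma*(1 - s)-contraction, so Picard iteration
    on it converges."""
    y = xn
    for _ in range(200):
        y_new = alpha * xn + beta * f(xn) + gamma * T(s * xn + (1.0 - s) * y)
        if np.linalg.norm(y_new - y) < tol:
            break
        y = y_new
    return y_new

x = np.array([5.0, -3.0])        # arbitrary starting point x_0
for n in range(5000):
    beta = 1.0 / (n + 2)         # sum of beta_n diverges, variation summable
    alpha = 1.0 / (n + 2) ** 2   # variation summable
    gamma = 1.0 - alpha - beta   # gamma_n -> 1
    x = implicit_step(x, alpha, beta, gamma, s=0.5)

print(x)
```

Since \(2c=(0.6,0.2)\) lies in the interior of the ball and \(f(2c)=2c\), the limit \(x^{*}=P_{F(T)}f(x^{*})\) is \(2c\); with \(\beta_{n}\sim1/n\) the error decays roughly like \(1/n\), so the iterates approach it slowly.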

4 Application

4.1 A more general system of variational inequalities

Let C be a nonempty closed convex subset of the real Hilbert space H and \(\{A_{i}\}_{i=1}^{N}:C\rightarrow H\) be a family of mappings. In [3], Cai and Bu considered the problem of finding \((x_{1}^{*},x_{2}^{*},\ldots, x_{N}^{*})\in C\times C\times \cdots\times C \) such that

$$ \left \{ \textstyle\begin{array}{l} \langle\lambda_{N}A_{N}x_{N}^{*}+x_{1}^{*}-x_{N}^{*},x-x_{1}^{*}\rangle\geqslant0,\quad \forall x\in C, \\ \langle\lambda_{N-1}A_{N-1}x_{N-1}^{*}+x_{N}^{*}-x_{N-1}^{*},x-x_{N}^{*}\rangle \geqslant0,\quad \forall x\in C, \\ \ldots, \\ \langle\lambda_{2}A_{2}x_{2}^{*}+x_{3}^{*}-x_{2}^{*},x-x_{3}^{*}\rangle \geqslant0,\quad \forall x\in C, \\ \langle\lambda_{1}A_{1}x_{1}^{*}+x_{2}^{*}-x_{1}^{*},x-x_{2}^{*}\rangle \geqslant0, \quad \forall x\in C. \end{array}\displaystyle \right . $$
(4.1)

The system (4.1) can be rewritten as

$$ \left \{ \textstyle\begin{array}{l} \langle x_{1}^{*}-(I-\lambda_{N}A_{N})x_{N}^{*},x-x_{1}^{*}\rangle\geqslant0,\quad \forall x\in C, \\ \langle x_{N}^{*}-(I-\lambda_{N-1}A_{N-1})x_{N-1}^{*},x-x_{N}^{*}\rangle\geqslant0,\quad \forall x\in C, \\ \ldots, \\ \langle x_{3}^{*}-(I-\lambda_{2}A_{2})x_{2}^{*},x-x_{3}^{*}\rangle\geqslant0, \quad \forall x\in C, \\ \langle x_{2}^{*}-(I-\lambda_{1}A_{1})x_{1}^{*},x-x_{2}^{*}\rangle\geqslant0,\quad \forall x\in C, \end{array}\displaystyle \right . $$

which is called a more general system of variational inequalities in Hilbert spaces, where \(\lambda_{i}>0\) for all \(i\in\{1, 2, \ldots, N\}\). We also have the following lemmas.

Lemma 4.1

[3]

Let C be a nonempty closed convex subset of the real Hilbert space H. For \(i=1,2,\ldots,N\), let \(A_{i}: C\rightarrow H\) be \(\delta_{i}\)-inverse-strongly monotone for some positive real number \(\delta_{i}\), namely,

$$\langle A_{i}x-A_{i}y,x-y\rangle\geqslant \delta_{i} \| A_{i}x-A_{i}y\|^{2},\quad \forall x, y\in C. $$

Let \(G: C\rightarrow C\) be a mapping defined by

$$ G(x)=P_{C}(I-\lambda_{N}A_{N})P_{C}(I- \lambda_{N-1}A_{N-1})\cdots P_{C}(I- \lambda_{2}A_{2})P_{C}(I-\lambda_{1}A_{1})x, \quad \forall x \in C. $$
(4.2)

If \(0 <\lambda_{i}\leqslant2\delta_{i}\) for all \(i\in\{1,2,\ldots,N\} \), then G is nonexpansive.

Lemma 4.2

[4]

Let C be a nonempty closed convex subset of the real Hilbert space H. Let \(A_{i}:C\rightarrow H\) be a nonlinear mapping, where \(i=1,2,\ldots,N\). For given \(x_{i}^{*}\in C\), \(i=1,2,\ldots,N\), \((x_{1}^{*},x_{2}^{*},\ldots,x_{N}^{*})\) is a solution of the problem (4.1) if and only if

$$ x_{1}^{*}=P_{C}(I-\lambda_{N}A_{N})x_{N}^{*}, \qquad x_{i}^{*}=P_{C}(I-\lambda _{i-1}A_{i-1})x_{i-1}^{*}, \quad i=2,3,\ldots,N, $$
(4.3)

that is,

$$x_{1}^{*}=P_{C}(I-\lambda_{N}A_{N})P_{C}(I- \lambda_{N-1}A_{N-1})\cdots P_{C}(I- \lambda_{2}A_{2})P_{C}(I-\lambda_{1}A_{1})x_{1}^{*}. $$

From Lemma 4.2, we know that \(x_{1}^{*}=G(x_{1}^{*})\), that is, \(x_{1}^{*}\) is a fixed point of the mapping G, where G is defined by (4.2). Moreover, once the fixed point \(x_{1}^{*}\) is found, the remaining points are easily obtained from (4.3); in other words, the problem (4.1) is solved. Applying Theorems 3.1 and 3.2, we get the results below.

Theorem 4.1

Let C be a nonempty closed convex subset of the real Hilbert space H. For \(i=1,2,\ldots,N\), let \(A_{i}: C\rightarrow H\) be \(\delta _{i}\)-inverse-strongly monotone for some positive real number \(\delta_{i}\) with \(F(G)\neq\emptyset\), where \(G: C\rightarrow C\) is defined by

$$G(x)=P_{C}(I-\lambda_{N}A_{N})P_{C}(I- \lambda_{N-1}A_{N-1})\cdots P_{C}(I- \lambda_{2}A_{2})P_{C}(I-\lambda_{1}A_{1})x, \quad \forall x \in C. $$

Let \(f: C \rightarrow C\) be a contraction with coefficient \(\theta\in[0,1)\). Pick any \(x_{0} \in C\), let \(\{x_{n}\}\) be a sequence generated by

$$ x_{n+1}=\alpha_{n} f(x_{n})+(1- \alpha_{n})G \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr), $$

where \(\lambda_{i}\in(0,2\delta_{i})\), \(i=1,2,\ldots,N\), \(\{\alpha_{n}\} , \{s_{n}\} \subset(0,1)\), satisfying the following conditions:

  1. (1)

    \(\lim_{n\rightarrow\infty}\alpha_{n}=0\);

  2. (2)

    \(\sum_{n=0}^{\infty}\alpha_{n}=\infty\);

  3. (3)

    \(\sum_{n=0}^{\infty}|\alpha_{n+1}-\alpha_{n}|<\infty\);

  4. (4)

    \(0<\varepsilon\leqslant s_{n}\leqslant s_{n+1}<1\) for all \(n\geqslant0\).

Then \(\{x_{n}\}\) converges strongly to a fixed point \(x^{*}\) of the nonexpansive mapping G, which is also the unique solution of the variational inequality

$$\bigl\langle (I-f)x,y-x\bigr\rangle \geqslant0,\quad \forall y\in F(G). $$

In other words, \(x^{*}\) is the unique fixed point of the contraction \(P_{F(G)}f\), that is, \(P_{F(G)}f(x^{*})=x^{*}\).

Theorem 4.2

Let C be a nonempty closed convex subset of the real Hilbert space H. For \(i=1,2,\ldots,N\), let \(A_{i}: C\rightarrow H\) be \(\delta _{i}\)-inverse-strongly monotone for some positive real number \(\delta_{i}\) with \(F(G)\neq\emptyset\), where \(G: C\rightarrow C\) is defined by

$$G(x)=P_{C}(I-\lambda_{N}A_{N})P_{C}(I- \lambda_{N-1}A_{N-1})\cdots P_{C}(I- \lambda_{2}A_{2})P_{C}(I-\lambda_{1}A_{1})x, \quad \forall x \in C. $$

Let \(f: C \rightarrow C\) be a contraction with coefficient \(\theta\in[0,1)\). Pick any \(x_{0} \in C\), let \(\{x_{n}\}\) be a sequence generated by

$$ x_{n+1}=\alpha_{n}x_{n}+\beta_{n} f(x_{n})+\gamma_{n}G \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr), $$

where \(\lambda_{i}\in(0,2\delta_{i})\), \(i=1,2,\ldots,N\), \(\{\alpha_{n}\} , \{\beta_{n}\}, \{\gamma_{n}\}, \{s_{n}\} \subset(0,1)\), satisfying the following conditions:

  1. (1)

    \(\alpha_{n}+\beta_{n}+\gamma_{n}=1\) and \(\lim_{n\rightarrow\infty }\gamma_{n}=1\);

  2. (2)

    \(\sum_{n=0}^{\infty}\beta_{n}=\infty\);

  3. (3)

    \(\sum_{n=0}^{\infty}|\alpha_{n+1}-\alpha_{n}|<\infty\) and \(\sum_{n=0}^{\infty}|\beta_{n+1}-\beta_{n}|<\infty\);

  4. (4)

    \(0<\varepsilon\leqslant s_{n}\leqslant s_{n+1}<1\) for all \(n\geqslant0\).

Then \(\{x_{n}\}\) converges strongly to a fixed point \(x^{*}\) of the nonexpansive mapping G, which is also the unique solution of the variational inequality

$$\bigl\langle (I-f)x,y-x\bigr\rangle \geqslant0,\quad \forall y\in F(G). $$

In other words, \(x^{*}\) is the unique fixed point of the contraction \(P_{F(G)}f\), that is, \(P_{F(G)}f(x^{*})=x^{*}\).

4.2 The constrained convex minimization problem

Next, we consider the following constrained convex minimization problem:

$$ \min_{x\in C}\varphi(x), $$
(4.4)

where \(\varphi:C\rightarrow R\) is a real-valued convex function. We assume that the problem (4.4) is consistent (i.e., its solution set is nonempty) and let Ω denote its solution set.

For the minimization problem (4.4), if φ is (Fréchet) differentiable, then we have the following lemma.

Lemma 4.3

(Optimality condition) [5]

A necessary condition of optimality for a point \(x^{*}\in C\) to be a solution of the minimization problem (4.4) is that \(x^{*}\) solves the variational inequality

$$ \bigl\langle \nabla\varphi\bigl(x^{*}\bigr),x-x^{*}\bigr\rangle \geqslant0,\quad \forall x\in C. $$
(4.5)

Equivalently, \(x^{*}\in C\) solves the fixed point equation

$$ x^{*}=P_{C}\bigl(x^{*}-\lambda\nabla\varphi\bigl(x^{*}\bigr)\bigr) $$

for every constant \(\lambda>0\). If, in addition, φ is convex, then the optimality condition (4.5) is also sufficient.

It is well known that the mapping \(P_{C}(I-\lambda A)\) is nonexpansive when the mapping A is δ-inverse-strongly monotone and \(0<\lambda<2\delta\). We therefore have the following results.
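To make Lemma 4.3 concrete, here is a minimal sketch for the illustrative problem \(\varphi(x)=\frac{1}{2}\|x-a\|^{2}\) over the box \(C=[0,1]^{2}\) (these choices are assumptions, not from the paper), so that \(\nabla\varphi(x)=x-a\) is 1-inverse-strongly monotone. Because \(P_{C}(I-\lambda\nabla\varphi)\) is a strict contraction in this example, plain Picard iteration already reaches the fixed point of Lemma 4.3; the schemes of Theorems 4.3 and 4.4 are designed for the general case where the mapping is merely nonexpansive.

```python
import numpy as np

a = np.array([2.0, -0.5])

def grad_phi(x):
    """Gradient of phi(x) = 0.5*||x - a||^2; it is 1-inverse-strongly
    monotone (delta = 1), so any lambda in (0, 2) may be used."""
    return x - a

def proj_box(x):
    """Metric projection onto C = [0, 1]^2."""
    return np.clip(x, 0.0, 1.0)

lam = 0.5
x = np.zeros(2)
for _ in range(50):
    # Picard iteration on P_C(I - lam * grad phi); here this mapping is a
    # 0.5-contraction, so the iteration converges to its fixed point.
    x = proj_box(x - lam * grad_phi(x))

print(x)   # the constrained minimizer P_C(a) = (1, 0)
```

The computed point satisfies \(x^{*}=P_{C}(x^{*}-\lambda\nabla\varphi(x^{*}))\) and is exactly the constrained minimizer \(P_{C}(a)=(1,0)\).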

Theorem 4.3

Let C be a nonempty closed convex subset of the real Hilbert space H. For the minimization problem (4.4), assume that φ is (Fréchet) differentiable and the gradient ∇φ is a δ-inverse-strongly monotone mapping for some positive real number δ. Let \(f: C \rightarrow C\) be a contraction with coefficient \(\theta\in[0,1)\). Pick any \(x_{0} \in C\), let \(\{x_{n}\}\) be a sequence generated by

$$ x_{n+1}=\alpha_{n} f(x_{n})+(1- \alpha_{n})P_{C}(I-\lambda\nabla\varphi ) \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr), $$

where \(\lambda\in(0,2\delta)\), \(\{\alpha_{n}\}, \{s_{n}\} \subset (0,1)\), satisfying the following conditions:

  1. (1)

    \(\lim_{n\rightarrow\infty}\alpha_{n}=0\);

  2. (2)

    \(\sum_{n=0}^{\infty}\alpha_{n}=\infty\);

  3. (3)

    \(\sum_{n=0}^{\infty}|\alpha_{n+1}-\alpha_{n}|<\infty\);

  4. (4)

    \(0<\varepsilon\leqslant s_{n}\leqslant s_{n+1}<1\) for all \(n\geqslant0\).

Then \(\{x_{n}\}\) converges strongly to a solution \(x^{*}\) of the minimization problem (4.4), which is also the unique solution of the variational inequality

$$\bigl\langle (I-f)x,y-x\bigr\rangle \geqslant0,\quad \forall y\in\Omega. $$

In other words, \(x^{*}\) is the unique fixed point of the contraction \(P_{\Omega}f\), that is, \(P_{\Omega}f(x^{*})=x^{*}\).

Theorem 4.4

Let C be a nonempty closed convex subset of the real Hilbert space H. For the minimization problem (4.4), assume that φ is (Fréchet) differentiable and the gradient ∇φ is a δ-inverse-strongly monotone mapping for some positive real number δ. Let \(f: C \rightarrow C\) be a contraction with coefficient \(\theta\in[0,1)\). Pick any \(x_{0} \in C\), let \(\{x_{n}\}\) be a sequence generated by

$$ x_{n+1}=\alpha_{n}x_{n}+\beta_{n} f(x_{n})+\gamma_{n}P_{C}(I-\lambda\nabla \varphi) \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr), $$

where \(\lambda\in(0,2\delta)\), \(\{\alpha_{n}\}, \{\beta_{n}\}, \{ \gamma_{n}\}, \{s_{n}\} \subset(0,1)\), satisfying the following conditions:

  1. (1)

    \(\alpha_{n}+\beta_{n}+\gamma_{n}=1\) and \(\lim_{n\rightarrow\infty }\gamma_{n}=1\);

  2. (2)

    \(\sum_{n=0}^{\infty}\beta_{n}=\infty\);

  3. (3)

    \(\sum_{n=0}^{\infty}|\alpha_{n+1}-\alpha_{n}|<\infty\) and \(\sum_{n=0}^{\infty}|\beta_{n+1}-\beta_{n}|<\infty\);

  4. (4)

    \(0<\varepsilon\leqslant s_{n}\leqslant s_{n+1}<1\) for all \(n\geqslant0\).

Then \(\{x_{n}\}\) converges strongly to a solution \(x^{*}\) of the minimization problem (4.4), which is also the unique solution of the variational inequality

$$\bigl\langle (I-f)x,y-x\bigr\rangle \geqslant0,\quad \forall y\in\Omega. $$

In other words, \(x^{*}\) is the unique fixed point of the contraction \(P_{\Omega}f\), that is, \(P_{\Omega}f(x^{*})=x^{*}\).

4.3 K-Mapping

In 2009, Kangtunyakarn and Suantai [6] introduced the K-mapping generated by \(T_{1},T_{2},\ldots,T_{N}\) and \(\lambda_{1}, \lambda_{2},\ldots ,\lambda_{N}\) as follows.

Definition 4.1

[6]

Let C be a nonempty convex subset of a real Banach space. Let \(\{T_{i}\} ^{N}_{i=1}\) be a finite family of mappings of C into itself and let \(\lambda_{1},\lambda_{2},\ldots, \lambda_{N}\) be real numbers such that \(0\leqslant\lambda_{i}\leqslant1\) for every \(i=1,2,\ldots, N\). We define a mapping \(K : C\rightarrow C\) as follows:

$$\begin{aligned}& U_{1} = \lambda_{1} T_{1}+(1- \lambda_{1})I, \\& U_{2} = \lambda_{2} T_{2}U_{1}+(1- \lambda_{2})U_{1}, \\& U_{3} = \lambda_{3} T_{3}U_{2}+(1- \lambda_{3})U_{2}, \\& \ldots, \\& U_{N-1} = \lambda_{N-1} T_{N-1}U_{N-2}+(1- \lambda_{N-1})U_{N-2}, \\& K = U_{N}=\lambda_{N} T_{N}U_{N-1}+(1- \lambda_{N})U_{N-1}. \end{aligned}$$

Such a mapping K is called the K-mapping generated by \(T_{1}, T_{2},\ldots, T_{N}\) and \(\lambda_{1}, \lambda_{2}, \ldots, \lambda_{N}\).
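The construction of Definition 4.1 can be sketched directly in code. The example mappings below are illustrative assumptions: \(T_{1}, T_{2}\) are the metric projections onto the lines \(y=0\) and \(x=y\) in \(\mathbb{R}^{2}\), which are nonexpansive, hence 0-strictly pseudo-contractive, so the hypotheses of Lemma 4.4 hold with, say, \(\omega_{1}=0.1\), \(\omega_{2}=0.5\), \(\lambda_{i}=0.4\); their only common fixed point is the origin.

```python
import numpy as np

def make_K(mappings, lambdas):
    """Build the K-mapping of Definition 4.1:
    U_1 = lam_1*T_1 + (1 - lam_1)*I and, for i >= 2,
    U_i = lam_i*T_i*U_{i-1} + (1 - lam_i)*U_{i-1}; K = U_N."""
    def K(x):
        u = x
        for T_i, lam_i in zip(mappings, lambdas):
            u = lam_i * T_i(u) + (1.0 - lam_i) * u
        return u
    return K

# T_1, T_2: metric projections onto the lines y = 0 and x = y in R^2;
# both are nonexpansive (kappa_i = 0) and F(T_1) ∩ F(T_2) = {(0, 0)}.
T1 = lambda p: np.array([p[0], 0.0])
T2 = lambda p: np.full(2, (p[0] + p[1]) / 2.0)

K = make_K([T1, T2], [0.4, 0.4])

x = np.array([3.0, -2.0])
for _ in range(500):
    x = K(x)   # each U_i is averaged, so Picard iterates converge to F(K)

print(x)   # tends to the common fixed point (0, 0)
```

Consistently with Lemma 4.4(1), the iterates converge to the unique point of \(\bigcap_{i=1}^{2}F(T_{i})\).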

In 2014, Suwannaut and Kangtunyakarn [7] established the following main result for the K-mapping generated by \(T_{1}, T_{2},\ldots, T_{N}\) and \(\lambda_{1}, \lambda_{2},\ldots, \lambda_{N}\).

Lemma 4.4

[7]

Let C be a nonempty closed convex subset of the real Hilbert space H. For \(i=1, 2,\ldots, N\), let \(\{T_{i}\}^{N}_{i=1}\) be a finite family of \(\kappa_{i}\)-strictly pseudo-contractive mappings of C into itself with \(\kappa_{i}\leqslant\omega_{1}\) and \(\bigcap_{i=1}^{N}F(T_{i})\neq\emptyset\), namely, there exist constants \(\kappa_{i}\in[0, 1)\) such that

$$ \| T_{i}x-T_{i}y \|^{2}\leqslant\| x-y \|^{2}+ \kappa_{i} \bigl\Vert (I-T_{i})x-(I-T_{i})y \bigr\Vert ^{2},\quad \forall x, y\in C. $$

Let \(\lambda_{1},\lambda_{2},\ldots, \lambda_{N}\) be real numbers with \(0<\lambda_{i}<\omega_{2}\) for all \(i=1, 2,\ldots, N\) and \(\omega_{1}+\omega_{2}<1\). Let K be the K-mapping generated by \(T_{1}, T_{2}, \ldots, T_{N}\) and \(\lambda_{1}, \lambda_{2},\ldots, \lambda _{N}\). Then the following properties hold:

  1. (1)

    \(F(K)=\bigcap_{i=1}^{N}F(T_{i})\);

  2. (2)

    K is a nonexpansive mapping.

Based on Lemma 4.4, we have the following results.

Theorem 4.5

Let C be a nonempty closed convex subset of the real Hilbert space H. For \(i=1, 2,\ldots, N\), let \(\{T_{i}\}^{N}_{i=1}\) be a finite family of \(\kappa_{i}\)-strictly pseudo-contractive mappings of C into itself with \(\kappa_{i}\leqslant\omega_{1}\) and \(\bigcap_{i=1}^{N}F(T_{i})\neq\emptyset\). Let \(\lambda_{1},\lambda_{2},\ldots, \lambda_{N}\) be real numbers with \(0<\lambda_{i}<\omega_{2}\) for all \(i=1, 2,\ldots, N\) and \(\omega_{1}+\omega_{2}<1\). Let K be the K-mapping generated by \(T_{1}, T_{2}, \ldots, T_{N}\) and \(\lambda_{1}, \lambda_{2},\ldots, \lambda_{N}\). Let \(f: C \rightarrow C\) be a contraction with coefficient \(\theta\in[0,1)\). Pick any \(x_{0} \in C\), let \(\{x_{n}\}\) be a sequence generated by

$$ x_{n+1}=\alpha_{n} f(x_{n})+(1- \alpha_{n})K \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr), $$

where \(\{\alpha_{n}\}, \{s_{n}\} \subset(0,1)\), satisfying the following conditions:

  1. (1)

    \(\lim_{n\rightarrow\infty}\alpha_{n}=0\);

  2. (2)

    \(\sum_{n=0}^{\infty}\alpha_{n}=\infty\);

  3. (3)

    \(\sum_{n=0}^{\infty}|\alpha_{n+1}-\alpha_{n}|<\infty\);

  4. (4)

    \(0<\varepsilon\leqslant s_{n}\leqslant s_{n+1}<1\) for all \(n\geqslant0\).

Then \(\{x_{n}\}\) converges strongly to a common fixed point \(x^{*}\) of the mappings \(\{T_{i}\}_{i=1}^{N}\), which is also the unique solution of the variational inequality

$$\bigl\langle (I-f)x,y-x\bigr\rangle \geqslant0, \quad \forall y\in F(K)=\bigcap _{i=1}^{N}F(T_{i}). $$

In other words, the point \(x^{*}\) is the unique fixed point of the contraction \(P_{\bigcap_{i=1}^{N}F(T_{i})}f\), that is, \(P_{\bigcap_{i=1}^{N}F(T_{i})}f(x^{*})=x^{*}\).

Theorem 4.6

Let C be a nonempty closed convex subset of the real Hilbert space H. For \(i=1, 2,\ldots, N\), let \(\{T_{i}\}^{N}_{i=1}\) be a finite family of \(\kappa_{i}\)-strictly pseudo-contractive mappings of C into itself with \(\kappa_{i}\leqslant\omega_{1}\) and \(\bigcap_{i=1}^{N}F(T_{i})\neq\emptyset\). Let \(\lambda_{1},\lambda_{2},\ldots, \lambda_{N}\) be real numbers with \(0<\lambda_{i}<\omega_{2}\) for all \(i=1, 2,\ldots, N\) and \(\omega_{1}+\omega_{2}<1\). Let K be the K-mapping generated by \(T_{1}, T_{2}, \ldots, T_{N}\) and \(\lambda_{1}, \lambda_{2},\ldots, \lambda_{N}\). Let \(f: C \rightarrow C\) be a contraction with coefficient \(\theta\in[0,1)\). Pick any \(x_{0} \in C\), let \(\{x_{n}\}\) be a sequence generated by

$$ x_{n+1}=\alpha_{n}x_{n}+\beta_{n} f(x_{n})+\gamma_{n}K \bigl(s_{n} x_{n}+(1-s_{n})x_{n+1} \bigr), $$

where \(\{\alpha_{n}\}, \{\beta_{n}\}, \{\gamma_{n}\}, \{s_{n}\} \subset(0,1)\), satisfying the following conditions:

  1. (1)

    \(\alpha_{n}+\beta_{n}+\gamma_{n}=1\) and \(\lim_{n\rightarrow\infty }\gamma_{n}=1\);

  2. (2)

    \(\sum_{n=0}^{\infty}\beta_{n}=\infty\);

  3. (3)

    \(\sum_{n=0}^{\infty}|\alpha_{n+1}-\alpha_{n}|<\infty\) and \(\sum_{n=0}^{\infty}|\beta_{n+1}-\beta_{n}|<\infty\);

  4. (4)

    \(0<\varepsilon\leqslant s_{n}\leqslant s_{n+1}<1\) for all \(n\geqslant0\).

Then \(\{x_{n}\}\) converges strongly to a common fixed point \(x^{*}\) of the mappings \(\{T_{i}\}_{i=1}^{N}\), which is also the unique solution of the variational inequality

$$\bigl\langle (I-f)x,y-x\bigr\rangle \geqslant0, \quad \forall y\in F(K)=\bigcap _{i=1}^{N}F(T_{i}). $$

In other words, the point \(x^{*}\) is the unique fixed point of the contraction \(P_{\bigcap_{i=1}^{N}F(T_{i})}f\), that is, \(P_{\bigcap_{i=1}^{N}F(T_{i})}f(x^{*})=x^{*}\).

References

  1. Moudafi, A: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 241, 46-55 (2000)


  2. Xu, HK, Alghamdi, MA, Shahzad, N: The viscosity technique for the implicit midpoint rule of nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2015, 41 (2015)


  3. Cai, G, Bu, SQ: Hybrid algorithm for generalized mixed equilibrium problems and variational inequality problems and fixed point problems. Comput. Math. Appl. 62, 4772-4782 (2011)


  4. Ke, YF, Ma, CF: A new relaxed extragradient-like algorithm for approaching common solutions of generalized mixed equilibrium problems, a more general system of variational inequalities and a fixed point problem. Fixed Point Theory Appl. 2013, 126 (2013)


  5. Su, M, Xu, HK: Remarks on the gradient-projection algorithm. J. Nonlinear Anal. Optim. 1, 35-43 (2010)


  6. Kangtunyakarn, A, Suantai, S: A new mapping for finding common solutions of equilibrium problems and fixed point problems of finite family of nonexpansive mappings. Nonlinear Anal., Theory Methods Appl. 71(10), 4448-4460 (2009)


  7. Suwannaut, S, Kangtunyakarn, A: Strong convergence theorem for the modified generalized equilibrium problem and fixed point problem of strictly pseudo-contractive mappings. Fixed Point Theory Appl. 2014, 86 (2014)



Acknowledgements

The project is supported by the National Natural Science Foundation of China (Grant Nos. 11071041 and 11201074), Fujian Natural Science Foundation (Grant Nos. 2013J01006, 2015J01578) and R&D of Key Instruments and Technologies for Deep Resources Prospecting (the National R&D Projects for Key Scientific Instruments) under Grant No. ZDYZ2012-1-02-04.

Author information

Corresponding author

Correspondence to Changfeng Ma.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Cite this article

Ke, Y., Ma, C. The generalized viscosity implicit rules of nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl 2015, 190 (2015). https://doi.org/10.1186/s13663-015-0439-6

