
The viscosity technique for the implicit midpoint rule of nonexpansive mappings in Hilbert spaces

Abstract

The viscosity technique for the implicit midpoint rule of nonexpansive mappings in Hilbert spaces is established. The strong convergence of this technique is proved under certain assumptions imposed on the sequence of parameters. Moreover, it is shown that the limit solves an additional variational inequality. Applications to variational inequalities, hierarchical minimization problems, and nonlinear evolution equations are included.

1 Introduction

The viscosity technique for nonexpansive mappings in Hilbert spaces was introduced by Moudafi [1], following the ideas of Attouch [2]. Refinements in Hilbert spaces and extensions to Banach spaces were obtained by Xu [3]. This technique uses (strict) contractions to regularize a nonexpansive mapping for the purpose of selecting a particular fixed point of the nonexpansive mapping, for instance, the fixed point of minimal norm or of a solution to another variational inequality.

Let H be a Hilbert space, let \(T: H\to H\) be a nonexpansive mapping (i.e., \(\|Tx-Ty\|\le\|x-y\|\) for all \(x,y\in H\)), and let \(f: H\to H\) be a contraction (i.e., \(\|f(x)-f(y)\|\le\alpha\|x-y\|\) for all \(x,y\in H\) and some \(\alpha\in[0,1)\)). The explicit viscosity method for nonexpansive mappings generates a sequence \(\{x_{n}\}\) through the iteration process:

$$ x_{n+1}=\alpha_{n}f(x_{n})+(1- \alpha_{n})Tx_{n},\quad n\ge0, $$
(1.1)

where \(\{\alpha_{n}\}\) is a sequence in \((0,1)\) and I denotes the identity operator of H. It is well known [1, 3] that, under certain conditions, the sequence \(\{x_{n}\}\) converges in norm to a fixed point q of T which solves the variational inequality (VI)

$$ \bigl\langle (I-f)q,x-q\bigr\rangle \ge0,\quad x\in S, $$
(1.2)

where S is the set of fixed points of T, namely, \(S=\{x\in H: Tx=x\}\).

The implicit midpoint rule (IMR) is one of the powerful methods for solving ordinary differential equations; see [4–9] and the references therein. For instance, consider the initial value problem for the differential equation \(y'(t)=f(y(t))\) with the initial condition \(y(0)=y_{0}\), where f is a continuous function from \(\mathbb{R}^{d}\) to \(\mathbb{R}^{d}\). The IMR is an implicit method that generates a sequence \(\{y_{n}\}\) via the relation

$$ \frac{1}{h}(y_{n+1}-y_{n})=f \biggl(\frac{y_{n+1}+y_{n}}{2} \biggr). $$

In the case of nonlinear dissipative evolution equations in a Hilbert space H, the function f is of the form \(f=I-T\), with I the identity and T a nonexpansive mapping of H, and the equilibrium problem reduces to the fixed point problem \(x=Tx\). The IMR has therefore been extended [10] to nonexpansive mappings: it generates a sequence \(\{x_{n}\}\) by the implicit procedure:

$$ x_{n+1}=(1-t_{n})x_{n}+t_{n}T \biggl(\frac{x_{n}+x_{n+1}}{2} \biggr),\quad n\ge0, $$
(1.3)

where the initial guess \(x_{0}\in H\) is chosen arbitrarily and \(t_{n}\in(0,1)\) for all n.

In the present paper we will apply the viscosity technique to the implicit midpoint rule for nonexpansive mappings. More precisely, we consider the following semi-implicit algorithm which we call viscosity implicit midpoint rule (VIMR, for short):

$$ x_{n+1}=\alpha_{n}f(x_{n})+(1- \alpha_{n})T \biggl(\frac{x_{n}+x_{n+1}}{2} \biggr),\quad n\ge0. $$
(1.4)

The idea is to use contractions to regularize the implicit midpoint rule for nonexpansive mappings. We will prove that the VIMR converges in norm to a fixed point of T which, in addition, also solves the VI (1.2).
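The behavior of the VIMR (1.4) can be illustrated numerically. The following Python sketch is our illustration, not part of the paper: it takes \(H=\mathbb{R}^{2}\), T the metric projection onto the x-axis (a nonexpansive mapping whose fixed point set S is the axis), and f a constant map c (a contraction with coefficient 0). Each implicit step is solved by an inner fixed-point iteration, which converges because the right-hand side is a \((1-\alpha_{n})/2\)-contraction in \(x_{n+1}\). The limit predicted by (1.2) is \(q=P_{S}(f(q))=P_{S}(c)\).

```python
import numpy as np

# Illustrative setup (hypothetical, chosen by us): H = R^2,
# T = orthogonal projection onto the x-axis, so Fix(T) = the axis;
# f(x) = c is a constant contraction.  The VI (1.2) then selects
# q = P_S(c), the projection of c onto the fixed point set.

def T(x):                          # projection onto the x-axis (nonexpansive)
    return np.array([x[0], 0.0])

c = np.array([3.0, 4.0])
def f(x):                          # constant map: contraction with alpha = 0
    return c

x = np.array([-5.0, 7.0])          # arbitrary starting point
for n in range(5000):
    a = 1.0 / (n + 2)              # alpha_n satisfies (C1)-(C3)
    # Solve x_{n+1} = a f(x_n) + (1-a) T((x_n + x_{n+1})/2) by inner
    # fixed-point iteration: the map is a (1-a)/2-contraction in x_{n+1}.
    y = x.copy()
    for _ in range(30):
        y = a * f(x) + (1 - a) * T((x + y) / 2)
    x = y

print(x)                           # approaches P_S(c) = [3, 0]
```

The second coordinate decays like \(4\alpha_{n}\), matching the \(O(\alpha_{n})\) viscosity perturbation of the limit.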

The structure of the paper is set as follows. In Section 2, we introduce the notion of nearest point projections, the demiclosedness principle of nonexpansive mappings, and a convergence lemma. The viscosity implicit midpoint rule for nonexpansive mappings is introduced in Section 3. The main result, that is, the strong convergence of this method, is proved also in this section. Applications to variational inequalities, hierarchical minimization problems and nonlinear evolution equations are presented in the final section, Section 4.

2 Preliminaries

Assume that H is a Hilbert space with inner product \(\langle\cdot ,\cdot\rangle\) and norm \(\|\cdot\|\), respectively, and let C be a nonempty, closed, and convex subset of H. We then have the nearest point projection from H onto C, \(P_{C}\), defined by

$$ P_{C}x:=\arg\min_{z\in C} \|x-z\|^{2},\quad x\in H. $$
(2.1)

Namely, \(P_{C}x\) is the only point in C that minimizes the objective \(\| x-z\|^{2}\) over \(z\in C\).

Note that \(P_{C}x\) is characterized as follows:

$$ P_{C}x\in C \quad\mbox{and}\quad \langle x-P_{C}x,z-P_{C}x \rangle\le0 \quad\mbox{for all }z\in C. $$
(2.2)
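The characterization (2.2) can be checked numerically for a concrete C. The sketch below (our illustration; the box C and the sample points are assumptions) uses \(C=[0,1]^{2}\), for which \(P_{C}\) is the componentwise clip, and verifies that the inner product in (2.2) is nonpositive over random \(z\in C\).

```python
import numpy as np

# Hypothetical example: C = [0,1]^2 in R^2, whose nearest point
# projection is the componentwise clip.  We verify (2.2):
# <x - P_C x, z - P_C x> <= 0 for all z in C.
rng = np.random.default_rng(0)

def proj_box(x):                     # P_C for C = [0,1]^2
    return np.clip(x, 0.0, 1.0)

x = np.array([2.5, -0.7])            # a point outside C
p = proj_box(x)                      # its projection, [1, 0]
worst = max(float(np.dot(x - p, z - p))
            for z in rng.uniform(0, 1, (1000, 2)))
print(p, worst)                      # worst inner product stays <= 0
```

Geometrically, (2.2) says the vector \(x-P_{C}x\) makes an obtuse angle with every direction pointing from \(P_{C}x\) into C.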

Recall that a mapping \(T: C\to C\) is said to be nonexpansive if

$$\|Tx-Ty\|\le\|x-y\|,\quad x,y\in C. $$

The set of fixed points of T is written \(\operatorname{Fix}(T)\), that is, \(\operatorname{Fix}(T)=\{x\in C: Tx=x\}\). Note that \(\operatorname{Fix}(T)\) is always closed and convex; further note that if, in addition, C is bounded, then \(\operatorname{Fix}(T)\) is nonempty (cf. [11]).

The demiclosedness principle of nonexpansive mappings is quite helpful in verifying the weak convergence of an algorithm to a fixed point of a nonexpansive mapping.

Lemma 2.1

[11] (The demiclosedness principle)

Let H be a Hilbert space, C a closed convex subset of H, and \(T: C\to C\) a nonexpansive mapping with \(\operatorname{Fix}(T)\neq\emptyset\). If \(\{x_{n}\}\) is a sequence in C such that (i) \(\{x_{n}\}\) weakly converges to x and (ii) \(\{(I-T)x_{n}\}\) converges strongly to 0, then \(x=Tx\).

In proving the strong convergence of a sequence \(\{x_{n}\}\) to a point \(\bar{x}\), we always consider the real sequence \(\{\|x_{n}-\bar{x}\|^{2}\}\) and then apply the following convergence lemma.

Lemma 2.2

[12]

Assume \(\{a_{n}\}\) is a sequence of nonnegative real numbers such that

$$a_{n+1}\le(1-\gamma_{n})a_{n}+\delta_{n},\quad n\ge0, $$

where \(\{\gamma_{n}\}\) is a sequence in \((0,1)\) and \(\{\delta_{n}\}\) is a sequence of real numbers such that

  1. (i)

    \(\sum_{n=1}^{\infty}\gamma_{n}=\infty\), and

  2. (ii)

    either \(\limsup_{n\to\infty}\delta_{n}/\gamma_{n}\le0\) or \(\sum_{n=1}^{\infty}|\delta_{n}|<\infty\).

Then \(\lim_{n\to\infty} a_{n}=0\).
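Lemma 2.2 is easy to illustrate numerically. In the sketch below (our example; the parameter choices are assumptions), \(\gamma_{n}=\frac{1}{n+1}\) gives a divergent sum, and \(\delta_{n}=\gamma_{n}/(n+1)\) makes \(\delta_{n}/\gamma_{n}\to0\), so condition (ii) holds; the recursion then drives \(a_{n}\) to 0 at roughly the rate \(\ln n/n\).

```python
# Hypothetical parameter choices illustrating Lemma 2.2:
# gamma_n = 1/(n+1)  (sum diverges, condition (i)),
# delta_n = gamma_n/(n+1)  (delta_n/gamma_n -> 0, condition (ii)).
a = 1.0
for n in range(1, 200001):
    gamma = 1.0 / (n + 1)
    delta = gamma / (n + 1)
    a = (1 - gamma) * a + delta      # the recursion of Lemma 2.2
print(a)                             # decays toward 0 (about ln(n)/n)
```

Note that condition (i) is essential: if \(\sum\gamma_{n}<\infty\), the product \(\prod(1-\gamma_{n})\) stays bounded away from 0 and \(a_{n}\) need not vanish.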

3 The viscosity technique for implicit midpoint rule

Let H be a Hilbert space, C a nonempty, closed, and convex subset of H, and \(T: C\to C\) a nonexpansive mapping such that \(\operatorname{Fix}(T)\neq\emptyset\). Moreover, let \(f: C\to C\) be a contraction with coefficient \(\alpha\in[0,1)\). The viscosity method for nonexpansive mappings is essentially a regularization method of nonexpansive mappings by contractions. In this section we consider the viscosity technique for the implicit midpoint rule of nonexpansive mappings which generates a sequence \(\{x_{n}\}\) in the semi-implicit manner:

$$ x_{n+1}=\alpha_{n}f(x_{n})+(1- \alpha_{n})T \biggl(\frac{x_{n}+x_{n+1}}{2} \biggr),\quad n\ge0, $$
(3.1)

where \(\alpha_{n}\in(0,1)\) for all n. Note that the scheme (3.1) is well defined for every n: for fixed \(x_{n}\), the mapping \(x\mapsto\alpha_{n}f(x_{n})+(1-\alpha_{n})T (\frac{x_{n}+x}{2} )\) is a self-mapping of C and a contraction with coefficient \(\frac{1-\alpha_{n}}{2}<1\), so \(x_{n+1}\) exists and is unique by the Banach contraction principle.

We will employ the following conditions on \(\{\alpha_{n}\}\):

  1. (C1)

    \(\lim_{n\to\infty}\alpha_{n}=0\),

  2. (C2)

    \(\sum_{n=0}^{\infty}\alpha_{n}=\infty\),

  3. (C3)

    either \(\sum_{n=0}^{\infty}|\alpha_{n+1}-\alpha_{n}|<\infty\) or \(\lim_{n\to\infty}\frac{\alpha_{n+1}}{\alpha_{n}}=1\).

Note that, for instance, the choice \(\alpha_{n}=\frac{1}{n+1}\) satisfies all of (C1)-(C3). The main result of this paper is the following theorem, whose proof is nontrivial.

Theorem 3.1

Let H be a Hilbert space, C a closed convex subset of H, \(T: C\to C\) a nonexpansive mapping with \(S:=\operatorname{Fix}(T)\neq\emptyset\), and \(f: C\to C\) a contraction with coefficient \(\alpha\in[0,1)\). Let \(\{x_{n}\}\) be generated by the viscosity implicit midpoint rule (3.1). Assume the conditions (C1)-(C3). Then \(\{x_{n}\}\) converges in norm to a fixed point q of T, which is also the unique solution of the variational inequality

$$ \bigl\langle (I-f)q,x-q\bigr\rangle \ge0,\quad x\in S. $$
(3.2)

In other words, q is the unique fixed point of the contraction \(P_{S}f\), that is, \(P_{S}f(q)=q\).

Proof

We divide the proof into several steps.

Step 1. We prove that \(\{x_{n}\}\) is bounded. To see this we take \(p\in S\) to deduce that

$$\begin{aligned} \|x_{n+1}-p\| &\le(1-\alpha_{n})\biggl\Vert T \biggl( \frac{x_{n}+x_{n+1}}{2} \biggr)-p\biggr\Vert +\alpha_{n} \bigl\| f(x_{n})-p\bigr\| \\ &\le(1-\alpha_{n})\biggl\Vert \frac{x_{n}+x_{n+1}}{2}-p\biggr\Vert + \alpha_{n}\bigl(\bigl\| f(x_{n})-f(p)\bigr\| +\bigl\| f(p)-p\bigr\| \bigr) \\ &\le\frac{1-\alpha_{n}}{2}\bigl(\|x_{n}-p\|+\|x_{n+1}-p\|\bigr)+ \alpha_{n}\bigl(\alpha\| x_{n}-p\|+\bigl\| f(p)-p\bigr\| \bigr). \end{aligned}$$

It then follows that

$$ \frac{1+\alpha_{n}}{2}\|x_{n+1}-p\|\le\frac{1+(2\alpha-1)\alpha_{n}}{2}\| x_{n}-p \|+\alpha_{n}\bigl\| f(p)-p\bigr\| $$

and, moreover,

$$\begin{aligned} \|x_{n+1}-p\| &\le\frac{1+(2\alpha-1)\alpha_{n}}{1+\alpha_{n}}\|x_{n}-p\|+ \frac{2\alpha _{n}}{1+\alpha_{n}}\bigl\| f(p)-p\bigr\| \\ &= \biggl(1-\frac{2(1-\alpha)\alpha_{n}}{1+\alpha_{n}} \biggr)\|x_{n}-p\| +\frac{2(1-\alpha)\alpha_{n}}{1+\alpha_{n}} \biggl(\frac{1}{1-\alpha}\bigl\| f(p)-p\bigr\| \biggr). \end{aligned}$$

Consequently, we get

$$ \|x_{n+1}-p\|\le\max \biggl\{ \|x_{n}-p\|,\frac{1}{1-\alpha} \bigl\| f(p)-p\bigr\| \biggr\} . $$

By induction we readily obtain

$$ \|x_{n}-p\|\le\max \biggl\{ \|x_{0}-p\|,\frac{1}{1-\alpha} \bigl\| f(p)-p\bigr\| \biggr\} $$

for all n. It turns out that \(\{x_{n}\}\) is bounded.

Step 2. \(\lim_{n\to\infty}\|x_{n+1}-x_{n}\|=0\). To see this we apply (3.1) to get

$$\begin{aligned} \|x_{n+1}-x_{n}\| ={}&\biggl\Vert \alpha_{n}f(x_{n})+(1- \alpha_{n})T \biggl(\frac{x_{n}+x_{n+1}}{2} \biggr) \\ &{} - \biggl(\alpha_{n-1}f(x_{n-1}) +(1- \alpha_{n-1})T \biggl(\frac{x_{n-1}+x_{n}}{2} \biggr) \biggr)\biggr\Vert \\ &= \biggl\| (1-\alpha_{n}) \biggl(T \biggl(\frac{x_{n}+x_{n+1}}{2} \biggr)-T \biggl(\frac{x_{n-1}+x_{n}}{2} \biggr) \biggr) \\ &{} +(\alpha_{n-1}-\alpha_{n}) \biggl(T \biggl( \frac{x_{n-1}+x_{n}}{2} \biggr)-f(x_{n-1}) \biggr) +\alpha_{n} \bigl(f(x_{n})-f(x_{n-1})\bigr) \biggr\| \\ \le{}&(1-\alpha_{n})\biggl\Vert \frac{1}{2}(x_{n+1}-x_{n-1}) \biggr\Vert +M|\alpha_{n-1}-\alpha_{n}|+\alpha \alpha_{n}\|x_{n}-x_{n-1}\| \\ \le{}&\frac{1}{2}(1-\alpha_{n}) \bigl(\|x_{n+1}-x_{n} \|+\|x_{n}-x_{n-1}\|\bigr) +M|\alpha_{n-1}- \alpha_{n}|+\alpha\alpha_{n}\|x_{n}-x_{n-1} \|. \end{aligned}$$

Here \(M>0\) is a constant such that

$$M\ge\sup_{n\ge0}\biggl\Vert T \biggl(\frac{x_{n}+x_{n+1}}{2} \biggr)-f(x_{n})\biggr\Vert . $$

It turns out that

$$\begin{aligned} \frac{1+\alpha_{n}}{2}\|x_{n+1}-x_{n}\| &\le \biggl( \frac{1}{2}(1-\alpha_{n})+\alpha\alpha_{n} \biggr) \|x_{n}-x_{n-1}\| +M|\alpha_{n-1}- \alpha_{n}|. \end{aligned}$$

Consequently, we arrive at

$$\begin{aligned} \|x_{n+1}-x_{n}\| &\le\frac{1+(2\alpha-1)\alpha_{n}}{1+\alpha_{n}} \|x_{n}-x_{n-1}\|+\frac {2M}{1+\alpha_{n}}|\alpha_{n-1}- \alpha_{n}| \\ &= \biggl(1-\frac{2(1-\alpha)\alpha_{n}}{1+\alpha_{n}} \biggr)\|x_{n}-x_{n-1}\| + \frac{2M}{1+\alpha_{n}}|\alpha_{n-1}-\alpha_{n}|. \end{aligned}$$
(3.3)

By virtue of the conditions (C2) and (C3), we can apply Lemma 2.2 to (3.3) to obtain \(\|x_{n+1}-x_{n}\|\to0\) as \(n\to\infty\), as required.

Step 3. \(\lim_{n\to\infty}\|x_{n}-Tx_{n}\|=0\). This follows from the argument below:

$$\begin{aligned} \|x_{n}-Tx_{n}\| &\le\|x_{n}-x_{n+1}\|+ \biggl\Vert x_{n+1}-T \biggl(\frac{x_{n}+x_{n+1}}{2} \biggr)\biggr\Vert + \biggl\Vert T \biggl(\frac{x_{n}+x_{n+1}}{2} \biggr)-Tx_{n}\biggr\Vert \\ &\le\|x_{n}-x_{n+1}\|+\alpha_{n}\biggl\Vert f(x_{n})-T \biggl(\frac {x_{n}+x_{n+1}}{2} \biggr)\biggr\Vert + \frac{1}{2}\|x_{n}-x_{n+1}\| \\ &\le\frac{3}{2}\|x_{n}-x_{n+1}\|+M\alpha_{n} \to0 \quad(\mbox{as } n\to\infty). \end{aligned}$$

Step 4. We prove that \(\omega_{w}(x_{n})\subset \operatorname{Fix}(T)\). Here

$$\omega_{w}(x_{n})=\bigl\{ x\in H: \mbox{there exists a subsequence of }\{x_{n}\}\mbox{ weakly converging to }x\bigr\} $$

is the weak ω-limit set of \(\{x_{n}\}\). This is now a straightforward consequence of Step 3 and Lemma 2.1.

Step 5. We claim that

$$ \limsup_{n\to\infty}\bigl\langle q-f(q),q-x_{n} \bigr\rangle \le0, $$
(3.4)

where \(q\in S\) is the unique fixed point of the contraction \(P_{S}f\), that is, \(q=P_{S}(f(q))\).

As a matter of fact, we can find a subsequence \(\{x_{n_{j}}\}\) of \(\{x_{n}\} \) such that \(\{x_{n_{j}}\}\) converges weakly to a point p and moreover,

$$ \limsup_{n\to\infty}\bigl\langle q-f(q),q-x_{n} \bigr\rangle =\lim_{j\to\infty }\bigl\langle q-f(q),q-x_{n_{j}} \bigr\rangle . $$
(3.5)

Since \(p\in \operatorname{Fix}(T)=S\) by Step 4, we can combine (3.5) with the characterization (2.2) of the projection \(q=P_{S}(f(q))\) to conclude

$$ \limsup_{n\to\infty}\bigl\langle q-f(q),q-x_{n}\bigr\rangle =\bigl\langle q-f(q),q-p\bigr\rangle \le0. $$

Step 6. We finally prove that \(x_{n}\to q\) in norm. Here again \(q\in \operatorname{Fix}(T)\) is the unique fixed point of the contraction \(P_{S}f\) or in other words, \(q=P_{S}f(q)\). We present the details as follows:

$$\begin{aligned} \|x_{n+1}-q\|^{2}={}&\biggl\Vert (1-\alpha_{n}) \biggl(T\biggl(\frac{x_{n}+x_{n+1}}{2}\biggr)-q\biggr)+\alpha _{n} \bigl(f(x_{n})-q\bigr)\biggr\Vert ^{2} \\ ={}&(1-\alpha_{n})^{2}\biggl\Vert T\biggl(\frac{x_{n}+x_{n+1}}{2} \biggr)-q\biggr\Vert ^{2}+\alpha_{n}^{2}\bigl\| f(x_{n})-q\bigr\| ^{2} \\ &{} +2\alpha_{n}(1-\alpha_{n})\biggl\langle T\biggl( \frac {x_{n}+x_{n+1}}{2}\biggr)-q,f(x_{n})-q\biggr\rangle \\ \le{}&(1-\alpha_{n})^{2}\biggl\Vert \frac{x_{n}+x_{n+1}}{2}-q \biggr\Vert ^{2}+\alpha_{n}^{2}\bigl\| f(x_{n})-q\bigr\| ^{2} \\ &{} +2\alpha_{n}(1-\alpha_{n})\biggl\langle T\biggl( \frac{x_{n}+x_{n+1}}{2}\biggr)-q,f(x_{n})-f(q)\biggr\rangle \\ &{} +2\alpha_{n}(1-\alpha_{n})\biggl\langle T\biggl( \frac {x_{n}+x_{n+1}}{2}\biggr)-q,f(q)-q\biggr\rangle \\ \le{}&(1-\alpha_{n})^{2}\biggl\Vert \frac{x_{n}+x_{n+1}}{2}-q \biggr\Vert ^{2}+\alpha_{n}^{2}\bigl\| f(x_{n})-q\bigr\| ^{2} \\ &{} +2\alpha\alpha_{n}(1-\alpha_{n})\biggl\Vert \frac{x_{n}+x_{n+1}}{2}-q\biggr\Vert \cdot\|x_{n}-q\| \\ &{} +2\alpha_{n}(1-\alpha_{n})\biggl\langle T\biggl( \frac {x_{n}+x_{n+1}}{2}\biggr)-q,f(q)-q\biggr\rangle . \end{aligned}$$

Let

$$ \beta_{n}=\alpha_{n}^{2} \bigl\| f(x_{n})-q\bigr\| ^{2} +2\alpha_{n}(1- \alpha_{n})\biggl\langle T\biggl(\frac{x_{n}+x_{n+1}}{2}\biggr)-q,f(q)-q \biggr\rangle . $$
(3.6)

It turns out that

$$ (1-\alpha_{n})^{2}\biggl\Vert \frac{x_{n}+x_{n+1}}{2}-q\biggr\Vert ^{2}+ 2\alpha\alpha_{n}(1-\alpha_{n}) \|x_{n}-q\|\biggl\Vert \frac{x_{n}+x_{n+1}}{2}-q\biggr\Vert + \beta_{n}-\|x_{n+1}-q\|^{2}\ge0. $$

Solving this quadratic inequality for \(\Vert \frac {x_{n}+x_{n+1}}{2}-q\Vert \) yields

$$\begin{aligned} \biggl\Vert \frac{x_{n}+x_{n+1}}{2}-q\biggr\Vert \ge{}&\frac{1}{2(1-\alpha_{n})^{2}}\Bigl\{ -2 \alpha\alpha_{n}(1-\alpha_{n})\|x_{n}-q\| \\ &{} +\sqrt{4\alpha^{2}\alpha_{n}^{2}(1- \alpha_{n})^{2}\|x_{n}-q\|^{2}-4(1- \alpha_{n})^{2} \bigl(\beta_{n}-\|x_{n+1}-q \|^{2}\bigr)}\Bigr\} \\ ={}&\frac{-\alpha\alpha_{n}\|x_{n}-q\|+\sqrt{\alpha^{2}\alpha_{n}^{2}\|x_{n}-q\|^{2}+\| x_{n+1}-q\|^{2}-\beta_{n}}}{ 1-\alpha_{n}}. \end{aligned}$$

This implies that

$$\begin{aligned} \frac{1}{2}\|x_{n+1}-q\|+\frac{1}{2}\|x_{n}-q\| \ge\frac{-\alpha\alpha_{n}\|x_{n}-q\|+\sqrt{\alpha^{2}\alpha_{n}^{2}\|x_{n}-q\| ^{2}+\|x_{n+1}-q\|^{2}-\beta_{n}}}{ 1-\alpha_{n}}. \end{aligned}$$

We therefore get

$$\begin{aligned} \frac{1}{4}\bigl((1-\alpha_{n})\|x_{n+1}-q\|+ \bigl(1+(2\alpha-1)\alpha_{n}\bigr)\|x_{n}-q\| \bigr)^{2} \ge\alpha^{2}\alpha_{n}^{2} \|x_{n}-q\|^{2}+\|x_{n+1}-q\|^{2}- \beta_{n}, \end{aligned}$$

which is reduced to the inequality

$$\begin{aligned} &\frac{1}{4}(1-\alpha_{n})^{2}\|x_{n+1}-q \|^{2}+\frac{1}{4}\bigl(1+(2\alpha-1)\alpha_{n} \bigr)^{2}\| x_{n}-q\|^{2} \\ &\qquad{} +\frac{1}{2}(1-\alpha_{n}) \bigl(1+(2\alpha-1) \alpha_{n}\bigr)\|x_{n}-q\|\|x_{n+1}-q\| \\ &\quad \ge\alpha^{2}\alpha_{n}^{2}\|x_{n}-q \|^{2}+\|x_{n+1}-q\|^{2}-\beta_{n}, \end{aligned}$$

which is further reduced by using the elementary inequality

$$2\|x_{n}-q\|\|x_{n+1}-q\|\le\|x_{n}-q \|^{2}+\|x_{n+1}-q\|^{2} $$

to the following inequality:

$$\begin{aligned} & \biggl(1-\frac{1}{4}(1-\alpha_{n})^{2}- \frac{1}{4}(1-\alpha_{n}) \bigl(1+(2\alpha-1)\alpha _{n}\bigr) \biggr)\|x_{n+1}-q\|^{2} \\ & \quad\le \biggl(\frac{1}{4}\bigl(1+(2\alpha-1)\alpha_{n} \bigr)^{2} +\frac{1}{4}(1-\alpha_{n}) \bigl(1+(2 \alpha-1)\alpha_{n}\bigr)-\alpha^{2}\alpha_{n}^{2} \biggr)\| x_{n}-q\|^{2} +\beta_{n}. \end{aligned}$$

Solving for \(\|x_{n+1}-q\|^{2}\) yields

$$\begin{aligned} &\|x_{n+1}-q\|^{2} \\ &\quad\le\frac{\frac{1}{4}(1+(2\alpha-1)\alpha_{n})^{2} +\frac{1}{4}(1-\alpha_{n})(1+(2\alpha-1)\alpha_{n})-\alpha^{2}\alpha_{n}^{2}}{ 1-\frac{1}{4}(1-\alpha_{n})^{2}-\frac{1}{4}(1-\alpha_{n})(1+(2\alpha-1)\alpha_{n})} \| x_{n}-q\|^{2} +\gamma_{n}, \end{aligned}$$
(3.7)

where

$$ \gamma_{n}=\frac{\beta_{n}}{ 1-\frac{1}{4}(1-\alpha_{n})^{2}-\frac{1}{4}(1-\alpha_{n})(1+(2\alpha-1)\alpha_{n})}. $$
(3.8)

Observing

$$1-\frac{1}{4}(1-\alpha_{n})^{2}-\frac{1}{4}(1- \alpha_{n}) \bigl(1+(2\alpha-1)\alpha_{n}\bigr)= 1- \frac{1}{2}(1-\alpha_{n}) \bigl(1-(1-\alpha)\alpha_{n} \bigr) $$

and

$$\begin{aligned} &\frac{1}{4}\bigl(1+(2\alpha-1)\alpha_{n}\bigr)^{2} + \frac{1}{4}(1-\alpha_{n}) \bigl(1+(2\alpha-1)\alpha_{n} \bigr)-\alpha^{2}\alpha_{n}^{2} \\ &\quad=\frac{1}{2}\bigl(1+(2\alpha-1)\alpha_{n}\bigr) \bigl(1-(1- \alpha)\alpha_{n}\bigr)-\alpha^{2}\alpha_{n}^{2}, \end{aligned}$$

we can rewrite (3.7) as

$$ \|x_{n+1}-q\|^{2} \le\frac{\frac{1}{2}(1+(2\alpha-1)\alpha_{n})(1-(1-\alpha)\alpha_{n})-\alpha ^{2}\alpha_{n}^{2}}{ 1-\frac{1}{2}(1-\alpha_{n})(1-(1-\alpha)\alpha_{n})}\|x_{n}-q\|^{2}+\gamma_{n}. $$
(3.9)

Consider the function

$$h(t):=\frac{1}{t} \biggl\{ 1-\frac{\frac{1}{2}(1+(2\alpha-1)t)(1-(1-\alpha )t)-\alpha^{2}t^{2}}{ 1-\frac{1}{2}(1-t)(1-(1-\alpha)t)} \biggr\} , \quad t>0. $$

It is not hard (after certain manipulations) to rewrite \(h(t)\) as

$$h(t)=\frac{2(1-\alpha)-(1-\alpha)^{2}t+\alpha^{2}t}{ 1-\frac{1}{2}(1-t)(1-(1-\alpha)t)}. $$

It turns out that

$$\lim_{t\to0} h(t)=4(1-\alpha)>0. $$

Let \(\delta_{0}>0\) satisfy

$$h(t)>\varepsilon_{0}:=3(1-\alpha)>0,\quad 0< t<\delta_{0}. $$

In other words, we have

$$\frac{\frac{1}{2}(1+(2\alpha-1)t)(1-(1-\alpha)t)-\alpha^{2}t^{2}}{ 1-\frac{1}{2}(1-t)(1-(1-\alpha)t)}< 1-\varepsilon_{0}t,\quad 0<t<\delta_{0}. $$

Since \(\alpha_{n}\to0\) as \(n\to\infty\), there is an integer \(N_{0}\) large enough that \(\alpha_{n}<\delta_{0}\) for all \(n\ge N_{0}\). It then follows from (3.9) that, for all \(n\ge N_{0}\),

$$ \|x_{n+1}-q\|^{2}\le(1-\varepsilon_{0} \alpha_{n})\|x_{n}-q\|^{2}+\gamma_{n}. $$
(3.10)

Notice that by Steps 2 and 3, we have

$$\biggl\Vert T \biggl(\frac{x_{n}+x_{n+1}}{2} \biggr)-x_{n}\biggr\Vert \to0\quad (\mbox{as } n\to\infty). $$

It then turns out from the definition (3.6) of \(\beta_{n}\) and (3.4) that

$$ \limsup_{n\to\infty}\frac{\beta_{n}}{\alpha_{n}}\le0, $$

which in turn implies that

$$ \limsup_{n\to\infty}\frac{\gamma_{n}}{\alpha_{n}}\le0. $$
(3.11)

Finally, (3.11) and the conditions (C1) and (C2) enable us to apply Lemma 2.2 to the inequality (3.10) to conclude that \(\lim_{n\to\infty} \|x_{n}-q\|^{2}=0\), namely, \(x_{n}\to q\) in norm. The proof is therefore complete. □

4 Applications

4.1 Application to variational inequalities

Consider the variational inequality (VI)

$$ \bigl\langle Ax^{*},x-x^{*}\bigr\rangle \ge0,\quad x\in C, $$
(4.1)

where A is a (single-valued) monotone operator in H and C is a closed convex subset of H. We assume \(C\subset\operatorname{dom}(A)\). An example of (4.1) is the constrained minimization problem

$$ \min_{x\in C}\varphi(x), $$
(4.2)

where \(\varphi: H\to\mathbb{R}\) is a lower-semicontinuous convex function. If φ is (Fréchet) differentiable, then the minimization (4.2) is equivalently reformulated as (4.1) with \(A=\nabla\varphi\).

Notice that the VI (4.1) is equivalent to the fixed point problem, for any \(\lambda>0\),

$$ Tx^{*}=x^{*},\qquad Tx:=P_{C}(I-\lambda A)x. $$
(4.3)

If A is Lipschitzian and strongly monotone, then, for \(\lambda>0\) small enough, T is a contraction and its unique fixed point is also the unique solution of the VI (4.1). However, if A is not strongly monotone, T is no longer a contraction, in general. In this case we must deal with nonexpansive mappings for solving the VI (4.1). More precisely, we assume

  1. (A1)

    A is L-Lipschitzian for some \(L>0\), that is,

    $$\|Ax-Ay\|\le L\|x-y\|,\quad x,y\in H. $$
  2. (A2)

    A is μ-inverse strongly monotone (μ-ism) for some \(\mu>0\), namely,

    $$\langle Ax-Ay,x-y\rangle\ge\mu\|Ax-Ay\|^{2},\quad x,y \in H. $$

Note that if \(\nabla\varphi\) is L-Lipschitzian, then \(\nabla\varphi\) is \(\frac{1}{L}\)-ism (the Baillon-Haddad theorem).

Under the conditions (A1) and (A2), it is well known [13] that the operator \(T=P_{C}(I-\lambda A)\) is nonexpansive provided \(0<\lambda<2\mu\). It turns out that for this range of values of λ, fixed point algorithms can be applied to solve the VI (4.1). Applying Theorem 3.1 we get the result below.

Theorem 4.1

Assume the VI (4.1) is solvable. Assume also A satisfies (A1) and (A2), and \(0<\lambda<2\mu\). Let \(f: C\to C\) be a contraction. Define a sequence \(\{x_{n}\}\) by the viscosity implicit midpoint rule:

$$ x_{n+1}=\alpha_{n}f(x_{n})+(1-\alpha_{n})P_{C}(I- \lambda A) \biggl(\frac {x_{n}+x_{n+1}}{2} \biggr),\quad n\ge0. $$

In addition, assume \(\{\alpha_{n}\}\) satisfies the conditions (C1)-(C3). Then \(\{x_{n}\}\) converges in norm to a solution \(x^{*}\) of the VI (4.1) which is also a solution to the VI

$$\bigl\langle (I-f)x^{*},x-x^{*}\bigr\rangle \ge0,\quad x\in \Omega, $$

where Ω denotes the solution set of the VI (4.1) (equivalently, \(\Omega=\operatorname{Fix}(P_{C}(I-\lambda A))\)).
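Theorem 4.1 can be tried out on a small constrained least-squares problem. The sketch below is our illustration (the data M, b, the box C, and the contraction f are assumptions, not from the paper): \(\varphi(x)=\frac{1}{2}\|Mx-b\|^{2}\), so \(A=\nabla\varphi=M^{T}(Mx-b)\) is L-Lipschitz and \(\frac{1}{L}\)-ism with \(L=\|M^{T}M\|\), and \(\lambda=\frac{1}{L}<2\mu\).

```python
import numpy as np

# Hypothetical data: minimize (1/2)||Mx - b||^2 over C = [0,1]^2.
# For this separable quadratic the unique VI solution is the clip of
# the unconstrained minimizer [1.5, -1], i.e. x* = [1, 0].
M = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([3.0, -1.0])
L = np.linalg.norm(M.T @ M, 2)       # Lipschitz constant of A (= 4 here)
lam = 1.0 / L                        # 0 < lambda < 2*mu = 2/L

def T(x):                            # T = P_C(I - lambda A), nonexpansive
    return np.clip(x - lam * (M.T @ (M @ x - b)), 0.0, 1.0)

def f(x):                            # any contraction works; constant here
    return np.array([0.3, 0.7])

x = np.zeros(2)
for n in range(3000):
    a = 1.0 / (n + 2)                # (C1)-(C3) hold
    y = x.copy()                     # inner solve of the implicit step
    for _ in range(40):
        y = a * f(x) + (1 - a) * T((x + y) / 2)
    x = y
print(x)                             # near the VI solution [1, 0]
```

Since the VI here has a unique solution, the viscosity term only affects the \(O(\alpha_{n})\) transient, not the limit.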

4.2 Application to hierarchical minimization

We next consider a hierarchical minimization problem (see [14] and references therein).

Let \(\varphi_{0}, \varphi_{1}: H\to\mathbb{R}\) be lower semicontinuous convex functions. Consider the hierarchical minimization

$$ \min_{x\in S_{0}}\varphi_{1}(x),\qquad S_{0}:=\arg\min_{x\in H}\varphi_{0}(x). $$
(4.4)

Here we always assume that \(S_{0}\) is nonempty. Let \(S=\arg\min_{x\in S_{0}}\varphi_{1}(x)\) and assume \(S\neq\emptyset\).

Assume \(\varphi_{0}\) and \(\varphi_{1}\) are differentiable and their gradients satisfy the Lipschitz continuity conditions:

$$ \bigl\| \nabla\varphi_{0}(x)-\nabla\varphi_{0}(y)\bigr\| \le L_{0}\|x-y\|, \qquad\bigl\| \nabla\varphi_{1}(x)-\nabla \varphi_{1}(y)\bigr\| \le L_{1}\|x-y\|. $$
(4.5)

Note that the condition (4.5) implies that \(\nabla\varphi_{i}\) is \(\frac{1}{L_{i}}\)-ism (\(i=0,1\)). Now let

$$T_{0}=I-\gamma_{0} \nabla\varphi_{0},\qquad T_{1}=I-\gamma_{1} \nabla\varphi_{1}, $$

where \(\gamma_{0}>0\) and \(\gamma_{1}>0\). Note that \(T_{i}\) is (averaged) nonexpansive [13] if \(0<\gamma_{i}<2/L_{i}\) (\(i=0,1\)). Also, it is easily seen that \(S_{0}=\operatorname{Fix}(T_{0})\).

The optimality condition for \(x^{*}\in S_{0}\) to be a solution of the hierarchical minimization (4.4) is the VI:

$$ x^{*}\in S_{0},\quad \bigl\langle \nabla\varphi_{1}\bigl(x^{*} \bigr),x-x^{*}\bigr\rangle \ge0,\quad x\in S_{0}. $$
(4.6)

This is the VI (4.1) with \(C=S_{0}\) and \(A=\nabla\varphi_{1}\). We therefore have the following result.

Theorem 4.2

Assume the hierarchical minimization problem (4.4) is solvable. Assume (4.5) and \(0<\gamma_{i}<2/L_{i}\) (\(i=0,1\)). Let \(f: C\to C\) be a contraction. Define a sequence \(\{x_{n}\}\) by the viscosity implicit midpoint rule:

$$ x_{n+1}=\alpha_{n}f(x_{n})+(1-\alpha_{n})P_{S_{0}}(I- \gamma_{1}\nabla\varphi _{1}) \biggl(\frac{x_{n}+x_{n+1}}{2} \biggr),\quad n\ge0. $$

In addition, assume \(\{\alpha_{n}\}\) satisfies the conditions (C1)-(C3). Then \(\{x_{n}\}\) converges in norm to a solution \(x^{*}\) of the VI (4.6) which also solves the VI

$$\bigl\langle (I-f)x^{*},x-x^{*}\bigr\rangle \ge0,\quad x\in S. $$
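A concrete instance of Theorem 4.2 (our illustration; the functions \(\varphi_{0},\varphi_{1}\) and the contraction f are assumptions): take \(\varphi_{0}(x)=\frac{1}{2}(x_{1}+x_{2}-2)^{2}\), so \(S_{0}=\{x: x_{1}+x_{2}=2\}\) has a closed-form projection, and \(\varphi_{1}(x)=\frac{1}{2}\|x\|^{2}\), whose gradient is 1-Lipschitz; we take \(\gamma_{1}=0.5<2/L_{1}\). The hierarchical solution is the minimum-norm point of \(S_{0}\), namely \([1,1]\).

```python
import numpy as np

# Hypothetical hierarchical problem: minimize phi_1(x) = ||x||^2/2
# over S_0 = argmin phi_0 = {x : x1 + x2 = 2}.  Solution: [1, 1].
def P_S0(x):                         # projection onto the line x1 + x2 = 2
    return x - 0.5 * (x[0] + x[1] - 2.0) * np.ones(2)

gamma1 = 0.5                         # 0 < gamma_1 < 2/L_1 = 2
def T(x):                            # P_{S_0}(I - gamma_1 grad(phi_1))
    return P_S0((1.0 - gamma1) * x)

def f(x):                            # a simple contraction (coefficient 1/2)
    return 0.5 * x + np.array([0.1, -0.2])

x = np.array([5.0, -3.0])
for n in range(3000):
    a = 1.0 / (n + 2)
    y = x.copy()                     # inner solve of the implicit step
    for _ in range(40):
        y = a * f(x) + (1 - a) * T((x + y) / 2)
    x = y
print(x)                             # near the minimum-norm point [1, 1]
```

Note that \(\operatorname{Fix}(T)\) here is the singleton \(\{[1,1]\}\), which is exactly the solution set of the VI (4.6) for this data.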

4.3 Application to nonlinear evolution equation

Browder [15] proved the existence of a periodic solution of the time-dependent nonlinear evolution equation in a (real) Hilbert space H,

$$ \frac{du}{dt}+A(t)u=f(t,u),\quad t>0, $$
(4.7)

where \(A(t)\), a family of closed linear operators in H, and \(f: \mathbb{R}\times H \to H\) satisfy the following conditions:

  1. (B1)

    \(A(t)\) and \(f(t,u)\) are periodic in t of period \(\xi> 0\).

  2. (B2)

    For each t and each pair \(u, v \in H\),

    $$\bigl\langle f(t,u) - f(t,v), u - v\bigr\rangle \le0. $$
  3. (B3)

    For each t and each \(u\in D(A(t))\), \(\langle A(t)u,u\rangle\ge0\).

  4. (B4)

    There exists a mild solution u of (4.7) on \(\mathbb{R}^{+}\) for each initial value \(v \in H\). Recall that u is a mild solution of (4.7) with initial value \(u(0)=v\) if, for each \(t>0\),

    $$u(t)=U(t,0)v+\int_{0}^{t} U(t,s)f\bigl(s,u(s) \bigr)\,ds, $$

    where \(\{U(t,s)\}_{t\ge s\ge0}\) is the evolution system for the homogeneous linear system

    $$ \frac{du}{dt}+A(t)u=0\quad (t>s). $$
    (4.8)
  5. (B5)

    There exists some \(R > 0\) such that

    $$\bigl\langle f(t,u),u\bigr\rangle < 0 $$

    for \(\|u\|= R\) and all \(t \in[0,\xi]\).

Note that under the conditions (B1)-(B5), the solution u has period ξ and \(\|u(0)\|< R\).

We now apply our viscosity technique for IMR to (4.7). To this end, we define a mapping \(T: H\to H\) by

$$Tv:=u(\xi),\quad v\in H, $$

where u is the solution of (4.7) satisfying the initial condition \(u(0)=v\).

It is easy to verify that T is nonexpansive. Moreover, the assumption (B5) implies that T is a self-mapping of the closed ball \(B:=\{v\in H: \|v\|\le R\}\). Consequently, T has a fixed point in B, which we denote by v, and the corresponding solution u of (4.7) with the initial condition \(u(0)=v\) is a periodic solution of (4.7) with period ξ. In other words, finding a periodic solution of (4.7) is equivalent to finding a fixed point of T, and our viscosity technique for the IMR is therefore applicable to (4.7). It turns out that the sequence \(\{v_{n}\}\) defined by the VIMR

$$ v_{n+1}=\alpha_{n} f(v_{n})+(1- \alpha_{n}) T \biggl(\frac{v_{n}+v_{n+1}}{2} \biggr) $$
(4.9)

with \(\{\alpha_{n}\}\) satisfying the conditions (C1)-(C3) of Theorem 3.1, converges in norm to a fixed point v of T, and then the corresponding mild solution u of (4.7) with initial value \(u(0)=v\) is a periodic solution of (4.7). Note that each step of the iteration procedure (4.9) amounts to finding a mild solution of the nonlinear evolution system (4.7) with the initial value \((v_{n}+v_{n+1})/2\).
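The period map T and the iteration (4.9) can be made fully explicit in a scalar example (our illustration; the equation and contraction are assumptions). The dissipative equation \(u'+u=\cos t\) with \(u(0)=v\) has the exact solution \(u(t)=e^{-t}(v-\frac{1}{2})+\frac{\cos t+\sin t}{2}\), so the period map over \(\xi=2\pi\) is \(T(v)=e^{-2\pi}(v-\frac{1}{2})+\frac{1}{2}\), a nonexpansive (in fact contractive) map whose fixed point \(v^{*}=\frac{1}{2}\) is the initial value of the \(2\pi\)-periodic solution.

```python
import math

# Hypothetical scalar example: u' + u = cos t, period xi = 2*pi.
# Exact time-xi map: T(v) = e^{-xi} (v - 1/2) + 1/2, fixed point 1/2.
xi = 2 * math.pi
def T(v):
    return math.exp(-xi) * (v - 0.5) + 0.5

def f(v):                            # a contraction used for regularization
    return 0.25 * v

v = 10.0
for n in range(500):
    a = 1.0 / (n + 2)
    w = v                            # inner solve of the implicit step (4.9)
    for _ in range(40):
        w = a * f(v) + (1 - a) * T((v + w) / 2)
    v = w
print(v)                             # near v* = 1/2
```

In general the map T has no closed form and each evaluation requires integrating (4.7) over one period, which is exactly the cost noted after (4.9).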

References

  1. Moudafi, A: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 241, 46-55 (2000)

  2. Attouch, H: Viscosity approximation methods for minimization problems. SIAM J. Optim. 6(3), 769-806 (1996)

  3. Xu, HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279-291 (2004)

  4. Auzinger, W, Frank, R: Asymptotic error expansions for stiff equations: an analysis for the implicit midpoint and trapezoidal rules in the strongly stiff case. Numer. Math. 56, 469-499 (1989)

  5. Bader, G, Deuflhard, P: A semi-implicit mid-point rule for stiff systems of ordinary differential equations. Numer. Math. 41, 373-398 (1983)

  6. Deuflhard, P: Recent progress in extrapolation methods for ordinary differential equations. SIAM Rev. 27(4), 505-535 (1985)

  7. Schneider, C: Analysis of the linearly implicit mid-point rule for differential-algebraic equations. Electron. Trans. Numer. Anal. 1, 1-10 (1993)

  8. Somali, S: Implicit midpoint rule to the nonlinear degenerate boundary value problems. Int. J. Comput. Math. 79(3), 327-332 (2002)

  9. van Veldhuizen, M: Asymptotic expansions of the global error for the implicit midpoint rule (stiff case). Computing 33, 185-192 (1984)

  10. Alghamdi, MA, Alghamdi, MA, Shahzad, N, Xu, HK: The implicit midpoint rule for nonexpansive mappings. Fixed Point Theory Appl. 2014, 96 (2014)

  11. Goebel, K, Kirk, WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics, vol. 28. Cambridge University Press, Cambridge (1990)

  12. Xu, HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240-256 (2002)

  13. Xu, HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360-378 (2011)

  14. Cabot, A: Proximal point algorithm controlled by a slowly vanishing term: applications to hierarchical minimization. SIAM J. Optim. 15, 555-572 (2005)

  15. Browder, FE: Existence of periodic solutions for nonlinear equations of evolution. Proc. Natl. Acad. Sci. USA 53, 1100-1103 (1965)


Acknowledgements

The authors are grateful to the anonymous referees for their helpful comments and suggestions, which improved the presentation of this manuscript. This project was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, under grant No. (49-130-35-HiCi). The authors, therefore, acknowledge technical and financial support of KAU.


Corresponding author

Correspondence to Hong-Kun Xu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.



Cite this article

Xu, HK., Alghamdi, M.A. & Shahzad, N. The viscosity technique for the implicit midpoint rule of nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl 2015, 41 (2015). https://doi.org/10.1186/s13663-015-0282-9
