Accelerated Mann and CQ algorithms for finding a fixed point of a nonexpansive mapping

Abstract

The purpose of this paper is to present accelerations of the Mann and CQ algorithms. We first apply the Picard algorithm to the smooth convex minimization problem and point out that the Picard algorithm is the steepest descent method for solving the minimization problem. Next, we provide an accelerated Picard algorithm by using the ideas of conjugate gradient methods that accelerate the steepest descent method. Then, based on the accelerated Picard algorithm, we present accelerations of the Mann and CQ algorithms. Under certain assumptions, we show that the new algorithms converge to a fixed point of a nonexpansive mapping. Finally, we demonstrate the efficiency of the accelerated Mann algorithm by comparing it numerically with the Mann algorithm. A numerical example is also provided to illustrate that the acceleration of the CQ algorithm is ineffective.

1 Introduction

Let H be a real Hilbert space with inner product \(\langle\cdot,\cdot \rangle\) and induced norm \(\|\cdot\|\). Suppose that \(C \subset H\) is nonempty, closed and convex. A mapping \(T:C\rightarrow C \) is said to be nonexpansive if

$$\|Tx-Ty\|\leq\|x-y\| $$

for all \(x, y \in C\). The set of fixed points of T is defined by \(\operatorname{Fix}(T):=\{x\in C : Tx=x\}\).

In this paper, we consider the following fixed point problem.

Problem 1.1

Suppose that \(T: C\rightarrow C\) is nonexpansive with \(\operatorname{Fix}(T)\neq \emptyset\). Then

$$\mbox{find } x^{\ast}\in C \mbox{ such that } T\bigl(x^{\ast}\bigr)=x^{\ast}. $$

The fixed point problems for nonexpansive mappings [1–4] capture various applications in diversified areas, such as convex feasibility problems, convex optimization problems, problems of finding the zeros of monotone operators, and monotone variational inequalities (see [1, 5] and the references therein). The Picard algorithm [6], the Mann algorithm [7, 8], and the CQ algorithm [9] are useful methods for solving such fixed point problems. Meanwhile, to guarantee that practical systems and networks (see, e.g., [10–13]) are stable and reliable, the fixed point has to be found quickly. Recently, Sakurai and Iiduka [14] accelerated the Halpern algorithm and obtained a fast algorithm with strong convergence. Inspired by their work, we focus on the Mann and CQ algorithms and present new algorithms that accelerate the approximation of a fixed point of a nonexpansive mapping.

We first apply the Picard algorithm to the smooth convex minimization problem and illustrate that the Picard algorithm is then the steepest descent method [15] for solving the minimization problem. Since conjugate gradient methods [15] are widely regarded as efficient accelerations of the steepest descent method, we introduce an accelerated Picard algorithm by combining the conjugate gradient methods with the Picard algorithm. Then, based on the accelerated Picard algorithm, we present accelerations of the Mann and CQ algorithms.

In this paper, we propose two accelerated algorithms for finding a fixed point of a nonexpansive mapping and prove their convergence. Finally, numerical examples are presented to demonstrate the effectiveness and fast convergence of the accelerated Mann algorithm and the ineffectiveness of the accelerated CQ algorithm.

2 Mathematical preliminaries

2.1 Picard algorithm and our algorithm

The Picard algorithm generates the sequence \(\{x_{n}\}_{n=0}^{\infty}\) as follows: given \(x_{0}\in H\),

$$ x_{n+1}=Tx_{n},\quad n\geq0. $$
(1)

The Picard algorithm (1) converges to a fixed point of the mapping T if \(T:C\rightarrow C\) is contractive (see, e.g., [1]).

When \(\operatorname{Fix}(T)\) is the set of all minimizers of a convex, continuously Fréchet differentiable functional f over H, algorithm (1) is the steepest descent method [15] for minimizing f over H. Suppose that the gradient of f, denoted by ∇f, is Lipschitz continuous with a constant \(L >0\) and define \(T^{f}:H\rightarrow H\) by

$$ T^{f}:=I-\lambda\nabla f, $$
(2)

where \(\lambda\in(0,2/L)\) and \(I:H\rightarrow H\) stands for the identity mapping. Accordingly, \(T^{f}\) satisfies the nonexpansivity condition (see, e.g., [10]) and

$$\operatorname{Fix}\bigl(T^{f}\bigr)=\mathop{\operatorname {arg\,min}}\limits_{x\in H}f(x):= \Bigl\{ x^{*}\in H:f\bigl(x^{*}\bigr)=\min_{x\in H} f(x) \Bigr\} . $$
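This identity follows from the first-order optimality condition for convex functions: since \(\lambda>0\),

$$x=T^{f}(x) \quad\Longleftrightarrow\quad x=x-\lambda\nabla f(x) \quad\Longleftrightarrow\quad \nabla f(x)=0 \quad\Longleftrightarrow\quad x\in\mathop{\operatorname{arg\,min}}\limits_{y\in H}f(y), $$

where the last equivalence uses the convexity of f.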

Therefore, algorithm (1) with \(T := T^{f}\) can be expressed as follows:

$$ \left \{ \textstyle\begin{array}{@{}l} d^{f}_{n+1}:=-\nabla f(x_{n}),\\ x_{n+1}:=T^{f}(x_{n})=x_{n}-\lambda \nabla f(x_{n})=x_{n}+\lambda d^{f}_{n+1}. \end{array}\displaystyle \right . $$
(3)

The conjugate gradient methods [15] are popular acceleration methods of the steepest descent method. The conjugate gradient direction of f at \(x_{n}\) (\(n\geq0\)) is \(d^{f,CGD}_{n+1}:= -\nabla f(x_{n})+ \beta_{n}d^{f,CGD}_{n}\), where \(d^{f,CGD}_{0}:= -\nabla f(x_{0})\) and \(\{\beta_{n}\}_{n=0}^{\infty}\subset (0,\infty)\), which, together with (2), implies that

$$ d^{f,CGD}_{n+1}=\frac{1}{\lambda} \bigl(T^{f}(x_{n})-x_{n}\bigr)+ \beta_{n}d^{f,CGD}_{n}. $$
(4)

By replacing \(d^{f}_{n+1}:= -\nabla f (x_{n})\) in algorithm (3) with \(d^{f,CGD}_{n+1}\) defined by (4), we get the accelerated Picard algorithm as follows:

$$ \left \{ \textstyle\begin{array}{@{}l} d^{f,CGD}_{n+1}:=\frac{1}{\lambda}\bigl(T^{f}(x_{n})-x_{n}\bigr)+ \beta_{n}d^{f,CGD}_{n},\\ x_{n+1}:=x_{n}+\lambda d^{f,CGD}_{n+1}. \end{array}\displaystyle \right . $$
(5)
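To make (5) concrete, the following minimal sketch (not taken from the paper) applies the accelerated Picard iteration to the smooth convex quadratic \(f(x)=\frac{1}{2}\|Ax-b\|^{2}\), whose gradient \(\nabla f(x)=A^{T}(Ax-b)\) is Lipschitz continuous with constant \(L=\|A^{T}A\|\); the random data, the step size \(\lambda\in(0,2/L)\), and the summable choice \(\beta_{n}=1/(n+1)^{2}\) are illustrative assumptions.

```python
import numpy as np

def accelerated_picard(A, b, lam, n_iter=200):
    """Accelerated Picard iteration (5) applied to f(x) = 0.5*||Ax - b||^2."""
    grad = lambda x: A.T @ (A @ x - b)       # gradient of f
    Tf = lambda x: x - lam * grad(x)         # T^f := I - lambda*grad f, see (2)
    x = np.zeros(A.shape[1])
    d = (Tf(x) - x) / lam                    # d_0 = -grad f(x_0)
    for n in range(n_iter):
        beta = 1.0 / (n + 1) ** 2            # illustrative summable beta_n
        d = (Tf(x) - x) / lam + beta * d     # conjugate-gradient-like direction (4)
        x = x + lam * d                      # x_{n+1} := x_n + lambda*d_{n+1}
    return x

rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
L = np.linalg.norm(A.T @ A, 2)               # Lipschitz constant of grad f
x = accelerated_picard(A, b, lam=1.0 / L)
print(np.linalg.norm(A.T @ (A @ x - b)))     # gradient norm, should be near zero
```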

The convergence condition of the Picard algorithm is very restrictive, and the algorithm does not converge for general nonexpansive mappings (see, e.g., [16]). So, in 1953, Mann [8] introduced the Mann algorithm

$$ x_{n+1}=\alpha_{n}x_{n}+(1- \alpha_{n})Tx_{n},\quad n\geq0, $$
(6)

and showed that the sequence generated by it converges to a fixed point of a nonexpansive mapping. In this paper, we combine (5) and (6) with the CQ algorithm to present two novel algorithms.

2.2 Some lemmas

We will use the following notation:

  1. (1)

    \(x_{n}\rightharpoonup x\) means that \(\{x_{n}\}\) converges weakly to x and \(x_{n}\rightarrow x\) means that \(\{x_{n}\}\) converges strongly to x.

  2. (2)

    \(\omega_{w}(x_{n}):=\{x:\exists x_{n_{j}}\rightharpoonup x\}\) denotes the weak ω-limit set of \(\{x_{n}\}\).

Lemma 2.1

Let H be a real Hilbert space. There hold the following identities:

  1. (i)

    \(\|x-y\|^{2}=\|x\|^{2}-\|y\|^{2}-2\langle x-y, y\rangle\), \(\forall x, y \in H\),

  2. (ii)

    \(\|tx+(1-t)y\|^{2}=t\|x\|^{2}+(1-t)\|y\|^{2}-t(1-t)\|x-y\|^{2}\), \(t\in[0, 1]\), \(\forall x, y \in H\).

Lemma 2.2

Let K be a closed convex subset of a real Hilbert space H, and let \(P_{K}\) be the (metric or nearest point) projection from H onto K (i.e., for \(x\in H\), \(P_{K}x\) is the unique point in K such that \(\|x-P_{K}x\|=\inf\{\|x-z\|: z \in K\}\)). Given \(x \in H\) and \(z\in K\), we have \(z=P_{K}x\) if and only if there holds the relation

$$\langle x-z,y-z\rangle\leq0 \quad\textit{for all } y\in K. $$

Lemma 2.3

(See [17])

Let K be a closed convex subset of H. Let \(\{x_{n}\}\) be a sequence in H and \(u\in H\). Let \(q=P_{K}u\). Suppose that \(\{x_{n}\}\) is such that \(\omega_{w}(x_{n})\subset K\) and satisfies the condition

$$\|x_{n}-u\|\leq\|u-q\| \quad\textit{for all }n. $$

Then \(x_{n}\rightarrow q\).

Lemma 2.4

(See [2])

Let C be a closed convex subset of a real Hilbert space H, and let \(T : C \rightarrow C\) be a nonexpansive mapping such that \(\operatorname{Fix}(T)\neq\emptyset\). If a sequence \(\{x_{n}\}\) in C is such that \(x_{n}\rightharpoonup z\) and \(x_{n} - T x_{n} \rightarrow0\), then \(z = T z\).

Lemma 2.5

(See [18])

Assume that \(\{a_{n}\}\) is a sequence of nonnegative real numbers satisfying the property

$$a_{n+1}\leq a_{n}+u_{n},\quad n\geq0, $$

where \(\{u_{n}\}\) is a sequence of nonnegative real numbers such that \(\sum_{n=1}^{\infty}u_{n}<\infty\). Then \(\lim_{n\rightarrow\infty} a_{n}\) exists.

3 The accelerated Mann algorithm

In this section, we present the accelerated Mann algorithm and give its convergence.

Algorithm 3.1

Choose \(\mu\in(0,1]\), \(\lambda>0\), and \(x_{0}\in{H}\) arbitrarily, and set \(\{\alpha_{n}\}_{n=0}^{\infty}\subset(0,1)\), \(\{\beta_{n}\}_{n=0}^{\infty}\subset[0,\infty)\). Set \(d_{0}:=(T(x_{0})-x_{0})/\lambda\). Compute \(d_{n+1}\) and \(x_{n+1}\) as follows:

$$ \left \{ \textstyle\begin{array}{@{}l} d_{n+1}:=\frac{1}{\lambda} \bigl(T(x_{n})-x_{n}\bigr)+\beta_{n}d_{n},\\ y_{n}:=x_{n}+\lambda d_{n+1},\\ x_{n+1}:=\mu\alpha_{n} x_{n}+(1-\mu \alpha_{n})y_{n}. \end{array}\displaystyle \right . $$
(7)

We can check that Algorithm 3.1 coincides with the Mann algorithm (6) when \(\beta_{n}\equiv0\) and \(\mu:= 1\).
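Since (7) only requires evaluations of T, Algorithm 3.1 can be written as a short generic routine. The sketch below is an illustration under our own assumptions: T is supplied as a callable on \(\mathbb{R}^{N}\), and \(\alpha_{n}=1/(n+2)\), \(\beta_{n}=1/(n+2)^{2}\) are sample sequences satisfying (C1) and (C2); with \(\beta_{n}\equiv0\) and \(\mu=1\) the update reduces to the Mann iteration (6).

```python
import numpy as np

def accelerated_mann(T, x0, lam=2.0, mu=0.05, tol=1e-6, max_iter=100_000):
    """Algorithm 3.1: accelerated Mann iteration for a nonexpansive mapping T."""
    x = np.asarray(x0, dtype=float)
    d = (T(x) - x) / lam                                  # d_0 := (T(x_0) - x_0)/lambda
    for n in range(max_iter):
        alpha, beta = 1.0 / (n + 2), 1.0 / (n + 2) ** 2   # sample (C1)/(C2) sequences
        d = (T(x) - x) / lam + beta * d                   # direction step of (7)
        y = x + lam * d                                   # y_n := x_n + lambda*d_{n+1}
        x = mu * alpha * x + (1 - mu * alpha) * y         # x_{n+1}: convex combination
        if np.linalg.norm(T(x) - x) < tol:                # residual ||T(x_n)-x_n|| rule
            return x
    return x
```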

In this section we make the following assumptions.

Assumption 3.1

The sequences \(\{\alpha_{n}\}_{n=0}^{\infty}\) and \(\{\beta_{n}\}_{n=0}^{\infty}\) satisfy

  1. (C1)

    \(\sum_{n=0}^{\infty}\mu\alpha_{n}(1-\mu\alpha_{n})=\infty\),

  2. (C2)

    \(\sum_{n=0}^{\infty}\beta_{n}<\infty\).

Moreover, \(\{x_{n}\}_{n=0}^{\infty}\) satisfies

  1. (C3)

    \(\{T(x_{n})-x_{n}\}_{n=0}^{\infty}\) is bounded.

Before carrying out the convergence analysis of Algorithm 3.1, we first establish two lemmas.

Lemma 3.1

Suppose that \(T: H\rightarrow H\) is nonexpansive with \(\operatorname{Fix}(T)\neq \emptyset\) and that Assumption  3.1 holds. Then \(\{d_{n}\}_{n=0}^{\infty}\) and \(\{\|x_{n}-p\|\}_{n=0}^{\infty}\) are bounded for any \(p\in \operatorname{Fix}(T)\). Furthermore, \(\lim_{n\rightarrow\infty} \|x_{n}-p\|\) exists.

Proof

We have from (C2) that \(\lim_{n\rightarrow\infty} \beta_{n}=0\). Accordingly, there exists \(n_{0}\in\mathbb{N}\) such that \(\beta_{n}\leq 1/2 \) for all \(n\geq n_{0}\). Define \(M_{1}:=\max\{\max_{1\leq k\leq n_{0}}\| d_{k}\|, (2/\lambda)\sup_{n\in\mathbb{N}}\|T(x_{n})-x_{n}\|\}\). Then (C3) implies that \(M_{1}< \infty\). Assume that \(\|d_{n}\|\leq M_{1}\) for some \(n\geq n_{0}\). The triangle inequality ensures that

$$ \|d_{n+1}\|=\biggl\| \frac{1}{\lambda}\bigl(T(x_{n})-x_{n} \bigr)+\beta_{n}d_{n}\biggr\| \leq\frac {1}{\lambda} \bigl\| T(x_{n})-x_{n}\bigr\| +\beta_{n}\|d_{n}\| \leq M_{1}, $$
(8)

which means that \(\|d_{n}\|\leq M_{1}\) for all \(n\geq0\), i.e., \(\{ d_{n}\}_{n=0}^{\infty}\) is bounded.

The definition of \(\{y_{n}\}_{n=0}^{\infty}\) implies that

$$\begin{aligned} y_{n} &=x_{n}+\lambda\biggl( \frac{1}{\lambda}\bigl(T(x_{n})-x_{n}\bigr)+ \beta_{n}d_{n}\biggr) \\ &= T(x_{n})+\lambda\beta_{n}d_{n}. \end{aligned}$$
(9)

The nonexpansivity of T and (9) imply that, for any \(p\in \operatorname{Fix}(T)\) and for all \(n\geq n_{0}\),

$$\begin{aligned} \|y_{n}-p\|&=\bigl\| T(x_{n})+\lambda \beta_{n} d_{n}-p\bigr\| \\ &\leq\bigl\| T(x_{n})-T(p)\bigr\| +\lambda\beta_{n} \|d_{n}\| \\ &\leq\|x_{n}-p\|+\lambda M_{1}\beta_{n}. \end{aligned}$$
(10)

Therefore, we find

$$\begin{aligned} \|x_{n+1}-p\|&=\bigl\| \mu\alpha_{n}(x_{n}-p)+(1- \mu\alpha_{n}) (y_{n}-p)\bigr\| \\ &\leq\mu\alpha_{n}\|x_{n}-p\|+(1-\mu\alpha_{n}) \|y_{n}-p\| \\ &\leq\mu\alpha_{n}\|x_{n}-p\|+(1-\mu\alpha_{n}) \bigl\{ \|x_{n}-p\|+\lambda M_{1}\beta _{n}\bigr\} \\ &\leq\|x_{n}-p\|+\lambda M_{1}\beta_{n}, \end{aligned}$$
(11)

which implies

$$\|x_{n}-p\|\leq\|x_{0}-p\|+\lambda M_{1}\sum _{k=0}^{n-1}\beta_{k}< \infty. $$

So, we get that \(\{x_{n}\}_{n=0}^{\infty}\) is bounded. From (10) it follows that \(\{y_{n}\}_{n=0}^{\infty}\) is bounded.

In addition, using Lemma 2.5, (C2), and (11), we obtain that \(\lim_{n\rightarrow\infty} \|x_{n}-p\|\) exists. □

Lemma 3.2

Suppose that \(T: H\rightarrow H\) is nonexpansive with \(\operatorname{Fix}(T)\neq \emptyset\) and that Assumption  3.1 holds. Then

$$\lim_{n\rightarrow\infty} \bigl\| x_{n}-T(x_{n})\bigr\| =0. $$

Proof

By (7)-(9) and the nonexpansivity of T, we have

$$\begin{aligned} &\bigl\| x_{n+1}-T(x_{n+1})\bigr\| \\ &\quad= \bigl\| \bigl[\mu\alpha_{n} x_{n}+(1-\mu\alpha_{n}) \bigl(T(x_{n})+\lambda\beta _{n}d_{n}\bigr) \bigr]-T\bigl(\mu\alpha_{n} x_{n}+(1-\mu \alpha_{n}) \bigl(T(x_{n})+\lambda\beta_{n}d_{n} \bigr)\bigr)\bigr\| \\ &\quad=\bigl\| \bigl[\mu\alpha_{n} \bigl(x_{n}-T(x_{n}) \bigr)+(1-\mu\alpha_{n})\lambda\beta _{n}d_{n} \bigr]\\ &\qquad{}+\bigl[T(x_{n})-T\bigl(\mu\alpha_{n} x_{n}+(1-\mu\alpha_{n}) \bigl(T(x_{n})+\lambda \beta _{n}d_{n}\bigr)\bigr)\bigr]\bigr\| \\ &\quad\leq\mu\alpha_{n} \bigl\| x_{n}-T(x_{n})\bigr\| +(1-\mu \alpha_{n})\lambda\beta_{n}\|d_{n}\| \\ &\qquad{}+ \bigl\| x_{n}-\bigl[\mu\alpha_{n} x_{n}+(1-\mu \alpha_{n}) \bigl(T(x_{n})+\lambda\beta_{n}d_{n} \bigr)\bigr]\bigr\| \\ &\quad\leq\mu\alpha_{n} \bigl\| x_{n}-T(x_{n})\bigr\| +(1-\mu \alpha_{n})\lambda\beta_{n}\|d_{n}\| \\ &\qquad{}+(1-\mu \alpha_{n})\bigl\| x_{n}-T(x_{n})\bigr\| +(1-\mu \alpha_{n})\lambda\beta_{n}\|d_{n}\|\\ &\quad\leq\bigl\| x_{n}-T(x_{n})\bigr\| +2\lambda\beta_{n} \|d_{n}\|\\ &\quad\leq\bigl\| x_{n}-T(x_{n})\bigr\| +2\lambda\beta_{n}M_{1}, \end{aligned}$$

which, with (C2) and Lemma 2.5, yields that the limit of \(\|x_{n}-T(x_{n})\| \) exists.

On the other hand, for any \(p\in \operatorname{Fix}(T)\) and for all \(n\geq n_{0}\), using the triangle inequality, the Cauchy-Schwarz inequality, and Lemma 2.1(ii), we obtain

$$\begin{aligned} \|x_{n+1}-p\|^{2}={}& \bigl\| \mu\alpha_{n} x_{n}+(1-\mu\alpha_{n})y_{n}-p\bigr\| ^{2}\\ ={}& \bigl\| \mu\alpha_{n} x_{n}+(1-\mu\alpha_{n}) \bigl(T(x_{n})+\lambda\beta_{n}d_{n}\bigr)-p\bigr\| ^{2}\\ ={}& \bigl\| \mu\alpha_{n} (x_{n}-p)+(1-\mu\alpha_{n}) \bigl(T(x_{n})-p\bigr)+(1-\mu\alpha _{n})\lambda \beta_{n}d_{n}\bigr\| ^{2} \\ \leq{}&\bigl\| \mu\alpha_{n} (x_{n}-p)+(1-\mu\alpha_{n}) \bigl(T(x_{n})-p\bigr)\bigr\| ^{2}+\bigl\| (1-\mu \alpha_{n}) \lambda\beta_{n}d_{n}\bigr\| ^{2} \\ &{} +2\bigl\| \mu\alpha_{n} (x_{n}-p)+(1-\mu\alpha_{n}) \bigl(T(x_{n})-p\bigr)\bigr\| \bigl\| (1-\mu\alpha_{n})\lambda\beta _{n}d_{n}\bigr\| \\ \leq{}&\mu\alpha_{n}\| x_{n}-p\|^{2}+(1-\mu \alpha_{n})\bigl\| T(x_{n})-p\bigr\| ^{2}-\mu\alpha _{n}(1-\mu\alpha_{n})\bigl\| T(x_{n})-x_{n} \bigr\| ^{2} \\ &{} +(1-\mu\alpha_{n})^{2}\lambda^{2} \beta_{n}^{2}M_{1}^{2}+2\bigl[\mu \alpha_{n}\| x_{n}-p\|+(1-\mu\alpha_{n}) \bigl\| T(x_{n})-p\bigr\| \bigr]\\ &{}\times(1-\mu\alpha_{n})\lambda \beta_{n} M_{1} \\ \leq{}&\|x_{n}-p\|^{2}-\mu\alpha_{n}(1-\mu \alpha_{n})\bigl\| T(x_{n})-x_{n}\bigr\| ^{2}\\ &{}+\beta _{n}(1-\mu\alpha_{n})\lambda\bigl\{ 2M_{1} \|x_{n}-p\| +(1-\mu\alpha _{n})\lambda\beta_{n} M_{1}^{2} \bigr\} \\ \leq{}&\|x_{n}-p\|^{2}-\mu\alpha_{n}(1-\mu \alpha_{n})\bigl\| T(x_{n})-x_{n}\bigr\| ^{2}+ \beta_{n}M_{2}. \end{aligned}$$

We have from Lemma 3.1 that \(M_{2}:=\sup_{k\geq0}(1-\mu\alpha_{k})\lambda \{2M_{1}\|x_{k}-p\|+(1-\mu\alpha_{k})\lambda\beta_{k} M_{1}^{2}\}\) is finite. Therefore, using (C2), we obtain

$$\sum_{k=0}^{n}\mu\alpha_{k}(1- \mu\alpha_{k})\bigl\| T(x_{k})-x_{k}\bigr\| ^{2} \leq\| x_{0}-p\|^{2}-\|x_{n+1}-p \|^{2}+M_{2}\sum_{k=0}^{n} \beta_{k}< \infty, $$

which, with (C1), implies that

$$\liminf_{n\rightarrow\infty}\bigl\| T(x_{n})-x_{n}\bigr\| =0. $$

Due to the existence of the limit of \(\|T(x_{n})-x_{n}\|\), we have

$$\lim_{n\rightarrow\infty}\bigl\| T(x_{n})-x_{n}\bigr\| =0, $$

which with Lemma 2.4 implies that \(\omega_{w}(x_{n})\subset \operatorname{Fix}(T)\). □

Theorem 3.1

Suppose that \(T: H\rightarrow H\) is nonexpansive with \(\operatorname{Fix}(T)\neq \emptyset\) and that Assumption  3.1 holds. Then the sequence \(\{x_{n}\}\) generated by Algorithm 3.1 weakly converges to a fixed point of T.

Proof

To see that \(\{x_{n}\}\) is actually weakly convergent, we need to show that \(\omega_{w}(x_{n})\) consists of exactly one point. Take \(p, q\in \omega_{w}(x_{n})\) and let \(\{x_{n_{i}}\}\) and \(\{x_{m_{j}}\}\) be two subsequences of \(\{x_{n}\}\) such that \(x_{n_{i}}\rightharpoonup p\) and \(x_{m_{j}}\rightharpoonup q\), respectively. Using Lemma 2.7 of [19] and Lemma 3.1, we have \(p=q\). Hence, the proof is complete. □

4 The accelerated CQ algorithm

In general, the Mann algorithm (6) has only weak convergence (see [20] for an example). However, strong convergence is often much more desirable than weak convergence in many problems that arise in infinite dimensional spaces (see [21] and the references therein). In 2003, Nakajo and Takahashi [9] introduced the following modification of the Mann algorithm:

$$ \textstyle\begin{cases} x_{0}\in C \mbox{ chosen arbitrarily},\\ y_{n}:=\alpha_{n} x_{n}+(1- \alpha_{n})Tx_{n},\\ C_{n}=\{z\in C: \|y_{n}-z\|\leq\|x_{n}-z\|\},\\ Q_{n}=\bigl\{ z\in C:\langle x_{n}-z, x_{0}-x_{n} \rangle\geq0\bigr\} ,\\ x_{n+1}=P_{C_{n}\cap Q_{n}}x_{0}, \end{cases} $$
(12)

where C is a nonempty, closed and convex subset of a Hilbert space H, \(T :C\rightarrow C\) is a nonexpansive mapping, and \(P_{K}\) denotes the metric projection from H onto a closed convex subset K of H.
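Both \(C_{n}\) and \(Q_{n}\) in (12) are half-spaces intersected with C: squaring the inequality defining \(C_{n}\) gives a condition that is affine in z (as in the proof of Theorem 4.1 below), and \(Q_{n}=\{z\in C:\langle x_{0}-x_{n},z\rangle\leq\langle x_{0}-x_{n},x_{n}\rangle\}\). The basic building block for computing \(P_{C_{n}\cap Q_{n}}\) is therefore the projection onto a half-space, sketched below; the helper name and the suggestion to combine such projections via Dykstra's algorithm or a small quadratic program are our own illustrative assumptions, not constructions from the paper.

```python
import numpy as np

def halfspace_proj(x, a, b):
    """Project x onto the half-space {z : <a, z> <= b} (closed-form formula)."""
    violation = a @ x - b
    if violation <= 0.0:                      # x is already in the half-space
        return x
    return x - (violation / (a @ a)) * a      # move back along the normal a

# Example: Q_n = {z : <x_n - z, x0 - x_n> >= 0} = {z : <x0 - x_n, z> <= <x0 - x_n, x_n>},
# so the projection onto the half-space part of Q_n alone is
#   proj_Qn = lambda z: halfspace_proj(z, x0 - x_n, (x0 - x_n) @ x_n)
```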

Here, we introduce an acceleration of the CQ algorithm based on Algorithm 3.1 and show its strong convergence.

Theorem 4.1

Let C be a bounded, closed and convex subset of a Hilbert space H and \(T: C\rightarrow C\) be nonexpansive with \(\operatorname{Fix}(T)\neq\emptyset\). Assume that \(\{\alpha_{n}\}_{n=0}^{\infty}\) is a sequence in \((0, a]\) for some \(0< a<1\) and \(\{\beta_{n}\}_{n=0}^{\infty}\subset[0,\infty)\) such that \(\lim_{n\rightarrow\infty}\beta_{n}=0\). Define a sequence \(\{x_{n}\}_{n=0}^{\infty}\) in C by the following algorithm:

$$ \textstyle\begin{cases} x_{0}\in C \textit{ chosen arbitrarily},\\ d_{n+1}:=\frac{1}{\lambda}\bigl(T(x_{n})-x_{n} \bigr)+\beta_{n}d_{n},\\ y_{n}:=x_{n}+\lambda d_{n+1},\\ z_{n}:=\alpha_{n} x_{n}+(1- \alpha_{n})y_{n},\\ C_{n}=\bigl\{ z\in C: \|z_{n}-z\|^{2}\leq \|x_{n}-z\|^{2}+\theta_{n}\bigr\} ,\\ Q_{n}=\bigl\{ z\in C:\langle x_{n}-z, x_{0}-x_{n} \rangle\geq0\bigr\} ,\\ x_{n+1}=P_{C_{n}\cap Q_{n}}x_{0}, \end{cases} $$
(13)

where

$$\theta_{n}=\lambda\beta_{n}M_{4}[\lambda \beta_{n}M_{4}+2M_{3}]\rightarrow0 \quad\textit{as }n\rightarrow \infty, $$

\(M_{3}:=\operatorname{diam} C\), \(M_{4}:=\max\{\max_{1\leq k\leq n_{0}}\|d_{k}\|, (2/\lambda )M_{3}\}\), and \(n_{0}\) is chosen such that \(\beta_{n}\leq1/2\) for all \(n\geq n_{0}\). Then \(\{x_{n}\}_{n=0}^{\infty}\) converges in norm to \(P_{\operatorname{Fix}(T)}(x_{0})\).

Proof

First observe that \(C_{n}\) is convex. Indeed, the inequality defining \(C_{n}\) can be rewritten as

$$\bigl\langle 2(x_{n}-z_{n}), z \bigr\rangle \leq \|x_{n}\|^{2}-\|z_{n}\|^{2}+ \theta_{n} $$

which is affine (and hence convex) in z. Next we show that \(\operatorname{Fix}(T)\subset C_{n}\) for all n. As in the proof of (8), we get \(\|d_{n}\|\leq M_{4}\). Since \(x_{n}\in C\), we have, for all \(p\in \operatorname{Fix}(T)\subset C\),

$$ \|x_{n}-p\|\leq M_{3}, $$
(14)

and

$$\|y_{n}-p\|\leq\|x_{n}-p\|+\lambda\beta_{n}M_{4}, $$

which comes from (10). Thus we get

$$\begin{aligned} \|z_{n}-p\|&\leq\alpha_{n}\|x_{n}-p \|+(1-\alpha_{n})\|y_{n}-p\|\\ &\leq\|x_{n}-p\|+\lambda\beta_{n} M_{4}, \end{aligned}$$

and consequently,

$$\begin{aligned} \|z_{n}-p\|^{2}&\leq\| x_{n}-p\|^{2}+2 \lambda\beta_{n} M_{4}\| x_{n}-p\|+(\lambda \beta_{n} M_{4})^{2}\\ &\leq\| x_{n}-p\|^{2}+\theta_{n}, \end{aligned}$$

where \(\theta_{n}=\lambda\beta_{n}M_{4}[\lambda\beta_{n}M_{4}+2M_{3}]\). So, \(p\in C_{n}\) for all n. Next we show that

$$ \operatorname{Fix}(T)\subset Q_{n} \quad\mbox{for all }n \geq0. $$
(15)

We prove this by induction. For \(n=0\), we have \(\operatorname{Fix}(T)\subset C= Q_{0}\). Assume that \(\operatorname{Fix}(T)\subset Q_{n}\). Since \(x_{n+1}\) is the projection of \(x_{0}\) onto \(C_{n}\cap Q_{n} \), we have

$$\langle x_{n+1}-z, x_{0}-x_{n+1}\rangle\geq0,\quad \forall z\in C_{n}\cap Q_{n}. $$

As \(\operatorname{Fix}(T)\subset C_{n}\cap Q_{n}\), the last inequality holds, in particular, for all \(z\in \operatorname{Fix}(T)\). This together with the definition of \(Q_{n+1}\) implies that \(\operatorname{Fix}(T)\subset Q_{n+1}\). Hence (15) holds for all \(n\geq0\).

Notice that the definition of \(Q_{n}\) actually implies \(x_{n}=P_{Q_{n}}(x_{0})\). This together with the fact that \(\operatorname{Fix}(T)\subset Q_{n}\) further implies

$$\|x_{n}-x_{0}\|\leq\|p-x_{0}\|,\quad p \in \operatorname{Fix}(T). $$

Due to \(q=P_{\operatorname{Fix}(T)}(x_{0})\in \operatorname{Fix}(T)\), we have

$$ \|x_{n}-x_{0}\|\leq\|q-x_{0}\|. $$
(16)

The fact that \(x_{n+1}\in Q_{n}\) implies that \(\langle x_{n+1}-x_{n}, x_{n}-x_{0}\rangle\geq0\). This together with Lemma 2.1(i) implies

$$ \begin{aligned}[b] \|x_{n+1}-x_{n}\|^{2}&= \bigl\| (x_{n+1}-x_{0})-(x_{n}-x_{0}) \bigr\| ^{2}\\ &=\|x_{n+1}-x_{0}\|^{2}-\|x_{n}-x_{0}\|^{2}-2\langle x_{n+1}-x_{n}, x_{n}-x_{0} \rangle\\ &\leq\|x_{n+1}-x_{0}\|^{2}-\|x_{n}-x_{0} \|^{2}. \end{aligned} $$
(17)

This implies that the sequence \(\{\|x_{n}-x_{0}\|\}_{n=0}^{\infty}\) is increasing. Recalling (14), we see that it is also bounded, and hence \(\lim_{n\rightarrow\infty }\|x_{n}-x_{0}\|\) exists. It then follows from (17) that

$$\lim_{n\rightarrow\infty}\|x_{n+1}-x_{n}\|=0. $$

By the fact \(x_{n+1}\in C_{n}\) we get

$$\|z_{n}-x_{n+1}\|^{2}\leq\|x_{n}-x_{n+1} \|^{2}+\theta_{n}, $$

and thus

$$ \|z_{n}-x_{n+1}\|\leq\|x_{n}-x_{n+1} \|+\sqrt{\theta_{n}}. $$
(18)

On the other hand, since \(z_{n}=\alpha_{n}x_{n}+(1-\alpha_{n})T(x_{n})+(1-\alpha _{n})\lambda\beta_{n} d_{n}\) and \(\alpha_{n}\leq a\), we have

$$ \begin{aligned}[b] \bigl\| T(x_{n})-x_{n}\bigr\| &=\frac{1}{1-\alpha_{n}} \bigl\| z_{n}-x_{n}-(1-\alpha_{n})\lambda\beta _{n} d_{n}\bigr\| \\ &\leq\frac{1}{1-a}\|z_{n}-x_{n}\|+\lambda \beta_{n} \|d_{n}\|\\ &\leq\frac{1}{1-a}\bigl(\|z_{n}-x_{n+1}\|+ \|x_{n+1}-x_{n}\|\bigr)+\lambda\beta_{n} M_{4}\\ &\leq\frac{1}{1-a}\bigl(2\|x_{n+1}-x_{n}\|+\sqrt{ \theta_{n}}\bigr)+\lambda\beta_{n} M_{4} \rightarrow0, \end{aligned} $$
(19)

where the last inequality comes from (18).

Lemma 2.4 and (19) then guarantee that every weak limit point of \(\{x_{n}\}_{n=0}^{\infty}\) is a fixed point of T. That is, \(\omega _{w}(x_{n})\subset \operatorname{Fix} (T)\). This fact, with inequality (16) and Lemma 2.3, ensures the strong convergence of \(\{x_{n}\}_{n=0}^{\infty}\) to \(q=P_{\operatorname{Fix}(T)}x_{0}\). □

5 Numerical examples and conclusion

In this section, we compare the original algorithms and the accelerated algorithms. The codes were written in Matlab 7.0 and run on a personal computer.

First, we apply the Mann algorithm (6) and Algorithm 3.1 to the following convex feasibility problem (see [1, 14]).

Problem 5.1

(From [14])

Given a nonempty, closed convex set \(C_{i}\subset\mathbb{R}^{N} \) (\(i= 0, 1, \ldots, m\)),

$$\mbox{find } x^{*}\in C :=\bigcap_{i=0}^{m} C_{i}, $$

where one assumes that \(C\neq\emptyset\). Define a mapping \(T:\mathbb{R}^{N}\rightarrow\mathbb{R}^{N}\) by

$$ T:=P_{0}\Biggl(\frac{1}{m}\sum ^{m}_{i=1}P_{i}\Biggr), $$
(20)

where \(P_{i}=P_{C_{i}} \) (\(i= 0, 1, \ldots, m\)) stands for the metric projection onto \(C_{i}\). Since \(P_{i} \) (\(i= 0, 1, \ldots, m\)) is nonexpansive, T defined by (20) is also nonexpansive. Moreover, we find that

$$\operatorname{Fix}(T)=\operatorname{Fix}(P_{0})\cap\bigcap^{m}_{i=1} \operatorname{Fix}(P_{i})=C_{0}\cap\bigcap^{m}_{i=1} C_{i} =C. $$

Set \(\lambda:=2\), \(\mu:=0.05\), \(\alpha_{n}:=1/(n+1) \) (\(n\geq0\)), and \(\beta_{n}:=1/(n+1) \) in Algorithm 3.1 and \(\alpha_{n}:={\mu}/(n+1)\) in the Mann algorithm (6). In the experiment, we set \(C_{i}\) (\(i=0,1,\ldots, m\)) as a closed ball with center \(c_{i}\in\mathbb{R}^{N}\) and radius \(r_{i} >0\). Thus, \(P_{i} \) (\(i=0,1,\ldots, m\)) can be computed with

$$P_{i}(x):= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} c_{i}+\frac{r_{i}}{\|c_{i}-x\|}(x-c_{i}) &\mbox{if }\|c_{i}-x\|>r_{i},\\ x &\mbox{if } \|c_{i}-x\|\leq r_{i}. \end{array}\displaystyle \right . $$

We set \(r_{i}:=1 \) (\(i=0,1,\ldots, m\)) and \(c_{0}:=0\), and the centers \(c_{i} \) (\(i=1,\ldots, m\)) were randomly chosen in \(({-1}/{\sqrt{N}}, {1}/{\sqrt{N}})^{N}\). Set \(e:=(1,1,\ldots,1)\). In Table 1, ‘Iter.’ and ‘Sec.’ denote the number of iterations and the CPU time in seconds, respectively. We took \(\|T(x_{n})-x_{n}\| <\varepsilon=10^{-6}\) as the stopping criterion.
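For reference, the operator (20) used in this experiment can be assembled as follows. This is a minimal sketch under the setup described above; the helper names and the reuse of the accelerated_mann routine sketched in Section 3 are our own, and the choice of \(x_{0}=e\) as the initial point is an assumption.

```python
import numpy as np

def ball_proj(c, r):
    """Metric projection onto the closed ball with center c and radius r."""
    def proj(x):
        dist = np.linalg.norm(x - c)
        return x if dist <= r else c + (r / dist) * (x - c)
    return proj

def make_T(N, m, seed=0):
    """Build T := P_0((1/m) * sum_i P_i) as in (20) for randomly centered unit balls."""
    rng = np.random.default_rng(seed)
    P0 = ball_proj(np.zeros(N), 1.0)                         # C_0: unit ball at the origin
    centers = rng.uniform(-1 / np.sqrt(N), 1 / np.sqrt(N), size=(m, N))
    Ps = [ball_proj(c, 1.0) for c in centers]                # C_i: random unit balls
    return lambda x: P0(sum(P(x) for P in Ps) / m)

T = make_T(N=100, m=10)
# x_fix = accelerated_mann(T, x0=np.ones(100), lam=2.0, mu=0.05)   # x_0 = e (assumed)
```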

Table 1 Computational results for Problem 5.1 with different dimensions

Table 1 illustrates that, with a few exceptions, Algorithm 3.1 significantly reduces the running time and the number of iterations needed to find a fixed point compared with the Mann algorithm. The advantage becomes more pronounced as the parameters N and m increase. The reason why a few exceptions emerge is worth further research.

Next, we apply the CQ algorithm (12) and the accelerated CQ algorithm (13) to the following problem.

Problem 5.2

(From [22])

Let C be the closed unit ball \(S(0,1)=\{x \in\mathbb{R}^{3} : \|x\| \leq1\}\) and \(T:S(0,1)\rightarrow S(0,1)\) be defined by \(T:(x,y,z)^{T}\mapsto(\frac{1}{\sqrt{3}}\sin(x+z),\frac{1}{\sqrt {3}}\sin(x+z),\frac{1}{\sqrt{3}}(x+y))^{T}\). Then

$$\mbox{find } x^{\ast}\in S(0,1) \mbox{ such that } T\bigl(x^{\ast}\bigr)=x^{\ast}. $$

He and Yang [22] showed that T is nonexpansive and has at least one fixed point in \(S(0,1)\).

Take the sequence \(\alpha_{n}=\frac{1}{n}\) in (12) and (13), and \(\beta_{n}=\frac{1}{66 n^{3}}\), \(\lambda=1.2\) in (13). We tested four different initial points, and the numerical results are listed in Table 2.
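For completeness, the mapping of Problem 5.2 can be coded directly, as in the sketch below (our own illustration). Computing \(x_{n+1}=P_{C_{n}\cap Q_{n}}(x_{0})\) in (12) and (13) additionally requires a projection onto the intersection of the ball with the two half-spaces, e.g., via the half-space helper from Section 4 combined with Dykstra's algorithm or a small quadratic program; that step is omitted here.

```python
import numpy as np

def T_problem52(v):
    """Mapping T of Problem 5.2 on the closed unit ball S(0, 1) in R^3."""
    x, y, z = v
    return np.array([np.sin(x + z), np.sin(x + z), x + y]) / np.sqrt(3)

v0 = np.array([0.5, -0.3, 0.2])                 # an arbitrary point in S(0, 1)
print(np.linalg.norm(T_problem52(v0) - v0))     # fixed-point residual at v0
```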

Table 2 Computational results for Problem 5.2 with different initial points

Table 2 shows that the acceleration of the CQ algorithm is ineffective; that is, the accelerated CQ algorithm does not in fact improve on the CQ algorithm in terms of running time or the number of iterations. The acceleration effect may be cancelled out by the projection onto the sets \(C_{n}\) and \(Q_{n}\).

6 Concluding remarks

In this paper, we accelerate the Mann and CQ algorithms to obtain the accelerated Mann and CQ algorithms, respectively. We then establish the weak convergence of the accelerated Mann algorithm and the strong convergence of the accelerated CQ algorithm under some conditions. The numerical examples illustrate that the acceleration of the Mann algorithm is effective, whereas the acceleration of the CQ algorithm is ineffective.

References

  1. Bauschke, HH, Combettes, PL: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, Berlin (2011)

  2. Goebel, K, Kirk, WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge (1990)

  3. Goebel, K, Reich, S: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York (1984)

  4. Takahashi, W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)

  5. Bauschke, HH, Borwein, JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38(3), 367-426 (1996)

  6. Picard, E: Mémoire sur la théorie des équations aux dérivées partielles et la méthode des approximations successives. J. Math. Pures Appl. 6, 145-210 (1890)

  7. Krasnosel’skii, MA: Two remarks on the method of successive approximations. Usp. Mat. Nauk 10, 123-127 (1955)

  8. Mann, WR: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506-510 (1953)

  9. Nakajo, K, Takahashi, W: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 279, 372-379 (2003)

  10. Iiduka, H: Iterative algorithm for solving triple-hierarchical constrained optimization problem. J. Optim. Theory Appl. 148, 580-592 (2011)

  11. Iiduka, H: Fixed point optimization algorithm and its application to power control in CDMA data networks. Math. Program. 133, 227-242 (2012)

  12. Iiduka, H: Iterative algorithm for triple-hierarchical constrained nonconvex optimization problem and its application to network bandwidth allocation. SIAM J. Optim. 22(3), 862-878 (2012)

  13. Iiduka, H: Fixed point optimization algorithms for distributed optimization in networked systems. SIAM J. Optim. 23, 1-26 (2013)

  14. Sakurai, K, Iiduka, H: Acceleration of the Halpern algorithm to search for a fixed point of a nonexpansive mapping. Fixed Point Theory Appl. 2014, 202 (2014)

  15. Nocedal, J, Wright, SJ: Numerical Optimization, 2nd edn. Springer Series in Operations Research and Financial Engineering. Springer, Berlin (2006)

  16. Yao, Y, Marino, G, Muglia, L: A modified Korpelevich’s method convergent to the minimum-norm solution of a variational inequality. Optimization 63(4), 1-11 (2012)

  17. Matinez-Yanes, C, Xu, HK: Strong convergence of the CQ method for fixed point processes. Nonlinear Anal. 64, 2400-2411 (2006)

  18. Tan, KK, Xu, HK: Approximating fixed points of nonexpansive mapping by the Ishikawa iteration process. J. Math. Anal. Appl. 178, 301-308 (1993)

  19. Suantai, S: Weak and strong convergence criteria of Noor iterations for asymptotically nonexpansive mappings. J. Math. Anal. Appl. 311, 506-517 (2005)

  20. Genel, A, Lindenstrauss, J: An example concerning fixed points. Isr. J. Math. 22, 81-86 (1975)

  21. Bauschke, HH, Combettes, PL: A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 26(2), 248-264 (2001)

  22. He, S, Yang, Z: Realization-based method of successive projection for nonexpansive mappings and nonexpansive semigroups (submitted)

Acknowledgements

The authors express their thanks to the reviewers, whose constructive suggestions led to improvements in the presentation of the results. This work was supported by the National Natural Science Foundation of China (No. 61379102), the Fundamental Research Funds for the Central Universities (No. 3122013D017), and in part by the Foundation of Tianjin Key Lab for Advanced Signal Processing.

Author information

Corresponding author

Correspondence to Qiao-Li Dong.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Dong, QL., Yuan, Hb. Accelerated Mann and CQ algorithms for finding a fixed point of a nonexpansive mapping. Fixed Point Theory Appl 2015, 125 (2015). https://doi.org/10.1186/s13663-015-0374-6
