
A new hybrid algorithm for a nonexpansive mapping

Abstract

In this paper, we introduce a new hybrid algorithm that is not based on a modification of a weak convergence algorithm. A strong convergence theorem for the proposed algorithm is presented. Finally, numerical experiments suggest that the new algorithm can be faster than Nakajo and Takahashi’s algorithm in J. Math. Anal. Appl. 279:372-379, 2003.

1 Introduction

Let H be a real Hilbert space with the inner product \(\langle\cdot ,\cdot\rangle\) and the norm \(\|\cdot\|\) and C be a nonempty closed convex subset of H. Recall that a mapping \(T : C\rightarrow C\) is said to be nonexpansive if \(\|Tx-Ty\|\leq\|x-y\|\) holds for all \(x, y \in C\). We denote by \(\operatorname{Fix}(T)\) the set of fixed points of T, i.e., \(\operatorname{Fix}(T) = \{x \in C : Tx = x\}\).
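As a concrete illustration of these notions (an illustrative sketch; the rotation map and the tolerance are our own choices, not from the paper), a rotation by 90° in \(\mathbb{R}^{2}\) is an isometry, hence nonexpansive, and its fixed point set is the origin:

```python
import math
import random

def T(v):
    """Rotation by 90 degrees in R^2: an isometry, hence nonexpansive; Fix(T) = {(0, 0)}."""
    return (-v[1], v[0])

random.seed(0)
for _ in range(1000):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    y = (random.uniform(-5, 5), random.uniform(-5, 5))
    Tx, Ty = T(x), T(y)
    # nonexpansiveness: ||Tx - Ty|| <= ||x - y|| (with equality for an isometry)
    assert math.hypot(Tx[0] - Ty[0], Tx[1] - Ty[1]) <= math.hypot(x[0] - y[0], x[1] - y[1]) + 1e-12
```

For a general nonexpansive mapping, however, iterating T directly need not converge, which is what motivates the averaged and hybrid schemes below.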

Recently, a great deal of literature on iterative algorithms for approximating fixed points of nonexpansive mappings has been published, since such algorithms have a variety of applications in inverse problems, image recovery, and signal processing; see [1–7]. Mann’s iteration process [8] is often used to approximate a fixed point of such operators, but it converges only weakly (see [9] for a counterexample). However, strong convergence is often much more desirable than weak convergence in many problems arising in infinite-dimensional spaces (see [10] and the references therein). Attempts have therefore been made to modify Mann’s iteration process so that strong convergence is guaranteed. Let \(T:C\rightarrow C\) be a nonexpansive mapping such that \(\operatorname{Fix}(T)\neq \emptyset\). Nakajo and Takahashi [11] first introduced the following hybrid algorithm.

Algorithm 1

$$ \left \{ \begin{array}{@{}l} x_{0}\in C \mbox{ chosen arbitrarily},\\ y_{n}=\alpha_{n}x_{n}+(1-\alpha_{n})Tx_{n}, \\ C_{n}=\{z\in C: \|y_{n}-z\|\leq\|x_{n}-z\|\},\\ Q_{n}=\{z\in C:\langle x_{n}-z,x_{n}-x_{0}\rangle\leq0\},\\ x_{n+1}=P_{C_{n}\cap Q_{n}}x_{0}, \end{array} \right . $$
(1)

where \(P_{K}\) denotes the metric projection onto the set K and \(\{\alpha _{n}\}\subset[0,\sigma]\) for some \(\sigma\in(0,1]\). Thereafter, many hybrid algorithms have been studied extensively since they enjoy strong convergence; see [12–19]. As far as we know, most hybrid algorithms can be viewed as modifications of weak convergence algorithms.

Inspired by the recent work of Malitsky and Semenov [20], we propose the following algorithm.

Algorithm 2

$$ \left \{ \begin{array}{@{}l} x_{0},z_{0}\in C \mbox{ chosen arbitrarily},\\ z_{n+1}=\alpha_{n}z_{n}+(1-\alpha_{n})Tx_{n}, \\ C_{n}=\{z\in C: \|z_{n+1}-z\|^{2}\leq\alpha_{n}\|z_{n}-z\|^{2}+(1-\alpha_{n})\| x_{n}-z\|^{2}\},\\ Q_{n}=\{z\in C:\langle x_{n}-z,x_{n}-x_{0}\rangle\leq0\},\\ x_{n+1}=P_{C_{n}\cap Q_{n}}x_{0}, \end{array} \right . $$
(2)

where \(\{\alpha_{n}\}\subset[0, \sigma]\) for some \(\sigma\in [0, \frac{1}{2})\). It is easy to see that Algorithm 2 is not a modification of any weak convergence algorithm.

The paper is organized as follows. In the next section, we present some lemmas which will be used in the main results. In Section 3, the strong convergence theorem and its proof are given. In the final section, Section 4, some numerical results are provided, which show the advantages of our algorithm.

2 Preliminaries

We will use the following notation:

  1. ⇀ for weak convergence and → for strong convergence.

  2. \(\omega_{w}(x_{n}) = \{x : \exists x_{n_{j}}\rightharpoonup x\}\) denotes the weak ω-limit set of \(\{x_{n}\}\).

We need some facts and tools in a real Hilbert space H which are listed as lemmas below.

Lemma 2.1

The following identity in a real Hilbert space H holds:

$$\|u-v\|^{2}=\|u\|^{2}-\|v\|^{2}-2\langle u-v,v \rangle, \quad u,v\in H. $$
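The identity follows by expanding both sides to \(\|u\|^{2}+\|v\|^{2}-2\langle u,v\rangle\). As a quick numerical sanity check (a sketch with arbitrarily chosen random vectors in \(\mathbb{R}^{3}\); the `dot` helper is illustrative):

```python
import random

random.seed(0)
u = [random.uniform(-1.0, 1.0) for _ in range(3)]
v = [random.uniform(-1.0, 1.0) for _ in range(3)]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

d = [ui - vi for ui, vi in zip(u, v)]          # d = u - v
lhs = dot(d, d)                                 # ||u - v||^2
rhs = dot(u, u) - dot(v, v) - 2.0 * dot(d, v)   # ||u||^2 - ||v||^2 - 2<u - v, v>
```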

Lemma 2.2

(Goebel and Kirk [21])

Let C be a closed convex subset of a real Hilbert space H, and let \(T : C \rightarrow C\) be a nonexpansive mapping such that \(\operatorname{Fix}(T)\neq\emptyset\). If a sequence \(\{x_{n}\}\) in C is such that \(x_{n}\rightharpoonup z\) and \(x_{n} - T x_{n} \rightarrow0\), then \(z = T z\).

Lemma 2.3

Let K be a closed convex subset of a real Hilbert space H, and let \(P_{K}\) be the (metric or nearest point) projection from H onto K (i.e., for \(x\in H\), \(P_{K}x\) is the only point in K such that \(\|x-P_{K}x\|=\inf\{\|x-z\|: z \in K\}\)). Given \(x \in H\) and \(z\in K\). Then \(z=P_{K}x\) if and only if the following relation holds:

$$\langle x-z,y-z\rangle\leq0 \quad\textit{for all } y\in K. $$
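As an illustration (a sketch; `proj_ball`, the projection onto the closed unit ball of \(\mathbb{R}^{2}\), is our hypothetical example of \(P_{K}\)), the variational characterization can be spot-checked against random points of K:

```python
import math
import random

def proj_ball(x):
    """Metric projection of R^2 onto the closed unit ball K: radial shrinking."""
    r = math.hypot(x[0], x[1])
    return x if r <= 1.0 else (x[0] / r, x[1] / r)

x = (2.0, 1.0)
z = proj_ball(x)                        # z = P_K(x); here a boundary point of K
random.seed(1)
for _ in range(1000):
    y = (random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0))
    if y[0] ** 2 + y[1] ** 2 <= 1.0:    # keep only samples y in K
        # the characterization <x - z, y - z> <= 0 of Lemma 2.3
        assert (x[0] - z[0]) * (y[0] - z[0]) + (x[1] - z[1]) * (y[1] - z[1]) <= 1e-12
```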

Lemma 2.4

(Matinez-Yanes and Xu [22])

Let K be a closed convex subset of H. Let \(\{x_{n}\}\) be a sequence in H and \(u\in H\). Let \(q = P_{K}u\). If \(\{x_{n}\}\) is such that \(\omega_{w}\{x_{n}\}\subset K\) and satisfies the condition

$$\|x_{n}-u\|\leq\|u-q\| \quad\textit{for all } n, $$

then \(x_{n}\rightarrow q\).

Lemma 2.5

Let \(\{a_{n}\}\) and \(\{b_{n}\}\) be nonnegative real sequences, \(\alpha\in [0,1)\), \(\beta\in\mathbb{R}^{+}\), and for all \(n\in\mathbb{N}\) the following inequality holds:

$$ a_{n+1}\leq\alpha a_{n}+\beta b_{n}. $$
(3)

If \(\sum_{n=1}^{\infty}b_{n}<+\infty\), then \(\lim_{n\rightarrow\infty}a_{n}=0\).

Proof

Using inequality (3) for \(n=1,2,\ldots,N-1\), we obtain

$$\begin{aligned}& a_{2}\leq\alpha a_{1}+\beta b_{1}, \\& a_{3}\leq\alpha a_{2}+\beta b_{2}, \\& \vdots \\& a_{N}\leq\alpha a_{N-1}+\beta b_{N-1}. \end{aligned}$$

Adding these inequalities and rearranging, we obtain

$$\sum_{n=1}^{N}a_{n}\leq \frac{1}{1-\alpha} \Biggl(a_{1}-\alpha a_{N}+\beta\sum _{n=1}^{N-1}b_{n} \Biggr) \leq \frac{1}{1-\alpha} \Biggl(a_{1}+\beta\sum_{n=1}^{\infty}b_{n} \Biggr). $$

Since N is arbitrary, we see that the series \(\sum_{n=1}^{\infty}a_{n}\) is convergent and hence \(a_{n} \rightarrow0\). □
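The lemma can be illustrated numerically (a sketch with arbitrarily chosen parameters \(\alpha=0.5\), \(\beta=1\) and the summable sequence \(b_{n}=1/n^{2}\); taking equality in (3) is the worst case):

```python
# Worst case of (3): take equality, with illustrative alpha = 0.5, beta = 1.0
# and the summable sequence b_n = 1/n^2 (its sum is pi^2/6).
alpha, beta = 0.5, 1.0
a = 1.0                              # a_1
for n in range(1, 200):
    a = alpha * a + beta / n ** 2    # a_{n+1} = alpha*a_n + beta*b_n
# asymptotically a_n ~ (beta / (1 - alpha)) * b_n, so a_n -> 0
```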

3 Algorithm and its convergence

In this section, we present the strong convergence theorem for Algorithm 2 and its proof.

Theorem 3.1

Let C be a closed convex subset of a Hilbert space H, and let \(T: C \rightarrow C\) be a nonexpansive mapping such that \(\operatorname{Fix}(T)\neq\emptyset \). Assume that \(\{\alpha_{n}\}\subset[0,\sigma]\) holds for some \(\sigma \in [0, \frac{1}{2})\). Then \(\{x_{n}\}\) and \(\{z_{n}\}\) generated by Algorithm 2 converge strongly to \(P_{\operatorname{Fix}(T)}x_{0}\).

Proof

It is easy to see that \(C_{n}\) is convex (see Lemma 1.3 in [22]). Next we show that \(\operatorname{Fix}(T)\subset C_{n}\) for all \(n \geq0\). Indeed, taking \(p\in \operatorname{Fix}(T)\) arbitrarily and using the convexity of \(\|\cdot\|^{2}\) together with the nonexpansiveness of T, we have

$$\begin{aligned} \|z_{n+1}-p\|^{2}&=\bigl\| \alpha_{n}z_{n}+(1- \alpha_{n})Tx_{n}-p\bigr\| ^{2} \\ &\leq\alpha_{n}\|z_{n}-p\|^{2}+(1- \alpha_{n})\|x_{n}-p\|^{2}, \end{aligned}$$

which implies \(\operatorname{Fix}(T)\subset C_{n}\) for all \(n \geq0\). Next we show

$$ \operatorname{Fix}(T)\subset Q_{n} \quad\mbox{for all } n \geq0 $$
(4)

by induction. For \(n=0\), we have \(\operatorname{Fix}(T)\subset C=Q_{0}\). Assume \(\operatorname{Fix}(T)\subset Q_{n}\). Since \(x_{n+1} \) is the projection of \(x_{0}\) onto \(C_{n}\cap Q_{n}\), by Lemma 2.3 we have

$$\langle x_{n+1}-z,x_{n+1}-x_{0}\rangle\leq0 \quad \forall z\in C_{n}\cap Q_{n}. $$

As \(\operatorname{Fix}(T)\subset C_{n}\cap Q_{n}\), by the induction assumption, the last inequality holds, in particular, for all \(z\in \operatorname{Fix}(T)\). This together with the definition of \(Q_{n+1}\) implies that \(\operatorname{Fix}(T) \subset Q_{n+1}\). Hence (4) holds for all \(n\geq0\).

Since \(\operatorname{Fix}(T)\) is a nonempty closed convex subset of C, there exists a unique element \(q\in \operatorname{Fix}(T)\) such that \(q=P_{\operatorname{Fix}(T)}x_{0}\). From \(x_{n} = P_{Q_{n}}x_{0}\) (by the definition of \(Q_{n}\)) and \(\operatorname{Fix}(T)\subset Q_{n}\), we have \(\|x_{n}-x_{0}\|\leq\|p-x_{0}\|\) for all \(p \in \operatorname{Fix}(T)\). Due to \(q\in \operatorname{Fix}(T)\), we get

$$ \|x_{n}-x_{0}\|\leq\|q-x_{0}\|, $$
(5)

which implies that \(\{x_{n}\}\) is bounded.

The fact that \(x_{n+1}\in Q_{n}\) implies that \(\langle x_{n+1}-x_{n},x_{n}-x_{0}\rangle\geq0\). This together with Lemma 2.1 implies

$$ \|x_{n+1}-x_{n}\|^{2} \leq \|x_{n+1}-x_{0}\|^{2}-\|x_{n}-x_{0} \|^{2}. $$
(6)

From (5) and (6) we obtain

$$\begin{aligned} \sum_{n=1}^{N}\|x_{n+1}-x_{n} \|^{2}&\leq\sum_{n=1}^{N} \bigl( \|x_{n+1}-x_{0}\|^{2}-\| x_{n}-x_{0} \|^{2} \bigr) \\ &=\|x_{N+1}-x_{0}\|^{2}-\|x_{1}-x_{0} \|^{2} \\ &\leq\|q-x_{0}\|^{2}-\|x_{1}-x_{0} \|^{2}. \end{aligned}$$

So it follows that \(\sum_{n=1}^{\infty}\|x_{n+1}-x_{n}\|^{2}\) is convergent and thus \(\|x_{n+1}-x_{n}\|\rightarrow0\) as \(n\rightarrow\infty\). The fact that \(x_{n+1}\in C_{n}\) implies that

$$\begin{aligned} \|z_{n+1}-x_{n+1}\|^{2}\leq{}&\alpha_{n} \|z_{n}-x_{n+1}\|^{2}+(1-\alpha_{n})\| x_{n}-x_{n+1}\|^{2} \\ ={}&\alpha_{n}\bigl(\|z_{n}-x_{n} \|^{2}+2\langle z_{n}-x_{n}, x_{n}-x_{n+1} \rangle+\| x_{n}-x_{n+1}\|^{2}\bigr) \\ &{} +(1-\alpha_{n})\|x_{n}-x_{n+1} \|^{2} \\ \leq{}&2\alpha_{n}\bigl(\|z_{n}-x_{n} \|^{2}+\|x_{n}-x_{n+1}\|^{2}\bigr)+(1- \alpha_{n})\| x_{n}-x_{n+1}\|^{2} \\ \leq{}&2\sigma\|z_{n}-x_{n}\|^{2}+2 \|x_{n}-x_{n+1}\|^{2}, \end{aligned}$$

where the second inequality follows from the Cauchy-Schwarz and AM-GM inequalities, and the last one follows from \(\alpha_{n}\leq\sigma\) and \(\alpha_{n}\leq1\). Since \(2\sigma<1\) (as \(\sigma\in[0,\frac{1}{2})\)) and \(\sum_{n=1}^{\infty}\|x_{n+1}-x_{n}\|^{2}<\infty\), Lemma 2.5 (applied with \(\alpha=2\sigma\), \(\beta=2\), \(a_{n}=\|z_{n}-x_{n}\|^{2}\) and \(b_{n}=\|x_{n+1}-x_{n}\|^{2}\)) yields

$$ \|z_{n}-x_{n}\|\rightarrow0. $$
(7)

For this reason, we have

$$ \|z_{n+1}-x_{n}\|\leq\|z_{n+1}-x_{n+1} \|+\|x_{n+1}-x_{n}\|\rightarrow0. $$
(8)

Noting that \((1-\alpha_{n})(Tx_{n}-x_{n})=(z_{n+1}-x_{n})+\alpha_{n}(x_{n}-z_{n})\), we obtain

$$\|Tx_{n}-x_{n}\|\leq\frac{1}{1-\alpha_{n}}\|z_{n+1}-x_{n} \|+\frac{\alpha _{n}}{1-\alpha_{n}}\|x_{n}-z_{n}\|. $$

Since \(\alpha_{n}\leq\sigma\) and by (7) and (8), we get

$$ \|Tx_{n}-x_{n}\|\rightarrow0. $$
(9)

By Lemma 2.2, we obtain that \(\omega_{w}(x_{n}) \subset \operatorname{Fix}(T)\). This, together with (5) and Lemma 2.4, guarantees strong convergence of \(\{x_{n}\}\) to \(P_{\operatorname{Fix}(T)}x_{0}\). From (7), strong convergence of \(\{z_{n}\}\) to \(P_{\operatorname{Fix}(T)}x_{0}\) is obtained. □

Changing the definitions of \(z_{n+1}\) and \(C_{n}\) in Algorithm 2, we get the following algorithm:

$$ \left \{ \begin{array}{@{}l} x_{0},z_{0}\in C \mbox{ chosen arbitrarily},\\ z_{n+1}=\alpha_{n}x_{n}+(1-\alpha_{n})Tz_{n}, \\ C_{n}=\{z\in C: \|z_{n+1}-z\|^{2}\leq\alpha_{n}\|x_{n}-z\|^{2}+(1-\alpha_{n})\| z_{n}-z\|^{2}\},\\ Q_{n}=\{z\in C:\langle x_{n}-z,x_{n}-x_{0}\rangle\leq0\},\\ x_{n+1}=P_{C_{n}\cap Q_{n}}x_{0}, \end{array} \right . $$
(10)

where \(\{\alpha_{n}\}_{n=0}^{\infty}\subset[a,b]\) for some \(a,b\in (\frac{1}{2}, 1)\). Following the proof of Theorem 3.1, we can establish the following theorem.

Theorem 3.2

Let C be a closed convex subset of a Hilbert space H, and let \(T: C \rightarrow C\) be a nonexpansive mapping such that \(\operatorname{Fix}(T)\neq\emptyset\). Assume \(\{\alpha_{n}\}\subset[a,b]\) for some \(a,b\in (\frac{1}{2}, 1)\). Then \(\{x_{n}\}\) and \(\{z_{n}\}\) generated by the iteration process (10) converge strongly to \(P_{\operatorname{Fix}(T)}x_{0}\).

4 Numerical experiments

In this section, we first present the specific expression of \(P_{C_{n}\cap Q_{n}} x_{0}\) in Algorithm 2 and then compare Algorithms 1 and 2 through numerical examples.

He et al. [23] pointed out that it is difficult to realize hybrid algorithms in actual computing programs because the specific expression of \(P_{C_{n}\cap Q_{n}} x_{0}\) cannot, in general, be obtained. For the special case \(C = H\), where \(C_{n}\) and \(Q_{n}\) are two half-spaces, they derived the specific expression of \(P_{C_{n}\cap Q_{n}} x_{0}\) and realized Algorithm 1.

In the case \(C = H\), following the ideas of He et al. [23], we obtain the specific expression of \(P_{C_{n}\cap Q_{n}}x_{0}\) of Algorithm 2 as follows:

$$ \left \{ \begin{array}{@{}l} x_{0}, z_{0}\in H \mbox{ chosen arbitrarily},\\ z_{n+1}=\alpha_{n}z_{n}+(1-\alpha_{n})Tx_{n}, \\ u_{n}=\alpha_{n} z_{n}+(1-\alpha_{n})x_{n}-z_{n+1},\\ v_{n}=(\alpha_{n}\|z_{n}\|^{2}+(1-\alpha_{n})\|x_{n}\|^{2}-\|z_{n+1}\|^{2})/2,\\ C_{n}=\{z\in C: \langle u_{n},z\rangle\leq v_{n}\},\\ Q_{n}=\{z\in C:\langle x_{n}-z,x_{n}-x_{0}\rangle\leq0\},\\ x_{n+1}=p_{n}, \quad\mbox{if } p_{n}\in Q_{n},\\ x_{n+1}=q_{n}, \quad\mbox{if } p_{n}\notin Q_{n}, \end{array} \right . $$
(11)

where

$$\begin{aligned}& p_{n}=x_{0}-\frac{\langle u_{n},x_{0}\rangle-v_{n}}{\|u_{n}\|^{2}}u_{n}, \\& q_{n}= \biggl(1-\frac{\langle x_{0}-x_{n},x_{n}-p_{n}\rangle}{\langle x_{0}-x_{n},w_{n}-p_{n}\rangle} \biggr)p_{n} + \frac{\langle x_{0}-x_{n},x_{n}-p_{n}\rangle}{\langle x_{0}-x_{n},w_{n}-p_{n}\rangle }w_{n}, \\& w_{n}=x_{n}-\frac{\langle u_{n},x_{n}\rangle-v_{n}}{\|u_{n}\|^{2}}u_{n}. \end{aligned}$$

Let \(\mathbb{R}^{2}\) be the two-dimensional Euclidean space with the usual inner product \(\langle v^{(1)},v^{(2)}\rangle=v^{(1)}_{1}v^{(2)}_{1}+v^{(1)}_{2}v^{(2)}_{2}\) (\(\forall v^{(1)}=(v^{(1)}_{1},v^{(1)}_{2})^{T}, v^{(2)}=(v^{(2)}_{1},v^{(2)}_{2})^{T}\in \mathbb{R}^{2}\)) and the norm \(\|v\|=\sqrt {v_{1}^{2}+v_{2}^{2}}\) (\(v=(v_{1},v_{2})^{T}\in \mathbb{R}^{2}\)). He et al. [23] defined the mapping

$$ T:v=(v_{1},v_{2})^{T}\mapsto \biggl(\sin\frac{v_{1}+v_{2}}{\sqrt{2}},\cos\frac {v_{1}+v_{2}}{\sqrt{2}} \biggr)^{T}, $$
(12)

and showed that it is nonexpansive. It is easy to see that T has a fixed point in the closed unit disk, although the fixed point is difficult to compute exactly.

Next, we compare Algorithms 1 and 2 with the nonexpansive mapping T defined in (12). In the numerical results listed in Table 1, ‘Iter.’ and ‘Sec.’ denote the number of iterations and the CPU time in seconds, respectively. We took \(E(x)<\varepsilon\) as the stopping criterion with \(\varepsilon=10^{-4}\). We set \(x_{0}=z_{0}\) in Algorithm 2 and took \(\alpha_{n}=0.1\) for both Algorithms 1 and 2. The algorithms were coded in Matlab 7.1 and run on a personal computer.

Table 1 Comparison of Algorithms 1 and 2 with different initial values

Table 1 illustrates that, in our examples, Algorithm 2 has competitive performance. We caution, however, that this study is a very preliminary one.

References

  1. Xu, HK: A variable Krasnosel’skiĭ-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 22, 2021-2034 (2006)


  2. Combettes, PL: On the numerical robustness of the parallel projection method in signal synthesis. IEEE Signal Process. Lett. 8(2), 45-47 (2001)


  3. Podilchuk, CI, Mammone, RJ: Image recovery by convex projections using a least-squares constraint. J. Opt. Soc. Am. 7(3), 517-521 (1990)


  4. Youla, D: Mathematical theory of image restoration by the method of convex projection. In: Stark, H (ed.) Image Recovery Theory and Applications, pp. 29-77. Academic Press, Orlando (1987)


  5. Halpern, B: Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 73, 957-961 (1967)


  6. Moudafi, A: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 241, 46-55 (2000)


  7. Xu, HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279-291 (2004)


  8. Mann, WR: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506-510 (1953)


  9. Genel, A, Lindenstrass, J: An example concerning fixed points. Isr. J. Math. 22, 81-86 (1975)


  10. Bauschke, HH, Combettes, PL: A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 26(2), 248-264 (2001)


  11. Nakajo, K, Takahashi, W: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 279, 372-379 (2003)


  12. Kim, TH, Xu, HK: Strong convergence of modified Mann iterations. Nonlinear Anal. 61, 51-60 (2005)


  13. Marino, G, Xu, HK: Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 329, 336-346 (2007)


  14. Yao, Y, Liou, YC, Marino, G: A hybrid algorithm for pseudo-contractive mappings. Nonlinear Anal. 71, 4997-5002 (2009)


  15. Zeng, LC, Ansari, QH, Al-Homidan, S: Hybrid proximal-type algorithms for generalized equilibrium problems, maximal monotone operators and relatively nonexpansive mappings. Fixed Point Theory Appl. 2011, Article ID 973028 (2011)


  16. Ceng, LC, Ansari, QH, Yao, JC: Hybrid proximal-type and hybrid shrinking projection algorithms for equilibrium problems, maximal monotone operators and relatively nonexpansive mappings. Numer. Funct. Anal. Optim. 31(7), 763-797 (2010)


  17. Ceng, LC, Guu, SM, Yao, JC: Hybrid viscosity CQ method for finding a common solution of a variational inequality, a general system of variational inequalities, and a fixed point problem. Fixed Point Theory Appl. 2013, Article ID 313 (2013)


  18. Zhou, H, Su, Y: Strong convergence theorems for a family of quasi-asymptotic pseudo-contractions in Hilbert spaces. Nonlinear Anal. 70, 4047-4052 (2009)


  19. Nilsrakoo, W, Saejung, S: Weak and strong convergence theorems for countable Lipschitzian mappings and its applications. Nonlinear Anal. 69, 2695-2708 (2008)


  20. Malitsky, YV, Semenov, VV: A hybrid method without extrapolation step for solving variational inequality problems. J. Glob. Optim. 61(1), 193-202 (2015)


  21. Goebel, K, Kirk, WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics, vol. 28. Cambridge University Press, Cambridge (1990)


  22. Matinez-Yanes, C, Xu, HK: Strong convergence of the CQ method for fixed point processes. Nonlinear Anal. 64, 2400-2411 (2006)


  23. He, S, Yang, C, Duan, P: Realization of the hybrid method for Mann iterations. Appl. Math. Comput. 217, 4239-4247 (2010)



Acknowledgements

Supported by the National Natural Science Foundation of China (No. 11201476) and Fundamental Research Funds for the Central Universities (No. 3122013D017), in part by the Foundation of Tianjin Key Lab for Advanced Signal Processing.

Author information


Correspondence to Qiao-Li Dong.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.


About this article


Cite this article

Dong, QL., Lu, YY. A new hybrid algorithm for a nonexpansive mapping. Fixed Point Theory Appl 2015, 37 (2015). https://doi.org/10.1186/s13663-015-0285-6

