 Research
 Open Access
A new hybrid algorithm and its numerical realization for two nonexpansive mappings
 QiaoLi Dong^{1, 2},
 Songnian He^{1, 2} and
 Yeol Je Cho^{3, 4}
https://doi.org/10.1186/s13663-015-0399-x
© Dong et al. 2015
 Received: 30 December 2014
 Accepted: 7 August 2015
 Published: 22 August 2015
Abstract
In this paper, we first introduce a new hybrid projection algorithm and establish its strong convergence. Next, we compare the computational cost of several hybrid algorithms and conclude that the proposed algorithm has an advantage. Finally, numerical experiments validate the efficiency and advantages of the new algorithm.
Keywords
 nonexpansive mapping
 hybrid algorithm
 cyclic algorithm
 parallel algorithm
 strong convergence
MSC
 90C47
 49J35
1 Introduction
The construction of common fixed points for a finite family of nonlinear mappings is of practical importance. In particular, iteration algorithms for finding common fixed points of a finite family of nonexpansive mappings have received extensive investigation (see [1–3]) since these algorithms have a variety of applications in inverse problems, image recovery, and signal processing (see [4–7]).
Mann’s iteration algorithm [8] is often used to find a fixed point of a nonexpansive mapping, but it has only weak convergence (see [9] for an example). However, strong convergence is often much more desirable than weak convergence in many problems that arise in infinite dimensional spaces (see [10] and the references therein). So, attempts have been made to modify Mann’s iteration algorithm so that strong convergence is guaranteed. Let \(T:C\rightarrow C\) be a nonexpansive mapping. Then \(I-T\) is a maximal monotone operator [11]. Inspired by Solodov and Svaiter’s hybrid method for finding a zero of a maximal monotone operator [12], Nakajo and Takahashi [13] first introduced a hybrid algorithm for a nonexpansive mapping. Thereafter, hybrid algorithms have been studied extensively since they enjoy strong convergence (see [14–18]).
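As a point of reference, Mann’s scheme \(x_{n+1}=\alpha_{n}x_{n}+(1-\alpha_{n})Tx_{n}\) [8] can be sketched in a few lines. The constant step `alpha` and the example mapping (metric projection onto the unit ball) are illustrative choices, not taken from the paper:

```python
import numpy as np

def mann_iteration(T, x0, alpha=0.5, tol=1e-6, max_iter=10_000):
    """Mann's scheme x_{n+1} = a_n x_n + (1 - a_n) T x_n with constant a_n = alpha."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_next = alpha * x + (1 - alpha) * T(x)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Example: T is the metric projection onto the closed unit ball (nonexpansive).
T = lambda x: x if np.linalg.norm(x) <= 1 else x / np.linalg.norm(x)
x = mann_iteration(T, [3.0, 4.0])   # converges to the boundary point (0.6, 0.8)
```

In this example the iterates shrink radially toward the fixed point set (the unit ball), which is why the limit is the radial projection of the initial point.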
In this paper, motivated by Eckstein and Svaiter’s splitting methods for approximating a zero of the sum of two maximal monotone operators [19], we introduce a new hybrid algorithm. Let \(T,S:C\rightarrow C\) be two nonexpansive mappings such that \(\operatorname {Fix}(T)\cap \operatorname {Fix}(S)\neq\emptyset\). We consider the following algorithm.
Algorithm 1
2 Relation to the previous work
In [20], Takahashi et al. modified the cyclic algorithm (2) and introduced another hybrid algorithm.
Algorithm 2
The per-iteration cost of Algorithm 1 is one metric projection and one evaluation each of S and T, while the per-iteration cost of Algorithm 2 is one metric projection and one evaluation of either S or T. In general, the computational cost of a metric projection is larger than that of an operator evaluation.
By modifying the parallel algorithm (see [4]), it is easy to obtain the following algorithm (see [21] for details).
Algorithm 3
In terms of computational complexity, Algorithm 3 is similar to Algorithm 1. However, it is generally recognized that a cyclic algorithm (like the Gauss-Seidel iteration) converges faster than a parallel algorithm (like the Jacobi iteration).
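To make the Gauss-Seidel/Jacobi analogy concrete, the toy sketch below contrasts a cyclic step (operators applied in sequence, each acting on the freshest point) with a parallel step (a convex combination of both operators applied to the same point). The maps T and S here are hypothetical examples, not the paper's Algorithms 1-3:

```python
import numpy as np

def cyclic_step(x, T, S):
    # Gauss-Seidel-like: feed the freshly updated point into the next operator.
    return S(T(x))

def parallel_step(x, T, S, sigma=0.5):
    # Jacobi-like: both operators act on the same point; average the results.
    return sigma * T(x) + (1 - sigma) * S(x)

# Toy nonexpansive maps: projections onto the two coordinate axes in R^2.
T = lambda x: np.array([x[0], 0.0])   # project onto the x-axis
S = lambda x: np.array([0.0, x[1]])   # project onto the y-axis

x = np.array([1.0, 1.0])
for _ in range(50):
    x = cyclic_step(x, T, S)
# The only common fixed point of T and S is the origin.
```

For these particular projections the cyclic step reaches the common fixed point immediately, while the parallel step only halves the distance per iteration; this mirrors the usual Gauss-Seidel versus Jacobi behavior.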
3 Preliminaries

⇀ for weak convergence and → for strong convergence;

\(\omega_{w}(x_{n}) = \{x : \exists x_{n_{j}}\rightharpoonup x\}\) denotes the weak ω-limit set of \(\{x_{n}\}\).
We need some facts and tools in a real Hilbert space H which are listed as lemmas below.
Lemma 3.1
Lemma 3.2
(Goebel and Kirk [22])
Let C be a nonempty closed convex subset of a real Hilbert space H and \(T : C \rightarrow C\) be a nonexpansive mapping such that \(\operatorname {Fix}(T)\neq\emptyset\). If a sequence \(\{x_{n}\}\) in C is such that \(x_{n}\rightharpoonup z\) and \(x_{n} - T x_{n} \rightarrow 0\), then \(z = T z\).
Lemma 3.3
Lemma 3.4
(Martinez-Yanes and Xu [23])
Lemma 3.5
Proof
4 Main results
In this section, we first present a strong convergence theorem and its proof for Algorithm 1. Then we extend it to a finite family of nonexpansive mappings.
Theorem 4.1
Let C be a nonempty closed convex subset of a Hilbert space H and \(T,S: C \rightarrow C\) be two nonexpansive mappings such that \(\operatorname {Fix}(T)\cap \operatorname {Fix}(S)\neq\emptyset\). Assume that \(\{\alpha_{n}\}\), \(\{\beta _{n}\}\), and \(\{\gamma_{n}\}\) are sequences in \([0, 1]\) such that \(\alpha_{n},\beta_{n}\leq 1-\delta\) for some \(\delta \in (0, 1]\). Assume \(\sigma\in(0,1)\). Then the sequence \(\{x_{n}\}\) generated by Algorithm 1 converges in norm to \(P_{\operatorname {Fix}(T)\cap \operatorname {Fix}(S)}x_{0}\).
Proof
It is easily observed that the algorithm (11) with \(\gamma_{n}=0\) is different from the algorithm (12) in the definitions of the sets \(C_{n}\) and the conditions on \(\alpha_{n}\).
From Theorem 4.1, we get directly the following result.
Corollary 4.1
Let C be a nonempty closed convex subset of a Hilbert space H and \(T: C \rightarrow C\) be a nonexpansive mapping such that \(\operatorname {Fix}(T)\neq \emptyset\). Assume that \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), and \(\{\gamma_{n}\} \) are sequences in \([0, 1]\) such that \(\alpha_{n}, \beta_{n}\leq 1-\delta\) for some \(\delta \in (0, 1]\). Assume \(\sigma\in(0,1)\). Then the sequence \(\{x_{n}\}\) generated by the algorithm (11) converges in norm to \(P_{\operatorname {Fix}(T)}x_{0}\).
Letting \(\gamma_{n}=1\) in Algorithm 1, from Theorem 4.1, we obtain the following result.
Corollary 4.2
Extending Corollary 4.2 to a finite family of nonexpansive mappings, we easily obtain the strong convergence of the algorithm (14); the proof is similar to that of Theorem 4.1 and is omitted here.
Corollary 4.3
Let C be a nonempty closed convex subset of a Hilbert space H and \(\{T_{k}: C \rightarrow C, k=1,2,\ldots,N\}\) be a finite family of nonexpansive mappings such that \(\bigcap_{k=1}^{N}\operatorname {Fix}(T_{k})\neq \emptyset\). Assume that \(\{\alpha_{n_{k}}\}\) is a sequence in \([0, 1]\) such that \(\alpha_{n_{k}}\leq 1-\delta\) for each \(k=1,2,\ldots,N\) for some \(\delta\in (0, 1]\). Assume that \(\sigma_{k}\in(0,1)\) for each \(k=1,2,\ldots,N\), and \(\sum_{k=1}^{N}\sigma_{k}=1\). Then the sequence \(\{x_{n}\}\) generated by the algorithm (14) converges in norm to \(P_{\bigcap_{k=1}^{N}\operatorname {Fix}(T_{k})}x_{0}\).
5 Numerical experiments
Many authors have studied hybrid algorithms and analyzed their strong convergence; however, as far as we know, results on the numerical realization of these algorithms are very limited (see, for example, [25, 26]).
Recently, He et al. [27] pointed out that it is difficult to realize a hybrid algorithm in an actual computer program because the specific expression of \(P_{C_{n}\cap Q_{n}} x_{0}\) cannot be obtained in general. For the special case \(C = H\), where \(C_{n}\) and \(Q_{n}\) are two half-spaces, they obtained the specific expression of \(P_{C_{n}\cap Q_{n}} x_{0}\) and thus easily realized the hybrid algorithm proposed by Nakajo and Takahashi [13].
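The tractability of this special case rests on the well-known closed form for the metric projection onto a single half-space \(\{z : \langle a,z\rangle\leq b\}\), namely \(P(x)=x-\max\{0,\langle a,x\rangle-b\}\,a/\Vert a\Vert^{2}\). A minimal sketch of this building block follows; the combination rule for \(C_{n}\cap Q_{n}\) from [27] is not reproduced here:

```python
import numpy as np

def project_halfspace(x, a, b):
    """Metric projection of x onto the half-space {z : <a, z> <= b}."""
    a = np.asarray(a, dtype=float)
    x = np.asarray(x, dtype=float)
    violation = np.dot(a, x) - b
    if violation <= 0:
        return x                      # x already lies in the half-space
    return x - (violation / np.dot(a, a)) * a

# Example: project (2, 0) onto {z : z_1 <= 1}.
p = project_halfspace([2.0, 0.0], [1.0, 0.0], 1.0)   # -> (1, 0)
```

Since each projection is a cheap closed-form step, the overall iteration cost is dominated by the operator evaluations, consistent with the complexity discussion in Section 2.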
Since we do not know the exact projection of \(x_{0}\) onto the common fixed point set of S and T, we measure the relative rate of convergence of the algorithms by \(E(x)=\frac{\Vert x-Tx\Vert +\Vert x-Sx\Vert }{\Vert x\Vert }\).
In the numerical results listed in the following tables, Iter. and Sec. denote the number of iterations and the CPU time in seconds, respectively. We took \(E(x)<\varepsilon\) as the stopping criterion and \(\varepsilon=10^{-4}\) unless specified otherwise. We chose different initial points \(x_{0}\). The algorithms were coded in Matlab 7.1 and run on a personal computer.
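The stopping test above can be sketched directly from the definition of \(E(x)\). The mappings T and S below are hypothetical stand-ins for illustration only; note also the guard for \(x=0\), where \(E\) is undefined:

```python
import numpy as np

def relative_error(x, T, S):
    """E(x) = (||x - Tx|| + ||x - Sx||) / ||x||, the stopping quantity from the text."""
    nx = np.linalg.norm(x)
    if nx == 0:
        return float("inf")           # E is undefined at the origin
    return (np.linalg.norm(x - T(x)) + np.linalg.norm(x - S(x))) / nx

# Hypothetical nonexpansive maps with common fixed point 0, for illustration only.
T = lambda x: 0.5 * x                 # a contraction, hence nonexpansive
S = lambda x: np.zeros_like(x)        # the constant map 0

x = np.array([3.0, 4.0])
eps = 1e-4
converged = relative_error(x, T, S) < eps   # False at this x
```

One iteration loop would evaluate `relative_error` after each update and stop as soon as it drops below `eps`, matching the criterion \(E(x)<\varepsilon\) used in the tables.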
We first investigated the choice of the parameters of Algorithm 1.
In Algorithm 1, there are four parameters: the scalar σ and the three sequences \(\alpha_{n}\), \(\beta_{n}\), and \(\gamma_{n}\). Thus we need to set them before running the algorithm.
Algorithm 1 with \(\pmb{\alpha_{n}=0.1}\) , \(\pmb{\beta _{n}=0.1}\) , \(\pmb{\gamma_{n}=1.0}\)
\(\boldsymbol{x_{0}}\)  σ  0.1  0.2  0.3  0.4  0.5  0.6  0.7  0.8  0.9 

(0,0)  Iter.  372  472  404  538  539  417  461  452  523 
(2,7)  Iter.  167  760  249  897  385  321  615  355  1,182 
(−5,2)  Iter.  915  1,424  2,047  1,597  972  1,077  1,672  2,309  2,101 
(−3,−4)  Iter.  1,032  669  1,930  1,166  934  766  753  1,181  530 
Algorithm 1 with \(\pmb{\alpha_{n}=0.1}\) , \(\pmb{\beta_{n}=0.1}\) , \(\pmb{\sigma=0.1}\)
\(\boldsymbol{x_{0}}\)  \(\boldsymbol{\gamma_{n}}\)  0.0  0.1  0.2  0.3  0.4  0.5  0.6  0.7  0.8  0.9  1.0 

(0,0)  Iter.  539  626  393  441  408  303  576  759  469  422  372 
(2,7)  Iter.  205  825  242  488  1,278  214  301  362  174  750  167 
(−5,2)  Iter.  1,237  1,373  1,853  1,627  2,275  2,173  1,609  2,036  1,316  2,633  915 
(−3,−4)  Iter.  919  780  1,647  958  875  957  1,411  663  1,146  573  1,032 
Algorithm 1 with \(\pmb{\beta_{n}=0.1}\) , \(\pmb{\gamma_{n}=1.0}\) , \(\pmb{\sigma=0.1}\)
\(\boldsymbol{x_{0}}\)  \(\boldsymbol{\alpha_{n}}\)  0.1  0.2  0.3  0.4  0.5  0.6  0.7  0.8  0.9 

(0,0)  Iter.  372  524  853  1,080  1,398  –  –  –  – 
(2,7)  Iter.  167  482  625  1,479  2,186  2,046  3,115  4,111  14,440 
(−5,2)  Iter.  915  1,418  2,950  3,998  6,098  6,827  1,330  20,114  68,421 
(−3,−4)  Iter.  1,032  2,306  2,262  1,975  4,341  2,882  6,321  11,761  38,330 
6 Conclusions
Hybrid algorithms for nonexpansive mappings have been studied extensively over the past decade. In this paper, we introduced a new hybrid algorithm and, for the first time in the literature, compared different hybrid algorithms computationally. Numerical examples were provided, which showed the advantages of the new algorithm.
Declarations
Acknowledgements
The authors would like to express their thanks to Professor Peichao Duan for her help with the programs. The first and second authors were supported by the National Natural Science Foundation of China (No. 11201476) and the Fundamental Research Funds for the Central Universities (No. 3122013D017), in part by the Foundation of Tianjin Key Lab for Advanced Signal Processing, and the third author was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (2014R1A2A2A01002100).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
 Chang, SS: Viscosity approximation methods for a finite family of nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 323, 1402-1416 (2006)
 Atsushiba, S, Takahashi, W: Strong convergence theorems for a finite family of nonexpansive mappings and applications. Indian J. Math. 41(3), 435-453 (1999). BN Prasad birth centenary commemoration volume
 Yao, Y: A general iterative method for a finite family of nonexpansive mappings. Nonlinear Anal. 66, 2676-2687 (2007)
 Xu, HK: A variable Krasnosel'skiĭ-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 22, 2021-2034 (2006)
 Combettes, PL: On the numerical robustness of the parallel projection method in signal synthesis. IEEE Signal Process. Lett. 8, 45-47 (2001)
 Podilchuk, CI, Mammone, RJ: Image recovery by convex projections using a least-squares constraint. J. Opt. Soc. Am. 7, 517-521 (1990)
 Youla, D: Mathematical theory of image restoration by the method of convex projection. In: Stark, H (ed.) Image Recovery: Theory and Applications, pp. 29-77. Academic Press, Orlando (1987)
 Mann, WR: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506-510 (1953)
 Genel, A, Lindenstrauss, J: An example concerning fixed points. Isr. J. Math. 22, 81-86 (1975)
 Bauschke, HH: The approximation of fixed points of compositions of nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 202, 150-159 (1996)
 Wang, F: Iterative methods for nonlinear optimization problems in Hilbert spaces (in Chinese). Ph.D. thesis, East China University of Science and Technology (2011)
 Solodov, MV, Svaiter, BF: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program., Ser. A 87, 189-202 (2000)
 Nakajo, K, Takahashi, W: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 279, 372-379 (2003)
 Kim, TH, Xu, HK: Strong convergence of modified Mann iterations. Nonlinear Anal. 61, 51-60 (2005)
 Marino, G, Xu, HK: Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 329, 336-346 (2007)
 Wei, L, Cho, YJ, Zhou, HY: A strong convergence theorem for common fixed points of two relatively nonexpansive mappings and its applications. J. Appl. Math. Comput. 29, 95-103 (2009)
 Zhou, H, Su, Y: Strong convergence theorems for a family of quasi-asymptotic pseudo-contractions in Hilbert spaces. Nonlinear Anal. 70, 4047-4052 (2009)
 Nilsrakoo, W, Saejung, S: Weak and strong convergence theorems for countable Lipschitzian mappings and its applications. Nonlinear Anal. 69, 2695-2708 (2008)
 Eckstein, J, Svaiter, BF: A family of projective splitting methods for the sum of two maximal monotone operators. Math. Program., Ser. B 111, 173-199 (2008)
 Takahashi, W, Takeuchi, Y, Kubota, R: Strong convergence theorems by hybrid methods for families of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 341, 276-286 (2008)
 Plubtieng, S, Ungchittrakool, K: Strong convergence theorems for a common fixed point of two relatively nonexpansive mappings in a Banach space. J. Approx. Theory 149, 103-115 (2007)
 Goebel, K, Kirk, WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics, vol. 28. Cambridge University Press, Cambridge (1990)
 Martinez-Yanes, C, Xu, HK: Strong convergence of the CQ method for fixed point processes. Nonlinear Anal. 64, 2400-2411 (2006)
 Ishikawa, S: Fixed points by a new iteration method. Proc. Am. Math. Soc. 44, 147-150 (1974)
 Dong, QL, Lu, YY: A new hybrid algorithm for a nonexpansive mapping. Fixed Point Theory Appl. 2015, 37 (2015)
 He, S, Yang, Z: Realization-based method of successive projection for nonexpansive mappings and nonexpansive semigroups (submitted)
 He, S, Yang, C, Duan, P: Realization of the hybrid method for Mann iterations. Appl. Math. Comput. 217, 4239-4247 (2010)