Accelerated Mann and CQ algorithms for finding a fixed point of a nonexpansive mapping
- Qiao-Li Dong^{1, 2} and
- Han-bo Yuan^{1}
https://doi.org/10.1186/s13663-015-0374-6
© Dong and Yuan 2015
Received: 26 February 2015
Accepted: 30 June 2015
Published: 22 July 2015
Abstract
The purpose of this paper is to present accelerations of the Mann and CQ algorithms. We first apply the Picard algorithm to the smooth convex minimization problem and point out that the Picard algorithm is the steepest descent method for solving the minimization problem. Next, we provide the accelerated Picard algorithm by using the ideas of conjugate gradient methods, which accelerate the steepest descent method. Then, based on the accelerated Picard algorithm, we present accelerations of the Mann and CQ algorithms. Under certain assumptions, we show that the new algorithms converge to a fixed point of a nonexpansive mapping. Finally, we demonstrate the efficiency of the accelerated Mann algorithm by numerically comparing it with the Mann algorithm. A numerical example is also provided to illustrate that the acceleration of the CQ algorithm is ineffective.
Keywords
1 Introduction
In this paper, we consider the following fixed point problem.
Problem 1.1
The fixed point problems for nonexpansive mappings [1–4] capture various applications in diversified areas, such as convex feasibility problems, convex optimization problems, problems of finding the zeros of monotone operators, and monotone variational inequalities (see [1, 5] and the references therein). The Picard algorithm [6], the Mann algorithm [7, 8], and the CQ method [9] are useful fixed point algorithms for solving these fixed point problems. Meanwhile, to guarantee that practical systems and networks (see, e.g., [10–13]) are stable and reliable, the fixed point has to be found quickly. Recently, Sakurai and Iiduka [14] accelerated the Halpern algorithm and obtained a fast algorithm with strong convergence. Inspired by their work, we focus on the Mann and the CQ algorithms and present new algorithms to accelerate the approximation of a fixed point of a nonexpansive mapping.
We first apply the Picard algorithm to the smooth convex minimization problem and illustrate that the Picard algorithm is the steepest descent method [15] for solving the minimization problem. Since conjugate gradient methods [15] have been widely seen as an efficient accelerated version of most gradient methods, we introduce an accelerated Picard algorithm by combining the conjugate gradient methods and the Picard algorithm. Finally, based on the accelerated Picard algorithm, we present accelerations of the Mann and CQ algorithms.
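To make this connection concrete: for a smooth convex \(f\) with an \(L\)-Lipschitz gradient, the mapping \(T(x):=x-\lambda\nabla f(x)\) is nonexpansive for \(\lambda\in(0,2/L]\), and the Picard iteration \(x_{n+1}=T(x_{n})\) is then exactly steepest descent with step size λ. The following minimal sketch illustrates this on a quadratic; the matrix `A`, vector `b`, and step size are illustrative choices, not taken from the paper.

```python
# Picard iteration x_{k+1} = T(x_k) with T(x) = x - lam * grad_f(x)
# is exactly steepest descent on f.  Illustrative quadratic:
#   f(x) = 0.5 * x^T A x - b^T x,  so  grad_f(x) = A x - b.
# T is nonexpansive when 0 < lam <= 2 / L, L = largest eigenvalue of A.

def grad_f(x, A, b):
    n = len(x)
    return [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]

def picard(x0, A, b, lam, iters):
    x = list(x0)
    for _ in range(iters):
        g = grad_f(x, A, b)
        x = [xi - lam * gi for xi, gi in zip(x, g)]  # x <- T(x)
    return x

A = [[2.0, 0.0], [0.0, 1.0]]   # eigenvalues 2 and 1, so L = 2
b = [2.0, 1.0]                 # unique minimizer / fixed point: x* = (1, 1)
x = picard([0.0, 0.0], A, b, lam=0.5, iters=100)
```

The fixed points of T are precisely the zeros of ∇f, i.e., the minimizers of f, which is why accelerating the Picard iteration amounts to accelerating gradient descent.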
In this paper, we propose two accelerated algorithms for finding a fixed point of a nonexpansive mapping and prove the convergence of the algorithms. Finally, numerical examples are presented to demonstrate the effectiveness and fast convergence of the accelerated Mann algorithm, and the ineffectiveness of the accelerated CQ algorithm.
2 Mathematical preliminaries
2.1 Picard algorithm and our algorithm
2.2 Some lemmas
- (1)
\(x_{n}\rightharpoonup x\) means that \(\{x_{n}\}\) converges weakly to x and \(x_{n}\rightarrow x\) means that \(\{x_{n}\}\) converges strongly to x.
- (2)
\(\omega_{w}(x_{n}):=\{x:\exists x_{n_{j}}\rightharpoonup x\}\) denotes the weak ω-limit set of \(\{x_{n}\}\).
Lemma 2.1
- (i)
\(\|x-y\|^{2}=\|x\|^{2}-\|y\|^{2}-2\langle x-y, y\rangle\), \(\forall x, y \in H\),
- (ii)
\(\|tx+(1-t)y\|^{2}=t\|x\|^{2}+(1-t)\|y\|^{2}-t(1-t)\|x-y\|^{2}\), \(t\in[0, 1]\), \(\forall x, y \in H\).
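Both identities can be checked directly by expanding inner products; the following self-contained Python snippet verifies them numerically on sample vectors in \(\mathbb{R}^{3}\) (the vectors and the value of t are arbitrary illustrative choices):

```python
# Numerical check of the two Hilbert-space identities from Lemma 2.1
# on sample vectors in R^3 (any real inner-product space works).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def nrm2(u):          # squared norm ||u||^2
    return dot(u, u)

x, y, t = [1.0, -2.0, 3.0], [0.5, 4.0, -1.0], 0.3

# (i)  ||x - y||^2 = ||x||^2 - ||y||^2 - 2 <x - y, y>
xmy = [a - b for a, b in zip(x, y)]
lhs_i = nrm2(xmy)
rhs_i = nrm2(x) - nrm2(y) - 2 * dot(xmy, y)

# (ii) ||t x + (1-t) y||^2 = t ||x||^2 + (1-t) ||y||^2 - t (1-t) ||x - y||^2
z = [t * a + (1 - t) * b for a, b in zip(x, y)]
lhs_ii = nrm2(z)
rhs_ii = t * nrm2(x) + (1 - t) * nrm2(y) - t * (1 - t) * nrm2(xmy)
```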
Lemma 2.2
Lemma 2.3
(See [17])
Lemma 2.4
(See [2])
Let C be a closed convex subset of a real Hilbert space H, and let \(T : C \rightarrow C\) be a nonexpansive mapping such that \(\operatorname{Fix}(T)\neq\emptyset\). If a sequence \(\{x_{n}\}\) in C is such that \(x_{n}\rightharpoonup z\) and \(x_{n} - T x_{n} \rightarrow0\), then \(z = T z\).
Lemma 2.5
(See [18])
3 The accelerated Mann algorithm
In this section, we present the accelerated Mann algorithm and give its convergence.
Algorithm 3.1
We can check that Algorithm 3.1 coincides with the Mann algorithm (6) when \(\beta_{n}\equiv0\) and \(\mu:=1\).
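The precise statement of Algorithm 3.1 is not reproduced above. Purely as an illustration, the sketch below implements one conjugate-gradient-style iteration that is consistent with the remark that setting \(\beta_{n}\equiv0\) and \(\mu:=1\) recovers the Mann algorithm: \(d_{0}=T(x_{0})-x_{0}\), \(x_{n+1}=x_{n}+\mu\alpha_{n}d_{n}\), \(d_{n+1}=T(x_{n+1})-x_{n+1}+\beta_{n+1}d_{n}\). The mapping T, the update order, and the parameter choices (\(\alpha_{n}=1/2\), which satisfies (C1); \(\beta_{n}=2^{-(n+1)}\), which satisfies (C2)) are all illustrative assumptions, not necessarily the paper's exact algorithm.

```python
# HYPOTHETICAL sketch of a conjugate-gradient-style accelerated Mann
# iteration; the exact update in the paper may differ.
#     d_0     = T(x_0) - x_0
#     x_{n+1} = x_n + mu * alpha_n * d_n
#     d_{n+1} = T(x_{n+1}) - x_{n+1} + beta_{n+1} * d_n

def proj(x, lo, hi):            # projection onto the interval [lo, hi]
    return min(max(x, lo), hi)

def T(x):                       # average of two projections: nonexpansive,
    return 0.5 * (proj(x, 0, 2) + proj(x, 1, 3))   # Fix(T) = [1, 2]

def accelerated_mann(x0, mu=1.0, iters=60):
    x = x0
    d = T(x) - x
    for n in range(iters):
        alpha = 0.5             # sum mu*alpha_n*(1 - mu*alpha_n) = inf  (C1)
        beta = 0.5 ** (n + 1)   # sum beta_n < inf                      (C2)
        x = x + mu * alpha * d
        d = T(x) - x + beta * d
    return x

x = accelerated_mann(10.0)      # converges to a point of Fix(T) = [1, 2]
```

Because the β_n are summable, the correction term β_n d_{n-1} vanishes asymptotically, so the iteration inherits the limiting behavior of the Mann algorithm.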
Throughout this section, we make the following assumption.
Assumption 3.1
- (C1)
\(\sum_{n=0}^{\infty}\mu\alpha_{n}(1-\mu\alpha_{n})=\infty\),
- (C2)
\(\sum_{n=0}^{\infty}\beta_{n}<\infty\),
- (C3)
\(\{T(x_{n})-x_{n}\}_{n=0}^{\infty}\) is bounded.
Before carrying out the convergence analysis of Algorithm 3.1, we first establish the following two lemmas.
Lemma 3.1
Suppose that \(T: H\rightarrow H\) is nonexpansive with \(\operatorname{Fix}(T)\neq \emptyset\) and that Assumption 3.1 holds. Then \(\{d_{n}\}_{n=0}^{\infty}\) and \(\{\|x_{n}-p\|\}_{n=0}^{\infty}\) are bounded for any \(p\in \operatorname{Fix}(T)\). Furthermore, \(\lim_{n\rightarrow\infty} \|x_{n}-p\|\) exists.
Proof
In addition, using Lemma 2.5, (C2), and (11), we obtain that \(\lim_{n\rightarrow\infty} \|x_{n}-p\|\) exists. □
Lemma 3.2
Proof
Theorem 3.1
Suppose that \(T: H\rightarrow H\) is nonexpansive with \(\operatorname{Fix}(T)\neq \emptyset\) and that Assumption 3.1 holds. Then the sequence \(\{x_{n}\}\) generated by Algorithm 3.1 weakly converges to a fixed point of T.
Proof
To see that \(\{x_{n}\}\) is actually weakly convergent, we need to show that \(\omega_{w}(x_{n})\) consists of exactly one point. Take \(p, q\in \omega_{w}(x_{n})\) and let \(\{x_{n_{i}}\}\) and \(\{x_{m_{j}}\}\) be two subsequences of \(\{x_{n}\}\) such that \(x_{n_{i}}\rightharpoonup p\) and \(x_{m_{j}}\rightharpoonup q\), respectively. Using Lemma 2.7 of [19] and Lemma 3.1, we have \(p=q\). Hence, the proof is complete. □
4 The accelerated CQ algorithm
Here, we introduce an acceleration of the CQ algorithm based on Algorithm 3.1 and show its strong convergence.
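For the reader's convenience, the baseline CQ algorithm of Nakajo and Takahashi [9] can be stated as follows (standard form; the equation numbering (12) used below could not be verified against the source):

```latex
\begin{aligned}
 & x_0 \in C \ \text{chosen arbitrarily},\\
 & y_n = \alpha_n x_n + (1-\alpha_n) T x_n,\\
 & C_n = \{ z \in C : \|y_n - z\| \le \|x_n - z\| \},\\
 & Q_n = \{ z \in C : \langle x_n - z,\; x_0 - x_n \rangle \ge 0 \},\\
 & x_{n+1} = P_{C_n \cap Q_n} x_0, \qquad n \ge 0 .
\end{aligned}
```

The projection onto \(C_{n}\cap Q_{n}\) at every step is what forces strong convergence to \(P_{\operatorname{Fix}(T)}x_{0}\); as the numerical results in Section 5 suggest, it can also cancel the benefit of a conjugate-gradient-type search direction.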
Theorem 4.1
Proof
Lemma 2.4 and (19) then guarantee that every weak limit point of \(\{x_{n}\}_{n=0}^{\infty}\) is a fixed point of T. That is, \(\omega _{w}(x_{n})\subset \operatorname{Fix} (T)\). This fact, with inequality (16) and Lemma 2.3, ensures the strong convergence of \(\{x_{n}\}_{n=0}^{\infty}\) to \(q=P_{\operatorname{Fix}(T)}x_{0}\). □
5 Numerical examples and conclusion
In this section, we compare the original algorithms with the accelerated algorithms. The codes were written in Matlab 7.0 and run on a personal computer.
Firstly, we apply the Mann algorithm (6) and Algorithm 3.1 to the following convex feasibility problem (see [1, 14]).
Problem 5.1
(From [14])
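The statement of Problem 5.1 did not survive extraction. As a stand-in, the sketch below sets up a representative convex feasibility problem of the kind treated in [1, 14] (find a point in the intersection of m half-spaces) and solves it with the plain Mann iteration applied to the average of the projections. The constraint data, dimensions, step size, and iteration count are illustrative assumptions, not the paper's test setup.

```python
# Representative convex feasibility problem: find x in the intersection of
#     C_i = { x : <a_i, x> <= b_i },   i = 1, ..., m,
# via the Mann iteration x_{n+1} = x_n + alpha_n (T(x_n) - x_n), where T is
# the average of the projections P_i (nonexpansive; Fix(T) = intersection
# when it is nonempty -- here 0 is feasible by construction).
import random

def proj_halfspace(x, a, b):
    """Project x onto {z : <a, z> <= b}."""
    s = sum(ai * xi for ai, xi in zip(a, x))
    if s <= b:
        return list(x)
    nrm2 = sum(ai * ai for ai in a)
    t = (s - b) / nrm2
    return [xi - t * ai for xi, ai in zip(x, a)]

def T(x, A, b):
    """Average of the m half-space projections."""
    m, n = len(A), len(x)
    out = [0.0] * n
    for a_i, b_i in zip(A, b):
        p = proj_halfspace(x, a_i, b_i)
        out = [o + pi / m for o, pi in zip(out, p)]
    return out

random.seed(0)
N, m = 5, 5
A = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(m)]
b = [1.0] * m                    # 0 satisfies every constraint strictly
x = [5.0] * N                    # initial point 5e, as in one table column
for n in range(5000):
    Tx = T(x, A, b)
    alpha = 0.5                  # constant alpha_n in (0, 1)
    x = [xi + alpha * (ti - xi) for xi, ti in zip(x, Tx)]
```

An accelerated variant would replace the direction `Tx - x` with a conjugate-gradient-type direction as in Algorithm 3.1.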
Computational results for Problem 5.1 with different dimensions
Initial point | rand( N ,1) | 200 × rand( N ,1) | 5e | 5,000e | ||
---|---|---|---|---|---|---|
N = 5 m = 5 | Algorithm 3.1 | Iter. | 2 | 62 | 5 | 9 |
Sec. | 0 | 0 | 0 | 0 | ||
Mann algorithm | Iter. | 64 | 68 | 73 | 74 | |
Sec. | 0 | 0 | 0 | 0 | ||
N = 1,000 m = 50 | Algorithm 3.1 | Iter. | 4 | 448 | 7 | 425 |
Sec. | 0.0156 | 1.0608 | 0.0312 | 1.0608 | ||
Mann algorithm | Iter. | 505 | 515 | 478 | 429 | |
Sec. | 1.2948 | 1.2480 | 1.2012 | 1.1076 | ||
N = 50 m = 1,000 | Algorithm 3.1 | Iter. | 6,814 | 7 | 4 | 9 |
Sec. | 38.9846 | 0.1092 | 0.0312 | 0.1248 | ||
Mann algorithm | Iter. | 8,438 | 7,771 | 8,692 | 6,913 | |
Sec. | 53.6799 | 43.4463 | 49.7799 | 38.7818 |
Table 1 illustrates that, with a few exceptions, Algorithm 3.1 significantly reduces the running time and the number of iterations needed to find a fixed point compared with the Mann algorithm. The advantage becomes more pronounced as the parameters N and m grow. The reason for the few exceptions deserves further research.
Next, we apply the CQ algorithm (12) and the accelerated CQ algorithm (13) to the following problem.
Problem 5.2
(From [22])
He and Yang [22] showed that T is nonexpansive and has at least a fixed point in \(S(0,1)\).
Table 2: Computational results for Problem 5.2 with different initial points

| Algorithm | | (1,0,0) | (0,1,0) | (0,0.5,0.5) | (0.5,0.5,0.5) |
|---|---|---|---|---|---|
| CQ algorithm | Iter. | 652 | 167 | 1,441 | 38 |
| | Sec. | 0.1092 | 0.0312 | 0.2652 | 0 |
| Accelerated CQ algorithm | Iter. | 577 | 273 | 1,430 | 84 |
| | Sec. | 0.0936 | 0.0468 | 0.2496 | 0.0156 |
Table 2 shows that the acceleration of the CQ algorithm is ineffective; that is, the accelerated CQ algorithm does not improve on the CQ algorithm in either running time or number of iterations. The acceleration effect may be destroyed by the projections onto the sets \(C_{n}\) and \(Q_{n}\).
6 Concluding remarks
In this paper, we accelerate the Mann and CQ algorithms to obtain the accelerated Mann and CQ algorithms, respectively. We then establish the weak convergence of the accelerated Mann algorithm and the strong convergence of the accelerated CQ algorithm under some conditions. The numerical examples illustrate that the acceleration of the Mann algorithm is effective, whereas the acceleration of the CQ algorithm is ineffective.
Declarations
Acknowledgements
The authors express their thanks to the reviewers, whose constructive suggestions led to improvements in the presentation of the results. This work was supported by the National Natural Science Foundation of China (No. 61379102) and the Fundamental Research Funds for the Central Universities (No. 3122013D017), and in part by the Foundation of Tianjin Key Lab for Advanced Signal Processing.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
- Bauschke, HH, Combettes, PL: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, Berlin (2011)
- Goebel, K, Kirk, WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge (1990)
- Goebel, K, Reich, S: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York (1984)
- Takahashi, W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)
- Bauschke, HH, Borwein, JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38(3), 367-426 (1996)
- Picard, E: Mémoire sur la théorie des équations aux dérivées partielles et la méthode des approximations successives. J. Math. Pures Appl. 6, 145-210 (1890)
- Krasnosel'skii, MA: Two remarks on the method of successive approximations. Usp. Mat. Nauk 10, 123-127 (1955)
- Mann, WR: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506-510 (1953)
- Nakajo, K, Takahashi, W: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 279, 372-379 (2003)
- Iiduka, H: Iterative algorithm for solving triple-hierarchical constrained optimization problem. J. Optim. Theory Appl. 148, 580-592 (2011)
- Iiduka, H: Fixed point optimization algorithm and its application to power control in CDMA data networks. Math. Program. 133, 227-242 (2012)
- Iiduka, H: Iterative algorithm for triple-hierarchical constrained nonconvex optimization problem and its application to network bandwidth allocation. SIAM J. Optim. 22(3), 862-878 (2012)
- Iiduka, H: Fixed point optimization algorithms for distributed optimization in networked systems. SIAM J. Optim. 23, 1-26 (2013)
- Sakurai, K, Iiduka, H: Acceleration of the Halpern algorithm to search for a fixed point of a nonexpansive mapping. Fixed Point Theory Appl. 2014, 202 (2014)
- Nocedal, J, Wright, SJ: Numerical Optimization, 2nd edn. Springer Series in Operations Research and Financial Engineering. Springer, Berlin (2006)
- Yao, Y, Marino, G, Muglia, L: A modified Korpelevich's method convergent to the minimum-norm solution of a variational inequality. Optimization 63(4), 1-11 (2012)
- Martinez-Yanes, C, Xu, HK: Strong convergence of the CQ method for fixed point processes. Nonlinear Anal. 64, 2400-2411 (2006)
- Tan, KK, Xu, HK: Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 178, 301-308 (1993)
- Suantai, S: Weak and strong convergence criteria of Noor iterations for asymptotically nonexpansive mappings. J. Math. Anal. Appl. 311, 506-517 (2005)
- Genel, A, Lindenstrauss, J: An example concerning fixed points. Isr. J. Math. 22, 81-86 (1975)
- Bauschke, HH, Combettes, PL: A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 26(2), 248-264 (2001)
- He, S, Yang, Z: Realization-based method of successive projection for nonexpansive mappings and nonexpansive semigroups (submitted)