- Research
- Open Access
On iterative computation of fixed points and optimization
- Ioannis K Argyros^{1},
- Yeol Je Cho^{2, 3} and
- Saïd Hilout^{4}
https://doi.org/10.1186/s13663-015-0372-8
© Argyros et al. 2015
- Received: 29 April 2015
- Accepted: 30 June 2015
- Published: 25 July 2015
Abstract
In this paper, a semi-local convergence analysis of the Gauss-Newton method for convex composite optimization is presented using the concept of quasi-regularity in order to approximate fixed points in optimization. Our convergence analysis is presented first under the L-average Lipschitz and then under generalized convex majorant conditions. The results extend the applicability of the Gauss-Newton method under the same computational cost as in earlier studies such as Li and Ng (SIAM J. Optim. 18:613-642, 2007), Moldovan and Pellegrini (J. Optim. Theory Appl. 142:147-163, 2009), Moldovan and Pellegrini (J. Optim. Theory Appl. 142:165-183, 2009), Wang (Math. Comput. 68:169-186, 1999) and Wang (IMA J. Numer. Anal. 20:123-134, 2000).
Keywords
- fixed point
- the Gauss-Newton method
- majorizing sequences
- convex composite optimization
- semi-local convergence
MSC
- 47H10
- 47J05
- 47J25
- 65G99
- 49M15
- 41A29
1 Introduction
In this paper, we are concerned with the convex composite optimization problem. Many problems in mathematical programming, such as convex inclusion problems, minimax problems, penalization methods, goal programming problems, constrained optimization problems, and others, can be formulated as convex composite optimization problems (see, for example, [1–6]).
Recently, in the elegant study by Li and Ng [7], the notion of quasi-regularity for \(x_{0} \in \mathbb{R}^{l}\) with respect to the inclusion problem was used. This notion generalizes the case of regularity studied in the seminal paper by Burke and Ferris [3], as well as the case when \(d \longrightarrow F'(x_{0}) d - \mathcal {C}\) is surjective. The latter condition was introduced by Robinson in [8, 9] (see also [1, 10, 11]).
In this paper, we present a convergence analysis of the Gauss-Newton method (GNM) (see the method (GNA) in Section 2). In [7], the convergence of the method (GNA) is based on the generalized Lipschitz conditions introduced by Wang [12, 13] (made precise in Section 2). In [11], we presented, in the setting of Banach spaces, a convergence analysis for the method (GNM) that is finer than those in [12–16], with the advantages \((\mathcal{A})\): tighter error estimates on the distances involved, and at least as precise information on the location of the solution. These advantages were obtained (under the same computational cost) using the same or weaker hypotheses. Here, we provide the same advantages \((\mathcal{A})\), but for the method (GNA).
The rest of the study is organized as follows: Section 2 contains the notions of generalized Lipschitz conditions and the majorizing sequences for the method (GNA). In order for us to make the paper as self-contained as possible, the notion of quasi-regularity is re-introduced (see, for example, [7]) in Section 3. Semi-local convergence analysis of the method (GNA) using L-average conditions is presented in Section 4. In Section 5, some convex majorant conditions are used for the semi-local convergence of the method (GNA).
2 Generalized Lipschitz conditions and majorizing sequences
The study of the problem (2.1) is very important. On the one hand, it provides a unified framework for the development and analysis of algorithmic methods; on the other hand, it is a powerful tool for the study of first- and second-order optimality conditions in constrained optimization (see, for example, [1–7]).
A semi-local convergence analysis for the Gauss-Newton method (GNM) is presented using the following popular algorithm (see, for example, [1, 7, 17]):
Algorithm
(GNA): \((\xi, \Delta, x_{0})\)
Let \(x_{0} \in \mathbb{R}^{l}\) be given. Having \(x_{0}, x_{1}, \ldots, x_{k}\) (\(k \geq 0\)), determine \(x_{k+1}\) as follows.
If \(0 \in \mathcal {D}_{\Delta}(x_{k})\), then STOP; otherwise, choose \(d_{k} \in \mathcal {D}_{\Delta}(x_{k})\) and set \(x_{k+1} = x_{k} + d_{k}\).
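To fix ideas, in the classical regular least-squares special case (the outer convex function taken as the squared Euclidean norm, with no constraint set), the iteration underlying (GNA) reduces to the familiar Gauss-Newton method. The following minimal numerical sketch illustrates only this special case; the function names, tolerances, and the step-size stopping rule are illustrative assumptions, not part of the algorithm above:

```python
import numpy as np

def gauss_newton(F, J, x0, tol=1e-10, max_iter=50):
    """Classical Gauss-Newton iteration for min ||F(x)||^2.

    Each step solves the linearized subproblem
        min_d ||F(x_k) + J(x_k) d||
    and sets x_{k+1} = x_k + d_k.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # d_k minimizes the linearized residual ||F(x_k) + J(x_k) d||.
        d, *_ = np.linalg.lstsq(J(x), -F(x), rcond=None)
        x = x + d
        if np.linalg.norm(d) < tol:  # illustrative stand-in for the STOP test
            break
    return x
```

For instance, applied to \(F(x) = (x_{1}^{2}+x_{2}^{2}-4, x_{1}-x_{2})\) from the starting point \((1,1)\), the iterates converge rapidly to \((\sqrt{2}, \sqrt{2})\).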
Notice that, in the special case when \(l=m\) and \(F(x)=H(x)-x\), the results obtained in this paper can be used to iteratively compute fixed points of the operator \(H:\mathbb {R}^{m} \rightarrow \mathbb {R}^{m}\). Therefore, the results obtained in this paper are useful in fixed point theory and its applications in optimization.
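In this special case, for a differentiable H, a fixed point of H can be approximated by applying Newton's method to \(F(x) = H(x) - x\), whose Jacobian is \(H'(x) - I\). A hedged sketch of this reduction (the operator \(H(x) = \cos x\) in the usage note and the tolerances are illustrative choices, not from the paper):

```python
import numpy as np

def fixed_point_newton(H, JH, x0, tol=1e-12, max_iter=50):
    """Compute x with H(x) = x via Newton's method on F(x) = H(x) - x.

    JH is the Jacobian of H, so F'(x) = JH(x) - I.
    """
    x = np.asarray(x0, dtype=float)
    I = np.eye(x.size)
    for _ in range(max_iter):
        Fx = H(x) - x
        if np.linalg.norm(Fx) < tol:
            break
        # Newton step: solve (JH(x) - I) d = -(H(x) - x).
        d = np.linalg.solve(JH(x) - I, -Fx)
        x = x + d
    return x
```

For example, with \(H(x) = \cos x\) on \(\mathbb{R}\), the iteration converges from \(x_{0} = 1\) to the unique fixed point of the cosine function, \(x^{\star} \approx 0.739085\).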
We need the following notion of the generalized Lipschitz condition due to Wang [12, 13] (see also [7]). From now on, \(L : [0, \infty[\longrightarrow\,]0, \infty[\) (or \(L_{0}\)) denotes a nondecreasing and absolutely continuous function. Moreover, η and α denote given positive numbers.
Definition 2.1
Let \(\mathcal {Y}\) be a Banach space and let \(x_{0} \in \mathbb{R}^{l}\). Let \(G : \mathbb{R}^{l} \longrightarrow \mathcal {Y}\). Then, G is said to satisfy:
Remark 2.2
Definition 2.3
The results concerning \(\{ t_{\alpha, n} \}\) are already in the literature (see, for example, [1, 7, 11]), whereas the corresponding ones for the sequence \(\{ s_{\alpha, n} \}\) can be derived in an analogous way by simply using \(\psi' _{\alpha, 0}\) instead of \(\psi' _{\alpha}\).
First, we need some auxiliary results for the properties of functions \(\psi_{\alpha}\), \(\psi_{\alpha, 0}\) and the relationship between sequences \(\{ s_{\alpha, n } \}\) and \(\{ t_{\alpha, n } \}\). The proofs of the next four lemmas involving the \(\psi_{\alpha}\) function can be found in [7], whereas the proofs for the function \(\psi_{\alpha, 0 }\) are analogously obtained by simply replacing L by \(L_{0}\).
Lemma 2.4
Suppose that \(0 < \eta\leq b_{\alpha}\). Then \(b_{\alpha}< r_{\alpha}\) and the following assertions hold:
(1) \(\psi_{\alpha}\) is strictly decreasing on \([0, r_{\alpha}]\) and strictly increasing on \([r_{\alpha}, \infty)\) with \(\psi_{\alpha}(\eta) >0\), \(\psi_{\alpha}(r_{\alpha}) = \eta- b_{\alpha}\leq0\), \(\psi_{\alpha}(+ \infty) \geq\eta>0\);
(2) \(\psi_{\alpha, 0}\) is strictly decreasing on \([0, r_{\alpha, 0}]\) and strictly increasing on \([r_{\alpha, 0} , \infty)\) with \(\psi_{\alpha, 0} (\eta) >0\), \(\psi_{\alpha, 0}(r_{\alpha, 0}) = \eta- b_{\alpha, 0} \leq0\), \(\psi_{ \alpha, 0} (+ \infty) \geq\eta>0\).
(3) \(\{ t _{\alpha, n} \}\) is strictly monotonically increasing and converges to \(r _{\alpha}^{\star} \);
(4) \(\{ s _{\alpha, n} \}\) is strictly monotonically increasing and converges to its unique least upper bound \(s _{\alpha }^{\star} \leq r_{\alpha, 0}^{\star}\);
(5) The convergence of \(\{ t _{\alpha, n} \}\) is quadratic if \(\eta< b_{\alpha}\) and linear if \(\eta= b_{\alpha}\).
Lemma 2.5
- (1)
The functions \(\alpha\rightarrow r_{\alpha}\), \(\alpha\rightarrow r_{\alpha, 0}\), \(\alpha\rightarrow b_{\alpha}\), \(\alpha\rightarrow b_{\alpha, 0}\) are strictly decreasing on \([0, \infty)\);
- (2)
\(\psi_{\alpha}< \psi_{\overline{\alpha}}\) and \(\psi_{\alpha, 0} < \psi_{\overline{\alpha} , 0}\) on \([0, \infty)\);
- (3)
The function \(\alpha\rightarrow r_{\alpha}^{\star}\) is strictly increasing on \(I(\eta)\), where \(I(\eta) = \{ \alpha > 0 : \eta \leq b_{\alpha} \}\);
- (4)
The function \(\alpha\rightarrow r _{\alpha, 0}^{\star}\) is strictly increasing on \(I(\eta)\).
Lemma 2.6
Lemma 2.7
Next, we show that the sequence \(\{ s _{\alpha, n} \}\) is tighter than \(\{ t _{\alpha, n} \}\).
Lemma 2.8
Moreover, if strict inequality holds in (2.10), then strict inequality also holds in (2.29) and (2.30) for all \(n > 1\). Furthermore, the convergence of \(\{ s _{\alpha, n} \}\) is quadratic if \(\eta< b_{\alpha}\) and linear if \(L_{0}=L\) and \(\eta= b_{\alpha}\).
Proof
Finally, the estimate (2.31) follows from (2.30) by letting \(n\rightarrow\infty\). The convergence order part for the sequence \(\{ s_{\alpha, n} \}\) follows from (2.30) and Lemma 2.4(5). This completes the proof. □
3 Background on regularities
In order for us to make the study as self-contained as possible, we mention some concepts and results on regularities which can be found in [7] (see, also, [1, 10, 12, 15, 20–22]).
Definition 3.1
Let \(x_{0} \in\mathbb{R}^{l}\).
Proposition 3.2
(see [3])
Let \(x_{0}\) be a regular point of (3.1). Then there are constants \(R>0\) and \(\beta>0\) such that (3.3) holds for R and \(\beta( \cdot) = \beta\). Therefore, \(x_{0}\) is a quasi-regular point with the quasi-regular radius \(R_{x_{0}} \geq R\) and the quasi-regular bound function \(\beta_{x_{0}} \leq\beta\) on \([0, R]\).
Remark 3.3
Definition 3.4
- (a)
\(Tx+ T y \subseteq T (x+y)\) for all \(x, y \in\mathbb{R}^{l}\);
- (b)
\(T \lambda x=\lambda Tx\) for all \(\lambda>0\) and \(x\in\mathbb{R}^{l}\);
- (c)
\(0 \in T0\).
Remark 3.5
- (1)
T is convex ⟺ the graph \(Gr(T)\) is a convex cone in \(\mathbb{R}^{l} \times\mathbb{R}^{m}\).
- (2)
T is convex \(\Longrightarrow T^{-1} \) is convex from \(\mathbb {R}^{m}\) to \(\mathbb{R}^{l}\).
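A standard example, not taken from the paper but consistent with Definition 3.4: for a linear map \(A : \mathbb{R}^{l} \rightarrow \mathbb{R}^{m}\) and a convex cone \(C \subseteq \mathbb{R}^{m}\) containing the origin, the set-valued map

$$ T x = A x - C = \{ A x - c : c \in C \}, \quad x \in \mathbb{R}^{l}, $$

is a convex process. Indeed, (a) \(Tx + Ty = A(x+y) - (C+C) \subseteq A(x+y) - C = T(x+y)\), since \(C + C \subseteq C\) for a convex cone; (b) holds because \(\lambda C = C\) for all \(\lambda > 0\); and (c) holds because \(0 \in C\) gives \(0 \in -C = T0\).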
Lemma 3.6
(see [8])
Let C be a closed convex cone in \(\mathbb{R}^{m}\). Suppose that \(x_{0} \in\mathbb{R}^{l}\) satisfies the Robinson condition (3.11). Then we have the following assertions:
(1) \(T_{x_{0}}^{-1}\) is normed.
The following proposition shows that the condition (3.11) implies that \(x_{0}\) is a regular point of (3.1). Using the center \(L_{0}\)-average Lipschitz condition, we also estimate the quasi-regular bound function in Proposition 3.7. The proof is obtained in a way analogous to the corresponding result in [7] by simply using \(L_{0}\) instead of L.
Proposition 3.7
Let C be a closed convex cone in \(\mathbb{R}^{m}\), \(x_{0} \in\mathbb {R}^{l}\), and define \(T_{x_{0}}\) as in (3.9). Suppose that \(x_{0}\) satisfies the Robinson condition (3.11). Then we have the following assertions:
(1) \(x_{0}\) is a regular point of (3.1).
4 Semi-local convergence analysis for (GNA)
Theorem 4.1
Proof
Remark 4.2
(1) If \(L=L_{0}\), then Theorem 4.1 reduces to the corresponding results in [7]. Otherwise, in view of (2.29)-(2.31), our results constitute an improvement. The remaining results in [7] are improved as well, since they are corollaries of Theorem 4.1. We leave the details to the motivated reader.
5 General majorant conditions
In this section, we provide a semi-local convergence analysis for the method (GNA) using more general majorant conditions than (2.8) and (2.9).
Definition 5.1
Let \(\mathcal {Y}\) be a Banach space, \(x_{0} \in \mathbb{R}^{l}\) and \(\alpha>0\). Let \(G : \mathbb{R}^{l} \longrightarrow \mathcal {Y}\) and \(f_{\alpha}: [0,r [ \longrightarrow\,]{-}\infty, + \infty[\) be continuously differentiable. Then G is said to satisfy:
Next, we provide sufficient conditions for the convergence of the sequence \(\{ t_{\alpha, n} \}\) corresponding to the ones given in Lemma 2.4.
Lemma 5.2
(see, for example, [2, 10, 20])
Let \(r>0\), \(\alpha>0\), and \(f_{\alpha}: [0,r) \longrightarrow (-\infty, + \infty)\) be continuously differentiable. Suppose that:
(1) \(f_{\alpha}(0) >0\), \(f'_{\alpha}(0) = -1\);
(2) \(f'_{\alpha}\) is convex and strictly increasing;
Now, we show the following semi-local convergence result for the method (GNA) using the generalized majorant conditions (5.1) and (5.2).
Theorem 5.3
Proof
Remark 5.4
6 Conclusion
Using a combination of average and center-average type conditions, we presented a semi-local convergence analysis for the method (GNA) to approximate a solution or a fixed point of a convex composite optimization problem in the setting of finite dimensional spaces. Our analysis extends the applicability of the method (GNA) under the same computational cost as in earlier studies, such as [4, 5, 7, 12, 13, 26–35].
Declarations
Acknowledgements
The second author was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science, ICT and Future Planning (2014R1A2A2A01002100).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
- Argyros, IK: Convergence and Applications of Newton-Type Iterations. Springer, New York (2009)
- Argyros, IK, Hilout, S: Computational Methods in Nonlinear Analysis: Efficient Algorithms, Fixed Point Theory and Applications. World Scientific, Singapore (2013)
- Burke, JV, Ferris, MC: A Gauss-Newton method for convex composite optimization. Math. Program., Ser. A 71, 179-194 (1995)
- Moldovan, A, Pellegrini, L: On regularity for constrained extremum problems, I. Sufficient optimality conditions. J. Optim. Theory Appl. 142, 147-163 (2009)
- Moldovan, A, Pellegrini, L: On regularity for constrained extremum problems, II. Necessary optimality conditions. J. Optim. Theory Appl. 142, 165-183 (2009)
- Rockafellar, RT: Convex Analysis. Princeton Mathematical Series, vol. 28. Princeton University Press, Princeton (1970)
- Li, C, Ng, KF: Majorizing functions and convergence of the Gauss-Newton method for convex composite optimization. SIAM J. Optim. 18, 613-642 (2007)
- Robinson, SM: Extension of Newton’s method to nonlinear functions with values in a cone. Numer. Math. 19, 341-347 (1972)
- Robinson, SM: Stability theory for systems of inequalities, I. Linear systems. SIAM J. Numer. Anal. 12, 754-769 (1975)
- Argyros, IK, Cho, YJ, Hilout, S: Numerical Methods for Equations and Its Applications. CRC Press, New York (2012)
- Argyros, IK, Hilout, S: Extending the applicability of the Gauss-Newton method under average Lipschitz-type conditions. Numer. Algorithms 58, 23-52 (2011)
- Wang, XH: Convergence of Newton’s method and inverse function theorem in Banach space. Math. Comput. 68, 169-186 (1999)
- Wang, XH: Convergence of Newton’s method and uniqueness of the solution of equations in Banach space. IMA J. Numer. Anal. 20, 123-134 (2000)
- Xu, XB, Li, C: Convergence of Newton’s method for systems of equations with constant rank derivatives. J. Comput. Math. 25, 705-718 (2007)
- Xu, XB, Li, C: Convergence criterion of Newton’s method for singular systems with constant rank derivatives. J. Math. Anal. Appl. 345, 689-701 (2008)
- Zabrejko, PP, Nguen, DF: The majorant method in the theory of Newton-Kantorovich approximations and the Pták error estimates. Numer. Funct. Anal. Optim. 9, 671-684 (1987)
- Häußler, WM: A Kantorovich-type convergence analysis for the Gauss-Newton method. Numer. Math. 48, 119-125 (1986)
- Hiriart-Urruty, JB, Lemaréchal, C: Convex Analysis and Minimization Algorithms I: Fundamentals, vol. 305. Springer, Berlin (1993)
- Hiriart-Urruty, JB, Lemaréchal, C: Convex Analysis and Minimization Algorithms II: Advanced Theory and Bundle Methods, vol. 306. Springer, Berlin (1993)
- Ferreira, OP, Svaiter, BF: Kantorovich’s majorants principle for Newton’s method. Comput. Optim. Appl. 42, 213-229 (2009)
- Li, C, Wang, XH: On convergence of the Gauss-Newton method for convex composite optimization. Math. Program., Ser. A 91, 349-356 (2002)
- Ng, KF, Zheng, XY: Characterizations of error bounds for convex multifunctions on Banach spaces. Math. Oper. Res. 29, 45-63 (2004)
- Argyros, IK, Hilout, S: Weaker conditions for the convergence of Newton’s method. J. Complex. 28, 364-387 (2012)
- Kantorovich, LV, Akilov, GP: Functional Analysis. Pergamon Press, Oxford (1982)
- Argyros, IK: Approximating solutions of equations using Newton’s method with a modified Newton’s method iterate as a starting point. Rev. Anal. Numér. Théor. Approx. 36, 123-138 (2007)
- Argyros, IK, Cho, YJ, George, S: On the ‘Terra incognita’ for the Newton-Kantorovich method. J. Korean Math. Soc. 51, 251-266 (2014)
- Argyros, IK, Cho, YJ, Khattri, SK: On a new semilocal convergence analysis for the Jarratt method. J. Inequal. Appl. 2013, 194 (2013)
- Argyros, IK, Cho, YJ, Ren, HM: Convergence of Halley’s method for operators with the bounded second derivative in Banach spaces. J. Inequal. Appl. 2013, 260 (2013)
- Argyros, IK, Hilout, S: Local convergence analysis for a certain class of inexact methods. J. Nonlinear Sci. Appl. 1, 244-253 (2008)
- Argyros, IK, Hilout, S: Local convergence analysis of inexact Newton-like methods. J. Nonlinear Sci. Appl. 2, 11-18 (2009)
- Argyros, IK, Hilout, S: Multipoint iterative processes of efficiency index higher than Newton’s method. J. Nonlinear Sci. Appl. 2, 195-203 (2009)
- Sahu, DR, Cho, YJ, Agarwal, RP, Argyros, IK: Accessibility of solutions of operator equations by Newton-like method. J. Complex. 31, 637-657 (2015)
- Li, C, Hu, N, Wang, J: Convergence behavior of Gauss-Newton’s method and extensions to the Smale point estimate theory. J. Complex. 26, 268-295 (2010)
- Li, C, Zhang, WH, Jin, XQ: Convergence and uniqueness properties of Gauss-Newton’s method. Comput. Math. Appl. 47, 1057-1067 (2004)
- Chen, X, Yamamoto, T: Convergence domains of certain iterative methods for solving nonlinear equations. Numer. Funct. Anal. Optim. 10, 37-48 (1989)