Open Access

Convergence criteria of Newton’s method on Lie groups

Fixed Point Theory and Applications 2013, 2013:293

https://doi.org/10.1186/1687-1812-2013-293

Received: 20 August 2013

Accepted: 26 August 2013

Published: 9 November 2013

Abstract

In the present paper, we study Newton's method on Lie groups (independent of affine connections) for finding zeros of a mapping f from a Lie group to its Lie algebra. Under a generalized L-average Lipschitz condition on the differential of f, we establish a unified convergence criterion for Newton's method. As applications, we obtain convergence criteria under the Kantorovich condition and the γ-condition, respectively. Moreover, applications to optimization problems are also provided.

MSC: 65H10, 65D99.

Keywords

Newton's method; Lie group; L-average Lipschitz condition

1 Introduction

Newton's method is one of the most important methods for finding approximate solutions of the equation $f(x) = 0$, where f is an operator from some domain D in a real or complex Banach space X to another Banach space Y. As is well known, one of the most important results on Newton's method is Kantorovich's theorem (cf. [1]). Under the mild condition that the second Fréchet derivative of f is bounded (or, more generally, that the first derivative is Lipschitz continuous) on a proper open metric ball around the initial point $x_0$, Kantorovich's theorem provides a simple and clear criterion ensuring the quadratic convergence of Newton's method. Another important result on Newton's method is Smale's point estimate theory (i.e., α-theory and γ-theory) in [2], where the notion of approximate zeros was introduced and rules were established to judge whether an initial point $x_0$ is an approximate zero, depending on the information of the analytic nonlinear operator at this initial point and at a solution $x^*$, respectively. There are a lot of works on weakening and/or extending the Lipschitz continuity assumptions made on the mappings; see, for example, [3–7] and references therein. In particular, Zabrejko and Nguen parametrized the classical Lipschitz continuity in [7]. Wang introduced in [6] the notion of Lipschitz conditions with L-average to unify both Kantorovich's and Smale's criteria.

In a Riemannian manifold framework, an analogue of the well-known Kantorovich theorem was given in [8] for Newton's method for vector fields on Riemannian manifolds, while the extensions of Smale's famous α-theory and γ-theory in [2] to analytic vector fields and analytic mappings on Riemannian manifolds were carried out in [9]. In the recent paper [10], the convergence criteria in [9] were improved by using the notion of the γ-condition for vector fields and mappings on Riemannian manifolds. The radii of uniqueness balls of singular points of vector fields satisfying the γ-conditions were estimated in [11], while the local behavior of Newton's method on Riemannian manifolds was studied in [12, 13]. Furthermore, in [14], Li and Wang extended the generalized L-average Lipschitz condition (introduced in [6]) to Riemannian manifolds and established a unified convergence criterion of Newton's method on Riemannian manifolds. Similarly, inspired by previous work of Zabrejko and Nguen in [7] on Kantorovich's majorant method, Alvarez et al. introduced in [15] a Lipschitz-type radial function for the covariant derivative of vector fields and mappings on Riemannian manifolds and established a unified convergence criterion of Newton's method on Riemannian manifolds.

Note also that Mahony used one-parameter subgroups of a Lie group to develop a version of Newton's method on an arbitrary Lie group in [16], where the algorithm presented is independent of affine connections on the Lie group. This means that Newton's method on Lie groups is different from the one defined on Riemannian manifolds. On the other hand, motivated by looking for approaches to solving ordinary differential equations on Lie groups, Owren and Welfert also studied in [17] Newton's method, independent of affine connections on the Lie group, and showed local quadratic convergence. Recently, Wang and Li [18] established Kantorovich's theorem (independent of the connection) for Newton's method on Lie groups. More precisely, under the assumption that the differential of f satisfies a Lipschitz condition around the initial point (which is in terms of one-parameter subgroups and independent of the metric), the convergence criterion of Newton's method is presented. Extensions of Smale's point estimate theory for Newton's method on Lie groups were given in [19].

The purpose of the present paper is to establish a unified convergence criterion for Newton's method (independent of the connection) on Lie groups under a generalized L-average Lipschitz condition. As applications, we obtain the convergence criteria under the Kantorovich condition and the γ-condition, respectively. Hence, our results extend the corresponding results in [18] and [19], respectively. Moreover, applications to optimization problems are also provided.

The remainder of the paper is organized as follows. Some preliminary results and notions are given in Section 2, while the main results about a unified convergence criterion are presented in Section 3. In Section 4, applications to optimization problems are explored. Theorems under the Kantorovich condition and the γ-condition are provided in the final section.

2 Notions and preliminaries

Most of the notions and notations used in the present paper are standard; see, for example, [20, 21]. A Lie group $(G, \cdot)$ is a Hausdorff topological group with countable bases which also has the structure of an analytic manifold such that the group product and the inversion are analytic operations in the differentiable structure given on the manifold. The dimension of a Lie group is that of the underlying manifold, and we shall always assume that it is m-dimensional. The symbol e designates the identity element of G. Let $\mathcal{G}$ be the Lie algebra of the Lie group G, which is the tangent space $T_eG$ of G at e, equipped with the Lie bracket $[\cdot,\cdot] : \mathcal{G} \times \mathcal{G} \to \mathcal{G}$.

In the sequel we make use of the left translation of the Lie group G. For each $y \in G$, we define the left translation $L_y : G \to G$ by
$$L_y(z) = y \cdot z \quad \text{for each } z \in G.$$
(2.1)
The differential of $L_y$ at z is denoted by $(L_y)'_z$, which clearly determines a linear isomorphism from $T_zG$ to the tangent space $T_{y \cdot z}G$. In particular, the differential $(L_y)'_e$ of $L_y$ at e determines a linear isomorphism from $\mathcal{G}$ to the tangent space $T_yG$. The exponential map $\exp : \mathcal{G} \to G$ is certainly the most important construction associated to G and $\mathcal{G}$, and is defined as follows. Given $u \in \mathcal{G}$, let $\sigma_u : \mathbb{R} \to G$ be the one-parameter subgroup of G determined by the left invariant vector field $X_u : y \mapsto (L_y)'_e(u)$; i.e., $\sigma_u$ satisfies
$$\sigma_u(0) = e \quad \text{and} \quad \sigma'_u(t) = X_u(\sigma_u(t)) = (L_{\sigma_u(t)})'_e(u) \quad \text{for each } t \in \mathbb{R}.$$
(2.2)
The value of the exponential map exp at u is then defined by
$$\exp(u) = \sigma_u(1).$$
Moreover, we have that
$$\exp(tu) = \sigma_{tu}(1) = \sigma_u(t) \quad \text{for each } t \in \mathbb{R} \text{ and } u \in \mathcal{G}$$
(2.3)
and
$$\exp\bigl((t+s)u\bigr) = \exp(tu) \cdot \exp(su) \quad \text{for any } t, s \in \mathbb{R} \text{ and } u \in \mathcal{G}.$$
(2.4)
Note that the exponential map is not surjective in general. However, the exponential map is a diffeomorphism on an open neighborhood of $0 \in \mathcal{G}$. In the case when G is Abelian, exp is also a homomorphism from $\mathcal{G}$ to G, i.e.,
$$\exp(u+v) = \exp(u) \cdot \exp(v) \quad \text{for all } u, v \in \mathcal{G}.$$
(2.5)
In the non-Abelian case, exp is not a homomorphism and, by the Baker-Campbell-Hausdorff (BCH) formula (cf. [[21], p.114]), (2.5) must be replaced by
$$\exp(w) = \exp(u) \cdot \exp(v)$$
(2.6)
for all u, v in a neighborhood of $0 \in \mathcal{G}$, where w is defined by
$$w := u + v + \frac{1}{2}[u,v] + \frac{1}{12}\bigl([u,[u,v]] + [v,[v,u]]\bigr) + \cdots.$$
(2.7)
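The BCH expansion can be checked numerically on a matrix Lie group. The sketch below (an illustration, not part of the original analysis) works in the Lie algebra of skew-symmetric matrices: it compares $\exp(u)\exp(v)$ with $\exp(w)$ for the truncation (2.7); the residual is of fourth order in the size of u and v, whereas the naive guess $\exp(u+v)$ errs already at second order.

```python
import numpy as np
from scipy.linalg import expm

def hat(a):
    # R^3 -> so(3): the usual skew-symmetric "hat" map
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def bracket(A, B):
    # Lie bracket on a matrix Lie algebra
    return A @ B - B @ A

eps = 1e-2
u = eps * hat(np.array([1.0, 0.0, 0.0]))
v = eps * hat(np.array([0.0, 1.0, 0.5]))

# w from the BCH series truncated after the third-order terms, as in (2.7)
w = (u + v + 0.5 * bracket(u, v)
     + (bracket(u, bracket(u, v)) + bracket(v, bracket(v, u))) / 12.0)

err_bch = np.linalg.norm(expm(u) @ expm(v) - expm(w))        # fourth order in eps
err_naive = np.linalg.norm(expm(u) @ expm(v) - expm(u + v))  # second order in eps
print(err_bch, err_naive)
```

Shrinking `eps` by a factor of 10 should shrink `err_bch` by roughly four orders of magnitude but `err_naive` by only two, reflecting where each truncation of the series stops.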
Let $f : G \to \mathcal{G}$ be a $C^1$-map and let $x \in G$. We use $f'_x$ to denote the differential of f at x. Then, by [[22], p.9] (the proof given there for a smooth mapping still works for a $C^1$-map), for each $x' \in T_xG$ and any nontrivial smooth curve $c : (-\varepsilon, \varepsilon) \to G$ with $c(0) = x$ and $c'(0) = x'$, one has that
$$f'_x x' = \Bigl(\frac{d}{dt}(f \circ c)(t)\Bigr)_{t=0}.$$
(2.8)
In particular,
$$f'_x x' = \Bigl(\frac{d}{dt} f\bigl(x \cdot \exp\bigl(t(L_{x^{-1}})'_x x'\bigr)\bigr)\Bigr)_{t=0} \quad \text{for each } x' \in T_xG.$$
(2.9)
Define the linear map $df_x : \mathcal{G} \to \mathcal{G}$ by
$$df_x u = \Bigl(\frac{d}{dt} f\bigl(x \cdot \exp(tu)\bigr)\Bigr)_{t=0} \quad \text{for each } u \in \mathcal{G}.$$
(2.10)
Then, by (2.9),
$$df_x = f'_x \circ (L_x)'_e.$$
(2.11)
Also, in view of the definition, we have that for all $t \ge 0$,
$$\frac{d}{dt} f\bigl(x \cdot \exp(tu)\bigr) = df_{x \cdot \exp(tu)}\, u \quad \text{for each } u \in \mathcal{G}$$
(2.12)
and
$$f\bigl(x \cdot \exp(tu)\bigr) - f(x) = \int_0^t df_{x \cdot \exp(su)}\, u \, ds \quad \text{for each } u \in \mathcal{G}.$$
(2.13)
For the remainder of the present paper, we always assume that $\langle \cdot, \cdot \rangle$ is an inner product on $\mathcal{G}$ and $\|\cdot\|$ is the associated norm on $\mathcal{G}$. We now introduce the following distance on G, which plays a key role in our study. Let $x, y \in G$ and define
$$\varrho(x,y) := \inf\Bigl\{\sum_{i=1}^k \|u_i\| \ \Big|\ \text{there exist } k \ge 1 \text{ and } u_1, \dots, u_k \in \mathcal{G} \text{ such that } y = x \cdot \exp u_1 \cdots \exp u_k\Bigr\},$$
(2.14)

where we adopt the convention that $\inf \emptyset = +\infty$. It is easy to verify that $\varrho(\cdot,\cdot)$ is a distance on G and that the topology induced by this distance is equivalent to the original one on G.

Let $x \in G$ and $r > 0$. We denote the corresponding ball of radius r around x in G by $C_r(x)$, that is,
$$C_r(x) := \{\, y \in G \mid \varrho(x,y) < r \,\}.$$

Let $\mathcal{L}(\mathcal{G})$ denote the set of all linear operators on $\mathcal{G}$. Below, we modify the notion of the Lipschitz condition with L-average for mappings on Banach spaces to suit mappings from a Lie group to its Lie algebra. Let L be a positive nondecreasing integrable function on $[0, R]$, where R is a positive number large enough that $\int_0^R (R-s) L(s)\, ds \ge R$. The notion of the Lipschitz condition with L-average for operators between Banach spaces was first introduced by Wang in [23] for the study of Smale's point estimate theory.

Definition 2.1 Let $r > 0$, $x_0 \in G$, and let T be a mapping from G to $\mathcal{L}(\mathcal{G})$. Then T is said to satisfy the L-average Lipschitz condition on $C_r(x_0)$ if
$$\bigl\|T(x \cdot \exp u) - T(x)\bigr\| \le \int_{\rho(x_0,x)}^{\rho(x_0,x)+\|u\|} L(s)\, ds$$
(2.15)

holds for any $u, u_0, \dots, u_k \in \mathcal{G}$ and $x \in C_r(x_0)$ such that $x = x_0 \cdot \exp u_0 \cdot \exp u_1 \cdots \exp u_k$ and $\|u\| + \rho(x,x_0) < r$, where $\rho(x,x_0) := \sum_{i=0}^k \|u_i\|$.

The majorizing function h defined in the following, first introduced and studied by Wang (cf. [23]), is a powerful tool in our study. Let $r_0 > 0$ and $b > 0$ be such that
$$\int_0^{r_0} L(s)\, ds = 1 \quad \text{and} \quad b = \int_0^{r_0} L(s)\, s\, ds.$$
(2.16)
For $\beta > 0$, define the majorizing function h by
$$h(t) = \beta - t + \int_0^t L(s)(t-s)\, ds \quad \text{for each } 0 \le t \le R.$$
(2.17)

Some useful properties are described in the following propositions; see [23].

Proposition 2.1 The function h is monotonically decreasing on $[0, r_0]$ and monotonically increasing on $[r_0, R]$. Moreover, if $\beta \le b$, then h has a unique zero in each of $[0, r_0]$ and $[r_0, R]$, denoted by $r_1$ and $r_2$, respectively.

Let $\{t_n\}$ denote the sequence generated by Newton's method for h with initial value $t_0 = 0$, that is,
$$t_{n+1} = t_n - h'(t_n)^{-1} h(t_n) \quad \text{for each } n = 0, 1, \dots.$$
(2.18)

Proposition 2.2 Suppose that $\beta \le b$. Then the sequence $\{t_n\}$ generated by (2.18) is monotonically increasing and converges to $r_1$.
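For a concrete non-constant choice of L, the quantities in (2.16)-(2.18) are easy to compute. The sketch below (an illustration, not from the paper) uses the hypothetical choice $L(s) = e^s$, which is positive and nondecreasing and gives $r_0 = \ln 2$ and $b = 2\ln 2 - 1$ in closed form; it generates the majorizing sequence $\{t_n\}$ and illustrates Proposition 2.2: for $\beta \le b$ the sequence increases monotonically toward the zero $r_1$ of h in $[0, r_0]$.

```python
import math

# illustrative choice L(s) = e^s:
#   int_0^t L(s) ds = e^t - 1,  int_0^t L(s)(t - s) ds = e^t - 1 - t
r0 = math.log(2.0)             # solves int_0^{r0} L(s) ds = 1, as in (2.16)
b = 2.0 * math.log(2.0) - 1.0  # b = int_0^{r0} s L(s) ds, by parts

beta = 0.3                     # beta <= b, as Proposition 2.2 requires
h = lambda t: beta - t + (math.exp(t) - 1.0 - t)  # majorizing function (2.17)
dh = lambda t: -1.0 + (math.exp(t) - 1.0)         # h'(t) = -1 + int_0^t L(s) ds

# Newton's method (2.18) for h with t_0 = 0
ts = [0.0]
for _ in range(30):
    t = ts[-1]
    ts.append(t - h(t) / dh(t))
print(ts[:5], ts[-1])
```

The printed iterates increase monotonically and settle at $r_1 \approx 0.3843$, the unique root of h in $[0, \ln 2]$.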

The following lemma will be useful in the proof of the main theorem.

Lemma 2.1 Let $0 < r \le r_0$ and let $x_0 \in G$ be such that $df_{x_0}^{-1}$ exists. Suppose that $df_{x_0}^{-1}\, df$ satisfies the L-average Lipschitz condition on $C_r(x_0)$. Let $x \in C_r(x_0)$ be such that there exist $k \ge 1$ and $u_0, \dots, u_k \in \mathcal{G}$ satisfying $x = x_0 \cdot \exp u_0 \cdots \exp u_k$ and $\rho(x,x_0) := \sum_{i=0}^k \|u_i\| < r$. Then $df_x^{-1}$ exists and
$$\bigl\|df_x^{-1}\, df_{x_0}\bigr\| \le \frac{1}{1 - \int_0^{\rho(x,x_0)} L(s)\, ds}.$$
(2.19)
Proof Write $y_0 = x_0$ and $y_{i+1} = y_i \cdot \exp u_i$ for each $i = 0, \dots, k$. Since (2.15) holds with $T = df_{x_0}^{-1}\, df$, one has that
$$\bigl\|df_{x_0}^{-1}\bigl(df_{y_i \cdot \exp u_i} - df_{y_i}\bigr)\bigr\| \le \int_{\rho(y_i,x_0)}^{\rho(y_{i+1},x_0)} L(s)\, ds \quad \text{for each } 0 \le i \le k.$$
(2.20)
Noting that $y_{k+1} = x$, we have that
$$\bigl\|df_{x_0}^{-1}\bigl(df_x - df_{x_0}\bigr)\bigr\| \le \sum_{i=0}^k \int_{\rho(y_i,x_0)}^{\rho(y_{i+1},x_0)} L(s)\, ds = \int_0^{\rho(x,x_0)} L(s)\, ds < \int_0^{r_0} L(s)\, ds = 1.$$
Thus the conclusion follows from the Banach lemma, and the proof is complete. □

3 Convergence criteria

Following [17], we define Newton's method with initial point $x_0$ for f on a Lie group as follows:
$$x_{n+1} = x_n \cdot \exp\bigl(-df_{x_n}^{-1} f(x_n)\bigr) \quad \text{for each } n = 0, 1, \dots.$$
(3.1)

Recall that $f : G \to \mathcal{G}$ is a $C^1$-mapping. In the remainder of this section, we always assume that $x_0 \in G$ is such that $df_{x_0}^{-1}$ exists, and we set $\beta := \|df_{x_0}^{-1} f(x_0)\|$. Let $r_0$ and b be given by (2.16), and let $r_1$ be given by Proposition 2.1.
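To make iteration (3.1) concrete, here is a small numerical sketch (not from the paper) on the matrix Lie group SO(3), whose Lie algebra consists of the skew-symmetric matrices. The map f below is a hypothetical example chosen to vanish at a fixed rotation `R_star`; the differential $df_x$ is approximated by finite differences along a basis of the Lie algebra, and each step multiplies by $\exp(-df_{x_n}^{-1} f(x_n))$ exactly as in (3.1).

```python
import numpy as np
from scipy.linalg import expm

def hat(a):  # R^3 -> so(3)
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def vee(A):  # so(3) -> R^3, inverse of hat on skew matrices
    return np.array([A[2, 1], A[0, 2], A[1, 0]])

# hypothetical example: f(X) is the skew part of R_star^T X, so f vanishes at X = R_star
R_star = expm(hat(np.array([0.2, -0.1, 0.3])))
def f(X):
    return vee(0.5 * (R_star.T @ X - X.T @ R_star))

def df(X, h=1e-6):
    # finite-difference approximation of u -> (d/dt) f(X exp(t u)) at t = 0
    J = np.empty((3, 3))
    f0 = f(X)
    for j in range(3):
        e = np.zeros(3); e[j] = 1.0
        J[:, j] = (f(X @ expm(h * hat(e))) - f0) / h
    return J

X = np.eye(3)  # initial point x_0
for _ in range(15):
    v = -np.linalg.solve(df(X), f(X))  # v_n = -df_{x_n}^{-1} f(x_n)
    X = X @ expm(hat(v))               # x_{n+1} = x_n exp(v_n), iteration (3.1)
print(np.linalg.norm(f(X)))
```

Note that each iterate stays on the group automatically, since the update multiplies by the exponential of a Lie algebra element rather than moving in the ambient matrix space.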

Theorem 3.1 Suppose that $df_{x_0}^{-1}\, df$ satisfies the L-average Lipschitz condition on $C_{r_1}(x_0)$ and that
$$\beta = \bigl\|df_{x_0}^{-1} f(x_0)\bigr\| \le b.$$
(3.2)
Then the sequence $\{x_n\}$ generated by Newton's method (3.1) with initial point $x_0$ is well defined and converges to a zero $x^*$ of f. Moreover, the following assertions hold for each $n = 0, 1, \dots$:
$$\varrho(x_{n+1}, x_n) \le \bigl\|df_{x_n}^{-1} f(x_n)\bigr\| \le t_{n+1} - t_n;$$
(3.3)
$$\varrho(x_n, x^*) \le r_1 - t_n.$$
(3.4)
Proof Write $v_n = -df_{x_n}^{-1} f(x_n)$ for each $n = 0, 1, \dots$. Below we shall show that each $v_n$ is well defined and that
$$\varrho(x_{n+1}, x_n) \le \|v_n\| \le t_{n+1} - t_n$$
(3.5)
holds for each $n = 0, 1, \dots$. Granting this, one sees that the sequence $\{x_n\}$ generated by Newton's method (3.1) with initial point $x_0$ is well defined and converges to a zero $x^*$ of f, because, by (3.1),
$$x_{n+1} = x_n \cdot \exp v_n \quad \text{for each } n = 0, 1, \dots.$$

Furthermore, assertions (3.3) and (3.4) then hold for each n, and the proof of the theorem is complete.

Note that $v_0$ is well defined by assumption and $x_1 = x_0 \cdot \exp v_0$. Hence $\varrho(x_1, x_0) \le \|v_0\|$. Since $\|v_0\| = \|df_{x_0}^{-1} f(x_0)\| = \beta = t_1 - t_0$, it follows that (3.5) is true for $n = 0$. We now proceed by mathematical induction on n. For this purpose, assume that $v_n$ is well defined and that (3.5) holds for each $n \le k - 1$. Then
$$\sum_{i=0}^{k-1} \|v_i\| \le t_k - t_0 = t_k < r_1 \quad \text{and} \quad x_k = x_0 \cdot \exp v_0 \cdots \exp v_{k-1}.$$
(3.6)
Thus we can use Lemma 2.1 to conclude that $df_{x_k}^{-1}$ exists and
$$\bigl\|df_{x_k}^{-1}\, df_{x_0}\bigr\| \le \frac{1}{1 - \int_0^{t_k} L(s)\, ds} = -h'(t_k)^{-1}.$$
(3.7)
Therefore, $v_k$ is well defined. Observe that
$$f(x_k) = f(x_k) - f(x_{k-1}) - df_{x_{k-1}} v_{k-1} = \int_0^1 df_{x_{k-1} \cdot \exp(t v_{k-1})} v_{k-1}\, dt - df_{x_{k-1}} v_{k-1} = \int_0^1 \bigl[df_{x_{k-1} \cdot \exp(t v_{k-1})} - df_{x_{k-1}}\bigr] v_{k-1}\, dt,$$
where the second equality is valid because of (2.13). Therefore, applying (2.15), one has that
$$\begin{aligned} \bigl\|df_{x_0}^{-1} f(x_k)\bigr\| &\le \int_0^1 \bigl\|df_{x_0}^{-1}\bigl[df_{x_{k-1} \cdot \exp(t v_{k-1})} - df_{x_{k-1}}\bigr]\bigr\|\, \|v_{k-1}\|\, dt \\ &\le \int_0^1 \int_{\rho(x_{k-1},x_0)}^{\rho(x_{k-1},x_0)+t\|v_{k-1}\|} L(s)\, ds\, \|v_{k-1}\|\, dt \\ &\le \int_0^1 \int_{t_{k-1}}^{t_{k-1}+t(t_k-t_{k-1})} L(s)\, ds\, (t_k - t_{k-1})\, dt \\ &= \int_{t_{k-1}}^{t_k} L(s)(t_k - s)\, ds \\ &= h(t_k) - h(t_{k-1}) - h'(t_{k-1})(t_k - t_{k-1}) = h(t_k), \end{aligned}$$
(3.8)
where the last equality holds because $h(t_{k-1}) + h'(t_{k-1})(t_k - t_{k-1}) = 0$. Combining this with (3.7) yields that
$$\|v_k\| = \bigl\|df_{x_k}^{-1} f(x_k)\bigr\| \le \bigl\|df_{x_k}^{-1}\, df_{x_0}\bigr\|\, \bigl\|df_{x_0}^{-1} f(x_k)\bigr\| \le -h'(t_k)^{-1} h(t_k) = t_{k+1} - t_k.$$
(3.9)

Since $x_{k+1} = x_k \cdot \exp v_k$, we have $\varrho(x_{k+1}, x_k) \le \|v_k\|$. This together with (3.9) shows that (3.5) holds for $n = k$, which completes the proof of the theorem. □

4 Applications to optimization problems

Let $\phi : G \to \mathbb{R}$ be a $C^2$-map. Consider the following optimization problem:
$$\min_{x \in G} \phi(x).$$
(4.1)

Newton's method for solving (4.1) was presented in [16], where a local quadratic convergence result was established for a smooth function ϕ.

Let $X \in \mathcal{G}$. Following [16], we use $\tilde{X}$ to denote the left invariant vector field associated with X, defined by
$$\tilde{X}(x) = (L_x)'_e X \quad \text{for each } x \in G,$$
and $\tilde{X}\phi$ the Lie derivative of ϕ with respect to the left invariant vector field $\tilde{X}$, that is, for each $x \in G$,
$$(\tilde{X}\phi)(x) = \frac{d}{dt}\Big|_{t=0} \phi\bigl(x \cdot \exp tX\bigr).$$
(4.2)
Let $\{X_1, \dots, X_n\}$ be an orthonormal basis of $\mathcal{G}$. According to [[24], p.356] (see also [16]), grad ϕ is the vector field on G defined by
$$\operatorname{grad} \phi(x) = (\tilde{X}_1, \dots, \tilde{X}_n)\bigl(\tilde{X}_1\phi(x), \dots, \tilde{X}_n\phi(x)\bigr)^T = \sum_{j=1}^n \tilde{X}_j\phi(x)\, \tilde{X}_j \quad \text{for each } x \in G.$$
(4.3)

Then Newton’s method with initial point x 0 G considered in [16] can be written in a coordinate-free form as follows.

Algorithm 4.1 Find $X_k \in \mathcal{G}$ such that $\tilde{X}_k = (L_{x_k})'_e X_k$ and
$$\operatorname{grad} \phi(x_k) + \operatorname{grad}(\tilde{X}_k \phi)(x_k) = 0;$$

Set $x_{k+1} = x_k \cdot \exp X_k$;

Set $k \leftarrow k + 1$ and repeat.

Let $f : G \to \mathcal{G}$ be the mapping defined by
$$f(x) = \bigl((L_x)'_e\bigr)^{-1} \operatorname{grad} \phi(x) \quad \text{for each } x \in G.$$
(4.4)
Define the linear operator $H_x\phi : \mathcal{G} \to \mathcal{G}$ for each $x \in G$ by
$$(H_x\phi) X = \bigl((L_x)'_e\bigr)^{-1} \operatorname{grad}(\tilde{X}\phi)(x) \quad \text{for each } X \in \mathcal{G}.$$
(4.5)

Then $H_{(\cdot)}\phi$ defines a mapping from G to $\mathcal{L}(\mathcal{G})$. The following proposition, which was given in [18], shows that $df_x$ and $H_x\phi$ coincide.

Proposition 4.1 Let $f(\cdot)$ and $H_{(\cdot)}\phi$ be defined by (4.4) and (4.5), respectively. Then
$$df_x = H_x\phi \quad \text{for each } x \in G.$$
(4.6)

Remark 4.1 One can easily see from Proposition 4.1 that, with the same initial point, the sequence generated by Algorithm 4.1 for ϕ coincides with the one generated by Newton’s method (3.1) for f defined by (4.4).
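As an illustration of Algorithm 4.1 (a numerical sketch under assumptions not in the paper), take G = SO(3) and the hypothetical objective $\phi(X) = \|X - A\|_F^2$ for a fixed rotation A, whose critical point near the identity is $X = A$. The Lie derivatives $\tilde{X}_j\phi$ along an orthonormal basis of the Lie algebra, and the matrix representing the operator $H_x\phi$ in that basis, are approximated here by central finite differences.

```python
import numpy as np
from scipy.linalg import expm

def hat(a):  # R^3 -> so(3)
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

E = [hat(e) for e in np.eye(3)]                 # basis of so(3)
A = expm(hat(np.array([0.2, -0.1, 0.25])))      # hypothetical target rotation
phi = lambda X: np.linalg.norm(X - A) ** 2      # smooth objective on SO(3)

def grad_coords(X, h=1e-5):
    # coordinates (X_j~ phi)(x) of grad phi(x), by central differences
    return np.array([(phi(X @ expm(h * Ej)) - phi(X @ expm(-h * Ej))) / (2 * h)
                     for Ej in E])

def hess(X, h=1e-4):
    # row j approximates the derivative of grad_coords along direction E_j,
    # a finite-difference stand-in for the operator H_x phi of (4.5)
    return np.array([(grad_coords(X @ expm(h * Ej)) - grad_coords(X @ expm(-h * Ej))) / (2 * h)
                     for Ej in E])

X = np.eye(3)
for _ in range(10):
    c = -np.linalg.solve(hess(X), grad_coords(X))  # grad phi + grad(X_k~ phi) = 0
    X = X @ expm(hat(c))                           # x_{k+1} = x_k exp(X_k)
print(np.linalg.norm(X - A))
```

By Remark 4.1, this is the same iteration as Newton's method (3.1) applied to the map f of (4.4), here realized with approximate derivatives.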

Let $x_0 \in G$ be such that $(H_{x_0}\phi)^{-1}$ exists, and let $\beta_\phi := \bigl\|(H_{x_0}\phi)^{-1}\bigl((L_{x_0})'_e\bigr)^{-1} \operatorname{grad} \phi(x_0)\bigr\|$. Recall that $r_0$ and b are given by (2.16), and $r_1$ is given by Proposition 2.1. Then the main theorem of this section is as follows.

Theorem 4.1 Suppose that
$$\beta_\phi = \bigl\|(H_{x_0}\phi)^{-1}\bigl((L_{x_0})'_e\bigr)^{-1} \operatorname{grad} \phi(x_0)\bigr\| \le b,$$
(4.7)

and that $(H_{x_0}\phi)^{-1}(H_{(\cdot)}\phi)$ satisfies the L-average Lipschitz condition on $C_{r_1}(x_0)$. Then the sequence generated by Algorithm 4.1 with initial point $x_0$ is well defined and converges to a critical point $x^*$ of ϕ, i.e., $\operatorname{grad} \phi(x^*) = 0$.

Furthermore, if $H_{x_0}\phi$ is additionally positive definite and the following Lipschitz condition is satisfied:
$$\bigl\|(H_{x_0}\phi)^{-1}\bigl(H_{x \cdot \exp u}\phi - H_x\phi\bigr)\bigr\| \le \int_{\rho(x,x_0)}^{\rho(x,x_0)+\|u\|} L(s)\, ds \quad \text{for } x \in G \text{ and } u \in \mathcal{G} \text{ with } \rho(x_0,x) + \|u\| < r_1,$$
(4.8)

then $x^*$ is a local solution of (4.1).

Proof Recall that f is defined by (4.4). Then, by Proposition 4.1, $df_x = H_x\phi$ for each $x \in G$. Hence, by the assumptions, $df_{x_0}^{-1}\, df$ satisfies the L-average Lipschitz condition on $C_{r_1}(x_0)$, and condition (3.2) is satisfied because $\beta_\phi \le b$. Thus Theorem 3.1 is applicable; hence the sequence generated by Newton's method for f with initial point $x_0$ is well defined and converges to a zero $x^*$ of f. Consequently, by Remark 4.1, one sees that the first assertion holds.

To prove the second assertion, assume that $H_{x_0}\phi$ is additionally positive definite and that the Lipschitz condition (4.8) is satisfied. It is sufficient to prove that $H_{x^*}\phi$ is positive definite. Let λ and $\lambda_0$ be the minimum eigenvalues of $H_{x^*}\phi$ and $H_{x_0}\phi$, respectively. Then $\lambda_0 > 0$. We have to show that $\lambda > 0$. To do this, let $\{x_n\}$ be the sequence generated by Algorithm 4.1 and write $v_n = -df_{x_n}^{-1} f(x_n)$ for each $n = 0, 1, \dots$. Then, by Remark 4.1,
$$x_{n+1} = x_n \cdot \exp(v_n) \quad \text{for each } n = 0, 1, \dots,$$
(4.9)
and, by Theorem 3.1,
$$\|v_n\| \le t_{n+1} - t_n \quad \text{for each } n = 0, 1, \dots.$$
(4.10)
Therefore, for each $n = 0, 1, \dots$,
$$\begin{aligned} \bigl\|(H_{x_0}\phi)^{-1}\bigl(H_{x_{n+1}}\phi - H_{x_0}\phi\bigr)\bigr\| &= \bigl\|(H_{x_0}\phi)^{-1}\bigl(H_{x_n \cdot \exp(v_n)}\phi - H_{x_0}\phi\bigr)\bigr\| \\ &\le \sum_{j=0}^n \bigl\|(H_{x_0}\phi)^{-1}\bigl(H_{x_j \cdot \exp(v_j)}\phi - H_{x_j}\phi\bigr)\bigr\| \\ &\le \sum_{j=0}^n \int_{\rho(x_j,x_0)}^{\rho(x_j,x_0)+\|v_j\|} L(s)\, ds \\ &\le \int_0^{t_{n+1}} L(s)\, ds < \int_0^{r_0} L(s)\, ds = 1 \end{aligned}$$
(4.11)
thanks to (4.8)-(4.10). Since
$$\Bigl|\frac{\lambda}{\lambda_0} - 1\Bigr| = \frac{1}{\lambda_0}\Bigl|\min_{v \in \mathcal{G}, \|v\|=1} \bigl\langle (H_{x^*}\phi)v, v\bigr\rangle - \min_{v \in \mathcal{G}, \|v\|=1} \bigl\langle (H_{x_0}\phi)v, v\bigr\rangle\Bigr| \le \bigl\|(H_{x_0}\phi)^{-1}\bigr\|\, \bigl\|H_{x^*}\phi - H_{x_0}\phi\bigr\|,$$
it follows that
$$\Bigl|\frac{\lambda}{\lambda_0} - 1\Bigr| \le \lim_{n \to \infty} \bigl\|(H_{x_0}\phi)^{-1}\bigl(H_{x_{n+1}}\phi - H_{x_0}\phi\bigr)\bigr\| < 1$$

thanks to (4.11). This implies that λ > 0 and completes the proof. □

5 Theorems under the Kantorovich’s condition and the γ-condition

If $L(\cdot)$ is a constant function L, then the L-average Lipschitz condition reduces to the classical Lipschitz condition.

Let $r > 0$, $x_0 \in G$, and let T be a mapping from G to $\mathcal{L}(\mathcal{G})$. Then T is said to satisfy the L-Lipschitz condition on $C_r(x_0)$ if
$$\bigl\|T(x \cdot \exp u) - T(x)\bigr\| \le L \|u\|$$

holds for any $u, u_0, \dots, u_k \in \mathcal{G}$ and $x \in C_r(x_0)$ such that $x = x_0 \cdot \exp u_0 \cdot \exp u_1 \cdots \exp u_k$ and $\|u\| + \rho(x,x_0) < r$, where $\rho(x,x_0) = \sum_{i=0}^k \|u_i\|$.

Let $\beta > 0$ and $L > 0$. The majorizing function h reduces to the quadratic
$$h(t) = \frac{L}{2}t^2 - t + \beta \quad \text{for each } t \ge 0.$$
Let $\{t_n\}$ denote the sequence generated by Newton's method for h with initial value $t_0 = 0$, that is,
$$t_{n+1} = t_n - h'(t_n)^{-1} h(t_n) \quad \text{for each } n = 0, 1, \dots.$$
Assume that $\lambda := L\beta \le \frac{1}{2}$. Then h has two zeros $r_1$ and $r_2$:
$$r_1 = \frac{1 - \sqrt{1 - 2\lambda}}{L} \quad \text{and} \quad r_2 = \frac{1 + \sqrt{1 - 2\lambda}}{L};$$
(5.1)
moreover, $\{t_n\}$ is monotonically increasing and converges to $r_1$, and satisfies
$$r_1 - t_n = \frac{q^{2^n - 1}}{\sum_{j=0}^{2^n - 1} q^j}\, r_1 \quad \text{for each } n = 0, 1, \dots,$$
where
$$q = \frac{1 - \sqrt{1 - 2\lambda}}{1 + \sqrt{1 - 2\lambda}}.$$
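These closed-form quantities are easy to check numerically. The sketch below (with illustrative values L = 2 and β = 0.2, so that λ = 0.4 ≤ 1/2) runs the scalar Newton iteration for the quadratic h and compares $r_1 - t_n$ against the geometric-sum identity above for the first few n.

```python
import math

L, beta = 2.0, 0.2           # illustrative values with lambda = L*beta <= 1/2
lam = L * beta

root = math.sqrt(1.0 - 2.0 * lam)
r1 = (1.0 - root) / L        # smaller zero of h, as in (5.1)
r2 = (1.0 + root) / L        # larger zero of h
q = (1.0 - root) / (1.0 + root)

h = lambda t: 0.5 * L * t * t - t + beta
dh = lambda t: L * t - 1.0

# Newton's method for h with t_0 = 0
ts = [0.0]
for _ in range(6):
    t = ts[-1]
    ts.append(t - h(t) / dh(t))

def err(n):
    # closed-form error r1 - t_n from the geometric-sum identity
    return q ** (2 ** n - 1) / sum(q ** j for j in range(2 ** n)) * r1

print([r1 - t for t in ts[:4]])
print([err(n) for n in range(4)])
```

The two printed lists agree to machine precision, and $q = r_1 / r_2$, which is one way to remember where the ratio q comes from.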

Recall that $f : G \to \mathcal{G}$ is a $C^1$-mapping. As in the previous section, we always assume that $x_0 \in G$ is such that $df_{x_0}^{-1}$ exists, and we set $\beta := \|df_{x_0}^{-1} f(x_0)\|$. Then, by Theorem 3.1, we obtain the following result, which was given in [18].

Theorem 5.1 Suppose that $df_{x_0}^{-1}\, df$ satisfies the L-Lipschitz condition on $C_{r_1}(x_0)$ and that $\lambda = L\beta \le \frac{1}{2}$. Then the sequence $\{x_n\}$ generated by Newton's method (3.1) with initial point $x_0$ is well defined and converges to a zero $x^*$ of f. Moreover, the following assertions hold for each $n = 0, 1, \dots$:
$$\varrho(x_{n+1}, x_n) \le \bigl\|df_{x_n}^{-1} f(x_n)\bigr\| \le t_{n+1} - t_n; \qquad \varrho(x_n, x^*) \le \frac{q^{2^n - 1}}{\sum_{j=0}^{2^n - 1} q^j}\, r_1.$$

Let $x_0 \in G$ be such that $(H_{x_0}\phi)^{-1}$ exists, and let $\beta_\phi = \bigl\|(H_{x_0}\phi)^{-1}\bigl((L_{x_0})'_e\bigr)^{-1} \operatorname{grad} \phi(x_0)\bigr\|$. Recall that $r_1$ is defined by (5.1). Then, by Theorem 4.1, we get the following result, which was given in [18].

Theorem 5.2 Suppose that $\lambda = L\beta_\phi \le \frac{1}{2}$ and that $(H_{x_0}\phi)^{-1}(H_{(\cdot)}\phi)$ satisfies the L-Lipschitz condition on $C_{r_1}(x_0)$. Then the sequence generated by Algorithm 4.1 with initial point $x_0$ is well defined and converges to a critical point $x^*$ of ϕ, i.e., $\operatorname{grad} \phi(x^*) = 0$.

Furthermore, if $H_{x_0}\phi$ is additionally positive definite and the following Lipschitz condition is satisfied:
$$\bigl\|(H_{x_0}\phi)^{-1}\bigl(H_{x \cdot \exp u}\phi - H_x\phi\bigr)\bigr\| \le L\|u\| \quad \text{for } x \in G \text{ and } u \in \mathcal{G} \text{ with } \varrho(x_0,x) + \|u\| < r_1,$$
then $x^*$ is a local solution of (4.1).

Let k be a positive integer and assume further that $f : G \to \mathcal{G}$ is a $C^k$-map. Define the map $d^k f_x : \mathcal{G}^k \to \mathcal{G}$ by
$$d^k f_x\, u_1 \cdots u_k = \Bigl(\frac{\partial^k}{\partial t_k \cdots \partial t_1} f\bigl(x \cdot \exp(t_k u_k) \cdots \exp(t_1 u_1)\bigr)\Bigr)_{t_k = \cdots = t_1 = 0}$$
for each $(u_1, \dots, u_k) \in \mathcal{G}^k$. In particular,
$$d^k f_x\, u^k = \Bigl(\frac{d^k}{dt^k} f\bigl(x \cdot \exp(tu)\bigr)\Bigr)_{t=0} \quad \text{for each } u \in \mathcal{G}.$$
Let $1 \le i \le k$. Then, in view of the definition, one has that
$$d^k f_x\, u_1 \cdots u_k = d^{k-i}\bigl(d^i f_{(\cdot)}\, u_1 \cdots u_i\bigr)_x\, u_{i+1} \cdots u_k \quad \text{for each } (u_1, \dots, u_k) \in \mathcal{G}^k.$$
In particular, for fixed $u_1, \dots, u_{i-1}, u_{i+1}, \dots, u_k \in \mathcal{G}$,
$$d^i f_x\, u_1 \cdots u_{i-1}(\cdot) = d\bigl(d^{i-1} f_{(\cdot)}\, u_1 \cdots u_{i-1}\bigr)_x(\cdot).$$
This implies that $d^i f_x\, u_1 \cdots u_{i-1} u$ is linear with respect to $u \in \mathcal{G}$, and so is $d^k f_x\, u_1 \cdots u_{i-1} u\, u_{i+1} \cdots u_k$. Consequently, $d^k f_x$ is a multilinear map from $\mathcal{G}^k$ to $\mathcal{G}$ because $1 \le i \le k$ is arbitrary. Thus we can define the norm of $d^k f_x$ by
$$\bigl\|d^k f_x\bigr\| = \sup\bigl\{\bigl\|d^k f_x\, u_1 u_2 \cdots u_k\bigr\| : (u_1, \dots, u_k) \in \mathcal{G}^k \text{ with each } \|u_j\| = 1\bigr\}.$$
For the remainder of the paper, we always assume that f is a $C^2$-map from G to $\mathcal{G}$. Then, taking $i = 2$, we have
$$d^2 f_z\, v\, u = d\bigl(df_{(\cdot)}\, v\bigr)_z\, u \quad \text{for any } u, v \in \mathcal{G} \text{ and each } z \in G.$$
Thus (2.13) can be applied (with $df_{(\cdot)}\, v$ in place of $f(\cdot)$ for each $v \in \mathcal{G}$) to conclude the following formula:
$$df_{x \cdot \exp(tu)} - df_x = \int_0^t d^2 f_{x \cdot \exp(su)}\, u\, ds \quad \text{for each } u \in \mathcal{G} \text{ and } t \in \mathbb{R}.$$
(5.2)

The γ-conditions for nonlinear operators in Banach spaces were first introduced and explored by Wang [25, 26] to study Smale's point estimate theory; they were extended in [19] to a map f from a Lie group to its Lie algebra in terms of the map $d^2 f$, as given in Definition 5.1 below. Let $r > 0$ and $\gamma > 0$ be such that $\gamma r \le 1$.

Definition 5.1 Let $x_0 \in G$ be such that $df_{x_0}^{-1}$ exists. f is said to satisfy the γ-condition at $x_0$ on $C_r(x_0)$ if, for any $x \in C_r(x_0)$ with $x = x_0 \cdot \exp u_0 \cdot \exp u_1 \cdots \exp u_k$ such that $\rho(x,x_0) := \sum_{i=0}^k \|u_i\| < r$,
$$\bigl\|df_{x_0}^{-1}\, d^2 f_x\bigr\| \le \frac{2\gamma}{(1 - \gamma\rho(x,x_0))^3}.$$

As will be shown in Proposition 5.3, if f is analytic at $x_0$, then f satisfies the γ-condition at $x_0$ with $\gamma = \gamma(f, x_0)$.

Let $\gamma > 0$ and let L be the function defined by
$$L(s) = \frac{2\gamma}{(1 - \gamma s)^3} \quad \text{for each } 0 \le s < \frac{1}{\gamma}.$$
(5.3)

The following proposition shows that the γ-condition implies the L-average Lipschitz condition.

Proposition 5.1 Suppose that f satisfies the γ-condition at $x_0$ on $C_r(x_0)$. Then $df_{x_0}^{-1}\, df$ satisfies the L-average Lipschitz condition on $C_r(x_0)$ with L defined by (5.3).

Proof Let $x \in C_r(x_0)$ and let $u, u_0, \dots, u_k \in \mathcal{G}$ be such that $x = x_0 \cdot \exp u_0 \cdot \exp u_1 \cdots \exp u_k$ and $\sum_{i=0}^k \|u_i\| + \|u\| < r$. Write $\rho(x,x_0) := \sum_{i=0}^k \|u_i\|$. Observe from (5.2) that
$$df_{x \cdot \exp u} - df_x = \int_0^1 d^2 f_{x \cdot \exp(su)}\, u\, ds.$$
Combining this with the assumption yields that
$$\bigl\|df_{x_0}^{-1}\bigl(df_{x \cdot \exp u} - df_x\bigr)\bigr\| \le \int_0^1 \bigl\|df_{x_0}^{-1}\, d^2 f_{x \cdot \exp(su)}\bigr\|\, \|u\|\, ds \le \int_0^1 \frac{2\gamma \|u\|}{\bigl(1 - \gamma(\rho(x,x_0) + s\|u\|)\bigr)^3}\, ds = \int_{\rho(x,x_0)}^{\rho(x,x_0)+\|u\|} \frac{2\gamma}{(1 - \gamma t)^3}\, dt.$$

Hence, $df_{x_0}^{-1}\, df$ satisfies the L-average Lipschitz condition on $C_r(x_0)$ with L defined by (5.3). □

Corresponding to the function L defined by (5.3), $r_0$ and b in (2.16) are $r_0 = \bigl(1 - \frac{\sqrt{2}}{2}\bigr)\frac{1}{\gamma}$ and $b = (3 - 2\sqrt{2})\frac{1}{\gamma}$, and the majorizing function given in (2.17) reduces to
$$h(t) = \beta - t + \frac{\gamma t^2}{1 - \gamma t} \quad \text{for each } 0 \le t \le R.$$

Hence the condition $\beta \le b$ is equivalent to $\alpha := \gamma\beta \le 3 - 2\sqrt{2}$. Let $\{t_n\}$ denote the sequence generated by Newton's method for h with initial value $t_0 = 0$. Then the following proposition was proved in [27]; see also [10] and [6].
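The closed forms above can be cross-checked numerically (an illustration, not part of the paper): with $L(s) = 2\gamma(1-\gamma s)^{-3}$, quadrature recovers $\int_0^{r_0} L(s)\,ds = 1$ and $b = \int_0^{r_0} s L(s)\,ds = (3-2\sqrt{2})/\gamma$, and the integral term of (2.17) reproduces $\gamma t^2/(1-\gamma t)$.

```python
import math
from scipy.integrate import quad

gamma = 2.0  # illustrative value
L = lambda s: 2.0 * gamma / (1.0 - gamma * s) ** 3

r0 = (1.0 - math.sqrt(2.0) / 2.0) / gamma   # claimed r_0 for this L
b = (3.0 - 2.0 * math.sqrt(2.0)) / gamma    # claimed b for this L

i1, _ = quad(L, 0.0, r0)                    # should equal 1, per (2.16)
i2, _ = quad(lambda s: s * L(s), 0.0, r0)   # should equal b, per (2.16)

beta, t = 0.05, 0.1                          # sample values with gamma*t < 1
i3, _ = quad(lambda s: L(s) * (t - s), 0.0, t)
h_series = beta - t + i3                     # h(t) from the integral form (2.17)
h_closed = beta - t + gamma * t * t / (1.0 - gamma * t)
print(i1, i2, h_series, h_closed)
```

All three comparisons agree to quadrature accuracy, confirming that the γ-condition case is the L-average framework specialized to this particular L.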

Proposition 5.2 Assume that $\alpha = \gamma\beta \le 3 - 2\sqrt{2}$. Then the zeros of h are
$$r_1 = \frac{1 + \alpha - \sqrt{(1+\alpha)^2 - 8\alpha}}{4\gamma}, \qquad r_2 = \frac{1 + \alpha + \sqrt{(1+\alpha)^2 - 8\alpha}}{4\gamma}$$
and
$$\beta \le r_1 \le \Bigl(1 + \frac{1}{\sqrt{2}}\Bigr)\beta \le \Bigl(1 - \frac{1}{\sqrt{2}}\Bigr)\frac{1}{\gamma} \le r_2 \le \frac{1}{2\gamma}.$$
Moreover, the following assertions hold:
$$t_{n+1} - t_n = \frac{\bigl(1 - \mu^{2^n}\bigr)\sqrt{(1+\alpha)^2 - 8\alpha}}{2\alpha\bigl(1 - \nu\mu^{2^n - 1}\bigr)\bigl(1 - \nu\mu^{2^{n+1} - 1}\bigr)}\, \nu\mu^{2^n - 1}\beta \le \mu^{2^n - 1}\beta \quad \text{for each } n = 0, 1, \dots,$$
where
$$\mu = \frac{1 - \alpha - \sqrt{(1+\alpha)^2 - 8\alpha}}{1 - \alpha + \sqrt{(1+\alpha)^2 - 8\alpha}} \quad \text{and} \quad \nu = \frac{1 + \alpha - \sqrt{(1+\alpha)^2 - 8\alpha}}{1 + \alpha + \sqrt{(1+\alpha)^2 - 8\alpha}}.$$
(5.4)

Recall that $x_0 \in G$ is such that $df_{x_0}^{-1}$ exists, and let $\beta := \|df_{x_0}^{-1} f(x_0)\|$. Then, by Theorem 3.1 and Proposition 5.2, we get the following result, which was given in [19].

Theorem 5.3 Suppose that
$$\alpha := \beta\gamma \le 3 - 2\sqrt{2}$$
and that f satisfies the γ-condition at $x_0$ on $C_{r_1}(x_0)$. Then Newton's method (3.1) with initial point $x_0$ is well defined, and the generated sequence $\{x_n\}$ converges to a zero $x^*$ of f. Moreover, if $\alpha < 3 - 2\sqrt{2}$, then for each $n = 0, 1, \dots$,
$$\varrho(x_{n+1}, x_n) \le \nu^{2^n - 1}\beta,$$

where ν is given by (5.4).

Below, we always assume that f is analytic on G. For $x \in G$ such that $df_x^{-1}$ exists, we define
$$\gamma_x := \gamma(f, x) = \sup_{i \ge 2}\, \Bigl\|df_x^{-1}\, \frac{d^i f_x}{i!}\Bigr\|^{\frac{1}{i-1}}.$$

Also, we adopt the convention that $\gamma(f, x) = \infty$ if $df_x$ is not invertible. Note that this definition is justified and, in the case when $df_x$ is invertible, $\gamma(f, x)$ is finite by analyticity.

The following proposition is taken from [19].

Proposition 5.3 Let $\gamma_{x_0} := \gamma(f, x_0)$ and let $r = \frac{2 - \sqrt{2}}{2\gamma_{x_0}}$. Then f satisfies the $\gamma_{x_0}$-condition at $x_0$ on $C_r(x_0)$.

Thus, by Theorem 5.3 and Proposition 5.3, we get the following corollary, which was given in [19].

Corollary 5.1 Suppose that
$$\alpha := \beta\gamma_{x_0} \le 3 - 2\sqrt{2}.$$
Then Newton's method (3.1) with initial point $x_0$ is well defined and the generated sequence $\{x_n\}$ converges to a zero $x^*$ of f. Moreover, if $\alpha < 3 - 2\sqrt{2}$, then for each $n = 0, 1, \dots$,
$$\varrho(x_{n+1}, x_n) \le \nu^{2^n - 1}\beta,$$

where ν is given by (5.4).

Declarations

Acknowledgements

The research of the second author was partially supported by the National Natural Science Foundation of China (grants 11001241 and 11371325) and by the Zhejiang Provincial Natural Science Foundation of China (grant LY13A010011). The research of the third author was partially supported by a grant from the NSC of Taiwan (NSC 102-2115-M-037-002-MY3).

Authors’ Affiliations

(1)
Department of Mathematics, Zhejiang Normal University
(2)
Department of Mathematics, Zhejiang University of Technology
(3)
Department of Mathematics, National Sun Yat-sen University
(4)
Center for Fundamental Science, Kaohsiung Medical University
(5)
Department of Mathematics, King Abdulaziz University

References

  1. Kantorovich LV, Akilov GP: Functional Analysis. Pergamon, Oxford; 1982.
  2. Smale S: Newton's method estimates from data at one point. In The Merging of Disciplines: New Directions in Pure, Applied and Computational Mathematics. Edited by: Ewing R, Gross K, Martin C. Springer, New York; 1986:185–196.
  3. Ezquerro JA, Hernández MA: Generalized differentiability conditions for Newton's method. IMA J. Numer. Anal. 2002, 22: 187–205. 10.1093/imanum/22.2.187
  4. Ezquerro JA, Hernández MA: On an application of Newton's method to nonlinear operators with w-conditioned second derivative. BIT Numer. Math. 2002, 42: 519–530.
  5. Gutiérrez JM, Hernández MA: Newton's method under weak Kantorovich conditions. IMA J. Numer. Anal. 2000, 20: 521–532. 10.1093/imanum/20.4.521
  6. Wang XH: Convergence of Newton's method and uniqueness of the solution of equations in Banach space. IMA J. Numer. Anal. 2000, 20(1): 123–134. 10.1093/imanum/20.1.123
  7. Zabrejko PP, Nguen DF: The majorant method in the theory of Newton-Kantorovich approximations and the Pták error estimates. Numer. Funct. Anal. Optim. 1987, 9: 671–674. 10.1080/01630568708816254
  8. Ferreira OP, Svaiter BF: Kantorovich's theorem on Newton's method in Riemannian manifolds. J. Complex. 2002, 18: 304–329. 10.1006/jcom.2001.0582
  9. Dedieu JP, Priouret P, Malajovich G: Newton's method on Riemannian manifolds: covariant alpha theory. IMA J. Numer. Anal. 2003, 23: 395–419. 10.1093/imanum/23.3.395
  10. Li C, Wang JH: Newton's method on Riemannian manifolds: Smale's point estimate theory under the γ-condition. IMA J. Numer. Anal. 2006, 26: 228–251.
  11. Wang JH, Li C: Uniqueness of the singular points of vector fields on Riemannian manifolds under the γ-condition. J. Complex. 2006, 22: 533–548. 10.1016/j.jco.2005.11.004
  12. Li C, Wang JH: Convergence of Newton's method and uniqueness of zeros of vector fields on Riemannian manifolds. Sci. China Ser. A 2005, 48: 1465–1478. 10.1360/04ys0147
  13. Wang JH: Convergence of Newton's method for sections on Riemannian manifolds. J. Optim. Theory Appl. 2011, 148: 125–145. 10.1007/s10957-010-9748-4
  14. Li C, Wang JH: Newton's method for sections on Riemannian manifolds: generalized covariant α-theory. J. Complex. 2008, 24: 423–451. 10.1016/j.jco.2007.12.003
  15. Alvarez F, Bolte J, Munier J: A unifying local convergence result for Newton's method in Riemannian manifolds. Found. Comput. Math. 2008, 8: 197–226. 10.1007/s10208-006-0221-6
  16. Mahony RE: The constrained Newton method on a Lie group and the symmetric eigenvalue problem. Linear Algebra Appl. 1996, 248: 67–89.
  17. Owren B, Welfert B: The Newton iteration on Lie groups. BIT Numer. Math. 2000, 40(2): 121–145.
  18. Wang JH, Li C: Kantorovich's theorems for Newton's method for mappings and optimization problems on Lie groups. IMA J. Numer. Anal. 2011, 31: 322–347. 10.1093/imanum/drp015
  19. Li C, Wang JH, Dedieu JP: Smale's point estimate theory for Newton's method on Lie groups. J. Complex. 2009, 25: 128–151. 10.1016/j.jco.2008.11.001
  20. Helgason S: Differential Geometry, Lie Groups, and Symmetric Spaces. Academic Press, New York; 1978.
  21. Varadarajan VS: Lie Groups, Lie Algebras and Their Representations. Graduate Texts in Mathematics 102. Springer, New York; 1984.
  22. DoCarmo MP: Riemannian Geometry. Birkhäuser Boston, Cambridge; 1992.
  23. Wang XH: Convergence of Newton's method and inverse function theorem in Banach spaces. Math. Comput. 1999, 68: 169–186. 10.1090/S0025-5718-99-00999-0
  24. Helmke U, Moore JB: Optimization and Dynamical Systems. Communications and Control Engineering Series. Springer, London; 1994.
  25. Wang XH, Han DF: Criterion α and Newton's method in weak condition. Chin. J. Numer. Math. Appl. 1997, 19: 96–105.
  26. Wang XH: Convergence on the iteration of Halley family in weak conditions. Chin. Sci. Bull. 1997, 42: 552–555. 10.1007/BF03182614
  27. Wang XH, Han DF: On the dominating sequence method in the point estimates and Smale's theorem. Sci. Sin., Ser. A 1990, 33: 135–144.

Copyright

© He et al.; licensee Springer. 2013

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.