
# Convergence criteria of Newton’s method on Lie groups

*Fixed Point Theory and Applications*
**volume 2013**, Article number: 293 (2013)

## Abstract

In the present paper, we study Newton’s method on Lie groups (independent of affine connections) for finding zeros of a mapping *f* from a Lie group to its Lie algebra. Under a generalized *L*-average Lipschitz condition on the differential of *f*, we establish a unified convergence criterion of Newton’s method. As applications, we get the convergence criteria under Kantorovich’s condition and the *γ*-condition, respectively. Moreover, applications to optimization problems are also provided.

**MSC:** 65H10; 65D99.

## 1 Introduction

Newton’s method is one of the most important methods for finding approximate solutions of the equation $f(x)=0$, where *f* is an operator from some domain *D* in a real or complex Banach space *X* to another Banach space *Y*. As is well known, one of the most important results on Newton’s method is Kantorovich’s theorem (*cf.* [1]). Under the mild condition that the second Fréchet derivative of *f* is bounded (or, more generally, that the first derivative is Lipschitz continuous) on a proper open metric ball around the initial point ${x}_{0}$, Kantorovich’s theorem provides a simple and clear criterion ensuring the quadratic convergence of Newton’s method. Another important result on Newton’s method is Smale’s point estimate theory (*i.e.*, *α*-theory and *γ*-theory) in [2], where the notion of an approximate zero was introduced and rules were established for judging whether an initial point ${x}_{0}$ is an approximate zero, depending on the information of the analytic nonlinear operator at this initial point and at a solution ${x}^{\ast}$, respectively. There are many works on weakening and/or extending the Lipschitz continuity assumption made on the mappings; see, for example, [3–7] and references therein. In particular, Zabrejko and Nguen parametrized the classical Lipschitz continuity in [7], and Wang introduced the notion of Lipschitz conditions with *L*-average in [6] to unify both Kantorovich’s and Smale’s criteria.

In a Riemannian manifold framework, an analogue of the well-known Kantorovich’s theorem was given in [8] for Newton’s method for vector fields on Riemannian manifolds while the extensions of the famous Smale’s *α*-theory and *γ*-theory in [2] to analytic vector fields and analytic mappings on Riemannian manifolds were done in [9]. In the recent paper [10], the convergence criteria in [9] were improved by using the notion of the *γ*-condition for the vector fields and mappings on Riemannian manifolds. The radii of uniqueness balls of singular points of vector fields satisfying the *γ*-conditions were estimated in [11], while the local behavior of Newton’s method on Riemannian manifolds was studied in [12, 13]. Furthermore, in [14], Li and Wang extended the generalized *L*-average Lipschitz condition (introduced in [6]) to Riemannian manifolds and established a unified convergence criterion of Newton’s method on Riemannian manifolds. Similarly, inspired by previous work of Zabrejko and Nguen in [7] on Kantorovich’s majorant method, Alvarez *et al.* introduced in [15] a Lipschitz-type radial function for the covariant derivative of vector fields and mappings on Riemannian manifolds and established a unified convergence criterion of Newton’s method on Riemannian manifolds.

Note also that Mahony used one-parameter subgroups of a Lie group to develop a version of Newton’s method on an arbitrary Lie group in [16], where the algorithm presented is independent of affine connections on the Lie group. This means that Newton’s method on Lie groups is different from the one defined on Riemannian manifolds. On the other hand, motivated by the search for approaches to solving ordinary differential equations on Lie groups, Owren and Welfert also studied in [17] Newton’s method, independent of affine connections on the Lie group, and showed its local quadratic convergence. Recently, Wang and Li [18] established Kantorovich’s theorem (independent of the connection) for Newton’s method on Lie groups. More precisely, under the assumption that the differential of *f* satisfies a Lipschitz condition around the initial point (which is stated in terms of one-parameter subgroups and is independent of the metric), a convergence criterion of Newton’s method was presented. Extensions of Smale’s point estimate theory for Newton’s method on Lie groups were given in [19].

The purpose of the present paper is to establish a unified convergence criterion for Newton’s method (independent of the connection) on Lie groups under a generalized *L*-average Lipschitz condition. As applications, we get the convergence criteria under Kantorovich’s condition and the *γ*-condition, respectively. Hence, our results extend the corresponding results in [18] and [19], respectively. Moreover, applications to optimization problems are also provided.

The remainder of the paper is organized as follows. Some preliminary results and notions are given in Section 2, while the main results about a unified convergence criterion are presented in Section 3. In Section 4, applications to optimization problems are explored. Theorems under Kantorovich’s condition and the *γ*-condition are provided in the final section.

## 2 Notions and preliminaries

Most of the notions and notations used in the present paper are standard; see, for example, [20, 21]. A Lie group $(G,\cdot )$ is a Hausdorff topological group with a countable base which also has the structure of an analytic manifold such that the group product and the inversion are analytic operations with respect to the differentiable structure given on the manifold. The dimension of a Lie group is that of its underlying manifold, and we shall always assume that it is *m*-dimensional. The symbol *e* designates the identity element of *G*. Let $\mathcal{G}$ be the Lie algebra of the Lie group *G*, which is the tangent space ${T}_{e}G$ of *G* at *e*, equipped with the Lie bracket $[\cdot ,\cdot ]:\mathcal{G}\times \mathcal{G}\to \mathcal{G}$.

In the sequel we make use of the left translation of the Lie group *G*. We define, for each $y\in G$, the left translation ${L}_{y}:G\to G$ by

${L}_{y}(z):=y\cdot z\quad \text{for each } z\in G.$

The differential of ${L}_{y}$ at *z* is denoted by ${({L}_{y}^{\prime})}_{z}$, which clearly determines a linear isomorphism from ${T}_{z}G$ to the tangent space ${T}_{(y\cdot z)}G$. In particular, the differential ${({L}_{y}^{\prime})}_{e}$ of ${L}_{y}$ at *e* determines a linear isomorphism from $\mathcal{G}$ to the tangent space ${T}_{y}G$. The exponential map $exp:\mathcal{G}\to G$ is certainly the most important construction associated to *G* and $\mathcal{G}$, and is defined as follows. Given $u\in \mathcal{G}$, let ${\sigma}_{u}:\mathbb{R}\to G$ be the one-parameter subgroup of *G* determined by the left invariant vector field ${X}_{u}:y\mapsto {({L}_{y}^{\prime})}_{e}(u)$; *i.e.*, ${\sigma}_{u}$ satisfies that

${\sigma }_{u}(0)=e\quad \text{and}\quad {\sigma }_{u}^{\prime }(t)={({L}_{{\sigma }_{u}(t)}^{\prime })}_{e}(u)\quad \text{for each } t\in \mathbb{R}.$

The value of the exponential map exp at *u* is then defined by

$exp(u):={\sigma }_{u}(1).$

Moreover, we have that

$exp(tu)={\sigma }_{u}(t)\quad \text{for each } t\in \mathbb{R}$

and

$exp((t+s)u)=exp(tu)\cdot exp(su)\quad \text{for all } t,s\in \mathbb{R}.$

Note that the exponential map is not surjective in general. However, the exponential map is a diffeomorphism on an open neighborhood of $0\in \mathcal{G}$. In the case when *G* is Abelian, exp is also a homomorphism from $\mathcal{G}$ to *G*, *i.e.*,

$exp(u+v)=exp(u)\cdot exp(v)\quad \text{for all } u,v\in \mathcal{G}.$ (2.5)

In the non-Abelian case, exp is not a homomorphism and, by the Baker-Campbell-Hausdorff (BCH) formula (*cf.* [[21], p.114]), (2.5) must be replaced by

$exp(u)\cdot exp(v)=exp(w)$

for all $u,v$ in a neighborhood of $0\in \mathcal{G}$, where *w* is defined by

$w=u+v+\frac{1}{2}[u,v]+\frac{1}{12}[u,[u,v]]-\frac{1}{12}[v,[u,v]]+\cdots .$

Let $f:G\to \mathcal{G}$ be a ${C}^{1}$-map and let $x\in G$. We use ${f}_{x}^{\prime}$ to denote the differential of *f* at *x*. Then, by [[22], p.9] (the proof given there for a smooth mapping still works for a ${C}^{1}$-map), for each ${\mathrm{\Delta}}_{x}\in {T}_{x}G$ and any nontrivial smooth curve $c:(-\epsilon ,\epsilon )\to G$ with $c(0)=x$ and ${c}^{\prime}(0)={\mathrm{\Delta}}_{x}$, one has that

${f}_{x}^{\prime }({\mathrm{\Delta}}_{x})=\frac{\mathrm{d}}{\mathrm{d}t}f(c(t)){|}_{t=0}.$

In particular,

${f}_{x}^{\prime }({({L}_{x}^{\prime })}_{e}u)=\frac{\mathrm{d}}{\mathrm{d}t}f(x\cdot exp(tu)){|}_{t=0}\quad \text{for each } u\in \mathcal{G}.$ (2.9)

Define the linear map $\mathrm{d}{f}_{x}:\mathcal{G}\to \mathcal{G}$ by

$\mathrm{d}{f}_{x}u:={f}_{x}^{\prime }({({L}_{x}^{\prime })}_{e}u)\quad \text{for each } u\in \mathcal{G}.$

Then, by (2.9),

$\mathrm{d}{f}_{x}u=\frac{\mathrm{d}}{\mathrm{d}t}f(x\cdot exp(tu)){|}_{t=0}\quad \text{for each } u\in \mathcal{G}.$

Also, in view of the definition, we have that for all $t\ge 0$,

$\frac{\mathrm{d}}{\mathrm{d}s}f(x\cdot exp(su)){|}_{s=t}=\mathrm{d}{f}_{x\cdot exp(tu)}u$

and

$f(x\cdot exp(tu))=f(x)+{\int }_{0}^{t}\mathrm{d}{f}_{x\cdot exp(su)}u\phantom{\rule{0.2em}{0ex}}\mathrm{d}s.$ (2.13)

For the remainder of the present paper, we always assume that $\u3008\cdot ,\cdot \u3009$ is an inner product on $\mathcal{G}$ and $\parallel \cdot \parallel $ is the associated norm on $\mathcal{G}$. We now introduce the following distance on *G*, which plays a key role in the study. Let $x,y\in G$ and define

$\varrho (x,y):=inf\{\sum _{i=0}^{k}\parallel {u}_{i}\parallel :k\ge 0,\ {u}_{0},\dots ,{u}_{k}\in \mathcal{G}\ \text{such that}\ y=x\cdot exp{u}_{0}\cdot exp{u}_{1}\cdots exp{u}_{k}\},$

where we adopt the convention that $inf\mathrm{\varnothing}=+\mathrm{\infty}$. It is easy to verify that $\varrho (\cdot ,\cdot )$ is a distance on *G* and that the topology induced by this distance is equivalent to the original one on *G*.
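To make the distance concrete, consider the simplest (Abelian) case $G=({\mathbb{R}}_{+},\cdot )$ with $\mathcal{G}=\mathbb{R}$ and $expu={e}^{u}$ (a toy illustration, not part of the paper). Any word $x\cdot exp{u}_{0}\cdots exp{u}_{k}$ equals $x\cdot {e}^{\sum {u}_{i}}$, and $\sum |{u}_{i}|\ge |\sum {u}_{i}|$, so the infimum is attained by a single factor and $\varrho (x,y)=|logy-logx|$:

```python
import math

def rho(x, y):
    # On G = (R_+, *) with exp u = e**u, any decomposition
    # y = x * exp(u_0) * ... * exp(u_k) satisfies sum|u_i| >= |log(y/x)|,
    # and the single-factor choice u_0 = log(y/x) attains the bound.
    return abs(math.log(y) - math.log(x))

# A multi-factor decomposition of y/x can only be at least as long:
x, y = 1.0, 5.0
u = [0.9, -0.4, math.log(y / x) - 0.5]      # three factors multiplying to y/x
assert abs(x * math.exp(sum(u)) - y) < 1e-12
assert sum(abs(t) for t in u) >= rho(x, y)
```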

Let $x\in G$ and $r>0$. We denote the corresponding ball of radius *r* around *x* of *G* by ${C}_{r}(x)$, that is,

${C}_{r}(x):=\{y\in G:\varrho (x,y)<r\}.$

Let $\mathcal{L}(\mathcal{G})$ denote the set of all linear operators on $\mathcal{G}$. Below, we modify the notion of the Lipschitz condition with *L*-average for operators on Banach spaces so that it suits mappings from a Lie group to its Lie algebra. Let *L* be a positive nondecreasing integrable function on $[0,R]$, where *R* is a positive number large enough that ${\int}_{0}^{R}(R-s)L(s)\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\ge R$. The notion of the Lipschitz condition in the inscribed sphere with *L*-average for operators from Banach spaces to Banach spaces was first introduced by Wang in [23] for the study of Smale’s point estimate theory.

**Definition 2.1** Let $r>0$, ${x}_{0}\in G$, and let *T* be a mapping from *G* to $\mathcal{L}(\mathcal{G})$. Then *T* is said to satisfy the *L*-average Lipschitz condition on ${C}_{r}({x}_{0})$ if

$\parallel T(x\cdot expu)-T(x)\parallel \le {\int }_{\rho (x,{x}_{0})}^{\rho (x,{x}_{0})+\parallel u\parallel }L(s)\phantom{\rule{0.2em}{0ex}}\mathrm{d}s$ (2.15)

holds for any $u,{u}_{0},\dots ,{u}_{k}\in \mathcal{G}$ and $x\in {C}_{r}({x}_{0})$ such that $x={x}_{0}exp{u}_{0}exp{u}_{1}\cdots exp{u}_{k}$ and $\parallel u\parallel +\rho (x,{x}_{0})<r$, where $\rho (x,{x}_{0}):={\sum}_{i=0}^{k}\parallel {u}_{i}\parallel $.

The majorizing function *h* defined in the following, which was first introduced and studied by Wang (*cf.* [23]), is a powerful tool in our study. Let ${r}_{0}>0$ and $b>0$ be such that

${\int }_{0}^{{r}_{0}}L(s)\phantom{\rule{0.2em}{0ex}}\mathrm{d}s=1\quad \text{and}\quad b={\int }_{0}^{{r}_{0}}sL(s)\phantom{\rule{0.2em}{0ex}}\mathrm{d}s.$ (2.16)

For $\beta >0$, define the majorizing function *h* by

$h(t)=\beta -t+{\int }_{0}^{t}L(s)(t-s)\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\quad \text{for each } 0\le t\le R.$ (2.17)

Some useful properties of *h* are described in the following propositions; see [23].

**Proposition 2.1** *The function* *h* *is monotonically decreasing on* $[0,{r}_{0}]$ *and monotonically increasing on* $[{r}_{0},R]$. *Moreover*, *if* $\beta \le b$, *then* *h* *has a unique zero in each of the intervals* $[0,{r}_{0}]$ *and* $[{r}_{0},R]$; *these zeros are denoted by* ${r}_{1}$ *and* ${r}_{2}$, *respectively*.

Let $\{{t}_{n}\}$ denote the sequence generated by Newton’s method with initial value ${t}_{0}=0$ for *h*, that is,

${t}_{0}=0,\qquad {t}_{n+1}={t}_{n}-\frac{h({t}_{n})}{{h}^{\prime }({t}_{n})}\quad \text{for each } n=0,1,\dots .$ (2.18)

**Proposition 2.2** *Suppose that* $\beta \le b$. *Then the sequence* $\{{t}_{n}\}$ *generated by* (2.18) *is monotonic increasing and convergent to* ${r}_{1}$.
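For a concrete feel for Proposition 2.2, the following sketch (an illustration with arbitrarily chosen values, not part of the paper) takes the constant function $L(s)\equiv L$, for which (2.17) becomes the quadratic $\beta -t+\frac{L}{2}{t}^{2}$ used in Section 5, and runs iteration (2.18); the sequence increases monotonically to the smaller zero ${r}_{1}=(1-\sqrt{1-2L\beta })/L$:

```python
import math

def newton_majorant(beta, L, n_steps=30):
    """Iterate t_{n+1} = t_n - h(t_n)/h'(t_n) for h(t) = beta - t + L*t**2/2."""
    h = lambda t: beta - t + L * t * t / 2.0
    dh = lambda t: -1.0 + L * t
    ts = [0.0]
    for _ in range(n_steps):
        t = ts[-1]
        ts.append(t - h(t) / dh(t))
    return ts

beta, L = 0.4, 1.0                 # lambda = L*beta = 0.4 <= 1/2, i.e. beta <= b = 1/(2L)
r1 = (1 - math.sqrt(1 - 2 * L * beta)) / L    # smaller zero of h
ts = newton_majorant(beta, L)
assert all(ts[i] < ts[i + 1] for i in range(4))   # monotonically increasing
assert abs(ts[-1] - r1) < 1e-12                   # convergent to r1
```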

The following lemma will be useful in the proof of the main theorem.

**Lemma 2.1** *Let* $0<r\le {r}_{0}$ *and let* ${x}_{0}\in G$ *be such that* $\mathrm{d}{f}_{{x}_{0}}^{-1}$ *exists*. *Suppose that* $\mathrm{d}{f}_{{x}_{0}}^{-1}\phantom{\rule{0.2em}{0ex}}\mathrm{d}f$ *satisfies the* *L*-*average Lipschitz condition on* ${C}_{r}({x}_{0})$. *Let* $x\in {C}_{r}({x}_{0})$ *be such that there exist* $k\ge 1$ *and* ${u}_{0},\dots ,{u}_{k}\in \mathcal{G}$ *satisfying* $x={x}_{0}\cdot exp{u}_{0}\cdot \dots \cdot exp{u}_{k}$ *and* $\rho (x,{x}_{0}):={\sum}_{i=0}^{k}\parallel {u}_{i}\parallel <r$. *Then* $\mathrm{d}{f}_{x}^{-1}$ *exists and*

$\parallel \mathrm{d}{f}_{x}^{-1}\phantom{\rule{0.2em}{0ex}}\mathrm{d}{f}_{{x}_{0}}\parallel \le -\frac{1}{{h}^{\prime }(\rho (x,{x}_{0}))}={(1-{\int }_{0}^{\rho (x,{x}_{0})}L(s)\phantom{\rule{0.2em}{0ex}}\mathrm{d}s)}^{-1}.$

*Proof* Write ${y}_{0}={x}_{0}$ and ${y}_{i+1}={y}_{i}\cdot exp{u}_{i}$ for each $i=0,\dots ,k$. Since (2.15) holds with $T=\mathrm{d}{f}_{{x}_{0}}^{-1}\phantom{\rule{0.2em}{0ex}}\mathrm{d}f$, one has that

$\parallel \mathrm{d}{f}_{{x}_{0}}^{-1}(\mathrm{d}{f}_{{y}_{i+1}}-\mathrm{d}{f}_{{y}_{i}})\parallel \le {\int }_{\rho ({y}_{i},{x}_{0})}^{\rho ({y}_{i},{x}_{0})+\parallel {u}_{i}\parallel }L(s)\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\quad \text{for each } i=0,\dots ,k.$

Noting that ${y}_{k+1}=x$, we have that

$\parallel \mathrm{d}{f}_{{x}_{0}}^{-1}\phantom{\rule{0.2em}{0ex}}\mathrm{d}{f}_{x}-I\parallel \le {\int }_{0}^{\rho (x,{x}_{0})}L(s)\phantom{\rule{0.2em}{0ex}}\mathrm{d}s<{\int }_{0}^{{r}_{0}}L(s)\phantom{\rule{0.2em}{0ex}}\mathrm{d}s=1.$

Thus the conclusion follows from the Banach lemma and the proof is complete. □

## 3 Convergence criteria

Following [17], we define Newton’s method with initial point ${x}_{0}$ for *f* on a Lie group as follows:

${x}_{n+1}={x}_{n}\cdot exp(-\mathrm{d}{f}_{{x}_{n}}^{-1}f({x}_{n}))\quad \text{for each } n=0,1,\dots .$ (3.1)

Recall that $f:G\to \mathcal{G}$ is a ${C}^{1}$-mapping. In the remainder of this section, we always assume that ${x}_{0}\in G$ is such that $\mathrm{d}{f}_{{x}_{0}}^{-1}$ exists and set $\beta :=\parallel \mathrm{d}{f}_{{x}_{0}}^{-1}f({x}_{0})\parallel $. Let ${r}_{0}$ and *b* be given by (2.16), and let ${r}_{1}$ be given by Proposition 2.1.
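Before turning to the main theorem, it may help to see the iteration ${x}_{n+1}={x}_{n}\cdot exp(-\mathrm{d}{f}_{{x}_{n}}^{-1}f({x}_{n}))$ in the simplest setting. The following sketch (a toy instance on $G=({\mathbb{R}}_{+},\cdot )$ with $\mathcal{G}=\mathbb{R}$ and $expu={e}^{u}$; an illustration, not the paper’s general algorithm) solves $f(x)=x-2=0$, where $\mathrm{d}{f}_{x}u=\frac{\mathrm{d}}{\mathrm{d}t}(x{e}^{tu}-2){|}_{t=0}=xu$, so each Newton step is $v=-(x-2)/x$ followed by the update $x\mapsto x\cdot {e}^{v}$:

```python
import math

def newton_lie(x0, f, df, n_steps=12):
    """x_{n+1} = x_n * exp(-df(x_n)^{-1} f(x_n)) on the multiplicative group R_+."""
    x = x0
    for _ in range(n_steps):
        v = -f(x) / df(x)          # df_x acts as multiplication by df(x) here
        x = x * math.exp(v)        # group update via the exponential map
    return x

f = lambda x: x - 2.0              # zero at x* = 2
df = lambda x: x                   # (d/dt)(x * e**(t*u) - 2)|_{t=0} = x * u
x_star = newton_lie(1.0, f, df)
assert abs(x_star - 2.0) < 1e-10   # quadratic convergence to the zero
```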

**Theorem 3.1** *Suppose that* $\mathrm{d}{f}_{{x}_{0}}^{-1}\phantom{\rule{0.2em}{0ex}}\mathrm{d}f$ *satisfies the* *L*-*average Lipschitz condition on* ${C}_{{r}_{1}}({x}_{0})$ *and that*

$\beta \le b.$ (3.2)

*Then the sequence* $\{{x}_{n}\}$ *generated by Newton’s method* (3.1) *with initial point* ${x}_{0}$ *is well defined and converges to a zero* ${x}^{\ast}$ *of* *f*. *Moreover*, *the following assertions hold for each* $n=0,1,\dots $ :

$\varrho ({x}_{n+1},{x}_{n})\le {t}_{n+1}-{t}_{n}$ (3.3)

*and*

$\varrho ({x}_{n},{x}^{\ast })\le {r}_{1}-{t}_{n}.$ (3.4)

*Proof* Write ${v}_{n}=-\mathrm{d}{f}_{{x}_{n}}^{-1}f({x}_{n})$ for each $n=0,1,\dots $ . Below we shall show that each ${v}_{n}$ is well defined and

$\varrho ({x}_{n+1},{x}_{n})\le \parallel {v}_{n}\parallel \le {t}_{n+1}-{t}_{n}$ (3.5)

holds for each $n=0,1,\dots $ . Granting this, one sees that the sequence $\{{x}_{n}\}$ generated by Newton’s method (3.1) with initial point ${x}_{0}$ is well defined and converges to a zero ${x}^{\ast}$ of *f*, because, by (3.1),

$\varrho ({x}_{n+p},{x}_{n})\le \sum _{i=n}^{n+p-1}\parallel {v}_{i}\parallel \le {t}_{n+p}-{t}_{n}\quad \text{for all } n,p=0,1,\dots .$

Furthermore, assertions (3.3) and (3.4) hold for each *n* and the proof of the theorem is completed.

Note that ${v}_{0}$ is well defined by assumption and ${x}_{1}={x}_{0}\cdot exp{v}_{0}$. Hence, $\varrho ({x}_{1},{x}_{0})\le \parallel {v}_{0}\parallel $. Since $\parallel {v}_{0}\parallel =\parallel -\mathrm{d}{f}_{{x}_{0}}^{-1}(f({x}_{0}))\parallel =\beta ={t}_{1}-{t}_{0}$, it follows that (3.5) is true for $n=0$. We now proceed by mathematical induction on *n*. For this purpose, assume that ${v}_{n}$ is well defined and (3.5) holds for each $n\le k-1$. Then

$\rho ({x}_{k},{x}_{0})\le \sum _{i=0}^{k-1}\parallel {v}_{i}\parallel \le {t}_{k}<{r}_{1}.$

Thus, we use Lemma 2.1 to conclude that $\mathrm{d}{f}_{{x}_{k}}^{-1}$ exists and

$\parallel \mathrm{d}{f}_{{x}_{k}}^{-1}\phantom{\rule{0.2em}{0ex}}\mathrm{d}{f}_{{x}_{0}}\parallel \le -\frac{1}{{h}^{\prime }({t}_{k})}.$ (3.7)

Therefore, ${v}_{k}$ is well defined. Observe that

$f({x}_{k})=f({x}_{k})-f({x}_{k-1})-\mathrm{d}{f}_{{x}_{k-1}}{v}_{k-1}={\int }_{0}^{1}(\mathrm{d}{f}_{{x}_{k-1}\cdot exp(\tau {v}_{k-1})}-\mathrm{d}{f}_{{x}_{k-1}}){v}_{k-1}\phantom{\rule{0.2em}{0ex}}\mathrm{d}\tau ,$

where the second equality is valid because of (2.13). Therefore, applying (2.15), one has that

$\parallel \mathrm{d}{f}_{{x}_{0}}^{-1}f({x}_{k})\parallel \le {\int }_{0}^{1}({\int }_{{t}_{k-1}}^{{t}_{k-1}+\tau ({t}_{k}-{t}_{k-1})}L(s)\phantom{\rule{0.2em}{0ex}}\mathrm{d}s)({t}_{k}-{t}_{k-1})\phantom{\rule{0.2em}{0ex}}\mathrm{d}\tau =h({t}_{k})-h({t}_{k-1})-{h}^{\prime }({t}_{k-1})({t}_{k}-{t}_{k-1})=h({t}_{k}),$ (3.8)

where the last equality holds because $h({t}_{k-1})+{h}^{\prime}({t}_{k-1})({t}_{k}-{t}_{k-1})=0$. Combining this with (3.7) yields that

$\parallel {v}_{k}\parallel =\parallel \mathrm{d}{f}_{{x}_{k}}^{-1}f({x}_{k})\parallel \le \parallel \mathrm{d}{f}_{{x}_{k}}^{-1}\phantom{\rule{0.2em}{0ex}}\mathrm{d}{f}_{{x}_{0}}\parallel \parallel \mathrm{d}{f}_{{x}_{0}}^{-1}f({x}_{k})\parallel \le -\frac{h({t}_{k})}{{h}^{\prime }({t}_{k})}={t}_{k+1}-{t}_{k}.$ (3.9)

Since ${x}_{k+1}={x}_{k}\cdot exp{v}_{k}$, we have $\varrho ({x}_{k+1},{x}_{k})\le \parallel {v}_{k}\parallel $. This together with (3.9) gives that (3.5) holds for $n=k$, which completes the proof of the theorem. □

## 4 Applications to optimization problems

Let $\varphi :G\to \mathbb{R}$ be a ${C}^{2}$-map. Consider the following optimization problem:

$\underset{x\in G}{min}\phantom{\rule{0.2em}{0ex}}\varphi (x).$ (4.1)

Newton’s method for solving (4.1) was presented in [16], where a local quadratic convergence result was established for a smooth function *φ*.

Let $X\in \mathcal{G}$. Following [16], we use $\tilde{X}$ to denote the left invariant vector field associated with *X*, defined by

$\tilde{X}(y):={({L}_{y}^{\prime })}_{e}(X)\quad \text{for each } y\in G,$

and $\tilde{X}\varphi $ the Lie derivative of *φ* with respect to the left invariant vector field $\tilde{X}$; that is, for each $x\in G$,

$(\tilde{X}\varphi )(x):=\frac{\mathrm{d}}{\mathrm{d}t}\varphi (x\cdot exp(tX)){|}_{t=0}.$

Let $\{{X}_{1},\dots ,{X}_{m}\}$ be an orthonormal basis of $\mathcal{G}$. According to [[24], p.356] (see also [16]), grad*φ* is a vector field on *G* defined by

$grad\varphi (x):=\sum _{i=1}^{m}({\tilde{X}}_{i}\varphi )(x){\tilde{X}}_{i}(x)\quad \text{for each } x\in G.$

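As a quick numerical check of the Lie derivative (an illustration on the one-dimensional group $G=({\mathbb{R}}_{+},\cdot )$ with basis $\{{X}_{1}=1\}$; not part of the paper), take $\varphi (x)={(logx-1)}^{2}$, for which $({\tilde{X}}_{1}\varphi )(x)=\frac{\mathrm{d}}{\mathrm{d}t}\varphi (x{e}^{t}){|}_{t=0}=2(logx-1)$; a central finite difference reproduces this value:

```python
import math

def lie_derivative(phi, x, X, h=1e-6):
    # (X~ phi)(x) = d/dt phi(x * exp(t*X)) |_{t=0}, via a central difference
    return (phi(x * math.exp(h * X)) - phi(x * math.exp(-h * X))) / (2 * h)

phi = lambda x: (math.log(x) - 1.0) ** 2
x = 2.0
exact = 2.0 * (math.log(x) - 1.0)          # analytic Lie derivative for X = 1
assert abs(lie_derivative(phi, x, 1.0) - exact) < 1e-6
```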
Then Newton’s method with initial point ${x}_{0}\in G$ considered in [16] can be written in a coordinate-free form as follows.

**Algorithm 4.1** Find ${X}^{k}\in \mathcal{G}$ such that ${\tilde{X}}^{k}$ is the left invariant vector field with ${\tilde{X}}^{k}(x)={({L}_{x}^{\prime})}_{e}{X}^{k}$ for each $x\in G$ and

$({\tilde{X}}^{k}(\tilde{Y}\varphi ))({x}_{k})+(\tilde{Y}\varphi )({x}_{k})=0\quad \text{for each } Y\in \mathcal{G}.$

Set ${x}_{k+1}={x}_{k}\cdot exp{X}^{k}$;

Set $k\leftarrow k+1$ and repeat.

Let $f:G\to \mathcal{G}$ be a mapping defined by

$f(x):={({L}_{x}^{\prime })}_{e}^{-1}grad\varphi (x)\quad \text{for each } x\in G.$ (4.4)

Define the linear operator ${H}_{x}\varphi :\mathcal{G}\to \mathcal{G}$ for each $x\in G$ by

${H}_{x}\varphi (u):=\sum _{i=1}^{m}(\tilde{u}({\tilde{X}}_{i}\varphi ))(x){X}_{i}\quad \text{for each } u\in \mathcal{G},$ (4.5)

where $\tilde{u}$ denotes the left invariant vector field associated with *u*.

Then ${H}_{(\cdot )}\varphi $ defines a mapping from *G* to $\mathcal{L}(\mathcal{G})$. The following proposition, which was given in [18], establishes the equivalence between $\mathrm{d}{f}_{x}$ and ${H}_{x}\varphi $.

**Proposition 4.1** *Let* $f(\cdot )$ *and* ${H}_{(\cdot )}\varphi $ *be defined respectively by* (4.4) *and* (4.5). *Then*

$\mathrm{d}{f}_{x}={H}_{x}\varphi \quad \textit{for each } x\in G.$

**Remark 4.1** One can easily see from Proposition 4.1 that, with the same initial point, the sequence generated by Algorithm 4.1 for *ϕ* coincides with the one generated by Newton’s method (3.1) for *f* defined by (4.4).
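To see Remark 4.1 in action, the following sketch (a toy instance on $G=({\mathbb{R}}_{+},\cdot )$ with $\mathcal{G}=\mathbb{R}$; an illustration under these assumptions, not the paper’s setting) minimizes $\varphi (x)=cosh(logx-1)$, whose minimizer is ${x}^{\ast}=e$. Here the Lie derivative is $sinh(logx-1)$, the Hessian operator is multiplication by $cosh(logx-1)\ge 1$ (positive definite), and the update of Algorithm 4.1 coincides with Newton’s iteration (3.1):

```python
import math

def newton_opt(x0, n_steps=25):
    """Algorithm 4.1 on G = R_+ for phi(x) = cosh(log x - 1); minimizer x* = e."""
    x = x0
    for _ in range(n_steps):
        g = math.sinh(math.log(x) - 1.0)   # (X~ phi)(x): gradient in the Lie algebra
        H = math.cosh(math.log(x) - 1.0)   # H_x phi, always >= 1 here
        x = x * math.exp(-g / H)           # x_{k+1} = x_k * exp(X^k), X^k = -H^{-1} g
    return x

x_star = newton_opt(2.0)
assert abs(x_star - math.e) < 1e-10        # converges to the critical point x* = e
```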

Let ${x}_{0}\in G$ be such that ${({H}_{{x}_{0}}\varphi )}^{-1}$ exists, and let ${\beta}_{\varphi}:=\parallel {({H}_{{x}_{0}}\varphi )}^{-1}{({L}_{{x}_{0}}^{\prime})}_{e}^{-1}grad\varphi ({x}_{0})\parallel $. Recall that ${r}_{0}$ and *b* are given by (2.16), and ${r}_{1}$ is given by Proposition 2.1. Then the main theorem of this section is as follows.

**Theorem 4.1**
*Suppose that*

${\beta }_{\varphi }\le b$ (4.7)

*and that* ${({H}_{{x}_{0}}\varphi )}^{-1}({H}_{(\cdot )}\varphi )$ *satisfies the* *L*-*average Lipschitz condition on* ${C}_{{r}_{1}}({x}_{0})$. *Then the sequence generated by Algorithm* 4.1 *with initial point* ${x}_{0}$ *is well defined and converges to a critical point* ${x}^{\ast}$ *of* *ϕ*: $grad\varphi ({x}^{\ast})=0$.

*Furthermore*, *if* ${H}_{{x}_{0}}\varphi $ *is additionally positive definite and the following Lipschitz condition is satisfied*:

*then* ${x}^{\ast}$ *is a local solution of* (4.1).

*Proof* Recall that *f* is defined by (4.4). Then by Proposition 4.1, $\mathrm{d}{f}_{x}={H}_{x}\varphi $ for each $x\in G$. Hence, by assumptions, $\mathrm{d}{f}_{{x}_{0}}^{-1}\phantom{\rule{0.2em}{0ex}}\mathrm{d}f$ satisfies the *L*-average Lipschitz condition on ${C}_{{r}_{1}}({x}_{0})$ and condition (3.2) is satisfied because ${\beta}_{\varphi}\le b$. Thus, Theorem 3.1 is applicable; hence the sequence generated by Newton’s method for *f* with initial point ${x}_{0}$ is well defined and converges to a zero ${x}^{\ast}$ of *f*. Consequently, by Remark 4.1, one sees that the first assertion holds.

To prove the second assertion, we assume that ${H}_{{x}_{0}}\varphi $ is additionally positive definite and the Lipschitz condition (4.8) is satisfied. It is sufficient to prove that ${H}_{{x}^{\ast}}\varphi $ is positive definite. Let ${\lambda}^{\ast}$ and ${\lambda}^{0}$ be the minimum eigenvalues of ${H}_{{x}^{\ast}}\varphi $ and ${H}_{{x}_{0}}\varphi $, respectively. Then ${\lambda}^{0}>0$. We have to show that ${\lambda}^{\ast}>0$. To do this, let $\{{x}_{n}\}$ be the sequence generated by Algorithm 4.1 and write ${v}_{n}=\mathrm{d}{f}_{{x}_{n}}^{-1}f({x}_{n})$ for each $n=0,1,\dots $ . Then, by Remark 4.1,

and by Theorem 3.1,

Therefore, for each $n=0,1,\dots $ ,

thanks to (4.8)-(4.10). Since

it follows that

thanks to (4.11). This implies that ${\lambda}^{\ast}>0$ and completes the proof. □

## 5 Theorems under Kantorovich’s condition and the *γ*-condition

If $L(\cdot )$ is a constant, then the *L*-average Lipschitz condition is reduced to the classical Lipschitz condition.

Let $r>0$, ${x}_{0}\in G$, and let *T* be a mapping from *G* to $\mathcal{L}(\mathcal{G})$. Then *T* is said to satisfy the *L*-Lipschitz condition on ${C}_{r}({x}_{0})$ if

$\parallel T(x\cdot expu)-T(x)\parallel \le L\parallel u\parallel $

holds for any $u,{u}_{0},\dots ,{u}_{k}\in \mathcal{G}$ and $x\in {C}_{r}({x}_{0})$ such that $x={x}_{0}exp{u}_{0}exp{u}_{1}\cdots exp{u}_{k}$ and $\parallel u\parallel +\rho (x,{x}_{0})<r$, where $\rho (x,{x}_{0})={\sum}_{i=0}^{k}\parallel {u}_{i}\parallel $.

Let $\beta >0$ and $L>0$. The majorizing function *h* reduces to the quadratic

$h(t)=\beta -t+\frac{L}{2}{t}^{2}\quad \text{for each } t\ge 0.$

Let $\{{t}_{n}\}$ denote the sequence generated by Newton’s method with initial value ${t}_{0}=0$ for *h*, that is,

${t}_{0}=0,\qquad {t}_{n+1}={t}_{n}-\frac{h({t}_{n})}{{h}^{\prime }({t}_{n})}\quad \text{for each } n=0,1,\dots .$

Assume that $\lambda :=L\beta \le \frac{1}{2}$. Then *h* has two zeros ${r}_{1}$ and ${r}_{2}$:

${r}_{1}=\frac{1-\sqrt{1-2\lambda }}{L},\qquad {r}_{2}=\frac{1+\sqrt{1-2\lambda }}{L};$ (5.1)

moreover, $\{{t}_{n}\}$ is monotonically increasing and convergent to ${r}_{1}$, and satisfies that

${r}_{1}-{t}_{n}=({r}_{2}-{r}_{1})\frac{{q}^{{2}^{n}}}{1-{q}^{{2}^{n}}}\quad \text{for each } n=0,1,\dots \ (\text{when } \lambda <\frac{1}{2}),$

where

$q=\frac{{r}_{1}}{{r}_{2}}.$

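The error estimate for the quadratic majorant, ${r}_{1}-{t}_{n}=({r}_{2}-{r}_{1})\frac{{q}^{{2}^{n}}}{1-{q}^{{2}^{n}}}$ with $q={r}_{1}/{r}_{2}$ and $\lambda <\frac{1}{2}$, can be checked numerically; the sketch below (arbitrarily chosen values; a verification aid, not part of the paper) compares it against the actual Newton sequence for *h*:

```python
import math

def check_error_formula(beta, L, n_max=5):
    lam = L * beta
    assert lam < 0.5
    r1 = (1 - math.sqrt(1 - 2 * lam)) / L
    r2 = (1 + math.sqrt(1 - 2 * lam)) / L
    q = r1 / r2
    h = lambda t: beta - t + L * t * t / 2
    dh = lambda t: -1 + L * t
    t = 0.0
    for n in range(n_max + 1):
        # closed-form prediction vs. the actual Newton iterate t_n
        predicted = (r2 - r1) * q ** (2 ** n) / (1 - q ** (2 ** n))
        assert abs((r1 - t) - predicted) < 1e-9
        t = t - h(t) / dh(t)

check_error_formula(beta=0.3, L=1.0)
```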
Recall that $f:G\to \mathcal{G}$ is a ${C}^{1}$-mapping. As in the previous section, we always assume that ${x}_{0}\in G$ is such that $\mathrm{d}{f}_{{x}_{0}}^{-1}$ exists and set $\beta :=\parallel \mathrm{d}{f}_{{x}_{0}}^{-1}f({x}_{0})\parallel $. Then, by Theorem 3.1, we obtain the following results, which were given in [18].

**Theorem 5.1** *Suppose that* $\mathrm{d}{f}_{{x}_{0}}^{-1}\mathrm{d}f$ *satisfies the* *L*-*Lipschitz condition on* ${C}_{{r}_{1}}({x}_{0})$ *and that* $\lambda =L\beta \le \frac{1}{2}$. *Then the sequence* $\{{x}_{n}\}$ *generated by Newton’s method* (3.1) *with initial point* ${x}_{0}$ *is well defined and converges to a zero* ${x}^{\ast}$ *of* *f*. *Moreover*, *the following assertions hold for each* $n=0,1,\dots $ :

$\varrho ({x}_{n+1},{x}_{n})\le {t}_{n+1}-{t}_{n}\quad \textit{and}\quad \varrho ({x}_{n},{x}^{\ast })\le {r}_{1}-{t}_{n}.$

Let ${x}_{0}\in G$ be such that ${({H}_{{x}_{0}}\varphi )}^{-1}$ exists, and let ${\beta}_{\varphi}=\parallel {({H}_{{x}_{0}}\varphi )}^{-1}{({L}_{{x}_{0}}^{\prime})}_{e}^{-1}grad\varphi ({x}_{0})\parallel $. Recall that ${r}_{1}$ is defined by (5.1). Then, by Theorem 4.1, we get the following results, which were given in [18].

**Theorem 5.2** *Suppose that* $\lambda =L{\beta}_{\varphi}\le \frac{1}{2}$, *and that* ${({H}_{{x}_{0}}\varphi )}^{-1}({H}_{(\cdot )}\varphi )$ *satisfies the* *L*-*Lipschitz condition on* ${C}_{{r}_{1}}({x}_{0})$. *Then the sequence generated by Algorithm* 4.1 *with initial point* ${x}_{0}$ *is well defined and converges to a critical point* ${x}^{\ast}$ *of* *ϕ*: $grad\varphi ({x}^{\ast})=0$.

*Furthermore*, *if* ${H}_{{x}_{0}}\varphi $ *is additionally positive definite and the following Lipschitz condition is satisfied*:

*then* ${x}^{\ast}$ *is a local solution of* (4.1).

Let *k* be a positive integer and assume further that $f:G\to \mathcal{G}$ is a ${C}^{k}$-map. Define the map ${\mathrm{d}}^{k}{f}_{x}:{\mathcal{G}}^{k}\to \mathcal{G}$ by

${\mathrm{d}}^{k}{f}_{x}{u}_{1}{u}_{2}\cdots {u}_{k}:=\frac{{\partial }^{k}}{\partial {t}_{1}\partial {t}_{2}\cdots \partial {t}_{k}}f(x\cdot exp({t}_{1}{u}_{1})\cdot exp({t}_{2}{u}_{2})\cdots exp({t}_{k}{u}_{k})){|}_{{t}_{1}=\cdots ={t}_{k}=0}$

for each $({u}_{1},\dots ,{u}_{k})\in {\mathcal{G}}^{k}$. In particular,

${\mathrm{d}}^{1}{f}_{x}u=\mathrm{d}{f}_{x}u\quad \text{for each } u\in \mathcal{G}.$

Let $1\le i\le k$. Then, in view of the definition, one has that

In particular, for fixed ${u}_{1},\dots ,{u}_{i-1},{u}_{i+1},\dots ,{u}_{k}\in \mathcal{G}$,

This implies that ${\mathrm{d}}^{i}{f}_{x}{u}_{1}\cdots {u}_{i-1}u$ is linear with respect to $u\in \mathcal{G}$ and so is ${\mathrm{d}}^{k}{f}_{x}{u}_{1}\cdots {u}_{i-1}u{u}_{i+1}\cdots {u}_{k}$. Consequently, ${\mathrm{d}}^{k}{f}_{x}$ is a multilinear map from ${\mathcal{G}}^{k}$ to $\mathcal{G}$ because $1\le i\le k$ is arbitrary. Thus we can define the norm of ${\mathrm{d}}^{k}{f}_{x}$ by

$\parallel {\mathrm{d}}^{k}{f}_{x}\parallel :=sup\{\parallel {\mathrm{d}}^{k}{f}_{x}{u}_{1}{u}_{2}\cdots {u}_{k}\parallel :{u}_{i}\in \mathcal{G},\parallel {u}_{i}\parallel =1,i=1,\dots ,k\}.$

For the remainder of the paper, we always assume that *f* is a ${C}^{2}$-map from *G* to $\mathcal{G}$. Then, taking $i=2$, we have

${\mathrm{d}}^{2}{f}_{x}uv=\frac{\mathrm{d}}{\mathrm{d}t}(\mathrm{d}{f}_{x\cdot exp(tu)}v){|}_{t=0}\quad \text{for all } u,v\in \mathcal{G}.$

Thus, (2.13) can be applied (with $\mathrm{d}{f}_{\cdot}v$ in place of $f(\cdot )$ for each $v\in \mathcal{G}$) to conclude the following formula:

$\mathrm{d}{f}_{x\cdot exp(tu)}v=\mathrm{d}{f}_{x}v+{\int }_{0}^{t}{\mathrm{d}}^{2}{f}_{x\cdot exp(su)}uv\phantom{\rule{0.2em}{0ex}}\mathrm{d}s.$ (5.2)

The *γ*-conditions for nonlinear operators in Banach spaces were first introduced and explored by Wang [25, 26] to study Smale’s point estimate theory; they were extended in [19] to a map *f* from a Lie group to its Lie algebra, in terms of the map ${\mathrm{d}}^{2}f$, as given in Definition 5.1 below. Let $r>0$ and $\gamma >0$ be such that $\gamma r\le 1$.

**Definition 5.1** Let ${x}_{0}\in G$ be such that $\mathrm{d}{f}_{{x}_{0}}^{-1}$ exists. *f* is said to satisfy the *γ*-condition at ${x}_{0}$ on ${C}_{r}({x}_{0})$ if, for any $x\in {C}_{r}({x}_{0})$ with $x={x}_{0}exp{u}_{0}exp{u}_{1}\cdots exp{u}_{k}$ such that $\rho (x,{x}_{0}):={\sum}_{i=0}^{k}\parallel {u}_{i}\parallel <r$,

$\parallel \mathrm{d}{f}_{{x}_{0}}^{-1}{\mathrm{d}}^{2}{f}_{x}\parallel \le \frac{2\gamma }{{(1-\gamma \rho (x,{x}_{0}))}^{3}}.$

As shown in Proposition 5.3, if *f* is analytic at ${x}_{0}$, then *f* satisfies the *γ*-condition at ${x}_{0}$.

Let $\gamma >0$ and let *L* be the function defined by

$L(s):=\frac{2\gamma }{{(1-\gamma s)}^{3}}\quad \text{for each } 0\le s<\frac{1}{\gamma }.$ (5.3)

The following proposition shows that the *γ*-condition implies the *L*-average Lipschitz condition.

**Proposition 5.1** *Suppose that* *f* *satisfies the* *γ*-*condition at* ${x}_{0}$ *on* ${C}_{r}({x}_{0})$. *Then* $\mathrm{d}{f}_{{x}_{0}}^{-1}\phantom{\rule{0.2em}{0ex}}\mathrm{d}f$ *satisfies the* *L*-*average Lipschitz condition on* ${C}_{r}({x}_{0})$ *with* *L* *defined by* (5.3).

*Proof* Let $x\in {C}_{r}({x}_{0})$ and let $u,{u}_{0},\dots ,{u}_{k}\in \mathcal{G}$ be such that $x={x}_{0}exp{u}_{0}exp{u}_{1}\cdots exp{u}_{k}$ and ${\sum}_{i=0}^{k}\parallel {u}_{i}\parallel +\parallel u\parallel <r$. Write $\rho (x,{x}_{0}):={\sum}_{i=0}^{k}\parallel {u}_{i}\parallel $. Observe from (5.2) that

$\mathrm{d}{f}_{{x}_{0}}^{-1}(\mathrm{d}{f}_{x\cdot expu}-\mathrm{d}{f}_{x})v={\int }_{0}^{1}\mathrm{d}{f}_{{x}_{0}}^{-1}{\mathrm{d}}^{2}{f}_{x\cdot exp(\tau u)}uv\phantom{\rule{0.2em}{0ex}}\mathrm{d}\tau \quad \text{for each } v\in \mathcal{G}.$

Combining this with the assumption yields that

$\parallel \mathrm{d}{f}_{{x}_{0}}^{-1}(\mathrm{d}{f}_{x\cdot expu}-\mathrm{d}{f}_{x})\parallel \le {\int }_{0}^{1}\frac{2\gamma \parallel u\parallel }{{(1-\gamma (\rho (x,{x}_{0})+\tau \parallel u\parallel ))}^{3}}\phantom{\rule{0.2em}{0ex}}\mathrm{d}\tau ={\int }_{\rho (x,{x}_{0})}^{\rho (x,{x}_{0})+\parallel u\parallel }L(s)\phantom{\rule{0.2em}{0ex}}\mathrm{d}s.$

Hence, $\mathrm{d}{f}_{{x}_{0}}^{-1}\phantom{\rule{0.2em}{0ex}}\mathrm{d}f$ satisfies the *L*-average Lipschitz condition on ${C}_{r}({x}_{0})$ with *L* defined by (5.3). □

Corresponding to the function *L* defined by (5.3), ${r}_{0}$ and *b* in (2.16) are ${r}_{0}=(1-\frac{\sqrt{2}}{2})\frac{1}{\gamma}$ and $b=(3-2\sqrt{2})\frac{1}{\gamma}$, and the majorizing function given in (2.17) reduces to

$h(t)=\beta -t+\frac{\gamma {t}^{2}}{1-\gamma t}\quad \text{for each } 0\le t<\frac{1}{\gamma }.$

Hence the condition $\beta \le b$ is equivalent to $\alpha =\gamma \beta \le 3-2\sqrt{2}$. Let $\{{t}_{n}\}$ denote the sequence generated by Newton’s method with the initial value ${t}_{0}=0$ for *h*. Then the following proposition was proved in [27], see also [10] and [6].
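Multiplying $h(t)=\beta -t+\frac{\gamma {t}^{2}}{1-\gamma t}=0$ through by $1-\gamma t$ shows that the zeros of *h* solve the quadratic $2\gamma {t}^{2}-(1+\alpha )t+\beta =0$, whence ${r}_{1,2}=\frac{(1+\alpha )\mp \sqrt{{(1+\alpha )}^{2}-8\alpha }}{4\gamma }$. The sketch below (a numerical sanity check with arbitrary values, not part of the paper) confirms this and the ordering ${r}_{1}\le {r}_{0}\le {r}_{2}$:

```python
import math

def gamma_zeros(beta, gamma):
    alpha = gamma * beta
    disc = (1 + alpha) ** 2 - 8 * alpha
    assert disc >= 0               # holds (for alpha < 1) iff alpha <= 3 - 2*sqrt(2)
    r1 = (1 + alpha - math.sqrt(disc)) / (4 * gamma)
    r2 = (1 + alpha + math.sqrt(disc)) / (4 * gamma)
    return r1, r2

beta, gamma = 0.1, 1.0             # alpha = 0.1 <= 3 - 2*sqrt(2) ~ 0.1716
h = lambda t: beta - t + gamma * t * t / (1 - gamma * t)
r1, r2 = gamma_zeros(beta, gamma)
assert abs(h(r1)) < 1e-12 and abs(h(r2)) < 1e-12    # both are zeros of h
assert r1 <= (1 - math.sqrt(2) / 2) / gamma <= r2   # r1 <= r0 <= r2
```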

**Proposition 5.2** *Assume that* $\alpha =\gamma \beta \le 3-2\sqrt{2}$. *Then the zeros of* *h* *are*

${r}_{1}=\frac{1+\alpha -\sqrt{{(1+\alpha )}^{2}-8\alpha }}{4\gamma }$

*and*

${r}_{2}=\frac{1+\alpha +\sqrt{{(1+\alpha )}^{2}-8\alpha }}{4\gamma }.$

*Moreover*, *the following assertions hold*:

*where*

Recall that ${x}_{0}\in G$ is such that $\mathrm{d}{f}_{{x}_{0}}^{-1}$ exists, and let $\beta :=\parallel \mathrm{d}{f}_{{x}_{0}}^{-1}f({x}_{0})\parallel $. Then, by Theorem 3.1 and Proposition 5.2, we get the following results, which were given in [19].

**Theorem 5.3**
*Suppose that*

$\alpha :=\gamma \beta \le 3-2\sqrt{2}$

*and that* *f* *satisfies the* *γ*-*condition at* ${x}_{0}$ *on* ${C}_{{r}_{1}}({x}_{0})$. *Then Newton’s method* (3.1) *with initial point* ${x}_{0}$ *is well defined*, *and the generated sequence* $\{{x}_{n}\}$ *converges to a zero* ${x}^{\ast}$ *of* *f*. *Moreover*, *if* $\alpha <3-2\sqrt{2}$, *then for each* $n=0,1,\dots $ ,

*where* *ν* *is given by* (5.4).

Below, we always assume that *f* is analytic on *G*. For $x\in G$ such that $\mathrm{d}{f}_{x}^{-1}$ exists, we define

$\gamma (f,x):=\underset{k\ge 2}{sup}{\parallel \frac{\mathrm{d}{f}_{x}^{-1}{\mathrm{d}}^{k}{f}_{x}}{k!}\parallel }^{\frac{1}{k-1}}.$

Also, we adopt the convention that $\gamma (f,x)=\mathrm{\infty}$ if $\mathrm{d}{f}_{x}$ is not invertible. Note that this definition is justified and, in the case when $\mathrm{d}{f}_{x}$ is invertible, $\gamma (f,x)$ is finite by analyticity.
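For intuition about $\gamma (f,x)$, consider the scalar Banach-space analogue in which ${\mathrm{d}}^{k}{f}_{x}$ is the ordinary *k*-th derivative (an illustration only, outside the Lie group setting): for $f(t)={e}^{t}$ one gets $\parallel \mathrm{d}{f}_{x}^{-1}{\mathrm{d}}^{k}{f}_{x}/k!\parallel =1/k!$, so $\gamma (f,x)={sup}_{k\ge 2}{(1/k!)}^{1/(k-1)}=1/2$, attained at $k=2$. The sketch below evaluates the supremum over a finite range:

```python
import math

def gamma_exp(k_max=40):
    # gamma(f, x) for f = exp on R: sup over k >= 2 of (1/k!)**(1/(k-1));
    # the terms decrease in k, so the supremum is the k = 2 term, 1/2.
    return max((1.0 / math.factorial(k)) ** (1.0 / (k - 1))
               for k in range(2, k_max + 1))

assert abs(gamma_exp() - 0.5) < 1e-12
```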

The following proposition is taken from [19].

**Proposition 5.3** *Let* ${\gamma}_{{x}_{0}}:=\gamma (f,{x}_{0})$ *and let* $r=\frac{2-\sqrt{2}}{2{\gamma}_{{x}_{0}}}$. *Then* *f* *satisfies the* ${\gamma}_{{x}_{0}}$-*condition at* ${x}_{0}$ *on* ${C}_{r}({x}_{0})$.

Thus, by Theorem 5.3 and Proposition 5.3, we get the following corollary, which was given in [19].

**Corollary 5.1**
*Suppose that*

$\alpha :=\gamma (f,{x}_{0})\beta \le 3-2\sqrt{2}.$

*Then Newton’s method* (3.1) *with initial point* ${x}_{0}$ *is well defined and the generated sequence* $\{{x}_{n}\}$ *converges to a zero* ${x}^{\ast}$ *of* *f*. *Moreover*, *if* $\alpha <3-2\sqrt{2}$, *then for each* $n=0,1,\dots $ ,

*where* *ν* *is given by* (5.4).

## References

1. Kantorovich LV, Akilov GP: *Functional Analysis*. Pergamon, Oxford; 1982.
2. Smale S: Newton’s method estimates from data at one point. In *The Merging of Disciplines: New Directions in Pure, Applied and Computational Mathematics*. Edited by: Ewing R, Gross K, Martin C. Springer, New York; 1986:185–196.
3. Ezquerro JA, Hernández MA: Generalized differentiability conditions for Newton’s method. *IMA J. Numer. Anal.* 2002, 22: 187–205. 10.1093/imanum/22.2.187
4. Ezquerro JA, Hernández MA: On an application of Newton’s method to nonlinear operators with *w*-conditioned second derivative. *BIT Numer. Math.* 2002, 42: 519–530.
5. Gutiérrez JM, Hernández MA: Newton’s method under weak Kantorovich conditions. *IMA J. Numer. Anal.* 2000, 20: 521–532. 10.1093/imanum/20.4.521
6. Wang XH: Convergence of Newton’s method and uniqueness of the solution of equations in Banach space. *IMA J. Numer. Anal.* 2000, 20(1): 123–134. 10.1093/imanum/20.1.123
7. Zabrejko PP, Nguen DF: The majorant method in the theory of Newton-Kantorovich approximations and the Pták error estimates. *Numer. Funct. Anal. Optim.* 1987, 9: 671–674. 10.1080/01630568708816254
8. Ferreira OP, Svaiter BF: Kantorovich’s theorem on Newton’s method in Riemannian manifolds. *J. Complex.* 2002, 18: 304–329. 10.1006/jcom.2001.0582
9. Dedieu JP, Priouret P, Malajovich G: Newton’s method on Riemannian manifolds: covariant alpha theory. *IMA J. Numer. Anal.* 2003, 23: 395–419. 10.1093/imanum/23.3.395
10. Li C, Wang JH: Newton’s method on Riemannian manifolds: Smale’s point estimate theory under the *γ*-condition. *IMA J. Numer. Anal.* 2006, 26: 228–251.
11. Wang JH, Li C: Uniqueness of the singular points of vector fields on Riemannian manifolds under the *γ*-condition. *J. Complex.* 2006, 22: 533–548. 10.1016/j.jco.2005.11.004
12. Li C, Wang JH: Convergence of Newton’s method and uniqueness of zeros of vector fields on Riemannian manifolds. *Sci. China Ser. A* 2005, 48: 1465–1478. 10.1360/04ys0147
13. Wang JH: Convergence of Newton’s method for sections on Riemannian manifolds. *J. Optim. Theory Appl.* 2011, 148: 125–145. 10.1007/s10957-010-9748-4
14. Li C, Wang JH: Newton’s method for sections on Riemannian manifolds: generalized covariant *α*-theory. *J. Complex.* 2008, 24: 423–451. 10.1016/j.jco.2007.12.003
15. Alvarez F, Bolte J, Munier J: A unifying local convergence result for Newton’s method in Riemannian manifolds. *Found. Comput. Math.* 2008, 8: 197–226. 10.1007/s10208-006-0221-6
16. Mahony RE: The constrained Newton method on a Lie group and the symmetric eigenvalue problem. *Linear Algebra Appl.* 1996, 248: 67–89.
17. Owren B, Welfert B: The Newton iteration on Lie groups. *BIT Numer. Math.* 2000, 40(2): 121–145.
18. Wang JH, Li C: Kantorovich’s theorems for Newton’s method for mappings and optimization problems on Lie groups. *IMA J. Numer. Anal.* 2011, 31: 322–347. 10.1093/imanum/drp015
19. Li C, Wang JH, Dedieu JP: Smale’s point estimate theory for Newton’s method on Lie groups. *J. Complex.* 2009, 25: 128–151. 10.1016/j.jco.2008.11.001
20. Helgason S: *Differential Geometry, Lie Groups, and Symmetric Spaces*. Academic Press, New York; 1978.
21. Varadarajan VS: *Lie Groups, Lie Algebras and Their Representations*. Graduate Texts in Mathematics 102. Springer, New York; 1984.
22. do Carmo MP: *Riemannian Geometry*. Birkhäuser, Boston; 1992.
23. Wang XH: Convergence of Newton’s method and inverse function theorem in Banach spaces. *Math. Comput.* 1999, 68: 169–186. 10.1090/S0025-5718-99-00999-0
24. Helmke U, Moore JB: *Optimization and Dynamical Systems*. Communications and Control Engineering Series. Springer, London; 1994.
25. Wang XH, Han DF: Criterion *α* and Newton’s method in weak condition. *Chin. J. Numer. Math. Appl.* 1997, 19: 96–105.
26. Wang XH: Convergence on the iteration of Halley family in weak conditions. *Chin. Sci. Bull.* 1997, 42: 552–555. 10.1007/BF03182614
27. Wang XH, Han DF: On the dominating sequence method in the point estimates and Smale’s theorem. *Sci. Sin., Ser. A* 1990, 33: 135–144.

## Acknowledgements

The research of the second author was partially supported by the National Natural Science Foundation of China (grant 11001241; 11371325) and by Zhejiang Provincial Natural Science Foundation of China (grant LY13A010011). The research of the third author was partially supported by a grant from NSC of Taiwan (NSC 102-2115-M-037-002-MY3).

## Author information

### Affiliations

### Corresponding author

## Additional information

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All authors participated in its construction and drafted the manuscript. All authors read and approved the final manuscript.

## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## About this article

### Cite this article

He, J., Wang, J. & Yao, JC. Convergence criteria of Newton’s method on Lie groups.
*Fixed Point Theory Appl* **2013, **293 (2013). https://doi.org/10.1186/1687-1812-2013-293


### Keywords

- Newton’s method
- Lie group
- *L*-average Lipschitz condition