
Some Newton-like methods with sharper error estimates for solving operator equations in Banach spaces

Abstract

It is well known that the rate of convergence of the S-iteration process introduced by Agarwal et al. (pp. 61-79) is faster than that of the Picard iteration process for contraction operators. Following the ideas of the S-iteration process, we introduce some Newton-like algorithms to solve nonlinear operator equations in the Banach space setting. We study the semi-local as well as local convergence analysis of our algorithms. The rate of convergence of our algorithms is faster than that of the modified Newton method.

Mathematics Subject Classification 2010: 49M15; 65K10; 47H10.

1 Introduction

Let $D$ be an open convex subset of a Banach space $X$ and $F$ a Fréchet differentiable operator at each point of $D$ with values in a Banach space $Y$. In the sequel, given any $x \in X$ and $r > 0$, $B_r[x]$ will designate the set $\{y \in X : \|y - x\| \le r\}$, $B_r(x)$ will designate the set $\{y \in X : \|y - x\| < r\}$, $B(Y, X)$ will designate the space of all bounded linear operators from $Y$ to $X$, and $0$ will designate the set $\{0\}$.

Many applied problems can be formulated to fit the model of the nonlinear operator equation

$F(x) = 0,$
(1.1)

where $F$ is a Fréchet differentiable operator at each point of $D$ with values in a Banach space $Y$. Many problems of finding a solution of (1.1) arise in science and engineering (see [1]). Undoubtedly, Newton's method is the most popular method for solving such problems. Starting with $x_0 \in X$, the famous Newton method is given by

$x_{n+1} = x_n - F'_{x_n}{}^{-1}F(x_n), \quad n \ge 0,$
(1.2)

where $F'_x$ denotes the Fréchet derivative of $F$ at the point $x \in D$. There are numerous generalizations of Newton's method for solving the nonlinear operator Equation (1.1); details can be found in Argyros [2], Wu and Zhao [3] and the references therein.

In Newton's method (1.2), the inverse of the derivative must be applied at each iteration. This raises a natural question: how can the Newton iteration (1.2) be modified so that the computation of the inverse of the derivative at each step is avoided? Argyros [4], Bartle [5], Dennis [6] and Rheinboldt [7] discussed the modified Newton method

$x_{n+1} = x_n - F'_{x_0}{}^{-1}F(x_n), \quad n \ge 0.$
(1.3)
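For intuition, the modified Newton iteration is easy to state in code. The following Python sketch is our own illustration (the test function $f(x) = x^2 - 2$ and all identifiers are chosen here, not taken from the paper); it inverts the derivative once at $x_0$ and reuses that inverse at every step:

```python
import math

def modified_newton(f, df, x0, tol=1e-12, max_iter=100):
    """Modified Newton method (1.3): the derivative is inverted
    only once, at the starting point x0."""
    inv = 1.0 / df(x0)  # frozen inverse F'_{x0}^{-1}
    x = x0
    for _ in range(max_iter):
        x_new = x - inv * f(x)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x_new

# solve x^2 - 2 = 0 starting from x0 = 1.5
root = modified_newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```

The price for avoiding a fresh inversion at each step is linear, rather than quadratic, convergence.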

In [8], Argyros proved the following theorem for semilocal convergence analysis of (1.3) to solve the operator Equation (1.1).

Theorem 1.1 Let $F$ be a Fréchet differentiable operator defined on an open convex subset $D$ of a Banach space $X$ with values in a Banach space $Y$. For some $x_0 \in D$, let $F'_{x_0}{}^{-1} \in B(Y, X)$. Assume that $F'_{x_0}{}^{-1}$ and $F$ satisfy the following conditions:

(i) $\|F'_{x_0}{}^{-1}\| \le \beta$, for some $\beta > 0$;

(ii) $\|F'_{x_0}{}^{-1}F(x_0)\| \le \eta$, for some $\eta > 0$;

(iii) $\|F'_x - F'_y\| \le K_0\|x - y\|$, for all $x, y \in D$ and for some $K_0 > 0$.

Assume that $h = \eta\beta K_0 < \frac{1}{2}$ and $B_r[x_0] \subseteq D$, where $r = \frac{1 - \sqrt{1 - 2h}}{h}\eta$. Then we have the following:

(a) Equation (1.1) has a unique solution $x^* \in B_r[x_0]$.

(b) The sequence $\{x_n\}$ generated by (1.3) is in $B_r[x_0]$ and it converges to $x^*$.

(c) The following error estimate holds:

$\|x_{n+1} - x^*\| \le \gamma^{n+1}\|x_0 - x^*\|, \quad n \ge 0,$
(1.4)

where $\gamma = r\beta K_0$.

Let $X$ be a Banach space and $F$ a Fréchet differentiable operator on an open convex subset $D$ of $X$ with values in a Banach space $Y$. Let $x^* \in D$ be a solution of (1.1) such that $F'_{x^*}{}^{-1} \in B(Y, X)$. For some $x_0 \in D$, assume that $F'_{x^*}{}^{-1}$ and $F$ satisfy the following:

$\|F'_x - F'_{x_0}\| \le K_0\|x - x_0\|, \quad \forall x \in D \text{ and for some } K_0 > 0,$
(1.5)
$\|F'_{x^*}{}^{-1}(F'_x - F'_{x_0})\| \le K_1\|x - x_0\|, \quad \forall x \in D \text{ and for some } K_1 > 0$
(1.6)

and

$\|F'_{x^*}{}^{-1}(F'_x - F'_{x^*})\| \le K_2\|x - x^*\|, \quad \forall x \in D \text{ and for some } K_2 > 0.$
(1.7)

Ren and Argyros [9] obtained the following local convergence result for (1.3) applied to the operator Equation (1.1).

Theorem 1.2 Let $F$ be a Fréchet differentiable operator defined on an open convex subset $D$ of a Banach space $X$ with values in a Banach space $Y$. Let $x^* \in D$ be a solution of (1.1) such that $F'_{x^*}{}^{-1} \in B(Y, X)$. For some $x_0 \in D$, let $F'_{x^*}{}^{-1}$ and $F$ satisfy (1.6) and (1.7). Assume that $B_{r_1}(x^*) \subseteq D$, where $r_1 = \frac{2}{K_2}$. Then, for any initial point $x_0 \in B_r(x^*)$, where $r = \frac{2}{2K_2 + 3K_1}$, we have the following:

(a) The sequence $\{x_n\}$ generated by (1.3) is in $B_r(x^*)$ and it converges to the unique solution $x^*$ in $B_{r_1}(x^*)$.

(b) The following error estimate holds:

$\|x_{n+1} - x^*\| \le \delta_0^{\,n+1}\|x_0 - x^*\|, \quad n \ge 0,$
(1.8)

where $\delta_0 = \frac{\|x_0 - x^*\|}{r}$.

Recently, Agarwal et al. [10] introduced the S-iteration process as follows: let $X$ be a normed space, $D$ a nonempty convex subset of $X$ and $A : D \to D$ an operator. Then, for arbitrary $x_0 \in D$, the S-iteration process is defined by

$x_{n+1} = (1 - \alpha_n)Ax_n + \alpha_n Ay_n, \quad y_n = (1 - \beta_n)x_n + \beta_n Ax_n, \quad n \ge 0,$
(1.9)

where $\{\alpha_n\}$ and $\{\beta_n\}$ are sequences in $(0, 1)$.

In [11], motivated by the S-iteration process, the first author introduced the normal S-iteration process as follows: let $X$ be a normed space, $D$ a nonempty convex subset of $X$ and $A : D \to D$ an operator. Then, for arbitrary $x_0 \in D$, the normal S-iteration process is defined by

$x_{n+1} = A\left( (1 - \alpha_n)x_n + \alpha_n Ax_n \right), \quad n \ge 0,$
(1.10)

where $\{\alpha_n\}$ is a sequence in $(0, 1)$. Note that the normal S-iteration process is applicable to finding solutions of constrained minimization problems and split feasibility problems (see Sahu [11]).

Following [11, Theorem 3.6], we remark that the normal S-iteration process is faster than the Picard and Mann iteration processes for contraction mappings.

In the present article, motivated by the normal S-iteration process, we introduce Newton-like S-iteration processes for finding the solution of operator Equation (1.1).

Algorithm 1.3 Let $\alpha \in (0, 1)$. Starting with $x_0 \in X$ and after $x_n \in X$ is defined, we define the next iterate $x_{n+1}$ as follows:

$y_n = x_n - F'_{x_0}{}^{-1}F(x_n), \quad z_n = (1 - \alpha)x_n + \alpha y_n, \quad x_{n+1} = z_n - F'_{x_0}{}^{-1}F(z_n), \quad n \ge 0.$
(1.11)

Algorithm 1.4 Let $\alpha \in (0, 1)$. Starting with $x_0 \in X$ and after $x_n \in X$ is defined, we define the next iterate $x_{n+1}$ as follows:

$y_n = x_n - F'_{x_0}{}^{-1}F(x_n), \quad z_n = (1 - \alpha)x_n + \alpha y_n, \quad x_{n+1} = z_n - F'_{z_0}{}^{-1}F(z_n), \quad n \ge 0.$
(1.12)
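In the scalar case the new schemes are only a few lines each. The sketch below is our own illustration (the test equation $e^x - 1 = 0$ is chosen here for convenience); it implements Algorithm 1.3, and the docstring notes the one-line change that gives Algorithm 1.4:

```python
import math

def algorithm_1_3(f, df, x0, alpha=0.5, tol=1e-12, max_iter=100):
    """S-iteration Newton-like scheme (1.11) with the derivative frozen at x0.
    For Algorithm 1.4 (1.12), the final step would instead use the inverse
    of the derivative at z0, the first convex combination."""
    inv = 1.0 / df(x0)  # frozen inverse F'_{x0}^{-1}
    x = x0
    for _ in range(max_iter):
        y = x - inv * f(x)                 # y_n = x_n - F'_{x0}^{-1} F(x_n)
        z = (1.0 - alpha) * x + alpha * y  # z_n = (1 - alpha) x_n + alpha y_n
        x_new = z - inv * f(z)             # x_{n+1} = z_n - F'_{x0}^{-1} F(z_n)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x_new

# solve e^x - 1 = 0 (root x* = 0) starting from x0 = 0.26
root = algorithm_1_3(lambda x: math.exp(x) - 1.0, math.exp, 0.26)
```

Each outer step applies the frozen-inverse map twice around a convex combination, which is exactly what sharpens the contraction factor from $\gamma$ to $\gamma(1 - \alpha + \alpha\gamma)$ below.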

The purpose of this article is to establish the semi-local as well as local convergence analysis of Algorithms 1.3 and 1.4. It is shown that the rate of convergence of (1.11) and (1.12) is faster than that of (1.3). Applications to initial value and boundary value problems are included.

2 Preliminaries

Definition 2.1 Let $C$ be a nonempty subset of a normed space $X$. A mapping $T : C \to X$ is said to be

(i) Lipschitzian if there exists a constant $L > 0$ such that

$\|Tx - Ty\| \le L\|x - y\|, \quad \forall x, y \in C;$

(ii) a contraction if there exists a constant $L \in (0, 1)$ such that

$\|Tx - Ty\| \le L\|x - y\|, \quad \forall x, y \in C;$

(iii) a quasi-contraction [12] if there exists a constant $L \in (0, 1)$ and

$F_T = \{x \in C : Tx = x\} \ne \emptyset$

such that

$\|Tx - p\| \le L\|x - p\|, \quad \forall x \in C \text{ and } p \in F_T.$

Definition 2.2 [11] Let $C$ be a nonempty convex subset of a normed space $X$ and $T : C \to C$ an operator. The operator $G : C \to C$ is said to be the S-operator generated by $\alpha \in (0, 1)$ and $T$ if

$G = T\left( (1 - \alpha)I + \alpha T \right),$

where $I$ is the identity operator.

Before presenting our main results we need the following technical lemmas.

Lemma 2.3 [4, 13, 14] Let $P$ be a bounded linear operator on a Banach space $X$. Then the following are equivalent:

(a) There is a bounded linear operator $Q$ on $X$ such that $Q^{-1}$ exists and $\|Q - P\| < \frac{1}{\|Q^{-1}\|}$.

(b) $P^{-1}$ exists.

Further, if $P^{-1}$ exists, then

$\|P^{-1}\| \le \frac{\|Q^{-1}\|}{1 - \|Q^{-1}\|\,\|Q - P\|}.$

Lemma 2.4 [9] Let $F$ be a Fréchet differentiable operator defined on an open convex subset $D$ of a Banach space $X$ with values in a Banach space $Y$. Let $x^* \in D$ be a solution of (1.1) such that $F'_{x^*}{}^{-1} \in B(Y, X)$ and the operator $F$ satisfies condition (1.7). Assume that $B_r(x^*) \subseteq D$, where $r = \frac{1}{K_2}$. Then, for any $x \in B_r(x^*)$, $F'_x$ is invertible and the following estimate holds:

$\|(F'_{x^*}{}^{-1}F'_x)^{-1}\| \le \frac{1}{1 - K_2\|x - x^*\|}.$

Lemma 2.5 [15, 16] Let $(X, d)$ be a complete metric space and $F : X \to X$ a contraction mapping. Then $F$ has a unique fixed point in $X$.
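Lemma 2.5 (the Banach contraction principle) is also constructive: the Picard iterates of a contraction converge to its fixed point. A small Python illustration (our own example; $\cos$ is a contraction on $[0, 1]$, so the iteration converges to its unique fixed point):

```python
import math

def picard_fixed_point(F, x0, tol=1e-13, max_iter=1000):
    """Iterate x_{k+1} = F(x_k); for a contraction F on a complete space,
    Lemma 2.5 guarantees convergence to the unique fixed point."""
    x = x0
    for _ in range(max_iter):
        x_new = F(x)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x_new

p = picard_fixed_point(math.cos, 0.5)  # the unique fixed point of cos
```

This is the mechanism used in Theorem 3.1 below, where the operator $A$ of (3.1) is shown to be a contraction on $B_r[x_0]$.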

Lemma 2.6 [17, Theorem 9.4.2] Let $F$ be a Fréchet differentiable operator defined on an open convex subset $D$ of a Banach space $X$ with values in a Banach space $Y$. Then, for all $x, y \in D$, we have

$F(x) - F(y) = \int_0^1 F'_{y + t(x - y)}(x - y)\,dt.$

3 Convergence analysis for Algorithms 1.3 and 1.4

Before studying the convergence of Algorithm 1.3, we establish the following theorem on the existence of a unique solution of operator Equation (1.1).

Theorem 3.1 Let $F$ be a Fréchet differentiable operator defined on an open convex subset $D$ of a Banach space $X$ with values in a Banach space $Y$. For some $x_0 \in D$, let $F'_{x_0}{}^{-1} \in B(Y, X)$ and let the operator $F$ satisfy (1.5) and the following conditions:

(i) $\|F'_{x_0}{}^{-1}F(x_0)\| \le \eta$, for some $\eta > 0$;

(ii) $\|F'_{x_0}{}^{-1}\| \le \beta$, for some $\beta > 0$.

Assume that $h = \eta\beta K_0 < \frac{1}{2}$ and $B_r[x_0] \subseteq D$, where $r = \frac{1 - \sqrt{1 - 2h}}{h}\eta$. Then, for fixed $\alpha \in (0, 1)$, we have the following:

(a) The operator $A : B_r[x_0] \to X$ defined by

$Ax = x - F'_{x_0}{}^{-1}F(x), \quad x \in B_r[x_0]$
(3.1)

is a contraction self-operator on $B_r[x_0]$ with Lipschitz constant $\beta rK_0$, and the operator Equation (1.1) has a unique solution in $B_r[x_0]$.

(b) The S-operator $A_\alpha : B_r[x_0] \to X$ generated by $\alpha$ and $A$ is a contraction self-operator on $B_r[x_0]$ with Lipschitz constant $\beta rK_0(1 - \alpha + \alpha\beta rK_0)$.

Proof: (a) Set $\gamma := \beta rK_0$. Note that $\gamma = 1 - \sqrt{1 - 2h} < 1$. For $x, y \in B_r[x_0]$, by Lemma 2.6 we have

$$
\begin{aligned}
\|Ax - Ay\| &= \left\| x - y - F'_{x_0}{}^{-1}\left( F(x) - F(y) \right) \right\| \\
&= \left\| F'_{x_0}{}^{-1}\left( \int_0^1 F'_{y + t(x - y)}(x - y)\,dt - \int_0^1 F'_{x_0}(x - y)\,dt \right) \right\| \\
&\le \|F'_{x_0}{}^{-1}\| \int_0^1 \left\| F'_{y + t(x - y)}(x - y) - F'_{x_0}(x - y) \right\| dt \\
&\le \beta \int_0^1 K_0\|y + t(x - y) - x_0\|\,\|x - y\|\,dt \\
&\le \beta rK_0\|x - y\| = \gamma\|x - y\|.
\end{aligned}
$$

Therefore, the operator $A$ is a contraction with Lipschitz constant $\gamma$.

Now we claim that $A(B_r[x_0]) \subseteq B_r[x_0]$. For $x \in B_r[x_0]$, we have

$$
\begin{aligned}
\|Ax - x_0\| &\le \|Ax - Ax_0\| + \|Ax_0 - x_0\| \\
&\le \|F'_{x_0}{}^{-1}\| \int_0^1 \left\| F'_{x_0} - F'_{x_0 + t(x - x_0)} \right\| \|x - x_0\|\,dt + \eta \\
&\le \beta \int_0^1 K_0 t\|x - x_0\|\,\|x - x_0\|\,dt + \eta \\
&= \frac{\beta K_0}{2}\|x - x_0\|^2 + \eta \le \frac{\beta r^2 K_0}{2} + \eta \\
&= \frac{\beta K_0}{2}\left( \frac{1 - \sqrt{1 - 2h}}{h}\eta \right)^2 + \eta = r.
\end{aligned}
$$

Hence, the operator $A$ maps $B_r[x_0]$ into itself. By the Banach contraction principle (see Lemma 2.5), $A$ has a unique fixed point in $B_r[x_0]$.

(b) For $x, y \in B_r[x_0]$, we have

$$
\begin{aligned}
\|A_\alpha x - A_\alpha y\| &= \left\| A\left( (1 - \alpha)x + \alpha Ax \right) - A\left( (1 - \alpha)y + \alpha Ay \right) \right\| \\
&\le \gamma\left\| (1 - \alpha)x + \alpha Ax - (1 - \alpha)y - \alpha Ay \right\| \\
&= \gamma\left\| (1 - \alpha)(x - y) + \alpha(Ax - Ay) \right\| \\
&\le \gamma(1 - \alpha + \alpha\gamma)\|x - y\|.
\end{aligned}
$$

Therefore, the S-operator $A_\alpha$ generated by $\alpha$ and $A$ is a contraction operator on $B_r[x_0]$ with Lipschitz constant $\beta rK_0(1 - \alpha + \alpha\beta rK_0)$. □

Next we show that the operator $A_\lambda$ defined below by (3.2) is a quasi-contraction.

Theorem 3.2 Let $F$ be a Fréchet differentiable operator defined on an open convex subset $D$ of a Banach space $X$ with values in a Banach space $Y$. Assume that $\lambda \in (0, 1]$ and $x^* \in D$ is a solution of (1.1) such that $F'_{x^*}{}^{-1} \in B(Y, X)$. For some $x_0 \in D$, let $F'_{x^*}{}^{-1}$ and $F$ satisfy conditions (1.6) and (1.7). Assume that $B_{r_1}(x^*) \subseteq D$, where $r_1 = \frac{2}{K_2}$. For $x_0 \in B_r(x^*)$ with $r = \frac{2}{2K_2 + 3K_1}$, let $A_\lambda$ be the operator defined by

$A_\lambda x = x - \lambda F'_{x_0}{}^{-1}F(x), \quad x \in B_r(x^*).$
(3.2)

Then, we have the following:

(a) For $x \in B_r(x^*)$, we have

$\|A_\lambda x - x^*\| \le (\lambda\delta_x + 1 - \lambda)\|x - x^*\|,$
(3.3)

where

$\delta_x = \frac{K_1}{2(1 - rK_2)}\left( \|x - x^*\| + 2\|x_0 - x^*\| \right).$
(3.4)

(b) $A_\lambda$ is a quasi-contraction self-operator on $B_r(x^*)$ with constant $1 - (1 - \delta)\lambda$, where $\delta = \sup_{x \in B_r(x^*)}\delta_x$.

Proof: (a) For $x \in B_r(x^*)$ with $x \ne x^*$, using Lemma 2.6 we have

$$
\begin{aligned}
\|A_\lambda x - x^*\| &= \left\| x - \lambda F'_{x_0}{}^{-1}F(x) - x^* \right\| \\
&= \left\| \lambda\left( x - x^* - F'_{x_0}{}^{-1}(F(x) - F(x^*)) \right) + (1 - \lambda)(x - x^*) \right\| \\
&\le \lambda\left\| F'_{x_0}{}^{-1}\left( F(x) - F(x^*) - F'_{x_0}(x - x^*) \right) \right\| + (1 - \lambda)\|x - x^*\| \\
&= \lambda\left\| (F'_{x^*}{}^{-1}F'_{x_0})^{-1} \int_0^1 F'_{x^*}{}^{-1}\left( F'_{tx + (1 - t)x^*} - F'_{x_0} \right)(x - x^*)\,dt \right\| + (1 - \lambda)\|x - x^*\| \\
&\le \lambda\left\| (F'_{x^*}{}^{-1}F'_{x_0})^{-1} \right\| \int_0^1 \left\| F'_{x^*}{}^{-1}\left( F'_{tx + (1 - t)x^*} - F'_{x_0} \right) \right\| \|x - x^*\|\,dt + (1 - \lambda)\|x - x^*\|.
\end{aligned}
$$

By (1.6) and Lemma 2.4, we have

$$
\begin{aligned}
\|A_\lambda x - x^*\| &\le \frac{\lambda K_1}{1 - K_2\|x_0 - x^*\|} \int_0^1 \|t(x - x^*) + x^* - x_0\|\,\|x - x^*\|\,dt + (1 - \lambda)\|x - x^*\| \\
&\le \frac{\lambda K_1}{1 - K_2\|x_0 - x^*\|} \int_0^1 \left( t\|x - x^*\| + \|x_0 - x^*\| \right)\|x - x^*\|\,dt + (1 - \lambda)\|x - x^*\| \\
&= \frac{\lambda K_1}{1 - K_2\|x_0 - x^*\|}\left( \frac{\|x - x^*\|}{2} + \|x_0 - x^*\| \right)\|x - x^*\| + (1 - \lambda)\|x - x^*\| \\
&\le \frac{\lambda K_1}{2(1 - rK_2)}\left( \|x - x^*\| + 2\|x_0 - x^*\| \right)\|x - x^*\| + (1 - \lambda)\|x - x^*\| \\
&= (\lambda\delta_x + 1 - \lambda)\|x - x^*\|.
\end{aligned}
$$

(b) The operator $A_\lambda$ is a quasi-contraction with constant $1 - (1 - \delta)\lambda$. Indeed,

$$
\sup_{x \in B_r(x^*)}\delta_x = \frac{K_1}{2(1 - rK_2)}\left( \sup_{x \in B_r(x^*)}\|x - x^*\| + 2\|x_0 - x^*\| \right) \le \frac{K_1}{2(1 - rK_2)}\left( r + 2\|x_0 - x^*\| \right) < \frac{3rK_1}{2(1 - rK_2)} = 1.
$$

This completes the proof. □

Corollary 3.3 Let $F$ be a Fréchet differentiable operator defined on an open convex subset $D$ of a Banach space $X$ with values in a Banach space $Y$. Let $\alpha \in (0, 1)$ and let $x^* \in D$ be a solution of (1.1) such that $F'_{x^*}{}^{-1} \in B(Y, X)$. For $x_0 \in D$, let $F'_{x^*}{}^{-1}$ and $F$ satisfy conditions (1.6) and (1.7). Assume that $B_{r_1}(x^*) \subseteq D$, where $r_1 = \frac{2}{K_2}$. For $x_0$ in $B_r(x^*)$ with $r = \frac{2}{2K_2 + 3K_1}$, let $A$ be the operator defined by (3.1) and let $A_\alpha$ be the S-operator generated by $\alpha$ and $A$. Then, the following hold:

$\|Ax - x^*\| \le \delta_x\|x - x^*\|, \quad \forall x \in B_r(x^*),$
(3.5)
$\|A_\alpha x - x^*\| \le \delta_x(1 - \alpha + \alpha\delta_x)\|x - x^*\|, \quad \forall x \in B_r(x^*),$
(3.6)

where $\delta_x$ is defined in (3.4).

Corollary 3.4 Let $F$ be a Fréchet differentiable operator defined on an open convex subset $D$ of a Banach space $X$ with values in a Banach space $Y$. Let $x^* \in D$ be a solution of (1.1) such that $F'_{x^*}{}^{-1} \in B(Y, X)$. For $x_0 \in D$ and $\alpha \in (0, 1)$, let $F'_{x^*}{}^{-1}$ and $F$ satisfy conditions (1.6) and (1.7). Assume that $B_{r_1}(x^*) \subseteq D$, where $r_1 = \frac{2}{K_2}$. For $x_0$ in $B_r(x^*)$ with $r = \frac{2}{2K_2 + 3K_1}$, let $U_\alpha$ be the operator defined by

$U_\alpha x = x - \alpha F'_{x_0}{}^{-1}F(x), \quad x \in B_r(x^*).$
(3.7)

Then $U_\alpha$ is a quasi-contraction self-operator on $B_r(x^*)$ and the following holds:

$\|U_\alpha x - x^*\| \le (\alpha\delta_x + 1 - \alpha)\|x - x^*\|, \quad \forall x \in B_r(x^*).$
(3.8)

Now, we are ready to study the semilocal convergence analysis of Algorithm 1.3.

Theorem 3.5 Let $F$ be a Fréchet differentiable operator defined on an open convex subset $D$ of a Banach space $X$ with values in a Banach space $Y$. For some $x_0 \in D$, let $F'_{x_0}{}^{-1} \in B(Y, X)$. Assume that $F'_{x_0}{}^{-1}$ and $F$ satisfy (1.5) together with the following conditions:

(i) $\|F'_{x_0}{}^{-1}F(x_0)\| \le \eta$, for some $\eta > 0$;

(ii) $\|F'_{x_0}{}^{-1}\| \le \beta$, for some $\beta > 0$.

Assume that $\alpha \in (0, 1)$, $h = \eta\beta K_0 < \frac{1}{2}$ and $B_r[x_0] \subseteq D$, where $r = \frac{1 - \sqrt{1 - 2h}}{h}\eta$. Then we have the following:

(a) Equation (1.1) has a unique solution $x^* \in B_r[x_0]$.

(b) The sequence $\{x_n\}$ generated by Algorithm 1.3 is in $B_r[x_0]$ and it converges strongly to $x^*$.

(c) The following error estimate holds:

$\|x_{n+1} - x^*\| \le \rho^{n+1}\|x_0 - x^*\|, \quad n \ge 0,$
(3.9)

where $\rho = \gamma(1 - \alpha + \alpha\gamma)$ and $\gamma = \beta rK_0$.

Proof: (a) It follows from Theorem 3.1.

(b) From Algorithm 1.3, we have

$\|x_{n+1} - x^*\| = \|A_\alpha x_n - A_\alpha x^*\| \le \gamma(1 - \alpha + \alpha\gamma)\|x_n - x^*\| \le \left[ \gamma(1 - \alpha + \alpha\gamma) \right]^{n+1}\|x_0 - x^*\|, \quad n \ge 0.$
(3.10)

Therefore, $x_n \to x^*$ as $n \to \infty$.

(c) It follows from (3.10). □

Remark 3.6 Condition (1.5) of Theorem 3.5 is a weaker assumption than assumption (iii) of Theorem 1.1. Also, one can observe from (1.4) and (3.9) that

$\rho = \gamma(1 - \alpha + \alpha\gamma) < \gamma.$
(3.11)

The strict inequality (3.11) shows that the error estimate in Theorem 3.5 is sharper than that of Theorem 1.1.

Now, we study the local convergence analysis for Algorithm 1.3.

Theorem 3.7 Let $F$ be a Fréchet differentiable operator defined on an open convex subset $D$ of a Banach space $X$ with values in a Banach space $Y$. Let $\alpha \in (0, 1)$ and let $x^* \in D$ be a solution of (1.1) such that $F'_{x^*}{}^{-1} \in B(Y, X)$. For $x_0 \in D$, let $F'_{x^*}{}^{-1}$ and $F$ satisfy conditions (1.6) and (1.7). Assume that $B_{r_1}(x^*) \subseteq D$, where $r_1 = \frac{2}{K_2}$. Then, we have the following:

(a) For any initial point $x_0 \in B_r(x^*)$ with $r = \frac{2}{2K_2 + 3K_1}$, the sequence $\{x_n\}$ generated by Algorithm 1.3 is in $B_r(x^*)$ and it converges strongly to the unique solution $x^*$ in $B_{r_1}(x^*)$.

(b) The following error estimate holds:

$\|x_{n+1} - x^*\| \le (\rho')^{n+1}\|x_0 - x^*\|, \quad n \ge 0,$
(3.12)

where $\rho' = \delta_0(1 - \alpha + \alpha\delta_0)$ and $\delta_0 = \frac{\|x_0 - x^*\|}{r}$.

Proof: (a) First we show that $x^*$ is the unique solution of (1.1) in $B_{r_1}(x^*)$. For contradiction, suppose that $y^* \ne x^*$ is another solution of (1.1) in $B_{r_1}(x^*)$. Then, we have

$0 = F(x^*) - F(y^*) = \int_0^1 F'_{y^* + t(x^* - y^*)}(x^* - y^*)\,dt.$

Define an operator $L$ by

$Lh = \int_0^1 F'_{y^* + t(x^* - y^*)}h\,dt, \quad h \in X.$

Consequently, we have

$\|I - F'_{x^*}{}^{-1}L\| = \left\| \int_0^1 F'_{x^*}{}^{-1}\left( F'_{x^*} - F'_{y^* + t(x^* - y^*)} \right)dt \right\| \le \frac{K_2}{2}\|x^* - y^*\| < \frac{r_1 K_2}{2} = 1.$

It follows from Lemma 2.3 that the operator $L$ is invertible and hence $x^* = y^*$, a contradiction. Thus, $x^*$ is the unique solution of (1.1) in $B_{r_1}(x^*)$.

Next, we show that $\{x_n\}$ converges to $x^*$. By Corollary 3.3, the operator $A_\alpha$ is a quasi-contraction self-operator on $B_r(x^*)$. Thus, $x_n \in B_r(x^*)$ for all $n \ge 0$. Now, we have

$\|x_{n+1} - x^*\| = \|A_\alpha x_n - x^*\| \le \delta_{x_n}(1 - \alpha + \alpha\delta_{x_n})\|x_n - x^*\|, \quad n \ge 0,$
(3.13)

where $\delta_x$ is defined by (3.4). Since $\delta_{x_n} < 1$ for all $n \ge 0$, we have

$0 \le \|x_{n+1} - x^*\| \le \|x_n - x^*\| \le \cdots \le \|x_0 - x^*\|, \quad n \ge 0.$

By the definition of $\delta_x$, we have

$\delta_{x_n} = \frac{K_1}{2(1 - rK_2)}\left( \|x_n - x^*\| + 2\|x_0 - x^*\| \right) \le \frac{3K_1\|x_0 - x^*\|}{2(1 - rK_2)} = \frac{\|x_0 - x^*\|}{r} = \delta_0, \quad n \ge 0.$

Thus, by (3.13), we have

$\|x_{n+1} - x^*\| \le \left[ \delta_0(1 - \alpha + \alpha\delta_0) \right]^{n+1}\|x_0 - x^*\|, \quad n \ge 0,$
(3.14)

which implies $x_n \to x^*$ as $n \to \infty$.

(b) By (3.14), we get the error estimate. □

Remark 3.8 One can observe from (1.8) and (3.12) that

$\rho' = \delta_0(1 - \alpha + \alpha\delta_0) < \delta_0.$
(3.15)

The strict inequality (3.15) shows that the error estimate in Theorem 3.7 is sharper than that of Theorem 1.2.

Before presenting the local convergence result for Algorithm 1.4, we need the following theorem:

Theorem 3.9 Let $F$ be a Fréchet differentiable operator defined on an open convex subset $D$ of a Banach space $X$ with values in a Banach space $Y$. Let $x^* \in D$ be a solution of (1.1) such that $F'_{x^*}{}^{-1} \in B(Y, X)$. For $x_0 \in D$ and $\alpha \in (0, 1)$, let $F'_{x^*}{}^{-1}$ and $F$ satisfy conditions (1.6) and (1.7). Let $B_{r_1}(x^*) \subseteq D$ and $y_0 \in B_r(x^*)$, where $r_1 = \frac{2}{K_2}$ and $r = \frac{2}{2K_2 + 3K_1}$. Assume that

$\|F'_{x^*}{}^{-1}(F'_x - F'_{y_0})\| \le K_3\|x - y_0\|, \quad \forall x \in D \text{ and for some } K_3 > 0.$
(3.16)

Define an operator $V$ by

$Vx = x - F'_{y_0}{}^{-1}F(x), \quad x \in B_r(x^*).$
(3.17)

Then, we have the following:

(a) For $x \in B_r(x^*)$, we have

$\|Vx - x^*\| \le \delta_x\|x - x^*\|,$

where $\delta_x = \frac{K_3}{2(1 - rK_2)}\left( \|x - x^*\| + 2\|y_0 - x^*\| \right)$, $x \in B_r(x^*)$.

(b) If $K_3 \le K_1$, then $V$ is a quasi-contraction self-operator on $B_r(x^*)$ with constant $\delta'$, where $\delta' = \sup_{x \in B_r(x^*)}\delta_x$.

Proof: (a) For $x \in B_r(x^*)$, using Lemma 2.6, (3.16) and Lemma 2.4, we have

$$
\begin{aligned}
\|Vx - x^*\| &= \left\| x - F'_{y_0}{}^{-1}F(x) - x^* \right\| = \left\| F'_{y_0}{}^{-1}\left( F(x) - F(x^*) - F'_{y_0}(x - x^*) \right) \right\| \\
&= \left\| (F'_{x^*}{}^{-1}F'_{y_0})^{-1} \int_0^1 F'_{x^*}{}^{-1}\left( F'_{x^* + t(x - x^*)} - F'_{y_0} \right)(x - x^*)\,dt \right\| \\
&\le \frac{K_3}{1 - K_2\|y_0 - x^*\|}\left( \frac{\|x - x^*\|}{2} + \|y_0 - x^*\| \right)\|x - x^*\| \\
&\le \frac{K_3}{2(1 - rK_2)}\left( \|x - x^*\| + 2\|y_0 - x^*\| \right)\|x - x^*\| = \delta_x\|x - x^*\|.
\end{aligned}
$$

(b) The operator $V$ is a quasi-contraction with constant $\delta'$. Indeed, since $K_3 \le K_1$,

$$
\sup_{x \in B_r(x^*)}\delta_x \le \frac{K_3}{2(1 - rK_2)}\left( r + 2\|y_0 - x^*\| \right) < \frac{3rK_3}{2(1 - rK_2)} \le \frac{3rK_1}{2(1 - rK_2)} = 1.
$$

This completes the proof. □

Now, we are ready to study the local convergence analysis of Algorithm 1.4.

Theorem 3.10 Let $F$ be a Fréchet differentiable operator defined on an open convex subset $D$ of a Banach space $X$ with values in a Banach space $Y$. Let $\alpha \in (0, 1)$ and let $x^* \in D$ be a solution of (1.1) such that $F'_{x^*}{}^{-1} \in B(Y, X)$. Assume that $B_{r_1}(x^*) \subseteq D$, where $r_1 = \frac{2}{K_2}$. For any $x_0 \in B_r(x^*)$, where $r = \frac{2}{2K_2 + 3K_1}$, assume that $F'_{x^*}{}^{-1}$ and $F$ satisfy conditions (1.6), (1.7) and (3.16) and that $K_3 \le K_1$. Then we have the following:

(a) The sequence $\{x_n\}$ generated by Algorithm 1.4 is in $B_r(x^*)$ and it converges strongly to the unique solution $x^*$ in $B_{r_1}(x^*)$.

(b) The following error estimate holds:

$\|x_{n+1} - x^*\| \le (\rho'')^{n+1}\|x_0 - x^*\|, \quad n \ge 0,$
(3.18)

where $\rho'' = (\alpha\delta_0 + 1 - \alpha)\delta'_0$, $\delta_0 = \frac{\|x_0 - x^*\|}{r}$ and $\delta'_0 = \frac{3K_3}{2(1 - rK_2)}\|x_0 - x^*\|$.

Proof: (a) As in Theorem 3.7, $x^*$ is the unique solution of (1.1) in $B_{r_1}(x^*)$. Corollary 3.4 shows that the operator $U_\alpha$ defined by (3.7) is a self-operator on $B_r(x^*)$. Hence, from (1.12), we have

$z_0 = (1 - \alpha)x_0 + \alpha y_0 = U_\alpha x_0 \in B_r(x^*).$

It follows from Theorem 3.9 (applied with the frozen point $z_0$ in place of $y_0$) that the operator $V$ defined by (3.17) is a self-operator on $B_r(x^*)$. Thus, Algorithm 1.4 can be written as

$x_{n+1} = VU_\alpha x_n, \quad n \ge 0.$

Writing $y_n := U_\alpha x_n$, from Theorem 3.9(a) and (3.8) we have

$\|x_{n+1} - x^*\| = \|VU_\alpha x_n - x^*\| \le \delta_{y_n}\|U_\alpha x_n - x^*\| \le \delta_{y_n}(\alpha\delta_{x_n} + 1 - \alpha)\|x_n - x^*\|, \quad n \ge 0.$
(3.19)

By the definition of $\delta_x$ in Theorem 3.9, and since $\|z_0 - x^*\| \le \|x_0 - x^*\|$, we have

$$
\begin{aligned}
\delta_{y_n} &= \frac{K_3}{2(1 - rK_2)}\left( \|U_\alpha x_n - x^*\| + 2\|z_0 - x^*\| \right) \\
&\le \frac{K_3}{2(1 - rK_2)}\left( (\alpha\delta_{x_n} + 1 - \alpha)\|x_n - x^*\| + 2\|x_0 - x^*\| \right) \\
&\le \frac{K_3}{2(1 - rK_2)}\left( \|x_0 - x^*\| + 2\|x_0 - x^*\| \right) = \frac{3K_3}{2(1 - rK_2)}\|x_0 - x^*\| = \delta'_0, \quad n \ge 0.
\end{aligned}
$$

Since also $\delta_{x_n} \le \delta_0 = \frac{\|x_0 - x^*\|}{r}$ as in Theorem 3.7, (3.19) yields

$\|x_{n+1} - x^*\| \le \left[ (\alpha\delta_0 + 1 - \alpha)\delta'_0 \right]^{n+1}\|x_0 - x^*\|, \quad n \ge 0,$
(3.20)

which implies $x_n \to x^*$ as $n \to \infty$.

(b) From (3.20), we get the error estimate. □

Remark 3.11 One can observe from (1.8), (3.12) and (3.18) that

$\rho'' = (\alpha\delta_0 + 1 - \alpha)\delta'_0 = \frac{3K_3(\alpha\delta_0 + 1 - \alpha)}{2(1 - rK_2)}\|x_0 - x^*\| \le \frac{3K_1(\alpha\delta_0 + 1 - \alpha)}{2(1 - rK_2)}\|x_0 - x^*\| = \delta_0(\alpha\delta_0 + 1 - \alpha) = \rho' < \delta_0.$
(3.21)

Inequality (3.21) shows that the error estimate in Theorem 3.10 is sharper than those of Theorems 1.2 and 3.7.

4 Application to initial and boundary value problems

Throughout this section, let $C[0, 1]$ be the space of real-valued continuous functions defined on the interval $[0, 1]$ with norm

$\|x\| = \max_{0 \le t \le 1}|x(t)|.$

4.1 Initial value problem

Consider the initial value problem

$\frac{dy(s)}{ds} = f(s, y(s)), \quad y(0) = 0, \quad 0 \le s \le 1.$
(4.1)

Let $f_2(s, y(s)) = \frac{\partial}{\partial y}f(s, y(s))$, the partial derivative of $f$ with respect to its second argument. Assume that $f_2(s, y(s))$ exists for all $(s, y(s)) \in S = [0, 1] \times \mathbb{R}$ and

$\|f_2(s, y(s)) - f_2(s, y_0(s))\| \le K_0\|y - y_0\|, \quad \forall y \in C^1[0, 1] \text{ and for some } K_0 > 0,$
(4.2)

where $y_0 \in C^1[0, 1]$ and $C^1[0, 1]$ is the space of all real-valued continuously differentiable functions defined on $[0, 1]$.

Consider the operator $F : C^1[0, 1] \to C[0, 1]$ defined by

$F(y)(s) = \frac{dy(s)}{ds} - f(s, y(s)).$
(4.3)

Then solving problem (4.1) is equivalent to solving Equation (1.1). One can observe that the operator $F$ defined by (4.3) is Fréchet differentiable and its Fréchet derivative is given by

$F'_y h(s) = \frac{dh(s)}{ds} - f_2(s, y(s))h(s), \quad h \in C^1[0, 1].$

Theorem 4.1 Let $F : C^1[0, 1] \to C[0, 1]$ be the operator defined by (4.3). For some $y_0 \in C^1[0, 1]$, assume that $F'_{y_0}{}^{-1}$ exists and $f_2(s, y(s))$ satisfies (4.2). Suppose that $h = \frac{K_0\|F(y_0)\|}{(1 - \theta_1)^2} < \frac{1}{2}$, where $\theta_1 = \sup_{s \in [0, 1]}|f_2(s, y_0(s))| < 1$. Then, we have the following:

(a) The initial value problem (4.1) has a unique solution $y^*$ in $B_r[y_0]$, where $r = \frac{1 - \sqrt{1 - 2h}}{h}\eta$ and $\eta = \frac{\|F(y_0)\|}{1 - \theta_1}$.

(b) For $x_0 = y_0$, the sequence $\{x_n\}$ generated by Algorithm 1.3 is in $B_r[y_0]$ and it converges strongly to $y^*$.

Proof: Our goal is to find an upper bound for $\|F'_{y_0}{}^{-1}\|$. Set

$F'_{y_0}h(s) = \frac{dh(s)}{ds} - f_2(s, y_0(s))h(s) = u(s).$

Since $F'_{y_0}{}^{-1}$ exists, we can write $h(s) = F'_{y_0}{}^{-1}u(s)$ and arrive at the first-order linear initial value problem

$\frac{dh(s)}{ds} = u(s) + f_2(s, y_0(s))h(s); \quad h(0) = 0.$
(4.4)

It should be noted that problem (4.4) is equivalent to the Volterra integral equation of the second kind (see [18])

$h(s) = \int_0^s \left[ u(\tau) + f_2(\tau, y_0(\tau))h(\tau) \right] d\tau.$

Consider the operator $L$ defined by

$Lh(s) = \int_0^s f_2(\tau, y_0(\tau))h(\tau)\,d\tau.$

Clearly, the operator $L$ is linear, and we can write

$(I - L)h(s) = \int_0^s u(\tau)\,d\tau.$

Using the max-norm on $C[0, 1]$, the operator $L$ is bounded and satisfies $\|L\| \le \theta_1$, where $\theta_1 = \sup_{s \in [0, 1]}|f_2(s, y_0(s))|$. Consequently, if $\theta_1 < 1$, then by Lemma 2.3 the inverse $(I - L)^{-1}$ exists and $\|(I - L)^{-1}\| \le \frac{1}{1 - \theta_1}$. Therefore, $\|F'_{y_0}{}^{-1}\| \le \frac{1}{1 - \theta_1}$. Indeed,

$\|F'_{y_0}{}^{-1}u\| = \max_{s \in [0, 1]}\left| F'_{y_0}{}^{-1}u(s) \right| = \max_{s \in [0, 1]}\left| (I - L)^{-1}\int_0^s u(\tau)\,d\tau \right| \le \frac{1}{1 - \theta_1}\|u\|.$

Next, by (4.2), we have

$\|F'_y - F'_{y_0}\| \le K_0\|y - y_0\|, \quad \forall y \in C^1[0, 1].$

Clearly, $\eta = \frac{\|F(y_0)\|}{1 - \theta_1}$ and $\beta = \frac{1}{1 - \theta_1}$. By assumption, $h = \eta\beta K_0 < \frac{1}{2}$. Thus, all the assumptions of Theorem 3.5 are satisfied, and Theorem 4.1 follows from Theorem 3.5. □
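To see the hypotheses of Theorem 4.1 in action, consider the hypothetical choice $f(s, y) = \frac{1}{4}(s + \sin y)$ with $y_0 = 0$ (our own illustrative example, not from the paper). Then $f_2(s, y) = \frac{1}{4}\cos y$, so $\theta_1 = \frac{1}{4}$, $K_0 = \frac{1}{4}$ and $\|F(y_0)\| = \max_{s}|{-f(s, 0)}| = \frac{1}{4}$. A quick Python check that $h < \frac{1}{2}$:

```python
# Hypothetical data for Theorem 4.1 with f(s, y) = (s + sin y)/4 and y0 = 0
# (an illustrative choice; the constants below follow from that choice).
theta1 = 0.25     # sup_s |f_2(s, y0(s))| = sup_s |cos(0)|/4
K0 = 0.25         # Lipschitz constant of f_2 in y: |cos y - cos y'|/4 <= |y - y'|/4
F_y0_norm = 0.25  # ||F(y0)|| = max_{0<=s<=1} |0 - (s + sin 0)/4| = 1/4

h = K0 * F_y0_norm / (1.0 - theta1) ** 2
# h = 1/9 < 1/2, so Theorem 4.1 (and hence Algorithm 1.3) applies
```

Any $f$ whose data satisfy $h < \frac{1}{2}$ in this way yields existence, uniqueness and convergence via Theorem 3.5.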

4.2 Boundary value problem

Consider the second-order boundary value problem

$\frac{d^2y(s)}{ds^2} = g(s, y(s)), \quad y(0) = 0, \quad y(1) = 0.$
(4.5)

Assume that $g_2(s, y(s))$, the partial derivative of $g$ with respect to its second argument, exists for all $(s, y(s)) \in S = [0, 1] \times \mathbb{R}$ and

$\|g_2(s, y(s)) - g_2(s, y_0(s))\| \le K_0\|y - y_0\|,$
(4.6)

where $y_0 \in C^2[0, 1]$ and $C^2[0, 1]$ is the space of all real-valued twice continuously differentiable functions defined on $[0, 1]$. Consider the operator $F : C^2[0, 1] \to C[0, 1]$ defined by

$F(y)(s) = \frac{d^2y(s)}{ds^2} - g(s, y(s)).$
(4.7)

Then solving problem (4.5) is equivalent to solving (1.1). As in the case of the initial value problem, one can observe that the operator $F$ defined by (4.7) is Fréchet differentiable and its Fréchet derivative at each $y \in C^2[0, 1]$ is given by

$F'_y h(s) = \frac{d^2h(s)}{ds^2} - g_2(s, y(s))h(s), \quad h \in C^2[0, 1], \; s \in [0, 1].$

Theorem 4.2 Let $F : C^2[0, 1] \to C[0, 1]$ be the operator defined by (4.7). For some $y_0 \in C^2[0, 1]$, assume that $F'_{y_0}{}^{-1}$ exists and $g_2(s, y(s))$ satisfies (4.6). Suppose that $h = \frac{K_0\|F(y_0)\|}{(8 - \theta_2)^2} < \frac{1}{2}$, where $\theta_2 = \sup_{s \in [0, 1]}|g_2(s, y_0(s))| < 8$. Then, we have the following:

(a) The boundary value problem (4.5) has a unique solution $y^*$ in $B_r[y_0]$, where $r = \frac{1 - \sqrt{1 - 2h}}{h}\eta$ and $\eta = \frac{\|F(y_0)\|}{8 - \theta_2}$.

(b) For $x_0 = y_0$, the sequence $\{x_n\}$ generated by Algorithm 1.3 is in $B_r[y_0]$ and it converges strongly to $y^*$.

Proof: To prove the theorem it is sufficient to find an upper bound for $\|F'_{y_0}{}^{-1}\|$. Set

$F'_{y_0}h(s) = \frac{d^2h(s)}{ds^2} - g_2(s, y_0(s))h(s) = u(s);$

then $h(s) = F'_{y_0}{}^{-1}u(s)$ and we arrive at the linear boundary value problem

$\frac{d^2h(s)}{ds^2} = u(s) + g_2(s, y_0(s))h(s); \quad h(0) = h(1) = 0.$
(4.8)

Problem (4.8) may be written in the form of the Fredholm integral equation of the second kind (see [18]) as

$h(s) = -\int_0^1 G(s, \tau)\left[ u(\tau) + g_2(\tau, y_0(\tau))h(\tau) \right] d\tau,$

where the Green's function is

$G(s, \tau) = \begin{cases} \tau(1 - s), & \tau \le s; \\ s(1 - \tau), & s \le \tau. \end{cases}$

Consider the operator $L$ defined by

$Lh(s) = -\int_0^1 G(s, \tau)g_2(\tau, y_0(\tau))h(\tau)\,d\tau,$

and consequently we have

$(I - L)h(s) = -\int_0^1 G(s, \tau)u(\tau)\,d\tau.$

Using the max norm on $C[0, 1]$ and $\max_{s \in [0, 1]}\int_0^1 G(s, \tau)\,d\tau = \frac{1}{8}$, the operator $L$ is bounded and

$\|L\| \le \frac{\theta_2}{8}, \quad \text{where } \theta_2 = \sup_{s \in [0, 1]}|g_2(s, y_0(s))|.$

By Lemma 2.3, $(I - L)^{-1}$ exists if $\theta_2 < 8$, and

$\|(I - L)^{-1}\| \le \frac{8}{8 - \theta_2}.$

Observe that

$\|F'_{y_0}{}^{-1}\| \le \frac{1}{8 - \theta_2} \quad \text{and} \quad \|F'_{y_0}{}^{-1}F(y_0)\| \le \frac{\|F(y_0)\|}{8 - \theta_2}.$

Finally, by (4.6), we have

$\|F'_y - F'_{y_0}\| \le K_0\|y - y_0\|, \quad \forall y \in C^2[0, 1].$

Clearly, $\eta = \frac{\|F(y_0)\|}{8 - \theta_2}$ and $\beta = \frac{1}{8 - \theta_2}$. By assumption, $h = \eta\beta K_0 < \frac{1}{2}$. Thus, all the assumptions of Theorem 3.5 are satisfied, and Theorem 4.2 follows from Theorem 3.5. □

5 Numerical examples

First, we derive the following corollary from Theorem 3.5.

Corollary 5.1 For $N \in \mathbb{N}$, let $F : \mathbb{R}^N \to \mathbb{R}^N$ be a Fréchet differentiable operator at each point of an open convex subset $D$ of $\mathbb{R}^N$ defined by

$F(x) = \left( f_1(x), f_2(x), \ldots, f_N(x) \right), \quad x = (x_1, x_2, \ldots, x_N) \in D,$

where $f_i : \mathbb{R}^N \to \mathbb{R}$, for $i = 1, 2, \ldots, N$. For some $x_0 \in D$, assume that the Jacobian matrix $[J_F(x_0)]$ of $F$ at $x_0$, defined by

$[J_F(x_0)] = \begin{pmatrix} \frac{\partial f_1(x_0)}{\partial x_1} & \frac{\partial f_1(x_0)}{\partial x_2} & \cdots & \frac{\partial f_1(x_0)}{\partial x_N} \\ \frac{\partial f_2(x_0)}{\partial x_1} & \frac{\partial f_2(x_0)}{\partial x_2} & \cdots & \frac{\partial f_2(x_0)}{\partial x_N} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_N(x_0)}{\partial x_1} & \frac{\partial f_N(x_0)}{\partial x_2} & \cdots & \frac{\partial f_N(x_0)}{\partial x_N} \end{pmatrix},$

is invertible. Suppose that the inverse matrix $[J_F(x_0)]^{-1}$ and $F$ satisfy the following conditions:

(i) $\|[J_F(x_0)]^{-1}F(x_0)\| \le \eta$, for some $\eta > 0$;

(ii) $\|[J_F(x_0)]^{-1}\| \le \beta$, for some $\beta > 0$;

(iii) $\|[J_F(x)] - [J_F(x_0)]\| \le K_0\|x - x_0\|$, for all $x \in D$ and for some $K_0 > 0$.

Let $\alpha \in (0, 1)$, $h = \eta\beta K_0 < \frac{1}{2}$ and $B_r[x_0] \subseteq D$, where $r = \frac{1 - \sqrt{1 - 2h}}{h}\eta$. Then we have the following:

(a) Equation (1.1) has a unique solution $x^* \in B_r[x_0]$.

(b) The sequence $\{x_n\}$ generated by

$y_n = x_n - [J_F(x_0)]^{-1}F(x_n), \quad z_n = (1 - \alpha)x_n + \alpha y_n, \quad x_{n+1} = z_n - [J_F(x_0)]^{-1}F(z_n), \quad n \ge 0$
(5.1)

is in $B_r[x_0]$ and it converges to $x^*$.

(c) The following error estimate holds:

$\|x_{n+1} - x^*\| \le \rho^{n+1}\|x_0 - x^*\|, \quad n \ge 0,$

where $\rho = \gamma(1 - \alpha + \alpha\gamma)$ and $\gamma = \beta rK_0$.

The following example shows numerically that (5.1) is faster than the modified Newton method defined by (1.3).

Example 5.2 Let $X = \mathbb{R}$, $D = (-1, 1)$ and $F : D \to \mathbb{R}$ be the operator defined by

$F(x) = e^x - 1, \quad x \in D.$

Then $F$ is Fréchet differentiable and its Fréchet derivative $F'_x$ at any point $x \in D$ is given by

$F'_x = e^x.$

For $x_0 = 0.26$, we have

$F'_{x_0}{}^{-1} = \frac{1}{e^{0.26}}.$

Setting $\beta = 0.771051585803566$, $\eta = 0.228948414196434$ and $K_0 = e = 2.718281828459046$, we have $h = \eta\beta K_0 < \frac{1}{2}$ and

$\|F'_{x_0}{}^{-1}\| \le \beta, \quad \|F'_{x_0}{}^{-1}F(x_0)\| \le \eta, \quad \|F'_x - F'_{x_0}\| \le K_0\|x - x_0\|.$

Hence, all the conditions of Corollary 5.1 are satisfied. Therefore, the sequence $\{x_n\}$ generated by (5.1) is in $B_r[x_0]$ and it converges to a unique $x^* \in B_r[x_0]$. Figure 1 and Table 1 show that the sequence $\{x_n\}$ generated by (5.1) converges faster than the modified Newton method defined by (1.3).
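The comparison reported in Figure 1 and Table 1 is easy to reproduce. The following Python sketch (our own code; $\alpha = 0.5$ is an arbitrary choice) runs both iterations from $x_0 = 0.26$; the exact solution is $x^* = 0$:

```python
import math

def modified_newton_err(x0, steps):
    # (1.3) for F(x) = e^x - 1 with the inverse derivative frozen at x0
    inv = math.exp(-x0)
    x = x0
    for _ in range(steps):
        x -= inv * (math.exp(x) - 1.0)
    return abs(x)  # distance to the root x* = 0

def s_iteration_err(x0, alpha, steps):
    # (5.1)/(1.11): the same frozen inverse applied twice per step
    inv = math.exp(-x0)
    x = x0
    for _ in range(steps):
        y = x - inv * (math.exp(x) - 1.0)
        z = (1.0 - alpha) * x + alpha * y
        x = z - inv * (math.exp(z) - 1.0)
    return abs(x)

err_mn = modified_newton_err(0.26, 8)
err_s = s_iteration_err(0.26, 0.5, 8)
# err_s is smaller than err_mn: the S-type scheme converges faster
```

Printing the two error sequences step by step reproduces the qualitative behavior of Table 1.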

Figure 1: Comparison of the iteration processes defined by (1.3) and (1.11).

Table 1: Comparison of the iteration processes defined by (1.3) and (1.11).

For $N = 2$ in Corollary 5.1, the following example shows numerically the convergence of (5.1).

Example 5.3 Let $D = X = Y = \mathbb{R}^2$ under the norm

$\|\mathbf{x}\| = \sqrt{x^2 + y^2}, \quad \mathbf{x} = (x, y) \in \mathbb{R}^2,$

and the induced matrix norm. Consider the operator $F : \mathbb{R}^2 \to \mathbb{R}^2$ defined by

$F(\mathbf{x}) = \left( -x^2 + \tfrac{1}{3}, \; -y^2 + \tfrac{1}{3} \right), \quad \mathbf{x} = (x, y) \in \mathbb{R}^2.$

Clearly, the point $\left( \frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}} \right)$ is the zero of $F$ in $D$. It can be seen that $F$ is Fréchet differentiable at each point of $D$ and its Jacobian matrix $[J_F(\mathbf{x})]$ at any point $\mathbf{x} = (x, y) \in \mathbb{R}^2$ is given by

$[J_F(\mathbf{x})] = \begin{pmatrix} -2x & 0 \\ 0 & -2y \end{pmatrix}.$

Now, for any $\mathbf{x}, \mathbf{x}_0 \in D$, we have

$\|F'_{\mathbf{x}} - F'_{\mathbf{x}_0}\| \le 2\|\mathbf{x} - \mathbf{x}_0\|.$

For $\mathbf{x}_0 = (1, 1)$, we get

$[J_F(\mathbf{x}_0)]^{-1} = \begin{pmatrix} -\frac{1}{2} & 0 \\ 0 & -\frac{1}{2} \end{pmatrix}.$

Therefore, for $\beta = \frac{1}{2}$, $\eta = \frac{\sqrt{2}}{3}$ and $K_0 = 2$, we have $h = \beta\eta K_0 < \frac{1}{2}$ and

$\|[J_F(\mathbf{x}_0)]^{-1}\| \le \beta, \quad \|[J_F(\mathbf{x}_0)]^{-1}F(\mathbf{x}_0)\| \le \eta,$

and $B_r[\mathbf{x}_0]$, where $r = \frac{1 - \sqrt{1 - 2h}}{h}\eta = 1 - \frac{\sqrt{2} - 1}{\sqrt{3}}$, is contained in $D = X$. Hence, all the conditions of Corollary 5.1 are satisfied. One can write the sequence $\{\mathbf{x}_n\}$ generated by (5.1) as

$\mathbf{x}_{n+1} = \begin{pmatrix} x_{n+1} \\ y_{n+1} \end{pmatrix} = \begin{pmatrix} \frac{1 + \alpha}{6} - \frac{\alpha^2}{72} + \left( 1 - \frac{\alpha}{6} \right)x_n - \left( \frac{1}{2} + \frac{\alpha}{2} - \frac{\alpha^2}{12} \right)x_n^2 + \frac{\alpha}{2}x_n^3 - \frac{\alpha^2}{8}x_n^4 \\ \frac{1 + \alpha}{6} - \frac{\alpha^2}{72} + \left( 1 - \frac{\alpha}{6} \right)y_n - \left( \frac{1}{2} + \frac{\alpha}{2} - \frac{\alpha^2}{12} \right)y_n^2 + \frac{\alpha}{2}y_n^3 - \frac{\alpha^2}{8}y_n^4 \end{pmatrix}.$

Therefore, in view of Corollary 5.1, the sequence $\{\mathbf{x}_n\}$ is in $B_r[\mathbf{x}_0]$ and it converges to the unique $\mathbf{x}^* = (x^*, y^*) \in B_r[\mathbf{x}_0]$. Figure 2 and Table 2 show that the sequence $\{\mathbf{x}_n\}$ generated by (5.1) with $\alpha = 0.5$ converges faster than the modified Newton method defined by (1.3).
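Because the frozen Jacobian in Example 5.3 is diagonal, both coordinates of (5.1) evolve by the same scalar recursion. A Python sketch of one coordinate (our own code, written directly from (5.1) rather than from the expanded polynomial):

```python
def example_5_3_component(t0, alpha, steps):
    """One coordinate of (5.1) for Example 5.3: f(t) = -t^2 + 1/3,
    and the frozen Jacobian entry at t0 = 1 is -2, so the frozen
    inverse acts as multiplication by -1/2."""
    inv = -0.5
    f = lambda u: -u * u + 1.0 / 3.0
    t = t0
    for _ in range(steps):
        y = t - inv * f(t)                 # modified-Newton substep
        z = (1.0 - alpha) * t + alpha * y  # convex combination
        t = z - inv * f(z)                 # second frozen-inverse substep
    return t

t = example_5_3_component(1.0, 0.5, 40)
# t approaches 1/sqrt(3), the zero of f
```

Running both coordinates from $\mathbf{x}_0 = (1, 1)$ reproduces the sequence compared against modified Newton in Table 2.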

Figure 2: Comparison of the iteration processes defined by (1.3) and (5.1).

Table 2: Comparison of the iteration processes defined by (1.3) and (5.1).

Now, we study the convergence of (1.11) in an infinite-dimensional setting.

Example 5.4 Let X = D = C 0[1]be the space of real-valued continuous functions defined on the interval 0[1]with norm

x = max 0 t 1 | x ( t ) | .

Consider the integral equation F (x) = 0, where

F ( x ) ( s ) = - 1 + x ( s ) + π x ( s ) 0 1 s s + t x ( t ) d t ,

with s 0[1], x C 0[1]and π (0, 2]. Integral equations of this kind called Chandrasekhar equations arise in elasticity or neutron transport problems[19]. The norm is taken as sup-norm. Now it is easy to find the Fréchet derivative of F as

F x h ( s ) = h ( s ) + π h ( s ) 0 1 s s + t x ( t ) d t + π x ( s ) 0 1 s s + t h ( t ) d t , h X .

Now one can easily compute

F ( x 0 ) = - 1 + x 0 ( s ) + π x 0 ( s ) 0 1 s s + t x 0 ( t ) d t x 0 - 1 + | π | max s [ 0 , 1 ] 0 1 s s + t d t x 0 2 x 0 - 1 + π log 2 x 0 2 .

Also notice that

I - F x 0 = π 0 1 s s + t x 0 ( t ) d t + π x 0 ( s ) 0 1 s s + t d t 2 π max s [ 0 , 1 ] 0 1 s s + t d t x 0 2 π log 2 x 0

Now, we have

$$\begin{aligned} \|F'_x - F'_{x_0}\| &= \left\| \pi \int_0^1 \frac{s}{s+t}\,(x(t)-x_0(t))\,dt + \pi (x(s)-x_0(s)) \int_0^1 \frac{s}{s+t}\,dt \right\| \\ &\le 2|\pi| \max_{s\in[0,1]} \int_0^1 \frac{s}{s+t}\,dt\; \|x - x_0\| \le 2|\pi| \log 2\; \|x - x_0\|, \end{aligned}$$

and, if $2|\pi| \log 2\, \|x_0\| < 1$, then by Lemma 2.3, we obtain

$$\|F'^{-1}_{x_0}\| \le \frac{1}{1 - 2|\pi| \log 2\, \|x_0\|}.$$

Hence, we have

$$\|F'^{-1}_{x_0} F(x_0)\| \le \frac{\|x_0 - 1\| + |\pi| \log 2\, \|x_0\|^2}{1 - 2|\pi| \log 2\, \|x_0\|}.$$
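The two inverse bounds above rest on the standard Banach-lemma (Neumann series) estimate invoked here as Lemma 2.3: if $\|I - A\| = q < 1$, then $A$ is invertible and $\|A^{-1}\| \le 1/(1-q)$. A small finite-dimensional sketch of this inequality (the matrix is an arbitrary illustrative choice):

```python
import numpy as np

A = np.array([[1.0, 0.2],
              [0.1, 0.9]])
I = np.eye(2)

q = np.linalg.norm(I - A, ord=2)   # spectral norm of the perturbation I - A
assert q < 1                       # the Banach lemma applies

inv_norm = np.linalg.norm(np.linalg.inv(A), ord=2)
assert inv_norm <= 1 / (1 - q) + 1e-12   # ||A^{-1}|| <= 1/(1 - q)
```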

Now, for $\pi = \frac{1}{4}$ and the initial point $x_0 = x_0(s) = 1$, we obtain

$$\|F'^{-1}_{x_0}\| \le \beta = 1.17718382, \quad \|F'^{-1}_{x_0} F(x_0)\| \le \eta = 0.08859191, \quad K_0 = 0.346573590279973.$$

Hence, $h = \beta\eta K_0 = 0.036143800345579 < \frac{1}{2}$. So the hypotheses of Theorem 3.5 are satisfied. Therefore, the sequence $\{x_n\}$ generated by Algorithm 1.3 is in $B_r[x_0]$ and it converges to a unique solution $x^* \in B_r[x_0]$ of the integral equation.
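These numbers can be reproduced from the stated values: $K_0 = 2|\pi|\log 2$ with $\pi = 1/4$, and $h$ is the product $\beta\eta K_0$ (a quick check; $\beta$ and $\eta$ are taken as the values given above, since they depend on details of the example):

```python
import math

beta = 1.17718382
eta = 0.08859191
K0 = 2 * 0.25 * math.log(2)   # K0 = 2 |pi| log 2 with pi = 1/4

assert abs(K0 - 0.346573590279973) < 1e-12
h = beta * eta * K0
assert abs(h - 0.036143800345579) < 1e-7
assert h < 0.5                # the Kantorovich-type condition h < 1/2 holds
```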

References

  1. Li Q, Mo Z, Qi L: Numerical Methods for Solving Non-linear Equations. Science Press, Beijing; 1997.

  2. Argyros IK: On a class of Newton-like methods for solving nonlinear equations. J Comput Appl Math 2009, 228: 115–122. 10.1016/j.cam.2008.08.042

  3. Wu Q, Zhao Y: Third-order convergence theorem by using majorizing function for a modified Newton method in Banach space. Appl Math Comput 2006, 175: 1515–1524. 10.1016/j.amc.2005.08.043

  4. Argyros IK: Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics. Volume 15. Elsevier Publ. Comp., New York; 2007.

  5. Bartle RG: Newton's method in Banach spaces. Proc Am Math Soc 1955, 6: 827–831.

  6. Dennis JE: On the Kantorovich hypothesis for Newton's method. SIAM J Numer Anal 1969, 6: 493–507. 10.1137/0706045

  7. Rheinboldt WC: A unified convergence theory for a class of iterative processes. SIAM J Numer Anal 1968, 5: 42–63. 10.1137/0705003

  8. Argyros IK: Convergence and Applications of Newton-type Iterations. Springer, Berlin; 2008.

  9. Ren H, Argyros IK: On convergence of the modified Newton's method under Hölder continuous Fréchet derivative. Appl Math Comput 2009, 213: 440–448. 10.1016/j.amc.2009.03.040

  10. Agarwal RP, O'Regan D, Sahu DR: Iterative construction of fixed points of nearly asymptotically nonexpansive mappings. J Nonlinear Convex Anal 2007, 8(1): 61–79.

  11. Sahu DR: Applications of the S-iteration process to constrained minimization problems and split feasibility problems. Fixed Point Theory 2011, 12: 187–204.

  12. Scherzer O: Convergence criteria of iterative methods based on Landweber iteration for solving nonlinear problems. J Math Anal Appl 1995, 194: 911–933. 10.1006/jmaa.1995.1335

  13. Ortega JM, Rheinboldt WC: Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York; 1970.

  14. Rall LB: Computational Solution of Nonlinear Operator Equations. Wiley, New York; 1969.

  15. Agarwal RP, O'Regan D, Sahu DR: Fixed Point Theory for Lipschitzian-type Mappings with Applications. Series: Topological Fixed Point Theory and its Applications. Volume 6. Springer, New York; 2009.

  16. Kreyszig E: Introductory Functional Analysis with Applications. Wiley, New York; 1978.

  17. Suhubi ES: Functional Analysis. Kluwer Academic Publishers, London; 2003.

  18. Porter D, Stirling DSG: Integral Equations. Cambridge University Press, Cambridge; 1990.

  19. Lambert JD: Computational Methods in Ordinary Differential Equations. Wiley, New York; 1979.

Author information

Correspondence to DR Sahu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

All authors contribute equally and significantly in this research work. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Sahu, D., Singh, K.K. & Singh, V.K. Some Newton-like methods with sharper error estimates for solving operator equations in Banach spaces. Fixed Point Theory Appl 2012, 78 (2012). https://doi.org/10.1186/1687-1812-2012-78
