- Research Article
- Open access
Some Newton-like methods with sharper error estimates for solving operator equations in Banach spaces
Fixed Point Theory and Applications volume 2012, Article number: 78 (2012)
Abstract
It is well known that the rate of convergence of the S-iteration process introduced by Agarwal et al. (pp. 61-79) is faster than that of the Picard iteration process for contraction operators. Following the ideas of the S-iteration process, we introduce some Newton-like algorithms for solving nonlinear operator equations in the Banach space setting. We study both the semilocal and the local convergence analysis of our algorithms. The rate of convergence of our algorithms is faster than that of the modified Newton method.
Mathematics Subject Classification 2010: 49M15; 65K10; 47H10.
1 Introduction
Let D be an open convex subset of a Banach space X and F be a Fréchet differentiable operator at each point of D with values in a Banach space Y . In the sequel, given any x ∈ X and r > 0, B r [x] will designate the set {y ∈ X : ||y - x || ≤ r}, B r (x) will designate the set {y ∈ X : || y - x || < r}, B(Y, X) will designate the space of all bounded linear operators from Y to X and ℕ0 will designate the set ℕ ∪ {0}.
Many applied problems can be formulated to fit the model of the nonlinear operator equation
where F is a Fréchet differentiable operator at each point of D with values in a Banach space Y. Many problems in science and engineering reduce to finding a solution of (1.1) (see [1]). Undoubtedly, Newton's method is the most popular method for solving such problems. Starting with x0 ∈ X, the famous Newton method is given by
where denotes the Fréchet derivative of F at the point x ∈ D. There are numerous generalizations of Newton's method for solving the nonlinear operator Equation (1.1). Details can be found in Argyros [2], Wu and Zhao [3] and the references therein.
In Newton's method (1.2), the inverse of the derivative must be evaluated at each iteration. This raises a natural question: how can the Newton iteration (1.2) be modified so that the computation of the inverse of the derivative at each step is avoided? Argyros [4], Bartle [5], Dennis [6] and Rheinboldt [7] discussed the modified Newton method
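The difference between the two schemes can be sketched numerically. The scalar example below is illustrative only (the test function F(x) = x² − 2 and the function names are not from the paper): Newton's method (1.2) re-evaluates the derivative at every step, while the modified Newton method (1.3) freezes it at x0.

```python
import math

def newton(F, dF, x0, tol=1e-12, max_iter=100):
    """Newton method (1.2): the derivative is re-evaluated
    and 'inverted' at every step."""
    x = x0
    for _ in range(max_iter):
        x_new = x - F(x) / dF(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def modified_newton(F, dF, x0, tol=1e-12, max_iter=1000):
    """Modified Newton method (1.3): the derivative is evaluated
    (and inverted) only once, at the initial point x0."""
    d0 = dF(x0)                  # frozen derivative F'(x0)
    x = x0
    for _ in range(max_iter):
        x_new = x - F(x) / d0
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative example: F(x) = x^2 - 2, root sqrt(2)
F = lambda x: x * x - 2.0
dF = lambda x: 2.0 * x
print(newton(F, dF, 1.0))           # quadratic convergence
print(modified_newton(F, dF, 1.0))  # linear convergence, cheaper steps
```

The trade-off is the one discussed above: the frozen derivative reduces the per-step cost (one factorization total instead of one per step) at the price of linear, rather than quadratic, convergence.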
In [8], Argyros proved the following theorem for semilocal convergence analysis of (1.3) to solve the operator Equation (1.1).
Theorem 1.1 Let F be a Fréchet differentiable operator defined on an open convex subset D of a Banach space X with values in a Banach space Y. For some x0 ∈ D, let. Assume that and F satisfy the following conditions:
(i), for some β > 0,
(ii), for some η > 0,
(iii) and for some K0 > 0.
Assume that and B r [x0] ⊆ D, where. Then we have the following:
(a) The Equation (1.1) has a unique solution x* ∈ B r [x0].
(b) The sequence {x n } generated by (1.3) is in B r [x0] and it converges to x*.
(c) The following error estimate holds:
where γ = rβK0.
Let X be a Banach space and F a Fréchet differentiable operator on an open convex subset D of X with values in a Banach space Y. Let x* ∈ D be a solution of (1.1) such that . For some x0 ∈ D, assume that and F satisfy the following:
and
Ren and Argyros [9] studied the following local convergence analysis to solve the operator Equation (1.1).
Theorem 1.2 Let F be a Fréchet differentiable operator defined on an open convex subset D of a Banach space X with values in a Banach space Y. Let x* ∈ D be a solution of (1.1) such that. For some x0 ∈ D, let and F satisfy (1.6) and (1.7). Assume that, where. Then, for any initial point x0 ∈ B r (x*), where, we have the following:
(a) The sequence {x n } generated by (1.3) is in B r (x0) and it converges to the unique solution x* in.
(b) The following error estimate holds:
where.
Recently, Agarwal et al. [10] have introduced the S-iteration process as follows: Let X be a normed space, D a nonempty convex subset of X and A : D → D an operator. Then, for arbitrary x0 ∈ D, the S-iteration process is defined by
where {α n } and {β n } are sequences in (0, 1).
In [11], motivated by S-iteration process, the first author has introduced the normal S-iteration process as follows: Let X be a normed space, D a nonempty convex subset of X and A : D → D an operator. Then, for arbitrary x0 ∈ D, the normal S-iteration process is defined by
where {α n } is a sequence in (0, 1). Note that the normal S-iteration process is applicable to finding solutions of constrained minimization problems and split feasibility problems (see Sahu [11]).
Following [[11], Theorem 3.6], we remark that the normal S-iteration process is faster than the Picard and Mann iteration processes for contraction mappings.
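The speed comparison in this remark can be checked on a toy example. The sketch below assumes the normal S-iteration has the standard form x_{n+1} = T((1 − α)x_n + αT x_n) (the displayed formula above did not survive extraction); the contraction T(x) = cos(x)/2 and the function names are illustrative, not from the paper.

```python
import math

def picard(T, x0, n):
    """Picard iteration: x_{k+1} = T(x_k)."""
    x = x0
    for _ in range(n):
        x = T(x)
    return x

def normal_s(T, x0, n, alpha=0.5):
    """Normal S-iteration (assumed form): x_{k+1} = T((1 - alpha) x_k + alpha T(x_k))."""
    x = x0
    for _ in range(n):
        x = T((1 - alpha) * x + alpha * T(x))
    return x

# A contraction on R with Lipschitz constant 1/2 (illustrative choice)
T = lambda x: math.cos(x) / 2.0

x_star = picard(T, 0.0, 200)                 # fixed point to machine precision
err_picard = abs(picard(T, 0.0, 8) - x_star)
err_s      = abs(normal_s(T, 0.0, 8) - x_star)
print(err_picard, err_s)                     # S-error is the smaller of the two
```

For a contraction with constant L, one Picard step contracts the error by L, while one normal S-step contracts it by L(1 − α + αL) < L, which is the mechanism behind the faster convergence claimed here.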
In the present article, motivated by normal S-iteration process, we introduce the S-iteration processes of Newton-like for finding the solution of operator Equation (1.1).
Algorithm 1.3 Let α ∈ (0, 1). Starting with x0 ∈ X and after x n ∈ X is defined, we define the next iterate xn+1 as follows:
Algorithm 1.4 Let α ∈ (0, 1). Starting with x0 ∈ X and after x n ∈ X is defined, we define the next iterate xn+1 as follows:
The purpose of this article is to establish the semilocal as well as local convergence analysis of Algorithms 1.3 and 1.4. It is shown that the rates of convergence of (1.11) and (1.12) are faster than that of (1.3). Applications to initial value and boundary value problems are included.
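Displays (1.11) and (1.12) did not survive extraction. The following is a minimal sketch in ℝ^N, assuming Algorithm 1.3 is the normal S-iteration applied to the modified-Newton operator A(x) = x − [J_F(x0)]⁻¹F(x); this assumption is consistent with Theorem 3.1 and with the explicit scheme (5.1) of Corollary 5.1 below. The function names and the test system are illustrative, not from the paper.

```python
import numpy as np

def algorithm_1_3(F, J, x0, alpha=0.5, n_steps=50):
    """Hedged sketch of Algorithm 1.3 in R^N.  Assumed form: the normal
    S-iteration applied to the modified-Newton operator
        A(x) = x - [J_F(x0)]^{-1} F(x),
    so that x_{n+1} = A((1 - alpha) x_n + alpha A(x_n)).
    The Jacobian is evaluated and inverted only once, at x0."""
    J0 = J(x0)                                  # frozen Jacobian
    A = lambda x: x - np.linalg.solve(J0, F(x))
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = A((1 - alpha) * x + alpha * A(x))
    return x

# Illustrative system (not from the paper): F(x, y) = (x^2 + y^2 - 1, x - y),
# with the solution on the line x = y on the unit circle.
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])

x = algorithm_1_3(F, J, np.array([0.8, 0.7]))
print(x)   # close to (1/sqrt(2), 1/sqrt(2))
```

Each outer step costs two evaluations of F and two triangular solves with the single factorization of J_F(x0), which is the same per-step cost structure as two modified-Newton steps.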
2 Preliminaries
Definition 2.1 Let C be a nonempty subset of normed space X. A mapping T : C → X is said to be
(i) Lipschitzian if there exists a constant L > 0 such that
(ii) contraction if there exists a constant L ∈ (0, 1) such that
(iii) quasi-contraction [12] if there exist a constant L ∈ (0, 1) and
such that
Definition 2.2 [11] Let C be a nonempty convex subset of a normed space X and T : C → C an operator. The operator G : C → C is said to be the S-operator generated by α ∈ (0, 1) and T if
where I is the identity operator.
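The displayed formula in Definition 2.2 did not survive extraction. Following the normal S-iteration process of [11], the S-operator presumably has the form (a reconstruction, not the original display):

```latex
G \;=\; T \circ \bigl( (1-\alpha) I + \alpha T \bigr),
\qquad\text{i.e.}\qquad
G x \;=\; T\bigl( (1-\alpha) x + \alpha T x \bigr), \quad x \in C,
```

so that the normal S-iteration is exactly the Picard iteration x_{n+1} = G x_n of the S-operator G.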
Before presenting our main results we need the following technical lemmas.
Lemma 2.3 [4, 13, 14] Let P be a bounded linear operator on a Banach space X. Then the following are equivalent:
(a) There is a bounded linear operator Q on X such that Q-1 exists, and.
(b) P-1 exists.
Further, if P-1 exists, then
Lemma 2.4 [9] Let F be a Fréchet differentiable operator defined on an open convex subset D of a Banach space X with values in a Banach space Y. Let x* ∈ D be a solution of (1.1) such that and the operator F satisfies the conditions (1.7). Assume that B r (x*) ⊆ D, where. Then, for any x ∈ B r (x*), is invertible, and the following estimate holds:
Lemma 2.5 [15, 16] Let (X, d) be a complete metric space and F : X → X a contraction mapping. Then F has a unique fixed point in X.
Lemma 2.6 [[17], Theorem 9.4.2] Let F be a Fréchet differentiable operator defined on an open convex subset D of a Banach space X with values in a Banach space Y. Then, for all x, y ∈ D, we have
3 Convergence analysis for Algorithms 1.3 and 1.4
Before studying convergence analysis of Algorithm 1.3, we establish the following theorem for existence of a unique solution of operator Equation (1.1).
Theorem 3.1 Let F be a Fréchet differentiable operator defined on an open convex subset D of a Banach space X with values in a Banach space Y. For some x0 ∈ D, let and the operator F satisfy (1.5) and the following conditions:
(i), for some η > 0,
(ii), for some β > 0.
Assume that and B r [x0] ⊆ D, where. Then, for fixed α ∈ (0, 1), we have the following:
(a) The operator A : B r [x0] → X defined by
is a contraction self-operator on B r [x0] with Lipschitz constant βrK0 and the operator Equation (1.1) has a unique solution in B r [x0].
(b) The S-operator A α : B r [x0] → X generated by α and A is a contraction self-operator on B r [x0] with Lipschitz constant βrK0(1 - α + αβrK0).
Proof: (a) Set γ : = βrK0. Note that . For x, y ∈ B r [x0], we have
Therefore, the operator A is a contraction with Lipschitz constant γ.
Now we claim that A(B r [x0]) ⊆ B r [x0]. For x ∈ B r [x0], we have
Hence, the operator A maps B r [x0] into itself. By Banach contraction principle (see Lemma 2.5), A has a unique fixed point in B r [x0].
-
(b)
For x, y ∈ B r [x 0], we have
Therefore, the S-operator A α generated by α and A is a contraction operator on B r [x0] with Lipschitz constant βrK0(1- α + αβrK0). □
Next, we show that the operator A λ defined by (3.2) is a quasi-contraction.
Theorem 3.2 Let F be a Fréchet differentiable operator defined on an open convex subset D of a Banach space X with values in a Banach space Y. Assume that λ ∈ (0, 1] and x* ∈ D is a solution of (1.1) such that . For some x0 ∈ D, let and F satisfy the conditions (1.6) and (1.7). Assume that , where . For x0 ∈ B r (x*) with , let A λ be an operator defined by
Then, we have the following
(a) For x ∈ B r (x*), we have
where
(b) A λ is a quasi-contraction and self-operator on B r (x*) with constant 1 - (1 -δ)λ, where.
Proof: (a) For x ∈ B r (x*) with x ≠ x*, we have
By (1.6) and Lemma 2.4, we have
-
(b)
The operator A λ is a quasi-contraction with constant 1 - (1 - δ)λ. Indeed,
This completes the proof. □
Corollary 3.3 Let F be a Fréchet differentiable operator defined on an open convex subset D of a Banach space X with values in a Banach space Y. Let α ∈ (0, 1) and x* ∈ D be a solution of (1.1) such that. For x0 ∈ D, let and F satisfy the conditions (1.6) and (1.7). Assume that, where. For x0 in B r (x*) with, let A be an operator defined by (3.1) and let A α be the S-operator generated by α and A. Then, the following hold:
where δ x is defined in (3.4).
Corollary 3.4 Let F be a Fréchet differentiable operator defined on an open convex subset D of a Banach space X with values in a Banach space Y. Let x* ∈ D be a solution of (1.1) such that. For x0 ∈ D and α ∈ (0, 1), let and F satisfy the conditions (1.6) and (1.7). Assume that, where. For x0 in B r (x*) with, let U α be an operator defined by
Then U α is a quasi-contraction, self-operator on B r (x*) and the following holds:
Now, we are ready to study the semilocal convergence analysis of Algorithm 1.3.
Theorem 3.5 Let F be a Fréchet differentiable operator defined on an open convex subset D of a Banach space X with values in a Banach space Y. For some x0 ∈ D, let. Assume that and F satisfy (1.5) with the following conditions:
(i), for some η > 0,
(ii), for some β > 0.
Assume that α ∈ (0, 1), and B r [x0] ⊆ D, where. Then we have the following:
(a) The Equation (1.1) has a unique solution x*∈ B r [x0].
(b) The sequence {x n } generated by Algorithm 1.3 is in B r [x0] and it converges strongly to x*.
(c) The following error estimate holds:
where ρ = γ (1 - α + αγ) and γ = βrK0.
Proof : (a) It follows from Theorem 3.1.
-
(b)
From Algorithm 1.3, we have
(3.10)
Therefore, x n → x* as n → ∞.
-
(c)
It follows from (3.10).
Remark 3.6 The condition (1.5) of Theorem 3.5 is a weaker assumption than assumption (iii) of Theorem 1.1. Also, one can observe from (1.4) and (3.9) that
The strict inequality (3.11) shows that the error estimate in Theorem 3.5 is sharper than that of Theorem 1.1.
Now, we study the local convergence analysis for Algorithm 1.3.
Theorem 3.7 Let F be a Fréchet differentiable operator defined on an open convex subset D of a Banach space X with values in a Banach space Y. Let α ∈ (0, 1) and let x* ∈ D be a solution of (1.1) such that. For x0 ∈ D, let and F satisfy the conditions (1.6) and (1.7). Assume that, where. Then, we have the following:
(a) For initial x0 ∈ B r (x*) with, the sequence {x n } generated by Algorithm 1.3 is in B r (x*) and it converges strongly to the unique solution x* in.
(b) The following error estimate holds:
where ρ' = δ0(1 - α + αδ0) and .
Proof: (a) First we show that x* is the unique solution of (1.1) in . To the contrary, suppose that y* is another solution of (1.1) in . Then, we have
Define an operator L by
Consequently, we have
It follows from Lemma 2.3 that the operator L is invertible and hence, x* = y*, a contradiction. Thus, x* is the unique solution of (1.1) in .
Next, we show that {x n } converges to x*. By Corollary 3.3, the operator A α is a quasi-contraction self-operator on B r (x*). Thus, x n ∈ B r (x*), ∀n ∈ ℕ0. Now, we have
where δ x is defined by (3.4). Since , we have
By definition of δ x , we have
Thus, by (3.13), we have
which implies x n → x* as n → ∞.
-
(b)
By (3.14), we get the error estimates. □
Remark 3.8 One can observe from (1.8) and (3.12) that
The strict inequality (3.15) shows that the error estimate in Theorem 3.7 is sharper than that of Theorem 1.2.
Before presenting local convergence result for Algorithm 1.4, we need the following theorem:
Theorem 3.9 Let F be a Fréchet differentiable operator defined on an open convex subset D of a Banach space X with values in a Banach space Y. Let x* ∈ D be a solution of (1.1) such that. For x0 ∈ D and α ∈ (0, 1), let and F satisfy the conditions (1.6) and (1.7). Let and y0 ∈ B r (x*), where and. Assume that
Define an operator V by
Then, we have the following:
(a) For x ∈ B r (x*), we have
where
(b) If K3 ≤ K1, then V is a quasi-contraction and self-operator on B r (x*) with constant δ', where.
Proof: (a) For x ∈ B r (x*), we have
-
(b)
The operator V is a quasi-contraction with constant δ'. Indeed,
That completes the proof. □
Now, we are ready to study the local convergence analysis for Algorithm 1.4.
Theorem 3.10 Let F be a Fréchet differentiable operator defined on an open convex subset D of a Banach space X with values in a Banach space Y. Let α ∈ (0, 1) and let x* ∈ D be a solution of (1.1) such that. Assume that, where. For any x0 ∈ B r (x*), where, assume that and F satisfy the conditions (1.6), (1.7) and (3.16) and that K3 ≤ K1. Then we have the following:
(a) The sequence {x n } generated by Algorithm 1.4 is in B r (x*) and it converges strongly to the unique solution x* in.
(b) The following error estimate holds:
where and
Proof: (a) As in Theorem 3.7, x* is the unique solution of (1.1) in . Corollary 3.4 shows that the operator U α defined by (3.7) is a self-operator on B r [x*]. Hence, from (1.12), we have
It follows from Theorem 3.9 that the operator V defined by (3.17) is a self-operator on B r [x*]. Thus, Algorithm 1.4 can be written as
Since,
By definition of , we have
By (3.19), we have
which implies x n → x* as n → ∞.
-
(b)
From (3.20), we get the error estimates. □
Remark 3.11 One can observe from (1.8) and (3.18) that
The strict inequality (3.21) shows that the error estimate in Theorem 3.10 is sharper than that of Theorems 1.2 and 3.7.
4 Application to initial and boundary value problems
Throughout this section, let C[0, 1] be the space of real-valued continuous functions defined on the interval [0, 1] with norm
4.1 Initial value problem
Consider initial value problem
Let , the partial derivative of f with respect to the second component. Assume that it exists for all (s, y(s)) ∈ S ⊆ [0, 1] × ℝ and
where y0 ∈ C1[0, 1] and C1[0, 1] is the space of all real-valued continuously differentiable functions defined on [0, 1].
Consider the operator F : C1[0, 1] → C[0, 1] defined by
Then, solving problem (4.1) is equivalent to solving the Equation (1.1). One can observe that the operator F defined by (4.3) is Fréchet differentiable and its Fréchet derivative is given by
Theorem 4.1 Let F : C1[0, 1] → C[0, 1] be an operator defined by (4.3). For some y0 ∈ C1[0, 1], assume that exists and satisfies (4.2). Suppose that, where . Then, we have the following:
(a) The initial value problem (4.1) has a unique solution y* in B r [y0], where.
(b) For x0 = y0, the sequence {x n } generated by Algorithm 1.3 is in B r [y0] and it converges strongly to y*.
Proof: Our goal is to find an upper bound for . Set
Since exists, we can immediately write and arrive at the first order linear initial value problem
It should be noted that problem (4.4) is equivalent to a Volterra integral equation of the second kind (see [18])
Consider the operator L defined by
Clearly, the operator L is linear. We can write
Using the max-norm in C[0, 1], the operator L is bounded and satisfies ||L|| ≤ θ1, where . Consequently, if θ1 < 1, then by Lemma 2.3, the inverse (I - L)-1 exists and . Therefore, . Indeed
Next, by (4.2), we have
Clearly, and . By assumption, . Thus, all the assumptions of Theorem 3.5 are satisfied. Therefore, Theorem 4.1 follows from Theorem 3.5. □
4.2 Boundary value problem
Consider the second-order boundary value problem
Assume that exists for all (s, y(s)) ∈ S ⊆ [0, 1]× ℝ and
where y0 ∈ C2[0, 1] and C2[0, 1] is the space of all real-valued twice continuously differentiable functions defined on [0, 1]. Consider the operator F : C2[0, 1] → C[0, 1] defined by
Then, solving problem (4.5) is equivalent to solving Equation (1.1). As in the case of the initial value problem, one can observe that the operator F defined by (4.7) is Fréchet differentiable and its Fréchet derivative at each y ∈ C2[0, 1] is given by
Theorem 4.2 Let F : C2[0, 1] → C[0, 1] be an operator defined by (4.7). For some y0 ∈ C2[0, 1], assume that exists and satisfies (4.6). Suppose that, where. Then, we have the following:
(a) The boundary value problem (4.5) has a unique solution y* in B r [y0], where.
(b) For x0 = y0, the sequence {x n } generated by Algorithm 1.3 is in B r [y0] and it converges strongly to y*.
Proof: To prove the theorem it is sufficient to find an upper bound of . Set
then and we arrive at the linear boundary value problem
Problem (4.8) may be written in the form of a Fredholm integral equation of the second kind (see [18]) as
where
Consider the operator L defined by
and consequently, we have
Using the max-norm in C[0, 1], the operator L is bounded and
By Lemma 2.3, the inverse (I - L)-1 exists if θ2 < 8, and
Observe that
Finally, by (4.6), we have
Clearly, and . By assumption, . Thus, all the assumptions of Theorem 3.5 are satisfied. Therefore, Theorem 4.2 follows from Theorem 3.5. □
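The bound θ2 < 8 in the proof above traces back to the size of the kernel of the integral operator L. Assuming the standard Green's function for the two-point boundary value problem, G(s, t) = t(1 − s) for t ≤ s and s(1 − t) for t ≥ s (the displayed kernel was lost in extraction), a quick numerical check recovers the familiar bound max_s ∫₀¹ G(s, t) dt = 1/8:

```python
import numpy as np

def green(s, t):
    # Standard Green's function for y'' with y(0) = y(1) = 0 (assumed form):
    #   G(s, t) = t (1 - s) for t <= s,  s (1 - t) for t >= s
    return np.where(t <= s, t * (1.0 - s), s * (1.0 - t))

t = np.linspace(0.0, 1.0, 20001)
dt = t[1] - t[0]

def row_integral(s):
    # Composite trapezoidal rule for the integral of G(s, .) over [0, 1];
    # the exact value is s (1 - s) / 2.
    f = green(s, t)
    return dt * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

vals = [row_integral(s) for s in np.linspace(0.0, 1.0, 201)]
print(max(vals))   # about 0.125 = 1/8, attained at s = 1/2
```

Since ||L|| is proportional to this maximum, requiring θ2 < 8 is exactly the condition ||L|| < 1 needed for Lemma 2.3.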
5 Numerical examples
First, we derive the following corollary from Theorem 3.5.
Corollary 5.1 For N ∈ ℕ, let F : ℝ N → ℝ N be a Fréchet differentiable operator at each point of an open convex subset D of ℝN defined by
where f i : ℝN → ℝ, for i = 1, 2,..., N. For some x0 ∈ D, assume that the Jacobian matrix [J F (x0)] of F at x0defined by
is invertible. Suppose that the inverse matrix [J F (x0)]-1and F satisfy the following conditions:
(i) ||[J F (x0)]-1F (x0) || ≤ η, for some η > 0,
(ii) || [J F (x0)]-1 || ≤ β, for some β > 0,
(iii) || [J F (x)] - [J F (x0)]|| ≤ K0 ||x- x0 ||, ∀x∈ D and for some K0 > 0.
Let α ∈ (0, 1), and B r [x0] ⊆ D, where. Then we have the following:
(a) The Equation (1.1) has a unique solution x * ∈ B r [x0].
(b) The sequence {x n } generated by
is in B r [x0] and it converges to x*.
(c) The following error estimate holds:
where ρ = γ (1 - α + αγ) and γ = βrK0.
The following example shows numerically that (5.1) is faster than the modified Newton method defined by (1.3).
Example 5.2 Let X = ℝ, D = (-1, 1) and F : D → ℝ an operator defined by
Then F is Fréchet differentiable and its Fréchet derivative at any point x∈ D is given by
For x0 = 0.26, we have
Setting β = 0.771051585803566, η = 0.228948414196434 and K0 = 2.718281828459046, we have and
Hence, all the conditions of Corollary 5.1 are satisfied. Therefore, the sequence {x n } generated by (5.1) is in B r [x0] and it converges to a unique x* ∈ B r [x0]. Figure 1 and Table 1 show that the sequence {x n } generated by (5.1) is faster than the modified Newton method defined by (1.3).
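The displayed formula for F in Example 5.2 did not survive extraction, but the stated constants are consistent with F(x) = e^x − 1: then F'(x0) = e^{0.26}, β = e^{−0.26} = 0.771051585803566, η = 1 − e^{−0.26} = 0.228948414196434, and K0 = e bounds the Lipschitz constant of F' on D. Under that assumption (and assuming, as before, the S-iteration form of (5.1)), the comparison behind Table 1 can be reproduced:

```python
import math

F = lambda x: math.exp(x) - 1.0   # assumed form, consistent with the stated constants
x0 = 0.26
d0 = math.exp(x0)                 # F'(x0); beta = 1/d0, eta = F(x0)/d0

A = lambda x: x - F(x) / d0       # modified-Newton operator with frozen derivative

def modified_newton(n, x=x0):
    """n steps of scheme (1.3)."""
    for _ in range(n):
        x = A(x)
    return x

def scheme_5_1(n, x=x0, alpha=0.5):
    """n steps of (5.1), assumed S-iteration form."""
    for _ in range(n):
        x = A((1 - alpha) * x + alpha * A(x))
    return x

for n in (2, 4, 6, 8):
    # Errors against the root x* = 0; the (5.1) column shrinks faster.
    print(n, abs(modified_newton(n)), abs(scheme_5_1(n)))
```

The printed errors decay linearly in both columns, with a visibly smaller contraction factor for (5.1), matching the comparison reported in Figure 1 and Table 1.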
For N = 2 in Corollary 5.1, the following example shows numerically the convergence of (5.1).
Example 5.3 Let D = X = Y = ℝ2 under the norm
and induced matrix norm. Consider an operator F : ℝ2 → ℝ2 defined by
Clearly, the point is the zero of F in D. It can be seen that F is Fréchet differentiable at each point of D and its Jacobian matrix [J F (x)] at any point x = (x, y) ∈ ℝ2 is given by
Now, for any x , x0 ∈ D, we have
For x0 = (1, 1), we get
Therefore, for, and K0 = 2, we have h = βηK0 < 1/2 and
and B r [x0], where, is contained in D = X. Hence, all the conditions of Corollary 5.1 are satisfied. One can write the sequence {x n } generated by (5.1) as
Therefore, in view of Corollary 5.1, the sequence {x n } is in B r [x0] and it converges to a unique x * = (x*, y*) ∈ B r [x0]. Figure 2 and Table 2 show that the sequence {x n } generated by (5.1) with α = 0.5 is faster than the modified Newton method defined by (1.3).
Now, we study the convergence of (1.11) in the infinite-dimensional case.
Example 5.4 Let X = D = C[0, 1] be the space of real-valued continuous functions defined on the interval [0, 1] with norm
Consider the integral equation F (x) = 0, where
with s ∈ [0, 1], x ∈ C[0, 1] and π ∈ (0, 2]. Integral equations of this kind, called Chandrasekhar equations, arise in elasticity or neutron transport problems [19]. The norm is taken as the sup-norm. Now it is easy to find the Fréchet derivative of F as
Now one can easily compute
Also notice that
Now, we have
and, if 2|π| log 2 ||x0|| < 1, then by Lemma 2.3, we obtain
Hence, we have
Now, for and the initial point x0 = x0(s) = 1, we obtain
Hence, . So the hypotheses of Theorem 3.5 are satisfied. Therefore, the sequence {x n } generated by Algorithm 1.3 is in B r [x0] and it converges to a unique solution x* ∈ B r [x0] of the integral equation.
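The displayed operator in Example 5.4 did not survive extraction. The sketch below assumes the classical Chandrasekhar form F(x)(s) = x(s) − 1 − (λ/2) x(s) ∫₀¹ s/(s + t) x(t) dt, treating the parameter printed above as "π" as a scalar λ = 0.25, and again assumes the S-iteration form of Algorithm 1.3 with the derivative frozen at x0(s) = 1; everything here is a discretized illustration, not the paper's computation.

```python
import numpy as np

lam = 0.25                                   # assumed value of the parameter
m = 64
s = (np.arange(m) + 0.5) / m                 # midpoint nodes on [0, 1]
w = 1.0 / m                                  # uniform midpoint-rule weight
K = s[:, None] / (s[:, None] + s[None, :])   # kernel s / (s + t)

def F(x):
    # Discretized Chandrasekhar operator (assumed classical form)
    return x - 1.0 - (lam / 2.0) * x * (K @ x) * w

def J(x):
    # Discretized Frechet derivative F'(x):
    # (F'(x) h)(s) = h(s) - (lam/2) [ h(s) ∫ K x + x(s) ∫ K h ]
    integ = (K @ x) * w
    return np.eye(m) * (1.0 - (lam / 2.0) * integ) \
           - (lam / 2.0) * w * x[:, None] * K

x0 = np.ones(m)
J0 = J(x0)                                   # derivative frozen at x0, as in Algorithm 1.3
A = lambda x: x - np.linalg.solve(J0, F(x))

alpha, x = 0.5, x0.copy()
for _ in range(30):                          # assumed S-iteration form of Algorithm 1.3
    x = A((1 - alpha) * x + alpha * A(x))
print(np.max(np.abs(F(x))))                  # residual of the discrete solution
```

The residual drops to machine precision well within the 30 steps, and the computed solution stays above 1 on all of [0, 1], as expected for the H-equation with a positive kernel.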
References
Li Q, Mo Z, Qi L: Numerical Methods for Solving Non-linear Equations. Science Press, Beijing; 1997.
Argyros IK: On a class of Newton-like methods for solving nonlinear equations. J Comput Appl Math 2009, 228: 115–122. 10.1016/j.cam.2008.08.042
Wu Q, Zhao Y: Third-order convergence theorem by using majorizing function for a modified Newton method in Banach space. Appl Math Comput 2006, 175: 1515–1524. 10.1016/j.amc.2005.08.043
Argyros IK: Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics. Volume 15. Elsevier Publ. Comp., New York; 2007.
Bartle RG: Newton's method in Banach spaces. Proc Am Math Soc 1955, 6: 827–831.
Dennis JE: On the Kantorovich hypothesis for Newton's method. SIAM J Numer Anal 1969, 6: 493–507. 10.1137/0706045
Rheinboldt WC: A unified convergence theory for a class of iterative processes. SIAM J Numer Anal 1968, 5: 42–63. 10.1137/0705003
Argyros IK: Convergence and applications of Newton-type iterations. Springer, Berlin; 2008.
Ren H, Argyros IK: On convergence of the modified Newton's method under Hölder continuous Fréchet derivative. Appl Math Comput 2009, 213: 440–448. 10.1016/j.amc.2009.03.040
Agarwal RP, O'Regan D, Sahu DR: Iterative construction of fixed points of nearly asymptotically nonexpansive mappings. J Nonlinear Convex Anal 2007, 8(1):61–79.
Sahu DR: Applications of the S -iteration process to constrained minimization problems and split feasibility problems. Fixed Point Theory 2011, 12: 187–204.
Scherzer O: Convergence criteria of iterative methods based on Landweber iteration for solving nonlinear problems. J Math Anal Appl 1995, 194: 911–933. 10.1006/jmaa.1995.1335
Ortega JM, Rheinboldt WC: Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York; 1970.
Rall LB: Computational Solution of Nonlinear Operator Equations. Wiley, New York; 1969.
Agarwal RP, O'Regan D, Sahu DR: Fixed Point Theory for Lipschitzian-type Mappings with Applications. Series: Topological Fixed Point Theory and its Applications. Volume 6. Springer, New York; 2009.
Kreyszig E: Introductory Functional Analysis with Applications. Wiley, New York; 1978.
Suhubi ES: Functional Analysis. Kluwer Academic Publishers, London; 2003.
Porter D, Stirling DSG: Integral Equations. Cambridge University Press, Cambridge; 1990.
Lambert JD: Computational Methods in Ordinary Differential Equations. Wiley, New York; 1979.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
All authors contribute equally and significantly in this research work. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Sahu, D., Singh, K.K. & Singh, V.K. Some Newton-like methods with sharper error estimates for solving operator equations in Banach spaces. Fixed Point Theory Appl 2012, 78 (2012). https://doi.org/10.1186/1687-1812-2012-78