Strong convergence of an inertial algorithm for maximal monotone inclusions with applications
Fixed Point Theory and Applications volume 2020, Article number: 13 (2020)
Abstract
An inertial iterative algorithm is proposed for approximating a solution of a maximal monotone inclusion in a uniformly convex and uniformly smooth real Banach space. The sequence generated by the algorithm is proved to converge strongly to a solution of the inclusion. Moreover, the theorem proved is applied to approximate a solution of a convex optimization problem and a solution of a Hammerstein equation. Furthermore, numerical experiments are given to compare, in terms of CPU time and number of iterations, the performance of the sequence generated by our algorithm with the performance of the sequences generated by three recent inertial type algorithms for approximating zeros of maximal monotone operators. In addition, the performance of the sequence generated by our algorithm is compared with the performance of a sequence generated by another recent algorithm for approximating a solution of a Hammerstein equation. Finally, a numerical example is given to illustrate the implementability of our algorithm for approximating a solution of a convex optimization problem.
Introduction
Let H be a real Hilbert space. A set-valued map \(A:H\rightrightarrows {H}\) is called monotone if for each \(u,v\in H\), \(\eta_{u}\in Au\), \(\gamma_{v}\in Av\), the following inequality holds:
$$\langle \eta_{u}-\gamma_{v},u-v\rangle \geq 0.$$
Monotone maps in Hilbert spaces were first introduced by Minty [34] to aid the abstract study of electrical networks and later studied by Browder [6] and his school in the setting of partial differential equations. The map A is called maximal monotone if it is monotone, and in addition, its graph is not included in the graph of any other monotone map. The extension of the monotonicity definition from a Hilbert space to itself
… to operators from a Banach space into its dual has been the starting point for the development of nonlinear functional analysis … The monotone mappings appear in a rather wide variety of contexts, since they can be found in many functional equations. Many of them appear also in calculus of variations, as subdifferentials of convex functions (Pascali and Sburlan [36], p. 101).
For example, consider the following: Let E be a real Banach space with dual space \(E^{*}\) and let \(f:E\rightarrow\mathbb{R}\cup\{\infty\}\) be a proper lower semicontinuous (lsc) and convex function. The subdifferential of f, \(\partial f:E\rightrightarrows{E^{*}}\), is defined by
$$\partial f(u)=\bigl\{u^{*}\in E^{*}: f(v)-f(u)\geq\langle v-u,u^{*}\rangle, \forall v\in E\bigr\}.$$
It is well known that ∂f is a monotone operator and that \(0\in\partial f(u^{*})\) if and only if \(u^{*}\) is a minimizer of f. Setting \(\partial f\equiv A\), it follows that solving the inclusion \(0\in Au\), in this case, is equivalent to solving for a minimizer of f. It is well known that any maximal monotone map \(A:\mathbb{R}\rightrightarrows\mathbb{R}\) is the subdifferential of a proper, convex, and lsc function (see, e.g., Cioranescu [24], Corollary 4.5, p. 170).
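To make the minimizer characterization concrete, consider the one-dimensional function \(f(x)=|x|+\frac{1}{2}(x-1)^{2}\) (an illustrative example of ours, not taken from the paper). Its subdifferential at 0 is \([-1,1]+(0-1)=[-2,0]\ni 0\), so \(x^{*}=0\) is the minimizer. The sketch below verifies this numerically, using the soft-thresholding map as the resolvent of \(\partial|\cdot|\):

```python
# Sketch (not from the paper): for f(x) = |x| + 0.5*(x - 1)^2, the optimality
# condition 0 in df(x*) picks out x* = 0, since df(0) = [-1, 1] + (0 - 1) = [-2, 0].
# The resolvent of the subdifferential of |.| is the soft-thresholding map.

def soft_threshold(z, lam):
    """Resolvent (I + lam * d|.|)^{-1}(z), i.e., the prox of lam*|x|."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def f(x):
    return abs(x) + 0.5 * (x - 1.0) ** 2

# By definition of the proximal map, argmin f = prox_{|.|}(1).
x_star = soft_threshold(1.0, 1.0)

# A grid search over [-3, 3] confirms the same minimizer.
grid = [i / 1000.0 for i in range(-3000, 3001)]
x_grid = min(grid, key=f)

print(x_star, x_grid)   # both are 0.0
```

The same resolvent viewpoint underlies the proximal point methods reviewed below.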
In general, a fundamental problem in the study of monotone maps in Banach spaces is the following: find \(u\in E\) such that \(0\in Au\).
This problem has been investigated in Hilbert spaces by numerous researchers. The proximal point algorithm (PPA) introduced by Martinet [33] and studied extensively by Rockafellar [41] and numerous other authors is concerned with an iterative method for approximating a solution of the inclusion \(0\in Au\), where A is a maximal monotone map. Specifically, given \(x_{n}\in H\), the proximal point algorithm generates the next iterate \(x_{n+1}\) by solving the following equation:
$$x_{n}\in x_{n+1}+\lambda_{n}Ax_{n+1},\quad\text{i.e., } x_{n+1}=(I+\lambda_{n}A)^{-1}x_{n},$$
where \(\lambda_{n}>0\) is a regularizing parameter. Rockafellar [41] proved that if the sequence \(\{\lambda_{n}\}_{n=1}^{\infty}\) is bounded away from zero, then the resulting sequence \(\{x_{n}\}_{n=1}^{\infty}\) of proximal point iterates converges weakly to a solution of \(0\in Au\), when \(E=H\), provided that a solution exists. Several alternatives and modifications of the PPA have been proposed to obtain strong convergence under suitable conditions. For a brief review of these alternatives and modifications in Banach spaces more general than Hilbert spaces, interested readers may see, e.g., [14, 30, 31, 39, 43, 46] and the references therein.
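In a Hilbert space the PPA step reads \(x_{n+1}=(I+\lambda_{n}A)^{-1}x_{n}\). The following minimal sketch (our illustration, with an assumed linear operator \(Ax=Qx\), Q symmetric positive semidefinite and hence maximal monotone) shows the iterates converging to the zero of A:

```python
import numpy as np

# Proximal point algorithm sketch for A(x) = Q x on R^2; the resolvent
# (I + lam*Q)^{-1} is a linear solve.  Illustrative only, not the paper's map.

Q = np.array([[2.0, 0.0], [0.0, 1.0]])
I = np.eye(2)

x = np.array([5.0, -3.0])
lam = 1.0                      # bounded away from zero, as convergence requires
for _ in range(60):
    x = np.linalg.solve(I + lam * Q, x)   # x_{n+1} = (I + lam*Q)^{-1} x_n

print(np.linalg.norm(x))       # near 0, the unique zero of A
```

Each eigencomponent is contracted by the factor \(1/(1+\lambda q)\), so convergence here is linear; for general nonsmooth monotone maps no such rate is available, which motivates the acceleration schemes discussed next.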
Chidume et al. [21] recently proved the following strong convergence theorem.
Theorem 1.1
(Chidume et al. [21])
Let E be a uniformly convex and uniformly smooth real Banach space and let \(E^{*}\) be its dual. Let \(A:E\rightrightarrows{E^{*}}\) be a maximal monotone and bounded mapping with \(A^{-1}(0)\neq\emptyset\). For arbitrary \(u_{1}\in E\), define a sequence \(\{u_{n}\}\) iteratively by
where \(\{\lambda_{n}\}\) and \(\{\theta_{n}\}\) are sequences in \((0,1)\) satisfying certain conditions and J is the normalized duality map on E. Then, the sequence \(\{u_{n}\}\) converges strongly to a solution of \(0\in Au\).
It is well known that the convergence of iterative algorithms for approximating zeros of monotone maps is generally slow. This is to be expected since monotone maps are generally not differentiable, so fast converging algorithms such as the Newton–Kantorovich algorithm cannot be used. Consequently, considerable effort is now devoted to iterative algorithms for approximating zeros of maximal monotone maps that improve the speed of convergence of known algorithms. One method now studied is to incorporate an inertial extrapolation term in the algorithms.
In a recent paper, Alvarez [3] studied the asymptotic weak convergence of three inertial implicit iterative methods for solving the inclusion \(0\in Au\), when A is a maximal monotone operator on a real Hilbert space, which generalize the classical PPA. The motivation for the first of these three methods, called the Inertial Proximal Point Algorithm (IPPA), stems from a discretization of the equation for an oscillator with damping and conservative restoring force: \(x''(t)+\gamma x'(t)+\nabla f(x(t))=0\), where \(\gamma>0\) and \(f:H\to\mathbb{R}\) is differentiable. In the context of optimization problems, this dynamical system, called Heavy Ball with Friction (HBF), was first considered by Polyak [37]. It is known that the inertial nature of the HBF can be exploited in numerical computations to accelerate the trajectories and speed up convergence (see, e.g., [17, 18]). Concerning asymptotic convergence, Alvarez [2] showed that if f is convex and differentiable, so that ∇f is monotone, and \({(\nabla f)}^{-1}(0)\neq\emptyset\), then every trajectory of the HBF converges weakly to some \(x^{*}\in H\) with \({(\nabla f)}(x^{*})=0\). Considering the implicit discretization of the HBF, the following recursion formula, in terms of resolvents, has been obtained (see, e.g., Alvarez [3], p. 774):
$$x_{k+1}=J_{\lambda}^{\nabla f}\bigl(x_{k}+\alpha(x_{k}-x_{k-1})\bigr),$$
where λ is a regularizing parameter that combines the damping factor γ and the actual step size \(h>0\). Replacing ∇f with a maximal monotone operator A, and considering variable parameters \(\lambda_{k}>0\) and \(\alpha_{k}\in[0,1)\), the discussion above motivated the introduction of the inertial-type iteration:
$$x_{k+1}=J_{\lambda_{k}}^{A}\bigl(x_{k}+\alpha_{k}(x_{k}-x_{k-1})\bigr),$$
where the extrapolation term \(\alpha_{k}(x_{k}-x_{k-1})\) is intended to speed up convergence. The Inertial Proximal Point Algorithm (IPPA) was first considered in [2] for the nonsmooth conservative operator \(A=\partial f\), the subdifferential of a closed, proper, and convex function \(f:H\to\mathbb{R}\cup\{\infty\}\). Alvarez [2, Theorem 3.1] proved, under suitable conditions, that \(\{x_{k}\}\) converges weakly to a minimizer of f. For the nonconservative case, a partial positive result for cocoercive operators was obtained in [29], where comparisons with first-order-in-time methods are also given through numerical experiments, showing improvements in the speed of convergence.
The case of arbitrary maximal monotone operators is treated in [4] under the following conditions:

(i)
\(\lambda=\inf_{k\ge0}\lambda_{k}>0\),

(ii)
\(\forall k\in \mathbb{N}\), \(\alpha_{k}\in[0,1)\), \(\alpha:=\sup_{k\ge0}\alpha_{k}<1\),

(iii)
\(\sum\alpha_{k}\|x_{k}-x_{k-1}\|^{2}<\infty\).
From a different point of view, the following Relaxed Proximal Point Algorithm (RPPA) was proposed in [28] to accelerate the standard PPA:
$$x_{k+1}=(1-\rho_{k})x_{k}+\rho_{k}J_{\lambda_{k}}^{A}x_{k},$$
where \(\{\rho_{k}\}\subset(0,2)\) is a relaxing factor which is assumed to satisfy the following conditions: \(\inf_{k\ge0}\rho_{k}>0\) and \(\sup_{k\ge0}\rho_{k}<2\).
Alvarez [3] recently coupled the IPPA and RPPA, the two acceleration strategies, to propose the following Relaxed Inertial Proximal Point Algorithm (RIPPA):
$$y_{k}=x_{k}+\alpha_{k}(x_{k}-x_{k-1}),\qquad x_{k+1}=(1-\rho_{k})y_{k}+\rho_{k}J_{\lambda_{k}}^{A}y_{k}.$$
He proved weak convergence of the sequence \(\{x_{k}\}\) to some \(x^{*}\in A^{-1}(0)\).
We remark that each of the algorithms, IPPA, RPPA, and RIPPA, involves the resolvent operator, \(J_{\lambda}^{A}\).
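For a concrete feel for the three schemes, the sketch below implements the IPPA, RPPA, and RIPPA updates for an assumed linear monotone operator \(Ax=Qx\) on \(\mathbb{R}^{2}\), whose resolvent \(J_{\lambda}^{A}=(I+\lambda A)^{-1}\) reduces to a linear solve (an illustration only; the operator and parameter choices are ours, not the paper's experiments):

```python
import numpy as np

# Sketch: IPPA, RPPA, and RIPPA for A(x) = Q x on R^2,
# where the resolvent J_lam^A = (I + lam*Q)^{-1} is a linear solve.

Q = np.diag([2.0, 1.0])
I2 = np.eye(2)

def resolvent(lam, x):
    return np.linalg.solve(I2 + lam * Q, x)

def ippa(x0, x1, n=50, lam=1.0, alpha=0.3):
    # x_{k+1} = J_lam^A (x_k + alpha*(x_k - x_{k-1}))
    prev, cur = x0, x1
    for _ in range(n):
        prev, cur = cur, resolvent(lam, cur + alpha * (cur - prev))
    return cur

def rppa(x1, n=50, lam=1.0, rho=1.5):
    # x_{k+1} = (1 - rho)*x_k + rho*J_lam^A x_k, with rho in (0, 2)
    cur = x1
    for _ in range(n):
        cur = (1 - rho) * cur + rho * resolvent(lam, cur)
    return cur

def rippa(x0, x1, n=50, lam=1.0, alpha=0.3, rho=1.5):
    # relaxed-inertial coupling of the two updates above
    prev, cur = x0, x1
    for _ in range(n):
        y = cur + alpha * (cur - prev)
        prev, cur = cur, (1 - rho) * y + rho * resolvent(lam, y)
    return cur

x0 = np.array([5.0, -3.0]); x1 = np.array([4.0, -2.0])
for xk in (ippa(x0, x1), rppa(x1), rippa(x0, x1)):
    print(np.linalg.norm(xk))   # each is near 0, the zero of A
```

All three sequences converge to the zero of A here; the inertial and relaxation parameters only change the contraction factors per eigendirection.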
In this paper, an inertial iterative algorithm is proposed for approximating a solution of a maximal monotone inclusion in a uniformly convex and uniformly smooth real Banach space. The sequence generated by the algorithm is proved to converge strongly to a solution of the inclusion. Moreover, the theorem proved is applied to approximate a solution of a convex optimization problem, and a solution of a Hammerstein equation. Furthermore, numerical experiments are given to compare, in terms of CPU time and number of iterations, the performance of the sequence generated by our algorithm with the sequences generated by IPPA, RPPA, and RIPPA, respectively, for approximating a solution of a maximal monotone inclusion in Hilbert spaces. Finally, numerical examples are given to illustrate the implementability of our algorithm for approximating a solution of a convex optimization problem and for approximating a solution of a Hammerstein equation.
Preliminaries
Let E be a real normed space with dual space \(E^{*}\). The map \(J:E\rightrightarrows{E^{*}}\) defined by
$$Ju=\bigl\{u^{*}\in E^{*}: \langle u,u^{*}\rangle=\|u\|^{2}, \|u^{*}\|=\|u\|\bigr\}$$
is called the normalized duality map.
A normed space E is called uniformly convex if for every \(\epsilon\in(0,2]\) there exists \(\delta=\delta(\epsilon)>0\) such that, if \(u,v\in E\) with \(\|u\|\le1\), \(\|v\|\le1\), and \(\|u-v\|\ge\epsilon\), then \(\|\frac{1}{2}(u+v)\|\le1-\delta\).
A normed space E is called strictly convex if for all \(u,v\in E\) with \(u\ne v\) and \(\|u\|=\|v\|=1\), we have \(\|\lambda u+(1-\lambda)v\|<1\) for all \(\lambda\in(0,1)\).
A normed space E is called uniformly smooth if, given \(\epsilon>0\), there exists \(\delta=\delta(\epsilon)>0\) such that, for all \(u,v\in E\) with \(\|u\|=1\) and \(\|v\|\le\delta\), one has
$$\|u+v\|+\|u-v\|<2+\epsilon\|v\|.$$
A normed space E is called smooth if for every \(u\in E\) with \(\|u\|=1\), there exists a unique \(u^{*}\in E^{*}\) such that \(\|u^{*}\|=1\) and \(\langle u,u^{*}\rangle=\|u\|\).
Remark 1
It is well known that if E is a smooth, strictly convex, and reflexive Banach space, then the normalized duality map, J, is single-valued, one-to-one, and onto. Also, if E is uniformly smooth, then J is uniformly continuous on bounded subsets of E. For more properties of the normalized duality map, see, e.g., Alber and Ryazantseva [1], Lindenstrauss and Tzafriri [32], Chidume [9], and Cioranescu [24].
Let E be a real normed space with \(\dim E\ge2\). The modulus of convexity of E is the function \(\delta_{E}:(0,2]\to[0,1]\) defined by
$$\delta_{E}(\epsilon)=\inf\biggl\{1-\biggl\|\frac{u+v}{2}\biggr\|: \|u\|=\|v\|=1, \|u-v\|\ge\epsilon\biggr\}.$$
The following properties of the modulus of convexity will be needed in the sequel (see, e.g., Chidume [9], page 9):

(a)
\(\frac{\delta_{E}(\epsilon)}{\epsilon}\) is a nondecreasing function on \((0,2]\);

(b)
\(\delta_{E}:(0,2]\to[0,1]\) is a convex and continuous function;

(c)
\(\delta_{E}:(0,2]\to[0,1]\) is a strictly increasing function.
Let E be a smooth real normed space and let \(\phi:E\times E\rightarrow\mathbb{R}^{+}\) be the map defined by
$$\phi(u,v)=\|u\|^{2}-2\langle u,Jv\rangle+\|v\|^{2},\quad u,v\in E.$$
This map was introduced by Alber [1] and has been studied extensively by Alber [1] and a host of other authors (see, e.g., [18, 20, 30]). It is obvious from the definition of the map ϕ that, for any \(u,v\in E\), we have:
$$\bigl(\|u\|-\|v\|\bigr)^{2}\leq\phi(u,v)\leq\bigl(\|u\|+\|v\|\bigr)^{2}.$$
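In a Hilbert space, J is the identity map, so \(\phi(u,v)=\|u-v\|^{2}\), and the standard two-sided bound \((\|u\|-\|v\|)^{2}\le\phi(u,v)\le(\|u\|+\|v\|)^{2}\) reduces to the triangle inequality. A quick random check under this Hilbert-space assumption:

```python
import random

# Hilbert-space sketch: here J is the identity, so
# phi(u, v) = ||u||^2 - 2<u, v> + ||v||^2 = ||u - v||^2,
# and (||u|| - ||v||)^2 <= phi(u, v) <= (||u|| + ||v||)^2
# is just the triangle inequality.  Random check in R^5.

def norm(x):
    return sum(t * t for t in x) ** 0.5

def phi(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

random.seed(0)
for _ in range(1000):
    u = [random.uniform(-10, 10) for _ in range(5)]
    v = [random.uniform(-10, 10) for _ in range(5)]
    lo, hi = (norm(u) - norm(v)) ** 2, (norm(u) + norm(v)) ** 2
    assert lo - 1e-9 <= phi(u, v) <= hi + 1e-9
print("bounds hold")
```

In the general Banach-space setting ϕ is no longer a metric, but the same bounds make it a usable Lyapunov functional for the convergence analysis.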
Define the map \(V:E\times E^{*}\to\mathbb{R}\) by
$$V(u,u^{*})=\|u\|^{2}-2\langle u,u^{*}\rangle+\|u^{*}\|^{2},\quad u\in E, u^{*}\in E^{*}.$$
Then, it is easy to see that
$$V(u,u^{*})=\phi\bigl(u,J^{-1}u^{*}\bigr),\quad\forall u\in E, u^{*}\in E^{*}.$$
We shall use the following lemmas in the sequel, where \(\operatorname {Int}(D(A))\) denotes the interior of the domain of A.
Lemma 2.1
(Alber and Ryazantseva [1])
Let E be a reflexive, strictly convex, and smooth Banach space with \(E^{*}\)as its dual. Then,
$$V(u,u^{*})+2\bigl\langle J^{-1}u^{*}-u,v^{*}\bigr\rangle\leq V\bigl(u,u^{*}+v^{*}\bigr)$$
for all \(u\in E\)and \(u^{*},v^{*}\in E^{*}\).
Lemma 2.2
(Pascali and Sburlan [36], Lemma 3.6, Chap. III)
Let E be a real normed space and \(A:E\rightrightarrows{E^{*}}\)be a monotone map with \(0\in \operatorname{Int}(D(A))\). Then, A is quasi-bounded, i.e., for any \(M>0\), there exists \(C>0\)such that:

(i)
\((y,v)\in G(A)\);

(ii)
\(\langle v,y \rangle\leq M\|y\|\); and

(iii)
\(\|y\|\leq M\), imply \(\|v\|\leq C\).
Lemma 2.3
(Kamimura and Takahashi [30])
Let E be a uniformly convex and uniformly smooth real Banach space and \(\{x_{n}\}\), \(\{y_{n}\}\)be sequences in E such that either \(\{x_{n}\}\)or \(\{y_{n}\}\)is bounded. If \(\lim_{n\rightarrow\infty}\phi(x_{n},y_{n})=0\), then \(\lim_{n\rightarrow\infty}\|x_{n}-y_{n}\|=0\).
Lemma 2.4
(Alber and Ryazantseva [1], p. 50)
Let E be a reflexive, strictly convex, and smooth Banach space with \(E^{*}\)as its dual. Let \(W:E\times E\rightarrow\mathbb{R}\)be defined by \(W(x,y)=\frac{1}{2}\phi(y,x)\). Then,
i.e.,
and also
for all \(x, y, z\in E\).
Lemma 2.5
(Alber and Ryazantseva [1], p. 45)
Let E be a uniformly convex Banach space. Then, for any \(R>0\)and any \(x, y\in E\)such that \(\|x\|\le R\), \(\|y\|\le R\), the following inequality holds:
where \(c_{2}=2\max\{1,R\}\), \(1< L<1.7\).
Define
Lemma 2.6
(Alber and Ryazantseva [1], p. 46)
Let E be a uniformly smooth and strictly convex Banach space. Then, for any \(R>0\)and any \(x, y\in E\)such that \(\|x\|\le R\), \(\|y\|\le R\), the following inequality holds:
where \(c_{2}=2\max\{1,R\}\), \(1< L<1.7\), and \(\delta_{E}\)is the modulus of convexity of E.
Lemma 2.7
(Reich [38])
Let \(E^{*}\)be a strictly convex dual Banach space with a Fréchet differentiable norm, and let \(A:E\rightrightarrows E^{*}\)be a maximal monotone map with a zero and \({z\in E^{*}}\). For each \(\lambda>0\), there exists a unique \(x_{\lambda}\in E\)such that \(z\in Jx_{\lambda}+ \lambda Ax_{\lambda}\). Furthermore, \(x_{\lambda}\)converges strongly to a unique zero of A.
Lemma 2.8
From Lemma 2.7, setting \(\lambda_{n} :=\frac{1}{\theta_{n}}\), where \(\theta_{n}\to0\), as \(n \to\infty\), \(\theta_{n}\le\theta_{n-1}\), \(\forall n \ge1\), \(\frac{1}{2} (\frac{\theta_{n-1}-\theta_{n}}{\theta_{n}}K )\le1\), \(z = Jh\), for some \(h \in E\), \(v_{n}\in Ay_{n}\)and \(y_{n}:= (J+\frac{1}{\theta_{n}}A )^{-1}z\), we have:
where \(A : E \rightrightarrows E^{*}\)is maximal monotone.
Lemma 2.9
(Xu [45])
Let \(\{a_{n}\}\)be a sequence of nonnegative real numbers satisfying the following relation:
$$a_{n+1}\leq(1-\sigma_{n})a_{n}+\sigma_{n}b_{n}+c_{n},\quad n\geq1,$$
where \(\{\sigma_{n}\}\), \(\{b_{n}\}\), and \(\{c_{n}\}\)satisfy the conditions:

(i)
\(\{\sigma_{n}\}\subset[0,1]\), \(\sum_{n=1}^{\infty} \sigma _{n}=\infty\);

(ii)
\(\limsup_{n\to\infty} b_{n} \leq0\);

(iii)
\(c_{n}\geq0\), \(\sum_{n=1}^{\infty} c_{n}<\infty\).
Then, \(\lim_{n\to\infty} a_{n}=0\).
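A numerical illustration of Lemma 2.9, with sample sequences chosen by us (not taken from the paper): iterating the relation with equality gives an upper bound for \(a_{n}\), which visibly tends to 0.

```python
import math

# Illustration of Lemma 2.9 with sample sequences:
# a_{n+1} <= (1 - sigma_n) a_n + sigma_n b_n + c_n forces a_n -> 0
# when sum sigma_n = inf, limsup b_n <= 0, and sum c_n < inf.
# Iterating with equality yields an upper bound for a_n.

def simulate(a1=10.0, N=200000):
    a = a1
    for n in range(1, N + 1):
        sigma = 1.0 / (n + 1)            # sum sigma_n diverges
        b = 1.0 / math.sqrt(n)           # limsup b_n = 0
        c = 1.0 / n ** 2                 # sum c_n is finite
        a = (1 - sigma) * a + sigma * b + c
    return a

print(simulate())    # small: the bound for a_n tends to 0
```

This recursion is exactly the shape obtained for \(a_{n}=\phi(y_{n-1},z_{n})\) in the proof of the main theorem below.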
Main results
The following conditions are required in the combined proofs of Lemma 3.1 and Theorem 3.2 below, where \(\{\lambda_{n}\}\), \(\{ \beta_{n}\}\), and \(\{\theta_{n}\}\) are sequences in \((0,1)\):

(i)
\(\sum_{n=1}^{\infty}\lambda_{n}\theta_{n}=\infty\),

(ii)
\(\delta_{E}^{-1}(\lambda_{n}K)\leq\theta^{2}_{n}\gamma_{0}\),

(iii)
\(\delta_{E^{*}}^{-1}(\lambda_{n}K)\leq\theta^{2}_{n}\gamma_{0}\),

(iv)
\(\omega_{J}(\beta_{n}K)\le\lambda_{n}^{4}\theta_{n}\gamma_{0}\),

(v)
\(\delta_{E}^{-1}(\eta_{n}) \to0\),

(vi)
\(\delta_{E^{*}}^{-1}(\eta_{n})\to0\),

(vii)
\(\frac{\delta_{E}^{-1}(\eta_{n})}{\lambda_{n}\theta_{n}}\to0\),

(viii)
\(\frac{\delta_{E^{*}}^{-1}(\eta_{n})}{\lambda_{n}\theta_{n}}\to0\),

(ix)
\(\lambda_{n}\le\theta_{n}\gamma_{0}\),
where \(\eta_{n}= (\frac{\theta_{n-1}}{\theta_{n}}-1 )K\), for some constants \(\gamma_{0}>0\), \(K>0\); \(\delta_{E}\) is the modulus of convexity of E, and \(\omega_{J}\) is the modulus of continuity of J.
Estimates for the moduli of convexity of \(E=L_{p}\), \(1< p<\infty\).
The following estimates have been obtained for \(\delta_{E}\) in \(L_{p}\) spaces, \(1< p<\infty\),
where \(\epsilon\in(0,2]\) (see e.g., Lindenstrauss and Tzafriri [32], see also, Chidume [9], p. 44).
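For \(p=2\) the modulus is known in closed form: \(\delta_{L_{2}}(\epsilon)=1-\sqrt{1-\epsilon^{2}/4}\), by the parallelogram law. The sketch below estimates the infimum in the definition by random sampling of unit vectors in the Euclidean plane and compares it with this closed form (our illustration):

```python
import math, random

# Sketch: for a Hilbert space the modulus of convexity is exactly
# delta(eps) = 1 - sqrt(1 - eps^2/4).  We estimate the infimum in the
# definition by random sampling of pairs of unit vectors in R^2.

def sampled_delta(eps, trials=20000):
    best = 1.0
    random.seed(1)
    for _ in range(trials):
        t1, t2 = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
        u = (math.cos(t1), math.sin(t1))
        v = (math.cos(t2), math.sin(t2))
        if math.hypot(u[0] - v[0], u[1] - v[1]) >= eps:      # ||u - v|| >= eps
            mid = math.hypot(u[0] + v[0], u[1] + v[1]) / 2.0  # ||(u + v)/2||
            best = min(best, 1.0 - mid)
    return best

eps = 1.0
exact = 1.0 - math.sqrt(1.0 - eps ** 2 / 4.0)
print(sampled_delta(eps), exact)    # the sampled infimum approaches the exact value
```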
Also, in \(L_{p}\) spaces, J is Lipschitz if \(2\le p<\infty\) and it satisfies the following inequality:
if \(1< p<2\). Consequently, we have the following estimates:
where \(\epsilon>0\), H and M are positive constants, and J is the normalized duality map (see, e.g., Lindenstrauss and Tzafriri [32], see also Chidume [9]).
Prototypes of the parameters for Lemma 3.1 and Theorem 3.2 below in the case that \(E=L_{p}\), \(1< p<\infty\) are:
For \(L_{p}\) spaces, \(2\le p<\infty\),
For \(L_{p}\) spaces, \(1< p< 2\),
With these choices, conditions (i)–(ix) given in Lemma 3.1 and Theorem 3.2 are easily satisfied.
Furthermore, we have the following formulae for J and \(J^{-1}\) in \(L_{p}\) and \(l_{p}\), \(1< p<\infty\), \(p^{-1}+q^{-1}=1\) (see, e.g., Alber and Ryazantseva [1], p. 36):
$$Jx=\|x\|_{L_{p}}^{2-p}|x|^{p-2}x\in L_{q},\qquad J^{-1}x^{*}=\|x^{*}\|_{L_{q}}^{2-q}|x^{*}|^{q-2}x^{*}\in L_{p}.$$
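The \(l_{p}\) formula can be checked numerically on a finite-dimensional slice: with \(Jx=\|x\|_{p}^{2-p}|x|^{p-2}x\), the defining properties \(\langle x,Jx\rangle=\|x\|_{p}^{2}\) and \(\|Jx\|_{q}=\|x\|_{p}\) hold (a sketch of this standard formula; the test vector below is ours):

```python
import numpy as np

# Duality-map formula in l_p (finite-dimensional slice):
# J(x) = ||x||_p^{2-p} * |x|^{p-2} x, landing in l_q with 1/p + 1/q = 1.
# We verify <x, Jx> = ||x||_p^2 and ||Jx||_q = ||x||_p.

def duality_map(x, p):
    nrm = np.linalg.norm(x, ord=p)
    return nrm ** (2 - p) * np.abs(x) ** (p - 2) * x

p = 5.0
q = p / (p - 1.0)
x = np.array([1.0, -2.0, 0.5])
jx = duality_map(x, p)

pairing = float(np.dot(x, jx))
print(pairing, np.linalg.norm(x, ord=p) ** 2)                # equal
print(np.linalg.norm(jx, ord=q), np.linalg.norm(x, ord=p))   # equal
```

The choice \(p=5\) matches the space \(L_{5}([0,1])\) used in the Hammerstein experiments below.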
We now prove the following lemma.
Lemma 3.1
Let E be a uniformly smooth and uniformly convex real Banach space and \(A:E\rightrightarrows E^{*}\)be a maximal monotone operator with \(D(A)=E\)such that the inclusion \(0\in Az\)has a solution. For arbitrary \(z_{0}, z_{1}\in E\), define a sequence \(\{z_{n}\}\)by
Then, the sequence \(\{z_{n}\}\)is bounded.
Proof
We show that the sequence \(\{z_{n}\}\) is bounded.
Let \(z^{*}\) be a solution of \(0\in Az\), i.e., \(0\in Az^{*}\). Then, there exists \(r>0\) (sufficiently large) such that
Define \(B:=\{z\in E:\phi(z^{*},z)< r\}\), with \(0\in B\). Clearly, \(B\subset\operatorname{Int}(D(A))\). It suffices to show that \(\{\phi(z^{*}, z_{n})\}\) is bounded. We proceed by induction. For \(n=1\), by construction, we have that \(\phi(z^{*}, z_{1})< r\). Assume that \(\phi(z^{*}, z_{n})< r\), for some \(n\ge1\). Using inequality (2.1), we have that \(\|z_{n}\|< \|z^{*}\|+\sqrt{r}\). Now, we show that \(\phi(z^{*}, z_{n+1})< r\). Suppose, for contradiction, that this does not hold. Then, \(\phi(z^{*}, z_{n+1})\ge r\).
Let \(y\in B\) be arbitrary and \((y,v)\in G(A)\), \(u\in Ax\). Since A is locally bounded at 0, there exist \(h_{0}>0\), \(m_{0}>0\) such that
By the monotonicity of A, we have that:
Setting \(s=y\), we have that:
so that
Define \(M:=\max\{M_{0}, \|z^{*}\|+\sqrt{r} \}\). Then, \(\langle v,y\rangle \le M\|y\|\) and \(\|y\|\le M\). By Lemma 2.2, there exists \(C>0\) such that \(\|v\|\le C\), \(\forall y\in B\). Define
From the recursion formula, Lemma 2.5, and the fact that J and \(J^{1}\) are uniformly continuous on bounded sets, we have that
Define
where \(K^{*}=\max\{M^{*},M_{1},M_{2},M_{1}M_{2},M,c_{2}M_{1}\}\). Using Lemma 2.1 and denoting by \(0^{*}\) the element \(0\in Az^{*}\), we compute:
By Lemma 2.4, we have that
It follows from inequality (3.7) that
This is a contradiction. Hence, \(\phi(z^{*},z_{n+1})< r\). Therefore, \(\phi(z^{*},z_{n})< r\), for all \(n\geq1\). □
Theorem 3.2
Let E be a uniformly smooth and uniformly convex real Banach space. Let \(A:E\rightrightarrows E^{*}\)be a maximal monotone operator with \(D(A)=E\)such that the inclusion \(0\in Az\)has a solution. For arbitrary \(z_{0}, z_{1}\in E\), define a sequence \(\{z_{n}\}\)by algorithm (3.1). Then, the sequence \(\{z_{n}\}\)converges strongly to a zero of A (see Remark 2 below).
Proof
Using Lemma 2.1 and equation (2.2), we have
Observe that
Thus, from inequalities (3.9), (3.10), and the fact that \(v_{n}\in Ay_{n}\), we obtain
Observe that
Also, from Lemma 2.8, we obtain that
Hence, substituting inequality (3.12) and equation (3.13) into inequality (3.11), we have that
Applying inequalities (3.8) and (3.14), we obtain
Set \(a_{n}:=\phi(y_{n-1},z_{n})\), \(\sigma_{n}:=\lambda_{n}\theta_{n}\), \(c_{n}:=\lambda_{n}^{4}\theta_{n} \), and
Hence, inequality (3.15) becomes \(a_{n+1}\leq(1-\sigma_{n})a_{n}+\sigma_{n}b_{n}+c_{n}\), \(n\geq1\). It follows from Lemma 2.9 that \(\lim_{n\to\infty}\phi(y_{n-1},z_{n})=0\). By Lemma 2.3, we have \(\lim_{n\to\infty}\|z_{n}-y_{n-1}\| = 0\). Since \(\lim_{n\to\infty}y_{n}=y^{*}\in A^{-1}(0)\), we have that \(\{z_{n}\}\) converges to \(y^{*}\in A^{-1}(0)\). This completes the proof. □
Remark 2
This zero of A may be a minimum norm zero of A, for example, if A is the subdifferential, ∂f, of a proper lower semicontinuous and convex function f.
Applications
Application to a convex optimization problem
The following lemma will be crucial in what follows.
Lemma 4.1
(Rockafellar [40])
Let E be a Banach space and let \(f:E\rightarrow\mathbb{R}\cup\{ \infty\}\)be a proper, convex and lower semicontinuous function. Then, the subdifferential of f, ∂f, is maximal monotone. Furthermore, \(0\in\partial f(u^{*})\)if and only if \(u^{*}\)is a minimizer of f.
We now have the following theorem.
Theorem 4.2
Let E be a uniformly convex and uniformly smooth real Banach space with dual \(E^{*}\). Let \(f:E\rightarrow\mathbb{R}\cup\{\infty\}\)be a proper, lower semicontinuous, and convex function such that \({(\partial f)}^{1}0\ne\emptyset\). For given \(z_{0},z_{1}\in E\), let \(\{z_{n}\}\)be generated by the algorithm
Then, the sequence \(\{z_{n}\}\)converges strongly to a minimizer of f.
Proof
By Lemma 4.1, ∂f is maximal monotone. The conclusion follows from Theorem 3.2. □
Applications to Hammerstein integral equations
Definition 4.3
Let \(\varOmega\subset{\mathbb{R}}^{n}\) be bounded. Let \(k:\varOmega\times\varOmega\to \mathbb{R}\) and \(f:\varOmega\times\mathbb{R} \to \mathbb{R}\) be measurable real-valued functions. An integral equation (generally nonlinear) of Hammerstein-type has the form
$$u(x)+\int_{\varOmega}k(x,y)f\bigl(y,u(y)\bigr)\,dy=w(x),\quad x\in\varOmega,$$
where the unknown function u and inhomogeneous function w lie in a Banach space E of measurable realvalued functions.
If we define an operator K by \(K(v)(x):= \int_{\varOmega}k(x,y)v(y)\,dy\), \(x\in\varOmega\), and the so-called superposition or Nemytskii operator by \(Fu(y):=f(y,u(y))\), then equation (4.2) can be put in the form
$$u+KFu=0.$$
Without loss of generality, we have taken \(w\equiv0\).
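A discretized sketch of equation (4.2) (with an illustrative kernel and nonlinearity chosen by us, not the paper's): take \(k(x,y)=xy\), \(f(y,u)=u\) (so F is the identity), and \(w(x)=x\) on \([0,1]\). Writing \(u=(1-c)x\) with \(c=\int_{0}^{1}yu(y)\,dy\) gives \(c=(1-c)/3\), so \(c=\frac{1}{4}\) and \(u(x)=\frac{3}{4}x\); the code recovers this by fixed-point iteration on a trapezoidal discretization:

```python
import numpy as np

# Discretized Hammerstein sketch: solve u(x) + int_0^1 k(x,y) f(y,u(y)) dy = w(x)
# with k(x,y) = x*y, f(y,u) = u (F = identity), w(x) = x.
# Exact continuous solution: u(x) = (3/4) x.

m = 201
x = np.linspace(0.0, 1.0, m)
wgt = np.full(m, 1.0 / (m - 1)); wgt[0] = wgt[-1] = 0.5 / (m - 1)  # trapezoid weights
K = np.outer(x, x * wgt)          # (K v)_i ~ x_i * int_0^1 y v(y) dy

w = x.copy()
u = np.zeros(m)
for _ in range(100):              # fixed-point iteration u <- w - K F u;
    u = w - K @ u                 # contraction since ||K|| < 1 here

print(np.max(np.abs(u - 0.75 * x)))   # small discretization/iteration error
```

This naive iteration works only because the assumed kernel makes \(KF\) a contraction; the algorithms of this section are designed precisely for the general monotone case where no such contraction is available.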
Interest in Hammerstein integral equations stems mainly from the fact that several problems that arise in differential equations, for instance, elliptic boundary value problems whose linear part possesses a Green's function, can, as a rule, be transformed into the form of equation (4.2) (see, e.g., Pascali and Sburlan [36], Chap. IV). Consider, for example, the following pendulum problem:
where the driving force z is periodic and odd. The constant \(a\neq0\) depends on the length of the pendulum and on gravity. Since the Green's function of the problem
is the function defined by
problem (4.4) is equivalent to the nonlinear integral equation
If \(\int_{0}^{1}k(t,x)z(x)\,dx=g(t)\) and \(v(t)+g(t)=u(t)\), then (4.5) can be written as the Hammerstein integral equation
where \(f(x,u(x))=a^{2}\sin[u(x)g(x)]\).
Equations of Hammerstein-type also play a special role in the theory of optimal control systems and in automation and network theory (see, e.g., Dolezal [27]).
In the case when K and F are maximal monotone, several existence and uniqueness theorems have been proved for equations of Hammerstein type (see, e.g., [5, 7, 8, 25]).
Iterative methods for approximating solutions of problem (4.3) have been studied (see e.g., [10, 12, 13, 15, 19, 22, 23, 26, 35, 42] and the references therein).
In this section, we shall apply Theorem 3.2, for the case where the map A is single-valued, to approximate a solution of equation (4.3). First, we state the following important lemma.
Lemma 4.4
(Chidume and Idu [16])
Let X be a uniformly convex and uniformly smooth real Banach space with dual space \(X^{*}\)and \(E=X\times X^{*}\). Let \(F:X\to X^{*}\)and \(K:X^{*}\to X\)be monotone maps with \(R(F)=D(K)\), where \(R(F)\)is the range of F and \(D(K)\)is the domain of K. Let \(A:E\to E^{*}\)be defined by \(A[u,v]=[Fu-v,Kv+u]\). Then, A is maximal monotone.
Let \(\{\lambda_{n}\}\), \(\{\beta_{n}\}\), and \(\{\theta_{n}\}\) be sequences in \((0,1)\) and satisfy the conditions as given in Theorem 3.2.
Theorem 4.5
Let E be a uniformly convex and uniformly smooth real Banach space with dual space \(E^{*}\). Let \(F: E \to E^{*}\), \(K:E^{*}\to E\)be maximal monotone maps. Let \(X:=E\times E^{*}\)and \(A:X\to X^{*}\)be defined by \(A[u,v]:=[Fu-v,Kv+u]\). For arbitrary \(z_{0}, z_{1}\in X\), define the sequence \(\{z_{n}\}\)in X by
Assume that the equation \(u+KFu=0\)has a solution, then the sequence \(\{z_{n}\}_{n=1}^{\infty}\)converges strongly to a solution of \(u+KFu=0\).
Proof
By a result of Chidume [9], the space \(X=E\times E^{*}\) is uniformly smooth and uniformly convex; also, by Lemma 4.4, A is maximal monotone. Therefore, the conclusion follows from Theorem 3.2. □
Theorem 4.5 can also be stated as follows.
Theorem 4.6
Let E be a uniformly smooth and uniformly convex real Banach space with dual space \(E^{*}\). Let \(F:E\to E^{*}\), \(K:E^{*}\to E\)be maximal monotone maps with \({R(F)=D(K)}\), where \(R(F)\)is the range of F and \(D(K)\)is the domain of K.
For arbitrary \({(u_{0},v_{0}), (u_{1},v_{1})\in E\times E^{*}}\), define the sequences \(\{u_{n}\}\)and \(\{v_{n}\}\)in \(E\times E^{*}\)by
Assume that the equation \(u+KFu=0\)has a solution, then the sequences \(\{u_{n}\}_{n=1}^{\infty}\)and \(\{v_{n}\}_{n=1}^{\infty }\)converge strongly to \(u^{*}\)and \(v^{*}\), respectively, where \(u^{*}\)is a solution of \(u+KFu=0\), with \(v^{*}=Fu^{*}\).
Remark 3
Algorithm (4.7) (Inertial Algorithm 2) will be compared with Algorithm (4.8) of Uba et al. [44] and Algorithm (4.9) of Chidume et al. [11] below. We state the theorems for completeness.
Theorem 4.7
(Uba et al. [44])
Let E be a uniformly convex and uniformly smooth real Banach space and \(F: E \to E^{*}\), \(K:E^{*}\to E\)be maximal monotone and bounded maps. For \(u_{1}\in E\)and \(v_{1}\in E^{*}\), define the sequences \(\{u_{n}\}\)and \(\{v_{n}\}\)in E and \(E^{*}\), respectively, by
where \(\lambda_{n}\)and \(\theta_{n}\)are sequences in \((0,1)\)satisfying appropriate conditions. Assume that the equation \(u + KF u =0\)has a solution. Then, the sequences \(\{u_{n}\}\)and \(\{v_{n}\}\)converge strongly to \(u^{*}\)and \(v^{*}\), respectively, where \(u^{*}\)is the solution of \(u+KF u = 0\)with \(v^{*}= F u^{*}\).
Theorem 4.8
(Chidume et al. [11])
Let E be a uniformly convex and uniformly smooth real Banach space and \(F: E \to E^{*}\), \(K:E^{*}\to E\)be maximal monotone maps. For \(u_{1}\in E\)and \(v_{1}\in E^{*}\), define the sequences \(\{u_{n}\}\)and \(\{v_{n}\}\)in E and \(E^{*}\), respectively, by
where \(\lambda_{n}\)and \(\theta_{n}\)are sequences in \((0,1)\)satisfying appropriate conditions. Assume that the equation \(u + KF u =0\)has a solution. Then, the sequences \(\{u_{n}\}\)and \(\{v_{n}\}\)converge strongly to \(u^{*}\)and \(v^{*}\), respectively, where \(u^{*}\)is the solution of \(u+KF u = 0\)with \(v^{*}= F u^{*}\).
Numerical illustration
In this section, we present numerical examples to compare the convergence of the sequences generated by our inertial algorithms with those generated by some recent important algorithms. First, we compare the convergence of the sequence generated by Algorithm (3.1) (Inertial Algorithm 1) with those generated by Algorithm (1.5), the IPPA (1.7), the RPPA (1.8), and the RIPPA (1.9), respectively. Also, we present numerical examples to compare the convergence of the sequence generated by Algorithm (4.7) (Inertial Algorithm 2) with those generated by Algorithms (4.8) and (4.9), respectively. Finally, we present a numerical example to illustrate the implementability of Algorithm (4.1), whose sequence approximates a solution of a convex optimization problem.
Example 1
(Zeros of a maximal monotone map in a real Hilbert space)
In Theorem 1.1, the IPPA, the RPPA, the RIPPA, and Theorem 3.2, set \(E=L_{2}([0,1])\). Consider the map \(A:E \to E\) defined by
Then, it is easy to see that A is maximal monotone. Furthermore, the function \(u(t)=0\), \(\forall t\in[0,1]\), is the solution of the equation \(Au(t)=0\). In Theorem 1.1, we take \(\alpha_{n}=\frac{1}{(n+1)^{\frac{1}{2}}}\), \(\theta_{n}=\frac{1}{(n+1)^{\frac{1}{4}}}\); in Algorithm (1.7) (IPPA), we take \(\lambda_{k}=\frac{k}{k+1}\), \(\alpha_{k}=\frac{1}{(k+1)^{2}}\); in Algorithm (1.8) (RPPA), we take \(\lambda_{k}=\frac{k}{k+1}=\rho_{k}\); in Algorithm (1.9) (RIPPA), we take \(\lambda_{k}=\frac{k}{k+1}=\rho_{k}\), \(\alpha_{k}=\frac{1}{(k+1)^{2}}\); and in Theorem 3.2, we take \(\alpha_{n}=\frac{1}{(n+1)^{\frac{1}{2}}}\), \(\theta_{n}=\frac{1}{(n+1)^{\frac{1}{4}}}\), \(\beta_{n}=\frac{1}{(n+1)^{2}}\), \(n=1,2, \dots\), as our parameters. Clearly, these parameters satisfy the hypotheses of the respective theorems. In all the tables below, we use the following notation:

IP—initial point,

n—number of iterations,

\(\|u_{n+1}\|\)—norm of the approximate solution at the \((n+1)\)th iteration,

\(T(s)\)—time in seconds.
Setting a tolerance of 10^{−6} and maximum number of iterations \(n=10\), we obtain the iterates which are shown in Tables 1 and 2.
Example 2
(Numerical example for solutions of Hammerstein equation)
In Theorems 4.7, 4.8, and 4.6 (Inertial Algorithm 2), respectively, set \(E=L_{5}([0,1])\); then \(E^{*}=L_{\frac{5}{4}}([0,1])\), and \(F:L_{5}([0,1]) \to L_{\frac{5}{4}}([0,1])\) is defined by
Then, it is easy to see that F is maximal monotone. Let \(K: L_{\frac {5}{4}}([0,1]) \to L_{5}([0,1])\) be defined by
Observe that K is linear. Furthermore, it is easy to see that K is maximal monotone and the function \(u^{*}(t)=0\), \(\forall t\in[0,1]\), is the only solution of the equation \(u+KFu=0\). In the algorithm of Theorem 3.1 in [44], we take \(\lambda_{n}=\theta_{n}=\frac{1}{(n+1)^{\frac{1}{2}}}\); in the algorithm of Theorem 3.4 in [11], \(\alpha_{n}=\frac{1}{(n+1)^{\frac{1}{2}}}\), \(\beta_{n}=\frac{1}{(n+1)^{\frac{1}{4}}}\), \(n=1,2,\dots\); and in our algorithm, we take \(\alpha_{n}=\frac{1}{(n+1)^{\frac{1}{2}}}\), \(\theta_{n}=\frac{1}{(n+1)^{\frac{1}{4}}}\), \(\beta_{n}=\frac{1}{(n+1)^{2}}\), \(n=1,2,\dots\), as our parameters, and we fix \(u_{0}(t)=t\) and \(v_{0}(t)=t+1\). Clearly, these parameters satisfy the hypotheses of the respective theorems. Setting a tolerance of 10^{−6} and a maximum number of iterations \(n=6\), we obtain the iterates shown in Table 3.
Example 3
(Numerical example for solutions of convex optimization problem)
In Theorem 4.2, set \(E=L_{2}([0,1])\). Let \(f: E\to \mathbb{R}\cup\{\infty\}\) be defined by
Then, it is easy to see that ∂f is maximal monotone. Furthermore, the function \(z(t)=0\), \(\forall t\in[0,1]\), is the solution of the inclusion \(0\in\partial f(z)\). We take \(\alpha_{n}=\frac{1}{(n+1)^{\frac{1}{2}}}\), \(\theta_{n}=\frac{1}{(n+1)^{\frac{1}{4}}}\), \(\beta_{n}=\frac{1}{(n+1)^{2}}\), \(n=1,2, \dots\), as our parameters. Clearly, these parameters satisfy the hypotheses of Theorem 4.2. Setting a tolerance of 10^{−6} and a maximum number of iterations \(n=10\), we obtain the iterates shown in Table 4.
Observations

1.
In Example 1, we presented a numerical experiment for zeros of a maximal monotone map A on E, where \(E=L_{2}([0,1])\). With a tolerance of 10^{−6}, setting the maximum number of iterations to \(n=10\) and considering \(u_{1}(t)=t^{2}+1\), the sequences generated by Algorithm (1.5) and by the IPPA (1.7) are yet to converge to zero, whereas the sequence generated by our algorithm, Algorithm (3.1), converges to zero within 8 iterations, the 8th iterate being \(1.89\mathrm{E}{-}6\), a very good approximation to a zero.
Furthermore, the sequence generated by the RPPA converges to zero with the 10th iterate as 0.005, and the sequence generated by the RIPPA converges to zero with the 10th iterate as 0.0051 in 16.68 seconds, whereas the sequence generated by our algorithm, Algorithm (3.1), converges to zero as described above. The convergence of the sequence generated by our algorithm is better than that of the sequence generated by either the RPPA or the RIPPA. A similar trend is observed when the initial vector is changed to \(u_{1}(t)=\frac{1}{t+1}\).

2.
In Example 2, we presented a numerical experiment for solutions of a Hammerstein integral equation, where \(E=L_{5}([0,1])\), and \(F:E\to E^{*}\) and \(K:E^{*}\to E\) are maximal monotone. With a tolerance of \(10^{-6}\), setting the maximum number of iterations to \(n=6\), and taking \(u_{1}(t)=\sin t\) and \(v_{1}(t)=\cos t\), the sequence generated by Algorithm (4.8) is yet to converge to any zero of A after 6 iterations in 41.56 seconds, whereas Algorithms (4.7) and (4.9), after 6 iterations in 4129.97 and 92.78 seconds, respectively, converged to a zero of A. Furthermore, Algorithm (4.9) and our Algorithm (4.7), with these initial vectors, both converge to zero almost jointly, with the 6th iterates 0.0337 (in 92.78 seconds) and 0.0291 (in 4129.97 seconds), respectively. Similar trends are observed when the initial vectors are changed to \(u_{1}(t)=t^{2}-2\), \(v_{1}(t)=e^{t}-1\), and to \(u_{1}(t)=2t^{3}-2\), \(v_{1}(t)=te^{t}\).

3.
Example 3, where \(E=L_{2}([0,1])\), and \(f:E\to\mathbb{R}\cup\{\infty\}\) and \(\partial f:E\rightrightarrows E^{*}\) are the maps defined in equation (5.1), demonstrates the implementability of the sequence generated by Algorithm (4.1) with a tolerance of \(10^{-6}\), a maximum number of iterations \(n=10\), and \(z_{1}(t)=t+\sin t\).
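The comparative behaviour reported in these observations can be reproduced qualitatively on a toy problem. The sketch below contrasts the classical proximal point algorithm with an Alvarez–Attouch-style inertial variant on the maximal monotone operator \(Ax=x\) on \(\mathbb{R}\), whose resolvent is \(J_{\lambda}x=x/(1+\lambda)\). The operator, the inertial weight \(\theta=0.3\), and the step \(\lambda=1\) are illustrative choices, not the settings of the experiments above.

```python
def resolvent(x, lam):
    """Resolvent J_lambda of the toy operator A x = x on the real line."""
    return x / (1.0 + lam)

def ppa(x0, lam=1.0, tol=1e-6, max_iter=1000):
    """Classical proximal point algorithm (Martinet/Rockafellar)."""
    x, n = x0, 0
    while abs(x) >= tol and n < max_iter:
        x = resolvent(x, lam)
        n += 1
    return x, n

def ippa(x0, lam=1.0, theta=0.3, tol=1e-6, max_iter=1000):
    """Inertial PPA in the style of Alvarez--Attouch (a sketch, not
    Algorithm (3.1) of this paper)."""
    x_prev, x, n = x0, x0, 0
    while abs(x) >= tol and n < max_iter:
        y = x + theta * (x - x_prev)       # inertial extrapolation
        x_prev, x = x, resolvent(y, lam)   # proximal step
        n += 1
    return x, n

x_p, n_p = ppa(1.0)
x_i, n_i = ippa(1.0)
# On this toy problem the inertial variant reaches the tolerance in
# fewer iterations than the plain PPA.
```

This mirrors, in miniature, the advantage of inertial extrapolation observed in the \(L_{2}\) experiments, though the quantitative gains there depend on the operators and parameter sequences used.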
Remark 4
The theorems involving the IPPA, RPPA, and RIPPA cited above are proved in real Hilbert spaces, whereas the theorems in this paper are proved in the much more general setting of uniformly convex and uniformly smooth real Banach spaces. Moreover, a strong convergence theorem is proved in Theorem 3.2, whereas weak convergence theorems are proved for the IPPA, RPPA, and RIPPA.
Conclusions
An inertial iterative algorithm which does not involve the resolvent operator is proposed for approximating a solution of a maximal monotone inclusion in uniformly convex and uniformly smooth real Banach spaces. The sequence generated by the algorithm is proved to converge strongly to a solution of the inclusion. Furthermore, the theorem proved is applied to approximate a solution of a convex optimization problem and a solution of a Hammerstein integral equation. In addition, numerical experiments are given to compare, in terms of CPU time and number of iterations, the performance of the sequence generated by our algorithm with the performance of the sequences generated by the IPPA, RPPA, and RIPPA. In these examples, the performance of the sequence generated by our algorithm is much better than that of the sequence generated by any of the IPPA, RPPA, and RIPPA. A numerical example is also given to illustrate the implementability of our algorithm for approximating a solution of a convex optimization problem and a solution of a Hammerstein integral equation. Finally, our algorithm is a welcome addition to the inertial proximal point type algorithms for approximating solutions of maximal monotone inclusions and their applications.
References
 1.
Alber, Ya., Ryazantseva, I.: Nonlinear Ill-Posed Problems of Monotone Type. Springer, London (2006)
 2.
Alvarez, F.: On the minimizing property of a second order dissipative system in Hilbert spaces. SIAM J. Control Optim. 38, 1102–1119 (2000)
 3.
Alvarez, F.: Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 14(3), 773–782 (2004)
 4.
Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. (2001). https://doi.org/10.1023/A:1011253113155
 5.
Brézis, H., Browder, F.E.: Nonlinear integral equations and systems of Hammerstein type. Bull. Am. Math. Soc. 82, 115–147 (1976)
 6.
Browder, F.E.: Fixed point theory and nonlinear problems. Bull. Am. Math. Soc. 9, 1–39 (1983)
 7.
Browder, F.E., De Figueiredo, D.G., Gupta, C.P.: Maximal monotone operators and nonlinear integral equations of Hammerstein type. Bull. Am. Math. Soc. 76, 700–705 (1970)
 8.
Chepanovich, R.S.: Nonlinear Hammerstein equations and fixed points. Publ. Inst. Math. (Belgr.) 35, 119–123 (1984)
 9.
Chidume, C.E.: Geometric Properties of Banach Spaces and Nonlinear Iterations. Lecture Notes in Mathematics, vol. 1965. Springer, London (2009)
 10.
Chidume, C.E., Adamu, A., Minjibir, M.S., Nnyaba, U.V.: On the strong convergence of the proximal point algorithm with an application to Hammerstein equations. J. Fixed Point Theory Appl. 22, Article ID 61 (2020)
 11.
Chidume, C.E., Adamu, A., Okereke, L.C.: Approximation of solutions of Hammerstein equations with monotone mappings in real Banach spaces. Carpath. J. Math. 35(3), 305–316 (2019)
 12.
Chidume, C.E., Adamu, A., Okereke, L.C.: Iterative algorithms for solutions of Hammerstein equations in real Banach spaces. Fixed Point Theory Appl. 2020, Article ID 4 (2020)
 13.
Chidume, C.E., Bello, A.U.: An iterative algorithm for approximating solutions of Hammerstein equations with monotone maps in Banach spaces. Appl. Math. Comput. 313, 408–417 (2017)
 14.
Chidume, C.E., De Souza, G.S., Nnyaba, U.V., Romanus, O.M., Adamu, A.: Approximation of zeros of m-accretive mappings, with applications to Hammerstein integral equations. Carpath. J. Math. 36(1), 45–55 (2020)
 15.
Chidume, C.E., Djitte, N.: Approximation of solutions of nonlinear integral equations of Hammerstein type. ISRN Math. Anal. (2012). https://doi.org/10.5402/2012/169751
 16.
Chidume, C.E., Idu, K.O.: Approximation of zeros of bounded maximal monotone mappings, solutions of Hammerstein integral equations and convex minimization problems. Fixed Point Theory Appl. (2016). https://doi.org/10.1186/s13663-016-0582-8
 17.
Chidume, C.E., Ikechukwu, S.I., Adamu, A.: Inertial algorithm for approximating a common fixed point for a countable family of relatively nonexpansive maps. Fixed Point Theory Appl. 2018, Article ID 9 (2018)
 18.
Chidume, C.E., Kumam, P., Adamu, A.: A hybrid inertial algorithm for approximating solution of convex feasibility problems with applications. Fixed Point Theory Appl. 2020, Article ID 12 (2020)
 19.
Chidume, C.E., Nnakwe, M.O., Adamu, A.: A strong convergence theorem for generalized-Φ-strongly monotone maps, with applications. Fixed Point Theory Appl. (2019). https://doi.org/10.1186/s13663-019-0660-9
 20.
Chidume, C.E., Okereke, L.C., Adamu, A.: A hybrid algorithm for approximating solutions of a variational inequality problem and a convex feasibility problem. Adv. Nonlinear Var. Inequal. 21 (2018)
 21.
Chidume, C.E., Uba, M.O., Uzochukwu, M.I., Otubo, E.E., Idu, K.O.: Strong convergence theorem for an iterative method for finding zeros of maximal monotone maps with applications to convex minimization and variational inequality problems. Proc. Edinb. Math. Soc. (2019). https://doi.org/10.1017/S0013091518000366
 22.
Chidume, C.E., Zegeye, H.: Approximation of solutions of nonlinear equations of monotone and Hammerstein type. Appl. Anal. 82(8), 747–758 (2003)
 23.
Chidume, C.E., Zegeye, H.: Approximation of solutions of nonlinear equations of Hammerstein type in Hilbert space. Proc. Am. Math. Soc. 133(3), 851–858 (2005)
 24.
Cioranescu, I.: Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems. Mathematics and Its Applications, vol. 62. Kluwer Academic, Dordrecht (1990)
 25.
De Figueiredo, D.G., Gupta, C.P.: On the variational methods for the existence of solutions to nonlinear equations of Hammerstein type. Bull. Am. Math. Soc. 40, 470–476 (1973)
 26.
Djitte, N., Sene, M.: Iterative solution of nonlinear integral equations of Hammerstein type with Lipschitz and accretive operators. ISRN Appl. Math. (2012). https://doi.org/10.5402/2012/963802
 27.
Dolezale, V.: Monotone Operators and Its Applications in Automation and Network Theory. Studies in Automation and Control. Elsevier, New York (1979)
 28.
Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55, 293–318 (1992)
 29.
Jules, F., Maingé, P.E.: Numerical approach to a stationary solution of a second order dissipative dynamical system. Optimization 51, 235–255 (2002)
 30.
Kamimura, S., Takahashi, W.: Strong convergence of a proximaltype algorithm in a Banach space. SIAM J. Optim. 13, 938–945 (2002)
 31.
Lehdili, N., Moudafi, A.: Combining the proximal algorithm and Tikhonov regularization. Optimization 37, 239–252 (1996)
 32.
Lindenstrauss, J., Tzafriri, L.: Classical Banach Spaces II: Function Spaces. Ergebnisse Math. Grenzgebiete, vol. 97. Springer, Berlin (1979)
 33.
Martinet, B.: Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. Autom. Inform. Rech. Opér. 4, 154–158 (1970)
 34.
Minty, G.J.: Monotone networks. Proc. R. Soc. Lond. 257, 194–212 (1960)
 35.
Ofoedu, E.U., Onyi, C.E.: New implicit and explicit approximation methods for solutions of integral equations of Hammerstein type. Appl. Math. Comput. 246, 628–637 (2014)
 36.
Pascali, D., Sburian, S.: Nonlinear Mappings of Monotone Type. Editura Academia, Bucuresti (1978)
 37.
Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4(5), 1–17 (1964)
 38.
Reich, S.: Constructive techniques for accretive and monotone operators. In: Applied Nonlinear Analysis, pp. 335–345. Academic Press, New York (1979)
 39.
Reich, S., Sabach, S.: Two strong convergence theorems for a proximal method in reflexive Banach spaces. Numer. Funct. Anal. Optim. 31, 22–44 (2010)
 40.
Rockafellar, R.T.: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 33, 209–216 (1970)
 41.
Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14(5), 877–898 (1976)
 42.
Shehu, Y.: Strong convergence theorem for integral equations of Hammerstein type in Hilbert spaces. Appl. Math. Comput. 231, 140–147 (2014)
 43.
Solodov, M.V., Svaiter, B.F.: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program., Ser. A 87, 189–202 (2000)
 44.
Uba, M.O., Uzochukwu, M.I., Onyido, M.A.: Algorithm for approximating solutions of Hammerstein integral equations with maximal monotone operators. Indian J. Pure Appl. Math. 48(3), 391–410 (2017)
 45.
Xu, H.K.: Iterative algorithm for nonlinear operators. J. Lond. Math. Soc. (2000). https://doi.org/10.1007/s10898-006-9002-7
 46.
Xu, H.K.: A regularization method for the proximal point algorithm. J. Glob. Optim. 36(1), 115–125 (2006)
Acknowledgements
The authors appreciate the support of their institution. The authors would like to thank the referees for their esteemed comments and suggestions.
Availability of data and materials
Data sharing is not applicable to this article.
Funding
No funding is applicable to this article.
Author information
Contributions
All the authors contributed evenly in the writing of this paper. They read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no conflict of interest.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Chidume, C.E., Adamu, A., Nnakwe, M.O.: Strong convergence of an inertial algorithm for maximal monotone inclusions with applications. Fixed Point Theory Appl. 2020, Article ID 13 (2020). https://doi.org/10.1186/s13663-020-0680-2
MSC
 47H10
 47J25
 47J05
Keywords
 Nonlinear equations
 Monotone maps
 Zeros
 Optimization
 Hammerstein equation
 Strong convergence