
Strong convergence of an inertial algorithm for maximal monotone inclusions with applications

Abstract

An inertial iterative algorithm is proposed for approximating a solution of a maximal monotone inclusion in a uniformly convex and uniformly smooth real Banach space. The sequence generated by the algorithm is proved to converge strongly to a solution of the inclusion. Moreover, the theorem proved is applied to approximate a solution of a convex optimization problem and a solution of a Hammerstein equation. Furthermore, numerical experiments are given to compare, in terms of CPU time and number of iterations, the performance of the sequence generated by our algorithm with the performance of the sequences generated by three recent inertial type algorithms for approximating zeros of maximal monotone operators. In addition, the performance of the sequence generated by our algorithm is compared with the performance of a sequence generated by another recent algorithm for approximating a solution of a Hammerstein equation. Finally, a numerical example is given to illustrate the implementability of our algorithm for approximating a solution of a convex optimization problem.

1 Introduction

Let H be a real Hilbert space. A set-valued map \(A:H\rightrightarrows {H}\) is called monotone if for each \(u,v\in H\), \(\eta_{u}\in Au\), \(\gamma_{v}\in Av\), the following inequality holds:

$$\begin{aligned} \langle\eta_{u} -\gamma_{v},u- v \rangle\geq0. \end{aligned}$$
(1.1)

Monotone maps in Hilbert spaces were first introduced by Minty [34] to aid the abstract study of electrical networks and later studied by Browder [6] and his school in the setting of partial differential equations. The map A is called maximal monotone if it is monotone and, in addition, its graph is not included in the graph of any other monotone map. The extension of the monotonicity definition from maps of a Hilbert space into itself

… to operators from a Banach space into its dual has been the starting point for the development of non-linear functional analysis … The monotone mappings appear in a rather wide variety of contexts, since they can be found in many functional equations. Many of them appear also in calculus of variations, as sub-differentials of convex functions (Pascali and Sburian [36], p. 101).

For example, consider the following: Let E be a real Banach space with dual space \(E^{*}\) and let \({f:E\rightarrow\mathbb{R}\cup\{\infty\}}\) be a proper lower semicontinuous (lsc) and convex function. The subdifferential of f, \(\partial f:E\rightrightarrows{E^{*}}\) is defined by

$$ \partial f(u):= \bigl\{ u^{*}\in E^{*}:f(v)-f(u)\geq\bigl\langle v-u,u^{*} \bigr\rangle , \forall v\in E \bigr\} ,\quad u\in E. $$
(1.2)

It is well known that ∂f is a monotone operator and that \(0\in\partial f(u^{*})\) if and only if \(u^{*}\) is a minimizer of f. Setting \(\partial f\equiv A\), it follows that solving the inclusion \(0\in Au\), in this case, is equivalent to solving for a minimizer of f. It is well known that any maximal monotone map \(A:\mathbb{R}\rightrightarrows\mathbb{R}\) is the subdifferential of a proper, convex, and lsc function (see, e.g., Cioranescu [24], Corollary 4.5, p. 170).
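As a simple illustration of definition (1.2) and of this equivalence (our example, on \(E=\mathbb{R}\)): for \(f(u)=|u|\), one computes

$$\partial f(u)= \textstyle\begin{cases} \{1\}, & u>0;\\ [-1,1], & u=0;\\ \{-1\}, & u< 0, \end{cases} $$

so that \(0\in\partial f(0)\), and indeed \(u^{*}=0\) is the minimizer of f.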

In general, a fundamental problem in the study of monotone maps in Banach spaces is the following:

$$\begin{aligned} \mbox{Find } u\in E \mbox{ such that } 0\in Au. \end{aligned}$$
(1.3)

This problem has been investigated in Hilbert spaces by numerous researchers. The proximal point algorithm (PPA) introduced by Martinet [33] and studied extensively by Rockafellar [41] and numerous other authors is concerned with an iterative method for approximating a solution of the inclusion \(0\in Au\), where A is a maximal monotone map. Specifically, given \(x_{n}\in H\), the proximal point algorithm generates the next iterate \(x_{n+1}\) by solving the following equation:

$$\begin{aligned} x_{n+1}= \biggl(I+ \frac{1}{\lambda_{n}}A \biggr)^{-1}x_{n}+e_{n}, \end{aligned}$$
(1.4)

where \(\lambda_{n} > 0\) is a regularizing parameter. Rockafellar [41] proved that if the sequence \(\{\lambda_{n}\}_{n=1}^{\infty}\) is bounded from above, then the resulting sequence \(\{x_{n}\}_{n=1}^{\infty}\) of proximal point iterates converges weakly to a solution of (1.3), when \(E=H\), provided that a solution exists. Several alternatives and modifications of the PPA have been proposed to obtain strong convergence under suitable conditions. For a brief review of these alternatives and modifications in Banach spaces more general than Hilbert spaces, interested readers may see, e.g., [14, 30, 31, 39, 43, 46] and the references therein.
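For orientation, the following Python sketch (ours, not from the papers cited) runs the exact proximal point iteration (1.4) with \(e_{n}=0\) for the illustrative monotone operator \(Au=Mu\), where M is a symmetric positive definite matrix; the resolvent is then a single linear solve.

```python
import numpy as np

# Illustrative monotone operator: A(u) = M u with M symmetric positive definite,
# so the unique zero of A is u = 0.  (Toy choice, not from the paper.)
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
M = B.T @ B + np.eye(5)

def resolvent(x, c):
    """Exact resolvent (I + c A)^{-1} x for A = M; c plays the role of 1/lambda_n in (1.4)."""
    return np.linalg.solve(np.eye(5) + c * M, x)

x = rng.standard_normal(5)   # x_1
for n in range(1, 51):
    x = resolvent(x, 1.0)    # x_{n+1} = (I + A)^{-1} x_n, error term e_n = 0
print(np.linalg.norm(x))     # decays to (essentially) 0, the zero of A
```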

Chidume et al. [21] recently proved the following strong convergence theorem.

Theorem 1.1

(Chidume et al. [21])

Let E be a uniformly convex and uniformly smooth real Banach space and let \(E^{*}\) be its dual. Let \(A:E\rightrightarrows{E^{*}}\) be a maximal monotone and bounded mapping with \(A^{-1}(0)\neq\emptyset\). For arbitrary \(u_{1}\in E\), define a sequence \(\{u_{n}\}\) iteratively by

$$ u_{n+1}=J^{-1} \bigl( Ju_{n}- \lambda_{n}\eta_{n}-\lambda_{n}\theta _{n}(Ju_{n}-Ju_{1}) \bigr),\quad \eta_{n}\in Au_{n}, n\geq1, $$
(1.5)

where \(\{\lambda_{n}\}\) and \(\{\theta_{n}\}\) are sequences in \((0,1)\) satisfying certain conditions and J is the normalized duality map on E. Then, the sequence \(\{u_{n}\}\) converges strongly to a solution of \(0\in Au\).

It is well known that the convergence of iterative algorithms for approximating zeros of monotone maps is generally slow. This is expected since monotone maps are generally not differentiable, so fast converging algorithms such as the Newton–Kantorovich algorithm cannot be used. Consequently, a lot of effort is now being put into iterative algorithms for approximating zeros of maximal monotone maps that improve the speed of convergence of known algorithms. One technique now being studied is the incorporation of an inertial extrapolation term into such algorithms.

In a recent paper, Alvarez [3] studied the asymptotic weak convergence of three inertial implicit iterative methods for solving the inclusion \(0\in Au\), when A is a maximal monotone operator on a real Hilbert space, which generalize the classical PPA. The motivation for the first of these three methods, called the Inertial Proximal Point Algorithm (IPPA), stems from a discretization of the equation for an oscillator with damping and conservative restoring force: \(x{''}(t)+\gamma x{'}(t)+\nabla f(x(t))=0\), where \(\gamma>0\) and \(f:H\to\mathbb{R}\) is differentiable. In the context of optimization problems, this dynamical system, called the Heavy Ball with Friction (HBF), was first considered by Polyak [37]. It is known that the inertial nature of the HBF can be exploited in numerical computations to accelerate the trajectories and speed up convergence (see, e.g., [17, 18]). Concerning asymptotic convergence, Alvarez [2] showed that if f is convex, i.e., if ∇f is monotone, and \({(\nabla f)}^{-1}(0)\neq\emptyset\), then every trajectory of HBF converges weakly to some \(x^{*}\in H\) with \({(\nabla f)}(x^{*})=0\). Considering the implicit discretization of the HBF, the following recursion formula, in terms of resolvents, has been obtained (see, e.g., Alvarez [3], p. 774):

$$ x_{k+1}=J_{\lambda}^{\nabla f} \bigl(x_{k}+\alpha(x_{k}-x_{k-1}) \bigr),\quad k=1,2,\dots, $$
(1.6)

where λ is a regularizing parameter that combines the damping factor γ and the actual step size \(h>0\). Replacing ∇f with a maximal monotone operator A, and considering variable parameters \(\lambda_{k}>0\) and \(\alpha_{k}\in[0,1)\), the discussion above motivated the introduction of the inertial-type iteration:

$$ x_{k+1}=J_{\lambda_{k}}^{A} \bigl(x_{k}+\alpha_{k}(x_{k}-x_{k-1} )\bigr),\quad k=1,2,\dots, \qquad\mbox{(IPPA)} $$
(1.7)

where the extrapolation term \(\alpha_{k}(x_{k}-x_{k-1})\) is intended to speed up convergence. The Inertial Proximal Point Algorithm (IPPA) was first considered in [2] for a nonsmooth conservative operator \(A=\partial f\), the subdifferential of a closed, proper, and convex function \(f:H\to\mathbb{R}\cup\{\infty\}\). Alvarez [2, Theorem 3.1] proved, under suitable conditions, that \(\{x_{k}\}\) converges weakly to a minimizer of f. For the nonconservative case, a partial positive result for cocoercive operators was obtained in [29], where comparisons with first-order-in-time methods are also given through numerical experiments showing improvements in the speed of convergence.

The case of arbitrary maximal monotone operators is treated in [4] under the following conditions:

  1. (i)

    \(\lambda=\inf_{k\ge0}\lambda_{k}>0\),

  2. (ii)

    \(\forall k\in \mathbb{N}\), \(\alpha_{k}\in[0,1)\), \(\alpha:=\sup_{k\ge0}\alpha_{k}<1\),

  3. (iii)

    \(\sum\alpha_{k}\|x_{k}-x_{k-1}\|^{2}<\infty\).

From a different point of view, the following Relaxed Proximal Point Algorithm (RPPA) was proposed in [28] to accelerate the standard PPA:

$$ x_{k+1}= \bigl[(1-\rho_{k})I+ \rho_{k}J_{\lambda_{k}}^{A} \bigr]\bigl(x_{k} \bigr),\qquad\mbox{(RPPA)} $$
(1.8)

where \(\{\rho_{k}\}\subset(0,2)\) is a sequence of relaxation factors assumed to satisfy the following conditions: \(\inf_{k\ge0}\rho_{k}>0\) and \(\sup_{k\ge0}\rho_{k}<2\).

Alvarez [3] recently coupled the IPPA and RPPA, two acceleration strategies, to propose the following iterative method:

$$ x_{k+1}= \bigl[(1-\rho_{k})I+ \rho_{k}J_{\lambda_{k}}^{A} \bigr] \bigl(x_{k}+\alpha_{k}(x_{k}-x_{k-1} ) \bigr).\qquad\mbox{(RIPPA)} $$
(1.9)

He proved weak convergence of the sequence \(\{x_{k}\}\) to some \(x^{*}\in A^{-1}(0)\).
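To make the three updates concrete, here is a minimal Python sketch (ours) of RIPPA (1.9) for the toy monotone operator \(Au=Mu\) on \(\mathbb{R}^{5}\); taking \(\rho_{k}\equiv1\) recovers IPPA (1.7), and taking \(\alpha_{k}\equiv0\) recovers RPPA (1.8).

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
M = B.T @ B + np.eye(5)                       # toy monotone operator A = M(.), zero at 0

def resolvent(x, lam):
    """J_lam^A(x) = (I + lam*A)^{-1} x for A = M."""
    return np.linalg.solve(np.eye(5) + lam * M, x)

x_prev = rng.standard_normal(5)               # x_0
x = rng.standard_normal(5)                    # x_1
for k in range(1, 51):
    lam = 1.0                                 # inf lam_k > 0
    alpha = 1.0 / (k + 1) ** 2                # summable inertial weights
    rho = 1.5                                 # relaxation factor in (0, 2)
    y = x + alpha * (x - x_prev)              # inertial extrapolation
    x_prev, x = x, (1 - rho) * y + rho * resolvent(y, lam)
print(np.linalg.norm(x))                      # RIPPA iterates decay toward 0
```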

We remark that each of the algorithms, IPPA, RPPA, and RIPPA, involves the resolvent operator, \(J_{\lambda}^{A}\).

In this paper, an inertial iterative algorithm is proposed for approximating a solution of a maximal monotone inclusion in a uniformly convex and uniformly smooth real Banach space. The sequence generated by the algorithm is proved to converge strongly to a solution of the inclusion. Moreover, the theorem proved is applied to approximate a solution of a convex optimization problem, and a solution of a Hammerstein equation. Furthermore, numerical experiments are given to compare, in terms of CPU time and number of iterations, the performance of the sequence generated by our algorithm with the sequences generated by IPPA, RPPA, and RIPPA, respectively, for approximating a solution of a maximal monotone inclusion in Hilbert spaces. Finally, numerical examples are given to illustrate the implementability of our algorithm for approximating a solution of a convex optimization problem and for approximating a solution of a Hammerstein equation.

2 Preliminaries

Let E be a real normed space with dual space, \(E^{*}\). A map \(J: E\rightrightarrows{E^{*}}\) defined by

$$J(v):= \bigl\{ v^{*}\in E^{*} : \bigl\langle v,v^{*}\bigr\rangle = \Vert v \Vert \bigl\Vert v^{*} \bigr\Vert , \Vert v \Vert = \bigl\Vert v^{*} \bigr\Vert \bigr\} , $$

is called the normalized duality map.

A normed space E is called uniformly convex, if for all \(\epsilon\in(0,2]\), there exists a \(\delta=\delta(\epsilon)>0\) such that, if \(u,v\in E\), with \(\|u\|\le1\), \(\|v\|\le1\), and \(\|u-v\|\ge\epsilon\), then \(\| \frac{1}{2}(u+v)\|\le1-\delta\).

A normed space E is called strictly convex, if for all \(u,v\in E\), with \(u\ne v\), \(\|u\|=\|v\|=1\), we have that \(\|\lambda u+(1-\lambda)v\|<1\), for all \(\lambda\in(0,1)\).

A normed space E is called uniformly smooth, if given \(\epsilon>0\), there exists a \(\delta=\delta(\epsilon)>0\) such that, for all \(u,v\in E\), with \(\| u\|=1\), \(\|v\|\le\delta\), one has

$$\Vert u+v \Vert + \Vert u-v \Vert < 2+\epsilon \Vert v \Vert . $$

A normed space E is called smooth, if for every \(u \in E\), \(\|u\|= 1\), there exists a unique \(u^{*}\) in \(E^{*}\) such that \(\|u^{*}\|=1\) and \(\langle u,u^{*}\rangle=\|u\|\).

Remark 1

It is well known that if E is a smooth, strictly convex, and reflexive Banach space, then the normalized duality map, J, is single-valued, one-to-one, and onto. Also, if E is uniformly smooth, then J is uniformly continuous on bounded subsets of E. For more properties of the normalized duality map, see, e.g., Alber and Ryazantseva [1], Lindenstrauss and Tzafriri [32], Chidume [9], and Cioranescu [24].

Let E be a real normed space with \(\dim E\ge2\). The modulus of convexity of E is the function \(\delta_{E}:(0,2]\to[0,1]\) defined by

$$\delta_{E}(\epsilon):= \inf \biggl\{ 1- \biggl\| \frac{u+v}{2} \biggr\| : \Vert u \Vert = \Vert v \Vert =1; \epsilon= \Vert u-v \Vert \biggr\} . $$

The following properties of the modulus of convexity will be needed in the sequel (see, e.g., Chidume [9], page 9):

  1. (a)

    \(\frac{\delta_{E}(\epsilon)}{\epsilon}\) is a nondecreasing function on \((0,2]\);

  2. (b)

    \(\delta_{E}:(0,2]\to[0,1]\) is a convex and continuous function;

  3. (c)

    \(\delta_{E}:(0,2]\to[0,1]\) is a strictly increasing function.

Let E be a smooth real normed space and let \(\phi: E\times E\rightarrow\mathbb{R}^{+}\) be a map defined by

$${\phi(u,v)= \Vert u \Vert ^{2}-2\langle u,Jv\rangle+ \Vert v \Vert ^{2}}, \quad\textrm{for all } u,v\in E. $$

This map was introduced by Alber [1] and has since been studied extensively by a host of other authors (see, e.g., [18, 20, 30]). It is obvious from the definition of the map ϕ that, for any \(u,v\in E\), we have:

$$\begin{aligned}& \bigl( \Vert u \Vert - \Vert v \Vert \bigr)^{2}\le \phi(u,v)\le \bigl( \Vert u \Vert + \Vert v \Vert \bigr)^{2}, \end{aligned}$$
(2.1)
$$\begin{aligned}& \phi(v,u) = \phi(u,v) + 2\langle u, Jv\rangle-2\langle v, Ju \rangle. \end{aligned}$$
(2.2)

Define a map \(V:E\times E^{*}\to\mathbb{R}\) by

$$\begin{aligned} V\bigl(u,u^{*}\bigr)= \Vert u \Vert ^{2}-2\bigl\langle u,u^{*}\bigr\rangle + \bigl\Vert u^{*} \bigr\Vert ^{2},\quad \text{for }u\in E, u^{*}\in E^{*}. \end{aligned}$$
(2.3)

Then, it is easy to see that

$$\begin{aligned} V\bigl(u,u^{*}\bigr)=\phi\bigl(u,J^{-1}\bigl(u^{*}\bigr)\bigr), \quad\forall u\in E, u^{*}\in E^{*}. \end{aligned}$$
(2.4)

We shall use the following lemmas in the sequel, where \(\operatorname {Int}(D(A))\) denotes the interior of the domain of A.

Lemma 2.1

(Alber and Ryazantseva [1])

Let E be a reflexive, strictly convex, and smooth Banach space with \(E^{*}\) as its dual. Then,

$$\begin{aligned} V\bigl(u,u^{*}\bigr)+2\bigl\langle J^{-1}u^{*}-u,v^{*} \bigr\rangle \leq V\bigl(u,u^{*}+v^{*}\bigr), \end{aligned}$$
(2.5)

for all \(u\in E\) and \(u^{*},v^{*}\in E^{*}\).

Lemma 2.2

(Pascali and Sburian [36], Lemma 3.6, Chap. III)

Let E be a real normed space and \(A:E\rightrightarrows{E^{*}}\) be a monotone map with \(0\in \operatorname{Int}(D(A))\). Then, A is quasi-bounded, i.e., for any \(M>0\), there exists \(C>0\) such that:

  1. (i)

    \((y,v)\in G(A)\);

  2. (ii)

    \(\langle v,y \rangle\leq M\|y\|\); and

  3. (iii)

    \(\|y\|\leq M\), imply \(\|v\|\leq C\).

Lemma 2.3

(Kamimura and Takahashi [30])

Let E be a uniformly convex and uniformly smooth real Banach space and \(\{x_{n}\}\), \(\{y_{n}\}\) be sequences in E such that either \(\{x_{n}\}\) or \(\{y_{n}\}\) is bounded. If \(\lim_{n\rightarrow\infty}\phi(x_{n},y_{n})=0\), then \(\lim_{n\rightarrow\infty}\|x_{n}-y_{n}\|=0\).

Lemma 2.4

(Alber and Ryazantseva [1], p. 50)

Let E be a reflexive, strictly convex, and smooth Banach space with \(E^{*}\) as its dual. Let \(W:E\times E\rightarrow\mathbb{R}\) be defined by \(W(x,y)=\frac{1}{2}\phi(y,x)\). Then,

$$ W(x,y)-W(z,y)\ge\langle Jx-Jz, z-y\rangle, $$

i.e.,

$$ \phi(y,x) - \phi(y,z)\ge2\langle Jx-Jz, z-y\rangle, $$

and also

$$ W(x,y)\le\langle Jx-Jy, x-y\rangle, $$

for all \(x, y, z\in E\).

Lemma 2.5

(Alber and Ryazantseva [1], p. 45)

Let E be a uniformly convex Banach space. Then, for any \(R>0\) and any \(x, y\in E\) such that \(\|x\|\le R\), \(\|y\|\le R\), the following inequality holds:

$$\langle Jx-Jy,x-y\rangle\ge(2L)^{-1}\delta_{E} \bigl(c_{2}^{-1} \Vert x-y \Vert \bigr), $$

where \(c_{2}=2\max\{1,R\}\), \(1< L<1.7\).

Define

$$ K:=4RL\sup\bigl\{ \Vert Jx-Jy \Vert : \Vert x \Vert \le R, \Vert y \Vert \le R\bigr\} +1. $$
(2.6)

Lemma 2.6

(Alber and Ryazantseva [1], p. 46)

Let E be a uniformly smooth and strictly convex Banach space. Then, for any \(R>0\) and any \(x, y\in E\) such that \(\|x\|\le R\), \(\|y\|\le R\), the following inequality holds:

$$\langle Jx-Jy,x-y\rangle\ge(2L)^{-1}\delta_{E^{*}} \bigl(c_{2}^{-1} \Vert Jx-Jy \Vert \bigr), $$

where \(c_{2}=2\max\{1,R\}\), \(1< L<1.7\), and \(\delta_{E^{*}}\) is the modulus of convexity of \(E^{*}\).

Lemma 2.7

(Reich [38])

Let \(E^{*}\) be a strictly convex dual Banach space with a Fréchet differentiable norm, let \(A:E\rightrightarrows E^{*}\) be a maximal monotone map with a zero, and let \({z\in E^{*}}\). For each \(\lambda>0\), there exists a unique \(x_{\lambda}\in E\) such that \(z\in Jx_{\lambda}+ \lambda Ax_{\lambda}\). Furthermore, \(x_{\lambda}\) converges strongly to a unique zero of A.

Lemma 2.8

From Lemma 2.7, setting \(\lambda_{n} :=\frac{1}{\theta_{n}}\), where \(\theta_{n}\to0\), as \(n \to\infty\), \(\theta_{n}\le\theta_{n-1}\), \(\forall n \ge1\), \(\frac{1}{2} (\frac{\theta_{n-1}-\theta_{n}}{\theta_{n}}K )\le1\), \(z = Jh\), for some \(h \in E\), \(v_{n}\in Ay_{n}\), and \(y_{n}:= (J+\frac{1}{\theta_{n}}A )^{-1}z\), we have:

$$ v_{n}-\theta_{n}(Jh-Jy_{n})=0 \quad\textit{and}\quad y_{n}\to y^{*}\in A^{-1}(0), $$
(2.7)

where \(A : E \rightrightarrows E^{*}\) is maximal monotone.

Lemma 2.9

(Xu [45])

Let \(\{a_{n}\}\) be a sequence of nonnegative real numbers satisfying the following relation:

$$ a_{n+1}\leq(1-\sigma_{n})a_{n}+ \sigma_{n}b_{n}+c_{n},\quad n\geq1, $$

where \(\{\sigma_{n}\}\), \(\{b_{n}\}\), and \(\{c_{n}\}\) satisfy the conditions:

  1. (i)

    \(\{\sigma_{n}\}\subset[0,1]\), \(\sum_{n=1}^{\infty} \sigma _{n}=\infty\);

  2. (ii)

    \(\limsup_{n\to\infty} b_{n} \leq0\);

  3. (iii)

    \(c_{n}\geq0\), \(\sum_{n=1}^{\infty} c_{n}<\infty\).

Then, \(\lim_{n\to\infty} a_{n}=0\).
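As a quick sanity check (ours, with illustrative sequences), the recursion of Lemma 2.9 can be simulated directly in Python; here \(\sigma_{n}=1/n\) (non-summable), \(b_{n}=1/(n+1)\to0\), and \(c_{n}=1/n^{2}\) (summable).

```python
# Simulates a_{n+1} = (1 - sigma_n) a_n + sigma_n b_n + c_n  (Lemma 2.9).
a = 1.0
for n in range(1, 10**6):
    sigma = 1.0 / n        # sigma_n in [0,1], sum sigma_n = infinity
    b = 1.0 / (n + 1)      # limsup b_n <= 0 (indeed b_n -> 0)
    c = 1.0 / n ** 2       # c_n >= 0, sum c_n < infinity
    a = (1 - sigma) * a + sigma * b + c
print(a)                   # small: a_n -> 0, as the lemma asserts
```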

3 Main results

The following conditions are required in the combined proofs of Lemma 3.1 and Theorem 3.2 below, where \(\{\lambda_{n}\}\), \(\{ \beta_{n}\}\), and \(\{\theta_{n}\}\) are sequences in \((0,1)\):

  1. (i)

    \(\sum_{n=1}^{\infty}\lambda_{n}\theta_{n}=\infty\),

  2. (ii)

    \(\delta_{E}^{-1}(\lambda_{n}K)\leq\theta^{2}_{n}\gamma_{0}\),

  3. (iii)

    \(\delta_{E^{*}}^{-1}(\lambda_{n}K)\leq\theta^{2}_{n}\gamma_{0}\),

  4. (iv)

    \(\omega_{J}(\beta_{n}K)\le\lambda_{n}^{4}\theta_{n}\gamma_{0}\),

  5. (v)

    \(\delta_{E}^{-1}(\eta_{n}) \to0\),

  6. (vi)

    \(\delta_{E^{*}}^{-1}(\eta _{n})\to0\),

  7. (vii)

    \(\frac{\delta_{E}^{-1}(\eta_{n})}{\lambda_{n}\theta_{n}}\to0\),

  8. (viii)

    \(\frac{\delta_{E^{*}}^{-1}(\eta_{n})}{\lambda_{n}\theta_{n}}\to0\),

  9. (ix)

    \(\lambda_{n}\le\theta_{n}\gamma_{0}\),

where \(\eta_{n}= (\frac{\theta_{n-1}}{\theta_{n}}-1 )K\), for some constants \(\gamma_{0}>0\), \(K>0\); \(\delta_{E}\) and \(\delta_{E^{*}}\) are the moduli of convexity of E and \(E^{*}\), respectively, and \(\omega_{J}\) is the modulus of continuity of J.

Estimates for the moduli of convexity of \(E=L_{p}\), \(1< p<\infty\).

The following estimates have been obtained for \(\delta_{E}\) in \(L_{p}\) spaces, \(1< p<\infty\),

$$\delta_{E}(\epsilon)\ge \textstyle\begin{cases} \frac{p-1}{8}\epsilon^{2}, & \text{if } 1< p< 2;\\ \frac{1}{p}{(\frac{\epsilon}{2})}^{p}, & \text{if } p\geq2; \end{cases} $$

where \(\epsilon\in(0,2]\) (see e.g., Lindenstrauss and Tzafriri [32], see also, Chidume [9], p. 44).

Also, in \(L_{p}\) spaces, J is Lipschitz if \(2\le p<\infty\), while for \(1< p<2\) it satisfies the Hölder-type inequality

$${ \Vert Jx-Jy \Vert \le H \Vert x-y \Vert ^{p-1}.} $$

Consequently, we have the following estimates:

$$\omega_{J}{(\epsilon)}\le \textstyle\begin{cases} H\epsilon^{(p-1)}, & \text{if } 1< p< 2;\\ M\epsilon, & \text{if } p\geq2; \end{cases} $$

where \(\epsilon>0\), H and M are positive constants, and J is the normalized duality map (see, e.g., Lindenstrauss and Tzafriri [32], see also Chidume [9]).

Prototypes of the parameters for Lemma 3.1 and Theorem 3.2 below in the case that \(E=L_{p}\), \(1< p<\infty\) are:

For \(L_{p}\) spaces, \(2\le p<\infty\):

$$\lambda_{n}=(n+1)^{-\frac{1}{2}},\qquad\theta_{n}=(n+1)^{-\frac {1}{4p}} \quad\text{and} \quad\beta_{n}=(n+1)^{-(2+\frac{1}{4p})}, \quad n \ge1. $$

For \(L_{p}\) spaces, \(1< p< 2\):

$$\lambda_{n}=(n+1)^{-\frac{1}{4}},\qquad \theta_{n}=(n+1)^{-\frac{1}{16}} \quad \text{and}\quad \beta_{n}=(n+1)^{-\frac{17}{16(p-1)}},\quad n \geq1. $$

With these choices, conditions (i)–(ix) given in Lemma 3.1 and Theorem 3.2 are easily satisfied.
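As a sample check (our computation, for the case \(2\le p<\infty\)): the estimate \(\delta_{E}(\epsilon)\ge\frac{1}{p}(\frac{\epsilon}{2})^{p}\) gives \(\delta_{E}^{-1}(t)\le2(pt)^{\frac{1}{p}}\), so that

$$\delta_{E}^{-1}(\lambda_{n}K)\le 2(pK)^{\frac{1}{p}}(n+1)^{-\frac{1}{2p}} \quad\text{and}\quad \theta_{n}^{2}\gamma_{0}=\gamma_{0}(n+1)^{-\frac{1}{2p}}, $$

so both sides of condition (ii) have the same order \((n+1)^{-\frac{1}{2p}}\) in n; the other conditions are checked in the same way.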

Furthermore, we have the following formulae, for J and \(J^{-1}\) in \(L_{p}\) and \(l_{p} \), \({1< p<\infty}\), \(p^{-1} + q^{-1} =1 \) (see, e.g., Alber and Ryazantseva [1], p. 36):

$$\begin{aligned}& Ju= \Vert u \Vert ^{2-p}_{l_{p}}v\in l_{q},\quad v=\bigl\{ \vert u_{1} \vert ^{p-2}u_{1}, \vert u_{2} \vert ^{p-2}u_{2},\dots \bigr\} , u=\{u_{1},u_{2}, \dots\}, \\& J^{-1}u= \Vert u \Vert ^{2-q}_{l_{q}}v\in l_{p},\quad v=\bigl\{ \vert u_{1} \vert ^{q-2}u_{1}, \vert u_{2} \vert ^{q-2}u_{2},\dots \bigr\} , u=\{u_{1},u_{2}, \dots\}, \\& Ju= \Vert u \Vert ^{2-p}_{L_{p}} \bigl\vert u(s) \bigr\vert ^{p-2}u(s)\in L_{q}(G),\quad s\in G, \\& J^{-1} u= \Vert u \Vert ^{2-q}_{L_{q}} \bigl\vert u(s) \bigr\vert ^{q-2} u(s)\in L_{p}(G),\quad s \in G. \end{aligned}$$
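These formulas are straightforward to implement. The following Python sketch (ours) realizes J and \(J^{-1}\) on \(\mathbb{R}^{m}\) regarded as a finite-dimensional \(l_{p}\), and checks numerically that \(J^{-1}\circ J\) is the identity and that \(\langle u,Ju\rangle=\|u\|^{2}\).

```python
import numpy as np

def duality_map(u, p):
    """Normalized duality map on l_p: Ju = ||u||_p^(2-p) * sign(u) * |u|^(p-1)."""
    norm = np.linalg.norm(u, ord=p)
    if norm == 0.0:
        return np.zeros_like(u)
    return norm ** (2 - p) * np.sign(u) * np.abs(u) ** (p - 1)

p = 5.0
q = p / (p - 1)                      # conjugate exponent: 1/p + 1/q = 1
u = np.array([1.0, -2.0, 0.5, 0.0])

Ju = duality_map(u, p)               # J : l_p -> l_q
back = duality_map(Ju, q)            # J^{-1} is the duality map of l_q
print(np.allclose(back, u))                                    # True
print(np.isclose(u @ Ju, np.linalg.norm(u, ord=p) ** 2))       # <u, Ju> = ||u||^2
```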

We now prove the following lemma.

Lemma 3.1

Let E be a uniformly smooth and uniformly convex real Banach space and \(A:E\rightrightarrows E^{*}\) be a maximal monotone operator with \(D(A)=E\) such that the inclusion \(0\in Az\) has a solution. For arbitrary \(z_{0}, z_{1}\in E\), define a sequence \(\{z_{n}\}\) by

$$ \textstyle\begin{cases} w_{n}=z_{n}+\beta_{n}(z_{n}-z_{n-1}),\\ z_{n+1}=J^{-1} (Jw_{n}-\lambda_{n}\mu_{n}-\lambda_{n}\theta _{n}Jw_{n} ),\quad\mu_{n}\in Aw_{n}, n\geq1. \end{cases} $$
(3.1)

Then, the sequence \(\{z_{n}\}\) is bounded.

Proof

We show that the sequence \(\{z_{n}\}\) is bounded.

Let \(z^{*}\) be a solution of \(0\in Az\), i.e., \(0\in Az^{*}\). Then, there exists \(r>0\) (sufficiently large) such that

$$ r>\max\bigl\{ 4 \bigl\Vert v^{*} \bigr\Vert ^{2}, \phi\bigl(z^{*},z_{1}\bigr)\bigr\} . $$
(3.2)

Define \(B:=\{z\in E:\phi(z^{*},z)< r\}\), with \(0\in B\). Clearly, \(B\subset\operatorname{Int}(D(A))\). It suffices to show that \(\{\phi(z^{*}, z_{n})\}\) is bounded. We proceed by induction. For \(n=1\), by construction, we have that \({\phi(z^{*}, z_{1})< r.}\) Assume that \(\phi(z^{*}, z_{n})< r\), for some \(n\ge1\). Using inequality (2.1), we have that \({\|z_{n}\|< \| z^{*}\|+\sqrt{r}}\). Now, we show that \(\phi(z^{*}, z_{n+1})< r\). Suppose for contradiction that \(\phi(z^{*}, z_{n+1})< r\) does not hold. Then, \(\phi(z^{*}, z_{n+1})\ge r\).

Let \(y\in B\) be arbitrary and \((y,v)\in G(A)\), \(u\in Ax\). Since A is locally bounded at 0, there exist \(h_{0}>0\), \(m_{0}>0\) such that

$${ \Vert u \Vert \le m_{0}, \quad\forall x\in B_{h_{0}}(0) \subset B}. $$

By the monotonicity of A, we have that:

$$\begin{aligned}& \langle v,y\rangle\ge \langle u,y-x\rangle+\langle v,x\rangle,\quad \forall x\in B_{h_{0}}(0), v\in Ay, \\& \langle v,-y\rangle\le \langle u,x-y\rangle+\langle v,-x\rangle. \end{aligned}$$

Setting \(s=-y\), we have that:

$$\begin{aligned}& \begin{aligned}\langle v,s\rangle&\le \langle u,x+s\rangle+\langle v,-x\rangle \\ &\le \Vert u \Vert \bigl( \Vert x \Vert + \Vert s \Vert \bigr) + \Vert v \Vert \Vert x \Vert ,\end{aligned} \\& \sup_{ \Vert s \Vert \le( \Vert z^{*} \Vert +\sqrt{r})} \bigl\vert \langle v,s\rangle \bigr\vert \le m_{0} \bigl(h_{0}+ \bigl\Vert z^{*} \bigr\Vert + \sqrt{r} \bigr) + \Vert v \Vert h_{0}, \end{aligned}$$

so that

$$\begin{aligned}& \Vert v \Vert \le \frac{m_{0} (h_{0}+ \Vert z^{*} \Vert +\sqrt{r} )}{ \Vert z^{*} \Vert +\sqrt{r}-h_{0}}:=M_{0},\quad \forall y\in B. \end{aligned}$$

Define \(M:=\max\{M_{0}, \|z^{*}\|+\sqrt{r} \}\). Then, \(\langle v,y\rangle \le M\|y\|\) and \(\|y\|\le M\). By Lemma 2.2, there exists \(C>0\) such that \(\|v\|\le C\), \(\forall y\in B\). Define

$$\begin{aligned}& M_{1}=\sup \bigl\{ \Vert \mu+ \theta Jw \Vert , w\in B, \mu\in Aw, \theta\in (0,1) \bigr\} +1; \end{aligned}$$
(3.3)
$$\begin{aligned}& M_{2}=\sup \bigl\{ \bigl\Vert J^{-1} (Jw-\lambda\mu-\lambda\theta Jw ) \bigr\Vert , w\in B, \mu \in Aw, \theta\in(0,1) \bigr\} +1. \end{aligned}$$
(3.4)

From the recursion formula, Lemma 2.5, and the fact that J and \(J^{-1}\) are uniformly continuous on bounded sets, we have that

$$ \Vert Jz_{n+1}-Jw_{n} \Vert \le\lambda_{n}M_{1} \quad\text{and}\quad \Vert z_{n+1}-w_{n} \Vert \le c_{2}\delta _{E}^{-1}\bigl(\lambda_{n} M^{*}\bigr),\quad \text{for some } M^{*}>0. $$
(3.5)

Define

$$ \gamma_{0}:=\min \biggl\{ 1,\frac {r}{32K^{*}} \biggr\} , $$
(3.6)

where \(K^{*}=\max\{M^{*},M_{1},M_{2},M_{1}M_{2},M,c_{2}M_{1}\}\). Using Lemma 2.1 and writing \(0^{*}\) for the zero \(0\in Az^{*}\), we compute:

$$ \begin{aligned}[b] \phi\bigl(z^{*}, z_{n+1}\bigr)&=V \bigl(z^{*},Jw_{n} -\lambda_{n} \mu_{n}- \lambda_{n}\theta_{n}Jw_{n}\bigr) \\ &\le V\bigl(z^{*}, Jw_{n}\bigr)-2 \lambda_{n}\bigl\langle z_{n+1}-z^{*}, \mu_{n}+ \theta_{n}Jw_{n} \bigr\rangle \\ &=\phi\bigl(z^{*}, w_{n}\bigr)-2 \lambda_{n}\langle z_{n+1}-w_{n}, \mu_{n}+ \theta_{n}Jw_{n}\rangle-2\lambda_{n}\bigl\langle w_{n}-z^{*}, \mu_{n}+ \theta_{n}Jw_{n} \bigr\rangle \\ &\le\phi\bigl(z^{*}, w_{n}\bigr)+2 c_{2} \lambda_{n}\delta_{E}^{-1}\bigl( \lambda_{n} M^{*}\bigr)M_{1} - 2\lambda_{n} \bigl\langle w_{n}-z^{*}, \mu_{n}-0^{*}\bigr\rangle \\ &\quad - 2\theta_{n}\lambda_{n}\bigl\langle w_{n}-z^{*}, Jw_{n}\bigr\rangle \\ &\le\phi\bigl(z^{*}, w_{n}\bigr)+2 c_{2} \lambda_{n}\delta_{E}^{-1}\bigl( \lambda_{n} M^{*}\bigr)M_{1} - 2\theta_{n} \lambda_{n}\bigl\langle w_{n}-z^{*}, Jw_{n}-Jz_{n+1} \bigr\rangle \\ &\quad - 2\theta_{n}\lambda_{n}\bigl\langle w_{n}-z^{*}, Jz_{n+1}\bigr\rangle .\end{aligned} $$
(3.7)

By Lemma 2.4, we have that

$$\begin{aligned}& -2\lambda_{n}\theta_{n}\bigl\langle w_{n}-z^{*},Jz_{n+1}\bigr\rangle \leq \lambda _{n}\theta_{n} \bigl\Vert z^{*} \bigr\Vert ^{2}+2M\lambda_{n}\theta_{n} \Vert w_{n}-z_{n+1} \Vert -\lambda_{n} \theta_{n} \phi\bigl(z^{*},z_{n+1}\bigr), \\& \phi\bigl(z^{*},w_{n}\bigr) \leq \phi\bigl(z^{*},z_{n} \bigr)+2M_{2} \omega_{J}(\beta_{n}M). \end{aligned}$$
(3.8)

It follows from inequality (3.7) that

$$\begin{aligned} r \le&\phi\bigl(z^{*}, z_{n+1}\bigr) \\ \le&\phi\bigl(z^{*}, z_{n}\bigr)+2 M_{2}\omega_{J}( \beta_{n}M) + 2c_{2}\lambda_{n}\delta _{E}^{-1}\bigl(\lambda_{n} M^{*} \bigr)M_{1} + \lambda_{n}\theta_{n} \bigl\Vert z^{*} \bigr\Vert ^{2} \\ &{} + 2\lambda_{n}\theta_{n}M \Vert Jw_{n}-Jz_{n+1} \Vert + 2\lambda_{n}\theta_{n}M \Vert w_{n}-z_{n+1} \Vert - \lambda_{n} \theta_{n}\phi\bigl(z^{*},z_{n+1}\bigr) \\ \le&\phi\bigl(z^{*}, z_{n}\bigr)+2 M_{2} \omega_{J}(\beta_{n}M) + 2c_{2} \lambda_{n}\delta _{E}^{-1}\bigl( \lambda_{n} M^{*}\bigr)M_{1} + \lambda_{n} \theta_{n} \bigl\Vert z^{*} \bigr\Vert ^{2} \\ &{} + 2\lambda^{4}_{n}\theta _{n}MM_{1} + 2c_{2}\lambda_{n}\theta_{n} \delta_{E}^{-1}\bigl(\lambda_{n}M^{*}\bigr) M - \lambda_{n}\theta_{n}\phi\bigl(z^{*},z_{n+1} \bigr) \\ < &r+2 M_{2}\lambda_{n}\theta_{n} \gamma_{0}+2c_{2}\lambda_{n} \theta_{n} M_{1}\gamma_{0} + \lambda_{n}\theta_{n}\frac{r}{4}+2 \lambda_{n}\theta_{n}MM_{1}\gamma _{0} \\ &{} + 2\lambda_{n}\theta_{n}\gamma_{0} M- \lambda_{n}\theta_{n}r \\ < &r + \lambda_{n}\theta_{n}\frac{r}{2} - \lambda_{n}\theta _{n}r< r . \end{aligned}$$

This is a contradiction. Hence, \(\phi(z^{*},z_{n+1})< r\). Therefore, \(\phi(z^{*},z_{n})< r\), for all \(n\geq1\). □

Theorem 3.2

Let E be a uniformly smooth and uniformly convex real Banach space. Let \(A:E\rightrightarrows E^{*}\) be a maximal monotone operator with \(D(A)=E\) such that the inclusion \(0\in Az\) has a solution. For arbitrary \(z_{0}, z_{1}\in E\), define a sequence \(\{z_{n}\}\) by algorithm (3.1). Then, the sequence \(\{z_{n}\}\) converges strongly to a zero of A (see Remark 2 below).

Proof

Using Lemma 2.1 and equation (2.2), we have

$$\begin{aligned} \phi(y_{n},z_{n+1}) =&V(y_{n}, Jw_{n}-\lambda_{n}\mu_{n}- \lambda_{n}\theta _{n}Jw_{n}) \\ \leq& V(y_{n},Jw_{n})-2\lambda_{n}\langle z_{n+1}-y_{n}, \mu_{n}+\theta _{n}Jw_{n} \rangle \\ =&\phi(w_{n},y_{n})+2\langle w_{n},Jy_{n} \rangle-2\langle y_{n},Jw_{n}\rangle -2 \lambda_{n}\langle z_{n+1}-y_{n}, \mu_{n}+\theta_{n}Jw_{n} \rangle. \end{aligned}$$
(3.9)

Observe that

$$\begin{aligned} \phi(w_{n},y_{n}) =&V(w_{n},Jy_{n})=V(w_{n},Jy_{n-1}+Jy_{n}-Jy_{n-1}) \\ \leq& V(w_{n},Jy_{n-1})-2\langle y_{n}-w_{n},Jy_{n-1}-Jy_{n} \rangle. \end{aligned}$$
(3.10)

Thus, from inequalities (3.9), (3.10), and the fact that \(v_{n}\in Ay_{n}\), we obtain

$$\begin{aligned} \phi(y_{n},z_{n+1}) \le&V(w_{n},Jy_{n-1})-2 \langle y_{n}-w_{n},Jy_{n-1}-Jy_{n} \rangle+2\langle w_{n},Jy_{n}\rangle-2\langle y_{n},Jw_{n}\rangle \\ &{} - 2\lambda _{n}\langle z_{n+1}-y_{n}, \mu_{n}+\theta_{n}Jw_{n} \rangle \\ =&\phi(y_{n-1},w_{n})+2\langle y_{n-1},Jw_{n} \rangle-2\langle w_{n},Jy_{n-1}\rangle-2\langle y_{n}-w_{n},Jy_{n-1}-Jy_{n} \rangle \\ &{} + 2\langle w_{n},Jy_{n}\rangle-2\langle y_{n},Jw_{n}\rangle- 2\lambda _{n}\langle z_{n+1}-y_{n}, \mu_{n}+\theta_{n}Jw_{n} \rangle \\ =&\phi(y_{n-1},w_{n})+2\langle y_{n-1}-y_{n},Jw_{n} \rangle+2\langle w_{n},Jy_{n}-Jy_{n-1}\rangle \\ & {}- 2\langle y_{n}-w_{n},Jy_{n-1}-Jy_{n} \rangle- 2\lambda_{n}\langle z_{n+1}-y_{n}, \mu _{n}+\theta_{n}Jw_{n} \rangle \\ \le&\phi(y_{n-1},w_{n})+2 \Vert y_{n-1}-y_{n} \Vert \Vert w_{n} \Vert +2 \Vert w_{n} \Vert \Vert Jy_{n}-Jy_{n-1} \Vert \\ &{} + 2 \Vert y_{n}-w_{n} \Vert \Vert Jy_{n-1}-Jy_{n} \Vert + 2\lambda_{n} \Vert z_{n+1}-w_{n} \Vert M_{1} \\ &{}- 2\lambda_{n}\langle w_{n}-y_{n}, \mu_{n}-v_{n}\rangle- \underline {2\lambda_{n} \langle w_{n}-y_{n},v_{n} \rangle}- \underline{2\lambda _{n}\theta_{n}\langle w_{n}-y_{n},Jw_{n} \rangle}. \end{aligned}$$
(3.11)

Observe that

$$\begin{aligned} - \underline{2\lambda_{n}\theta_{n} \langle w_{n}-y_{n},Jw_{n} \rangle } =&2 \lambda_{n}\theta_{n}\langle w_{n}-y_{n-1},Jy_{n-1}-Jw_{n} \rangle -2\lambda_{n}\theta_{n}\langle y_{n-1}-y_{n},Jw_{n}-Jy_{n-1} \rangle \\ &{} - 2\lambda_{n}\theta_{n}\langle w_{n}-y_{n},Jy_{n} \rangle- 2\lambda _{n}\theta_{n}\langle w_{n}-y_{n},Jy_{n-1}-Jy_{n} \rangle \\ \le&-\lambda_{n}\theta_{n}\phi(y_{n-1},w_{n})+2 \lambda_{n}\theta_{n} \Vert y_{n-1}-y_{n} \Vert M \\ & {}- \underline{2\lambda_{n}\theta_{n}\langle w_{n}-y_{n},Jy_{n} \rangle }+ 2 \lambda_{n}\theta_{n} \Vert Jy_{n-1}-Jy_{n} \Vert M. \end{aligned}$$
(3.12)

Also, from Lemma 2.8, we obtain that

$$ - \underline{2\lambda_{n}\langle w_{n}-y_{n},v_{n} \rangle}- \underline {2 \lambda_{n}\theta_{n}\langle w_{n}-y_{n},Jy_{n} \rangle}=- 2\lambda_{n}\langle w_{n}-y_{n},v_{n}+ \theta_{n}Jy_{n} \rangle=0. $$
(3.13)

Hence, substituting inequality (3.12) and equation (3.13) into inequality (3.11), we have that

$$ \begin{aligned}[b] \phi(y_{n},z_{n+1})& \leq(1-\lambda_{n}\theta_{n})\phi(y_{n-1},w_{n})+2 \Vert y_{n-1}-y_{n} \Vert M +4 \Vert Jy_{n-1}-Jy_{n} \Vert M \\ &\quad+2\lambda_{n} \Vert z_{n+1}-w_{n} \Vert M+2\lambda_{n}\theta_{n} \Vert y_{n-1}-y_{n} \Vert M +2\lambda_{n} \theta_{n} \Vert Jy_{n-1}-Jy_{n} \Vert M \\ &\leq (1-\lambda_{n}\theta_{n})\phi(y_{n-1},w_{n}) +2M \bigl(\delta_{E}^{-1}(\eta _{n}) + \delta_{E^{*}}^{-1}(\eta_{n}) \bigr) +2 \lambda_{n} \delta_{E}^{-1}(\lambda _{n}M)M \\ &\quad+2\lambda_{n}\theta_{n}M \bigl( \delta_{E}^{-1}(\eta_{n}) +\delta _{E^{*}}^{-1}(\eta_{n}) \bigr) \\ &\leq(1-\lambda_{n}\theta_{n})\phi(y_{n-1},z_{n})+2M_{2} \omega_{J}(\beta _{n}M) +2 M\lambda_{n} \theta_{n}^{2}\gamma_{0} \\ & \quad+2\lambda_{n}\theta_{n} \biggl( \frac{\delta_{E}^{-1}(\eta _{n})}{\lambda_{n}\theta_{n}} +\frac{\delta_{E^{*}}^{-1}(\eta_{n})}{\lambda_{n}\theta_{n}}+\delta _{E}^{-1}( \eta_{n}) +\delta_{E^{*}}^{-1}( \eta_{n}) \biggr)M. \end{aligned} $$
(3.14)

Applying inequalities (3.8) and (3.14), we obtain

$$\begin{aligned} \phi(y_{n},z_{n+1}) \leq& (1- \lambda_{n}\theta_{n})\phi(y_{n-1},z_{n})+ 2M\lambda_{n}^{4}\theta_{n} \gamma_{0} \\ & {}+2\lambda_{n}\theta_{n} \biggl(\frac{\delta_{E}^{-1}(\eta_{n})}{\lambda _{n}\theta_{n}} +\frac{\delta_{E^{*}}^{-1}(\eta_{n})}{\lambda_{n}\theta_{n}}+{\theta _{n}\gamma_{0}} + \delta_{E}^{-1}(\eta_{n}) + \delta_{E^{*}}^{-1}(\eta_{n}) \biggr)M. \end{aligned}$$
(3.15)

Set \(a_{n}:=\phi(y_{n-1},z_{n})\), \(\sigma_{n}:=\lambda_{n}\theta_{n}\), \(c_{n}:=\lambda_{n}^{4}\theta_{n} \), and

$$\begin{aligned} b_{n}:&=M \biggl(\frac{\delta_{E}^{-1}(\eta_{n})}{\lambda_{n}\theta _{n}}+\frac{\delta_{E^{*}}^{-1}(\eta_{n})}{\lambda_{n}\theta_{n}} + \delta_{E}^{-1}(\eta_{n}) + \delta_{E^{*}}^{-1}(\eta_{n}) + { \theta_{n}\gamma_{0}} \biggr) . \end{aligned}$$

Hence, inequality (3.15) becomes \(a_{n+1}\leq(1-\sigma_{n})a_{n}+\sigma_{n}b_{n}+c_{n}\), \(n\geq1\). It follows from Lemma 2.9 that \(\lim_{n\to\infty}\phi (y_{n-1},z_{n})=0\). By Lemma 2.3, we have \(\lim_{n\to\infty}\| z_{n}-y_{n-1}\| = 0\). Since \(\lim_{n\to\infty}y_{n}=y^{*}\in A^{-1}0\), we have that \(\{z_{n}\}\) converges to \(y^{*}\in A^{-1}0\). This completes the proof. □

Remark 2

This zero of A may be a minimum norm zero of A, for example, if A is the subdifferential, ∂f, of a proper lower semicontinuous and convex function f.

4 Applications

4.1 Application to a convex optimization problem

The following lemma will be crucial in what follows.

Lemma 4.1

(Rockafellar [40])

Let E be a Banach space and let \(f:E\rightarrow\mathbb{R}\cup\{\infty\}\) be a proper, convex, and lower semicontinuous function. Then, the subdifferential of f, ∂f, is maximal monotone. Furthermore, \(0\in\partial f(u^{*})\) if and only if \(u^{*}\) is a minimizer of f.

We now have the following theorem.

Theorem 4.2

Let E be a uniformly convex and uniformly smooth real Banach space with dual \(E^{*}\). Let \(f:E\rightarrow\mathbb{R}\cup\{\infty\}\) be a proper, lower semicontinuous, and convex function such that \({(\partial f)}^{-1}0\ne\emptyset\). For given \(z_{0},z_{1}\in E\), let \(\{z_{n}\}\) be generated by the algorithm

$$ \textstyle\begin{cases} w_{n}=z_{n}+\beta_{n}(z_{n}-z_{n-1}),\\ z_{n+1}={J^{-1}} ( Jw_{n}-\lambda_{n}\partial f( w_{n})-\lambda _{n}\theta_{n}Jw_{n} ),\quad n\geq1. \end{cases} $$
(4.1)

Then, the sequence \(\{z_{n}\}\) converges strongly to a minimizer of f.

Proof

By Lemma 4.1, ∂f is maximal monotone. The conclusion follows from Theorem 3.2. □

4.2 Applications to Hammerstein integral equations

Definition 4.3

Let \(\varOmega\subset{\mathbb{R}}^{n}\) be bounded. Let \(k:\varOmega \times\varOmega\to \mathbb{R}\) and \(f:\varOmega\times\mathbb{R} \to \mathbb{R}\) be measurable real-valued functions. An integral equation (generally nonlinear) of Hammerstein-type has the form

$$ u(x)+ \int_{\varOmega}k(x,y)f\bigl(y,u(y)\bigr)\,dy=w(x), $$
(4.2)

where the unknown function u and inhomogeneous function w lie in a Banach space E of measurable real-valued functions.

If we define an operator K by \(K(v):= \int_{\varOmega}k(x,y)v(y)\,dy\), \(x\in\varOmega\), and the so-called superposition or Nemytskii operator by \(Fu(y):=f(y,u(y))\), then equation (4.2) can be put in the form

$$ u+KFu=0. $$
(4.3)

Without loss of generality, we have taken \(w\equiv0\).

Interest in Hammerstein integral equations stems mainly from the fact that several problems that arise in differential equations, for instance, elliptic boundary value problems whose linear parts possess a Green's function, can, as a rule, be transformed into the form of equation (4.2) (see, e.g., Pascali and Sburian [36], Chap. IV). Consider, for example, the following pendulum problem:

$$ \textstyle\begin{cases} \frac{d^{2} v(t)}{dt^{2}} - a^{2}\sin v(t) =z(t),\quad t\in[0,1],\\ v(0) = v(1)=0, \end{cases} $$
(4.4)

where the driving force z is periodic and odd. The constant \(a\neq0\) depends on the length of the pendulum and on gravity. Since the Green's function of the problem

$$v^{\prime\prime}(t)=0,\qquad v(0)=v(1)=0 $$

is the function defined by

$$ k(t,x)= \textstyle\begin{cases} t(1-x),&0\le t\le x,\\ x(1-t),&x\le t\le1, \end{cases} $$

problem (4.4) is equivalent to the nonlinear integral equation

$$ v(t)=- \int_{0}^{1}k(t,x)\bigl[z(x)-a^{2} \sin v(x)\bigr]\,dx. $$
(4.5)

If \(\int_{0}^{1}k(t,x)z(x)\,dx=g(t)\) and \(v(t)+g(t)=u(t)\), then (4.5) can be written as the Hammerstein integral equation

$$u(t)+ \int_{0}^{1}k(t,x)f\bigl(x,u(x)\bigr)\,dx=0, $$

where \(f(x,u(x))=a^{2}\sin[u(x)-g(x)]\).

Equations of Hammerstein-type also play a special role in the theory of optimal control systems and in automation and network theory (see, e.g., Dolezale [27]).

In the case when K and F are maximal monotone, several existence and uniqueness theorems have been proved for equations of Hammerstein type (see, e.g., [5, 7, 8, 25]).

Iterative methods for approximating solutions of problem (4.3) have been studied (see e.g., [10, 12, 13, 15, 19, 22, 23, 26, 35, 42] and the references therein).

In this section, we shall apply Theorem 3.2, for the case where the map A is single-valued, to approximate a solution of equation (4.3). First, we state the following important lemmas.

Lemma 4.4

(Chidume and Idu [16])

Let X be a uniformly convex and uniformly smooth real Banach space with dual space \(X^{*}\) and \(E=X\times X^{*}\). Let \(F:X\to X^{*}\) and \(K:X^{*}\to X\) be monotone maps with \(R(F)=D(K)\), where \(R(F)\) is the range of F and \(D(K)\) is the domain of K. Let \(A:E\to E^{*}\) be defined by \(A[u,v]=[Fu-v,Kv+u]\). Then, A is maximal monotone.
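The point of this construction is that zeros of A correspond exactly to solutions of equation (4.3): indeed,

$$A[u,v]=0 \quad\Longleftrightarrow\quad v=Fu \ \text{and}\ Kv+u=0 \quad\Longleftrightarrow\quad u+KFu=0, \ v=Fu. $$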

Let \(\{\lambda_{n}\}\), \(\{\beta_{n}\}\), and \(\{\theta_{n}\}\) be sequences in \((0,1)\) satisfying the conditions given in Theorem 3.2.

Theorem 4.5

Let E be a uniformly convex and uniformly smooth real Banach space with dual space \(E^{*}\). Let \(F: E \to E^{*}\), \(K:E^{*}\to E\) be maximal monotone maps. Let \(X:=E\times E^{*}\) and \(A:X\to X^{*}\) be defined by \(A[u,v]:=[Fu-v,Kv+u]\). For arbitrary \(z_{0}, z_{1}\in X\), define the sequence \(\{z_{n}\}\) in X by

$$ \textstyle\begin{cases} w_{n}=z_{n}+\beta_{n}(z_{n}-z_{n-1}),\\ z_{n+1}=J^{-1} ( Jw_{n}-\lambda_{n}Aw_{n}-\lambda_{n}\theta_{n}Jw_{n} ),\quad n\geq1. \end{cases} $$
(4.6)

Assume that the equation \(u+KFu=0\) has a solution. Then, the sequence \(\{z_{n}\}_{n=1}^{\infty}\) converges strongly to a solution of \(u+KFu=0\).

Proof

By a result of Chidume [9], \(X=E\times E^{*}\) is uniformly smooth and uniformly convex; also, by Lemma 4.4, A is maximal monotone. Therefore, the conclusion follows from Theorem 3.2. □

Theorem 4.5 can also be stated as follows.

Theorem 4.6

Let E be a uniformly smooth and uniformly convex real Banach space with dual space \(E^{*}\). Let \(F:E\to E^{*}\), \(K:E^{*}\to E\) be maximal monotone maps with \({R(F)=D(K)}\), where \(R(F)\) is the range of F and \(D(K)\) is the domain of K.

For arbitrary \({(u_{0},v_{0}), (u_{1},v_{1})\in E\times E^{*}}\), define the sequences \(\{u_{n}\}\subset E\) and \(\{v_{n}\}\subset E^{*}\) by

$$ \textstyle\begin{cases} c_{n}=u_{n}+\beta_{n}(u_{n}-u_{n-1}),\qquad d_{n}=v_{n}+\beta_{n}(v_{n}-v_{n-1}),\\ u_{n+1}=J^{-1} ( Jc_{n}-\lambda_{n}(Fc_{n}-d_{n})-\lambda_{n}\theta _{n}Jc_{n} ),\quad n\geq1,\\ v_{n+1}=J (J^{-1}d_{n}-\lambda_{n}(Kd_{n}+c_{n})-\lambda_{n}\theta _{n}J^{-1}d_{n} ),\quad n\geq1. \end{cases} $$
(4.7)

Assume that the equation \(u+KFu=0\) has a solution. Then, the sequences \(\{u_{n}\}_{n=1}^{\infty}\) and \(\{v_{n}\}_{n=1}^{\infty}\) converge strongly to \(u^{*}\) and \(v^{*}\), respectively, where \(u^{*}\) is a solution of \(u+KFu=0\), with \(v^{*}=Fu^{*}\).
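For orientation, here is a minimal Python sketch (ours) of the coupled iteration (4.7) in the Hilbert-space case \(E=L_{2}([0,1])\), discretized on a uniform grid, where \(J=J^{-1}=I\); the monotone pair \(Fu=u\) and \((Kv)(t)=tv(t)\) is an illustrative choice for which \(u^{*}=0\), \(v^{*}=0\) solve \(u+KFu=0\), \(v=Fu\).

```python
import numpy as np

m = 200
t = np.linspace(0.0, 1.0, m)                   # uniform grid on [0,1]
F = lambda u: u                                # Fu = u (monotone), illustrative
K = lambda v: t * v                            # (Kv)(t) = t v(t) (monotone), illustrative
l2 = lambda u: np.linalg.norm(u) / np.sqrt(m)  # discrete L_2([0,1]) norm

u_prev, u = t, np.sin(t)                       # u_0, u_1
v_prev, v = t + 1.0, np.cos(t)                 # v_0, v_1
for n in range(1, 501):
    lam, theta, beta = (n + 1) ** -0.5, (n + 1) ** -0.25, (n + 1) ** -2.0
    c = u + beta * (u - u_prev)                # inertial extrapolations
    d = v + beta * (v - v_prev)
    u_prev, v_prev = u, v
    u = c - lam * (F(c) - d) - lam * theta * c  # J = I on L_2
    v = d - lam * (K(d) + c) - lam * theta * d
print(l2(u), l2(v))                            # both norms decay toward 0
```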

Remark 3

Algorithm (4.7) (Inertial Algorithm 2) will be compared with Algorithm (4.8) of Uba et al. [44] and Algorithm (4.9) of Chidume et al. [11] below. We state the theorems for completeness.

Theorem 4.7

(Uba et al. [44])

Let E be a uniformly convex and uniformly smooth real Banach space and \(F: E \to E^{*}\), \(K:E^{*}\to E\) be maximal monotone and bounded maps. For \(u_{1}\in E\) and \(v_{1}\in E^{*}\), define the sequences \(\{u_{n}\}\) and \(\{v_{n}\}\) in E and \(E^{*}\), respectively, by

$$ \textstyle\begin{cases} u_{n+1}=J^{-1} ( Ju_{n}-\lambda_{n}(Fu_{n}-v_{n})-\lambda_{n}\theta _{n}(Ju_{n}-Ju_{1}) ),\quad n\geq1,\\ v_{n+1}=J (J^{-1}v_{n}-\lambda_{n}(Kv_{n}+u_{n})-\lambda_{n}\theta_{n}(J^{-1}v_{n}- J^{-1}v_{1}) ),\quad n\geq1, \end{cases} $$
(4.8)

where \(\lambda_{n}\) and \(\theta_{n}\) are sequences in \((0,1)\) satisfying appropriate conditions. Assume that the equation \(u + KF u =0\) has a solution. Then, the sequences \(\{u_{n}\}\) and \(\{v_{n}\}\) converge strongly to \(u^{*}\) and \(v^{*}\), respectively, where \(u^{*}\) is the solution of \(u+KF u = 0\) with \(v^{*}= F u^{*}\).

Theorem 4.8

(Chidume et al. [11])

Let E be a uniformly convex and uniformly smooth real Banach space and \(F: E \to E^{*}\), \(K:E^{*}\to E\) be maximal monotone maps. For \(u_{1}\in E\) and \(v_{1}\in E^{*}\), define the sequences \(\{u_{n}\}\) and \(\{v_{n}\}\) in E and \(E^{*}\), respectively, by

$$ \textstyle\begin{cases} u_{n+1}=J^{-1} ( Ju_{n}-\lambda_{n}(Fu_{n}-v_{n})-\lambda_{n}\theta _{n}Ju_{n} ),\quad n\geq1,\\ v_{n+1}=J (J^{-1}v_{n}-\lambda_{n}(Kv_{n}+u_{n})-\lambda_{n}\theta _{n}J^{-1}v_{n} ),\quad n\geq1, \end{cases} $$
(4.9)

where \(\lambda_{n}\) and \(\theta_{n}\) are sequences in \((0,1)\) satisfying appropriate conditions. Assume that the equation \(u + KF u =0\) has a solution. Then, the sequences \(\{u_{n}\}\) and \(\{v_{n}\}\) converge strongly to \(u^{*}\) and \(v^{*}\), respectively, where \(u^{*}\) is the solution of \(u+KF u = 0\) with \(v^{*}= F u^{*}\).

5 Numerical illustration

In this section, we present numerical examples to compare the convergence of the sequences generated by our inertial algorithms with those generated by some recent important algorithms. First, we compare the convergence of the sequence generated by Algorithm (3.1) (Inertial Algorithm 1) with the sequences generated by Algorithm (1.5), IPPA (1.7), RPPA (1.8), and RIPPA (1.9), respectively. Also, we present numerical examples to compare the convergence of the sequence generated by Algorithm (4.7) (Inertial Algorithm 2) with the sequences generated by Algorithms (4.8) and (4.9), respectively. Finally, we present a numerical example to illustrate the implementability of Algorithm (4.1), whose sequence approximates a solution of a convex optimization problem.

Example 1

(Zeros of a maximal monotone map in a real Hilbert space)

For Theorem 1.1, IPPA, RPPA, RIPPA, and Theorem 3.2, set \({E=L_{2}([0,1])}\). Consider the map \(A:E \to E\) defined by

$$ (Au) (t):=(t+1)u(t). $$

Then, it is easy to see that A is maximal monotone. Furthermore, the function \(u(t)=0\), \(\forall t\in[0,1]\), is the solution of the equation \(Au(t)=0\). In Theorem 1.1, we take \(\lambda_{n}=\frac{1}{(n+1)^{\frac{1}{2}}}\), \(\theta_{n}=\frac{1}{(n+1)^{\frac{1}{4}}}\); in Algorithm (1.7) (IPPA), take \(\lambda_{k}=\frac{k}{k+1}\), \(\alpha_{k}=\frac{1}{(k+1)^{2}}\); in Algorithm (1.8) (RPPA), take \(\lambda_{k}=\frac{k}{k+1}=\rho_{k}\); in Algorithm (1.9) (RIPPA), take \(\lambda_{k}=\frac{k}{k+1}=\rho_{k}\), \(\alpha_{k}=\frac{1}{(k+1)^{2}}\); and in Theorem 3.2, we take \({\lambda_{n}=\frac{1}{(n+1)^{\frac{1}{2}}}}\), \(\theta_{n}=\frac{1}{(n+1)^{\frac{1}{4}}}\), \(\beta_{n}=\frac{1}{(n+1)^{2}}\), \(n=1,2, \dots\), as our parameters. Clearly, these parameters satisfy the hypotheses of the respective theorems. In all the tables below, we use the following notations:

  • IP—initial point,

  • n—number of iterations,

  • \(\|u_{n+1}\|\)—norm of the approximate solution at the \((n+1)\)th iteration,

  • \(T(s)\)—time in seconds.

Setting a tolerance of \(10^{-6}\) and a maximum of \(n=10\) iterations, we obtain the iterates shown in Tables 1 and 2.
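To make the experiment reproducible in outline, here is a minimal Python sketch (ours) of Algorithm (3.1) for this example; on \(E=L_{2}([0,1])\) the duality map is the identity, and the grid size is an illustrative choice.

```python
import numpy as np

m = 200
t = np.linspace(0.0, 1.0, m)
A = lambda u: (t + 1.0) * u                     # (Au)(t) = (t+1)u(t), maximal monotone
l2 = lambda u: np.linalg.norm(u) / np.sqrt(m)   # discrete L_2([0,1]) norm

z_prev = np.zeros(m)                            # z_0
z = t ** 2 + 1.0                                # z_1 = u_1(t) = t^2 + 1
for n in range(1, 11):                          # at most 10 iterations
    lam, theta, beta = (n + 1) ** -0.5, (n + 1) ** -0.25, (n + 1) ** -2.0
    w = z + beta * (z - z_prev)                 # inertial extrapolation
    z_prev, z = z, w - lam * A(w) - lam * theta * w   # J = J^{-1} = I on L_2
    if l2(z) < 1e-6:                            # tolerance 10^{-6}
        break
print(n, l2(z))   # the norm falls to the 10^{-6} scale within about 10 iterations
```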

Table 1 Numerical results for Example 1
Table 2 Numerical results for Example 1

Example 2

(Numerical example for solutions of Hammerstein equation)

In Theorems 4.7, 4.8, and 4.6 (Inertial Algorithm 2), respectively, set \({E=L_{5}([0,1])}\); then \(E^{*}=L_{\frac{5}{4}}([0,1])\), and \(F:L_{5}([0,1]) \to L_{\frac{5}{4}}([0,1])\) is defined by

$$(Fu) (t)=Ju(t). $$

Then, it is easy to see that F is maximal monotone. Let \(K: L_{\frac {5}{4}}([0,1]) \to L_{5}([0,1])\) be defined by

$$(Kv) (t)=tv(t). $$

Observe that K is linear. Furthermore, it is easy to see that K is maximal monotone and that the function \(u^{*}(t)=0\), \(\forall t\in[0,1]\), is the only solution of the equation \(u+KFu=0\). In the algorithm of Theorem 3.1 in [44], we take \(\lambda_{n}=\theta_{n}=\frac{1}{(n+1)^{\frac{1}{2}}}\); in the algorithm of Theorem 3.4 in [11], \(\lambda_{n}=\frac{1}{(n+1)^{\frac{1}{2}}}\), \(\theta_{n}=\frac{1}{(n+1)^{\frac{1}{4}}}\), \(n=1,2,\dots\); and in the algorithm of Theorem 4.6, we take \(\lambda_{n}=\frac{1}{(n+1)^{\frac{1}{2}}}\), \(\theta_{n}=\frac{1}{(n+1)^{\frac{1}{4}}}\), \(\beta_{n}=\frac{1}{(n+1)^{2}}\), \(n=1,2,\dots\), as our parameters, and fixed \(u_{0}(t)=t\) and \(v_{0}(t)=t+1\). Clearly, these parameters satisfy the hypotheses of the respective theorems. Setting a tolerance of \(10^{-6}\) and a maximum of \(n=6\) iterations, we obtain the iterates shown in Table 3.

Table 3 Numerical results for Example 2

Example 3

(Numerical example for solutions of convex optimization problem)

In Theorem 4.2, set \({E=L_{2}([0,1])}\). Let \(f: E\to \Bbb {R}\cup\{\infty\}\) be defined by

$$ f(z)= \Vert z \Vert , \quad\text{then } \partial f(z) = \textstyle\begin{cases} \frac{z}{ \Vert z \Vert },& z\neq0; \\ B(0,1),& z=0. \end{cases} $$
(5.1)

Then, it is easy to see that ∂f is maximal monotone. Furthermore, the function \(z(t)=0\), \({\forall t\in[0,1]}\), is the solution of the inclusion \(0\in\partial f(z)\). We take \(\lambda_{n}=\frac{1}{(n+1)^{\frac{1}{2}}}\), \(\theta_{n}=\frac{1}{(n+1)^{\frac{1}{4}}}\), \(\beta_{n}=\frac{1}{(n+1)^{2}}\), \(n=1,2, \dots\), as our parameters. Clearly, these parameters satisfy the hypotheses of Theorem 4.2. Setting a tolerance of \(10^{-6}\) and a maximum of \(n=10\) iterations, we obtain the iterates shown in Table 4.
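A minimal Python sketch (ours) of Algorithm (4.1) for this example, again with \(J=I\) on \(L_{2}([0,1])\) and the single-valued selection \(\partial f(z)=z/\|z\|\) off the origin:

```python
import numpy as np

m = 200
t = np.linspace(0.0, 1.0, m)
l2 = lambda z: np.linalg.norm(z) / np.sqrt(m)     # discrete L_2([0,1]) norm

def subgrad(z):
    """A selection of (5.1): z/||z|| if z != 0, and 0 (in B(0,1)) at z = 0."""
    nz = l2(z)
    return z / nz if nz > 0.0 else np.zeros_like(z)

z_prev = np.zeros(m)                              # z_0
z = t + np.sin(t)                                 # z_1(t) = t + sin t
for n in range(1, 11):
    lam, theta, beta = (n + 1) ** -0.5, (n + 1) ** -0.25, (n + 1) ** -2.0
    w = z + beta * (z - z_prev)
    z_prev, z = z, w - lam * subgrad(w) - lam * theta * w   # J = I on L_2
print(l2(z))   # decreases toward the minimizer 0 as the steps lambda_n shrink
```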

Table 4 Numerical results for Example 3

Observations

  1.

    In Example 1, we presented a numerical experiment for zeros of a maximal monotone map A on E, where \(E=L_{2}([0,1])\). With a tolerance of \(10^{-6}\), setting the maximum number of iterations to \(n=10\) and considering \(u_{1}(t)=t^{2}+1\), the sequence generated by Algorithm (1.5) and the sequence generated by the IPPA (1.7) are yet to converge to zero, whereas the sequence generated by our algorithm, Algorithm (3.1), converges to zero within the first few iterations, its 8th iterate, \(1.89\mathrm{E}{-}6\), already being a very good approximation of a zero.

    Furthermore, the sequence generated by the RPPA converges to zero with the 10th iterate as 0.005, and the sequence generated by the RIPPA converges to zero with the 10th iterate as 0.0051 in 16.68 seconds, whereas the sequence generated by our algorithm, Algorithm (3.1), converges to zero as described above. The convergence of the sequence generated by our algorithm is better than that of the sequence generated by either the RPPA or the RIPPA. A similar trend is observed when the initial vector is changed to \(u_{1}(t)=\frac{1}{t+1}\).

  2.

    In Example 2, we presented a numerical experiment for solutions of a Hammerstein integral equation, where \(E=L_{5}([0,1])\), and \(F:E\to E^{*}\) and \(K:E^{*}\to E\) are maximal monotone. With a tolerance of \(10^{-6}\), setting the maximum number of iterations to \(n=6\), and taking \(u_{1}(t)=\sin t\) and \(v_{1}(t)=\cos t\), the sequence generated by Algorithm (4.8), after 6 iterations in 41.56 seconds, is yet to converge to any zero of A, whereas Algorithms (4.7) and (4.9), after 6 iterations in 4129.97 and 92.78 seconds, respectively, converged to a zero of A. Furthermore, Algorithm (4.9) and our Algorithm (4.7), with these initial vectors, converge to zero at nearly the same rate, with 6th iterates 0.0337 (in 92.78 seconds) and 0.0291 (in 4129.97 seconds), respectively. Similar trends are observed when the initial vectors are changed to \(u_{1}(t)=t^{2}-2\), \(v_{1}(t)=e^{t}-1\), and \(u_{1}(t)=2t^{3}-2\), \(v_{1}(t)=te^{t}\).

  3.

    Example 3, where \(E=L_{2}([0,1])\), and \(f:E\to\mathbb{R}\cup \{\infty\}\) and \(\partial f:E\rightrightarrows E^{*}\) are the maps defined in equation (5.1), demonstrates the implementability of the sequence generated by Algorithm (4.1) with a tolerance of \(10^{-6}\), a maximum of \(n=10\) iterations, and \(z_{1}(t)= t+\sin t\).

Remark 4

Theorems involving IPPA, RPPA, and RIPPA as cited above are proved in real Hilbert spaces, whereas our theorems in this paper are proved in much more general, uniformly smooth and uniformly convex real Banach spaces. Moreover, a strong convergence theorem is proved in Theorem 3.2, whereas a weak convergence theorem is proved for IPPA, RPPA and RIPPA, respectively.

6 Conclusions

An inertial iterative algorithm which does not involve the resolvent operator is proposed for approximating a solution of a maximal monotone inclusion in uniformly convex and uniformly smooth real Banach spaces. The sequence generated by the algorithm is proved to converge strongly to a solution of the inclusion. Furthermore, the theorem proved is applied to approximate a solution of a convex optimization problem, and a solution of a Hammerstein integral equation. In addition, numerical experiments are given to compare, in terms of CPU time and number of iterations, the performance of the sequence generated by our algorithm with the performance of the sequences generated by IPPA, RPPA and RIPPA, respectively. In these examples, the performance of the sequence generated by our algorithm is much better than the performance of the sequence generated by any of IPPA, RPPA, and RIPPA. A numerical example is also given to illustrate the implementability of our algorithm for approximating a solution of a convex optimization problem and for approximating a solution of a Hammerstein integral equation. Finally, it is clear that our algorithm is a welcome addition to the inertial proximal point type algorithms for approximating solutions of maximal monotone inclusions and their applications.

References

  1. Alber, Ya., Ryazantseva, I.: Nonlinear Ill Posed Problems of Monotone Type. Springer, London (2006)

  2. Alvarez, F.: On the minimizing property of a second order dissipative system in Hilbert spaces. SIAM J. Control Optim. 38, 1102–1119 (2000)

  3. Alvarez, F.: Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 14(3), 773–782 (2004)

  4. Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. (2001). https://doi.org/10.1023/A:1011253113155

  5. Brézis, H., Browder, F.E.: Nonlinear integral equations and systems of Hammerstein type. Bull. Am. Math. Soc. 82, 115–147 (1976)

  6. Browder, F.E.: Fixed point theory and nonlinear problems. Bull. Am. Math. Soc. 9, 1–39 (1983)

  7. Browder, F.E., De Figueiredo, D.G., Gupta, C.P.: Maximal monotone operators and nonlinear integral equations of Hammerstein type. Bull. Am. Math. Soc. 76, 700–705 (1970)

  8. Chepanovich, R.S.: Nonlinear Hammerstein equations and fixed points. Publ. Inst. Math. (Belgr.) 35, 119–123 (1984)

  9. Chidume, C.E.: Geometric Properties of Banach Spaces and Nonlinear Iterations. Lecture Notes in Mathematics, vol. 1965. Springer, London (2009)

  10. Chidume, C.E., Adamu, A., Minjibir, M.S., Nnyaba, U.V.: On the strong convergence of the proximal point algorithm with an application to Hammerstein equations. J. Fixed Point Theory Appl. 22, Article ID 61 (2020)

  11. Chidume, C.E., Adamu, A., Okereke, L.C.: Approximation of solutions of Hammerstein equations with monotone mappings in real Banach spaces. Carpath. J. Math. 35(3), 305–316 (2019)

  12. Chidume, C.E., Adamu, A., Okereke, L.C.: Iterative algorithms for solutions of Hammerstein equations in real Banach spaces. Fixed Point Theory Appl. 2020, Article ID 4 (2020)

  13. Chidume, C.E., Bello, A.U.: An iterative algorithm for approximating solutions of Hammerstein equations with monotone maps in Banach spaces. Appl. Math. Comput. 313, 408–417 (2017)

  14. Chidume, C.E., De Souza, G.S., Nnyaba, U.V., Romanus, O.M., Adamu, A.: Approximation of zeros of m-accretive mappings, with applications to Hammerstein integral equations. Carpath. J. Math. 36(1), 45–55 (2020)

  15. Chidume, C.E., Djitte, N.: Approximation of solutions of nonlinear integral equations of Hammerstein type. ISRN Math. Anal. (2012). https://doi.org/10.5402/2012/169751

  16. Chidume, C.E., Idu, K.O.: Approximation of zeros of bounded maximal monotone mappings, solutions of Hammerstein integral equations and convex minimization problems. Fixed Point Theory Appl. (2016). https://doi.org/10.1186/s13663-016-0582-8

  17. Chidume, C.E., Ikechukwu, S.I., Adamu, A.: Inertial algorithm for approximating a common fixed point for a countable family of relatively nonexpansive maps. Fixed Point Theory Appl. 2018, Article ID 9 (2018)

  18. Chidume, C.E., Kumam, P., Adamu, A.: A hybrid inertial algorithm for approximating solution of convex feasibility problems with applications. Fixed Point Theory Appl. 2020, Article ID 12 (2020)

  19. Chidume, C.E., Nnakwe, M.O., Adamu, A.: A strong convergence theorem for generalized-Φ-strongly monotone maps, with applications. Fixed Point Theory Appl. (2019). https://doi.org/10.1186/s13663-019-0660-9

  20. Chidume, C.E., Okereke, L.C., Adamu, A.: A hybrid algorithm for approximating solutions of a variational inequality problem and a convex feasibility problem. Adv. Nonlinear Var. Inequal. 21 (2018)

  21. Chidume, C.E., Uba, M.O., Uzochukwu, M.I., Otubo, E.E., Idu, K.O.: Strong convergence theorem for an iterative method for finding zeros of maximal monotone maps with applications to convex minimization and variational inequality problems. Proc. Edinb. Math. Soc. (2019). https://doi.org/10.1017/S0013091518000366

  22. Chidume, C.E., Zegeye, H.: Approximation of solutions of nonlinear equations of monotone and Hammerstein type. Appl. Anal. 82(8), 747–758 (2003)

  23. Chidume, C.E., Zegeye, H.: Approximation of solutions of nonlinear equations of Hammerstein type in Hilbert space. Proc. Am. Math. Soc. 133(3), 851–858 (2005)

  24. Cioranescu, I.: Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems. Mathematics and Its Applications, vol. 62. Kluwer Academic, Dordrecht (1990)

  25. De Figueiredo, D.G., Gupta, C.P.: On the variational methods for the existence of solutions to nonlinear equations of Hammerstein type. Bull. Am. Math. Soc. 40, 470–476 (1973)

  26. Djitte, N., Sene, M.: Iterative solution of nonlinear integral equations of Hammerstein type with Lipschitz and accretive operators. ISRN Appl. Math. (2012). https://doi.org/10.5402/2012/963802

  27. Dolezale, V.: Monotone Operators and Its Applications in Automation and Network Theory. Studies in Automation and Control. Elsevier, New York (1979)

  28. Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55, 293–318 (1992)

  29. Jules, F., Maingé, P.E.: Numerical approach to a stationary solution of a second order dissipative dynamical system. Optimization 51, 235–255 (2002)

  30. Kamimura, S., Takahashi, W.: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 13, 938–945 (2002)

  31. Lehdili, N., Moudafi, A.: Combining the proximal algorithm and Tikhonov regularization. Optimization 37, 239–252 (1996)

  32. Lindenstrauss, J., Tzafriri, L.: Classical Banach Spaces II: Function Spaces. Ergebnisse Math. Grenzgebiete, vol. 97. Springer, Berlin (1979)

  33. Martinet, B.: Régularisation d'inéquations variationnelles par approximations successives. Rev. Fr. Autom. Inform. Rech. Opér. 4, 154–158 (1970)

  34. Minty, G.J.: Monotone networks. Proc. R. Soc. Lond. 257, 194–212 (1960)

  35. Ofoedu, E.U., Onyi, C.E.: New implicit and explicit approximation methods for solutions of integral equations of Hammerstein type. Appl. Math. Comput. 246, 628–637 (2014)

  36. Pascali, D., Sburian, S.: Nonlinear Mappings of Monotone Type. Editura Academia, Bucuresti (1978)

  37. Polyak, B.T.: Some methods of speeding up the convergence of iteration method. USSR Comput. Math. Math. Phys. 4(5), 1–17 (1964)

  38. Reich, S.: Constructive techniques for accretive and monotone operators. In: Applied Nonlinear Analysis, pp. 335–345. Academic Press, New York (1979)

  39. Reich, S., Sabach, S.: Two strong convergence theorems for a proximal method in reflexive Banach spaces. Numer. Funct. Anal. Optim. 31, 22–44 (2010)

  40. Rockafellar, R.T.: On the maximal monotonicity of sub-differential mappings. Pac. J. Math. 33, 209–216 (1970)

  41. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14(5), 877–898 (1976)

  42. Shehu, Y.: Strong convergence theorem for integral equations of Hammerstein type in Hilbert spaces. Appl. Math. Comput. 231, 140–147 (2014)

  43. Solodov, M.V., Svaiter, B.F.: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program., Ser. A 87, 189–202 (2000)

  44. Uba, M.O., Uzochukwu, M.I., Onyido, M.A.: Algorithm for approximating solutions of Hammerstein integral equations with maximal monotone operators. Indian J. Pure Appl. Math. 48(3), 391–410 (2017)

  45. Xu, H.K.: Iterative algorithm for nonlinear operators. J. Lond. Math. Soc. (2000). https://doi.org/10.1007/s10898-006-9002-7

  46. Xu, H.K.: A regularization method for the proximal point algorithm. J. Glob. Optim. 36(1), 115–125 (2006)


Acknowledgements

The authors appreciate the support of their institution. The authors would like to thank the referees for their esteemed comments and suggestions.

Availability of data and materials

Data sharing is not applicable to this article.

Funding

No funding is applicable to this article.

Author information


Contributions

All the authors contributed evenly in the writing of this paper. They read and approved the final manuscript.

Corresponding author

Correspondence to C. E. Chidume.

Ethics declarations

Competing interests

The authors declare that they have no conflict of interest.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


Cite this article

Chidume, C.E., Adamu, A. & Nnakwe, M.O. Strong convergence of an inertial algorithm for maximal monotone inclusions with applications. Fixed Point Theory Appl 2020, 13 (2020). https://doi.org/10.1186/s13663-020-00680-2
