
A new self-adaptive method for solving resolvent of sum of two monotone operators in Banach spaces

Abstract

We introduce a Tseng-type extragradient method for solving the monotone inclusion problem in Banach spaces. A strong convergence result for a Halpern inertial extrapolation method for solving the resolvent of the sum of two monotone operators, without knowledge of the Lipschitz constant, is established. Lastly, we illustrate the numerical behavior of our iterative scheme to showcase the performance of the proposed method compared to other related results in the literature.

1 Introduction

Let \(\mathcal{X}\) be a real Banach space equipped with its dual space \(\mathcal{X}^{*}\). The monotone inclusion problem (MIP for short) is to find a point \(u^{*}\in \mathcal{X}\) such that

$$\begin{aligned} 0 \in (\Phi +\Psi )u^{*}, \end{aligned}$$
(1.1)

where \({\Phi}: \mathcal{X} \to \mathcal{X^{*}}\) is a monotone operator and \(\Psi:\mathcal{X} \to 2^{\mathcal{X}^{*}}\) is a maximal monotone operator. The solution set of the MIP is denoted by \((\Phi +\Psi )^{-1}(0^{*})\). The MIP provides an elegant formulation for a wide range of problems that involve finding an optimal solution of optimization-related problems, such as mathematical programming, optimal control, variational inequalities, and many more (see [32]). The MIP has applications in various areas of real-life problems, such as image processing, statistical regression, and signal recovery (see [13, 14, 16, 22]).

Due to the variety of applications of the MIP in fixed-point theory, researchers working in this direction have, over the years, proposed different iterative methods for solving (1.1) (see [2, 6, 11, 30]). One of these methods is the forward-backward splitting method introduced by Lions and Mercier [18] in the setting of a real Hilbert space \(\mathcal{H}\); this method is known to be efficient for solving the MIP. The forward-backward splitting method is implemented as follows:

Given an arbitrary starting point \(q_{0}\in \mathcal{H}\), the sequence \(\{q_{k}\}\) of iterates is generated as follows:

$$\begin{aligned} q_{k+1}=R_{\lambda _{k}}^{{\Psi}}(q_{k}-\lambda _{k} {\Phi}q_{k}),\quad k \geq 1, \end{aligned}$$

where \(R_{\lambda _{k}}^{{\Psi}}:=(I + \lambda _{k} {\Psi})^{-1}\) denotes the resolvent of the maximal monotone operator Ψ, I is the identity operator, and \(\{\lambda _{k}\}\) is a positive real sequence. They established a weak convergence result for solving the MIP (1.1) by assuming that Φ is α-inverse strongly monotone (α-ism). It is well known that inverse strong monotonicity of Φ is a strict condition, so it is very desirable to dispense with this condition when solving the MIP (1.1).
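For intuition, the following Python sketch runs the forward-backward iteration in \(\mathbb{R}^{n}\) for the toy inclusion \(0 \in (\Phi + \Psi)q\) with \(\Phi = \nabla f\), \(f(q)=\frac{1}{2}\Vert Aq-b\Vert ^{2}\), and \(\Psi = \partial \Vert \cdot \Vert _{1}\), whose resolvent is the classical soft-thresholding map. The data, the choice of operators, and the step size are illustrative assumptions, not part of the scheme above.

```python
import numpy as np

def soft_threshold(x, tau):
    # Resolvent R_tau^Psi for Psi = subdifferential of the l1-norm.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)

Phi = lambda q: A.T @ (A @ q - b)      # single-valued monotone part (a gradient)
lam = 1.0 / np.linalg.norm(A, 2) ** 2  # constant step below 1/L, L = ||A||_2^2

q = np.zeros(10)
for k in range(200):
    q = soft_threshold(q - lam * Phi(q), lam)  # q_{k+1} = R_lam^Psi(q_k - lam*Phi(q_k))

# A small fixed-point residual indicates an approximate solution of the MIP.
print(np.linalg.norm(q - soft_threshold(q - lam * Phi(q), lam)))
```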

To dispense with the inverse strong monotonicity assumption, Tseng [25] introduced the following splitting method, known as Tseng’s splitting method:

$$\begin{aligned} \textstyle\begin{cases} q_{1} \in \mathcal{H} \\ y_{k}=R_{\lambda _{k}}^{\Psi}(q_{k}-\lambda _{k} \Phi q_{k}), \\ q_{k+1}=y_{k}-\lambda _{k}(\Phi y_{k}-\Phi q_{k}),\quad \forall k \geq 1, \end{cases}\displaystyle \end{aligned}$$

where \(\Phi: \mathcal{H} \to \mathcal{H}\) is a monotone and L-Lipschitz continuous operator, \(\Psi:\mathcal{H} \to 2^{\mathcal{H}}\) is a multi-valued operator, and \(\{\lambda _{k}\}\) is a sequence in \((0, \frac{1}{L}) \), where L is the Lipschitz constant of Φ.
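Under the same illustrative assumptions as in the previous sketch (Φ a gradient and \(\Psi = \partial \Vert \cdot \Vert _{1}\) in \(\mathbb{R}^{n}\)), the hedged sketch below isolates what Tseng's method adds: the correction step \(q_{k+1}=y_{k}-\lambda _{k}(\Phi y_{k}-\Phi q_{k})\) after the forward-backward step.

```python
import numpy as np

def soft_threshold(x, tau):
    # Resolvent of tau * ||.||_1 (as in the previous sketch).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
Phi = lambda q: A.T @ (A @ q - b)
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of Phi
lam = 0.9 / L                   # fixed step in (0, 1/L), as Tseng's method requires

q = np.zeros(10)
for k in range(200):
    y = soft_threshold(q - lam * Phi(q), lam)  # forward-backward step
    q = y - lam * (Phi(y) - Phi(q))            # Tseng's correction step

print(np.linalg.norm(q - soft_threshold(q - lam * Phi(q), lam)))  # residual
```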

Remark 1.1

Although Tseng [25] dispensed with the inverse strong monotonicity assumption on Φ, the limitation of Tseng’s method above is that it requires prior knowledge of the Lipschitz constant of the underlying operator. From a practical point of view, however, the Lipschitz constant is often very difficult to compute.

In 2019, Shehu [24] extended Tseng’s [25] iterative method to the setting of a 2-uniformly convex Banach space that is also uniformly smooth. He established a weak convergence result assuming prior knowledge of the Lipschitz constant.

Very recently, Sunthrayuth et al. [23] extended the result of Shehu [24] to the setting of a reflexive Banach space. They [23] proposed two different iterative methods that do not require prior knowledge of the Lipschitz constant for solving the MIP and the fixed-point problem for a Bregman relatively nonexpansive mapping. One of their iterative methods uses a linesearch procedure, which necessitates numerous additional computations and thus increases the computational cost of the algorithm, while the other uses a self-adaptive procedure, which is more efficient. The method that uses the self-adaptive step size is stated below:

Algorithm 1.2

Mann splitting algorithm for solving the MIP. Initialization: Choose \(\lambda ^{1} > 0\), \(\mu, \theta \in (0,\sigma )\), where σ is the constant defined in (2.3). Let \(q^{1} \in \mathcal{E}\) be an arbitrary starting point.

Iterative step:

Step 1:

Compute

$$\begin{aligned} w^{k}=R_{\lambda ^{k}}^{\Psi} \nabla g^{*}\bigl( \nabla g\bigl(q^{k}\bigr)-\lambda ^{k} \Phi \bigl(q^{k}\bigr)\bigr). \end{aligned}$$
Step 2:

Compute

$$\begin{aligned} z^{k}=\nabla g^{*}\bigl(\nabla g\bigl(w^{k}\bigr)- \lambda ^{k}\bigl(\Phi \bigl(w^{k}\bigr)-\Phi \bigl(q^{k}\bigr)\bigr)\bigr), \end{aligned}$$

where \(\lambda ^{k+1}\) is updated as follows:

$$\begin{aligned} \lambda ^{k+1}= \textstyle\begin{cases} \min \{ \frac {\mu \Vert q^{k}-w^{k} \Vert }{ \Vert \Phi q^{k}-\Phi w^{k} \Vert }, \lambda ^{k} \} & \textit{if } \Phi q^{k}\neq \Phi w^{k}, \\ \lambda ^{k} & \textit{otherwise}. \end{cases}\displaystyle \end{aligned}$$
(1.2)
Step 3:

Compute

$$\begin{aligned} q^{k+1}=\nabla g^{*}\bigl(\bigl(1-\alpha ^{k}\bigr) \nabla g\bigl(z^{k}\bigr) + \alpha ^{k} \nabla g \bigl(Tz^{k}\bigr)\bigr) \end{aligned}$$

Stopping criterion If \(q^{k+1}=z^{k}\) for some positive integer k, then stop. Otherwise, set \(k:=k+1\) and return to the Iterative step.

A weak convergence result was obtained using their iterative algorithm without any prior knowledge of the Lipschitz constant of the underlying operator.
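To make the update rule (1.2) concrete, here is a hedged Python sketch of the step-size update alone; the tolerance eps, used as a floating-point proxy for the exact condition \(\Phi q^{k}\neq \Phi w^{k}\), is our own assumption.

```python
import numpy as np

def update_step(lam_k, mu, q_k, w_k, Phi_q, Phi_w, eps=1e-12):
    # Rule (1.2): shrink the step only when the local Lipschitz estimate
    # ||Phi(q) - Phi(w)|| / ||q - w|| demands it; otherwise keep lam_k.
    denom = np.linalg.norm(Phi_q - Phi_w)
    if denom > eps:  # floating-point proxy for "Phi q != Phi w"
        return min(mu * np.linalg.norm(q_k - w_k) / denom, lam_k)
    return lam_k

# For an affine Phi(x) = 2x, the local estimate equals the true constant 2,
# so the update settles at mu / 2 = 0.25 here.
q, w = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(update_step(1.0, 0.5, q, w, 2 * q, 2 * w))  # 0.25
```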

Inspired by the heavy-ball method arising from a second-order-in-time dynamical system, Polyak [21] and Nesterov [20] proposed the following inertial method:

$$\begin{aligned} \textstyle\begin{cases} u^{k}=q^{k} + \theta ^{k}(q^{k}-q^{k-1}), \\ q^{k+1}=u^{k}-\lambda ^{k} \nabla f(u^{k}),\quad \forall k \geq 1, \end{cases}\displaystyle \end{aligned}$$
(1.3)

where \(\theta ^{k} \in [0,1)\) is the inertial parameter and \(\{\lambda ^{k}\}\) is a positive sequence (see [24, 19, 28, 31]).
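A minimal sketch of the inertial scheme (1.3) on an assumed ill-conditioned quadratic \(f(q)=\frac{1}{2}q^{T}Hq\); the matrix H and the parameter values are illustrative only.

```python
import numpy as np

H = np.diag([1.0, 10.0])        # assumed ill-conditioned quadratic f = 0.5 q^T H q
grad = lambda q: H @ q
theta, lam = 0.5, 0.1           # inertial weight in [0,1) and step size (assumed)

q_prev = q = np.array([5.0, 5.0])
for k in range(100):
    u = q + theta * (q - q_prev)          # inertial extrapolation u^k
    q_prev, q = q, u - lam * grad(u)      # gradient step from the extrapolated point

print(q)                                   # close to the minimizer 0
```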

Very recently, Abass et al. [1] proposed the following modified inertial method for approximating solutions of systems of MIPs and fixed-point problems for a finite family of multi-valued Bregman relatively nonexpansive mappings:

$$\begin{aligned} \textstyle\begin{cases} u_{k}=\nabla h^{*}(\nabla h(q_{k}) + \theta _{k}(\nabla h(q_{k-1})- \nabla h(q_{k}))) \\ w_{k}=\nabla h^{*}(R_{\phi}^{N} \circ R_{\phi}^{N-1}\circ \cdots \circ R_{\phi}^{1}(u_{k})) \\ y_{k}=\nabla h^{*}(\delta _{k,0} \nabla h(w_{k}) + \sum_{r=1}^{N} \delta _{k,r} \nabla h(z_{k,r})),\quad z_{k,r} \in S_{r}w_{k} \\ x_{k+1}=\nabla h^{*}(\alpha _{k} \nabla h (u) + \beta _{k} \nabla h(x_{k}) + \gamma _{k} \nabla h(y_{k})), \end{cases}\displaystyle \end{aligned}$$

where \(\{\theta _{k}\} \subset [0, \frac{1}{2}]\), and \(\{\alpha _{k}\}, \{\beta _{k} \}, \{\gamma _{k}\}\), \(\{\delta _{k,r}\}\) are sequences in (0,1) such that \(\alpha _{k} + \beta _{k} + \gamma _{k}=1\). They established a strong convergence result of their method for solving the aforementioned problems.

Motivated by the results in [23–25] and related results in the literature in this direction, we develop a new self-adaptive method equipped with an inertial Halpern scheme for solving the MIP in real Banach spaces. We establish a strong convergence result for solving the MIP without knowledge of the Lipschitz constant of the underlying operator. Lastly, we present a few numerical experiments in comparison with other related methods in the literature. Our result is a further contribution to related results in the literature.

We highlight some of the contributions in this study:

  1. (i)

The results of [24, 25] are extended to more general Banach spaces.

  2. (ii)

A self-adaptive step-size procedure, which is updated from iteration to iteration and is independent of the Lipschitz constant of the underlying operator, is introduced. This differs from the methods of Shehu [24] and Tseng [25], where knowledge of the Lipschitz constant is required.

  3. (iii)

A strong convergence result, which is more desirable than a weak convergence result, is established (cf. [23]).

  4. (iv)

The result discussed in [1] becomes a special case of the MIP (1.1) when \(\Phi \equiv 0\).

  5. (v)

We employ the inertial method as introduced by Polyak [21], which differs from the ones in [1, 29] (i.e., \(\theta _{k}(q_{k-1}-q_{k})\) is changed to \(\theta _{k}(q_{k}-q_{k-1})\)).

2 Preliminaries

In this section, we denote strong and weak convergence by “→” and “⇀”, respectively.

Let K be a nonempty closed and convex subset of a real Banach space \(\mathcal{X}\), and let \(h: \mathcal{X} \rightarrow (-\infty, + \infty ]\) be a proper, lower semicontinuous, and convex function. The Fenchel conjugate of h, denoted by \(h^{*}: \mathcal{X^{*}} \rightarrow (-\infty,+ \infty ]\), is defined by

$$\begin{aligned} h^{*}\bigl(u^{*}\bigr)=\sup \bigl\{ \bigl\langle u^{*}, u\bigr\rangle -h(u): u \in \mathcal{X} \bigr\} ,\quad u^{*} \in \mathcal{X^{*}}. \end{aligned}$$

The domain of h is denoted by \(\operatorname{dom} (h)=\{u \in \mathcal{X}: h(u)< +\infty \}\). For any \(u \in \operatorname{intdom} (h)\) and \(v \in \mathcal{X}\), the right-hand derivative of h at u in the direction v is defined by

$$\begin{aligned} h^{0}(u, v)=\lim_{t \rightarrow 0^{+}}\frac{h(u+tv)-h(u)}{t}. \end{aligned}$$

The function h is said to be

  1. (i)

Gâteaux differentiable at u, if the limit \(h^{0}(u, v)\) exists for any v; in this case, the gradient \(\nabla h(u)\) is the linear functional satisfying \(\langle \nabla h(u), v\rangle =h^{0}(u,v)\) for all v;

  2. (ii)

    Gâteaux differentiable, if it is Gâteaux differentiable for any \(u \in \operatorname{intdom} (h)\);

  3. (iii)

Fréchet differentiable at u, if the above limit is attained uniformly for \(\Vert v\Vert =1\);

  4. (iv)

    Uniformly Fréchet differentiable on a subset K of \(\mathcal{X}\), if the above limit is attained uniformly for \(u \in K\) and \(\Vert v\Vert =1\).

Let \(h:\mathcal{X} \rightarrow (-\infty, + \infty ]\) be a mapping, then h is said to be:

  1. (i)

essentially smooth, if the subdifferential ∂h of h is both locally bounded and single-valued on its domain, where \(\partial{h(u)}= \{w \in \mathcal{X}^{*}: h(v) -h(u) \geq \langle w,v-u \rangle, \forall v \in \mathcal{X} \}\);

  2. (ii)

    essentially strictly convex, if \((\partial h)^{-1}\) is locally bounded on its domain and h is strictly convex on every convex subset of \(\operatorname{dom} \partial h\);

  3. (iii)

    Legendre, if it is both essentially smooth and essentially strictly convex. See [8, 9] for more details on Legendre functions.

Alternatively, a function h is said to be Legendre if it satisfies the following conditions:

  1. (i)

The set \(\operatorname{int} \operatorname{dom} h\) is nonempty, h is Gâteaux differentiable on \(\operatorname{int} \operatorname{dom} h\), and \(\operatorname{dom} \nabla h=\operatorname{int} \operatorname{dom} h\);

  2. (ii)

The set \(\operatorname{int} \operatorname{dom} h^{*}\) is nonempty, \(h^{*}\) is Gâteaux differentiable on \(\operatorname{int}\operatorname{dom} h^{*}\), and \(\operatorname{dom} \nabla h^{*}=\operatorname{int} \operatorname{dom} h^{*}\).

Let \(B_{s}:=\{z \in \mathcal{X}: \Vert z\Vert \leq s\}\) for \(s > 0\). A function \(h: \mathcal{X} \rightarrow \mathbb{R}\) is called uniformly convex on bounded subsets of \(\mathcal{X}\) (see [33, pp. 203, 221]) if \(\rho _{s}(t) > 0\) for all \(s, t > 0\), where \(\rho _{s}: [0, + \infty ) \rightarrow [0, \infty ]\) is defined by

$$\begin{aligned} \rho _{s}(t)=\inf_{x, y \in B_{s}, \Vert x-y\Vert =t, \alpha \in (0,1)} \frac{\alpha h(x)+ (1-\alpha )h(y)-h(\alpha x+ (1-\alpha )y)}{\alpha (1-\alpha )}, \end{aligned}$$

for all \(t \geq 0\); here \(\rho _{s}\) is called the gauge of uniform convexity of h. The function h is said to be uniformly smooth on bounded subsets of \(\mathcal{X}\) (see [33, p. 221]) if \(\lim_{t \downarrow 0} \frac{\sigma _{s}(t)}{t}=0\) for all \(s > 0\), where \(\sigma _{s}: [0, +\infty ) \rightarrow [0,\infty ]\) is defined by

$$\begin{aligned} \sigma _{s}(t)=\sup_{x \in B_{s}, y \in S_{\mathcal{X}}, \alpha \in (0,1)} \frac{\alpha h(x+ (1-\alpha )ty)+ (1-\alpha )h(x-\alpha t y)-h(x)}{\alpha (1-\alpha )}, \end{aligned}$$

for all \(t \geq 0\). The function h is said to be uniformly convex if the function \(\delta h: [0, + \infty ) \rightarrow [0, + \infty )\) defined by

$$\begin{aligned} \delta h(t): =\sup \biggl\{ \frac{1}{2}h(x)+ \frac{1}{2}h(y)-h \biggl( \frac{x+y}{2}\biggr): \Vert y-x \Vert =t\biggr\} , \end{aligned}$$

satisfies \(\delta h(t)>0\) for all \(t > 0\).

Definition 2.1

[10] Let \(h:\mathcal{X} \rightarrow (-\infty,+ \infty ]\) be a convex and Gâteaux differentiable function. Then, the function \(G_{h}: \mathcal{X} \times \mathcal{X} \rightarrow [0, + \infty )\) defined by

$$\begin{aligned} G_{h}(u, v):=h(u)-h(v)- \bigl\langle \nabla h(v), u-v \bigr\rangle \end{aligned}$$
(2.1)

is called the Bregman distance with respect to h, where \(u, v \in \mathcal{X}\).

Moreover, the Bregman distance satisfies the following three-point identity: for any \(u \in \operatorname{dom} (h)\) and \(v, z \in \operatorname{int} \operatorname{dom} (h)\),

$$\begin{aligned} G_{h}(u,v)+ G_{h}(v,z)-G_{h}(u,z)= \bigl\langle \nabla h(z)-\nabla h(v), u-v \bigr\rangle . \end{aligned}$$
(2.2)

Also, when h is strongly convex with constant \(\sigma >0\), the Bregman distance \(G_{h}\) and the norm \(\Vert \cdot \Vert \) are related by

$$\begin{aligned} G_{h}(x,y) \geq \frac{\sigma}{2} \Vert x-y \Vert ^{2}, \quad\forall x \in \operatorname{dom} (h), y \in \operatorname{int} \bigl(\operatorname{dom} (h)\bigr). \end{aligned}$$
(2.3)
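As a concrete illustration (anticipating Example 4.1), for the negative entropy \(h(x)=\sum_{i}x_{i}\ln x_{i}\) the Bregman distance (2.1) is a Kullback–Leibler-type divergence, and the three-point identity (2.2) can be checked numerically. The sketch below, with randomly chosen positive vectors, is only a sanity check under these assumptions.

```python
import numpy as np

def h(x):                # negative entropy on the positive orthant
    return np.sum(x * np.log(x))

def grad_h(x):
    return 1.0 + np.log(x)

def G(u, v):             # Bregman distance (2.1): h(u) - h(v) - <grad h(v), u - v>
    return h(u) - h(v) - grad_h(v) @ (u - v)

rng = np.random.default_rng(2)
u, v, z = (rng.random(5) + 0.1 for _ in range(3))
lhs = G(u, v) + G(v, z) - G(u, z)        # left side of the identity (2.2)
rhs = (grad_h(z) - grad_h(v)) @ (u - v)  # right side of the identity (2.2)
print(np.isclose(lhs, rhs))              # True
```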

Let \(\mathcal{U}: K \rightarrow \operatorname{int}(\operatorname{dom} h)\) be a nonlinear operator. An element \(p \in K\) is said to be a fixed point of \(\mathcal{U}\) if \(\mathcal{U}p=p\); we denote by \(F(\mathcal{U})\) the set of fixed points of \(\mathcal{U}\). In addition, a point \(p \in K\) is said to be an asymptotic fixed point of \(\mathcal{U}\) if K contains a sequence \(\{x^{k}\}\) such that \(x^{k} \rightharpoonup p\) and \(\lim_{k \rightarrow \infty}\Vert \mathcal{U}x^{k}-x^{k}\Vert =0\). We denote by \(\hat{F}(\mathcal{U})\) the set of asymptotic fixed points of \(\mathcal{U}\).

An operator \(M:K \to \mathcal{X}\) is said to be:

  1. (i)

monotone, if \(\langle x-y, Mx-My\rangle \geq 0, \forall x, y \in K\);

  2. (ii)

L-Lipschitz continuous, if there exists a constant \(L>0\) such that \(\|Mx-My\| \leq L\|x-y\|, \forall x, y \in K\);

  3. (iii)

    Bregman quasi-nonexpansive, if \(F(M) \neq \emptyset \), and

    $$\begin{aligned} G_{h}(p, Mx) \leq G_{h}(p, x),\quad \forall p \in F(M), x \in K. \end{aligned}$$

For a set-valued operator \(M: \mathcal{X} \to 2^{\mathcal{X}^{*}}\), the domain, range, and graph are defined by \(\operatorname{dom}(M):=\{u \in \mathcal{X}: Mu \neq \emptyset\}\), \(\operatorname{Ran}(M):=\bigcup \{Mu: u \in \operatorname{dom}(M)\}\), and \(\operatorname{Gra}(M):=\{(u,u^{*}) \in \mathcal{X} \times \mathcal{X}^{*}: u^{*} \in Mu\}\), respectively. An operator M is said to be monotone if for each \((u,u^{*}), (v,v^{*}) \in \operatorname{Gra}(M)\), we have \(\langle u-v, u^{*}-v^{*}\rangle \geq 0\). A monotone operator M is said to be maximal if its graph is not properly contained in the graph of any other monotone operator on \(\mathcal{X}\). It is known that if \(h: \mathcal{X} \to \mathbb{R}\) is Gâteaux differentiable, strictly convex, and cofinite, then M is maximal monotone if and only if \(\operatorname{Ran}(\nabla h + \lambda M)=\mathcal{X^{*}}\) for every \(\lambda > 0\).

Let \(h: \mathcal{X} \to (-\infty, \infty ]\) be a Fréchet differentiable function that is bounded on bounded subsets of \(\mathcal{X}\), and let M be a maximal monotone operator. Then the resolvent of M for \(\lambda >0\), defined by

$$\begin{aligned} R_{\lambda}^{M}(u):=(\nabla h+ \lambda M)^{-1} \circ \nabla h(u),\quad \forall u \in \mathcal{X}, \end{aligned}$$

is a single-valued Bregman quasi-nonexpansive mapping from \(\mathcal{X}\) onto \(\operatorname{dom}(M)\) with \(F(R_{\lambda}^{M})=M^{-1}(0)\).

Definition 2.2

A function \(h: \mathcal{X} \rightarrow \mathbb{R}\) is called strongly coercive if

$$\begin{aligned} \lim_{ \Vert q^{k} \Vert \rightarrow \infty}\frac{h(q^{k})}{ \Vert q^{k} \Vert }=\infty. \end{aligned}$$

Lemma 2.3

[12] Let \(h: \mathcal{X} \rightarrow \mathbb{R}\) be a strongly coercive Bregman function and V be a function defined by

$$\begin{aligned} V\bigl(u, u^{*}\bigr)=h(u)-\bigl\langle u, u^{*}\bigr\rangle + h^{*}\bigl(u^{*}\bigr),\quad u \in \mathcal{X}, u^{*} \in \mathcal{X^{*}}. \end{aligned}$$

Then the following holds:

$$\begin{aligned} G_{h}\bigl(u, \nabla h^{*}\bigl(u^{*}\bigr) \bigr)=V\bigl(u,u^{*}\bigr), \quad\textit{for all } u\in \mathcal{X} \textit{ and } u^{*} \in \mathcal{X^{*}}. \end{aligned}$$
$$\begin{aligned} V\bigl(u, u^{*}\bigr) + \bigl\langle \nabla h^{*} \bigl(u^{*}\bigr)-u, v^{*}\bigr\rangle \leq V\bigl(u, u^{*} + v^{*}\bigr) \quad\textit{for all } u \in \mathcal{X} \textit{ and } u^{*}, v^{*} \in \mathcal{X^{*}}. \end{aligned}$$

Lemma 2.4

[12] Let \(h: \mathcal{X} \rightarrow \mathbb{R}\) be a Gâteaux differentiable function that is uniformly convex on bounded subsets of \(\mathcal{X}\). Suppose \(\{w^{k}\}_{k \in \mathbb{N}}\) and \(\{v^{k}\}_{k\in \mathbb{N}}\) are bounded sequences in \(\mathcal{X}\). Then,

$$\begin{aligned} \lim_{k \rightarrow \infty}G_{h}\bigl(w^{k},v^{k} \bigr)=0\Rightarrow \lim_{k \rightarrow \infty} \bigl\Vert v^{k}-w^{k} \bigr\Vert =0. \end{aligned}$$

Lemma 2.5

[17] Suppose \(h: \mathcal{X} \rightarrow \mathbb{R}\) is a Gâteaux differentiable function that is uniformly convex on bounded subsets of \(\mathcal{X}\). If \(u_{0} \in \mathcal{X}\) and the sequence \(\{G_{h}(u^{k}, u_{0})\}\) is bounded, then the sequence \(\{u^{k}\}\) is also bounded.

Definition 2.6

Let K be a nonempty closed and convex subset of \(\mathcal{X}\). The Bregman projection of \(u \in \operatorname{int}(\operatorname{dom} h)\) onto \(K \subset \operatorname{int}(\operatorname{dom} h)\) is the unique vector \(\operatorname{Proj}_{K}^{h}(u) \in K\) satisfying

$$\begin{aligned} G_{h}\bigl(\operatorname{Proj}_{K}^{h}(u), u \bigr)=\inf\bigl\{ G_{h}(v,u): v \in K\bigr\} . \end{aligned}$$

The Bregman projection is characterized by the identities given in the following lemma:

Lemma 2.7

[26] Let K be a nonempty closed and convex subset of a reflexive Banach space \(\mathcal{X}\) and \(x \in \mathcal{X}\). Let \(h: \mathcal{X} \rightarrow \mathbb{R}\) be a Gâteaux differentiable and totally convex function. Then,

(i) \(q=\operatorname{Proj}_{K}^{h}(u)\) if and only if \(\langle \nabla h(u)-\nabla h(q), v-q\rangle \leq 0, \forall v \in K\).

(ii) \(G_{h}(v, \operatorname{Proj}_{K}^{h}(u)) + G_{h}(\operatorname{Proj}_{K}^{h}(u), u) \leq G_{h}(v,u), \forall v \in K\).

Lemma 2.8

[7] Let \(\mathcal{X}\) be a real Banach space and \(\Phi:\mathcal{X} \to \mathcal{X^{*}}\) be a monotone, hemicontinuous and bounded operator. Suppose \(\Psi:\mathcal{X} \to \mathcal{X^{*}}\) is a maximal monotone operator. Then \({\Phi}+{\Psi}\) is maximal monotone.

Lemma 2.9

[5] Let \(\{u_{n}\}, \{\beta _{n}\}\), and \(\{\alpha _{n}\}\) be sequences in \([0,\infty )\) such that

$$ u_{n+1} \le u_{n}+\alpha _{n}(u_{n}-u_{n-1})+ \beta _{n} $$

for all \(n \ge 1\), \(\sum_{n=1}^{\infty}\beta _{n} < \infty \) and there exists a real number α with \(0 \le \alpha _{n} \le \alpha < 1\), for all \(n\in \mathbb{N}\). Then, the following hold:

  1. (i)

\(\sum_{n \ge 1}[u_{n}-u_{n-1}]_{+} < \infty \), where \(t_{+}=\max \{0,t\}\);

  2. (ii)

    there exists \(u^{*} \in [0,\infty )\) such that \(\lim_{n \to \infty}u_{n}=u^{*}\).

Lemma 2.10

[27] Let \(\{u_{n}\}\) be a sequence of nonnegative real numbers, \(\{\alpha _{n} \}\) be a sequence of real numbers in \((0,1)\) such that \(\sum_{n =1}^{\infty}\alpha _{n}=\infty \), and \(\{v_{n}\}\) be a sequence of real numbers. Assume that

$$ u_{n+1} \leq (1-\alpha _{n})u_{n}+\alpha _{n} v_{n} \quad\forall n \geq 1. $$

If \(\limsup_{k \to \infty} v_{n_{k}}\leq 0\) for every subsequence \(\{u_{n_{k}}\}\) of \(\{u_{n}\}\) satisfying the condition

$$ \liminf_{k \to \infty}(u_{n_{k} +1}-u_{n_{k}}) \geq 0, $$

then \(\lim_{n \to \infty}u_{n}=0\).
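A quick numerical illustration of the mechanism behind Lemma 2.10, with the assumed choices \(\alpha _{n}=\frac{1}{n+1}\) (so \(\sum \alpha _{n}=\infty\)) and \(v_{n}=\frac{1}{\sqrt{n}} \to 0\): the recursion forces \(u_{n} \to 0\).

```python
import numpy as np

u = 5.0
for n in range(1, 100001):
    alpha = 1.0 / (n + 1)       # alpha_n in (0,1) with divergent sum
    v = 1.0 / np.sqrt(n)        # an assumed sequence with v_n -> 0
    u = (1 - alpha) * u + alpha * v
print(u)                        # small: u_n -> 0, as the lemma predicts
```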

3 Main result

Now, we present a new self-adaptive method for solving the resolvent of the sum of two monotone operators. We begin with some important assumptions:

Assumption 3.1

  1. (R1)

    Let \(\mathcal{X}\) be a real Banach space equipped with its dual space \(\mathcal{X}^{*}\). Let \(\Phi:\mathcal{X} \to \mathcal{X}^{*}\) be a monotone and \(\mathcal{L}\)-Lipschitz continuous mapping and \(\Psi: \mathcal{X} \to 2^{\mathcal{X}^{*}}\) be a maximal monotone mapping.

  2. (R2)

    \(h:\mathcal{X} \to \mathbb{R} \cup \{+ \infty \}\) is a function which is Legendre, uniformly Fréchet differentiable, strongly coercive, uniformly convex, φ-strongly convex, and bounded on bounded subsets of \(\mathcal{X}\).

  3. (R3)

    The solution set \(\Delta:=(\Phi + \Psi )^{-1}(0)\) is nonempty.

Algorithm 3.2

MIP and its convergence analysis. Initialization: Given \(\kappa _{1} > 0, \sigma >0, \rho >0\), and \(\omega \in (0,1)\). Let \(v, q_{0}, q_{1} \in \mathcal{X}\) and let \(\{\psi _{k}\}\) be a sequence in (0,1) such that \(\lim_{k \rightarrow \infty}\psi _{k}=0\) and \(\sum_{k=1}^{\infty}\psi _{k}=\infty \).

Step 1:

Given \(q_{k-1}, q_{k}\), choose \(\sigma _{k}\) such that \(\sigma _{k} \in [0,\bar{\sigma}_{k}]\), where

$$\begin{aligned} \bar{\sigma}_{k}= \textstyle\begin{cases} \min \{\sigma, \frac {\pi _{k}}{\phi _{r}( \Vert \nabla h(q_{k})-\nabla h (q_{k-1}) \Vert )}, \frac {\pi _{k}}{G_{h}(q_{k},q_{k-1})} \} & \textit{if } q_{k} \neq q_{k-1}, \\ \sigma & \textit{otherwise}, \end{cases}\displaystyle \end{aligned}$$
(3.1)

and \(\{\pi _{k}\}\) is a sequence of nonnegative numbers such that \(\pi _{k}=o(\psi _{k})\), that is, \(\lim_{k \to \infty} \frac{\pi _{k}}{\psi _{k}}=0\).

Compute

$$\begin{aligned} \textstyle\begin{cases} r_{k}=\nabla h^{*}(\nabla h(q_{k}) + \sigma _{k}(\nabla h(q_{k})- \nabla h (q_{k-1}))) \\ s_{k}=R_{\kappa _{k}}^{\Psi} \nabla h^{*}(\nabla h(r_{k})-\kappa _{k} \Phi (r_{k})) \end{cases}\displaystyle \end{aligned}$$
(3.2)
Step 2:

Compute

$$\begin{aligned} t_{k}=\nabla h^{*}\bigl(\nabla h(s_{k})-\kappa _{k}\bigl(\Phi (s_{k})-\Phi (r_{k})\bigr)\bigr), \end{aligned}$$
(3.3)

and \(\kappa _{k+1}\) is defined as

$$\begin{aligned} \kappa _{k+1}= \textstyle\begin{cases} \min \{ \kappa _{k}, \frac {\omega (\rho G_{h}(s_{k},r_{k}) + \frac{1}{\rho}G_{h}(s_{k},t_{k}))}{\langle s_{k}-t_{k}, \Phi (s_{k})-\Phi (r_{k})\rangle} \} & \textit{if } \langle s_{k}-t_{k}, \Phi (s_{k})-\Phi (r_{k}) \rangle \neq 0, \\ \kappa _{k} & \textit{otherwise}. \end{cases}\displaystyle \end{aligned}$$
(3.4)
Step 3:

Compute

$$\begin{aligned} q_{k+1}=\nabla h^{*}\bigl(\psi _{k} \nabla h(v) + (1-\psi _{k}) \nabla h(t_{k})\bigr) \end{aligned}$$
(3.5)

Stopping criterion If \(q_{k+1}=r_{k}\) for some \(k \geq 1\), then stop. Otherwise, set \(k:=k+1\) and return to Step 1.
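For readers who wish to experiment, the following Python sketch instantiates Algorithm 3.2 under the simplifying assumptions \(h(u)=\frac{1}{2}\Vert u\Vert ^{2}\) in \(\mathbb{R}^{n}\) (so \(\nabla h=\nabla h^{*}=I\), \(G_{h}(x,y)=\frac{1}{2}\Vert x-y\Vert ^{2}\), and the inertial rule takes the Euclidean form (3.31) below), with Φ affine and \(\Psi = \partial \Vert \cdot \Vert _{1}\). All problem data, parameter values, and the positivity safeguard in the step-size update are our own illustrative choices, not part of the algorithm's statement.

```python
import numpy as np

def soft_threshold(x, tau):
    # Resolvent R_kappa^Psi for Psi = subdifferential of the l1-norm.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 15))
b = rng.standard_normal(30)
Phi = lambda x: A.T @ (A @ x - b)       # monotone and Lipschitz (a gradient)

# Parameters of Algorithm 3.2 (all values illustrative).
kappa, sigma, rho, omega = 1.0, 0.5, 1.0, 0.5
v = np.zeros(15)                         # Halpern anchor point
q_prev = q = rng.standard_normal(15)

for k in range(1, 300):
    psi = 1.0 / (k + 1)                  # psi_k -> 0 with divergent sum
    pi = 1.0 / (k + 1) ** 2              # pi_k = o(psi_k)
    d = np.linalg.norm(q - q_prev)
    sig = sigma if d == 0 else min(sigma, pi / d)   # inertial weight, cf. (3.31)
    r = q + sig * (q - q_prev)                      # inertial extrapolation
    s = soft_threshold(r - kappa * Phi(r), kappa)   # forward-backward step
    t = s - kappa * (Phi(s) - Phi(r))               # Tseng-type correction
    # Self-adaptive step size (3.4) with G_h(x, y) = 0.5*||x - y||^2; we only
    # shrink kappa when the pairing is positive (a practical safeguard we assume).
    inner = (s - t) @ (Phi(s) - Phi(r))
    if inner > 1e-12:
        num = omega * (0.5 * rho * np.linalg.norm(s - r) ** 2
                       + 0.5 / rho * np.linalg.norm(s - t) ** 2)
        kappa = min(kappa, num / inner)
    q_prev, q = q, psi * v + (1 - psi) * t          # Halpern step (3.5)

# Fixed-point residual of the forward-backward map at the final iterate.
print(np.linalg.norm(q - soft_threshold(q - kappa * Phi(q), kappa)))
```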

Lemma 3.3

Let \(\{q_{k}\}\) be a sequence generated by Algorithm 3.2. Then the sequence \(\{\kappa _{k}\}\) is nonincreasing and

$$\begin{aligned} \lim_{k \to \infty} \kappa _{k}=\kappa \geq \min \biggl\{ \kappa _{1}, \frac{\omega \varphi}{\mathcal{L}}\biggr\} . \end{aligned}$$

Proof

It is obvious from the self-adaptive step size (3.4) that \(\kappa _{k+1} \leq \kappa _{k}\) for all \(k \geq 1\). If \(\langle s_{k}-t_{k}, \Phi (s_{k})-\Phi (r_{k})\rangle =0\), then \(\{\kappa _{k}\}\) is a constant sequence and the conclusion holds. Otherwise, we have

$$\begin{aligned} \frac{\omega (\rho G_{h}(s_{k},r_{k}) + \frac{1}{\rho}G_{h}(s_{k},t_{k}))}{\langle s_{k}-t_{k}, \Phi (s_{k})-\Phi (r_{k})\rangle} & \geq \frac{\frac{\omega \varphi}{2}(\rho \Vert s_{k}-r_{k} \Vert ^{2} + \frac{1}{\rho} \Vert s_{k}-t_{k} \Vert ^{2})}{ \Vert s_{k}-t_{k} \Vert \Vert \Phi (s_{k})-\Phi (r_{k}) \Vert } \\ & \geq \frac{\omega \varphi \Vert s_{k}-r_{k} \Vert \Vert s_{k}-t_{k} \Vert }{\mathcal{L} \Vert s_{k}-t_{k} \Vert \Vert s_{k}-r_{k} \Vert } \\ &=\frac{\omega \varphi}{\mathcal{L}}. \end{aligned}$$

Clearly

$$\begin{aligned} \kappa _{k+1} \geq \min \biggl\{ \kappa _{k}, \frac{\omega \varphi}{\mathcal{L}}\biggr\} . \end{aligned}$$

By induction, we deduce that \(\kappa _{k} \geq \min \{\kappa _{1}, \frac{\omega \varphi}{\mathcal{L}}\}\). Thus \(\lim_{k \rightarrow \infty}\kappa _{k}=\kappa \geq \min \{ \kappa _{1}, \frac{\omega \varphi}{\mathcal{L}}\}\). This completes the proof. □

Lemma 3.4

Given that \(\kappa > 0\), if \(s_{k}=r_{k}=q_{k+1}\) for some \(k >0\), then \(r_{k} \in \Delta \).

Proof

Given that \(\kappa > 0\), if \(s_{k}=r_{k}\), then \(r_{k}=R_{\kappa _{k}}^{\Psi} \nabla h^{*}(\nabla h(r_{k})-\kappa _{k}\Phi (r_{k}))\). Thus, \(r_{k}=(\nabla h + \kappa _{k}\Psi )^{-1}(\nabla h(r_{k})- \kappa _{k}\Phi (r_{k}))\), that is, \(\nabla h(r_{k})-\kappa _{k}\Phi (r_{k}) \in \nabla h(r_{k}) + \kappa _{k}\Psi (r_{k})\), which implies that \(0 \in (\Phi + \Psi )r_{k}\). Hence, \(r_{k} \in (\Phi + \Psi )^{-1}(0)\). Since \(s_{k}=r_{k}\) and ∇h is injective, we obtain from (3.3) that \(t_{k}=r_{k}\). Therefore, we conclude that \(r_{k} \in \Delta:=(\Phi + \Psi )^{-1}(0)\). □

Lemma 3.5

Let \(\{r_{k}\}\) be the sequence generated by Algorithm 3.2. Then, for every \(u \in \Delta \),

$$\begin{aligned} G_{h}(u, t_{k}) \leq G_{h}(u,r_{k})- \biggl(1- \frac{\omega \rho \kappa _{k}}{\kappa _{k+1}}\biggr)G_{h}(s_{k}, r_{k}) -\biggl(1- \frac{\omega \kappa _{k}}{\rho \kappa _{k+1}}\biggr)G_{h}(s_{k},t_{k}). \end{aligned}$$

Proof

Let \(u \in \Delta \). Then, by applying the definition of the Bregman distance, we deduce that

$$\begin{aligned} G_{h}(u,t_{k})={}&G_{h}\bigl(u, \nabla h^{*}\bigl(\nabla h(s_{k})-\kappa _{k}\bigl( \Phi (s_{k})-\Phi (r_{k})\bigr)\bigr)\bigr) \\ ={}&h(u)-h(t_{k})-\bigl\langle u-t_{k}, \nabla h(s_{k})-\kappa _{k}\bigl(\Phi (s_{k})- \Phi (r_{k})\bigr)\bigr\rangle \\ ={}&h(u)-h(t_{k})-\bigl\langle u-t_{k}, \nabla h(s_{k})\bigr\rangle + \kappa _{k} \bigl\langle u-t_{k}, \Phi (s_{k})-\Phi (r_{k})\bigr\rangle \\ ={}&h(u)-h(s_{k})-\bigl\langle u-s_{k}, \nabla h(s_{k})\bigr\rangle + \bigl\langle u-t_{k}, \nabla h(s_{k})\bigr\rangle + h(s_{k}) -h(t_{k}) \\ &{}-\bigl\langle u-t_{k}, \nabla h(s_{k})\bigr\rangle + \kappa _{k} \bigl\langle u-t_{k}, \Phi (s_{k})-\Phi (r_{k})\bigr\rangle \\ ={}&h(u)-h(s_{k})-\bigl\langle u-s_{k}, \nabla h(s_{k})\bigr\rangle -h(t_{k}) + h(s_{k}) + \bigl\langle t_{k}-s_{k}, \nabla h(s_{k})\bigr\rangle \\ &{}+ \kappa _{k}\bigl\langle u-t_{k}, \Phi (s_{k})- \Phi (r_{k})\bigr\rangle \\ ={}&G_{h}(u,s_{k})-G_{h}(t_{k},s_{k})+ \kappa _{k}\bigl\langle u-t_{k}, \Phi (s_{k})-\Phi (r_{k})\bigr\rangle . \end{aligned}$$
(3.6)

From (2.2), we get

$$\begin{aligned} G_{h}(u,s_{k}) =G_{h}(u,r_{k}) -G_{h}(s_{k},r_{k}) + \bigl\langle u-s_{k}, \nabla h(r_{k})-\nabla h(s_{k})\bigr\rangle . \end{aligned}$$
(3.7)

By combining (3.6) and (3.7), we obtain

$$\begin{aligned} G_{h}(u,t_{k}) ={}&G_{h}(u,r_{k}) -G_{h}(s_{k},r_{k}) -G_{h}(t_{k},s_{k}) + \bigl\langle u-s_{k}, \nabla h(r_{k})-\nabla h(s_{k})\bigr\rangle \\ &{}+\kappa _{k} \bigl\langle u-t_{k}, \Phi (s_{k})- \Phi (r_{k})\bigr\rangle \\ ={}&G_{h}(u,r_{k})-G_{h}(s_{k},r_{k})-G_{h}(t_{k},s_{k}) + \bigl\langle u-s_{k}, \nabla h(r_{k})-\nabla h(s_{k})\bigr\rangle \\ &{}+\kappa _{k}\bigl\langle s_{k}-t_{k}, \Phi (s_{k})-\Phi (r_{k})\bigr\rangle - \kappa _{k}\bigl\langle s_{k}-u, \Phi (s_{k})-\Phi (r_{k})\bigr\rangle \\ ={}&G_{h}(u,r_{k})-G_{h}(s_{k},r_{k})-G_{h}(t_{k},s_{k})+ \kappa _{k} \bigl\langle s_{k}-t_{k}, \Phi (s_{k})-\Phi (r_{k})\bigr\rangle \\ &{}-\bigl\langle s_{k}-u, \nabla h(r_{k})-\nabla h(s_{k})-\kappa _{k}\bigl(\Phi (r_{k})- \Phi (s_{k})\bigr)\bigr\rangle . \end{aligned}$$
(3.8)

From the definition of \(s_{k}\), we obtain \(\nabla h(r_{k})-\kappa _{k}\Phi (r_{k}) \in \nabla h(s_{k}) + \kappa _{k} \Psi (s_{k})\). Since Ψ is maximal monotone, there exists \(d_{k} \in \Psi (s_{k})\) such that \(\nabla h(r_{k})-\kappa _{k}\Phi (r_{k})=\nabla h(s_{k})+ \kappa _{k}d_{k}\); thus it follows that

$$\begin{aligned} d_{k}=\frac{1}{\kappa _{k}}\bigl(\nabla h(r_{k})-\nabla h(s_{k})-\kappa _{k} \Phi (r_{k})\bigr). \end{aligned}$$
(3.9)

Using the fact that \(0 \in (\Phi + \Psi )u\) and \(\Phi (s_{k}) + d_{k} \in (\Phi + \Psi )s_{k}\), it follows from Lemma 2.8 that

$$\begin{aligned} \bigl\langle s_{k}-u, \Phi (s_{k}) + d_{k}\bigr\rangle \geq 0. \end{aligned}$$
(3.10)

By substituting (3.9) into (3.10), we get

$$\begin{aligned} \frac{1}{\kappa _{k}}\bigl\langle s_{k}-u, \nabla h(r_{k})- \nabla h(s_{k})- \kappa _{k}\Phi (r_{k}) + \kappa _{k}\Phi (s_{k})\bigr\rangle \geq 0. \end{aligned}$$

That is

$$\begin{aligned} \bigl\langle s_{k}-u, \nabla h(r_{k})-\nabla h(s_{k})-\kappa _{k}\bigl(\Phi (r_{k})- \Phi (s_{k})\bigr)\bigr\rangle \geq 0. \end{aligned}$$
(3.11)

By combining (3.8) and (3.11), we get

$$\begin{aligned} G_{h}(u,t_{k}) \leq{}& G_{h}(u,r_{k})-G_{h}(s_{k}, r_{k})-G_{h}(t_{k},s_{k}) \\ &{}+ \kappa _{k}\bigl\langle s_{k}-t_{k}, \Phi (s_{k})-\Phi (r_{k})\bigr\rangle \\ \leq{}& G_{h}(u,r_{k})-G_{h}(s_{k},r_{k})-G_{h}(t_{k},s_{k}) \\ &{}+ \frac{\omega \kappa _{k}}{\kappa _{k+1}} \biggl(\rho G_{h}(s_{k},r_{k}) + \frac{1}{\rho}G_{h}(s_{k},t_{k}) \biggr) \\ ={}&G_{h}(u,r_{k})- \biggl(1- \frac{\omega \rho \kappa _{k}}{\kappa _{k+1}} \biggr)G_{h}(s_{k},r_{k})- \biggl(1- \frac{\omega \kappa _{k}}{\rho \kappa _{k+1}} \biggr)G_{h}(s_{k},t_{k}). \end{aligned}$$
(3.12)

It is obvious from Lemma 3.3 that

$$\begin{aligned} \lim_{k \rightarrow \infty} \biggl(1- \frac{\omega \rho \kappa _{k}}{\kappa _{k+1}} \biggr)=1-\omega \rho >0, \end{aligned}$$

and

$$\begin{aligned} \lim_{k \rightarrow \infty} \biggl(1- \frac{\omega \kappa _{k}}{\rho \kappa _{k+1}} \biggr)=1- \frac{\omega}{\rho}>0. \end{aligned}$$

Hence, there exists a positive integer \(N_{2}>0\) such that

$$\begin{aligned} 1-\frac{\omega \rho \kappa _{k}}{\kappa _{k+1}}>0, 1- \frac{\omega \kappa _{k}}{\rho \kappa _{k+1}}>0, \quad\forall k \geq N_{2}. \end{aligned}$$

Therefore, we conclude from (3.12) that

$$\begin{aligned} G_{h}(u,t_{k}) \leq G_{h}(u, r_{k}),\quad \forall k \geq N_{2}. \end{aligned}$$
(3.13)

 □

Lemma 3.6

Suppose \(\{q_{k}\}\) is the sequence generated by Algorithm 3.2; then the sequences \(\{q_{k}\}, \{r_{k}\}, \{s_{k}\}\), and \(\{t_{k}\}\) are all bounded.

Proof

Let \(u \in \Delta \) and \(g_{k} =\nabla h(q_{k})+\sigma _{k}(\nabla h(q_{k})-\nabla h(q_{k-1}))\). Then, we can write \(r_{k}\) as \(r_{k}=\nabla h^{*}(g_{k})\). Thus

$$\begin{aligned} G_{h}(u,r_{k})={}&G_{h}\bigl(u,\nabla h^{*}(g_{k})\bigr) \\ ={}&h(u)-\langle u,g_{k}\rangle + h^{*}(g_{k}) \\ ={}&h(u)-\bigl\langle u,\nabla h(q_{k})+\sigma _{k}\bigl( \nabla h(q_{k})-\nabla h(q_{k-1})\bigr) \bigr\rangle \\ &{}+h^{*}\bigl(\nabla h(q_{k})+\sigma _{k}\bigl(\nabla h(q_{k})-\nabla h(q_{k-1})\bigr)\bigr) \\ \le{}&h(u)-\bigl\langle u,\nabla h(q_{k})\bigr\rangle + (1+\sigma _{k})h^{*}\bigl( \nabla h(q_{k})\bigr)-\sigma _{k} h^{*}\bigl(\nabla h(q_{k-1})\bigr) \\ &{}+\sigma _{k}(1+ \sigma _{k})\phi _{r}\bigl( \bigl\Vert \nabla h(q_{k})-\nabla h(q_{k-1}) \bigr\Vert \bigr) \\ &{}-\bigl\langle u,\sigma _{k}\bigl(\nabla h(q_{k})-\nabla h(q_{k-1})\bigr)\bigr\rangle \\ ={}&h(u)-\bigl\langle u,\nabla h(q_{k})\bigr\rangle +h^{*} \bigl(\nabla h(q_{k})\bigr) + \sigma _{k}\bigl(h^{*} \bigl(\nabla h(q_{k})\bigr)-h^{*}\bigl(\nabla h(q_{k-1})\bigr)\bigr) \\ &{}-\bigl\langle u, \sigma _{k}\bigl(\nabla h(q_{k})-\nabla h(q_{k-1})\bigr) \bigr\rangle \\ &{}+ \sigma _{k}(1+\sigma _{k})\phi _{r}\bigl( \bigl\Vert \nabla h(q_{k})-\nabla h(q_{k-1}) \bigr\Vert \bigr) \\ ={}&G_{h}(u,q_{k})+\sigma _{k}\bigl(h^{*} \bigl(\nabla h(q_{k})\bigr)-h^{*}\bigl(\nabla h(q_{k-1})\bigr)\bigr)- \bigl\langle u,\sigma _{k}\bigl(\nabla h(q_{k})-\nabla h(q_{k-1})\bigr) \bigr\rangle \\ &{}+ \sigma _{k}(1+\sigma _{k})\phi _{r}\bigl( \bigl\Vert \nabla h(q_{k})-\nabla h(q_{k-1}) \bigr\Vert \bigr) \\ \le{}& G_{h}(u,q_{k})+\sigma _{k}\bigl\langle q_{k}, \nabla h(q_{k})-\nabla h(q_{k-1}) \bigr\rangle -\bigl\langle u,\sigma _{k}\bigl(\nabla h(q_{k})-\nabla h(q_{k-1})\bigr) \bigr\rangle \\ &{}+ \sigma _{k}(1+\sigma _{k})\phi _{r}\bigl( \bigl\Vert \nabla h(q_{k})-\nabla h(q_{k-1}) \bigr\Vert \bigr) \\ ={}&G_{h}(u,q_{k})+\sigma _{k}\bigl\langle u-q_{k},\nabla h(q_{k-1})-\nabla h(q_{k}) \bigr\rangle \\ &{}+ \sigma _{k}(1+\sigma _{k})\phi _{r}\bigl( \bigl\Vert \nabla h(q_{k})- \nabla h(q_{k-1}) \bigr\Vert \bigr) \\ ={}&G_{h}(u,q_{k})+\sigma _{k}\bigl(G_{h}(u,q_{k})-G_{h}(u,q_{k-1}) \bigr)+\sigma _{k}G_{h}(q_{k},q_{k-1}) \\ &{}+ \sigma _{k}(1+\sigma _{k})\phi _{r}\bigl( \bigl\Vert \nabla h(q_{k})-\nabla h(q_{k-1}) \bigr\Vert \bigr). \end{aligned}$$
(3.14)

We deduce from (3.5), (3.13), and (3.14) that

$$\begin{aligned} G_{h}(u, q_{k+1}) ={}&G_{h}\bigl(u, \nabla h^{*}\bigl(\psi _{k} \nabla h (v) + (1- \psi _{k}) \nabla h(t_{k})\bigr)\bigr) \\ \leq{}& \psi _{k} G_{h}(u,v) + (1-\psi _{k})G_{h}(u,t_{k}) \\ \leq{}& \psi _{k} G_{h}(u,v) + (1-\psi _{k})G_{h}(u,r_{k}) \\ \leq{}& \psi _{k} G_{h}(u,v) + (1-\psi _{k}) \bigl(G_{h}(u,q_{k}) \\ &{}+ \sigma _{k}\bigl(G_{h}(u,q_{k})-G_{h}(u,q_{k-1}) \bigr)+ \sigma _{k}G_{h}(q_{k},q_{k-1}) \\ &{}+\sigma _{k}(1+\sigma _{k})\phi _{r}\bigl( \bigl\Vert \nabla h(q_{k})-\nabla h(q_{k-1}) \bigr\Vert \bigr) \bigr) \\ \leq{}& \max \bigl\{ G_{h}(u,v), G_{h}(u,q_{k})+ \sigma _{k}\bigl(G_{h}(u,q_{k})-G_{h}(u,q_{k-1}) \bigr)+ \sigma _{k}G_{h}(q_{k},q_{k-1}) \\ &{}+\sigma _{k}(1+\sigma _{k})\phi _{r}\bigl( \bigl\Vert \nabla h(q_{k})-\nabla h(q_{k-1}) \bigr\Vert \bigr) \bigr\} . \end{aligned}$$

Suppose \(G_{h}(u,v)\) is the maximum, then the conclusion follows trivially. Otherwise, there exists \(k_{0} \in \mathbb{N}\) such that for all \(k \ge k_{0}\), we have

$$\begin{aligned} G_{h}(u,q_{k+1}) \le{}& G_{h}(u,q_{k})+ \sigma _{k}\bigl(G_{h}(u,q_{k})-G_{h}(u,q_{k-1}) \bigr)+ \sigma _{k}G_{h}(q_{k},q_{k-1})\\ &{}+ \sigma _{k}(1+\sigma _{k})\phi _{r}\bigl( \bigl\Vert \nabla h(q_{k})-\nabla h(q_{k-1}) \bigr\Vert \bigr). \end{aligned}$$

Then, by Lemma 2.9, \(\{G_{h}(u,q_{k})\}\) is convergent and hence bounded. It follows from Lemma 2.5 that \(\{q_{k}\}\) is bounded. Consequently, the sequences \(\{r_{k}\}, \{s_{k}\}\), and \(\{t_{k}\}\) are also bounded. □

Theorem 3.7

Assume that \(\{\psi _{k}\}\) is a sequence in \((0,1)\), ∇h is weakly sequentially continuous on \(\mathcal{X}\), and Assumption 3.1 holds. Then the sequence \(\{q_{k}\}\) generated by Algorithm 3.2 converges strongly to an element of Δ.

Proof

Let \(u =\operatorname{Proj}_{\Delta}^{h}(v) \in \Delta \). Then we have from Lemma 2.3, (3.14), and (3.12) that

$$\begin{aligned} G_{h}(u, q_{k+1}) ={}&V_{h}\bigl(u, \psi _{k} \nabla h(v) + (1-\psi _{k}) \nabla h(t_{k})\bigr) \\ \leq{}& V_{h}\bigl(u, \psi _{k} \nabla h(v) + (1-\psi _{k}) \nabla h(t_{k})- \psi _{k}\bigl(\nabla h(v)-\nabla h(u)\bigr)\bigr) \\ &{}+ \psi _{k}\bigl\langle \nabla h(v)-\nabla h(u), q_{k+1}-u \bigr\rangle \\ ={}&V_{h}\bigl(u, \psi _{k} \nabla h(u) + (1-\psi _{k})\nabla h(t_{k})\bigr) + \psi _{k} \bigl\langle \nabla h(v)-\nabla h(u), q_{k+1}-u\bigr\rangle \\ \leq{}& \psi _{k} G_{h}(u,u) + (1-\psi _{k})G_{h}(u,t_{k}) + \psi _{k} \bigl\langle \nabla h(v)-\nabla h(u), q_{k+1}-u \bigr\rangle \\ ={}&(1-\psi _{k})G_{h}(u,t_{k}) + \psi _{k} \bigl\langle \nabla h(v)-\nabla h(u), q_{k+1}-u\bigr\rangle \\ \leq{}& (1-\psi _{k})G_{h}(u,r_{k})-(1-\psi _{k}) \biggl(1- \frac{\omega \rho \kappa _{k}}{\kappa _{k+1}} \biggr)G_{h}(s_{k},r_{k}) \\ &{}-(1-\psi _{k}) \biggl(1-\frac{\omega \kappa _{k}}{\rho \kappa _{k+1}} \biggr)G_{h}(s_{k},t_{k})+ \psi _{k} \bigl\langle \nabla h(v)-\nabla h(u), q_{k+1}-u \bigr\rangle \\ \leq{}& (1-\psi _{k})G_{h}(u,q_{k}) -(1-\psi _{k}) \biggl(1- \frac{\omega \rho \kappa _{k}}{\kappa _{k+1}} \biggr)G_{h}(s_{k},r_{k}) \\ &{}-(1- \psi _{k}) \biggl(1-\frac{\omega \kappa _{k}}{\rho \kappa _{k+1}} \biggr)G_{h}(s_{k},t_{k}) \\ &{}+\psi _{k} \frac{\sigma _{k}}{\psi _{k}} \bigl[\bigl(G_{h}(u,q_{k})-G_{h}(u,q_{k-1}) \bigr)+G_{h}(q_{k},q_{k-1}) \\ &{}+(1+ \sigma _{k})\phi _{r}\bigl( \bigl\Vert \nabla h(q_{k})-\nabla h(q_{k-1}) \bigr\Vert \bigr) \bigr] \\ &{}+ \psi _{k} \bigl\langle \nabla h(v)-\nabla h(u), q_{k+1}-u \bigr\rangle \end{aligned}$$
(3.15)
$$\begin{aligned} \leq{}& (1-\psi _{k})G_{h}(u,q_{k}) +\psi _{k} \bigl\langle \nabla h(v)- \nabla h(u), q_{k+1}-u\bigr\rangle \\ &{}+\psi _{k} \frac{\sigma _{k}}{\psi _{k}} \bigl[\bigl(G_{h}(u,q_{k})-G_{h}(u,q_{k-1}) \bigr)+G_{h}(q_{k},q_{k-1}) \\ &{}+(1+ \sigma _{k})\phi _{r}\bigl( \bigl\Vert \nabla h(q_{k})-\nabla h(q_{k-1}) \bigr\Vert \bigr) \bigr], \end{aligned}$$
(3.16)

where \(Z_{k}=\langle \nabla h(v)-\nabla h(u), q_{k+1}-u\rangle + \frac{\sigma _{k}}{\psi _{k}} [(G_{h}(u,q_{k})-G_{h}(u,q_{k-1}))+G_{h}(q_{k},q_{k-1})+(1+ \sigma _{k})\phi _{r}( \Vert \nabla h(q_{k})-\nabla h(q_{k-1}) \Vert ) ]\).

Thus

$$\begin{aligned} G_{h}(u, q_{k+1}) \leq (1-\psi _{k})G_{h}(u,q_{k}) + \psi _{k} Z_{k}. \end{aligned}$$
(3.17)

Let \(a_{k}=G_{h}(u, q_{k})\); then (3.17) becomes

$$\begin{aligned} a_{k+1} \leq (1-\psi _{k})a_{k} + \psi _{k}Z_{k}. \end{aligned}$$
(3.18)

To establish the strong convergence result via Lemma 2.10, we claim that \(\limsup_{l \rightarrow \infty}Z_{k_{l}}\leq 0\) whenever there exists a subsequence \(\{a_{k_{l}}\}\) of \(\{a_{k}\}\) satisfying

$$\begin{aligned} \liminf_{l \to \infty}(a_{k_{l}+1}-a_{k_{l}}) \geq 0. \end{aligned}$$

Indeed, assume such a subsequence exists. In view of (3.15), we obtain

$$\begin{aligned} & \limsup_{l \rightarrow \infty} \biggl[(1-\psi _{k_{l}}) \biggl(1- \frac{\omega \rho \kappa _{k_{l}}}{\kappa _{k_{l}+1}} \biggr)G_{h}(s_{k_{l}}, r_{k_{l}} )+ (1-\psi _{k_{l}}) \biggl(1- \frac{\omega \kappa _{k_{l}}}{\rho \kappa _{k_{l}+1}} \biggr)G_{h}(s_{k_{l}}, t_{k_{l}}) \biggr] \\ &\quad\leq \limsup_{l \rightarrow \infty} \bigl[(1-\psi _{k_{l}})G_{h}(u, q_{k_{l}}) -G_{h}(u, q_{k_{l}+1}) + M_{3} \psi _{k_{l}} \bigr] \\ &\quad=-\liminf_{l \to \infty}(a_{k_{l}+1}-a_{k_{l}}) \\ &\quad \leq 0, \end{aligned}$$
(3.19)

where \(M_{3}=\sup_{l \in \mathbb{N}}Z_{k_{l}}\) and thus

$$\begin{aligned} \lim_{l \to \infty}G_{h}(s_{k_{l}}, r_{k_{l}})=0=\lim_{l \to \infty}G_{h}(s_{k_{l}}, t_{k_{l}}). \end{aligned}$$
(3.20)

By Lemma 2.4, we get

$$\begin{aligned} \lim_{l \to \infty} \Vert s_{k_{l}}-r_{k_{l}} \Vert =0=\lim_{l \to \infty} \Vert s_{k_{l}}-t_{k_{l}} \Vert . \end{aligned}$$
(3.21)

Observe from step 1 of (3.2) that

$$\begin{aligned} \bigl\Vert \nabla h(r_{k_{l}})-\nabla h(q_{k_{l}}) \bigr\Vert &=\sigma _{k_{l}} \bigl\Vert \nabla h(q_{k_{l}})- \nabla h(q_{k_{l}-1}) \bigr\Vert \\ &\leq \psi _{k_{l}} \frac{\sigma _{k_{l}}}{\psi _{k_{l}}} \bigl\Vert \nabla h(q_{k_{l}})- \nabla h(q_{k_{l}-1}) \bigr\Vert \to 0, \quad\text{as } l \to \infty. \end{aligned}$$
(3.22)

Since \(\nabla h^{*}\) is continuous on bounded subsets of \(\mathcal{X}^{*}\), we obtain

$$\begin{aligned} \lim_{l \to \infty} \Vert r_{k_{l}}-q_{k_{l}} \Vert =0. \end{aligned}$$
(3.23)

It is obvious from (3.21) that

$$\begin{aligned} \lim_{l \to \infty} \Vert t_{k_{l}}-r_{k_{l}} \Vert =0. \end{aligned}$$
(3.24)

Using (3.21), (3.23), and (3.24), we obtain that

$$\begin{aligned} \lim_{l \to \infty} \Vert s_{k_{l}}-q_{k_{l}} \Vert =0=\lim_{l \to \infty} \Vert t_{k_{l}}-q_{k_{l}} \Vert . \end{aligned}$$
(3.25)

We deduce from (3.5) that

$$\begin{aligned} \bigl\Vert \nabla h(q_{k_{l}+1})-\nabla h(t_{k_{l}}) \bigr\Vert \leq \psi _{k_{l}} \bigl\Vert \nabla h(v)-\nabla h(t_{k_{l}}) \bigr\Vert \to 0 \quad\text{as } l \to \infty. \end{aligned}$$
(3.26)

Using the fact that \(\nabla h^{*}\) is continuous on bounded subsets of \(\mathcal{X}^{*}\), we get

$$\begin{aligned} \lim_{l \to \infty} \Vert q_{k_{l}+1}-t_{k_{l}} \Vert =0, \end{aligned}$$
(3.27)

which implies from (3.25) that

$$\begin{aligned} \lim_{l \to \infty} \Vert q_{k_{l}+1}-q_{k_{l}} \Vert =0. \end{aligned}$$
(3.28)

We next show that \(\limsup_{l \rightarrow \infty}Z_{k_{l}}\leq 0\). Clearly, it suffices to show that

$$ \limsup_{l \rightarrow \infty}\bigl\langle \nabla h(v)-\nabla h(u), q_{k_{l}+1}-u\bigr\rangle \le 0. $$

Let \(\{q_{k_{l_{j}}}\}\) be a subsequence of \(\{q_{k_{l}}\}\) such that

$$ \lim_{j \to \infty}\bigl\langle \nabla h(v)-\nabla h(u), q_{k_{l_{j}}+1}-u \bigr\rangle = \limsup_{l \to \infty}\bigl\langle \nabla h(v)-\nabla h(u), q_{k_{l}+1}-u\bigr\rangle . $$

The boundedness of the sequence \(\{q_{k_{l}}\}\) guarantees the existence of a subsequence \(\{q_{k_{l_{j}}}\}\) such that \(q_{k_{l_{j}}} \rightharpoonup x^{*} \in \mathcal{X}\). In view of (3.25), the subsequences \(\{s_{k_{l_{j}}}\}\) and \(\{t_{k_{l_{j}}}\}\) also converge weakly to \(x^{*}\). Now, to establish that \(x^{*} \in (\Phi + \Psi )^{-1}(0)\), let \((\mu _{1}, \mu _{2}) \in \operatorname{Gra}(\Phi + \Psi )\); then \(\mu _{2}-\Phi \mu _{1} \in \Psi \mu _{1}\). From the definition of \(s_{k_{l_{j}}}\), we observe that

$$\begin{aligned} \nabla h(r_{k_{l_{j}}})-\kappa _{k_{l_{j}}}\Phi r_{k_{l_{j}}} \in \nabla h(s_{k_{l_{j}}}) + \kappa _{k_{l_{j}}}\Psi s_{k_{l_{j}}}, \end{aligned}$$

which implies

$$\begin{aligned} \frac{1}{\kappa _{k_{l_{j}}}}\bigl(\nabla h(r_{k_{l_{j}}})-\nabla h(s_{k_{l_{j}}})- \kappa _{k_{l_{j}}}\Phi (r_{k_{l_{j}}})\bigr)\in \Psi (s_{k_{l_{j}}}). \end{aligned}$$

Applying the maximal monotonicity of Ψ, we obtain

$$\begin{aligned} \biggl\langle \mu _{1}-s_{k_{l_{j}}}, \mu _{2}-\Phi \mu _{1} - \frac{1}{\kappa _{k_{l_{j}}}}\bigl(\nabla h(r_{k_{l_{j}}})-\nabla h(s_{k_{l_{j}}})- \kappa _{k_{l_{j}}}\Phi (r_{k_{l_{j}}})\bigr)\biggr\rangle \geq 0. \end{aligned}$$

In addition, by the monotonicity of Φ, we obtain

$$\begin{aligned} \langle \mu _{1}-s_{k_{l_{j}}}, \mu _{2} \rangle \geq{}& \biggl\langle \mu _{1}-s_{k_{l_{j}}}, \Phi \mu _{1} + \frac{1}{\kappa _{k_{l_{j}}}}\bigl(\nabla h(r_{k_{l_{j}}})- \nabla h(s_{k_{l_{j}}})-\kappa _{k_{l_{j}}}\Phi (r_{k_{l_{j}}})\bigr)\biggr\rangle \\ ={}&\bigl\langle \mu _{1}-s_{k_{l_{j}}}, \Phi \mu _{1}- \Phi (r_{k_{l_{j}}}) \bigr\rangle + \frac{1}{\kappa _{k_{l_{j}}}}\bigl\langle \mu _{1}-s_{k_{l_{j}}}, \nabla h(r_{k_{l_{j}}})-\nabla h(s_{k_{l_{j}}})\bigr\rangle \\ ={}&\bigl\langle \mu _{1}-s_{k_{l_{j}}}, \Phi (\mu _{1})-\Phi (s_{k_{l_{j}}}) \bigr\rangle + \bigl\langle \mu _{1}-s_{k_{l_{j}}}, \Phi (s_{k_{l_{j}}})-\Phi (r_{k_{l_{j}}}) \bigr\rangle \\ &{}+ \frac{1}{\kappa _{k_{l_{j}}}}\bigl\langle \mu _{1}-s_{k_{l_{j}}}, \nabla h(r_{k_{l_{j}}})- \nabla h(s_{k_{l_{j}}})\bigr\rangle \\ \geq{}& \bigl\langle \mu _{1}-s_{k_{l_{j}}}, \Phi (s_{k_{l_{j}}})-\Phi (r_{k_{l_{j}}}) \bigr\rangle + \frac{1}{\kappa _{k_{l_{j}}}} \bigl\langle \mu _{1}-s_{k_{l_{j}}}, \nabla h(r_{k_{l_{j}}})- \nabla h (s_{k_{l_{j}}})\bigr\rangle . \end{aligned}$$
(3.29)

From the fact that Φ is Lipschitz continuous and \(s_{k_{l_{j}}} \rightharpoonup x^{*}\), it follows from (3.21) that

$$\begin{aligned} \bigl\langle \mu _{1}-x^{*}, \mu _{2}\bigr\rangle \geq 0. \end{aligned}$$

From the maximal monotonicity of \(\Phi + \Psi \), we obtain that \(0 \in (\Phi + \Psi )x^{*}\); thus \(x^{*} \in \Delta \). It follows from Lemma 2.7 and (3.28) that

$$\begin{aligned} & \limsup_{l \to \infty} \bigl\langle \nabla h(v)-\nabla h(u), q_{k_{l}+1}-u \bigr\rangle \\ & \quad\leq \limsup_{l \to \infty} \bigl\langle \nabla h(v)- \nabla h(u), q_{k_{l}+1}-q_{k_{l}}\bigr\rangle + \limsup_{l \to \infty} \bigl\langle \nabla h(v)-\nabla h(u), q_{k_{l}}-u\bigr\rangle \\ &\quad= \lim_{j \to \infty}\bigl\langle \nabla h(v)-\nabla h(u), q_{k_{l_{j}}}-u \bigr\rangle \\ &\quad=\bigl\langle \nabla h(v)-\nabla h(u), x^{*}-u\bigr\rangle \\ & \quad\leq 0. \end{aligned}$$
(3.30)

By applying (3.30) and Lemma 2.10 to (3.18), we obtain that \(\lim_{k \to \infty} G_{h}(u, q_{k})=0\); thus, by Lemma 2.4, \(\lim_{k \to \infty}\|q_{k}-u\|=0\). Therefore, we conclude that \(\{q_{k}\}\) converges strongly to \(u \in \Delta \), where \(u=\operatorname{Proj}_{\Delta}^{h}(v)\). □

If \(\mathcal{X}\) is a 2-uniformly convex and uniformly smooth Banach space and \(h(u)=\frac{1}{2}\|u\|^{2}\), then Algorithm 3.2 reduces to the following:

Algorithm 3.8

MIP and its convergence analysis. Initialization: Given \(\kappa _{1} > 0, \sigma >0, \rho >0\), and \(\omega \in (0,1)\). Let \(v, q_{0}, q_{1} \in \mathcal{X}\) and let \(\{\psi _{k}\}\) be a sequence in (0,1) such that \(\lim_{k \rightarrow \infty}\psi _{k}=0\) and \(\sum_{k=1}^{\infty}\psi _{k}=\infty \).

Step 1:

Given \(q_{k-1}, q_{k}\), choose \(\sigma _{k}\) such that \(\sigma _{k} \in [0,\overline{\sigma _{k}}]\), where

$$\begin{aligned} \overline{\sigma _{k}}= \textstyle\begin{cases} \min \{\sigma, \frac {\pi _{k}}{ \Vert q_{k}- q_{k-1} \Vert } \} & \textit{if } q_{k} \neq q_{k-1}, \\ \sigma & \textit{otherwise}, \end{cases}\displaystyle \end{aligned}$$
(3.31)

and \(\{\pi _{k}\}\) is a sequence of nonnegative numbers such that \(\pi _{k}=o(\psi _{k})\), that is, \(\lim_{k \to \infty} \frac{\pi _{k}}{\psi _{k}}=0\).

Compute

$$\begin{aligned} \textstyle\begin{cases} r_{k}=J^{-1}(J(q_{k}) + \sigma _{k}(J(q_{k})-J (q_{k-1}))) \\ s_{k}=R_{\kappa _{k}}^{\Psi} J^{-1}(J(r_{k})-\kappa _{k}\Phi (r_{k})) \end{cases}\displaystyle \end{aligned}$$
(3.32)
Step 2:

Compute

$$\begin{aligned} t_{k}=J^{-1}\bigl(J(s_{k})-\kappa _{k}\bigl( \Phi (s_{k})-\Phi (r_{k})\bigr)\bigr), \end{aligned}$$
(3.33)

and \(\kappa _{k+1}\) is defined as

$$\begin{aligned} \kappa _{k+1}= \textstyle\begin{cases} \min \{ \kappa _{k}, \frac {\omega (\rho \phi (s_{k},r_{k})+ \frac{1}{\rho}\phi (s_{k},t_{k}))}{\langle s_{k}-t_{k}, \Phi (s_{k})-\Phi (r_{k})\rangle} \} & \textit{if } \langle s_{k}-t_{k}, \Phi (s_{k})- \Phi (r_{k})\rangle \neq 0, \\ \kappa _{k} & \textit{otherwise}. \end{cases}\displaystyle \end{aligned}$$
(3.34)
Step 3:

Compute

$$\begin{aligned} q_{k+1}=J^{-1}\bigl(\psi _{k} J(v) + (1-\psi _{k}) J(t_{k})\bigr) \end{aligned}$$
(3.35)

Stopping criterion If \(q_{k+1}=r_{k}\) for some \(k \geq 1\), then stop. Otherwise, set \(k:=k+1\) and return to Step 1.

4 Numerical example

In this section, we provide numerical experiments to illustrate the convergence of the proposed algorithm and compare it with other algorithms in the literature.

Example 4.1

This example is taken from [23]. Let \(\mathcal{X}=\mathbb{R}^{m}\) \((m=2k)\), and \(K:=\{x=(x_{1},x_{2},\ldots,x_{m}) \in \mathbb{R}^{m}: x_{i} \ge 0, \sum_{i =1}^{m}x_{i}=1 \}\). Let \(h: K \to \mathbb{R}\) be defined by \(h(x)=\sum_{i =1}^{m}x_{i}\ln x_{i}\); then h satisfies the conditions of Theorem 3.7 and is strongly convex with \(\sigma =1\) with respect to the \(\ell _{1}\)-norm on K. It follows that \(\nabla h(x)=(1+\ln x_{1},1+\ln x_{2}, \ldots, 1+\ln x_{m})\) and \(\nabla h^{*}(y)=(e^{y_{1}-1},e^{y_{2}-1},\ldots,e^{y_{m}-1})\). Now, define the mapping \(\Phi: \mathcal{X} \to \mathcal{X^{*}}\) by \(\Phi (x)=(2x_{1}+1,0,2x_{3}+1,0, \ldots,0, 2x_{2k-1}+1,0)\) and \(\Psi: \mathcal{X} \to 2^{\mathcal{X}^{*}}\) by \(\Psi (x)=N_{K}(x)\), where \(N_{K}\) is the normal cone given by \(N_{K}(x)=\{p \in \mathbb{R}^{m}: \langle p,y-x\rangle \le 0, \forall y \in K \}\). Then the mapping Φ is monotone and Lipschitz continuous with constant 2, and the mapping Ψ is maximal monotone. Thus, \(R_{\kappa _{k}}^{\partial i_{K}}(x)=\operatorname{Proj}_{K}^{h}(x)\), where \(i_{K}\) denotes the indicator function of K. From [15, Remark 4], the Bregman projection onto K is given by

$$ \operatorname{Proj}_{K}^{h}(x)= \biggl( \frac{x_{1}e^{a_{1}}}{\sum_{i=1}^{m}x_{i}e^{a_{i}}}, \frac{x_{2}e^{a_{2}}}{\sum_{i=1}^{m}x_{i}e^{a_{i}}},\ldots, \frac{x_{m} e^{a_{m}}}{\sum_{i=1}^{m}x_{i}e^{a_{i}}} \biggr),\quad a \in \mathbb{R}^{m}, x \in \operatorname{int}(K). $$

For this example, we choose \(\pi _{k}=\frac{1}{k^{1.2}}\), \(\psi _{k}=\frac{1}{k+1}\), \(\mu =0.5\), \(\rho =0.4\), \(\sigma =0.7\), and \(\kappa _{1}=1.7\). The starting points \(x_{0}\) and \(x_{1}\) are chosen randomly in \(\mathbb{R}^{m}\). We compare our Algorithm 3.2 with Algorithm 2 of Sunthrayuth et al. [23]. The results of this experiment are given in Fig. 1 for \(m=10,20,40,50\).
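A Python sketch of the ingredients of this example: the mirror maps ∇h and \(\nabla h^{*}\), the operator Φ, and the Bregman (KL) projection onto K, which for a positive vector reduces to plain normalization, consistently with the displayed formula (up to the sign convention for a). The step size κ and the starting point below are assumptions for illustration.

```python
import numpy as np

grad_h      = lambda x: 1.0 + np.log(x)   # grad of h(x) = sum x_i ln x_i
grad_h_star = lambda y: np.exp(y - 1.0)   # its inverse, componentwise

def proj_simplex_kl(y):
    # Bregman (KL) projection of a positive vector onto the simplex K.
    return y / np.sum(y)

def Phi(x):
    # Phi(x) = (2x_1 + 1, 0, 2x_3 + 1, 0, ...): acts on odd coordinates only.
    out = np.zeros_like(x)
    out[0::2] = 2.0 * x[0::2] + 1.0
    return out

kappa = 0.5                                # assumed step size
x = np.full(10, 0.1)                       # a point of K with m = 10
# One forward-backward step of Algorithm 3.2 on the simplex:
s = proj_simplex_kl(grad_h_star(grad_h(x) - kappa * Phi(x)))
print(s, np.isclose(s.sum(), 1.0))         # the iterate stays in K
```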

Figure 1. Top left: \(m=10\); Top right: \(m=20\); Bottom left: \(m=40\); Bottom right: \(m=50\)

The next example is given in \(\ell _{p}\) spaces \((1 \le p < \infty )\) with \(p\ne 2\). It is known that \(\ell _{p}^{*}\) is isomorphic to \(\ell _{q}\), where \(\frac{1}{p}+\frac{1}{q}=1\), and that \(\ell _{p}\) is a reflexive Banach space. In this case, we set \(h(x)=\frac{1}{2}\|x\|^{2}\), so that \(\nabla h(x)=x=\nabla h^{*}(x)\).

Example 4.2

Let \(\mathcal{X}=\ell _{3}(\mathbb{R})\) be the Banach space \(\ell _{3}(\mathbb{R})= \{x=(x_{1},x_{2},x_{3}, \ldots ), x_{i} \in \mathbb{R}: \sum_{i =1}^{\infty}|x_{i}|^{3} < \infty \}\), equipped with the norm \(\|x \|_{\ell _{3}}= (\sum_{i =1}^{\infty}|x_{i}|^{3} )^{\frac{1}{3}}\) for all \(x=(x_{1},x_{2},x_{3}, \ldots ) \in \ell _{3}\). Let \(\Phi: \ell _{3} \to \ell _{3}\) be given by \(\Phi (x)=3x+(1,1,1,0,0,0,\ldots )\). It is easy to see that Φ is monotone. Also, define the mapping \(\Psi: \ell _{3} \to \ell _{3}\) by \(\Psi (x)=7x\). By direct calculation, we obtain for \(\kappa _{k} > 0\) that

$$\begin{aligned} s_{k}&=(\nabla h+ \kappa _{k} \Psi )^{-1}\bigl(\nabla h(r_{k})-\kappa _{k} \Phi (r_{k})\bigr) \\ &=\frac{1-3\kappa _{k}}{1+7\kappa _{k}}r_{k}- \frac{\kappa _{k}}{1+7\kappa _{k}}(1,1,1,0,0,0,\ldots ). \end{aligned}$$
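The closed form above is easy to verify numerically. The hedged sketch below, with a finite truncation standing in for \((1,1,1,0,0,0,\ldots )\) and an arbitrary \(\kappa > 0\), checks that the direct resolvent computation and the displayed formula agree.

```python
import numpy as np

kappa = 0.7                                  # any kappa > 0 (assumed)
r = np.array([1.0, -2.0, 0.5, 3.0])          # a finite truncation of r_k
e = np.array([1.0, 1.0, 1.0, 0.0])           # truncation of (1,1,1,0,0,0,...)
Phi = lambda x: 3.0 * x + e                  # Phi(x) = 3x + (1,1,1,0,...)

# Resolvent step s = (I + kappa*Psi)^{-1}(r - kappa*Phi(r)) with Psi(x) = 7x:
s_direct = (r - kappa * Phi(r)) / (1.0 + 7.0 * kappa)
s_formula = ((1.0 - 3.0 * kappa) * r - kappa * e) / (1.0 + 7.0 * kappa)
print(np.allclose(s_direct, s_formula))      # True
```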

We choose the parameters as in the previous example and compare with Algorithm 2 of Sunthrayuth et al. [23]. The results are reported in Fig. 2, which shows the competitive advantage of our method in terms of the number of iterations required for convergence. The stopping criterion in both examples is \(\|x_{n+1}-x_{n}\| \le \epsilon \) with \(\epsilon =10^{-4}\). The starting points are chosen as follows:

  1. (Case 1)

    \(x_{0}=[1,1,1,0,0,0,\ldots ]\) and \(x_{1}=[2,2,2,1,1,1,\ldots ]\),

  2. (Case 2)

    \(x_{0}=[1,1,0,0,\ldots ]\) and \(x_{1}=[3,3,0,0,0,0,\ldots ]\).

Figure 2. Left: Case 1; Right: Case 2

5 Conclusions

In this article, we proposed a new self-adaptive step size and an inertial Tseng-type method for solving the monotone inclusion problem using the Bregman distance approach in reflexive Banach spaces. Using an inertial Halpern method, we proved that the proposed method converges strongly to an element of the solution set. Lastly, our experiments showed that the new step size makes the proposed algorithm more efficient than the method of [23].

Availability of data and materials

Not applicable.

References

  1. Abass, H.A., Narain, O.K., Onifade, O.M.: Inertial extrapolation method for solving system of monotone variational inclusion and fixed point problems using Bregman distance approach. Nonlinear Funct. Anal. Appl. 28(2), 497–520 (2023)


  2. Abass, H.A., Aremu, K.O., Jolaoso, L.O., Mewomo, O.T.: An inertial forward-backward splitting method for approximating solutions of certain optimization problem. J. Nonlinear Funct. Anal. 2020, Article ID 6 (2020)


3. Abass, H.A., Oyewole, O.K., Jolaoso, L.O., Aremu, K.O.: Modified inertial Tseng method for solving variational inclusion and fixed point problems on Hadamard manifolds. Appl. Anal. (2023). https://doi.org/10.1080/00036811.2023.2256357


  4. Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9, 3–11 (2001)


  5. Alvarez, F.: Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 14(3), 773–782 (2004)


  6. Attouch, H., Peypouquet, J., Redont, P.: Backward-forward algorithms for structured monotone inclusion in Hilbert space. J. Math. Anal. Appl. 457, 1095–1117 (2018)


  7. Barbu, V., Precupanu, T.: Convexity and Optimization in Banach Spaces. Springer, New York (2010)


8. Bauschke, H.H., Borwein, J.M.: Legendre functions and the method of random Bregman projections. J. Convex Anal. 4, 27–67 (1997)


9. Bauschke, H.H., Borwein, J.M., Combettes, P.L.: Essential smoothness, essential strict convexity, and Legendre functions in Banach spaces. Commun. Contemp. Math. 3, 615–647 (2001)


10. Bregman, L.M.: The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 7, 200–217 (1967)


  11. Bruck, R.: On the weak convergence of an ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space. J. Math. Anal. Appl. 61, 159–164 (1977)


12. Butnariu, D., Resmerita, E.: Bregman distances, totally convex functions and a method for solving operator equations in Banach spaces. Abstr. Appl. Anal. 2006, Article ID 84919 (2006)


13. Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2005)


  14. Daubechies, I., Defrise, M., De Mol, C.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 57, 1413–1457 (2004)


15. Denisov, S.V., Semenov, V.V., Stetsyuk, P.I.: Bregman extragradient method with monotone rule of step adjustment. Cybern. Syst. Anal. 55(3), 377–383 (2019)


  16. Duchi, J., Singer, Y.: Efficient online and batch learning using forward-backward splitting. J. Mach. Learn. Res. 10, 2899–2934 (2009)


  17. Eskandani, G.Z., Raeisi, M., Rassias, T.M.: A hybrid extragradient method for solving pseudomonotone equilibrium problem using Bregman distance. J. Fixed Point Theory Appl. 20, 132 (2018)


  18. Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979)


  19. Naraghirad, E., Yao, J.C.: Bregman weak relatively nonexpansive mappings in Banach spaces. Fixed Point Theory Appl. 2013, Article ID 141 (2013)


20. Nesterov, Y.: A method for solving the convex programming problem with convergence rate \(O(\frac{1}{k^{2}})\). Dokl. Akad. Nauk SSSR 269, 543–547 (1983)


  21. Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4, 1–77 (1964)


  22. Raguet, H., Fadili, J., Peyre, G.: A generalized forward-backward splitting. SIAM J. Imaging Sci. 6, 1199–1226 (2013)


  23. Sunthrayuth, P., Pholasa, N., Cholamjiak, P.: Mann-type algorithms for solving the monotone inclusion problem and the fixed point problem in reflexive Banach spaces. Ric. Mat. 72(1), 63–90 (2023)


  24. Shehu, Y.: Convergence results of forward-backward algorithms for sum of monotone operators in Banach spaces. Results Math. 74, Article ID 138 (2019)


  25. Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431–446 (2000)


  26. Reich, S., Sabach, S.: A strong convergence theorem for a proximal-type algorithm in reflexive Banach spaces. J. Nonlinear Convex Anal. 10, 471–485 (2009)


  27. Saejung, S., Yotkaew, P.: Approximation of zeros of inverse strongly monotone operator in Banach spaces. Nonlinear Anal. 75, 742–750 (2012)


  28. Suantai, S., Pholasa, N., Cholamjiak, P.: The modified inertial relaxed CQ algorithm for solving split feasibility problems. J. Ind. Manag. Optim. 14, Article ID 4 (2018)


  29. Suantai, S., Pholasa, N., Cholamjiak, P.: Relaxed CQ algorithms involving the inertial technique for multiple-sets split feasibility problems. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 113, 1081–1099 (2019)


30. Van Tiel, J.: Convex Analysis: An Introductory Text. Wiley, New York (1984)


31. Ugwunnadi, G.C., Abass, H.A., Aphane, M., Oyewole, O.K.: Inertial Halpern-type method for solving split feasibility and fixed point problems via dynamical stepsize in real Banach spaces. Ann. Univ. Ferrara (2023). https://doi.org/10.1007/s11565-023-00473-6


  32. Wang, Z.B., Sunthrayuth, P., Adamu, A., Cholamjiak, P.: Modified accelerated Bregman projection methods for solving quasi-monotone variational inequalities. Optimization, 1–35 (2023)

  33. Zalinescu, C.: Convex Analysis in General Vector Spaces. World Scientific, River Edge (2002)



Funding

Not applicable.

Author information


Contributions

Conceptualization: H. A. Abass, Methodology: H. A. Abass, O. K. Oyewole and M. Aphane, Software: O. K. Oyewole, validation: H. A. Abass and M. Aphane. All authors have read and agreed to the submission of the article.

Corresponding author

Correspondence to H. A. Abass.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

