
An inertial S-iteration process for a common fixed point of a family of quasi-Bregman nonexpansive mappings

Abstract

In this paper, an inertial S-iteration process for approximating a common fixed point of a finite family of quasi-Bregman nonexpansive mappings is introduced and studied in a reflexive Banach space. A strong convergence theorem is proved. Some applications of the theorem are presented. The results presented here improve, extend, and generalize some recent results in the literature.

1 Introduction

Let E be a real reflexive Banach space with dual space \(E^{*}\). Throughout this paper we shall assume that \(f:E \to (- \infty, + \infty ]\) is a proper, lower semicontinuous, and convex function. We denote by \(\operatorname{dom} f:=\{x \in E: f(x)< + \infty \}\) the domain of f. For \(x \in \operatorname{int} \operatorname{dom} f\), the subdifferential of f at x is the convex set defined by

$$\begin{aligned} \partial {f}(x)= \bigl\{ x^{*} \in E^{*}: f(x) + \bigl\langle x^{*},y-x\bigr\rangle \leq f(y), \forall y \in E\bigr\} . \end{aligned}$$
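For example, if \(E=\mathbb{R}\) and \(f(x)=|x|\), then

$$\begin{aligned} \partial f(x)= \textstyle\begin{cases} \{x/ \vert x \vert \}, & x \neq 0, \\ {[-1,1]}, & x=0, \end{cases}\displaystyle \end{aligned}$$

which illustrates that the subdifferential may be multivalued at points where f fails to be differentiable.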

The Fenchel conjugate of f is the function \(f^{*}:E^{*} \to (- \infty, + \infty ]\) defined by

$$\begin{aligned} f^{*}\bigl(x^{*}\bigr)=\sup \bigl\{ \bigl\langle x^{*},x \bigr\rangle - f(x):x \in E \bigr\} . \end{aligned}$$

It is known that the Young–Fenchel inequality,

$$\begin{aligned} \bigl\langle x^{*},x \bigr\rangle \leq f(x)+f^{*}\bigl(x^{*} \bigr),\quad \forall x \in E, x^{*} \in E^{*}, \end{aligned}$$

holds. A function f is coercive [12] if its sublevel sets are bounded; equivalently,

$$\begin{aligned} \lim_{\|x\| \rightarrow \infty } f(x)=+ \infty. \end{aligned}$$

A function f is said to be strongly coercive if

$$\begin{aligned} \lim_{ \Vert x \Vert \rightarrow \infty } \frac{f(x)}{ \Vert x \Vert }=+ \infty. \end{aligned}$$
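For example, for \(f=\frac{1}{2} \Vert \cdot \Vert ^{2}\) the Fenchel conjugate is \(f^{*}=\frac{1}{2} \Vert \cdot \Vert _{*}^{2}\), so the Young–Fenchel inequality reads

$$\begin{aligned} \bigl\langle x^{*},x \bigr\rangle \leq \frac{1}{2} \Vert x \Vert ^{2}+ \frac{1}{2} \bigl\Vert x^{*} \bigr\Vert _{*}^{2},\quad \forall x \in E, x^{*} \in E^{*}, \end{aligned}$$

and this f is strongly coercive since \(f(x)/\|x\|=\frac{1}{2}\|x\| \to +\infty \); in contrast, \(f=\|\cdot \|\) is coercive but not strongly coercive because \(f(x)/\|x\|=1\) for all \(x \neq 0\).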

For any \(x \in \operatorname{int} \operatorname{dom} f\) and \(y \in E\), the derivative of f at x in the direction of y is defined by

$$\begin{aligned} {f}^{0}(x,y)= \lim_{t \rightarrow 0} \frac{f(x+ty) - f(x)}{t}. \end{aligned}$$
(1.1)

The function f is said to be Gâteaux differentiable at x if the limit (1.1) exists for every \(y \in E\). In this case, the gradient of f at x is the linear functional \(\nabla f(x)\in E^{*}\) defined by \(\langle \nabla f(x),y \rangle =f^{0}(x,y)\) for every \(y \in E\). The function f is said to be Gâteaux differentiable if it is Gâteaux differentiable at every point \(x\in \operatorname{int} \operatorname{dom} f \). Furthermore, f is said to be Fréchet differentiable at x if the limit (1.1) is attained uniformly in y with \(\|y\|=1\), and f is said to be uniformly Fréchet differentiable on a subset C of E if the limit (1.1) is attained uniformly for \(x \in C\) and \(\|y\|=1\). It is well known that if f is Gâteaux differentiable (respectively, Fréchet differentiable) on \(\operatorname{int} \operatorname{dom} f\), then f is continuous and its Gâteaux derivative ∇f is norm-to-weak∗ continuous (respectively, norm-to-norm continuous) on \(\operatorname{int} \operatorname{dom} f\); see, for example, [2, 3, 6]. Let \(f:E \to (- \infty, + \infty ]\) be a convex and Gâteaux differentiable function. The Bregman distance with respect to f is the function \(D_{f}:\operatorname{dom} f \times \operatorname{int} \operatorname{dom} f \to [0, + \infty )\) defined by

$$D_{f}(x,y)= f(x)-f(y)- \bigl\langle \nabla f(y),x-y \bigr\rangle . $$
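Two standard instances may help fix ideas. If E is a Hilbert space and \(f=\frac{1}{2}\|\cdot\|^{2}\), then \(\nabla f(y)=y\) and \(D_{f}(x,y)=\frac{1}{2}\|x-y\|^{2}\); if \(E=\mathbb{R}^{N}\) and \(f(x)=\sum_{i=1}^{N}x_{i}\log x_{i}\) on the positive orthant, then \(D_{f}\) is the (generalized) Kullback–Leibler divergence,

$$\begin{aligned} D_{f}(x,y)=\sum_{i=1}^{N} \biggl(x_{i}\log \frac{x_{i}}{y_{i}}-x_{i}+y_{i} \biggr). \end{aligned}$$

In general \(D_{f}\) is neither symmetric nor a metric.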

Let C be a nonempty closed and convex subset of E. Let \(T: C \rightarrow E\) be a mapping, then

  • A point \(v \in C\) is said to be an asymptotic fixed point of T if C contains a sequence \(\{x_{n}\}\) which converges weakly to v and \(\lim_{n \to \infty }\|x_{n}-Tx_{n}\|=0\). The set of asymptotic fixed points of T is denoted by \(\hat{F}(T)\);

  • T is said to be Bregman relatively nonexpansive if \(F(T) \neq \emptyset,F(T)=\hat{F}(T)\), and \(D_{f}(y,Tx) \leq D_{f}(y,x)\) for any \(x \in C, y \in F(T)\);

  • T is said to be quasi-Bregman nonexpansive if \(F(T) \neq \emptyset \) and \(D_{f}(y,Tx) \leq D_{f}(y,x)\) for any \(x\in C, y \in F(T)\);

  • \((I -T)\) is demiclosed at \(y\in E\) if, whenever a sequence \(\{v_{n}\}\) in C converges weakly to u and \(\{ v_{n} - Tv_{n}\}\) converges strongly to y, we have \((I -T)u =y\), where I is the identity mapping. In particular, \((I-T)\) is demiclosed at zero if, whenever a sequence \(\{v_{n}\}\) in C converges weakly to u and \(\{ v_{n} - Tv_{n}\}\) converges strongly to 0, we have \(u \in F(T)\).

Agarwal et al. [1] introduced and studied a two-step iterative process called the S-iteration process. They proved a convergence theorem for fixed points of nearly asymptotically nonexpansive mappings. Since then, various modifications of the S-iteration scheme, as well as multistep schemes, have been studied by many authors for solving nonlinear problems; see, for example, [10, 11, 15] and the references therein.

Suparatulatorn et al. [24] introduced and studied an iteration method called modified S-iteration process which is defined by

$$\begin{aligned} \textstyle\begin{cases} x_{0} \in C; \\ y_{n}=(1-\beta _{n})x_{n} + \beta _{n}S_{1}x_{n}; \\ x_{n+1}=(1-\alpha _{n})S_{1}x_{n} + \alpha _{n}S_{2}y_{n}, \end{cases}\displaystyle \end{aligned}$$

where C is a nonempty closed convex subset of a real Banach space, \(S_{1},S_{2}\) are G-nonexpansive mappings, and \(\{\alpha _{n}\}, \{\beta _{n}\}\) \(\subset (0,1)\). They proved that the sequence generated by the iterative algorithm converges weakly to a common fixed point of two G-nonexpansive mappings in a uniformly convex Banach space.

Recently, Phon-on et al. [17] combined inertial extrapolation with the modified S-iteration process to speed up its convergence and studied the following inertial modified S-iteration process:

$$\begin{aligned} \textstyle\begin{cases} w_{n}=x_{n} + \gamma _{n}(x_{n}-x_{n-1}); \\ y_{n}=(1-\beta _{n})w_{n} + \beta _{n}S_{1}w_{n}; \\ x_{n+1}=(1-\alpha _{n})S_{1}w_{n} + \alpha _{n}S_{2}y_{n}, \end{cases}\displaystyle \end{aligned}$$

for \(n\geq 1\), where \(S_{1},S_{2}\) are nonexpansive mappings, \(\{S_{i}w_{n}-w_{n}\}\) is bounded for \(i=1,2\), \(\{S_{i}w_{n}-y\}\) is bounded for \(i=1,2\) and for any \(y\in F(S_{1}) \cap F(S_{2})\), \(\sum_{n=1}^{\infty } \gamma _{n} < \infty \), \(\{\gamma _{n}\}\subset [0,\gamma ]\) with \(0\leq \gamma <1\), and \(\{\alpha _{n}\},\{\beta _{n}\} \subset [\delta,1-\delta ]\) for some \(\delta \in (0,0.5)\).

They proved, under some assumptions, that the sequence generated by the algorithm converges weakly to a common fixed point of two nonexpansive mappings in a uniformly convex Banach space. Several inertial algorithms were studied by numerous authors to speed up the convergence processes of iterative schemes, see, for example, [13, 18–20] and the references contained therein.

Motivated by the results of Phon-on et al. [17] and Suparatulatorn et al. [24], we raised the following interesting questions:

  1. Can the inertial modified S-iteration process be used to iteratively approximate common fixed points in real Banach spaces more general than uniformly convex spaces?

  2. Can the result also be proved for a common fixed point of a finite family of quasi-Bregman nonexpansive mappings?

  3. Can a strong convergence theorem be proved without assuming that the operators are semicompact?

In this paper, we answer the questions in the affirmative. We introduce and study the following algorithm:

$$\begin{aligned} \textstyle\begin{cases} x_{0},x_{1}\in C, \quad C=C_{1}; \\ w_{n}= x_{n}+\gamma _{n} (x_{n}- x_{n-1}); \\ y_{1n}=\nabla f^{*}(\beta _{n} \nabla fw_{n} + (1- \beta _{n}) \nabla f{S_{1}}w_{n}); \\ y_{in}= \nabla f^{*}(\beta _{n} \nabla f{S_{i-1}}w_{n}+ (1- \beta _{n}) \nabla f{S_{i}}y_{(i-1)n}),\quad 2\le i\le m; \\ C_{in}=\{ v \in C_{n}: D_{f}(v,y_{in})\leq D_{f}(v,w_{n})\}; \\ C_{n+1}=\bigcap_{i=1}^{{m}}C_{in}; \\ x_{n+1}={\Pi _{C_{n+1}}}^{f}x_{0}, \end{cases}\displaystyle \end{aligned}$$
(1.2)

where C is a nonempty, closed, and convex subset of a reflexive Banach space E, \(m\ge 2\) is a natural number, \(\{S_{i}\}_{i=1}^{m}\) is a finite family of quasi-Bregman nonexpansive self-mappings of C, and \(\{\gamma _{n}\}, \{\beta _{n}\}\subset (a,b)\) are sequences with \(0< a< b<1\). We prove that the sequence generated by algorithm (1.2) converges to a common fixed point of the family. Furthermore, we apply our theorem to the solution of equilibrium problems and to zeros of maximal monotone operators.
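To illustrate the structure of (1.2), the following is a minimal numerical sketch (not part of the analysis below) of one possible specialization: \(E=\mathbb{R}^{d}\), \(C=E\), and \(f=\frac{1}{2}\|\cdot\|^{2}\), so that \(\nabla f=\nabla f^{*}=I\), \(D_{f}(x,y)=\frac{1}{2}\|x-y\|^{2}\), each \(C_{in}\) is a half-space, and \({\Pi _{C_{n+1}}}^{f}\) is the metric projection onto an intersection of half-spaces. The mappings \(S_{i}\) below (metric projections onto balls), the constant choices of \(\gamma _{n}\) and \(\beta _{n}\), and the use of Dykstra's algorithm to approximate the projection step are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

# Illustrative mappings: metric projections onto closed balls are nonexpansive
# self-maps of R^d whose fixed-point set is the ball itself, hence they are
# quasi-Bregman nonexpansive for f = (1/2)||.||^2.
def make_ball_projection(center, radius):
    def S(x):
        d = x - center
        nd = np.linalg.norm(d)
        return x.copy() if nd <= radius else center + radius * d / nd
    return S


def project_onto_halfspaces(x0, halfspaces, sweeps=200):
    """Approximate the metric projection of x0 onto the intersection of the
    half-spaces {v : <a, v> <= b} using Dykstra's algorithm."""
    x = x0.astype(float).copy()
    increments = [np.zeros_like(x) for _ in halfspaces]
    for _ in range(sweeps):
        for k, (a, b) in enumerate(halfspaces):
            y = x + increments[k]
            violation = max(np.dot(a, y) - b, 0.0)
            x = y - violation * a / np.dot(a, a)   # project y onto the half-space
            increments[k] = y - x
    return x


def inertial_s_iteration(x0, x1, mappings, gamma=0.5, beta=0.5, steps=30):
    """Sketch of scheme (1.2) with f = (1/2)||.||^2, so grad f = grad f* = I."""
    halfspaces = []            # accumulated constraints describing C_{n+1}
    x_prev, x = x0, x1
    for _ in range(steps):
        w = x + gamma * (x - x_prev)                   # inertial extrapolation w_n
        ys = [beta * w + (1 - beta) * mappings[0](w)]  # y_{1n}
        for i in range(1, len(mappings)):              # y_{in}, 2 <= i <= m
            ys.append(beta * mappings[i - 1](w) + (1 - beta) * mappings[i](ys[-1]))
        # C_{in} = {v : ||v - y_{in}|| <= ||v - w||} is the half-space
        # <w - y_{in}, v> <= (||w||^2 - ||y_{in}||^2) / 2.
        for y_in in ys:
            a = w - y_in
            if np.linalg.norm(a) > 1e-12:
                halfspaces.append((a, 0.5 * (np.dot(w, w) - np.dot(y_in, y_in))))
        x_prev, x = x, project_onto_halfspaces(x0, halfspaces)  # x_{n+1}
    return x


# Hypothetical test data: two balls whose intersection Gamma is nonempty.
S1 = make_ball_projection(np.array([1.0, 0.0]), 2.0)
S2 = make_ball_projection(np.array([0.0, 1.0]), 2.0)
print(inertial_s_iteration(np.array([5.0, 5.0]), np.array([3.0, -2.0]), [S1, S2]))
```

Under these illustrative choices the printed vector approximates \(P_{\Gamma }x_{0}\), the metric projection of \(x_{0}\) onto the intersection of the two balls, which is what the Hilbert space case of the results below (Corollary 3.3) predicts.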

2 Preliminaries

Let \(f:E \to (- \infty, + \infty )\) be a convex and Gâteaux differentiable function. The modulus of total convexity of f at \(x \in \operatorname{int} \operatorname{dom} f\) is the function \(v_{f}(x,\cdot ):[0,+\infty ) \to [0,+\infty )\) defined by

$$\begin{aligned} v_{f}(x,t):= \inf \bigl\{ D_{f}(y,x): y \in \operatorname{dom} f, \Vert y-x \Vert =t\bigr\} . \end{aligned}$$

The function f is called totally convex at x if \(v_{f}(x,t)>0\) whenever \(t>0\). The function f is called totally convex if it is totally convex at every point \(x \in \operatorname{int} \operatorname{dom} f\) and is said to be totally convex on bounded subsets if \(v_{f}(B,t)>0\) for any nonempty bounded subset B of E and \(t>0\), where the modulus of total convexity of the function f on the set B is the function \(v_{f}:\operatorname{int} \operatorname{dom} f \times [0,+\infty ) \to [0,+ \infty )\) defined by

$$\begin{aligned} v_{f}(B,t):= \inf \bigl\{ v_{f}(x,t): x \in B \cap \operatorname{dom} f\bigr\} . \end{aligned}$$
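For instance, if E is a Hilbert space and \(f=\frac{1}{2}\|\cdot\|^{2}\), then \(D_{f}(y,x)=\frac{1}{2}\|y-x\|^{2}\), so that

$$\begin{aligned} v_{f}(x,t)=\frac{1}{2}t^{2}>0 \quad\text{and}\quad v_{f}(B,t)= \frac{1}{2}t^{2}>0 \quad\text{for all } t>0, \end{aligned}$$

hence this f is totally convex on bounded subsets.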

The function f is said to be Legendre if it satisfies the following conditions:

  (1) \(\operatorname{int} \operatorname{dom} f \neq \emptyset \) and the subdifferential ∂f is single-valued on its domain;

  (2) \(\operatorname{int} \operatorname{dom} f^{*} \neq \emptyset \) and \(\partial {f^{*}}\) is single-valued on its domain.

If E is a reflexive Banach space, we have the following:

  (i) f is Legendre if and only if \(f^{*}\) is Legendre (see [4, Corollary 5.5]);

  (ii) if f is Legendre, then ∇f is a bijection satisfying \(\nabla f=(\nabla f^{*})^{-1}\), \(\operatorname{ran} \nabla f=\operatorname{dom} \nabla f^{*}= \operatorname{int} \operatorname{dom} f^{*}\), and \(\operatorname{ran} \nabla f^{*}=\operatorname{dom} \nabla f=\operatorname{int} \operatorname{dom} f\) (see [4, Theorem 5.10]).

If the Banach space E is smooth and strictly convex, the function \(\frac{1}{p}\|\cdot \|^{p}\) with \(p \in (1, \infty )\) is Legendre.

The Bregman projection [7] with respect to f of \(x \in \operatorname{int} \operatorname{dom} f\) onto a nonempty closed convex subset \(C \subset \operatorname{int} \operatorname{dom} f\) is defined as the unique vector \({\Pi _{C}}^{f}x \in C\), which satisfies

$$\begin{aligned} D_{f}\bigl({\Pi _{C}}^{f}x,x\bigr)= \inf \bigl\{ D_{f}(y,x), y \in C\bigr\} . \end{aligned}$$
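In the classical case where E is a Hilbert space and \(f=\frac{1}{2}\|\cdot\|^{2}\), we have \(D_{f}(y,x)=\frac{1}{2}\|y-x\|^{2}\), so the Bregman projection coincides with the metric projection:

$$\begin{aligned} {\Pi _{C}}^{f}x= \operatorname{arg\,min}_{y \in C} \frac{1}{2} \Vert y-x \Vert ^{2}=P_{C}x. \end{aligned}$$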

Lemma 2.1

([8])

Let C be a nonempty closed and convex subset of a reflexive Banach space E. Let \(f:E \to \mathbb{R}\) be a Gâteaux differentiable and totally convex function and let \(x \in E\). Then

  (1) \(z={\Pi _{C}}^{f}x \) if and only if \(\langle \nabla fx- \nabla fz, y-z \rangle \leq 0, \forall y \in C\);

  (2) \(D_{f}(y, {\Pi _{C}}^{f}x)+ D_{f}({\Pi _{C}}^{f}x,x) \leq D_{f}(y,x), \forall x \in E, y \in C\).

Lemma 2.2

([8, 14])

Let E be a reflexive Banach space. Let \(f:E \to \mathbb{R}\) be a strongly coercive Bregman function and let \(V_{f}\) be the function defined by

$$\begin{aligned} V_{f}\bigl(x,x^{*}\bigr)=f(x)- \bigl\langle x,x^{*} \bigr\rangle +f^{*}\bigl(x^{*}\bigr),\quad x \in E, x^{*} \in E^{*}. \end{aligned}$$

Then the following hold:

  (1) \(D_{f}(x, \nabla f^{*}(x^{*}))= V_{f}(x,x^{*}), \forall x\in E, x^{*} \in E^{*}\);

  (2) \(V_{f}(x,x^{*})+ \langle \nabla f^{*}(x^{*})-x, y^{*} \rangle \leq V_{f}(x,x^{*}+y^{*}), \forall x \in E, x^{*}, y^{*} \in E^{*}\).

Lemma 2.3

([22])

If \(f:E \to \mathbb{R}\) is uniformly Fréchet differentiable and bounded on bounded subsets of E, then ∇f is uniformly continuous on bounded subsets of E from strong topology of E to the strong topology of \(E^{*}\).

Theorem 2.4

([25])

Let E be a reflexive Banach space and let \(f:E \to \mathbb{R}\) be a convex function which is bounded on bounded subsets of E. Then the following are equivalent:

  (1) f is strongly coercive and uniformly convex on bounded subsets of E.

  (2) \(\operatorname{dom} f^{*}=E^{*}\), \(f^{*}\) is bounded and uniformly smooth on bounded subsets of \(E^{*}\).

  (3) \(\operatorname{dom} f^{*}=E^{*}\), \(f^{*}\) is Fréchet differentiable and \(\nabla f^{*}\) is norm-to-norm uniformly continuous on bounded subsets of \(E^{*}\).

Theorem 2.5

([25])

Let E be a reflexive Banach space and let \(f:E \to \mathbb{R}\) be a continuous convex function which is strongly coercive. Then the following are equivalent:

  (1) f is bounded and uniformly smooth on bounded subsets of E.

  (2) \(f^{*}\) is Fréchet differentiable and \(\nabla f^{*}\) is norm-to-norm uniformly continuous on bounded subsets of \(E^{*}\).

  (3) \(\operatorname{dom} f^{*}=E^{*}\), \(f^{*}\) is strongly coercive and uniformly convex on bounded subsets of \(E^{*}\).

Lemma 2.6

Let E be a reflexive Banach space, let \(r>0\) be a constant, let \(\rho _{r}^{*}\) be the gauge of uniform convexity of the conjugate function \(f^{*}\), and let \(f:E \to \mathbb{R}\) be a convex function which is bounded and uniformly convex on bounded subsets of E. Then, for any \(x \in E, y^{*}, z^{*} \in B_{r}\) and \(\alpha \in (0,1)\),

$$\begin{aligned} V_{f}\bigl(x, \alpha y^{*} + (1- \alpha ) z^{*} \bigr)\leq \alpha V_{f}\bigl(x,y^{*}\bigr) + (1- \alpha )V_{f}\bigl(x, z^{*}\bigr)- \alpha (1- \alpha ) {\rho _{r}}^{*}\bigl( \bigl\Vert y^{*}-z^{*} \bigr\Vert \bigr). \end{aligned}$$

Lemma 2.7

([16])

Let E be a Banach space and \(f:E \to \mathbb{R}\) be a Gâteaux differentiable function which is uniformly convex on bounded subsets of E. Let \(\{x_{n}\}\) and \(\{y_{n}\}\) be bounded sequences in E. Then

$$\begin{aligned} \lim_{n\rightarrow \infty }D_{f}(x_{n},y_{n})=0 \quad\textit{if and only if}\quad \lim_{n \rightarrow \infty } \Vert x_{n}-y_{n} \Vert =0. \end{aligned}$$

Lemma 2.8

([21])

Let \(f:E \to \mathbb{R}\) be a Gâteaux differentiable and totally convex function. If \(x_{0} \in E\) and the sequence \(\{D_{f}(x_{n},x_{0})\}\) is bounded, the sequence \(\{x_{n}\}\) is bounded, too.

The function f is called sequentially consistent if, for any two sequences \(\{u_{n}\}\) and \(\{v_{n}\}\) in E such that the first one is bounded,

$$\begin{aligned} \lim_{n\rightarrow \infty }D_{f}(u_{n},v_{n})=0\quad \text{implies}\quad \lim_{n \rightarrow \infty } \Vert u_{n}-v_{n} \Vert =0. \end{aligned}$$

Lemma 2.9

([9])

The function f is totally convex on bounded subsets if and only if the function f is sequentially consistent.

3 Main results

Theorem 3.1

Let C be a nonempty, closed, and convex subset of a reflexive Banach space E, and let \(f:E\to \mathbb{R}\) be a strongly coercive Legendre function which is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E. Let \(\{S_{i}\}_{i=1}^{m} \) be a finite family of quasi-Bregman nonexpansive self-mappings of C such that \(S_{i}\) is \(L_{i}\)-Lipschitz and \((I-S_{i})\) is demiclosed at 0 for each \(i\in \{1,2, \dots,m\}\). Assume \(\Gamma = \bigcap_{i=1}^{m} F(S_{i})\neq \emptyset \). Let the sequence \(\{x_{n}\}\) be generated by (1.2). Then \(\{x_{n}\}\) converges to \({\Pi _{\Gamma }}^{f}x_{0}\).

Proof

We divide the proof into six steps.

Step 1. We show that \(C_{n}\) is closed and convex for any \(n \geq 1\).

Since \(C=C_{1}\), \(C_{1}\) is closed and convex.

Assume \(C_{n}\) is closed and convex for some \(n\geq 1\). Since for any \(y\in C_{n}\), \(i=1\),

$$\begin{aligned} &D_{f}(y,y_{1n}) \leq D_{f}(y,w_{n}) \\ & \quad\Leftrightarrow\quad f(w_{n})-f(y_{1n})+\bigl\langle \nabla f(w_{n}),y-w_{n} \bigr\rangle -\bigl\langle \nabla f(y_{1n}),y-y_{1n}\bigr\rangle \leq 0 \\ &\quad\Leftrightarrow\quad f(w_{n})-f(y_{1n})+\bigl\langle \nabla f(y_{1n}),y_{1n} \bigr\rangle -\bigl\langle \nabla f(w_{n}),w_{n}\bigr\rangle \leq \bigl\langle \nabla f(y_{1n})- \nabla f(w_{n}),y \bigr\rangle \end{aligned}$$

and, for \(2\leq i \leq m\),

$$\begin{aligned} &D_{f}(y,y_{in})\leq D_{f}(y,w_{n}) \\ &\quad\Leftrightarrow \quad f(w_{n})-f(y_{in})+\bigl\langle \nabla f(w_{n}),y-w_{n} \bigr\rangle -\bigl\langle \nabla f(y_{in}),y-y_{in}\bigr\rangle \leq 0 \\ &\quad \Leftrightarrow\quad f(w_{n})-f(y_{in})+\bigl\langle \nabla f(y_{in}),y_{in} \bigr\rangle -\bigl\langle \nabla f(w_{n}),w_{n}\bigr\rangle \leq \bigl\langle \nabla f(y_{in})- \nabla f(w_{n}),y \bigr\rangle , \end{aligned}$$

we have that each \(C_{in}\) is the intersection of \(C_{n}\) with a closed half-space, so \(C_{n+1}=\bigcap_{i=1}^{m}C_{in}\) is closed and convex. Therefore, \(C_{n}\) is closed and convex for any \(n \geq 1\).

Step 2. We show that \(\Gamma \subset C_{n}\) for any \(n \geq 1\).

For \(n=1\), \(\Gamma \subset C=C_{1}\).

Now assume \(\Gamma \subset C_{n}\) for some \(n\geq 1\). Let \(u\in \Gamma \). Then, by Lemma 2.2 and the convexity of \(f^{*}\), we have for \(i=1\),

$$\begin{aligned} D_{f}(u,y_{1n}) ={}& D_{f} \bigl(u,\nabla f^{*}\bigl(\beta _{n} \nabla f(w_{n})+ (1- \beta _{n}) \nabla f\bigl(S_{1} (w_{n})\bigr)\bigr) \bigr) \\ ={}&V_{f} \bigl(u, \beta _{n} \nabla f(w_{n})+ (1- \beta _{n}) \nabla f\bigl(S_{1} (w_{n})\bigr) \bigr) \\ ={}&f(u) - \bigl\langle u,\beta _{n} \nabla f(w_{n})+ (1- \beta _{n}) \nabla f\bigl(S_{1} (w_{n})\bigr) \bigr\rangle \\ &{} +f^{*} \bigl(\beta _{n} \nabla f(w_{n})+ (1- \beta _{n}) \nabla f\bigl(S_{1} (w_{n})\bigr) \bigr) \\ ={}& \beta _{n} f(u)+ (1- \beta _{n}) f(u) - \beta _{n} \bigl\langle u, \nabla f(w_{n}) \bigr\rangle - (1- \beta _{n}) \bigl\langle u,\nabla f\bigl(S_{1} (w_{n})\bigr) \bigr\rangle \\ &{} +f^{*} \bigl(\beta _{n} \nabla f(w_{n})+ (1- \beta _{n}) \nabla f\bigl(S_{1} (w_{n})\bigr) \bigr) \\ \leq{}& \beta _{n} f(u)+ (1- \beta _{n}) f(u) - \beta _{n} \bigl\langle u, \nabla f(w_{n}) \bigr\rangle - (1- \beta _{n}) \bigl\langle u,\nabla f\bigl(S_{1} (w_{n})\bigr) \bigr\rangle \\ &{} + \beta _{n} f^{*} \bigl( \nabla f(w_{n}) \bigr)+ (1- \beta _{n}) f^{*} \bigl(\nabla f \bigl(S_{1} (w_{n})\bigr) \bigr) \\ ={}& \beta _{n} \bigl[f(u) - \bigl\langle u, \nabla f(w_{n}) \bigr\rangle + f^{*} \bigl( \nabla f(w_{n}) \bigr) \bigr] \\ &{} +(1- \beta _{n}) \bigl[f(u) - \bigl\langle u,\nabla f \bigl(S_{1} (w_{n})\bigr) \bigr\rangle + f^{*} \bigl(\nabla f\bigl(S_{1} (w_{n})\bigr) \bigr) \bigr] \\ = {}&\beta _{n} D_{f}(u,w_{n}) +(1- \beta _{n})D_{f}\bigl(u,S_{1}(w_{n})\bigr) \\ \leq{}& \beta _{n} D_{f}(u,w_{n}) +(1- \beta _{n})D_{f}(u,w_{n}) \\ ={}& D_{f}(u,w_{n}) \end{aligned}$$
(3.1)

Now for \(2\le i\le m\), we have

$$\begin{aligned} & D_{f}(u,y_{in}) \\ &\quad= D_{f} \bigl(u,\nabla f^{*} \bigl(\beta _{n} \nabla f(S_{i-1}w_{n})+ (1- \beta _{n}) \nabla f(S_{i} y_{(i-1)n}) \bigr) \bigr) \\ &\quad=V_{f} \bigl(u, \beta _{n} \nabla f(S_{i-1}w_{n})+ (1- \beta _{n}) \nabla f(S_{i} y_{(i-1)n}) \bigr) \\ &\quad=f(u) - \bigl\langle u,\beta _{n} \nabla f(S_{i-1}w_{n})+ (1- \beta _{n}) \nabla f(S_{i} y_{(i-1)n}) \bigr\rangle \\ &\qquad{} +f^{*} \bigl(\beta _{n} \nabla f(S_{i-1}w_{n})+ (1- \beta _{n}) \nabla f(S_{i} y_{(i-1)n}) \bigr) \\ &\quad= \beta _{n} f(u)+ (1- \beta _{n}) f(u) - \beta _{n} \bigl\langle u, \nabla f(S_{i-1}w_{n}) \bigr\rangle \\ &\qquad{} - (1- \beta _{n}) \bigl\langle u,\nabla f(S_{i}y_{(i-1)n}) \bigr\rangle \\ & \qquad{}+f^{*} \bigl(\beta _{n} \nabla f(S_{i-1}w_{n})+ (1- \beta _{n}) \nabla f(S_{i} y_{(i-1)n}) \bigr) \\ &\quad\leq \beta _{n} f(u)+ (1- \beta _{n}) f(u) - \beta _{n} \bigl\langle u, \nabla f(S_{i-1}w_{n}) \bigr\rangle \\ &\qquad{} - (1- \beta _{n}) \bigl\langle u,\nabla f(S_{i} y_{(i-1)n}) \bigr\rangle \\ &\qquad{} + \beta _{n} f^{*} \bigl( \nabla f(S_{i-1}w_{n}) \bigr)+ (1- \beta _{n}) f^{*} \bigl(\nabla f(S_{i} y_{(i-1)n}) \bigr) \\ &\quad= \beta _{n} \bigl[f(u) - \bigl\langle u, \nabla f(S_{i-1}w_{n}) \bigr\rangle + f^{*} \bigl( \nabla f(S_{i-1}w_{n}) \bigr) \bigr] \\ & \qquad{}+(1- \beta _{n}) \bigl[f(u) - \bigl\langle u,\nabla f(S_{i} y_{(i-1)n}) \bigr\rangle + f^{*} \bigl(\nabla f(S_{i} y_{(i-1)n}) \bigr) \bigr] \\ &\quad= \beta _{n} D_{f}(u,S_{i-1}w_{n}) +(1- \beta _{n})D_{f}(u,S_{i}y_{(i-1)n}) \\ &\quad\leq \beta _{n} D_{f}(u,w_{n}) +(1- \beta _{n})D_{f}(u,y_{(i-1)n}) \\ &\quad\leq \beta _{n} D_{f}(u,w_{n}) +(1- \beta _{n}) \bigl[ \beta _{n} D_{f}(u,w_{n}) + (1-\beta _{n})D_{f}(u,y_{(i-2)n}) \bigr] \\ &\quad= \bigl(\beta _{n} + \beta _{n} (1-\beta _{n}) \bigr) D_{f}(u,w_{n}) + (1-\beta _{n})^{2} D_{f}(u,y_{(i-2)n}) \\ &\quad\leq \beta _{n} \bigl( 1+ (1- \beta _{n}) \bigr) D_{f}(u,w_{n}) \\ &\qquad{} +(1- \beta _{n})^{2} \bigl[ \beta _{n} D_{f}(u,w_{n}) + (1- \beta _{n})D_{f}(u,y_{(i-3)n}) \bigr] \\ &\quad= \beta _{n} \bigl(1+ (1-\beta _{n}) + (1-\beta _{n})^{2} \bigr) D_{f}(u,w_{n}) + (1- \beta _{n})^{3} D_{f}(u,y_{(i-3)n}) \\ &\quad\leq \\ &\quad \vdots \\ &\quad\leq \beta _{n} \bigl(1+ (1-\beta _{n}) + (1-\beta _{n})^{2}+ \cdots +(1-\beta _{n})^{i-1} \bigr) D_{f}(u,w_{n}) \\ & \qquad{}+ (1-\beta _{n})^{i} D_{f}(u,w_{n}) \\ &\quad=\beta _{n} \biggl[ \frac{1-(1-\beta _{n})^{i}}{1-(1-\beta _{n})} \biggr]D_{f}(u,w_{n}) + (1-\beta _{n})^{i} D_{f}(u,w_{n}) \\ &\quad= D_{f}(u,w_{n}). \end{aligned}$$
(3.2)

Hence \(\Gamma \subset C_{n} \) for any \(n \geq 1\).

Step 3. We shall show that \(\{x_{n}\}\) is a Cauchy sequence.

Since \(\Gamma \subset C_{n+1} \subset C_{n} \) and \(x_{n}={\Pi _{C_{n}}}^{f} x_{0} \in C_{n}\), by Lemma 2.1, we have that \(D_{f}(x_{n}, x_{0}) \leq D_{f} (x_{n+1},x_{0})\) and also \(D_{f}(x_{n},x_{0}) \leq D_{f} (u,x_{0})\) for any \(u \in \Gamma \). Hence \(\{D_{f}(x_{n}, x_{0})\}\) is nondecreasing and bounded, so \(\lim_{n\to \infty } D_{f} (x_{n}, x_{0})\) exists. Furthermore, by Lemma 2.8, \(\{x_{n}\}\) is bounded. Also, for \(k \geq n\), since \(x_{k} \in C_{k} \subset C_{n}\) and \(x_{n}={\Pi _{C_{n}}}^{f} x_{0}\), it follows from Lemma 2.1 that \(D_{f}(x_{k}, x_{n}) = D_{f} (x_{k},{\Pi _{C_{n} }}^{f}x_{0}) \leq D_{f} (x_{k},x_{0})-D_{f} (x_{n},x_{0}) \rightarrow 0\) as \(n,k \rightarrow \infty \). Since f is totally convex on bounded subsets of E, f is sequentially consistent. Therefore \(\|x_{n}-x_{k}\| \rightarrow 0\) as \(n,k \rightarrow \infty \). Hence, \(\{x_{n}\}\) is a Cauchy sequence.

Step 4. We show that

$$\begin{aligned} \lim_{n\rightarrow \infty } \Vert x_{n}-w_{n} \Vert &=\lim_{n \rightarrow \infty } \Vert x_{n}-y_{in} \Vert \\ &= \lim_{n\rightarrow \infty } \Vert y_{(i+1)n}-y_{in} \Vert \\ &=\lim_{n\rightarrow \infty } \bigl\Vert (I-S_{1})w_{n} \bigr\Vert \\ &=\lim_{n\rightarrow \infty } \bigl\Vert (I-S_{i})y_{(i-1)n} \bigr\Vert =0, \end{aligned}$$

where the limits involving \(x_{n}-y_{in}\) hold for each \(i\in \{1,2,\dots,m\}\), the limit involving \(y_{(i+1)n}-y_{in}\) holds for \(1\leq i\leq m-1\), and the last limit holds for \(2\leq i\leq m\).

Since \(x_{n+1} \in C_{n+1} \subset C_{n}\), by Lemma 2.1, we have \(D_{f} (x_{n+1},x_{n}) \leq D_{f} (x_{n+1},x_{0})-D_{f} (x_{n},x_{0})\). Taking the limit as \(n \rightarrow \infty \), we have \(\underset{n\to \infty }{\lim } D_{f} (x_{n+1},x_{n}) =0\).

Since f is totally convex on bounded subsets of E, f is sequentially consistent. Therefore

$$\begin{aligned} \Vert x_{n+1}-x_{n} \Vert \rightarrow 0\quad \text{as } n \rightarrow \infty. \end{aligned}$$
(3.3)

From (1.2) we get

$$\begin{aligned} \Vert x_{n} - w_{n} \Vert = \bigl\Vert \gamma _{n}(x_{n} - x_{n-1}) \bigr\Vert \leq \Vert x_{n}-x_{n-1} \Vert , \end{aligned}$$

which implies

$$\begin{aligned} \lim_{n \rightarrow \infty } \Vert x_{n}-w_{n} \Vert = 0. \end{aligned}$$
(3.4)

Since \(\{x_{n}\}\) is bounded, (3.4) implies that \(\{w_{n}\}\) is also bounded and

$$\begin{aligned} \Vert x_{n+1} -w_{n} \Vert \leq \Vert x_{n+1}-x_{n} \Vert + \Vert x_{n} - w_{n} \Vert . \end{aligned}$$

Thus, we get

$$\begin{aligned} \lim_{n \rightarrow \infty } \Vert x_{n+1}-w_{n} \Vert = 0. \end{aligned}$$

By Lemma 2.7,

$$\begin{aligned} \lim_{n \rightarrow \infty } D_{f}(x_{n+1},w_{n})= 0. \end{aligned}$$

Since \(x_{n+1} \in C_{n+1}=\bigcap_{i=1}^{m}C_{in}\), for \(1 \leq i \leq m\), from (1.2) we have \(D_{f}(x_{n+1},y_{in}) \leq D_{f}(x_{n+1},w_{n})\). Hence \(\lim_{n \to \infty } D_{f}(x_{n+1},y_{in})= 0\), \(\forall i \in \{1,2,3,\dots,m \}\). Since f is totally convex on bounded subsets of E, f is sequentially consistent. Therefore

$$\begin{aligned} \Vert x_{n+1}-y_{in} \Vert \rightarrow 0 \quad\text{as } n \rightarrow \infty, \forall i \in \{1,2,3,\dots,m\}. \end{aligned}$$
(3.5)

Observe that \(\| x_{n} -y_{in}\| \leq \|x_{n}-x_{n+1}\|+ \|x_{n+1} - y_{in}\|, \forall i \in \{ 1,2,3,\dots,m\}\), which implies

$$\begin{aligned} \lim_{n \rightarrow \infty } \Vert x_{n}-y_{in} \Vert = 0,\quad \forall i \in \{1,2,3,\dots,m\}. \end{aligned}$$
(3.6)

Also, \(\| y_{in} -w_{n}\| \leq \|y_{in}-x_{n}\|+ \|x_{n} - w_{n}\|\). Thus,

$$\begin{aligned} \lim_{n \rightarrow \infty } \Vert y_{in}-w_{n} \Vert = 0, \quad \forall i \in \{1,2,3,\dots,m\}. \end{aligned}$$
(3.7)

Since ∇f is norm-to-norm uniformly continuous on bounded subsets of E, we have

$$\begin{aligned} \lim_{n \rightarrow \infty } \Vert \nabla fy_{in}- \nabla fw_{n} \Vert = 0, \quad\forall i \in \{1,2,3,\dots,m\}. \end{aligned}$$
(3.8)

Since \(\{w_{n}\}\) is bounded, (3.7) implies that \(\{y_{in}\}\) is also bounded.

Thus, for \(1\leq i \leq m-1 \), we have \(\| y_{(i+1)n} -y_{in}\| \leq \|y_{(i+1)n}-x_{n+1}\|+ \|x_{n+1} - y_{in} \| \), so that

$$\begin{aligned} \lim_{n \rightarrow \infty } \Vert y_{(i+1)n}-y_{in} \Vert = 0. \end{aligned}$$
(3.9)

Since ∇f is norm-to-norm uniformly continuous on bounded subsets of E, we have

$$\begin{aligned} \lim_{n \rightarrow \infty } \Vert \nabla fy_{(i+1)n}- \nabla fy_{in} \Vert = 0,\quad \forall i \in \{1,2,3,\dots,m-1\}. \end{aligned}$$
(3.10)

From (1.2)

$$\begin{aligned} \Vert \nabla fy_{1n}- \nabla fw_{n} \Vert =(1- \beta _{n}) \Vert \nabla f{S_{1}}w_{n}- \nabla fw_{n} \Vert . \end{aligned}$$

From (3.8), we have

$$\begin{aligned} 0=\lim_{n \rightarrow \infty } \Vert \nabla fy_{1n}- \nabla fw_{n} \Vert = \lim_{n \rightarrow \infty } (1- \beta _{n}) \Vert \nabla f{S_{1}}w_{n}- \nabla fw_{n} \Vert . \end{aligned}$$

Since \(\{\beta _{n}\}\subset (a,b)\) with \(b<1\), it follows that

$$\begin{aligned} \lim_{n \rightarrow \infty } \Vert \nabla f{S_{1}}w_{n}- \nabla fw_{n} \Vert = 0. \end{aligned}$$
(3.11)

Since \(\nabla f^{*}\) is norm-to-norm uniformly continuous on bounded subsets of \(E^{*}\), this implies that

$$\begin{aligned} \lim_{n \rightarrow \infty } \Vert w_{n}- {S_{1}}w_{n} \Vert = 0. \end{aligned}$$
(3.12)

Now

$$\begin{aligned} \Vert y_{1n}-S_{1}w_{n} \Vert \leq \Vert y_{1n}-w_{n} \Vert + \Vert w_{n}-S_{1}w_{n} \Vert , \end{aligned}$$

which implies

$$\begin{aligned} \lim_{n\rightarrow \infty } \Vert y_{1n} -S_{1}w_{n} \Vert =0. \end{aligned}$$

Thus

$$\begin{aligned} \Vert y_{2n}-S_{1}w_{n} \Vert \leq \Vert y_{2n}-y_{1n} \Vert + \Vert y_{1n}-S_{1}w_{n} \Vert \end{aligned}$$

gives

$$\begin{aligned} \lim_{n\rightarrow \infty } \Vert y_{2n} -S_{1}w_{n} \Vert =0. \end{aligned}$$

Since ∇f is norm-to-norm uniformly continuous on bounded subsets of E, we have

$$\begin{aligned} \lim_{n\rightarrow \infty } \Vert \nabla f y_{2n} -\nabla fS_{1}w_{n} \Vert =0. \end{aligned}$$

Again, from (1.2), we have

$$\begin{aligned} \Vert \nabla fy_{2n}- \nabla f{S_{1}} w_{n} \Vert =(1-\beta _{n}) \Vert \nabla f{S_{2}}y_{1n}- \nabla f{S_{1}}w_{n} \Vert . \end{aligned}$$

Therefore,

$$\begin{aligned} \lim_{n \rightarrow \infty } \Vert \nabla fS_{2}y_{1n}- \nabla f{S_{1}}w_{n} \Vert = 0. \end{aligned}$$

Since \(\nabla f^{*}\) is norm-to-norm uniformly continuous on bounded subsets of \(E^{*}\), we have

$$\begin{aligned} \lim_{n \rightarrow \infty } \Vert S_{2}y_{1n}- S_{1} w_{n} \Vert = 0. \end{aligned}$$

Thus

$$\begin{aligned} \Vert y_{1n}- {S_{2}}y_{1n} \Vert \leq \Vert y_{1n}-w_{n} \Vert + \Vert w_{n}-{S_{1}}w_{n} \Vert + \Vert {S_{1}}w_{n}- {S_{2}}y_{1n} \Vert \end{aligned}$$

gives

$$\begin{aligned} \lim_{n \rightarrow \infty } \bigl\Vert (I-{S_{2}})y_{1n} \bigr\Vert = 0. \end{aligned}$$
(3.13)

Now

$$\begin{aligned} \Vert y_{3n}- S_{2}w_{n} \Vert &\leq \Vert y_{3n}- y_{2n} \Vert + \Vert y_{2n}-y_{1n} \Vert + \Vert y_{1n}-S_{2}y_{1n} \Vert + \Vert S_{2}y_{1n}- S_{2}w_{n} \Vert \\ &\leq \Vert y_{3n}- y_{2n} \Vert + \Vert y_{2n}-y_{1n} \Vert + \Vert y_{1n}-S_{2}y_{1n} \Vert + L_{2} \Vert y_{1n}- w_{n} \Vert . \end{aligned}$$

This implies \(\lim_{n\to \infty }\|y_{3n}-S_{2}w_{n}\|=0\).

From this and the fact that ∇f is norm-to-norm uniformly continuous on bounded subsets of E, we have

$$\begin{aligned} \lim_{n \rightarrow \infty } \Vert \nabla fy_{3n}-\nabla fS_{2}w_{n} \Vert = 0. \end{aligned}$$

Similarly, from (1.2) we have

$$\begin{aligned} \Vert \nabla fy_{3n}- \nabla f{S_{2}} w_{n} \Vert =(1-\beta _{n}) \Vert \nabla f{S_{3}}y_{2n}- \nabla f{S_{2}}w_{n} \Vert . \end{aligned}$$

Therefore,

$$\begin{aligned} \lim_{n \rightarrow \infty } \Vert \nabla fS_{3}y_{2n}- \nabla f{S_{2}}w_{n} \Vert = 0. \end{aligned}$$

Since \(\nabla f^{*}\) is norm-to-norm uniformly continuous on bounded subsets of \(E^{*}\), we have

$$\begin{aligned} \lim_{n \rightarrow \infty } \Vert S_{3}y_{2n}- S_{2} w_{n} \Vert = 0. \end{aligned}$$

From the following inequality:

$$\begin{aligned} \Vert y_{2n}- {S_{3}}y_{2n} \Vert &\leq \Vert y_{2n}-y_{1n} \Vert + \Vert y_{1n}-{S_{2}}y_{1n} \Vert + \Vert {S_{2}}y_{1n}- {S_{2}}w_{n} \Vert + \Vert S_{2}w_{n}- S_{3}y_{2n} \Vert \\ &\leq \Vert y_{2n}-y_{1n} \Vert + \Vert y_{1n}-{S_{2}}y_{1n} \Vert + L_{2} \Vert y_{1n}- w_{n} \Vert + \Vert S_{2}w_{n}- S_{3}y_{2n} \Vert , \end{aligned}$$

we get

$$\begin{aligned} \lim_{n \rightarrow \infty } \bigl\Vert (I-{S_{3}})y_{2n} \bigr\Vert = 0. \end{aligned}$$
(3.14)

Also,

$$\begin{aligned} \Vert y_{4n}- S_{3}w_{n} \Vert &\leq \Vert y_{4n}- y_{3n} \Vert + \Vert y_{3n}-y_{2n} \Vert + \Vert y_{2n}-S_{3}y_{2n} \Vert + \Vert S_{3}y_{2n}- S_{3}w_{n} \Vert \\ &\leq \Vert y_{4n}- y_{3n} \Vert + \Vert y_{3n}-y_{2n} \Vert + \Vert y_{2n}-S_{3}y_{2n} \Vert + L_{3} \Vert y_{2n}- w_{n} \Vert , \end{aligned}$$

implies \(\lim_{n\to \infty }\|y_{4n}-S_{3}w_{n}\|=0\).

Since ∇f is norm-to-norm uniformly continuous on bounded subsets of E, we have \(\lim_{n\to \infty }\|\nabla f y_{4n}-\nabla fS_{3}w_{n}\|=0\).

From (1.2) we have

$$\begin{aligned} \Vert \nabla fy_{4n}- \nabla f{S_{3}} w_{n} \Vert =(1-\beta _{n}) \Vert \nabla f{S_{4}}y_{3n}- \nabla f{S_{3}}w_{n} \Vert . \end{aligned}$$

Therefore,

$$\begin{aligned} \lim_{n \rightarrow \infty } \Vert \nabla fS_{4}y_{3n}- \nabla f{S_{3}}w_{n} \Vert = 0. \end{aligned}$$

Since \(\nabla f^{*}\) is norm-to-norm uniformly continuous on bounded subsets of \(E^{*}\), we have

$$\begin{aligned} \lim_{n \rightarrow \infty } \Vert S_{4}y_{3n}- S_{3} w_{n} \Vert = 0. \end{aligned}$$

From the inequality

$$\begin{aligned} \Vert y_{3n}- {S_{4}}y_{3n} \Vert &\leq \Vert y_{3n}-y_{2n} \Vert + \Vert y_{2n}-{S_{3}}y_{2n} \Vert + \Vert {S_{3}}y_{2n}- {S_{3}}w_{n} \Vert + \Vert S_{3}w_{n}- S_{4}y_{3n} \Vert \\ &\leq \Vert y_{3n}-y_{2n} \Vert + \Vert y_{2n}-{S_{3}}y_{2n} \Vert + L_{3} \Vert y_{2n}- w_{n} \Vert + \Vert S_{3}w_{n}- S_{4}y_{3n} \Vert , \end{aligned}$$

we get

$$\begin{aligned} \lim_{n \rightarrow \infty } \bigl\Vert (I-{S_{4}})y_{3n} \bigr\Vert = 0. \end{aligned}$$
(3.15)

Continuing in this fashion, we get

$$\begin{aligned} \lim_{n\rightarrow \infty } \bigl\Vert (I-S_{1})w_{n} \bigr\Vert &=\lim_{n \rightarrow \infty } \bigl\Vert (I-S_{2})y_{1n} \bigr\Vert \\ &=\lim_{n\rightarrow \infty } \bigl\Vert (I-S_{3})y_{2n} \bigr\Vert \\ &=\lim_{n\rightarrow \infty } \bigl\Vert (I-S_{4})y_{3n} \bigr\Vert \\ & \vdots \\ & =\lim_{n\rightarrow \infty } \bigl\Vert (I-S_{m})y_{(m-1)n} \bigr\Vert =0. \end{aligned}$$

Step 5. We show that \(\{x_{n}\}\) converges to an element of Γ.

Since \(\{x_{n}\}\) is a Cauchy sequence and C is a closed subset of the Banach space E, there exists \(x^{*}\in C\) such that \(x_{n} \rightarrow x^{*}\) as \(n\to \infty \). From the fact that

$$\begin{aligned} \lim_{n\rightarrow \infty } \Vert x_{n}-w_{n} \Vert = \lim_{n\rightarrow \infty } \Vert x_{n}-y_{in} \Vert =0,\quad \forall i \in \{1,2,3, \ldots ,m\}, \end{aligned}$$

we have that

$$\begin{aligned} w_{n}\rightarrow x^{*}, \qquad y_{in} \rightarrow x^{*} \quad\text{as } n \to \infty, \forall i \in \{1,2,3, \ldots ,m \}. \end{aligned}$$

Since \(I-S_{i}\), \(i \in \{1,2,3, \ldots ,m\}\) are demiclosed at 0 and

$$\begin{aligned} \lim_{n\rightarrow \infty } \bigl\Vert (I-S_{1})w_{n} \bigr\Vert = \lim_{n\rightarrow \infty } \bigl\Vert (I-S_{i})y_{(i-1)n} \bigr\Vert =0 \quad\text{for } 2\leq i \leq m, \end{aligned}$$

we have \(x^{*} \in \bigcap_{i=1}^{m} F(S_{i})\). Therefore, \(x^{*} \in \Gamma \).

Step 6. We show that \(x^{*} ={\Pi _{\Gamma }}^{f}x_{0}\).

Let \(y={\Pi _{\Gamma }}^{f}x_{0}\). Since \(x^{*} \in \Gamma \), we have that

$$\begin{aligned} \begin{aligned} D_{f}(y,x_{0}) & \leq D_{f}\bigl(x^{*},x_{0}\bigr). \end{aligned} \end{aligned}$$
(3.16)

Since \(y \in \Gamma \subset C_{n}\) and \(x_{n}= {\Pi _{C_{n}}}^{f}x_{0}\), we have

$$\begin{aligned} D_{f}(x_{n},x_{0}) \leq D_{f}(y,x_{0}) \end{aligned}$$

and, taking into account that \(x_{n} \rightarrow x^{*}\), obtain

$$\begin{aligned} D_{f}\bigl(x^{*},x_{0}\bigr) \leq D_{f}(y,x_{0}). \end{aligned}$$
(3.17)

Combining (3.16) and (3.17) yields

$$\begin{aligned} D_{f}(y,x_{0})= D_{f}\bigl(x^{*},x_{0} \bigr). \end{aligned}$$

Hence, \(x^{*}=y= {\Pi _{\Gamma }}^{f}x_{0}\). □

Corollary 3.2

Let C be a nonempty, closed, and convex subset of a reflexive Banach space E, and let \(f:E\to \mathbb{R}\) be a strongly coercive Legendre function which is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E. Let \(\{S_{i}\}_{i=1}^{m}\) be a finite family of Bregman relatively nonexpansive self-mappings of C such that \(S_{i}\) is \(L_{i}\)-Lipschitz and \((I-S_{i})\) is demiclosed at 0 for each \(i=1,2,\dots,m\). Assume \(\Gamma = \bigcap_{i=1}^{m} F(S_{i})\neq \emptyset \). Let a sequence \(\{x_{n}\}\) be generated by

$$\begin{aligned} \textstyle\begin{cases} x_{0},x_{1}\in C,\quad C=C_{1}; \\ w_{n}= x_{n}+\gamma _{n} (x_{n}- x_{n-1}); \\ y_{1n}=\nabla f^{*}(\beta _{n} \nabla fw_{n} + (1- \beta _{n}) \nabla f{S_{1}}w_{n}); \\ y_{in}= \nabla f^{*}(\beta _{n} \nabla f{S_{i-1}}w_{n}+ (1- \beta _{n}) \nabla f{S_{i}}y_{(i-1)n}); \\ C_{in}=\{ v \in C_{n}: D_{f}(v,y_{in})\leq D_{f}(v,w_{n}) \}; \\ C_{n+1}=\bigcap_{i=1}^{{m}}C_{in}; \\ x_{n+1}={\Pi _{C_{n+1}}}^{f}x_{0}; \end{cases}\displaystyle \end{aligned}$$
(3.18)

where \(\{\gamma _{n}\}\) and \(\{\beta _{n}\}\subset (a,b)\), \(0< a< b<1\), are sequences. Then the sequence \(\{x_{n}\}\) converges to a point \(z\in \Gamma \), where \(z={\Pi _{\Gamma }}^{f}x_{0} \).

Corollary 3.3

Let E be a uniformly convex real Banach space and let C be a nonempty, closed, and convex subset of E. Let \(\{S_{i}\}_{i=1}^{m}\) be a finite family of nonexpansive self-mappings of C. Assume \(\Gamma = \bigcap_{i=1}^{m} F(S_{i}) \neq \emptyset \). Let a sequence \(\{x_{n}\}\) be generated by

$$\begin{aligned} \textstyle\begin{cases} x_{0},x_{1}\in C,\quad C=C_{1}; \\ w_{n}= x_{n}+\gamma _{n} (x_{n}- x_{n-1}); \\ y_{1n}=(\beta _{n} w_{n} + (1- \beta _{n}){S_{1}}w_{n}); \\ y_{in}= (\beta _{n} {S_{i-1}}w_{n}+ (1- \beta _{n}){S_{i}}y_{(i-1)n}); \\ C_{in}=\{ v \in C_{n}: \Vert y_{in}-v \Vert \leq \Vert w_{n}-v \Vert \}; \\ C_{n+1}=\bigcap_{i=1}^{{m}}C_{in}; \\ x_{n+1}=P_{C_{n+1}}x_{0}, \end{cases}\displaystyle \end{aligned}$$
(3.19)

where \(\{\gamma _{n}\}\) and \(\{\beta _{n}\}\) are sequences in \((0,1)\). Then the sequence \(\{x_{n}\}\) converges to a point \(z\in \Gamma \), where \(z=P_{\Gamma }x_{0} \).

4 Applications

4.1 Application to the equilibrium problem

Let C be a nonempty closed convex subset of a real Banach space E, and let \(F:C \times C \to \mathbb{R}\) be a bifunction.

The equilibrium problem with respect to F and C is to find \(z\in C\) such that

$$\begin{aligned} F(z,y) \geq 0,\quad \forall y \in C. \end{aligned}$$
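For example (an illustration only), if \(A:C \to E^{*}\) is a monotone operator and the bifunction is chosen as \(F(x,y)=\langle Ax, y-x\rangle \), then \(F(x,x)=0\), F is monotone, and the equilibrium problem reduces to the classical variational inequality:

$$\begin{aligned} \text{find } z \in C \text{ such that } \langle Az, y-z\rangle \geq 0, \quad \forall y \in C. \end{aligned}$$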

The set of solutions of the equilibrium problem above is denoted by \(EP(F)\). For solving the equilibrium problem, we assume that F satisfies the following conditions:

  (A1) \(F(x,x)=0\) for all \(x \in C\);

  (A2) F is monotone, i.e., \(F(x,y)+F(y,x) \leq 0 \), \(\forall x,y \in C\);

  (A3) for each \(x,y,z \in C\), \(\lim_{t\downarrow 0} F(tz + (1-t)x,y) \leq F(x,y)\);

  (A4) for each \(x \in C\), \(y\mapsto F(x,y)\) is convex and lower semicontinuous.

The resolvent of a bifunction F is the operator \({\operatorname{Res}_{f}}^{F}:E \to 2^{C}\) defined by

$$\begin{aligned} {\operatorname{Res}_{f}}^{F} x=\bigl\{ z \in C: F(z,y)+ \bigl\langle \nabla f(z)- \nabla f(x), y-z\bigr\rangle \geq 0, \forall y \in C \bigr\} . \end{aligned}$$
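In particular, if E is a Hilbert space and \(f=\frac{1}{2}\|\cdot\|^{2}\), then \(\nabla f=I\) and the resolvent above takes the familiar form

$$\begin{aligned} {\operatorname{Res}_{f}}^{F} x=\bigl\{ z \in C: F(z,y)+ \langle z- x, y-z\rangle \geq 0, \forall y \in C \bigr\} . \end{aligned}$$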

Lemma 4.1

([23])

Let E be a reflexive Banach space, and C be a nonempty closed convex subset of E. Let \(f:E \to (- \infty, + \infty ] \) be a Legendre function. If the bifunction \(F:C \times C \to \mathbb{R}\) satisfies conditions (A1)–(A4), then the following hold:

  (1) \({\operatorname{Res}_{f}}^{F}\) is single-valued;

  (2) \({\operatorname{Res}_{f}}^{F}\) is Bregman firmly nonexpansive;

  (3) \(\operatorname{Fix}({\operatorname{Res}_{f}}^{F})=EP(F)\);

  (4) \(EP(F)\) is a closed and convex subset of C;

  (5) For all \(x \in E\) and for all \(q \in \operatorname{Fix}({\operatorname{Res}_{f}}^{F})\),

    $$\begin{aligned} D_{f}\bigl(q, {\operatorname{Res}_{f}}^{F}x\bigr)+ D_{f}\bigl({\operatorname{Res}_{f}}^{F}x,x\bigr) \leq D_{f}(q,x). \end{aligned}$$

Theorem 4.2

Let C be a nonempty, closed, and convex subset of a reflexive Banach space E, and let \(f:E\to \mathbb{R}\) be a strongly coercive Legendre function which is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E. Let \(F_{i}:C \times C \to \mathbb{R}\), \(i=1,2,3,\dots,m\), be bifunctions satisfying conditions \((A1)\)–\((A4)\) such that \({\operatorname{Res}_{f}}^{F_{i}}\) is \(L_{i}\)-Lipschitz for \(1\leq i \leq m\). Assume \(\Gamma =\bigcap_{i=1}^{m}EP(F_{i}) \neq \emptyset \). Let a sequence \(\{x_{n}\}\) be generated by

$$\begin{aligned} \textstyle\begin{cases} x_{0},x_{1}\in C,\quad C=C_{1}; \\ w_{n}= x_{n}+\gamma _{n} (x_{n}- x_{n-1}); \\ y_{1n}=\nabla f^{*}(\beta _{n} \nabla fw_{n} + (1- \beta _{n}) \nabla f{\operatorname{Res}_{f}}^{F_{1}}w_{n}); \\ y_{in}= \nabla f^{*}(\beta _{n} \nabla f{\operatorname{Res}_{f}}^{F_{i-1}}w_{n}+ (1- \beta _{n})\nabla f{\operatorname{Res}_{f}}^{F_{i}}y_{(i-1)n}); \\ C_{in}=\{ v \in C_{n}: D_{f}(v,y_{in})\leq D_{f}(v,w_{n}) \}; \\ C_{n+1}=\bigcap_{i=1}^{{m}}C_{in}; \\ x_{n+1}={\Pi _{C_{n+1}}}^{f}x_{0}, \end{cases}\displaystyle \end{aligned}$$
(4.1)

where \(\{\gamma _{n}\},\{\beta _{n}\}\subset (a,b)\), \(0< a< b<1\), are sequences and \({\operatorname{Res}_{f}}^{F_{i}}\) are the resolvents of \(F_{i}\), \(i\in \{1,2,\dots,m\}\). Then the sequence \(\{x_{n}\}\) converges to \(z={\Pi _{\Gamma }}^{f}x_{0}\).

Proof

Putting \(S_{i}={\operatorname{Res}_{f}}^{F_{i}}\) in Theorem 3.1, we get the desired result. □

4.2 Application to the maximal monotone operator

A set-valued mapping \(B\subset E\times E^{*}\) with domain \(D(B)=\{x \in E: Bx \neq \emptyset \}\) and range \(R(B)=\cup \{Bx:x \in D(B)\}\) is said to be monotone if \(\langle x-y,x^{*}-y^{*}\rangle \geq 0 \) whenever \((x,x^{*}),(y,y^{*}) \in B\), see, for example, [2]. A monotone mapping \(B \subset E \times E^{*}\) is said to be maximal monotone if its graph \(G(B)=\{ (x,y): y \in Bx\}\) is not properly contained in the graph of any other monotone mapping. We know that if B is maximal monotone, then the zero of B, \(B^{-1}(0)=\{x \in E: 0 \in Bx\}\) is closed and convex. Define the resolvent of B, \({\operatorname{Res}_{B}}^{f}:E\to 2^{E}\) by

$$\begin{aligned} {\operatorname{Res}_{B}}^{f}x=(\nabla f +B)^{-1}\circ \nabla fx. \end{aligned}$$
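In the Hilbert space case with \(f=\frac{1}{2}\|\cdot\|^{2}\) we have \(\nabla f=I\), so this reduces to the classical resolvent \((I+B)^{-1}\); in particular, for \(B=\partial g\) with g proper, convex, and lower semicontinuous, it is the proximal mapping of g:

$$\begin{aligned} {\operatorname{Res}_{B}}^{f}x=(I+B)^{-1}x, \qquad {\operatorname{Res}_{\partial g}}^{f}x= \operatorname{arg\,min}_{y \in E} \biggl\{ g(y)+ \frac{1}{2} \Vert y-x \Vert ^{2} \biggr\} . \end{aligned}$$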

We know the following (see [5]):

  (1) \({\operatorname{Res}_{B}}^{f}\) is single-valued;

  (2) \(\operatorname{Fix}({\operatorname{Res}_{B}}^{f})=B^{-1}(0)\).

Lemma 4.3

([21])

Let \(B:E \to {2}^{E*}\) be a maximal monotone mapping such that \(B^{-1}(0) \neq \emptyset \). Then for all \(x \in E\) and \(q \in B^{-1}(0)\), we have

$$\begin{aligned} D_{f}\bigl(q, {\operatorname{Res}_{B}}^{f}x\bigr)+ D_{f}\bigl({\operatorname{Res}_{B}}^{f}x,x\bigr) \leq D_{f}(q,x). \end{aligned}$$

Theorem 4.4

Let C be a nonempty, closed, and convex subset of a reflexive Banach space E, and let \(f:E\to \mathbb{R}\) be a strongly coercive Legendre function which is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E. Let \(B_{i}:E\to 2^{E^{*}}\), \(i=1,2,3,\dots,m\), be maximal monotone operators such that \({\operatorname{Res}_{B_{i}}}^{f}\) is \(L_{i}\)-Lipschitz for \(1\leq i \leq m\). Assume \(\Gamma = \bigcap_{i=1}^{m} {B_{i}}^{-1}(0)\neq \emptyset \). Let a sequence \(\{x_{n}\}\) be generated by

$$\begin{aligned} \textstyle\begin{cases} x_{0},x_{1}\in C,\quad C=C_{1}; \\ w_{n}= x_{n}+\gamma _{n} (x_{n}- x_{n-1}); \\ y_{1n}=\nabla f^{*}(\beta _{n} \nabla fw_{n} + (1- \beta _{n}) \nabla f{\operatorname{Res}_{B_{1}}}^{f}w_{n}); \\ y_{in}= \nabla f^{*}(\beta _{n} \nabla f{\operatorname{Res}_{B_{i-1}}}^{f}w_{n}+ (1- \beta _{n})\nabla f{\operatorname{Res}_{B_{i}}}^{f}y_{(i-1)n}), \quad 2 \le i\le m; \\ C_{in}=\{ v \in C_{n}: D_{f}(v,y_{in})\leq D_{f}(v,w_{n}) \}; \\ C_{n+1}=\bigcap_{i=1}^{{m}}C_{in}; \\ x_{n+1}={\Pi _{C_{n+1}}}^{f}x_{0}, \end{cases}\displaystyle \end{aligned}$$
(4.2)

where \(\{\gamma _{n}\}, \{\beta _{n}\}\subset (a,b)\), \(0< a< b<1\), are sequences and \({\operatorname{Res}_{B_{i}}}^{f}\) are the resolvents of \(B_{i}\). Then the sequence \(\{x_{n}\}\) converges to a point \(z\in \Gamma \), where \(z={\Pi _{\Gamma }}^{f}x_{0}\).

Proof

Putting \(S_{i}={\operatorname{Res}_{B_{i}}}^{f}\) in Theorem 3.1, we get the desired result. □

Availability of data and materials

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

References

  1. Agarwal, R.P., O’Regan, D., Sahu, D.R.: Iterative construction of fixed points of nearly asymptotically nonexpansive mappings. J. Nonlinear Convex Anal. 8, 61–79 (2007)

  2. Alber, Y., Ryazantseva, I.: Nonlinear Ill-Posed Problems of Monotone Type. Springer, Dordrecht (2006)

  3. Asplund, E., Rockafellar, R.T.: Gradients of convex functions. Trans. Am. Math. Soc. 139, 443–467 (1969)

  4. Bauschke, H.H., Borwein, J.M., Combettes, P.L.: Essential smoothness, essential strict convexity, and Legendre functions in Banach spaces. Commun. Contemp. Math. 3, 615–647 (2001)

  5. Bauschke, H.H., Borwein, J.M., Combettes, P.L.: Bregman monotone optimization algorithms. SIAM J. Control Optim. 42, 596–636 (2003)

  6. Bonnans, J.F., Shapiro, A.: Perturbation Analysis of Optimization Problems. Springer, New York (2000)

  7. Bregman, L.M.: The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 7, 200–217 (1967)

  8. Butnariu, D., Resmerita, E.: Bregman distances, totally convex functions, and a method for solving operator equations in Banach spaces. Abstr. Appl. Anal. 2006, Article ID 84919 (2006)

  9. Butnariu, D., Iusem, A.N.: Totally Convex Functions for Fixed Point Computation and Infinite Dimensional Optimization. Kluwer Academic, Dordrecht (2000)

  10. Chidume, C.E., Ali, B.: Approximation of common fixed points for finite families of nonself asymptotically nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 326, 960–973 (2007)

  11. Chidume, C.E., Ali, B.: Weak and strong convergence theorems for finite families of asymptotically nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 330, 377–387 (2007)

  12. Hiriart-Urruty, J.B., Lemaréchal, C.: Convex Analysis and Minimization Algorithms II. Grundlehren der Mathematischen Wissenschaften, vol. 306. Springer, Berlin (1993)

  13. Kitkuan, D., Kumam, P., Moreno, J.M., Sitthithakerngkiet, K.: Inertial viscosity forward–backward splitting algorithm for monotone inclusions and its application to image restoration problems. Int. J. Comput. Math. https://doi.org/10.1080/00207160.2019.1649661

  14. Kohsaka, F., Takahashi, W.: Proximal point algorithms with Bregman functions in Banach spaces. J. Nonlinear Convex Anal. 6, 505–523 (2005)

  15. Kumam, P., Saluja, G.S., Nashine, H.K.: Convergence of modified S-iteration process for two asymptotically nonexpansive mappings in the intermediate sense in \(\mathit{CAT}(0)\) spaces. J. Inequal. Appl. 2014, 368 (2014)

  16. Naraghirad, E., Yao, J.C.: Bregman weak relatively nonexpansive mappings in Banach spaces. Fixed Point Theory Appl. 2013, 141 (2013)

  17. Phon-on, A., Makaje, N., Sama-Ae, A., Khongraphan, K.: An inertial S-iteration process. Fixed Point Theory Appl. (2019). https://doi.org/10.1186/s13663-019-0654-7

  18. Rehman, H., Kumam, P., Abubakar, A.B., Cho, Y.J.: The extragradient algorithm with inertial effects extended to equilibrium problems. Comput. Appl. Math. 39, Article ID 100 (2020). https://doi.org/10.1007/s40314-020-1093-0

  19. Rehman, H., Kumam, P., Kumam, W., Shutaywi, M., Jirakitpuwapat, W.: The inertial sub-gradient extra-gradient method for a class of pseudo-monotone equilibrium problems. Symmetry 12, 463 (2020). https://doi.org/10.3390/sym12030463

  20. Rehman, H., Kumam, P., Argyros, I.K., Deebani, W., Kumam, W.: Inertial extra-gradient method for solving a family of strongly pseudomonotone equilibrium problems in real Hilbert spaces with application in variational inequality problem. Symmetry 12, 503 (2020). https://doi.org/10.3390/sym12040503

  21. Reich, S., Sabach, S.: Two convergence theorems for a proximal method in reflexive Banach spaces. Numer. Funct. Anal. Optim. 31, 22–44 (2010)

  22. Reich, S., Sabach, S.: A strong convergence theorem for a proximal-type algorithm in reflexive Banach spaces. J. Nonlinear Convex Anal. 10, 471–485 (2009)

  23. Reich, S., Sabach, S.: Two strong convergence theorems for Bregman strongly nonexpansive operators in reflexive Banach spaces. Nonlinear Anal. 73, 122–135 (2010)

  24. Suparatulatorn, R., Cholamjiak, W., Suantai, S.: A modified S-iteration process for G-nonexpansive mappings in Banach spaces with graphs. Numer. Algorithms 77, 479–490 (2018)

  25. Zalinescu, C.: Convex Analysis in General Vector Spaces. World Scientific, River Edge (2002)


Acknowledgements

The authors appreciate the support of their institutions. They thank ACBF and AfDB for their financial support.

Funding

This work was supported by ACBF and AfDB Research Grant Funds to AUST.

Author information


Contributions

All authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Bashir Ali.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Ali, B., Adam, A.A. An inertial s-iteration process for a common fixed point of a family of quasi-Bregman nonexpansive mappings. Fixed Point Theory Algorithms Sci Eng 2022, 9 (2022). https://doi.org/10.1186/s13663-022-00719-6
