
Strong convergence theorems for Bregman W-mappings with applications to convex feasibility problems in Banach spaces

Abstract

In this paper we introduce new modified Mann iterative processes for computing fixed points of an infinite family of Bregman W-mappings in reflexive Banach spaces. Let \(W_{n}\) be the Bregman W-mapping generated by \(S_{n},S_{n-1},\ldots,S_{1}\) and \(\beta_{n,n},\beta_{n,n-1},\ldots,\beta_{n,1}\). We first express the set of fixed points of \(W_{n}\) as the intersection of the fixed points of \(\{S_{i}\}_{i=1}^{n}\). As a consequence, we show that \(W_{n}\) is a Bregman weak relatively nonexpansive mapping whenever \(S_{i}\) is a Bregman weak relatively nonexpansive mapping for each \(i=1,2,\ldots,n\). When specialized to the fixed point set of a Bregman nonexpansive type mapping T, the required sufficient condition \(\tilde{F}(T)=F(T)\) is less restrictive than the usual condition \(\hat{F}(T)=F(T)\), which is based on the demiclosedness principle. We then prove some strong convergence theorems for these mappings. An application of our results to the convex feasibility problem is also presented. Our results improve and generalize many known results in the current literature.

1 Introduction

Throughout this paper, we denote the set of real numbers and the set of positive integers by \(\mathbb{R}\) and \(\mathbb{N}\), respectively. Let E be a Banach space with the norm \(\Vert \cdot \Vert \) and the dual space \(E^{*}\). For any \(x\in E\), we denote the value of \(x^{*}\in E^{*}\) at x by \(\langle x,x^{*} \rangle\). For a sequence \(\{x_{n}\}_{n\in\mathbb{N}}\) in E, we denote the strong convergence of \(\{x_{n}\}_{n\in\mathbb{N}}\) to \(x\in E\) as \(n\to\infty\) by \(x_{n}\to x\) and the weak convergence by \(x_{n}\rightharpoonup x\). The modulus δ of convexity of E is defined by

$$\delta(\epsilon)=\inf \biggl\{ 1-\frac{\Vert x+y\Vert }{2}:\Vert x\Vert \leq1, \Vert y\Vert \leq 1,\Vert x-y\Vert \geq\epsilon \biggr\} $$

for every ϵ with \(0\leq\epsilon\leq2\). A Banach space E is said to be uniformly convex if \(\delta(\epsilon)>0\) for every \(\epsilon>0\). Let \(S_{E}=\{x\in E:\Vert x\Vert =1\}\). The norm of E is said to be Gâteaux differentiable if for each \(x,y\in S_{E}\), the limit

$$ \lim_{t\to 0}\frac{\Vert x+ty\Vert -\Vert x\Vert }{t} $$
(1.1)

exists. In this case, E is called smooth. If the limit (1.1) is attained uniformly for all \(x,y\in S_{E}\), then E is called uniformly smooth. The Banach space E is said to be strictly convex if \(\Vert \frac{x+y}{2}\Vert <1\) whenever \(x,y\in S_{E}\) and \(x\neq y\). It is well known that E is uniformly convex if and only if \(E^{*}\) is uniformly smooth. It is also known that if E is reflexive, then E is strictly convex if and only if \(E^{*}\) is smooth; for more details, see [1, 2].
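For instance, in a Hilbert space the parallelogram law yields the modulus of convexity in closed form, so every Hilbert space is uniformly convex:

$$\delta(\epsilon)=1-\sqrt{1-\frac{\epsilon^{2}}{4}}>0 \quad \mbox{for every } \epsilon\in(0,2]. $$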

Let C be a nonempty subset of E. Let \(T:C\to E\) be a mapping. We denote the set of fixed points of T by \(F(T)\), i.e., \(F(T)=\{x\in C:Tx=x\}\). A mapping \(T:C\to E\) is said to be nonexpansive if \(\Vert Tx-Ty\Vert \leq \Vert x-y\Vert \) for all \(x,y\in C\). A mapping \(T:C\to E\) is said to be quasi-nonexpansive if \(F(T)\neq\emptyset \) and \(\Vert Tx-y\Vert \leq \Vert x-y\Vert \) for all \(x\in C\) and \(y\in F(T)\). The mapping T is called closed if, for any sequence \(\{x_{n}\}_{n\in\mathbb{N}}\subset C\) with \(\lim_{n\to \infty}x_{n}=x_{0}\) and \(\lim_{n\to\infty}Tx_{n}=y_{0}\), we have \(Tx_{0}=y_{0}\). The concept of nonexpansivity plays an important role in the study of Mann-type iteration [3] for finding fixed points of a mapping \(T:C\to C\). Recall that the Mann-type iteration is given by the following formula:

$$ x_{n+1}=\gamma_{n}Tx_{n}+(1-\gamma_{n})x_{n}, \quad x_{1}\in C. $$
(1.2)

Here, \(\{\gamma_{n}\}_{n\in\Bbb{N}}\) is a sequence of real numbers in \([0,1]\) satisfying some appropriate conditions. The construction of fixed points of nonexpansive mappings via Mann’s algorithm [3] has been extensively investigated in the recent literature (see, for example, [4] and the references therein). In [4], Reich proved that the sequence \(\{x_{n}\}_{n\in\mathbb{N}}\) generated by Mann’s algorithm (1.2) converges weakly to a fixed point of T. However, the convergence of the sequence \(\{x_{n}\}_{n\in\mathbb{N}}\) generated by Mann’s algorithm (1.2) is in general not strong (see a counterexample in [5]; see also [6, 7]). Some attempts to modify the Mann iteration method (1.2) so that strong convergence is guaranteed have recently been made. Bauschke and Combettes [8] proposed a modification of the Mann iteration process for a single nonexpansive mapping T in a Hilbert space H. They proved that if the sequence \(\{\alpha_{n}\}_{n\in\mathbb{N}}\) is bounded away from one, then the sequence \(\{x_{n}\}_{n\in\mathbb{N}}\) generated by their modified process converges strongly to a fixed point of T; see also Nakajo and Takahashi [9].
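The averaging in (1.2) matters even in the simplest settings. For example, for the nonexpansive mapping \(Tx=-x\) on \(\mathbb{R}\), the Picard iterates \(x_{n+1}=Tx_{n}\) oscillate between \(\pm x_{1}\), whereas Mann’s scheme with \(\gamma_{n}\equiv\frac{1}{2}\) reaches the unique fixed point immediately:

$$x_{n+1}=\tfrac{1}{2}Tx_{n}+\tfrac{1}{2}x_{n}=\tfrac{1}{2}(-x_{n})+\tfrac{1}{2}x_{n}=0. $$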

Let E be a smooth, strictly convex and reflexive Banach space and let J be the normalized duality mapping of E. Let C be a nonempty, closed and convex subset of E. The generalized projection \(\Pi_{C}\) from E onto C is defined and denoted by

$$ \Pi_{C}(x)=\mathop{\arg\min}\limits _{y\in C}\phi(y,x), $$
(1.3)

where \(\phi(x,y)=\Vert x\Vert ^{2}-2\langle x,Jy \rangle+\Vert y\Vert ^{2}\). For more details, see [10].
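In the special case of a Hilbert space H, the normalized duality mapping J is the identity, so

$$\phi(x,y)=\Vert x\Vert ^{2}-2\langle x,y \rangle+\Vert y\Vert ^{2}=\Vert x-y\Vert ^{2} \quad \mbox{and}\quad \Pi_{C}=P_{C}, $$

the metric projection onto C; the generalized projection is thus the natural Banach space analogue of the nearest point projection.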

Let C be a nonempty, closed and convex subset of a smooth Banach space E, let T be a mapping from C into itself. A point \(p\in C\) is said to be an asymptotic fixed point [11] of T if there exists a sequence \(\{x_{n}\}_{n\in\mathbb{N}}\) in C which converges weakly to p and \(\lim_{n\to\infty} \Vert x_{n} -Tx_{n}\Vert =0\). We denote the set of all asymptotic fixed points of T by \(\hat{F}(T)\). A point \(p\in C\) is called a strong asymptotic fixed point of T if there exists a sequence \(\{x_{n}\}_{n\in\mathbb{N}}\) in C which converges strongly to p and \(\lim_{n\to\infty} \Vert x_{n}-Tx_{n}\Vert =0\). We denote the set of all strong asymptotic fixed points of T by \(\tilde{F}(T)\).

Following Matsushita and Takahashi [12], a mapping \(T:C\to C\) is said to be relatively nonexpansive if the following conditions are satisfied:

  1. (1)

    \(F(T)\) is nonempty;

  2. (2)

    \(\phi(u,Tx)\leq\phi(u,x)\), \(\forall u\in F(T)\), \(x\in C\);

  3. (3)

    \(\hat{F}(T)=F(T)\).

In 2005, Matsushita and Takahashi [12] proved the following strong convergence theorem for relatively nonexpansive mappings in a Banach space.

Theorem 1.1

Let E be a uniformly convex and uniformly smooth Banach space, let C be a nonempty, closed and convex subset of E, let T be a relatively nonexpansive mapping from C into itself, and let \(\{\alpha_{n}\}_{n\in\mathbb{N}}\) be a sequence of real numbers such that \(0\leq\alpha_{n}<1\) and \(\limsup_{n\to\infty}\alpha_{n}<1\). Suppose that \(\{x_{n}\}_{n\in\mathbb{N}}\) is given by

$$ \textstyle\begin{cases} x_{0}=x\in C,\\ y_{n}=J^{-1}(\alpha_{n}Jx_{n}+(1-\alpha_{n}) JTx_{n}),\\ H_{n}=\{z\in C:\phi(z,y_{n})\leq\phi(z,x_{n})\},\\ W_{n}=\{z\in C:\langle x_{n}-z,Jx-Jx_{n} \rangle\geq0\},\\ x_{n+1}=\Pi_{H_{n}\cap W_{n}}x. \end{cases} $$
(1.4)

If \(F(T)\) is nonempty, then \(\{x_{n}\}_{n\in\mathbb{N}}\) converges strongly to \(\Pi_{F(T)}x\).

1.1 Some facts about gradients

For any convex function \(g:E\to(-\infty,+\infty]\), we denote the domain of g by \(\operatorname {dom}g=\{x\in E:g(x)<\infty\}\). For any \(x\in \operatorname {int}\operatorname {dom}g\) and any \(y\in E\), the right-hand derivative of g at x in the direction y is defined by

$$ g^{o}(x,y)=\lim_{t\downarrow 0}\frac{g(x+ty)-g(x)}{t}. $$
(1.5)

The function g is said to be Gâteaux differentiable at x if \(\lim_{t\to 0}\frac{g(x+ty)-g(x)}{t}\) exists for any y. In this case \(g^{o}(x,y)\) coincides with \(\langle y,\nabla g(x) \rangle\), where \(\nabla g(x)\) is the value of the gradient ∇g of g at x. The function g is said to be Gâteaux differentiable if it is Gâteaux differentiable everywhere. The function g is said to be Fréchet differentiable at x if this limit is attained uniformly in \(\Vert y\Vert =1\). The function g is said to be Fréchet differentiable if it is Fréchet differentiable everywhere. It is well known that if a continuous convex function \(g:E\to\mathbb{R}\) is Gâteaux differentiable, then ∇g is norm-to-weak\(^{*}\) continuous (see, for example, [13]). Also, it is known that if g is Fréchet differentiable, then ∇g is norm-to-norm continuous (see [13]). The mapping ∇g is said to be weakly sequentially continuous if \(x_{n}\rightharpoonup x\) as \(n\to\infty\) implies that \(\nabla g(x_{n})\rightharpoonup^{*} \nabla g (x)\) as \(n\to\infty\) (for more details, see [13] or [14]). The function g is said to be strongly coercive if

$$\lim_{\Vert x_{n}\Vert \to\infty} \frac{g(x_{n})}{\Vert x_{n}\Vert }= \infty. $$

It is also said to be bounded on bounded subsets of E if \(g(U)\) is bounded for each bounded subset U of E. Finally, g is said to be uniformly Fréchet differentiable on a subset X of E if the limit (1.5) is attained uniformly for all \(x\in X\) and \(\Vert y\Vert =1\).
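For example, the function \(g(x)=\frac{1}{2}\Vert x\Vert ^{2}\) is strongly coercive and bounded on bounded subsets of E, since

$$\frac{g(x)}{\Vert x\Vert }=\frac{\Vert x\Vert }{2}\to\infty \quad \mbox{as } \Vert x\Vert \to\infty \quad \mbox{and}\quad \sup_{x\in U}g(x)=\tfrac{1}{2}\sup_{x\in U}\Vert x\Vert ^{2}< \infty $$

for every bounded subset U of E.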

Let \(A:E\to2^{E^{*}}\) be a set-valued mapping. We define the domain and range of A by \(\operatorname {dom}A=\{x\in E:Ax\neq{\O}\}\) and \(\operatorname {ran}A=\bigcup_{x\in E} Ax\), respectively. The graph of A is denoted by \(G(A)=\{(x,x^{*})\in E\times E^{*}:x^{*}\in Ax\}\). The mapping \(A\subset E\times E^{*}\) is said to be monotone [15] if \(\langle x-y,x^{*}-y^{*} \rangle\geq0\) whenever \((x,x^{*}),(y,y^{*})\in A\). It is also said to be maximal monotone [16] if its graph is not contained in the graph of any other monotone operator on E. If \(A\subset E\times E^{*}\) is maximal monotone, then we can show that the set \(A^{-1}0=\{z\in E:0\in Az\}\) is closed and convex.

1.2 Some facts about Legendre functions

Let E be a reflexive Banach space. For any proper, lower semicontinuous and convex function \(g:E\to(-\infty,+\infty]\), the conjugate function \(g^{*}\) of g is defined by

$$g^{*}\bigl(x^{*}\bigr)=\sup_{x\in E}\bigl\{ \bigl\langle x,x^{*} \bigr\rangle -g(x)\bigr\} $$

for all \(x^{*}\in E^{*}\). It is well known that \(g(x)+g^{*}(x^{*})\geq\langle x,x^{*} \rangle\) for all \((x,x^{*})\in E\times E^{*}\). It is also known that \(x^{*}\in\partial g(x)\) is equivalent to

$$ g(x)+g^{*}\bigl(x^{*}\bigr)=\bigl\langle x,x^{*} \bigr\rangle . $$
(1.6)

Here, ∂g is the subdifferential of g [17, 18]. We also know that if \(g:E\to(-\infty,+\infty]\) is a proper, lower semicontinuous and convex function, then \(g^{*}:E^{*}\to (-\infty,+\infty]\) is a proper, weak\(^{*}\) lower semicontinuous and convex function; see [2] for more details on convex analysis.
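For instance, in a Hilbert space H with \(g(x)=\frac{1}{2}\Vert x\Vert ^{2}\), a direct computation gives

$$g^{*}\bigl(x^{*}\bigr)=\sup_{x\in H}\bigl\{ \bigl\langle x,x^{*} \bigr\rangle -\tfrac{1}{2}\Vert x\Vert ^{2}\bigr\} =\tfrac{1}{2}\bigl\Vert x^{*}\bigr\Vert ^{2}, $$

the supremum being attained at \(x=x^{*}\); thus equality (1.6) holds precisely when \(x^{*}=\nabla g(x)=x\).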

Let \(g:E\to(-\infty,+\infty]\) be a mapping. The function g is said to be:

  1. (i)

    Essentially smooth if ∂g is both locally bounded and single-valued on its domain.

  2. (ii)

    Essentially strictly convex if \((\partial g)^{-1}\) is locally bounded on its domain and g is strictly convex on every convex subset of \(\operatorname {dom}\partial g\).

  3. (iii)

    Legendre if it is both essentially smooth and essentially strictly convex (for more details, we refer to [19]).

If E is a reflexive Banach space and \(g:E\to(-\infty,+\infty]\) is a Legendre function, then in view of [20]

$$\nabla g^{*}=(\nabla g)^{-1}, \qquad \operatorname {ran}\nabla g=\operatorname {dom}\nabla g^{*}=\operatorname {int}\operatorname {dom}g^{*},\quad \mbox{and} \quad \operatorname {ran}\nabla g^{*}=\operatorname {dom}\nabla g=\operatorname {int}\operatorname {dom}g. $$

Examples of Legendre functions are given in [21, 22]. The most notable example of a Legendre function is \(\frac{1}{s}\Vert \cdot \Vert ^{s}\) (\(1< s<\infty\)), where the Banach space E is smooth and strictly convex and, in particular, a Hilbert space.

1.3 Some facts about Bregman distances

Let E be a Banach space and let \(E^{*}\) be the dual space of E. Let \(g:E\to\mathbb{R}\) be a convex and Gâteaux differentiable function. Then the Bregman distance [23, 24] corresponding to g is the function \(D_{g}:E\times E\to \mathbb{R}\) defined by

$$ D_{g}(x,y)=g(x)-g(y)-\bigl\langle x-y,\nabla g(y) \bigr\rangle , \quad \forall x,y\in E. $$
(1.7)

It is clear that \(D_{g}(x,y)\geq0\) for all \(x,y\in E\). It is well known [25] that for \(x\in E\) and \(x_{0}\in C\), \(D_{g}(x_{0},x)=\min_{y\in C}D_{g}(y,x)\) if and only if

$$ \bigl\langle y-x_{0},\nabla g (x)-\nabla g(x_{0}) \bigr\rangle \leq0,\quad \forall y\in C. $$
(1.8)

In the particular case when E is a smooth Banach space, setting \(g(x)=\Vert x\Vert ^{2}\) for all \(x\in E\), we obtain that \(\nabla g(x)=2Jx\) for all \(x\in E\) and hence \(D_{g}(x,y)=\phi(x,y)\) for all \(x,y\in E\).
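Another classical example: on the positive orthant of \(\mathbb{R}^{n}\), the negative Boltzmann–Shannon entropy \(g(x)=\sum_{i=1}^{n}x_{i}\log x_{i}\) has \(\nabla g(y)_{i}=\log y_{i}+1\), and (1.7) yields the generalized Kullback–Leibler divergence

$$D_{g}(x,y)=\sum_{i=1}^{n} \biggl(x_{i}\log\frac{x_{i}}{y_{i}}-x_{i}+y_{i} \biggr), $$

which is neither symmetric nor satisfies a triangle inequality; this illustrates why \(D_{g}\) is called a distance only by analogy.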

A Bregman projection [13, 23] of \(x\in \operatorname {int}(\operatorname {dom}g)\) onto the nonempty, closed and convex set \(C\subset \operatorname {dom}g\) is the unique vector \(\operatorname {proj}^{g}_{C}(x):=x_{0}\in C\) satisfying

$$D_{g}(x_{0},x)=\min_{y\in C}D_{g}(y,x). $$

It is well known that \(\operatorname {proj}^{g}_{C}\) has the following property:

$$ D_{g} \bigl(y,\operatorname {proj}^{g}_{C}x \bigr)+D_{g} \bigl(\operatorname {proj}^{g}_{C}x,x \bigr)\leq D_{g}(y,x) $$
(1.9)

for all \(y\in C\) and \(x\in E\) (see [13] for more details).

1.4 Some facts about uniformly convex functions

Let E be a Banach space and let \(B_{r}:=\{z\in E:\Vert z\Vert \leq r\}\) for all \(r>0\). Then a function \(g:E\to\mathbb{R}\) is said to be uniformly convex on bounded subsets of E [26] if \(\rho_{r}(t)>0\) for all \(r,t>0\), where \(\rho_{r}:[0,+\infty)\to[0,\infty]\) is defined by

$$ \rho_{r}(t)=\inf_{x,y\in B_{r},\Vert x-y\Vert =t,\alpha\in(0,1)} \frac{\alpha g(x)+(1-\alpha)g(y)-g(\alpha x+(1-\alpha)y)}{\alpha(1-\alpha)} $$
(1.10)

for all \(t\geq0\). The function \(\rho_{r}\) is called the gauge of uniform convexity of g. The function g is also said to be uniformly smooth on bounded subsets of E [26] if \(\lim_{t\downarrow 0}\frac{\sigma_{r}(t)}{t}=0\) for all \(r>0\), where \(\sigma_{r}:[0,+\infty)\to[0,\infty]\) is defined by

$$\sigma_{r}(t)=\sup_{x\in B_{r},y\in S_{E},\alpha\in(0,1)} \frac{\alpha g(x+(1-\alpha)ty)+(1-\alpha)g(x-\alpha ty)-g(x)}{\alpha(1-\alpha)} $$

for all \(t\geq0\). The function g is said to be uniformly convex if the function \(\delta_{g}:[0,+\infty)\to[0,+\infty]\), defined by

$$\delta_{g}(t):=\inf \biggl\{ \frac{1}{2}g(x)+ \frac{1}{2}g(y)-g \biggl(\frac {x+y}{2} \biggr):\Vert y-x\Vert =t \biggr\} , $$

satisfies \(\delta_{g}(t)>0\) for every \(t>0\).
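For example, for \(g(x)=\frac{1}{2}\Vert x\Vert ^{2}\) on a Hilbert space, the identity \(\alpha g(x)+(1-\alpha)g(y)-g(\alpha x+(1-\alpha)y)=\frac{\alpha(1-\alpha)}{2}\Vert x-y\Vert ^{2}\) shows that the gauge of uniform convexity is

$$\rho_{r}(t)=\frac{t^{2}}{2} \quad \mbox{for all } r,t>0, $$

so g is uniformly convex on bounded subsets of E, with gauge independent of r.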

1.5 Some facts about resolvents

Let E be a reflexive Banach space with the dual space \(E^{*}\) and let \(g:E\to(-\infty,+\infty]\) be a proper, lower semicontinuous and convex function. Let A be a maximal monotone operator from E to \(E^{*}\). For any \(r>0\), let the mapping \(\operatorname {Res}^{g}_{rA}:E\to \operatorname {dom}A\) be defined by

$$\operatorname {Res}^{g}_{rA}=(\nabla g+rA)^{-1}\nabla g. $$

The mapping \(\operatorname {Res}^{g}_{rA}\) is called the g-resolvent of A (see [27]). It is well known that \(A^{-1}(0)=F (\operatorname {Res}^{g}_{rA} )\) for each \(r>0\) (for more details, see, for example, [1]).

Examples and some important properties of such operators are discussed in [28].
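In the classical setting of a Hilbert space H with \(g(x)=\frac{1}{2}\Vert x\Vert ^{2}\), we have \(\nabla g=I\) and the g-resolvent becomes the usual resolvent of A:

$$\operatorname {Res}^{g}_{rA}=(I+rA)^{-1}, \qquad x=\operatorname {Res}^{g}_{rA}x \quad \Longleftrightarrow\quad 0\in Ax, $$

which makes the identity \(A^{-1}(0)=F (\operatorname {Res}^{g}_{rA} )\) transparent.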

1.6 Some facts about Bregman quasi-nonexpansive mappings

Let C be a nonempty, closed and convex subset of a reflexive Banach space E. Let \(g:E\to(-\infty,+\infty]\) be a proper, lower semicontinuous and convex function. Recall that a mapping \(T:C\to C\) is said to be Bregman quasi-nonexpansive if \(F(T)\neq{\O}\) and

$$D_{g}(p,Tx)\leq D_{g}(p,x),\quad \forall x\in C, p\in F(T). $$

A mapping \(T:C\to C\) is said to be Bregman relatively nonexpansive if the following conditions are satisfied:

  1. (1)

    \(F(T)\) is nonempty;

  2. (2)

    \(D_{g}(p,Tv)\leq D_{g}(p,v)\), \(\forall p\in F(T)\), \(v\in C\);

  3. (3)

    \(\hat{F}(T)=F(T)\).

A mapping \(T:C\to C\) is said to be Bregman weak relatively nonexpansive if the following conditions are satisfied:

  1. (1)

    \(F(T)\) is nonempty;

  2. (2)

    \(D_{g}(p,Tv)\leq D_{g}(p,v)\), \(\forall p\in F(T)\), \(v\in C\);

  3. (3)

    \(\tilde{F}(T)=F(T)\).

It is clear that any Bregman relatively nonexpansive mapping is a Bregman quasi-nonexpansive mapping. It is also obvious that every Bregman relatively nonexpansive mapping is a Bregman weak relatively nonexpansive mapping, but the converse is not true in general; see, for example, [29]. Indeed, for any mapping \(T:C\to C\), we have \(F(T)\subset\tilde{F}(T)\subset\hat{F}(T)\). If T is Bregman relatively nonexpansive, then \(F(T)=\tilde{F}(T)=\hat{F}(T)\).

The concept of W-mapping was first introduced by Atsushiba and Takahashi [30] in 1999 and ever since has been extensively investigated for a finite family of mappings (see [31] and the references therein). Now, we are in a position to introduce the concept of Bregman W-mapping in a Banach space. Let C be a nonempty, closed and convex subset of a reflexive Banach space E. Let \(\{S_{n}\}_{n\in\Bbb{N}}\) be an infinite family of Bregman weak relatively nonexpansive mappings of C into itself, and let \(\{\beta_{n,k}:k,n\in\Bbb{N}, 1\leq k\leq n\}\) be a sequence of real numbers such that \(0\leq \beta_{i,j}\leq1\) for every \(i,j\in\Bbb{N}\) with \(i\geq j\). Then, for any \(n\in\Bbb{N}\), we define a mapping \(W_{n}\) of C into itself as follows:

$$\begin{aligned}& U_{n,n+1}x=x, \\& U_{n,n}x=\operatorname {proj}^{g}_{C}\bigl(\nabla g^{*}\bigl[ \beta_{n,n} \nabla g(S_{n}U_{n,n+1}x)+(1- \beta_{n,n})\nabla g(x)\bigr]\bigr), \\& U_{n,n-1}x=\operatorname {proj}^{g}_{C}\bigl(\nabla g^{*}\bigl[ \beta_{n,n-1} \nabla g(S_{n-1}U_{n,n}x)+(1- \beta_{n,n-1})\nabla g(x)\bigr]\bigr), \\& \vdots \\& U_{n,k}x=\operatorname {proj}^{g}_{C}\bigl(\nabla g^{*}\bigl[ \beta_{n,k} \nabla g(S_{k}U_{n,k+1}x)+(1- \beta_{n,k})\nabla g(x)\bigr]\bigr), \\& \vdots \\& U_{n,2}x=\operatorname {proj}^{g}_{C}\bigl(\nabla g^{*}\bigl[ \beta_{n,2} \nabla g(S_{2}U_{n,3}x)+(1- \beta_{n,2})\nabla g(x)\bigr]\bigr), \\& W_{n}x=U_{n,1}x=\nabla g^{*}\bigl[\beta_{n,1} \nabla g(S_{1}U_{n,2}x)+(1-\beta_{n,1})\nabla g(x)\bigr] \end{aligned}$$

for all \(x\in C\), where \(\operatorname {proj}^{g}_{C}\) is the Bregman projection from E onto C. Such a mapping \(W_{n}\) is called the Bregman W-mapping generated by \(S_{n},S_{n-1},\ldots,S_{1}\) and \(\beta_{n,n},\beta_{n,n-1},\ldots,\beta_{n,1}\).
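To see what this construction produces in the classical setting, note that in a Hilbert space with \(g(x)=\frac{1}{2}\Vert x\Vert ^{2}\) we have \(\nabla g=\nabla g^{*}=I\) and \(\operatorname {proj}^{g}_{C}=P_{C}\), so the scheme reduces to

$$U_{n,k}x=P_{C}\bigl(\beta_{n,k}S_{k}U_{n,k+1}x+(1-\beta_{n,k})x\bigr), \qquad W_{n}x=\beta_{n,1}S_{1}U_{n,2}x+(1-\beta_{n,1})x, $$

which recovers, up to the projections \(P_{C}\), the classical W-mapping of Atsushiba and Takahashi [30].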

The theory of fixed points with respect to Bregman distances has been studied in the last ten years and more intensively in the last four years. For some recent articles on the existence of fixed points for Bregman nonexpansive type mappings, we refer the readers to [20–22, 27, 28]. It is worth mentioning, however, that in all the above results for Bregman nonexpansive type mappings, the assumption \(\hat{F}(T)=F(T)\) is imposed on the map T. So, the following question arises naturally in a Banach space setting.

Question 1.1

Is it possible to obtain strong convergence of modified Mann-type schemes to a common fixed point of an infinite family of Bregman weak relatively nonexpansive mappings \(\{S_{j}\}_{j\in \mathbb{N}}\) without imposing the assumption \(\hat{F}(S_{j})=F(S_{j})\) on \(S_{j}\)?

In this paper we introduce new modified Mann iterative processes for computing fixed points of an infinite family of Bregman W-mappings in reflexive Banach spaces. Let \(W_{n}\) be the Bregman W-mapping generated by \(S_{n},S_{n-1},\ldots,S_{1}\) and \(\beta_{n,n},\beta_{n,n-1},\ldots,\beta_{n,1}\). We first express the set of fixed points of \(W_{n}\) as the intersection of the fixed points of \(\{S_{i}\}_{i=1}^{n}\). As a consequence, we show that \(W_{n}\) is a Bregman weak relatively nonexpansive mapping whenever \(S_{i}\) is a Bregman weak relatively nonexpansive mapping for each \(i=1,2,\ldots,n\). We then prove some strong convergence theorems for these mappings. An application of our results to the convex feasibility problem is also presented. No assumption \(\hat{F}(T)=F(T)\) is imposed on the mapping T. Consequently, the above question is answered in the affirmative in a reflexive Banach space setting. Our results improve and generalize many known results in the current literature; see, for example, [8, 9, 12, 30–33].

2 Preliminaries

In this section, we begin by recalling some preliminaries and lemmas which will be used in the sequel.

The following definition is slightly different from that in Butnariu and Iusem [13].

Definition 2.1

([14])

Let E be a Banach space. The function \(g:E\to\mathbb{R}\) is said to be a Bregman function if the following conditions are satisfied:

  1. (1)

    g is continuous, strictly convex and Gâteaux differentiable;

  2. (2)

    the set \(\{y\in E:D_{g}(x,y)\leq r\}\) is bounded for all \(x\in E\) and \(r>0\).

The following lemma follows from Butnariu and Iusem [13] and Zălinescu [26].

Lemma 2.1

Let E be a reflexive Banach space and \(g:E\to\mathbb{R}\) be a strongly coercive Bregman function. Then

  1. (1)

    \(\nabla g:E\to E^{*}\) is one-to-one, onto and norm-to-weak continuous;

  2. (2)

    \(\langle x-y,\nabla g(x)-\nabla g(y) \rangle=0\) if and only if \(x=y\);

  3. (3)

    \(\{x\in E:D_{g}(x,y)\leq r\}\) is bounded for all \(y\in E\) and \(r>0\);

  4. (4)

    \(\operatorname {dom}g^{*}=E^{*}, g^{*}\) is Gâteaux differentiable and \(\nabla g^{*}=(\nabla g)^{-1}\).

We know the following two results from [26].

Theorem 2.1

Let E be a reflexive Banach space and \(g:E\to\mathbb{R}\) be a convex function which is bounded on bounded subsets of E. Then the following assertions are equivalent:

  1. (1)

    g is strongly coercive and uniformly convex on bounded subsets of E;

  2. (2)

    \(\operatorname {dom}g^{*}=E^{*}, g^{*}\) is bounded on bounded subsets and uniformly smooth on bounded subsets of \(E^{*}\);

  3. (3)

    \(\operatorname {dom}g^{*}=E^{*}, g^{*}\) is Fréchet differentiable and \(\nabla g^{*}\) is uniformly norm-to-norm continuous on bounded subsets of \(E^{*}\).

Theorem 2.2

Let E be a reflexive Banach space and \(g:E\to\mathbb{R}\) be a continuous convex function which is strongly coercive. Then the following assertions are equivalent:

  1. (1)

    g is bounded on bounded subsets and uniformly smooth on bounded subsets of E;

  2. (2)

    \(g^{*}\) is Fréchet differentiable and \(\nabla g^{*}\) is uniformly norm-to-norm continuous on bounded subsets of \(E^{*}\);

  3. (3)

    \(\operatorname {dom}g^{*}=E^{*}, g^{*}\) is strongly coercive and uniformly convex on bounded subsets of \(E^{*}\).

Let E be a Banach space and let \(g:E\to\mathbb{R}\) be a convex and Gâteaux differentiable function. Then the Bregman distance [34] (see also [23, 24]) satisfies the three point identity, that is,

$$ D_{g}(x,z)=D_{g}(x,y)+D_{g}(y,z)+\bigl\langle x-y, \nabla g(y)-\nabla g (z) \bigr\rangle ,\quad \forall x,y,z\in E. $$
(2.1)

In particular, it can be easily seen that

$$ D_{g}(x,y)=-D_{g}(y,x)+\bigl\langle y-x,\nabla g(y)-\nabla g (x) \bigr\rangle ,\quad \forall x,y\in E. $$
(2.2)
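Both identities follow by expanding (1.7). For instance, for (2.1),

$$D_{g}(x,y)+D_{g}(y,z)=g(x)-g(z)-\bigl\langle x-z,\nabla g(z) \bigr\rangle +\bigl\langle x-y,\nabla g(z)-\nabla g(y) \bigr\rangle =D_{g}(x,z)-\bigl\langle x-y,\nabla g(y)-\nabla g(z) \bigr\rangle , $$

and rearranging gives (2.1).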

The following result was proved in [29].

Lemma 2.2

Let E be a Banach space and \(g:E\to \mathbb{R}\) be a Gâteaux differentiable function which is uniformly convex on bounded subsets of E. Let \(\{x_{n}\}_{n\in \mathbb{N}}\) and \(\{y_{n}\}_{n\in\mathbb{N}}\) be bounded sequences in E. Then

$$\lim_{n\to\infty}D_{g}(x_{n},y_{n})=0 \quad \Longleftrightarrow\quad \lim_{n\to\infty} \Vert x_{n}-y_{n} \Vert =0. $$

The following result was first proved in [19] (see also [14]).

Lemma 2.3

Let E be a reflexive Banach space, \(g:E\to\mathbb{R}\) be a strongly coercive Bregman function and \(V_{g}\) be the function defined by

$$V_{g}\bigl(x,x^{*}\bigr)=g(x)-\bigl\langle x,x^{*} \bigr\rangle +g^{*} \bigl(x^{*}\bigr),\quad x\in E, x^{*}\in E^{*}. $$

Then the following assertions hold:

  1. (1)

    \(D_{g}(x,\nabla g^{*}(x^{*}))=V_{g}(x,x^{*})\) for all \(x\in E\) and \(x^{*}\in E^{*}\).

  2. (2)

    \(V_{g}(x,x^{*})+\langle\nabla g^{*}(x^{*})-x,y^{*} \rangle\leq V_{g}(x,x^{*}+y^{*})\) for all \(x\in E\) and \(x^{*},y^{*}\in E^{*}\).

The following result was proved in [29].

Lemma 2.4

Let E be a Banach space, \(r>0\) be a constant and \(g:E\to\mathbb{R}\) be a convex function which is uniformly convex on bounded subsets of E, and let \(\rho_{r}\) be the gauge of uniform convexity of g. Then

  1. (i)

    For any \(x,y\in B_{r}\) and \(\alpha\in(0,1)\),

    $$g\bigl(\alpha x+(1-\alpha)y\bigr)\leq \alpha g(x)+(1-\alpha)g(y)-\alpha(1- \alpha)\rho_{r}\bigl(\Vert x-y\Vert \bigr). $$
  2. (ii)

    For any \(x,y\in B_{r}\),

    $$\rho_{r}\bigl(\Vert x-y\Vert \bigr)\leq D_{g}(x,y). $$
  3. (iii)

    If, in addition, g is bounded on bounded subsets and uniformly convex on bounded subsets of E then, for any \(x\in E\), \(y^{*},z^{*}\in B_{r}\) and \(\alpha\in(0,1)\),

    $$V_{g}\bigl(x,\alpha y^{*}+(1-\alpha)z^{*}\bigr)\leq\alpha V_{g} \bigl(x,y^{*}\bigr)+(1-\alpha)V_{g}\bigl(x,z^{*}\bigr)-\alpha(1-\alpha) \rho^{*}_{r}\bigl(\bigl\Vert y^{*}-z^{*}\bigr\Vert \bigr). $$

The following result was proved in [29].

Lemma 2.5

Let E be a Banach space, \(r>0\) be a constant and \(g:E\to\mathbb{R}\) be a convex function which is uniformly convex on bounded subsets of E. Then

$$g \Biggl(\sum_{k=0}^{n} \alpha_{k} x_{k} \Biggr)\leq\sum_{k=0}^{n} \alpha_{k} g(x_{k})-\alpha_{i}\alpha_{j} \rho_{r}\bigl(\Vert x_{i}-x_{j}\Vert \bigr) $$

for all \(i,j\in\{0,1,2,\ldots,n\}\), \(x_{k}\in B_{r}\), \(\alpha_{k}\in (0,1)\) and \(k=0,1,2,\ldots,n\) with \(\sum_{k=0}^{n}\alpha_{k}=1\), where \(\rho_{r}\) is the gauge of uniform convexity of g.

Now we prove the following important result.

Proposition 2.1

Let E be a reflexive Banach space and \(g:E\to\mathbb{R}\) be a convex, continuous, strongly coercive and Gâteaux differentiable function which is bounded on bounded subsets and uniformly convex on bounded subsets of E. Let C be a nonempty, closed and convex subset of E. Let \(S_{1},S_{2},\ldots,S_{n}\) be Bregman weak relatively nonexpansive mappings of C into itself such that \(\bigcap_{i=1}^{n}F(S_{i})\neq{\O}\), and let \(\{\beta_{n,k}:k,n\in\Bbb{N}, 1\leq k\leq n\}\) be a sequence of real numbers such that \(0< \beta_{n,1}\leq1\) and \(0<\beta_{n,i}<1\) for every \(i=2,3,\ldots,n\). Let \(W_{n}\) be the Bregman W-mapping generated by \(S_{n},S_{n-1},\ldots,S_{1}\) and \(\beta_{n,n},\beta_{n,n-1},\ldots,\beta_{n,1}\). Then the following assertions hold:

  1. (i)

    \(F(W_{n})=\bigcap_{i=1}^{n}F(S_{i})\);

  2. (ii)

    for every \(k=1,2,\ldots,n\), \(x\in C\) and \(z\in F(W_{n})\), \(D_{g}(z,U_{n,k}x)\leq D_{g}(z,x)\) and \(D_{g}(z,S_{k}U_{n,k+1}x)\leq D_{g}(z,x)\);

  3. (iii)

    for every \(n\in\Bbb{N}\), \(W_{n}\) is a Bregman weak relatively nonexpansive mapping.

Proof

(i) It is clear that \(\bigcap_{i=1}^{n}F(S_{i})\subset F(W_{n})\). For the converse inclusion, take any \(w\in \bigcap_{i=1}^{n}F(S_{i})\) and \(z\in F(W_{n})\).

Let \(r_{1}=\sup\{\Vert \nabla g(z)\Vert ,\Vert \nabla g(S_{k}z)\Vert ,\Vert \nabla g(S_{k}U_{n,k+1}z)\Vert :k=1,2,\ldots,n\}\) and \(\rho^{*}_{r_{1}}:E^{*}\to \mathbb{R}\) be the gauge of uniform convexity of the conjugate function \(g^{*}\). In view of (1.9) and Lemma 2.4, we obtain

$$\begin{aligned} D_{g}(w,z)={} & D_{g}(w,W_{n}z) \\ ={}&D_{g}\bigl(w,\nabla g^{*}\bigl[\beta _{n,1}\nabla g(S_{1}U_{n,2}z) +(1-\beta_{n,1})\nabla g(z)\bigr] \bigr) \\ ={}& g(w)-\bigl\langle w,\beta_{n,1}\nabla g(S_{1}U_{n,2}z)+(1-\beta_{n,1})\nabla g(z) \bigr\rangle \\ & {}+g^{*}\bigl(\beta_{n,1}\nabla g(S_{1}U_{n,2}z)+(1- \beta_{n,1})\nabla g(z)\bigr) \\ \leq {}&\beta_{n,1}g(w)+(1-\beta_{n,1}) g(w)+ \beta_{n,1}g^{*}\bigl(\nabla g(S_{1}U_{n,2}z)\bigr)+(1- \beta_{n,1})g^{*}\bigl(\nabla g(z)\bigr) \\ &{} -\beta_{n,1}(1-\beta_{n,1})\rho^{*}_{r_{1}}\bigl( \bigl\Vert \nabla g(S_{1}U_{n,2}z)-\nabla g(z)\bigr\Vert \bigr) \\ ={}&\beta_{n,1}V_{g}\bigl(w,\nabla g(S_{1}U_{n,2}z) \bigr)+(1-\beta_{n,1})V_{g}\bigl(w,\nabla g(z)\bigr) \\ &{} -\beta_{n,1}(1-\beta_{n,1})\rho^{*}_{r_{1}}\bigl( \bigl\Vert \nabla g(S_{1}U_{n,2}z)-\nabla g(z)\bigr\Vert \bigr) \\ ={}&\beta_{n,1}D_{g}(w,S_{1}U_{n,2}z)+(1- \beta_{n,1})D_{g}(w,z) \\ &{} -\beta_{n,1}(1-\beta_{n,1})\rho^{*}_{r_{1}}\bigl( \bigl\Vert \nabla g(S_{1}U_{n,2}z)-\nabla g(z)\bigr\Vert \bigr) \\ \leq{}&\beta_{n,1} D_{g}(w,U_{n,2}z)+(1- \beta_{n,1})D_{g}(w,z) \\ &{} -\beta_{n,1}(1-\beta_{n,1})\rho^{*}_{r_{1}}\bigl( \bigl\Vert \nabla g(S_{1}U_{n,2}z)-\nabla g(z)\bigr\Vert \bigr) \\ \leq{}&\beta_{n,1} \bigl[\beta_{n,2} D_{g}(w,U_{n,3}z)+(1- \beta_{n,2})D_{g}(w,z) \\ &{} -\beta_{n,2}(1-\beta_{n,2})\rho^{*}_{r_{1}}\bigl( \bigl\Vert \nabla g(S_{2}U_{n,3}z) -\nabla g(z)\bigr\Vert \bigr)\bigr]+(1-\beta_{n,1})D_{g}(w,z) \\ &{} -\beta_{n,1}(1-\beta_{n,1}) \rho^{*}_{r_{1}}\bigl( \bigl\Vert \nabla g(S_{1}U_{n,2}z)-\nabla g(z)\bigr\Vert \bigr) \\ \leq{}& \cdots \\ \leq{}& D_{g}(w,z)-\beta_{n,1}(1-\beta_{n,1}) \rho^{*}_{r_{1}}\bigl(\bigl\Vert \nabla g(S_{1}U_{n,2}z) -\nabla g(z)\bigr\Vert \bigr) \\ &{} -\beta_{n,1}\beta_{n,2}(1-\beta_{n,2}) \rho^{*}_{r_{1}} \bigl(\bigl\Vert \nabla g(S_{2}U_{n,3}z)- \nabla g(z)\bigr\Vert \bigr)-\cdots \\ & {}-\beta_{n,1}\beta_{n,2}\cdots\beta_{n,n} (1- \beta_{n,n})\rho^{*}_{r_{1}}\bigl(\bigl\Vert \nabla g(S_{n}z)-\nabla g(z)\bigr\Vert \bigr). \end{aligned}$$

This implies that

$$\rho^{*}_{r_{1}}\bigl(\bigl\Vert \nabla g(S_{2}U_{n,3}z)- \nabla g(z)\bigr\Vert \bigr)=\cdots=\rho^{*}_{r_{1}}\bigl(\bigl\Vert \nabla g(S_{n}z)-\nabla g(z)\bigr\Vert \bigr)=0 $$

and hence, from the properties of \(\rho^{*}_{r_{1}}\), we conclude that

$$S_{k}z=z,\qquad U_{n,k}z=z\quad (k=2,3,\ldots,n). $$

If \(\beta_{n,1}<1\), then we get from \(\Vert \nabla g(S_{1}U_{n,2}z)-\nabla g(z)\Vert =0\) that \(S_{1}z=z\). And if \(\beta_{n,1}=1\), then we obtain from \(z=W_{n}z=S_{1}U_{n,2}z\) that \(S_{1}z=z\). Thus we have \(z\in\bigcap_{i=1}^{n}F(S_{i})\). This shows that \(F(W_{n})\subset\bigcap_{i=1}^{n}F(S_{i})\).

(ii) Let \(k=1,2,\ldots,n, x\in C\) and \(z\in F(W_{n})\). By a similar way as in the proof of (i), we arrive at

$$\begin{aligned} D_{g}(z,U_{n,k}x)&\leq\beta_{n,k}D_{g}(z,S_{k}U_{n,k+1}x)+(1- \beta_{n,k})D_{g}(z,x) \\ &\leq\beta_{n,k}D_{g}(z,U_{n,k+1}x)+(1- \beta_{n,k})D_{g}(z,x) \\ &\leq\beta_{n,k} \bigl[\beta_{n,k+1} D_{g}(z,U_{n,k+2}x)+(1- \beta_{n,k+1})D_{g}(z,x)\bigr]+(1-\beta_{n,k})D_{g}(z,x) \\ &\leq\cdots\leq D_{g}(z,x). \end{aligned}$$

This implies that

$$D_{g}(z,S_{k}U_{n,k+1}x)\leq D_{g}(z,x). $$

(iii) Since we have already proved that \(F(W_{n})=\bigcap_{i=1}^{n}F(S_{i})\), the fact that \(W_{n}\) is a Bregman weak relatively nonexpansive mapping is a consequence of each \(S_{i}\) being Bregman weak relatively nonexpansive. Indeed, let \(\{z_{m}\}_{m\in\Bbb{N}}\) be a sequence in C such that \(z_{m}\to z\in C\) and \(\Vert z_{m}-W_{n}z_{m}\Vert \to0\) as \(m\to\infty\). We will show that \(z\in F(W_{n})\). To this end, let \(w\in F(W_{n})\). In view of Lemma 2.2, we get that

$$\lim_{m\to \infty}D_{g}(W_{n}z_{m},z_{m})=0. $$

On the other hand, we have from (2.1) that

$$\begin{aligned} D_{g}(w,z_{m})-D_{g}(w,W_{n}z_{m})={}&D_{g}(w,W_{n}z_{m})+D_{g}(W_{n}z_{m},z_{m}) \\ &{} +\bigl\langle w-W_{n}z_{m},\nabla g(W_{n}z_{m})- \nabla g (z_{m}) \bigr\rangle -D_{g}(w,W_{n}z_{m}) \\ ={}&D_{g}(W_{n}z_{m},z_{m})+\bigl\langle w-W_{n}z_{m},\nabla g(W_{n}z_{m})- \nabla g (z_{m}) \bigr\rangle . \end{aligned}$$

Since ∇g is bounded on bounded subsets of E and norm-to-weak\(^{*}\) continuous, we have \(\nabla g(W_{n}z_{m})-\nabla g (z_{m})\rightharpoonup^{*}0\) while \(w-W_{n}z_{m}\to w-z\), so the inner product above tends to zero and we deduce that

$$\lim_{m\to\infty}\bigl\vert D_{g}(w,z_{m})-D_{g}(w,W_{n}z_{m}) \bigr\vert =0. $$

Let \(r_{2}=\sup\{\Vert \nabla g(z_{m})\Vert ,\Vert \nabla g(S_{k}z_{m})\Vert ,\Vert \nabla g(S_{k}U_{n,k+1}z_{m})\Vert :m\in\Bbb{N},k=1,2,\ldots,n\}\) and \(\rho^{*}_{r_{2}}:E^{*}\to\mathbb{R}\) be the gauge of uniform convexity of the conjugate function \(g^{*}\). By the same arguments as in (i), we conclude that

$$\begin{aligned} D_{g}(w,W_{n}z_{m})={}&D_{g}\bigl(w, \nabla g^{*}\bigl[\beta_{n,1}\nabla g(S_{1}U_{n,2}z_{m})+(1- \beta_{n,1})\nabla g(z_{m})\bigr]\bigr) \\ ={}&g(w)-\bigl\langle w,\beta_{n,1}\nabla g(S_{1}U_{n,2}z_{m})+(1-\beta_{n,1}) \nabla g(z_{m}) \bigr\rangle \\ &{} +g^{*}\bigl(\beta_{n,1}\nabla g(S_{1}U_{n,2}z_{m})+(1- \beta_{n,1})\nabla g(z_{m})\bigr) \\ \leq{}& \beta_{n,1}g(w)+(1-\beta_{n,1})g(w)+ \beta_{n,1}g^{*}\bigl(\nabla g(S_{1}U_{n,2}z_{m}) \bigr)+(1-\beta_{n,1})g^{*}\bigl(\nabla g(z_{m})\bigr) \\ &{} -\beta_{n,1}(1-\beta_{n,1})\rho^{*}_{r_{2}}\bigl( \bigl\Vert \nabla g(S_{1}U_{n,2}z_{m})-\nabla g(z_{m})\bigr\Vert \bigr) \\ ={}&\beta_{n,1}V_{g}\bigl(w,\nabla g(S_{1}U_{n,2}z_{m}) \bigr)+(1-\beta_{n,1})V_{g}\bigl(w,\nabla g(z_{m}) \bigr) \\ &{} -\beta_{n,1}(1-\beta_{n,1})\rho^{*}_{r_{2}}\bigl(\bigl\Vert \nabla g(S_{1}U_{n,2}z_{m})-\nabla g(z_{m})\bigr\Vert \bigr) \\ ={}&\beta_{n,1} D_{g}(w,S_{1}U_{n,2}z_{m})+(1- \beta_{n,1})D_{g}(w,z_{m}) \\ &{} -\beta_{n,1}(1-\beta_{n,1})\rho^{*}_{r_{2}}\bigl( \bigl\Vert \nabla g(S_{1}U_{n,2}z_{m})-\nabla g(z_{m})\bigr\Vert \bigr) \\ \leq{}&\beta_{n,1}D_{g}(w,U_{n,2}z_{m})+(1- \beta_{n,1})D_{g}(w,z_{m}) \\ &{} -\beta_{n,1}(1-\beta_{n,1})\rho^{*}_{r_{2}}\bigl( \bigl\Vert \nabla g(S_{1}U_{n,2}z_{m})-\nabla g(z_{m})\bigr\Vert \bigr) \\ \leq{}&\beta_{n,1}\bigl[\beta_{n,2}D_{g}(w,U_{n,3}z_{m})+(1- \beta_{n,2})D_{g}(w,z_{m}) \\ &{} -\beta_{n,2}(1-\beta_{n,2})\rho^{*}_{r_{2}}\bigl( \bigl\Vert \nabla g(S_{2}U_{n,3}z_{m}) -\nabla g(z_{m})\bigr\Vert \bigr)\bigr]+(1-\beta_{n,1})D_{g}(w,z_{m}) \\ & {}-\beta_{n,1}(1-\beta_{n,1}) \rho^{*}_{r_{2}}\bigl( \bigl\Vert \nabla g(S_{1}U_{n,2}z_{m})-\nabla g(z_{m})\bigr\Vert \bigr) \\ \leq{}& \cdots \\ \leq{}& D_{g}(w,z_{m})-\beta_{n,1}(1- \beta_{n,1})\rho^{*}_{r_{2}}\bigl(\bigl\Vert \nabla g(S_{1}U_{n,2}z_{m}) -\nabla g(z_{m}) \bigr\Vert \bigr) \\ &{} -\beta_{n,1}\beta_{n,2}(1-\beta_{n,2}) \rho^{*}_{r_{2}} \bigl(\bigl\Vert \nabla g(S_{2}U_{n,3}z_{m})- \nabla g(z_{m})\bigr\Vert \bigr)-\cdots \\ &{} -\beta_{n,1}\beta_{n,2}\cdots\beta_{n,n} (1- \beta_{n,n})\rho^{*}_{r_{2}}\bigl(\bigl\Vert \nabla g(S_{n}z_{m})-\nabla g(z_{m})\bigr\Vert \bigr). \end{aligned}$$

This implies that

$$\begin{aligned} \lim_{m\to\infty}\rho^{*}_{r_{2}}\bigl(\bigl\Vert \nabla g(S_{1}U_{n,2}z_{m})-\nabla g(z_{m}) \bigr\Vert \bigr)&=\cdots \\ &=\lim_{m\to\infty}\rho^{*}_{r_{2}}\bigl(\bigl\Vert \nabla g(S_{n}z_{m})-\nabla g(z_{m})\bigr\Vert \bigr)=0. \end{aligned}$$

Therefore, from the property of \(\rho_{r_{2}}^{*}\) we deduce that

$$\lim_{m\to\infty}\bigl\Vert \nabla g(S_{k}U_{n,k+1}z_{m})- \nabla g(z_{m})\bigr\Vert =0,\quad \forall k\in\{2, \ldots,n\} $$

and hence, since \(\nabla g^{*}\) is uniformly norm-to-norm continuous on bounded subsets of \(E^{*}\), \(\lim_{m\to\infty}\Vert S_{k}U_{n,k+1}z_{m}-z_{m}\Vert =0\) for \(k=2,\ldots,n\). Since \(z_{m}\to z\), a backward induction shows that \(U_{n,k+1}z_{m}\to z\) and \(\Vert S_{k}U_{n,k+1}z_{m}-U_{n,k+1}z_{m}\Vert \to0\) as \(m\to\infty\), so that z is a strong asymptotic fixed point of each \(S_{k}\) and therefore

$$S_{k}z=z,\qquad U_{n,k}z=z\quad (k=2,3,\ldots,n). $$

If \(\beta_{n,1}<1\), then \(\lim_{m\to\infty}\Vert \nabla g(S_{1}U_{n,2}z_{m})-\nabla g(z_{m})\Vert =0\) as well, and the same argument yields \(S_{1}z=z\). And if \(\beta_{n,1}=1\), then \(\Vert z_{m}-S_{1}U_{n,2}z_{m}\Vert =\Vert z_{m}-W_{n}z_{m}\Vert \to0\), which again yields \(S_{1}z=z\). Thus we have \(z\in\bigcap_{i=1}^{n}F(S_{i})=F(W_{n})\) and hence \(W_{n}\) is a Bregman weak relatively nonexpansive mapping for every \(n\in \Bbb{N}\). This completes the proof. □

Next we prove the following result on convex combinations of Bregman weak relatively nonexpansive mappings in a Banach space.

Proposition 2.2

Let E be a reflexive Banach space and \(g:E\to\mathbb{R}\) be a convex, continuous, strongly coercive and Gâteaux differentiable function which is bounded on bounded subsets and uniformly convex on bounded subsets of E. Let C be a nonempty, closed and convex subset of E. Let \(\{S_{n}\}_{n\in\Bbb{N}}\) be a family of Bregman weak relatively nonexpansive mappings of C into itself such that \(F:=\bigcap_{n=1}^{\infty}F(S_{n})\neq{\O}\), and let \(T_{n}x=\nabla g^{*}(\sum_{j=1}^{n}\beta_{n,j}\nabla g(S_{j}x))\) for every \(n\in \Bbb{N}\) and \(x\in C\), where \(0\leq\beta_{n,j}\leq1\) (\(n\in \Bbb{N}\), \(j=1,2,\ldots,n\)) with \(\sum_{j=1}^{n}\beta_{n,j}=1\) for all \(n\in\Bbb{N}\) and \(\liminf_{n\to\infty}\beta_{n,j}>0\) for each \(j\in\Bbb{N}\). Then the following assertions hold:

  1. (i)

    \(\bigcap_{n=1}^{\infty}F(T_{n})=F\);

  2. (ii)

    for every \(n\in\Bbb{N}\), \(x\in C\) and \(z\in F\), \(D_{g}(z,T_{n}x)\leq D_{g}(z,x)\);

  3. (iii)

    for every \(n\in\Bbb{N}\), \(T_{n}\) is a Bregman weak relatively nonexpansive mapping.

Proof

(i) It is clear that \(F\subset \bigcap_{n=1}^{\infty}F(T_{n})\neq{\O}\). For the converse inclusion, take \(w\in F\) and \(z\in\bigcap_{n=1}^{\infty}F(T_{n})\). Let \(n\in \Bbb{N}\) be large enough and \(l,m\in\Bbb{N}\) with \(1\leq l\leq m\leq n\). Let \(r_{3}=\sup\{\Vert \nabla g(z)\Vert ,\Vert \nabla g(S_{k}z)\Vert :k\in\Bbb{N}\}\) and \(\rho^{*}_{r_{3}}:E^{*}\to \mathbb{R}\) be the gauge of uniform convexity of the conjugate function \(g^{*}\). In view of Lemma 2.5, we obtain

$$\begin{aligned} D_{g}(w,z)={}&D_{g}(w,T_{n}z) \\ ={}&D_{g}\Biggl(w,\nabla g^{*}\Biggl[\sum_{j=1}^{n} \beta_{n,j} \nabla g(S_{j}z)\Biggr]\Biggr) \\ ={}&V_{g}\Biggl(w,\sum_{j=1}^{n} \beta_{n,j} \nabla g(S_{j}z)\Biggr) \\ ={}&g(w)-\Biggl\langle w,\sum_{j=1}^{n} \beta_{n,j} \nabla g(S_{j}z) \Biggr\rangle \\ &{} +g^{*}\biggl((\beta_{n,l}+\beta_{n,m})\frac{\beta_{n,l}\nabla g(S_{l}z)+\beta_{n,m}\nabla g(S_{m}z)}{\beta_{n,l}+\beta_{n,m}} +\bigl(1-(\beta_{n,l}+\beta _{n,m})\bigr)\frac{\sum_{j=1,j\neq l,m}^{n}\beta_{n,j} \nabla g(S_{j}z)}{1-(\beta_{n,l}+\beta_{n,m})} \biggr) \\ \leq {}&g(w)-\sum_{j=1}^{n} \beta_{n,j}\bigl\langle w, \nabla g(S_{j}z)\bigr\rangle \\ & {}+(\beta_{n,l}+\beta_{n,m})\biggl[\frac{\beta_{n,l}}{\beta _{n,l}+\beta_{n,m}}g^{*} \bigl(\nabla g(S_{l}z)\bigr)+\frac{\beta_{n,m}}{\beta_{n,l}+\beta_{n,m}}g^{*}\bigl(\nabla g(S_{m}z)\bigr) \\ &{} -\frac{\beta_{n,l}}{\beta_{n,l}+\beta_{n,m}}\cdot\frac{\beta _{n,m}}{\beta_{n,l}+\beta_{n,m}}\rho_{r_{3}}^{*}\bigl(\bigl\Vert \nabla g(S_{l}z)-\nabla g(S_{m}z)\bigr\Vert \bigr)\biggr] \\ &{} +\sum_{j=1,j\neq l,m}^{n}\beta_{n,j}g^{*} \bigl(\nabla g(S_{j}z)\bigr) \\ ={}&\sum_{j=1}^{n}\beta_{n,j} \bigl[g(w)-\bigl\langle w, \nabla g(S_{j}z)\bigr\rangle +g^{*}\bigl(\nabla g(S_{j}z)\bigr)\bigr] \\ &{} -\frac{\beta_{n,l}\beta_{n,m}}{\beta_{n,l}+\beta _{n,m}}\rho_{r_{3}}^{*}\bigl(\bigl\Vert \nabla g(S_{l}z)-\nabla g(S_{m}z)\bigr\Vert \bigr) \\ ={}&\sum_{j=1}^{n}\beta_{n,j}V_{g} \bigl(w,\nabla g(S_{j}z)\bigr)-\frac{\beta_{n,l}\beta_{n,m}}{\beta_{n,l}+\beta _{n,m}}\rho_{r_{3}}^{*}\bigl(\bigl\Vert \nabla g(S_{l}z)-\nabla g(S_{m}z)\bigr\Vert \bigr) \\ ={}&\sum_{j=1}^{n}\beta_{n,j}D_{g}(w,S_{j}z) -\frac {\beta_{n,l}\beta_{n,m}}{\beta_{n,l}+\beta_{n,m}}\rho_{r_{3}}^{*}\bigl(\bigl\Vert \nabla g(S_{l}z)-\nabla g(S_{m}z)\bigr\Vert \bigr) \\ \leq{}&D_{g}(w,z)-\frac{\beta_{n,l}\beta_{n,m}}{\beta _{n,l}+\beta_{n,m}}\rho_{r_{3}}^{*}\bigl(\bigl\Vert \nabla g(S_{l}z)-\nabla g(S_{m}z)\bigr\Vert \bigr). \end{aligned}$$

This implies that for any \(l,m\in\Bbb{N}\),

$$\beta_{n,l}\beta_{n,m}\rho_{r_{3}}^{*}\bigl( \bigl\Vert \nabla g(S_{l}z)-\nabla g(S_{m}z)\bigr\Vert \bigr)= 0 $$

for large enough \(n \in\Bbb{N}\); since \(\liminf_{n\to\infty}\beta_{n,j}>0\) for each \(j\in\Bbb{N}\), it follows that \(\rho_{r_{3}}^{*}(\Vert \nabla g(S_{l}z)-\nabla g(S_{m}z)\Vert )=0\).

Therefore, from the property of \(\rho_{r_{3}}^{*}\) we deduce that

$$\bigl\Vert \nabla g(S_{l}z)-\nabla g(S_{m}z) \bigr\Vert =0,\quad \forall l,m\in\Bbb{N}. $$

Since \(\nabla g^{*}\) is uniformly norm-to-norm continuous on bounded subsets of \(E^{*}\), we arrive at

$$\bigl\Vert S_{l}z-S_{m}z\bigr\Vert =0, \quad \forall l,m\in\Bbb{N}. $$

Therefore \(S_{l}z=S_{m}z\) for every \(l,m\in\Bbb{N}\); denote this common value by p. Then \(\nabla g(T_{n}z)=\sum_{j=1}^{n}\beta_{n,j}\nabla g(S_{j}z)=\nabla g(p)\), so that \(z=T_{n}z=p=S_{j}z\) for every \(j\in\Bbb{N}\), that is, \(z\in F\). This completes the proof. □

3 Strong convergence theorems

In this section, we prove strong convergence theorems in a reflexive Banach space. We start with the following simple lemma which was proved in [35].

Lemma 3.1

Let E be a reflexive Banach space and \(g:E\to\mathbb{R}\) be a convex, continuous, strongly coercive and Gâteaux differentiable function which is bounded on bounded subsets and uniformly convex on bounded subsets of E. Let C be a nonempty, closed and convex subset of E. Let \(T:C\to C\) be a Bregman quasi-nonexpansive mapping. Then \(F(T)\) is closed and convex.

Theorem 3.1

Let E be a reflexive Banach space and \(g:E\to\mathbb{R}\) be a convex, continuous, strongly coercive and Gâteaux differentiable function which is bounded on bounded subsets and uniformly convex on bounded subsets of E. Let C be a nonempty, closed and convex subset of E. Let \(\{S_{n}\}_{n\in\Bbb{N}}\) be a family of Bregman weak relatively nonexpansive mappings of C into itself such that \(F:=\bigcap_{n=1}^{\infty}F(S_{n})\neq{\O}\), and let \(\{\beta_{n,k}:k,n\in\Bbb{N}, 1\leq k\leq n\}\) be a sequence of real numbers such that \(0< \beta_{i,1}\leq1\) and \(0<\beta_{i,j}<1\) for all \(i\in\Bbb{N}\) and every \(j=2,3,\ldots,i\). Let \(W_{n}\) be the Bregman W-mapping generated by \(S_{n},S_{n-1},\ldots,S_{1}\) and \(\beta_{n,n},\beta_{n,n-1},\ldots,\beta_{n,1}\). Let \(\{\alpha_{n}\}_{n\in\mathbb{N}\cup\{0\}}\) be a sequence in \([0,1)\) such that \(\liminf_{n\to \infty}\alpha_{n}(1-\alpha_{n})>0\). Let \(\{x_{n}\}_{n\in\mathbb{N}}\) be a sequence generated by

$$ \textstyle\begin{cases} x_{0}=x\in C\quad \textit{chosen arbitrarily},\\ C_{0}=C,\\ y_{n}=\nabla g^{*}[\alpha_{n} \nabla g(x_{n})+(1-\alpha_{n})\nabla g(W_{n}x_{n})],\\ C_{n+1}=\{z\in C_{n}:D_{g}(z,y_{n})\leq D_{g}(z,x_{n})\},\\ x_{n+1}=\operatorname {proj}^{g}_{C_{n+1}}x \quad \textit{and}\quad n\in \mathbb{N}\cup\{0\}, \end{cases} $$
(3.1)

where ∇g is the gradient of g. Then \(\{x_{n}\}_{n\in\mathbb{N}}\) converges strongly to \(\operatorname {proj}^{g}_{F}x_{0}\) as \(n\to\infty\).
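For orientation, in a Hilbert space with \(g(x)=\frac{1}{2}\Vert x\Vert ^{2}\), so that \(D_{g}(z,y)=\frac{1}{2}\Vert z-y\Vert ^{2}\), \(\nabla g=\nabla g^{*}=I\) and \(\operatorname {proj}^{g}_{C}=P_{C}\), the scheme (3.1) reduces to the familiar hybrid (CQ-type) method

$$\textstyle\begin{cases} y_{n}=\alpha_{n} x_{n}+(1-\alpha_{n})W_{n}x_{n},\\ C_{n+1}=\{z\in C_{n}:\Vert z-y_{n}\Vert \leq \Vert z-x_{n}\Vert \},\\ x_{n+1}=P_{C_{n+1}}x, \end{cases} $$

in the spirit of [8, 9].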

Proof

We divide the proof into several steps.

Step 1. We show that \(C_{n}\) is closed and convex for each \(n\in\mathbb{N}\cup\{0\}\).

We proceed by induction. It is clear that \(C_{0}=C\) is closed and convex. Let \(C_{m}\) be closed and convex for some \(m\in\mathbb{N}\). For \(z\in C_{m}\), we see that

$$D_{g}(z,y_{m})\leq D_{g}(z,x_{m}) $$

is equivalent to

$$\bigl\langle z,\nabla g(x_{m})-\nabla g(y_{m}) \bigr\rangle \leq g(y_{m})-g(x_{m})+ \bigl\langle x_{m},\nabla g(x_{m}) \bigr\rangle -\bigl\langle y_{m},\nabla g(y_{m}) \bigr\rangle . $$

An easy argument shows that \(C_{m+1}\) is closed and convex. Hence \(C_{n}\) is closed and convex for each \(n\in\mathbb{N}\cup\{0\}\).

Step 2. We claim that \(F\subset C_{n}\) for all \(n\in\mathbb {N}\cup\{0\}\).

It is obvious that \(F\subset C_{0}=C\). Assume now that \(F\subset C_{m}\) for some \(m\in\mathbb{N}\). Take any \(w\in F\subset C_{m}\). Employing Lemma 2.3, we obtain

$$\begin{aligned} D_{g}(w,y_{m})={}&D_{g}\bigl(w,\nabla g^{*}\bigl[\alpha_{m} \nabla g(x_{m})+(1- \alpha_{m})\nabla g(W_{m}x_{m})\bigr]\bigr) \\ ={}&V_{g}\bigl(w,\alpha_{m} \nabla g(x_{m})+(1- \alpha_{m})\nabla g(W_{m}x_{m})\bigr) \\ ={}&g(w)-\bigl\langle w,\alpha_{m} \nabla g(x_{m})+ (1-\alpha_{m})\nabla g(W_{m}x_{m}) \bigr\rangle \\ &{} +g^{*}\bigl(\alpha_{m} \nabla g(x_{m})+(1- \alpha_{m}) \nabla g(W_{m}x_{m})\bigr) \\ \leq{}&\alpha_{m}g(w)+(1-\alpha_{m}) g(w) \\ &{} +\alpha_{m}g^{*}\bigl(\nabla g(x_{m})\bigr)+(1- \alpha_{m})g^{*}\bigl(\nabla g(W_{m}x_{m})\bigr) \\ ={}& \alpha_{m} V_{g}\bigl(w,\nabla g(x_{m}) \bigr)+(1-\alpha_{m})V_{g}\bigl(w,\nabla g(W_{m}x_{m}) \bigr) \\ ={}&\alpha_{m} D_{g}(w,x_{m})+(1- \alpha_{m})D_{g}(w,W_{m}x_{m}) \\ \leq{}&\alpha_{m} D_{g}(w,x_{m})+(1- \alpha_{m})D_{g}(w,x_{m}) \\ ={}& D_{g}(w,x_{m}). \end{aligned}$$
(3.2)

This proves that \(w\in C_{m+1}\). Thus, we have \(F\subset C_{n}\) for all \(n\in\mathbb{N}\cup\{0\}\).

Step 3. We prove that \(\{x_{n}\}_{n\in \mathbb{N}}, \{y_{n}\}_{n\in\mathbb{N}}\) and \(\{W_{n}x_{n}\}_{n\in \mathbb{N}}\) are bounded sequences in C.

It is then easily seen from (1.9) that

$$\begin{aligned} D_{g}(x_{n},x)&=D_{g} \bigl(\operatorname {proj}^{g}_{C_{n}}x,x \bigr) \\ &\leq D_{g}(w,x)-D_{g}(w,x_{n})\leq D_{g}(w,x), \quad \forall w\in F\subset C_{n}, n\in \mathbb{N}\cup\{0\}. \end{aligned}$$

This leads immediately to the boundedness of \(\{D_{g}(x_{n},x)\}_{n\in\mathbb{N}}\). So, there exists \(M_{1}>0\) such that

$$ D_{g}(x_{n},x)\leq M_{1},\quad \forall n\in\mathbb{N}. $$
(3.3)

Using Lemma 2.1(3) and (3.3), we have the boundedness of \(\{x_{n}\}_{n\in \mathbb{N}}\). Since \(\{W_{n}\}_{n\in\mathbb{N}}\) is an infinite family of Bregman weak relatively nonexpansive mappings from C into itself, we have for any \(q\in F\) that

$$ D_{g}(q,W_{n}x_{n})\leq D_{g}(q,x_{n}),\quad \forall n\in\mathbb{N}. $$
(3.4)

Then by Definition 2.1, (3.4) and observing that \(\{x_{n}\}_{n\in \mathbb{N}}\) is bounded, we are led to the boundedness of \(\{W_{n}x_{n}\}_{n\in \mathbb{N}}\).

Step 4. We show that \(x_{n}\to u\) for some \(u\in F\), where \(u=\operatorname {proj}^{g}_{F}x\).

By Step 3, we have that \(\{x_{n}\}_{n\in\mathbb{N}}\) is bounded. By the construction of \(C_{n}\), we conclude that \(C_{m}\subset C_{n}\) and \(x_{m}=\operatorname {proj}^{g}_{C_{m}}x\in C_{m}\subset C_{n}\) for any positive integer \(m\geq n\). This, together with (1.9), implies that

$$\begin{aligned} D_{g}(x_{m},x_{n})&=D_{g} \bigl(x_{m},\operatorname {proj}^{g}_{C_{n}}x \bigr)\leq D_{g}(x_{m},x)-D_{g} \bigl(\operatorname {proj}^{g}_{C_{n}}x,x \bigr) \\ &= D_{g}(x_{m},x)-D_{g}(x_{n},x). \end{aligned}$$
(3.5)

In view of (3.5), we conclude that

$$D_{g}(x_{n},x)\leq D_{g}(x_{n},x)+D_{g}(x_{m},x_{n}) \leq D_{g}(x_{m},x),\quad \forall m\geq n. $$

This proves that \(\{D_{g}(x_{n},x)\}_{n\in\mathbb{N}}\) is an increasing sequence in \(\mathbb{R}\); since it is also bounded by (3.3), the limit \(\lim_{n\to \infty}D_{g}(x_{n},x)\) exists. Letting \(m,n\to\infty\) in (3.5), we deduce that \(D_{g}(x_{m},x_{n})\to0\). In view of Lemma 2.2, we obtain that \(\Vert x_{m}-x_{n}\Vert \to0\) as \(m,n\to\infty\). This means that \(\{x_{n}\}_{n\in\mathbb{N}}\) is a Cauchy sequence. Since E is a Banach space and C is closed and convex, we conclude that there exists \(u\in C\) such that

$$ \lim_{n\to\infty} \Vert x_{n}-u\Vert =0. $$
(3.6)

Now, we show that \(u\in F\). In view of Lemma 2.2, (3.5) and (3.6), we obtain

$$ \lim_{n\to\infty}D_{g}(x_{n+1},x_{n})=0. $$
(3.7)

Since \(x_{n+1}\in C_{n+1}\), we conclude that

$$D_{g}(x_{n+1},y_{n})\leq D_{g}(x_{n+1},x_{n}). $$

This, together with (3.7), implies that

$$ \lim_{n\to \infty}D_{g}(x_{n+1},y_{n})=0. $$
(3.8)

Employing Lemma 2.2 and (3.7)-(3.8), we deduce that

$$\lim_{n\to \infty} \Vert x_{n+1}-x_{n} \Vert =0\quad \mbox{and}\quad \lim_{n\to \infty} \Vert x_{n+1}-y_{n}\Vert =0. $$

In view of (3.6), we get

$$ \lim_{n\to\infty} \Vert y_{n}-u\Vert =0. $$
(3.9)

From (3.6) and (3.9), it follows that

$$\lim_{n\to\infty} \Vert x_{n}-y_{n} \Vert =0. $$

Since ∇g is uniformly norm-to-norm continuous on any bounded subset of E, we obtain

$$ \lim_{n\to\infty}\bigl\Vert \nabla g(x_{n})- \nabla g(y_{n})\bigr\Vert =0. $$
(3.10)

Applying Lemma 2.2 we derive that

$$\lim_{n\to\infty}D_{g}(y_{n},x_{n})=0 $$

and hence

$$\lim_{n\to\infty} \bigl\vert g(x_{n})-g(y_{n}) \bigr\vert =\lim_{n\to\infty }\bigl\vert D_{g}(y_{n},x_{n})- \bigl\langle x_{n}-y_{n},\nabla g(x_{n}) \bigr\rangle \bigr\vert =0. $$

It follows from the definition of Bregman distance that

$$\begin{aligned} &\bigl\vert D_{g}(w,x_{n})-D_{g}(w,y_{n}) \bigr\vert \\ &\quad =\bigl\vert g(w)-g(x_{n})-\bigl\langle w-x_{n}, \nabla g(x_{n})\bigr\rangle -\bigl(g(w)-g(y_{n}) -\bigl\langle w-y_{n}, \nabla g(y_{n})\bigr\rangle \bigr)\bigr\vert \\ &\quad =\bigl\vert g(y_{n})-g(x_{n})+\bigl\langle w-x_{n}, \nabla g(y_{n})-\nabla g(x_{n})\bigr\rangle +\bigl\langle x_{n}-y_{n},\nabla g(y_{n}) \bigr\rangle \bigr\vert \\ &\quad \leq\bigl\vert g(y_{n})-g(x_{n})\bigr\vert + \Vert w-x_{n}\Vert \bigl\Vert \nabla g(y_{n})-\nabla g(x_{n})\bigr\Vert +\Vert x_{n}-y_{n}\Vert \bigl\Vert \nabla g(y_{n})\bigr\Vert \\ &\quad \to 0 \end{aligned}$$
(3.11)

as \(n\to\infty\).

The function g is bounded on bounded subsets of E and, thus, ∇g is bounded on bounded subsets of E (see, for example, [13] for more details). This implies that the sequences \(\{\nabla g(x_{n})\}_{n\in\mathbb{N}}\), \(\{\nabla g(y_{n})\}_{n\in\mathbb{N}}\) and \(\{\nabla g(W_{n}x_{n}):n\in \mathbb{N}\cup\{0\}\}\) are bounded in \(E^{*}\).

In view of Theorem 2.2(3), we know that \(\operatorname {dom}g^{*}=E^{*}\) and \(g^{*}\) is strongly coercive and uniformly convex on bounded subsets. Let \(r_{4}=\sup\{\Vert \nabla g(x_{n})\Vert ,\Vert \nabla g(W_{n}x_{n})\Vert :n\in\mathbb{N}\cup\{0\}\}\) and \(\rho^{*}_{r_{4}}:E^{*}\to\mathbb{R}\) be the gauge of uniform convexity of the conjugate function \(g^{*}\). We prove that for any \(w\in F\)

$$ D_{g}(w,y_{n})\leq D_{g}(w,x_{n})- \alpha_{n}(1-\alpha_{n})\rho _{r_{4}}^{*}\bigl(\bigl\Vert \nabla g(x_{n})-\nabla g(W_{n}x_{n})\bigr\Vert \bigr). $$
(3.12)

Let us show (3.12). For any given \(w\in F\), in view of the definition of the Bregman distance (see (1.7)), (1.6) and Lemma 2.4(iii), we obtain

$$\begin{aligned} D_{g}(w,y_{n})={}&D_{g}\bigl(w,\nabla g^{*}\bigl[ \alpha_{n} \nabla g(x_{n})+(1-\alpha_{n}) \nabla g(W_{n}x_{n})\bigr]\bigr) \\ ={}&V_{g}\bigl(w,\alpha_{n} \nabla g(x_{n})+(1- \alpha_{n})\nabla g(W_{n}x_{n})\bigr) \\ ={}&g(w)-\bigl\langle w,\alpha_{n} \nabla g(x_{n})+(1- \alpha_{n}) \nabla g(W_{n}x_{n}) \bigr\rangle \\ &{} +g^{*}\bigl(\alpha_{n} \nabla g(x_{n})+(1- \alpha_{n})\nabla g(W_{n}x_{n})\bigr) \\ \leq{}& \alpha_{n}g(w)+(1-\alpha_{n})g(w)- \alpha_{n}\bigl\langle w,\nabla g(x_{n})\bigr\rangle -(1- \alpha_{n})\bigl\langle w,\nabla g(W_{n}x_{n}) \bigr\rangle \\ &{} +\alpha_{n}g^{*}\bigl(\nabla g(x_{n})\bigr)+(1- \alpha_{n}) g^{*}\bigl(\nabla g(W_{n}x_{n})\bigr) \\ &{} -\alpha_{n}(1-\alpha_{n})\rho^{*}_{r_{4}}\bigl( \bigl\Vert \nabla g(x_{n})-\nabla g(W_{n}x_{n}) \bigr\Vert \bigr) \\ ={}&\alpha_{n} V_{g}\bigl(w,\nabla g(x_{n}) \bigr)+(1-\alpha _{n})V_{g}\bigl(w,\nabla g(W_{n}x_{n})\bigr) \\ &{} -\alpha_{n}(1-\alpha_{n})\rho^{*}_{r_{4}}\bigl( \bigl\Vert \nabla g(x_{n})-\nabla g(W_{n}x_{n}) \bigr\Vert \bigr) \\ ={}&\alpha_{n} D_{g}(w,x_{n})+(1- \alpha_{n})D_{g}(w,W_{n}x_{n}) - \alpha_{n}(1-\alpha_{n})\rho^{*}_{r_{4}}\bigl(\bigl\Vert \nabla g(x_{n})-\nabla g(W_{n}x_{n})\bigr\Vert \bigr) \\ \leq{}&\alpha_{n} D_{g}(w,x_{n})+(1- \alpha_{n})D_{g}(w,x_{n}) -\alpha_{n}(1- \alpha_{n})\rho^{*}_{r_{4}}\bigl(\bigl\Vert \nabla g(x_{n})-\nabla g(W_{n}x_{n})\bigr\Vert \bigr) \\ ={}& D_{g}(w,x_{n})-\alpha_{n}(1- \alpha_{n})\rho^{*}_{r_{4}}\bigl(\bigl\Vert \nabla g(x_{n})-\nabla g(W_{n}x_{n})\bigr\Vert \bigr). \end{aligned}$$

In view of (3.11), we obtain

$$ D_{g}(w,x_{n})-D_{g}(w,y_{n}) \to 0\quad \mbox{as } n\to\infty. $$
(3.13)

In view of (3.12) and (3.13), we conclude that

$$\alpha_{n}(1-\alpha_{n})\rho^{*}_{r_{4}} \bigl(\bigl\Vert \nabla g(x_{n})-\nabla g(W_{n}x_{n}) \bigr\Vert \bigr)\leq D_{g}(w,x_{n})-D_{g}(w,y_{n}) \to 0 $$

as \(n\to\infty\). From the assumption \(\liminf_{n\to \infty}\alpha_{n}(1-\alpha_{n})>0\), we have

$$\lim_{n\to\infty}\rho^{*}_{r_{4}}\bigl(\bigl\Vert \nabla g(x_{n})-\nabla g(W_{n}x_{n})\bigr\Vert \bigr)=0. $$

Therefore, from the property of \(\rho^{*}_{r_{4}}\) we deduce that

$$\lim_{n\to\infty}\bigl\Vert \nabla g(x_{n})- \nabla g(W_{n}x_{n})\bigr\Vert =0. $$

Since \(\nabla g^{*}\) is uniformly norm-to-norm continuous on bounded subsets of \(E^{*}\), we arrive at

$$ \lim_{n\to \infty} \Vert x_{n}-W_{n}x_{n} \Vert =0. $$
(3.14)
Next, let \(r_{5}=\sup\{\Vert \nabla g(x_{n})\Vert ,\Vert \nabla g(W_{n}x_{n})\Vert ,\Vert \nabla g(S_{k}U_{n,k+1}x_{n})\Vert :n\in\mathbb{N}\cup\{0\}, k=1,2,\ldots,n\}\) and \(\rho^{*}_{r_{5}}:E^{*}\to\mathbb{R}\) be the gauge of uniform convexity of the conjugate function \(g^{*}\). For every \(k=2,3,\ldots,n\) and \(w\in F\), employing (1.9) and Lemma 2.4(iii) as in the proof of Proposition 2.1, we obtain

$$\begin{aligned} &D_{g}(w,U_{n,k}x_{n}) \\ &\quad =D_{g}\bigl(w,\operatorname {proj}^{g}_{C}\bigl(\nabla g^{*}\bigl[\beta_{n,k} \nabla g(S_{k}U_{n,k+1}x_{n})+(1- \beta_{n,k})\nabla g(x_{n})\bigr]\bigr)\bigr) \\ &\quad \leq D_{g}\bigl(w,\nabla g^{*}\bigl[\beta_{n,k} \nabla g(S_{k}U_{n,k+1}x_{n})+(1-\beta_{n,k})\nabla g(x_{n})\bigr]\bigr) \\ &\qquad {} -D_{g}\bigl(U_{n,k}x_{n},\nabla g^{*} \bigl[\beta_{n,k} \nabla g(S_{k}U_{n,k+1}x_{n})+(1- \beta_{n,k})\nabla g(x_{n})\bigr]\bigr) \\ &\quad =g(w)-\bigl\langle w,\beta_{n,k}\nabla g(S_{k}U_{n,k+1}x_{n})+(1- \beta_{n,k})\nabla g(x_{n}) \bigr\rangle \\ &\qquad {} +g^{*}\bigl(\beta_{n,k}\nabla g(S_{k}U_{n,k+1}x_{n})+(1- \beta_{n,k})\nabla g(x_{n})\bigr) \\ &\qquad {} -D_{g}\bigl(U_{n,k}x_{n},\nabla g^{*} \bigl[\beta_{n,k} \nabla g(S_{k}U_{n,k+1}x_{n})+(1- \beta_{n,k})\nabla g(x_{n})\bigr]\bigr) \\ &\quad \leq \beta_{n,k}V_{g}\bigl(w,\nabla g(S_{k}U_{n,k+1}x_{n})\bigr)+(1-\beta_{n,k})V_{g}\bigl(w,\nabla g(x_{n})\bigr) \\ &\qquad {} -\beta_{n,k}(1-\beta_{n,k})\rho^{*}_{r_{5}}\bigl(\bigl\Vert \nabla g(S_{k}U_{n,k+1}x_{n})-\nabla g(x_{n})\bigr\Vert \bigr) \\ &\qquad {} -D_{g}\bigl(U_{n,k}x_{n},\nabla g^{*} \bigl[\beta_{n,k} \nabla g(S_{k}U_{n,k+1}x_{n})+(1- \beta_{n,k})\nabla g(x_{n})\bigr]\bigr) \\ &\quad =\beta_{n,k}D_{g}(w,S_{k}U_{n,k+1}x_{n})+(1- \beta_{n,k})D_{g}(w,x_{n}) \\ &\qquad {} -\beta_{n,k}(1-\beta_{n,k})\rho^{*}_{r_{5}}\bigl(\bigl\Vert \nabla g(S_{k}U_{n,k+1}x_{n})- \nabla g(x_{n})\bigr\Vert \bigr) \\ &\qquad {} -D_{g}\bigl(U_{n,k}x_{n},\nabla g^{*} \bigl[\beta_{n,k} \nabla g(S_{k}U_{n,k+1}x_{n})+(1- \beta_{n,k})\nabla g(x_{n})\bigr]\bigr) \\ &\quad \leq\beta_{n,k} D_{g}(w,U_{n,k+1}x_{n})+(1- \beta_{n,k})D_{g}(w,x_{n}) \\ &\qquad {} -\beta_{n,k}(1-\beta_{n,k})\rho^{*}_{r_{5}} \bigl(\bigl\Vert \nabla g(S_{k}U_{n,k+1}x_{n})- \nabla g(x_{n})\bigr\Vert \bigr) \\ &\qquad {} -D_{g}\bigl(U_{n,k}x_{n},\nabla g^{*} \bigl[\beta_{n,k} \nabla g(S_{k}U_{n,k+1}x_{n})+(1- \beta_{n,k})\nabla g(x_{n})\bigr]\bigr). \end{aligned}$$

Consequently, for every \(w\in F\) we have

$$\begin{aligned} &D_{g}(w,W_{n}x_{n}) \\ &\quad =D_{g}(w,U_{n,1}x_{n}) \\ &\quad \leq\beta_{n,1}D_{g}(w,U_{n,2}x_{n})+(1- \beta_{n,1})D_{g}(w,x_{n}) \\ &\qquad {} -\beta_{n,1}(1-\beta_{n,1})\rho^{*}_{r_{5}} \bigl(\bigl\Vert \nabla g(S_{1}U_{n,2}x_{n})- \nabla g(x_{n})\bigr\Vert \bigr) \\ &\quad \leq\beta_{n,1}\bigl[\beta_{n,2}D_{g}(w,U_{n,3}x_{n})+(1- \beta_{n,2})D_{g}(w,x_{n}) \\ &\qquad {} -\beta_{n,2}(1-\beta_{n,2})\rho^{*}_{r_{5}} \bigl(\bigl\Vert \nabla g(S_{2}U_{n,3}x_{n})- \nabla g(x_{n})\bigr\Vert \bigr) \\ &\qquad {} -D_{g}\bigl(U_{n,2}x_{n},\nabla g^{*} \bigl[\beta_{n,2}\nabla g(S_{2}U_{n,3}x_{n})+(1- \beta_{n,2})\nabla g(x_{n})\bigr]\bigr)\bigr] \\ &\qquad {} +(1-\beta_{n,1})D_{g}(w,x_{n})- \beta_{n,1}(1-\beta_{n,1})\rho^{*}_{r_{5}}\bigl(\bigl\Vert \nabla g(S_{1}U_{n,2}x_{n}) -\nabla g(x_{n})\bigr\Vert \bigr) \\ &\quad \leq\cdots \\ &\quad \leq D_{g}(w,x_{n})-\beta_{n,1}(1- \beta_{n,1})\rho^{*}_{r_{5}}\bigl(\bigl\Vert \nabla g(S_{1}U_{n,2}x_{n})-\nabla g(x_{n})\bigr\Vert \bigr) \\ &\qquad {}-\beta_{n,1}\beta_{n,2}(1-\beta_{n,2}) \rho^{*}_{r_{5}}\bigl(\bigl\Vert \nabla g(S_{2}U_{n,3}x_{n})- \nabla g(x_{n})\bigr\Vert \bigr)-\cdots \\ &\qquad {} -\beta_{n,1}D_{g}\bigl(U_{n,2}x_{n}, \nabla g^{*}\bigl[\beta_{n,2} \nabla g(S_{2}U_{n,3}x_{n})+(1- \beta_{n,2})\nabla g(x_{n})\bigr]\bigr)-\cdots \\ &\qquad {} -\beta_{n,1}\beta_{n,2}\cdots\beta _{n,{n-1}}D_{g}\bigl(U_{n,n}x_{n},\nabla g^{*} \bigl[\beta_{n,n} \nabla g(S_{n}U_{n,n+1}x_{n})+(1- \beta_{n,n})\nabla g(x_{n})\bigr]\bigr) \end{aligned}$$
(3.15)

for all \(n\in\Bbb{N}\). Since ∇g is uniformly norm-to-norm continuous on bounded subsets of E, we obtain from (3.14) that

$$\lim_{n\to\infty}\beta_{n,1}\bigl\Vert \nabla g(S_{1}U_{n,2}x_{n})-\nabla g(x_{n})\bigr\Vert =\lim_{n\to\infty}\bigl\Vert \nabla g(W_{n}x_{n})- \nabla g(x_{n})\bigr\Vert =0. $$

This implies that

$$\lim_{n\to\infty}\bigl\Vert \nabla g(S_{1}U_{n,2}x_{n})- \nabla g(x_{n})\bigr\Vert =0. $$

Now, in view of (3.14) and (3.15), we conclude that

$$ \lim_{n\to\infty}\bigl\Vert \nabla g(S_{k}U_{n,k+1}x_{n})- \nabla g(x_{n})\bigr\Vert =0,\quad \forall k\in\Bbb{N}. $$
(3.16)

Since \(\nabla g^{*}\) is uniformly norm-to-norm continuous on bounded subsets of \(E^{*}\), we deduce that

$$ \lim_{n\to\infty} \Vert S_{k}U_{n,k+1}x_{n}-x_{n} \Vert =0,\quad \forall k\in\Bbb{N}. $$
(3.17)

On the other hand, we have

$$\lim_{n\to\infty}D_{g}\bigl(U_{n,k}x_{n}, \nabla g^{*}\bigl[\beta_{n,k} \nabla g(S_{k}U_{n,k+1}x_{n})+(1- \beta_{n,k})\nabla g(x_{n})\bigr]\bigr)=0,\quad \forall k\in \Bbb{N} \mbox{ with } k\geq 2. $$

This, together with Lemma 2.2, implies that

$$\begin{aligned} &\lim_{n\to\infty}\bigl\Vert U_{n,k}x_{n}- \nabla g^{*}\bigl[\beta_{n,k} \nabla g(S_{k}U_{n,k+1}x_{n})+(1- \beta_{n,k})\nabla g(x_{n})\bigr]\bigr\Vert =0, \\ &\quad \forall k\in\Bbb{N} \mbox{ with } k\geq 2. \end{aligned}$$
(3.18)

In view of (3.16), we obtain

$$\lim_{n\to\infty}\bigl\Vert \bigl[\beta_{n,k} \nabla g(S_{k}U_{n,k+1}x_{n})+(1-\beta _{n,k})\nabla g(x_{n})\bigr]-\nabla g(x_{n})\bigr\Vert =0,\quad \forall k\in\Bbb{N}. $$

Therefore,

$$\lim_{n\to\infty}\bigl\Vert \nabla g^{*}\bigl[ \beta_{n,k} \nabla g(S_{k}U_{n,k+1}x_{n})+(1- \beta_{n,k})\nabla g(x_{n})\bigr]-x_{n}\bigr\Vert =0,\quad \forall k\in\Bbb{N}. $$

From (3.14), (3.18) and the limit above, we get

$$\lim_{n\to\infty} \Vert U_{n,k}x_{n}-x_{n} \Vert =0,\quad \forall k\in\Bbb{N}. $$

This, together with (3.17), implies that

$$\lim_{n\to\infty} \Vert S_{k}U_{n,k+1}x_{n}-U_{n,k+1}x_{n} \Vert =0,\quad \forall k\in\Bbb{N}. $$

Since \(U_{n,k+1}x_{n}\to u\), \(\Vert S_{k}U_{n,k+1}x_{n}-U_{n,k+1}x_{n}\Vert \to0\) and \(S_{k}\) is Bregman weak relatively nonexpansive, we obtain \(u\in\tilde{F}(S_{k})=F(S_{k})\) for every \(k\in \Bbb{N}\), and hence \(u\in F\).

Finally, we show that \(u=\operatorname {proj}^{g}_{F}x\). From \(x_{n}=\operatorname {proj}^{g}_{C_{n}}x\), we conclude that

$$\bigl\langle z-x_{n},\nabla g(x_{n})-\nabla g(x) \bigr\rangle \geq 0,\quad \forall z\in C_{n}. $$

Since \(F\subset C_{n}\) for each \(n\in\mathbb{N}\), we obtain

$$ \bigl\langle z-x_{n},\nabla g(x_{n})-\nabla g(x) \bigr\rangle \geq0,\quad \forall z\in F. $$
(3.19)

Letting \(n\to\infty\) in (3.19), we deduce that

$$\bigl\langle z-u,\nabla g(u)-\nabla g(x) \bigr\rangle \geq0,\quad \forall z\in F. $$

In view of (1.8), we have \(u=\operatorname {proj}^{g}_{F}x\). Since \(x_{n}\to u\), it follows that \(x_{n}\to \operatorname {proj}^{g}_{F}x\) as \(n\to\infty\), which completes the proof. □

Theorem 3.2

Let E be a reflexive Banach space and \(g:E\to\mathbb{R}\) be a convex, continuous, strongly coercive and Gâteaux differentiable function which is bounded on bounded subsets and uniformly convex on bounded subsets of E. Let C be a nonempty, closed and convex subset of E. Let \(\{S_{n}\}_{n\in\Bbb{N}}\) be a family of Bregman weak relatively nonexpansive mappings of C into itself such that \(F:=\bigcap_{n=1}^{\infty}F(S_{n})\neq{\O}\), and let \(T_{n}x=\nabla g^{*}(\sum_{j=1}^{n}\beta_{n,j}\nabla g(S_{j}x))\) for every \(n\in \Bbb{N}\) and \(x\in C\), where \(0\leq\beta_{n,j}\leq1\) (\(n\in \Bbb{N}\), \(j=1,2,\ldots,n\)) with \(\sum_{j=1}^{n}\beta_{n,j}=1\) for all \(n\in\Bbb{N}\) and \(\liminf_{n\to\infty}\beta_{n,j}>0\) for each \(j\in\Bbb{N}\). Let \(\{\alpha_{n}\}_{n\in \mathbb{N}\cup\{0\}}\) be a sequence in \([0,1)\) such that \(\liminf_{n\to \infty}\alpha_{n}(1-\alpha_{n})>0\). Let \(\{x_{n}\}_{n\in\mathbb{N}}\) be a sequence generated by

$$ \textstyle\begin{cases} x_{0}=x\in C\quad \textit{chosen arbitrarily},\\ C_{0}=C,\\ y_{n}=\nabla g^{*}[\alpha_{n} \nabla g(x_{n})+(1-\alpha_{n})\nabla g(T_{n}x_{n})],\\ C_{n+1}=\{z\in C_{n}:D_{g}(z,y_{n})\leq D_{g}(z,x_{n})\},\\ x_{n+1}=\operatorname {proj}^{g}_{C_{n+1}}x \quad \textit{and}\quad n\in\mathbb{N}\cup\{0\}, \end{cases} $$
(3.20)

where \(\nabla g\) is the gradient of g. Then \(\{x_{n}\}_{n\in\mathbb{N}}\) converges strongly to \(\operatorname {proj}^{g}_{F}x_{0}\) as \(n\to\infty\).
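
To make the construction of \(T_{n}\) concrete, the following Python sketch (ours, not part of the paper) treats the Hilbert-space case \(g(x)=\frac{1}{2}\Vert x\Vert ^{2}\), in which \(\nabla g\) and \(\nabla g^{*}\) are both the identity and \(T_{n}\) reduces to the convex combination \(T_{n}x=\sum_{j=1}^{n}\beta_{n,j}S_{j}x\). The mappings \(S_{j}x=x/(j+1)\) below are illustrative stand-ins with common fixed point 0; they are not taken from the paper.

```python
import numpy as np

# A minimal sketch (ours, not from the paper) of the mapping T_n of Theorem 3.2
# in the Hilbert-space case g(x) = ||x||^2 / 2, where grad g and grad g* are
# both the identity, so T_n x = sum_j beta_{n,j} * S_j(x) is a plain convex
# combination. The maps S_j(x) = x / (j + 1) are illustrative stand-ins with
# common fixed point 0.

def T_n(x, mappings, weights):
    """T_n x = grad g*( sum_j beta_{n,j} grad g(S_j x) ), here with g = ||.||^2 / 2."""
    assert abs(sum(weights) - 1.0) < 1e-12  # the beta_{n,j} must sum to 1
    return sum(b * S(x) for b, S in zip(weights, mappings))

n = 4
mappings = [lambda x, j=j: x / (j + 1) for j in range(1, n + 1)]
weights = [1.0 / n] * n  # beta_{n,j} = 1/n, so liminf_n beta_{n,j} > 0

x = np.array([1.0, -2.0, 0.5])
print(T_n(x, mappings, weights))  # pulled toward the common fixed point 0
```

In a genuinely non-Hilbert setting one would wrap the convex combination between the actual maps \(\nabla g\) and \(\nabla g^{*}\).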

Remark 3.1

Theorem 3.1 improves Theorem 1.1 in the following aspects.

(1)

    For the structure of Banach spaces, we extend the duality mapping to a more general case, that is, a convex, continuous and strongly coercive Bregman function which is bounded on bounded subsets, and uniformly convex and uniformly smooth on bounded subsets.

(2)

    For the mappings, we extend the mapping from a relatively nonexpansive mapping to a countable family of Bregman W-mappings. We remove the assumption \(\hat{F}(T)=F(T)\) on the mapping T and extend the result to a countable family of Bregman weak relatively nonexpansive mappings, where \(\hat{F}(T)\) is the set of asymptotic fixed points of the mapping T.

(3)

    For the algorithm, we remove the set \(W_{n}\) in Theorem 1.1.

The following result was proved in [29].

Lemma 3.2

Let E be a reflexive Banach space and \(g:E\to\mathbb{R}\) be a strongly coercive Bregman function which is bounded on bounded subsets, and uniformly convex and uniformly smooth on bounded subsets of E. Let A be a maximal monotone operator from E to \(E^{*}\) such that \(A^{-1}(0)\neq{\O}\). Let \(r>0\) and \(\operatorname {Res}^{g}_{rA}=(\nabla g+rA)^{-1}\nabla g\) be the g-resolvent of A. Then \(\operatorname {Res}^{g}_{rA}\) is a Bregman weak relatively nonexpansive mapping.

As an application of our main result, using Theorem 3.1 we obtain the following strong convergence theorem for maximal monotone operators; a concrete example in support of Theorem 3.1 is given in Example 3.1 below.

Theorem 3.3

Let E be a reflexive Banach space and \(g:E\to\mathbb{R}\) be a strongly coercive Bregman function which is bounded on bounded subsets, and uniformly convex and uniformly smooth on bounded subsets of E. Let \(\{A_{n}\}_{n\in\Bbb{N}}\) be an infinite family of maximal monotone operators from E to \(E^{*}\) such that \(Z=\bigcap_{n=1}^{\infty}A_{n}^{-1}(0)\neq{\O}\). Let \(r>0\) and \(\operatorname {Res}^{g}_{rA_{n}}=(\nabla g+rA_{n})^{-1}\nabla g\) be the g-resolvent of \(A_{n}\). Let \(\{x_{n}\}_{n\in\mathbb{N}}\) be a sequence generated by

$$ \textstyle\begin{cases} x_{0}=x\in E\quad \textit{chosen arbitrarily},\\ C_{0}=E,\\ y_{n}=\nabla g^{*} [\alpha_{n} \nabla g(x_{n})+(1-\alpha_{n})\nabla g (W_{n}x_{n} ) ],\\ C_{n+1}=\{z\in C_{n}:D_{g}(z,y_{n})\leq D_{g}(z,x_{n})\},\\ x_{n+1}=\operatorname {proj}^{g}_{C_{n+1}}x\quad \textit{and}\quad n\in \mathbb{N}\cup\{0\}, \end{cases} $$
(3.21)

where \(\nabla g\) is the gradient of g and \(W_{n}\) is the Bregman W-mapping generated by \(\operatorname {Res}^{g}_{rA_{n}}\), \(\operatorname {Res}^{g}_{rA_{n-1}}\), …, \(\operatorname {Res}^{g}_{rA_{1}}\) and \(\beta_{n,n},\beta_{n,n-1},\ldots,\beta_{n,1}\). Let \(\{\alpha_{n}\}_{n\in\mathbb{N}\cup\{0\}}\) and \(\{\beta_{n}\}_{n\in \mathbb{N}\cup\{0\}}\) be sequences in \([0,1)\) satisfying the following control conditions:

(1)

    \(\liminf_{n\to\infty}\alpha_{n}(1-\alpha_{n})>0\);

(2)

    \(0\leq\beta_{n}<1\) for all \(n\in\mathbb{N}\cup\{0\}\) and \(\liminf_{n\to \infty}\beta_{n}<1\).

Then the sequence \(\{x_{n}\}_{n\in\mathbb{N}}\) defined in (3.21) converges strongly to \(\operatorname {proj}^{g}_{Z}x\) as \(n\to\infty\).

Proof

Letting \(S_{n}=\operatorname {Res}^{g}_{rA_{n}}\), \(\forall n\in \mathbb{N}\), in Theorem 3.1, from (3.1) we obtain (3.21). We only need to show that \(S_{n}\) satisfies all the conditions in Theorem 3.1 for all \(n\in\mathbb{N}\). In view of Lemma 3.2, we conclude that \(S_{n}\) is a Bregman weak relatively nonexpansive mapping for each \(n\in\mathbb{N}\). Thus, we obtain

$$D_{g} \bigl(p,\operatorname {Res}^{g}_{rA_{n}}v \bigr) \leq D_{g}(p,v),\quad \forall v\in E, p\in F \bigl(\operatorname {Res}^{g}_{rA_{n}} \bigr) $$

and

$$\tilde{F} \bigl(\operatorname {Res}^{g}_{rA_{n}} \bigr)=F \bigl( \operatorname {Res}^{g}_{rA_{n}} \bigr)=A_{n}^{-1}(0), $$

where \(\tilde{F} (\operatorname {Res}^{g}_{rA_{n}} )\) is the set of all strong asymptotic fixed points of \(\operatorname {Res}^{g}_{rA_{n}}\). Therefore, in view of Theorem 3.1, we obtain the conclusion of Theorem 3.3. This completes the proof. □
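
As a quick illustration of Lemma 3.2 (our example, chosen for simplicity rather than taken from the paper), let \(E=\mathbb{R}\), \(g(x)=\frac{1}{2}x^{2}\) and \(A=\partial\vert \cdot\vert \). Then \(\nabla g\) is the identity, \(\operatorname {Res}^{g}_{rA}=(I+rA)^{-1}\) is the classical resolvent, namely soft-thresholding, and its fixed point set is \(A^{-1}(0)=\{0\}\); iterating the resolvent is the proximal point algorithm of [15].

```python
# A sanity check (ours) of Lemma 3.2 for E = R, g(x) = x^2 / 2 and the maximal
# monotone operator A = subdifferential of f(x) = |x|. Then grad g is the
# identity, Res^g_{rA} = (I + rA)^{-1} is soft-thresholding, and its fixed
# point set is exactly A^{-1}(0) = {0}.

def resolvent_abs(x, r):
    """(I + r d|.|)^{-1} x, i.e. soft-thresholding with parameter r > 0."""
    if x > r:
        return x - r
    if x < -r:
        return x + r
    return 0.0

x, r = 5.0, 1.0
for _ in range(10):          # proximal point iteration x_{k+1} = Res(x_k)
    x = resolvent_abs(x, r)
print(x)                     # 0.0, the unique zero of A
```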

Below we include a nontrivial example of an infinite family of Bregman weak relatively nonexpansive mappings from which a Bregman W-mapping can be constructed in the setting of Hilbert spaces.

Example 3.1

Let \(E=l^{2}\), where

$$\begin{aligned}& l^{2}= \Biggl\{ \sigma=(\sigma_{1}, \sigma_{2},\ldots,\sigma_{n},\ldots) :\sum _{n=1}^{\infty} \vert \sigma_{n}\vert ^{2}< \infty \Biggr\} , \qquad \Vert \sigma \Vert = \Biggl(\sum _{n=1}^{\infty} \vert \sigma_{n} \vert ^{2} \Biggr)^{\frac{1}{2}},\quad \forall \sigma\in l^{2}, \\& \langle\sigma,\eta \rangle=\sum_{n=1}^{\infty}\sigma_{n}\eta_{n},\quad \forall\sigma=(\sigma_{1}, \sigma _{2},\ldots,\sigma_{n},\ldots), \eta=( \eta_{1},\eta_{2},\ldots,\eta_{n},\ldots)\in l^{2}. \end{aligned}$$

Let \(\{x_{n}\}_{n\in\mathbb{N}\cup\{0\}}\subset E\) be a sequence defined by

$$\begin{aligned}& x_{0}=(1,0,0,0,\ldots), \\& x_{1}=(1,1,0,0,0,\ldots), \\& x_{2}=(1,0,1,0,0,0,\ldots), \\& x_{3}=(1,0,0,1,0,0,0,\ldots), \\& \ldots \\& x_{n}=(\sigma_{n,1},\sigma_{n,2},\ldots, \sigma_{n,k},\ldots), \\& \ldots, \end{aligned}$$

where

$$\sigma_{n,k}= \textstyle\begin{cases} 1&\mbox{if } k=1, n+1,\\ 0&\mbox{if } k\neq1, k\neq n+1, \end{cases} $$

for all \(n\in\mathbb{N}\). It is clear that the sequence \(\{x_{n}\}_{n\in\mathbb{N}}\) converges weakly to \(x_{0}\). Indeed, for any \(\Lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{n},\ldots)\in l^{2}=(l^{2})^{*}\), we have

$$\begin{aligned} \Lambda(x_{n}-x_{0})&=\langle x_{n}-x_{0},\Lambda\rangle\\ &=\sum _{k=2}^{\infty }\lambda_{k}\sigma_{n,k}=\lambda_{n+1} \to0 \end{aligned} $$

as \(n\to\infty\). It is also obvious that \(\Vert x_{n}-x_{m}\Vert =\sqrt{2}\) for any \(n\ne m\) with \(n,m\geq1\). Thus, \(\{x_{n}\}_{n\in\mathbb{N}}\) is not a Cauchy sequence. Let k be an even number in \(\mathbb{N}\) and let \(g:E\to\mathbb{R}\) be defined by

$$g(x)=\frac{1}{k}\Vert x\Vert ^{k},\quad x\in E. $$

It is easy to show that \(\nabla g(x)=J_{k}(x)\) for all \(x\in E\), where

$$J_{k}(x)= \bigl\{ x^{*}\in E^{*}:\bigl\langle x,x^{*}\bigr\rangle = \Vert x\Vert \bigl\Vert x^{*}\bigr\Vert , \bigl\Vert x^{*}\bigr\Vert = \Vert x\Vert ^{k-1} \bigr\} . $$

It is also obvious that

$$J_{k}(\lambda x)=\lambda^{k-1}J_{k}(x), \quad \forall x\in E, \lambda\in \mathbb{R}. $$
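
These identities are easy to check numerically. In the Hilbert space \(l^{2}\) the map \(J_{k}\) has the explicit form \(J_{k}(x)=\Vert x\Vert ^{k-2}x\), and the following short script (ours, for illustration only) verifies the defining relations and the homogeneity property above in a finite-dimensional truncation:

```python
import numpy as np

# A numerical check (ours) of the duality-map identities for J_k in a
# finite-dimensional truncation of l^2, where J_k(x) = ||x||^{k-2} x.
# We verify <x, J_k x> = ||x|| ||J_k x||, ||J_k x|| = ||x||^{k-1}, and
# J_k(lam * x) = lam^{k-1} J_k(x) for even k.

def J(x, k):
    return np.linalg.norm(x) ** (k - 2) * x

k = 4                               # an even number, as in the example
x = np.array([1.0, 0.0, 1.0, 0.0])  # a truncation of x_2 = (1, 0, 1, 0, ...)
lam = -1.5
print(np.isclose(x @ J(x, k), np.linalg.norm(x) ** k))                    # True
print(np.isclose(np.linalg.norm(J(x, k)), np.linalg.norm(x) ** (k - 1)))  # True
print(np.allclose(J(lam * x, k), lam ** (k - 1) * J(x, k)))               # True
```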

We define a countable family of mappings \(S_{j}:E\to E\) by

$$S_{j}(x)= \textstyle\begin{cases} \frac{n}{n+1}x&\mbox{if } x=x_{n};\\ \frac{-j}{j+1}x&\mbox{if } x\neq x_{n}, \end{cases} $$

for all \(j\geq1\) and \(n\geq0\). It is clear that \(F(S_{j})=\{0\}\) for all \(j\geq1\). Fix \(j\in\mathbb{N}\). Then, for any \(n\in\mathbb{N}\), we have

$$\begin{aligned} D_{g}(0,S_{j}x_{n})&=g(0)-g(S_{j}x_{n})- \bigl\langle 0-S_{j}x_{n},\nabla g(S_{j}x_{n}) \bigr\rangle \\ &= -\frac{n^{k}}{(n+1)^{k}}g(x_{n})+\frac{n^{k}}{(n+1)^{k}}\bigl\langle x_{n},\nabla g(x_{n})\bigr\rangle \\ &=\frac{n^{k}}{(n+1)^{k}}\bigl[-g(x_{n})+\bigl\langle x_{n},\nabla g(x_{n})\bigr\rangle \bigr] \\ &=\frac{n^{k}}{(n+1)^{k}}\bigl[D_{g}(0,x_{n})\bigr] \\ &\leq D_{g}(0,x_{n}). \end{aligned}$$

If \(x\neq x_{n}\), then we have

$$\begin{aligned} D_{g}(0,S_{j}x)&=g(0)-g(S_{j}x)-\bigl\langle 0-S_{j}x,\nabla g(S_{j}x)\bigr\rangle \\ &=-\frac{j^{k}}{(j+1)^{k}}g(x)-\frac{j^{k}}{(j+1)^{k}}\bigl\langle x,-\nabla g(x)\bigr\rangle \\ &=\frac{j^{k}}{(j+1)^{k}}\bigl[-g(x)+\bigl\langle x,\nabla g(x)\bigr\rangle \bigr] \\ &=\frac{j^{k}}{(j+1)^{k}}D_{g}(0,x) \\ &\leq D_{g}(0,x). \end{aligned}$$

Therefore, \(S_{j}\) is a Bregman quasi-nonexpansive mapping. Next, we claim that \(S_{j}\) is a Bregman weak relatively nonexpansive mapping. Indeed, for any sequence \(\{z_{n}\}_{n\in \mathbb{N}}\subset E\) such that \(z_{n}\to z_{0}\) and \(\Vert z_{n}-S_{j}z_{n}\Vert \to0\) as \(n\to\infty\), there exists a sufficiently large number \(N_{0}\in\mathbb{N}\) such that \(z_{n}\neq x_{m}\) for any \(n,m>N_{0}\). This implies that \(S_{j}z_{n}=-\frac{j}{j+1}z_{n}\) for all \(n>N_{0}\). It follows from \(\Vert z_{n}-S_{j}z_{n}\Vert \to0\) that \(\frac{2j+1}{j+1}z_{n}\to0\) and hence \(z_{n}\to z_{0}=0\). Since \(z_{0}\in F(S_{j})\), we conclude that \(S_{j}\) is a Bregman weak relatively nonexpansive mapping. It is clear that \(\bigcap_{j=1}^{\infty}\tilde{F}(S_{j})=\bigcap_{j=1}^{\infty}F(S_{j})=\{0\}\). Thus \(\{S_{j}\}_{j\in\mathbb{N}}\) is a countable family of Bregman weak relatively nonexpansive mappings. Next, we show that \(\{S_{j}\}_{j\in\mathbb{N}}\) is not a countable family of Bregman relatively nonexpansive mappings. In fact, \(x_{n}\rightharpoonup x_{0}\) and

$$\Vert x_{n}-S_{j}x_{n}\Vert = \biggl\Vert x_{n}-\frac{n}{n+1}x_{n}\biggr\Vert = \frac{1}{n+1}\Vert x_{n}\Vert \to0 $$

as \(n\to\infty\), but \(x_{0}\notin F(S_{j})\) for all \(j\in \mathbb{N}\). Therefore, \(\hat{F}(S_{j})\neq F(S_{j})\) for all \(j\in \mathbb{N}\). This implies that \(\bigcap_{j=1}^{\infty}\hat{F}(S_{j})\ne \bigcap_{j=1}^{\infty}F(S_{j})\). Let \(\{\beta_{n,k}:k,n\in\Bbb{N}, 1\leq k\leq n\}\) be a sequence of real numbers such that \(0< \beta_{n,1}\leq1\) and \(0<\beta_{n,i}<1\) for every \(i=2,3,\ldots,n\). Let \(W_{n}\) be the Bregman W-mapping generated by \(S_{n},S_{n-1},\ldots,S_{1}\) and \(\beta_{n,n},\beta_{n,n-1},\ldots,\beta_{n,1}\). Finally, it is obvious that the family \(\{S_{j}\}_{j\in\mathbb{N}}\) satisfies all the hypotheses of Theorem 3.1.
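
For readers who wish to experiment, the following tiny script (ours) illustrates the computation above in a finite-dimensional truncation of \(l^{2}\): \(\Vert x_{n}-S_{j}x_{n}\Vert =\Vert x_{n}\Vert /(n+1)\to0\), even though the weak limit \(x_{0}\) of \(\{x_{n}\}_{n\in\mathbb{N}}\) is not a fixed point of any \(S_{j}\).

```python
import numpy as np

# A small numeric illustration (ours) of Example 3.1, truncating l^2 to 12
# coordinates: ||x_n - S_j x_n|| = ||x_n|| / (n + 1) -> 0, yet the weak limit
# x_0 is not a fixed point of S_j.

def x_vec(n, dim=12):
    """Truncation of x_n: 1 in coordinate 1 and (for n >= 1) in coordinate n+1."""
    v = np.zeros(dim)
    v[0] = 1.0
    if n >= 1:
        v[n] = 1.0  # 0-based index n is the paper's coordinate n+1
    return v

for n in (1, 5, 10):  # on the sequence {x_n}, S_j x_n = n/(n+1) * x_n
    xn = x_vec(n)
    print(n, np.linalg.norm(xn - n / (n + 1) * xn))  # sqrt(2)/(n+1) -> 0
```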

4 Applications to convex feasibility problems

Let \(\{D_{n}\}_{n\in\Bbb{N}}\) be a family of nonempty, closed and convex subsets of a Banach space E. The convex feasibility problem is to find an element in the assumed nonempty intersection \(\bigcap_{n=1}^{\infty}D_{n}\) (see [36]). In the following, we prove a strong convergence theorem concerning convex feasibility problems in a reflexive Banach space.

Theorem 4.1

Let E be a reflexive Banach space and \(g:E\to\mathbb{R}\) be a convex, continuous, strongly coercive and Gâteaux differentiable function which is bounded on bounded subsets and uniformly convex on bounded subsets of E. Let C be a nonempty, closed and convex subset of E. Let \(\{D_{n}\}_{n\in\Bbb{N}}\) be an infinite family of nonempty, closed and convex subsets of E such that \(F:=\bigcap_{n=1}^{\infty}D_{n}\neq{\O}\), and let \(\{\beta_{n,k}:k,n\in\Bbb{N}, 1\leq k\leq n\}\) be a sequence of real numbers such that \(0<\beta_{n,1}\leq1\) and \(0<\beta_{n,i}<1\) for every \(i=2,3,\ldots,n\). Let \(W_{n}\) be the Bregman W-mapping generated by \(\operatorname {proj}^{g}_{D_{n}}\), \(\operatorname {proj}^{g}_{D_{n-1}}, \ldots, \operatorname {proj}^{g}_{D_{1}}\) and \(\beta_{n,n},\beta_{n,n-1},\ldots,\beta_{n,1}\). Let \(\{\alpha_{n}\}_{n\in\mathbb{N}\cup\{0\}}\) be a sequence in \([0,1)\) such that \(\liminf_{n\to\infty}\alpha_{n}(1-\alpha_{n})>0\). Let \(\{x_{n}\}_{n\in\mathbb{N}}\) be a sequence generated by

$$ \textstyle\begin{cases} x_{0}=x\in C\quad \textit{chosen arbitrarily},\\ C_{0}=C,\\ y_{n}=\nabla g^{*}[\alpha_{n} \nabla g(x_{n})+(1-\alpha_{n})\nabla g(W_{n}x_{n})],\\ C_{n+1}=\{z\in C_{n}:D_{g}(z,y_{n})\leq D_{g}(z,x_{n})\},\\ x_{n+1}=\operatorname {proj}^{g}_{C_{n+1}}x \quad \textit{and}\quad n\in \mathbb{N}\cup\{0\}, \end{cases} $$
(4.1)

where \(\nabla g\) is the gradient of g. Then \(\{x_{n}\}_{n\in\mathbb{N}}\) defined in (4.1) converges strongly to \(\operatorname {proj}^{g}_{F}x_{0}\) as \(n\to\infty\).

Proof

For each \(j\in\Bbb{N}\), let \(S_{j}=\operatorname {proj}^{g}_{D_{j}}\). We will prove that \(S_{j}\) is a Bregman weak relatively nonexpansive mapping. Indeed, for any sequence \(\{z_{n}\}_{n\in\mathbb{N}}\subset E\) such that \(z_{n}\to z_{0}\) and \(\Vert z_{n}-S_{j}z_{n}\Vert \to0\) as \(n\to\infty\), in view of Lemma 2.2, we conclude that

$$ \begin{aligned} &\lim_{n\to \infty}D_{g}(z_{n},S_{j}z_{n})=0,\\ & \lim_{n\to \infty}D_{g}(z_{n},z_{0})=0. \end{aligned} $$
(4.2)

It follows from (1.9) that

$$D_{g} \bigl(z_{n},\operatorname {proj}^{g}_{D_{j}}z_{n} \bigr)+D_{g} \bigl(\operatorname {proj}^{g}_{D_{j}}z_{n},z_{0} \bigr)\leq D_{g}(z_{n},z_{0}). $$

This, together with (4.2), implies that

$$\lim_{n\to \infty}D_{g} \bigl(\operatorname {proj}^{g}_{D_{j}}z_{n},z_{0} \bigr)=0 $$

and hence by Lemma 2.2

$$\lim_{n\to \infty}\bigl\Vert \operatorname {proj}^{g}_{D_{j}}z_{n}-z_{0} \bigr\Vert =0. $$

Thus we obtain \(z_{0}\in F(S_{j})=D_{j}\), and hence \(S_{j}\) is a Bregman weak relatively nonexpansive mapping. By an argument similar to that in the proof of Theorem 3.1, we obtain the desired conclusion. This completes the proof. □
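
To get a concrete feel for Theorem 4.1, consider the Hilbert case \(g(x)=\frac{1}{2}\Vert x\Vert ^{2}\), in which \(\operatorname {proj}^{g}_{D}\) is the metric projection. The following Python sketch (our toy instance, using a plain alternating-projection scheme rather than algorithm (4.1) itself) finds a point in the intersection of a closed ball and a halfplane in \(\mathbb{R}^{2}\):

```python
import numpy as np

# Toy convex feasibility instance (ours, not from the paper): D1 = closed unit
# ball, D2 = {x : x1 + x2 >= 1}. With g(x) = ||x||^2 / 2 the Bregman projection
# is the metric projection, and alternating projections converge to a point of
# the (nonempty) intersection of D1 and D2.

def proj_ball(x):
    """Metric projection onto the closed unit ball."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

def proj_halfplane(x, a=np.array([1.0, 1.0]), b=1.0):
    """Metric projection onto the halfplane {z : <a, z> >= b}."""
    return x + max(0.0, (b - a @ x) / (a @ a)) * a

x = np.array([3.0, -2.0])
for _ in range(60):
    x = proj_halfplane(proj_ball(x))

print(x)  # approaches (1, 0), a point of both D1 and D2
print(np.linalg.norm(x) <= 1 + 1e-6, x.sum() >= 1 - 1e-6)  # feasibility check
```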

5 Numerical example

In this section, in order to demonstrate the effectiveness and convergence of the algorithm of Theorem 3.1, we consider the following simple example.

Example 5.1

Let \(S:[0,2]\to[0,2]\) be defined by

$$Sx= \textstyle\begin{cases} 0 &\mbox{if } x\neq2,\\ 1 &\mbox{if } x=2. \end{cases} $$

Then S is a quasi-nonexpansive mapping. Indeed, for any \(x\in[0,2)\), we have \(Sx=0\). Thus,

$$\vert Sx-0\vert ^{2}=0\leq \vert x-0\vert ^{2}. $$

The other cases can be verified similarly. It is worth mentioning that S is neither nonexpansive nor continuous. Let \(E=\mathbb{R}\) and \(g(x)=\frac{1}{2}x^{2}\), so that \(\nabla g\) is the identity and \(D_{g}(x,y)=\frac{1}{2}\vert x-y\vert ^{2}\). Let \(S_{n}=S\) for all \(n\in\mathbb{N}\), \(\beta_{n,1}=1\), \(\beta_{n,k}=0\) for all \(n\in\mathbb{N}\) and \(2\leq k\leq n\), and \(\alpha_{n}=\frac{1}{4}\) for all \(n\geq0\). Under the above assumptions, the algorithm (3.1) of Theorem 3.1 simplifies as follows:

$$ \textstyle\begin{cases} x_{0}=x\in[0,2]\quad \mbox{chosen arbitrarily},\\ C_{0}=[0,2],\\ y_{n}=\frac{1}{4} x_{n}+\frac{3}{4}Sx_{n},\\ C_{n+1}=\{z\in C_{n}:\vert z-y_{n}\vert \leq \vert z-x_{n}\vert \},\\ x_{n+1}=P_{C_{n+1}}x \quad \text{and}\quad n\in\mathbb{N}\cup\{0\}. \end{cases} $$
(5.1)

We know that, in the one-dimensional case, each set \(C_{n+1}\) is a closed interval. If we set \([a_{n+1},b_{n+1}]:=C_{n+1}\), then the projection point \(x_{n+1}\) of \(x\in C\) onto \(C_{n+1}\) can be expressed as

$$x_{n+1}:=P_{C_{n+1}}x= \textstyle\begin{cases} x&\mbox{if } x\in[a_{n+1},b_{n+1}];\\ b_{n+1}& \mbox{if } x>b_{n+1};\\ a_{n+1}& \mbox{if } x< a_{n+1}. \end{cases} $$

Choose \(x_{0}=x=1\). Then the iteration process (5.1) becomes

$$ \begin{aligned} &C_{0}=[0,2], \qquad y_{n}=\frac{1}{4} x_{n}, \\ &C_{n+1}= \biggl[0,\frac{5}{8}x_{n} \biggr],\qquad x_{n+1}= \biggl(\frac{5}{8} \biggr)^{n+1}. \end{aligned} $$
(5.2)

We now present some numerical results for this example; the computations were originally carried out in Matlab.
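
The original Matlab code is not reproduced here; the following equivalent Python sketch (ours) implements the simplified scheme (5.1) and reproduces \(x_{n+1}=(\frac{5}{8})^{n+1}\):

```python
# A Python re-implementation (ours; the authors used Matlab) of the
# simplified scheme (5.1) with x_0 = x = 1.

def S(t):
    return 1.0 if t == 2 else 0.0

a, b = 0.0, 2.0                # C_0 = [0, 2]
x0 = x = 1.0
for n in range(30):
    y = 0.25 * x + 0.75 * S(x)
    # C_{n+1} = {z in C_n : |z - y| <= |z - x|} = C_n intersected with
    # the halfline (-inf, (x + y)/2], since y < x here
    b = min(b, (x + y) / 2.0)
    x = min(max(x0, a), b)     # x_{n+1} = P_{C_{n+1}} x_0
    print(n + 1, x)            # x_{n+1} = (5/8)^{n+1} -> 0
```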

6 Conclusion

Table 1 and Figure 1 show that the sequence \(\{x_{n}\}_{n\in\Bbb{N}}\) generated by (5.2) converges to 0, the unique fixed point of S, which solves the fixed point problem.

Figure 1: Iteration chart of the sequence \(\{x_{n}\}_{n\in\Bbb{N}}\) in Example 5.1 with initial value \(x_{0}=1\).

Table 1: Values of the sequence \(\{x_{n}\}_{n\in\Bbb{N}}\) over the first 30 iteration steps (initial value \(x_{0}=1\)).

References

  1. Takahashi, W: Nonlinear Functional Analysis. Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama (2000)

  2. Takahashi, W: Convex Analysis and Approximation of Fixed Points. Yokohama Publishers, Yokohama (2000)

  3. Mann, WR: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506-510 (1953)

  4. Reich, S: Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 67, 274-276 (1979)

  5. Genel, A, Lindenstrauss, J: An example concerning fixed points. Isr. J. Math. 22, 81-86 (1975)

  6. Güler, O: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 29, 403-419 (1991)

  7. Bauschke, HH, Matoušková, E, Reich, S: Projection and proximal point methods: convergence results and counterexamples. Nonlinear Anal. 56, 715-738 (2004)

  8. Bauschke, HH, Combettes, PL: A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 26, 248-264 (2001)

  9. Nakajo, K, Takahashi, W: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 279, 372-379 (2003)

  10. Alber, YI: Metric and generalized projection operators in Banach spaces: properties and applications. In: Kartsatos, A (ed.) Theory and Applications of Nonlinear Operators of Accretive and Monotone Type, pp. 15-50. Marcel Dekker, New York (1996)

  11. Reich, S: A weak convergence theorem for the alternating method with Bregman distances. In: Theory and Applications of Nonlinear Operators of Accretive and Monotone Type, pp. 313-318. Marcel Dekker, New York (1996)

  12. Matsushita, S, Takahashi, W: A strong convergence theorem for relatively nonexpansive mappings in a Banach space. J. Approx. Theory 134, 257-266 (2005)

  13. Butnariu, D, Iusem, AN: Totally Convex Functions for Fixed Points Computation and Infinite Dimensional Optimization. Kluwer Academic Publishers, Dordrecht (2000)

  14. Kohsaka, F, Takahashi, W: Proximal point algorithms with Bregman functions in Banach spaces. J. Nonlinear Convex Anal. 6(3), 505-523 (2005)

  15. Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877-898 (1976)

  16. Rockafellar, RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 149, 75-88 (1970)

  17. Rockafellar, RT: Characterization of the subdifferentials of convex functions. Pac. J. Math. 17, 497-510 (1966)

  18. Rockafellar, RT: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 33, 209-216 (1970)

  19. Butnariu, D, Resmerita, E: Bregman distances, totally convex functions and a method for solving operator equations in Banach spaces. Abstr. Appl. Anal. 2006, Article ID 84919 (2006)

  20. Bonnans, JF, Shapiro, A: Perturbation Analysis of Optimization Problems. Springer, New York (2000)

  21. Bauschke, HH, Borwein, JM, Combettes, PL: Essential smoothness, essential strict convexity, and Legendre functions in Banach spaces. Commun. Contemp. Math. 3, 615-647 (2001)

  22. Bauschke, HH, Borwein, JM: Legendre functions and the method of random Bregman projections. J. Convex Anal. 4, 27-67 (1997)

  23. Bregman, LM: The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 7, 200-217 (1967)

  24. Censor, Y, Lent, A: An iterative row-action method for interval convex programming. J. Optim. Theory Appl. 34, 321-358 (1981)

  25. Naraghirad, E, Takahashi, W, Yao, J-C: Generalized retraction and fixed point theorems using Bregman functions in Banach spaces. J. Nonlinear Convex Anal. 13(1), 141-156 (2012)

  26. Zălinescu, C: Convex Analysis in General Vector Spaces. World Scientific, River Edge (2002)

  27. Bauschke, HH, Borwein, JM, Combettes, PL: Bregman monotone optimization algorithms. SIAM J. Control Optim. 42, 596-636 (2003)

  28. Borwein, JM, Reich, S, Sabach, S: A characterization of Bregman firmly nonexpansive operators using a new monotonicity concept. J. Nonlinear Convex Anal. 12(1), 161-184 (2011)

  29. Naraghirad, E, Yao, J-C: Bregman weak relatively nonexpansive mappings in Banach spaces. Fixed Point Theory Appl. 2013, 141 (2013)

  30. Atsushiba, S, Takahashi, W: Strong convergence theorems for a finite family of nonexpansive mappings and applications. Indian J. Math. 41(3), 435-453 (1999)

  31. Matsushita, S-Y, Nakajo, K, Takahashi, W: Strong convergence theorems obtained by a generalized projections hybrid method for families of mappings in Banach spaces. Nonlinear Anal. 73, 1466-1480 (2010)

  32. Takahashi, W, Shimoji, K: Convergence theorems for nonexpansive mappings and feasibility problems. Math. Comput. Model. 32, 1463-1471 (2000)

  33. Matsushita, S, Takahashi, W: Weak and strong convergence theorems for relatively nonexpansive mappings in Banach spaces. Fixed Point Theory Appl. 2004, 37-47 (2004)

  34. Chen, G, Teboulle, M: Convergence analysis of a proximal-like minimization algorithm using Bregman functions. SIAM J. Optim. 3, 538-543 (1993)

  35. Reich, S, Sabach, S: Existence and approximation of fixed points of Bregman firmly nonexpansive mappings in reflexive Banach spaces. In: Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pp. 299-314. Springer, New York (2010)

  36. Bauschke, HH, Borwein, JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38, 367-426 (1996)


Acknowledgements

The authors would like to thank the referees and the editor for sincere evaluation and constructive comments which improved the paper considerably.

Author information


Correspondence to Eskandar Naraghirad.


Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Cite this article

Naraghirad, E., Timnak, S. Strong convergence theorems for Bregman W-mappings with applications to convex feasibility problems in Banach spaces. Fixed Point Theory Appl 2015, 149 (2015). https://doi.org/10.1186/s13663-015-0395-1
