Bregman weak relatively nonexpansive mappings in Banach spaces

Abstract

In this paper, we introduce a new class of mappings called Bregman weak relatively nonexpansive mappings and propose new hybrid iterative algorithms for finding common fixed points of an infinite family of such mappings in Banach spaces. We prove strong convergence theorems for the sequences produced by the methods. Furthermore, we apply our method to prove strong convergence theorems of iterative algorithms for finding common fixed points of finitely many Bregman weak relatively nonexpansive mappings in reflexive Banach spaces. These algorithms take into account possible computational errors. We also apply our main results to solve equilibrium problems in reflexive Banach spaces. Finally, we study hybrid iterative schemes for finding common solutions of an equilibrium problem, fixed points of an infinite family of Bregman weak relatively nonexpansive mappings and null spaces of a γ-inverse strongly monotone mapping in 2-uniformly convex Banach spaces. An application of our results to the solution of equations of Hammerstein type is also presented. Our results improve and generalize many known results in the current literature.

MSC:47H10, 37C25.

1 Introduction

The hybrid projection method was first introduced by Haugazeau in [1]. In a series of papers [2–12], the authors investigated the hybrid projection method and proved strong and weak convergence theorems for the sequences produced by their methods. The shrinking projection method, which is a generalization of the hybrid projection method, was first introduced by Takahashi et al. in [13]. Throughout this paper, we denote the set of real numbers and the set of positive integers by ℝ and ℕ, respectively. Let E be a Banach space with norm ∥·∥ and dual space E*. For any x ∈ E, we denote the value of x* ∈ E* at x by ⟨x, x*⟩. Let {x_n}_{n∈ℕ} be a sequence in E. We denote the strong convergence of {x_n}_{n∈ℕ} to x ∈ E as n → ∞ by x_n → x and the weak convergence by x_n ⇀ x. The modulus of convexity δ of E is defined by

δ(ε) = inf{1 − ∥(x + y)/2∥ : ∥x∥ ≤ 1, ∥y∥ ≤ 1, ∥x − y∥ ≥ ε}

for every ε with 0 ≤ ε ≤ 2. A Banach space E is said to be uniformly convex if δ(ε) > 0 for every ε > 0. Let S_E = {x ∈ E : ∥x∥ = 1}. The norm of E is said to be Gâteaux differentiable if, for each x, y ∈ S_E, the limit

lim_{t→0} (∥x + ty∥ − ∥x∥)/t
(1.1)

exists. In this case, E is called smooth. If the limit (1.1) is attained uniformly for all x, y ∈ S_E, then E is called uniformly smooth. The Banach space E is said to be strictly convex if ∥(x + y)/2∥ < 1 whenever x, y ∈ S_E and x ≠ y. It is well known that E is uniformly convex if and only if E* is uniformly smooth. It is also known that if E is reflexive, then E is strictly convex if and only if E* is smooth; for more details, see [14, 15].

Let C be a nonempty subset of E and let T : C → E be a mapping. We denote the set of fixed points of T by F(T), i.e., F(T) = {x ∈ C : Tx = x}. A mapping T : C → E is said to be nonexpansive if ∥Tx − Ty∥ ≤ ∥x − y∥ for all x, y ∈ C, and quasi-nonexpansive if F(T) ≠ ∅ and ∥Tx − y∥ ≤ ∥x − y∥ for all x ∈ C and y ∈ F(T). The concept of nonexpansivity plays an important role in the study of the Mann-type iteration [16] for finding fixed points of a mapping T : C → C. Recall that the Mann-type iteration is given by the following formula:

x_{n+1} = γ_n T x_n + (1 − γ_n) x_n,  x_1 ∈ C.
(1.2)

Here, {γ_n}_{n∈ℕ} is a sequence of real numbers in [0, 1] satisfying appropriate conditions. The construction of fixed points of nonexpansive mappings via Mann's algorithm [16] has been investigated extensively in the literature (see, for example, [17] and the references therein). In [17], Reich proved the following interesting result.
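As a concrete illustration of iteration (1.2) (a hypothetical sketch, not taken from the paper), take the Hilbert space ℝ² and let T be the rotation by π/2 about the origin: T is nonexpansive with F(T) = {0}, the unaveraged Picard iterates x_{n+1} = Tx_n merely cycle, while the Mann averages converge. The constant choice γ_n ≡ 1/2 satisfies the condition Σ γ_n(1 − γ_n) = ∞ of Theorem 1.1 below.

```python
import numpy as np

# Mann iteration (1.2) for a nonexpansive map, sketched in the plane.
# T: rotation by 90 degrees; nonexpansive with unique fixed point 0.
theta = np.pi / 2
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([1.0, 0.0])
gamma = 0.5                                 # constant gamma_n; sum gamma_n(1 - gamma_n) = inf
for _ in range(200):
    x = gamma * (T @ x) + (1 - gamma) * x   # x_{n+1} = gamma_n T x_n + (1 - gamma_n) x_n

print(np.linalg.norm(x))                    # tends to 0, the fixed point of T
```

The averaged map ½(I + T) has spectral radius √2/2 < 1 here, which is why the convergence is visibly fast in this toy case; in general Theorem 1.1 only guarantees weak convergence.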

Theorem 1.1 Let C be a closed and convex subset of a uniformly convex Banach space E with a Fréchet differentiable norm, let T : C → C be a nonexpansive mapping with a fixed point, and let {γ_n}_{n∈ℕ} be a sequence of real numbers such that γ_n ∈ [0, 1] and Σ_{n=1}^∞ γ_n(1 − γ_n) = ∞. Then the sequence {x_n}_{n∈ℕ} generated by Mann's algorithm (1.2) converges weakly to a fixed point of T.

However, the convergence of the sequence {x_n}_{n∈ℕ} generated by Mann's algorithm (1.2) is in general not strong (see a counterexample in [18]; see also [19]). Some attempts to modify the Mann iteration method (1.2) so that strong convergence is guaranteed have recently been made. Bauschke and Combettes [4] proposed the following modification of the Mann iteration method for a single nonexpansive mapping T in a Hilbert space H:

x_0 = x ∈ C,
y_n = α_n x_n + (1 − α_n) T x_n,
C_n = {z ∈ C : ∥z − y_n∥ ≤ ∥z − x_n∥},
Q_n = {z ∈ C : ⟨x_n − z, x − x_n⟩ ≥ 0},
x_{n+1} = P_{C_n ∩ Q_n} x,
(1.3)

where C is a closed and convex subset of H and P_Q denotes the metric projection from H onto a closed and convex subset Q of H. They proved that if the sequence {α_n}_{n∈ℕ} is bounded away from one, then the sequence {x_n}_{n∈ℕ} generated by (1.3) converges strongly to P_{F(T)} x as n → ∞.
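To make the mechanics of (1.3) concrete, here is a small numerical sketch in H = ℝ² (my own illustration under simplifying assumptions, not from the paper). With C = H, both C_n and Q_n are halfspaces, so the projection P_{C_n ∩ Q_n} can be computed exactly; we take Tx = −x, which is nonexpansive with F(T) = {0}, and α_n ≡ 0.3.

```python
import numpy as np

def proj_hs(z, a, b):
    """Euclidean projection of z onto the halfspace {w : <a, w> <= b}."""
    na2 = a @ a
    if na2 < 1e-24:                      # zero normal: the halfspace is the whole space
        return z
    viol = a @ z - b
    return z if viol <= 0 else z - (viol / na2) * a

def in_hs(z, a, b):
    return a @ a < 1e-24 or a @ z <= b + 1e-10

def proj_two_hs(z, h1, h2):
    """Exact Euclidean projection onto the intersection of two halfspaces."""
    a1, b1 = h1
    a2, b2 = h2
    if in_hs(z, a1, b1) and in_hs(z, a2, b2):
        return z
    p = proj_hs(z, a1, b1)
    if in_hs(p, a2, b2):
        return p
    p = proj_hs(z, a2, b2)
    if in_hs(p, a1, b1):
        return p
    # otherwise project onto the intersection of the two boundary hyperplanes
    A = np.vstack([a1, a2])
    lam = np.linalg.solve(A @ A.T, np.array([b1, b2]) - A @ z)
    return z + A.T @ lam

T = lambda v: -v                         # nonexpansive, F(T) = {0}
x = np.array([1.0, 2.0])                 # the fixed anchor point x of (1.3)
xn, alpha = x.copy(), 0.3
for _ in range(30):
    yn = alpha * xn + (1 - alpha) * T(xn)
    Cn = (2 * (xn - yn), xn @ xn - yn @ yn)   # ||z - yn|| <= ||z - xn|| rewritten as <a, z> <= b
    Qn = (x - xn, xn @ (x - xn))              # <xn - z, x - xn> >= 0 rewritten as <a, z> <= b
    xn = proj_two_hs(x, Cn, Qn)

print(np.linalg.norm(xn))                # converges to P_{F(T)} x = 0
```

Note that each step projects the original anchor x, not the current iterate, onto C_n ∩ Q_n; this is exactly what forces the strong convergence to P_{F(T)} x.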

Let E be a smooth, strictly convex and reflexive Banach space and let J be the normalized duality mapping of E. Let C be a nonempty, closed and convex subset of E. The generalized projection Π_C from E onto C [20] is defined by

Π_C(x) = argmin_{y∈C} ϕ(y, x),

where ϕ(x, y) = ∥x∥² − 2⟨x, Jy⟩ + ∥y∥². Let C be a nonempty, closed and convex subset of a smooth Banach space E and let T be a mapping from C into itself. A point p ∈ C is said to be an asymptotic fixed point [21] of T if there exists a sequence {x_n}_{n∈ℕ} in C which converges weakly to p and satisfies lim_{n→∞} ∥x_n − Tx_n∥ = 0. We denote the set of all asymptotic fixed points of T by F̂(T). A point p ∈ C is called a strong asymptotic fixed point of T if there exists a sequence {x_n}_{n∈ℕ} in C which converges strongly to p and satisfies lim_{n→∞} ∥x_n − Tx_n∥ = 0. We denote the set of all strong asymptotic fixed points of T by F̃(T).

Following Matsushita and Takahashi [22], a mapping T : C → C is said to be relatively nonexpansive if the following conditions are satisfied:

  (1) F(T) is nonempty;

  (2) ϕ(u, Tx) ≤ ϕ(u, x) for all u ∈ F(T) and x ∈ C;

  (3) F̂(T) = F(T).

In 2005, Matsushita and Takahashi [22] proved the following strong convergence theorem for relatively nonexpansive mappings in a Banach space.

Theorem 1.2 Let E be a uniformly smooth and uniformly convex Banach space, let C be a nonempty, closed and convex subset of E, let T be a relatively nonexpansive mapping from C into itself, and let {α_n}_{n∈ℕ} be a sequence of real numbers such that 0 ≤ α_n < 1 and lim sup_{n→∞} α_n < 1. Suppose that {x_n}_{n∈ℕ} is given by

x_0 = x ∈ C,
y_n = J^{−1}(α_n J x_n + (1 − α_n) J T x_n),
H_n = {z ∈ C : ϕ(z, y_n) ≤ ϕ(z, x_n)},
W_n = {z ∈ C : ⟨x_n − z, Jx − Jx_n⟩ ≥ 0},
x_{n+1} = Π_{H_n ∩ W_n} x.
(1.4)

If F(T) is nonempty, then {x_n}_{n∈ℕ} converges strongly to Π_{F(T)} x.

1.1 Some facts about gradient

For any convex function g : E → (−∞, +∞], we denote the domain of g by dom g = {x ∈ E : g(x) < +∞}. For any x ∈ int dom g and any y ∈ E, we denote by g°(x, y) the right-hand derivative of g at x in the direction y, that is,

g°(x, y) = lim_{t↓0} (g(x + ty) − g(x))/t.
(1.5)

The function g is said to be Gâteaux differentiable at x if lim_{t↓0} (g(x + ty) − g(x))/t exists for any y. In this case, g°(x, y) coincides with ⟨y, ∇g(x)⟩, where ∇g(x) is the value of the gradient ∇g of g at x. The function g is said to be Gâteaux differentiable if it is Gâteaux differentiable everywhere. The function g is said to be Fréchet differentiable at x if this limit is attained uniformly in ∥y∥ = 1. Equivalently, g is Fréchet differentiable at x ∈ E (see, for example, [[23], p.13] or [[24], p.508]) if for all ε > 0 there exists δ > 0 such that ∥y − x∥ ≤ δ implies

|g(y) − g(x) − ⟨y − x, ∇g(x)⟩| ≤ ε∥y − x∥.

The function g is said to be Fréchet differentiable if it is Fréchet differentiable everywhere. It is well known that if a continuous convex function g : E → ℝ is Gâteaux differentiable, then ∇g is norm-to-weak* continuous (see, for example, [[23], Proposition 1.1.10]). Also, it is known that if g is Fréchet differentiable, then ∇g is norm-to-norm continuous (see [[24], p.508]). The mapping ∇g is said to be weakly sequentially continuous if x_n ⇀ x as n → ∞ implies ∇g(x_n) ⇀* ∇g(x) as n → ∞ (for more details, see [[23], Theorem 3.2.4] or [[24], p.508]). The function g is said to be strongly coercive if

lim_{∥x_n∥→∞} g(x_n)/∥x_n∥ = +∞.

It is also said to be bounded on bounded subsets of E if g(U) is bounded for each bounded subset U of E. Finally, g is said to be uniformly Fréchet differentiable on a subset X of E if the limit (1.5) is attained uniformly for all x ∈ X and ∥y∥ = 1.

Let A : E → 2^{E*} be a set-valued mapping. We define the domain and range of A by dom A = {x ∈ E : Ax ≠ ∅} and ran A = ∪_{x∈E} Ax, respectively. The graph of A is denoted by G(A) = {(x, x*) ∈ E × E* : x* ∈ Ax}. The mapping A ⊂ E × E* is said to be monotone [25] if ⟨x − y, x* − y*⟩ ≥ 0 whenever (x, x*), (y, y*) ∈ A. It is said to be maximal monotone [26] if its graph is not properly contained in the graph of any other monotone operator on E. If A ⊂ E × E* is maximal monotone, then it can be shown that the set A^{−1}(0) = {z ∈ E : 0 ∈ Az} is closed and convex. A mapping A : dom A ⊂ E → E* is called γ-inverse strongly monotone if there exists a positive real number γ such that ⟨x − y, Ax − Ay⟩ ≥ γ∥Ax − Ay∥² for all x, y ∈ dom A.

1.2 Some facts about Legendre functions

Let E be a reflexive Banach space. For any proper, lower semicontinuous and convex function g : E → (−∞, +∞], the conjugate function g* of g is defined by

g*(x*) = sup_{x∈E} {⟨x, x*⟩ − g(x)}

for all x* ∈ E*. It is well known that g(x) + g*(x*) ≥ ⟨x, x*⟩ for all (x, x*) ∈ E × E*. It is also known that x* ∈ ∂g(x) is equivalent to

g(x) + g*(x*) = ⟨x, x*⟩.
(1.6)

Here, ∂g is the subdifferential of g [27, 28]. We also know that if g : E → (−∞, +∞] is a proper, lower semicontinuous and convex function, then g* : E* → (−∞, +∞] is a proper, weak* lower semicontinuous and convex function; see [15] for more details on convex analysis.

Let g : E → (−∞, +∞] be a mapping. The function g is said to be:

  (i) essentially smooth, if ∂g is both locally bounded and single-valued on its domain;

  (ii) essentially strictly convex, if (∂g)^{−1} is locally bounded on its domain and g is strictly convex on every convex subset of dom ∂g;

  (iii) Legendre, if it is both essentially smooth and essentially strictly convex (for more details, we refer to [[29], Definition 5.2]).

If E is a reflexive Banach space and g : E → (−∞, +∞] is a Legendre function, then, in view of [[30], p.83],

∇g* = (∇g)^{−1},  ran ∇g = dom ∇g* = int dom g*  and  ran ∇g* = dom ∇g = int dom g.

Examples of Legendre functions are given in [29, 31]. One important and interesting Legendre function is (1/s)∥·∥^s (1 < s < ∞), where the Banach space E is smooth and strictly convex and, in particular, a Hilbert space.

1.3 Some facts about Bregman distance

Let E be a Banach space and let E* be the dual space of E. Let g : E → ℝ be a convex and Gâteaux differentiable function. Then the Bregman distance [32, 33] corresponding to g is the function D_g : E × E → ℝ defined by

D_g(x, y) = g(x) − g(y) − ⟨x − y, ∇g(y)⟩,  x, y ∈ E.
(1.7)

It is clear that D_g(x, y) ≥ 0 for all x, y ∈ E. In the case when E is a smooth Banach space, setting g(x) = ∥x∥² for all x ∈ E, we obtain ∇g(x) = 2Jx for all x ∈ E and hence D_g(x, y) = ϕ(x, y) for all x, y ∈ E.
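For intuition, definition (1.7) can be evaluated directly in finite dimensions. The sketch below (an illustration under my own choices of g, not part of the paper) checks that g(x) = ∥x∥² on a Hilbert space gives D_g(x, y) = ∥x − y∥² = ϕ(x, y), while the negative entropy gives the generalized Kullback-Leibler divergence:

```python
import numpy as np

def bregman(g, grad_g, x, y):
    """D_g(x, y) = g(x) - g(y) - <x - y, grad g(y)>, definition (1.7)."""
    return g(x) - g(y) - (x - y) @ grad_g(y)

x = np.array([1.0, 2.0])
y = np.array([3.0, 1.0])

# g(x) = ||x||^2 on a Hilbert space: grad g(x) = 2x and D_g(x, y) = ||x - y||^2
sq = lambda v: v @ v
d = bregman(sq, lambda v: 2 * v, x, y)
print(d, sq(x - y))                       # the two values agree

# g(x) = sum x_i log x_i (negative entropy, x > 0): D_g is the KL divergence
ent = lambda v: np.sum(v * np.log(v))
grad_ent = lambda v: np.log(v) + 1
kl = np.sum(x * np.log(x / y) - x + y)    # generalized Kullback-Leibler divergence
print(bregman(ent, grad_ent, x, y), kl)   # the two values agree
```

Both checks also illustrate the asymmetry of D_g: in general D_g(x, y) ≠ D_g(y, x), so D_g is a "distance" only in a loose sense.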

Let E be a Banach space and let C be a nonempty and convex subset of E. Let g : E → ℝ be a convex and Gâteaux differentiable function. Then we know from [34] that, for x ∈ E and x_0 ∈ C, D_g(x_0, x) = min_{y∈C} D_g(y, x) if and only if

⟨y − x_0, ∇g(x) − ∇g(x_0)⟩ ≤ 0,  ∀y ∈ C.
(1.8)

Furthermore, if C is a nonempty, closed and convex subset of a reflexive Banach space E and g : E → ℝ is a strongly coercive Bregman function, then for each x ∈ E there exists a unique x_0 ∈ C such that

D_g(x_0, x) = min_{y∈C} D_g(y, x).

The Bregman projection proj_C^g from E onto C is defined by proj_C^g(x) = x_0 for all x ∈ E. It is also well known that proj_C^g has the following property:

D_g(y, proj_C^g x) + D_g(proj_C^g x, x) ≤ D_g(y, x)
(1.9)

for all y ∈ C and x ∈ E (see [23] for more details).
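In the Hilbert space case g = ½∥·∥², the Bregman projection is the metric projection and (1.9) reduces to the classical obtuse-angle inequality ∥y − P_C x∥² + ∥P_C x − x∥² ≤ ∥y − x∥². A quick randomized check with C the closed unit ball (my own illustration, not from the paper):

```python
import numpy as np

def proj_ball(v):
    """Euclidean projection onto the closed unit ball (the set C here)."""
    n = np.linalg.norm(v)
    return v if n <= 1.0 else v / n

rng = np.random.default_rng(0)
for _ in range(1000):
    x = 3.0 * rng.standard_normal(3)        # arbitrary point of E
    y = proj_ball(rng.standard_normal(3))   # arbitrary point of C
    p = proj_ball(x)                        # the (Bregman = metric) projection of x
    lhs = np.sum((y - p) ** 2) + np.sum((p - x) ** 2)
    rhs = np.sum((y - x) ** 2)
    assert lhs <= rhs + 1e-9                # (1.9) with D_g(u, v) = (1/2)||u - v||^2
```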

1.4 Some facts about uniformly convex and totally convex functions

Let E be a Banach space and let B_r := {z ∈ E : ∥z∥ ≤ r} for all r > 0. Then a function g : E → ℝ is said to be uniformly convex on bounded subsets of E [[35], pp.203, 221] if ρ_r(t) > 0 for all r, t > 0, where ρ_r : [0, +∞) → [0, +∞] is defined by

ρ_r(t) = inf_{x,y∈B_r, ∥x−y∥=t, α∈(0,1)} [α g(x) + (1 − α) g(y) − g(αx + (1 − α)y)] / [α(1 − α)]

for all t ≥ 0. The function ρ_r is called the gauge of uniform convexity of g. The function g is also said to be uniformly smooth on bounded subsets of E [[35], pp.207, 221] if lim_{t↓0} σ_r(t)/t = 0 for all r > 0, where σ_r : [0, +∞) → [0, +∞] is defined by

σ_r(t) = sup_{x∈B_r, y∈S_E, α∈(0,1)} [α g(x + (1 − α)ty) + (1 − α) g(x − αty) − g(x)] / [α(1 − α)]

for all t ≥ 0. The function g is said to be uniformly convex if the function δ_g : [0, +∞) → [0, +∞], defined by

δ_g(t) := inf{ (1/2) g(x) + (1/2) g(y) − g((x + y)/2) : ∥y − x∥ = t },

satisfies δ_g(t) > 0 for all t > 0.

Remark 1.1 Let E be a Banach space, let r > 0 be a constant and let g : E → ℝ be a convex function which is uniformly convex on bounded subsets of E. Then

g(αx + (1 − α)y) ≤ α g(x) + (1 − α) g(y) − α(1 − α) ρ_r(∥x − y∥)

for all x, y ∈ B_r and α ∈ (0, 1), where ρ_r is the gauge of uniform convexity of g.
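For g = ½∥·∥² on a Hilbert space, expanding the squares gives αg(x) + (1 − α)g(y) − g(αx + (1 − α)y) = ½α(1 − α)∥x − y∥², so the gauge is ρ_r(t) = t²/2 for every r, and the inequality in Remark 1.1 holds with equality. The following check (my own illustration, not from the paper) verifies this algebra numerically:

```python
import numpy as np

# g = (1/2)||.||^2: the gauge of uniform convexity is rho_r(t) = t^2 / 2,
# and Remark 1.1 becomes an identity for this particular g.
g = lambda v: 0.5 * (v @ v)

rng = np.random.default_rng(3)
x, y = rng.standard_normal((2, 4))
for a in (0.2, 0.5, 0.9):
    lhs = g(a * x + (1 - a) * y)
    rhs = a * g(x) + (1 - a) * g(y) - a * (1 - a) * 0.5 * ((x - y) @ (x - y))
    print(lhs, rhs)                        # equal for every alpha in (0, 1)
```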

Let g : E → (−∞, +∞] be a convex and Gâteaux differentiable function. Recall that, in view of [[23], Section 1.2, p.17] (see also [36]), the function g is called totally convex at a point x ∈ int dom g if its modulus of total convexity at x, that is, the function ν_g : int dom g × [0, +∞) → [0, +∞) defined by

ν_g(x, t) := inf{D_g(y, x) : y ∈ int dom g, ∥y − x∥ = t},

is positive whenever t > 0. The function g is called totally convex when it is totally convex at every point x ∈ int dom g. Moreover, the function g is called totally convex on bounded subsets of E if ν_g(X, t) > 0 for any bounded subset X of E and any t > 0, where the modulus of total convexity of the function g on the set X is defined by

ν_g(X, t) := inf{ν_g(x, t) : x ∈ X ∩ int dom g}.

It is well known that any uniformly convex function is totally convex, but the converse is not true in general (see [[23], Section 1.3, p.30]).

It is also well known that g is totally convex on bounded subsets if and only if g is uniformly convex on bounded subsets (see [[37], Theorem 2.10, p.9]).

Examples of totally convex functions can be found, for instance, in [23, 37].

1.5 Some facts about resolvent

Let E be a reflexive Banach space with the dual space E* and let g : E → (−∞, +∞] be a proper, lower semicontinuous and convex function. Let A be a maximal monotone operator from E to E*. For any r > 0, let the mapping Res_{rA}^g : E → dom A be defined by

Res_{rA}^g = (∇g + rA)^{−1} ∘ ∇g.

The mapping Res_{rA}^g is called the g-resolvent of A (see [38]). It is well known that A^{−1}(0) = F(Res_{rA}^g) for each r > 0 (for more details, see, for example, [14]).
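In a Hilbert space with g = ½∥·∥² we have ∇g = I, so the g-resolvent reduces to the classical resolvent (I + rA)^{−1}. The sketch below (an illustration under these simplifying assumptions, not from the paper) takes the maximal monotone operator Ax = Mx with M positive semidefinite and checks that iterating the resolvent drives the iterates to the unique zero of A, consistent with A^{−1}(0) = F(Res_{rA}^g):

```python
import numpy as np

# g = (1/2)||.||^2 on R^2, so grad g = I and
# Res = (grad g + rA)^{-1} grad g = (I + rM)^{-1}.
M = np.array([[2.0, 0.0],
              [0.0, 1.0]])               # A x = M x is maximal monotone (M is PSD)
r = 0.5
res = lambda v: np.linalg.solve(np.eye(2) + r * M, v)

x = np.array([3.0, -1.0])
for _ in range(200):
    x = res(x)                           # proximal point iteration x_{n+1} = Res(x_n)

print(np.linalg.norm(x))                 # converges to the unique zero A^{-1}(0) = {0}
```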

Examples and some important properties of such operators are discussed in [39].

1.6 Some facts about Bregman quasi-nonexpansive mappings

Let C be a nonempty, closed and convex subset of a reflexive Banach space E. Let g : E → (−∞, +∞] be a proper, lower semicontinuous and convex function. Recall that a mapping T : C → C is said to be Bregman quasi-nonexpansive [40] if F(T) ≠ ∅ and

D_g(p, Tx) ≤ D_g(p, x),  ∀x ∈ C, ∀p ∈ F(T).

A mapping T : C → C is said to be Bregman relatively nonexpansive [40] if the following conditions are satisfied:

  (1) F(T) is nonempty;

  (2) D_g(p, Tv) ≤ D_g(p, v) for all p ∈ F(T) and v ∈ C;

  (3) F̂(T) = F(T).

Now, we are in a position to introduce the following new class of Bregman quasi-nonexpansive type mappings. A mapping T : C → C is said to be Bregman weak relatively nonexpansive if the following conditions are satisfied:

  (1) F(T) is nonempty;

  (2) D_g(p, Tv) ≤ D_g(p, v) for all p ∈ F(T) and v ∈ C;

  (3) F̃(T) = F(T).

It is clear that any Bregman relatively nonexpansive mapping is a Bregman quasi-nonexpansive mapping. It is also obvious that every Bregman relatively nonexpansive mapping is a Bregman weak relatively nonexpansive mapping, but the converse is not true in general. Indeed, for any mapping T : C → C, we have F(T) ⊂ F̃(T) ⊂ F̂(T). If T is Bregman relatively nonexpansive, then F(T) = F̃(T) = F̂(T). Below we show that there exists a Bregman weak relatively nonexpansive mapping which is not a Bregman relatively nonexpansive mapping.

Example 1.1 Let E = l², where

l² = {σ = (σ_1, σ_2, …, σ_n, …) : Σ_{n=1}^∞ σ_n² < ∞},
∥σ∥ = (Σ_{n=1}^∞ σ_n²)^{1/2},  σ ∈ l²,
⟨σ, η⟩ = Σ_{n=1}^∞ σ_n η_n,  σ = (σ_1, σ_2, …, σ_n, …), η = (η_1, η_2, …, η_n, …) ∈ l².

Let {x_n}_{n∈ℕ∪{0}} ⊂ E be the sequence defined by

x_0 = (1, 0, 0, 0, …),
x_1 = (1, 1, 0, 0, 0, …),
x_2 = (1, 0, 1, 0, 0, 0, …),
x_3 = (1, 0, 0, 1, 0, 0, 0, …),
…,
x_n = (σ_{n,1}, σ_{n,2}, …, σ_{n,k}, …),
…,

where

σ_{n,k} = 1 if k = 1 or k = n + 1,  and  σ_{n,k} = 0 otherwise,

for all n ∈ ℕ. It is clear that the sequence {x_n}_{n∈ℕ} converges weakly to x_0. Indeed, for any Λ = (λ_1, λ_2, …, λ_n, …) ∈ l² = (l²)*, we have

Λ(x_n − x_0) = ⟨x_n − x_0, Λ⟩ = Σ_{k=2}^∞ λ_k σ_{n,k} = λ_{n+1} → 0

as n → ∞. It is also obvious that ∥x_n − x_m∥ = √2 for any n ≠ m with n, m sufficiently large. Thus {x_n}_{n∈ℕ} is not a Cauchy sequence. Let k be an even number in ℕ and let g : E → ℝ be defined by

g(x) = (1/k)∥x∥^k,  x ∈ E.

It is easy to show that ∇g(x) = J_k(x) for all x ∈ E, where

J_k(x) = {x* ∈ E* : ⟨x, x*⟩ = ∥x∥ ∥x*∥, ∥x*∥ = ∥x∥^{k−1}}.

It is also obvious that

J_k(λx) = λ^{k−1} J_k(x),  ∀x ∈ E, ∀λ ∈ ℝ.

Now, we define a mapping T : E → E by

T(x) = { (n/(n+1)) x_n if x = x_n for some n ∈ ℕ;  −x if x ≠ x_n for all n ∈ ℕ.

It is clear that F(T) = {0} and, for any n ∈ ℕ,

D_g(0, Tx_n) = g(0) − g(Tx_n) − ⟨0 − Tx_n, ∇g(Tx_n)⟩
= −(n^k/(n+1)^k) g(x_n) + (n^k/(n+1)^k) ⟨x_n, ∇g(x_n)⟩
= (n^k/(n+1)^k) [−g(x_n) + ⟨x_n, ∇g(x_n)⟩]
= (n^k/(n+1)^k) D_g(0, x_n)
≤ D_g(0, x_n).

If x ≠ x_n for all n ∈ ℕ, then Tx = −x and, since g(−x) = g(x) and J_k(−x) = −J_k(x), we have

D_g(0, Tx) = g(0) − g(Tx) − ⟨0 − Tx, ∇g(Tx)⟩
= −g(x) + ⟨x, ∇g(x)⟩
= D_g(0, x).

Therefore, T is a Bregman quasi-nonexpansive mapping. Next, we claim that T is a Bregman weak relatively nonexpansive mapping. Indeed, for any sequence {z_n}_{n∈ℕ} ⊂ E such that z_n → z_0 and ∥z_n − Tz_n∥ → 0 as n → ∞, since {x_n}_{n∈ℕ} is not a Cauchy sequence, there exists a sufficiently large number N ∈ ℕ such that z_n ≠ x_m for any n, m > N. Indeed, if there existed m ∈ ℕ such that z_n = x_m for infinitely many n ∈ ℕ, then a subsequence {z_{n_i}}_{i∈ℕ} would satisfy z_{n_i} = x_m, so that z_0 = lim_{i→∞} z_{n_i} = x_m and z_0 = lim_{i→∞} Tz_{n_i} = Tx_m = (m/(m+1)) x_m, which is impossible. This implies that Tz_n = −z_n for all n > N. It then follows from ∥z_n − Tz_n∥ = 2∥z_n∥ → 0 that z_n → 0 and hence z_0 = 0 ∈ F(T). We conclude that T is a Bregman weak relatively nonexpansive mapping.

Finally, we show that T is not a Bregman relatively nonexpansive mapping. In fact, although x_n ⇀ x_0 and

∥x_n − Tx_n∥ = ∥x_n − (n/(n+1)) x_n∥ = (1/(n+1))∥x_n∥ → 0

as n → ∞, we have x_0 ∉ F(T). Thus F̂(T) ≠ F(T).
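The scaling identity behind Example 1.1, D_g(0, (n/(n+1))x) = (n/(n+1))^k D_g(0, x) for g = (1/k)∥·∥^k, can be checked numerically. The sketch below is my own illustration in ℝ^5 with the Euclidean norm (where ∇g(x) = ∥x∥^{k−2} x), not part of the paper:

```python
import numpy as np

k = 4                                          # an even integer, as in Example 1.1
g = lambda v: np.linalg.norm(v) ** k / k
grad_g = lambda v: np.linalg.norm(v) ** (k - 2) * v   # gradient of (1/k)||.||^k

def D(x, y):
    """Bregman distance (1.7) for this choice of g."""
    return g(x) - g(y) - (x - y) @ grad_g(y)

rng = np.random.default_rng(1)
xn = rng.standard_normal(5)
zero = np.zeros(5)
for n in range(1, 6):
    Txn = n / (n + 1) * xn                     # the action of T on x_n in Example 1.1
    ratio = D(zero, Txn) / D(zero, xn)
    print(n, ratio, (n / (n + 1)) ** k)        # D_g(0, Tx_n) = (n/(n+1))^k D_g(0, x_n)
```

Since (n/(n+1))^k < 1, the quasi-nonexpansivity inequality D_g(0, Tx_n) ≤ D_g(0, x_n) is visible directly in the printed ratios.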

Let us give an example of a Bregman quasi-nonexpansive mapping which is neither a Bregman relatively nonexpansive mapping nor a Bregman weak relatively nonexpansive mapping (see also [41]).

Example 1.2 Let E be a smooth Banach space, let k be an even number in ℕ and let g : E → ℝ be defined by

g(x) = (1/k)∥x∥^k,  x ∈ E.

Let x_0 ≠ 0 be any element of E. We define a mapping T : E → E by

T(x) = { (1/2 + 1/2^{n+1}) x_0 if x = (1/2 + 1/2^n) x_0 for some n ∈ ℕ ∪ {0};  −x otherwise.

We now show that T is neither a Bregman weak relatively nonexpansive mapping nor a Bregman relatively nonexpansive mapping. To this end, we set

x_n = (1/2 + 1/2^n) x_0,  n ∈ ℕ.

Although x_n → (1/2) x_0 strongly (with x_n ≠ (1/2) x_0 for all n) as n → ∞ and

∥x_n − Tx_n∥ = ∥(1/2 + 1/2^n) x_0 − (1/2 + 1/2^{n+1}) x_0∥ = (1/2^{n+1})∥x_0∥ → 0

as n → ∞, we have (1/2) x_0 ∉ F(T). Therefore, F̂(T) ≠ F(T) and F̃(T) ≠ F(T).

In [42], Bauschke and Combettes introduced an iterative method to construct the Bregman projection of a point onto a countable intersection of closed and convex sets in reflexive Banach spaces. They proved a strong convergence theorem of the sequence produced by their method; for more detail, see [[42], Theorem 4.7].

In [40], Reich and Sabach introduced a proximal method for finding common zeros of finitely many maximal monotone operators in a reflexive Banach space. More precisely, they proved the following strong convergence theorem.

Theorem 1.3 Let E be a reflexive Banach space and let A_i : E → 2^{E*}, i = 1, 2, …, N, be N maximal monotone operators such that Z := ∩_{i=1}^N A_i^{−1}(0) ≠ ∅. Let g : E → ℝ be a Legendre function that is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E. Let {x_n}_{n∈ℕ} be a sequence defined by the following iterative algorithm:

x_0 ∈ E chosen arbitrarily,
y_n^i = Res_{λ_n^i A_i}^g (x_n + e_n^i),
C_n^i = {z ∈ E : D_g(z, y_n^i) ≤ D_g(z, x_n + e_n^i)},
C_n := ∩_{i=1}^N C_n^i,
Q_n = {z ∈ E : ⟨∇g(x_0) − ∇g(x_n), z − x_n⟩ ≤ 0},
x_{n+1} = proj_{C_n ∩ Q_n}^g x_0,  n ∈ ℕ ∪ {0}.
(1.10)

If, for each i = 1, 2, …, N, lim inf_{n→∞} λ_n^i > 0 and the sequences of errors {e_n^i}_{n∈ℕ} ⊂ E satisfy lim inf_{n→∞} ∥e_n^i∥ = 0, then each such sequence {x_n}_{n∈ℕ} converges strongly to proj_Z^g(x_0) as n → ∞.

Let C be a nonempty, closed and convex subset of a reflexive Banach space E. Let g : E → (−∞, +∞] be a proper, lower semicontinuous and convex function. Recall that a mapping T : C → C is said to be Bregman firmly nonexpansive (for short, BFNE) if

D_g(Tx, Ty) + D_g(Ty, Tx) + D_g(Tx, x) + D_g(Ty, y) ≤ D_g(Tx, y) + D_g(Ty, x)

for all x, y ∈ C. The mapping T is called quasi-Bregman firmly nonexpansive (for short, QBFNE) [43] if F(T) ≠ ∅ and

D_g(p, Tx) + D_g(Tx, x) ≤ D_g(p, x)

for all x ∈ C and p ∈ F(T). It is clear that any quasi-Bregman firmly nonexpansive mapping is Bregman quasi-nonexpansive. For more information on Bregman firmly nonexpansive mappings, we refer the reader to [38, 44]. In [44], Reich and Sabach proved that F̂(T) = F(T) for any BFNE operator T.

In [43], Reich and Sabach introduced a Mann-type process to approximate fixed points of quasi-Bregman firmly nonexpansive mappings defined on a nonempty, closed and convex subset C of a reflexive Banach space E. More precisely, they proved the following theorem.

Theorem 1.4 Let E be a reflexive Banach space and let T_i : E → E, i = 1, 2, …, N, be N QBFNE operators which satisfy F(T_i) = F̂(T_i) for each 1 ≤ i ≤ N and F := ∩_{i=1}^N F(T_i) ≠ ∅. Let g : E → ℝ be a Legendre function that is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E. Let {x_n}_{n∈ℕ} be a sequence defined by the following iterative algorithm:

x_0 ∈ E chosen arbitrarily,
Q_0^i = E, i = 1, 2, …, N,
y_n^i = T_i(x_n + e_n^i),
Q_{n+1}^i = {z ∈ Q_n^i : ⟨∇g(x_n + e_n^i) − ∇g(y_n^i), z − y_n^i⟩ ≤ 0},
Q_{n+1} := ∩_{i=1}^N Q_{n+1}^i,
x_{n+1} = proj_{Q_{n+1}}^g x_0,  n ∈ ℕ ∪ {0}.
(1.11)

If, for each i = 1, 2, …, N, the sequences of errors {e_n^i}_{n∈ℕ} ⊂ E satisfy lim inf_{n→∞} ∥e_n^i∥ = 0, then each such sequence {x_n}_{n∈ℕ} converges strongly to proj_F^g(x_0) as n → ∞.

Let E be a reflexive Banach space and let g : E → ℝ be a convex and Gâteaux differentiable function. Let C be a nonempty, closed and convex subset of E. Recall that a mapping T : C → C is said to be (quasi-)Bregman strongly nonexpansive (for short, BSNE) with respect to a nonempty F̂(T) if F(T) ≠ ∅,

D_g(p, Tx) ≤ D_g(p, x)

for all x ∈ C and p ∈ F̂(T), and, whenever {x_n}_{n∈ℕ} ⊂ C is bounded and p ∈ F̂(T), we have

lim_{n→∞} (D_g(p, x_n) − D_g(p, Tx_n)) = 0  ⟹  lim_{n→∞} D_g(Tx_n, x_n) = 0.

The class of (quasi-)Bregman strongly nonexpansive mappings was first introduced in [21, 45] (for more details, see also [46]). We know that the notion of a strongly nonexpansive operator (with respect to the norm) was first introduced and studied in [47, 48].

In [46], Reich and Sabach introduced iterative algorithms for finding common fixed points of finitely many Bregman strongly nonexpansive operators in a reflexive Banach space. They established the following strong convergence theorem in a reflexive Banach space.

Theorem 1.5 Let E be a reflexive Banach space and let T_i : E → E, i = 1, 2, …, N, be N BSNE operators which satisfy F(T_i) = F̂(T_i) for each 1 ≤ i ≤ N and F := ∩_{i=1}^N F(T_i) ≠ ∅. Let g : E → ℝ be a Legendre function that is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E. Let {x_n}_{n∈ℕ} be a sequence defined by the following iterative algorithm:

x_0 ∈ E chosen arbitrarily,
y_n^i = T_i(x_n + e_n^i),
C_n^i = {z ∈ E : D_g(z, y_n^i) ≤ D_g(z, x_n + e_n^i)},
C_n := ∩_{i=1}^N C_n^i,
Q_n = {z ∈ E : ⟨∇g(x_0) − ∇g(x_n), z − x_n⟩ ≤ 0},
x_{n+1} = proj_{C_n ∩ Q_n}^g x_0,  n ∈ ℕ ∪ {0}.
(1.12)

If, for each i = 1, 2, …, N, the sequences of errors {e_n^i}_{n∈ℕ} ⊂ E satisfy lim inf_{n→∞} ∥e_n^i∥ = 0, then each such sequence {x_n}_{n∈ℕ} converges strongly to proj_F^g(x_0) as n → ∞.

It is worth mentioning, however, that in all of the above results for Bregman nonexpansive-type mappings, the assumption F̂(T) = F(T) is imposed on the mapping T.

Remark 1.2 Although the iteration processes (1.10) and (1.12) introduced by the authors mentioned above work, they are cumbersome and complicated in the sense that, at each stage of the iteration, two different sets C_n and Q_n must be computed, and the next iterate is taken as the Bregman projection of x_0 onto the intersection of C_n and Q_n. This seems difficult to do in applications. It is worth noting that the iteration process (1.11) involves the computation of only one set Q_n at each stage. In [49], Sabach proposed an excellent modification of algorithm (1.10) for finding common zeros of finitely many maximal monotone operators in reflexive Banach spaces.

Our concern now is the following:

Is it possible to obtain strong convergence of modified Mann-type schemes (1.10)-(1.12) to a fixed point of a Bregman quasi-nonexpansive type mapping T without imposing the assumption F ˆ (T)=F(T) on T?

In this paper, using Bregman functions, we introduce new hybrid iterative algorithms for finding common fixed points of an infinite family of Bregman weak relatively nonexpansive mappings in Banach spaces. We prove strong convergence theorems for the sequences produced by the methods. Furthermore, we apply our method to prove strong convergence theorems of iterative algorithms for finding common fixed points of finitely many Bregman weak relatively nonexpansive mappings in reflexive Banach spaces. These algorithms take into account possible computational errors. We also apply our main results to solve equilibrium problems in reflexive Banach spaces. Finally, we study hybrid iterative schemes for finding common solutions of an equilibrium problem, fixed points of an infinite family of Bregman weak relatively nonexpansive mappings and null spaces of a γ-inverse strongly monotone mapping in 2-uniformly convex Banach spaces. An application of our results to the solution of equations of Hammerstein type is also presented. No assumption F̂(T) = F(T) is imposed on the mapping T. Consequently, the above concern is answered in the affirmative in the reflexive Banach space setting. Our results improve and generalize many known results in the current literature; see, for example, [4, 7, 8, 11, 22, 40, 42–44, 46, 50–52].

2 Preliminaries

In this section, we begin by recalling some preliminaries and lemmas which will be used in the sequel.

The following definition is slightly different from that in Butnariu and Iusem [23].

Definition 2.1 [24]

Let E be a Banach space. The function g : E → ℝ is said to be a Bregman function if the following conditions are satisfied:

  (1) g is continuous, strictly convex and Gâteaux differentiable;

  (2) the set {y ∈ E : D_g(x, y) ≤ r} is bounded for all x ∈ E and r > 0.

The following lemma follows from Butnariu and Iusem [23] and Zălinescu [35].

Lemma 2.1 Let E be a reflexive Banach space and let g : E → ℝ be a strongly coercive Bregman function. Then

  (1) ∇g : E → E* is one-to-one, onto and norm-to-weak* continuous;

  (2) ⟨x − y, ∇g(x) − ∇g(y)⟩ = 0 if and only if x = y;

  (3) {x ∈ E : D_g(x, y) ≤ r} is bounded for all y ∈ E and r > 0;

  (4) dom g* = E*, g* is Gâteaux differentiable and ∇g* = (∇g)^{−1}.

Now, we are ready to prove the following key lemma.

Lemma 2.2 Let E be a Banach space, let r > 0 be a constant and let g : E → ℝ be a convex function which is uniformly convex on bounded subsets of E. Then

g(Σ_{k=0}^n α_k x_k) ≤ Σ_{k=0}^n α_k g(x_k) − α_i α_j ρ_r(∥x_i − x_j∥)

for all i, j ∈ {0, 1, 2, …, n}, x_k ∈ B_r, α_k ∈ (0, 1), k = 0, 1, 2, …, n, with Σ_{k=0}^n α_k = 1, where ρ_r is the gauge of uniform convexity of g.

Proof Without loss of generality, we may assume that i = 0 and j = 1. We argue by induction on n. For n = 1, in view of Remark 1.1, we get the desired result. Now suppose that the conclusion holds for n = k, i.e.,

g(Σ_{m=0}^k α_m x_m) ≤ Σ_{m=0}^k α_m g(x_m) − α_0 α_1 ρ_r(∥x_0 − x_1∥).

We prove that the conclusion holds for n = k + 1. Put x = (Σ_{m=0}^k α_m x_m)/(1 − α_{k+1}) and observe that x ∈ B_r. Since g is convex, using the induction hypothesis, we conclude that

g(Σ_{m=0}^{k+1} α_m x_m) = g((1 − α_{k+1}) (Σ_{m=0}^k α_m x_m)/(1 − α_{k+1}) + α_{k+1} x_{k+1})
≤ (1 − α_{k+1}) g((Σ_{m=0}^k α_m x_m)/(1 − α_{k+1})) + α_{k+1} g(x_{k+1})
≤ Σ_{m=0}^k α_m g(x_m) − α_0 α_1 ρ_r(∥x_0 − x_1∥) + α_{k+1} g(x_{k+1})
= Σ_{m=0}^{k+1} α_m g(x_m) − α_0 α_1 ρ_r(∥x_0 − x_1∥).

This completes the proof. □

Lemma 2.3 Let E be a Banach space, let r > 0 be a constant and let g : E → ℝ be a continuous and convex function which is uniformly convex on bounded subsets of E. Then

g(Σ_{k=0}^∞ α_k x_k) ≤ Σ_{k=0}^∞ α_k g(x_k) − α_i α_j ρ_r(∥x_i − x_j∥)

for all i, j ∈ ℕ ∪ {0}, x_k ∈ B_r, α_k ∈ (0, 1), k ∈ ℕ ∪ {0}, with Σ_{k=0}^∞ α_k = 1, where ρ_r is the gauge of uniform convexity of g.

Proof Let i, j ∈ ℕ ∪ {0} and let k > max{i, j}. Put v_k = (α_0 x_0 + α_1 x_1 + ⋯ + α_k x_k)/(Σ_{m=0}^k α_m) and observe that v_k ∈ B_r for all k ∈ ℕ. In view of Lemma 2.2, we obtain

g(v_k) = g((α_0 x_0 + α_1 x_1 + ⋯ + α_k x_k)/(Σ_{m=0}^k α_m))
≤ (1/(Σ_{m=0}^k α_m)) Σ_{m=0}^k α_m g(x_m) − α_i α_j ρ_r(∥x_i − x_j∥).
(2.1)

Since g is continuous and v_k → Σ_{m=0}^∞ α_m x_m as k → ∞, we have

lim_{k→∞} g(v_k) = g(Σ_{m=0}^∞ α_m x_m).

Letting k → ∞ in (2.1), we conclude that

g(Σ_{m=0}^∞ α_m x_m) ≤ Σ_{m=0}^∞ α_m g(x_m) − α_i α_j ρ_r(∥x_i − x_j∥),

which completes the proof. □

We know the following two results; see [[35], Proposition 3.6.4].

Theorem 2.1 Let E be a reflexive Banach space and let g : E → ℝ be a convex function which is bounded on bounded subsets of E. Then the following assertions are equivalent:

  (1) g is strongly coercive and uniformly convex on bounded subsets of E;

  (2) dom g* = E*, g* is bounded on bounded subsets and uniformly smooth on bounded subsets of E*;

  (3) dom g* = E*, g* is Fréchet differentiable and ∇g* is uniformly norm-to-norm continuous on bounded subsets of E*.

Theorem 2.2 Let E be a reflexive Banach space and let g : E → ℝ be a continuous convex function which is strongly coercive. Then the following assertions are equivalent:

  (1) g is bounded on bounded subsets and uniformly smooth on bounded subsets of E;

  (2) g is Fréchet differentiable and ∇g is uniformly norm-to-norm continuous on bounded subsets of E;

  (3) dom g* = E*, g* is strongly coercive and uniformly convex on bounded subsets of E*.

Let E be a Banach space and let g : E → ℝ be a convex and Gâteaux differentiable function. Then the Bregman distance [32, 33] satisfies the three point identity, that is,

D_g(x, z) = D_g(x, y) + D_g(y, z) + ⟨x − y, ∇g(y) − ∇g(z)⟩,  x, y, z ∈ E.
(2.2)

In particular, it can easily be seen that

D_g(x, y) = D_g(y, x) + ⟨x − y, ∇g(x) − ∇g(y)⟩,  x, y ∈ E.
(2.3)

Indeed, letting z = x in (2.2) and taking into account that D_g(x, x) = 0, we get the desired result.
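Identity (2.2) is purely algebraic, so it can be sanity-checked numerically for any smooth convex g. The sketch below (my own illustration with g(v) = Σ v_i⁴, not from the paper) verifies it at random points:

```python
import numpy as np

g = lambda v: np.sum(v ** 4)            # a smooth convex function on R^4
grad_g = lambda v: 4 * v ** 3

def D(x, y):
    """Bregman distance (1.7) for this choice of g."""
    return g(x) - g(y) - (x - y) @ grad_g(y)

rng = np.random.default_rng(2)
x, y, z = rng.standard_normal((3, 4))
lhs = D(x, z)
rhs = D(x, y) + D(y, z) + (x - y) @ (grad_g(y) - grad_g(z))
print(abs(lhs - rhs))                   # three point identity (2.2): the difference is 0
```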

Lemma 2.4 Let E be a Banach space and let g : E → ℝ be a Gâteaux differentiable function which is uniformly convex on bounded subsets of E. Let {x_n}_{n∈ℕ} and {y_n}_{n∈ℕ} be bounded sequences in E. Then the following assertions are equivalent:

  (1) lim_{n→∞} D_g(x_n, y_n) = 0;

  (2) lim_{n→∞} ∥x_n − y_n∥ = 0.

Proof The implication (1) ⇒ (2) was proved in [23] (see also [24]). For the converse implication, assume that lim_{n→∞} ∥x_n − y_n∥ = 0. Adding definition (1.7) for the pairs (x_n, y_n) and (y_n, x_n) gives D_g(x_n, y_n) + D_g(y_n, x_n) = ⟨x_n − y_n, ∇g(x_n) − ∇g(y_n)⟩. Since both Bregman distances are nonnegative, we obtain

D_g(x_n, y_n) ≤ ⟨x_n − y_n, ∇g(x_n) − ∇g(y_n)⟩ ≤ ∥x_n − y_n∥ ∥∇g(x_n) − ∇g(y_n)∥,  n ∈ ℕ.
(2.4)

The function g is bounded on bounded subsets of E and therefore ∇g is also bounded on bounded subsets of E (see, for example, [[23], Proposition 1.1.11] for more details). This, together with (2.4), implies that lim_{n→∞} D_g(x_n, y_n) = 0, which completes the proof. □

The following result was first proved in [37] (see also [24]).

Lemma 2.5 Let E be a reflexive Banach space, let g:ER be a strongly coercive Bregman function and let V be the function defined by

$$V(x, x^*) = g(x) - \langle x, x^*\rangle + g^*(x^*), \quad \forall x \in E,\ x^* \in E^*.$$

Then the following assertions hold:

  1. (1)

    $D_g(x, \nabla g^*(x^*)) = V(x, x^*)$ for all $x \in E$ and $x^* \in E^*$.

  2. (2)

    $V(x, x^*) + \langle \nabla g^*(x^*) - x, y^*\rangle \le V(x, x^* + y^*)$ for all $x \in E$ and $x^*, y^* \in E^*$.

Corollary 2.1 [35]

Let E be a Banach space, let g : E → (−∞, +∞] be a proper, lower semicontinuous and convex function and let p, q ∈ ℝ with 1 < p ≤ 2 ≤ q and p⁻¹ + q⁻¹ = 1. Then the following statements are equivalent.

  1. (1)

    There exists $c_1 > 0$ such that g is ρ-convex with $\rho(t) := \frac{c_1}{q}\,t^q$ for all $t \ge 0$.

  2. (2)

    There exists $c_2 > 0$ such that $\|x^* - y^*\| \ge \frac{c_2}{q}\,\|x - y\|^{q-1}$ for all $(x, x^*), (y, y^*) \in G(\partial g)$.

3 Strong convergence theorems without computational errors

In this section, we prove strong convergence theorems without computational errors in a reflexive Banach space. We start with the following simple lemma whose proof will be omitted since it can be proved by a similar argument as that in [[44], Lemma 15.5].

Lemma 3.1 Let E be a reflexive Banach space and let g:ER be a convex, continuous, strongly coercive and Gâteaux differentiable function which is bounded on bounded subsets and uniformly convex on bounded subsets of E. Let C be a nonempty, closed and convex subset of E. Let T:CC be a Bregman weak relatively nonexpansive mapping. Then F(T) is closed and convex.

Using ideas in [22], we can prove the following result.

Theorem 3.1 Let E be a reflexive Banach space and let g : E → ℝ be a strongly coercive Bregman function which is bounded on bounded subsets and uniformly convex and uniformly smooth on bounded subsets of E. Let C be a nonempty, closed and convex subset of E and let {T_j}_{j∈ℕ} be an infinite family of Bregman weak relatively nonexpansive mappings from C into itself such that $F := \bigcap_{j=1}^{\infty} F(T_j) \neq \emptyset$. Suppose in addition that T_0 = I, where I is the identity mapping on E. Let {x_n}_{n∈ℕ} be a sequence generated by

$$\begin{cases} x_0 = x \in C \text{ chosen arbitrarily},\\ C_0 = C,\\ z_n = \nabla g^*\bigl[\alpha_{n,0}\,\nabla g(x_n) + \sum_{j=1}^{\infty} \alpha_{n,j}\,\nabla g(T_j x_n)\bigr],\\ y_n = \nabla g^*\bigl[\beta_n\,\nabla g(x_n) + (1-\beta_n)\,\nabla g(z_n)\bigr],\\ C_{n+1} = \{z \in C_n : D_g(z, y_n) \le D_g(z, x_n)\},\\ x_{n+1} = \operatorname{proj}^g_{C_{n+1}} x, \quad n \in \mathbb{N} \cup \{0\}, \end{cases}$$
(3.1)

where ∇g is the right-hand derivative of g. Let {α_{n,j} : j, n ∈ ℕ∪{0}} and {β_n}_{n∈ℕ∪{0}} be sequences in [0,1) satisfying the following control conditions:

  1. (1)

    $\sum_{j=0}^{\infty} \alpha_{n,j} = 1$, $\forall n \in \mathbb{N}\cup\{0\}$;

  2. (2)

    There exists $i \in \mathbb{N}$ such that $\liminf_{n\to\infty} \alpha_{n,i}\,\alpha_{n,j} > 0$, $\forall j \in \mathbb{N}\cup\{0\}$;

  3. (3)

    $0 \le \beta_n < 1$ for all $n \in \mathbb{N}\cup\{0\}$ and $\limsup_{n\to\infty} \beta_n < 1$.

Then the sequence { x n } n N defined in (3.1) converges strongly to proj F g x as n.

Proof We divide the proof into several steps.

Step 1. We show that C n is closed and convex for each nN{0}.

It is clear that C 0 =C is closed and convex. Let C m be closed and convex for some mN. For z C m , we see that

$$D_g(z, y_m) \le D_g(z, x_m)$$

is equivalent to

$$\langle z, \nabla g(x_m) - \nabla g(y_m)\rangle \le g(y_m) - g(x_m) + \langle x_m, \nabla g(x_m)\rangle - \langle y_m, \nabla g(y_m)\rangle.$$

The right-hand side does not depend on z, so this is an affine inequality in z and defines a closed half-space; hence C_{m+1} is closed and convex. By induction, C_n is closed and convex for each n ∈ ℕ∪{0}.
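The affinity claim behind this step can be checked numerically: in z ↦ D_g(z, y_m) − D_g(z, x_m) the g(z) terms cancel, leaving an affine function, so each cut is a half-space. A small sketch, again with the illustrative (assumed) choice g(x) = Σ exp(x_i):

```python
import numpy as np

# Check that z -> D_g(z, y) - D_g(z, x) is affine in z, so that
# {z : D_g(z, y) <= D_g(z, x)} is a closed half-space.
# g(x) = sum(exp(x_i)) is only an illustrative convex function.

g = lambda u: float(np.exp(u).sum())
grad_g = lambda u: np.exp(u)
D = lambda u, v: g(u) - g(v) - float(grad_g(v) @ (u - v))

rng = np.random.default_rng(1)
x, y, z1, z2 = rng.standard_normal((4, 3))
f = lambda z: D(z, y) - D(z, x)

# affine functions satisfy the midpoint identity exactly
assert abs(f((z1 + z2) / 2) - (f(z1) + f(z2)) / 2) < 1e-10
```

The same cancellation is what makes each C_{n+1} in (3.1) and (4.1) an intersection of half-spaces.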

Step 2. We claim that F C n for all nN{0}.

It is obvious that F C 0 =C. Assume now that F C m for some mN. Employing Lemma 2.5, for any wF C m , we obtain

$$\begin{aligned}
D_g(w, z_m) &= D_g\Bigl(w, \nabla g^*\Bigl[\alpha_{m,0}\nabla g(x_m) + \sum_{j=1}^{\infty}\alpha_{m,j}\nabla g(T_j x_m)\Bigr]\Bigr)\\
&= V\Bigl(w, \alpha_{m,0}\nabla g(x_m) + \sum_{j=1}^{\infty}\alpha_{m,j}\nabla g(T_j x_m)\Bigr)\\
&= g(w) - \Bigl\langle w, \alpha_{m,0}\nabla g(x_m) + \sum_{j=1}^{\infty}\alpha_{m,j}\nabla g(T_j x_m)\Bigr\rangle + g^*\Bigl(\alpha_{m,0}\nabla g(x_m) + \sum_{j=1}^{\infty}\alpha_{m,j}\nabla g(T_j x_m)\Bigr)\\
&\le \alpha_{m,0} g(w) + \sum_{j=1}^{\infty}\alpha_{m,j} g(w) - \Bigl\langle w, \alpha_{m,0}\nabla g(x_m) + \sum_{j=1}^{\infty}\alpha_{m,j}\nabla g(T_j x_m)\Bigr\rangle + \alpha_{m,0}\, g^*(\nabla g(x_m)) + \sum_{j=1}^{\infty}\alpha_{m,j}\, g^*(\nabla g(T_j x_m))\\
&= \alpha_{m,0} V(w, \nabla g(x_m)) + \sum_{j=1}^{\infty}\alpha_{m,j} V(w, \nabla g(T_j x_m))\\
&= \alpha_{m,0} D_g(w, x_m) + \sum_{j=1}^{\infty}\alpha_{m,j} D_g(w, T_j x_m)\\
&\le \alpha_{m,0} D_g(w, x_m) + \sum_{j=1}^{\infty}\alpha_{m,j} D_g(w, x_m) = D_g(w, x_m).
\end{aligned}$$

This implies that

$$\begin{aligned}
D_g(w, y_m) &= D_g\bigl(w, \nabla g^*\bigl[\beta_m \nabla g(x_m) + (1-\beta_m)\nabla g(z_m)\bigr]\bigr)\\
&= V\bigl(w, \beta_m \nabla g(x_m) + (1-\beta_m)\nabla g(z_m)\bigr)\\
&\le \beta_m V\bigl(w, \nabla g(x_m)\bigr) + (1-\beta_m) V\bigl(w, \nabla g(z_m)\bigr)\\
&= \beta_m D_g(w, x_m) + (1-\beta_m) D_g(w, z_m)\\
&\le \beta_m D_g(w, x_m) + (1-\beta_m) D_g(w, x_m) = D_g(w, x_m).
\end{aligned}$$
(3.2)

This proves that w C m + 1 . Thus, we have F C n for all nN{0}.

Step 3. We prove that { x n } n N , { y n } n N , { z n } n N and { T j x n :j,nN{0}} are bounded sequences in C.

In view of (1.9), we conclude that

$$D_g(x_n, x) = D_g\bigl(\operatorname{proj}^g_{C_n} x, x\bigr) \le D_g(w, x) - D_g(w, x_n) \le D_g(w, x), \quad \forall w \in F \subset C_n,\ n \in \mathbb{N}\cup\{0\}.$$

This implies that the sequence $\{D_g(x_n, x)\}_{n\in\mathbb{N}}$ is bounded and hence there exists $M > 0$ such that

$$D_g(x_n, x) \le M, \quad \forall n \in \mathbb{N}.$$

In view of Lemma 2.1(3), we conclude that the sequence { x n } n N is bounded. Since { T j } j N is an infinite family of Bregman weak relatively nonexpansive mappings from C into itself, we have for any qF that

$$D_g(q, T_j x_n) \le D_g(q, x_n), \quad \forall j, n \in \mathbb{N}.$$

This, together with Definition 2.1 and the boundedness of { x n } n N , implies that the sequence { T j x n :j,nN{0}} is bounded.

Step 4. We show that x n u for some uF, where u= proj F g x.

By Step 3, we have that { x n } n N is bounded. By the construction of C n , we conclude that C m C n and x m = proj C m g x C m C n for any positive integer mn. This, together with (1.9), implies that

$$D_g(x_m, x_n) = D_g\bigl(x_m, \operatorname{proj}^g_{C_n} x\bigr) \le D_g(x_m, x) - D_g\bigl(\operatorname{proj}^g_{C_n} x, x\bigr) = D_g(x_m, x) - D_g(x_n, x).$$
(3.3)

In view of (1.9), we conclude that

$$D_g(x_n, x) = D_g\bigl(\operatorname{proj}^g_{C_n} x, x\bigr) \le D_g(w, x) - D_g(w, x_n) \le D_g(w, x), \quad \forall w \in F \subset C_n,\ n \in \mathbb{N}\cup\{0\}.$$
(3.4)

It follows from (3.4) that the sequence { D g ( x n , x ) } n N is bounded and hence there exists M>0 such that

$$D_g(x_n, x) \le M, \quad \forall n \in \mathbb{N}.$$
(3.5)

In view of (3.3), we conclude that

$$D_g(x_n, x) \le D_g(x_n, x) + D_g(x_m, x_n) \le D_g(x_m, x), \quad \forall m \ge n.$$

This proves that $\{D_g(x_n, x)\}_{n\in\mathbb{N}}$ is an increasing sequence in ℝ and hence, by (3.5), the limit $\lim_{n\to\infty} D_g(x_n, x)$ exists. Letting $m, n \to \infty$ in (3.3), we deduce that $D_g(x_m, x_n) \to 0$. In view of Lemma 2.4, we get that $\|x_m - x_n\| \to 0$ as $m, n \to \infty$. This means that $\{x_n\}_{n\in\mathbb{N}}$ is a Cauchy sequence. Since E is a Banach space and C is closed and convex, we conclude that there exists $u \in C$ such that

$$\lim_{n\to\infty}\|x_n - u\| = 0.$$
(3.6)

Now, we show that uF. In view of (3.3), we obtain

lim n D g ( x n + 1 , x n )=0.
(3.7)

Since x n + 1 C n + 1 , we conclude that

$$D_g(x_{n+1}, y_n) \le D_g(x_{n+1}, x_n).$$

This, together with (3.7), implies that

lim n D g ( x n + 1 , y n )=0.
(3.8)

Employing Lemma 2.4 and (3.7)-(3.8), we deduce that

$$\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0 \quad\text{and}\quad \lim_{n\to\infty}\|x_{n+1} - y_n\| = 0.$$

In view of (3.6), we get

$$\lim_{n\to\infty}\|y_n - u\| = 0.$$
(3.9)

From (3.6) and (3.9), it follows that

$$\lim_{n\to\infty}\|x_n - y_n\| = 0.$$

Since ∇g is uniformly norm-to-norm continuous on any bounded subset of E, we obtain

$$\lim_{n\to\infty}\|\nabla g(x_n) - \nabla g(y_n)\| = 0.$$
(3.10)

In view of (3.1), we have

$$\nabla g(y_n) - \nabla g(x_n) = (1-\beta_n)\bigl(\nabla g(z_n) - \nabla g(x_n)\bigr).$$
(3.11)

It follows from (3.10)-(3.11) that

$$\lim_{n\to\infty}\|\nabla g(z_n) - \nabla g(x_n)\| = 0.$$
(3.12)

Since ∇g∗ is uniformly norm-to-norm continuous on any bounded subset of E∗, we obtain

$$\lim_{n\to\infty}\|z_n - x_n\| = 0.$$

Applying Lemma 2.4, we derive that

lim n D g ( z n , x n )=0.

It follows from the three point identity (see (2.2)) that

$$\begin{aligned}
\bigl|D_g(w, x_n) - D_g(w, z_n)\bigr| &= \bigl|D_g(w, z_n) + D_g(z_n, x_n) + \langle w - z_n, \nabla g(z_n) - \nabla g(x_n)\rangle - D_g(w, z_n)\bigr|\\
&= \bigl|D_g(z_n, x_n) + \langle w - z_n, \nabla g(z_n) - \nabla g(x_n)\rangle\bigr|\\
&\le D_g(z_n, x_n) + \|w - z_n\|\,\|\nabla g(z_n) - \nabla g(x_n)\| \to 0
\end{aligned}$$
(3.13)

as n → ∞.

The function g is bounded on bounded subsets of E and thus ∇g is also bounded on bounded subsets of E (see, for example, [[23], Proposition 1.1.11] for more details). This implies that the sequences $\{\nabla g(x_n)\}_{n\in\mathbb{N}}$, $\{\nabla g(y_n)\}_{n\in\mathbb{N}}$, $\{\nabla g(z_n)\}_{n\in\mathbb{N}}$ and $\{\nabla g(T_j x_n) : n, j \in \mathbb{N}\cup\{0\}\}$ are bounded in E∗.

In view of Theorem 2.2(3), we know that dom g∗ = E∗ and g∗ is strongly coercive and uniformly convex on bounded subsets of E∗. Let $s = \sup\{\|\nabla g(T_j x_n)\| : j \in \mathbb{N}\cup\{0\},\ n \in \mathbb{N}\cup\{0\}\}$ and let $\rho^*_s$ be the gauge of uniform convexity of the conjugate function g∗. Now, we fix i ∈ ℕ satisfying condition (2). We prove that, for any w ∈ F and j ∈ ℕ∪{0},

$$D_g(w, z_n) \le D_g(w, x_n) - \alpha_{n,i}\,\alpha_{n,j}\,\rho^*_s\bigl(\|\nabla g(T_i x_n) - \nabla g(T_j x_n)\|\bigr).$$
(3.14)

Let us show (3.14). For any given wF(T) and jN, in view of the definition of the Bregman distance (see (1.7)), (1.6), Lemmas 2.3 and 2.5, we obtain

$$\begin{aligned}
D_g(w, z_n) &= D_g\Bigl(w, \nabla g^*\Bigl[\alpha_{n,0}\nabla g(x_n) + \sum_{j=1}^{\infty}\alpha_{n,j}\nabla g(T_j x_n)\Bigr]\Bigr)\\
&= V\Bigl(w, \alpha_{n,0}\nabla g(x_n) + \sum_{j=1}^{\infty}\alpha_{n,j}\nabla g(T_j x_n)\Bigr)\\
&= g(w) - \Bigl\langle w, \alpha_{n,0}\nabla g(x_n) + \sum_{j=1}^{\infty}\alpha_{n,j}\nabla g(T_j x_n)\Bigr\rangle + g^*\Bigl(\alpha_{n,0}\nabla g(x_n) + \sum_{j=1}^{\infty}\alpha_{n,j}\nabla g(T_j x_n)\Bigr)\\
&\le \alpha_{n,0} g(w) + \sum_{j=1}^{\infty}\alpha_{n,j} g(w) - \alpha_{n,0}\langle w, \nabla g(x_n)\rangle - \sum_{j=1}^{\infty}\alpha_{n,j}\langle w, \nabla g(T_j x_n)\rangle\\
&\quad + \alpha_{n,0}\, g^*(\nabla g(x_n)) + \sum_{j=1}^{\infty}\alpha_{n,j}\, g^*(\nabla g(T_j x_n)) - \alpha_{n,i}\alpha_{n,j}\,\rho^*_s\bigl(\|\nabla g(T_i x_n) - \nabla g(T_j x_n)\|\bigr)\\
&= \alpha_{n,0} V(w, \nabla g(x_n)) + \sum_{j=1}^{\infty}\alpha_{n,j} V(w, \nabla g(T_j x_n)) - \alpha_{n,i}\alpha_{n,j}\,\rho^*_s\bigl(\|\nabla g(T_i x_n) - \nabla g(T_j x_n)\|\bigr)\\
&= \alpha_{n,0} D_g(w, x_n) + \sum_{j=1}^{\infty}\alpha_{n,j} D_g(w, T_j x_n) - \alpha_{n,i}\alpha_{n,j}\,\rho^*_s\bigl(\|\nabla g(T_i x_n) - \nabla g(T_j x_n)\|\bigr)\\
&\le \alpha_{n,0} D_g(w, x_n) + \sum_{j=1}^{\infty}\alpha_{n,j} D_g(w, x_n) - \alpha_{n,i}\alpha_{n,j}\,\rho^*_s\bigl(\|\nabla g(T_i x_n) - \nabla g(T_j x_n)\|\bigr)\\
&= D_g(w, x_n) - \alpha_{n,i}\alpha_{n,j}\,\rho^*_s\bigl(\|\nabla g(T_i x_n) - \nabla g(T_j x_n)\|\bigr).
\end{aligned}$$

In view of (3.13), we obtain

$$D_g(w, x_n) - D_g(w, z_n) \to 0 \quad \text{as } n \to \infty.$$
(3.15)

In view of (3.14) and (3.15), we conclude that

$$\alpha_{n,i}\,\alpha_{n,j}\,\rho^*_s\bigl(\|\nabla g(T_i x_n) - \nabla g(T_j x_n)\|\bigr) \le D_g(w, x_n) - D_g(w, z_n) \to 0$$

as n. From the assumption lim inf n α n , i α n , j >0, jN{0}, we have

$$\lim_{n\to\infty} \rho^*_s\bigl(\|\nabla g(T_i x_n) - \nabla g(T_j x_n)\|\bigr) = 0, \quad \forall j \in \mathbb{N}\cup\{0\}.$$

Therefore, from the property of ρ s , we deduce that

$$\lim_{n\to\infty} \|\nabla g(T_i x_n) - \nabla g(T_j x_n)\| = 0, \quad \forall j \in \mathbb{N}\cup\{0\}.$$

Since ∇g∗ is uniformly norm-to-norm continuous on bounded subsets of E∗, we arrive at

$$\lim_{n\to\infty} \|T_i x_n - T_j x_n\| = 0, \quad \forall j \in \mathbb{N}\cup\{0\}.$$
(3.16)

In particular, for j=0, we have

$$\lim_{n\to\infty} \|T_i x_n - x_n\| = 0.$$

This, together with (3.16), implies that

$$\lim_{n\to\infty} \|T_j x_n - x_n\| = 0, \quad \forall j \in \mathbb{N}\cup\{0\}.$$
(3.17)

Since { T j } j N is an infinite family of Bregman weak relatively nonexpansive mappings, from (3.6) and (3.17), we conclude that T j u=u, jN{0}. Thus, we have uF.

Finally, we show that u= proj F g x. From x n = proj C n g x, we conclude that

$$\langle z - x_n, \nabla g(x_n) - \nabla g(x)\rangle \ge 0, \quad \forall z \in C_n.$$

Since F C n for each nN, we obtain

$$\langle z - x_n, \nabla g(x_n) - \nabla g(x)\rangle \ge 0, \quad \forall z \in F.$$
(3.18)

Letting n in (3.18), we deduce that

$$\langle z - u, \nabla g(u) - \nabla g(x)\rangle \ge 0, \quad \forall z \in F.$$

In view of (1.8), we have u= proj F g x, which completes the proof. □
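In the Hilbert-space special case g(x) = ‖x‖²/2, the gradients ∇g and ∇g∗ are the identity, D_g(z, y) = ‖z − y‖²/2, and proj^g is the metric projection, so scheme (3.1) can be run numerically. The following one-dimensional toy sketch (the mappings T_j x = x/(j+1) with common fixed point 0, the truncation level J and all parameter values are illustrative assumptions, not data from the paper) iterates (3.1), where each C_{n+1} is an interval:

```python
import numpy as np

# 1-D sketch of scheme (3.1) with g(x) = x**2/2: D_g(z, y) = (z - y)**2/2
# and proj^g is the metric projection (a clip onto an interval).

def hybrid(x, J=5, steps=200, beta=0.5):
    lo, hi = -np.inf, np.inf                  # C_n is an interval in 1-D
    xn = x
    alpha = [1.0 / (J + 1)] * (J + 1)         # alpha_{n,0..J}, summing to 1
    for _ in range(steps):
        # z_n and y_n as in (3.1); T_j x = x / (j + 1) is an assumed example
        zn = alpha[0] * xn + sum(alpha[j] * xn / (j + 1) for j in range(1, J + 1))
        yn = beta * xn + (1 - beta) * zn
        # C_{n+1} = {z in C_n : |z - yn| <= |z - xn|} is a half-line
        mid = (xn + yn) / 2.0
        if yn <= xn:
            hi = min(hi, mid)
        else:
            lo = max(lo, mid)
        xn = float(np.clip(x, lo, hi))        # x_{n+1} = proj_{C_{n+1}} x
    return xn
```

The iterates converge to the common fixed point 0, which is also proj_F x here since F = {0}. In a general reflexive Banach space the cuts are the Bregman half-spaces of (3.1) rather than metric ones.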

Remark 3.1 Theorem 3.1 improves Theorem 1.2 in the following aspects.

  1. (1)

    For the structure of Banach spaces, we extend the duality mapping to a more general case, that is, a convex, continuous and strongly coercive Bregman function which is bounded on bounded subsets and uniformly convex and uniformly smooth on bounded subsets.

  2. (2)

    For the mappings, we extend the mapping from a relatively nonexpansive mapping to a countable family of Bregman weak relatively nonexpansive mappings. We remove the assumption F ˆ (T)=F(T) on the mapping T and extend the result to a countable family of Bregman weak relatively nonexpansive mappings, where F ˆ (T) is the set of asymptotic fixed points of the mapping T.

  3. (3)

    For the algorithm, we remove the set W n in Theorem 1.2.

Lemma 3.2 Let E be a reflexive Banach space and let g : E → ℝ be a strongly coercive Bregman function which is bounded on bounded subsets and uniformly convex and uniformly smooth on bounded subsets of E. Let A be a maximal monotone operator from E to E∗ such that $A^{-1}(0) \neq \emptyset$. Let r > 0 and let $\operatorname{Res}^g_{rA} = (\nabla g + rA)^{-1} \circ \nabla g$ be the g-resolvent of A. Then $\operatorname{Res}^g_{rA}$ is a Bregman weak relatively nonexpansive mapping.

Proof Let $\{z_n\}_{n\in\mathbb{N}} \subset E$ be a sequence such that $z_n \to z$ and $\lim_{n\to\infty} \|z_n - \operatorname{Res}^g_{rA} z_n\| = 0$. Since ∇g is uniformly norm-to-norm continuous on bounded subsets of E, we obtain

$$\frac{1}{r}\bigl(\nabla g(z_n) - \nabla g(\operatorname{Res}^g_{rA} z_n)\bigr) \to 0.$$

It follows from

$$\frac{1}{r}\bigl(\nabla g(z_n) - \nabla g(\operatorname{Res}^g_{rA} z_n)\bigr) \in A \operatorname{Res}^g_{rA} z_n$$

and the monotonicity of A that

$$\Bigl\langle w - \operatorname{Res}^g_{rA} z_n,\ y - \frac{1}{r}\bigl(\nabla g(z_n) - \nabla g(\operatorname{Res}^g_{rA} z_n)\bigr)\Bigr\rangle \ge 0$$

for all $w \in \operatorname{dom} A$ and $y \in Aw$. Letting $n \to \infty$ in the above inequality, we have $\langle w - z, y\rangle \ge 0$ for all $w \in \operatorname{dom} A$ and $y \in Aw$. Therefore, from the maximality of A, we conclude that $z \in A^{-1}(0) = F(\operatorname{Res}^g_{rA})$, that is, $z = \operatorname{Res}^g_{rA} z$. Hence $\operatorname{Res}^g_{rA}$ is Bregman weak relatively nonexpansive, which completes the proof. □
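For g(x) = ‖x‖²/2 the g-resolvent $(\nabla g + rA)^{-1} \circ \nabla g$ is the classical resolvent $(I + rA)^{-1}$. The sketch below uses the illustrative (assumed) maximal monotone operator A = ∂|·| on ℝ, whose resolvent is the soft-thresholding map; iterating the resolvent drives the point into $A^{-1}(0) = \{0\}$:

```python
import numpy as np

# With g(x) = x**2/2, grad g is the identity, so Res^g_{rA} = (I + rA)^{-1}.
# Assumed example: A = subdifferential of |.| on R, whose resolvent is
# soft-thresholding; its fixed point set is A^{-1}(0) = {0}.

def res(z, r):
    # (I + r * d|.|)^{-1} z  -- soft-thresholding with threshold r
    return float(np.sign(z) * np.maximum(abs(z) - r, 0.0))

z = 5.0
for _ in range(20):
    z = res(z, r=0.5)       # Picard iteration of the resolvent

print(z)                    # -> 0.0 after at most 10 steps
```

This matches the identity F(Res^g_{rA}) = A⁻¹(0) used in Theorem 3.2 below.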

As an application of our main result, we include a concrete example in support of Theorem 3.1. Using Theorem 3.1, we obtain the following strong convergence theorem for maximal monotone operators.

Theorem 3.2 Let E be a reflexive Banach space and let g : E → ℝ be a strongly coercive Bregman function which is bounded on bounded subsets and uniformly convex and uniformly smooth on bounded subsets of E. Let A be a maximal monotone operator from E to E∗ such that $A^{-1}(0) \neq \emptyset$. Let $r_n > 0$ be such that $\liminf_{n\to\infty} r_n > 0$ and let $\operatorname{Res}^g_{r_n A} = (\nabla g + r_n A)^{-1} \circ \nabla g$ be the g-resolvent of A. Let $\{x_n\}_{n\in\mathbb{N}}$ be a sequence generated by

$$\begin{cases} x_0 = x \in C \text{ chosen arbitrarily},\\ C_0 = C,\\ z_n = \nabla g^*\bigl[\alpha_{n,0}\,\nabla g(x_n) + \sum_{j=1}^{\infty} \alpha_{n,j}\,\nabla g(\operatorname{Res}^g_{r_j A} x_n)\bigr],\\ y_n = \nabla g^*\bigl[\beta_n\,\nabla g(x_n) + (1-\beta_n)\,\nabla g(z_n)\bigr],\\ C_{n+1} = \{z \in C_n : D_g(z, y_n) \le D_g(z, x_n)\},\\ x_{n+1} = \operatorname{proj}^g_{C_{n+1}} x, \quad n \in \mathbb{N}\cup\{0\}, \end{cases}$$
(3.19)

where ∇g is the right-hand derivative of g. Let {α_{n,j} : j, n ∈ ℕ∪{0}} and {β_n}_{n∈ℕ∪{0}} be sequences in [0,1) satisfying the following control conditions:

  1. (1)

    $\sum_{j=0}^{\infty} \alpha_{n,j} = 1$, $\forall n \in \mathbb{N}\cup\{0\}$;

  2. (2)

    There exists $i \in \mathbb{N}$ such that $\liminf_{n\to\infty} \alpha_{n,i}\,\alpha_{n,j} > 0$, $\forall j \in \mathbb{N}\cup\{0\}$;

  3. (3)

    $0 \le \beta_n < 1$ for all $n \in \mathbb{N}\cup\{0\}$ and $\limsup_{n\to\infty} \beta_n < 1$.

Then the sequence { x n } n N defined in (3.19) converges strongly to proj A 1 ( 0 ) g x as n.

Proof Letting $T_j = \operatorname{Res}^g_{r_j A}$, $j \in \mathbb{N}\cup\{0\}$, in Theorem 3.1, from (3.1) we obtain (3.19). We need only to show that $T_j$ satisfies all the conditions in Theorem 3.1 for all $j \in \mathbb{N}\cup\{0\}$. In view of Lemma 3.2, we conclude that $T_j$ is a Bregman weak relatively nonexpansive mapping for each $j \in \mathbb{N}\cup\{0\}$. Thus, we obtain

$$D_g\bigl(p, \operatorname{Res}^g_{r_j A} v\bigr) \le D_g(p, v), \quad \forall v \in E,\ p \in F\bigl(\operatorname{Res}^g_{r_j A}\bigr),\ j \in \mathbb{N}\cup\{0\}$$

and

$$\widetilde{F}\bigl(\operatorname{Res}^g_{r_j A}\bigr) = F\bigl(\operatorname{Res}^g_{r_j A}\bigr) = A^{-1}(0), \quad \forall j \in \mathbb{N}\cup\{0\},$$

where F ˜ ( Res r j A g ) is the set of all strong asymptotic fixed points of Res r j A g . Therefore, in view of Theorem 3.1, we have the conclusions of Theorem 3.2. This completes the proof. □

4 Strong convergence theorems with computational errors

In this section, we study strong convergence of iterative algorithms to find common fixed points of finitely many Bregman weak relatively nonexpansive mappings in a reflexive Banach space. Our algorithms take into account possible computational errors. We prove the following strong convergence theorem concerning Bregman weak relatively nonexpansive mappings.

Theorem 4.1 Let E be a reflexive Banach space and let g:ER be a strongly coercive Bregman function which is bounded on bounded subsets and uniformly convex and uniformly smooth on bounded subsets of E. Let NN and { T j } j = 1 N be a finite family of Bregman weak relatively nonexpansive mappings from E into int domg such that F:= j = 1 N F( T j ) is a nonempty subset of E. Suppose in addition that T 0 =I, where I is the identity mapping on E. Let { x n } n N be a sequence generated by

$$\begin{cases} x_0 = x \in E \text{ chosen arbitrarily},\\ C_0 = E,\\ y_n = \nabla g^*\bigl[\alpha_{n,0}\,\nabla g(x_n) + \sum_{j=1}^{N} \alpha_{n,j}\,\nabla g\bigl(T_j(x_n + e_n^j)\bigr)\bigr],\\ C_{n+1} = \bigl\{z \in C_n : D_g(z, y_n) \le D_g(z, x_n) + \sum_{j=1}^{N} \alpha_{n,j}\, D_g\bigl(x_n, x_n + e_n^j\bigr) + \sum_{j=1}^{N} \alpha_{n,j}\,\bigl\langle z - x_n, \nabla g(x_n) - \nabla g\bigl(x_n + e_n^j\bigr)\bigr\rangle\bigr\},\\ x_{n+1} = \operatorname{proj}^g_{C_{n+1}} x, \quad n \in \mathbb{N}\cup\{0\}, \end{cases}$$
(4.1)

where ∇g is the right-hand derivative of g. Let {α_{n,j} : n ∈ ℕ∪{0}, j ∈ {0,1,2,…,N}} be a sequence in (0,1) satisfying the following control conditions:

  1. (1)

    $\sum_{j=0}^{N} \alpha_{n,j} = 1$, $\forall n \in \mathbb{N}\cup\{0\}$;

  2. (2)

    There exists $i \in \{1,2,\ldots,N\}$ such that $\liminf_{n\to\infty} \alpha_{n,i}\,\alpha_{n,j} > 0$, $\forall j \in \{0,1,2,\ldots,N\}$.

If, for each j = 0, 1, 2, …, N, the sequence of errors $\{e_n^j\}_{n\in\mathbb{N}} \subset E$ satisfies $\lim_{n\to\infty} \|e_n^j\| = 0$, then the sequence $\{x_n\}_{n\in\mathbb{N}}$ defined in (4.1) converges strongly to $\operatorname{proj}^g_F x$ as $n \to \infty$.
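Scheme (4.1) can also be sketched numerically in the one-dimensional Hilbert case g(x) = x²/2, where D_g(z, y) = (z − y)²/2 and ∇g is the identity. The mappings, weights and error sequence below are illustrative assumptions; each C_{n+1} is cut by an inequality that is affine in z, so C_n stays an interval:

```python
import numpy as np

D = lambda u, v: 0.5 * (u - v) ** 2           # D_g for g(x) = x**2/2

def run(x, steps=1000):
    lo, hi = -np.inf, np.inf                  # C_n is an interval in 1-D
    T = [lambda t: t, lambda t: t / 2, lambda t: t / 3]   # T_0 = I, N = 2 (assumed)
    a = [1 / 3, 1 / 3, 1 / 3]                 # alpha_{n,0..2}, summing to 1
    xn = x
    for n in range(steps):
        e = (-1) ** n / (n + 2) ** 2          # errors e_n^j -> 0 (same for all j)
        yn = a[0] * xn + sum(a[j] * T[j](xn + e) for j in (1, 2))
        # cut: D(z,yn) <= D(z,xn) + sum_j a_j*D(xn,xn+e) + sum_j a_j*(z-xn)*(-e)
        f = lambda z: (D(z, yn) - D(z, xn)
                       - sum(a[j] * D(xn, xn + e) for j in (1, 2))
                       - sum(a[j] * (z - xn) * (-e) for j in (1, 2)))
        b, s = f(0.0), f(1.0) - f(0.0)        # f is affine: f(z) = s*z + b
        if s > 0:
            hi = min(hi, -b / s)
        elif s < 0:
            lo = max(lo, -b / s)
        xn = float(np.clip(x, lo, hi))        # x_{n+1} = proj_{C_{n+1}} x
    return xn
```

Despite the persistent (but vanishing) perturbations, the iterates approach the common fixed point 0, in line with the error-tolerant convergence the theorem asserts.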

Proof We divide the proof into several steps.

Step 1. We show that C n is closed and convex for each nN{0}.

It is clear that C 0 =E is closed and convex. Let C m be closed and convex for some mN. For z C m , we see that

$$D_g(z, y_m) \le D_g(z, x_m) + \sum_{j=1}^{N} \alpha_{m,j}\, D_g\bigl(x_m, x_m + e_m^j\bigr) + \sum_{j=1}^{N} \alpha_{m,j}\,\bigl\langle z - x_m, \nabla g(x_m) - \nabla g\bigl(x_m + e_m^j\bigr)\bigr\rangle$$

is equivalent to

$$\langle z, \nabla g(x_m) - \nabla g(y_m)\rangle + \sum_{j=1}^{N} \alpha_{m,j}\,\bigl\langle x_m - z, \nabla g(x_m) - \nabla g\bigl(x_m + e_m^j\bigr)\bigr\rangle \le g(y_m) - g(x_m) + \langle x_m, \nabla g(x_m)\rangle - \langle y_m, \nabla g(y_m)\rangle + \sum_{j=1}^{N} \alpha_{m,j}\, D_g\bigl(x_m, x_m + e_m^j\bigr).$$

An easy argument shows that C m + 1 is closed and convex. Hence C n is closed and convex for all nN{0}.

Step 2. We claim that F C n for all nN{0}.

It is obvious that F C 0 =E. Assume now that F C m for some mN. Employing Lemma 2.5, for any wF C m , we obtain

$$\begin{aligned}
D_g(w, y_m) &= D_g\Bigl(w, \nabla g^*\Bigl[\alpha_{m,0}\nabla g(x_m) + \sum_{j=1}^{N}\alpha_{m,j}\nabla g\bigl(T_j(x_m + e_m^j)\bigr)\Bigr]\Bigr)\\
&= V\Bigl(w, \alpha_{m,0}\nabla g(x_m) + \sum_{j=1}^{N}\alpha_{m,j}\nabla g\bigl(T_j(x_m + e_m^j)\bigr)\Bigr)\\
&= g(w) - \Bigl\langle w, \alpha_{m,0}\nabla g(x_m) + \sum_{j=1}^{N}\alpha_{m,j}\nabla g\bigl(T_j(x_m + e_m^j)\bigr)\Bigr\rangle + g^*\Bigl(\alpha_{m,0}\nabla g(x_m) + \sum_{j=1}^{N}\alpha_{m,j}\nabla g\bigl(T_j(x_m + e_m^j)\bigr)\Bigr)\\
&\le \alpha_{m,0} g(w) + \sum_{j=1}^{N}\alpha_{m,j} g(w) - \Bigl\langle w, \alpha_{m,0}\nabla g(x_m) + \sum_{j=1}^{N}\alpha_{m,j}\nabla g\bigl(T_j(x_m + e_m^j)\bigr)\Bigr\rangle\\
&\quad + \alpha_{m,0}\, g^*(\nabla g(x_m)) + \sum_{j=1}^{N}\alpha_{m,j}\, g^*\bigl(\nabla g(T_j(x_m + e_m^j))\bigr)\\
&= \alpha_{m,0} V(w, \nabla g(x_m)) + \sum_{j=1}^{N}\alpha_{m,j} V\bigl(w, \nabla g(T_j(x_m + e_m^j))\bigr)\\
&= \alpha_{m,0} D_g(w, x_m) + \sum_{j=1}^{N}\alpha_{m,j} D_g\bigl(w, T_j(x_m + e_m^j)\bigr)\\
&\le \alpha_{m,0} D_g(w, x_m) + \sum_{j=1}^{N}\alpha_{m,j} D_g\bigl(w, x_m + e_m^j\bigr)\\
&= \alpha_{m,0} D_g(w, x_m) + \sum_{j=1}^{N}\alpha_{m,j}\Bigl[D_g(w, x_m) + D_g\bigl(x_m, x_m + e_m^j\bigr) + \bigl\langle w - x_m, \nabla g(x_m) - \nabla g\bigl(x_m + e_m^j\bigr)\bigr\rangle\Bigr]\\
&= D_g(w, x_m) + \sum_{j=1}^{N}\alpha_{m,j}\, D_g\bigl(x_m, x_m + e_m^j\bigr) + \sum_{j=1}^{N}\alpha_{m,j}\,\bigl\langle w - x_m, \nabla g(x_m) - \nabla g\bigl(x_m + e_m^j\bigr)\bigr\rangle.
\end{aligned}$$
(4.2)

This proves that w C m + 1 . Consequently, we see that F C n for any nN{0}.

Step 3. We prove that { x n } n N , { y n } n N and { T j ( x n + e n j ):nN,j{0,1,2,,N}} are bounded sequences in E.

In view of (1.9), we conclude that

$$D_g(x_n, x) = D_g\bigl(\operatorname{proj}^g_{C_n} x, x\bigr) \le D_g(w, x) - D_g(w, x_n) \le D_g(w, x), \quad \forall w \in F \subset C_n,\ n \in \mathbb{N}\cup\{0\}.$$
(4.3)

It follows from (4.3) that the sequence { D g ( x n , x ) } n N is bounded and hence there exists M 0 >0 such that

$$D_g(x_n, x) \le M_0, \quad \forall n \in \mathbb{N}\cup\{0\}.$$
(4.4)

In view of Lemma 2.1(3), we conclude that the sequence { x n } n N and hence { x n + e n j :nN{0},j{0,1,2,,N}} is bounded. Since { T j } j = 1 N is a finite family of Bregman weak relatively nonexpansive mappings from E into int domg, for any qF, we have

$$D_g\bigl(q, T_j(x_n + e_n^j)\bigr) \le D_g\bigl(q, x_n + e_n^j\bigr), \quad \forall n \in \mathbb{N} \text{ and } j \in \{0,1,2,\ldots,N\}.$$
(4.5)

This, together with Definition 2.1 and the boundedness of { x n } n N , implies that { T j ( x n + e n j ):nN{0},j{0,1,2,,N}} is bounded.

Step 4. We show that x n u for some uF, where u= proj F g x.

By Step 3, we deduce that { x n } n N is bounded. By the construction of C n , we conclude that C m C n and x m = proj C m g x C m C n for any positive integer mn. This, together with (1.9), implies that

$$D_g(x_m, x_n) = D_g\bigl(x_m, \operatorname{proj}^g_{C_n} x\bigr) \le D_g(x_m, x) - D_g\bigl(\operatorname{proj}^g_{C_n} x, x\bigr) = D_g(x_m, x) - D_g(x_n, x).$$
(4.6)

In view of (4.6), we have

$$D_g(x_n, x) \le D_g(x_n, x) + D_g(x_m, x_n) \le D_g(x_m, x), \quad \forall m \ge n.$$

This proves that $\{D_g(x_n, x)\}_{n\in\mathbb{N}}$ is an increasing sequence in ℝ and hence, by (4.4), the limit $\lim_{n\to\infty} D_g(x_n, x)$ exists. Letting $m, n \to \infty$ in (4.6), we deduce that $D_g(x_m, x_n) \to 0$. In view of Lemma 2.4, we obtain that $\|x_m - x_n\| \to 0$ as $m, n \to \infty$. Thus $\{x_n\}_{n\in\mathbb{N}}$ is a Cauchy sequence. Since E is a Banach space, we conclude that there exists $u \in E$ such that

$$\lim_{n\to\infty}\|x_n - u\| = 0.$$
(4.7)

Now, we show that uF. In view of (4.6), we obtain

lim n D g ( x n + 1 , x n )=0.
(4.8)

Since $\lim_{n\to\infty}\|e_n^j\| = 0$ for all $j \in \{0,1,2,\ldots,N\}$, in view of Lemma 2.4 and (4.8), we obtain that

$$\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0 \quad\text{and}\quad \lim_{n\to\infty} D_g\bigl(x_n, x_n + e_n^j\bigr) = 0, \quad \forall j \in \{0,1,2,\ldots,N\}.$$
(4.9)

The function g is bounded on bounded subsets of E and thus ∇g is also bounded on bounded subsets of E (see, for example, [[23], Proposition 1.1.11] for more details). Since $x_{n+1} \in C_{n+1}$, we get

$$D_g(x_{n+1}, y_n) \le D_g(x_{n+1}, x_n) + \sum_{j=1}^{N}\alpha_{n,j}\, D_g\bigl(x_n, x_n + e_n^j\bigr) + \sum_{j=1}^{N}\alpha_{n,j}\,\bigl\langle x_{n+1} - x_n, \nabla g(x_n) - \nabla g\bigl(x_n + e_n^j\bigr)\bigr\rangle.$$

This, together with (4.9), implies that

lim n D g ( x n + 1 , y n )=0.
(4.10)

Employing Lemma 2.4 and (4.9)-(4.10), we deduce that

$$\lim_{n\to\infty}\|x_{n+1} - y_n\| = 0.$$
(4.11)

In view of (4.7) and (4.11), we get

$$\lim_{n\to\infty}\|y_n - u\| = 0.$$
(4.12)

Thus, { y n } n N is a bounded sequence.

From (4.11) and (4.12), it follows that

$$\lim_{n\to\infty}\|x_n - y_n\| = 0.$$

Since ∇g is uniformly norm-to-norm continuous on any bounded subset of E, we obtain

$$\lim_{n\to\infty}\|\nabla g(x_n) - \nabla g(y_n)\| = 0.$$
(4.13)

Applying Lemma 2.4, we deduce that

lim n D g ( y n , x n )=0.
(4.14)

It follows from the three point identity (see (2.2)) that

| D g ( w , x n ) D g ( w , y n ) | = | D g ( w , y n ) + D g ( y n , x n ) + w y n , g ( y n ) g ( x n ) D g ( w , y n ) | = | D g ( y n , x n ) w y n , g ( y n ) g ( x n ) | D g ( y n , x n ) + w y n g ( y n ) g ( x