
Bregman weak relatively nonexpansive mappings in Banach spaces

Abstract

In this paper, we introduce a new class of mappings called Bregman weak relatively nonexpansive mappings and propose new hybrid iterative algorithms for finding common fixed points of an infinite family of such mappings in Banach spaces. We prove strong convergence theorems for the sequences produced by the methods. Furthermore, we apply our method to prove strong convergence theorems of iterative algorithms for finding common fixed points of finitely many Bregman weak relatively nonexpansive mappings in reflexive Banach spaces. These algorithms take into account possible computational errors. We also apply our main results to solve equilibrium problems in reflexive Banach spaces. Finally, we study hybrid iterative schemes for finding common solutions of an equilibrium problem, fixed points of an infinite family of Bregman weak relatively nonexpansive mappings and null spaces of a γ-inverse strongly monotone mapping in 2-uniformly convex Banach spaces. An application of our results to the solution of equations of Hammerstein type is presented. Our results improve and generalize many known results in the current literature.

MSC:47H10, 37C25.

1 Introduction

The hybrid projection method was first introduced by Haugazeau in [1]. In a series of papers [2–12], authors investigated the hybrid projection method and proved strong and weak convergence theorems for the sequences produced by their methods. The shrinking projection method, which is a generalization of the hybrid projection method, was first introduced by Takahashi et al. in [13]. Throughout this paper, we denote the set of real numbers and the set of positive integers by ℝ and ℕ, respectively. Let E be a Banach space with the norm ‖·‖ and the dual space E*. For any x ∈ E, we denote the value of x* ∈ E* at x by ⟨x, x*⟩. Let {x_n}_{n∈ℕ} be a sequence in E. We denote the strong convergence of {x_n}_{n∈ℕ} to x ∈ E as n → ∞ by x_n → x and the weak convergence by x_n ⇀ x. The modulus δ of convexity of E is defined by

δ(ϵ) = inf{1 − ‖(x + y)/2‖ : ‖x‖ ≤ 1, ‖y‖ ≤ 1, ‖x − y‖ ≥ ϵ}

for every ϵ with 0 ≤ ϵ ≤ 2. A Banach space E is said to be uniformly convex if δ(ϵ) > 0 for every ϵ > 0. Let S_E = {x ∈ E : ‖x‖ = 1}. The norm of E is said to be Gâteaux differentiable if for each x, y ∈ S_E, the limit

lim_{t→0} (‖x + ty‖ − ‖x‖)/t
(1.1)

exists. In this case, E is called smooth. If the limit (1.1) is attained uniformly for all x, y ∈ S_E, then E is called uniformly smooth. The Banach space E is said to be strictly convex if ‖(x + y)/2‖ < 1 whenever x, y ∈ S_E and x ≠ y. It is well known that E is uniformly convex if and only if E* is uniformly smooth. It is also known that if E is reflexive, then E is strictly convex if and only if E* is smooth; for more details, see [14, 15].

Let C be a nonempty subset of E and let T : C → E be a mapping. We denote the set of fixed points of T by F(T), i.e., F(T) = {x ∈ C : Tx = x}. A mapping T : C → E is said to be nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖ for all x, y ∈ C. A mapping T : C → E is said to be quasi-nonexpansive if F(T) ≠ ∅ and ‖Tx − y‖ ≤ ‖x − y‖ for all x ∈ C and y ∈ F(T). The concept of nonexpansivity plays an important role in the study of Mann-type iteration [16] for finding fixed points of a mapping T : C → C. Recall that the Mann-type iteration is given by the following formula:

x_{n+1} = γ_n T x_n + (1 − γ_n) x_n,  x_1 ∈ C.
(1.2)

Here, {γ_n}_{n∈ℕ} is a sequence of real numbers in [0, 1] satisfying appropriate conditions. The construction of fixed points of nonexpansive mappings via Mann's algorithm [16] has been extensively investigated in the current literature (see, for example, [17] and the references therein). In [17], Reich proved the following interesting result.
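The averaging in (1.2) is what makes the scheme work. As a minimal illustration (ours, not the paper's), take the nonexpansive map T(x) = −x on the real line, whose unique fixed point is 0: the plain Picard iteration x_{n+1} = T(x_n) oscillates forever, while the Mann iterate converges.

```python
# Sketch (our illustration): Mann iteration (1.2) for the nonexpansive map
# T(x) = -x on R, with fixed point 0.  Picard iteration only oscillates.

def T(x):
    return -x

def picard(x, steps):
    # x_{n+1} = T(x_n): for T(x) = -x this bounces between x and -x.
    for _ in range(steps):
        x = T(x)
    return x

def mann(x, steps, gamma=0.3):
    # x_{n+1} = gamma*T(x_n) + (1 - gamma)*x_n, a constant-coefficient
    # instance of (1.2); here it contracts by the factor (1 - 2*gamma).
    for _ in range(steps):
        x = gamma * T(x) + (1 - gamma) * x
    return x

x0 = 1.0
print(picard(x0, 101))   # -1.0: no convergence, just oscillation
print(mann(x0, 100))     # essentially 0, the fixed point of T
```

The averaged iterate satisfies x_{n+1} = (1 − 2γ)x_n, so for γ ∈ (0, 1) it converges geometrically to 0, in line with (but much weaker than) Theorem 1.1.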

Theorem 1.1 Let C be a closed and convex subset of a uniformly convex Banach space E with a Fréchet differentiable norm, let T : C → C be a nonexpansive mapping with a fixed point, and let {γ_n}_{n∈ℕ} be a sequence of real numbers such that γ_n ∈ [0, 1] and ∑_{n=1}^∞ γ_n(1 − γ_n) = ∞. Then the sequence {x_n}_{n∈ℕ} generated by Mann's algorithm (1.2) converges weakly to a fixed point of T.

However, the convergence of the sequence { x n } n N generated by Mann’s algorithm (1.2) is in general not strong (see a counterexample in [18]; see also [19]). Some attempts to modify the Mann iteration method (1.2) so that strong convergence is guaranteed have recently been made. Bauschke and Combettes [4] proposed the following modification of the Mann iteration method for a single nonexpansive mapping T in a Hilbert space H:

{ x_0 = x ∈ C,
  y_n = α_n x_n + (1 − α_n) T x_n,
  C_n = {z ∈ C : ‖z − y_n‖ ≤ ‖z − x_n‖},
  Q_n = {z ∈ C : ⟨x_n − z, x − x_n⟩ ≥ 0},
  x_{n+1} = P_{C_n ∩ Q_n} x,
(1.3)

where C is a closed and convex subset of H, and P_Q denotes the metric projection from H onto a closed and convex subset Q of H. They proved that if the sequence {α_n}_{n∈ℕ} is bounded away from one, then the sequence {x_n}_{n∈ℕ} generated by (1.3) converges strongly to P_{F(T)} x as n → ∞.
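In one dimension every set appearing in (1.3) is an interval, so the scheme can be traced by hand. The following sketch (our own construction, not from [4]) runs the CQ method for T(x) = −x with C = [−2, 2]; the projection of the anchor onto C_n ∩ Q_n reduces to a clamp.

```python
# Hypothetical 1-D illustration of the hybrid (CQ) scheme (1.3).
# T(x) = -x is nonexpansive with F(T) = {0}; all names here are ours.
import math

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def cq_method(x, C=(-2.0, 2.0), alpha=0.25, steps=60):
    T = lambda t: -t
    x0 = x                      # fixed anchor point of the scheme
    xn = x
    for _ in range(steps):
        yn = alpha * xn + (1 - alpha) * T(xn)
        mid = 0.5 * (xn + yn)   # C_n = {z : |z - yn| <= |z - xn|}
        c_lo, c_hi = (-math.inf, mid) if yn <= xn else (mid, math.inf)
        # Q_n = {z : (xn - z)(x0 - xn) >= 0}
        if x0 > xn:
            q_lo, q_hi = -math.inf, xn
        elif x0 < xn:
            q_lo, q_hi = xn, math.inf
        else:
            q_lo, q_hi = -math.inf, math.inf
        lo = max(C[0], c_lo, q_lo)
        hi = min(C[1], c_hi, q_hi)
        xn = clamp(x0, lo, hi)  # metric projection of the anchor onto C_n ∩ Q_n
    return xn

print(cq_method(1.0))  # close to 0, the fixed point of T
```

With α = 0.25 the iterates satisfy x_{n+1} = x_n/4, so they converge strongly (here geometrically) to P_{F(T)} x = 0, matching the qualitative conclusion above.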

Let E be a smooth, strictly convex and reflexive Banach space and let J be a normalized duality mapping of E. Let C be a nonempty, closed and convex subset of E. The generalized projection Π C from E onto C [20] is defined and denoted by

Π_C(x) = arg min_{y∈C} ϕ(y, x),

where ϕ(x, y) = ‖x‖² − 2⟨x, Jy⟩ + ‖y‖². Let C be a nonempty, closed and convex subset of a smooth Banach space E, and let T be a mapping from C into itself. A point p ∈ C is said to be an asymptotic fixed point [21] of T if there exists a sequence {x_n}_{n∈ℕ} in C which converges weakly to p and lim_{n→∞} ‖x_n − T x_n‖ = 0. We denote the set of all asymptotic fixed points of T by F̂(T). A point p ∈ C is called a strong asymptotic fixed point of T if there exists a sequence {x_n}_{n∈ℕ} in C which converges strongly to p and lim_{n→∞} ‖x_n − T x_n‖ = 0. We denote the set of all strong asymptotic fixed points of T by F̃(T).

Following Matsushita and Takahashi [22], a mapping T:CC is said to be relatively nonexpansive if the following conditions are satisfied:

  (1) F(T) is nonempty;

  (2) ϕ(u, Tx) ≤ ϕ(u, x) for all u ∈ F(T) and x ∈ C;

  (3) F̂(T) = F(T).

In 2005, Matsushita and Takahashi [22] proved the following strong convergence theorem for relatively nonexpansive mappings in a Banach space.

Theorem 1.2 Let E be a uniformly smooth and uniformly convex Banach space, let C be a nonempty, closed and convex subset of E, let T be a relatively nonexpansive mapping from C into itself, and let {α_n}_{n∈ℕ} be a sequence of real numbers such that 0 ≤ α_n < 1 and lim sup_{n→∞} α_n < 1. Suppose that {x_n}_{n∈ℕ} is given by

{ x_0 = x ∈ C,
  y_n = J^{−1}(α_n J x_n + (1 − α_n) J T x_n),
  H_n = {z ∈ C : ϕ(z, y_n) ≤ ϕ(z, x_n)},
  W_n = {z ∈ C : ⟨x_n − z, Jx − J x_n⟩ ≥ 0},
  x_{n+1} = Π_{H_n ∩ W_n} x.
(1.4)

If F(T) is nonempty, then {x_n}_{n∈ℕ} converges strongly to Π_{F(T)} x.

1.1 Some facts about gradient

For any convex function g : E → (−∞, +∞], we denote the domain of g by dom g = {x ∈ E : g(x) < +∞}. For any x ∈ int dom g and any y ∈ E, we denote by g°(x, y) the right-hand derivative of g at x in the direction y, that is,

g°(x, y) = lim_{t↓0} (g(x + ty) − g(x))/t.
(1.5)

The function g is said to be Gâteaux differentiable at x if the limit (1.5) exists for any y. In this case, g°(x, y) coincides with ⟨y, ∇g(x)⟩, where ∇g(x) is the value of the gradient ∇g of g at x. The function g is said to be Gâteaux differentiable if it is Gâteaux differentiable everywhere. The function g is said to be Fréchet differentiable at x if this limit is attained uniformly in y with ‖y‖ = 1. Equivalently (see, for example, [[23], p.13] or [[24], p.508]), g is Fréchet differentiable at x ∈ E if for all ϵ > 0, there exists δ > 0 such that ‖y − x‖ ≤ δ implies that

|g(y) − g(x) − ⟨y − x, ∇g(x)⟩| ≤ ϵ‖y − x‖.

The function g is said to be Fréchet differentiable if it is Fréchet differentiable everywhere. It is well known that if a continuous convex function g : E → ℝ is Gâteaux differentiable, then ∇g is norm-to-weak* continuous (see, for example, [[23], Proposition 1.1.10]). Also, it is known that if g is Fréchet differentiable, then ∇g is norm-to-norm continuous (see [[24], p.508]). The mapping ∇g is said to be weakly sequentially continuous if x_n ⇀ x as n → ∞ implies that ∇g(x_n) ⇀* ∇g(x) as n → ∞ (for more details, see [[23], Theorem 3.2.4] or [[24], p.508]). The function g is said to be strongly coercive if

lim_{‖x_n‖→∞} g(x_n)/‖x_n‖ = +∞.

It is also said to be bounded on bounded subsets of E if g(U) is bounded for each bounded subset U of E. Finally, g is said to be uniformly Fréchet differentiable on a subset X of E if the limit (1.5) is attained uniformly for all x ∈ X and ‖y‖ = 1.

Let A : E → 2^{E*} be a set-valued mapping. We define the domain and range of A by dom A = {x ∈ E : Ax ≠ ∅} and ran A = ∪_{x∈E} Ax, respectively. The graph of A is denoted by G(A) = {(x, x*) ∈ E × E* : x* ∈ Ax}. The mapping A ⊂ E × E* is said to be monotone [25] if ⟨x − y, x* − y*⟩ ≥ 0 whenever (x, x*), (y, y*) ∈ A. It is said to be maximal monotone [26] if its graph is not contained in the graph of any other monotone operator on E. If A ⊂ E × E* is maximal monotone, then one can show that the set A^{−1}(0) = {z ∈ E : 0 ∈ Az} is closed and convex. A mapping A : dom A ⊂ E → E* is called γ-inverse strongly monotone if there exists a positive real number γ such that ⟨x − y, Ax − Ay⟩ ≥ γ‖Ax − Ay‖² for all x, y ∈ dom A.

1.2 Some facts about Legendre functions

Let E be a reflexive Banach space. For any proper, lower semicontinuous and convex function g : E → (−∞, +∞], the conjugate function g* of g is defined by

g*(x*) = sup_{x∈E} {⟨x, x*⟩ − g(x)}

for all x* ∈ E*. It is well known that g(x) + g*(x*) ≥ ⟨x, x*⟩ for all (x, x*) ∈ E × E*. It is also known that (x, x*) ∈ ∂g is equivalent to

g(x) + g*(x*) = ⟨x, x*⟩.
(1.6)

Here, ∂g is the subdifferential of g [27, 28]. We also know that if g : E → (−∞, +∞] is a proper, lower semicontinuous and convex function, then g* : E* → (−∞, +∞] is a proper, weak* lower semicontinuous and convex function; see [15] for more details on convex analysis.

Let g:E(,+] be a mapping. The function g is said to be:

  (i) essentially smooth, if ∂g is both locally bounded and single-valued on its domain;

  (ii) essentially strictly convex, if (∂g)^{−1} is locally bounded on its domain and g is strictly convex on every convex subset of dom ∂g;

  (iii) Legendre, if it is both essentially smooth and essentially strictly convex (for more details, we refer to [[29], Definition 5.2]).

If E is a reflexive Banach space and g:E(,+] is a Legendre function, then in view of [[30], p.83],

∇g* = (∇g)^{−1},  ran ∇g = dom ∇g* = int dom g*,  and  ran ∇g* = dom ∇g = int dom g.

Examples of Legendre functions are given in [29, 31]. One important and interesting Legendre function is (1/s)‖·‖^s (1 < s < ∞), where the Banach space E is smooth and strictly convex and, in particular, a Hilbert space.

1.3 Some facts about Bregman distance

Let E be a Banach space and let E* be the dual space of E. Let g : E → ℝ be a convex and Gâteaux differentiable function. Then the Bregman distance [32, 33] corresponding to g is the function D_g : E × E → ℝ defined by

D_g(x, y) = g(x) − g(y) − ⟨x − y, ∇g(y)⟩,  x, y ∈ E.
(1.7)

It is clear that D_g(x, y) ≥ 0 for all x, y ∈ E. In the case when E is a smooth Banach space, setting g(x) = ‖x‖² for all x ∈ E, we obtain ∇g(x) = 2Jx for all x ∈ E and hence D_g(x, y) = ϕ(x, y) for all x, y ∈ E.
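Two concrete instances of definition (1.7) may help fix ideas; the following sketch is our own illustration on ℝⁿ, not part of the paper. For g(x) = ‖x‖² the Bregman distance is the squared Euclidean distance (the Hilbert-space case of D_g = ϕ), and for the negative entropy g(x) = ∑ x_i log x_i on the positive orthant it is the Kullback–Leibler divergence.

```python
# Sketch (ours): Bregman distances on R^n computed directly from (1.7),
#   D_g(x, y) = g(x) - g(y) - <x - y, grad g(y)>.
import math

def bregman(g, grad_g, x, y):
    return g(x) - g(y) - sum((xi - yi) * gi for xi, yi, gi in zip(x, y, grad_g(y)))

# g(x) = ||x||^2 (Hilbert-space case): D_g(x, y) = ||x - y||^2.
sq  = lambda x: sum(t * t for t in x)
gsq = lambda x: [2 * t for t in x]

# g(x) = sum x_i log x_i (negative entropy on the positive orthant):
# D_g(x, y) = sum x_i log(x_i / y_i) - x_i + y_i, the KL divergence.
ent  = lambda x: sum(t * math.log(t) for t in x)
gent = lambda x: [math.log(t) + 1 for t in x]

x, y = [1.0, 2.0], [3.0, 1.0]
print(bregman(sq, gsq, x, y))    # 5.0, which is ||x - y||^2
print(bregman(ent, gent, x, y))  # the KL divergence of x from y
```

Note that D_g is in general not symmetric and not a metric, which is why the paper works with the one-sided inequalities above rather than with a distance function.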

Let E be a Banach space and let C be a nonempty and convex subset of E. Let g : E → ℝ be a convex and Gâteaux differentiable function. Then we know from [34] that for x ∈ E and x_0 ∈ C, D_g(x_0, x) = min_{y∈C} D_g(y, x) if and only if

⟨y − x_0, ∇g(x) − ∇g(x_0)⟩ ≤ 0,  ∀y ∈ C.
(1.8)

Furthermore, if C is a nonempty, closed and convex subset of a reflexive Banach space E and g : E → ℝ is a strongly coercive Bregman function, then for each x ∈ E, there exists a unique x_0 ∈ C such that

D_g(x_0, x) = min_{y∈C} D_g(y, x).

The Bregman projection proj_C^g from E onto C is defined by proj_C^g(x) = x_0 for all x ∈ E. It is also well known that proj_C^g has the following property:

D_g(y, proj_C^g x) + D_g(proj_C^g x, x) ≤ D_g(y, x)
(1.9)

for all y ∈ C and x ∈ E (see [23] for more details).
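Property (1.9) can be checked numerically in the simplest setting. For g(x) = ‖x‖² on ℝ² the Bregman projection coincides with the metric projection, and (1.9) becomes the classical obtuse-angle inequality ‖y − Px‖² + ‖Px − x‖² ≤ ‖y − x‖² for y ∈ C. The sketch below (our own sanity check, with C a fixed halfspace so the projection is a one-coordinate clamp) verifies it on random samples.

```python
# Our numerical sanity check of (1.9) for g(x) = ||x||^2 on R^2,
# with C the halfspace {z : z1 <= 0}.
import random

def proj_C(x):
    # metric projection onto {z : z1 <= 0}: clamp the first coordinate
    return (min(x[0], 0.0), x[1])

def d2(u, v):
    return (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2

random.seed(0)
for _ in range(1000):
    x = (random.uniform(-3, 3), random.uniform(-3, 3))
    y = (random.uniform(-3, 0), random.uniform(-3, 3))   # y lies in C
    p = proj_C(x)
    assert d2(y, p) + d2(p, x) <= d2(y, x) + 1e-12       # inequality (1.9)
print("property (1.9) verified on 1000 random samples")
```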

1.4 Some facts about uniformly convex and totally convex functions

Let E be a Banach space and let B_r := {z ∈ E : ‖z‖ ≤ r} for all r > 0. Then a function g : E → ℝ is said to be uniformly convex on bounded subsets of E [[35], pp.203, 221] if ρ_r(t) > 0 for all r, t > 0, where ρ_r : [0, +∞) → [0, +∞] is defined by

ρ_r(t) = inf{ [αg(x) + (1 − α)g(y) − g(αx + (1 − α)y)] / [α(1 − α)] : x, y ∈ B_r, ‖x − y‖ = t, α ∈ (0, 1) }

for all t ≥ 0. The function ρ_r is called the gauge of uniform convexity of g. The function g is also said to be uniformly smooth on bounded subsets of E [[35], pp.207, 221] if lim_{t↓0} σ_r(t)/t = 0 for all r > 0, where σ_r : [0, +∞) → [0, +∞] is defined by

σ_r(t) = sup{ [αg(x + (1 − α)ty) + (1 − α)g(x − αty) − g(x)] / [α(1 − α)] : x ∈ B_r, y ∈ S_E, α ∈ (0, 1) }

for all t ≥ 0. The function g is said to be uniformly convex if the function δ_g : [0, +∞) → [0, +∞], defined by

δ_g(t) := inf{ (1/2)g(x) + (1/2)g(y) − g((x + y)/2) : ‖y − x‖ = t },

is positive whenever t > 0.

Remark 1.1 Let E be a Banach space, let r > 0 be a constant and let g : E → ℝ be a convex function which is uniformly convex on bounded subsets of E. Then

g(αx + (1 − α)y) ≤ αg(x) + (1 − α)g(y) − α(1 − α)ρ_r(‖x − y‖)

for all x, y ∈ B_r and α ∈ (0, 1), where ρ_r is the gauge of uniform convexity of g.

Let g : E → (−∞, +∞] be a convex and Gâteaux differentiable function. Recall that, in view of [[23], Section 1.2, p.17] (see also [36]), the function g is called totally convex at a point x ∈ int dom g if its modulus of total convexity at x, that is, the function ν_g : int dom g × [0, +∞) → [0, +∞) defined by

ν_g(x, t) := inf{ D_g(y, x) : y ∈ int dom g, ‖y − x‖ = t },

is positive whenever t > 0. The function g is called totally convex when it is totally convex at every point x ∈ int dom g. Moreover, the function g is called totally convex on bounded subsets of E if ν_g(X, t) > 0 for any bounded subset X of E and any t > 0, where the modulus of total convexity of the function g on the set X is defined by

ν_g(X, t) := inf{ ν_g(x, t) : x ∈ X ∩ int dom g }.

It is well known that any uniformly convex function is totally convex, but the converse is not true in general (see [[23], Section 1.3, p.30]).

It is also well known that g is totally convex on bounded subsets if and only if g is uniformly convex on bounded subsets (see [[37], Theorem 2.10, p.9]).

Examples of totally convex functions can be found, for instance, in [23, 37].

1.5 Some facts about resolvent

Let E be a reflexive Banach space with the dual space E* and let g : E → (−∞, +∞] be a proper, lower semicontinuous and convex function. Let A be a maximal monotone operator from E to E*. For any r > 0, let the mapping Res_{rA}^g : E → dom A be defined by

Res_{rA}^g = (∇g + rA)^{−1} ∘ ∇g.

The mapping Res_{rA}^g is called the g-resolvent of A (see [38]). It is well known that A^{−1}(0) = F(Res_{rA}^g) for each r > 0 (for more details, see, for example, [14]).

Examples and some important properties of such operators are discussed in [39].

1.6 Some facts about Bregman quasi-nonexpansive mappings

Let C be a nonempty, closed and convex subset of a reflexive Banach space E. Let g : E → (−∞, +∞] be a proper, lower semicontinuous and convex function. Recall that a mapping T : C → C is said to be Bregman quasi-nonexpansive [40] if F(T) ≠ ∅ and

D_g(p, Tx) ≤ D_g(p, x),  ∀x ∈ C, ∀p ∈ F(T).

A mapping T:CC is said to be Bregman relatively nonexpansive [40] if the following conditions are satisfied:

  (1) F(T) is nonempty;

  (2) D_g(p, Tv) ≤ D_g(p, v) for all p ∈ F(T) and v ∈ C;

  (3) F̂(T) = F(T).

Now, we are in a position to introduce the following new class of Bregman quasi-nonexpansive type mappings. A mapping T:CC is said to be Bregman weak relatively nonexpansive if the following conditions are satisfied:

  (1) F(T) is nonempty;

  (2) D_g(p, Tv) ≤ D_g(p, v) for all p ∈ F(T) and v ∈ C;

  (3) F̃(T) = F(T).

It is clear that any Bregman relatively nonexpansive mapping is a Bregman quasi-nonexpansive mapping. It is also obvious that every Bregman relatively nonexpansive mapping is a Bregman weak relatively nonexpansive mapping, but the converse is not true in general. Indeed, for any mapping T : C → C, we have F(T) ⊂ F̃(T) ⊂ F̂(T). If T is Bregman relatively nonexpansive, then F(T) = F̃(T) = F̂(T). Below we show that there exists a Bregman weak relatively nonexpansive mapping which is not a Bregman relatively nonexpansive mapping.

Example 1.1 Let E= l 2 , where

l² = {σ = (σ_1, σ_2, …, σ_n, …) : ∑_{n=1}^∞ σ_n² < ∞},  ‖σ‖ = (∑_{n=1}^∞ σ_n²)^{1/2} for σ ∈ l²,  ⟨σ, η⟩ = ∑_{n=1}^∞ σ_n η_n for σ = (σ_1, σ_2, …, σ_n, …), η = (η_1, η_2, …, η_n, …) ∈ l².

Let { x n } n N { 0 } E be a sequence defined by

x_0 = (1, 0, 0, 0, …),
x_1 = (1, 1, 0, 0, 0, …),
x_2 = (1, 0, 1, 0, 0, 0, …),
x_3 = (1, 0, 0, 1, 0, 0, 0, …),
…,
x_n = (σ_{n,1}, σ_{n,2}, …, σ_{n,k}, …),
…,

where

σ_{n,k} = 1 if k = 1 or k = n + 1, and σ_{n,k} = 0 otherwise,

for all n ∈ ℕ. It is clear that the sequence {x_n}_{n∈ℕ} converges weakly to x_0. Indeed, for any Λ = (λ_1, λ_2, …, λ_n, …) ∈ l² = (l²)*, we have

Λ(x_n − x_0) = ⟨x_n − x_0, Λ⟩ = ∑_{k=2}^∞ λ_k σ_{n,k} = λ_{n+1} → 0

as n → ∞. It is also obvious that ‖x_n − x_m‖ = √2 for any n ≠ m with n, m ≥ 1. Thus, {x_n}_{n∈ℕ} is not a Cauchy sequence. Let k be an even number in ℕ and let g : E → ℝ be defined by

g(x) = (1/k)‖x‖^k,  ∀x ∈ E.

It is easy to show that ∇g(x) = J_k(x) for all x ∈ E, where

J_k(x) = {x* ∈ E* : ⟨x, x*⟩ = ‖x‖ ‖x*‖, ‖x*‖ = ‖x‖^{k−1}}.

It is also obvious that

J_k(λx) = λ^{k−1} J_k(x),  ∀x ∈ E, λ ∈ ℝ.

Now, we define a mapping T : E → E by

T(x) = (n/(n+1)) x_n if x = x_n, and T(x) = −x if x ≠ x_n for every n.

It is clear that F(T) = {0} and, for any n ∈ ℕ,

D_g(0, T x_n) = g(0) − g(T x_n) − ⟨0 − T x_n, ∇g(T x_n)⟩
  = −(n/(n+1))^k g(x_n) + (n/(n+1))^k ⟨x_n, ∇g(x_n)⟩
  = (n/(n+1))^k [−g(x_n) + ⟨x_n, ∇g(x_n)⟩]
  = (n/(n+1))^k D_g(0, x_n)
  ≤ D_g(0, x_n).

If x ≠ x_n, then we have

D_g(0, Tx) = g(0) − g(−x) − ⟨0 − (−x), ∇g(−x)⟩ = −g(x) + ⟨x, ∇g(x)⟩ = D_g(0, x).

Therefore, T is a Bregman quasi-nonexpansive mapping. Next, we claim that T is a Bregman weak relatively nonexpansive mapping. Indeed, let {z_n}_{n∈ℕ} ⊂ E be any sequence such that z_n → z_0 and ‖z_n − T z_n‖ → 0 as n → ∞. Since {x_n}_{n∈ℕ} is not a Cauchy sequence, there exists a sufficiently large number N ∈ ℕ such that z_n ≠ x_m for any n, m > N. Indeed, if there existed m ∈ ℕ such that z_n = x_m for infinitely many n ∈ ℕ, then a subsequence {z_{n_i}}_{i∈ℕ} would satisfy z_{n_i} = x_m, so z_0 = lim_{i→∞} z_{n_i} = x_m and z_0 = lim_{i→∞} T z_{n_i} = T x_m = (m/(m+1)) x_m, which is impossible. This implies that T z_n = −z_n for all n > N. It follows from ‖z_n − T z_n‖ → 0 that 2‖z_n‖ → 0 and hence z_n → z_0 = 0. Since z_0 = 0 ∈ F(T), we conclude that T is a Bregman weak relatively nonexpansive mapping.
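The key estimate of this example can also be checked numerically. The following sketch (our own finite-dimensional sanity check, not part of the example) uses the fact that, for g(x) = (1/k)‖x‖^k in a Hilbert space, ∇g(x) = ‖x‖^{k−2} x, so that D_g(0, x) = (1 − 1/k)‖x‖^k and D_g(0, (n/(n+1)) x) = (n/(n+1))^k D_g(0, x).

```python
# Our sanity check of Example 1.1's estimate for g(x) = (1/k)||x||^k.
import math

def Dg_zero(x, k):
    # D_g(0, x) = g(0) - g(x) - <0 - x, grad g(x)> = -g(x) + <x, grad g(x)>
    nrm = math.sqrt(sum(t * t for t in x))
    g = nrm ** k / k
    inner = nrm ** (k - 2) * sum(t * t for t in x)  # <x, grad g(x)> = ||x||^k
    return -g + inner

k = 4  # an even exponent, as in the example
for n in range(1, 6):
    xn = [1.0] + [0.0] * n + [1.0]          # truncation of x_n = e_1 + e_{n+1}
    Txn = [(n / (n + 1)) * t for t in xn]   # T x_n = (n/(n+1)) x_n
    ratio = Dg_zero(Txn, k) / Dg_zero(xn, k)
    assert abs(ratio - (n / (n + 1)) ** k) < 1e-12
    assert Dg_zero(Txn, k) <= Dg_zero(xn, k)
print("D_g(0, T x_n) = (n/(n+1))^k D_g(0, x_n) <= D_g(0, x_n) confirmed")
```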

Finally, we show that T is not Bregman relatively nonexpansive. In fact, although x_n ⇀ x_0 and

‖x_n − T x_n‖ = ‖x_n − (n/(n+1)) x_n‖ = (1/(n+1))‖x_n‖ → 0

as n → ∞, we have x_0 ∉ F(T). Thus F̂(T) ≠ F(T).

Let us give an example of a Bregman quasi-nonexpansive mapping which is neither a Bregman relatively nonexpansive mapping nor a Bregman weak relatively nonexpansive mapping (see also [41]).

Example 1.2 Let E be a smooth Banach space, let k be an even number in ℕ and let g : E → ℝ be defined by

g(x) = (1/k)‖x‖^k,  ∀x ∈ E.

Let x_0 ≠ 0 be any element of E. We define a mapping T : E → E by

T(x) = (1/2 + 1/2^{n+1}) x_0 if x = (1/2 + 1/2^n) x_0 for some n ≥ 0, and T(x) = −x otherwise.

It can easily be seen that T is neither a Bregman weak relatively nonexpansive mapping nor a Bregman relatively nonexpansive mapping. To this end, we set

x_n = (1/2 + 1/2^n) x_0,  ∀n ∈ ℕ.

Although x_n → (1/2) x_0 (and hence x_n ⇀ (1/2) x_0) as n → ∞ and

‖x_n − T x_n‖ = ‖(1/2 + 1/2^n) x_0 − (1/2 + 1/2^{n+1}) x_0‖ = (1/2^{n+1})‖x_0‖ → 0

as n → ∞, we have (1/2) x_0 ∉ F(T). Therefore, F̂(T) ≠ F(T) and F̃(T) ≠ F(T).

In [42], Bauschke and Combettes introduced an iterative method to construct the Bregman projection of a point onto a countable intersection of closed and convex sets in reflexive Banach spaces. They proved a strong convergence theorem of the sequence produced by their method; for more detail, see [[42], Theorem 4.7].

In [40], Reich and Sabach introduced a proximal method for finding common zeros of finitely many maximal monotone operators in a reflexive Banach space. More precisely, they proved the following strong convergence theorem.

Theorem 1.3 Let E be a reflexive Banach space and let A_i : E → 2^{E*}, i = 1, 2, …, N, be N maximal monotone operators such that Z := ∩_{i=1}^N A_i^{−1}(0) ≠ ∅. Let g : E → ℝ be a Legendre function that is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E. Let {x_n}_{n∈ℕ} be a sequence defined by the following iterative algorithm:

{ x_0 ∈ E chosen arbitrarily,
  y_n^i = Res_{λ_n^i A_i}^g (x_n + e_n^i),
  C_n^i = {z ∈ E : D_g(z, y_n^i) ≤ D_g(z, x_n + e_n^i)},
  C_n := ∩_{i=1}^N C_n^i,
  Q_n = {z ∈ E : ⟨∇g(x_0) − ∇g(x_n), z − x_n⟩ ≤ 0},
  x_{n+1} = proj_{C_n ∩ Q_n}^g x_0,  n ∈ ℕ ∪ {0}.
(1.10)

If, for each i = 1, 2, …, N, lim inf_{n→∞} λ_n^i > 0 and the sequences of errors {e_n^i}_{n∈ℕ} ⊂ E satisfy lim inf_{n→∞} ‖e_n^i‖ = 0, then each such sequence {x_n}_{n∈ℕ} converges strongly to proj_Z^g(x_0) as n → ∞.

Let C be a nonempty, closed and convex subset of a reflexive Banach space E. Let g:E(,+] be a proper, lower semicontinuous and convex function. Recall that a mapping T:CC is said to be Bregman firmly nonexpansive (for short, BFNE) if

D_g(Tx, Ty) + D_g(Ty, Tx) + D_g(Tx, x) + D_g(Ty, y) ≤ D_g(Tx, y) + D_g(Ty, x)

for all x, y ∈ C. The mapping T is called quasi-Bregman firmly nonexpansive (for short, QBFNE) [43] if F(T) ≠ ∅ and

D_g(p, Tx) + D_g(Tx, x) ≤ D_g(p, x)

for all xC and pF(T). It is clear that any quasi-Bregman firmly nonexpansive mapping is Bregman quasi-nonexpansive. For more information on Bregman firmly nonexpansive mappings, we refer the readers to [38, 44]. In [44], Reich and Sabach proved that for any BFNE operator T, F ˆ (T)=F(T).

In [43], Reich and Sabach introduced a Mann-type process to approximate fixed points of quasi-Bregman firmly nonexpansive mappings defined on a nonempty, closed and convex subset C of a reflexive Banach space E. More precisely, they proved the following theorem.

Theorem 1.4 Let E be a reflexive Banach space and let T_i : E → E, i = 1, 2, …, N, be N QBFNE operators which satisfy F(T_i) = F̂(T_i) for each 1 ≤ i ≤ N and F := ∩_{i=1}^N F(T_i) ≠ ∅. Let g : E → ℝ be a Legendre function that is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E. Let {x_n}_{n∈ℕ} be a sequence defined by the following iterative algorithm:

{ x_0 ∈ E chosen arbitrarily,
  Q_0^i = E, i = 1, 2, …, N,
  y_n^i = T_i(x_n + e_n^i),
  Q_{n+1}^i = {z ∈ Q_n^i : ⟨∇g(x_n + e_n^i) − ∇g(y_n^i), z − y_n^i⟩ ≤ 0},
  Q_{n+1} := ∩_{i=1}^N Q_{n+1}^i,
  x_{n+1} = proj_{Q_{n+1}}^g x_0,  n ∈ ℕ ∪ {0}.
(1.11)

If, for each i = 1, 2, …, N, the sequences of errors {e_n^i}_{n∈ℕ} ⊂ E satisfy lim inf_{n→∞} ‖e_n^i‖ = 0, then each such sequence {x_n}_{n∈ℕ} converges strongly to proj_F^g(x_0) as n → ∞.

Let E be a reflexive Banach space and let g : E → ℝ be a convex and Gâteaux differentiable function. Let C be a nonempty, closed and convex subset of E. Recall that a mapping T : C → C is said to be Bregman strongly nonexpansive (for short, BSNE) with respect to a nonempty F̂(T) if

D_g(p, Tx) ≤ D_g(p, x)

for all x ∈ C and p ∈ F̂(T), and if whenever {x_n}_{n∈ℕ} ⊂ C is bounded and p ∈ F̂(T), we have

lim_{n→∞} (D_g(p, x_n) − D_g(p, T x_n)) = 0  ⟹  lim_{n→∞} D_g(T x_n, x_n) = 0.

The class of (quasi-)Bregman strongly nonexpansive mappings was first introduced in [21, 45] (for more details, see also [46]). We know that the notion of a strongly nonexpansive operator (with respect to the norm) was first introduced and studied in [47, 48].

In [46], Reich and Sabach introduced iterative algorithms for finding common fixed points of finitely many Bregman strongly nonexpansive operators in a reflexive Banach space. They established the following strong convergence theorem in a reflexive Banach space.

Theorem 1.5 Let E be a reflexive Banach space and let T_i : E → E, i = 1, 2, …, N, be N BSNE operators which satisfy F(T_i) = F̂(T_i) for each 1 ≤ i ≤ N and F := ∩_{i=1}^N F(T_i) ≠ ∅. Let g : E → ℝ be a Legendre function that is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E. Let {x_n}_{n∈ℕ} be a sequence defined by the following iterative algorithm:

{ x_0 ∈ E chosen arbitrarily,
  y_n^i = T_i(x_n + e_n^i),
  C_n^i = {z ∈ E : D_g(z, y_n^i) ≤ D_g(z, x_n + e_n^i)},
  C_n := ∩_{i=1}^N C_n^i,
  Q_n = {z ∈ E : ⟨∇g(x_0) − ∇g(x_n), z − x_n⟩ ≤ 0},
  x_{n+1} = proj_{C_n ∩ Q_n}^g x_0,  n ∈ ℕ ∪ {0}.
(1.12)

If, for each i = 1, 2, …, N, the sequences of errors {e_n^i}_{n∈ℕ} ⊂ E satisfy lim inf_{n→∞} ‖e_n^i‖ = 0, then each such sequence {x_n}_{n∈ℕ} converges strongly to proj_F^g(x_0) as n → ∞.

But it is worth mentioning that, in all the above results for Bregman nonexpansive-type mappings, the assumption F ˆ (T)=F(T) is imposed on the map T.

Remark 1.2 Though the iteration processes (1.10) and (1.12) introduced by the authors mentioned above work, these processes seem cumbersome and complicated in the sense that, at each stage of iteration, two different sets C_n and Q_n are computed and the next iterate is taken as the Bregman projection of x_0 onto the intersection of C_n and Q_n. This seems difficult to do in applications. It is important to state clearly that the iteration process (1.11) involves computation of only one set Q_n at each stage of iteration. In [49], Sabach proposed an excellent modification of algorithm (1.10) for finding common zeros of finitely many maximal monotone operators in reflexive Banach spaces.

Our concern now is the following:

Is it possible to obtain strong convergence of the modified Mann-type schemes (1.10)-(1.12) to a fixed point of a Bregman quasi-nonexpansive type mapping T without imposing the assumption F̂(T) = F(T) on T?

In this paper, using Bregman functions, we introduce new hybrid iterative algorithms for finding common fixed points of an infinite family of Bregman weak relatively nonexpansive mappings in Banach spaces. We prove strong convergence theorems for the sequences produced by the methods. Furthermore, we apply our method to prove strong convergence theorems of iterative algorithms for finding common fixed points of finitely many Bregman weak relatively nonexpansive mappings in reflexive Banach spaces. These algorithms take into account possible computational errors. We also apply our main results to solve equilibrium problems in reflexive Banach spaces. Finally, we study hybrid iterative schemes for finding common solutions of an equilibrium problem, fixed points of an infinite family of Bregman weak relatively nonexpansive mappings and null spaces of a γ-inverse strongly monotone mapping in 2-uniformly convex Banach spaces. An application of our results to the solution of equations of Hammerstein type is presented. No assumption F̂(T) = F(T) is imposed on the mapping T. Consequently, the above concern is answered in the affirmative in the setting of reflexive Banach spaces. Our results improve and generalize many known results in the current literature; see, for example, [4, 7, 8, 11, 22, 40, 42–44, 46, 50–52].

2 Preliminaries

In this section, we begin by recalling some preliminaries and lemmas which will be used in the sequel.

The following definition is slightly different from that in Butnariu and Iusem [23].

Definition 2.1 [24]

Let E be a Banach space. The function g:ER is said to be a Bregman function if the following conditions are satisfied:

  1. (1)

    g is continuous, strictly convex and Gâteaux differentiable;

  2. (2)

the set {y ∈ E : D_g(x, y) ≤ r} is bounded for all x ∈ E and r > 0.

The following lemma follows from Butnariu and Iusem [23] and Zălinscu [35].

Lemma 2.1 Let E be a reflexive Banach space and let g:ER be a strongly coercive Bregman function. Then

  1. (1)

∇g : E → E* is one-to-one, onto and norm-to-weak* continuous;

  2. (2)

⟨x − y, ∇g(x) − ∇g(y)⟩ = 0 if and only if x = y;

  3. (3)

{x ∈ E : D_g(x, y) ≤ r} is bounded for all y ∈ E and r > 0;

  4. (4)

dom g* = E*, g* is Gâteaux differentiable and ∇g* = (∇g)^{−1}.

Now, we are ready to prove the following key lemma.

Lemma 2.2 Let E be a Banach space, let r>0 be a constant and let g:ER be a convex function which is uniformly convex on bounded subsets of E. Then

g(∑_{k=0}^n α_k x_k) ≤ ∑_{k=0}^n α_k g(x_k) − α_i α_j ρ_r(‖x_i − x_j‖)

for all i, j ∈ {0, 1, 2, …, n}, x_k ∈ B_r and α_k ∈ (0, 1), k = 0, 1, 2, …, n, with ∑_{k=0}^n α_k = 1, where ρ_r is the gauge of uniform convexity of g.

Proof Without loss of generality, we may assume that i = 0 and j = 1. We argue by induction on n. For n = 1, in view of Remark 1.1, we get the desired result. Now suppose that the conclusion holds for n = k, i.e.,

g(∑_{m=0}^k α_m x_m) ≤ ∑_{m=0}^k α_m g(x_m) − α_0 α_1 ρ_r(‖x_0 − x_1‖).

Now, we prove that the conclusion holds for n = k + 1. Put x = ∑_{m=0}^k α_m x_m / (1 − α_{k+1}) and observe that x ∈ B_r. Since g is convex, using the induction hypothesis, we conclude that

g(∑_{m=0}^{k+1} α_m x_m) = g((1 − α_{k+1}) ∑_{m=0}^k α_m x_m/(1 − α_{k+1}) + α_{k+1} x_{k+1})
  ≤ (1 − α_{k+1}) g(∑_{m=0}^k α_m x_m/(1 − α_{k+1})) + α_{k+1} g(x_{k+1})
  ≤ ∑_{m=0}^k α_m g(x_m) − α_0 α_1 ρ_r(‖x_0 − x_1‖) + α_{k+1} g(x_{k+1})
  = ∑_{m=0}^{k+1} α_m g(x_m) − α_0 α_1 ρ_r(‖x_0 − x_1‖).

This completes the proof. □

Lemma 2.3 Let E be a Banach space, let r>0 be a constant and let g:ER be a continuous and convex function which is uniformly convex on bounded subsets of E. Then

g(∑_{k=0}^∞ α_k x_k) ≤ ∑_{k=0}^∞ α_k g(x_k) − α_i α_j ρ_r(‖x_i − x_j‖)

for all i, j ∈ ℕ ∪ {0}, x_k ∈ B_r and α_k ∈ (0, 1), k ∈ ℕ ∪ {0}, with ∑_{k=0}^∞ α_k = 1, where ρ_r is the gauge of uniform convexity of g.

Proof Let i, j ∈ ℕ ∪ {0} and k > max{i, j}. Put v_k = (α_0 x_0 + α_1 x_1 + ⋯ + α_k x_k)/∑_{m=0}^k α_m and observe that v_k ∈ B_r for all k ∈ ℕ. In view of Lemma 2.2, we obtain

g(v_k) = g((α_0 x_0 + α_1 x_1 + ⋯ + α_k x_k)/∑_{m=0}^k α_m) ≤ (1/∑_{m=0}^k α_m)(∑_{m=0}^k α_m g(x_m) − α_i α_j ρ_r(‖x_i − x_j‖)).
(2.1)

Since g is continuous and v_k → ∑_{m=0}^∞ α_m x_m as k → ∞, we have

lim_{k→∞} g(v_k) = g(∑_{m=0}^∞ α_m x_m).

Letting k → ∞ in (2.1), we conclude that

g(∑_{m=0}^∞ α_m x_m) ≤ ∑_{m=0}^∞ α_m g(x_m) − α_i α_j ρ_r(‖x_i − x_j‖),

which completes the proof. □

We know the following two results; see [[35], Proposition 3.6.4].

Theorem 2.1 Let E be a reflexive Banach space and let g:ER be a convex function which is bounded on bounded subsets of E. Then the following assertions are equivalent:

  1. (1)

    g is strongly coercive and uniformly convex on bounded subsets of E;

  2. (2)

dom g* = E*, g* is bounded on bounded subsets and uniformly smooth on bounded subsets of E*;

  3. (3)

dom g* = E*, g* is Fréchet differentiable and ∇g* is uniformly norm-to-norm continuous on bounded subsets of E*.

Theorem 2.2 Let E be a reflexive Banach space and let g:ER be a continuous convex function which is strongly coercive. Then the following assertions are equivalent:

  1. (1)

    g is bounded on bounded subsets and uniformly smooth on bounded subsets of E;

  2. (2)

g is Fréchet differentiable and ∇g is uniformly norm-to-norm continuous on bounded subsets of E;

  3. (3)

dom g* = E*, g* is strongly coercive and uniformly convex on bounded subsets of E*.

Let E be a Banach space and let g : E → ℝ be a convex and Gâteaux differentiable function. Then the Bregman distance [32, 33] satisfies the three point identity, that is,

D_g(x, z) = D_g(x, y) + D_g(y, z) + ⟨x − y, ∇g(y) − ∇g(z)⟩,  ∀x, y, z ∈ E.
(2.2)

In particular, it can easily be seen that

D_g(x, y) = −D_g(y, x) + ⟨x − y, ∇g(x) − ∇g(y)⟩,  ∀x, y ∈ E.
(2.3)

Indeed, by letting z = x in (2.2) and taking into account that D_g(x, x) = 0, we get the desired result.
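The three point identity (2.2) is a finite algebraic identity, so it can be verified numerically for any smooth convex g. The sketch below (our own check, for the concrete choice g(x) = (1/4)‖x‖⁴ on ℝ², where ∇g(x) = ‖x‖² x) confirms it on random triples.

```python
# A quick numerical check (ours) of the three point identity (2.2)
# for g(x) = (1/4)||x||^4 on R^2, with grad g(x) = ||x||^2 x.
import random

def g(x):
    return (x[0] ** 2 + x[1] ** 2) ** 2 / 4

def grad_g(x):
    s = x[0] ** 2 + x[1] ** 2
    return (s * x[0], s * x[1])

def Dg(u, v):
    gv = grad_g(v)
    return g(u) - g(v) - ((u[0] - v[0]) * gv[0] + (u[1] - v[1]) * gv[1])

random.seed(1)
for _ in range(1000):
    x, y, z = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(3)]
    gy, gz = grad_g(y), grad_g(z)
    cross = (x[0] - y[0]) * (gy[0] - gz[0]) + (x[1] - y[1]) * (gy[1] - gz[1])
    # identity (2.2): D_g(x,z) = D_g(x,y) + D_g(y,z) + <x - y, grad g(y) - grad g(z)>
    assert abs(Dg(x, z) - (Dg(x, y) + Dg(y, z) + cross)) < 1e-9
print("three point identity (2.2) verified on 1000 random triples")
```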

Lemma 2.4 Let E be a Banach space and let g:ER be a Gâteaux differentiable function which is uniformly convex on bounded subsets of E. Let { x n } n N and { y n } n N be bounded sequences in E. Then the following assertions are equivalent:

  1. (1)

lim_{n→∞} D_g(x_n, y_n) = 0;

  2. (2)

lim_{n→∞} ‖x_n − y_n‖ = 0.

Proof The implication (1) ⟹ (2) was proved in [23] (see also [24]). For the converse implication, we assume that lim_{n→∞} ‖x_n − y_n‖ = 0. Then, in view of (2.3), we have

D_g(x_n, y_n) = −D_g(y_n, x_n) + ⟨x_n − y_n, ∇g(x_n) − ∇g(y_n)⟩ ≤ ‖x_n − y_n‖ ‖∇g(x_n) − ∇g(y_n)‖,  ∀n ∈ ℕ.
(2.4)

The function g is bounded on bounded subsets of E and therefore ∇g is also bounded on bounded subsets of E (see, for example, [[23], Proposition 1.1.11] for more details). This, together with (2.3)-(2.4), implies that lim_{n→∞} D_g(x_n, y_n) = 0, which completes the proof. □

The following result was first proved in [37] (see also [24]).

Lemma 2.5 Let E be a reflexive Banach space, let g:ER be a strongly coercive Bregman function and let V be the function defined by

V ( x , x ∗ ) =g(x)− ⟨ x , x ∗ ⟩ + g ∗ ( x ∗ ) ,∀x∈E, x ∗ ∈ E ∗ .

Then the following assertions hold:

  1. (1)

D g (x, ∇ g ∗ ( x ∗ ))=V(x, x ∗ ) for all x∈E and x ∗ ∈ E ∗ .

  2. (2)

V(x, x ∗ )+ ⟨ ∇ g ∗ ( x ∗ )−x, y ∗ ⟩ ≤ V(x, x ∗ + y ∗ ) for all x∈E and x ∗ , y ∗ ∈ E ∗ .
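Lemma 2.5 can be sanity-checked in the Euclidean model case g(x) = ‖x‖²/2, where g∗ = g, ∇g∗ is the identity and V(x, x∗) = ‖x − x∗‖²/2. The following sketch, under these simplifying assumptions of ours, verifies assertions (1) and (2) on random data.

```python
import numpy as np

# Model-case check of Lemma 2.5 with g(x) = ||x||^2 / 2, so that g* = g,
# grad g*(x*) = x*, and V(x, x*) = ||x - x*||^2 / 2. Illustrative only; the
# lemma itself lives in a general reflexive Banach space.

def V(x, xs):
    return 0.5 * np.dot(x, x) - x @ xs + 0.5 * np.dot(xs, xs)

def D(x, y):  # Bregman distance for g = ||.||^2 / 2
    return 0.5 * np.dot(x - y, x - y)

rng = np.random.default_rng(1)
x, xs, ys = rng.normal(size=(3, 4))

# (1) D_g(x, grad g*(x*)) = V(x, x*); here grad g*(x*) = x*.
assert abs(D(x, xs) - V(x, xs)) < 1e-12

# (2) V(x, x*) + <grad g*(x*) - x, y*> <= V(x, x* + y*).
assert V(x, xs) + (xs - x) @ ys <= V(x, xs + ys) + 1e-12
```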

Corollary 2.1 [35]

Let E be a Banach space, let g:E→(−∞,+∞] be a proper, lower semicontinuous and convex function and let p,q∈R with 1<p≤2≤q and p −1 + q −1 =1. Then the following statements are equivalent.

  1. (1)

    There exists c 1 >0 such that g is ρ-convex with ρ(t):= c 1 q t q for all t0.

  2. (2)

There exists c 2 >0 such that for all (x, x ∗ ),(y, y ∗ )∈G(∂g), ‖x−y‖ ≤ ( 2 / c 2 ) q ‖ x ∗ − y ∗ ‖ q − 1 .

3 Strong convergence theorems without computational errors

In this section, we prove strong convergence theorems without computational errors in a reflexive Banach space. We start with the following simple lemma whose proof will be omitted since it can be proved by a similar argument as that in [[44], Lemma 15.5].

Lemma 3.1 Let E be a reflexive Banach space and let g:ER be a convex, continuous, strongly coercive and Gâteaux differentiable function which is bounded on bounded subsets and uniformly convex on bounded subsets of E. Let C be a nonempty, closed and convex subset of E. Let T:CC be a Bregman weak relatively nonexpansive mapping. Then F(T) is closed and convex.

Using ideas in [22], we can prove the following result.

Theorem 3.1 Let E be a reflexive Banach space and let g:ER be a strongly coercive Bregman function which is bounded on bounded subsets and uniformly convex and uniformly smooth on bounded subsets of E. Let C be a nonempty, closed and convex subset of E and let { T j } j N be an infinite family of Bregman weak relatively nonexpansive mappings from C into itself such that F:= j = 1 F( T j ). Suppose in addition that T j 0 = T 0 =I for all jN, where I is the identity mapping on E. Let { x n } n N be a sequence generated by

{ x 0 = x C chosen arbitrarily , C 0 = C , z n = g [ α n , 0 g ( x n ) + j = 1 α n , j g ( T j x n ) ] , y n = g [ β n g ( x n ) + ( 1 β n ) g ( z n ) ] , C n + 1 = { z C n : D g ( z , y n ) D g ( z , x n ) } , x n + 1 = proj C n + 1 g x and n N { 0 } ,
(3.1)

where ∇g is the right-hand derivative of g. Let { α n , j :j,nN{0}} and { β n } n N { 0 } be sequences in [0,1) satisfying the following control conditions:

  1. (1)

    j = 0 α n , j =1, nN{0};

  2. (2)

    There exists iN such that lim inf n α n , i α n , j >0, jN{0};

  3. (3)

    0 β n <1 for all nN{0} and lim sup n β n <1.

Then the sequence { x n } n N defined in (3.1) converges strongly to proj F g x as n.

Proof We divide the proof into several steps.

Step 1. We show that C n is closed and convex for each nN{0}.

It is clear that C 0 =C is closed and convex. Let C m be closed and convex for some mN. For z C m , we see that

D g (z, y m ) D g (z, x m )

is equivalent to

z , g ( x m ) g ( y m ) g( y m )g( x m )+ x m , g ( x m ) y m , g ( y m ) .

An easy argument shows that C m + 1 is closed and convex. Hence C n is closed and convex for each nN{0}.
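The computation above shows that membership in C m + 1 is an inequality affine in z, so C m + 1 is the intersection of C m with a "half-space" and therefore stays closed and convex. A quick numerical check of that equivalence, again with the negative entropy as an illustrative choice of g:

```python
import numpy as np

# Verify numerically that D_g(z, y) <= D_g(z, x) is equivalent to the
# affine-in-z inequality used in Step 1. The choice g = negative entropy is
# ours, purely for illustration.

def g(x):
    return np.sum(x * np.log(x))

def grad_g(x):
    return 1.0 + np.log(x)

def D(x, y):
    return g(x) - g(y) - grad_g(y) @ (x - y)

rng = np.random.default_rng(2)
x, y = rng.uniform(0.1, 2.0, size=(2, 6))
for _ in range(100):
    z = rng.uniform(0.1, 2.0, size=6)
    via_bregman = D(z, y) <= D(z, x)
    # affine-in-z reformulation from Step 1
    via_affine = z @ (grad_g(x) - grad_g(y)) <= (
        g(y) - g(x) + x @ grad_g(x) - y @ grad_g(y)
    )
    assert via_bregman == via_affine
```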

Step 2. We claim that F C n for all nN{0}.

It is obvious that F C 0 =C. Assume now that F C m for some mN. Employing Lemma 2.5, for any wF C m , we obtain

D g ( w , z m ) = D g ( w , g [ α m , 0 g ( x m ) + j = 1 α m , j g ( T j x m ) ] ) = V ( w , α m , 0 g ( x m ) + j = 1 α m , j g ( T j x m ) ) = g ( w ) w , α m , 0 g ( x m ) + j = 1 α m , j g ( T j x m ) + g ( α m , 0 g ( x m ) + j = 1 α m , j g ( T j x m ) ) α m , 0 g ( w ) + j = 1 α m , j g ( w ) + α m , 0 g ( g ( x m ) ) + j = 1 α m , j g ( g ( T j x m ) ) = α m , 0 V ( w , g ( x m ) ) + j = 1 α m , j V ( w , g ( T j x m ) ) = α m , 0 D g ( w , x m ) + j = 1 α m , j D g ( w , T j x m ) α m , 0 D g ( w , x m ) + j = 1 α m , j D g ( w , x m ) = D g ( w , x m ) .

This implies that

D g ( w , y m ) = D g ( w , g [ β m g ( x m ) + ( 1 β m ) g ( z m ) ] ) = V ( w , β m g ( x m ) + ( 1 β m ) g ( z m ) ) β m V ( w , g ( x m ) ) + ( 1 β m ) V ( w , g ( z m ) ) = β m D g ( w , x m ) + ( 1 β m ) D g ( w , z m ) β m D g ( w , x m ) + ( 1 β m ) D g ( w , x m ) = D g ( w , x m ) .
(3.2)

This proves that w C m + 1 . Thus, we have F C n for all nN{0}.

Step 3. We prove that { x n } n N , { y n } n N , { z n } n N and { T j x n :j,nN{0}} are bounded sequences in C.

In view of (1.9), we conclude that

D g ( x n , x ) = D g ( proj C n g x , x ) D g ( w , x ) D g ( w , x n ) D g ( w , x ) , w F C n , n N { 0 } .

This implies that the sequence { D g ( x n , x ) } n N is bounded and hence there exists M>0 such that

D g ( x n ,x)M,nN.

In view of Lemma 2.1(3), we conclude that the sequence { x n } n N is bounded. Since { T j } j N is an infinite family of Bregman weak relatively nonexpansive mappings from C into itself, we have for any qF that

D g (q, T j x n ) D g (q, x n ),j,nN.

This, together with Definition 2.1 and the boundedness of { x n } n N , implies that the sequence { T j x n :j,nN{0}} is bounded.

Step 4. We show that x n u for some uF, where u= proj F g x.

By Step 3, we have that { x n } n N is bounded. By the construction of C n , we conclude that C m C n and x m = proj C m g x C m C n for any positive integer mn. This, together with (1.9), implies that

D g ( x m , x n ) = D g ( x m , proj C n g x ) D g ( x m , x ) D g ( proj C n g x , x ) = D g ( x m , x ) D g ( x n , x ) .
(3.3)

In view of (1.9), we conclude that

D g ( x n , x ) = D g ( proj C n g x , x ) D g ( w , x ) D g ( w , x n ) D g ( w , x ) , w F C n , n N { 0 } .
(3.4)

It follows from (3.4) that the sequence { D g ( x n , x ) } n N is bounded and hence there exists M>0 such that

D g ( x n ,x)M,nN.
(3.5)

In view of (3.3), we conclude that

D g ( x n ,x) D g ( x n ,x)+ D g ( x m , x n ) D g ( x m ,x),mn.

This proves that { D g ( x n , x ) } n N is an increasing sequence in R and hence by (3.5) the limit lim n D g ( x n ,x) exists. Letting m,n in (3.3), we deduce that D g ( x m , x n )0. In view of Lemma 2.4, we get that x m x n 0 as m,n. This means that { x n } n N is a Cauchy sequence. Since E is a Banach space and C is closed and convex, we conclude that there exists uC such that

lim n x n u=0.
(3.6)

Now, we show that uF. In view of (3.3), we obtain

lim n D g ( x n + 1 , x n )=0.
(3.7)

Since x n + 1 C n + 1 , we conclude that

D g ( x n + 1 , y n ) D g ( x n + 1 , x n ).

This, together with (3.7), implies that

lim n D g ( x n + 1 , y n )=0.
(3.8)

Employing Lemma 2.4 and (3.7)-(3.8), we deduce that

lim n x n + 1 x n =0and lim n x n + 1 y n =0.

In view of (3.6), we get

lim n y n u=0.
(3.9)

From (3.6) and (3.9), it follows that

lim n x n y n =0.

Since ∇g is uniformly norm-to-norm continuous on any bounded subset of E, we obtain

lim n g ( x n ) g ( y n ) =0.
(3.10)

In view of (3.1), we have

g( y n )g( x n )=(1 β n ) ( g ( z n ) g ( x n ) ) .
(3.11)

It follows from (3.10)-(3.11) that

lim n g ( z n ) g ( x n ) =0.
(3.12)

Since ∇ g ∗ is uniformly norm-to-norm continuous on any bounded subset of E ∗ , we obtain

lim n z n x n =0.

Applying Lemma 2.4, we derive that

lim n D g ( z n , x n )=0.

It follows from the three point identity (see (2.2)) that

| D g ( w , x n ) D g ( w , z n ) | = | D g ( w , z n ) + D g ( z n , x n ) + w z n , g ( z n ) g ( x n ) D g ( w , z n ) | = | D g ( z n , x n ) w z n , g ( z n ) g ( x n ) | D g ( z n , x n ) + w z n g ( z n ) g ( x n ) 0
(3.13)

as n.

The function g is bounded on bounded subsets of E and thus ∇g is also bounded on bounded subsets of E (see, for example, [[23], Proposition 1.1.11] for more details). This implies that the sequences { ∇g ( x n ) } n N , { ∇g ( y n ) } n N , { ∇g ( z n ) } n N and {∇g( T j x n ):n,jN{0}} are bounded in E ∗ .

In view of Theorem 2.2(3), we know that dom g ∗ = E ∗ and g ∗ is strongly coercive and uniformly convex on bounded subsets of E ∗ . Let s=sup{‖∇g( T j x n )‖:jN{0},nN{0}} and let ρ s : E ∗ →R be the gauge of uniform convexity of the conjugate function g ∗ . Now, we fix iN satisfying condition (2). We prove that for any wF and jN{0}

D g (w, z n ) D g (w, x n ) α n , i α n , j ρ s ( g ( T i x n ) g ( T j x n ) ) .
(3.14)

Let us show (3.14). For any given w∈F and j∈N∪{0}, in view of the definition of the Bregman distance (see (1.7)), (1.6), Lemmas 2.3 and 2.5, we obtain

D g ( w , z n ) = D g ( w , g [ α n , 0 g ( x n ) + j = 1 α n , j g ( T j x n ) ] ) = V ( w , α n , 0 g ( x n ) + j = 1 α n , j g ( T j x n ) ) = g ( w ) w , α n , 0 g ( x n ) + j = 1 α n , j g ( T j x n ) + g ( α n , 0 g ( x n ) + j = 1 α n , j g ( T j x n ) ) α n , 0 g ( w ) + j = 1 α n , j g ( w ) α n , 0 w , g ( x n ) j = 1 α n , j w , g ( T j x n ) + α n , 0 g ( g ( x n ) ) + j = 1 α n , j g ( g ( T j x n ) ) α n , i α n , j ρ s ( g ( T i x n ) g ( T j x n ) ) = α n , 0 V ( w , g ( x n ) ) + j = 1 α n , j V ( w , g ( T j x n ) ) α n , i α n , j ρ s ( g ( T i x n ) g ( T j x n ) ) = α n , 0 D g ( w , x n ) + j = 1 α n , j D g ( w , T j x n ) α n , i α n , j ρ s ( g ( T i x n ) g ( T j x n ) ) α n , 0 D g ( w , x n ) + j = 1 α n , j D g ( w , x n ) α n , i α n , j ρ s ( g ( T i x n ) g ( T j x n ) ) = D g ( w , x n ) α n , i α n , j ρ s ( g ( T i x n ) g ( T j x n ) ) .

In view of (3.13), we obtain

D g (w, x n ) D g (w, z n )0as n.
(3.15)

In view of (3.14) and (3.15), we conclude that

α n , i α n , j ρ s ( g ( T i x n ) g ( T j x n ) ) D g (w, x n ) D g (w, z n )0

as n. From the assumption lim inf n α n , i α n , j >0, jN{0}, we have

lim n ρ s ( g ( T i x n ) g ( T j x n ) ) =0,jN{0}.

Therefore, from the property of ρ s , we deduce that

lim n g ( T i x n ) g ( T j x n ) =0,jN{0}.

Since ∇ g ∗ is uniformly norm-to-norm continuous on bounded subsets of E ∗ , we arrive at

lim n T i x n T j x n =0,jN{0}.
(3.16)

In particular, for j=0, we have

lim n T i x n x n =0.

This, together with (3.16), implies that

lim n T j x n x n =0,jN{0}.
(3.17)

Since { T j } j N is an infinite family of Bregman weak relatively nonexpansive mappings, from (3.6) and (3.17), we conclude that T j u=u for all j∈N∪{0}. Thus, we have u∈F.

Finally, we show that u= proj F g x. From x n = proj C n g x, we conclude that

z x n , g ( x n ) g ( x ) 0,z C n .

Since F C n for each nN, we obtain

z x n , g ( x n ) g ( x ) 0,zF.
(3.18)

Letting n in (3.18), we deduce that

z u , g ( u ) g ( x ) 0,zF.

In view of (1.8), we have u= proj F g x, which completes the proof. □
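To make the scheme concrete, the following sketch runs (3.1) in the Euclidean model case g(x) = ‖x‖²/2 (so ∇g and ∇g∗ are the identity and D g (z,y) = ‖z−y‖²/2), with a single mapping T 1 = metric projection onto the closed unit ball (the remaining mappings of the family are dropped for simplicity), β n = 0 and α n,0 = α n,1 = 1/2. These concrete choices are ours and only illustrate the mechanism, not the general reflexive-space setting. Each C n+1 is then C n intersected with a half-space, and the Bregman projection of x onto it is the metric projection, computed here with Dykstra's algorithm.

```python
import numpy as np

# Euclidean model run of scheme (3.1): g = ||.||^2/2, one mapping
# T = projection onto the unit ball, beta_n = 0, alpha_{n,0} = alpha_{n,1} = 1/2.
# Illustrative sketch only.

def T(u):  # metric projection onto the unit ball; F(T) = closed unit ball
    nrm = np.linalg.norm(u)
    return u if nrm <= 1.0 else u / nrm

def proj_halfspace(z, a, b):  # projection onto {z : <a, z> <= b}
    viol = a @ z - b
    return z if viol <= 0.0 else z - (viol / (a @ a)) * a

def proj_intersection(x0, halfspaces, sweeps=100):
    # Dykstra's algorithm for projecting onto an intersection of half-spaces
    z = x0.copy()
    corr = [np.zeros_like(x0) for _ in halfspaces]
    for _ in range(sweeps):
        for k, (a, b) in enumerate(halfspaces):
            y = proj_halfspace(z + corr[k], a, b)
            corr[k] = z + corr[k] - y
            z = y
    return z

x = np.array([3.0, 4.0])           # x_0 = x, C_0 = R^2
xn, halfspaces = x.copy(), []
for n in range(30):
    yn = 0.5 * xn + 0.5 * T(xn)    # z_n; beta_n = 0 gives y_n = z_n
    # D_g(z, y_n) <= D_g(z, x_n)  <=>  <z, x_n - y_n> <= (||x_n||^2 - ||y_n||^2)/2
    a, b = xn - yn, 0.5 * (xn @ xn - yn @ yn)
    if np.linalg.norm(a) > 1e-12:  # skip degenerate cut at a fixed point
        halfspaces.append((a, b))
    xn = proj_intersection(x, halfspaces)

# Theorem 3.1 predicts x_n -> proj_F(x); here proj_F(x) = x/||x|| = (0.6, 0.8).
assert np.linalg.norm(xn - np.array([0.6, 0.8])) < 1e-2
```

In this configuration every cut has the same normal direction, so the iterates move along the ray through x toward proj F x, matching the conclusion of Theorem 3.1.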

Remark 3.1 Theorem 3.1 improves Theorem 1.2 in the following aspects.

  1. (1)

    For the structure of Banach spaces, we extend the duality mapping to a more general case, that is, a convex, continuous and strongly coercive Bregman function which is bounded on bounded subsets and uniformly convex and uniformly smooth on bounded subsets.

  2. (2)

    For the mappings, we extend the mapping from a relatively nonexpansive mapping to a countable family of Bregman weak relatively nonexpansive mappings. We remove the assumption F ˆ (T)=F(T) on the mapping T and extend the result to a countable family of Bregman weak relatively nonexpansive mappings, where F ˆ (T) is the set of asymptotic fixed points of the mapping T.

  3. (3)

    For the algorithm, we remove the set W n in Theorem 1.2.

Lemma 3.2 Let E be a reflexive Banach space and let g:ER be a strongly coercive Bregman function which is bounded on bounded subsets and uniformly convex and uniformly smooth on bounded subsets of E. Let A be a maximal monotone operator from E to E such that A 1 (0). Let r>0 and Res r A g = ( g + r A ) 1 g be the g-resolvent of A. Then Res r A g is a Bregman weak relatively nonexpansive mapping.

Proof Let { z n } n N E be a sequence such that z n z and lim n z n Res r A g z n =0. Since ∇g is uniformly norm-to-norm continuous on bounded subsets of E, we obtain

1 r ( g ( z n ) g ( Res r A g z n ) ) 0.

It follows from

1 r ( g ( z n ) g ( Res r A g z n ) ) A Res r A g z n

and the monotonicity of A that

w Res r A g z n , y 1 r ( g ( z n ) g ( Res r A g z n ) ) 0

for all wdomA and yAw. Letting n in the above inequality, we have wz,y0 for all wdomA and yAw. Therefore, from the maximality of A, we conclude that z A 1 (0)=F( Res r A g ), that is, z= Res r A g z. Hence Res r A g is Bregman weak relatively nonexpansive, which completes the proof. □
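In the Euclidean model case g(x) = ‖x‖²/2, the g-resolvent (∇g + rA)⁻¹∇g reduces to the classical resolvent (I + rA)⁻¹. The sketch below, an illustration of Lemma 3.2 rather than the general Banach-space statement, uses the maximal monotone operator A(x) = x − b (our choice), whose resolvent has a closed form.

```python
import numpy as np

# Model illustration of Lemma 3.2: for g(x) = ||x||^2 / 2 the g-resolvent is
# (I + rA)^{-1}. With A(x) = x - b (maximal monotone, A^{-1}(0) = {b}) it
# solves z + r*(z - b) = x in closed form. Illustrative assumptions only.

def resolvent(x, r, b):
    return (x + r * b) / (1.0 + r)

b = np.array([1.0, -2.0])
r = 0.7
rng = np.random.default_rng(3)
x, y = rng.normal(size=(2, 2))

# the fixed point of the resolvent is the zero of A
assert np.allclose(resolvent(b, r, b), b)

# nonexpansive (in fact firmly nonexpansive): ||Rx - Ry|| <= ||x - y||
Rx, Ry = resolvent(x, r, b), resolvent(y, r, b)
assert np.linalg.norm(Rx - Ry) <= np.linalg.norm(x - y) + 1e-12
```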

As an application of our main result, we now use Theorem 3.1 to obtain the following strong convergence theorem for maximal monotone operators.

Theorem 3.2 Let E be a reflexive Banach space and let g:ER be a strongly coercive Bregman function which is bounded on bounded subsets and uniformly convex and uniformly smooth on bounded subsets of E. Let A be a maximal monotone operator from E to E such that A 1 (0). Let r n >0 such that lim inf n r n >0 and Res r n A g = ( g + r n A ) 1 g be the g-resolvent of A. Let { x n } n N be a sequence generated by

{ x 0 = x C chosen arbitrarily , C 0 = C , z n = g [ α n , 0 g ( x n ) + j = 1 α n , j g ( Res r j A g x n ) ] , y n = g [ β n g ( x n ) + ( 1 β n ) g ( z n ) ] , C n + 1 = { z C n : D g ( z , y n ) D g ( z , x n ) } , x n + 1 = proj C n + 1 g x and n N { 0 } ,
(3.19)

where ∇g is the right-hand derivative of g. Let { α n , j :j,nN{0}} and { β n } n N { 0 } be sequences in [0,1) satisfying the following control conditions:

  1. (1)

    j = 0 α n , j =1, nN{0};

  2. (2)

    There exists iN such that lim inf n α n , i α n , j >0, jN{0};

  3. (3)

0 β n <1 for all nN{0} and lim sup n β n <1.

Then the sequence { x n } n N defined in (3.19) converges strongly to proj A 1 ( 0 ) g x as n.

Proof Letting T j = Res r j A g , jN{0}, in Theorem 3.1, from (3.1) we obtain (3.19). We need only to show that T j satisfies all the conditions in Theorem 3.1 for all jN{0}. In view of Lemma 3.2, we conclude that T j is a Bregman weak relatively nonexpansive mapping for each jN{0}. Thus, we obtain

D g ( p , Res r j A g v ) D g (p,v),vE,pF ( Res r j A g ) ,jN{0}

and

F ˜ ( Res r j A g ) =F ( Res r j A g ) = A 1 (0),jN{0},

where F ˜ ( Res r j A g ) is the set of all strong asymptotic fixed points of Res r j A g . Therefore, in view of Theorem 3.1, we have the conclusions of Theorem 3.2. This completes the proof. □

4 Strong convergence theorems with computational errors

In this section, we study strong convergence of iterative algorithms to find common fixed points of finitely many Bregman weak relatively nonexpansive mappings in a reflexive Banach space. Our algorithms take into account possible computational errors. We prove the following strong convergence theorem concerning Bregman weak relatively nonexpansive mappings.

Theorem 4.1 Let E be a reflexive Banach space and let g:ER be a strongly coercive Bregman function which is bounded on bounded subsets and uniformly convex and uniformly smooth on bounded subsets of E. Let NN and { T j } j = 1 N be a finite family of Bregman weak relatively nonexpansive mappings from E into int domg such that F:= j = 1 N F( T j ) is a nonempty subset of E. Suppose in addition that T 0 =I, where I is the identity mapping on E. Let { x n } n N be a sequence generated by

{ x 0 = x E chosen arbitrarily , C 0 = E , y n = g [ α n , 0 g ( x n ) + j = 1 N α n , j g ( T j ( x n + e n j ) ) ] , C n + 1 = { z C n : D g ( z , y n ) D g ( z , x n ) + j = 1 N α n , j D g ( x n , x n + e n j ) C n + 1 = + j = 1 N α n , j z x n , g ( x n ) g ( x n + e n j ) } , x n + 1 = proj C n + 1 g x and n N { 0 } ,
(4.1)

where ∇g is the right-hand derivative of g. Let { α n , j :nN{0},j{0,1,2,,N}} be a sequence in (0,1) satisfying the following control conditions:

  1. (1)

    j = 0 N α n , j =1, nN{0};

  2. (2)

    There exists i{1,2,,N} such that lim inf n α n , i α n , j >0, j{0,1,2,,N}.

If, for each j=0,1,2,,N, the sequences of errors { e n j } n N E satisfy lim n ‖ e n j ‖ =0, then the sequence { x n } n N defined in (4.1) converges strongly to proj F g x as n.

Proof We divide the proof into several steps.

Step 1. We show that C n is closed and convex for each nN{0}.

It is clear that C 0 =E is closed and convex. Let C m be closed and convex for some mN. For z C m , we see that

D g ( z , y m ) D g ( z , x m ) + j = 1 N α m , j D g ( x m , x m + e m j ) + j = 1 N α m , j z x m , g ( x m ) g ( x m + e m j )

is equivalent to

z , g ( x m ) g ( y m ) + j = 1 N α m , j x m z , g ( x m ) g ( x m + e m j ) g ( y m ) g ( x m ) + x m , g ( x m ) y m , g ( y m ) + j = 1 N α m , j D g ( x m , x m + e m j ) .

An easy argument shows that C m + 1 is closed and convex. Hence C n is closed and convex for all nN{0}.

Step 2. We claim that F C n for all nN{0}.

It is obvious that F C 0 =E. Assume now that F C m for some mN. Employing Lemma 2.5, for any wF C m , we obtain

D g ( w , y m ) = D g ( w , g [ α m , 0 g ( x m ) + j = 1 N α m , j g ( T j ( x m + e m j ) ) ] ) = V ( w , α m , 0 g ( x m ) + j = 1 N α m , j g ( T j ( x m + e m j ) ) ) = g ( w ) w , α m , 0 g ( x m ) + j = 1 N α m , j g ( T j ( x m + e m j ) ) + g ( α m , 0 g ( x m ) + j = 1 N α m , j g ( T j ( x m + e m j ) ) ) α m , 0 g ( w ) + j = 1 N α m , j g ( w ) + α m , 0 g ( g ( x m ) ) + j = 1 N α m , j g ( g ( T j ( x m + e m j ) ) ) = α m , 0 V ( w , g ( x m ) ) + j = 1 N α m , j V ( w , g ( T j ( x m + e m j ) ) ) = α m , 0 D g ( w , x m ) + j = 1 N α m , j D g ( w , T j ( x m + e m j ) ) α m , 0 D g ( w , x m ) + j = 1 N α m , j D g ( w , x m + e m j ) = α m , 0 D g ( w , x m ) + j = 1 N α m , j D g ( w , x m ) + j = 1 N α m , j D g ( x m , x m + e m j ) + j = 1 N α m , j w x m , g ( x m ) g ( x m + e m j ) = D g ( w , x m ) + j = 1 N α m , j D g ( x m , x m + e m j ) + j = 1 N α m , j w x m , g ( x m ) g ( x m + e m j ) .
(4.2)

This proves that w C m + 1 . Consequently, we see that F C n for any nN{0}.

Step 3. We prove that { x n } n N , { y n } n N and { T j ( x n + e n j ):nN,j{0,1,2,,N}} are bounded sequences in E.

In view of (1.9), we conclude that

D g ( x n , x ) = D g ( proj C n g x , x ) D g ( w , x ) D g ( w , x n ) D g ( w , x ) , w F C n , n N { 0 } .
(4.3)

It follows from (4.3) that the sequence { D g ( x n , x ) } n N is bounded and hence there exists M 0 >0 such that

D g ( x n ,x) M 0 ,nN{0}.
(4.4)

In view of Lemma 2.1(3), we conclude that the sequence { x n } n N and hence { x n + e n j :nN{0},j{0,1,2,,N}} is bounded. Since { T j } j = 1 N is a finite family of Bregman weak relatively nonexpansive mappings from E into int domg, for any qF, we have

D g ( q , T j ( x n + e n j ) ) D g ( q , x n + e n j ) ,nN and j{0,1,2,,N}.
(4.5)

This, together with Definition 2.1 and the boundedness of { x n } n N , implies that { T j ( x n + e n j ):nN{0},j{0,1,2,,N}} is bounded.

Step 4. We show that x n u for some uF, where u= proj F g x.

By Step 3, we deduce that { x n } n N is bounded. By the construction of C n , we conclude that C m C n and x m = proj C m g x C m C n for any positive integer mn. This, together with (1.9), implies that

D g ( x m , x n ) = D g ( x m , proj C n g x ) D g ( x m , x ) D g ( proj C n g x , x ) = D g ( x m , x ) D g ( x n , x ) .
(4.6)

In view of (4.6), we have

D g ( x n ,x) D g ( x n ,x)+ D g ( x m , x n ) D g ( x m ,x),mn.

This proves that { D g ( x n , x ) } n N is an increasing sequence in R and hence by (4.4) the limit lim n D g ( x n ,x) exists. Letting m,n in (4.6), we deduce that D g ( x m , x n )0. In view of Lemma 2.4, we obtain that x m x n 0 as m,n. Thus { x n } n N is a Cauchy sequence. Since E is a Banach space, we conclude that there exists uE such that

lim n x n u=0.
(4.7)

Now, we show that uF. In view of (4.6), we obtain

lim n D g ( x n + 1 , x n )=0.
(4.8)

Since lim n ‖ e n j ‖ =0 for all j{0,1,2,,N}, in view of Lemma 2.4 and (4.8), we obtain that

lim n x n + 1 x n =0and lim n D g ( x n , x n + e n j ) =0,j{0,1,2,,N}.
(4.9)

The function g is bounded on bounded subsets of E and thus ∇g is also bounded on bounded subsets of E (see, for example, [[23], Proposition 1.1.11] for more details). Since x n + 1 C n + 1 , we get

D g ( x n + 1 , y n ) D g ( x n + 1 , x n ) + j = 1 N α n , j D g ( x n , x n + e n j ) + j = 1 N α n , j x n + 1 x n , g ( x n ) g ( x n + e n j ) .

This, together with (4.9), implies that

lim n D g ( x n + 1 , y n )=0.
(4.10)

Employing Lemma 2.4 and (4.9)-(4.10), we deduce that

lim n x n + 1 y n =0.
(4.11)

In view of (4.7) and (4.11), we get

lim n y n u=0.
(4.12)

Thus, { y n } n N is a bounded sequence.

From (4.11) and (4.12), it follows that

lim n x n y n =0.

Since ∇g is uniformly norm-to-norm continuous on any bounded subset of E, we obtain

lim n g ( x n ) g ( y n ) =0.
(4.13)

Applying Lemma 2.4, we deduce that

lim n D g ( y n , x n )=0.
(4.14)

It follows from the three point identity (see (2.2)) that

| D g ( w , x n ) D g ( w , y n ) | = | D g ( w , y n ) + D g ( y n , x n ) + w y n , g ( y n ) g ( x n ) D g ( w , y n ) | = | D g ( y n , x n ) w y n , g ( y n ) g ( x n ) | D g ( y n , x n ) + w y n g ( y n ) g ( x n ) 0
(4.15)

as n.

The function g is bounded on bounded subsets of E and thus ∇g is also bounded on bounded subsets of E (see, for example, [[23], Proposition 1.1.11] for more details). This, together with Step 3, implies that the sequences { ∇g ( x n ) } n N , { ∇g ( y n ) } n N and {∇g( T j ( x n + e n j )):nN{0},j{0,1,2,,N}} are bounded in E ∗ .

In view of Theorem 2.2(3), we know that dom g ∗ = E ∗ and g ∗ is strongly coercive and uniformly convex on bounded subsets of E ∗ . Let s=sup{‖∇g( T j ( x n + e n j ))‖:j{0,1,2,,N},nN{0}} and let ρ s : E ∗ →R be the gauge of uniform convexity of the conjugate function g ∗ . Suppose that i{1,2,,N} satisfies condition (2). We prove that for any wF and j{0,1,2,,N},

D g ( w , y n ) D g ( w , x n ) + j = 1 N α n , j D g ( x n , x n + e n j ) + j = 1 N α n , j w x n , g ( x n ) g ( x n + e n j ) α n , i α n , j ρ s ( g ( T i ( x n + e n i ) ) g ( T j ( x n + e n j ) ) ) .
(4.16)

Let us show (4.16). For any given wF and j{0,1,2,,N}, in view of the definition of the Bregman distance (see (1.7)), (1.6), Lemmas 2.3 and 2.5, we obtain

D g ( w , y n ) = D g ( w , g [ α n , 0 g ( x n ) + j = 1 N α n , j g ( T j ( x n + e n j ) ) ] ) = V ( w , α n , 0 g ( x n ) + j = 1 N α n , j g ( T j ( x n + e n j ) ) ) = g ( w ) w , α n , 0 g ( x n ) + j = 1 N α n , j g ( T j ( x n + e n j ) ) + g ( α n , 0 g ( x n ) + j = 1 N α n , j g ( T j ( x n + e n j ) ) ) α n , 0 g ( w ) + j = 1 N α n , j g ( w ) α n , 0 w , g ( x n ) j = 1 N α n , j w , g ( T j ( x n + e n j ) ) + α n , 0 g ( g ( x n ) ) + j = 1 N α n , j g ( g ( T j ( x n + e n j ) ) ) α n , i α n , j ρ s ( g ( T i ( x n + e n i ) ) g ( T j ( x n + e n j ) ) ) = α n , 0 V ( w , g ( x n ) ) + j = 1 N α n , j V ( w , g ( T j ( x n + e n j ) ) ) α n , i α n , j ρ s ( g ( T i ( x n + e n i ) ) g ( T j ( x n + e n j ) ) ) = α n , 0 D g ( w , x n ) + j = 1 N α n , j D g ( w , T j ( x n + e n j ) ) α n , i α n , j ρ s ( g ( T i ( x n + e n i ) ) g ( T j ( x n + e n j ) ) ) α n , 0 D g ( w , x n ) + j = 1 N α n , j D g ( w , x n + e n j ) α n , i α n , j ρ s ( g ( T i ( x n + e n i ) ) g ( T j ( x n + e n j ) ) ) = α n , 0 D g ( w , x n ) + j = 1 N α n , j D g ( w , x n ) + j = 1 N α n , j D g ( x n , x n + e n j ) + j = 1 N α n , j w x n , g ( x n ) g ( x n + e n j ) α n , i α n , j ρ s ( g ( T i ( x n + e n i ) ) g ( T j ( x n + e n j ) ) ) = D g ( w , x n ) + j = 1 N α n , j D g ( x n , x n + e n j ) + j = 1 N α n , j w x n , g ( x n ) g ( x n + e n j ) α n , i α n , j ρ s ( g ( T i ( x n + e n i ) ) g ( T j ( x n + e n j ) ) ) .

Since lim n x n ( x n + e n j ) =0 for all j{0,1,2,,N} and ∇g is uniformly norm-to-norm continuous on any bounded subset of E, we obtain

lim n g ( x n ) g ( x n + e n j ) =0,j{0,1,2,,N}.

This, together with (4.15), implies that

D g ( w , x n ) D g ( w , y n ) + j = 1 N α n , j D g ( x n , x n + e n j ) + j = 1 N α n , j w x n , g ( x n ) g ( x n + e n j ) 0 as  n .
(4.17)

In view of (4.16) and (4.17), we conclude that

α n , i α n , j ρ s ( g ( T i ( x n + e n i ) ) g ( T j ( x n + e n j ) ) ) D g ( w , x n ) D g ( w , y n ) + j = 1 N α n , j D g ( x n , x n + e n j ) + j = 1 N α n , j w x n , g ( x n ) g ( x n + e n j ) 0

as n. From the assumption lim inf n α n , i α n , j >0, j{0,1,2,,N}, we have

lim n ρ s ( g ( T i ( x n + e n i ) ) g ( T j ( x n + e n j ) ) ) =0,j{0,1,2,,N}.

Therefore, from the property of ρ s , we deduce that

lim n g ( T i ( x n + e n i ) ) g ( T j ( x n + e n j ) ) =0,j{0,1,2,,N}.

Since ∇ g ∗ is uniformly norm-to-norm continuous on bounded subsets of E ∗ , we arrive at

lim n T i ( x n + e n i ) T j ( x n + e n j ) =0,j{0,1,2,,N}.
(4.18)

In particular, for j=0, we have

lim n T i ( x n + e n i ) x n = lim n T i ( x n + e n i ) ( x n + e n 0 ) =0.

This, together with (4.7) and (4.18), implies that

lim n T j ( x n + e n j ) x n + e n j =0,j{0,1,2,,N}.
(4.19)

From (4.7), we obtain

lim n x n + e n j u =0,j{0,1,2,,N}.
(4.20)

In view of (4.19) and (4.20), we conclude that T j u=u for all j{0,1,2,,N}. Thus, we have u∈F.

Finally, we show that u= proj F g x. From x n = proj C n g x, we conclude that

z x n , g ( x n ) g ( x ) 0,z C n .

Since F C n for each nN, we obtain

z x n , g ( x n ) g ( x ) 0,zF.
(4.21)

Letting n in (4.21), we deduce that

z u , g ( u ) g ( x ) 0,zF.

In view of (1.8), we have u= proj F g x, which completes the proof. □
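The following sketch runs a Euclidean model instance of the error-tolerant scheme (4.1): g(x) = ‖x‖²/2, N = 1, T 1 = metric projection onto the closed unit ball, α n,0 = α n,1 = 1/2, and a concrete decaying error sequence e n of our choosing. It also checks numerically that the enlarged half-spaces never cut off F, as Step 2 of the proof guarantees. All concrete choices here are illustrative assumptions, not the general construction.

```python
import numpy as np

# Euclidean model run of scheme (4.1) with computational errors e_n -> 0.
# g = ||.||^2/2, N = 1, T = projection onto the unit ball, alpha = 1/2.

def T(u):
    nrm = np.linalg.norm(u)
    return u if nrm <= 1.0 else u / nrm

def proj_halfspace(z, a, b):
    viol = a @ z - b
    return z if viol <= 0.0 else z - (viol / (a @ a)) * a

def proj_intersection(x0, halfspaces, sweeps=100):
    # Dykstra's algorithm for the projection onto an intersection of half-spaces
    z = x0.copy()
    corr = [np.zeros_like(x0) for _ in halfspaces]
    for _ in range(sweeps):
        for k, (a, b) in enumerate(halfspaces):
            y = proj_halfspace(z + corr[k], a, b)
            corr[k] = z + corr[k] - y
            z = y
    return z

x = np.array([3.0, 4.0])
alpha = 0.5
xn, halfspaces = x.copy(), []
for n in range(50):
    e = (0.05 / (n + 1) ** 2) * np.array([1.0, 0.0])  # error e_n^1, our choice
    yn = (1 - alpha) * xn + alpha * T(xn + e)
    # For g = ||.||^2/2 the membership test in (4.1) is affine in z:
    # <z, x_n - y_n + alpha*e> <= (||x_n||^2 - ||y_n||^2)/2 + alpha*||e||^2/2 + alpha*<x_n, e>
    a = xn - yn + alpha * e
    b = 0.5 * (xn @ xn - yn @ yn) + 0.5 * alpha * (e @ e) + alpha * (xn @ e)
    halfspaces.append((a, b))
    xn = proj_intersection(x, halfspaces)

u = np.array([0.6, 0.8])  # proj_F(x) for F = closed unit ball
for a, b in halfspaces:
    assert a @ u <= b + 1e-9  # F is never cut off (Step 2 of the proof)
assert np.linalg.norm(xn - u) < 0.05
```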

Remark 4.1 In Theorem 4.1, we present a strong convergence theorem for Bregman weak relatively nonexpansive mappings with a new algorithm and new control conditions. This is complementary to Reich and Sabach [[46], Theorem 2]. It also extends and improves Theorems 1.3, 1.4 and 1.5.

5 Equilibrium problems

Let C be a nonempty, closed and convex subset of a reflexive Banach space E. Let f:C×CR be a bifunction. Consider the following equilibrium problem: Find x ¯ C such that

f( x ¯ ,y)0,yC.
(5.1)

In order to solve the equilibrium problem, let us assume that f:C×CR satisfies the following conditions [53]:

  1. (A1)

    f(x,x)=0 for all xC;

  2. (A2)

    f is monotone, i.e., f(x,y)+f(y,x)0 for all x,yC;

  3. (A3)

    f is upper hemi-continuous, i.e., for each x,y,zC,

    lim sup t 0 f ( t z + ( 1 t ) x , y ) f(x,y);
  4. (A4)

    for each xC, the function yf(x,y) is convex and lower semicontinuous.

The set of solutions of problem (5.1) is denoted by EP(f).

Let C be a nonempty, closed and convex subset of E and let g:ER be a Legendre function. For r>0, we define a mapping T r :EC as follows:

T r (x)= { z C : f ( z , y ) + 1 r y z , g ( z ) g ( x ) 0  for all  y C }
(5.2)

for all xE.

The following two lemmas were proved in [46].

Lemma 5.1 Let E be a reflexive Banach space and let g:ER be a Legendre function. Let C be a nonempty, closed and convex subset of E and let f:C×CR be a bifunction satisfying (A1)-(A4). For r>0, let T r :EC be the mapping defined by (5.2). Then dom( T r )=E.

Lemma 5.2 Let E be a reflexive Banach space and let g:ER be a convex, continuous and strongly coercive function which is bounded on bounded subsets and uniformly convex on bounded subsets of E. Let C be a nonempty, closed and convex subset of E and let f:C×CR be a bifunction satisfying (A1)-(A4). For r>0, let T r :EC be the mapping defined by (5.2). Then the following statements hold:

  1. (1)

    T r is single-valued;

  2. (2)

    T r is a Bregman firmly nonexpansive mapping [46], i.e., for all x,yE,

    T r x T r y , g ( T r x ) g ( T r y ) T r x T r y , g ( x ) g ( y ) ;
  3. (3)

    F( T r )=EP(f);

  4. (4)

    EP(f) is closed and convex;

  5. (5)

    T r is a Bregman quasi-nonexpansive mapping;

  6. (6)

    D g (q, T r x)+ D g ( T r x,x) D g (q,x), qF( T r ).
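For intuition, the resolvent (5.2) can be computed in closed form in a one-dimensional model case: take g(x) = x²/2 (so ∇g is the identity) and the monotone bifunction f(z, y) = (z − c)(y − z), whose equilibrium set is EP(f) = {c}. These concrete choices are ours, for illustration only.

```python
import numpy as np

# One-dimensional illustration of the resolvent (5.2), assuming g(x) = x**2/2
# and f(z, y) = (z - c)*(y - z) with EP(f) = {c}. Solving
# f(z, y) + (1/r)*(y - z)*(z - x) >= 0 for all y forces
# (z - c) + (z - x)/r = 0, which gives the closed form below.

def f(z, y, c=0.3):
    return (z - c) * (y - z)

def T_r(x, r, c=0.3):
    return (x + r * c) / (1.0 + r)

c, r, x = 0.3, 2.0, 1.7
z = T_r(x, r)

# (A2) monotonicity of f: f(s, t) + f(t, s) = -(t - s)^2 <= 0
for s, t in [(0.0, 1.0), (-2.0, 0.5), (1.3, 1.3)]:
    assert f(s, t) + f(t, s) <= 1e-12

# the defining inequality of T_r(x) holds for sampled test points y
for y in np.linspace(-5.0, 5.0, 101):
    assert f(z, y) + (y - z) * (z - x) / r >= -1e-12

# F(T_r) = EP(f) = {c}, in line with Lemma 5.2(3)
assert abs(T_r(c, r) - c) < 1e-12
```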

Theorem 5.1 Let E be a reflexive Banach space and let g:ER be a strongly coercive Bregman function which is bounded on bounded subsets and uniformly convex and uniformly smooth on bounded subsets of E. Let f be a bifunction from E×E to R satisfying (A1)-(A4). Let NN and let { T j } j = 1 N be a finite family of Bregman weak relatively nonexpansive mappings from E into int domg such that F:= j = 1 N F( T j ) is a nonempty subset of E. Suppose in addition that T 0 =I, where I is the identity mapping on E. Suppose that FEP(f) is a nonempty subset of E, where EP(f) is the set of solutions to the equilibrium problem (5.1). Let { r n } n N be a sequence in (0,) with lim inf n r n >0. Let { x n } n N be a sequence generated by

{ x 0 = x E chosen arbitrarily , C 0 = E , y n = g [ α n , 0 g ( x n ) + j = 1 N α n , j g ( T j ( x n + e n j ) ) ] , u n E such that f ( u n , y ) + 1 r n y u n , g ( u n ) g ( y n ) 0 , y E , C n + 1 = { z C n : D g ( z , u n ) D g ( z , x n ) + j = 1 N α n , j D g ( x n , x n + e n j ) C n + 1 = + j = 1 N α n , j z x n , g ( x n ) g ( x n + e n j ) } , x n + 1 = proj C n + 1 g x and n N { 0 } ,
(5.3)

where ∇g is the right-hand derivative of g. Let { α n , j :nN{0},j{0,1,2,,N}} be a sequence in (0,1) satisfying the following control conditions:

  1. (1)

    j = 0 N α n , j =1, nN{0};

  2. (2)

    There exists i{1,2,,N} such that lim inf n α n , i α n , j >0, j{0,1,2,,N}.

If, for each j=0,1,2,,N, the sequences of errors { e n j } n N E satisfy lim n ‖ e n j ‖ =0, then the sequence { x n } n N defined in (5.3) converges strongly to proj F EP ( f ) g x as n.

Proof By the same argument, as in the proof of Theorem 4.1, we can prove the following:

  1. (i)

    lim n x n u n =0 and lim n g( x n )g( u n )=0.

  2. (ii)

    For each wF, lim n | D g (w, x n ) D g (w, u n )|=0.

  3. (iii)

    There exists uF such that x n u as n.

Since u n = T r n y n , for any wF, we have

D g ( w , u n ) = D g ( w , T r n y n ) D g ( w , y n ) D g ( w , x n ) + j = 1 N α n , j D g ( x n , x n + e n j ) + j = 1 N α n , j w x n , g ( x n ) g ( x n + e n j ) .
(5.4)

Next, we show that uEP(f). From Lemma 5.2(6), (5.4) and u n = T r n y n , we conclude that

D g ( u n , y n ) = D g ( T r n y n , y n ) D g ( w , y n ) D g ( w , T r n y n ) D g ( w , x n ) D g ( w , u n ) + j = 1 N α n , j D g ( x n , x n + e n j ) + j = 1 N α n , j w x n , g ( x n ) g ( x n + e n j ) 0
(5.5)

as n. In view of (5.5) and Lemma 2.4, we obtain

lim n u n y n =0.
(5.6)

Since ∇g is uniformly norm-to-norm continuous on any bounded subset of E, it follows from (5.6) that

lim n g ( u n ) g ( y n ) =0.

By the assumption lim inf n r n >0, we have

lim n g ( u n ) g ( y n ) r n =0.
(5.7)

In view of u n = T r n y n , we obtain

f( u n ,y)+ 1 r n y u n , g ( u n ) g ( y n ) 0,yE.

From condition (A2), we deduce that

y u n g ( u n ) g ( y n ) r n 1 r n y u n , g ( u n ) g ( y n ) f ( u n , y ) f ( y , u n ) 0 , y E .

Letting n in the above inequality, we have from (5.7) and (A4) that

f(y,u)0,yE.

For t(0,1] and yE, let y t =ty+(1t)u. Then we have y t E, which yields that f( y t ,u)0. From (A1), we also have

0=f( y t , y t )tf( y t ,y)+(1t)f( y t ,u)tf( y t ,y).

Dividing by t, we get

f( y t ,y)0,yE.

Letting t0, from the condition (A3), we obtain that

f(u,y)0,yE.

This means that u∈EP(f). Therefore, u∈F∩EP(f). □

Theorem 5.2 Let E be a 2-uniformly convex Banach space and let g:ER be a strongly coercive Bregman function which is bounded on bounded subsets and uniformly convex and uniformly smooth on bounded subsets of E. Assume that there exists c 1 >0 such that g is ρ-convex with ρ(t):= c 1 2 t 2 for all t0. Let C be a nonempty, closed and convex subset of E and let f be a bifunction from C×C to R satisfying (A1)-(A4). Assume that { T j } j N is an infinite family of Bregman weak relatively nonexpansive mappings from C into itself and that A:C E ∗ is a γ-inverse strongly monotone mapping for some γ>0. Suppose that F:= j = 1 F( T j ) A 1 (0)EP(f) is a nonempty subset of C, where EP(f) is the set of solutions to the equilibrium problem (5.1). Suppose in addition that T 0 =I, where I is the identity mapping on E. Let { x n } n N be a sequence generated by

$$\begin{cases}
x_0 = x \in C \text{ chosen arbitrarily},\\
C_0 = C,\\
y_n = \operatorname{proj}^g_C\bigl(\nabla g^*[\nabla g(x_n)-\beta A x_n]\bigr),\\
z_n = \nabla g^*\bigl[\alpha_{n,0}\nabla g(x_n)+\sum_{j=1}^{\infty}\alpha_{n,j}\nabla g(T_j y_n)\bigr],\\
u_n \in C \text{ such that } f(u_n,y)+\frac{1}{r_n}\langle y-u_n,\ \nabla g(u_n)-\nabla g(z_n)\rangle\ge 0,\ \forall y\in C,\\
C_{n+1}=\{z\in C_n : D_g(z,u_n)\le D_g(z,x_n)\},\\
x_{n+1}=\operatorname{proj}^g_{C_{n+1}} x,\qquad n\in\mathbb{N}\cup\{0\},
\end{cases}$$
(5.8)

where $\nabla g$ is the right-hand derivative of $g$. Let $\beta$ be a constant such that $0<\beta<\frac{c_2^2\gamma}{2}$, where $c_2$ is the 2-uniformly convex constant of $E$ satisfying Corollary 2.1(2). Let $\{\alpha_{n,j}:n\in\mathbb{N}\cup\{0\},\ j\in\mathbb{N}\cup\{0\}\}$ be a sequence in $(0,1)$ satisfying the following control conditions:

(1) $\sum_{j=0}^{\infty}\alpha_{n,j}=1$ for all $n\in\mathbb{N}\cup\{0\}$;

(2) $\liminf_{n\to\infty}\alpha_{n,0}\alpha_{n,j}>0$ for all $j\in\mathbb{N}$.

Then the sequence $\{x_n\}_{n\in\mathbb{N}}$ defined in (5.8) converges strongly to $\operatorname{proj}^g_F x$ as $n\to\infty$.
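Conditions (1) and (2) admit simple concrete weight choices; a minimal numerical sketch checking one such choice (the particular weights are our own illustrative assumption, not mandated by the theorem):

```python
# Illustrative weight family for Theorem 5.2:
#   alpha_{n,0} = 1/2,  alpha_{n,j} = 2^{-(j+1)} for j >= 1,
# independent of n, so sum_{j>=0} alpha_{n,j} = 1 (condition (1)) and
# liminf_n alpha_{n,0} * alpha_{n,j} = 2^{-(j+2)} > 0 for each j (condition (2)).
def alpha(n, j):
    return 0.5 if j == 0 else 2.0 ** -(j + 1)

N = 60  # truncation level for the numerical check
partial = sum(alpha(0, j) for j in range(N + 1))
assert abs(partial - 1.0) < 1e-12                 # condition (1), up to the geometric tail
assert all(alpha(n, 0) * alpha(n, 3) == 2.0 ** -5 for n in range(100))  # condition (2)
```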

Proof We divide the proof into several steps.

Step 1. Following the method of the proof of Step 1 of Theorem 3.1, we obtain that $C_n$ is closed and convex for each $n\in\mathbb{N}\cup\{0\}$.

Step 2. We claim that $F\subset C_n$ for all $n\in\mathbb{N}\cup\{0\}$.

It is obvious that $F\subset C_0=C$. Assume now that $F\subset C_m$ for some $m\in\mathbb{N}$. It follows from Lemma 2.5 that, for each $w\in F\subset C_m$, we have

$$\begin{aligned}
D_g(w,y_m) &= D_g\bigl(w,\operatorname{proj}^g_C\bigl(\nabla g^*[\nabla g(x_m)-\beta A x_m]\bigr)\bigr)\\
&\le D_g\bigl(w,\nabla g^*[\nabla g(x_m)-\beta A x_m]\bigr)\\
&= V\bigl(w,\nabla g(x_m)-\beta A x_m\bigr)\\
&\le V\bigl(w,\bigl(\nabla g(x_m)-\beta A x_m\bigr)+\beta A x_m\bigr)-\bigl\langle \nabla g^*\bigl(\nabla g(x_m)-\beta A x_m\bigr)-w,\ \beta A x_m\bigr\rangle\\
&= V\bigl(w,\nabla g(x_m)\bigr)-\beta\bigl\langle \nabla g^*\bigl(\nabla g(x_m)-\beta A x_m\bigr)-w,\ A x_m\bigr\rangle\\
&= D_g(w,x_m)-\beta\langle x_m-w,\ A x_m\rangle-\beta\bigl\langle \nabla g^*\bigl(\nabla g(x_m)-\beta A x_m\bigr)-x_m,\ A x_m\bigr\rangle\\
&\le D_g(w,x_m)-\beta\gamma\|A x_m\|^2+\beta\bigl\|\nabla g^*\bigl(\nabla g(x_m)-\beta A x_m\bigr)-\nabla g^*\bigl(\nabla g(x_m)\bigr)\bigr\|\,\|A x_m\|\\
&\le D_g(w,x_m)-\beta\gamma\|A x_m\|^2+\frac{4\beta^2}{c_2^2}\|A x_m\|^2\\
&= D_g(w,x_m)+\beta\Bigl(\frac{4\beta}{c_2^2}-\gamma\Bigr)\|A x_m\|^2.
\end{aligned}$$
(5.9)

This, together with $\frac{4\beta}{c_2^2}-\gamma<0$, implies that

$$D_g(w,y_m)\le D_g(w,x_m).$$

Since each $T_j$ is Bregman weak relatively nonexpansive, for each $j\in\mathbb{N}$ we obtain

$$\begin{aligned}
D_g(w,u_m) &= D_g(w,T_{r_m}z_m)\le D_g(w,z_m)\\
&= D_g\Bigl(w,\nabla g^*\Bigl[\alpha_{m,0}\nabla g(x_m)+\sum_{j=1}^{\infty}\alpha_{m,j}\nabla g(T_j y_m)\Bigr]\Bigr)\\
&= V\Bigl(w,\alpha_{m,0}\nabla g(x_m)+\sum_{j=1}^{\infty}\alpha_{m,j}\nabla g(T_j y_m)\Bigr)\\
&= g(w)-\Bigl\langle w,\ \alpha_{m,0}\nabla g(x_m)+\sum_{j=1}^{\infty}\alpha_{m,j}\nabla g(T_j y_m)\Bigr\rangle+g^*\Bigl(\alpha_{m,0}\nabla g(x_m)+\sum_{j=1}^{\infty}\alpha_{m,j}\nabla g(T_j y_m)\Bigr)\\
&\le \alpha_{m,0}g(w)+\sum_{j=1}^{\infty}\alpha_{m,j}g(w)-\alpha_{m,0}\bigl\langle w,\nabla g(x_m)\bigr\rangle-\sum_{j=1}^{\infty}\alpha_{m,j}\bigl\langle w,\nabla g(T_j y_m)\bigr\rangle+\alpha_{m,0}g^*\bigl(\nabla g(x_m)\bigr)+\sum_{j=1}^{\infty}\alpha_{m,j}g^*\bigl(\nabla g(T_j y_m)\bigr)\\
&= \alpha_{m,0}V\bigl(w,\nabla g(x_m)\bigr)+\sum_{j=1}^{\infty}\alpha_{m,j}V\bigl(w,\nabla g(T_j y_m)\bigr)\\
&= \alpha_{m,0}D_g(w,x_m)+\sum_{j=1}^{\infty}\alpha_{m,j}D_g(w,T_j y_m)\\
&\le \alpha_{m,0}D_g(w,x_m)+\sum_{j=1}^{\infty}\alpha_{m,j}D_g(w,y_m)\\
&\le D_g(w,x_m).
\end{aligned}$$

This proves that $w\in C_{m+1}$. Consequently, we see that $F\subset C_n$ for any $n\in\mathbb{N}\cup\{0\}$.

Step 3. Arguing as in Step 3 of the proof of Theorem 3.1, we can prove that the sequences $\{x_n\}_{n\in\mathbb{N}}$, $\{y_n\}_{n\in\mathbb{N}}$, $\{z_n\}_{n\in\mathbb{N}}$, $\{u_n\}_{n\in\mathbb{N}}$ and $\{T_j y_n : j\in\mathbb{N}\cup\{0\},\ n\in\mathbb{N}\cup\{0\}\}$ are bounded.

Step 4. We show that $x_n\to u$ for some $u\in F$, where $u=\operatorname{proj}^g_F x$.

An argument similar to that in Step 4 of the proof of Theorem 3.1 shows that there exists $u\in C$ such that

$$\lim_{n\to\infty}\|x_n-u\|=0\quad\text{and}\quad\lim_{n\to\infty}\|u_n-x_n\|=0.$$
(5.10)

In view of Lemma 2.4, we deduce that

$$\lim_{n\to\infty}D_g(u_n,x_n)=0.$$

Since $\nabla g$ is uniformly norm-to-norm continuous on any bounded subset of $E$, we obtain

$$\lim_{n\to\infty}\bigl\|\nabla g(u_n)-\nabla g(x_n)\bigr\|=0.$$

It follows from the three-point identity (see (2.2)) that

$$\begin{aligned}
\bigl|D_g(w,x_n)-D_g(w,u_n)\bigr| &= \bigl|D_g(w,u_n)+D_g(u_n,x_n)+\bigl\langle w-u_n,\ \nabla g(u_n)-\nabla g(x_n)\bigr\rangle-D_g(w,u_n)\bigr|\\
&= \bigl|D_g(u_n,x_n)+\bigl\langle w-u_n,\ \nabla g(u_n)-\nabla g(x_n)\bigr\rangle\bigr|\\
&\le D_g(u_n,x_n)+\|w-u_n\|\,\bigl\|\nabla g(u_n)-\nabla g(x_n)\bigr\|\ \to\ 0
\end{aligned}$$
(5.11)

as $n\to\infty$. The function $g$ is bounded on bounded subsets of $E$, and thus $\nabla g$ is also bounded on bounded subsets of $E$ (see, for example, [[23], Proposition 1.1.11] for more details). This, together with Step 3, implies that the sequences $\{\nabla g(x_n)\}_{n\in\mathbb{N}}$, $\{\nabla g(y_n)\}_{n\in\mathbb{N}}$, $\{\nabla g(z_n)\}_{n\in\mathbb{N}}$ and $\{\nabla g(T_j y_n):j\in\mathbb{N}\cup\{0\},\ n\in\mathbb{N}\cup\{0\}\}$ are bounded in $E^*$.

In view of Theorem 2.2(3), we know that $\operatorname{dom}g^*=E^*$ and that $g^*$ is strongly coercive and uniformly convex on bounded subsets. Let $s=\sup\{\|\nabla g(x_n)\|,\ \|\nabla g(T_j y_n)\| : j\in\mathbb{N}\cup\{0\},\ n\in\mathbb{N}\cup\{0\}\}$ and let $\rho^*_s:E^*\to\mathbb{R}$ be the gauge of uniform convexity of the conjugate function $g^*$. For any given $w\in F$ and $j\in\mathbb{N}$, in view of the definition of the Bregman distance (see (1.7)), (1.6), Lemmas 2.3 and 2.5, we obtain

$$\begin{aligned}
D_g(w,u_n) &= D_g(w,T_{r_n}z_n)\le D_g(w,z_n)\\
&= D_g\Bigl(w,\nabla g^*\Bigl[\alpha_{n,0}\nabla g(x_n)+\sum_{j=1}^{\infty}\alpha_{n,j}\nabla g(T_j y_n)\Bigr]\Bigr)\\
&= V\Bigl(w,\alpha_{n,0}\nabla g(x_n)+\sum_{j=1}^{\infty}\alpha_{n,j}\nabla g(T_j y_n)\Bigr)\\
&= g(w)-\Bigl\langle w,\ \alpha_{n,0}\nabla g(x_n)+\sum_{j=1}^{\infty}\alpha_{n,j}\nabla g(T_j y_n)\Bigr\rangle+g^*\Bigl(\alpha_{n,0}\nabla g(x_n)+\sum_{j=1}^{\infty}\alpha_{n,j}\nabla g(T_j y_n)\Bigr)\\
&\le \alpha_{n,0}g(w)+\sum_{j=1}^{\infty}\alpha_{n,j}g(w)-\alpha_{n,0}\bigl\langle w,\nabla g(x_n)\bigr\rangle-\sum_{j=1}^{\infty}\alpha_{n,j}\bigl\langle w,\nabla g(T_j y_n)\bigr\rangle+\alpha_{n,0}g^*\bigl(\nabla g(x_n)\bigr)+\sum_{j=1}^{\infty}\alpha_{n,j}g^*\bigl(\nabla g(T_j y_n)\bigr)-\alpha_{n,0}\alpha_{n,j}\rho^*_s\bigl(\bigl\|\nabla g(x_n)-\nabla g(T_j y_n)\bigr\|\bigr)\\
&= \alpha_{n,0}V\bigl(w,\nabla g(x_n)\bigr)+\sum_{j=1}^{\infty}\alpha_{n,j}V\bigl(w,\nabla g(T_j y_n)\bigr)-\alpha_{n,0}\alpha_{n,j}\rho^*_s\bigl(\bigl\|\nabla g(x_n)-\nabla g(T_j y_n)\bigr\|\bigr)\\
&= \alpha_{n,0}D_g(w,x_n)+\sum_{j=1}^{\infty}\alpha_{n,j}D_g(w,T_j y_n)-\alpha_{n,0}\alpha_{n,j}\rho^*_s\bigl(\bigl\|\nabla g(x_n)-\nabla g(T_j y_n)\bigr\|\bigr)\\
&\le \alpha_{n,0}D_g(w,x_n)+\sum_{j=1}^{\infty}\alpha_{n,j}D_g(w,y_n)-\alpha_{n,0}\alpha_{n,j}\rho^*_s\bigl(\bigl\|\nabla g(x_n)-\nabla g(T_j y_n)\bigr\|\bigr)\\
&\le D_g(w,x_n)-\alpha_{n,0}\alpha_{n,j}\rho^*_s\bigl(\bigl\|\nabla g(x_n)-\nabla g(T_j y_n)\bigr\|\bigr).
\end{aligned}$$
(5.12)

In view of (5.10), (5.11) and (5.12), we conclude that

$$\alpha_{n,0}\alpha_{n,j}\rho^*_s\bigl(\bigl\|\nabla g(x_n)-\nabla g(T_j y_n)\bigr\|\bigr)\le D_g(w,x_n)-D_g(w,u_n)\to 0$$

as $n\to\infty$. From the assumption $\liminf_{n\to\infty}\alpha_{n,0}\alpha_{n,j}>0$ for all $j\in\mathbb{N}$, we have

$$\lim_{n\to\infty}\rho^*_s\bigl(\bigl\|\nabla g(x_n)-\nabla g(T_j y_n)\bigr\|\bigr)=0,\quad\forall j\in\mathbb{N}.$$

Therefore, from the property of $\rho^*_s$, we deduce that

$$\lim_{n\to\infty}\bigl\|\nabla g(x_n)-\nabla g(T_j y_n)\bigr\|=0,\quad\forall j\in\mathbb{N}.$$

Since $\nabla g^*$ is uniformly norm-to-norm continuous on bounded subsets of $E^*$, we arrive at

$$\lim_{n\to\infty}\|x_n-T_j y_n\|=0,\quad\forall j\in\mathbb{N}.$$
(5.13)

Using inequalities (5.9) and (5.12), we obtain

$$\begin{aligned}
D_g(w,u_n) &\le \alpha_{n,0}D_g(w,x_n)+\sum_{j=1}^{\infty}\alpha_{n,j}D_g(w,y_n)\\
&\le \alpha_{n,0}D_g(w,x_n)+\sum_{j=1}^{\infty}\alpha_{n,j}\Bigl[D_g(w,x_n)+\beta\Bigl(\frac{4\beta}{c_2^2}-\gamma\Bigr)\|A x_n\|^2\Bigr]\\
&= D_g(w,x_n)+\beta\sum_{j=1}^{\infty}\alpha_{n,j}\Bigl(\frac{4\beta}{c_2^2}-\gamma\Bigr)\|A x_n\|^2.
\end{aligned}$$
(5.14)

It follows from (5.14) that

$$\beta\sum_{j=1}^{\infty}\alpha_{n,j}\Bigl(\gamma-\frac{4\beta}{c_2^2}\Bigr)\|A x_n\|^2\le D_g(w,x_n)-D_g(w,u_n).$$

Since $\frac{4\beta}{c_2^2}-\gamma<0$, we see that

$$\lim_{n\to\infty}\|A x_n\|=0.$$
(5.15)

Furthermore, since $x_n\in C$ for all $n\ge 0$, using (1.6), Lemma 2.5 and Corollary 2.1, we get

$$\begin{aligned}
D_g(x_n,y_n) &= D_g\bigl(x_n,\operatorname{proj}^g_C\bigl(\nabla g^*[\nabla g(x_n)-\beta A x_n]\bigr)\bigr)\\
&\le D_g\bigl(x_n,\nabla g^*[\nabla g(x_n)-\beta A x_n]\bigr)\\
&= V\bigl(x_n,\nabla g(x_n)-\beta A x_n\bigr)\\
&\le V\bigl(x_n,\bigl(\nabla g(x_n)-\beta A x_n\bigr)+\beta A x_n\bigr)-\bigl\langle \nabla g^*\bigl(\nabla g(x_n)-\beta A x_n\bigr)-x_n,\ \beta A x_n\bigr\rangle\\
&= V\bigl(x_n,\nabla g(x_n)\bigr)-\beta\bigl\langle \nabla g^*\bigl(\nabla g(x_n)-\beta A x_n\bigr)-x_n,\ A x_n\bigr\rangle\\
&= D_g(x_n,x_n)-\beta\bigl\langle \nabla g^*\bigl(\nabla g(x_n)-\beta A x_n\bigr)-x_n,\ A x_n\bigr\rangle\\
&\le \beta\bigl\|\nabla g^*\bigl(\nabla g(x_n)-\beta A x_n\bigr)-\nabla g^*\bigl(\nabla g(x_n)\bigr)\bigr\|\,\|A x_n\|\\
&\le \frac{4\beta^2}{c_2^2}\|A x_n\|^2.
\end{aligned}$$

It follows from (5.15) that

$$\lim_{n\to\infty}D_g(x_n,y_n)=0.$$

Lemma 2.2 now implies that

$$\lim_{n\to\infty}\|x_n-y_n\|=0.$$
(5.16)

Using (5.10), (5.13) and (5.16), we conclude that

$$\lim_{n\to\infty}\|y_n-u\|=0\quad\text{and}\quad\lim_{n\to\infty}\|y_n-T_j y_n\|=0,\quad\forall j\in\mathbb{N}.$$
(5.17)

Therefore, $u\in \widetilde{F}(T_j)=F(T_j)$ for all $j\in\mathbb{N}$.

Step 5. We show that $u\in A^{-1}(0)$.

Since $A$ is $\gamma$-inverse strongly monotone, it is $\frac{1}{\gamma}$-Lipschitz continuous and hence, using (5.10) and (5.15), we conclude that $Au=\lim_{n\to\infty}A x_n=0$. Therefore, $u\in A^{-1}(0)$.
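The continuity invoked in Step 5 can be made quantitative; a standard one-line estimate via the Cauchy-Schwarz inequality shows that a $\gamma$-inverse strongly monotone mapping is Lipschitz with constant $1/\gamma$:

$$\gamma\|Ax-Ay\|^2\le\langle x-y,\ Ax-Ay\rangle\le\|x-y\|\,\|Ax-Ay\|\quad\Longrightarrow\quad\|Ax-Ay\|\le\frac{1}{\gamma}\|x-y\|.$$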

Step 6. Finally, we show that $u=\operatorname{proj}^g_F x$.

The proof of this step is similar to that of Theorem 3.1, Step 4 and is omitted here. □

We end this section with the following simple example in order to support Theorem 5.2.

Example 5.1 Let $E=\ell^2$ and

$$x_0=(1,0,0,0,\dots),\quad x_1=(1,1,0,0,0,\dots),\quad x_2=(1,0,1,0,0,0,\dots),\quad x_3=(1,0,0,1,0,0,0,\dots),\quad\dots,\quad x_n=(\sigma_{n,1},\sigma_{n,2},\dots,\sigma_{n,k},\dots),\quad\dots,$$

where

$$\sigma_{n,k}=\begin{cases}1 & \text{if } k=1 \text{ or } k=n+1,\\ 0 & \text{if } k\neq 1 \text{ and } k\neq n+1\end{cases}$$

for all $n\in\mathbb{N}$. It is easy to see that the sequence $\{x_n\}_{n\in\mathbb{N}}$ converges weakly to $x_0$. Let $k$ be an even number in $\mathbb{N}$ and let $g:E\to\mathbb{R}$ be defined by

$$g(x)=\frac{1}{k}\|x\|^k,\quad x\in E.$$

It is easy to show that $\nabla g(x)=J_k(x)$ for all $x\in E$, where

$$J_k(x)=\bigl\{x^*\in E^*:\ \langle x,x^*\rangle=\|x\|\,\|x^*\|,\ \|x^*\|=\|x\|^{k-1}\bigr\}.$$

It is also obvious that

$$J_k(\lambda x)=\lambda^{k-1}J_k(x),\quad\forall x\in E,\ \lambda\in\mathbb{R}.$$
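In the Hilbert-space setting $E=\ell^2$ of this example, $J_k$ is single-valued and reduces to $J_k(x)=\|x\|^{k-2}x$, the gradient of $g(x)=\frac{1}{k}\|x\|^k$. A finite-dimensional numerical sketch of this formula and of the homogeneity identity (our own check, not part of the example):

```python
import numpy as np

# In a Hilbert space J_k(x) = ||x||^(k-2) x is the gradient of
# g(x) = ||x||^k / k; check by central finite differences, and verify
# J_k(lambda x) = lambda^(k-1) J_k(x) for a negative scalar (k even,
# so k - 1 is odd and the sign survives).
k = 4
g = lambda x: np.linalg.norm(x) ** k / k
Jk = lambda x: np.linalg.norm(x) ** (k - 2) * x

rng = np.random.default_rng(1)
x = rng.standard_normal(6)

h = 1e-6
grad_fd = np.array([(g(x + h * e) - g(x - h * e)) / (2 * h) for e in np.eye(6)])
assert np.allclose(grad_fd, Jk(x), atol=1e-5)     # gradient formula

lam = -1.0 / 3.0
assert np.allclose(Jk(lam * x), lam ** (k - 1) * Jk(x), atol=1e-12)  # homogeneity
```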

Now, we define a countable family of mappings $T_j:E\to E$ by

$$T_j(x)=\begin{cases}\dfrac{n}{n+1}\,x & \text{if } x=x_n,\\[4pt] -\dfrac{x}{j} & \text{if } x\neq x_n\end{cases}$$

for all $j\ge 1$ and $n\ge 0$. It is clear that $F(T_j)=\{0\}$ for all $j\ge 1$. Choose $j\in\mathbb{N}$; then for any $n\in\mathbb{N}$,

$$\begin{aligned}
D_g(0,T_j x_n) &= g(0)-g(T_j x_n)-\bigl\langle 0-T_j x_n,\ \nabla g(T_j x_n)\bigr\rangle\\
&= -\frac{n^k}{(n+1)^k}\,g(x_n)+\frac{n^k}{(n+1)^k}\bigl\langle x_n,\nabla g(x_n)\bigr\rangle\\
&= \frac{n^k}{(n+1)^k}\bigl[\bigl\langle x_n,\nabla g(x_n)\bigr\rangle-g(x_n)\bigr]\\
&= \frac{n^k}{(n+1)^k}\,D_g(0,x_n)\le D_g(0,x_n).
\end{aligned}$$

If $x\neq x_n$, then we have

$$\begin{aligned}
D_g(0,T_j x) &= g(0)-g(T_j x)-\bigl\langle 0-T_j x,\ \nabla g(T_j x)\bigr\rangle\\
&= -\frac{1}{j^k}\,g(x)+\frac{1}{j^k}\bigl\langle x,\nabla g(x)\bigr\rangle\\
&= \frac{1}{j^k}\bigl[\bigl\langle x,\nabla g(x)\bigr\rangle-g(x)\bigr]\\
&= \frac{1}{j^k}\,D_g(0,x)\le D_g(0,x).
\end{aligned}$$

Therefore, $T_j$ is a Bregman quasi-nonexpansive mapping. Next, we claim that $T_j$ is a Bregman weak relatively nonexpansive mapping. Indeed, for any sequence $\{z_n\}_{n\in\mathbb{N}}\subset E$ such that $z_n\rightharpoonup z_0$ and $\|z_n-T_j z_n\|\to 0$ as $n\to\infty$, there exists a sufficiently large number $N_0\in\mathbb{N}$ such that $z_n\neq x_m$ for any $n,m>N_0$. This implies that $T_j z_n=-\frac{z_n}{j}$ for all $n>N_0$. It follows from $\|z_n-T_j z_n\|\to 0$ that $\frac{j+1}{j}\|z_n\|\to 0$ and hence $z_n\to z_0=0$. Since $z_0\in F(T_j)$, we conclude that $T_j$ is a Bregman weak relatively nonexpansive mapping. It is clear that $\bigcap_{j=1}^{\infty}\widetilde{F}(T_j)=\bigcap_{j=1}^{\infty}F(T_j)=\{0\}$. Thus $\{T_j\}_{j\in\mathbb{N}}$ is a countable family of Bregman weak relatively nonexpansive mappings.

Next, we show that $\{T_j\}_{j\in\mathbb{N}}$ is not a countable family of Bregman relatively nonexpansive mappings. Indeed, although $x_n\rightharpoonup x_0$ and

$$\|x_n-T_j x_n\|=\Bigl\|x_n-\frac{n}{n+1}x_n\Bigr\|=\frac{1}{n+1}\|x_n\|\to 0$$

as $n\to\infty$, we have $x_0\notin F(T_j)$ for all $j\in\mathbb{N}$. Therefore, $\widehat{F}(T_j)\neq F(T_j)$ for all $j\in\mathbb{N}$. This implies that $\bigcap_{j=1}^{\infty}\widehat{F}(T_j)\neq\bigcap_{j=1}^{\infty}F(T_j)$.

Finally, it is obvious that the family $\{T_j\}_{j\in\mathbb{N}}$ satisfies all the hypotheses of Theorem 5.2.

6 Applications (Hammerstein-type equations)

Let $E$ be a real Banach space with the dual space $E^*$. The generalized formulation of many boundary value problems for ordinary and partial differential equations leads to operator equations of the type

$$\langle z,Ax\rangle=\langle z,b\rangle,\quad\forall z\in E,$$

which is equivalent to an equality of functionals on $E$, that is, to an equality of the form

$$Ax=b,$$
(6.1)

where $A$ is a monotone-type operator acting from a Banach space $E$ into $E^*$. Without loss of generality, we may assume $b=0$. It is well known that a solution of the equation $Ax=0$ (i.e., $\langle z,Ax\rangle=0$ for all $z\in E$) is a solution of the variational inequality $\langle z-x,Ax\rangle\ge 0$, $\forall z\in E$. Therefore, the theory of monotone operators and its applications to nonlinear partial differential equations and variational inequalities are closely related and constitute a substantial topic in nonlinear functional analysis. One important application of solving (6.1) is finding the zeros of so-called equations of Hammerstein type (see, e.g., [54]), where a nonlinear integral equation of Hammerstein type is one of the form

$$u(x)+\int_{\Omega}k(x,y)\,f\bigl(y,u(y)\bigr)\,dy=h(x),$$
(6.2)

where $dy$ is a $\sigma$-finite measure on the measure space $\Omega$, the real kernel $k$ is defined on $\Omega\times\Omega$, $f$ is a real-valued function defined on $\Omega\times\mathbb{R}$ which is, in general, nonlinear, and $h$ is a given function on $\Omega$. If we now define an operator $K$ by $Kv(x)=\int_{\Omega}k(x,y)v(y)\,dy$, $x\in\Omega$, and the so-called superposition or Nemytskii operator by $Qu(y):=f(y,u(y))$, then the integral equation (6.2) can be put in the operator-theoretic form

$$u+KQu=0,$$
(6.3)

where, without loss of generality, we have taken h=0.
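To illustrate the operator form $u+KQu=0$ (or, with a nonzero right-hand side, $u+KQu=h$), one can discretize $K$ by quadrature and apply Picard iteration; the kernel, nonlinearity, and grid below are assumptions made for this sketch only, not taken from the paper:

```python
import numpy as np

# Quadrature discretization of u + KQu = h on [0, 1] with the assumed data
#   k(x, y) = 0.2 * exp(-|x - y|),   f(y, u) = u + arctan(u)  (monotone),
# solved by Picard iteration u <- h - K(Q(u)); the iteration contracts here
# because the discretized K has small norm and Q is 2-Lipschitz.
m = 200
xs, w = np.linspace(0.0, 1.0, m), np.full(m, 1.0 / m)        # grid + quadrature weights
Kmat = 0.2 * np.exp(-np.abs(xs[:, None] - xs[None, :])) * w  # Kv(x) ~ sum_j k(x, y_j) v_j w_j
Q = lambda u: u + np.arctan(u)                               # Nemytskii operator
h = np.sin(np.pi * xs)

u = np.zeros(m)
for _ in range(200):                                         # Picard iteration
    u = h - Kmat @ Q(u)

residual = u + Kmat @ Q(u) - h
assert np.linalg.norm(residual) < 1e-10
```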

Interest in Eq. (6.2) stems mainly from the fact that several problems arising in differential equations, for instance elliptic boundary value problems whose linear parts possess Green's functions, can, as a rule, be transformed into equations of the form (6.2) (see, e.g., [55], Chapter IV). Equations of Hammerstein type play a crucial role in the theory of optimal control systems (see, e.g., [56]). Several existence and uniqueness theorems have been proved for equations of Hammerstein type (see, e.g., [57-62]). Very recently, Ofoedu and Malonza [63] proposed an iterative solution of the operator equation of Hammerstein type (6.3) in a 2-uniformly convex and uniformly smooth Banach space.

Now, we give an application of Theorem 5.2 to the iterative solution of the operator equation of Hammerstein type (6.3).

Theorem 6.1 Let $E$ be a real Banach space with dual space $E^*$ such that $X=E\times E^*$ (with the norm $\|z\|_X^2=\|u\|_E^2+\|v\|_{E^*}^2$, $z=(u,v)\in X$) is a 2-uniformly convex and uniformly smooth real Banach space. Let $g:X\to\mathbb{R}$ be a strongly coercive Bregman function which is bounded on bounded subsets and uniformly convex and uniformly smooth on bounded subsets of $X$. Assume that there exists $c_1>0$ such that $g$ is $\rho$-convex with $\rho(t):=\frac{c_1}{2}t^2$ for all $t\ge 0$. Let $Q:E\to E^*$ and $K:E^*\to E$ with $\operatorname{dom}K=Q(E)=E^*$ be continuous monotone-type operators such that Eq. (6.3) has a solution in $E$ and such that the map $A:X\to X^*$ defined by $Az:=A(u,v)=(Qu-v,\ u+Kv)$ is $\gamma$-inverse strongly monotone. Let $C$ be a nonempty, closed and convex subset of $X$, let $f:C\times C\to\mathbb{R}$ be a bifunction satisfying (A1)-(A4) and let $\{T_j\}_{j\in\mathbb{N}}$ be an infinite family of Bregman weak relatively nonexpansive mappings from $C$ into itself. Let $\{x_n\}_{n\in\mathbb{N}}$ be a sequence generated by

$$\begin{cases}
x_0 = x \in C \text{ chosen arbitrarily},\\
C_0 = C,\\
y_n = \operatorname{proj}^g_C\bigl(\nabla g^*[\nabla g(x_n)-\beta A x_n]\bigr),\\
z_n = \nabla g^*\bigl[\alpha_{n,0}\nabla g(x_n)+\sum_{j=1}^{\infty}\alpha_{n,j}\nabla g(T_j y_n)\bigr],\\
u_n \in C \text{ such that } f(u_n,y)+\frac{1}{r_n}\langle y-u_n,\ \nabla g(u_n)-\nabla g(z_n)\rangle\ge 0,\ \forall y\in C,\\
C_{n+1}=\{z\in C_n : D_g(z,u_n)\le D_g(z,x_n)\},\\
x_{n+1}=\operatorname{proj}^g_{C_{n+1}} x,\qquad n\in\mathbb{N}\cup\{0\},
\end{cases}$$
(6.4)

where $\nabla g$ is the right-hand derivative of $g$. Let $\beta$ be a constant such that $0<\beta<\frac{c_2^2\gamma}{2}$, where $c_2$ is the 2-uniformly convex constant of $X$ satisfying Corollary 2.1(2). Let $\{\alpha_{n,j}:n\in\mathbb{N}\cup\{0\},\ j\in\mathbb{N}\cup\{0\}\}$ be a sequence in $(0,1)$ satisfying the following control conditions:

(1) $\sum_{j=0}^{\infty}\alpha_{n,j}=1$ for all $n\in\mathbb{N}\cup\{0\}$;

(2) $\liminf_{n\to\infty}\alpha_{n,0}\alpha_{n,j}>0$ for all $j\in\mathbb{N}$.

Suppose that $F:=\bigcap_{j=1}^{\infty}F(T_j)\cap A^{-1}(0)\cap \mathrm{EP}(f)\neq\emptyset$. Then the sequence $\{x_n\}_{n\in\mathbb{N}}$ defined by (6.4) converges strongly to $\operatorname{proj}^g_F x$ as $n\to\infty$.

Remark 6.1 Observe that $z_0\in F$ implies, in particular, that $z_0\in A^{-1}(0)$, i.e., $Az_0=0$. But $z_0=(u_0,v_0)$ for some $u_0\in E$ and $v_0\in E^*$; moreover, $Az_0=A(u_0,v_0)=(Qu_0-v_0,\ u_0+Kv_0)$. So $Az_0=0$ implies that $(Qu_0-v_0,\ u_0+Kv_0)=(0,0)$, which is equivalent to $Qu_0-v_0=0$ and $u_0+Kv_0=0$. Thus $v_0=Qu_0$, which in turn implies that $u_0+KQu_0=0$. Therefore, $u_0\in E$ solves the Hammerstein-type equation (6.3).
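A useful feature of the product-space map $A(u,v)=(Qu-v,\ u+Kv)$ is that the cross terms cancel in the monotonicity pairing, so $A$ inherits monotonicity from $Q$ and $K$; a finite-dimensional spot check with monotone linear stand-ins for $Q$ and $K$ (illustrative assumptions of ours):

```python
import numpy as np

# For A(u, v) = (Qu - v, u + Kv) the cross terms cancel:
#   <A z - A z', z - z'> = <Qu - Qu', u - u'> + <Kv - Kv', v - v'>,
# so A is monotone whenever Q and K are.  Spot check with random
# positive-definite (hence monotone) linear Q, K.
rng = np.random.default_rng(2)
m = 4
mk_monotone = lambda: (lambda M: M @ M.T + 0.1 * np.eye(m))(rng.standard_normal((m, m)))
Qm, Km = mk_monotone(), mk_monotone()

def A(z):
    u, v = z[:m], z[m:]
    return np.concatenate([Qm @ u - v, u + Km @ v])

for _ in range(200):
    z, zp = rng.standard_normal(2 * m), rng.standard_normal(2 * m)
    assert (A(z) - A(zp)) @ (z - zp) >= -1e-10   # monotonicity of A
```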

References

1. Haugazeau, Y: Sur les inéquations variationnelles et la minimisation de fonctionnelles convexes. Thèse, Université de Paris, Paris, France (1968)


2. Ibaraki T, Takahashi W: Strong convergence theorems for a finite family of nonlinear operators of firmly nonexpansive type in Banach spaces. In Nonlinear Analysis and Convex Analysis. Edited by: Hsu S-B, Lai H-C, Lin L-J, Takahashi W, Tanaka T, Yao J-C. Yokohama Publishers, Yokohama; 2009:49–62.


  3. Kim TH, Takahashi W: Strong convergence of modified iteration processes for relatively asymptotically nonexpansive mappings. Taiwan. J. Math. 2010, 14(6):2163–2180.


4. Bauschke HH, Combettes PL: A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 2001, 26: 248–264.


  5. Plubtieng S, Ungchittrakool K: Strong convergence theorems for a common fixed point of two relatively nonexpansive mappings in a Banach space. J. Approx. Theory 2007, 149: 103–115. 10.1016/j.jat.2007.04.014


  6. Qin X, Cho YJ, Kang SM, Zhou H: Convergence of a modified Halpern-type iteration algorithm for quasi- ϕ -nonexpansive mappings. Appl. Math. Lett. 2009, 22: 1051–1055. 10.1016/j.aml.2009.01.015


  7. Qin X, Cho YJ, Kang SM: Convergence theorems of common elements for equilibrium problems and fixed point problems in Banach spaces. J. Comput. Appl. Math. 2009, 225: 20–30. 10.1016/j.cam.2008.06.011


  8. Qin X, Cho SY, Kang SM: On hybrid projection methods for asymptotically quasi- ϕ -nonexpansive mappings. Appl. Math. Comput. 2010, 215: 3874–3883. 10.1016/j.amc.2009.11.031


  9. Qin X, Su Y: Strong convergence theorem for relatively nonexpansive mappings in a Banach space. Nonlinear Anal. 2007, 67: 1958–1965. 10.1016/j.na.2006.08.021


  10. Qin X, Su Y, Shang M: Strong convergence theorems for asymptotically nonexpansive mappings by hybrid methods. Kyungpook Math. J. 2008, 48: 133–142. 10.5666/KMJ.2008.48.1.133


  11. Matsushita S, Takahashi W: Weak and strong convergence theorems for relatively nonexpansive mappings in Banach spaces. Fixed Point Theory Appl. 2004, 2004: 37–47.


  12. Takahashi W, Zembayashi K: Strong and weak convergence theorems for equilibrium problems and relatively nonexpansive mappings in Banach spaces. Nonlinear Anal. 2009, 70: 45–57. 10.1016/j.na.2007.11.031


  13. Takahashi W, Takeuchi Y, Kubota R: Strong convergence theorems by hybrid methods for families of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2008, 341: 276–286. 10.1016/j.jmaa.2007.09.062


14. Takahashi W: Nonlinear Functional Analysis, Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama; 2000.


15. Takahashi W: Convex Analysis and Approximation of Fixed Points. Yokohama Publishers, Yokohama; 2000.


  16. Mann WR: Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4: 506–510. 10.1090/S0002-9939-1953-0054846-3


  17. Reich S: Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 1979, 67: 274–276. 10.1016/0022-247X(79)90024-6


  18. Genel A, Lindenstrauss J: An example concerning fixed points. Isr. J. Math. 1975, 22: 81–86. 10.1007/BF02757276


  19. Güler O: On the convergence of the proximal point algorithm for convex optimization. SIAM J. Control Optim. 1991, 29: 403–419. 10.1137/0329022


  20. Alber Y: Metric and generalized projection operators in Banach spaces: properties and applications. In Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. Edited by: Kartsatos A. Dekker, New York; 1996:15–50.


21. Reich S: A weak convergence theorem for the alternating method with Bregman distances. In Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. Dekker, New York; 1996:313–318.


  22. Matsushita S, Takahashi W: A strong convergence theorem for relatively nonexpansive mappings in a Banach space. J. Approx. Theory 2005, 134(2):257–266. 10.1016/j.jat.2005.02.007


  23. Butnariu D, Iusem AN: Totally Convex Functions for Fixed Points Computation and Infinite Dimensional Optimization. Kluwer Academic, Dordrecht; 2000.


  24. Kohsaka F, Takahashi W: Proximal point algorithms with Bregman functions in Banach spaces. J. Nonlinear Convex Anal. 2005, 6(3):505–523.


  25. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898. 10.1137/0314056


  26. Rockafellar RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 1970, 149: 75–88. 10.1090/S0002-9947-1970-0282272-5


  27. Rockafellar RT: Characterization of subdifferentials of convex functions. Pac. J. Math. 1966, 17: 497–510. 10.2140/pjm.1966.17.497


  28. Rockafellar RT: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 1970, 33: 209–216. 10.2140/pjm.1970.33.209


  29. Bauschke HH, Borwein JM, Combettes PL: Essential smoothness, essential strict convexity, and Legendre functions in Banach spaces. Commun. Contemp. Math. 2001, 3: 615–647. 10.1142/S0219199701000524


30. Bonnans JF, Shapiro A: Perturbation Analysis of Optimization Problems. Springer, New York; 2000.


  31. Bauschke HH, Borwein JM: Legendre functions and the method of random Bregman functions. J. Convex Anal. 1997, 4: 27–67.


32. Bregman LM: The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. U.S.S.R. Comput. Math. Math. Phys. 1967, 7: 200–217.


  33. Censor Y, Lent A: An iterative row-action method for interval convex programming. J. Optim. Theory Appl. 1981, 34: 321–358. 10.1007/BF00934676


  34. Naraghirad E, Takahashi W, Yao J-C: Generalized retraction and fixed point theorems using Bregman functions in Banach spaces. J. Nonlinear Convex Anal. 2012, 13(1):141–156.


  35. Zălinescu C: Convex Analysis in General Vector Spaces. World Scientific, River Edge; 2002.


  36. Butnariu D, Censor Y, Reich S: Iterative averaging of entropic projections for solving stochastic convex feasibility problems. Comput. Optim. Appl. 1997, 8: 21–39. 10.1023/A:1008654413997


  37. Butnariu D, Resmerita E: Bregman distances, totally convex functions and a method for solving operator equations in Banach spaces. Abstr. Appl. Anal. 2006., 2006: Article ID 84919


  38. Bauschke HH, Borwein JM, Combettes PL: Bregman monotone optimization algorithms. SIAM J. Control Optim. 2003, 42: 596–636. 10.1137/S0363012902407120


39. Borwein JM, Reich S, Sabach S: A characterization of Bregman firmly nonexpansive operators using a new monotonicity concept. J. Nonlinear Convex Anal. 2011, 12(1):161–184.


  40. Reich S, Sabach S: Two strong convergence theorems for a proximal method in reflexive Banach spaces. Numer. Funct. Anal. Optim. 2010, 31: 22–44. 10.1080/01630560903499852


  41. Su Y, Wang Z, Xu HK: Strong convergence theorems for a common fixed point of two hemi-relatively nonexpansive mappings. Nonlinear Anal. 2009, 71: 5616–5628. 10.1016/j.na.2009.04.053


  42. Bauschke HH, Combettes PL: Construction of best Bregman approximations in reflexive Banach spaces. Proc. Am. Math. Soc. 2003, 131(12):3757–3766. 10.1090/S0002-9939-03-07050-3


  43. Reich S, Sabach S: A projection method for solving nonlinear problems in reflexive Banach spaces. J. Fixed Point Theory Appl. 2011, 9: 101–116. 10.1007/s11784-010-0037-5


  44. Reich S, Sabach S: Existence and approximation of fixed points of Bregman firmly nonexpansive mappings in reflexive Banach spaces. Springer Optimization and Its Applications 49. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering. Springer, New York; 2011:301–316.


  45. Censor Y, Reich S: Iterations of paracontractions and firmly nonexpansive operators with applications to feasibility and optimization. Optimization 1996, 37: 323–339. 10.1080/02331939608844225


  46. Reich S, Sabach S: Two strong convergence theorems for Bregman strongly nonexpansive operators in reflexive Banach spaces. Nonlinear Anal. 2010, 73: 122–135. 10.1016/j.na.2010.03.005


  47. Bruck RE, Reich S: Nonexpansive projections and resolvents of accretive operators in Banach spaces. Houst. J. Math. 1977, 3: 459–470.


  48. Reich S: A limit theorem for projections. Linear Multilinear Algebra 1983, 13: 281–290. 10.1080/03081088308817526


  49. Sabach S: Products of finitely many resolvents of maximal monotone mappings in reflexive Banach spaces. SIAM J. Optim. 2011, 21: 1289–1308. 10.1137/100799873


  50. Reich S, Sabach S: A strong convergence theorem for a proximal-type algorithm in Banach spaces. J. Nonlinear Convex Anal. 2009, 10(3):471–486.


  51. Chang SS, Chan CK, Lee HWJ: Modified block iterative algorithm for quasi- ϕ -asymptotically nonexpansive mappings and equilibrium problem in Banach spaces. Appl. Math. Comput. 2011, 217(18):7520–7530. 10.1016/j.amc.2011.02.060


  52. Nakajo K, Takahashi W: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 2003, 279: 372–379. 10.1016/S0022-247X(02)00458-4


  53. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.


54. Hammerstein A: Nichtlineare Integralgleichungen nebst Anwendungen. Acta Math. 1930, 54: 117–176.


  55. Pascali D, Sburlan S: Nonlinear Mappings of Monotone Type. Editura Academiae, Bucharest; 1978.


  56. Dolezale V Studies in Automation and Control 3. In Monotone Operators and Its Applications in Automation and Network Theory. Elsevier, New York; 1979.


  57. Browder FE, de Figueiredo DG, Gupta P: Maximal monotone operators and nonlinear integral equations of Hammerstein type. Bull. Am. Math. Soc. 1970, 76: 700–705. 10.1090/S0002-9904-1970-12511-3


  58. Browder FE, Gupta P: Monotone operators and nonlinear integral equations of Hammerstein type. Bull. Am. Math. Soc. 1969, 75: 1347–1353. 10.1090/S0002-9904-1969-12420-1


  59. Chang SS: On Chidume’s open questions and approximation solutions of multi-valued strongly accretive mapping equations in Banach spaces. J. Math. Anal. Appl. 1997, 216: 94–111. 10.1006/jmaa.1997.5661


  60. Chepanovich RS: Nonlinear Hammerstein equations and fixed points. Publ. Inst. Math. (Belgr.) 1984, 35: 119–123.


  61. Chidume CE: Fixed point iterations for nonlinear Hammerstein equations involving nonexpansive and accretive mappings. Indian J. Pure Appl. Math. 1989, 120: 129–135.


  62. de Figueiredo DG, Gupta CP: On the variational method for the existence of solutions to nonlinear equations of Hammerstein type. Proc. Am. Math. Soc. 1973, 40: 470–476.


  63. Ofoedu EU, Malonza DM: Hybrid approximation of solutions of nonlinear operator equations and applications to equation of Hammerstein-type. Appl. Math. Comput. 2011, 217: 6019–6030. 10.1016/j.amc.2010.12.073



Acknowledgements

Dedicated to Professor Wataru Takahashi on the occasion of his seventieth birthday.

The work of Eskandar Naraghirad was conducted with a postdoctoral fellowship at the National Sun Yat-sen University, Kaohsiung. This research was partially supported by Grant NSC 99-2115-M-037-002-MY3.

Author information

Corresponding author

Correspondence to Jen-Chih Yao.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to this work. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Naraghirad, E., Yao, JC. Bregman weak relatively nonexpansive mappings in Banach spaces. Fixed Point Theory Appl 2013, 141 (2013). https://doi.org/10.1186/1687-1812-2013-141
