
Approximating common fixed points of averaged self-mappings with applications to the split feasibility problem and maximal monotone operators in Hilbert spaces

Abstract

In this paper, a modified proximal point algorithm for finding common fixed points of averaged self-mappings in Hilbert spaces is introduced and a strong convergence theorem associated with it is proved. As a consequence, we apply it to study the split feasibility problem, the zero point problem of maximal monotone operators, the minimization problem and the equilibrium problem, and to show that the unique minimum norm solution can be obtained through our algorithm for each of the aforementioned problems. Our results generalize and unify many results that occur in the literature.

MSC:47H10, 47J25, 68W25.

1 Introduction

Throughout this paper, H denotes a real Hilbert space with the inner product ⟨·,·⟩ and the norm ∥·∥, I the identity mapping on H, ℕ the set of all natural numbers and ℝ the set of all real numbers. For a self-mapping T on H, F(T) denotes the set of all fixed points of T.

Let C and Q be nonempty closed convex subsets of two Hilbert spaces H_1 and H_2 respectively, and let A: H_1 → H_2 be a bounded linear mapping. The split feasibility problem (SFP) is the problem of finding a point x ∈ H_1 with the property:

x ∈ C and Ax ∈ Q.
(1)

The SFP was first introduced by Censor and Elfving [1] for modeling inverse problems which arise from phase retrievals and medical image reconstruction. Recently, it has been found that the SFP can also be used to model the intensity-modulated radiation therapy. For details, the readers are referred to Xu [2] and the references therein.

Assume that the SFP has a solution. There are many iterative methods designed to approximate its solutions. The most popular algorithm is the CQ algorithm introduced by Byrne [3, 4]:

It starts with any x_1 ∈ H_1 and generates a sequence {x_n} through the iteration

x_{n+1} = P_C(I − γA*(I − P_Q)A)x_n,
(2)

where γ ∈ (0, 2/∥A∥²), A* is the adjoint of A, and P_C and P_Q are the metric projections onto C and Q respectively.

The sequence {x_n} generated by the CQ algorithm (2) converges weakly to a solution of SFP (1), cf. [2–4]. Under the assumption that SFP (1) has a solution, it is known that a point x ∈ H_1 solves SFP (1) if and only if x is a fixed point of the operator
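Numerically, the CQ iteration (2) is easy to sketch. The following Python fragment is a minimal illustration, under the assumption that C and Q are coordinate boxes so that P_C and P_Q reduce to componentwise clipping; the matrix A, the boxes and the starting point are our own illustrative choices, not data from the paper.

```python
import numpy as np

def project_box(x, lo, hi):
    """Metric projection onto the box [lo, hi] (componentwise clipping)."""
    return np.clip(x, lo, hi)

def cq_algorithm(A, c_lo, c_hi, q_lo, q_hi, x0, n_iter=500):
    # gamma must lie in (0, 2/||A||^2); np.linalg.norm(A, 2) is the spectral norm.
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2
    x = x0.astype(float)
    for _ in range(n_iter):
        y = A @ x
        grad = A.T @ (y - project_box(y, q_lo, q_hi))   # A*(I - P_Q)A x
        x = project_box(x - gamma * grad, c_lo, c_hi)   # P_C(x - gamma * grad)
    return x

A = np.array([[1.0, 2.0], [0.0, 1.0]])
x = cq_algorithm(A,
                 c_lo=np.array([0.0, 0.0]), c_hi=np.array([1.0, 1.0]),
                 q_lo=np.array([0.0, 0.0]), q_hi=np.array([2.0, 1.0]),
                 x0=np.array([5.0, -3.0]))
# x lies in C and Ax (approximately) lies in Q
```

Since the final step of every iteration projects onto C, each iterate lies in C exactly; feasibility of Ax with respect to Q is attained only in the limit.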

P_C(I − γA*(I − P_Q)A),
(3)

cf. [2], where Xu also proposed the regularized method

x_{n+1} = P_C(I − γ_n(A*(I − P_Q)A + α_n I))x_n,
(4)

and proved that the sequence { x n } converges strongly to a minimum norm solution of SFP (1) provided the parameters { α n } and { γ n } verify some suitable conditions. This regularized method was further investigated by Yao, Jiang and Liou [5], and Yao, Liou and Shahzad [6].

Motivated by the above works, it is desirable to devise an algorithm for approximating a point x ∈ C such that

Ax ∈ Q and Bx ∈ Q,
(5)

where A, B are two bounded linear mappings from H 1 to H 2 .

On the other hand, finding zero points of maximal monotone operators has long been an interesting topic. A set-valued map A: H → 2^H with domain D(A) is called monotone if

⟨x − y, u − v⟩ ≥ 0

for all x, y ∈ D(A) and for any u ∈ A(x), v ∈ A(y), where D(A) is defined to be

D(A) = {x ∈ H : A(x) ≠ ∅}.

A is said to be maximal monotone if its graph {(x, u) : x ∈ H, u ∈ A(x)} is not properly contained in the graph of any other monotone operator. For a positive real number α, we denote by J_α^A the resolvent of a monotone operator A, that is, J_α^A(x) = (I + αA)^{-1}(x) for any x ∈ H. A point v ∈ H is called a zero point of a maximal monotone operator A if 0 ∈ A(v). In the sequel, we shall denote the set of all zero points of A by A^{-1}0, which is equal to F(J_α^A) for any α > 0. A well-known method to solve this problem is the proximal point algorithm, which starts with any initial point x_1 ∈ H and then generates the sequence {x_n} in H by

x_{n+1} = J_{α_n}^A x_n, n ∈ ℕ,

where { α n } is a sequence of positive real numbers. This algorithm was first introduced by Martinet [7] and then generally studied by Rockafellar [8], who devised the iterative sequence { x n } by

x_{n+1} = J_{α_n}^A x_n + e_n, n ∈ ℕ,
(6)

where {e_n} is an error sequence in H. Rockafellar showed that the sequence {x_n} generated by (6) converges weakly to an element of A^{-1}0 provided that A^{-1}0 ≠ ∅ and lim inf_{n→∞} α_n > 0. In 1991, Güler [9] gave an example showing that the sequence {x_n} generated by (6) may converge weakly but not strongly. Since then, many authors have conducted research on modifying the sequence in (6) so that strong convergence is guaranteed, cf. [10–19] and the references therein. Recently, Wang and Cui [16] considered the following algorithm:

x_{n+1} = a_n u + b_n x_n + c_n J_{α_n}^A x_n + e_n, n ∈ ℕ,
(7)

where {a_n}, {b_n}, {c_n} are sequences in (0,1) with a_n + b_n + c_n = 1 for all n ∈ ℕ, and {e_n} is an error sequence in H. They showed that the sequence {x_n} generated by (7) converges strongly to a zero point of A provided the following conditions (i) and (ii) are verified:

(i) lim_{n→∞} a_n = 0, ∑_{n=1}^∞ a_n = ∞, lim inf_{n→∞} c_n > 0, lim inf_{n→∞} α_n > 0;
(ii) either ∑_{n=1}^∞ ∥e_n∥ < ∞ or lim_{n→∞} ∥e_n∥/a_n = 0.

This theorem generalizes and unifies many results that occur in the literature, cf. [10–12, 18, 20].
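As a quick numerical illustration of scheme (7), the sketch below takes the simplest maximal monotone operator A(x) = x on the real line, whose resolvent has the closed form J_α(x) = x/(1 + α) and whose only zero is 0; the coefficient and error sequences are one admissible choice of conditions (i) and (ii), picked by us for the example.

```python
# Toy run of algorithm (7) with A(x) = x on R, so J_alpha(x) = x/(1+alpha)
# and the unique zero point of A is 0.

def resolvent(x, alpha):
    return x / (1.0 + alpha)

u, x = 2.0, 5.0               # anchor point u and starting point x_1
for n in range(1, 20001):
    a = 1.0 / (n + 1)         # a_n -> 0, sum a_n = infinity
    b = c = (1.0 - a) / 2.0   # liminf c_n > 0
    e = 1.0 / n**2            # summable error sequence
    x = a * u + b * x + c * resolvent(x, alpha=1.0) + e

print(abs(x))  # small: x approaches the unique zero point 0 of A
```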

For another maximal monotone operator B, we would like to seek appropriate conditions on the coefficient sequences { a n }, { b n }, { c n } and { d n } so that the sequence { x n } generated by

x_{n+1} = a_n u + b_n J_{β_n}^B x_n + c_n J_{α_n}^A x_n + d_n e_n, n ∈ ℕ,
(8)

can converge strongly to a common zero of A and B.

We find that both problems (5) and (8) can be solved simultaneously in a more general setting. As a matter of fact, any resolvent is firmly nonexpansive and any firmly nonexpansive mapping is 1/2-averaged, cf. [21], which is a special case of λ-averagedness (for the definition of λ-averaged mappings, we refer readers to Section 2). Also, as shown in the proof of Theorem 3.6 of [2], for any γ ∈ ℝ with 0 < γ < 2/∥A∥², the operator (3) is (2 + γ∥A∥²)/4-averaged. It is quite natural to ask whether the sequence {x_n} generated by

x n + 1 = a n u+ b n S n x n + c n T n x n + d n e n
(9)

can converge strongly to a point of ⋂_{n=1}^∞ F(S_n) ∩ ⋂_{n=1}^∞ F(T_n) provided the coefficient sequences {a_n}, {b_n}, {c_n} and {d_n} satisfy appropriate conditions, where for any n ∈ ℕ, each S_n is μ_n-averaged by G_n and each T_n is λ_n-averaged by K_n. We shall show in Section 3 that the sequence {x_n} generated by (9) converges strongly to a point of ⋂_{n=1}^∞ F(S_n) ∩ ⋂_{n=1}^∞ F(T_n) provided this intersection is nonempty and {μ_n}, {λ_n} and the coefficient sequences {a_n}, {b_n}, {c_n} and {d_n} verify the following conditions:

  1. (i)

    {μ_n} and {λ_n} are convergent sequences in (0,1) with limits μ, λ ∈ (0,1) respectively;

  2. (ii)

    there are two nonnegative real-valued functions κ_1 and κ_2 on ℕ with

    ∥G_m x − x∥ + ∥K_m x − x∥ ≤ κ_1(m)∥G_n x − x∥ + κ_2(m)∥K_n x − x∥, ∀m ∈ ℕ, ∀n ≥ m, ∀x ∈ C;
  3. (iii)

    {a_n}, {b_n}, {c_n} and {d_n} are sequences in [0,1] with a_n + b_n + c_n + d_n = 1 and a_n ∈ (0,1) for all n ∈ ℕ;

  4. (iv)

    lim_{n→∞} a_n = lim_{n→∞} d_n/a_n = 0, ∑_{n=1}^∞ a_n = ∞, ∑_{n=1}^∞ d_n < ∞;

  5. (v)

    lim inf_{n→∞} b_n > 0, lim inf_{n→∞} c_n > 0.
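To make iteration (9) concrete, the following sketch runs it in R² with two illustrative averaged mappings chosen by us (they are not taken from the paper): S = (1/2)I + (1/2)G with G(x, y) = (x, −y), so F(S) is the x-axis, and T = (1/2)I + (1/2)K with K the projection onto the half-plane {x ≥ 0}. The common fixed point set is then the nonnegative x-axis, and for u = (−2, 3) the predicted strong limit is P_Ω(u) = (0, 0).

```python
import numpy as np

def S(p):
    # (1/2)(x, y) + (1/2)(x, -y) = (x, 0): 1/2-averaged by the reflection G
    return np.array([p[0], 0.0])

def T(p):
    # (1/2)p + (1/2)K(p), K = projection onto the half-plane {x >= 0}
    return 0.5 * p + 0.5 * np.array([max(p[0], 0.0), p[1]])

u = np.array([-2.0, 3.0])
e = np.array([1.0, 1.0])          # bounded error sequence
x = np.array([4.0, -1.0])         # starting point x_1
for n in range(1, 20001):
    a = 1.0 / (n + 1)             # lim a_n = 0, sum a_n = infinity
    d = 1.0 / n**2                # sum d_n < infinity, d_n/a_n -> 0
    b = c = (1.0 - a - d) / 2.0   # liminf b_n, liminf c_n > 0
    x = a * u + b * S(x) + c * T(x) + d * e

print(x)  # close to the predicted limit (0, 0)
```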

Based on this main result, we shall deduce many corollaries for averaged mappings in Section 3. Section 4 is devoted to applications. We apply our results in Section 3 to study the split feasibility problem, the zero point problem of maximal monotone operators, the minimization problem and the equilibrium problem, and to show that the unique minimum norm solution can be obtained through our algorithm for each of the aforementioned problems.

2 Preliminaries

In order to facilitate our investigation in Section 3, we recall some basic facts. Let C be a nonempty closed convex subset of H. A mapping T: C → H is said to be

  1. (i)

    nonexpansive if

    ∥Tx − Ty∥ ≤ ∥x − y∥, ∀x, y ∈ C;
  2. (ii)

    firmly nonexpansive if

    ∥Tx − Ty∥² ≤ ∥x − y∥² − ∥(I − T)x − (I − T)y∥², ∀x, y ∈ C;
  3. (iii)

    λ-averaged by K if

    T = (1 − λ)I + λK

for some λ ∈ (0,1) and some nonexpansive mapping K.

If T: C → C is nonexpansive, then the fixed point set F(T) of T is closed and convex, cf. [21]. If T = (1 − λ)I + λK is averaged, then T is nonexpansive with F(T) = F(K).

The metric projection P_C from H onto C is the mapping that assigns to each x ∈ H the unique point P_C x in C with the property

∥x − P_C x∥ = min_{y∈C} ∥y − x∥.

It is known that P_C is nonexpansive and is characterized by the inequality: for any x ∈ H,

⟨x − P_C x, y − P_C x⟩ ≤ 0, ∀y ∈ C.
(10)

For α > 0, the resolvent J_α^A of a maximal monotone operator A on H has the following properties.

Lemma 2.1 Let A be a maximal monotone operator on H. Then

  1. (a)

    J α A is single-valued and firmly nonexpansive;

  2. (b)

    D( J α A )=H and F( J α A )= A 1 0;

  3. (c)

    (The resolvent identity) for μ,λ>0, the following identity holds:

    J_μ^A x = J_λ^A((λ/μ)x + (1 − λ/μ)J_μ^A x), ∀x ∈ H.

We still need some lemmas that will be quoted in the sequel.

Lemma 2.2 Let x, y, z ∈ H. Then

  1. (a)

    ∥x + y∥² ≤ ∥x∥² + 2⟨y, x + y⟩;

  2. (b)

    for any λ ∈ ℝ,

    ∥λx + (1 − λ)y∥² = λ∥x∥² + (1 − λ)∥y∥² − λ(1 − λ)∥x − y∥²;
  3. (c)

    for a, b, c ∈ [0,1] with a + b + c = 1,

    ∥ax + by + cz∥² = a∥x∥² + b∥y∥² + c∥z∥² − ab∥x − y∥² − ac∥x − z∥² − bc∥y − z∥².

Lemma 2.3 (Demiclosedness principle [21])

Let T be a nonexpansive self-mapping on a nonempty closed convex subset C of H, and suppose that {x_n} is a sequence in C such that {x_n} converges weakly to some z ∈ C and lim_{n→∞}∥x_n − Tx_n∥ = 0. Then Tz = z.

Lemma 2.4 [18]

Let { s n } be a sequence of nonnegative real numbers satisfying

s_{n+1} ≤ (1 − α_n)s_n + α_n μ_n + ν_n, ∀n ∈ ℕ,

where { α n }, { μ n } and { ν n } verify the conditions:

  1. (i)

    {α_n} ⊂ [0,1], ∑_{n=1}^∞ α_n = ∞;

  2. (ii)

    lim sup_{n→∞} μ_n ≤ 0;

  3. (iii)

    {ν_n} ⊂ [0,∞) and ∑_{n=1}^∞ ν_n < ∞.

Then lim_{n→∞} s_n = 0.

Lemma 2.5 [22]

Let {s_n} be a sequence in ℝ that does not decrease at infinity in the sense that there exists a subsequence {s_{n_i}} such that

s_{n_i} < s_{n_i + 1}, ∀i ∈ ℕ.

For any k ∈ ℕ, define m_k = max{j ≤ k : s_j < s_{j+1}}. Then m_k → ∞ as k → ∞ and max{s_{m_k}, s_k} ≤ s_{m_k + 1} for all k ∈ ℕ.

3 Strong convergence theorems

To establish a strong convergence theorem for averaged mappings S_n, T_n, n ∈ ℕ, on H associated with algorithm (9), we first need a lemma.

Lemma 3.1 If T = (1 − λ)I + λK is a λ-averaged self-mapping by K on a nonempty closed convex subset C of H and p ∈ F(T), then for any x ∈ C, one has

∥Tx − p∥² ≤ ∥x − p∥² − λ(1 − λ)∥x − Kx∥².

Proof Let x be any point in C. Then, using Tp=Kp=p and the nonexpansiveness of K, we have from Lemma 2.2(b) that

∥Tx − p∥² = ∥Tx − Tp∥²
= ∥(1 − λ)x + λKx − ((1 − λ)p + λKp)∥²
= ∥(1 − λ)(x − p) + λ(Kx − Kp)∥²
= (1 − λ)∥x − p∥² + λ∥Kx − Kp∥² − λ(1 − λ)∥(x − p) − (Kx − Kp)∥²
≤ (1 − λ)∥x − p∥² + λ∥x − p∥² − λ(1 − λ)∥x − Kx∥²
= ∥x − p∥² − λ(1 − λ)∥x − Kx∥².

 □

Theorem 3.2 For any n ∈ ℕ, suppose that S_n = (1 − μ_n)I + μ_n G_n and T_n = (1 − λ_n)I + λ_n K_n are averaged self-mappings on a nonempty closed convex subset C of H with Ω := ⋂_{n=1}^∞ F(S_n) ∩ ⋂_{n=1}^∞ F(T_n) ≠ ∅, satisfying that

(3.1) lim_{n→∞} μ_n = μ ∈ (0,1), lim_{n→∞} λ_n = λ ∈ (0,1);

and there are two nonnegative real-valued functions κ_1 and κ_2 on ℕ with

(3.2) ∥G_m x − x∥ + ∥K_m x − x∥ ≤ κ_1(m)∥G_n x − x∥ + κ_2(m)∥K_n x − x∥, ∀m ∈ ℕ, ∀n ≥ m, ∀x ∈ C.

Suppose further that {a_n}, {b_n}, {c_n} and {d_n} are sequences in [0,1] with a_n + b_n + c_n + d_n = 1 and a_n ∈ (0,1) for all n ∈ ℕ, and that {e_n} and {v_n} are two bounded sequences in C. For an arbitrary norm convergent sequence {u_n} in C with limit u, start with an arbitrary x_1 = y_1 ∈ C and define two sequences {x_n} and {y_n} by

x n + 1 = a n u + b n S n x n + c n T n x n + d n e n ; y n + 1 = a n u n + b n S n y n + c n T n y n + d n v n .

Then both of { x n } and { y n } converge strongly to P Ω u provided the following conditions are satisfied:

(i) lim_{n→∞} a_n = lim_{n→∞} d_n/a_n = 0, ∑_{n=1}^∞ a_n = ∞, ∑_{n=1}^∞ d_n < ∞;
(ii) lim inf_{n→∞} b_n > 0, lim inf_{n→∞} c_n > 0.

Moreover, when every S n is the identity mapping I, the result still holds without the condition lim inf n b n >0.

Proof Put p = P_Ω u. Firstly, we show that {x_n} converges strongly to p. It follows from the nonexpansiveness of S_n and T_n that

∥x_{n+1} − p∥ = ∥a_n(u − p) + b_n(S_n x_n − p) + c_n(T_n x_n − p) + d_n(e_n − p)∥ ≤ a_n∥u − p∥ + (b_n + c_n)∥x_n − p∥ + d_n∥e_n − p∥,

from which it follows that {x_n} is a bounded sequence. Taking account of Lemma 2.2 and using Lemma 3.1, we get

∥x_{n+1} − p∥² = ∥a_n(u − p) + b_n(S_n x_n − p) + c_n(T_n x_n − p) + d_n(e_n − p)∥²
≤ ∥b_n(S_n x_n − p) + c_n(T_n x_n − p) + d_n(e_n − p)∥² + 2a_n⟨u − p, x_{n+1} − p⟩
= (1 − a_n)² ∥(b_n/(1 − a_n))(S_n x_n − p) + (c_n/(1 − a_n))(T_n x_n − p) + (d_n/(1 − a_n))(e_n − p)∥² + 2a_n⟨u − p, x_{n+1} − p⟩
≤ (1 − a_n)² [(b_n/(1 − a_n))∥S_n x_n − p∥² + (c_n/(1 − a_n))∥T_n x_n − p∥² + (d_n/(1 − a_n))∥e_n − p∥²] + 2a_n⟨u − p, x_{n+1} − p⟩
≤ b_n∥S_n x_n − p∥² + c_n∥T_n x_n − p∥² + d_n∥e_n − p∥² + 2a_n⟨u − p, x_{n+1} − p⟩
≤ b_n(∥x_n − p∥² − μ_n(1 − μ_n)∥x_n − G_n x_n∥²) + c_n(∥x_n − p∥² − λ_n(1 − λ_n)∥x_n − K_n x_n∥²) + d_n∥e_n − p∥² + 2a_n⟨u − p, x_{n+1} − p⟩
= (b_n + c_n)∥x_n − p∥² + d_n∥e_n − p∥² + 2a_n⟨u − p, x_{n+1} − p⟩ − b_n μ_n(1 − μ_n)∥x_n − G_n x_n∥² − c_n λ_n(1 − λ_n)∥x_n − K_n x_n∥².
(11)

We now carry on with the proof by considering the following two cases: (I) { x n p} is eventually decreasing, and (II) { x n p} is not eventually decreasing.

Case I: Suppose that {∥x_n − p∥} is eventually decreasing, that is, there is N ∈ ℕ such that {∥x_n − p∥}_{n≥N} is decreasing. In this case, lim_{n→∞}∥x_n − p∥ exists in ℝ. By condition (ii), we may assume that there are b, c ∈ (0,1) such that b ≤ b_n and c ≤ c_n for all n ∈ ℕ. Then from inequality (11) we have

0 ≤ b μ_n(1 − μ_n)∥x_n − G_n x_n∥² + c λ_n(1 − λ_n)∥x_n − K_n x_n∥²
≤ b_n μ_n(1 − μ_n)∥x_n − G_n x_n∥² + c_n λ_n(1 − λ_n)∥x_n − K_n x_n∥²
≤ (b_n + c_n)∥x_n − p∥² − ∥x_{n+1} − p∥² + d_n∥e_n − p∥² + 2a_n⟨u − p, x_{n+1} − p⟩
= (1 − (a_n + d_n))∥x_n − p∥² − ∥x_{n+1} − p∥² + d_n∥e_n − p∥² + 2a_n⟨u − p, x_{n+1} − p⟩

and noting via condition (i) that

lim_{n→∞}(1 − (a_n + d_n))∥x_n − p∥² = lim_{n→∞}∥x_{n+1} − p∥² and lim_{n→∞} d_n∥e_n − p∥² = lim_{n→∞} 2a_n⟨u − p, x_{n+1} − p⟩ = 0,

we conclude that

lim_{n→∞}(b μ_n(1 − μ_n)∥x_n − G_n x_n∥² + c λ_n(1 − λ_n)∥x_n − K_n x_n∥²) = 0,

which implies that

lim_{n→∞}∥x_n − G_n x_n∥ = lim_{n→∞}∥x_n − K_n x_n∥ = 0.
(12)

Then from condition (3.2) we deduce for all m ∈ ℕ that

lim_{n→∞}∥x_n − G_m x_n∥ = lim_{n→∞}∥x_n − K_m x_n∥ = 0.
(13)

Since {x_n} is bounded, it has a subsequence {x_{n_k}} such that {x_{n_k}} converges weakly to some z ∈ H and

lim sup_{n→∞}⟨u − p, x_{n+1} − p⟩ = lim_{k→∞}⟨u − p, x_{n_k} − p⟩ = ⟨u − p, z − p⟩ ≤ 0,
(14)

where z ∈ Ω by (13) and Lemma 2.3, and the last inequality follows from (10). Choose M > 0 so that sup{∥e_n − p∥² + 2∥u − p∥∥x_{n+1} − p∥ : n ∈ ℕ} ≤ M. From (11) we have

∥x_{n+1} − p∥² ≤ (1 − (a_n + d_n))∥x_n − p∥² + (a_n + d_n)·2⟨u − p, x_{n+1} − p⟩ + d_n(∥e_n − p∥² + 2∥u − p∥∥x_{n+1} − p∥)
≤ (1 − (a_n + d_n))∥x_n − p∥² + (a_n + d_n)·2⟨u − p, x_{n+1} − p⟩ + d_n M.
(15)

Accordingly, because of (14) and condition (i), we can apply Lemma 2.4 to inequality (15) with s_n = ∥x_n − p∥², α_n = a_n + d_n, μ_n = 2⟨u − p, x_{n+1} − p⟩ and ν_n = d_n M to conclude that

lim_{n→∞} x_n = p.

Case II: Suppose that {∥x_n − p∥} is not eventually decreasing. In this case, by Lemma 2.5, there exists a nondecreasing sequence {m_k} in ℕ such that m_k → ∞ and

max{∥x_{m_k} − p∥, ∥x_k − p∥} ≤ ∥x_{m_k + 1} − p∥, ∀k ∈ ℕ.
(16)

Then it follows from (11) and (16) that

∥x_{m_k} − p∥² ≤ ∥x_{m_k + 1} − p∥²
≤ (b_{m_k} + c_{m_k})∥x_{m_k} − p∥² + d_{m_k}∥e_{m_k} − p∥² + 2a_{m_k}⟨u − p, x_{m_k + 1} − p⟩ − b_{m_k}μ_{m_k}(1 − μ_{m_k})∥x_{m_k} − G_{m_k}x_{m_k}∥² − c_{m_k}λ_{m_k}(1 − λ_{m_k})∥x_{m_k} − K_{m_k}x_{m_k}∥².
(17)

Therefore,

0 ≤ b_{m_k}μ_{m_k}(1 − μ_{m_k})∥x_{m_k} − G_{m_k}x_{m_k}∥² + c_{m_k}λ_{m_k}(1 − λ_{m_k})∥x_{m_k} − K_{m_k}x_{m_k}∥²
≤ −(1 − (b_{m_k} + c_{m_k}))∥x_{m_k} − p∥² + d_{m_k}∥e_{m_k} − p∥² + 2a_{m_k}⟨u − p, x_{m_k + 1} − p⟩
= −(a_{m_k} + d_{m_k})∥x_{m_k} − p∥² + d_{m_k}∥e_{m_k} − p∥² + 2a_{m_k}⟨u − p, x_{m_k + 1} − p⟩,

and then proceeding just as in the proof in Case I, we obtain

lim_{k→∞}∥x_{m_k} − G_{m_k}x_{m_k}∥ = lim_{k→∞}∥x_{m_k} − K_{m_k}x_{m_k}∥ = 0,
(18)

which in conjunction with condition (3.2) shows for all j ∈ ℕ that

lim_{k→∞}∥x_{m_k} − G_{m_j}x_{m_k}∥ = lim_{k→∞}∥x_{m_k} − K_{m_j}x_{m_k}∥ = 0,

and then it follows that

lim sup_{k→∞}⟨u − p, x_{m_k + 1} − p⟩ ≤ 0.
(19)

From (17) we have

(1 − (b_{m_k} + c_{m_k}))∥x_{m_k} − p∥² ≤ d_{m_k}∥e_{m_k} − p∥² + 2a_{m_k}⟨u − p, x_{m_k + 1} − p⟩,

and thus

∥x_{m_k} − p∥² ≤ (d_{m_k}/(a_{m_k} + d_{m_k}))∥e_{m_k} − p∥² + (2a_{m_k}/(a_{m_k} + d_{m_k}))⟨u − p, x_{m_k + 1} − p⟩
≤ (d_{m_k}/a_{m_k})∥e_{m_k} − p∥² + 2⟨u − p, x_{m_k + 1} − p⟩.

Letting k → ∞ and using (19) and condition (i), we obtain

lim_{k→∞}∥x_{m_k} − p∥ = 0.
(20)

Also, we have

∥x_{m_k + 1} − x_{m_k}∥ ≤ a_{m_k}∥u − x_{m_k}∥ + b_{m_k}μ_{m_k}∥G_{m_k}x_{m_k} − x_{m_k}∥ + c_{m_k}λ_{m_k}∥K_{m_k}x_{m_k} − x_{m_k}∥ + d_{m_k}∥e_{m_k} − x_{m_k}∥,

which together with (18) implies lim_{k→∞}∥x_{m_k + 1} − x_{m_k}∥ = 0, and so

lim_{k→∞}∥x_{m_k + 1} − p∥ = 0
(21)

by virtue of (20). Consequently, we conclude lim_{k→∞}∥x_k − p∥ = 0 via (16) and (21). In addition, note that the condition lim inf_{n→∞} b_n > 0 is used to establish lim_{n→∞}∥x_n − G_n x_n∥ = 0 and lim_{k→∞}∥x_{m_k} − G_{m_k}x_{m_k}∥ = 0 in (12) and (18) respectively. However, both limits hold trivially without this condition provided every S_n is the identity mapping I.

Next, we show that { y n } converges strongly to p too. Applying Lemma 2.4 to the following inequality

∥y_{n+1} − x_{n+1}∥ ≤ a_n∥u_n − u∥ + (b_n + c_n)∥y_n − x_n∥ + d_n∥v_n − e_n∥ = (1 − (a_n + d_n))∥y_n − x_n∥ + a_n∥u_n − u∥ + d_n∥v_n − e_n∥

for all n ∈ ℕ, we see that lim_{n→∞}∥y_n − x_n∥ = 0, and hence lim_{n→∞} y_n = p follows. This completes the proof. □

The following lemma is easily proved and so its proof is omitted.

Lemma 3.3 For any n ∈ ℕ, suppose that S_n = (1 − μ_n)I + μ_n G_n and T_n = (1 − λ_n)I + λ_n K_n are averaged self-mappings on a nonempty closed convex subset C of H such that condition (3.1) holds. Then {S_n} and {T_n} satisfy condition (3.2) if and only if {G_n} and {K_n} satisfy condition (3.2).

If the sequence { S n } (resp. { T n }) of averaged mappings consists of a single mapping S (resp. T), then { S n } and { T n } obviously verify conditions (3.1) and (3.2), and hence from Lemma 3.3 we have the following corollary.

Corollary 3.4 Suppose S and T are two averaged self-mappings on a nonempty closed convex subset C of H with Ω = F(S) ∩ F(T) ≠ ∅, and suppose that {a_n}, {b_n}, {c_n} and {d_n} are sequences in [0,1] with a_n + b_n + c_n + d_n = 1 and a_n ∈ (0,1) for all n ∈ ℕ, and {e_n} and {v_n} are two bounded sequences in C. For an arbitrary norm convergent sequence {u_n} in C with limit u, start with an arbitrary x_1 = y_1 ∈ C and define two sequences {x_n} and {y_n} by

x n + 1 = a n u + b n S x n + c n T x n + d n e n ; y n + 1 = a n u n + b n S y n + c n T y n + d n v n .

Then both of { x n } and { y n } converge strongly to P Ω u provided the following conditions are satisfied:

(i) lim_{n→∞} a_n = lim_{n→∞} d_n/a_n = 0, ∑_{n=1}^∞ a_n = ∞, ∑_{n=1}^∞ d_n < ∞;
(ii) lim inf_{n→∞} b_n > 0, lim inf_{n→∞} c_n > 0.

Moreover, when S is the identity mapping I, the result still holds without the condition lim inf n b n >0.

Theorem 3.5 For any n ∈ ℕ, suppose S_n and T_n are firmly nonexpansive self-mappings on a nonempty closed convex subset C of H with Ω := ⋂_{n=1}^∞ F(S_n) ∩ ⋂_{n=1}^∞ F(T_n) ≠ ∅, satisfying condition (3.2). Suppose further that {a_n}, {b_n}, {c_n} and {d_n} are sequences in [0,1] with a_n + b_n + c_n + d_n = 1 and a_n ∈ (0,1) for all n ∈ ℕ, and {e_n} and {v_n} are two bounded sequences in C. For an arbitrary norm convergent sequence {u_n} in C with limit u, start with an arbitrary x_1 = y_1 ∈ C and define two sequences {x_n} and {y_n} by

x n + 1 = a n u + b n S n x n + c n T n x n + d n e n ; y n + 1 = a n u n + b n S n y n + c n T n y n + d n v n .

Then both of { x n } and { y n } converge strongly to P Ω u provided the following conditions are satisfied:

(i) lim_{n→∞} a_n = lim_{n→∞} d_n/a_n = 0, ∑_{n=1}^∞ a_n = ∞, ∑_{n=1}^∞ d_n < ∞;
(ii) lim inf_{n→∞} b_n > 0, lim inf_{n→∞} c_n > 0.

Moreover, when every S n is the identity mapping I, the result still holds without the condition lim inf n b n >0.

Proof Since any firmly nonexpansive mapping is 1/2-averaged, condition (3.1) holds, and hence by Lemma 3.3 we see that all the requirements of Theorem 3.2 are verified. Therefore, the desired conclusion follows. □

If S_n = I and d_n = 0 for all n ∈ ℕ in Theorem 3.2, then we have the following corollary.

Corollary 3.6 Suppose, for all n ∈ ℕ, that T_n = (1 − λ_n)I + λ_n K_n is an averaged self-mapping on a nonempty closed convex subset C of H with Ω = ⋂_{n=1}^∞ F(T_n) ≠ ∅ and lim_{n→∞} λ_n = λ ∈ (0,1), and assume that condition (3.2) holds for {I} and {T_n}. Suppose further that {a_n}, {b_n} and {c_n} are sequences in [0,1] with a_n + b_n + c_n = 1 and a_n ∈ (0,1) for all n ∈ ℕ. For an arbitrary fixed u ∈ C, start with an arbitrary x_1 ∈ C and define

x n + 1 = a n u+ b n x n + c n T n x n ,nN.

Then the sequence { x n } converges strongly to P Ω u provided the following conditions are satisfied:

lim_{n→∞} a_n = 0, ∑_{n=1}^∞ a_n = ∞, lim inf_{n→∞} c_n > 0.

Corollary 3.7 Suppose, for all n ∈ ℕ, that T_n = (1 − λ_n)I + λ_n K_n is an averaged self-mapping on H with Ω = ⋂_{n=1}^∞ F(T_n) ≠ ∅ and lim_{n→∞} λ_n = λ ∈ (0,1), and assume that condition (3.2) holds for {I} and {T_n}. Suppose further that {a_n}, {b_n} and {c_n} are sequences in [0,1] with a_n + b_n + c_n = 1 and a_n ∈ (0,1) for all n ∈ ℕ, and that {e_n} is a bounded sequence in H. For an arbitrary fixed u ∈ H, start with an arbitrary x_1 ∈ H and define

x n + 1 = a n u+ b n x n + c n T n x n + e n ,nN.

Then the sequence { x n } converges strongly to P Ω u provided the following conditions are satisfied:

(i) lim_{n→∞} a_n = 0, ∑_{n=1}^∞ a_n = ∞, lim inf_{n→∞} c_n > 0;
(ii) either lim_{n→∞} ∥e_n∥/a_n = 0 or ∑_{n=1}^∞ ∥e_n∥ < ∞.

Proof Put p= P Ω u. Let z 1 = x 1 and define a sequence { z n } iteratively by

z n + 1 = a n u+ b n z n + c n T n z n .

We have lim n z n =p by Corollary 3.6. Since

∥x_{n+1} − z_{n+1}∥ ≤ b_n∥x_n − z_n∥ + c_n∥T_n x_n − T_n z_n∥ + ∥e_n∥ ≤ (b_n + c_n)∥x_n − z_n∥ + ∥e_n∥ = (1 − a_n)∥x_n − z_n∥ + ∥e_n∥,
(22)

the limit lim_{n→∞}∥x_n − z_n∥ = 0 follows by applying Lemma 2.4 to (22), and thus,

lim_{n→∞} x_n = p.

 □

4 Applications

In this section, we shall apply some of the strong convergence theorems in Section 3 to approximate a solution of the split feasibility problem, a common zero of maximal monotone operators, a minimizer of a proper lower semicontinuous convex function, and to study the related equilibrium problem.

Xu [2] transformed SFP (1) to the fixed point problem of the operator (3):

P_C(I − γA*(I − P_Q)A).

He proved Lemma 4.1 below.

Lemma 4.1 [2]

A point x ∈ H_1 solves SFP (1) if and only if x is a fixed point of the operator (3): P_C(I − γA*(I − P_Q)A).

Moreover, in the proof of Theorem 3.6 of [2], Xu showed the following lemma.

Lemma 4.2 [2]

For any γ ∈ ℝ with 0 < γ < 2/∥A∥², the operator (3): P_C(I − γA*(I − P_Q)A) is (2 + γ∥A∥²)/4-averaged.

Invoking Lemmas 4.1 and 4.2, we obtain the theorem below from Corollary 3.4 by putting S = I and T = P_C(I − γA*(I − P_Q)A).

Theorem 4.3 Let C and Q be nonempty closed convex subsets of two Hilbert spaces H_1 and H_2 respectively, and let A: H_1 → H_2 be a bounded linear mapping. Put T = P_C(I − γA*(I − P_Q)A), where γ satisfies 0 < γ < 2/∥A∥². Suppose that the solution set Ω of SFP (1) is nonempty, and suppose further that {a_n}, {b_n}, {c_n} and {d_n} are sequences in [0,1] with a_n + b_n + c_n + d_n = 1 and a_n ∈ (0,1) for all n ∈ ℕ, and that {e_n} is a bounded sequence in C. For an arbitrary fixed u ∈ C, start with an arbitrary x_1 ∈ C and define the sequence {x_n} by

x n + 1 = a n u+ b n x n + c n T x n + d n e n .

Then { x n } converges strongly to P Ω u provided the following conditions are satisfied:

(i) lim_{n→∞} a_n = lim_{n→∞} d_n/a_n = 0, ∑_{n=1}^∞ a_n = ∞, ∑_{n=1}^∞ d_n < ∞;
(ii) lim inf_{n→∞} c_n > 0.

When the point u in the above theorem is taken to be 0, we see that the limit point v of the sequence {x_n} is the unique minimum norm solution of SFP (1), that is, ∥v∥ = min{∥x̂∥ : x̂ ∈ Ω}.
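A minimal numerical sketch of this minimum norm behaviour, with illustrative one-dimensional data chosen by us: H_1 = H_2 = ℝ, C = [1, 3], Q = [0, 4] and Ax = 2x, so the solution set of SFP (1) is [1, 2] and the minimum norm solution is 1. Running the iteration of Theorem 4.3 with u = 0 (and d_n = 0) should then approach 1.

```python
def P_C(x):
    return min(max(x, 1.0), 3.0)      # projection onto C = [1, 3]

def P_Q(y):
    return min(max(y, 0.0), 4.0)      # projection onto Q = [0, 4]

A = 2.0
gamma = 0.25                          # must lie in (0, 2/||A||^2) = (0, 0.5)

def T(x):                             # T = P_C(I - gamma * A*(I - P_Q)A)
    y = A * x
    return P_C(x - gamma * A * (y - P_Q(y)))

u, x = 0.0, 10.0                      # u = 0 targets the minimum norm solution
for n in range(1, 20001):
    a = 1.0 / (n + 1)
    b = c = (1.0 - a) / 2.0           # d_n = 0 here, so no error term
    x = a * u + b * x + c * T(x)

print(x)  # close to 1, the minimum norm solution
```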

Here, readers may compare the above theorem with Theorem 3.6 of [2], which says, for 0 < γ < 2/∥A∥² and any sequence {a_n} in [0, 4/(2 + γ∥A∥²)] satisfying

∑_{n=1}^∞ a_n(4/(2 + γ∥A∥²) − a_n) = ∞,

that the sequence { x n } generated by

x_{n+1} = (1 − a_n)x_n + a_n P_C(I − γA*(I − P_Q)A)x_n

converges weakly to a solution of SFP (1) provided the solution set of SFP (1) is nonempty. It is also interesting to compare Theorem 4.3 with Theorem 5.5 of [2] and Theorem 3.1 of [5]. Our method differs from those in [2] and [5] even in the case u = 0, because our algorithm contains an error term and uses the operator P_C(I − γA*(I − P_Q)A) directly, without any regularization.

Theorem 4.4 Let C and Q be nonempty closed convex subsets of two Hilbert spaces H_1 and H_2 respectively, and let A, B be bounded linear mappings from H_1 to H_2. Put S = P_C(I − γB*(I − P_Q)B) and T = P_C(I − γA*(I − P_Q)A), where γ satisfies 0 < γ < min{2/∥B∥², 2/∥A∥²}. Suppose the solution set Ω of SFP (5) is nonempty, and suppose further that {a_n}, {b_n}, {c_n} and {d_n} are sequences in [0,1] with a_n + b_n + c_n + d_n = 1 and a_n ∈ (0,1) for all n ∈ ℕ, and that {e_n} is a bounded sequence in C. For an arbitrary fixed u ∈ C, start with an arbitrary x_1 ∈ C and define the sequence {x_n} by

x n + 1 = a n u+ b n S x n + c n T x n + d n e n .

Then { x n } converges strongly to P Ω u provided the following conditions are satisfied:

(i) lim_{n→∞} a_n = lim_{n→∞} d_n/a_n = 0, ∑_{n=1}^∞ a_n = ∞, ∑_{n=1}^∞ d_n < ∞;
(ii) lim inf_{n→∞} b_n > 0, lim inf_{n→∞} c_n > 0.

Proof It is clear that this theorem follows from Lemmas 4.1 and 4.2 and Corollary 3.4. □

Replacing S n and T n in Theorem 3.2 with the resolvents J β n B and J α n A of two maximal monotone operators B and A respectively, we have Theorem 4.5 below.

Theorem 4.5 Suppose that B and A are two maximal monotone operators on H with B^{-1}0 ∩ A^{-1}0 ≠ ∅, and suppose that {a_n}, {b_n}, {c_n} and {d_n} are sequences in [0,1] with a_n + b_n + c_n + d_n = 1 and a_n ∈ (0,1) for all n ∈ ℕ. Let {α_n} and {β_n} be sequences in (0,∞), and let {e_n} and {v_n} be two bounded sequences in H. For an arbitrary norm convergent sequence {u_n} in H with limit u, start with an arbitrary x_1 = y_1 ∈ H and define two sequences {x_n} and {y_n} by

x n + 1 = a n u + b n J β n B x n + c n J α n A x n + d n e n ; y n + 1 = a n u n + b n J β n B y n + c n J α n A y n + d n v n .

Then both of the sequences {x_n} and {y_n} converge strongly to P_{B^{-1}0 ∩ A^{-1}0} u provided the following conditions are satisfied:

(i) lim_{n→∞} a_n = lim_{n→∞} d_n/a_n = 0, ∑_{n=1}^∞ a_n = ∞, ∑_{n=1}^∞ d_n < ∞;
(ii) lim inf_{n→∞} b_n > 0, lim inf_{n→∞} c_n > 0;
(iii) lim inf_{n→∞} α_n > 0, lim inf_{n→∞} β_n > 0.

Proof Since all the requirements of Theorem 3.2 are satisfied except conditions (3.1) and (3.2), we have to check these two conditions. For any n ∈ ℕ, let S_n = J_{β_n}^B and T_n = J_{α_n}^A. By Lemma 2.1(b), we have B^{-1}0 = F(S_n) and A^{-1}0 = F(T_n) for all n ∈ ℕ. Moreover, since all S_n and T_n are firmly nonexpansive, all of them are 1/2-averaged, so condition (3.1) is satisfied with μ_n = λ_n = 1/2 for all n ∈ ℕ. According to Lemma 3.3, it remains to prove that condition (3.2) holds for {J_{β_n}^B} and {J_{α_n}^A}. Since condition (iii) holds, we may assume that there is τ ∈ (0,1) such that τ < α_n and τ < β_n for all n ∈ ℕ. Let κ_1(n) = 2 + α_n/τ and κ_2(n) = 2 + β_n/τ. Then, by virtue of the resolvent identity and the nonexpansiveness of J_{α_m}^A, one has for all m ∈ ℕ that

∥J_{α_n}^A x − J_{α_m}^A x∥ = ∥J_{α_m}^A((α_m/α_n)x + (1 − α_m/α_n)J_{α_n}^A x) − J_{α_m}^A x∥ ≤ |1 − α_m/α_n| ∥J_{α_n}^A x − x∥ ≤ (1 + α_m/α_n)∥J_{α_n}^A x − x∥,

and thus

∥J_{α_m}^A x − x∥ ≤ ∥J_{α_m}^A x − J_{α_n}^A x∥ + ∥J_{α_n}^A x − x∥ ≤ (2 + α_m/α_n)∥J_{α_n}^A x − x∥ ≤ (2 + α_m/τ)∥J_{α_n}^A x − x∥ = κ_1(m)∥J_{α_n}^A x − x∥, ∀n ≥ m, ∀x ∈ H.

The same argument shows for all mN that

∥J_{β_m}^B x − x∥ ≤ κ_2(m)∥J_{β_n}^B x − x∥, ∀n ≥ m, ∀x ∈ H.

Therefore, condition (3.2) is true for { J α n A } and { J β n B }. □

Putting T_n = J_{α_n}^A in Corollary 3.6 (resp. Corollary 3.7) and noting that {I} and {J_{α_n}^A} verify condition (3.2) due to lim inf_{n→∞} α_n > 0, we obtain the following two corollaries.

Corollary 4.6 Suppose that A is a maximal monotone operator on H with A^{-1}0 ≠ ∅, and suppose that {a_n}, {b_n} and {c_n} are sequences in [0,1] with a_n + b_n + c_n = 1 and a_n ∈ (0,1) for all n ∈ ℕ. Let {α_n} be a sequence in (0,∞). For an arbitrary fixed u ∈ H, choose an arbitrary x_1 ∈ H and define

x n + 1 = a n u+ b n x n + c n J α n A x n ,nN.

Then the sequence {x_n} converges strongly to P_{A^{-1}0} u provided the following conditions are satisfied:

lim_{n→∞} a_n = 0, ∑_{n=1}^∞ a_n = ∞, lim inf_{n→∞} c_n > 0, lim inf_{n→∞} α_n > 0.

Corollary 4.7 [16]

Suppose that A is a maximal monotone operator on H with A^{-1}0 ≠ ∅, and suppose that {a_n}, {b_n} and {c_n} are sequences in [0,1] with a_n + b_n + c_n = 1 and a_n ∈ (0,1) for all n ∈ ℕ. Let {α_n} be a sequence in (0,∞), and let {e_n} be a bounded sequence in H. For an arbitrary fixed u ∈ H, choose an arbitrary x_1 ∈ H and define

x n + 1 = a n u+ b n x n + c n J α n A x n + e n ,nN.

Then the sequence {x_n} converges strongly to P_{A^{-1}0} u provided the following conditions are satisfied:

(i) lim_{n→∞} a_n = 0, ∑_{n=1}^∞ a_n = ∞, lim inf_{n→∞} c_n > 0, lim inf_{n→∞} α_n > 0;
(ii) either lim_{n→∞} ∥e_n∥/a_n = 0 or ∑_{n=1}^∞ ∥e_n∥ < ∞.

Let f: H → (−∞, ∞] be a proper lower semicontinuous convex function. The set of minimizers of f is defined to be

argmin_{y∈H} f(y) = {z ∈ H : f(z) ≤ f(y) for all y ∈ H},

and the subdifferential of f is defined as

∂f(x) = {z ∈ H : ⟨y − x, z⟩ ≤ f(y) − f(x), ∀y ∈ H}

for all x ∈ H. As shown in Rockafellar [23], ∂f is a maximal monotone operator. Moreover, one has

0 ∈ ∂f(z) ⟺ z ∈ argmin_{y∈H} f(y),

that is,

(∂f)^{-1}0 = argmin_{y∈H} f(y).

Hence argmin_{y∈H} f(y) = F(J_α^{∂f}) for any α > 0, and then invoking Corollary 4.6, we obtain the following theorem.

Theorem 4.8 Let f: H → (−∞, ∞] be a proper lower semicontinuous convex function, and suppose that {a_n}, {b_n} and {c_n} are sequences in [0,1] with a_n + b_n + c_n = 1 and a_n ∈ (0,1) for all n ∈ ℕ. Let {α_n} be a sequence in (0,∞), and put Ω = argmin_{y∈H} f(y), assumed to be nonempty. For an arbitrary fixed u ∈ H, choose an arbitrary x_1 ∈ H and define

x_{n+1} = a_n u + b_n x_n + c_n J_{α_n}^{∂f} x_n.
(23)

Then the sequence { x n } converges strongly to P Ω u provided the following conditions are satisfied:

(i) lim_{n→∞} a_n = 0, ∑_{n=1}^∞ a_n = ∞;
(ii) lim inf_{n→∞} c_n > 0, lim inf_{n→∞} α_n > 0.

For any n ∈ ℕ, define g_n : H → (−∞, ∞] by

g_n(z) = f(z) + (1/(2α_n))∥z − x_n∥²

for all z ∈ H. Then we have, for any z ∈ H,

∂g_n(z) = ∂f(z) + (1/α_n)(z − x_n),

cf. [24]. Hence,

z ∈ (∂g_n)^{-1}0 ⟺ 0 ∈ ∂g_n(z) ⟺ 0 ∈ ∂f(z) + (1/α_n)(z − x_n) ⟺ x_n ∈ z + α_n∂f(z) = (I + α_n∂f)(z) ⟺ z = (I + α_n∂f)^{-1}x_n = J_{α_n}^{∂f}x_n.

This means that J_{α_n}^{∂f} x_n = argmin_{y∈H} g_n(y) = argmin_{y∈H}{f(y) + (1/(2α_n))∥y − x_n∥²}, and thus the iterative scheme (23) can be rewritten as

x_{n+1} = a_n u + b_n x_n + c_n argmin_{y∈H}{f(y) + (1/(2α_n))∥y − x_n∥²}.
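This prox form of scheme (23) can be tried directly whenever the inner minimization has a closed form. The sketch below assumes the illustrative choice f(y) = |y − 2| on the real line, for which the minimizer of f(y) + (1/(2α))(y − x)² is the shifted soft-thresholding map; the unique minimizer of f is 2, so the predicted strong limit is 2.

```python
import math

def prox(x, alpha):
    # Closed-form minimizer of |y - 2| + (1/(2*alpha))*(y - x)^2,
    # i.e. the resolvent J_alpha of the subdifferential of f(y) = |y - 2|.
    t = x - 2.0
    return 2.0 + math.copysign(max(abs(t) - alpha, 0.0), t)

u, x = -5.0, 7.0
for n in range(1, 20001):
    a = 1.0 / (n + 1)                 # lim a_n = 0, sum a_n = infinity
    b = c = (1.0 - a) / 2.0           # liminf c_n > 0
    x = a * u + b * x + c * prox(x, alpha=1.0)   # liminf alpha_n > 0

print(x)  # close to 2, the unique minimizer of f
```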

Let f: C × C → ℝ. An equilibrium problem is the problem of finding x̂ ∈ C such that

f(x̂, y) ≥ 0, ∀y ∈ C,

whose solution set is denoted by EP(f). For solving an equilibrium problem, we assume that the function f satisfies the following conditions:

  1. (A1)

    f(x, x) = 0, ∀x ∈ C;

  2. (A2)

    f is monotone, that is, f(x, y) + f(y, x) ≤ 0, ∀x, y ∈ C;

  3. (A3)

    for all x, y, z ∈ C, lim sup_{t↓0} f((1 − t)x + tz, y) ≤ f(x, y);

  4. (A4)

    for all x ∈ C, f(x, ·) is convex and lower semicontinuous.

The following lemma appears implicitly in Blum and Oettli [25] and is proved in detail by Aoyama et al. [26], while Lemma 4.10 is Lemma 2.12 of Combettes and Hirstoaga [27].

Lemma 4.9 [25, 26]

Let f: C × C → ℝ be a function satisfying conditions (A1)-(A4), and let r > 0 and x ∈ H. Then there exists a unique z ∈ C such that

f(z, y) + (1/r)⟨y − z, z − x⟩ ≥ 0, ∀y ∈ C.

Lemma 4.10 [27]

Let f: C × C → ℝ be a function satisfying conditions (A1)-(A4). For r > 0, define J_r^f : H → C by

J_r^f x = {z ∈ C : f(z, y) + (1/r)⟨y − z, z − x⟩ ≥ 0, ∀y ∈ C}

for all xH. Then the following hold:

  1. (a)

    J r f is single-valued;

  2. (b)

    J r f is firmly nonexpansive;

  3. (c)

    F( J r f )=EP(f);

  4. (d)

    EP(f) is closed and convex.

We call J r f the resolvent of f for r>0. Using Lemmas 4.9 and 4.10, Takahashi et al. [15] established the lemma below.

Lemma 4.11 [15]

Let f: C × C → ℝ be a function satisfying conditions (A1)-(A4) and define a set-valued mapping G_f of H into itself by

G_f(x) = {z ∈ H : f(x, y) ≥ ⟨y − x, z⟩, ∀y ∈ C} for x ∈ C, and G_f(x) = ∅ for x ∉ C.

Then the following hold:

  1. (a)

    G_f is a maximal monotone operator with D(G_f) ⊂ C;

  2. (b)

    EP(f)= G f 1 0;

  3. (c)

    J_r^{G_f} x = J_r^f x for all x ∈ H.

Theorem 4.12 Let C be a nonempty closed convex subset of H and let f_i: C × C → ℝ, i = 1, 2, be functions satisfying conditions (A1)-(A4) with EP(f_1) ∩ EP(f_2) ≠ ∅. Suppose that {a_n}, {b_n}, {c_n} and {d_n} are sequences in [0,1] with a_n + b_n + c_n + d_n = 1 and a_n ∈ (0,1) for all n ∈ ℕ. Let {α_n} and {β_n} be sequences in (0, ∞), and let {e_n} be a bounded sequence in H. For an arbitrary fixed u ∈ H, choose an arbitrary x_1 ∈ H and define

x_{n+1} = a_n u + b_n J_{β_n}^{f_2} x_n + c_n J_{α_n}^{f_1} x_n + d_n e_n, ∀n ∈ ℕ.

Then the sequence {x_n} converges strongly to P_{EP(f_1) ∩ EP(f_2)} u provided the following conditions are satisfied:

(i) lim_{n→∞} a_n = lim_{n→∞} d_n/a_n = 0, Σ_{n=1}^∞ a_n = ∞, Σ_{n=1}^∞ d_n < ∞;
(ii) lim inf_{n→∞} b_n > 0, lim inf_{n→∞} c_n > 0;
(iii) lim inf_{n→∞} α_n > 0, lim inf_{n→∞} β_n > 0.

Proof The set-valued mappings G_{f_i} associated with f_i, i = 1, 2, defined in Lemma 4.11 are maximal monotone operators with D(G_{f_i}) ⊂ C, and it follows from Lemmas 4.10 and 4.11 that J_r^{G_{f_i}} = J_r^{f_i} and F(J_r^{G_{f_i}}) = F(J_r^{f_i}) = EP(f_i) = G_{f_i}^{-1} 0 for any r > 0. Putting B = G_{f_2} and A = G_{f_1} in Theorem 4.5, the desired conclusion follows. □

Here it is worth mentioning that, just as for the SFP, the unique minimum norm solution can be obtained through our algorithm for each of the minimization problem and the equilibrium problem by taking u = 0 in Theorems 4.8 and 4.12.

For a nonempty closed convex subset C of H, its indicator function ι_C defined by

ι_C(x) = 0 if x ∈ C, and ι_C(x) = +∞ if x ∉ C,

is a proper lower semicontinuous convex function and its subdifferential ∂ι_C defined by

∂ι_C(x) = {z ∈ H : ⟨y − x, z⟩ ≤ ι_C(y) − ι_C(x), ∀y ∈ H}

is a maximal monotone operator, cf. Rockafellar [23]. As shown in Lin and Takahashi [28], the resolvent J_r^{∂ι_C} of ∂ι_C for r > 0 is the same as the metric projection P_C, and (∂ι_C)^{-1} 0 = C.

Theorem 4.13 Let C_i, i = 1, 2, be two nonempty closed convex subsets of H with C_1 ∩ C_2 ≠ ∅. Suppose that {a_n}, {b_n}, {c_n} and {d_n} are sequences in [0,1] with a_n + b_n + c_n + d_n = 1 and a_n ∈ (0,1) for all n ∈ ℕ. Let {e_n} be a bounded sequence in H. For an arbitrary fixed u ∈ H, choose an arbitrary x_1 ∈ H and define

x_{n+1} = a_n u + b_n P_{C_2} x_n + c_n P_{C_1} x_n + d_n e_n, ∀n ∈ ℕ.

Then the sequence {x_n} converges strongly to P_{C_1 ∩ C_2} u provided the following conditions are satisfied:

(i) lim_{n→∞} a_n = lim_{n→∞} d_n/a_n = 0, Σ_{n=1}^∞ a_n = ∞, Σ_{n=1}^∞ d_n < ∞;
(ii) lim inf_{n→∞} b_n > 0, lim inf_{n→∞} c_n > 0.

Proof Putting A = ∂ι_{C_1} and B = ∂ι_{C_2} in Theorem 4.5, the desired conclusion follows. □
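As a finite-dimensional sanity check of Theorem 4.13 (a toy instance of our own, not from the paper), take H = ℝ², C_1 the closed unit ball, C_2 the half-plane {x : x_1 ≥ 0}, a_n = 1/(n+1), b_n = c_n = (1 − a_n)/2 and d_n = 0, so that conditions (i) and (ii) hold and {e_n} plays no role. The iterates then approach P_{C_1 ∩ C_2} u:

```python
import math

def proj_ball(x):
    """Metric projection onto the closed unit ball of R^2."""
    n = math.hypot(x[0], x[1])
    return x if n <= 1.0 else (x[0] / n, x[1] / n)

def proj_halfplane(x):
    """Metric projection onto the half-plane {x : x[0] >= 0}."""
    return (max(x[0], 0.0), x[1])

u = (2.0, 2.0)
x = u  # x_1 = u is an arbitrary admissible start
for n in range(1, 20001):
    a = 1.0 / (n + 1)       # a_n -> 0 and sum a_n = infinity
    b = c = (1.0 - a) / 2   # liminf b_n = liminf c_n = 1/2 > 0
    p2, p1 = proj_halfplane(x), proj_ball(x)
    x = (a * u[0] + b * p2[0] + c * p1[0],
         a * u[1] + b * p2[1] + c * p1[1])

# For u = (2, 2), P_{C1 ∩ C2} u is the radial projection (1/√2, 1/√2).
target = (1 / math.sqrt(2), 1 / math.sqrt(2))
assert math.hypot(x[0] - target[0], x[1] - target[1]) < 1e-2
```

The anchoring term a_n u is what distinguishes this from plain averaged projections: it is the reason the limit is the particular point P_{C_1 ∩ C_2} u rather than an arbitrary point of the intersection.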

References

  1. Censor Y, Elfving T: A multiprojection algorithm using Bregman projection in a product space. Numer. Algorithms 1994, 8: 221–239. 10.1007/BF02142692

  2. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26: Article ID 105018. 10.1088/0266-5611/26/10/105018

  3. Byrne C: Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 2002, 18: 441–453. 10.1088/0266-5611/18/2/310

  4. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006

  5. Yao Y, Jigang W, Liou YC: Regularized methods for the split feasibility problem. Abstr. Appl. Anal. 2012. 10.1155/2012/140679

  6. Yao Y, Liou YC, Shahzad N: A strongly convergent method for the split feasibility problem. Abstr. Appl. Anal. 2012. 10.1155/2012/125046

  7. Martinet B: Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Oper. 1970, 4: 154–158.

  8. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898. 10.1137/0314056

  9. Güler O: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 1991, 29: 403–419. 10.1137/0329022

  10. Boikanyo OA, Moroşanu G: Inexact Halpern-type proximal point algorithm. J. Glob. Optim. 2011, 51: 11–26. 10.1007/s10898-010-9616-7

  11. Boikanyo OA, Moroşanu G: Four parameter proximal point algorithms. Nonlinear Anal. 2011, 74: 544–555. 10.1016/j.na.2010.09.008

  12. Boikanyo OA, Moroşanu G: A proximal point algorithm converging strongly for general errors. Optim. Lett. 2010, 4: 635–641. 10.1007/s11590-010-0176-z

  13. Kamimura S, Takahashi W: Approximating solutions of maximal monotone operators in Hilbert spaces. J. Approx. Theory 2000, 106: 226–240. 10.1006/jath.2000.3493

  14. Solodov MV, Svaiter BF: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program. 2000, 87: 189–202.

  15. Takahashi S, Takahashi W, Toyoda M: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 2010, 147: 27–41. 10.1007/s10957-010-9713-2

  16. Wang F, Cui H: On the contraction-proximal point algorithms with multi-parameters. J. Glob. Optim. 2012, 54: 485–491. 10.1007/s10898-011-9772-4

  17. Xu HK: A regularization method for the proximal point algorithm. J. Glob. Optim. 2006, 36: 115–125. 10.1007/s10898-006-9002-7

  18. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66(2): 240–256.

  19. Yao Y, Noor MA: On convergence criteria of generalized proximal point algorithm. J. Comput. Appl. Math. 2008, 217: 46–55. 10.1016/j.cam.2007.06.013

  20. Marino G, Xu HK: Convergence of generalized proximal point algorithm. Commun. Pure Appl. Anal. 2004, 3: 791–808.

  21. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.

  22. Maingé PE: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16: 899–912. 10.1007/s11228-008-0102-z

  23. Rockafellar RT: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 1970, 33: 209–216. 10.2140/pjm.1970.33.209

  24. Takahashi W: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama; 2009.

  25. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.

  26. Aoyama K, Kimura T, Takahashi W: Maximal monotone operators and maximal monotone functions for equilibrium problems. J. Convex Anal. 2008, 15: 395–409.

  27. Combettes PL, Hirstoaga SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6: 117–136.

  28. Lin LJ, Takahashi W: A general iterative method for hierarchical variational inequality problems in Hilbert spaces and applications. Positivity 2012, 16: 429–453. 10.1007/s11117-012-0161-0


Acknowledgements

The work was supported by the National Science Council of Taiwan with contract No. NSC101-2221-E-020-031.

Author information

Corresponding author

Correspondence to Chung-Chien Hong.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

Both authors contributed equally to this work. Both authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Huang, YY., Hong, CC. Approximating common fixed points of averaged self-mappings with applications to the split feasibility problem and maximal monotone operators in Hilbert spaces. Fixed Point Theory Appl 2013, 190 (2013). https://doi.org/10.1186/1687-1812-2013-190


Keywords

  • averaged mapping
  • firmly nonexpansive mapping
  • maximal monotone operator
  • split feasibility problem
  • minimization problem
  • equilibrium problem