Open Access

Strong and weak convergence theorems for common zeros of finite accretive mappings

Fixed Point Theory and Applications 2014, 2014:77

https://doi.org/10.1186/1687-1812-2014-77

Received: 29 October 2013

Accepted: 7 March 2014

Published: 25 March 2014

Abstract

In this paper, we present a new iterative scheme for finding common zeros of finitely many m-accretive mappings in a real Hilbert space. Some strong and weak convergence theorems are established under different assumptions, which extend the corresponding results of several authors.

MSC: 47H05, 47H09.

Keywords

accretive mapping, contraction, zero point, iterative scheme, strong (weak) convergence

1 Introduction and preliminaries

Let H be a real Hilbert space with inner product ⟨·,·⟩ and norm ∥·∥. Then, for x, y ∈ H and λ ∈ [0,1],

∥λx + (1−λ)y∥² = λ∥x∥² + (1−λ)∥y∥² − λ(1−λ)∥x − y∥².
(1.1)

We write x_n ⇀ x to indicate that the sequence {x_n} converges weakly to x, and x_n → x to indicate that {x_n} converges strongly to x.

Let C be a closed and convex subset of H. Then, for every point x ∈ H, there exists a unique nearest point in C, denoted by P_C x, such that ∥x − P_C x∥ ≤ ∥x − y∥ for all y ∈ C. P_C is called the metric projection of H onto C. It is well known that P_C : H → C is characterized by the following properties:

(i) ⟨x − P_C x, P_C x − y⟩ ≥ 0, for all y ∈ C and x ∈ H;

(ii) for every x, y ∈ H, ∥P_C x − P_C y∥² ≤ ⟨x − y, P_C x − P_C y⟩;

(iii) ∥P_C x − P_C y∥ ≤ ∥x − y∥, for every x, y ∈ H;

(iv) x_n ⇀ x_0 and P_C x_n → y_0 imply that P_C x_0 = y_0.
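As a concrete illustration of these properties (our own toy example, not taken from the paper), the metric projection onto the closed unit ball C = {y : ∥y∥ ≤ 1} of R³ has the closed form P_C x = x/max(1, ∥x∥). The following sketch spot-checks the characterizing inequality (i) numerically.

```python
import numpy as np

# Hedged illustration (not from the paper): the metric projection onto the
# closed unit ball C = {y : ||y|| <= 1} of R^3 is P_C x = x / max(1, ||x||).
# We spot-check the characterizing property (i).

def P(x):
    """Metric projection of x onto the closed unit ball."""
    return x / max(1.0, np.linalg.norm(x))

rng = np.random.default_rng(2)
x = 3.0 * rng.standard_normal(3)         # a point of H = R^3, possibly outside C
for _ in range(100):
    y = rng.standard_normal(3)
    y = y / max(1.0, np.linalg.norm(y))  # an arbitrary point of C
    # property (i): <x - P_C x, P_C x - y> >= 0 for all y in C
    assert np.dot(x - P(x), P(x) - y) >= -1e-12
```

Property (iii), nonexpansiveness of P_C, can be checked the same way for random pairs of points.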

A mapping f : C → C is called a contraction if there exists a constant k ∈ (0,1) such that ∥f(x) − f(y)∥ ≤ k∥x − y∥, for x, y ∈ C. We use Π_C to denote the collection of mappings f satisfying the above inequality. That is, Π_C := { f : C → C | f is a contraction with constant k }.

A mapping T : C → C is said to be nonexpansive if ∥Tx − Ty∥ ≤ ∥x − y∥, for x, y ∈ C. We use F(T) to denote the fixed point set of T, that is, F(T) := {x ∈ C : Tx = x}.

A mapping A : H ⊃ D(A) → R(A) ⊂ H is called accretive if ⟨x − y, Ax − Ay⟩ ≥ 0 for x, y ∈ D(A), and it is called m-accretive if R(I + λA) = H for all λ > 0. An m-accretive mapping A is demi-closed, that is, if {x_n} ⊂ D(A) is such that x_n ⇀ x and Ax_n → y, then x ∈ D(A) and y = Ax. Let A⁻¹0 denote the set of zeros of A, that is, A⁻¹0 := {x ∈ D(A) : Ax = 0}. We denote by J_r^A (for r > 0) the resolvent of A, that is, J_r^A := (I + rA)⁻¹. Then J_r^A is nonexpansive and F(J_r^A) = A⁻¹0.
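The resolvent is easy to compute in finite dimensions. The following hedged sketch (our illustration, not the paper's) takes a linear monotone map A on R³, evaluates J_r^A by solving a linear system, and spot-checks that it is nonexpansive.

```python
import numpy as np

# Hedged sketch: in R^3, a matrix A with A + A^T positive semidefinite induces
# a monotone (accretive) map x -> Ax, and the resolvent J_r^A = (I + rA)^{-1}
# is evaluated by solving the linear system (I + rA)v = x.

def resolvent(A, r, x):
    """Return J_r^A x, i.e. the solution v of (I + rA)v = x."""
    return np.linalg.solve(np.eye(A.shape[0]) + r * A, x)

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
A = M @ M.T                               # positive semidefinite, hence monotone
x, y = rng.standard_normal(3), rng.standard_normal(3)
r = 2.0

Jx, Jy = resolvent(A, r, x), resolvent(A, r, y)
# J_r^A is nonexpansive: ||J_r x - J_r y|| <= ||x - y||
assert np.linalg.norm(Jx - Jy) <= np.linalg.norm(x - y) + 1e-12
```

Any zero of A is a fixed point of this map, and conversely, which is the identity F(J_r^A) = A⁻¹0 used throughout the paper.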

Interest in accretive mappings, which form an important class of nonlinear operators, stems mainly from their firm connection with equations of evolution. It is well known that many physically significant problems can be modeled by initial value problems of the form

x′(t) + Ax(t) = 0,  x(0) = x₀,
(1.2)

where A is an accretive mapping. Typical examples where such evolution equations occur can be found in the heat, wave, or Schrödinger equations. If x(t) is independent of t, then (1.2) reduces to

Au = 0,
(1.3)

whose solutions correspond to the equilibria of the system (1.2). Consequently, within the past 40 years or so, considerable research effort has been devoted to methods for finding approximate solutions of (1.3). An early fundamental result in the theory of accretive operators is due to Browder [1]. One classical method for studying the problem 0 ∈ Ax, where A is an m-accretive mapping, is the following so-called proximal point method (cf. [2]):

x₀ ∈ H,  x_{n+1} = J_{r_n} x_n,  n ≥ 0,
(1.4)

where J_{r_n} := (I + r_n A)⁻¹. It was shown that the sequence generated by (1.4) converges weakly or strongly to a zero point of A under some conditions.
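The proximal point iteration (1.4) can be demonstrated numerically. Below is a hedged sketch with a linear positive definite A on R², so that A is m-accretive with unique zero 0; the matrix, starting point, and step size are our own toy choices, not from the paper.

```python
import numpy as np

# Hedged numerical sketch of the proximal point method (1.4): A is a linear
# positive definite map of R^2 (hence m-accretive, with its only zero at 0),
# and each step applies the resolvent J_r = (I + rA)^{-1}.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
x = np.array([5.0, -4.0])
r = 1.0
for n in range(200):
    x = np.linalg.solve(np.eye(2) + r * A, x)   # x_{n+1} = J_r x_n

assert np.linalg.norm(x) < 1e-8   # the iterates approach the zero of A
```

In this linear toy case the convergence is geometric; the general theorems in [2] cover far weaker assumptions.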

Recall that the following normal Mann iterative scheme for approximating a fixed point of a nonexpansive mapping T : C → C was introduced by Mann [3]:

x₀ ∈ C,  x_{n+1} = (1 − α_n)x_n + α_n T x_n,  n ≥ 0.
(1.5)

It was proved that, under some conditions, the sequence {x_n} produced by (1.5) converges weakly to a point in F(T).
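The averaging in (1.5) matters: a hedged toy example (ours, not the paper's) is T a rotation of R² by 90°, a nonexpansive map whose only fixed point is the origin. Plain Picard iteration x_{n+1} = T x_n merely cycles on a circle, while the Mann step converges.

```python
import numpy as np

# Hedged illustration of Mann iteration (1.5) with a toy nonexpansive T:
# rotation of R^2 by pi/2, whose only fixed point is the origin.
T = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotation by 90 degrees
x = np.array([3.0, 4.0])
a = 0.5                                    # constant Mann coefficient a_n = 1/2
for n in range(500):
    x = (1 - a) * x + a * (T @ x)          # x_{n+1} = (1 - a_n)x_n + a_n T x_n

assert np.linalg.norm(x) < 1e-6            # converges to the fixed point 0
```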

Later, many mathematicians tried to combine the ideas of the proximal method and the Mann iterative method to approximate the zeros of m-accretive mappings; see, e.g., [4–11] and the references therein.

In particular, in 2007, Qin and Su [4] presented the following iterative scheme:

x₁ ∈ C,
y_n = β_n x_n + (1 − β_n) J_{r_n} x_n,
x_{n+1} = α_n u + (1 − α_n) y_n,
(1.6)

where J_{r_n} = (I + r_n A)⁻¹. They showed that {x_n} generated by the above scheme converges strongly to a zero of A.

Based on the iterative schemes (1.4) and (1.5), Zegeye and Shahzad extended the discussion to the case of finitely many m-accretive mappings. They presented in [12] the following iterative scheme:

x₀ ∈ C,  x_{n+1} = α_n u + (1 − α_n) S_r x_n,  n ≥ 0,
(1.7)

where S_r = a₀I + a₁J_{A₁} + a₂J_{A₂} + ⋯ + a_l J_{A_l} with J_{A_i} = (I + A_i)⁻¹ and ∑_{i=0}^{l} a_i = 1. If ∩_{i=1}^{l} A_i⁻¹(0) ≠ ∅, they proved that {x_n} generated by (1.7) converges strongly to a common zero of the A_i (i = 1, 2, …, l) under some conditions.

Later, their work was extended to the following scheme, presented by Hu and Liu in [13]:

x₀ ∈ C,  x_{n+1} = α_n u + β_n x_n + ϑ_n S_{r_n} x_n,  n ≥ 0,
(1.8)

where S_{r_n} = a₀I + a₁J_{r_n}^{A₁} + a₂J_{r_n}^{A₂} + ⋯ + a_l J_{r_n}^{A_l} with J_{r_n}^{A_i} = (I + r_n A_i)⁻¹ and ∑_{i=0}^{l} a_i = 1; {α_n}, {β_n}, {ϑ_n} ⊂ (0,1) with α_n + β_n + ϑ_n = 1. If ∩_{i=1}^{l} A_i⁻¹(0) ≠ ∅, they proved that {x_n} converges strongly to a common zero of the A_i (i = 1, 2, …, l) under some conditions.

In this paper, based on the work in (1.6), (1.7), and (1.8), we present the following iterative scheme:

x₁ ∈ C,
y_n = β_n f(x_n) + (1 − β_n) S_{r_n}^{A_m A_{m−1} ⋯ A₁} x_n,
u_n = ϑ_n f(y_n) + (1 − ϑ_n) W_{r_n} y_n,
x_{n+1} = α_n f(u_n) + (1 − α_n) u_n,
(A)

where S_{r_n}^{A_m A_{m−1} ⋯ A₁} := J_{r_n}^{A_m} ∘ J_{r_n}^{A_{m−1}} ∘ ⋯ ∘ J_{r_n}^{A₁}, W_{r_n} = a₀I + a₁J_{r_n}^{B₁} + a₂J_{r_n}^{B₂} + ⋯ + a_l J_{r_n}^{B_l}, J_{r_n}^{A_i} = (I + r_n A_i)⁻¹ and J_{r_n}^{B_j} = (I + r_n B_j)⁻¹, for i = 1, 2, …, m and j = 1, 2, …, l; ∑_{k=0}^{l} a_k = 1; f : C → C is a contraction; and both {A_i}_{i=1}^{m} and {B_j}_{j=1}^{l} are finite families of m-accretive mappings. More details of the iterative scheme (A) will be presented in Section 2. We shall prove a weak convergence theorem and a strong convergence theorem under different assumptions on {α_n}, {β_n}, {ϑ_n}, and {r_n}, respectively.
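To make scheme (A) concrete, here is a hedged numerical sketch (our own toy example, not from the paper) with m = 2, l = 1 and linear positive definite A₁, A₂, B₁ on R², so that D = {0} is the unique common zero. The parameter choices β_n = 1/n, α_n = ϑ_n = 1/n², and r_n ≡ 1 loosely mimic the assumptions of Theorem 2.2; all matrices and weights are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of scheme (A): two mappings A_1, A_2 and one mapping B_1,
# all linear positive definite on R^2, so the common zero set is D = {0},
# and the iterates should converge to 0. f(x) = x/2 is a contraction, k = 1/2.

def J(A, r, x):                       # resolvent J_r^A x = (I + rA)^{-1} x
    return np.linalg.solve(np.eye(2) + r * A, x)

A1 = np.array([[1.0, 0.0], [0.0, 2.0]])
A2 = np.array([[2.0, 1.0], [1.0, 2.0]])
B1 = np.array([[3.0, 0.0], [0.0, 1.0]])
f = lambda x: 0.5 * x
a0, a1 = 0.5, 0.5                     # weights in W_r, a0 + a1 = 1

x = np.array([10.0, -7.0])
for n in range(1, 2001):
    alpha, beta, theta, r = 1.0 / n**2, 1.0 / n, 1.0 / n**2, 1.0
    S = J(A2, r, J(A1, r, x))                 # S_r^{A2 A1} x_n
    y = beta * f(x) + (1 - beta) * S
    W = a0 * y + a1 * J(B1, r, y)             # W_r y_n
    u = theta * f(y) + (1 - theta) * W
    x = alpha * f(u) + (1 - alpha) * u

assert np.linalg.norm(x) < 1e-3       # iterates approach the common zero
```

In this strongly monotone toy case the convergence is fast; the theorems below handle the genuinely infinite-dimensional, merely m-accretive situation.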

In order to prove our main results, we need the following lemmas.

By using the properties of the metric projection and m-accretive mappings, we can easily prove the following two lemmas.

Lemma 1.1 For x ∈ H and y ∈ C, ∥P_C x − y∥² + ∥P_C x − x∥² ≤ ∥y − x∥².

Lemma 1.2 For y ∈ A⁻¹0, x ∈ H and r > 0,

∥(I + rA)⁻¹x − y∥² + ∥(I + rA)⁻¹x − x∥² ≤ ∥y − x∥².

Lemma 1.3 ([14])

Let {a_n} and {b_n} be two sequences of nonnegative real numbers satisfying

a_{n+1} ≤ a_n + b_n,  ∀n ≥ 1.

If ∑_{n=1}^{∞} b_n < +∞, then lim_{n→∞} a_n exists.

Lemma 1.4 ([15])

Let H be a real Hilbert space and A an m-accretive mapping. For λ, μ > 0 and x ∈ H, we have

J_λ^A x = J_μ^A( (μ/λ)x + (1 − μ/λ) J_λ^A x ),

where J_λ^A = (I + λA)⁻¹ and J_μ^A = (I + μA)⁻¹.
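This resolvent identity can be checked numerically for a linear monotone A, where both sides reduce to solvable linear systems. The sketch below is an illustration under that assumption, not a proof of the lemma.

```python
import numpy as np

# Hedged numerical check of the resolvent identity of Lemma 1.4 for a linear
# monotone A on R^3: J_lam x = J_mu((mu/lam)x + (1 - mu/lam)J_lam x).
rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
A = M @ M.T                                  # positive semidefinite => monotone

def J(lam, x):
    """J_lambda^A x = (I + lam*A)^{-1} x."""
    return np.linalg.solve(np.eye(3) + lam * A, x)

x = rng.standard_normal(3)
lam, mu = 2.0, 0.5
lhs = J(lam, x)
rhs = J(mu, (mu / lam) * x + (1 - mu / lam) * J(lam, x))
assert np.allclose(lhs, rhs)
```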

Lemma 1.5 ([16])

Let H be a real Hilbert space and C a closed convex subset of H. Let T : C → C be a nonexpansive mapping with F(T) ≠ ∅, and f ∈ Π_C. Then {z_t}, defined by

z_t = t f(z_t) + (1 − t) T z_t,  z_t ∈ C,

converges strongly to a point in F(T) as t → 0. If one defines Q : Π_C → F(T) by Q(f) := lim_{t→0} z_t, f ∈ Π_C, then Q(f) solves the following variational inequality:

⟨(I − f)Q(f), Q(f) − p⟩ ≤ 0,  ∀p ∈ F(T).

Lemma 1.6 ([17])

Let {a_n}, {b_n}, and {c_n} be three sequences of nonnegative real numbers satisfying

a_{n+1} ≤ (1 − c_n)a_n + b_n c_n,  ∀n ≥ 1,

where {c_n} ⊂ (0,1) is such that

(i) c_n → 0 and ∑_{n=1}^{∞} c_n = +∞;

(ii) either lim sup_{n→∞} b_n ≤ 0 or ∑_{n=1}^{∞} |b_n c_n| < +∞.

Then lim_{n→∞} a_n = 0.
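The mechanism of this lemma can be illustrated numerically (an illustration under specific toy sequences, not a proof): with c_n = 1/(n+1), so that c_n → 0 and ∑ c_n = +∞, and b_n = 1/(n+1) → 0 (so lim sup b_n ≤ 0 holds), the recursion forces a_n → 0 from any starting value.

```python
# Hedged numerical illustration of Lemma 1.6 with the toy choices
# c_n = b_n = 1/(n+1); the recursion a_{n+1} = (1 - c_n) a_n + b_n c_n
# then drives a_n to 0, however large a_1 is.
a = 10.0
for n in range(1, 200001):
    c = 1.0 / (n + 1)
    b = 1.0 / (n + 1)
    a = (1 - c) * a + b * c

assert 0.0 <= a < 1e-3
```

Note that ∑ c_n = +∞ is essential: with summable c_n the product ∏(1 − c_n) stays bounded away from 0 and a_n need not vanish.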

Lemma 1.7 In a Hilbert space H, the following inequality holds:

∥x + y∥² ≤ ∥x∥² + 2⟨y, x + y⟩,  ∀x, y ∈ H.

2 Weak and strong convergence theorems

Lemma 2.1 Let H be a real Hilbert space, C a nonempty closed and convex subset of H, and A_i, B_j : C → C (i = 1, 2, …, m; j = 1, 2, …, l) finitely many m-accretive mappings, and set D := (∩_{i=1}^{m} A_i⁻¹0) ∩ (∩_{j=1}^{l} B_j⁻¹0). Suppose S_r^{A_m A_{m−1} ⋯ A₁} := J_r^{A_m} ∘ J_r^{A_{m−1}} ∘ ⋯ ∘ J_r^{A₁} and W_r := a₀I + a₁J_r^{B₁} + a₂J_r^{B₂} + ⋯ + a_l J_r^{B_l}, where J_r^{A_i} = (I + rA_i)⁻¹ (i = 1, 2, …, m), J_r^{B_j} = (I + rB_j)⁻¹ (j = 1, 2, …, l), a_k ∈ (0,1), k = 0, 1, …, l, ∑_{k=0}^{l} a_k = 1, and r > 0. Then S_r^{A_m A_{m−1} ⋯ A₁} : C → C and W_r : C → C are nonexpansive.

Lemma 2.1 can easily be obtained in view of the facts that (I + rA_i)⁻¹ and (I + rB_j)⁻¹ are nonexpansive, i = 1, 2, …, m; j = 1, 2, …, l.

Theorem 2.1 Let H, C, D, and A_i, B_j : C → C (i = 1, 2, …, m; j = 1, 2, …, l) be the same as those in Lemma 2.1. Suppose that D ≠ ∅. Let {x_n} be generated by the iterative scheme (A). Assume that {α_n}, {β_n}, and {ϑ_n} are three sequences in [0,1) such that ∑_{n=1}^{∞} α_n < +∞, ∑_{n=1}^{∞} β_n < +∞, and ∑_{n=1}^{∞} ϑ_n < +∞, that {r_n} ⊂ (0,+∞) with lim_{n→∞} r_n = +∞, and that f : C → C is a contraction with contractive constant k ∈ (0,1). Then {x_n} converges weakly to a point v₀ ∈ D satisfying

lim_{n→∞} ∥x_n − v₀∥ = min_{y∈D} lim_{n→∞} ∥x_n − y∥.
(2.1)

Proof We split our proof into five steps.

Step 1. { x n } , { u n } and { y n } are all bounded.

We can easily see that ∩_{i=1}^{m} A_i⁻¹0 ⊂ F(S_{r_n}^{A_m ⋯ A₁}) and ∩_{j=1}^{l} B_j⁻¹0 ⊂ F(W_{r_n}). Then, for p ∈ D, from Lemma 2.1 we have

∥S_{r_n}^{A_m ⋯ A₁} x_n − p∥ = ∥S_{r_n}^{A_m ⋯ A₁} x_n − S_{r_n}^{A_m ⋯ A₁} p∥ ≤ ∥x_n − p∥.
(2.2)

Based on (2.2), we know that

∥y_n − p∥ ≤ β_n ∥f(x_n) − p∥ + (1 − β_n) ∥S_{r_n}^{A_m ⋯ A₁} x_n − p∥ ≤ [1 − β_n(1−k)] ∥x_n − p∥ + β_n ∥f(p) − p∥.
(2.3)

Then (2.3) and Lemma 2.1 imply that

∥u_n − p∥ ≤ ϑ_n ∥f(y_n) − f(p)∥ + ϑ_n ∥f(p) − p∥ + (1 − ϑ_n) ∥y_n − p∥ ≤ [1 − β_n(1−k)][1 − ϑ_n(1−k)] ∥x_n − p∥ + [ϑ_n + β_n − ϑ_n β_n(1−k)] ∥f(p) − p∥.
(2.4)

Using (2.4), we know that

∥x_{n+1} − p∥ ≤ α_n ∥f(u_n) − f(p)∥ + α_n ∥f(p) − p∥ + (1 − α_n) ∥u_n − p∥ ≤ [1 − β_n(1−k)][1 − α_n(1−k)][1 − ϑ_n(1−k)] ∥x_n − p∥ + {[1 − α_n(1−k)][ϑ_n + β_n − ϑ_n β_n(1−k)] + α_n} ∥f(p) − p∥ ≤ ∥x_n − p∥ + (ϑ_n + β_n + α_n) ∥f(p) − p∥.
(2.5)

Then Lemma 1.3 implies that lim_{n→∞} ∥x_n − p∥ exists, which ensures that {x_n} is bounded. Combining this with the fact that f is a contraction and noticing (2.2), (2.3), and (2.4), we can easily see that {f(x_n)}, {u_n}, {y_n}, {f(u_n)}, {f(y_n)}, {S_{r_n}^{A_i ⋯ A₁} x_n} (i = 1, 2, …, m), and {J_{r_n}^{B_j} x_n} (j = 1, 2, …, l) are all bounded.

We may let M₁ = max{ sup{∥x_n∥ : n ≥ 1}, sup{∥y_n∥ : n ≥ 1}, sup{∥u_n∥ : n ≥ 1}, sup{∥f(x_n)∥ : n ≥ 1}, sup{∥f(y_n)∥ : n ≥ 1}, sup{∥f(u_n)∥ : n ≥ 1}, sup{∥S_{r_n}^{A_i ⋯ A₁} x_n∥ : n ≥ 1, i = 1, 2, …, m}, sup{∥J_{r_n}^{B_j} x_n∥ : n ≥ 1, j = 1, 2, …, l} }.

Step 2. lim_{n→∞} ∥P_D x_n − x_n∥ exists.

In fact, it follows from the properties of P_D that

∥P_D x_{n+1} − x_{n+1}∥ ≤ ∥P_D x_n − x_{n+1}∥.
(2.6)

In view of Lemma 1.1, we know that, for v ∈ D,

∥v − P_D x_n∥² ≤ ∥v − x_n∥² − ∥P_D x_n − x_n∥² ≤ ∥x_n − v∥²,
(2.7)

which implies that {P_D x_n} is bounded, since {x_n} is bounded from step 1. Then {f(P_D x_n)} is also bounded.

Let M₂ = max{ sup{∥P_D x_n∥ : n ≥ 1}, sup{∥f(P_D x_n)∥ : n ≥ 1} }.

Noticing (2.5) and (2.6), we have

∥x_{n+1} − P_D x_{n+1}∥ ≤ ∥x_n − P_D x_n∥ + (ϑ_n + β_n + α_n) ∥f(P_D x_n) − P_D x_n∥ ≤ ∥x_n − P_D x_n∥ + 2(ϑ_n + β_n + α_n) M₂.

Therefore, in view of Lemma 1.3, lim_{n→∞} ∥P_D x_n − x_n∥ exists.

Step 3. P_D x_n → v₀ as n → ∞, where v₀ ∈ D satisfies (2.1).

We first claim that there exists a unique element v₀ ∈ D such that

lim_{n→∞} ∥x_n − v₀∥ = min_{y∈D} lim_{n→∞} ∥x_n − y∥.

In fact, let h(y) = lim_{n→∞} ∥x_n − y∥, y ∈ D. Then we can easily see that h : D → R⁺ is proper, strictly convex, and lower semicontinuous, and h(y) → +∞ as ∥y∥ → +∞. This ensures that there exists a unique element v₀ ∈ D such that h(v₀) = min_{y∈D} h(y).

From (2.7), we know that

lim_{n→∞} ∥v₀ − P_D x_n∥² ≤ lim_{n→∞} ( ∥v₀ − x_n∥² − ∥P_D x_n − x_n∥² ) = h²(v₀) − lim_{n→∞} ∥P_D x_n − x_n∥² ≤ 0.

Therefore, P_D x_n → v₀, as n → ∞.

Step 4. ω(x_n) ⊂ D, where ω(x_n) denotes the set of all weak limit points of {x_n}.

Since {x_n} is bounded, there exists a subsequence of {x_n}, which for simplicity we still denote by {x_n}, such that x_n ⇀ x, as n → ∞.

Since ∥·∥² is convex, by using Lemma 1.2 and noticing (2.3), we have, for p ∈ D,

∥y_n − p∥² ≤ β_n ∥f(x_n) − p∥² + (1 − β_n) ∥S_{r_n}^{A_m ⋯ A₁} x_n − p∥² ≤ β_n ∥f(x_n) − p∥² + (1 − β_n)[ ∥S_{r_n}^{A_{m−1} ⋯ A₁} x_n − p∥² − ∥S_{r_n}^{A_m ⋯ A₁} x_n − S_{r_n}^{A_{m−1} ⋯ A₁} x_n∥² ] ≤ β_n k ∥x_n − p∥² + (1 − β_n)[ ∥x_n − p∥² − ∥S_{r_n}^{A_m ⋯ A₁} x_n − S_{r_n}^{A_{m−1} ⋯ A₁} x_n∥² ] + β_n ∥f(p) − p∥² + 2β_n k ∥x_n − p∥ ∥f(p) − p∥ ≤ ∥x_n − p∥² − (1 − β_n) ∥S_{r_n}^{A_m ⋯ A₁} x_n − S_{r_n}^{A_{m−1} ⋯ A₁} x_n∥² + β_n ∥f(p) − p∥² + 2β_n k ∥x_n − p∥ ∥f(p) − p∥.
(2.8)
Then, using (2.8), we have

∥u_n − p∥² ≤ ϑ_n ∥f(y_n) − p∥² + (1 − ϑ_n) ∥W_{r_n} y_n − W_{r_n} p∥² ≤ [1 − ϑ_n(1−k)] ∥y_n − p∥² + 2ϑ_n k ∥y_n − p∥ ∥f(p) − p∥ + ϑ_n ∥f(p) − p∥² ≤ ∥x_n − p∥² − (1 − β_n) ∥S_{r_n}^{A_m ⋯ A₁} x_n − S_{r_n}^{A_{m−1} ⋯ A₁} x_n∥² + (ϑ_n + β_n) ∥f(p) − p∥² + 2k( β_n ∥x_n − p∥ + ϑ_n ∥y_n − p∥ ) ∥f(p) − p∥,
(2.9)

which implies that

∥x_{n+1} − p∥² ≤ [1 − α_n(1−k)] ∥u_n − p∥² + 2α_n k ∥u_n − p∥ ∥f(p) − p∥ + α_n ∥f(p) − p∥² ≤ ∥x_n − p∥² − (1 − β_n) ∥S_{r_n}^{A_m ⋯ A₁} x_n − S_{r_n}^{A_{m−1} ⋯ A₁} x_n∥² + (α_n + β_n + ϑ_n) ∥f(p) − p∥² + 2k( α_n ∥u_n − p∥ + β_n ∥x_n − p∥ + ϑ_n ∥y_n − p∥ ) ∥f(p) − p∥.
(2.10)

Thus

0 ≤ (1 − β_n) ∥S_{r_n}^{A_m ⋯ A₁} x_n − S_{r_n}^{A_{m−1} ⋯ A₁} x_n∥² ≤ ∥x_n − p∥² − ∥x_{n+1} − p∥² + (α_n + β_n + ϑ_n) ∥f(p) − p∥² + 2k( α_n ∥u_n − p∥ + β_n ∥x_n − p∥ + ϑ_n ∥y_n − p∥ ) ∥f(p) − p∥.
(2.11)

Since, from the proof of step 1, lim_{n→∞} ∥x_n − p∥ exists, we get ∥S_{r_n}^{A_m ⋯ A₁} x_n − S_{r_n}^{A_{m−1} ⋯ A₁} x_n∥ → 0, as n → ∞.

Going back to (2.8) again, we know that

∥y_n − p∥² ≤ β_n ∥f(x_n) − p∥² + (1 − β_n) ∥S_{r_n}^{A_{m−1} ⋯ A₁} x_n − p∥² ≤ β_n ∥f(x_n) − p∥² + (1 − β_n)[ ∥S_{r_n}^{A_{m−2} ⋯ A₁} x_n − p∥² − ∥S_{r_n}^{A_{m−1} ⋯ A₁} x_n − S_{r_n}^{A_{m−2} ⋯ A₁} x_n∥² ] ≤ β_n k ∥x_n − p∥² + (1 − β_n)[ ∥x_n − p∥² − ∥S_{r_n}^{A_{m−1} ⋯ A₁} x_n − S_{r_n}^{A_{m−2} ⋯ A₁} x_n∥² ] + β_n ∥f(p) − p∥² + 2β_n k ∥x_n − p∥ ∥f(p) − p∥ ≤ ∥x_n − p∥² − (1 − β_n) ∥S_{r_n}^{A_{m−1} ⋯ A₁} x_n − S_{r_n}^{A_{m−2} ⋯ A₁} x_n∥² + β_n ∥f(p) − p∥² + 2β_n k ∥x_n − p∥ ∥f(p) − p∥.
(2.12)

Then, using (2.12) and repeating the processes of (2.9)-(2.11), we find that

∥S_{r_n}^{A_{m−1} ⋯ A₁} x_n − S_{r_n}^{A_{m−2} ⋯ A₁} x_n∥ → 0, as n → ∞.

By induction, we have the following results:

∥S_{r_n}^{A_{m−2} ⋯ A₁} x_n − S_{r_n}^{A_{m−3} ⋯ A₁} x_n∥ → 0, …, ∥(I + r_n A₁)⁻¹ x_n − x_n∥ → 0,

as n → ∞. Therefore, (I + r_n A₁)⁻¹ x_n ⇀ x, …, S_{r_n}^{A_m A_{m−1} ⋯ A₁} x_n = (I + r_n A_m)⁻¹ ⋯ (I + r_n A₁)⁻¹ x_n ⇀ x, as n → ∞.

Let v_{n,1} = (I + r_n A₁)⁻¹ x_n; then A₁ v_{n,1} = (x_n − v_{n,1})/r_n → 0, since r_n → +∞ and both {x_n} and {v_{n,1}} are bounded. Since A₁ is demi-closed and v_{n,1} ⇀ x, this ensures that x ∈ A₁⁻¹0.

Let v_{n,2} = (I + r_n A₂)⁻¹ (I + r_n A₁)⁻¹ x_n = (I + r_n A₂)⁻¹ v_{n,1}; then A₂ v_{n,2} = (v_{n,1} − v_{n,2})/r_n → 0, which implies that x ∈ A₂⁻¹0.

By induction, let v_{n,m} = (I + r_n A_m)⁻¹ ⋯ (I + r_n A₁)⁻¹ x_n = (I + r_n A_m)⁻¹ v_{n,m−1}; then A_m v_{n,m} = (v_{n,m−1} − v_{n,m})/r_n → 0, which implies that x ∈ A_m⁻¹0. Thus x ∈ ∩_{i=1}^{m} A_i⁻¹0.

Next, we shall show that x ∈ ∩_{j=1}^{l} B_j⁻¹0.

From step 1, we may assume that there exists M₃ > 0 such that 2∥x_n − p∥ ∥f(p) − p∥ + ∥f(p) − p∥² ≤ M₃, 2∥y_n − p∥ ∥f(p) − p∥ + ∥f(p) − p∥² ≤ M₃, and 2∥u_n − p∥ ∥f(p) − p∥ + ∥f(p) − p∥² ≤ M₃.

Now we compute, for p ∈ D:

∥y_n − p∥² ≤ [1 − β_n(1−k)] ∥x_n − p∥² + β_n ∥f(p) − p∥² + 2β_n k ∥x_n − p∥ ∥f(p) − p∥ ≤ [1 − β_n(1−k)] ∥x_n − p∥² + β_n M₃.
(2.13)

By using Lemma 1.2,

∥u_n − p∥² ≤ kϑ_n ∥y_n − p∥² + 2ϑ_n k ∥f(p) − p∥ ∥y_n − p∥ + ϑ_n ∥f(p) − p∥² + (1 − ϑ_n)( a₀ ∥y_n − p∥² + ∑_{j=1}^{l} a_j ∥(I + r_n B_j)⁻¹ y_n − p∥² ) ≤ kϑ_n ∥y_n − p∥² + 2ϑ_n k ∥f(p) − p∥ ∥y_n − p∥ + ϑ_n ∥f(p) − p∥² + (1 − ϑ_n)[ a₀ ∥y_n − p∥² + ∑_{j=1}^{l} a_j( ∥y_n − p∥² − ∥(I + r_n B_j)⁻¹ y_n − y_n∥² ) ] = [1 − ϑ_n(1−k)] ∥y_n − p∥² + 2ϑ_n k ∥y_n − p∥ ∥f(p) − p∥ + ϑ_n ∥f(p) − p∥² − (1 − ϑ_n) ∑_{j=1}^{l} a_j ∥(I + r_n B_j)⁻¹ y_n − y_n∥² ≤ ∥y_n − p∥² − (1 − ϑ_n) ∑_{j=1}^{l} a_j ∥(I + r_n B_j)⁻¹ y_n − y_n∥² + ϑ_n M₃.
(2.14)

Then (2.13) and (2.14) imply that

∥x_{n+1} − p∥² ≤ [1 − α_n(1−k)] ∥u_n − p∥² + 2α_n k ∥u_n − p∥ ∥f(p) − p∥ + α_n ∥f(p) − p∥² ≤ [1 − α_n(1−k)] ∥u_n − p∥² + α_n M₃ ≤ [1 − α_n(1−k)][ ∥y_n − p∥² − (1 − ϑ_n) ∑_{j=1}^{l} a_j ∥(I + r_n B_j)⁻¹ y_n − y_n∥² + ϑ_n M₃ ] + α_n M₃ ≤ [1 − α_n(1−k)][1 − β_n(1−k)] ∥x_n − p∥² + [1 − α_n(1−k)] M₃ (β_n + ϑ_n) + α_n M₃ − [1 − α_n(1−k)](1 − ϑ_n) ∑_{j=1}^{l} a_j ∥(I + r_n B_j)⁻¹ y_n − y_n∥².
(2.15)

From step 1, we know that lim_{n→∞} ∥x_n − p∥ exists; then (2.15) implies that

∥(I + r_n B_j)⁻¹ y_n − y_n∥ → 0, as n → ∞, for j = 1, 2, …, l.
(2.16)
(2.16)
From the iterative scheme (A), β_n → 0, and the results of step 1, we know that

∥y_n − S_{r_n}^{A_m A_{m−1} ⋯ A₁} x_n∥ = β_n ∥f(x_n) − S_{r_n}^{A_m A_{m−1} ⋯ A₁} x_n∥ → 0, as n → ∞.

Then y_n ⇀ x, since S_{r_n}^{A_m A_{m−1} ⋯ A₁} x_n ⇀ x, as n → ∞.

Thus, from (2.16), we have (I + r_n B_j)⁻¹ y_n ⇀ x; imitating the proof of x ∈ ∩_{i=1}^{m} A_i⁻¹0, we can see that x ∈ ∩_{j=1}^{l} B_j⁻¹0, and then x ∈ D.

Step 5. x_n ⇀ v₀ = lim_{n→∞} P_D x_n.

In fact, for y ∈ D,

⟨P_D x_n − y, P_D x_n − x_n⟩ ≤ 0.
(2.17)

From step 3, we know that P_D x_n → v₀, as n → ∞. Let {x_{n_i}} be a subsequence of {x_n} which converges weakly to x₀. Then x₀ ∈ D from step 4. Taking the limits on both sides of (2.17), we see that ⟨v₀ − y, v₀ − x₀⟩ ≤ 0.

Letting y = x₀, we have x₀ = v₀.

Supposing {x_{n_j}} is another subsequence of {x_n} such that x_{n_j} ⇀ x₁ as j → ∞, and repeating the above proof, we get x₁ = v₀. Since all of the weakly convergent subsequences of {x_n} converge to the same element v₀, the whole sequence {x_n} converges weakly to v₀.

This completes the proof. □

Remark 2.1 To prove the strong convergence result in Theorem 2.2, we first need to establish the following two lemmas, whose proofs involve some new techniques.

Lemma 2.2 Let H, C, D, A_i, B_j : C → C (i = 1, 2, …, m; j = 1, 2, …, l), S_r^{A_m A_{m−1} ⋯ A₁} and W_r be the same as those in Lemma 2.1. Suppose that D ≠ ∅. Then F(S_r^{A_m A_{m−1} ⋯ A₁}) = ∩_{i=1}^{m} A_i⁻¹0 and F(W_r) = ∩_{j=1}^{l} B_j⁻¹0, for r > 0.

Proof It is easy to check that ∩_{i=1}^{m} A_i⁻¹0 ⊂ F(S_r^{A_m A_{m−1} ⋯ A₁}) and ∩_{j=1}^{l} B_j⁻¹0 ⊂ F(W_r), for r > 0.

Next, we shall show that F(W_r) ⊂ ∩_{j=1}^{l} B_j⁻¹0.

Let p ∈ F(W_r) and q ∈ ∩_{j=1}^{l} B_j⁻¹0. Since ∩_{j=1}^{l} B_j⁻¹0 ⊂ F(W_r), we have q = W_r q. Thus

∥q − p∥ = ∥a₀(q − p) + a₁(J_r^{B₁}q − J_r^{B₁}p) + ⋯ + a_l(J_r^{B_l}q − J_r^{B_l}p)∥ ≤ a₀∥q − p∥ + a₁∥J_r^{B₁}q − J_r^{B₁}p∥ + ⋯ + a_l∥J_r^{B_l}q − J_r^{B_l}p∥ ≤ ∥q − p∥.

Then a₀(∥q − p∥ − ∥q − p∥) + a₁(∥q − p∥ − ∥J_r^{B₁}q − J_r^{B₁}p∥) + ⋯ + a_l(∥q − p∥ − ∥J_r^{B_l}q − J_r^{B_l}p∥) = 0. Since ∥q − p∥ − ∥J_r^{B_j}q − J_r^{B_j}p∥ ≥ 0, j = 1, 2, …, l, it follows that ∥q − p∥ − ∥J_r^{B_j}q − J_r^{B_j}p∥ = 0, j = 1, 2, …, l. That is,

∥q − p∥ = ∥J_r^{B_j}q − J_r^{B_j}p∥ = ∥q − J_r^{B_j}p∥, j = 1, 2, …, l.
(2.18)

By using Lemma 1.2 and (2.18), we know that ∥p − J_r^{B_j}p∥² ≤ ∥q − p∥² − ∥q − J_r^{B_j}p∥² = 0, j = 1, 2, …, l. Thus p = J_r^{B_j}p, which implies that p ∈ B_j⁻¹0, j = 1, 2, …, l. Then F(W_r) ⊂ ∩_{j=1}^{l} B_j⁻¹0, for r > 0.

Finally, we shall show that F(S_r^{A_m A_{m−1} ⋯ A₁}) ⊂ ∩_{i=1}^{m} A_i⁻¹0.

Let p ∈ F(S_r^{A_m A_{m−1} ⋯ A₁}), so that p = S_r^{A_m A_{m−1} ⋯ A₁}p. Let q ∈ ∩_{i=1}^{m} A_i⁻¹0; then q = S_r^{A_m A_{m−1} ⋯ A₁}q, since ∩_{i=1}^{m} A_i⁻¹0 ⊂ F(S_r^{A_m A_{m−1} ⋯ A₁}). Therefore,

∥q − p∥ = ∥S_r^{A_m A_{m−1} ⋯ A₁}q − S_r^{A_m A_{m−1} ⋯ A₁}p∥ ≤ ∥S_r^{A_{m−1} A_{m−2} ⋯ A₁}q − S_r^{A_{m−1} A_{m−2} ⋯ A₁}p∥ ≤ ∥S_r^{A_{m−2} A_{m−3} ⋯ A₁}q − S_r^{A_{m−2} A_{m−3} ⋯ A₁}p∥ ≤ ⋯ ≤ ∥(I + rA₁)⁻¹q − (I + rA₁)⁻¹p∥ ≤ ∥q − p∥.
(2.19)

From (2.19), we know that

∥q − (I + rA₁)⁻¹p∥ = ∥q − p∥.
(2.20)

Noticing that (2.20) and (2.18) have the same form, and repeating the proof of p = J_r^{B_j}p, we know that p = (I + rA₁)⁻¹p and then p ∈ A₁⁻¹0.

Since p ∈ A₁⁻¹0, using (2.19) again, we know that

∥q − p∥ = ∥(I + rA₂)⁻¹(I + rA₁)⁻¹q − (I + rA₂)⁻¹(I + rA₁)⁻¹p∥ = ∥q − (I + rA₂)⁻¹p∥.
(2.21)

Repeating the above proof again, p ∈ A₂⁻¹0.

By induction, we have p ∈ A_m⁻¹0. Therefore, F(S_r^{A_m A_{m−1} ⋯ A₁}) ⊂ ∩_{i=1}^{m} A_i⁻¹0.

This completes the proof. □

Lemma 2.3 Let H, C, D, A_i, B_j : C → C (i = 1, 2, …, m; j = 1, 2, …, l), S_r^{A_m A_{m−1} ⋯ A₁} and W_r be the same as those in Lemma 2.1. Suppose that D ≠ ∅. Then W_r ∘ S_r^{A_m A_{m−1} ⋯ A₁} : C → C is nonexpansive and F(W_r S_r^{A_m A_{m−1} ⋯ A₁}) = D, for r > 0.

Proof It is easy to check that W_r S_r^{A_m A_{m−1} ⋯ A₁} : C → C is nonexpansive. We are left to show that F(W_r S_r^{A_m A_{m−1} ⋯ A₁}) = D.

Let p ∈ D; then, from Lemma 2.2, p = S_r^{A_m A_{m−1} ⋯ A₁}p and p = W_r p. Thus p = W_r S_r^{A_m A_{m−1} ⋯ A₁}p, which implies that D ⊂ F(W_r S_r^{A_m A_{m−1} ⋯ A₁}).

On the other hand, let p ∈ F(W_r S_r^{A_m A_{m−1} ⋯ A₁}); then p = W_r S_r^{A_m A_{m−1} ⋯ A₁}p. Let q ∈ D; then q = W_r S_r^{A_m A_{m−1} ⋯ A₁}q, since D ⊂ F(W_r S_r^{A_m A_{m−1} ⋯ A₁}). Then Lemma 2.1 ensures that

∥p − q∥ ≤ ∥S_r^{A_m A_{m−1} ⋯ A₁}p − S_r^{A_m A_{m−1} ⋯ A₁}q∥ ≤ ∥S_r^{A_{m−1} ⋯ A₁}p − S_r^{A_{m−1} ⋯ A₁}q∥ ≤ ⋯ ≤ ∥J_r^{A₁}p − J_r^{A₁}q∥ ≤ ∥p − q∥,

which implies that

∥J_r^{A₁}p − q∥ = ∥S_r^{A₂A₁}p − q∥ = ⋯ = ∥S_r^{A_m A_{m−1} ⋯ A₁}p − q∥ = ∥p − q∥.

Using the same method as that in Lemma 2.2, p ∈ ∩_{i=1}^{m} A_i⁻¹0. Thus p = S_r^{A_m A_{m−1} ⋯ A₁}p. Since p = W_r S_r^{A_m A_{m−1} ⋯ A₁}p, we get p = W_r p, which implies that p ∈ ∩_{j=1}^{l} B_j⁻¹0 from Lemma 2.2. Therefore, F(W_r S_r^{A_m A_{m−1} ⋯ A₁}) ⊂ D.

This completes the proof. □

Theorem 2.2 Suppose H, D, C, {A_i}_{i=1}^{m}, {B_j}_{j=1}^{l} and f are the same as those in Theorem 2.1. Let {x_n} be generated by the iterative scheme (A). Suppose that {α_n}, {β_n} and {ϑ_n} are three sequences in (0,1) and {r_n} ⊂ (0,+∞) satisfies the following conditions:

(i) ∑_{n=1}^{∞} |α_{n+1} − α_n| < +∞, and α_n → 0, as n → ∞;

(ii) ∑_{n=1}^{∞} β_n = +∞, ∑_{n=1}^{∞} |β_{n+1} − β_n| < +∞, and β_n → 0, as n → ∞;

(iii) ∑_{n=1}^{∞} |ϑ_{n+1} − ϑ_n| < +∞, and ϑ_n → 0, as n → ∞;

(iv) ∑_{n=1}^{∞} |r_{n+1} − r_n| < +∞, and r_n → r ≥ ε > 0, as n → ∞.

Then {x_n} converges strongly to a point p₀ ∈ D, which is the unique solution of the following variational inequality:

⟨f(p₀) − p₀, p₀ − q⟩ ≥ 0,  ∀q ∈ D.
(2.22)

Proof We shall split the proof into five steps:

Step 1. {x_n} is bounded.

For p ∈ D,

∥y_n − p∥ ≤ [1 − β_n(1−k)] ∥x_n − p∥ + β_n ∥f(p) − p∥,  ∥u_n − p∥ ≤ [1 − ϑ_n(1−k)] ∥y_n − p∥ + ϑ_n ∥f(p) − p∥.

Let δ_n = α_n + β_n + ϑ_n − (α_nβ_n + α_nϑ_n + β_nϑ_n)(1−k) + α_nβ_nϑ_n(1−k)². Then

∥x_{n+1} − p∥ ≤ [1 − α_n(1−k)] ∥u_n − p∥ + α_n ∥f(p) − p∥ ≤ [1 − α_n(1−k)][1 − β_n(1−k)][1 − ϑ_n(1−k)] ∥x_n − p∥ + {[1 − α_n(1−k)]ϑ_n + α_n + [1 − α_n(1−k)][1 − ϑ_n(1−k)]β_n} ∥f(p) − p∥ = [1 − δ_n(1−k)] ∥x_n − p∥ + δ_n ∥f(p) − p∥ ≤ max{ ∥x_n − p∥, (1/(1−k)) ∥f(p) − p∥ }, ∀n ≥ 1.

By induction, ∥x_n − p∥ ≤ max{ ∥x₁ − p∥, (1/(1−k)) ∥f(p) − p∥ }, ∀n ≥ 1. Thus {x_n} is bounded.

Step 2. lim_{n→∞} ∥x_{n+1} − x_n∥ = 0 and lim_{n→∞} ∥x_n − u_n∥ = 0.

In fact,

∥y_n − y_{n−1}∥ ≤ |β_n − β_{n−1}| ∥f(x_n) − S_{r_n}^{A_m ⋯ A₁} x_n∥ + β_{n−1} ∥f(x_n) − f(x_{n−1})∥ + (1 − β_{n−1}) ∥S_{r_n}^{A_m ⋯ A₁} x_n − S_{r_{n−1}}^{A_m ⋯ A₁} x_{n−1}∥ ≤ 2M₁ |β_n − β_{n−1}| + β_{n−1} k ∥x_n − x_{n−1}∥ + (1 − β_{n−1}) ∥S_{r_n}^{A_m ⋯ A₁} x_n − S_{r_{n−1}}^{A_m ⋯ A₁} x_{n−1}∥.
(2.23)

Next we estimate ∥S_{r_n}^{A_m ⋯ A₁} x_n − S_{r_{n−1}}^{A_m ⋯ A₁} x_{n−1}∥.

If r_{n−1} ≤ r_n, then in view of Lemma 1.4,

∥J_{r_n}^{A₁} x_n − J_{r_{n−1}}^{A₁} x_{n−1}∥ = ∥J_{r_{n−1}}^{A₁}( (r_{n−1}/r_n)x_n + (1 − r_{n−1}/r_n) J_{r_n}^{A₁} x_n ) − J_{r_{n−1}}^{A₁} x_{n−1}∥ ≤ ∥(r_{n−1}/r_n)x_n + (1 − r_{n−1}/r_n) J_{r_n}^{A₁} x_n − x_{n−1}∥ ≤ (r_{n−1}/r_n) ∥x_n − x_{n−1}∥ + (1 − r_{n−1}/r_n) ∥J_{r_n}^{A₁} x_n − x_{n−1}∥ ≤ ∥x_n − x_{n−1}∥ + ((r_n − r_{n−1})/ε) ∥J_{r_n}^{A₁} x_n − x_{n−1}∥.
(2.24)

For p ∈ D, let M₄ = M₁ + ∥p∥; then

∥J_{r_n}^{A₁} x_n − x_{n−1}∥ ≤ ∥(I + r_n A₁)⁻¹ x_n − p∥ + ∥p − x_{n−1}∥ ≤ ∥x_n − p∥ + ∥p − x_{n−1}∥ ≤ 2M₄.
(2.25)

From (2.24) and (2.25), we know that

∥J_{r_n}^{A₁} x_n − J_{r_{n−1}}^{A₁} x_{n−1}∥ ≤ ∥x_n − x_{n−1}∥ + 2M₄(r_n − r_{n−1})/ε.
(2.26)

Notice that S_{r_n}^{A₂A₁} x_n = J_{r_n}^{A₂} J_{r_n}^{A₁} x_n and S_{r_{n−1}}^{A₂A₁} x_{n−1} = J_{r_{n−1}}^{A₂} J_{r_{n−1}}^{A₁} x_{n−1}; similar to (2.26), we have

∥S_{r_n}^{A₂A₁} x_n − S_{r_{n−1}}^{A₂A₁} x_{n−1}∥ ≤ ∥J_{r_n}^{A₁} x_n − J_{r_{n−1}}^{A₁} x_{n−1}∥ + 2M₄(r_n − r_{n−1})/ε.
(2.27)

From (2.26) and (2.27), we have

∥S_{r_n}^{A₂A₁} x_n − S_{r_{n−1}}^{A₂A₁} x_{n−1}∥ ≤ ∥x_n − x_{n−1}∥ + 2 × 2M₄(r_n − r_{n−1})/ε.

Then, by induction, we get the following result:

∥S_{r_n}^{A_m ⋯ A₁} x_n − S_{r_{n−1}}^{A_m ⋯ A₁} x_{n−1}∥ ≤ ∥x_n − x_{n−1}∥ + 2mM₄(r_n − r_{n−1})/ε.
(2.28)

Putting (2.28) into (2.23), and letting M₅ = max{2mM₄/ε, 2M₁}, we have

∥y_n − y_{n−1}∥ ≤ [1 − β_n(1−k)] ∥x_n − x_{n−1}∥ + (2mM₄/ε)(r_n − r_{n−1}) + 2M₁ |β_n − β_{n−1}| ≤ [1 − β_n(1−k)] ∥x_n − x_{n−1}∥ + M₅[ (r_n − r_{n−1}) + |β_n − β_{n−1}| ].
(2.29)
(2.29)
If r_n ≤ r_{n−1}, then imitating the above proof, we have

∥y_n − y_{n−1}∥ ≤ [1 − β_n(1−k)] ∥x_n − x_{n−1}∥ + M₅[ (r_{n−1} − r_n) + |β_n − β_{n−1}| ].
(2.30)

Combining (2.29) and (2.30),

∥y_n − y_{n−1}∥ ≤ [1 − β_n(1−k)] ∥x_n − x_{n−1}∥ + M₅( |r_{n−1} − r_n| + |β_n − β_{n−1}| ).
(2.31)

Similar to the discussion of (2.24), we have

∥W_{r_n} y_n − W_{r_{n−1}} y_{n−1}∥ ≤ a₀ ∥y_n − y_{n−1}∥ + ∑_{j=1}^{l} a_j ∥J_{r_n}^{B_j} y_n − J_{r_{n−1}}^{B_j} y_{n−1}∥ ≤ a₀ ∥y_n − y_{n−1}∥ + ∑_{j=1}^{l} a_j( ∥y_n − y_{n−1}∥ + (|r_n − r_{n−1}|/ε) ∥J_{r_n}^{B_j} y_n − y_{n−1}∥ ) ≤ ∥y_n − y_{n−1}∥ + 2M₁ |r_n − r_{n−1}|/ε.
(2.32)
(2.32)
Using (2.32), we get

∥u_n − u_{n−1}∥ ≤ ϑ_n k ∥y_n − y_{n−1}∥ + |ϑ_n − ϑ_{n−1}|( ∥f(y_{n−1})∥ + ∥W_{r_{n−1}} y_{n−1}∥ ) + (1 − ϑ_n) ∥W_{r_n} y_n − W_{r_{n−1}} y_{n−1}∥ ≤ [1 − ϑ_n(1−k)] ∥y_n − y_{n−1}∥ + 2M₁ |ϑ_n − ϑ_{n−1}| + (2M₁/ε) |r_n − r_{n−1}|.
(2.33)

Based on (2.31) and (2.33), and letting M₆ = M₅ + 2M₁/ε, we have

∥x_{n+1} − x_n∥ ≤ α_n ∥f(u_n) − f(u_{n−1})∥ + |α_n − α_{n−1}| ∥f(u_{n−1})∥ + (1 − α_n) ∥u_n − u_{n−1}∥ + |α_n − α_{n−1}| ∥u_{n−1}∥ ≤ [1 − α_n(1−k)] ∥u_n − u_{n−1}∥ + 2M₁ |α_n − α_{n−1}| ≤ [1 − α_n(1−k)][1 − ϑ_n(1−k)] ∥y_n − y_{n−1}∥ + 2M₁( |ϑ_n − ϑ_{n−1}| + |α_n − α_{n−1}| ) + (2M₁/ε) |r_n − r_{n−1}| ≤ [1 − β_n(1−k)] ∥x_n − x_{n−1}∥ + M₆( |r_n − r_{n−1}| + |β_n − β_{n−1}| + |α_n − α_{n−1}| + |ϑ_n − ϑ_{n−1}| ).

In view of Lemma 1.6, we know that ∥x_{n+1} − x_n∥ → 0, as n → ∞. Combining this with the fact that ∥x_{n+1} − u_n∥ = α_n ∥f(u_n) − u_n∥ → 0, we can easily see that ∥x_n − u_n∥ ≤ ∥x_{n+1} − x_n∥ + ∥x_{n+1} − u_n∥ → 0, as n → ∞.

Step 3. ∥W_r u_n − u_n∥ → 0 and ∥S_r^{A_m A_{m−1} ⋯ A₁} u_n − u_n∥ → 0, as n → ∞.

In view of Lemma 1.4 again, we know that

∥S_{r_n}^{A₁} x_n − S_r^{A₁} x_n∥ = ∥J_r^{A₁}( (r/r_n)x_n + (1 − r/r_n) J_{r_n}^{A₁} x_n ) − J_r^{A₁} x_n∥ ≤ |1 − r/r_n| ∥J_{r_n}^{A₁} x_n − x_n∥ ≤ 2M₁ |1 − r/r_n|,

and then

∥S_{r_n}^{A₂A₁} x_n − S_r^{A₂A₁} x_n∥ ≤ (r/r_n) ∥J_{r_n}^{A₁} x_n − J_r^{A₁} x_n∥ + |1 − r/r_n| ∥S_{r_n}^{A₂A₁} x_n − J_r^{A₁} x_n∥ ≤ 2M₁ |1 − r/r_n|( (r/r_n) + 1 ).

By induction,

∥S_{r_n}^{A_m ⋯ A₁} x_n − S_r^{A_m ⋯ A₁} x_n∥ ≤ 2M₁ |1 − r/r_n|[ (r/r_n)^{m−1} + ⋯ + (r/r_n) + 1 ] → 0,
(2.34)

as n → ∞, since r_n → r.

For p ∈ D, continuing the computation of (2.15), we have

0 ≤ [1 − α_n(1−k)](1 − ϑ_n) ∑_{j=1}^{l} a_j ∥(I + r_n B_j)⁻¹ y_n − y_n∥² ≤ ∥x_n − p∥² − ∥x_{n+1} − p∥² + M₃(α_n + β_n + ϑ_n).

From step 2, we know that ∥x_n − x_{n+1}∥ → 0; then ∥(I + r_n B_j)⁻¹ y_n − y_n∥ → 0, j = 1, 2, …, l, which implies that

∥W_{r_n} y_n − y_n∥ → 0, as n → ∞.
(2.35)

Notice that ∥u_n − W_{r_n} y_n∥ = ϑ_n ∥f(y_n) − W_{r_n} y_n∥ → 0 and ∥y_n − S_{r_n}^{A_m ⋯ A₁} x_n∥ = β_n ∥f(x_n) − S_{r_n}^{A_m ⋯ A₁} x_n∥ → 0, as n → ∞.

Combining these with (2.34), (2.35), and step 2, we know that

∥u_n − S_r^{A_m ⋯ A₁} u_n∥ ≤ ∥u_n − W_{r_n} y_n∥ + ∥W_{r_n} y_n − y_n∥ + ∥y_n − S_{r_n}^{A_m ⋯ A₁} x_n∥ + ∥S_{r_n}^{A_m ⋯ A₁} x_n − S_r^{A_m ⋯ A₁} x_n∥ + ∥S_r^{A_m ⋯ A₁} x_n − S_r^{A_m ⋯ A₁} u_n∥ → 0, as n → ∞.

Using Lemma 1.4 again,

∥W_{r_n} y_n − W_r y_n∥ ≤ ∑_{j=1}^{l} a_j ∥J_{r_n}^{B_j} y_n − J_r^{B_j} y_n∥ ≤ 2M₁(1 − a₀) |1 − r/r_n| → 0.

Since ∥W_r u_n − W_r y_n∥ ≤ ∥u_n − y_n∥ ≤ ϑ_n ∥f(y_n) − y_n∥ + (1 − ϑ_n) ∥W_{r_n} y_n − y_n∥ → 0, we get ∥u_n − W_r u_n∥ ≤ ∥u_n − W_{r_n} y_n∥ + ∥W_{r_n} y_n − W_r y_n∥ + ∥W_r y_n − W_r u_n∥ → 0, as n → ∞.

Step 4. lim sup_{n→∞} ⟨f(p₀) − p₀, u_n − p₀⟩ ≤ 0, lim sup_{n→∞} ⟨f(p₀) − p₀, x_{n+1} − p₀⟩ ≤ 0, and lim sup_{n→∞} ⟨f(p₀) − p₀, y_n − p₀⟩ ≤ 0, where p₀ satisfies (2.22).

Using Lemmas 1.5 and 2.3, we know that if we let z_t = t f(z_t) + (1 − t) W_r S_r^{A_m A_{m−1} ⋯ A₁} z_t, for r > 0 and t ∈ (0,1), then z_t → p₀ ∈ F(W_r S_r^{A_m A_{m−1} ⋯ A₁}) = D, as t → 0⁺, and p₀ satisfies (2.22).

From step 3, we may choose t_n ∈ (0,1) such that t_n → 0, ∥S_r^{A_m ⋯ A₁} u_n − u_n∥/t_n → 0, and ∥W_r u_n − u_n∥/t_n → 0, as n → ∞.

Using Lemma 1.7,

∥z_{t_n} − u_n∥² ≤ (1 − t_n)² ∥W_r S_r^{A_m ⋯ A₁} z_{t_n} − u_n∥² + 2t_n ⟨f(z_{t_n}) − u_n, z_{t_n} − u_n⟩ ≤ (1 − t_n)²[ ∥z_{t_n} − u_n∥ + ∥u_n − S_r^{A_m ⋯ A₁} u_n∥ + ∥u_n − W_r u_n∥ ]² + 2t_n ⟨f(z_{t_n}) − z_{t_n}, z_{t_n} − u_n⟩ + 2t_n ∥z_{t_n} − u_n∥².

Then

⟨f(z_{t_n}) − z_{t_n}, u_n − z_{t_n}⟩ ≤ (t_n/2) ∥z_{t_n} − u_n∥² + ((1 − t_n)²/t_n) ∥z_{t_n} − u_n∥( ∥S_r^{A_m ⋯ A₁} u_n − u_n∥ + ∥u_n − W_r u_n∥ ) + ((1 − t_n)²/(2t_n))( ∥S_r^{A_m ⋯ A₁} u_n − u_n∥ + ∥W_r u_n − u_n∥ )².
(2.36)

Since {S_r^{A_m ⋯ A₁} u_n}, {W_r u_n}, {x_n}, {u_n} and {z_{t_n}} are all bounded, and ∥S_r^{A_m ⋯ A₁} u_n − u_n∥/t_n → 0 and ∥W_r u_n − u_n∥/t_n → 0, it follows from (2.36) that lim sup_{n→∞} ⟨f(z_{t_n}) − z_{t_n}, u_n − z_{t_n}⟩ ≤ 0.

Recalling that z_{t_n} → p₀, we have ⟨z_{t_n} − p₀, u_n − z_{t_n}⟩ → 0. Thus lim sup_{n→∞} ⟨f(z_{t_n}) − p₀, u_n − z_{t_n}⟩ ≤ 0. Since ⟨f(z_{t_n}) − p₀, u_n − p₀⟩ = ⟨f(z_{t_n}) − p₀, u_n − z_{t_n}⟩ + ⟨f(z_{t_n}) − p₀, z_{t_n} − p₀⟩, we get lim sup_{n→∞} ⟨f(p₀) − p₀, u_n − p₀⟩ ≤ 0. Then, from step 2, lim sup_{n→∞} ⟨f(p₀) − p₀, x_{n+1} − p₀⟩ ≤ 0.

Noticing that

⟨f(p₀) − p₀, y_n − p₀⟩ = ⟨f(p₀) − p₀, y_n − W_{r_n} y_n⟩ + ⟨f(p₀) − p₀, W_{r_n} y_n − u_n⟩ + ⟨f(p₀) − p₀, u_n − x_{n+1}⟩ + ⟨f(p₀) − p₀, x_{n+1} − p₀⟩,

and using (2.35), the iterative scheme (A), and the result of step 2, we have lim sup_{n→∞} ⟨f(p₀) − p₀, y_n − p₀⟩ ≤ 0.

Step 5. x_n → p₀, which satisfies (2.22), as n → ∞.

Using Lemma 1.7, we know that

∥y_n − p₀∥² ≤ [1 − β_n(1−k)] ∥x_n − p₀∥² + 2β_n ⟨f(p₀) − p₀, y_n − p₀⟩.
(2.37)

Similarly, we have

∥u_n − p₀∥² ≤ [1 − ϑ_n(1−k)] ∥y_n − p₀∥² + 2ϑ_n ⟨f(p₀) − p₀, u_n − p₀⟩.
(2.38)

Letting M₇ = max{ (M₁ + ∥p₀∥)², 2(M₁ + ∥p₀∥)(∥f(p₀)∥ + ∥p₀∥) } and using (2.37) and (2.38), we have

∥x_{n+1} − p₀∥² ≤ [1 − α_n(1−k)] ∥u_n − p₀∥² + 2α_n ⟨f(p₀) − p₀, x_{n+1} − p₀⟩ ≤ [1 − α_n(1−k)][1 − β_n(1−k)][1 − ϑ_n(1−k)] ∥x_n − p₀∥² + 2[1 − α_n(1−k)][1 − ϑ_n(1−k)] β_n ⟨f(p₀) − p₀, y_n − p₀⟩ + 2[1 − α_n(1−k)] ϑ_n ⟨f(p₀) − p₀, u_n − p₀⟩ + 2α_n ⟨f(p₀) − p₀, x_{n+1} − p₀⟩ ≤ [1 − (1−k)(α_n + β_n + ϑ_n)] ∥x_n − p₀∥² + M₇(1−k)²(α_nβ_n + β_nϑ_n + α_nϑ_n) + 2α_nϑ_n(1−k) ⟨p₀ − f(p₀), u_n − p₀⟩ + 2(α_nβ_n + β_nϑ_n)(1−k) ⟨p₀ − f(p₀), y_n − p₀⟩ + 2α_nβ_nϑ_n(1−k)² ⟨f(p₀) − p₀, y_n − p₀⟩ + 2α_n ⟨f(p₀) − p₀, x_{n+1} − p₀⟩ + 2β_n ⟨f(p₀) − p₀, y_n − p₀⟩ + 2ϑ_n ⟨f(p₀) − p₀, u_n − p₀⟩ ≤ [1 − (1−k)(α_n + β_n + ϑ_n)] ∥x_n − p₀∥² + M₇(1−k)²(α_nβ_n + β_nϑ_n + α_nϑ_n) + M₇(1−k)(α_nβ_n + β_nϑ_n + α_nϑ_n) + 2M₇α_nβ_nϑ_n(1−k)² + 2α_n ⟨f(p₀) − p₀, x_{n+1} − p₀⟩ + 2β_n ⟨f(p₀) − p₀, y_n − p₀⟩ + 2ϑ_n ⟨f(p₀) − p₀, u_n − p₀⟩.
(2.39)

Let c_n = (α_n + β_n + ϑ_n)(1−k); then c_n → 0 and ∑_{n=1}^{∞} c_n = +∞.

Let b_n = M₇[ (2−k)(α_nβ_n + β_nϑ_n + α_nϑ_n)/(α_n + β_n + ϑ_n) + 2(1−k)α_nβ_nϑ_n/(α_n + β_n + ϑ_n) ] + (2α_n/((α_n + β_n + ϑ_n)(1−k))) ⟨f(p₀) − p₀, x_{n+1} − p₀⟩ + (2ϑ_n/((α_n + β_n + ϑ_n)(1−k))) ⟨f(p₀) − p₀, u_n − p₀⟩ + (2β_n/((α_n + β_n + ϑ_n)(1−k))) ⟨f(p₀) − p₀, y_n − p₀⟩.

Noticing that lim_{n→∞} (α_nβ_n + β_nϑ_n + α_nϑ_n)/(α_n + β_n + ϑ_n) = 0 and lim_{n→∞} α_nβ_nϑ_n/(α_n + β_n + ϑ_n) = 0, and using the results of step 4, we have lim sup_{n→∞} b_n ≤ 0.

Using Lemma 1.6, x_n → p₀, which satisfies (2.22), as n → ∞.

This completes the proof. □

Remark 2.2 The iterative construction in this paper generalizes and extends some corresponding ones in [2, 4, 12, 13], etc., in Hilbert spaces.

Declarations

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 11071053), Natural Science Foundation of Hebei (No. A2014207010), Key Project of Science and Research of Hebei Education Department (ZH2012080) and Key Project of Science and Research of Hebei University of Economics and Business (2013KYZ01).

Authors’ Affiliations

(1)
School of Mathematics and Statistics, Hebei University of Economics and Business

References

  1. Browder FE: Nonlinear mappings of nonexpansive and accretive type in Banach spaces. Bull. Am. Math. Soc. 1967, 73: 875–882. doi:10.1090/S0002-9904-1967-11823-8
  2. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898. doi:10.1137/0314056
  3. Mann WR: Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4: 506–510. doi:10.1090/S0002-9939-1953-0054846-3
  4. Qin XL, Su YF: Approximation of a zero point of accretive operator in Banach spaces. J. Math. Anal. Appl. 2007, 329: 415–424. doi:10.1016/j.jmaa.2006.06.067
  5. Mainge PE: Viscosity methods for zeroes of accretive operators. J. Approx. Theory 2006, 140: 127–140. doi:10.1016/j.jat.2005.11.017
  6. Qin XL, Cho SY, Wang L: Iterative algorithms with errors for zero points of m-accretive operators. Fixed Point Theory Appl. 2013, 2013: Article ID 148
  7. Ceng LC, Wu SY, Yao JC: New accuracy criteria for modified approximate proximal point algorithms in Hilbert spaces. Taiwan. J. Math. 2008, 12: 1691–1705.
  8. Xu HK: Strong convergence of an iterative method for nonexpansive and accretive operators. J. Math. Anal. Appl. 2006, 314: 631–643. doi:10.1016/j.jmaa.2005.04.082
  9. Cho YJ, Kang SM, Zhou HY: Approximate proximal point algorithms for finding zeroes of maximal monotone operators in Hilbert spaces. J. Inequal. Appl. 2008, 2008: Article ID 598191
  10. Ceng LC, Khan AR, Ansari QH, Yao JC: Strong convergence of composite iterative schemes for zeros of m-accretive operators in Banach spaces. Nonlinear Anal. 2009, 70: 1830–1840. doi:10.1016/j.na.2008.02.083
  11. Chen RD, Liu YJ, Shen XL: Iterative approximation of a zero of accretive operator in Banach space. Nonlinear Anal. 2009, 71: e346-e350. doi:10.1016/j.na.2008.11.054
  12. Zegeye H, Shahzad N: Strong convergence theorems for a common zero of a finite family of m-accretive mappings. Nonlinear Anal. 2007, 66: 1161–1169. doi:10.1016/j.na.2006.01.012
  13. Hu LG, Liu LW: A new iterative algorithm for common solutions of a finite family of accretive operators. Nonlinear Anal. 2009, 70: 2344–2351. doi:10.1016/j.na.2008.03.016
  14. Tan KK, Xu HK: Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178: 301–308. doi:10.1006/jmaa.1993.1309
  15. Barbu V: Nonlinear Semigroups and Differential Equations in Banach Space. Noordhoff, Groningen; 1976.
  16. Xu HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298: 279–291. doi:10.1016/j.jmaa.2004.04.059
  17. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256. doi:10.1112/S0024610702003332

Copyright

© Wei and Tan; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.