Mann’s type extragradient for solving split feasibility and fixed point problems of Lipschitz asymptotically quasi-nonexpansive mappings

Fixed Point Theory and Applications 2013, 2013:349

https://doi.org/10.1186/1687-1812-2013-349

  • Received: 16 September 2013
  • Accepted: 14 November 2013

Abstract

The purpose of this paper is to introduce and analyze a Mann-type extragradient method for finding a common element of the solution set Γ of the split feasibility problem and the set Fix(T) of fixed points of a Lipschitz asymptotically quasi-nonexpansive mapping T in the setting of infinite-dimensional Hilbert spaces. We prove that the sequence generated by the proposed algorithm converges weakly to an element of Fix(T) ∩ Γ under mild assumptions. The results presented in this paper improve and extend some results of Xu (Inverse Probl. 26:105018, 2010; Inverse Probl. 22:2021-2034, 2006) and others.

MSC:49J40, 47H05.

Keywords

  • split feasibility problems
  • fixed point problems
  • extragradient methods
  • asymptotically quasi-nonexpansive mappings
  • maximal monotone mappings

1 Introduction

The split feasibility problem (SFP) in finite-dimensional spaces was first introduced by Censor and Elfving [1] for modeling inverse problems which arise from phase retrieval and in medical image reconstruction [2]. Recently, it has been found that the SFP can also be used in various disciplines such as image restoration, computed tomography and radiation therapy treatment planning [3-5]. The split feasibility problem in an infinite-dimensional Hilbert space can be found in [2, 4, 6-10] and the references therein.

Throughout this paper, we always assume that H₁, H₂ are real Hilbert spaces, '→' and '⇀' denote strong and weak convergence, respectively, and F(T) is the fixed point set of a mapping T.

Let C and Q be nonempty closed convex subsets of infinite-dimensional real Hilbert spaces H₁ and H₂, respectively, and let A ∈ B(H₁, H₂), where B(H₁, H₂) denotes the class of all bounded linear operators from H₁ to H₂. The split feasibility problem (SFP) is to find a point x̂ with the property
x̂ ∈ C, Ax̂ ∈ Q.
(1.1)
In the sequel, we use Γ to denote the solution set of SFP (1.1), i.e.,
Γ = {x̂ ∈ C : Ax̂ ∈ Q}.
Assuming that the SFP is consistent (i.e., (1.1) has a solution), it is not hard to see that x ∈ C solves (1.1) if and only if it solves the fixed-point equation
x = P_C(I − γA*(I − P_Q)A)x, x ∈ C,
(1.2)

where P_C and P_Q are the (orthogonal) projections onto C and Q, respectively, γ > 0 is any positive constant, and A* denotes the adjoint of A.

To solve (1.2), Byrne [2] proposed his CQ algorithm, which generates a sequence {x_k} by
x_{k+1} = P_C(I − γA*(I − P_Q)A)x_k, k ∈ ℕ,
(1.3)

where γ ∈ (0, 2/λ) and λ is the spectral radius of the operator A*A.

The CQ algorithm (1.3) is a special case of the Krasnosel'skii-Mann (K-M) algorithm. The K-M algorithm generates a sequence {x_n} according to the recursive formula
x_{n+1} = (1 − α_n)x_n + α_n T x_n,
where {α_n} is a sequence in the interval (0, 1) and the initial guess x₀ ∈ C is chosen arbitrarily. Due to the fixed point formulation (1.2) of the SFP, we can apply the K-M algorithm to the operator P_C(I − γA*(I − P_Q)A) to obtain a sequence given by
x_{k+1} = (1 − α_k)x_k + α_k P_C(I − γA*(I − P_Q)A)x_k, k ∈ ℕ,
(1.4)

where γ ∈ (0, 2/λ) and again λ is the spectral radius of the operator A*A.

Then, as long as {α_k} satisfies the condition ∑_{k=1}^∞ α_k(1 − α_k) = +∞, the sequence generated by (1.4) converges weakly to a solution of the SFP.
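As an illustration, the relaxed iteration (1.4) can be sketched numerically in finite dimensions. The sketch below is not from the paper: it assumes, purely for illustration, that C and Q are coordinate boxes (so P_C and P_Q are componentwise clipping) and that A is a matrix.

```python
import numpy as np

def relaxed_cq(A, proj_C, proj_Q, x0, alpha=0.5, iters=2000):
    """K-M relaxed CQ iteration (1.4):
    x_{k+1} = (1 - alpha) x_k + alpha * P_C(x_k - gamma * A^T (I - P_Q) A x_k),
    with gamma in (0, 2/lambda), lambda the spectral radius of A^T A."""
    lam = np.linalg.norm(A, 2) ** 2           # spectral radius of A^T A
    gamma = 1.0 / lam                          # a valid choice in (0, 2/lambda)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Ax = A @ x
        grad = A.T @ (Ax - proj_Q(Ax))         # A^T (I - P_Q) A x
        x = (1 - alpha) * x + alpha * proj_C(x - gamma * grad)
    return x

# Hypothetical example: C = [0,1]^2, Q = [0,1]^2, A = diag(1, 2);
# a solution must satisfy x in C and A x in Q.
A = np.diag([1.0, 2.0])
proj_C = lambda v: np.clip(v, 0.0, 1.0)
proj_Q = lambda v: np.clip(v, 0.0, 1.0)
x = relaxed_cq(A, proj_C, proj_Q, np.array([2.0, 2.0]))
```

With these concrete choices the iterates settle on a point of C whose image under A lies in Q, in line with the weak convergence statement above.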

Very recently, Xu [8] continued the study of the CQ algorithm and its convergence. He applied Mann's algorithm to the SFP and proposed an averaged CQ algorithm which was proved to be weakly convergent to a solution of the SFP. He derived a weak convergence result showing that, for suitable choices of the iterative parameters (including the regularization), the sequence of iterates converges weakly to an exact solution of the SFP. He also established a strong convergence result showing that the minimum-norm solution can be obtained.

On the other hand, Korpelevich [11] introduced an iterative method, the so-called extragradient method, for finding the solution of a saddle point problem, and proved that the sequences generated by the proposed algorithm converge to a solution of the saddle point problem.

Motivated by the idea of the extragradient method in [12], Ceng et al. [13] introduced and analyzed an extragradient method with regularization for finding a common element of the solution set Γ of the split feasibility problem and the set Fix(T) of fixed points of a nonexpansive mapping T in the setting of infinite-dimensional Hilbert spaces. Chang [14] introduced an algorithm for solving split feasibility problems for total quasi-asymptotically nonexpansive mappings in infinite-dimensional Hilbert spaces.

The purpose of this paper is to study and analyze a Mann-type extragradient method for finding a common element of the solution set Γ of the SFP and the set Fix(T) of fixed points of a uniformly Lipschitz continuous, asymptotically quasi-nonexpansive mapping in a real Hilbert space. We prove that the sequence generated by the proposed method converges weakly to an element x̂ ∈ Fix(T) ∩ Γ.

2 Preliminaries

We first recall some definitions, notations and conclusions which will be needed in proving our main results.

Let H be a real Hilbert space with inner product ⟨·, ·⟩ and norm ∥·∥, and let C be a nonempty closed and convex subset of H.

Let E be a Banach space. A mapping T : E → E is said to be demi-closed at origin if for any sequence {x_n} ⊂ E with x_n ⇀ x and (I − T)x_n → 0, we have x = Tx.

A Banach space E is said to have the Opial property if for any sequence {x_n} with x_n ⇀ x, we have
lim inf_{n→∞} ∥x_n − x∥ < lim inf_{n→∞} ∥x_n − y∥, ∀y ∈ E with y ≠ x.

Remark 2.1 It is well known that each Hilbert space possesses the Opial property.

Proposition 2.2 For given x ∈ H and z ∈ C:
  (i) z = P_C x if and only if ⟨x − z, y − z⟩ ≤ 0 for all y ∈ C.
  (ii) z = P_C x if and only if ∥x − z∥² ≤ ∥x − y∥² − ∥y − z∥² for all y ∈ C.
  (iii) For all x, y ∈ H, ⟨P_C x − P_C y, x − y⟩ ≥ ∥P_C x − P_C y∥².
Definition 2.3 Let C be a nonempty, closed and convex subset of a real Hilbert space H. We denote by F(T) the set of fixed points of T, that is, F(T) = {x ∈ C : x = Tx}. Then T is said to be
  (1) nonexpansive if ∥Tx − Ty∥ ≤ ∥x − y∥ for all x, y ∈ C;
  (2) asymptotically nonexpansive if there exists a sequence {k_n} ⊂ [1, ∞) with lim_{n→∞} k_n = 1 such that
      ∥T^n x − T^n y∥ ≤ k_n∥x − y∥
      (2.1)
      for all x, y ∈ C and n ≥ 1;
  (3) asymptotically quasi-nonexpansive if there exists a sequence {k_n} ⊂ [1, ∞) with lim_{n→∞} k_n = 1 such that
      ∥T^n x − p∥ ≤ k_n∥x − p∥
      (2.2)
      for all x ∈ C, p ∈ F(T) and n ≥ 1;
  (4) uniformly L-Lipschitzian if there exists a constant L > 0 such that
      ∥T^n x − T^n y∥ ≤ L∥x − y∥
      (2.3)
      for all x, y ∈ C and n ≥ 1.
Remark 2.4 By the above definitions, it is clear that:
  (i) a nonexpansive mapping is an asymptotically quasi-nonexpansive mapping;
  (ii) a quasi-nonexpansive mapping is an asymptotically quasi-nonexpansive mapping;
  (iii) an asymptotically nonexpansive mapping is an asymptotically quasi-nonexpansive mapping.

Proposition 2.5 (see [15])

Recall that a mapping F : H → H is said to be ν-inverse strongly monotone (ν-ism) if ⟨Fx − Fy, x − y⟩ ≥ ν∥Fx − Fy∥² for all x, y ∈ H. We have the following assertions.
  (1) T is nonexpansive if and only if the complement I − T is (1/2)-ism.
  (2) If T is ν-ism and γ > 0, then γT is (ν/γ)-ism.
  (3) T is averaged if and only if the complement I − T is ν-ism for some ν > 1/2. Indeed, for α ∈ (0, 1), T is α-averaged if and only if I − T is (1/(2α))-ism.

Proposition 2.6 (see [15, 16])

Let S, T, V : H → H be given operators. We have the following assertions.
  (1) If T = (1 − α)S + αV for some α ∈ (0, 1), S is averaged and V is nonexpansive, then T is averaged.
  (2) T is firmly nonexpansive if and only if the complement I − T is firmly nonexpansive.
  (3) If T = (1 − α)S + αV for some α ∈ (0, 1), S is firmly nonexpansive and V is nonexpansive, then T is averaged.
  (4) The composite of finitely many averaged mappings is averaged. That is, if each of the mappings {T_i}_{i=1}^N is averaged, then so is the composite T_1 T_2 ⋯ T_N. In particular, if T_1 is α_1-averaged and T_2 is α_2-averaged, where α_1, α_2 ∈ (0, 1), then the composite T_1 T_2 is α-averaged, where α = α_1 + α_2 − α_1α_2.
  (5) If the mappings {T_i}_{i=1}^N are averaged and have a common fixed point, then
      ∩_{i=1}^N Fix(T_i) = Fix(T_1 ⋯ T_N).

The notation Fix(T) denotes the set of all fixed points of the mapping T, that is, Fix(T) = {x ∈ H : Tx = x}.

Lemma 2.7 (see [17], demiclosedness principle)

Let C be a nonempty closed and convex subset of a real Hilbert space H, and let S : C → C be a nonexpansive mapping with Fix(S) ≠ ∅. If the sequence {x_n} ⊂ C converges weakly to x and the sequence {(I − S)x_n} converges strongly to y, then (I − S)x = y; in particular, if y = 0, then x ∈ Fix(S).

Lemma 2.8 (see [18])

Let {a_n} and {b_n} be two sequences of nonnegative numbers satisfying the inequality
a_{n+1} ≤ a_n + b_n, ∀n ≥ 0.

If ∑_{n=0}^∞ b_n converges, then lim_{n→∞} a_n exists.

The following lemmas give some characterizations and useful properties of the metric projection P_C in a Hilbert space.

For every point x ∈ H, there exists a unique nearest point in C, denoted by P_C x, such that
∥x − P_C x∥ ≤ ∥x − y∥, ∀y ∈ C,
(2.4)

where P_C is called the metric projection of H onto C. We know that P_C is a nonexpansive mapping of H onto C.

Lemma 2.9 (see [19])

Let C be a nonempty closed and convex subset of a real Hilbert space H, and let P_C be the metric projection from H onto C. Given x ∈ H and z ∈ C, then z = P_C x if and only if the following holds:
⟨x − z, y − z⟩ ≤ 0, ∀y ∈ C.
(2.5)

Lemma 2.10 (see [20])

Let C be a nonempty, closed and convex subset of a real Hilbert space H, and let P_C : H → C be the metric projection from H onto C. Then the following inequality holds:
∥y − P_C x∥² + ∥x − P_C x∥² ≤ ∥x − y∥², ∀x ∈ H, y ∈ C.
(2.6)
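These two projection characterizations are easy to verify numerically for a concrete convex set. The snippet below checks inequality (2.5) and inequality (2.6) for the projection onto a box in ℝ³ (a hypothetical choice of C, used only for illustration, whose projection is componentwise clipping).

```python
import numpy as np

rng = np.random.default_rng(0)
lo, hi = -1.0, 1.0                        # C = [-1, 1]^3, a closed convex set
proj_C = lambda v: np.clip(v, lo, hi)     # metric projection onto the box

for _ in range(1000):
    x = rng.normal(scale=3.0, size=3)     # arbitrary x in H
    y = rng.uniform(lo, hi, size=3)       # arbitrary y in C
    z = proj_C(x)
    # Lemma 2.9: <x - z, y - z> <= 0 for all y in C
    assert np.dot(x - z, y - z) <= 1e-12
    # Lemma 2.10: ||y - z||^2 + ||x - z||^2 <= ||x - y||^2
    assert np.dot(y - z, y - z) + np.dot(x - z, x - z) <= np.dot(x - y, x - y) + 1e-12
```

Both assertions hold for every sample, as the lemmas predict for any closed convex C.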

Lemma 2.11 (see [19])

Let H be a real Hilbert space. Then the following equations hold:
  (i) ∥x − y∥² = ∥x∥² − ∥y∥² − 2⟨x − y, y⟩ for all x, y ∈ H;
  (ii) ∥tx + (1 − t)y∥² = t∥x∥² + (1 − t)∥y∥² − t(1 − t)∥x − y∥² for all t ∈ [0, 1] and x, y ∈ H.
Throughout this paper, we assume that the SFP is consistent, that is, the solution set Γ of the SFP is nonempty. Let f : H₁ → ℝ be a continuously differentiable function. The minimization problem
min_{x ∈ C} f(x) := (1/2)∥Ax − P_Q Ax∥²
(2.7)
is ill-posed. Therefore, following [8], we consider the Tikhonov regularized problem
min_{x ∈ C} f_α(x) := (1/2)∥Ax − P_Q Ax∥² + (1/2)α∥x∥²,
(2.8)

where α > 0 is the regularization parameter.

We observe that the gradient
∇f_α = ∇f + αI = A*(I − P_Q)A + αI
(2.9)

is (α + ∥A∥²)-Lipschitz continuous and α-strongly monotone.
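A quick finite-dimensional sanity check of these two properties of ∇f_α, with A a matrix and Q a box (both hypothetical choices used only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
alpha = 0.1
proj_Q = lambda v: np.clip(v, 0.0, 1.0)            # Q = [0,1]^3

def grad_f_alpha(x):
    # grad f_alpha(x) = A^T (I - P_Q) A x + alpha * x, as in (2.9)
    Ax = A @ x
    return A.T @ (Ax - proj_Q(Ax)) + alpha * x

L = alpha + np.linalg.norm(A, 2) ** 2              # (alpha + ||A||^2)-Lipschitz bound

for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    gx, gy = grad_f_alpha(x), grad_f_alpha(y)
    # alpha-strong monotonicity: <grad(x) - grad(y), x - y> >= alpha ||x - y||^2
    assert np.dot(gx - gy, x - y) >= alpha * np.dot(x - y, x - y) - 1e-9
    # Lipschitz continuity with constant alpha + ||A||^2
    assert np.linalg.norm(gx - gy) <= L * np.linalg.norm(x - y) + 1e-9
```

Strong monotonicity follows here because ∇f is the gradient of a convex function (hence monotone) and the αI term adds the strongly monotone part; the Lipschitz bound uses the nonexpansivity of I − P_Q.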

Let C be a nonempty closed convex subset of a real Hilbert space H, and let F : C → H be a monotone mapping. The variational inequality problem (VIP) is to find x ∈ C such that
⟨Fx, y − x⟩ ≥ 0, ∀y ∈ C.
The solution set of the VIP is denoted by VI(C, F). It is well known that
x ∈ VI(C, F) ⟺ x = P_C(x − λFx), ∀λ > 0.
A set-valued mapping T : H → 2^H is called monotone if for all x, y ∈ H, f ∈ Tx and g ∈ Ty imply
⟨x − y, f − g⟩ ≥ 0.
A monotone mapping T : H → 2^H is called maximal if its graph G(T) is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping T is maximal if and only if, for (x, f) ∈ H × H, ⟨x − y, f − g⟩ ≥ 0 for every (y, g) ∈ G(T) implies f ∈ Tx. Let F : C → H be a monotone and k-Lipschitz continuous mapping, and let N_C v be the normal cone to C at v ∈ C, that is,
N_C v = {w ∈ H : ⟨v − u, w⟩ ≥ 0, ∀u ∈ C}.
Define
Tv = { Fv + N_C v, if v ∈ C;  ∅, if v ∉ C.

Then T is maximal monotone and 0 ∈ Tv if and only if v ∈ VI(C, F); see [21] for more details.

We can use fixed point algorithms to solve the SFP on the basis of the following observation.

Let λ > 0 and assume that x* ∈ Γ. Then Ax* ∈ Q, which implies that (I − P_Q)Ax* = 0, and thus λA*(I − P_Q)Ax* = 0. Hence, we have the fixed point equation (I − λA*(I − P_Q)A)x* = x*. Requiring that x* ∈ C, we consider the fixed point equation
P_C(I − λ∇f)x* = P_C(I − λA*(I − P_Q)A)x* = x*.
(2.10)

It is proved in [8, Proposition 3.2] that the solutions of fixed point equation (2.10) are exactly the solutions of the SFP; namely, for given x* ∈ H₁, x* solves the SFP if and only if x* solves fixed point equation (2.10).

Proposition 2.12 (see [13])

Given x* ∈ H₁, the following statements are equivalent.
  (i) x* solves the SFP;
  (ii) x* solves fixed point equation (2.10);
  (iii) x* solves the variational inequality problem (VIP) of finding x* ∈ C such that
      ⟨∇f(x*), x − x*⟩ ≥ 0, ∀x ∈ C,
      (2.11)

where ∇f = A*(I − P_Q)A and A* is the adjoint of A.

Proof (i) ⟺ (ii). See the proof of [8, Proposition 3.2].

(ii) ⟺ (iii). Observe that
x* = P_C(I − λA*(I − P_Q)A)x*
⟺ ⟨(I − λA*(I − P_Q)A)x* − x*, x − x*⟩ ≤ 0, ∀x ∈ C
⟺ ⟨λA*(I − P_Q)Ax*, x − x*⟩ ≥ 0, ∀x ∈ C
⟺ ⟨∇f(x*), x − x*⟩ ≥ 0, ∀x ∈ C,

where ∇f = A*(I − P_Q)A. □

Remark 2.13 It is clear from Proposition 2.12 that
Γ = Fix(P_C(I − λ∇f)) = VI(C, ∇f)

for any λ > 0, where Fix(P_C(I − λ∇f)) and VI(C, ∇f) denote the set of fixed points of P_C(I − λ∇f) and the solution set of the VIP, respectively.
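Remark 2.13 can be checked numerically for a concrete instance: any point of Γ is a fixed point of P_C(I − λ∇f) for every λ > 0. The matrix and boxes below are hypothetical choices used only for illustration.

```python
import numpy as np

A = np.diag([1.0, 2.0])
proj_C = lambda v: np.clip(v, 0.0, 1.0)    # C = [0,1]^2
proj_Q = lambda v: np.clip(v, 0.0, 1.0)    # Q = [0,1]^2

# grad f(x) = A^T (I - P_Q) A x
grad_f = lambda x: A.T @ (A @ x - proj_Q(A @ x))

# x_hat in C and A x_hat = (0.7, 0.6) in Q, so x_hat lies in Gamma
x_hat = np.array([0.7, 0.3])
for lam in (0.1, 0.5, 2.0):
    fixed = proj_C(x_hat - lam * grad_f(x_hat))
    assert np.allclose(fixed, x_hat)       # x_hat = P_C(I - lam * grad f) x_hat
```

Since A x̂ ∈ Q, the residual (I − P_Q)Ax̂ vanishes, so ∇f(x̂) = 0 and the projected step leaves x̂ unchanged for any step size, exactly as the remark states.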

Proposition 2.14 (see [13])

The following statements hold:
  (i) the gradient
      ∇f_α = ∇f + αI = A*(I − P_Q)A + αI
      is (α + ∥A∥²)-Lipschitz continuous and α-strongly monotone;
  (ii) the mapping P_C(I − λ∇f_α) is a contraction with coefficient
      √(1 − λ(2α − λ(∥A∥² + α)²)) ≤ √(1 − αλ) ≤ 1 − (1/2)αλ,
      where 0 < λ ≤ α/(∥A∥² + α)²;
  (iii) if the SFP is consistent, then the strong limit lim_{α→0} x_α exists and is the minimum-norm solution of the SFP, where x_α denotes the unique solution of the regularized problem (2.8).

3 Main result

Theorem 3.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H, and let T : C → C be a uniformly L-Lipschitzian and asymptotically quasi-nonexpansive mapping with Fix(T) ∩ Γ ≠ ∅ and a sequence {k_n} ⊂ [1, ∞) such that lim_{n→∞} k_n = 1. Let {x_n}, {y_n} and {u_n} be the sequences in C generated by the following algorithm:
x₀ = x ∈ C chosen arbitrarily,
y_n = P_C(x_n − λ_n∇f_{α_n}(x_n)),
u_n = P_C(x_n − λ_n∇f_{α_n}(y_n)),
x_{n+1} = β_n u_n + (1 − β_n)T^n u_n,
(3.1)
where ∇f_{α_n} = ∇f + α_n I = A*(I − P_Q)A + α_n I, and the sequences {α_n}, {λ_n} and {β_n} satisfy the following conditions:
  (i) 0 < lim inf_{n→∞} β_n ≤ lim sup_{n→∞} β_n < 1,
  (ii) {λ_n} ⊂ (0, 1/∥A∥²) and ∑_{n=1}^∞ λ_n < ∞,
  (iii) ∑_{n=1}^∞ α_n < ∞.

Then the sequence {x_n} converges weakly to an element x̂ ∈ Fix(T) ∩ Γ.
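A finite-dimensional sketch of algorithm (3.1), with hypothetical concrete choices made only for illustration: boxes for C and Q, a diagonal A, and T = P_C, which is nonexpansive and hence asymptotically quasi-nonexpansive with k_n ≡ 1 (so T^n u = T u on C). The parameter sequences below are one admissible choice satisfying (i)-(iii).

```python
import numpy as np

def mann_extragradient(A, proj_C, proj_Q, T, x0, iters=200):
    x = np.asarray(x0, dtype=float)
    normA2 = np.linalg.norm(A, 2) ** 2
    for n in range(1, iters + 1):
        lam = min(0.9 / normA2, 1.0 / n**2)      # lambda_n in (0, 1/||A||^2), summable (ii)
        alp = 1.0 / n**2                          # alpha_n summable (iii)
        beta = 0.5                                # beta_n bounded away from 0 and 1 (i)
        grad = lambda z: A.T @ (A @ z - proj_Q(A @ z)) + alp * z   # grad f_{alpha_n}
        y = proj_C(x - lam * grad(x))             # extragradient predictor
        u = proj_C(x - lam * grad(y))             # extragradient corrector
        x = beta * u + (1 - beta) * T(u)          # Mann step (here T^n = T)
    return x

A = np.diag([1.0, 2.0])
proj_C = lambda v: np.clip(v, 0.0, 1.0)
proj_Q = lambda v: np.clip(v, 0.0, 1.0)
x = mann_extragradient(A, proj_C, proj_Q, T=proj_C, x0=np.array([0.2, 0.2]))
```

Because the Mann step is a convex combination of points of C, every iterate after the first lies in C; for this instance the iterates remain close to a point of Fix(T) ∩ Γ, as the theorem predicts.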

Proof We first show that P_C(I − λ∇f_α) is ζ-averaged for each λ ∈ (0, 2/(α + ∥A∥²)), where
ζ = (2 + λ(α + ∥A∥²))/4.
Indeed, it is easy to see that ∇f = A*(I − P_Q)A is (1/∥A∥²)-ism, that is,
⟨∇f(x) − ∇f(y), x − y⟩ ≥ (1/∥A∥²)∥∇f(x) − ∇f(y)∥².
Observe that
(α + ∥A∥²)⟨∇f_α(x) − ∇f_α(y), x − y⟩
= (α + ∥A∥²)[α∥x − y∥² + ⟨∇f(x) − ∇f(y), x − y⟩]
= α²∥x − y∥² + α⟨∇f(x) − ∇f(y), x − y⟩ + α∥A∥²∥x − y∥² + ∥A∥²⟨∇f(x) − ∇f(y), x − y⟩
≥ α²∥x − y∥² + 2α⟨∇f(x) − ∇f(y), x − y⟩ + ∥∇f(x) − ∇f(y)∥²
= ∥α(x − y) + ∇f(x) − ∇f(y)∥²
= ∥∇f_α(x) − ∇f_α(y)∥².
Hence, it follows that ∇f_α = αI + A*(I − P_Q)A is (1/(α + ∥A∥²))-ism. Thus, λ∇f_α is (1/(λ(α + ∥A∥²)))-ism. By Proposition 2.5(iii) the complement I − λ∇f_α is (λ(α + ∥A∥²)/2)-averaged. Therefore, noting that P_C is (1/2)-averaged and utilizing Proposition 2.6(iv), we know that for each λ ∈ (0, 2/(α + ∥A∥²)), P_C(I − λ∇f_α) is ζ-averaged with
ζ = 1/2 + λ(α + ∥A∥²)/2 − (1/2)·(λ(α + ∥A∥²)/2) = (2 + λ(α + ∥A∥²))/4 ∈ (0, 1).
This shows that P_C(I − λ∇f_α) is nonexpansive. Furthermore, for {λ_n} ⊂ [a, b] with a, b ∈ (0, 1/∥A∥²), utilizing the fact that lim_{n→∞} 1/(α_n + ∥A∥²) = 1/∥A∥², we may assume that
0 < a ≤ λ_n ≤ b < 1/∥A∥² = lim_{n→∞} 1/(α_n + ∥A∥²), ∀n ≥ 0.
Without loss of generality, we may assume that
0 < a ≤ λ_n ≤ b < 1/(α_n + ∥A∥²), ∀n ≥ 0.
Consequently, it follows that for each integer n ≥ 0, P_C(I − λ_n∇f_{α_n}) is ζ_n-averaged with
ζ_n = 1/2 + λ_n(α_n + ∥A∥²)/2 − (1/2)·(λ_n(α_n + ∥A∥²)/2) = (2 + λ_n(α_n + ∥A∥²))/4 ∈ (0, 1).

This immediately implies that P_C(I − λ_n∇f_{α_n}) is nonexpansive for all n ≥ 0.

We divide the remainder of the proof into several steps.

Step 1. We will prove that {x_n} is bounded. Indeed, take p ∈ Fix(T) ∩ Γ arbitrarily. Then we get P_C(I − λ_n∇f)p = p for λ_n ∈ (0, 1/∥A∥²). Since P_C and I − λ_n∇f_{α_n} are nonexpansive mappings, we have
∥y_n − p∥ = ∥P_C(I − λ_n∇f_{α_n})x_n − P_C(I − λ_n∇f)p∥
≤ ∥P_C(I − λ_n∇f_{α_n})x_n − P_C(I − λ_n∇f_{α_n})p∥ + ∥P_C(I − λ_n∇f_{α_n})p − P_C(I − λ_n∇f)p∥
≤ ∥x_n − p∥ + ∥(I − λ_n∇f_{α_n})p − (I − λ_n∇f)p∥
= ∥x_n − p∥ + ∥(p − λ_n∇f_{α_n}(p)) − (p − λ_n∇f(p))∥
= ∥x_n − p∥ + λ_n∥∇f(p) − ∇f_{α_n}(p)∥
= ∥x_n − p∥ + λ_n∥∇f(p) − ∇f(p) − α_n p∥
= ∥x_n − p∥ + λ_n α_n∥p∥
(3.2)
and
∥u_n − p∥ = ∥P_C(x_n − λ_n∇f_{α_n}(y_n)) − p∥
= ∥P_C(x_n − λ_n∇f_{α_n}(y_n)) − P_C(I − λ_n∇f)p∥
≤ ∥(x_n − λ_n∇f_{α_n}(y_n)) − (p − λ_n∇f(p))∥
= ∥(x_n − p) + λ_n(∇f(p) − ∇f_{α_n}(y_n))∥
= ∥(x_n − p) + λ_n(∇f(p) − ∇f_{α_n}(p) + ∇f_{α_n}(p) − ∇f_{α_n}(y_n))∥
≤ ∥x_n − p∥ + λ_n α_n∥p∥ + λ_n∥∇f_{α_n}(p) − ∇f_{α_n}(y_n)∥
≤ ∥x_n − p∥ + λ_n α_n∥p∥ + λ_n(α_n + ∥A∥²)∥p − y_n∥.
(3.3)
Substituting (3.2) into (3.3) and simplifying, we have
∥u_n − p∥ ≤ ∥x_n − p∥ + λ_n α_n∥p∥ + λ_n(α_n + ∥A∥²)[∥x_n − p∥ + λ_n α_n∥p∥]
= (1 + λ_n α_n + λ_n∥A∥²)∥x_n − p∥ + λ_n α_n∥p∥(1 + λ_n α_n + λ_n∥A∥²).
(3.4)
Since u_n = P_C(x_n − λ_n∇f_{α_n}(y_n)) for each n ≥ 0, by Proposition 2.2(ii) we have
∥u_n − p∥² ≤ ∥x_n − λ_n∇f_{α_n}(y_n) − p∥² − ∥x_n − λ_n∇f_{α_n}(y_n) − u_n∥²
= ∥x_n − p∥² − ∥x_n − u_n∥² + 2λ_n⟨∇f_{α_n}(y_n), p − u_n⟩
= ∥x_n − p∥² − ∥x_n − u_n∥² + 2λ_n(⟨∇f_{α_n}(y_n) − ∇f_{α_n}(p), p − y_n⟩ + ⟨∇f_{α_n}(p), p − y_n⟩ + ⟨∇f_{α_n}(y_n), y_n − u_n⟩)
≤ ∥x_n − p∥² − ∥x_n − u_n∥² + 2λ_n(⟨∇f_{α_n}(p), p − y_n⟩ + ⟨∇f_{α_n}(y_n), y_n − u_n⟩)
= ∥x_n − p∥² − ∥x_n − u_n∥² + 2λ_n[⟨(α_n I + ∇f)p, p − y_n⟩ + ⟨∇f_{α_n}(y_n), y_n − u_n⟩]
≤ ∥x_n − p∥² − ∥x_n − u_n∥² + 2λ_n[α_n⟨p, p − u_n⟩ + ⟨∇f_{α_n}(y_n), y_n − u_n⟩]
= ∥x_n − p∥² − ∥(x_n − y_n) + (y_n − u_n)∥² + 2λ_n[α_n⟨p, p − u_n⟩ + ⟨∇f_{α_n}(y_n), y_n − u_n⟩]
= ∥x_n − p∥² − ∥x_n − y_n∥² − 2⟨x_n − y_n, y_n − u_n⟩ − ∥y_n − u_n∥² + 2λ_n[α_n⟨p, p − u_n⟩ + ⟨∇f_{α_n}(y_n), y_n − u_n⟩]
= ∥x_n − p∥² − ∥x_n − y_n∥² − ∥y_n − u_n∥² + 2⟨x_n − λ_n∇f_{α_n}(y_n) − y_n, u_n − y_n⟩ + 2λ_n α_n⟨p, p − u_n⟩.
Furthermore, by Proposition 2.2(i) we have
⟨x_n − λ_n∇f_{α_n}(y_n) − y_n, u_n − y_n⟩
= ⟨x_n − λ_n∇f_{α_n}(x_n) − y_n, u_n − y_n⟩ + ⟨λ_n∇f_{α_n}(x_n) − λ_n∇f_{α_n}(y_n), u_n − y_n⟩
≤ ⟨λ_n∇f_{α_n}(x_n) − λ_n∇f_{α_n}(y_n), u_n − y_n⟩
≤ λ_n∥∇f_{α_n}(x_n) − ∇f_{α_n}(y_n)∥∥u_n − y_n∥
≤ λ_n(α_n + ∥A∥²)∥x_n − y_n∥∥u_n − y_n∥.
So, we obtain
∥u_n − p∥² ≤ ∥x_n − p∥² − ∥x_n − y_n∥² − ∥y_n − u_n∥² + 2λ_n(α_n + ∥A∥²)∥x_n − y_n∥∥u_n − y_n∥ + 2λ_n α_n∥p∥∥p − u_n∥.
(3.5)
Since
[λ_n(α_n + ∥A∥²)∥x_n − y_n∥ − ∥u_n − y_n∥]² = λ_n²(α_n + ∥A∥²)²∥x_n − y_n∥² − 2λ_n(α_n + ∥A∥²)∥x_n − y_n∥∥u_n − y_n∥ + ∥u_n − y_n∥² ≥ 0,
it follows that
2λ_n(α_n + ∥A∥²)∥x_n − y_n∥∥u_n − y_n∥ = λ_n²(α_n + ∥A∥²)²∥x_n − y_n∥² + ∥u_n − y_n∥² − [λ_n(α_n + ∥A∥²)∥x_n − y_n∥ − ∥u_n − y_n∥]² ≤ λ_n²(α_n + ∥A∥²)²∥x_n − y_n∥² + ∥u_n − y_n∥².
(3.6)
Substituting (3.6) into (3.5) and simplifying, we have
∥u_n − p∥² ≤ ∥x_n − p∥² − ∥x_n − y_n∥² − ∥y_n − u_n∥² + λ_n²(α_n + ∥A∥²)²∥x_n − y_n∥² + ∥u_n − y_n∥² + 2λ_n α_n∥p∥∥p − u_n∥
= ∥x_n − p∥² + (λ_n²(α_n + ∥A∥²)² − 1)∥x_n − y_n∥² + 2λ_n α_n∥p∥∥p − u_n∥.
(3.7)
Substituting (3.4) into (3.7) and simplifying, we have
∥u_n − p∥² ≤ ∥x_n − p∥² + (λ_n²(α_n + ∥A∥²)² − 1)∥x_n − y_n∥² + 2λ_n α_n∥p∥[(1 + λ_n α_n + λ_n∥A∥²)∥x_n − p∥ + λ_n α_n∥p∥(1 + λ_n α_n + λ_n∥A∥²)]
= ∥x_n − p∥² + (λ_n²(α_n + ∥A∥²)² − 1)∥x_n − y_n∥² + 2λ_n α_n∥p∥(1 + λ_n α_n + λ_n∥A∥²)∥x_n − p∥ + 2λ_n²α_n²∥p∥²(1 + λ_n α_n + λ_n∥A∥²).
(3.8)
Consequently, utilizing Lemma 2.11(ii) and the last relations, we conclude that
∥x_{n+1} − p∥² = ∥β_n(u_n − p) + (1 − β_n)(T^n u_n − p)∥²
= β_n∥u_n − p∥² + (1 − β_n)∥T^n u_n − p∥² − β_n(1 − β_n)∥u_n − T^n u_n∥²
≤ β_n∥u_n − p∥² + (1 − β_n)k_n²∥u_n − p∥² − β_n(1 − β_n)∥u_n − T^n u_n∥²
= (β_n + (1 − β_n)k_n²)∥u_n − p∥² − β_n(1 − β_n)∥u_n − T^n u_n∥²
≤ (β_n + (1 − β_n)k_n²){∥x_n − p∥² + (λ_n²(α_n + ∥A∥²)² − 1)∥x_n − y_n∥² + 2λ_n α_n∥p∥(1 + λ_n α_n + λ_n∥A∥²)∥x_n − p∥ + 2λ_n²α_n²∥p∥²(1 + λ_n α_n + λ_n∥A∥²)} − β_n(1 − β_n)∥u_n − T^n u_n∥²
= (k_n² − β_n(k_n² − 1))∥x_n − p∥² + (k_n² − β_n(k_n² − 1))(λ_n²(α_n + ∥A∥²)² − 1)∥x_n − y_n∥² + 2λ_n α_n(k_n² − β_n(k_n² − 1))(1 + λ_n α_n + λ_n∥A∥²)∥p∥∥x_n − p∥ + 2(k_n² − β_n(k_n² − 1))λ_n²α_n²∥p∥²(1 + λ_n α_n + λ_n∥A∥²) − β_n(1 − β_n)∥u_n − T^n u_n∥².
(3.9)
Since lim_{n→∞} k_n = 1 and conditions (i)-(iii) hold, by Lemma 2.8 we deduce that
lim_{n→∞} ∥x_n − p∥ exists for each p ∈ Fix(T) ∩ Γ,
(3.10)
and the sequences {x_n}, {u_n} and {y_n} are bounded. It follows that
∥T^n x_n − p∥ ≤ k_n∥x_n − p∥.

Hence {∥T^n x_n − p∥} is bounded.

Step 2. We will prove that
lim_{n→∞} ∥u_n − Tu_n∥ = 0.
From (3.9) we have
∥x_{n+1} − p∥² ≤ (k_n² − β_n(k_n² − 1))∥x_n − p∥² + (k_n² − β_n(k_n² − 1))(λ_n²(α_n + ∥A∥²)² − 1)∥x_n − y_n∥² + α_n(k_n² − β_n(k_n² − 1))M₁ + α_n(k_n² − β_n(k_n² − 1))M₂ − β_n(1 − β_n)∥u_n − T^n u_n∥²
= (k_n² − β_n(k_n² − 1))∥x_n − p∥² − (k_n² − β_n(k_n² − 1))(1 − λ_n²(α_n + ∥A∥²)²)∥x_n − y_n∥² + α_n(k_n² − β_n(k_n² − 1))(M₁ + M₂) − β_n(1 − β_n)∥u_n − T^n u_n∥²,
where M₁ = sup_{n≥0}{2λ_n(1 + λ_n α_n + λ_n∥A∥²)∥p∥∥x_n − p∥} < ∞ and
M₂ = sup_{n≥0}{2λ_n²α_n∥p∥²(1 + λ_n α_n + λ_n∥A∥²)} < ∞.
So,
(k_n² − β_n(k_n² − 1))(1 − λ_n²(α_n + ∥A∥²)²)∥x_n − y_n∥² + β_n(1 − β_n)∥u_n − T^n u_n∥² ≤ (k_n² − β_n(k_n² − 1))∥x_n − p∥² − ∥x_{n+1} − p∥² + α_n(k_n² − β_n(k_n² − 1))(M₁ + M₂).
Since lim_{n→∞} k_n = 1, α_n → 0, condition (i) holds and (3.10) gives the existence of lim_{n→∞} ∥x_n − p∥, we have
lim_{n→∞} ∥x_n − y_n∥ = lim_{n→∞} ∥u_n − T^n u_n∥ = 0.
(3.11)
(3.11)
Furthermore, we obtain
∥y_n − u_n∥ = ∥P_C(x_n − λ_n∇f_{α_n}(x_n)) − P_C(x_n − λ_n∇f_{α_n}(y_n))∥ ≤ ∥(x_n − λ_n∇f_{α_n}(x_n)) − (x_n − λ_n∇f_{α_n}(y_n))∥ = λ_n∥∇f_{α_n}(x_n) − ∇f_{α_n}(y_n)∥ ≤ λ_n(α_n + ∥A∥²)∥x_n − y_n∥.
This together with (3.11) implies that
lim_{n→∞} ∥y_n − u_n∥ = 0.
(3.12)
Also,
∥x_n − u_n∥ ≤ ∥x_n − y_n∥ + ∥y_n − u_n∥,
together with (3.11) and (3.12), implies that
lim_{n→∞} ∥x_n − u_n∥ = 0.
(3.13)
Combining (3.11) with (3.13), we also have
lim_{n→∞} ∥x_n − T^n u_n∥ = 0.
(3.14)
Consider
∥x_{n+1} − x_n∥ = ∥β_n u_n + (1 − β_n)T^n u_n − x_n∥ ≤ β_n∥u_n − x_n∥ + (1 − β_n)∥T^n u_n − x_n∥.
From (3.13) and (3.14), we obtain
∥x_{n+1} − x_n∥ → 0 (as n → ∞).
(3.15)
(3.15)
Next, we will show that (3.11) implies that
lim_{n→∞} ∥u_n − Tu_n∥ = 0.
(3.16)
We compute that
∥y_{n+1} − y_n∥ = ∥P_C(x_{n+1} − λ_{n+1}∇f_{α_{n+1}}(x_{n+1})) − P_C(x_n − λ_n∇f_{α_n}(x_n))∥
= ∥P_C(I − λ_{n+1}∇f_{α_{n+1}})x_{n+1} − P_C(I − λ_n∇f_{α_n})x_n∥
≤ ∥P_C(I − λ_{n+1}∇f_{α_{n+1}})x_{n+1} − P_C(I − λ_{n+1}∇f_{α_{n+1}})x_n∥ + ∥P_C(I − λ_{n+1}∇f_{α_{n+1}})x_n − P_C(I − λ_n∇f_{α_n})x_n∥
≤ ∥x_{n+1} − x_n∥ + ∥(I − λ_{n+1}∇f_{α_{n+1}})x_n − (I − λ_n∇f_{α_n})x_n∥
= ∥x_{n+1} − x_n∥ + ∥λ_n∇f_{α_n}(x_n) − λ_{n+1}∇f_{α_{n+1}}(x_n)∥
= ∥x_{n+1} − x_n∥ + ∥(λ_n − λ_{n+1})∇f(x_n) + λ_n α_n x_n − λ_{n+1}α_{n+1}x_n∥
= ∥x_{n+1} − x_n∥ + ∥(λ_n − λ_{n+1})∇f(x_n) + λ_n(α_n − α_{n+1})x_n + (λ_n − λ_{n+1})α_{n+1}x_n∥
≤ ∥x_{n+1} − x_n∥ + |λ_n − λ_{n+1}|∥∇f(x_n)∥ + λ_n|α_n − α_{n+1}|∥x_n∥ + α_{n+1}|λ_n − λ_{n+1}|∥x_n∥.
From conditions (ii), (iii) and (3.15), we obtain
∥y_{n+1} − y_n∥ → 0 (as n → ∞)
(3.17)
and
∥u_{n+1} − u_n∥ = ∥P_C(x_{n+1} − λ_{n+1}∇f_{α_{n+1}}(y_{n+1})) − P_C(x_n − λ_n∇f_{α_n}(y_n))∥
≤ ∥(x_{n+1} − λ_{n+1}∇f_{α_{n+1}}(y_{n+1})) − (x_n − λ_n∇f_{α_n}(y_n))∥
≤ ∥x_{n+1} − x_n∥ + ∥λ_n∇f_{α_n}(y_n) − λ_{n+1}∇f_{α_{n+1}}(y_{n+1})∥
≤ ∥x_{n+1} − x_n∥ + λ_n∥∇f(y_n) − ∇f(y_{n+1})∥ + |λ_n − λ_{n+1}|∥∇f(y_{n+1})∥ + λ_n α_n∥y_n − y_{n+1}∥ + |λ_n α_n − λ_{n+1}α_{n+1}|∥y_{n+1}∥.
From conditions (ii), (iii), (3.15) and (3.17), we obtain
∥u_{n+1} − u_n∥ → 0 (as n → ∞).
(3.18)
Since T is uniformly L-Lipschitzian, we have
∥u_n − Tu_n∥ ≤ ∥u_n − u_{n+1}∥ + ∥u_{n+1} − T^{n+1}u_{n+1}∥ + ∥T^{n+1}u_{n+1} − T^{n+1}u_n∥ + ∥T^{n+1}u_n − Tu_n∥
≤ ∥u_n − u_{n+1}∥ + ∥u_{n+1} − T^{n+1}u_{n+1}∥ + L∥u_{n+1} − u_n∥ + L∥T^n u_n − u_n∥.
Since lim_{n→∞} ∥u_{n+1} − u_n∥ = 0 and lim_{n→∞} ∥u_n − T^n u_n∥ = 0, it follows that
lim_{n→∞} ∥u_n − Tu_n∥ = 0.
(3.19)

Step 3. We will show that x̂ ∈ Fix(T) ∩ Γ.

We have from (3.11) that
∥x_n − y_n∥ → 0 (as n → ∞).
(3.20)
Since ∇f = A*(I − P_Q)A is Lipschitz continuous, from (3.11) we have
lim_{n→∞} ∥∇f(x_n) − ∇f(y_n)∥ = 0.

Since {x_n} is bounded, there is a subsequence {x_{n_i}} of {x_n} that converges weakly to some x̂.

First, we show that x̂ ∈ Γ. Since ∥x_n − y_n∥ → 0, it follows that y_{n_i} ⇀ x̂.

Put
Aw = { ∇f(w) + N_C w, if w ∈ C;  ∅, if w ∉ C,
where N_C w = {z ∈ H₁ : ⟨w − v, z⟩ ≥ 0, ∀v ∈ C}. Then A is maximal monotone and 0 ∈ Aw if and only if w ∈ VI(C, ∇f); see [21] for more details. Let (w, z) ∈ G(A). Then we have
z ∈ Aw = ∇f(w) + N_C w,
and hence
z − ∇f(w) ∈ N_C w.
So, we have
⟨w − v, z − ∇f(w)⟩ ≥ 0, ∀v ∈ C.
On the other hand, from
u_n = P_C(x_n − λ_n∇f_{α_n}(y_n)) and w ∈ C,
we have
⟨x_n − λ_n∇f_{α_n}(y_n) − u_n, u_n − w⟩ ≥ 0,
and hence
⟨w − u_n, (u_n − x_n)/λ_n + ∇f_{α_n}(y_n)⟩ ≥ 0.
Therefore, from z − ∇f(w) ∈ N_C w and {u_{n_i}} ⊂ C, it follows that
⟨w − u_{n_i}, z⟩ ≥ ⟨w − u_{n_i}, ∇f(w)⟩
≥ ⟨w − u_{n_i}, ∇f(w)⟩ − ⟨w − u_{n_i}, (u_{n_i} − x_{n_i})/λ_{n_i} + ∇f_{α_{n_i}}(y_{n_i})⟩
= ⟨w − u_{n_i}, ∇f(w)⟩ − ⟨w − u_{n_i}, (u_{n_i} − x_{n_i})/λ_{n_i} + ∇f(y_{n_i})⟩ − α_{n_i}⟨w − u_{n_i}, y_{n_i}⟩
= ⟨w − u_{n_i}, ∇f(w) − ∇f(u_{n_i})⟩ + ⟨w − u_{n_i}, ∇f(u_{n_i}) − ∇f(y_{n_i})⟩ − ⟨w − u_{n_i}, (u_{n_i} − x_{n_i})/λ_{n_i}⟩ − α_{n_i}⟨w − u_{n_i}, y_{n_i}⟩
≥ ⟨w − u_{n_i}, ∇f(u_{n_i}) − ∇f(y_{n_i})⟩ − ⟨w − u_{n_i}, (u_{n_i} − x_{n_i})/λ_{n_i}⟩ − α_{n_i}⟨w − u_{n_i}, y_{n_i}⟩.
Letting i → ∞, we obtain
⟨w − x̂, z⟩ ≥ 0.

Since A is maximal monotone, we have x̂ ∈ A⁻¹0, and hence x̂ ∈ VI(C, ∇f). Thus it is clear that x̂ ∈ Γ.

Next, we show that x̂ ∈ Fix(T). Indeed, by (3.13) we have u_{n_i} ⇀ x̂, and by (3.16) we have ∥u_{n_i} − Tu_{n_i}∥ → 0; hence Lemma 2.7 yields x̂ ∈ Fix(T). Therefore, x̂ ∈ Fix(T) ∩ Γ.

Now we prove that x_n ⇀ x̂ and y_n ⇀ x̂.

Suppose to the contrary that there is another subsequence {x_{n_k}} of {x_n} such that x_{n_k} ⇀ x̄ with x̄ ≠ x̂. Then x̄ ∈ Fix(T) ∩ Γ. From the Opial property [22], we have
lim_{n→∞} ∥x_n − x̂∥ = lim inf_{i→∞} ∥x_{n_i} − x̂∥ < lim inf_{i→∞} ∥x_{n_i} − x̄∥ = lim_{n→∞} ∥x_n − x̄∥ = lim inf_{k→∞} ∥x_{n_k} − x̄∥ < lim inf_{k→∞} ∥x_{n_k} − x̂∥ = lim_{n→∞} ∥x_n − x̂∥.
This is a contradiction. Thus we have x̂ = x̄, and hence
x_n ⇀ x̂ ∈ Fix(T) ∩ Γ.

Further, from ∥x_n − y_n∥ → 0 it follows that y_n ⇀ x̂. This shows that the sequences {y_n} and {u_n} also converge weakly to x̂ ∈ Fix(T) ∩ Γ. This completes the proof. □

Utilising Theorem 3.1, we obtain the following new results in the setting of real Hilbert spaces.

Taking k_n ≡ 1 (so that T is quasi-nonexpansive and T^n may be replaced by T) in Theorem 3.1, the conclusion follows.

Corollary 3.2 Let C be a nonempty, closed and convex subset of a real Hilbert space H, and let T : C → C be a uniformly L-Lipschitzian and quasi-nonexpansive mapping with Fix(T) ∩ Γ ≠ ∅. Let {x_n}, {y_n} and {u_n} be the sequences in C generated by the following algorithm:
x₀ = x ∈ C chosen arbitrarily,
y_n = P_C(x_n − λ_n∇f_{α_n}(x_n)),
u_n = P_C(x_n − λ_n∇f_{α_n}(y_n)),
x_{n+1} = β_n u_n + (1 − β_n)Tu_n,
(3.21)
where ∇f_{α_n} = ∇f + α_n I = A*(I − P_Q)A + α_n I, and the sequences {α_n}, {λ_n} and {β_n} satisfy the following conditions:
  (i) 0 < lim inf_{n→∞} β_n ≤ lim sup_{n→∞} β_n < 1,
  (ii) {λ_n} ⊂ (0, 2/∥A∥²) and ∑_{n=1}^∞ λ_n < ∞,
  (iii) ∑_{n=1}^∞ α_n < ∞.

Then the sequence {x_n} converges weakly to an element x̂ ∈ Fix(T) ∩ Γ.

Taking T ≡ I (the identity mapping) in Theorem 3.1, the conclusion follows.

Corollary 3.3 Let C be a nonempty, closed and convex subset of a real Hilbert space H, and let T : C → C be a uniformly L-Lipschitzian mapping with Fix(T) ∩ Γ ≠ ∅. Let {x_n}, {y_n} and {u_n} be the sequences in C generated by the following algorithm:
x₀ = x ∈ C chosen arbitrarily,
y_n = P_C(x_n − λ_n∇f_{α_n}(x_n)),
u_n = P_C(x_n − λ_n∇f_{α_n}(y_n)),
x_{n+1} = β_n u_n + (1 − β_n)T^n u_n,
(3.22)
where ∇f_{α_n} = ∇f + α_n I = A*(I − P_Q)A + α_n I, and the sequences {α_n}, {λ_n} and {β_n} satisfy the following conditions:
  (i) 0 < lim inf_{n→∞} β_n ≤ lim sup_{n→∞} β_n < 1,
  (ii) {λ_n} ⊂ (0, 2/∥A∥²),
  (iii) ∑_{n=1}^∞ α_n < ∞.

Then the sequence {x_n} converges weakly to an element x̂ ∈ Fix(T) ∩ Γ.

Remark 3.4 Theorem 3.1 improves and extends [8, Theorem 5.7] in the following respects:
  (a) The iterative algorithm of [8, Theorem 5.7] is extended to the Mann-type extragradient algorithm of Theorem 3.1.
  (b) The technique for proving weak convergence in Theorem 3.1 is different from that in [8, Theorem 5.7], since it relies on properties of asymptotically quasi-nonexpansive mappings and of maximal monotone mappings.
  (c) The problem of finding a common element of Fix(T) ∩ Γ for asymptotically quasi-nonexpansive mappings is more general than the corresponding problem for nonexpansive mappings and than the problem of finding a solution of the SFP considered in [8, Theorem 5.7].
     

Declarations

Acknowledgements

The authors thank the referees for comments and suggestions on this manuscript. The first author was supported by the Thailand Research Fund through the Royal Golden Jubilee Ph.D. Program (Grant No. PHD/0033/2554) and the King Mongkut’s University of Technology Thonburi.

Authors’ Affiliations

(1)
Department of Mathematics, Faculty of Science, King Mongkut's University of Technology Thonburi (KMUTT), 126 Pracha Uthit Rd., Bang Mod, Thung Khru, Bangkok, 10140, Thailand

References

  1. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221-239. doi:10.1007/BF02142692
  2. Byrne C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18: 441-453. doi:10.1088/0266-5611/18/2/310
  3. Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353-2365. doi:10.1088/0031-9155/51/10/001
  4. Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications. Inverse Probl. 2005, 21: 2071-2084. doi:10.1088/0266-5611/21/6/017
  5. Censor Y, Motova A, Segal A: Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327: 1244-1256. doi:10.1016/j.jmaa.2006.05.010
  6. Deepho J, Kumam P: A modified Halpern's iterative scheme for solving split feasibility problems. Abstr. Appl. Anal. 2012, Article ID 876069.
  7. Sunthrayuth P, Cho YJ, Kumam P: General iterative algorithms approach to variational inequalities and minimum-norm fixed point for minimization and split feasibility problems. Opsearch 2013, in press. doi:10.1007/s12597-013-0150-5
  8. Xu HK: A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22: 2021-2034. doi:10.1088/0266-5611/22/6/007
  9. Yang Q: The relaxed CQ algorithm for solving the split feasibility problem. Inverse Probl. 2004, 20: 1261-1266. doi:10.1088/0266-5611/20/4/014
  10. Zhao J, Yang Q: Several solution methods for the split feasibility problem. Inverse Probl. 2005, 21: 1791-1799. doi:10.1088/0266-5611/21/5/017
  11. Korpelevich GM: An extragradient method for finding saddle points and for other problems. Èkon. Mat. Metody 1976, 12: 747-756.
  12. Phiangsungnoen S, Kumam P: A hybrid extragradient method for solving Ky Fan inequalities, variational inequalities and fixed point problems. II. Proceedings of the International MultiConference of Engineers and Computer Scientists 2013, 1042-1047.
  13. Ceng LC, Ansari QH, Yao JC: An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 2012, 64(4): 633-642. doi:10.1016/j.camwa.2011.12.074
  14. Chang S: Split feasibility problems for total quasi-asymptotically nonexpansive mappings. Fixed Point Theory Appl. 2012, Article ID 151. doi:10.1186/1687-1812-2012-151
  15. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103-120.
  16. Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53(5-6): 475-504. doi:10.1080/02331930412331327157
  17. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics 28. Cambridge University Press, Cambridge; 1990.
  18. Tan KK, Xu HK: Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178: 301-308. doi:10.1006/jmaa.1993.1309
  19. Marino G, Xu HK: Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 2007, 329: 336-346. doi:10.1016/j.jmaa.2006.06.055
  20. Nakajo K, Takahashi W: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 2003, 279: 372-379. doi:10.1016/S0022-247X(02)00458-4
  21. Rockafellar RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 1970, 149: 75-88. doi:10.1090/S0002-9947-1970-0282272-5
  22. Opial Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73: 591-597. doi:10.1090/S0002-9904-1967-11761-0

Copyright

© Deepho and Kumam; licensee Springer. 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
