
Mann’s type extragradient for solving split feasibility and fixed point problems of Lipschitz asymptotically quasi-nonexpansive mappings

Abstract

The purpose of this paper is to introduce and analyze a Mann's type extragradient method for finding a common element of the solution set Γ of the split feasibility problem and the set Fix(T) of fixed points of a Lipschitz asymptotically quasi-nonexpansive mapping T in the setting of infinite-dimensional Hilbert spaces. We prove that the sequence generated by the proposed algorithm converges weakly to an element of Fix(T) ∩ Γ under mild assumptions. The results presented in this paper improve and extend some results of Xu (Inverse Probl. 26:105018, 2010; Inverse Probl. 22:2021-2034, 2006) and others.

MSC:49J40, 47H05.

1 Introduction

The split feasibility problem (SFP) in finite-dimensional spaces was first introduced by Censor and Elfving [1] for modeling inverse problems which arise from phase retrieval and in medical image reconstruction [2]. Recently, it has been found that the SFP can also be used in various disciplines such as image restoration, computed tomography and radiation therapy treatment planning [3–5]. The split feasibility problem in an infinite-dimensional Hilbert space can be found in [2, 4, 6–10] and the references therein.

Throughout this paper, we always assume that H_1, H_2 are real Hilbert spaces, '→' and '⇀' denote strong and weak convergence, respectively, and F(T) is the fixed point set of a mapping T.

Let C and Q be nonempty closed convex subsets of infinite-dimensional real Hilbert spaces H_1 and H_2, respectively, and let A ∈ B(H_1, H_2), where B(H_1, H_2) denotes the class of all bounded linear operators from H_1 to H_2. The split feasibility problem (SFP) is to find a point x̂ with the property

x̂ ∈ C, Ax̂ ∈ Q.
(1.1)

In the sequel, we use Γ to denote the set of solutions of SFP (1.1), i.e.,

Γ = {x̂ ∈ C : Ax̂ ∈ Q}.

Assuming that the SFP is consistent (i.e., (1.1) has a solution), it is not hard to see that x ∈ C solves (1.1) if and only if it solves the fixed point equation

x = P_C(I − γA*(I − P_Q)A)x, x ∈ C,
(1.2)

where P_C and P_Q are the (orthogonal) projections onto C and Q, respectively, γ > 0 is any positive constant, and A* denotes the adjoint of A.

To solve (1.2), Byrne [2] proposed his CQ algorithm, which generates a sequence {x_k} by

x_{k+1} = P_C(I − γA*(I − P_Q)A)x_k, k ∈ ℕ,
(1.3)

where γ ∈ (0, 2/λ) and λ is the spectral radius of the operator A*A.

The CQ algorithm (1.3) is a special case of the Krasnosel'skii-Mann (K-M) algorithm. The K-M algorithm generates a sequence {x_n} according to the recursive formula

x_{n+1} = (1 − α_n)x_n + α_n Tx_n,

where {α_n} is a sequence in the interval (0,1) and the initial guess x_0 ∈ C is chosen arbitrarily. Due to the fixed point formulation (1.2) of the SFP, we can apply the K-M algorithm to the operator P_C(I − γA*(I − P_Q)A) to obtain a sequence given by

x_{k+1} = (1 − α_k)x_k + α_k P_C(I − γA*(I − P_Q)A)x_k, k ∈ ℕ,
(1.4)

where again γ ∈ (0, 2/λ) and λ is the spectral radius of the operator A*A.

Then, as long as {α_k} satisfies the condition ∑_{k=1}^∞ α_k(1 − α_k) = +∞, the sequence generated by (1.4) converges weakly to a solution of the SFP.
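As a finite-dimensional illustration, the CQ iteration (1.3) can be sketched numerically. Everything concrete below (the box sets C and Q, the random matrix A, the starting point and iteration count) is an illustrative assumption, not data from the paper:

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, gamma, iters=500):
    """CQ iteration x_{k+1} = P_C(x_k - gamma * A^T (I - P_Q) A x_k)."""
    x = x0.astype(float)
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - gamma * (A.T @ (Ax - proj_Q(Ax))))
    return x

# Illustrative data: C = Q = [0,1]^2 (so P_C, P_Q are coordinatewise clipping).
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
proj = lambda v: np.clip(v, 0.0, 1.0)   # closed-form projection onto a box
lam = np.linalg.norm(A, 2) ** 2         # spectral radius of A^T A = ||A||^2
x = cq_algorithm(A, proj, proj, np.array([5.0, -3.0]), gamma=1.0 / lam)
residual = np.linalg.norm(A @ x - proj(A @ x))   # distance of Ax from Q
```

Here γ = 1/λ lies in the admissible range (0, 2/λ); the final iterate lies in C by construction, and for a consistent instance the residual ∥Ax − P_Q Ax∥ shrinks toward zero, i.e., x approaches a solution of the SFP.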

Very recently, Xu [8] gave a continuation of the study on the CQ algorithm and its convergence. He applied Mann’s algorithm to the SFP and proposed an averaged CQ algorithm which was proved to be weakly convergent to a solution of the SFP. He derived a weak convergence result, which shows that for suitable choices of iterative parameters (including the regularization), the sequence of iterative solutions can converge weakly to an exact solution of the SFP. He also established the strong convergence result, which shows that the minimum-norm solution can be obtained.

On the other hand, Korpelevich [11] introduced an iterative method, the so-called extragradient method, for finding the solution of a saddle point problem, and proved that the sequences generated by the proposed iterative algorithm converge to a solution of the saddle point problem.

Motivated by the idea of the extragradient method in [12], Ceng et al. [13] introduced and analyzed an extragradient method with regularization for finding a common element of the solution set Γ of the split feasibility problem and the set Fix(T) of fixed points of a nonexpansive mapping T in the setting of infinite-dimensional Hilbert spaces. Chang [14] introduced an algorithm for solving the split feasibility problem for total quasi-asymptotically nonexpansive mappings in infinite-dimensional Hilbert spaces.

The purpose of this paper is to study and analyze a Mann's type extragradient method for finding a common element of the solution set Γ of the SFP and the set Fix(T) of fixed points of a uniformly Lipschitz continuous, asymptotically quasi-nonexpansive mapping in a real Hilbert space. We prove that the sequence generated by the proposed method converges weakly to an element x̂ ∈ Fix(T) ∩ Γ.

2 Preliminaries

We first recall some definitions, notations and conclusions which will be needed in proving our main results.

Let H be a real Hilbert space with the inner product ⟨·,·⟩ and the norm ∥·∥, and let C be a nonempty closed and convex subset of H.

Let E be a Banach space. A mapping T : E → E is said to be demi-closed at origin if for any sequence {x_n} ⊂ E with x_n ⇀ x* and ∥(I − T)x_n∥ → 0, we have x* = Tx*.

A Banach space E is said to have the Opial property if for any sequence {x_n} with x_n ⇀ x*, we have

lim inf_{n→∞} ∥x_n − x*∥ < lim inf_{n→∞} ∥x_n − y∥, ∀y ∈ E with y ≠ x*.

Remark 2.1 It is well known that each Hilbert space possesses the Opial property.

Proposition 2.2 For given xH and zC:

  1. (i)

z = P_C x if and only if ⟨x − z, y − z⟩ ≤ 0 for all y ∈ C.

  2. (ii)

z = P_C x if and only if ∥x − z∥² ≤ ∥x − y∥² − ∥y − z∥² for all y ∈ C.

  3. (iii)

For all x, y ∈ H, ⟨P_C x − P_C y, x − y⟩ ≥ ∥P_C x − P_C y∥².

Definition 2.3 Let C be a nonempty, closed and convex subset of a real Hilbert space H. We denote by F(T) the set of fixed points of T, that is, F(T)={xC:x=Tx}. Then T is said to be

  1. (1)

nonexpansive if ∥Tx − Ty∥ ≤ ∥x − y∥ for all x, y ∈ C;

  2. (2)

asymptotically nonexpansive if there exists a sequence {k_n} ⊂ [1, ∞) with lim_{n→∞} k_n = 1 such that

    ∥T^n x − T^n y∥ ≤ k_n ∥x − y∥
    (2.1)

    for all x,yC and n1;

  3. (3)

asymptotically quasi-nonexpansive if there exists a sequence {k_n} ⊂ [1, ∞) with lim_{n→∞} k_n = 1 such that

    ∥T^n x − p∥ ≤ k_n ∥x − p∥
    (2.2)

    for all xC, pF(T) and n1;

  4. (4)

uniformly L-Lipschitzian if there exists a constant L > 0 such that

    ∥T^n x − T^n y∥ ≤ L ∥x − y∥
    (2.3)

    for all x,yC and n1.

Remark 2.4 By the above definitions, it is clear that:

  1. (i)

    a nonexpansive mapping is an asymptotically quasi-nonexpansive mapping;

  2. (ii)

a quasi-nonexpansive mapping is an asymptotically quasi-nonexpansive mapping;

  3. (iii)

    an asymptotically nonexpansive mapping is an asymptotically quasi-nonexpansive mapping.

Proposition 2.5 (see [15])

We have the following assertions.

  1. (1)

T is nonexpansive if and only if the complement I − T is 1/2-ism.

  2. (2)

If T is ν-ism and γ > 0, then γT is (ν/γ)-ism.

  3. (3)

T is averaged if and only if the complement I − T is ν-ism for some ν > 1/2.

Indeed, for α ∈ (0,1), T is α-averaged if and only if I − T is 1/(2α)-ism.

Proposition 2.6 (see [15, 16])

Let S, T, V : H → H be given operators. We have the following assertions.

  1. (1)

If T = (1 − α)S + αV for some α ∈ (0,1), S is averaged and V is nonexpansive, then T is averaged.

  2. (2)

T is firmly nonexpansive if and only if the complement I − T is firmly nonexpansive.

  3. (3)

If T = (1 − α)S + αV for some α ∈ (0,1), S is firmly nonexpansive and V is nonexpansive, then T is averaged.

  4. (4)

The composite of finitely many averaged mappings is averaged. That is, if each of the mappings {T_i}_{i=1}^N is averaged, then so is the composite T_1 T_2 ⋯ T_N. In particular, if T_1 is α_1-averaged and T_2 is α_2-averaged, where α_1, α_2 ∈ (0,1), then the composite T_1 T_2 is α-averaged, where α = α_1 + α_2 − α_1 α_2.

  5. (5)

If the mappings {T_i}_{i=1}^N are averaged and have a common fixed point, then

    ⋂_{i=1}^N Fix(T_i) = Fix(T_1 ⋯ T_N).

The notation Fix(T) denotes the set of all fixed points of the mapping T, that is, Fix(T)={xH:Tx=x}.

Lemma 2.7 (see [17], demiclosedness principle)

Let C be a nonempty closed and convex subset of a real Hilbert space H, and let S : C → C be a nonexpansive mapping with Fix(S) ≠ ∅. If the sequence {x_n} ⊂ C converges weakly to x and the sequence {(I − S)x_n} converges strongly to y, then (I − S)x = y; in particular, if y = 0, then x ∈ Fix(S).

Lemma 2.8 (see [18])

Let {a_n}_{n=1}^∞ and {b_n}_{n=1}^∞ be two sequences of nonnegative numbers satisfying the inequality

a_{n+1} ≤ a_n + b_n, ∀n ≥ 0.

If ∑_{n=0}^∞ b_n converges, then lim_{n→∞} a_n exists.

The following lemma gives some characterizations and useful properties of the metric projection P C in a Hilbert space.

For every point x ∈ H, there exists a unique nearest point in C, denoted by P_C x, such that

∥x − P_C x∥ ≤ ∥x − y∥, ∀y ∈ C,
(2.4)

where P C is called the metric projection of H onto C. We know that P C is a nonexpansive mapping of H onto C.
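For sets whose projections have closed forms, the nonexpansiveness of P_C and the variational characterization of Proposition 2.2(i) can be checked numerically. The Euclidean ball used here is an illustrative choice:

```python
import numpy as np

def proj_ball(x, center, radius):
    """Metric projection onto the closed Euclidean ball B(center, radius)."""
    d = x - center
    n = np.linalg.norm(d)
    return x.copy() if n <= radius else center + radius * d / n

rng = np.random.default_rng(1)
c, r = np.zeros(3), 2.0
for _ in range(100):
    x, y = rng.standard_normal(3) * 5, rng.standard_normal(3) * 5
    px, py = proj_ball(x, c, r), proj_ball(y, c, r)
    # nonexpansiveness: ||P_C x - P_C y|| <= ||x - y||
    assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-12
    # Proposition 2.2(i): <x - P_C x, z - P_C x> <= 0 for every z in C
    z = proj_ball(rng.standard_normal(3), c, r)
    assert np.dot(x - px, z - px) <= 1e-9
```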

Lemma 2.9 (see [19])

Let C be a nonempty closed and convex subset of a real Hilbert space H, and let P_C be the metric projection from H onto C. Given x ∈ H and z ∈ C, then z = P_C x if and only if the following holds:

⟨x − z, y − z⟩ ≤ 0, ∀y ∈ C.
(2.5)

Lemma 2.10 (see [20])

Let C be a nonempty, closed and convex subset of a real Hilbert space H, and let P_C : H → C be the metric projection from H onto C. Then the following inequality holds:

∥y − P_C x∥² + ∥x − P_C x∥² ≤ ∥x − y∥², ∀x ∈ H, y ∈ C.
(2.6)

Lemma 2.11 (see [19])

Let H be a real Hilbert space. Then the following equations hold:

  1. (i)

∥x − y∥² = ∥x∥² − ∥y∥² − 2⟨x − y, y⟩ for all x, y ∈ H;

  2. (ii)

∥tx + (1 − t)y∥² = t∥x∥² + (1 − t)∥y∥² − t(1 − t)∥x − y∥² for all t ∈ [0,1] and x, y ∈ H.

Throughout this paper, we assume that the SFP is consistent, that is, the solution set Γ of the SFP is nonempty. Let f : H_1 → ℝ be a continuously differentiable function. The minimization problem

min_{x∈C} f(x) := (1/2)∥Ax − P_Q Ax∥²
(2.7)

is ill-posed. Therefore we consider (see [8]) the following Tikhonov regularized problem:

min_{x∈C} f_α(x) := (1/2)∥Ax − P_Q Ax∥² + (1/2)α∥x∥²,
(2.8)

where α>0 is the regularization parameter.

We observe that the gradient

∇f_α = ∇f + αI = A*(I − P_Q)A + αI
(2.9)

is (α + ∥A∥²)-Lipschitz continuous and α-strongly monotone.

Let C be a nonempty closed convex subset of a real Hilbert space H, and let F : C → H be a monotone mapping. The variational inequality problem (VIP) is to find x ∈ C such that

⟨Fx, y − x⟩ ≥ 0, ∀y ∈ C.

The solution set of the VIP is denoted by VI(C, F). It is well known that

x* ∈ VI(C, F) ⟺ x* = P_C(x* − λFx*), ∀λ > 0.
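This projected fixed point characterization can be verified numerically. In the sketch below, F is the gradient of a convex quadratic and C is a box; all concrete data are illustrative assumptions:

```python
import numpy as np

b = np.array([2.0, -0.5])
F = lambda x: x - b                   # gradient of f(x) = 0.5*||x - b||^2, monotone
P_C = lambda v: np.clip(v, 0.0, 1.0)  # projection onto C = [0,1]^2
x_star = P_C(b)                       # minimizer of f over C, hence solves the VIP
# x* = P_C(x* - lam * F(x*)) holds for every lam > 0
for lam in (0.1, 0.5, 1.0):
    assert np.allclose(x_star, P_C(x_star - lam * F(x_star)))
```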

A set-valued mapping T : H → 2^H is called monotone if for all x, y ∈ H, f ∈ Tx and g ∈ Ty imply

⟨x − y, f − g⟩ ≥ 0.

A monotone mapping T : H → 2^H is called maximal if its graph G(T) is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping T is maximal if and only if, for (x, f) ∈ H × H, ⟨x − y, f − g⟩ ≥ 0 for every (y, g) ∈ G(T) implies f ∈ Tx. Let F : C → H be a monotone and k-Lipschitz continuous mapping, and let N_C v be the normal cone to C at v ∈ C, that is,

N_C v = {w ∈ H : ⟨v − u, w⟩ ≥ 0, ∀u ∈ C}.

Define

Tv = Fv + N_C v if v ∈ C, and Tv = ∅ if v ∉ C.

Then T is maximal monotone, and 0 ∈ Tv if and only if v ∈ VI(C, F); see [21] for more details.

We can use fixed point algorithms to solve the SFP on the basis of the following observation.

Let λ > 0 and assume that x* ∈ Γ. Then Ax* ∈ Q, which implies that (I − P_Q)Ax* = 0, and thus λA*(I − P_Q)Ax* = 0. Hence we have the fixed point equation (I − λA*(I − P_Q)A)x* = x*. Requiring that x* ∈ C, we consider the fixed point equation

P_C(I − λ∇f)x* = P_C(I − λA*(I − P_Q)A)x* = x*.
(2.10)

It is proved in [[8], Proposition 3.2] that the solutions of fixed point equation (2.10) are exactly the solutions of the SFP; namely, for given x* ∈ H_1, x* solves the SFP if and only if x* solves fixed point equation (2.10).

Proposition 2.12 (see [13])

Given x* ∈ H_1, the following statements are equivalent.

  1. (i)

x* solves the SFP;

  2. (ii)

x* solves fixed point equation (2.10);

  3. (iii)

x* solves the variational inequality problem (VIP) of finding x* ∈ C such that

⟨∇f(x*), x − x*⟩ ≥ 0, ∀x ∈ C,
    (2.11)

where ∇f = A*(I − P_Q)A and A* is the adjoint of A.

Proof (i) ⟺ (ii). See the proof in [[8], Proposition 3.2].

(ii) ⟺ (iii). Observe that

P_C(I − λA*(I − P_Q)A)x* = x*
⟺ ⟨(I − λA*(I − P_Q)A)x* − x*, x − x*⟩ ≤ 0, ∀x ∈ C
⟺ ⟨λA*(I − P_Q)Ax*, x − x*⟩ ≥ 0, ∀x ∈ C
⟺ ⟨∇f(x*), x − x*⟩ ≥ 0, ∀x ∈ C,

where ∇f = A*(I − P_Q)A. □

Remark 2.13 It is clear from Proposition 2.12 that

Γ = Fix(P_C(I − λ∇f)) = VI(C, ∇f),

for any λ > 0, where Fix(P_C(I − λ∇f)) and VI(C, ∇f) denote the set of fixed points of P_C(I − λ∇f) and the solution set of the VIP, respectively.

Proposition 2.14 (see [13])

There hold the following statements:

  1. (i)

the gradient

    ∇f_α = ∇f + αI = A*(I − P_Q)A + αI

    is (α + ∥A∥²)-Lipschitz continuous and α-strongly monotone;

  2. (ii)

the mapping P_C(I − λ∇f_α) is a contraction with coefficient

    √(1 − λ(2α − λ(∥A∥² + α)²)) ≤ 1 − (1/2)αλ,

    where 0 < λ ≤ α/(∥A∥² + α)²;

  3. (iii)

if the SFP is consistent, then the strong limit lim_{α→0} x_α exists and is the minimum-norm solution of the SFP.

3 Main result

Theorem 3.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H, and let T : C → C be a uniformly L-Lipschitzian and asymptotically quasi-nonexpansive mapping with Fix(T) ∩ Γ ≠ ∅ and {k_n} ⊂ [1, ∞) such that lim_{n→∞} k_n = 1. Let {x_n}, {y_n} and {u_n} be the sequences in C generated by the following algorithm:

x_0 = x ∈ C chosen arbitrarily,
y_n = P_C(x_n − λ_n ∇f_{α_n}(x_n)),
u_n = P_C(x_n − λ_n ∇f_{α_n}(y_n)),
x_{n+1} = β_n u_n + (1 − β_n) T^n u_n,
(3.1)

where ∇f_{α_n} = ∇f + α_n I = A*(I − P_Q)A + α_n I, and the sequences {α_n}, {λ_n} and {β_n} satisfy the following conditions:

  1. (i)

0 < lim inf_{n→∞} β_n ≤ lim sup_{n→∞} β_n < 1,

  2. (ii)

{λ_n} ⊂ (0, 1/∥A∥²) and ∑_{n=1}^∞ λ_n < ∞,

  3. (iii)

∑_{n=1}^∞ α_n < ∞.

Then the sequence {x_n} converges weakly to an element x̂ ∈ Fix(T) ∩ Γ.
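Scheme (3.1) above can be sketched numerically under purely illustrative choices: C = Q = [0,1]², a random A, T = P_C (nonexpansive with Fix(T) = C, hence asymptotically quasi-nonexpansive with k_n ≡ 1 and T^n = T), β_n ≡ 1/2, and summable α_n, λ_n as in conditions (i)-(iii). None of these concrete choices come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
P = lambda v: np.clip(v, 0.0, 1.0)            # P_C = P_Q: projection onto [0,1]^2
T = P                                         # illustrative T (k_n = 1, T^n = T)
grad_f = lambda v: A.T @ (A @ v - P(A @ v))   # grad f = A^T (I - P_Q) A
normA2 = np.linalg.norm(A, 2) ** 2            # ||A||^2

x = np.array([5.0, -3.0])
for n in range(200):
    a_n = 1.0 / (n + 1) ** 2                      # alpha_n, summable
    l_n = min(0.9 / normA2, 1.0 / (n + 1) ** 2)   # lambda_n in (0, 1/||A||^2), summable
    g = lambda v: grad_f(v) + a_n * v             # grad f_{alpha_n} = grad f + alpha_n I
    y = P(x - l_n * g(x))                         # prediction step
    u = P(x - l_n * g(y))                         # extragradient correction step
    x = 0.5 * u + 0.5 * T(u)                      # Mann step with beta_n = 1/2
residual = np.linalg.norm(A @ x - P(A @ x))       # distance of Ax from Q
```

For this instance the final iterate lies in C (it is a convex combination of points of C), and since Fix(P_C) = C, any limit in Γ automatically lies in Fix(T) ∩ Γ.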

Proof We first show that P_C(I − λ∇f_α) is ζ-averaged for each λ ∈ (0, 2/(α + ∥A∥²)), where

ζ = (2 + λ(α + ∥A∥²))/4.

Indeed, it is easy to see that ∇f = A*(I − P_Q)A is (1/∥A∥²)-ism, that is,

⟨∇f(x) − ∇f(y), x − y⟩ ≥ (1/∥A∥²)∥∇f(x) − ∇f(y)∥².

Observe that

(α + ∥A∥²)⟨∇f_α(x) − ∇f_α(y), x − y⟩
= (α + ∥A∥²)[α∥x − y∥² + ⟨∇f(x) − ∇f(y), x − y⟩]
= α²∥x − y∥² + α⟨∇f(x) − ∇f(y), x − y⟩ + α∥A∥²∥x − y∥² + ∥A∥²⟨∇f(x) − ∇f(y), x − y⟩
≥ α²∥x − y∥² + 2α⟨∇f(x) − ∇f(y), x − y⟩ + ∥∇f(x) − ∇f(y)∥²
= ∥α(x − y) + ∇f(x) − ∇f(y)∥²
= ∥∇f_α(x) − ∇f_α(y)∥².

Hence, it follows that ∇f_α = αI + A*(I − P_Q)A is (1/(α + ∥A∥²))-ism. Thus, λ∇f_α is (1/(λ(α + ∥A∥²)))-ism. By Proposition 2.5(3), the complement I − λ∇f_α is (λ(α + ∥A∥²)/2)-averaged. Therefore, noting that P_C is (1/2)-averaged and utilizing Proposition 2.6(4), we know that for each λ ∈ (0, 2/(α + ∥A∥²)), P_C(I − λ∇f_α) is ζ-averaged with

ζ = 1/2 + λ(α + ∥A∥²)/2 − (1/2)·(λ(α + ∥A∥²)/2) = (2 + λ(α + ∥A∥²))/4 ∈ (0,1).

This shows that P_C(I − λ∇f_α) is nonexpansive. Furthermore, for {λ_n} ⊂ [a, b] with a, b ∈ (0, 1/∥A∥²), utilizing the fact that lim_{n→∞} 1/(α_n + ∥A∥²) = 1/∥A∥², we may assume that

0 < a ≤ λ_n ≤ b < 1/∥A∥² = lim_{n→∞} 1/(α_n + ∥A∥²), ∀n ≥ 0.

Without loss of generality, we may assume that

0 < a ≤ λ_n ≤ b < 1/(α_n + ∥A∥²), ∀n ≥ 0.

Consequently, it follows that for each integer n ≥ 0, P_C(I − λ_n ∇f_{α_n}) is ζ_n-averaged with

ζ_n = 1/2 + λ_n(α_n + ∥A∥²)/2 − (1/2)·(λ_n(α_n + ∥A∥²)/2) = (2 + λ_n(α_n + ∥A∥²))/4 ∈ (0,1).

This immediately implies that P_C(I − λ_n ∇f_{α_n}) is nonexpansive for all n ≥ 0.

We divide the remainder of the proof into several steps.

Step 1. We will prove that {x_n} is bounded. Indeed, take p ∈ Fix(T) ∩ Γ arbitrarily. Then P_C(I − λ_n ∇f)p = p for λ_n ∈ (0, 1/∥A∥²). Since P_C and I − λ_n ∇f_{α_n} are nonexpansive mappings, we have

∥y_n − p∥ = ∥P_C(I − λ_n ∇f_{α_n})x_n − P_C(I − λ_n ∇f)p∥
≤ ∥P_C(I − λ_n ∇f_{α_n})x_n − P_C(I − λ_n ∇f_{α_n})p∥ + ∥P_C(I − λ_n ∇f_{α_n})p − P_C(I − λ_n ∇f)p∥
≤ ∥x_n − p∥ + ∥(I − λ_n ∇f_{α_n})p − (I − λ_n ∇f)p∥
= ∥x_n − p∥ + ∥(p − λ_n ∇f_{α_n}(p)) − (p − λ_n ∇f(p))∥
= ∥x_n − p∥ + λ_n ∥∇f(p) − ∇f_{α_n}(p)∥
= ∥x_n − p∥ + λ_n ∥∇f(p) − ∇f(p) − α_n p∥
= ∥x_n − p∥ + λ_n α_n ∥p∥
(3.2)

and

∥u_n − p∥ = ∥P_C(x_n − λ_n ∇f_{α_n}(y_n)) − p∥
= ∥P_C(x_n − λ_n ∇f_{α_n}(y_n)) − P_C(I − λ_n ∇f)p∥
≤ ∥(x_n − λ_n ∇f_{α_n}(y_n)) − (p − λ_n ∇f(p))∥
= ∥(x_n − p) + λ_n(∇f(p) − ∇f_{α_n}(y_n))∥
= ∥(x_n − p) + λ_n(∇f(p) − ∇f_{α_n}(p)) + λ_n(∇f_{α_n}(p) − ∇f_{α_n}(y_n))∥
≤ ∥x_n − p∥ + λ_n α_n ∥p∥ + λ_n ∥∇f_{α_n}(p) − ∇f_{α_n}(y_n)∥
≤ ∥x_n − p∥ + λ_n α_n ∥p∥ + λ_n(α_n + ∥A∥²)∥p − y_n∥
(3.3)

Substituting (3.2) into (3.3) and simplifying, we have

∥u_n − p∥ ≤ ∥x_n − p∥ + λ_n α_n ∥p∥ + λ_n(α_n + ∥A∥²)∥p − y_n∥
≤ ∥x_n − p∥ + λ_n α_n ∥p∥ + λ_n(α_n + ∥A∥²)[∥x_n − p∥ + λ_n α_n ∥p∥]
= ∥x_n − p∥ + λ_n α_n ∥p∥ + λ_n α_n ∥x_n − p∥ + λ_n ∥A∥² ∥x_n − p∥ + λ_n² α_n² ∥p∥ + λ_n² α_n ∥A∥² ∥p∥
= (1 + λ_n α_n + λ_n ∥A∥²)∥x_n − p∥ + λ_n α_n ∥p∥ (1 + λ_n α_n + λ_n ∥A∥²)
(3.4)

Since u_n = P_C(x_n − λ_n ∇f_{α_n}(y_n)) for each n ≥ 0, by Proposition 2.2(ii) we have

∥u_n − p∥² ≤ ∥x_n − λ_n ∇f_{α_n}(y_n) − p∥² − ∥x_n − λ_n ∇f_{α_n}(y_n) − u_n∥²
= ∥x_n − p∥² − ∥x_n − u_n∥² + 2λ_n ⟨∇f_{α_n}(y_n), p − u_n⟩
= ∥x_n − p∥² − ∥x_n − u_n∥² + 2λ_n(⟨∇f_{α_n}(y_n) − ∇f_{α_n}(p), p − y_n⟩ + ⟨∇f_{α_n}(p), p − y_n⟩ + ⟨∇f_{α_n}(y_n), y_n − u_n⟩)
≤ ∥x_n − p∥² − ∥x_n − u_n∥² + 2λ_n(⟨∇f_{α_n}(p), p − y_n⟩ + ⟨∇f_{α_n}(y_n), y_n − u_n⟩)
= ∥x_n − p∥² − ∥x_n − u_n∥² + 2λ_n[⟨(α_n I + ∇f)p, p − y_n⟩ + ⟨∇f_{α_n}(y_n), y_n − u_n⟩]
≤ ∥x_n − p∥² − ∥x_n − u_n∥² + 2λ_n[α_n ⟨p, p − u_n⟩ + ⟨∇f_{α_n}(y_n), y_n − u_n⟩]
= ∥x_n − p∥² − ∥(x_n − y_n) + (y_n − u_n)∥² + 2λ_n[α_n ⟨p, p − u_n⟩ + ⟨∇f_{α_n}(y_n), y_n − u_n⟩]
= ∥x_n − p∥² − ∥x_n − y_n∥² − 2⟨x_n − y_n, y_n − u_n⟩ − ∥y_n − u_n∥² + 2λ_n[α_n ⟨p, p − u_n⟩ + ⟨∇f_{α_n}(y_n), y_n − u_n⟩]
= ∥x_n − p∥² − ∥x_n − y_n∥² − ∥y_n − u_n∥² + 2⟨x_n − λ_n ∇f_{α_n}(y_n) − y_n, u_n − y_n⟩ + 2λ_n α_n ⟨p, p − u_n⟩.

Furthermore, by Proposition 2.2(i) we have

⟨x_n − λ_n ∇f_{α_n}(y_n) − y_n, u_n − y_n⟩
= ⟨x_n − λ_n ∇f_{α_n}(x_n) − y_n, u_n − y_n⟩ + ⟨λ_n ∇f_{α_n}(x_n) − λ_n ∇f_{α_n}(y_n), u_n − y_n⟩
≤ ⟨λ_n ∇f_{α_n}(x_n) − λ_n ∇f_{α_n}(y_n), u_n − y_n⟩
≤ λ_n ∥∇f_{α_n}(x_n) − ∇f_{α_n}(y_n)∥ ∥u_n − y_n∥
≤ λ_n(α_n + ∥A∥²)∥x_n − y_n∥ ∥u_n − y_n∥.

So, we obtain

∥u_n − p∥² ≤ ∥x_n − p∥² − ∥x_n − y_n∥² − ∥y_n − u_n∥² + 2λ_n(α_n + ∥A∥²)∥x_n − y_n∥ ∥u_n − y_n∥ + 2λ_n α_n ∥p∥ ∥p − u_n∥.
(3.5)

Since

[λ_n(α_n + ∥A∥²)∥x_n − y_n∥ − ∥u_n − y_n∥]² = λ_n²(α_n + ∥A∥²)²∥x_n − y_n∥² − 2λ_n(α_n + ∥A∥²)∥x_n − y_n∥ ∥u_n − y_n∥ + ∥u_n − y_n∥² ≥ 0,

it follows that

2λ_n(α_n + ∥A∥²)∥x_n − y_n∥ ∥u_n − y_n∥
= λ_n²(α_n + ∥A∥²)²∥x_n − y_n∥² + ∥u_n − y_n∥² − [λ_n(α_n + ∥A∥²)∥x_n − y_n∥ − ∥u_n − y_n∥]²
≤ λ_n²(α_n + ∥A∥²)²∥x_n − y_n∥² + ∥u_n − y_n∥².
(3.6)

Substituting (3.6) into (3.5) and simplifying, we have

∥u_n − p∥² ≤ ∥x_n − p∥² − ∥x_n − y_n∥² − ∥y_n − u_n∥² + λ_n²(α_n + ∥A∥²)²∥x_n − y_n∥² + ∥u_n − y_n∥² + 2λ_n α_n ∥p∥ ∥p − u_n∥
= ∥x_n − p∥² − ∥x_n − y_n∥² + λ_n²(α_n + ∥A∥²)²∥x_n − y_n∥² + 2λ_n α_n ∥p∥ ∥p − u_n∥
= ∥x_n − p∥² + (λ_n²(α_n + ∥A∥²)² − 1)∥x_n − y_n∥² + 2λ_n α_n ∥p∥ ∥p − u_n∥.
(3.7)

Substituting (3.4) into (3.7) and simplifying, we have

∥u_n − p∥² ≤ ∥x_n − p∥² + (λ_n²(α_n + ∥A∥²)² − 1)∥x_n − y_n∥² + 2λ_n α_n ∥p∥[(1 + λ_n α_n + λ_n ∥A∥²)∥x_n − p∥ + λ_n α_n ∥p∥(1 + λ_n α_n + λ_n ∥A∥²)]
= ∥x_n − p∥² + (λ_n²(α_n + ∥A∥²)² − 1)∥x_n − y_n∥² + 2λ_n α_n (1 + λ_n α_n + λ_n ∥A∥²)∥p∥ ∥x_n − p∥ + 2λ_n² α_n² ∥p∥²(1 + λ_n α_n + λ_n ∥A∥²).
(3.8)

Consequently, utilizing Lemma 2.11(ii) and the last relations, we conclude that

∥x_{n+1} − p∥² = ∥β_n u_n + (1 − β_n)T^n u_n − p∥²
= ∥β_n(u_n − p) + (1 − β_n)(T^n u_n − p)∥²
= β_n ∥u_n − p∥² + (1 − β_n)∥T^n u_n − p∥² − β_n(1 − β_n)∥u_n − T^n u_n∥²
≤ β_n ∥u_n − p∥² + (1 − β_n)k_n² ∥u_n − p∥² − β_n(1 − β_n)∥u_n − T^n u_n∥²
= (β_n + (1 − β_n)k_n²)∥u_n − p∥² − β_n(1 − β_n)∥u_n − T^n u_n∥²
≤ (β_n + (1 − β_n)k_n²){∥x_n − p∥² + (λ_n²(α_n + ∥A∥²)² − 1)∥x_n − y_n∥² + 2λ_n α_n (1 + λ_n α_n + λ_n ∥A∥²)∥p∥ ∥x_n − p∥ + 2λ_n² α_n² ∥p∥²(1 + λ_n α_n + λ_n ∥A∥²)} − β_n(1 − β_n)∥u_n − T^n u_n∥²
= (k_n² − β_n(k_n² − 1))∥x_n − p∥² + (k_n² − β_n(k_n² − 1))(λ_n²(α_n + ∥A∥²)² − 1)∥x_n − y_n∥² + 2λ_n α_n (k_n² − β_n(k_n² − 1))(1 + λ_n α_n + λ_n ∥A∥²)∥p∥ ∥x_n − p∥ + 2(k_n² − β_n(k_n² − 1))λ_n² α_n² ∥p∥²(1 + λ_n α_n + λ_n ∥A∥²) − β_n(1 − β_n)∥u_n − T^n u_n∥².
(3.9)

Since lim_{n→∞} k_n = 1 and conditions (i)-(iii) hold, by Lemma 2.8 we deduce that

lim_{n→∞} ∥x_n − p∥ exists for each p ∈ Fix(T) ∩ Γ,
(3.10)

and the sequences { x n }, { u n } and { y n } are bounded. It follows that

∥T^n x_n − p∥ ≤ k_n ∥x_n − p∥.

Hence {∥T^n x_n − p∥} is bounded.

Step 2. We will prove that

lim_{n→∞} ∥u_n − Tu_n∥ = 0.

From (3.9) we have

∥x_{n+1} − p∥² ≤ (k_n² − β_n(k_n² − 1))∥x_n − p∥² + (k_n² − β_n(k_n² − 1))(λ_n²(α_n + ∥A∥²)² − 1)∥x_n − y_n∥² + 2λ_n α_n (k_n² − β_n(k_n² − 1))(1 + λ_n α_n + λ_n ∥A∥²)∥p∥ ∥x_n − p∥ + 2(k_n² − β_n(k_n² − 1))λ_n² α_n² ∥p∥²(1 + λ_n α_n + λ_n ∥A∥²) − β_n(1 − β_n)∥u_n − T^n u_n∥²
≤ (k_n² − β_n(k_n² − 1))∥x_n − p∥² + (k_n² − β_n(k_n² − 1))(λ_n²(α_n + ∥A∥²)² − 1)∥x_n − y_n∥² + α_n(k_n² − β_n(k_n² − 1))M_1 + α_n(k_n² − β_n(k_n² − 1))M_2 − β_n(1 − β_n)∥u_n − T^n u_n∥²
= (k_n² − β_n(k_n² − 1))∥x_n − p∥² − (k_n² − β_n(k_n² − 1))(1 − λ_n²(α_n + ∥A∥²)²)∥x_n − y_n∥² + α_n(k_n² − β_n(k_n² − 1))(M_1 + M_2) − β_n(1 − β_n)∥u_n − T^n u_n∥²,

where M_1 = sup_{n≥0}{2λ_n(1 + λ_n α_n + λ_n ∥A∥²)∥p∥ ∥x_n − p∥} < ∞ and

M_2 = sup_{n≥0}{2λ_n² α_n ∥p∥²(1 + λ_n α_n + λ_n ∥A∥²)} < ∞.

So,

(k_n² − β_n(k_n² − 1))(1 − λ_n²(α_n + ∥A∥²)²)∥x_n − y_n∥² + β_n(1 − β_n)∥u_n − T^n u_n∥² ≤ (k_n² − β_n(k_n² − 1))∥x_n − p∥² − ∥x_{n+1} − p∥² + α_n(k_n² − β_n(k_n² − 1))(M_1 + M_2).

Since lim_{n→∞} k_n = 1, α_n → 0, condition (i) holds and (3.10) is valid, we have

lim_{n→∞} ∥x_n − y_n∥ = lim_{n→∞} ∥u_n − T^n u_n∥ = 0.
(3.11)

Furthermore, we obtain

∥y_n − u_n∥ = ∥P_C(x_n − λ_n ∇f_{α_n}(x_n)) − P_C(x_n − λ_n ∇f_{α_n}(y_n))∥
≤ ∥(x_n − λ_n ∇f_{α_n}(x_n)) − (x_n − λ_n ∇f_{α_n}(y_n))∥
= λ_n ∥∇f_{α_n}(x_n) − ∇f_{α_n}(y_n)∥
≤ λ_n(α_n + ∥A∥²)∥x_n − y_n∥.

This together with (3.11) implies that

lim_{n→∞} ∥y_n − u_n∥ = 0.
(3.12)

Also,

∥x_n − u_n∥ ≤ ∥x_n − y_n∥ + ∥y_n − u_n∥,

together with (3.11) and (3.12) implies that

lim_{n→∞} ∥x_n − u_n∥ = 0.
(3.13)

Combining (3.11) with (3.13), we obtain

lim_{n→∞} ∥x_n − T^n u_n∥ = 0.
(3.14)

Consider

∥x_{n+1} − x_n∥ = ∥β_n u_n + (1 − β_n)T^n u_n − x_n∥ ≤ β_n ∥u_n − x_n∥ + (1 − β_n)∥T^n u_n − x_n∥.

From (3.13) and (3.14), we obtain

∥x_{n+1} − x_n∥ → 0 (as n → ∞).
(3.15)

Next, we will show that (3.11) implies that

lim_{n→∞} ∥u_n − Tu_n∥ = 0.
(3.16)

We compute that

∥y_{n+1} − y_n∥ = ∥P_C(x_{n+1} − λ_{n+1} ∇f_{α_{n+1}}(x_{n+1})) − P_C(x_n − λ_n ∇f_{α_n}(x_n))∥
= ∥P_C(I − λ_{n+1} ∇f_{α_{n+1}})x_{n+1} − P_C(I − λ_n ∇f_{α_n})x_n∥
≤ ∥P_C(I − λ_{n+1} ∇f_{α_{n+1}})x_{n+1} − P_C(I − λ_{n+1} ∇f_{α_{n+1}})x_n∥ + ∥P_C(I − λ_{n+1} ∇f_{α_{n+1}})x_n − P_C(I − λ_n ∇f_{α_n})x_n∥
≤ ∥x_{n+1} − x_n∥ + ∥(I − λ_{n+1} ∇f_{α_{n+1}})x_n − (I − λ_n ∇f_{α_n})x_n∥
= ∥x_{n+1} − x_n∥ + ∥λ_n ∇f_{α_n}(x_n) − λ_{n+1} ∇f_{α_{n+1}}(x_n)∥
= ∥x_{n+1} − x_n∥ + ∥λ_n(∇f + α_n I)x_n − λ_{n+1}(∇f + α_{n+1} I)x_n∥
= ∥x_{n+1} − x_n∥ + ∥(λ_n − λ_{n+1})∇f(x_n) + λ_n(α_n − α_{n+1})x_n + (λ_n − λ_{n+1})α_{n+1} x_n∥
≤ ∥x_{n+1} − x_n∥ + |λ_n − λ_{n+1}| ∥∇f(x_n)∥ + λ_n |α_n − α_{n+1}| ∥x_n∥ + α_{n+1} |λ_n − λ_{n+1}| ∥x_n∥.

From conditions (ii), (iii) and (3.15), we obtain that

∥y_{n+1} − y_n∥ → 0 (as n → ∞)
(3.17)

and

∥u_{n+1} − u_n∥ = ∥P_C(x_{n+1} − λ_{n+1} ∇f_{α_{n+1}}(y_{n+1})) − P_C(x_n − λ_n ∇f_{α_n}(y_n))∥
≤ ∥(x_{n+1} − λ_{n+1} ∇f_{α_{n+1}}(y_{n+1})) − (x_n − λ_n ∇f_{α_n}(y_n))∥
≤ ∥x_{n+1} − x_n∥ + ∥λ_n ∇f_{α_n}(y_n) − λ_{n+1} ∇f_{α_{n+1}}(y_{n+1})∥
= ∥x_{n+1} − x_n∥ + ∥λ_n ∇f(y_n) + λ_n α_n y_n − λ_{n+1} ∇f(y_{n+1}) − λ_{n+1} α_{n+1} y_{n+1}∥
= ∥x_{n+1} − x_n∥ + ∥λ_n(∇f(y_n) − ∇f(y_{n+1})) + (λ_n − λ_{n+1})∇f(y_{n+1}) + λ_n α_n(y_n − y_{n+1}) + (λ_n α_n − λ_{n+1} α_{n+1})y_{n+1}∥
≤ ∥x_{n+1} − x_n∥ + λ_n ∥∇f(y_n) − ∇f(y_{n+1})∥ + |λ_n − λ_{n+1}| ∥∇f(y_{n+1})∥ + λ_n α_n ∥y_n − y_{n+1}∥ + |λ_n α_n − λ_{n+1} α_{n+1}| ∥y_{n+1}∥.

From conditions (ii), (iii), (3.15) and (3.17), we obtain that

∥u_{n+1} − u_n∥ → 0 (as n → ∞).
(3.18)

Since T is uniformly L-Lipschitzian, we have

∥u_n − Tu_n∥ ≤ ∥u_n − u_{n+1}∥ + ∥u_{n+1} − T^{n+1} u_{n+1}∥ + ∥T^{n+1} u_{n+1} − T^{n+1} u_n∥ + ∥T^{n+1} u_n − Tu_n∥
≤ ∥u_n − u_{n+1}∥ + ∥u_{n+1} − T^{n+1} u_{n+1}∥ + L ∥u_n − u_{n+1}∥ + L ∥T^n u_n − u_n∥.

Since lim_{n→∞} ∥u_{n+1} − u_n∥ = 0 and lim_{n→∞} ∥u_n − T^n u_n∥ = 0, it follows that

lim_{n→∞} ∥u_n − Tu_n∥ = 0.
(3.19)

Step 3. We will show that x̂ ∈ Fix(T) ∩ Γ.

We have from (3.11)

∥x_n − y_n∥ → 0 (as n → ∞).
(3.20)

Since ∇f = A*(I − P_Q)A is Lipschitz continuous, from (3.11) we have

lim_{n→∞} ∥∇f(x_n) − ∇f(y_n)∥ = 0.

Since {x_n} is bounded, there is a subsequence {x_{n_i}} of {x_n} that converges weakly to some x̂.

First, we show that x̂ ∈ Γ. Since ∥x_n − y_n∥ → 0, it follows that y_{n_i} ⇀ x̂.

Put

Aw = ∇f(w) + N_C w if w ∈ C, and Aw = ∅ if w ∉ C,

where N_C w = {z ∈ H_1 : ⟨w − v, z⟩ ≥ 0, ∀v ∈ C}. Then A is maximal monotone, and 0 ∈ Aw if and only if w ∈ VI(C, ∇f); see [21] for more details. Let (w, z) ∈ G(A). Then we have

z ∈ Aw = ∇f(w) + N_C w,

and hence

z − ∇f(w) ∈ N_C w.

So, we have

⟨w − v, z − ∇f(w)⟩ ≥ 0, ∀v ∈ C.

On the other hand, from

u_n = P_C(x_n − λ_n ∇f_{α_n}(y_n)) and w ∈ C,

we have

⟨x_n − λ_n ∇f_{α_n}(y_n) − u_n, u_n − w⟩ ≥ 0,

and hence

⟨w − u_n, (u_n − x_n)/λ_n + ∇f_{α_n}(y_n)⟩ ≥ 0.

Therefore, from z − ∇f(w) ∈ N_C w and u_{n_i} ∈ C, it follows that

⟨w − u_{n_i}, z⟩ ≥ ⟨w − u_{n_i}, ∇f(w)⟩
≥ ⟨w − u_{n_i}, ∇f(w)⟩ − ⟨w − u_{n_i}, (u_{n_i} − x_{n_i})/λ_{n_i} + ∇f_{α_{n_i}}(y_{n_i})⟩
= ⟨w − u_{n_i}, ∇f(w)⟩ − ⟨w − u_{n_i}, (u_{n_i} − x_{n_i})/λ_{n_i} + ∇f(y_{n_i})⟩ − α_{n_i} ⟨w − u_{n_i}, y_{n_i}⟩
= ⟨w − u_{n_i}, ∇f(w) − ∇f(u_{n_i})⟩ + ⟨w − u_{n_i}, ∇f(u_{n_i}) − ∇f(y_{n_i})⟩ − ⟨w − u_{n_i}, (u_{n_i} − x_{n_i})/λ_{n_i}⟩ − α_{n_i} ⟨w − u_{n_i}, y_{n_i}⟩
≥ ⟨w − u_{n_i}, ∇f(u_{n_i}) − ∇f(y_{n_i})⟩ − ⟨w − u_{n_i}, (u_{n_i} − x_{n_i})/λ_{n_i}⟩ − α_{n_i} ⟨w − u_{n_i}, y_{n_i}⟩.

Hence, we obtain

⟨w − x̂, z⟩ ≥ 0 as i → ∞.

Since A is maximal monotone, we have x̂ ∈ A^{-1}0, and hence x̂ ∈ VI(C, ∇f). Thus it is clear that x̂ ∈ Γ.

Next, we show that x̂ ∈ Fix(T). Indeed, since y_{n_i} ⇀ x̂ and ∥u_{n_i} − Tu_{n_i}∥ → 0, by (3.16) and Lemma 2.7 we get x̂ ∈ Fix(T). Therefore, we have x̂ ∈ Fix(T) ∩ Γ.

Now we prove that x_n ⇀ x̂ and y_n ⇀ x̂.

Suppose the contrary, and let {x_{n_k}} be another subsequence of {x_n} such that x_{n_k} ⇀ x̄. Then x̄ ∈ Fix(T) ∩ Γ. Let us show that x̂ = x̄. Assume that x̂ ≠ x̄. From the Opial property [22], we have

lim_{n→∞} ∥x_n − x̂∥ = lim inf_{i→∞} ∥x_{n_i} − x̂∥ < lim inf_{i→∞} ∥x_{n_i} − x̄∥
= lim_{n→∞} ∥x_n − x̄∥ = lim inf_{k→∞} ∥x_{n_k} − x̄∥ < lim inf_{k→∞} ∥x_{n_k} − x̂∥
= lim_{n→∞} ∥x_n − x̂∥.

This is a contradiction. Thus we have x̂ = x̄. This implies

x_n ⇀ x̂ ∈ Fix(T) ∩ Γ.

Further, from ∥x_n − y_n∥ → 0 it follows that y_n ⇀ x̂. This shows that both sequences {y_n} and {u_n} converge weakly to x̂ ∈ Fix(T) ∩ Γ. This completes the proof. □

Utilizing Theorem 3.1, we have the following new results in the setting of real Hilbert spaces.

Take T^n ≡ T in Theorem 3.1. Then the conclusion follows.

Corollary 3.2 Let C be a nonempty, closed and convex subset of a real Hilbert space H, and let T : C → C be a uniformly L-Lipschitzian and quasi-nonexpansive mapping with Fix(T) ∩ Γ ≠ ∅. Let {x_n}, {y_n} and {u_n} be the sequences in C generated by the following algorithm:

x_0 = x ∈ C chosen arbitrarily,
y_n = P_C(x_n − λ_n ∇f_{α_n}(x_n)),
u_n = P_C(x_n − λ_n ∇f_{α_n}(y_n)),
x_{n+1} = β_n u_n + (1 − β_n) T u_n,
(3.21)

where ∇f_{α_n} = ∇f + α_n I = A*(I − P_Q)A + α_n I, and the sequences {α_n}, {λ_n} and {β_n} satisfy the following conditions:

  1. (i)

0 < lim inf_{n→∞} β_n ≤ lim sup_{n→∞} β_n < 1,

  2. (ii)

{λ_n} ⊂ (0, 2/∥A∥²) and ∑_{n=1}^∞ λ_n < ∞,

  3. (iii)

∑_{n=1}^∞ α_n < ∞.

Then the sequence {x_n} converges weakly to an element x̂ ∈ Fix(T) ∩ Γ.

Take T^n ≡ I (the identity mapping) in Theorem 3.1. Then the conclusion follows.

Corollary 3.3 Let C be a nonempty, closed and convex subset of a real Hilbert space H, and let T : C → C be a uniformly L-Lipschitzian mapping with Fix(T) ∩ Γ ≠ ∅. Let {x_n}, {y_n} and {u_n} be the sequences in C generated by the following algorithm:

x_0 = x ∈ C chosen arbitrarily,
y_n = P_C(x_n − λ_n ∇f_{α_n}(x_n)),
u_n = P_C(x_n − λ_n ∇f_{α_n}(y_n)),
x_{n+1} = β_n u_n + (1 − β_n) T^n u_n,
(3.22)

where ∇f_{α_n} = ∇f + α_n I = A*(I − P_Q)A + α_n I, and the sequences {α_n}, {λ_n} and {β_n} satisfy the following conditions:

  1. (i)

0 < lim inf_{n→∞} β_n ≤ lim sup_{n→∞} β_n < 1,

  2. (ii)

{λ_n} ⊂ (0, 2/∥A∥²),

  3. (iii)

∑_{n=1}^∞ α_n < ∞.

Then the sequence {x_n} converges weakly to an element x̂ ∈ Fix(T) ∩ Γ.

Remark 3.4 Theorem 3.1 improves and extends [[8], Theorem 5.7] in the following respects:

  1. (a)

The iterative algorithm of [[8], Theorem 5.7] is extended to develop our Mann's type extragradient algorithm in Theorem 3.1.

  2. (b)

The technique of proving weak convergence in Theorem 3.1 is different from that in [[8], Theorem 5.7] because our technique uses asymptotically quasi-nonexpansive mappings and the properties of maximal monotone mappings.

  3. (c)

The problem of finding a common element of Fix(T) ∩ Γ for asymptotically quasi-nonexpansive mappings is more general than the problem of finding a fixed point of a nonexpansive mapping and the problem of finding a solution of the SFP in [[8], Theorem 5.7].

References

  1. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. 10.1007/BF02142692
  2. Byrne C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18: 441–453. 10.1088/0266-5611/18/2/310
  3. Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353–2365. 10.1088/0031-9155/51/10/001
  4. Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications. Inverse Probl. 2005, 21: 2071–2084. 10.1088/0266-5611/21/6/017
  5. Censor Y, Motova A, Segal A: Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327: 1244–1256. 10.1016/j.jmaa.2006.05.010
  6. Deepho J, Kumam P: A modified Halpern's iterative scheme for solving split feasibility problems. Abstr. Appl. Anal. 2012, Article ID 876069.
  7. Sunthrayuth P, Cho YJ, Kumam P: General iterative algorithms approach to variational inequalities and minimum-norm fixed point for minimization and split feasibility problems. Opsearch 2013, in press. 10.1007/s12597-013-0150-5
  8. Xu HK: A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22: 2021–2034. 10.1088/0266-5611/22/6/007
  9. Yang Q: The relaxed CQ algorithm for solving the split feasibility problem. Inverse Probl. 2004, 20: 1261–1266. 10.1088/0266-5611/20/4/014
  10. Zhao J, Yang Q: Several solution methods for the split feasibility problem. Inverse Probl. 2005, 21: 1791–1799. 10.1088/0266-5611/21/5/017
  11. Korpelevich GM: An extragradient method for finding saddle points and for other problems. Èkon. Mat. Metody 1976, 12: 747–756.
  12. Phiangsungnoen S, Kumam P: A hybrid extragradient method for solving Ky Fan inequalities, variational inequalities and fixed point problems. II. In: Proceedings of the International MultiConference of Engineers and Computer Scientists 2013, pp. 1042–1047.
  13. Ceng LC, Ansari QH, Yao JC: An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 2012, 64(4): 633–642. 10.1016/j.camwa.2011.12.074
  14. Chang S: Split feasibility problems for total quasi-asymptotically nonexpansive mappings. Fixed Point Theory Appl. 2012, Article ID 151. 10.1186/1687-1812-2012-151
  15. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120.
  16. Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53(5–6): 475–504. 10.1080/02331930412331327157
  17. Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics 28. Cambridge University Press, Cambridge, 1990.
  18. Tan KK, Xu HK: Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178: 301–308. 10.1006/jmaa.1993.1309
  19. Marino G, Xu HK: Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces. J. Math. Anal. Appl. 2007, 329: 336–346. 10.1016/j.jmaa.2006.06.055
  20. Nakajo K, Takahashi W: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 2003, 279: 372–379. 10.1016/S0022-247X(02)00458-4
  21. Rockafellar RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 1970, 149: 75–88. 10.1090/S0002-9947-1970-0282272-5
  22. Opial Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73: 591–597. 10.1090/S0002-9904-1967-11761-0


Acknowledgements

The authors thank the referees for comments and suggestions on this manuscript. The first author was supported by the Thailand Research Fund through the Royal Golden Jubilee Ph.D. Program (Grant No. PHD/0033/2554) and the King Mongkut’s University of Technology Thonburi.

Author information


Corresponding author

Correspondence to Poom Kumam.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Deepho, J., Kumam, P. Mann’s type extragradient for solving split feasibility and fixed point problems of Lipschitz asymptotically quasi-nonexpansive mappings. Fixed Point Theory Appl 2013, 349 (2013). https://doi.org/10.1186/1687-1812-2013-349


Keywords

  • split feasibility problems
  • fixed point problems
  • extragradient methods
  • asymptotically quasi-nonexpansive mappings
  • maximal monotone mappings