
Variable KM-like algorithms for fixed point problems and split feasibility problems

Abstract

We propose a variable KM-like method for approximating common fixed points of a possibly countably infinite family of nonexpansive mappings in a Hilbert space, and we prove that it converges strongly to a common fixed point of the family. Our variable KM-like technique is applied to solve the split feasibility problem and the multiple-sets split feasibility problem. In particular, the minimum norm solutions of the split feasibility problem and the multiple-sets split feasibility problem are derived. Our results can be viewed as an improvement and refinement of previously known results.

MSC:47H10, 65J20, 65J22, 65J25.

1 Introduction

Problems of image reconstruction from projections can be represented by a system of linear equations

Ax=b.
(1.1)

In practice, the system (1.1) is often inconsistent, and one usually seeks a point $x \in \mathbb{R}^n$ that is optimal with respect to some predetermined criterion. The problem is frequently ill-posed, and there may be more than one optimal solution. The standard approach to dealing with that problem is via regularization. The well-known convex feasibility problem is the following:

to find a point $x \in \bigcap_{i=1}^{m} C_i$,

where $m \geq 1$ is an integer, and each $C_i$ is a nonempty closed convex subset of a Hilbert space $H$. A special case of the convex feasibility problem is the split feasibility problem given by:

Let $C$, $Q$ be nonempty closed convex subsets of Hilbert spaces $H_1$ and $H_2$, respectively, and let $A : H_1 \to H_2$ be a bounded linear operator. The split feasibility problem (SFP) is

to find a point $x \in C$ such that $Ax \in Q$.
(1.2)

The SFP is said to be consistent if (1.2) has a solution. It is easy to see that SFP (1.2) is consistent if and only if the following fixed point problem has a solution:

find $x \in C$ such that $x = P_C\big(I - \gamma A^*(I - P_Q)A\big)x$,
(1.3)

where $P_C$ and $P_Q$ are the projections onto $C$ and $Q$, respectively, and $A^*$ is the adjoint of $A$. Let $L$ denote the spectral radius of $A^*A$. It is well known that if $\gamma \in (0, 2/L)$, the operator $T = P_C\big(I - \gamma A^*(I - P_Q)A\big)$ in the operator equation (1.3) is nonexpansive [1].
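For intuition, the nonexpansivity of $T$ for $\gamma \in (0, 2/L)$ can be checked numerically in a finite-dimensional sketch. The box-shaped sets $C$ and $Q$ (so that the projections reduce to coordinate-wise clipping) and the random matrix $A$ below are illustrative assumptions, not part of the statement:

```python
import numpy as np

rng = np.random.default_rng(0)

A = rng.standard_normal((3, 4))           # bounded linear operator A : R^4 -> R^3
L = np.linalg.norm(A, 2) ** 2             # spectral radius of A^T A = ||A||_2^2
gamma = 1.0 / L                           # any gamma in (0, 2/L) works

proj_C = lambda x: np.clip(x, -1.0, 1.0)  # projection onto the box C = [-1, 1]^4
proj_Q = lambda y: np.clip(y, 0.0, 2.0)   # projection onto the box Q = [0, 2]^3

def T(x):
    # T = P_C (I - gamma * A^T (I - P_Q) A), cf. (1.3)
    r = A @ x
    return proj_C(x - gamma * A.T @ (r - proj_Q(r)))

# empirical check of ||Tx - Ty|| <= ||x - y|| on random pairs
max_ratio = max(
    np.linalg.norm(T(x) - T(y)) / np.linalg.norm(x - y)
    for x, y in (rng.standard_normal((2, 4)) * 5 for _ in range(200))
)
```

Every sampled ratio stays at or below one, consistent with nonexpansivity.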

The SFP has been extensively studied during the last decade because of its applications in modeling inverse problems which arise in phase retrieval and in medical image reconstruction. It has also been applied to modeling intensity-modulated radiation therapy; see, for example, [2–7] and the references therein.

Several iterative methods have been proposed and analyzed to solve the SFP (1.2); see, for example, [1, 3, 6, 8–14] and the references therein. Byrne [3] introduced the CQ algorithm

$x_{n+1} = T x_n, \quad n \in \mathbb{N}$,
(1.4)

and proved that the sequence $\{x_n\}$ generated by the CQ algorithm (1.4) converges weakly to a solution of SFP (1.2), where $T = P_C\big(I - \gamma A^*(I - P_Q)A\big)$ and $0 < \gamma < 2/L$.
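As a concrete illustration of the CQ algorithm (1.4), the following sketch solves a small SFP in which $C$ and $Q$ are boxes (an assumption made here so that $P_C$ and $P_Q$ reduce to clipping):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 2.0]])               # A : R^2 -> R^2
L = np.linalg.norm(A, 2) ** 2            # = 4, spectral radius of A^T A
gamma = 0.4                              # in (0, 2/L) = (0, 0.5)

proj_C = lambda x: np.clip(x, 0.0, 1.0)  # C = [0, 1]^2
proj_Q = lambda y: np.clip(y, 1.0, 3.0)  # Q = [1, 3]^2

x = np.zeros(2)                          # x_1
for _ in range(200):                     # x_{n+1} = T x_n, cf. (1.4)
    r = A @ x
    x = proj_C(x - gamma * A.T @ (r - proj_Q(r)))

# the limit point satisfies x in C and A x in Q (the SFP is consistent here)
residual = np.linalg.norm(A @ x - proj_Q(A @ x))
```

In this consistent toy instance the residual of the constraint $Ax \in Q$ vanishes numerically.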

In view of the fixed point formulation (1.3) of the SFP (1.2), Xu [1] and Yang [14] applied the following perturbed Krasnosel’skiĭ-Mann CQ algorithm to solve the SFP (1.2):

$x_{n+1} = (1 - \alpha_n) x_n + \alpha_n T_n x_n, \quad n \in \mathbb{N}$.
(1.5)

Here { T n } is a sequence of operators defined by

$T_n = P_{C_n}\big(I - \gamma A^*(I - P_{Q_n})A\big), \quad n \in \mathbb{N}$,

where { C n } and { Q n } are sequences of nonempty closed convex subsets in H 1 and H 2 , respectively, which obey the following assumption:

(C0) $\sum_{n=1}^{\infty} \alpha_n d_\rho(C_n, C) < \infty$ and $\sum_{n=1}^{\infty} \alpha_n d_\rho(Q_n, Q) < \infty$ for each $\rho > 0$, where $d_\rho$ denotes the $\rho$-distance between the respective sets (see Section 3.2).

Condition (C0) is not easy to verify for every $\rho > 0$; it is thus quite restrictive even for weak convergence of the sequence $\{x_n\}$ defined by (1.5). One of our objectives is to relax this condition.

Many practical problems can be formulated as a fixed point problem (FPP): finding an element x such that

x=Tx,
(1.6)

where $T$ is a nonexpansive self-mapping defined on a closed convex subset $C$ of a Hilbert space $H$. The solution set of FPP (1.6) is denoted by $F(T)$. It is well known that if $F(T) \neq \emptyset$, then $F(T)$ is closed and convex. The fixed point problem (1.6) is ill-posed in general (it may fail to have a solution, or the solution may fail to be unique). Regularization by contractions can remove this ill-posedness. We replace the nonexpansive mapping $T$ by a family of contractions $T_t^f := t f + (1-t)T$, with $t \in (0,1)$ and $f : C \to C$ a fixed contraction. We call $f$ an anchoring function. The regularized fixed point problem for $T$ is the fixed point problem for $T_t^f$. The mapping $T_t^f$ has a unique fixed point, namely, $x_t \in C$. Therefore, $x_t$ is the solution of the fixed point problem

$x_t = t f x_t + (1-t) T x_t$.
(1.7)

We now discretize the regularization (1.7) to define an explicit iterative algorithm:

$x_{n+1} = t_n f x_n + (1 - t_n) T x_n, \quad n \in \mathbb{N}$.
(1.8)

The iterative algorithm (1.8) is due to Moudafi [15], who introduced viscosity approximation methods by generalizing Browder's and Halpern's methods. Suzuki [16] established a strong convergence theorem by applying Halpern's method to the averaged mapping $T_\lambda = \lambda I + (1-\lambda)T$, $\lambda \in (0,1)$, for nonexpansive mappings $T$ in certain Banach spaces. Takahashi [17] proved a strong convergence theorem for the following iterative algorithm for countable families of nonexpansive mappings in certain Banach spaces:

$x_{n+1} = t_n f x_n + (1 - t_n) T_n x_n, \quad n \in \mathbb{N}$.
(1.9)

Recently, Yao and Xu [18] introduced and studied strong convergence of the following modified methods:

$x_{n+1} = P_C\big[t_n f x_n + (1 - t_n) T x_n\big], \quad n \in \mathbb{N}$,
(1.10)

where $f : C \to H$ is a fixed non-self contraction and $\{t_n\}$ is a sequence in $(0,1)$ satisfying the conditions:

(S1) $\lim_{n \to \infty} t_n = 0$ and $\sum_{n=1}^{\infty} t_n = \infty$,

(S2) either $\sum_{n=1}^{\infty} |t_{n+1} - t_n| < \infty$ or $t_{n+1}/t_n \to 1$ as $n \to \infty$.

One can easily see that (1.10) is a regularized iterative algorithm.
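To see a regularized iteration of type (1.10) in action, consider the illustrative choices $C = [0,1]$, $Tx = 1 - x$ (nonexpansive, with $F(T) = \{1/2\}$), $f \equiv 0$ (a 0-contraction), and $t_n = 1/(n+1)$, which satisfies (S1) and (S2); all of these concrete choices are assumptions made for the sketch, and the iterates then approach $P_{F(T)} f(x^*) = 1/2$:

```python
# Sketch of the regularized iteration (1.10) with illustrative choices:
# C = [0, 1], T x = 1 - x (nonexpansive, F(T) = {1/2}), f = 0, t_n = 1/(n+1).
def proj_C(y):
    return min(max(y, 0.0), 1.0)

def T(x):
    return 1.0 - x

def f(x):
    return 0.0

x = 0.0                                  # x_1
for n in range(1, 2001):
    t = 1.0 / (n + 1)
    # x_{n+1} = P_C[t_n f x_n + (1 - t_n) T x_n], cf. (1.10)
    x = proj_C(t * f(x) + (1 - t) * T(x))
```

Note that plain Picard iteration $x_{n+1} = T x_n$ would oscillate here; the vanishing regularization is what forces convergence to the fixed point $1/2$.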

Motivated by [1, 11, 14], we study the following more general non-regularized algorithm, called the variable KM-like algorithm, which generates a sequence $\{x_n\}$ according to the recursive formula:

\[
\begin{cases}
x_1 \in C, \\
y_n = \alpha_n f_n x_n + (1 - \alpha_n) T_n x_n, \\
x_{n+1} = (1 - \beta_n) x_n + \beta_n P_C[y_n] \quad \text{for all } n \in \mathbb{N},
\end{cases}
\]

where $\{\alpha_n\}$ and $\{\beta_n\}$ are sequences in $(0,1)$, $\{T_n\}$ is a sequence of nonexpansive self-mappings of $C$, and $\{f_n\}$ is a sequence of (not necessarily contractive) mappings from $C$ into $H$.

In the present paper, we will study the strong convergence of the proposed variable KM-like algorithm in the framework of Hilbert spaces. The paper is organized as follows. The next section contains preliminaries. In Section 3, we will study the convergence analysis of our variable KM-like algorithm for fixed point problem (1.6) without the assumption (S2). This result will be applied to prove convergence of some perturbed algorithms for the SFP (1.2) and the multiple-sets split feasibility problem under some weaker assumptions. As special cases, we obtain algorithms which converge strongly to the minimum norm solutions of the split feasibility problem and the multiple-sets split feasibility problem. Our results are new and interesting in the following contexts:

  1. (i)

    Our algorithm (3.1) is not regularized by contractions.

  2. (ii)

$f_n$ is not necessarily a contraction. In the existing literature, the anchoring function $f$ is a fixed contraction mapping [15, 17–19] or a strongly pseudocontractive mapping [20].

  3. (iii)

    In the convergence analysis of (3.1) for fixed point problem (1.6), the assumption (S2) is not required.

  4. (iv)

    A fixed ρ>0 for a (C0)-like condition is adopted.

2 Preliminaries

Let $C$ be a nonempty subset of a Hilbert space $H$. Throughout the paper, we denote by $B_r[x]$ the closed ball defined by $B_r[x] = \{y \in C : \|y - x\| \leq r\}$. Let $T_1, T_2 : C \to H$ be two mappings. We denote by $\mathcal{B}(C)$ the collection of all bounded subsets of $C$. The deviation between $T_1$ and $T_2$ on $B \in \mathcal{B}(C)$ [21], denoted by $D_B(T_1, T_2)$, is defined by

$D_B(T_1, T_2) = \sup\{\|T_1 x - T_2 x\| : x \in B\}$.

Let $T : C \to H$ be a mapping. Then $T$ is said to be a $\kappa$-contraction if there exists $\kappa \in [0,1)$ such that $\|Tx - Ty\| \leq \kappa\|x - y\|$ for all $x, y \in C$. Furthermore, it is called nonexpansive if $\|Tx - Ty\| \leq \|x - y\|$ for all $x, y \in C$.

Let $\{f_n\}$ be a sequence of mappings from $C$ into $H$. Following [20–22], we say $\{f_n\}$ is a sequence of nearly contraction mappings with sequence $\{(k_n, a_n)\}$ if there exist a sequence $\{k_n\}$ in $[0,1)$ and a sequence $\{a_n\}$ in $[0,\infty)$ with $a_n \to 0$ such that

$\|f_n x - f_n y\| \leq k_n \|x - y\| + a_n$ for all $x, y \in C$ and $n \in \mathbb{N}$.

One can observe that a sequence of contraction mappings is essentially a sequence of nearly contraction mappings.

We now construct a sequence of nearly contractions.

Example 2.1 Let H=R and C=[0,1]. Let { f n } be a sequence of mappings f n :CH defined by

\[
f_n(x) =
\begin{cases}
\dfrac{x}{n+1}, & \text{if } 0 \leq x \leq \frac{1}{2}; \\[4pt]
\dfrac{3}{n+1}, & \text{if } \frac{1}{2} < x \leq 1.
\end{cases}
\]
(2.1)

Set $k_n := \frac{1}{n+1}$. We consider the following cases:

Case 1: If x,y[0, 1 2 ], then

$f_n x - f_n y = k_n (x - y)$ for all $n \in \mathbb{N}$.

Case 2: If x,y( 1 2 ,1], then

$f_n x - f_n y = 0$ for all $n \in \mathbb{N}$.

Case 3: If x[0, 1 2 ] and y( 1 2 ,1], then

\[
|f_n x - f_n y| = \left|\frac{x}{n+1} - \frac{3}{n+1}\right| \leq \frac{1}{n+1}|x - y| + \frac{1}{n+1}|y - 3| \leq k_n |x - y| + \frac{5}{2(n+1)}, \quad n \in \mathbb{N}.
\]

Therefore, for all x,y[0,1], we have

$|f_n x - f_n y| \leq k_n |x - y| + a_n$ for all $n \in \mathbb{N}$,

where $a_n := \frac{5}{2(n+1)}$. Therefore, $\{f_n\}$ is a sequence of nearly contraction mappings with sequence $\{(k_n, a_n)\}$.
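The bound of Example 2.1 can also be checked numerically on a grid; the grid resolution and tolerance below are arbitrary choices for the sketch:

```python
# Numerical check of the nearly-contraction bound from Example 2.1:
# |f_n x - f_n y| <= k_n |x - y| + a_n with k_n = 1/(n+1), a_n = 5/(2(n+1)).
def f(n, x):
    return x / (n + 1) if x <= 0.5 else 3.0 / (n + 1)

ok = True
grid = [i / 200.0 for i in range(201)]   # sample points in C = [0, 1]
for n in range(1, 6):
    k_n = 1.0 / (n + 1)
    a_n = 5.0 / (2 * (n + 1))
    for x in grid:
        for y in grid:
            if abs(f(n, x) - f(n, y)) > k_n * abs(x - y) + a_n + 1e-12:
                ok = False
```

The bound is tight near the jump at $x = 1/2$, which is why the additive term $a_n$ (rather than a pure contraction constant) is needed.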

Let C be a nonempty closed convex subset of a Hilbert space H. We use P C to denote the (metric) projection from H onto C; namely, for xH, P C (x) is the unique point in C with the property

$\|x - P_C(x)\| = \inf\{\|x - z\| : z \in C\}$.

The following is a useful characterization of projections.

Lemma 2.2 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and let $P_C$ be the metric projection from $H$ onto $C$. Let $x \in H$ and $z \in C$. Then $z = P_C(x)$ if and only if

$\langle x - z, y - z \rangle \leq 0$ for all $y \in C$.
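Lemma 2.2 can be illustrated numerically when $C$ is a box in $\mathbb{R}^3$, so that $P_C$ is coordinate-wise clipping (an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
lo, hi = np.zeros(3), np.ones(3)         # C = [0, 1]^3, a closed convex box
proj_C = lambda x: np.clip(x, lo, hi)

ok = True
for _ in range(500):
    x = rng.standard_normal(3) * 3       # arbitrary point of H = R^3
    z = proj_C(x)
    y = rng.uniform(lo, hi)              # arbitrary point of C
    if np.dot(x - z, y - z) > 1e-10:     # Lemma 2.2: <x - z, y - z> <= 0
        ok = False
```

Geometrically, the residual $x - P_C(x)$ makes an obtuse (or right) angle with every direction $y - P_C(x)$ pointing into $C$.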

Lemma 2.3 [[23], Corollary 5.6.4]

Let $C$ be a nonempty closed convex subset of $H$ and $T : C \to C$ a nonexpansive mapping. Then $I - T$ is demiclosed at zero; that is, if $\{x_n\}$ is a sequence in $C$ converging weakly to $x$ and $\{(I-T)x_n\}$ converges strongly to zero, then $(I-T)x = 0$.

Lemma 2.4 [24]

Let $\{a_n\}$ and $\{c_n\}$ be two sequences of nonnegative real numbers and let $\{b_n\}$ be a sequence in $\mathbb{R}$ satisfying the following condition:

$a_{n+1} \leq (1 - \alpha_n) a_n + b_n + c_n$ for all $n \in \mathbb{N}$,

where $\{\alpha_n\}$ is a sequence in $(0,1]$. Assume that $\sum_{n=1}^{\infty} c_n < \infty$. Then the following statements hold:

  1. (a)

If $b_n \leq K \alpha_n$ for all $n \in \mathbb{N}$ and for some $K \geq 0$, then

    $a_{n+1} \leq \delta_n a_1 + (1 - \delta_n) K + \sum_{j=1}^{n} c_j$ for all $n \in \mathbb{N}$,

    where $\delta_n = \prod_{j=1}^{n} (1 - \alpha_j)$, and hence $\{a_n\}$ is bounded.

  2. (b)

If $\sum_{n=1}^{\infty} \alpha_n = \infty$ and $\limsup_{n \to \infty} (b_n/\alpha_n) \leq 0$, then $\{a_n\}_{n=1}^{\infty}$ converges to zero.

3 Convergence analysis of a variable KM-like algorithm

First, we prove the following.

Proposition 3.1 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, $T : C \to C$ a nonexpansive mapping with $F(T) \neq \emptyset$, and $f : C \to H$ a $\kappa$-contraction. Then there exists a unique point $x^* \in C$ such that $x^* = P_{F(T)} f(x^*)$.

Proof Since $f : C \to H$ is a $\kappa$-contraction, it follows that $P_{F(T)} f$ is a $\kappa$-contraction of $C$ into itself. By the Banach contraction principle, there exists a unique point $x^* \in C$ such that $x^* = P_{F(T)} f(x^*)$. □

3.1 A variable KM-like algorithm

Let C be a nonempty closed convex subset of a real Hilbert space H. Let { f n } be a sequence of nearly contractions from C into H such that { f n } converges pointwise to f and let { T n } be a sequence of nonexpansive self-mappings of C which are viewed as perturbations. For computing a common fixed point of the sequence { T n } of nonexpansive mappings, we propose the following variable KM-like algorithm:

\[
\begin{cases}
x_1 \in C, \\
y_n = \alpha_n f_n x_n + (1 - \alpha_n) T_n x_n, \\
x_{n+1} = (1 - \beta_n) x_n + \beta_n P_C[y_n] \quad \text{for all } n \in \mathbb{N},
\end{cases}
\]
(3.1)

where { α n } and { β n } are sequences in [0,1].

We investigate the asymptotic behavior of the sequence $\{x_n\}$ generated from an arbitrary $x_1 \in C$ by the algorithm (3.1), and its convergence to a common fixed point of the sequence $\{T_n\}$.
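A direct transcription of (3.1) might look as follows; the sequences are passed in as callables, and the concrete choices in the demonstration ($T_n = T$ a contraction on $C = [0,1]$ with $F(T) = \{1/2\}$ and $f_n = f \equiv 0$, so that $x^* = P_{F(T)}(0) = 1/2$) are illustrative assumptions:

```python
def variable_km(x1, f_seq, T_seq, alpha, beta, proj_C, n_iter):
    # Algorithm (3.1): y_n = a_n f_n x_n + (1 - a_n) T_n x_n,
    #                  x_{n+1} = (1 - b_n) x_n + b_n P_C[y_n].
    x = x1
    for n in range(1, n_iter + 1):
        y = alpha(n) * f_seq(n)(x) + (1 - alpha(n)) * T_seq(n)(x)
        x = (1 - beta(n)) * x + beta(n) * proj_C(y)
    return x

# demonstration with illustrative choices on C = [0, 1]:
# T x = (x + 1/2)/2 is nonexpansive with F(T) = {1/2}, and f_n = f = 0.
x_lim = variable_km(
    x1=0.0,
    f_seq=lambda n: (lambda x: 0.0),
    T_seq=lambda n: (lambda x: 0.5 * x + 0.25),
    alpha=lambda n: 1.0 / (n + 1),       # satisfies (C1)
    beta=lambda n: 0.5,                  # satisfies (C2)
    proj_C=lambda y: min(max(y, 0.0), 1.0),
    n_iter=2000,
)
```

With these choices, (C3)–(C5) hold trivially and the iterates approach $x^* = 1/2$.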

Theorem 3.2 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, let $T : C \to C$ be a nonexpansive mapping such that $F(T) \neq \emptyset$, and let $f : C \to H$ be a $\kappa$-contraction with $\kappa \in [0,1)$ such that $P_{F(T)} f(x^*) = x^* \in F(T)$. Let $\{f_n\}$ be a sequence of nearly contraction mappings from $C$ into $H$ with sequence $\{(k_n, a_n)\}$ in $[0,1) \times [0,\infty)$ such that $k_n \to \kappa$, and let $\{T_n\}$ be a sequence of nonexpansive mappings from $C$ into itself. For given $x_1 \in C$, let $\{x_n\}$ be the sequence in $C$ generated by (3.1), where $\{\alpha_n\}$ is a sequence in $(0,1]$ and $\{\beta_n\}$ is a sequence in $(0,1)$. Assume that the following conditions are satisfied:

(C1) $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$,

(C2) $0 < \liminf_{n \to \infty} \beta_n \leq \limsup_{n \to \infty} \beta_n < 1$,

(C3) $\lim_{n \to \infty} f_n x^* = f x^*$,

(C4) $\sum_{n=1}^{\infty} (1 - \alpha_n) \|T_n x^* - x^*\| < \infty$.

Define

\[
R := \max\{\|x_1 - x^*\|, K\} + \sum_{n=1}^{\infty} (1 - \alpha_n)\|T_n x^* - x^*\| \quad \text{and} \quad K := \sup_{n \in \mathbb{N}} \frac{\|f_n x^* - x^*\| + a_n}{1 - k_n}.
\]
(3.2)

Then the following statements hold:

  1. (a)

    The sequence { x n } generated by (3.1) remains in the closed ball B R [ x ].

  2. (b)

    If the following assumption holds:

(C5) $\lim_{n \to \infty} \|T_n v_n - T v_n\| = 0$ for every sequence $\{v_n\}$ in $B_R[x^*]$,

then { x n } converges strongly to x .

Proof (a) Set $z_n := P_C[y_n]$. Observe that

\[
\begin{aligned}
\|z_n - x^*\| &= \|P_C[y_n] - P_C[x^*]\| \leq \|y_n - x^*\| \\
&\leq \alpha_n \|f_n x_n - x^*\| + (1 - \alpha_n)\|T_n x_n - x^*\| \\
&\leq \alpha_n \big(\|f_n x_n - f_n x^*\| + \|f_n x^* - x^*\|\big) + (1 - \alpha_n)\big(\|T_n x_n - T_n x^*\| + \|T_n x^* - x^*\|\big) \\
&\leq \alpha_n \big(k_n \|x_n - x^*\| + \|f_n x^* - x^*\| + a_n\big) + (1 - \alpha_n)\big(\|x_n - x^*\| + \|T_n x^* - x^*\|\big) \\
&= \big(1 - (1 - k_n)\alpha_n\big)\|x_n - x^*\| + \alpha_n\big(\|f_n x^* - x^*\| + a_n\big) + (1 - \alpha_n)\|T_n x^* - x^*\|.
\end{aligned}
\]

From (3.1), we have

\[
\begin{aligned}
\|x_{n+1} - x^*\| &= \|(1 - \beta_n)(x_n - x^*) + \beta_n (z_n - x^*)\| \\
&\leq (1 - \beta_n)\|x_n - x^*\| + \beta_n \|z_n - x^*\| \\
&\leq (1 - \beta_n)\|x_n - x^*\| + \beta_n \big[\big(1 - (1 - k_n)\alpha_n\big)\|x_n - x^*\| + \alpha_n\big(\|f_n x^* - x^*\| + a_n\big) + (1 - \alpha_n)\|T_n x^* - x^*\|\big] \\
&= \big(1 - (1 - k_n)\alpha_n \beta_n\big)\|x_n - x^*\| + \alpha_n \beta_n\big(\|f_n x^* - x^*\| + a_n\big) + (1 - \alpha_n)\beta_n\|T_n x^* - x^*\| \\
&\leq \big(1 - (1 - k_n)\alpha_n \beta_n\big)\|x_n - x^*\| + (1 - k_n)\alpha_n \beta_n K + (1 - \alpha_n)\|T_n x^* - x^*\| \\
&\leq \max\{\|x_1 - x^*\|, K\} + \sum_{j=1}^{n} (1 - \alpha_j)\|T_j x^* - x^*\|.
\end{aligned}
\]

Since $\sum_{n=1}^{\infty} (1 - \alpha_n)\|T_n x^* - x^*\| < \infty$, by Lemma 2.4(a), we find that $\{\|x_n - x^*\|\}$ is bounded. Moreover,

$\|x_{n+1} - x^*\| \leq \max\{\|x_1 - x^*\|, K\} + \sum_{n=1}^{\infty} (1 - \alpha_n)\|T_n x^* - x^*\| = R, \quad n \in \mathbb{N}$.

Therefore, { x n } is well defined in the ball B R [ x ].

(b) Assume that $\lim_{n \to \infty} \|T_n v_n - T v_n\| = 0$ for every sequence $\{v_n\}$ in $B_R[x^*]$. Set $\gamma_n := \langle f x^* - x^*, z_n - x^* \rangle$. We now proceed with the following steps:

Step 1: { f n x n } and { T n x n } are bounded.

Without loss of generality, we may assume that $\beta \leq \beta_n$ for all $n \in \mathbb{N}$ for some $\beta > 0$. From (C4), we have

$\sum_{n=1}^{\infty} (1 - \alpha_n)\|T_n x^* - x^*\| < \infty$,

which implies that $\lim_{n \to \infty} (1 - \alpha_n)\|T_n x^* - x^*\| = 0$. Since $\alpha_n \to 0$, it follows that

$\lim_{n \to \infty} \|T_n x^* - x^*\| = 0$.

Since

$\|T_n x_n - x^*\| \leq \|T_n x_n - T_n x^*\| + \|T_n x^* - x^*\| \leq \|x_n - x^*\| + \|T_n x^* - x^*\|$,

and $\{\|T_n x^* - x^*\|\}$ converges to 0, we conclude that $\{T_n x_n\}$ is bounded. Moreover, since

$\|f_n x_n - x^*\| \leq \|f_n x_n - f_n x^*\| + \|f_n x^* - x^*\| \leq k_n \|x_n - x^*\| + \|f_n x^* - x^*\| + a_n$,

it follows that { f n x n } is bounded.

Step 2: lim n x n x n + 1 =0.

Set $u_n := f_n x_n$. We write

$x_{n+1} = (1 - \beta_n) x_n + \beta_n z_n$.

Observe that

\[
\begin{aligned}
\|z_{n+1} - z_n\| &\leq \|y_{n+1} - y_n\| \\
&= \|\alpha_{n+1} u_{n+1} + (1 - \alpha_{n+1}) T_{n+1} x_{n+1} - \big(\alpha_n u_n + (1 - \alpha_n) T_n x_n\big)\| \\
&= \|\alpha_{n+1} u_{n+1} - \alpha_n u_n + (1 - \alpha_{n+1})(T_{n+1} x_{n+1} - T_{n+1} x_n) + (1 - \alpha_{n+1}) T_{n+1} x_n - (1 - \alpha_n) T_n x_n\| \\
&\leq (1 - \alpha_{n+1})\|x_{n+1} - x_n\| + \|T_{n+1} x_n - T_n x_n\| + \alpha_{n+1}\big(\|T_{n+1} x_n\| + \|u_{n+1}\|\big) + \alpha_n\big(\|T_n x_n\| + \|u_n\|\big),
\end{aligned}
\]

which gives

$\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\| \leq \|T_{n+1} x_n - T_n x_n\| + \alpha_{n+1}\big(\|T_{n+1} x_n\| + \|u_{n+1}\|\big) + \alpha_n\big(\|T_n x_n\| + \|u_n\|\big)$.

As we have shown in Step 1, { T n x n } and { u n } are bounded. Observe that

$\|T_{n+1} x_n - x^*\| \leq \|T_{n+1} x_n - T_{n+1} x^*\| + \|T_{n+1} x^* - x^*\| \leq \|x_n - x^*\| + \|T_{n+1} x^* - x^*\|$

and

$\|f_{n+1} x_n - x^*\| \leq \|f_{n+1} x_n - f_{n+1} x^*\| + \|f_{n+1} x^* - x^*\| \leq \|x_n - x^*\| + \|f_{n+1} x^* - x^*\| + a_{n+1}$.

Thus, { f n + 1 x n } and { T n + 1 x n } are bounded. Hence,

$\limsup_{n \to \infty} \big(\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|\big) \leq 0$.

By [[25], Lemma 2.2], we obtain

$\lim_{n \to \infty} \|z_n - x_n\| = 0$,

which implies that

$\lim_{n \to \infty} \|x_{n+1} - x_n\| = \lim_{n \to \infty} \beta_n \|z_n - x_n\| = 0$.

Step 3: $\lim_{n \to \infty} \|x_n - T x_n\| = 0$.

Note

$\|y_n - T_n x_n\| = \|\alpha_n f_n x_n + (1 - \alpha_n) T_n x_n - T_n x_n\| = \alpha_n \|f_n x_n - T_n x_n\|$,

and hence

\[
\begin{aligned}
\|x_n - T x_n\| &\leq \|x_n - x_{n+1}\| + \|x_{n+1} - T_n x_n\| + \|T_n x_n - T x_n\| \\
&\leq \|x_n - x_{n+1}\| + (1 - \beta_n)\|x_n - T_n x_n\| + \beta_n \|z_n - T_n x_n\| + \|T_n x_n - T x_n\| \\
&\leq \|x_n - x_{n+1}\| + (1 - \beta_n)\|x_n - T_n x_n\| + \beta_n \|y_n - T_n x_n\| + \|T_n x_n - T x_n\| \\
&\leq \|x_n - x_{n+1}\| + (1 - \beta_n)\|x_n - T_n x_n\| + \alpha_n \beta_n \|f_n x_n - T_n x_n\| + \|T_n x_n - T x_n\| \\
&\leq \|x_n - x_{n+1}\| + (1 - \beta_n)\big(\|x_n - T x_n\| + \|T x_n - T_n x_n\|\big) + \alpha_n \beta_n \|f_n x_n - T_n x_n\| + \|T_n x_n - T x_n\|,
\end{aligned}
\]

which implies that

$\beta_n \|x_n - T x_n\| \leq \|x_n - x_{n+1}\| + \alpha_n \beta_n \|f_n x_n - T_n x_n\| + 2\|T_n x_n - T x_n\|$.

Since $\|x_n - x_{n+1}\| \to 0$, $\alpha_n \to 0$, and $\|T_n x_n - T x_n\| \to 0$, we conclude that $\lim_{n \to \infty} \|x_n - T x_n\| = 0$.

Step 4: $\limsup_{n \to \infty} \gamma_n \leq 0$.

Note that

$\|z_n - T z_n\| \leq \|z_n - x_n\| + \|x_n - T x_n\| + \|T x_n - T z_n\| \leq 2\|z_n - x_n\| + \|x_n - T x_n\| \to 0$ as $n \to \infty$.

We take a subsequence { z n i } of { z n } such that

$\limsup_{n \to \infty} \langle f x^* - x^*, z_n - x^* \rangle = \lim_{i \to \infty} \langle f x^* - x^*, z_{n_i} - x^* \rangle$ and $z_{n_i} \rightharpoonup z \in C$ as $i \to \infty$.

Since $\{z_n\}$ is in $C$ and $\|z_n - T z_n\| \to 0$, we conclude from Lemma 2.3 that $z \in F(T)$. Since $x^* = P_{F(T)} f(x^*)$, we obtain from Lemma 2.2 that

$\limsup_{n \to \infty} \gamma_n = \lim_{i \to \infty} \langle f x^* - x^*, z_{n_i} - x^* \rangle = \langle f x^* - x^*, z - x^* \rangle \leq 0$.

Step 5: $x_n \to x^*$.

Since $\{\|z_n - x^*\|\}$ is bounded, there exists $R_1 > 0$ such that $\|z_n - x^*\| \leq R_1$ for all $n \in \mathbb{N}$. Note that $z_n = P_C[y_n]$. Hence, from (3.1) and Lemma 2.2, we have

\[
\begin{aligned}
\|z_n - x^*\|^2 &= \langle z_n - y_n, z_n - x^* \rangle + \langle y_n - x^*, z_n - x^* \rangle \leq \langle y_n - x^*, z_n - x^* \rangle \\
&= \big\langle \alpha_n(f_n x_n - f_n x^* + f_n x^* - f x^* + f x^* - x^*) + (1 - \alpha_n)(T_n x_n - T_n x^* + T_n x^* - x^*),\, z_n - x^* \big\rangle \\
&\leq \big[\alpha_n\big(\|f_n x_n - f_n x^*\| + \|f_n x^* - f x^*\|\big) + (1 - \alpha_n)\big(\|T_n x_n - T_n x^*\| + \|T_n x^* - T x^*\|\big)\big]\|z_n - x^*\| + \alpha_n \langle f x^* - x^*, z_n - x^* \rangle \\
&\leq \big[\alpha_n\big(k_n\|x_n - x^*\| + \|f_n x^* - f x^*\| + a_n\big) + (1 - \alpha_n)\big(\|x_n - x^*\| + \|T_n x^* - T x^*\|\big)\big]\|z_n - x^*\| + \alpha_n \gamma_n \\
&= \big(1 - (1 - k_n)\alpha_n\big)\|x_n - x^*\|\,\|z_n - x^*\| + \big[\alpha_n\big(\|f_n x^* - f x^*\| + a_n\big) + (1 - \alpha_n)\|T_n x^* - T x^*\|\big]\|z_n - x^*\| + \alpha_n \gamma_n \\
&\leq \frac{1 - (1 - k_n)\alpha_n}{2}\big(\|x_n - x^*\|^2 + \|z_n - x^*\|^2\big) + \big[\alpha_n\big(\|f_n x^* - f x^*\| + a_n\big) + (1 - \alpha_n)\|T_n x^* - T x^*\|\big] R_1 + \alpha_n \gamma_n \\
&\leq \frac{1 - (1 - k_n)\alpha_n}{2}\|x_n - x^*\|^2 + \frac{1}{2}\|z_n - x^*\|^2 + \big[\alpha_n\big(\|f_n x^* - f x^*\| + a_n\big) + (1 - \alpha_n)\|T_n x^* - T x^*\|\big] R_1 + \alpha_n \gamma_n,
\end{aligned}
\]

which implies that

$\|z_n - x^*\|^2 \leq \big(1 - (1 - k_n)\alpha_n\big)\|x_n - x^*\|^2 + 2\big[\alpha_n\big(\|f_n x^* - f x^*\| + a_n\big) + (1 - \alpha_n)\|T_n x^* - T x^*\|\big] R_1 + 2\alpha_n \gamma_n$.

From (3.1), we have

\[
\begin{aligned}
\|x_{n+1} - x^*\|^2 &= \|(1 - \beta_n)(x_n - x^*) + \beta_n(z_n - x^*)\|^2 \leq (1 - \beta_n)\|x_n - x^*\|^2 + \beta_n\|z_n - x^*\|^2 \\
&\leq \big(1 - (1 - k_n)\alpha_n \beta_n\big)\|x_n - x^*\|^2 + 2\big[\alpha_n \beta_n\big(\|f_n x^* - f x^*\| + a_n\big) + (1 - \alpha_n)\|T_n x^* - T x^*\|\big] R_1 + 2\alpha_n \beta_n \gamma_n.
\end{aligned}
\]

Since $a_n \to 0$, $\lim_{n \to \infty} \frac{\|f_n x^* - f x^*\|}{1 - k_n} = 0$, and $\sum_{n=1}^{\infty} (1 - \alpha_n)\|T_n x^* - T x^*\| < \infty$, we conclude from Lemma 2.4(b) that $x_n \to x^*$. □

Remark 3.3 Theorem 3.2 has the following characterization for convergence analysis of (3.1):

  1. (a)

The iterates of (3.1) remain in the closed ball $B_R[x^*]$.

  2. (b)

    The assumption (S2) is not required.

  3. (c)

(C4) is imposed only at the single point $x^* \in F(T)$. In contrast, the condition '$\sum_{n=1}^{\infty} \|T_n z - T z\| < \infty$ for all $z \in F(T)$' is adopted in [[26], Theorem 3.1].

Thus, Theorem 3.2 is more general by nature. Therefore, Theorem 3.2 significantly extends and improves [[26], Theorem 3.1] and [[18], Theorem 3.2].

Theorem 3.2 remains true if condition (C4) is replaced with the condition that the mappings { T n } and T have common fixed points. In fact, we have

Theorem 3.4 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, $T : C \to C$ a nonexpansive mapping such that $F(T) \neq \emptyset$, and $f : C \to H$ a $\kappa$-contraction with $\kappa \in [0,1)$ such that $P_{F(T)} f(x^*) = x^* \in F(T)$. Let $\{f_n\}$ be a sequence of nearly contraction mappings from $C$ into $H$ with sequence $\{(k_n, a_n)\}$ in $[0,1) \times [0,\infty)$ such that $k_n \to \kappa$. Let $\{T_n\}$ be a sequence of nonexpansive mappings from $C$ into itself such that $F(T) = \bigcap_{n \in \mathbb{N}} F(T_n)$. For given $x_1 \in C$, let $\{x_n\}$ be the sequence in $C$ generated by (3.1), where $\{\alpha_n\}$ is a sequence in $(0,1]$ and $\{\beta_n\}$ is a sequence in $(0,1)$ satisfying (C1), (C2), and (C3). Then the following statements hold:

  1. (a)

The sequence $\{x_n\}$ generated by (3.1) remains in the closed ball $B_R[x^*]$, where $R = \max\{\|x_1 - x^*\|, K\}$ and $K$ is given in (3.2).

  2. (b)

    If the assumption (C5) holds, then { x n } converges strongly to x .

We now prove strong convergence of the sequence { x n } generated by (3.1) under condition (C6).

Theorem 3.5 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, let $T : C \to C$ be a nonexpansive mapping such that $F(T) \neq \emptyset$, and let $\{T_n\}$ be a sequence of nonexpansive mappings from $C$ into itself. Let $f : C \to H$ be a $\kappa$-contraction with $\kappa \in [0,1)$ such that $P_{F(T)} f(x^*) = x^* \in F(T)$, and let $\{f_n\}$ be a sequence of $k_n$-contraction mappings from $C$ into $H$ such that $k_n \to \kappa$. For given $x_1 \in C$, let $\{x_n\}$ be the sequence in $C$ generated by (3.1), where $\{\alpha_n\}$ is a sequence in $(0,1]$ and $\{\beta_n\}$ is a sequence in $(0,1)$ satisfying (C1), (C2), (C3), and (C4). Then the following statements hold:

  1. (a)

    The sequence { x n } generated by (3.1) remains in the closed ball B R [ x ], where

    $R = \max\Big\{\|x_1 - x^*\|, \sup_{n \in \mathbb{N}} \frac{\|f_n x^* - x^*\|}{1 - k_n}\Big\} + \sum_{n=1}^{\infty} (1 - \alpha_n)\|T_n x^* - x^*\|$.
  2. (b)

    If the following assumption holds:

(C6) $\sum_{n=1}^{\infty} D_{B_R[x^*]}(T_n, T) < \infty$,

then { x n } converges strongly to x .

Proof We show that $\sum_{n=1}^{\infty} D_{B_R[x^*]}(T_n, T) < \infty$ implies that $\lim_{n \to \infty} \|T_n v_n - T v_n\| = 0$ for every sequence $\{v_n\}$ in $B_R[x^*]$. Let $\{w_n\}$ be a sequence in $B_R[x^*]$. Then

$\sum_{n=1}^{\infty} \|T_n w_n - T w_n\| \leq \sum_{n=1}^{\infty} D_{B_R[x^*]}(T_n, T) < \infty$.

It follows that lim n T n w n T w n =0. Thus, the condition (C5) in Theorem 3.2 holds. Therefore, Theorem 3.5 follows from Theorem 3.2. □

For a sequence $\{u_n\}$ in $H$ with $u_n \to u \in H$, define $f_n : C \to H$ by

$f_n x = u_n, \quad x \in C$.

Then each $f_n$ is a 0-contraction with $f_n x \to f x := u$. In this case, algorithm (3.1) with $T_n = T$ reduces to

$x_{n+1} = (1 - \beta_n) x_n + \beta_n P_C\big[\alpha_n u_n + (1 - \alpha_n) T x_n\big], \quad n \in \mathbb{N}$.
(3.3)

Corollary 3.6 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and $T : C \to C$ a nonexpansive mapping such that $F(T) \neq \emptyset$. Let $\{u_n\}$ be a sequence in $H$ such that $u_n \to u \in H$ and $P_{F(T)}(u) = x^* \in F(T)$. For given $x_1 \in C$, let $\{x_n\}$ be the sequence in $C$ generated by (3.3), where $\{\alpha_n\}$ is a sequence in $(0,1]$ and $\{\beta_n\}$ is a sequence in $(0,1)$ satisfying (C1) and (C2). Then the following statements hold:

  1. (a)

The sequence $\{x_n\}$ generated by (3.3) remains in the closed ball $B_R[x^*]$, where $R = \max\{\|x_1 - x^*\|, \sup_{n \in \mathbb{N}} \|u_n - x^*\|\}$.

  2. (b)

    { x n } converges strongly to x .

Remark 3.7 If $u = 0$ in Corollary 3.6, then $\{x_n\}$ generated by algorithm (3.3) converges strongly to the minimum norm solution of the FPP (1.6). Corollary 3.6 also provides a closed ball in which $\{x_n\}$ lies. Therefore, Corollary 3.6 significantly extends and improves [[27], Theorem 3.1].

3.2 The split feasibility problem

In this section we apply Theorem 3.5 to solve the SFP (1.2). We begin with the ρ-distance:

Definition 3.8 Let C and Q be two closed convex subsets of a Hilbert space H and let ρ be a positive constant. The ρ-distance between C and Q is defined by

$d_\rho(C, Q) = \sup_{\|x\| \leq \rho} \|P_C x - P_Q x\|$.
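For intuition, $d_\rho$ can be estimated by sampling when the sets are intervals; the sets $C_n = [0, 1 + 1/n]$ and $C = [0, 1]$ below are illustrative assumptions, for which $d_\rho(C_n, C) = 1/n$ once $\rho \geq 1 + 1/n$:

```python
import numpy as np

# Sampling estimate of the rho-distance in R (illustrative sets):
# C_n = [0, 1 + 1/n] and C = [0, 1], so d_rho(C_n, C) = 1/n for rho >= 1 + 1/n.
rho = 2.0
n = 4
proj_Cn = lambda x: np.clip(x, 0.0, 1.0 + 1.0 / n)
proj_C = lambda x: np.clip(x, 0.0, 1.0)

xs = np.linspace(-rho, rho, 4001)        # grid over {x : |x| <= rho}
d_rho = np.max(np.abs(proj_Cn(xs) - proj_C(xs)))
```

The supremum is attained at the boundary of the sampling ball, where the two projections disagree the most.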

By employing Theorem 3.5, we present a variable KM-like CQ algorithm (3.6) for finding solutions of the SFP (1.2) and prove its strong convergence.

Theorem 3.9 Let $C$ and $Q$ be two nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, and let $\{C_n\}$ and $\{Q_n\}$ be sequences of closed convex subsets of $H_1$ and $H_2$, respectively. Let $f : C \to H_1$ be a $\kappa$-contraction and $\{f_n\}$ a sequence of $k_n$-contraction mappings from $C$ into $H_1$ such that $k_n \to \kappa$. Let $A : H_1 \to H_2$ be a bounded linear operator with adjoint $A^*$, and let $L$ be the spectral radius of $A^*A$. For $\gamma \in (0, 2/L)$, define

$T = P_C\big(I - \gamma A^*(I - P_Q)A\big)$,
(3.4)

and

$T_n = P_{C_n}\big(I - \gamma A^*(I - P_{Q_n})A\big), \quad n \in \mathbb{N}$.
(3.5)

Assume that SFP (1.2) is consistent with $P_{F(T)} f x^* = x^* \in F(T)$. For given $x_1 \in C$, let $\{x_n\}$ be the sequence in $C$ generated by the following variable KM-like CQ algorithm:

$x_{n+1} = (1 - \beta_n) x_n + \beta_n P_C\big[\alpha_n f_n x_n + (1 - \alpha_n) P_{C_n}\big(I - \gamma A^*(I - P_{Q_n})A\big) x_n\big], \quad n \in \mathbb{N}$,
(3.6)

where { α n } is a sequence in (0,1] and { β n } is a sequence in (0,1) satisfying (C1), (C2), (C3), and (C4). Then the following statements hold:

  1. (a)

    The sequence { x n } generated by (3.6) remains in the closed ball B R [ x ], where

    $R = \max\Big\{\|x_1 - x^*\|, \sup_{n \in \mathbb{N}} \frac{\|f_n x^* - x^*\|}{1 - k_n}\Big\} + \sum_{n=1}^{\infty} (1 - \alpha_n)\|T_n x^* - x^*\|$.
  2. (b)

If $\bar\rho = \max\big\{\|Ax\|, \|(I - \gamma A^*(I - P_Q)A)x\| : x \in B_R[x^*]\big\}$ and the following assumption holds:

(C7) $\sum_{n=1}^{\infty} d_{\bar\rho}(C_n, C) < \infty$ and $\sum_{n=1}^{\infty} d_{\bar\rho}(Q_n, Q) < \infty$,

then { x n } converges strongly to x .

Proof (a) Since $\gamma \in (0, 2/L)$, $T$ and $T_n$, for all $n \in \mathbb{N}$, are nonexpansive mappings, and $F(T) \neq \emptyset$ because SFP (1.2) is consistent. Hence this part follows from Theorem 3.5(a).

(b) Assume that

$\bar\rho = \max\big\{\|Ax\|, \|(I - \gamma A^*(I - P_Q)A)x\| : x \in B_R[x^*]\big\}$.

Now, let $x \in H_1$ be such that $x \in B_R[x^*]$. Since each $P_{C_n}$ is nonexpansive, we have

\[
\begin{aligned}
\|T_n x - T x\| &= \|P_{C_n}\big(I - \gamma A^*(I - P_{Q_n})A\big)x - P_C\big(I - \gamma A^*(I - P_Q)A\big)x\| \\
&\leq \|P_{C_n}\big(I - \gamma A^*(I - P_{Q_n})A\big)x - P_{C_n}\big(I - \gamma A^*(I - P_Q)A\big)x\| \\
&\quad + \|P_{C_n}\big(I - \gamma A^*(I - P_Q)A\big)x - P_C\big(I - \gamma A^*(I - P_Q)A\big)x\| \\
&\leq \gamma \|A^*(P_{Q_n} A x - P_Q A x)\| + \|P_{C_n}\big(I - \gamma A^*(I - P_Q)A\big)x - P_C\big(I - \gamma A^*(I - P_Q)A\big)x\| \\
&\leq \gamma \|A^*\|\,\|P_{Q_n} A x - P_Q A x\| + d_{\bar\rho}(C_n, C) \\
&\leq \gamma \|A^*\|\, d_{\bar\rho}(Q_n, Q) + d_{\bar\rho}(C_n, C).
\end{aligned}
\]

Thus,

$\sum_{n=1}^{\infty} D_{B_R[x^*]}(T_n, T) = \sum_{n=1}^{\infty} \sup_{x \in B_R[x^*]} \|T_n x - T x\| \leq \sum_{n=1}^{\infty} d_{\bar\rho}(C_n, C) + \gamma \|A^*\| \sum_{n=1}^{\infty} d_{\bar\rho}(Q_n, Q) < \infty$.

Hence condition (C6) in Theorem 3.5 holds. Therefore, Theorem 3.9(b) follows from Theorem 3.5(b). □

For a sequence $\{u_n\}$ in $H_1$ with $u_n \to 0 \in H_1$, define $f_n : C \to H_1$ by

$f_n x = u_n, \quad x \in C$.

Then each $f_n$ is a 0-contraction with $f_n x \to f x = 0$. In this case, the variable KM-like CQ algorithm (3.6) reduces to the following variable KM-like CQ algorithm:

$x_{n+1} = (1 - \beta_n) x_n + \beta_n P_C\big[\alpha_n u_n + (1 - \alpha_n) P_{C_n}\big(I - \gamma A^*(I - P_{Q_n})A\big) x_n\big], \quad n \in \mathbb{N}$.
(3.7)

We now present strong convergence of the variable KM-like CQ algorithm (3.7) to the minimum norm solution of the SFP (1.2).

Corollary 3.10 Let $C$ and $Q$ be two nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, and let $\{C_n\}$ and $\{Q_n\}$ be sequences of closed convex subsets of $H_1$ and $H_2$, respectively. Let $A : H_1 \to H_2$ be a bounded linear operator with adjoint $A^*$. For $\gamma \in (0, 2/L)$, define $T$ and $T_n$ by (3.4) and (3.5), respectively. Assume that the SFP (1.2) is consistent with $P_{F(T)}(0) = x^* \in F(T)$. For given $x_1 \in C$ and a sequence $\{u_n\}$ in $H_1$ with $u_n \to 0 \in H_1$, let $\{x_n\}$ be the sequence in $C$ generated by the variable KM-like CQ algorithm (3.7), where $\{\alpha_n\}$ is a sequence in $(0,1]$ and $\{\beta_n\}$ is a sequence in $(0,1)$ satisfying (C1), (C2), and (C4). Then the following statements hold:

  1. (a)

    The sequence { x n } generated by (3.7) remains in the closed ball B R [ x ], where

    $R = \max\{\|x_1 - x^*\|, \|x^*\|\} + \sum_{n=1}^{\infty} (1 - \alpha_n)\|T_n x^* - x^*\|$.
  2. (b)

If $\bar\rho = \max\big\{\|Ax\|, \|(I - \gamma A^*(I - P_Q)A)x\| : x \in B_R[x^*]\big\}$ and the assumption (C7) holds, then $\{x_n\}$ converges strongly to $x^*$.

Corollary 3.10 significantly extends and improves [[11], Theorem 3.1].

3.3 The constrained multiple-sets split feasibility problem

In this section, we consider the following multiple-sets split feasibility problem, which models intensity-modulated radiation therapy [6] and has recently been investigated by many researchers; see, for example, [1, 3, 6, 8–14] and the references therein.

Let $H_1$ and $H_2$ be two Hilbert spaces and let $r$ and $p$ be two natural numbers. For each $i \in \{1, 2, \ldots, p\}$, let $C_i$ be a nonempty closed convex subset of $H_1$, and for each $j \in \{1, 2, \ldots, r\}$, let $Q_j$ be a nonempty closed convex subset of $H_2$. Further, for each $j \in \{1, 2, \ldots, r\}$, let $A_j : H_1 \to H_2$ be a bounded linear operator, and let $\Omega$ be a closed convex subset of $H_1$. The (constrained) multiple-sets split feasibility problem (MSSFP) is to find a point $x^* \in \Omega$ such that

$x^* \in C := \bigcap_{i=1}^{p} C_i \quad \text{and} \quad A_j x^* \in Q_j, \quad j \in \{1, 2, \ldots, r\}$.
(3.8)

When $p = r = 1$, the MSSFP (3.8) reduces to the SFP (1.2).

The split feasibility problem (SFP) and the multiple-sets split feasibility problem (MSSFP) model image retrieval [28] and intensity-modulated radiation therapy [6], and they have recently been investigated by many researchers.

For each $i \in \{1, 2, \ldots, p\}$ and $j \in \{1, 2, \ldots, r\}$, let $\bar\alpha_i$ and $\bar\beta_j$ be positive numbers. Let $B : H_1 \to H_1$ be the gradient $\nabla\psi$ of the convex and continuously differentiable function $\psi : H_1 \to \mathbb{R}$ defined by

$\psi(x) := \frac{1}{2}\sum_{i=1}^{p} \bar\alpha_i \|x - P_{C_i} x\|^2 + \frac{1}{2}\sum_{j=1}^{r} \bar\beta_j \|A_j x - P_{Q_j} A_j x\|^2, \quad x \in H_1$.
(3.9)

Following [28], we see that

$Bx := \sum_{i=1}^{p} \bar\alpha_i (I - P_{C_i}) x + \sum_{j=1}^{r} \bar\beta_j A_j^* (I - P_{Q_j}) A_j x, \quad x \in H_1$,
(3.10)

where $A_j^*$ is the adjoint of $A_j$, $j \in \{1, 2, \ldots, r\}$. The nonexpansivity of $I - P_C$ implies that $B$ is a Lipschitzian mapping with Lipschitz constant

$L' := \sum_{i=1}^{p} \bar\alpha_i + \sum_{j=1}^{r} \bar\beta_j \|A_j\|^2$.
(3.11)
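The mapping $B$ of (3.10) and the Lipschitz bound (3.11) can be checked numerically; the box-shaped sets $C_i$, $Q_j$ and the random operators $A_j$ below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
alphas = [0.5, 1.0]                       # weights alpha_i > 0 (p = 2)
betas = [0.3, 0.7]                        # weights beta_j > 0 (r = 2)
As = [rng.standard_normal((3, 4)) for _ in betas]

# illustrative boxes, so the projections are coordinate-wise clipping
proj_Ci = lambda x: np.clip(x, -1.0, 1.0)
proj_Qj = lambda y: np.clip(y, 0.0, 1.0)

def B(x):
    # B x = sum_i alpha_i (I - P_{C_i}) x
    #     + sum_j beta_j A_j^T (I - P_{Q_j}) A_j x, cf. (3.10)
    out = sum(a * (x - proj_Ci(x)) for a in alphas)
    for b, A in zip(betas, As):
        out = out + b * A.T @ (A @ x - proj_Qj(A @ x))
    return out

# Lipschitz constant L' = sum_i alpha_i + sum_j beta_j ||A_j||^2, cf. (3.11)
Lip = sum(alphas) + sum(b * np.linalg.norm(A, 2) ** 2 for b, A in zip(betas, As))

worst = max(
    np.linalg.norm(B(x) - B(y)) / np.linalg.norm(x - y)
    for x, y in (rng.standard_normal((2, 4)) * 3 for _ in range(200))
)
```

Every sampled difference quotient stays below $L'$, as (3.11) predicts.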

Thus, a variable KM-like CQ algorithm can be developed to solve the MSSFP (3.8). Let $\{\Omega_n\}$, $\{C_{i,n}\}$, and $\{Q_{j,n}\}$ be sequences of closed convex sets, which are viewed as perturbations of the closed convex sets $\Omega$, $\{C_i\}$, and $\{Q_j\}$, respectively.

We now present an iterative algorithm for solving the MSSFP (3.8).

Theorem 3.11 Let $f : \Omega \to H_1$ be a $\kappa$-contraction and let $\{f_n\}$ be a sequence of $k_n$-contraction mappings from $\Omega$ into $H_1$ such that $k_n \to \kappa$. For $\gamma \in (0, 2/L')$, define

$Tx = P_\Omega\Big(x - \gamma\Big(\sum_{i=1}^{p} \bar\alpha_i (I - P_{C_i}) x + \sum_{j=1}^{r} \bar\beta_j A_j^* (I - P_{Q_j}) A_j x\Big)\Big)$
(3.12)

and

$T_n x = P_{\Omega_n}\Big(x - \gamma\Big(\sum_{i=1}^{p} \bar\alpha_i (I - P_{C_{i,n}}) x + \sum_{j=1}^{r} \bar\beta_j A_j^* (I - P_{Q_{j,n}}) A_j x\Big)\Big), \quad n \in \mathbb{N}$.
(3.13)

Assume that the MSSFP (3.8) is consistent with $P_{F(T)} f x^* = x^* \in F(T)$. For given $x_1 \in \Omega$, let $\{x_n\}$ be the sequence in $\Omega$ generated by

$x_{n+1} = (1 - \beta_n) x_n + \beta_n P_\Omega\big[\alpha_n f_n x_n + (1 - \alpha_n) T_n x_n\big], \quad n \in \mathbb{N}$,

where { α n } is a sequence in (0,1] and { β n } is a sequence in (0,1) satisfying (C1), (C2), (C3), and (C4). Then the following statements hold:

  1. (a)

The sequence $\{x_n\}$ generated by the above algorithm remains in the closed ball $B_R[x^*]$, where

    $R = \max\Big\{\|x_1 - x^*\|, \sup_{n \in \mathbb{N}} \frac{\|f_n x^* - x^*\|}{1 - k_n}\Big\} + \sum_{n=1}^{\infty} (1 - \alpha_n)\|T_n x^* - x^*\|$.
  2. (b)

If $\bar\rho = \max\big\{\max_{1 \leq j \leq r} \|A_j x\|, \|(I - \gamma\nabla\psi)x\| : x \in B_R[x^*]\big\}$ and for each $i \in \{1, 2, \ldots, p\}$ and $j \in \{1, 2, \ldots, r\}$, the following assumption holds:

(C8) $\sum_{n=1}^{\infty} d_{\bar\rho}(\Omega_n, \Omega) < \infty$, $\sum_{n=1}^{\infty} D_{B_R[x^*]}(P_{C_{i,n}}, P_{C_i}) < \infty$, and $\sum_{n=1}^{\infty} d_{\bar\rho}(Q_{j,n}, Q_j) < \infty$,

then { x n } converges strongly to x .

Proof (a) Define

$\psi_n(x) := \frac{1}{2}\sum_{i=1}^{p} \bar\alpha_i \|x - P_{C_{i,n}} x\|^2 + \frac{1}{2}\sum_{j=1}^{r} \bar\beta_j \|A_j x - P_{Q_{j,n}} A_j x\|^2$.

The gradients of ψ and ψ n are given by

$\nabla\psi(x) = \sum_{i=1}^{p} \bar\alpha_i (I - P_{C_i}) x + \sum_{j=1}^{r} \bar\beta_j A_j^* (I - P_{Q_j}) A_j x$

and

$\nabla\psi_n(x) = \sum_{i=1}^{p} \bar\alpha_i (I - P_{C_{i,n}}) x + \sum_{j=1}^{r} \bar\beta_j A_j^* (I - P_{Q_{j,n}}) A_j x$.

Hence, from (3.12) and (3.13), we have

$Tx = P_\Omega\big(x - \gamma\nabla\psi(x)\big)$,

and

$T_n x = P_{\Omega_n}\big(x - \gamma\nabla\psi_n(x)\big), \quad n \in \mathbb{N}$.

Since $\gamma \in (0, 2/L')$, $T$ and $T_n$, for all $n \in \mathbb{N}$, are nonexpansive mappings, and $F(T) \neq \emptyset$ because the MSSFP (3.8) is consistent. Hence, this part follows from Theorem 3.5(a).

(b) Assume that

$\bar\rho = \max\big\{\max_{1 \leq j \leq r} \|A_j x\|, \|(I - \gamma\nabla\psi)x\| : x \in B_R[x^*]\big\}$.

Let $x \in H_1$ be such that $x \in B_R[x^*]$. Since each $P_{\Omega_n}$ is nonexpansive, we have

\[
\begin{aligned}
\|T_n x - T x\| &= \|P_{\Omega_n}(I - \gamma\nabla\psi_n)x - P_\Omega(I - \gamma\nabla\psi)x\| \\
&\leq \|P_{\Omega_n}(I - \gamma\nabla\psi_n)x - P_{\Omega_n}(I - \gamma\nabla\psi)x\| + \|P_{\Omega_n}(I - \gamma\nabla\psi)x - P_\Omega(I - \gamma\nabla\psi)x\| \\
&\leq \gamma\|\nabla\psi_n(x) - \nabla\psi(x)\| + \|P_{\Omega_n}(I - \gamma\nabla\psi)x - P_\Omega(I - \gamma\nabla\psi)x\| \\
&\leq \gamma\Big(\sum_{i=1}^{p} \bar\alpha_i \|P_{C_{i,n}} x - P_{C_i} x\| + \sum_{j=1}^{r} \bar\beta_j \|A_j^*\|\,\|P_{Q_{j,n}} A_j x - P_{Q_j} A_j x\|\Big) + d_{\bar\rho}(\Omega_n, \Omega) \\
&\leq \gamma\Big(\sum_{i=1}^{p} \bar\alpha_i D_{B_R[x^*]}(P_{C_{i,n}}, P_{C_i}) + \sum_{j=1}^{r} \bar\beta_j \|A_j^*\|\, d_{\bar\rho}(Q_{j,n}, Q_j)\Big) + d_{\bar\rho}(\Omega_n, \Omega).
\end{aligned}
\]

By the assumptions, we have

$\sum_{n=1}^{\infty} D_{B_R[x^*]}(T_n, T) = \sum_{n=1}^{\infty} \sup_{x \in B_R[x^*]} \|T_n x - T x\| \leq \gamma \sum_{i=1}^{p} \bar\alpha_i \sum_{n=1}^{\infty} D_{B_R[x^*]}(P_{C_{i,n}}, P_{C_i}) + \gamma \sum_{j=1}^{r} \bar\beta_j \|A_j^*\| \sum_{n=1}^{\infty} d_{\bar\rho}(Q_{j,n}, Q_j) + \sum_{n=1}^{\infty} d_{\bar\rho}(\Omega_n, \Omega) < \infty$.

Hence condition (C6) in Theorem 3.5 holds. Therefore, Theorem 3.11(b) follows from Theorem 3.5(b). □

Theorem 3.11 significantly extends and improves [[12], Theorem 1].

Finally, we present strong convergence of variable KM-like CQ algorithm (3.7) to the minimum norm solution of the MSSFP (3.8).

Corollary 3.12 Define $T$ and $T_n$ by (3.12) and (3.13), respectively. Assume that the MSSFP (3.8) is consistent with $P_{F(T)}(0) = x^* \in F(T)$. For given $x_1 \in \Omega$ and a sequence $\{u_n\}$ in $H_1$ with $u_n \to 0 \in H_1$, let $\{x_n\}$ be the sequence in $\Omega$ generated by the following variable KM-like CQ algorithm:

$x_{n+1} = (1 - \beta_n) x_n + \beta_n P_\Omega\big[\alpha_n u_n + (1 - \alpha_n) T_n x_n\big]$ for all $n \in \mathbb{N}$,

where $0 < \gamma < 2/L'$, $\{\alpha_n\}$ is a sequence in $(0,1]$, and $\{\beta_n\}$ is a sequence in $(0,1)$ satisfying (C1), (C2), and (C4). Then the following statements hold:

  1. (a)

The sequence $\{x_n\}$ generated by the above algorithm remains in the closed ball $B_R[x^*]$, where

    $R = \max\{\|x_1 - x^*\|, \|x^*\|\} + \sum_{n=1}^{\infty} (1 - \alpha_n)\|T_n x^* - x^*\|$.
  2. (b)

If $\bar\rho = \max\big\{\max_{1 \leq j \leq r} \|A_j x\|, \|(I - \gamma\nabla\psi)x\| : x \in B_R[x^*]\big\}$ and for each $i \in \{1, 2, \ldots, p\}$ and $j \in \{1, 2, \ldots, r\}$, the assumption (C8) holds, then $\{x_n\}$ converges strongly to $x^*$.

4 Numerical examples

In order to demonstrate the effectiveness, realization, and convergence of the algorithm of Theorem 3.2, we consider the following example.

Example 4.1 Let $H = \mathbb{R}$ and $C = [0,1]$. Let $T$ be the self-mapping on $C$ defined by $Tx = 1 - x$ for all $x \in C$. Define $\{\alpha_n\}$ in $(0,1)$ by $\alpha_n = \frac{1}{n+1}$ and $\{\beta_n\}$ by $\beta_n = \frac{1}{2}$ for all $n \in \mathbb{N}$. For each $n \in \mathbb{N}$, define $f_n : C \to H$ by (2.1). It is shown in Example 2.1 that $\{f_n\}$ is a sequence of nearly contraction mappings from $C$ into $H$ with sequence $\{(k_n, a_n)\}$, where $k_n = \frac{1}{n+1}$ and $a_n = \frac{5}{2(n+1)}$. It is easy to see that $\{f_n\}$ converges pointwise to $f$, where $f(x) = 0$ for all $x \in C$. Note $k_n \to \kappa = 0$, $F(T) = \{x^*\} = \{1/2\}$, and $\lim_{n \to \infty} f_n x^* = f x^*$. It can be observed that all the assumptions of Theorem 3.2 are satisfied and the sequence $\{x_n\}$ generated by (3.1) with $T_n = T$ converges to $\frac{1}{2}$. In fact, under the above assumptions, the algorithm (3.1) can be simplified as follows:

$$\left\{\begin{array}{l} x_1 \in C, \\ y_n = \alpha_n f_n x_n + (1-\alpha_n)(1-x_n), \\ x_{n+1} = \dfrac{x_n + P_C[y_n]}{2} \end{array}\right. \quad \text{for all } n \in \mathbb{N}. \tag{4.1}$$

The projection of $y_n$ onto $C$ can be expressed as

$$P_C[y_n] = \begin{cases} 0, & \text{if } y_n < 0; \\ y_n, & \text{if } y_n \in C; \\ 1, & \text{if } y_n > 1. \end{cases}$$
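The iteration (4.1) can be sketched in a few lines. Since the definition (2.1) of $f_n$ is not reproduced in this section, the stand-in $f_n(x) = x/(n+1)$ below is a hypothetical choice consistent with $k_n = \frac{1}{n+1}$ and the pointwise convergence $f_n \to f \equiv 0$; the limit $1/2$ is driven by $T$ and does not depend on this choice.

```python
# Sketch of iteration (4.1) for Example 4.1: Tx = 1 - x on C = [0, 1].
# f_n from (2.1) is not reproduced here; f_n(x) = x/(n+1) is a hypothetical
# stand-in that converges pointwise to f = 0, as the example requires.

def proj_C(y):
    """Projection onto C = [0, 1]."""
    return min(max(y, 0.0), 1.0)

def run(x1, steps=2000):
    x = x1
    for n in range(1, steps + 1):
        alpha = 1.0 / (n + 1)    # alpha_n
        f = x / (n + 1)          # hypothetical f_n(x_n)
        y = alpha * f + (1 - alpha) * (1 - x)
        x = (x + proj_C(y)) / 2  # beta_n = 1/2
    return x

for x1 in (0.0, 0.2, 0.8, 1.0):  # initial guesses from Table 1
    print(x1, run(x1))           # each run approaches the fixed point 1/2
```

Near the fixed point the error behaves like $O(\alpha_n) = O(1/n)$, which matches the slow but monotone approach visible in Table 1.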

The iterates of algorithm (4.1) for the initial guesses $x_1 = 0, 0.2, 0.8, 1$ are shown in Table 1. From Table 1, we see that the iterations converge to $1/2$, which is the unique fixed point of $T$. The convergence of each iteration is also shown in Figure 1 for comparison.

Figure 1

The convergence comparison of different initial values x 1 =0,0.2,0.8,1 .

Table 1 The numerical results for initial guess x 1 =0,0.2,0.8,1

References

  1. Xu HK: A variable Krasnosel’skiĭ-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22: 2021–2034. 10.1088/0266-5611/22/6/007

  2. Byrne CL: Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 2002, 18: 441–453. 10.1088/0266-5611/18/2/310

  3. Byrne CL: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006

  4. Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353–2365. 10.1088/0031-9155/51/10/001

  5. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. 10.1007/BF02142692

  6. Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21: 2071–2084. 10.1088/0266-5611/21/6/017

  7. Censor Y, Motova A, Segal A: Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327: 1244–1256. 10.1016/j.jmaa.2006.05.010

  8. Ceng LC, Ansari QH, Yao JC: Relaxed extragradient methods for finding minimum-norm solutions of the split feasibility problem. Nonlinear Anal. 2012, 75: 2116–2125. 10.1016/j.na.2011.10.012

  9. Ceng LC, Ansari QH, Yao JC: An extragradient method for split feasibility and fixed point problems. Comput. Math. Appl. 2012, 64: 633–642. 10.1016/j.camwa.2011.12.074

  10. Ceng LC, Ansari QH, Yao JC: Mann type iterative methods for finding a common solution of split feasibility and fixed point problems. Positivity 2012, 16: 471–495. 10.1007/s11117-012-0174-8

  11. Dang Y, Gao Y: The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 2011, 27: Article ID 015007

  12. Masad E, Reich S: A note on the multiple-set split convex feasibility problem in Hilbert space. J. Nonlinear Convex Anal. 2007, 8: 367–371.

  13. Sahu DR: Applications of the S-iteration process to constrained minimization problems and split feasibility problems. Fixed Point Theory 2011, 12: 187–204.

  14. Yang Q: The relaxed CQ algorithm for solving the split feasibility problem. Inverse Probl. 2004, 20: 1261–1266. 10.1088/0266-5611/20/4/014

  15. Moudafi A: Viscosity approximation methods for fixed-point problems. J. Math. Anal. Appl. 2000, 241: 46–55. 10.1006/jmaa.1999.6615

  16. Suzuki T: Sufficient and necessary condition for Halpern’s strong convergence to fixed points of nonexpansive mappings. Proc. Am. Math. Soc. 2007, 135: 99–106.

  17. Takahashi W: Viscosity approximation methods for countable families of nonexpansive mappings in Banach spaces. Nonlinear Anal. 2009, 70: 719–734. 10.1016/j.na.2008.01.005

  18. Yao Y, Xu HK: Iterative methods for finding minimum-norm fixed points of nonexpansive mappings with applications. Optimization 2011, 60(6): 645–658. 10.1080/02331930903582140

  19. Xu HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298: 279–291. 10.1016/j.jmaa.2004.04.059

  20. Sahu DR, Wong NC, Yao JC: A unified hybrid iterative method for solving variational inequalities involving generalized pseudo-contractive mappings. SIAM J. Control Optim. 2012, 50: 2335–2354. 10.1137/100798648

  21. Sahu DR, Wong NC, Yao JC: A generalized hybrid steepest-descent method for variational inequalities in Banach spaces. Fixed Point Theory Appl. 2011, 2011: Article ID 754702

  22. Sahu DR: Fixed points of demicontinuous nearly Lipschitzian mappings in Banach spaces. Comment. Math. Univ. Carol. 2005, 46: 653–666.

  23. Agarwal RP, O’Regan D, Sahu DR: Fixed Point Theory for Lipschitzian-Type Mappings with Applications. Topological Fixed Point Theory and Its Applications 6. Springer, New York; 2009.

  24. Maingé PE: Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2007, 325: 469–479. 10.1016/j.jmaa.2005.12.066

  25. Suzuki T: Strong convergence theorems for infinite families of nonexpansive mappings in general Banach spaces. Fixed Point Theory Appl. 2005, 2005: 103–123.

  26. López G, Martín-Márquez V, Xu HK: Perturbation techniques for nonexpansive mappings with applications. Nonlinear Anal., Real World Appl. 2009, 10(4): 2369–2383. 10.1016/j.nonrwa.2008.04.020

  27. Yao Y, Shahzad N: New methods with perturbations for nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2011, 2011: Article ID 79

  28. Bruck RE, Reich S: A general convergence principle in nonlinear functional analysis. Nonlinear Anal. 1980, 4: 939–950. 10.1016/0362-546X(80)90006-1

Acknowledgements

This article was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah. The authors, therefore, acknowledge with thanks DSR for technical and financial support.

Author information

Corresponding author

Correspondence to Abdul Latif.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Latif, A., Sahu, D.R. & Ansari, Q.H. Variable KM-like algorithms for fixed point problems and split feasibility problems. Fixed Point Theory Appl 2014, 211 (2014). https://doi.org/10.1186/1687-1812-2014-211
