Open Access

Variable KM-like algorithms for fixed point problems and split feasibility problems

Fixed Point Theory and Applications 2014, 2014:211

https://doi.org/10.1186/1687-1812-2014-211

Received: 9 May 2014

Accepted: 16 September 2014

Published: 14 October 2014

Abstract

We propose a variable KM-like method for approximating common fixed points of a countable (possibly infinite) family of nonexpansive mappings in a Hilbert space, and we prove that it converges strongly to a common fixed point of the family. The variable KM-like technique is applied to solve the split feasibility problem and the multiple-sets split feasibility problem. In particular, the minimum norm solutions of the split feasibility problem and the multiple-sets split feasibility problem are derived. Our results can be viewed as an improvement and refinement of previously known results.

MSC: 47H10, 65J20, 65J22, 65J25.

Keywords

split feasibility problems, fixed point problems, regularized algorithms, convergence analysis of algorithms

1 Introduction

Problems of image reconstruction from projections can be represented by a system of linear equations
\[ Ax = b. \tag{1.1} \]
In practice, the system (1.1) is often inconsistent, and one usually seeks a point $x \in \mathbb{R}^n$ that is optimal with respect to some predetermined optimization criterion. The problem is frequently ill-posed and there may be more than one optimal solution; the standard approach to dealing with this is via regularization. The well-known convex feasibility problem is
\[ \text{to find a point } x \in \bigcap_{i=1}^{m} C_i, \]

where $m \ge 1$ is an integer and each $C_i$ is a nonempty closed convex subset of a Hilbert space $H$. A special case of the convex feasibility problem is the split feasibility problem:

Let $C$, $Q$ be nonempty closed convex subsets of Hilbert spaces $H_1$ and $H_2$, respectively, and let $A : H_1 \to H_2$ be a bounded linear operator. The split feasibility problem (SFP) is
\[ \text{to find a point } x \in C \text{ such that } Ax \in Q. \tag{1.2} \]
The SFP is said to be consistent if (1.2) has a solution. It is easy to see that the SFP (1.2) is consistent if and only if the following fixed point problem has a solution:
\[ \text{find } x \in C \text{ such that } x = P_C\bigl(I - \gamma A^{*}(I - P_Q)A\bigr)x, \tag{1.3} \]

where $P_C$ and $P_Q$ are the projections onto $C$ and $Q$, respectively, and $A^{*}$ is the adjoint of $A$. Let $L$ denote the spectral radius of $A^{*}A$. It is well known that if $\gamma \in (0, 2/L)$, the operator $T = P_C(I - \gamma A^{*}(I - P_Q)A)$ in the operator equation (1.3) is nonexpansive [1].

The SFP has been extensively studied during the last decade because of its applications in modeling inverse problems which arise in phase retrieval and in medical image reconstruction. It has also been applied to modeling intensity-modulated radiation therapy; see, for example, [2–7] and the references therein.

Several iterative methods have been proposed and analyzed to solve the SFP (1.2); see, for example, [1, 3, 6, 8–14] and the references therein. Byrne [3] introduced the CQ algorithm
\[ x_{n+1} = T x_n, \quad n \in \mathbb{N}, \tag{1.4} \]

and proved that the sequence $\{x_n\}$ generated by the CQ algorithm (1.4) converges weakly to a solution of the SFP (1.2), where $T = P_C(I - \gamma A^{*}(I - P_Q)A)$ and $0 < \gamma < 2/L$.
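For a concrete sense of how (1.4) behaves, the following Python sketch runs the CQ iteration on small illustrative data of our own choosing (boxes for $C$ and $Q$, a $2\times 2$ matrix $A$); the sets, step size, and matrix are assumptions for the sketch, not data from the paper.

```python
import numpy as np

def proj_box(v, lo, hi):
    """Metric projection onto the box [lo, hi]^n, a simple closed convex set."""
    return np.clip(v, lo, hi)

def cq_algorithm(A, x0, n_iters=500):
    """Byrne's CQ iteration x_{n+1} = P_C(x_n - gamma * A^T (I - P_Q) A x_n)
    for the illustrative choice C = [0,1]^2 and Q = [0,1]^2."""
    L = np.linalg.norm(A, 2) ** 2          # spectral radius of A^T A
    gamma = 1.0 / L                        # any gamma in (0, 2/L) works
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        Ax = A @ x
        grad = A.T @ (Ax - proj_box(Ax, 0.0, 1.0))   # A^T (I - P_Q) A x
        x = proj_box(x - gamma * grad, 0.0, 1.0)     # projection onto C
    return x

A = np.array([[1.0, 2.0], [3.0, 1.0]])
x = cq_algorithm(A, [5.0, -5.0])
# x lies in C and A @ x lies in Q, i.e. x solves this toy SFP
```

Here the weak limit guaranteed by Byrne's theorem is an ordinary limit, since the space is finite dimensional.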

In view of the fixed point formulation (1.3) of the SFP (1.2), Xu [1] and Yang [14] applied the following perturbed Krasnosel'skiĭ–Mann CQ algorithm to solve the SFP (1.2):
\[ x_{n+1} = (1 - \alpha_n)x_n + \alpha_n T_n x_n, \quad n \in \mathbb{N}. \tag{1.5} \]
Here $\{T_n\}$ is a sequence of operators defined by
\[ T_n = P_{C_n}\bigl(I - \gamma A^{*}(I - P_{Q_n})A\bigr), \quad n \in \mathbb{N}, \]

where $\{C_n\}$ and $\{Q_n\}$ are sequences of nonempty closed convex subsets of $H_1$ and $H_2$, respectively, which obey the following assumption:

(C0) $\sum_{n=1}^{\infty} \alpha_n d_\rho(C_n, C) < \infty$ and $\sum_{n=1}^{\infty} \alpha_n d_\rho(Q_n, Q) < \infty$ for each $\rho > 0$, where $d_\rho$ denotes the $\rho$-distance between two sets (see Section 3.2).

Condition (C0) is not easy to verify for every $\rho > 0$, and it is thus quite restrictive even for weak convergence of the sequence $\{x_n\}$ defined by (1.5). One of our objectives is to relax condition (C0).

Many practical problems can be formulated as a fixed point problem (FPP): find an element $x$ such that
\[ x = Tx, \tag{1.6} \]
where $T$ is a nonexpansive self-mapping defined on a closed convex subset $C$ of a Hilbert space $H$. The solution set of the FPP (1.6) is denoted by $F(T)$. It is well known that if $F(T) \neq \emptyset$, then $F(T)$ is closed and convex. The fixed point problem (1.6) is in general ill-posed: it may fail to have a solution, and when a solution exists it need not be unique. Regularization by contractions can remove this ill-posedness. We replace the nonexpansive mapping $T$ by a family of contractions $T_t^f := tf + (1 - t)T$, with $t \in (0,1)$ and $f : C \to C$ a fixed contraction; we call $f$ an anchoring function. The regularized fixed point problem for $T$ is the fixed point problem for $T_t^f$. The mapping $T_t^f$ has a unique fixed point $x_t \in C$; that is, $x_t$ is the solution of the fixed point problem
\[ x_t = t f x_t + (1 - t) T x_t. \tag{1.7} \]
We now discretize the regularization (1.7) to define an explicit iterative algorithm:
\[ x_{n+1} = t_n f x_n + (1 - t_n) T x_n, \quad n \in \mathbb{N}. \tag{1.8} \]
The iterative algorithm (1.8), which generalizes Browder's and Halpern's methods, is due to Moudafi [15], who introduced viscosity approximation methods. Suzuki [16] established a strong convergence theorem by applying Halpern's method to the averaged mapping $T_\lambda = \lambda I + (1 - \lambda)T$, $\lambda \in (0,1)$, for nonexpansive mappings $T$ in certain Banach spaces. Takahashi [17] proved a strong convergence theorem for the following iterative algorithm for countable families of nonexpansive mappings in certain Banach spaces:
\[ x_{n+1} = t_n f x_n + (1 - t_n) T_n x_n, \quad n \in \mathbb{N}. \tag{1.9} \]
Recently, Yao and Xu [18] introduced and studied strong convergence of the following modified method:
\[ x_{n+1} = P_C[t_n f x_n + (1 - t_n) T x_n], \quad n \in \mathbb{N}, \tag{1.10} \]

where $f : C \to H$ is a fixed non-self contraction and $\{t_n\}$ is a sequence in $(0,1)$ satisfying the conditions:

(S1) $\lim_{n\to\infty} t_n = 0$ and $\sum_{n=1}^{\infty} t_n = \infty$,

(S2) either $\sum_{n=1}^{\infty} |t_{n+1} - t_n| < \infty$ or $t_{n+1}/t_n \to 1$ as $n \to \infty$.

One can easily see that (1.10) is a regularized iterative algorithm.
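As a quick numerical illustration of the regularized scheme (1.10), the Python sketch below uses toy data of our own ($C = [0,1]$, $Tx = 1-x$, $f \equiv 0$, $t_n = 1/(n+1)$, which satisfies (S1) and (S2)); with these choices $F(T) = \{1/2\}$, so the limit $P_{F(T)}f(x^*)$ is $1/2$.

```python
def yao_xu_iteration(T, f, proj, x0, n_iters=2000):
    """Scheme (1.10): x_{n+1} = P_C[t_n f(x_n) + (1 - t_n) T x_n],
    with step sizes t_n = 1/(n+1)."""
    x = x0
    for n in range(1, n_iters + 1):
        t = 1.0 / (n + 1)
        x = proj(t * f(x) + (1.0 - t) * T(x))
    return x

# Toy data (our choice): C = [0,1], T x = 1 - x (nonexpansive, F(T) = {1/2}),
# f = 0 (a 0-contraction, so kappa = 0).
x = yao_xu_iteration(lambda v: 1.0 - v,
                     lambda v: 0.0,
                     lambda v: min(max(v, 0.0), 1.0),
                     x0=0.9)
# x is close to the fixed point 1/2
```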

Motivated by [1, 11, 14], we study the following more general non-regularized algorithm, called the variable KM-like algorithm, which generates a sequence $\{x_n\}$ according to the recursive formula
\[
\begin{cases}
x_1 \in C, \\
y_n = \alpha_n f_n x_n + (1 - \alpha_n) T_n x_n, \\
x_{n+1} = (1 - \beta_n) x_n + \beta_n P_C[y_n] \quad \text{for all } n \in \mathbb{N},
\end{cases}
\]

where $\{\alpha_n\}$ and $\{\beta_n\}$ are sequences in $(0,1)$, $\{T_n\}$ is a sequence of nonexpansive self-mappings of $C$, and $\{f_n\}$ is a sequence of (not necessarily contraction) mappings from $C$ into $H$.

In the present paper, we study the strong convergence of the proposed variable KM-like algorithm in the framework of Hilbert spaces. The paper is organized as follows. The next section contains preliminaries. In Section 3, we carry out the convergence analysis of our variable KM-like algorithm for the fixed point problem (1.6) without the assumption (S2). This result is then applied to prove convergence of some perturbed algorithms for the SFP (1.2) and the multiple-sets split feasibility problem under weaker assumptions. As special cases, we obtain algorithms which converge strongly to the minimum norm solutions of the split feasibility problem and the multiple-sets split feasibility problem. Our results are new and interesting in the following respects:
(i) Our algorithm (3.1) is not regularized by contractions.

(ii) $f_n$ is not necessarily a contraction. In the existing literature, the anchoring function $f$ is a fixed contraction mapping [15, 17–19] or a strong pseudo-contraction mapping [20].

(iii) In the convergence analysis of (3.1) for the fixed point problem (1.6), the assumption (S2) is not required.

(iv) A fixed $\rho > 0$ for a (C0)-like condition is adopted.

2 Preliminaries

Let $C$ be a nonempty subset of a Hilbert space $H$. Throughout the paper, we denote by $B_r[x]$ the closed ball $B_r[x] = \{y \in C : \|y - x\| \le r\}$. Let $T_1, T_2 : C \to H$ be two mappings. We denote by $\mathcal{B}(C)$ the collection of all bounded subsets of $C$. The deviation between $T_1$ and $T_2$ on $B \in \mathcal{B}(C)$ [21], denoted by $\mathcal{D}_B(T_1, T_2)$, is defined by
\[ \mathcal{D}_B(T_1, T_2) = \sup\{\|T_1 x - T_2 x\| : x \in B\}. \]

Let $T : C \to H$ be a mapping. Then $T$ is said to be a $\kappa$-contraction if there exists $\kappa \in [0,1)$ such that $\|Tx - Ty\| \le \kappa\|x - y\|$ for all $x, y \in C$. Furthermore, it is called nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in C$.

Let $\{f_n\}$ be a sequence of mappings from $C$ into $H$. Following [20–22], we say $\{f_n\}$ is a sequence of nearly contraction mappings with sequence $\{(k_n, a_n)\}$ if there exist a sequence $\{k_n\}$ in $[0,1)$ and a sequence $\{a_n\}$ in $[0, \infty)$ with $a_n \to 0$ such that
\[ \|f_n x - f_n y\| \le k_n \|x - y\| + a_n \quad \text{for all } x, y \in C \text{ and } n \in \mathbb{N}. \]

One can observe that a sequence of contraction mappings is essentially a sequence of nearly contraction mappings.

We now construct a sequence of nearly contractions.

Example 2.1 Let $H = \mathbb{R}$ and $C = [0,1]$. Let $\{f_n\}$ be the sequence of mappings $f_n : C \to H$ defined by
\[
f_n(x) =
\begin{cases}
\dfrac{x}{n+1}, & \text{if } 0 \le x \le \frac{1}{2}; \\[4pt]
\dfrac{3}{n+1}, & \text{if } \frac{1}{2} < x \le 1.
\end{cases} \tag{2.1}
\]

Set $k_n := \frac{1}{n+1}$. We consider the following cases:

Case 1: If $x, y \in [0, \frac{1}{2}]$, then $f_n x - f_n y = k_n(x - y)$ for all $n \in \mathbb{N}$.

Case 2: If $x, y \in (\frac{1}{2}, 1]$, then $f_n x - f_n y = 0$ for all $n \in \mathbb{N}$.

Case 3: If $x \in [0, \frac{1}{2}]$ and $y \in (\frac{1}{2}, 1]$, then
\[ |f_n x - f_n y| = \left|\frac{x}{n+1} - \frac{3}{n+1}\right| \le \frac{1}{n+1}|x - y| + \frac{1}{n+1}|y - 3| \le k_n |x - y| + \frac{5}{2(n+1)}, \quad n \in \mathbb{N}. \]
Therefore, for all $x, y \in [0,1]$, we have
\[ |f_n x - f_n y| \le k_n |x - y| + a_n \quad \text{for all } n \in \mathbb{N}, \]

where $a_n := \frac{5}{2(n+1)}$. Therefore, $\{f_n\}$ is a sequence of nearly contraction mappings with sequence $\{(k_n, a_n)\}$.

Let $C$ be a nonempty closed convex subset of a Hilbert space $H$. We use $P_C$ to denote the (metric) projection from $H$ onto $C$; namely, for $x \in H$, $P_C(x)$ is the unique point in $C$ with the property
\[ \|x - P_C(x)\| = \inf\{\|x - z\| : z \in C\}. \]

The following is a useful characterization of projections.

Lemma 2.2 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and let $P_C$ be the metric projection from $H$ onto $C$. Given $x \in H$ and $z \in C$, then $z = P_C(x)$ if and only if
\[ \langle x - z, y - z \rangle \le 0 \quad \text{for all } y \in C. \]
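Lemma 2.2 can be sanity-checked numerically; the sketch below (our own toy setup, with $C$ a box in $\mathbb{R}^5$) verifies the variational inequality for randomly drawn points.

```python
import numpy as np

def proj_box(v):
    """Metric projection onto C = [0, 1]^5."""
    return np.clip(v, 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.normal(scale=3.0, size=5)            # an arbitrary point of H = R^5
z = proj_box(x)                              # z = P_C(x)
ys = rng.uniform(0.0, 1.0, size=(100, 5))    # sample points y in C

# <x - z, y - z> <= 0 for every y in C, as the lemma asserts
ok = all(float(np.dot(x - z, y - z)) <= 1e-10 for y in ys)
```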

Lemma 2.3 [[23], Corollary 5.6.4] Let $C$ be a nonempty closed convex subset of $H$ and $T : C \to C$ a nonexpansive mapping. Then $I - T$ is demiclosed at zero; that is, if $\{x_n\}$ is a sequence in $C$ converging weakly to $x$ and $\{(I - T)x_n\}$ converges strongly to zero, then $(I - T)x = 0$.

Lemma 2.4 [24] Let $\{a_n\}$ and $\{c_n\}$ be two sequences of nonnegative real numbers and let $\{b_n\}$ be a sequence of real numbers satisfying the condition
\[ a_{n+1} \le (1 - \alpha_n) a_n + b_n + c_n \quad \text{for all } n \in \mathbb{N}, \]
where $\{\alpha_n\}$ is a sequence in $(0,1]$. Assume that $\sum_{n=1}^{\infty} c_n < \infty$. Then the following statements hold:
(a) If $b_n \le K\alpha_n$ for all $n \in \mathbb{N}$ and for some $K \ge 0$, then
\[ a_{n+1} \le \delta_n a_1 + (1 - \delta_n) K + \sum_{j=1}^{n} c_j \quad \text{for all } n \in \mathbb{N}, \]
where $\delta_n = \prod_{j=1}^{n} (1 - \alpha_j)$, and hence $\{a_n\}$ is bounded.

(b) If $\sum_{n=1}^{\infty} \alpha_n = \infty$ and $\limsup_{n\to\infty} (b_n/\alpha_n) \le 0$, then $\{a_n\}_{n=1}^{\infty}$ converges to zero.

3 Convergence analysis of a variable KM-like algorithm

First, we prove the following.

Proposition 3.1 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, $T : C \to C$ a nonexpansive mapping with $F(T) \neq \emptyset$, and $f : C \to H$ a $\kappa$-contraction. Then there exists a unique point $x^* \in C$ such that $x^* = P_{F(T)} f(x^*)$.

Proof Since $f : C \to H$ is a $\kappa$-contraction, $P_{F(T)} f$ is a $\kappa$-contraction from $C$ into itself. Hence it has a unique fixed point $x^* \in C$; that is, $x^* = P_{F(T)} f(x^*)$. □

3.1 A variable KM-like algorithm

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $\{f_n\}$ be a sequence of nearly contractions from $C$ into $H$ such that $\{f_n\}$ converges pointwise to $f$, and let $\{T_n\}$ be a sequence of nonexpansive self-mappings of $C$, viewed as perturbations. For computing a common fixed point of the sequence $\{T_n\}$ of nonexpansive mappings, we propose the following variable KM-like algorithm:
\[
\begin{cases}
x_1 \in C, \\
y_n = \alpha_n f_n x_n + (1 - \alpha_n) T_n x_n, \\
x_{n+1} = (1 - \beta_n) x_n + \beta_n P_C[y_n] \quad \text{for all } n \in \mathbb{N},
\end{cases} \tag{3.1}
\]

where $\{\alpha_n\}$ and $\{\beta_n\}$ are sequences in $[0,1]$.

We investigate the asymptotic behavior of the sequence $\{x_n\}$ generated from an arbitrary $x_1 \in C$ by the algorithm (3.1), and its convergence to a common fixed point of the sequence $\{T_n\}$.
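A direct Python transcription of (3.1) may help fix ideas; the driver below is a sketch, and the mappings in the usage example are our own toy choices (chosen so that $x^* = 1/2$ is the common fixed point).

```python
def variable_km(T_seq, f_seq, proj, x0, alpha, beta, n_iters):
    """Variable KM-like algorithm (3.1):
        y_n     = alpha_n f_n(x_n) + (1 - alpha_n) T_n(x_n)
        x_{n+1} = (1 - beta_n) x_n + beta_n P_C[y_n].
    T_seq(n) and f_seq(n) return the n-th mappings; alpha(n), beta(n) the steps."""
    x = x0
    for n in range(1, n_iters + 1):
        y = alpha(n) * f_seq(n)(x) + (1.0 - alpha(n)) * T_seq(n)(x)
        x = (1.0 - beta(n)) * x + beta(n) * proj(y)
    return x

# Toy usage: C = [0,1], T_n = T with T x = 1 - x, f_n = 0,
# alpha_n = 1/(n+1), beta_n = 1/2.
x = variable_km(lambda n: (lambda v: 1.0 - v),
                lambda n: (lambda v: 0.0),
                lambda v: min(max(v, 0.0), 1.0),
                0.9, lambda n: 1.0 / (n + 1), lambda n: 0.5, 2000)
# x is close to the common fixed point 1/2
```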

Theorem 3.2 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, let $T : C \to C$ be a nonexpansive mapping such that $F(T) \neq \emptyset$, and let $f : C \to H$ be a $\kappa$-contraction with $\kappa \in [0,1)$ such that $P_{F(T)} f(x^*) = x^* \in F(T)$. Let $\{f_n\}$ be a sequence of nearly contraction mappings from $C$ into $H$ with sequence $\{(k_n, a_n)\}$ in $[0,1) \times [0,\infty)$ such that $k_n \to \kappa$, and let $\{T_n\}$ be a sequence of nonexpansive mappings from $C$ into itself. For given $x_1 \in C$, let $\{x_n\}$ be the sequence in $C$ generated by (3.1), where $\{\alpha_n\}$ is a sequence in $(0,1]$ and $\{\beta_n\}$ is a sequence in $(0,1)$. Assume that the following conditions are satisfied:

(C1) $\lim_{n\to\infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$,

(C2) $0 < \liminf_{n\to\infty} \beta_n \le \limsup_{n\to\infty} \beta_n < 1$,

(C3) $\lim_{n\to\infty} f_n x^* = f x^*$,

(C4) $\sum_{n=1}^{\infty} (1 - \alpha_n)\|T_n x^* - x^*\| < \infty$.

Define
\[ R := \max\{\|x_1 - x^*\|, K\} + \sum_{n=1}^{\infty} (1 - \alpha_n)\|T_n x^* - x^*\| \quad \text{and} \quad K := \sup_{n\in\mathbb{N}} \frac{\|f_n x^* - x^*\| + a_n}{1 - k_n}. \tag{3.2} \]
Then the following statements hold:

(a) The sequence $\{x_n\}$ generated by (3.1) remains in the closed ball $B_R[x^*]$.

(b) If the following assumption holds:

(C5) $\lim_{n\to\infty}\|T_n v_n - T v_n\| = 0$ for every sequence $\{v_n\}$ in $B_R[x^*]$,

then $\{x_n\}$ converges strongly to $x^*$.

Proof (a) Set $z_n := P_C[y_n]$. Observe that
\[
\begin{aligned}
\|z_n - x^*\| &= \|P_C[y_n] - P_C[x^*]\| \le \|y_n - x^*\| \\
&\le \alpha_n \|f_n x_n - x^*\| + (1 - \alpha_n)\|T_n x_n - x^*\| \\
&\le \alpha_n \bigl(\|f_n x_n - f_n x^*\| + \|f_n x^* - x^*\|\bigr) + (1 - \alpha_n)\bigl(\|T_n x_n - T_n x^*\| + \|T_n x^* - x^*\|\bigr) \\
&\le \alpha_n \bigl(k_n \|x_n - x^*\| + \|f_n x^* - x^*\| + a_n\bigr) + (1 - \alpha_n)\bigl(\|x_n - x^*\| + \|T_n x^* - x^*\|\bigr) \\
&= \bigl(1 - (1 - k_n)\alpha_n\bigr)\|x_n - x^*\| + \alpha_n\bigl(\|f_n x^* - x^*\| + a_n\bigr) + (1 - \alpha_n)\|T_n x^* - x^*\|.
\end{aligned}
\]
From (3.1), we have
\[
\begin{aligned}
\|x_{n+1} - x^*\| &= \|(1 - \beta_n)(x_n - x^*) + \beta_n (z_n - x^*)\| \\
&\le (1 - \beta_n)\|x_n - x^*\| + \beta_n \|z_n - x^*\| \\
&\le (1 - \beta_n)\|x_n - x^*\| + \beta_n\bigl[\bigl(1 - (1 - k_n)\alpha_n\bigr)\|x_n - x^*\| + \alpha_n\bigl(\|f_n x^* - x^*\| + a_n\bigr) + (1 - \alpha_n)\|T_n x^* - x^*\|\bigr] \\
&= \bigl(1 - (1 - k_n)\alpha_n \beta_n\bigr)\|x_n - x^*\| + \alpha_n \beta_n\bigl(\|f_n x^* - x^*\| + a_n\bigr) + (1 - \alpha_n)\beta_n \|T_n x^* - x^*\| \\
&\le \bigl(1 - (1 - k_n)\alpha_n \beta_n\bigr)\|x_n - x^*\| + (1 - k_n)\alpha_n \beta_n K + (1 - \alpha_n)\|T_n x^* - x^*\| \\
&\le \max\{\|x_1 - x^*\|, K\} + \sum_{j=1}^{n} (1 - \alpha_j)\|T_j x^* - x^*\|.
\end{aligned}
\]
Since $\sum_{n=1}^{\infty} (1 - \alpha_n)\|T_n x^* - x^*\| < \infty$, by Lemma 2.4 we find that $\{\|x_n - x^*\|\}$ is bounded. Moreover,
\[ \|x_{n+1} - x^*\| \le \max\{\|x_1 - x^*\|, K\} + \sum_{n=1}^{\infty} (1 - \alpha_n)\|T_n x^* - x^*\| = R, \quad n \in \mathbb{N}. \]

Therefore, $\{x_n\}$ is well defined and lies in the ball $B_R[x^*]$.

(b) Assume that $\lim_{n\to\infty}\|T_n v_n - T v_n\| = 0$ for every sequence $\{v_n\}$ in $B_R[x^*]$. Set $\gamma_n := \langle x^* - f x^*, x^* - z_n \rangle$. We now proceed with the following steps:

Step 1: $\{f_n x_n\}$ and $\{T_n x_n\}$ are bounded.

Without loss of generality, we may assume that $\beta \le \beta_n$ for all $n \in \mathbb{N}$ and some $\beta > 0$. From (C4), we have
\[ \sum_{n=1}^{\infty} (1 - \alpha_n)\|T_n x^* - x^*\| < \infty, \]
which implies that $\lim_{n\to\infty} (1 - \alpha_n)\|T_n x^* - x^*\| = 0$. Since $\alpha_n \to 0$, it follows that
\[ \lim_{n\to\infty} \|T_n x^* - x^*\| = 0. \]
Since
\[ \|T_n x_n - x^*\| \le \|T_n x_n - T_n x^*\| + \|T_n x^* - x^*\| \le \|x_n - x^*\| + \|T_n x^* - x^*\|, \]
and $\{\|T_n x^* - x^*\|\}$ converges to $0$, we conclude that $\{T_n x_n\}$ is bounded. Moreover, from
\[ \|f_n x_n - x^*\| \le \|f_n x_n - f_n x^*\| + \|f_n x^* - x^*\| \le k_n \|x_n - x^*\| + \|f_n x^* - x^*\| + a_n, \]

it follows that $\{f_n x_n\}$ is bounded.

Step 2: $\lim_{n\to\infty}\|x_n - x_{n+1}\| = 0$.

Set $u_n := f_n x_n$. We write $x_{n+1} = (1 - \beta_n)x_n + \beta_n z_n$. Observe that
\[
\begin{aligned}
\|z_{n+1} - z_n\| &\le \|y_{n+1} - y_n\| = \bigl\|\alpha_{n+1} u_{n+1} + (1 - \alpha_{n+1}) T_{n+1} x_{n+1} - \bigl(\alpha_n u_n + (1 - \alpha_n) T_n x_n\bigr)\bigr\| \\
&= \bigl\|\alpha_{n+1} u_{n+1} - \alpha_n u_n + (1 - \alpha_{n+1})(T_{n+1} x_{n+1} - T_{n+1} x_n) + (1 - \alpha_{n+1}) T_{n+1} x_n - (1 - \alpha_n) T_n x_n\bigr\| \\
&\le (1 - \alpha_{n+1})\|x_{n+1} - x_n\| + \|T_{n+1} x_n - T_n x_n\| + \alpha_{n+1}\bigl(\|T_{n+1} x_n\| + \|u_{n+1}\|\bigr) + \alpha_n\bigl(\|T_n x_n\| + \|u_n\|\bigr),
\end{aligned}
\]
which gives
\[ \|z_{n+1} - z_n\| - \|x_{n+1} - x_n\| \le \|T_{n+1} x_n - T_n x_n\| + \alpha_{n+1}\bigl(\|T_{n+1} x_n\| + \|u_{n+1}\|\bigr) + \alpha_n\bigl(\|T_n x_n\| + \|u_n\|\bigr). \]
As shown in Step 1, $\{T_n x_n\}$ and $\{u_n\}$ are bounded. Observe that
\[ \|T_{n+1} x_n - x^*\| \le \|T_{n+1} x_n - T_{n+1} x^*\| + \|T_{n+1} x^* - x^*\| \le \|x_n - x^*\| + \|T_{n+1} x^* - x^*\| \]
and
\[ \|f_{n+1} x_n - x^*\| \le \|f_{n+1} x_n - f_{n+1} x^*\| + \|f_{n+1} x^* - x^*\| \le k_{n+1}\|x_n - x^*\| + \|f_{n+1} x^* - x^*\| + a_{n+1}. \]
Thus, $\{f_{n+1} x_n\}$ and $\{T_{n+1} x_n\}$ are bounded. Since $\alpha_n \to 0$ and, by (C5), $\|T_{n+1} x_n - T_n x_n\| \le \|T_{n+1} x_n - T x_n\| + \|T x_n - T_n x_n\| \to 0$, we obtain
\[ \limsup_{n\to\infty}\bigl(\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|\bigr) \le 0. \]
By [[25], Lemma 2.2], we obtain $\lim_{n\to\infty}\|z_n - x_n\| = 0$, which implies that
\[ \lim_{n\to\infty}\|x_{n+1} - x_n\| = \lim_{n\to\infty} \beta_n \|z_n - x_n\| = 0. \]

Step 3: $\lim_{n\to\infty}\|x_n - T x_n\| = 0$.

Note that
\[ \|y_n - T_n x_n\| = \|\alpha_n f_n x_n + (1 - \alpha_n) T_n x_n - T_n x_n\| = \alpha_n \|f_n x_n - T_n x_n\|, \]
and hence
\[
\begin{aligned}
\|x_n - T x_n\| &\le \|x_n - x_{n+1}\| + \|x_{n+1} - T_n x_n\| + \|T_n x_n - T x_n\| \\
&\le \|x_n - x_{n+1}\| + (1 - \beta_n)\|x_n - T_n x_n\| + \beta_n \|z_n - T_n x_n\| + \|T_n x_n - T x_n\| \\
&\le \|x_n - x_{n+1}\| + (1 - \beta_n)\|x_n - T_n x_n\| + \beta_n \|y_n - T_n x_n\| + \|T_n x_n - T x_n\| \\
&\le \|x_n - x_{n+1}\| + (1 - \beta_n)\|x_n - T_n x_n\| + \alpha_n \beta_n \|f_n x_n - T_n x_n\| + \|T_n x_n - T x_n\| \\
&\le \|x_n - x_{n+1}\| + (1 - \beta_n)\bigl(\|x_n - T x_n\| + \|T x_n - T_n x_n\|\bigr) + \alpha_n \beta_n \|f_n x_n - T_n x_n\| + \|T_n x_n - T x_n\|,
\end{aligned}
\]
which implies that
\[ \beta_n \|x_n - T x_n\| \le \|x_n - x_{n+1}\| + \alpha_n \beta_n \|f_n x_n - T_n x_n\| + 2\|T_n x_n - T x_n\|. \]

Since $\|x_n - x_{n+1}\| \to 0$ by Step 2, $\alpha_n \to 0$, $\|T_n x_n - T x_n\| \to 0$ by (C5), and $\beta_n \ge \beta > 0$, we conclude that $\lim_{n\to\infty}\|x_n - T x_n\| = 0$.

Step 4: $\limsup_{n\to\infty} \gamma_n \le 0$.

Note that
\[ \|z_n - T z_n\| \le \|z_n - x_n\| + \|x_n - T x_n\| + \|T x_n - T z_n\| \le 2\|z_n - x_n\| + \|x_n - T x_n\| \to 0 \quad \text{as } n \to \infty. \]
We take a subsequence $\{z_{n_i}\}$ of $\{z_n\}$ such that
\[ \limsup_{n\to\infty} \langle f x^* - x^*, z_n - x^* \rangle = \lim_{i\to\infty} \langle f x^* - x^*, z_{n_i} - x^* \rangle \quad \text{and} \quad z_{n_i} \rightharpoonup z \in C \text{ as } i \to \infty. \]
Since $\{z_n\}$ lies in $C$ and $\|z_n - T z_n\| \to 0$, we conclude from Lemma 2.3 that $z \in F(T)$. Since $x^* = P_{F(T)} f(x^*)$, we obtain from Lemma 2.2
\[ \limsup_{n\to\infty} \gamma_n = \lim_{i\to\infty} \langle f x^* - x^*, z_{n_i} - x^* \rangle = \langle f x^* - x^*, z - x^* \rangle \le 0. \]

Step 5: $x_n \to x^*$.

Since $\{\|z_n - x^*\|\}$ is bounded, there exists $R_1 > 0$ such that $\|z_n - x^*\| \le R_1$ for all $n \in \mathbb{N}$. Noting that $z_n = P_C[y_n]$, we have from (3.1) and Lemma 2.2
\[
\begin{aligned}
\|z_n - x^*\|^2 &= \langle z_n - y_n, z_n - x^* \rangle + \langle y_n - x^*, z_n - x^* \rangle \le \langle y_n - x^*, z_n - x^* \rangle \\
&= \bigl\langle \alpha_n (f_n x_n - f_n x^* + f_n x^* - f x^* + f x^* - x^*) + (1 - \alpha_n)(T_n x_n - T_n x^* + T_n x^* - x^*),\, z_n - x^* \bigr\rangle \\
&\le \bigl[\alpha_n \bigl(\|f_n x_n - f_n x^*\| + \|f_n x^* - f x^*\|\bigr) + (1 - \alpha_n)\bigl(\|T_n x_n - T_n x^*\| + \|T_n x^* - x^*\|\bigr)\bigr]\|z_n - x^*\| + \alpha_n \langle f x^* - x^*, z_n - x^* \rangle \\
&\le \bigl[\alpha_n \bigl(k_n \|x_n - x^*\| + \|f_n x^* - f x^*\| + a_n\bigr) + (1 - \alpha_n)\bigl(\|x_n - x^*\| + \|T_n x^* - x^*\|\bigr)\bigr]\|z_n - x^*\| + \alpha_n \gamma_n \\
&= \bigl(1 - (1 - k_n)\alpha_n\bigr)\|x_n - x^*\|\,\|z_n - x^*\| + \bigl[\alpha_n\bigl(\|f_n x^* - f x^*\| + a_n\bigr) + (1 - \alpha_n)\|T_n x^* - x^*\|\bigr]\|z_n - x^*\| + \alpha_n \gamma_n \\
&\le \frac{1 - (1 - k_n)\alpha_n}{2}\bigl(\|x_n - x^*\|^2 + \|z_n - x^*\|^2\bigr) + \bigl[\alpha_n\bigl(\|f_n x^* - f x^*\| + a_n\bigr) + (1 - \alpha_n)\|T_n x^* - x^*\|\bigr]R_1 + \alpha_n \gamma_n \\
&\le \frac{1 - (1 - k_n)\alpha_n}{2}\|x_n - x^*\|^2 + \frac{1}{2}\|z_n - x^*\|^2 + \bigl[\alpha_n\bigl(\|f_n x^* - f x^*\| + a_n\bigr) + (1 - \alpha_n)\|T_n x^* - x^*\|\bigr]R_1 + \alpha_n \gamma_n,
\end{aligned}
\]
which implies that
\[ \|z_n - x^*\|^2 \le \bigl(1 - (1 - k_n)\alpha_n\bigr)\|x_n - x^*\|^2 + 2\bigl[\alpha_n\bigl(\|f_n x^* - f x^*\| + a_n\bigr) + (1 - \alpha_n)\|T_n x^* - x^*\|\bigr]R_1 + 2\alpha_n \gamma_n. \]
From (3.1), we have
\[
\begin{aligned}
\|x_{n+1} - x^*\|^2 &= \|(1 - \beta_n)(x_n - x^*) + \beta_n (z_n - x^*)\|^2 \le (1 - \beta_n)\|x_n - x^*\|^2 + \beta_n \|z_n - x^*\|^2 \\
&\le \bigl(1 - (1 - k_n)\alpha_n \beta_n\bigr)\|x_n - x^*\|^2 + 2\bigl[\alpha_n \beta_n\bigl(\|f_n x^* - f x^*\| + a_n\bigr) + (1 - \alpha_n)\|T_n x^* - x^*\|\bigr]R_1 + 2\alpha_n \beta_n \gamma_n.
\end{aligned}
\]

Since $\lim_{n\to\infty} \frac{\|f_n x^* - f x^*\|}{1 - k_n} = 0$, $a_n \to 0$, and $\sum_{n=1}^{\infty}(1 - \alpha_n)\|T_n x^* - x^*\| < \infty$, we conclude from Lemma 2.4(b) that $x_n \to x^*$. □

Remark 3.3 Theorem 3.2 has the following features in its convergence analysis of (3.1):

(a) The iterates of (3.1) remain in the closed ball $B_R[x^*]$.

(b) The assumption (S2) is not required.

(c) Condition (C4) is imposed only at the single point $x^* \in F(T)$. By contrast, the condition '$\sum_{n=1}^{\infty}\|T_n z - T z\| < \infty$ for all $z \in F(T)$' is adopted in [[26], Theorem 3.1].

Thus, Theorem 3.2 is more general in nature, and it significantly extends and improves [[26], Theorem 3.1] and [[18], Theorem 3.2].

Theorem 3.2 remains true if condition (C4) is replaced by the assumption that the mappings $\{T_n\}$ and $T$ have common fixed points. In fact, we have:

Theorem 3.4 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, $T : C \to C$ a nonexpansive mapping such that $F(T) \neq \emptyset$, and $f : C \to H$ a $\kappa$-contraction with $\kappa \in [0,1)$ such that $P_{F(T)} f(x^*) = x^* \in F(T)$. Let $\{f_n\}$ be a sequence of nearly contraction mappings from $C$ into $H$ with sequence $\{(k_n, a_n)\}$ in $[0,1) \times [0,\infty)$ such that $k_n \to \kappa$. Let $\{T_n\}$ be a sequence of nonexpansive mappings from $C$ into itself such that $F(T) = \bigcap_{n\in\mathbb{N}} F(T_n)$. For given $x_1 \in C$, let $\{x_n\}$ be the sequence in $C$ generated by (3.1), where $\{\alpha_n\}$ is a sequence in $(0,1]$ and $\{\beta_n\}$ is a sequence in $(0,1)$ satisfying (C1), (C2), and (C3). Then the following statements hold:

(a) The sequence $\{x_n\}$ generated by (3.1) remains in the closed ball $B_R[x^*]$, where $R = \max\{\|x_1 - x^*\|, K\}$ and $K$ is given in (3.2).

(b) If the assumption (C5) holds, then $\{x_n\}$ converges strongly to $x^*$.

We now prove strong convergence of the sequence $\{x_n\}$ generated by (3.1) under condition (C6).

Theorem 3.5 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, let $T : C \to C$ be a nonexpansive mapping such that $F(T) \neq \emptyset$, and let $\{T_n\}$ be a sequence of nonexpansive mappings from $C$ into itself. Let $f : C \to H$ be a $\kappa$-contraction with $\kappa \in [0,1)$ such that $P_{F(T)} f(x^*) = x^* \in F(T)$, and let $\{f_n\}$ be a sequence of $k_n$-contraction mappings from $C$ into $H$ such that $k_n \to \kappa$. For given $x_1 \in C$, let $\{x_n\}$ be the sequence in $C$ generated by (3.1), where $\{\alpha_n\}$ is a sequence in $(0,1]$ and $\{\beta_n\}$ is a sequence in $(0,1)$ satisfying (C1), (C2), (C3), and (C4). Then the following statements hold:

(a) The sequence $\{x_n\}$ generated by (3.1) remains in the closed ball $B_R[x^*]$, where
\[ R = \max\Bigl\{\|x_1 - x^*\|, \sup_{n\in\mathbb{N}} \frac{\|f_n x^* - x^*\|}{1 - k_n}\Bigr\} + \sum_{n=1}^{\infty} (1 - \alpha_n)\|T_n x^* - x^*\|. \]

(b) If the following assumption holds:

(C6) $\sum_{n=1}^{\infty} \mathcal{D}_{B_R[x^*]}(T_n, T) < \infty$,

then $\{x_n\}$ converges strongly to $x^*$.

Proof We show that $\sum_{n=1}^{\infty} \mathcal{D}_{B_R[x^*]}(T_n, T) < \infty$ implies $\lim_{n\to\infty}\|T_n v_n - T v_n\| = 0$ for every sequence $\{v_n\}$ in $B_R[x^*]$. Indeed, if $\{w_n\}$ is a sequence in $B_R[x^*]$, then
\[ \sum_{n=1}^{\infty}\|T_n w_n - T w_n\| \le \sum_{n=1}^{\infty} \mathcal{D}_{B_R[x^*]}(T_n, T) < \infty. \]

It follows that $\lim_{n\to\infty}\|T_n w_n - T w_n\| = 0$. Thus condition (C5) of Theorem 3.2 holds, and Theorem 3.5 follows from Theorem 3.2. □

For a sequence $\{u_n\}$ in $H$ with $u_n \to u \in H$, define $f_n : C \to H$ by
\[ f_n x = u_n, \quad x \in C. \]
Then each $f_n$ is a $0$-contraction with $\|f_n x - f x\| = \|u_n - u\| \to 0$, where $f x = u$ for all $x \in C$. In this case algorithm (3.1) with $T_n = T$ reduces to
\[ x_{n+1} = (1 - \beta_n)x_n + \beta_n P_C[\alpha_n u_n + (1 - \alpha_n) T x_n], \quad n \in \mathbb{N}. \tag{3.3} \]

Corollary 3.6 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and let $T : C \to C$ be a nonexpansive mapping such that $F(T) \neq \emptyset$. Let $\{u_n\}$ be a sequence in $H$ such that $u_n \to u \in H$ and $P_{F(T)}(u) = x^* \in F(T)$. For given $x_1 \in C$, let $\{x_n\}$ be the sequence in $C$ generated by (3.3), where $\{\alpha_n\}$ is a sequence in $(0,1]$ and $\{\beta_n\}$ is a sequence in $(0,1)$ satisfying (C1) and (C2). Then the following statements hold:

(a) The sequence $\{x_n\}$ generated by (3.3) remains in the closed ball $B_R[x^*]$, where $R = \max\{\|x_1 - x^*\|, \sup_{n\in\mathbb{N}}\|u_n - x^*\|\}$.

(b) $\{x_n\}$ converges strongly to $x^*$.

Remark 3.7 If $u = 0$ in Corollary 3.6, then the sequence $\{x_n\}$ generated by algorithm (3.3) converges strongly to the minimum norm solution of the FPP (1.6). Corollary 3.6 also provides a closed ball in which $\{x_n\}$ lies. Therefore, Corollary 3.6 significantly extends and improves [[27], Theorem 3.1].

3.2 The split feasibility problem

In this section we apply Theorem 3.5 to solve the SFP (1.2). We begin with the $\rho$-distance.

Definition 3.8 Let $C$ and $Q$ be two closed convex subsets of a Hilbert space $H$ and let $\rho$ be a positive constant. The $\rho$-distance between $C$ and $Q$ is defined by
\[ d_\rho(C, Q) = \sup_{\|x\| \le \rho} \|P_C x - P_Q x\|. \]
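For intuition, the $\rho$-distance can be estimated by sampling the ball of radius $\rho$. In the sketch below (our own one-dimensional toy data) $C = [0,1]$ and $Q$ is a shifted copy, so $d_\rho$ equals the shift once $\rho$ is large enough to cover both intervals.

```python
def d_rho(proj_C, proj_Q, rho, samples=4001):
    """Estimate sup_{|x| <= rho} |P_C x - P_Q x| on a uniform grid of [-rho, rho]."""
    return max(abs(proj_C(-rho + 2 * rho * i / (samples - 1))
                   - proj_Q(-rho + 2 * rho * i / (samples - 1)))
               for i in range(samples))

clip = lambda v, lo, hi: min(max(v, lo), hi)
d = d_rho(lambda v: clip(v, 0.0, 1.0),    # C = [0, 1]
          lambda v: clip(v, 0.1, 1.1),    # Q = [0.1, 1.1]
          rho=2.0)
# d is approximately 0.1, the shift between C and Q
```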

By employing Theorem 3.5, we present a variable KM-like CQ algorithm (3.6) for finding solutions of the SFP (1.2) and prove its strong convergence.

Theorem 3.9 Let $C$ and $Q$ be nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, and let $\{C_n\}$ and $\{Q_n\}$ be sequences of closed convex subsets of $H_1$ and $H_2$, respectively. Let $f : C \to H_1$ be a $\kappa$-contraction and let $\{f_n\}$ be a sequence of $k_n$-contraction mappings from $C$ into $H_1$ such that $k_n \to \kappa$. Let $A : H_1 \to H_2$ be a bounded linear operator with adjoint $A^{*}$. For $\gamma \in (0, 2/L)$, define
\[ T = P_C\bigl(I - \gamma A^{*}(I - P_Q)A\bigr) \tag{3.4} \]
and
\[ T_n = P_{C_n}\bigl(I - \gamma A^{*}(I - P_{Q_n})A\bigr), \quad n \in \mathbb{N}. \tag{3.5} \]
Assume that the SFP (1.2) is consistent and $P_{F(T)} f x^* = x^* \in F(T)$. For given $x_1 \in C$, let $\{x_n\}$ be the sequence in $C$ generated by the following variable KM-like CQ algorithm:
\[ x_{n+1} = (1 - \beta_n)x_n + \beta_n P_C\bigl[\alpha_n f_n x_n + (1 - \alpha_n) P_{C_n}\bigl(I - \gamma A^{*}(I - P_{Q_n})A\bigr)x_n\bigr], \quad n \in \mathbb{N}, \tag{3.6} \]
where $\{\alpha_n\}$ is a sequence in $(0,1]$ and $\{\beta_n\}$ is a sequence in $(0,1)$ satisfying (C1), (C2), (C3), and (C4). Then the following statements hold:

(a) The sequence $\{x_n\}$ generated by (3.6) remains in the closed ball $B_R[x^*]$, where
\[ R = \max\Bigl\{\|x_1 - x^*\|, \sup_{n\in\mathbb{N}} \frac{\|f_n x^* - x^*\|}{1 - k_n}\Bigr\} + \sum_{n=1}^{\infty}(1 - \alpha_n)\|T_n x^* - x^*\|. \]

(b) If $\bar{\rho} = \max\bigl\{\|Ax\|, \|(I - \gamma A^{*}(I - P_Q)A)x\| : x \in B_R[x^*]\bigr\}$ and the following assumption holds:

(C7) $\sum_{n=1}^{\infty} d_{\bar{\rho}}(C_n, C) < \infty$ and $\sum_{n=1}^{\infty} d_{\bar{\rho}}(Q_n, Q) < \infty$,

then $\{x_n\}$ converges strongly to $x^*$.

Proof (a) Since $\gamma \in (0, 2/L)$, $T$ and each $T_n$ are nonexpansive, and $F(T) \neq \emptyset$ because the SFP (1.2) is consistent. Hence this part follows from Theorem 3.5(a).

(b) Let $x \in H_1$ be such that $x \in B_R[x^*]$. Since each $P_{C_n}$ is nonexpansive, we have
\[
\begin{aligned}
\|T_n x - T x\| &= \bigl\|P_{C_n}\bigl(I - \gamma A^{*}(I - P_{Q_n})A\bigr)x - P_C\bigl(I - \gamma A^{*}(I - P_Q)A\bigr)x\bigr\| \\
&\le \bigl\|P_{C_n}\bigl(I - \gamma A^{*}(I - P_{Q_n})A\bigr)x - P_{C_n}\bigl(I - \gamma A^{*}(I - P_Q)A\bigr)x\bigr\| + \bigl\|P_{C_n}\bigl(I - \gamma A^{*}(I - P_Q)A\bigr)x - P_C\bigl(I - \gamma A^{*}(I - P_Q)A\bigr)x\bigr\| \\
&\le \gamma \|A^{*}(P_{Q_n}Ax - P_Q Ax)\| + \bigl\|P_{C_n}\bigl(I - \gamma A^{*}(I - P_Q)A\bigr)x - P_C\bigl(I - \gamma A^{*}(I - P_Q)A\bigr)x\bigr\| \\
&\le \gamma \|A\|\,\|P_{Q_n}Ax - P_Q Ax\| + d_{\bar{\rho}}(C_n, C) \\
&\le \gamma \|A\|\, d_{\bar{\rho}}(Q_n, Q) + d_{\bar{\rho}}(C_n, C),
\end{aligned}
\]
since $\|Ax\| \le \bar{\rho}$ and $\|(I - \gamma A^{*}(I - P_Q)A)x\| \le \bar{\rho}$. Thus,
\[ \sum_{n=1}^{\infty} \mathcal{D}_{B_R[x^*]}(T_n, T) = \sum_{n=1}^{\infty} \sup_{x \in B_R[x^*]}\|T_n x - T x\| \le \sum_{n=1}^{\infty} d_{\bar{\rho}}(C_n, C) + \gamma\|A\| \sum_{n=1}^{\infty} d_{\bar{\rho}}(Q_n, Q) < \infty. \]

Hence condition (C6) of Theorem 3.5 holds, and Theorem 3.9(b) follows from Theorem 3.5(b). □

For a sequence $\{u_n\}$ in $H_1$ with $u_n \to 0 \in H_1$, define $f_n : C \to H_1$ by
\[ f_n x = u_n, \quad x \in C. \]
Then each $f_n$ is a $0$-contraction with $\|f_n x - f x\| = \|u_n\| \to 0$, where $f \equiv 0$. In this case the variable KM-like CQ algorithm (3.6) reduces to the following variable KM-like CQ algorithm:
\[ x_{n+1} = (1 - \beta_n)x_n + \beta_n P_C\bigl[\alpha_n u_n + (1 - \alpha_n) P_{C_n}\bigl(I - \gamma A^{*}(I - P_{Q_n})A\bigr)x_n\bigr], \quad n \in \mathbb{N}. \tag{3.7} \]

We now present strong convergence of the variable KM-like CQ algorithm (3.7) to the minimum norm solution of the SFP (1.2).

Corollary 3.10 Let $C$ and $Q$ be nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, and let $\{C_n\}$ and $\{Q_n\}$ be sequences of closed convex subsets of $H_1$ and $H_2$, respectively. Let $A : H_1 \to H_2$ be a bounded linear operator with adjoint $A^{*}$. For $\gamma \in (0, 2/L)$, define $T$ and $T_n$ by (3.4) and (3.5), respectively. Assume that the SFP (1.2) is consistent and $P_{F(T)}(0) = x^* \in F(T)$. For given $x_1 \in C$ and a sequence $\{u_n\}$ in $H_1$ with $u_n \to 0 \in H_1$, let $\{x_n\}$ be the sequence in $C$ generated by the variable KM-like CQ algorithm (3.7), where $\{\alpha_n\}$ is a sequence in $(0,1]$ and $\{\beta_n\}$ is a sequence in $(0,1)$ satisfying (C1), (C2), and (C4). Then the following statements hold:

(a) The sequence $\{x_n\}$ generated by (3.7) remains in the closed ball $B_R[x^*]$, where
\[ R = \max\{\|x_1 - x^*\|, \|x^*\|\} + \sum_{n=1}^{\infty}(1 - \alpha_n)\|T_n x^* - x^*\|. \]

(b) If $\bar{\rho} = \max\bigl\{\|Ax\|, \|(I - \gamma A^{*}(I - P_Q)A)x\| : x \in B_R[x^*]\bigr\}$ and the assumption (C7) holds, then $\{x_n\}$ converges strongly to $x^*$.

Corollary 3.10 significantly extends and improves [[11], Theorem 3.1].
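The minimum-norm behavior of (3.7) can be seen on a toy problem. The sketch below is entirely our own construction: a one-dimensional SFP with $C = Q = [0,1]$, $Ax = 2x$ (so $L = 4$ and $\gamma = 1/4 \in (0, 2/L)$), perturbed sets $C_n = Q_n = [-1/n^2, 1 + 1/n^2]$ (so the $\rho$-distances decay like $1/n^2$ and are summable, as (C7) requires), and $u_n = 1/n \to 0$. The solution set is $[0, 1/2]$ and the minimum norm solution is $0$.

```python
def km_cq_min_norm(n_iters=20000):
    """Sketch of algorithm (3.7) on a toy 1-D SFP (our own data):
    C = Q = [0,1], A x = 2x, gamma = 1/4, C_n = Q_n = [-1/n^2, 1 + 1/n^2],
    u_n = 1/n."""
    clip = lambda v, lo, hi: min(max(v, lo), hi)
    gamma, x = 0.25, 1.0
    for n in range(1, n_iters + 1):
        eps = 1.0 / n**2
        PQn = clip(2.0 * x, -eps, 1.0 + eps)             # P_{Q_n}(A x)
        Tn = clip(x - gamma * 2.0 * (2.0 * x - PQn),     # P_{C_n}(x - gamma A*(I - P_{Q_n})A x)
                  -eps, 1.0 + eps)
        alpha, beta, u = 1.0 / (n + 1), 0.5, 1.0 / n
        x = (1.0 - beta) * x + beta * clip(alpha * u + (1.0 - alpha) * Tn, 0.0, 1.0)
    return x

x = km_cq_min_norm()
# x is close to 0, the minimum norm solution of this toy SFP
```

The slow drift toward $0$ reflects the Halpern-type step sizes $\alpha_n = 1/(n+1)$; a solution of the SFP is reached quickly, after which the anchor term selects the minimum norm one.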

3.3 The constrained multiple-sets split feasibility problem

In this section, we consider the following multiple-sets split feasibility problem, which models intensity-modulated radiation therapy [6] and has recently been investigated by many researchers; see, for example, [1, 3, 6, 8–14] and the references therein.

Let $H_1$ and $H_2$ be two Hilbert spaces and let $r$ and $p$ be two natural numbers. For each $i \in \{1, 2, \ldots, p\}$, let $C_i$ be a nonempty closed convex subset of $H_1$, and for each $j \in \{1, 2, \ldots, r\}$, let $Q_j$ be a nonempty closed convex subset of $H_2$. Further, for each $j \in \{1, 2, \ldots, r\}$, let $A_j : H_1 \to H_2$ be a bounded linear operator, and let $\Omega$ be a closed convex subset of $H_1$. The (constrained) multiple-sets split feasibility problem (MSSFP) is to find a point $x \in \Omega$ such that
\[ x \in C := \bigcap_{i=1}^{p} C_i \quad \text{and} \quad A_j x \in Q_j, \quad j \in \{1, 2, \ldots, r\}. \tag{3.8} \]

When $p = r = 1$, the MSSFP (3.8) reduces to the SFP (1.2).

The split feasibility problem (SFP) and the multiple-sets split feasibility problem (MSSFP) also model image retrieval [28].

For each $i \in \{1, 2, \ldots, p\}$ and $j \in \{1, 2, \ldots, r\}$, let $\bar{\alpha}_i$ and $\bar{\beta}_j$ be positive numbers. Let $B : H_1 \to H_1$ be the gradient $\nabla\psi$ of the convex and continuously differentiable function $\psi : H_1 \to \mathbb{R}$ defined by
\[ \psi(x) := \frac{1}{2}\sum_{i=1}^{p} \bar{\alpha}_i \|x - P_{C_i}x\|^2 + \frac{1}{2}\sum_{j=1}^{r} \bar{\beta}_j \|A_j x - P_{Q_j}A_j x\|^2, \quad x \in H_1. \tag{3.9} \]
Following [28], we see that
\[ Bx := \sum_{i=1}^{p} \bar{\alpha}_i (I - P_{C_i})x + \sum_{j=1}^{r} \bar{\beta}_j A_j^{*}(I - P_{Q_j})A_j x, \quad x \in H_1, \tag{3.10} \]
where $A_j^{*}$ is the adjoint of $A_j$, $j \in \{1, 2, \ldots, r\}$. The nonexpansivity of $I - P_C$ implies that $B$ is a Lipschitzian mapping with Lipschitz constant
\[ L := \sum_{i=1}^{p} \bar{\alpha}_i + \sum_{j=1}^{r} \bar{\beta}_j \|A_j\|^2. \tag{3.11} \]

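The identity $B = \nabla\psi$ and the constant (3.11) are easy to check numerically in one dimension. The sketch below uses toy data of our own: $p = 2$, $r = 1$, $C_1 = [0,1]$, $C_2 = [0.5, 2]$, $Q_1 = [0,1]$, $A_1 x = 2x$, all weights $1$, so $L = 1 + 1 + 2^2 = 6$ and $\gamma$ may be taken in $(0, 1/3)$.

```python
clip = lambda v, lo, hi: min(max(v, lo), hi)

def psi(x):
    """Proximity function (3.9) for the toy data."""
    dC1 = x - clip(x, 0.0, 1.0)
    dC2 = x - clip(x, 0.5, 2.0)
    dQ1 = 2.0 * x - clip(2.0 * x, 0.0, 1.0)   # A1 x - P_Q1 A1 x
    return 0.5 * (dC1**2 + dC2**2 + dQ1**2)

def B(x):
    """Gradient (3.10): the (I - P_Ci) x terms plus A1*(I - P_Q1) A1 x."""
    return ((x - clip(x, 0.0, 1.0)) + (x - clip(x, 0.5, 2.0))
            + 2.0 * (2.0 * x - clip(2.0 * x, 0.0, 1.0)))

# Central finite differences of psi agree with B away from the projection kinks.
h = 1e-6
ok = all(abs((psi(t + h) - psi(t - h)) / (2 * h) - B(t)) < 1e-4
         for t in [-0.7, 0.2, 0.8, 1.3, 2.5])
```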
Thus, a variable KM-like CQ algorithm can be developed to solve the MSSFP (3.8). Let $\{\Omega_n\}$, $\{C_{i,n}\}$, and $\{Q_{j,n}\}$ be sequences of closed convex sets, viewed as perturbations of the closed convex sets $\Omega$, $C_i$, and $Q_j$, respectively.

We now present an iterative algorithm for solving the MSSFP (3.8).

Theorem 3.11 Let $f : \Omega \to H_1$ be a $\kappa$-contraction and let $\{f_n\}$ be a sequence of $k_n$-contraction mappings from $\Omega$ into $H_1$ such that $k_n \to \kappa$. For $\gamma \in (0, 2/L)$, define
\[ Tx = P_{\Omega}\biggl(x - \gamma\biggl(\sum_{i=1}^{p} \bar{\alpha}_i (I - P_{C_i})x + \sum_{j=1}^{r} \bar{\beta}_j A_j^{*}(I - P_{Q_j})A_j x\biggr)\biggr) \tag{3.12} \]
and
\[ T_n x = P_{\Omega_n}\biggl(x - \gamma\biggl(\sum_{i=1}^{p} \bar{\alpha}_i (I - P_{C_{i,n}})x + \sum_{j=1}^{r} \bar{\beta}_j A_j^{*}(I - P_{Q_{j,n}})A_j x\biggr)\biggr), \quad n \in \mathbb{N}. \tag{3.13} \]
Assume that the MSSFP (3.8) is consistent and $P_{F(T)} f x^* = x^* \in F(T)$. For given $x_1 \in \Omega$, let $\{x_n\}$ be the sequence in $\Omega$ generated by
\[ x_{n+1} = (1 - \beta_n)x_n + \beta_n P_{\Omega}\bigl[\alpha_n f_n x_n + (1 - \alpha_n) T_n x_n\bigr], \quad n \in \mathbb{N}, \]
where $\{\alpha_n\}$ is a sequence in $(0,1]$ and $\{\beta_n\}$ is a sequence in $(0,1)$ satisfying (C1), (C2), (C3), and (C4). Then the following statements hold:

(a) The sequence $\{x_n\}$ generated by the above algorithm remains in the closed ball $B_R[x^*]$, where
\[ R = \max\Bigl\{\|x_1 - x^*\|, \sup_{n\in\mathbb{N}} \frac{\|f_n x^* - x^*\|}{1 - k_n}\Bigr\} + \sum_{n=1}^{\infty}(1 - \alpha_n)\|T_n x^* - x^*\|. \]

(b) If $\bar{\rho} = \max\bigl\{\max_{1\le j\le r}\|A_j x\|, \|(I - \gamma\nabla\psi)x\| : x \in B_R[x^*]\bigr\}$ and, for each $i \in \{1, 2, \ldots, p\}$ and $j \in \{1, 2, \ldots, r\}$, the following assumption holds:

(C8) $\sum_{n=1}^{\infty} d_{\bar{\rho}}(\Omega_n, \Omega) < \infty$, $\sum_{n=1}^{\infty} \mathcal{D}_{B_R[x^*]}(P_{C_{i,n}}, P_{C_i}) < \infty$, and $\sum_{n=1}^{\infty} d_{\bar{\rho}}(Q_{j,n}, Q_j) < \infty$,

then $\{x_n\}$ converges strongly to $x^*$.

Proof (a) Define
ψ n ( x ) : = 1 2 i = 1 p α ¯ i x P i , n x 2 + 1 2 j = 1 r β ¯ j A j x P Q j , n A j x 2 .
The gradients of ψ and ψ n are given by
ψ ( x ) = i = 1 p α ¯ i ( I P C i ) x + j = 1 r β ¯ j A j ( I P Q j ) A j x
and
ψ n ( x ) = i = 1 p α ¯ i ( I P C i , n ) x + j = 1 r β ¯ j A j ( I P Q j , n ) A j x .
Hence, from (3.12) and (3.13), we have
T x = P Ω ( x γ ψ ( x ) ) ,
and
T n x = P Ω n ( x γ ψ n ( x ) ) , n N .

Since γ ( 0 , 2 / L ) , T and T n , for all n N , are nonexpansive mappings, and F ( T ) because the MSSFP (3.8) is consistent. Hence, this part follows from Theorem 3.5(a).

(b) Assume that
ρ ¯ = max { max 1 j p A j x , ( I γ ψ ) x : x B R [ x ] } .
Let x H 1 be such that x B R [ x ] . Since each P C n is the nonexpansive, we have
T n x T x = P Ω n ( I γ ψ n ) x P Ω ( I γ ψ ) x P Ω n ( I γ ψ n ) x P Ω n ( I γ ψ ) x + P Ω n ( I ψ ) x P Ω ( I γ ψ ) x γ ψ n ( x ) ψ ( x ) + P Ω n ( I γ ψ ) x P Ω ( I γ ψ ) x γ i = 1 p α ¯ i P C i , n x P C i x + j = 1 p β ¯ j A j P Q j , n A j x P Q j A j x + d ρ ¯ ( Ω n , Ω ) γ i = 1 p α ¯ i D B R [ x ] ( P C i , n , P C i ) + j = 1 p A j β ¯ j d ρ ¯ ( P Q j , n , P Q j ) + d ρ ¯ ( Ω n , Ω ) .
By the assumptions, we have
n = 1 D B R [ x ] ( T n , T ) = n = 1 sup x B R [ x ] T n x T x γ i = 1 p α ¯ i n = 1 D B R [ x ] ( P C i , n , P C i ) + j = 1 p A j β ¯ j n = 1 d ρ ¯ ( P Q j , n , P Q j ) + n = 1 d ρ ¯ ( Ω n , Ω ) < .

Hence condition (C6) in Theorem 3.5 holds. Therefore, Theorem 3.9(b) follows from Theorem 3.5(b). □

Theorem 3.11 significantly extends and improves [12, Theorem 1].

Finally, we present the strong convergence of the variable KM-like CQ algorithm (3.7) to the minimum norm solution of the MSSFP (3.8).

Corollary 3.12 Define $T$ and $T_n$ by (3.12) and (3.13), respectively. Assume that the MSSFP (3.8) is consistent and that $P_{F(T)}(0) = x^* \in F(T)$. For given $x_1 \in C$ and a sequence $\{u_n\}$ in $H_1$ with $u_n \to 0 \in H_1$, let $\{x_n\}$ be a sequence in $C$ generated by the following variable KM-like CQ algorithm:
$$x_{n+1} = (1 - \beta_n)x_n + \beta_n P_C[\alpha_n u_n + (1 - \alpha_n) T_n x_n] \quad \text{for all } n \in \mathbb{N},$$
where $0 < \gamma < 2/L$, $\{\alpha_n\}$ is a sequence in $(0, 1]$ and $\{\beta_n\}$ is a sequence in $(0, 1)$ satisfying (C1), (C2), and (C4). Then the following statements hold:

(a) The sequence $\{x_n\}$ remains in the closed ball $B_R[x^*]$, where
$$R = \max\{\|x_1 - x^*\|, \|x^*\|\} + \sum_{n=1}^{\infty} (1 - \alpha_n)\|T_n x^* - x^*\|.$$

(b) If $\bar{\rho} = \max\{\max_{1 \le j \le r} \|A_j x\|, \|(I - \gamma \nabla\psi)x\| : x \in B_R[x^*]\}$ and, for each $i \in \{1, 2, \ldots, p\}$ and $j \in \{1, 2, \ldots, r\}$, the assumption (C8) holds, then $\{x_n\}$ converges strongly to $x^*$.
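The minimum-norm behavior of Corollary 3.12 can be visualized on toy data. In the sketch below all problem data are illustrative assumptions (a single $C$, a single $Q$, one matrix $A$, $\Omega = C$, $T_n = T$ fixed, and $u_n \equiv 0$); the variable KM-like CQ iteration with anchor $0$ drifts toward the fixed point of $T$ closest to the origin.

```python
import numpy as np

# Toy problem data (illustrative assumptions, not from the paper).
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

def proj_box(x, lo, hi):
    """Metric projection onto a box (componentwise clipping)."""
    return np.clip(x, lo, hi)

P_C     = lambda x: proj_box(x, 0.0, 1.0)                # C = [0,1]^2
P_Q     = lambda y: proj_box(y, [1.0, 0.0], [2.0, 1.0])  # Q = [1,2] x [0,1]
P_Omega = P_C                                            # take Omega = C

gamma = 0.3  # gamma < 2/L with L = 1 + ||A||^2 = 5

def T(x):
    """Projected-gradient operator T x = P_Omega(x - gamma * grad psi(x))."""
    Ax = A @ x
    g = (x - P_C(x)) + A.T @ (Ax - P_Q(Ax))
    return P_Omega(x - gamma * g)

# Variable KM-like CQ iteration with u_n -> 0 (here u_n = 0),
# alpha_n = 1/(n+1), beta_n = 1/2. The solution set of this toy
# problem is [0.5, 1] x [0, 1]; its minimum-norm element is (0.5, 0).
x = np.array([1.0, 1.0])
for n in range(1, 5001):
    alpha, beta = 1.0 / (n + 1), 0.5
    u = np.zeros(2)
    x = (1 - beta) * x + beta * P_C(alpha * u + (1 - alpha) * T(x))

print(x)  # drifts toward the minimum-norm solution (0.5, 0)
```

The anchoring term $(1 - \alpha_n)T x_n$ repeatedly shrinks the iterate toward $0$ before the averaging step, which is what steers the limit to $P_{F(T)}(0)$ rather than to an arbitrary fixed point; the second coordinate decays only at a Halpern-type sublinear rate, so many iterations are needed.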

     

4 Numerical examples

In order to demonstrate the effectiveness, realization, and convergence of the algorithm of Theorem 3.2, we consider the following example.

Example 4.1 Let $H = \mathbb{R}$ and $C = [0, 1]$. Let $T$ be a self-mapping on $C$ defined by $Tx = 1 - x$ for all $x \in C$. Define $\{\alpha_n\}$ in $(0, 1)$ by $\alpha_n = \frac{1}{n+1}$ and $\{\beta_n\}$ by $\beta_n = \frac{1}{2}$ for all $n \in \mathbb{N}$. For each $n \in \mathbb{N}$, define $f_n : C \to H$ by (2.1). It is shown in Example 2.1 that $\{f_n\}$ is a sequence of nearly contraction mappings from $C$ into $H$ with sequence $\{(k_n, a_n)\}$, where $k_n = \frac{1}{n+1}$ and $a_n = \frac{5}{2(n+1)}$. It is easy to see that $\{f_n\}$ converges pointwise to $f$, where $f(x) = 0$ for all $x \in C$. Note that $k_n \to \kappa = 0$, $F(T) = \{x^*\} = \{1/2\}$, and $\lim_{n \to \infty} f_n x^* = f x^*$. It can be observed that all the assumptions of Theorem 3.2 are satisfied, so the sequence $\{x_n\}$ generated by (3.1) with $T_n = T$ converges to $\frac{1}{2}$. In fact, under the above assumptions, the algorithm (3.1) simplifies to
$$\begin{cases} x_1 \in C, \\ y_n = \alpha_n f_n x_n + (1 - \alpha_n)(1 - x_n), \\ x_{n+1} = \dfrac{x_n + P_C[y_n]}{2} \end{cases} \quad \text{for all } n \in \mathbb{N}. \tag{4.1}$$
The projection of $y_n$ onto $C$ can be expressed as
$$P_C[y_n] = \begin{cases} 0, & \text{if } y_n < 0; \\ y_n, & \text{if } y_n \in C; \\ 1, & \text{if } y_n > 1. \end{cases}$$
The iterates of algorithm (4.1) for the initial guesses $x_1 = 0, 0.2, 0.8, 1$ are shown in Table 1. From Table 1, we see that the iterates converge to $1/2$, the unique fixed point of $T$. The convergence from each initial guess is also shown in Figure 1 for comparison.
Figure 1 The convergence comparison of different initial values $x_1 = 0, 0.2, 0.8, 1$.

Table 1 The numerical results for initial guesses $x_1 = 0, 0.2, 0.8, 1$

n  | x_1 = 0           | x_1 = 0.2         | x_1 = 0.8         | x_1 = 1
---|-------------------|-------------------|-------------------|------------------
5  | 0.452291666666667 | 0.452604166666667 | 0.514843750000000 | 0.514947916666667
10 | 0.476041066961784 | 0.476041067553836 | 0.476047944746260 | 0.476047944894273
15 | 0.483829591828978 | 0.483829591828978 | 0.483829591829845 | 0.483829591829845
20 | 0.487787940564059 | 0.487787940564059 | 0.487787940564059 | 0.487787940564059
25 | 0.490187554131032 | 0.490187554131032 | 0.490187554131032 | 0.490187554131032
30 | 0.491798400218960 | 0.491798400218960 | 0.491798400218960 | 0.491798400218960
35 | 0.492954698188619 | 0.492954698188619 | 0.492954698188619 | 0.492954698188619
40 | 0.493825130132048 | 0.493825130132048 | 0.493825130132048 | 0.493825130132048
45 | 0.494504074844059 | 0.494504074844059 | 0.494504074844059 | 0.494504074844059
50 | 0.495048473664881 | 0.495048473664881 | 0.495048473664881 | 0.495048473664881
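The iteration (4.1) can be run in a few lines of Python. One caveat: the definition (2.1) of $f_n$ from Example 2.1 is not reproduced in this section, so the sketch below substitutes the stand-in $f_n(x) = x/(n+1)$, which is a contraction with constant $k_n = \frac{1}{n+1}$ (hence a nearly contraction for any $a_n \ge 0$) and converges pointwise to $f \equiv 0$; with it the iterates exhibit the same slow drift toward the fixed point $1/2$ seen in Table 1.

```python
def proj_C(y):
    """Projection onto C = [0, 1]: clamp y to the interval."""
    return min(max(y, 0.0), 1.0)

def T(x):
    """The nonexpansive mapping T x = 1 - x on C = [0, 1]."""
    return 1.0 - x

def f(n, x):
    # Stand-in for (2.1), which is not reproduced here: a k_n-contraction
    # with k_n = 1/(n+1) converging pointwise to f = 0.
    return x / (n + 1)

def km_like(x1, num_iters):
    """Algorithm (4.1): x_{n+1} = (x_n + P_C[y_n]) / 2 with beta_n = 1/2."""
    x = x1
    for n in range(1, num_iters + 1):
        alpha = 1.0 / (n + 1)
        y = alpha * f(n, x) + (1 - alpha) * T(x)
        x = (x + proj_C(y)) / 2.0
    return x

for x1 in (0.0, 0.2, 0.8, 1.0):
    print(x1, km_like(x1, 100))
```

As in Table 1, the dependence on the initial guess washes out quickly, while the residual error decays only like the step sizes $\alpha_n$, so the iterates creep toward $1/2$ rather than jumping to it.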

Declarations

Acknowledgements

This article was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah. The authors gratefully acknowledge the DSR for its technical and financial support.

Authors’ Affiliations

(1)
Department of Mathematics, King Abdulaziz University, Jeddah, Saudi Arabia
(2)
Department of Mathematics, Banaras Hindu University, Varanasi, India
(3)
Department of Mathematics, Aligarh Muslim University, Aligarh, India
(4)
Department of Mathematics & Statistics, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia

References

1. Xu HK: A variable Krasnosel’skiĭ-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22: 2021–2034. 10.1088/0266-5611/22/6/007
2. Byrne CL: Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 2002, 18: 441–453. 10.1088/0266-5611/18/2/310
3. Byrne CL: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006
4. Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353–2365. 10.1088/0031-9155/51/10/001
5. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. 10.1007/BF02142692
6. Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21: 2071–2084. 10.1088/0266-5611/21/6/017
7. Censor Y, Motova A, Segal A: Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327: 1244–1256. 10.1016/j.jmaa.2006.05.010
8. Ceng LC, Ansari QH, Yao JC: Relaxed extragradient methods for finding minimum-norm solutions of the split feasibility problem. Nonlinear Anal. 2012, 75: 2116–2125. 10.1016/j.na.2011.10.012
9. Ceng LC, Ansari QH, Yao JC: An extragradient method for split feasibility and fixed point problems. Comput. Math. Appl. 2012, 64: 633–642. 10.1016/j.camwa.2011.12.074
10. Ceng LC, Ansari QH, Yao JC: Mann type iterative methods for finding a common solution of split feasibility and fixed point problems. Positivity 2012, 16: 471–495. 10.1007/s11117-012-0174-8
11. Dang Y, Gao Y: The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 2011, 27: Article ID 015007
12. Masad E, Reich S: A note on the multiple-set split convex feasibility problem in Hilbert space. J. Nonlinear Convex Anal. 2007, 8: 367–371.
13. Sahu DR: Applications of the S-iteration process to constrained minimization problems and split feasibility problems. Fixed Point Theory 2011, 12: 187–204.
14. Yang Q: The relaxed CQ algorithm for solving the split feasibility problem. Inverse Probl. 2004, 20: 1261–1266. 10.1088/0266-5611/20/4/014
15. Moudafi A: Viscosity approximation methods for fixed-point problems. J. Math. Anal. Appl. 2000, 241: 46–55. 10.1006/jmaa.1999.6615
16. Suzuki T: Sufficient and necessary condition for Halpern’s strong convergence to fixed points of nonexpansive mappings. Proc. Am. Math. Soc. 2007, 135: 99–106.
17. Takahashi W: Viscosity approximation methods for countable families of nonexpansive mappings in Banach spaces. Nonlinear Anal. 2009, 70: 719–734. 10.1016/j.na.2008.01.005
18. Yao Y, Xu HK: Iterative methods for finding minimum-norm fixed points of nonexpansive mappings with applications. Optimization 2011, 60(6): 645–658. 10.1080/02331930903582140
19. Xu HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298: 279–291. 10.1016/j.jmaa.2004.04.059
20. Sahu DR, Wong NC, Yao JC: A unified hybrid iterative method for solving variational inequalities involving generalized pseudo-contractive mappings. SIAM J. Control Optim. 2012, 50: 2335–2354. 10.1137/100798648
21. Sahu DR, Wong NC, Yao JC: A generalized hybrid steepest-descent method for variational inequalities in Banach spaces. Fixed Point Theory Appl. 2011, 2011: Article ID 754702
22. Sahu DR: Fixed points of demicontinuous nearly Lipschitzian mappings in Banach spaces. Comment. Math. Univ. Carol. 2005, 46: 653–666.
23. Agarwal RP, O’Regan D, Sahu DR: Fixed Point Theory for Lipschitzian-Type Mappings with Applications. Topological Fixed Point Theory and Its Applications 6. Springer, New York; 2009.
24. Maingé PE: Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2007, 325: 469–479. 10.1016/j.jmaa.2005.12.066
25. Suzuki T: Strong convergence theorems for infinite families of nonexpansive mappings in general Banach spaces. Fixed Point Theory Appl. 2005, 2005: 103–123.
26. Lopez G, Martin V, Xu HK: Perturbation techniques for nonexpansive mappings with applications. Nonlinear Anal., Real World Appl. 2009, 10(4): 2369–2383. 10.1016/j.nonrwa.2008.04.020
27. Yao Y, Shahzad N: New methods with perturbations for nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2011, 2011: Article ID 79
28. Bruck RE, Reich S: A general convergence principle of nonlinear functional analysis. Nonlinear Anal. 1980, 4: 939–950. 10.1016/0362-546X(80)90006-1

Copyright

© Latif et al.; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
