  • Research
  • Open Access

An improved method for solving multiple-sets split feasibility problem

Fixed Point Theory and Applications 2012, 2012:168

https://doi.org/10.1186/1687-1812-2012-168

  • Received: 23 March 2012
  • Accepted: 11 September 2012
  • Published:

Abstract

The multiple-sets split feasibility problem (MSSFP) has a variety of real-world applications such as medical care, image reconstruction and signal processing. Censor et al. proposed solving the MSSFP via a proximity function and then developed a class of simultaneous methods for the split feasibility problem. In this paper, we improve a simultaneous method for solving the MSSFP and prove its convergence.

Keywords

  • multiple-sets split feasibility problem
  • convergence

1 Introduction

Throughout this paper, let H be a Hilbert space, let $\langle \cdot , \cdot \rangle$ denote the inner product and $\| \cdot \|$ the corresponding norm. The multiple-sets split feasibility problem (MSSFP) is a generalization of the split feasibility problem (SFP) and the convex feasibility problem (CFP); see [1]. Let $C_i$ and $Q_j$ be closed convex sets in the N-dimensional and M-dimensional Euclidean spaces, respectively. The MSSFP is to find a vector x satisfying
\[ x \in C := \bigcap_{i=1}^{t} C_i \quad \text{such that} \quad Ax \in Q := \bigcap_{j=1}^{r} Q_j, \]
(1)
where A is an $M \times N$ matrix and $t, r \geq 1$ are integers. When $t = r = 1$, the problem reduces to finding a vector $x \in C$ such that $Ax \in Q$, which is just the two-sets split feasibility problem (SFP) introduced in [2]. The MSSFP has many real-life applications such as image restoration, signal processing and medical care (e.g., [3–5]). In order to solve the MSSFP, Censor et al. considered it in the following form:
\[ x \in C := X \cap \Bigl( \bigcap_{i=1}^{t} C_i \Bigr) \quad \text{and} \quad Ax \in Q := \bigcap_{j=1}^{r} Q_j, \]
(2)
where $X \subseteq R^n$ is a nonempty closed convex set. In fact, (2) is equivalent to (1). Many methods have been developed to solve the SFP or the MSSFP. The basic CQ algorithm was proposed by Byrne [6] and was then generalized to the MSSFP by Censor [4]. The relaxed CQ algorithm was proposed by Yang [7], the half-space relaxation projection method by Qu and Xiu [8], and the variable Krasnosel’skii-Mann algorithm by Xu [9]. These algorithms first convert the problem to an equivalent optimization problem and then solve it via some technique from numerical optimization. It is easy to see that the MSSFP (1) is equivalent to the minimization problem
\[ \min \Bigl\{ \frac{1}{2} \| x - P_C x \|^2 + \frac{1}{2} \| Ax - P_Q ( Ax ) \|^2 \Bigr\}, \]
where $P_C$ and $P_Q$ denote the orthogonal projections onto C and Q, respectively. The projections onto C and Q are in general difficult to compute. In practical applications, however, the projections onto the individual sets $C_i$ are more easily calculated than the projection onto the intersection C. For this reason, Censor et al. [4] defined a proximity function $p(x)$ to measure the distance of a point to all the sets:
\[ p(x) := \frac{1}{2} \sum_{i=1}^{t} a_i \| P_{C_i} x - x \|^2 + \frac{1}{2} \sum_{j=1}^{r} b_j \| P_{Q_j} ( Ax ) - Ax \|^2, \]
(3)
where $a_i > 0$, $b_j > 0$ for all i and j, respectively, and
\[ \sum_{i=1}^{t} a_i + \sum_{j=1}^{r} b_j = 1. \]
With the proximity function (3), they proposed an optimization model
min { p ( x ) | x X }
(4)
to approximate (2), and applied the projected gradient method
\[ x^{k+1} = P_X \bigl\{ x^k - s \nabla p ( x^k ) \bigr\}, \]
where $\nabla p$ denotes the gradient of $p(x)$ and is given by (see [4])
\[ \nabla p(x) = \sum_{i=1}^{t} a_i \bigl( x - P_{C_i}(x) \bigr) + \sum_{j=1}^{r} b_j A^T \bigl( Ax - P_{Q_j}(Ax) \bigr). \]
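Since every set enters the model only through its projection, the proximity function and its gradient are straightforward to evaluate whenever the projections have closed forms. The following sketch (using NumPy; the sets $C_i$, $Q_j$, the matrix A, the weights and the step size are invented purely for illustration, not data from the paper) implements p, its gradient and the projected gradient iteration above, with $X = R^3$ so that the outer projection is the identity.

```python
import numpy as np

# Illustrative sketch of the proximity-function model (3)-(4). All problem
# data below (sets, weights, matrix A, step size) are made-up assumptions;
# each set enters only through its easily computed projection.

def proj_ball(c, r):
    """Projection onto the closed ball {u : ||u - c|| <= r}."""
    def P(u):
        d = u - c
        n = np.linalg.norm(d)
        return u if n <= r else c + r * d / n
    return P

def proj_box(lo, hi):
    """Projection onto the box {u : lo <= u <= hi} (componentwise clipping)."""
    return lambda u: np.clip(u, lo, hi)

# Two sets in each space; the weights a_i, b_j are positive and sum to one.
PCs = [proj_ball(np.zeros(3), 2.0), proj_box(-1.0, 1.0)]
PQs = [proj_box(0.0, 1.5), proj_ball(np.ones(2), 1.0)]
a, b = [0.3, 0.3], [0.2, 0.2]
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 1.0]])

def p(x):
    """Proximity function (3)."""
    Ax = A @ x
    return (0.5 * sum(ai * np.linalg.norm(P(x) - x)**2 for ai, P in zip(a, PCs))
            + 0.5 * sum(bj * np.linalg.norm(P(Ax) - Ax)**2 for bj, P in zip(b, PQs)))

def grad_p(x):
    """Gradient of (3): sum_i a_i (x - P_Ci x) + A^T sum_j b_j (Ax - P_Qj(Ax))."""
    Ax = A @ x
    return (sum(ai * (x - P(x)) for ai, P in zip(a, PCs))
            + A.T @ sum(bj * (Ax - P(Ax)) for bj, P in zip(b, PQs)))

# Projected gradient x^{k+1} = P_X(x^k - s grad p(x^k)); here X = R^3, so the
# outer projection is the identity. The step obeys s < 2/L with
# L <= sum(a) + ||A^T A|| sum(b).
x = np.array([3.0, -2.0, 2.0])
s = 0.5
for _ in range(2000):
    x = x - s * grad_p(x)
```

With these example sets the feasible region is nonempty (for instance $x = (0.5, 0.5, 0.5)$ lies in it), so the iterates drive $p(x^k)$ toward its minimal value zero.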
In this paper, we continue the algorithmic improvement on the constrained MSSFP. More specifically, the constrained MSSFP [10] is to find x such that
\[ x \in X \cap \Bigl( \bigcap_{i=1}^{t} C_i \Bigr) \quad \text{and} \quad Ax \in Y \cap \Bigl( \bigcap_{j=1}^{r} Q_j \Bigr). \]
(5)
By the same idea of approximating (2) via the model (4), we define $p_1 : R^n \to R$ and $p_2 : R^m \to R$ as follows:
\[ p_1(x) := \frac{1}{2} \sum_{i=1}^{t} a_i \| P_{C_i} x - x \|^2, \]
(6)
\[ p_2(y) := \frac{1}{2} \sum_{j=1}^{r} b_j \| P_{Q_j} y - y \|^2. \]
(7)
Then we get the following optimization model which can solve (5):
\[ \min \bigl\{ p_1(x) + p_2(Ax) \mid x \in X, Ax \in Y \bigr\}. \]
(8)
It is easy to see that the objective of (8) is nonnegative, with minimal value zero whenever (5) is solvable. So, we can further reformulate (8) into the following separable form:
\[ \min \bigl\{ p_1(x) + p_2(y) \mid Ax - y = 0, x \in X, y \in Y \bigr\}. \]
(9)

2 Preliminaries

In this section, we present some concepts and properties of the MSSFP.

Let M be a positive definite matrix. We denote the M-norm by $\| v \|_M = \sqrt{\langle v, M v \rangle}$. In particular, $\| v \| = \sqrt{\langle v, v \rangle}$ is the Euclidean norm of the vector $v \in R^n$.

Lemma 1 Let S be a nonempty closed convex subset of $R^n$. We denote by $P_S(\cdot)$ the projection onto S, i.e.,
\[ P_S(z) = \arg \min \bigl\{ \| z - x \| \mid x \in S \bigr\}. \]
Then the following properties hold:
  (1) $x \in S \Leftrightarrow P_S(x) = x$;
  (2) $\langle x - P_S(x), z - P_S(x) \rangle \leq 0$, $\forall x \in R^n$ and $z \in S$;
  (3) $\langle x - y, P_S(x) - P_S(y) \rangle \geq \| P_S(x) - P_S(y) \|^2$, $\forall x, y \in R^n$;
  (4) $\| P_S(x) - z \|^2 \leq \| x - z \|^2 - \| P_S(x) - x \|^2$, $\forall x \in R^n$ and $z \in S$;
  (5) $\| P_S(x) - P_S(y) \| \leq \| x - y \|$, $\forall x, y \in R^n$.

Proof See Facchinei and Pang [11, 12]. □
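For a concrete set with a closed-form projection, the properties of Lemma 1 can also be checked numerically. The sketch below (NumPy; the box $S = [0,1]^5$ is an assumed example) measures the worst violation of properties (2)-(5) over random point pairs; in exact arithmetic each quantity is at most zero.

```python
import numpy as np

# Numerical check of Lemma 1 on an assumed concrete set: S = [0,1]^5,
# whose projection is componentwise clipping.

rng = np.random.default_rng(0)
P = lambda v: np.clip(v, 0.0, 1.0)  # projection onto S = [0,1]^5

def worst_violation(trials=200):
    """Largest violation of properties (2)-(5) of Lemma 1 at random points."""
    worst = 0.0
    for _ in range(trials):
        x, y = 3 * rng.normal(size=5), 3 * rng.normal(size=5)
        z = rng.random(5)  # an arbitrary point of S
        worst = max(
            worst,
            np.dot(x - P(x), z - P(x)),                                    # (2)
            np.linalg.norm(P(x) - P(y))**2 - np.dot(x - y, P(x) - P(y)),   # (3)
            np.linalg.norm(P(x) - z)**2 + np.linalg.norm(P(x) - x)**2
                - np.linalg.norm(x - z)**2,                                # (4)
            np.linalg.norm(P(x) - P(y)) - np.linalg.norm(x - y),           # (5)
        )
    return worst
```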

Definition 1 Let F be a mapping from $S \subseteq R^n$ into $R^n$. Then
  (a) F is called monotone on S if
      $\langle F(x) - F(y), x - y \rangle \geq 0$, $\forall x, y \in S$;
  (b) F is called strongly monotone on S if there is a $\mu > 0$ such that
      $\langle F(x) - F(y), x - y \rangle \geq \mu \| x - y \|^2$, $\forall x, y \in S$;
  (c) F is called co-coercive (or ν-inverse strongly monotone, ν-ism) on S if there is a $\nu > 0$ such that
      $\langle F(x) - F(y), x - y \rangle \geq \nu \| F(x) - F(y) \|^2$, $\forall x, y \in S$;
  (d) F is called pseudo-monotone on S if
      $\langle F(y), x - y \rangle \geq 0 \Rightarrow \langle F(x), x - y \rangle \geq 0$, $\forall x, y \in S$;
  (e) F is called Lipschitz continuous on S if there exists a constant $L > 0$ such that
      $\| F(x) - F(y) \| \leq L \| x - y \|$, $\forall x, y \in S$,
and F is called nonexpansive if $L = 1$.

Remark 1 From Lemma 1 and the above definition, we can infer that a monotone mapping is pseudo-monotone, that a ν-inverse strongly monotone mapping is monotone and Lipschitz continuous with constant $1/\nu$, and that a Lipschitz continuous, strongly monotone mapping is inverse strongly monotone. The projection operator is 1-ism and hence nonexpansive.

Lemma 2 A mapping F is 1-ism if and only if the mapping $I - F$ is 1-ism, where I is the identity operator.

Proof See [[3], Lemma 2.3]. □

Remark 2 If F is a 1-ism mapping, then F is nonexpansive.

Definition 2 Let S be a nonempty closed convex subset of H and $\{x_n\}$ a sequence in H. The sequence $\{x_n\}$ is called Fejér monotone with respect to S if
\[ \| x_{n+1} - z \| \leq \| x_n - z \|, \quad \forall n \geq 0, \forall z \in S. \]
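As a concrete illustration, the averaged-projection iterate $x_{n+1} = \frac{1}{2}(P_{C_1} x_n + P_{C_2} x_n)$ is Fejér monotone with respect to $S = C_1 \cap C_2$, since for $z \in S$ each projection is nonexpansive and fixes z. The two sets below (a ball and a half-space) are assumed examples.

```python
import numpy as np

# Fejer monotonicity of averaged projections w.r.t. S = C1 ∩ C2, checked on
# two assumed sets: C1 a ball of radius 2, C2 the half-space {u : u[0] >= 1}.

def P_ball(u):
    n = np.linalg.norm(u)
    return u if n <= 2.0 else 2.0 * u / n

def P_half(u):
    v = u.copy()
    v[0] = max(v[0], 1.0)
    return v

z = np.array([1.5, 0.0])     # a point of S = C1 ∩ C2
x = np.array([-3.0, 4.0])
dists = []                   # distances ||x_n - z||, which must not increase
for _ in range(50):
    dists.append(np.linalg.norm(x - z))
    x = 0.5 * (P_ball(x) + P_half(x))
dists.append(np.linalg.norm(x - z))
```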

Lemma 3 Let $p_1(x)$ and $p_2(y)$ be defined in (6)-(7). Then $\nabla p_1(x)$ and $\nabla p_2(y)$ are both Lipschitz continuous and inverse strongly monotone on X and Y, respectively.

Proof From the definition (6), $p_1(x)$ is differentiable on X and
\[ \nabla p_1(x) = \sum_{i=1}^{t} a_i \bigl( x - P_{C_i}(x) \bigr). \]
Since the projection operator $P_{C_i}$ is 1-ism (from Remark 1), it follows from Lemma 2 that the operator $I - P_{C_i}$ is 1-ism and hence also nonexpansive. So, we have
\[ \| \nabla p_1(x_1) - \nabla p_1(x_2) \| = \Bigl\| \sum_{i=1}^{t} a_i \bigl[ ( I - P_{C_i} ) x_1 - ( I - P_{C_i} ) x_2 \bigr] \Bigr\| \leq \sum_{i=1}^{t} a_i \| ( I - P_{C_i} ) x_1 - ( I - P_{C_i} ) x_2 \| \leq \Bigl( \sum_{i=1}^{t} a_i \Bigr) \| x_1 - x_2 \|; \]

therefore, $\nabla p_1(x)$ is Lipschitz continuous on X with Lipschitz constant $L_1 = \sum_{i=1}^{t} a_i$. It also follows from [[13], Corollary 10] that $\nabla p_1(x)$ is $\frac{1}{L_1}$-ism. Similarly, we can prove that $\nabla p_2(y)$ is Lipschitz continuous on Y with Lipschitz constant $L_2 = \sum_{j=1}^{r} b_j$; furthermore, it is $\frac{1}{L_2}$-ism. □
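Both estimates of Lemma 3 can be checked numerically for simple sets. The sketch below assumes two illustrative sets (a unit ball and a box) with weights $a = (0.4, 0.6)$, so that $L_1 = a_1 + a_2 = 1$, and verifies the Lipschitz bound and the $\frac{1}{L_1}$-ism inequality at random point pairs.

```python
import numpy as np

# Numerical sanity check of Lemma 3 with two assumed sets: C1 the unit ball,
# C2 the box [-0.5, 0.5]^4, and weights a = (0.4, 0.6), hence L1 = 1.

rng = np.random.default_rng(1)

def P1(u):  # projection onto the unit ball
    n = np.linalg.norm(u)
    return u if n <= 1.0 else u / n

P2 = lambda u: np.clip(u, -0.5, 0.5)  # projection onto the box
a = (0.4, 0.6)
L1 = sum(a)

def grad_p1(x):
    """grad p_1(x) = sum_i a_i (x - P_Ci(x))."""
    return a[0] * (x - P1(x)) + a[1] * (x - P2(x))

def check(trials=200):
    """Verify the Lipschitz and 1/L1-ism estimates at random point pairs."""
    for _ in range(trials):
        x, y = 2 * rng.normal(size=4), 2 * rng.normal(size=4)
        g = grad_p1(x) - grad_p1(y)
        if np.linalg.norm(g) > L1 * np.linalg.norm(x - y) + 1e-10:
            return False
        if np.dot(g, x - y) < np.linalg.norm(g)**2 / L1 - 1e-10:
            return False
    return True
```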

For notational convenience, let
\[ M_k = \operatorname{diag} \Bigl( \tau_k I_n, \sigma_k I_m, \frac{1}{\beta} I_m \Bigr), \]
where $\tau_k$, $\sigma_k$ and β are given positive scalars, and define, for $\omega^k = (x^k, y^k, z^k)$ and the predictor $\hat{\omega}^k = (\hat{x}^k, \hat{y}^k, \hat{z}^k)$ produced in Algorithm 3.1 below,
\[ \xi ( \omega^k, \hat{\omega}^k ) := ( \xi_x, \xi_y, 0 ) = \bigl( \nabla p_1(x^k) - \nabla p_1(\hat{x}^k), \nabla p_2(y^k) - \nabla p_2(\hat{y}^k), 0 \bigr), \]
(10)
\[ \eta ( \omega^k ) := ( \eta_x, \eta_y, 0 ) = \bigl( \beta A^T ( A x^k - y^k ), -\beta ( A x^k - y^k ), 0 \bigr). \]
(11)
We also write $F(\omega) := ( \nabla p_1(x), \nabla p_2(y), Ax - y )$, $q(\omega^k, \hat{\omega}^k) := F(\hat{\omega}^k) + \eta(\omega^k)$, $d(\omega^k, \hat{\omega}^k) := M_k ( \omega^k - \hat{\omega}^k ) - \xi(\omega^k, \hat{\omega}^k)$ and $\varphi(\omega^k, \hat{\omega}^k) := \langle \omega^k - \hat{\omega}^k, d(\omega^k, \hat{\omega}^k) \rangle + \langle z^k - \hat{z}^k, A x^k - y^k \rangle$.
Furthermore, we let $\omega = (x, y, z)$. Suppose that $(x^*, y^*)$ is an optimal solution of the problem (9). Then the constrained MSSFP (5) is equivalent to finding $\omega^* = (x^*, y^*, z^*) \in W = X \times Y \times R^m$ such that for any $\omega' = (x', y', z') \in W$, we have
\[ \begin{cases} \langle x' - x^*, \nabla p_1(x^*) \rangle \geq 0; \\ \langle y' - y^*, \nabla p_2(y^*) \rangle \geq 0; \\ \langle z' - z^*, A x^* - y^* \rangle \geq 0. \end{cases} \]
(12)

3 Main results

In this section, we will present our method for solving the MSSFP and prove its convergence. Our algorithm is defined as follows:

Algorithm 3.1 Step 1. Choose arbitrary $\nu \in (0,1)$, $\beta > 0$, $\gamma \in (0,2)$, $\mu > 1$, $\tau_0 > 0$, $\sigma_0 > 0$ and $x^0, y^0, z^0$. Let $\varepsilon > 0$ be the error tolerance for an approximate solution and set $k = 1$.

Step 2.
  (1) Find the smallest nonnegative integer $l_k$ such that $\tau_k = \mu^{l_k} \tau_{k-1}$ and
\[ \hat{x}^k = P_X \Bigl\{ x^k - \frac{1}{\tau_k} \bigl[ \nabla p_1(x^k) + \beta A^T ( A x^k - y^k ) \bigr] \Bigr\} \]
satisfies
\[ \langle x^k - \hat{x}^k, \nabla p_1(x^k) - \nabla p_1(\hat{x}^k) \rangle + \beta \| A x^k - A \hat{x}^k \|^2 \leq \nu \tau_k \| x^k - \hat{x}^k \|^2. \]
(13)
  (2) Find the smallest nonnegative integer $m_k$ such that $\sigma_k = \mu^{m_k} \sigma_{k-1}$ and
\[ \hat{y}^k = P_Y \Bigl\{ y^k - \frac{1}{\sigma_k} \bigl[ \nabla p_2(y^k) - \beta ( A x^k - y^k ) \bigr] \Bigr\} \]
satisfies
\[ \langle y^k - \hat{y}^k, \nabla p_2(y^k) - \nabla p_2(\hat{y}^k) \rangle + \beta \| y^k - \hat{y}^k \|^2 \leq \nu \sigma_k \| y^k - \hat{y}^k \|^2. \]
(14)
  (3) Then define $\hat{z}^k$ by
\[ \hat{z}^k = z^k - \beta ( A \hat{x}^k - \hat{y}^k ). \]
(15)

Step 3.

Let $\omega^k = (x^k, y^k, z^k)$; then we get the new iterate $\omega^{k+1}$ via
\[ \omega^{k+1} = \omega^k - \gamma \alpha_k d ( \omega^k, \hat{\omega}^k ), \]
(16)
where
\[ \alpha_k = \frac{\varphi ( \omega^k, \hat{\omega}^k )}{\| d ( \omega^k, \hat{\omega}^k ) \|^2}. \]
(17)

Step 4.

If $p(x^{k+1}) / p(x^0) \leq \varepsilon$, stop. Otherwise, set $k = k + 1$ and go to Step 2.
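The steps above can be sketched in code on a toy instance. Everything concrete here is an assumption for illustration: the instance (one set C, one set Q, box constraints X and Y, the matrix A), all parameter values, and the explicit forms of d and φ, which are assembled from $\xi$, $\eta$ and $M_k$ following the notation of Section 2.

```python
import numpy as np

# Sketch of Algorithm 3.1 on a small made-up instance: C a ball of radius 2,
# Q the box [0,1]^2, X and Y large boxes, and an arbitrary 2x3 matrix A.
# Parameters and the d/phi formulas follow the paper's notation as an
# illustrative reconstruction, not a definitive implementation.

A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 1.0]])

def P_C(u):
    n = np.linalg.norm(u)
    return u if n <= 2.0 else 2.0 * u / n

P_Q = lambda u: np.clip(u, 0.0, 1.0)
P_X = lambda u: np.clip(u, -5.0, 5.0)
P_Y = lambda u: np.clip(u, -5.0, 5.0)
gp1 = lambda u: u - P_C(u)   # grad p_1 (single set, weight 1)
gp2 = lambda u: u - P_Q(u)   # grad p_2 (single set, weight 1)

nu, beta, gamma, mu = 0.9, 1.0, 1.5, 2.0
tau, sigma = 0.1, 0.1
x, y, z = np.array([4.0, -3.0, 3.0]), np.array([2.0, -2.0]), np.zeros(2)

def infeas(x, y):
    """Distances to C and Q plus the coupling residual ||Ax - y||."""
    return (np.linalg.norm(x - P_C(x)) + np.linalg.norm(y - P_Q(y))
            + np.linalg.norm(A @ x - y))

r0 = infeas(x, y)
for _ in range(300):
    r = A @ x - y
    # Step 2(1): increase tau by factors of mu until condition (13) holds.
    while True:
        xh = P_X(x - (gp1(x) + beta * (A.T @ r)) / tau)
        lhs = np.dot(x - xh, gp1(x) - gp1(xh)) + beta * np.linalg.norm(A @ (x - xh))**2
        if lhs <= nu * tau * np.linalg.norm(x - xh)**2 + 1e-12:
            break
        tau *= mu
    # Step 2(2): the analogous line search for sigma and condition (14).
    while True:
        yh = P_Y(y - (gp2(y) - beta * r) / sigma)
        lhs = np.dot(y - yh, gp2(y) - gp2(yh)) + beta * np.linalg.norm(y - yh)**2
        if lhs <= nu * sigma * np.linalg.norm(y - yh)**2 + 1e-12:
            break
        sigma *= mu
    zh = z - beta * (A @ xh - yh)                    # (15)
    # Step 3: d = M_k (w - wh) - xi, alpha_k = phi / ||d||^2, then (16).
    dx = tau * (x - xh) - (gp1(x) - gp1(xh))
    dy = sigma * (y - yh) - (gp2(y) - gp2(yh))
    dz = (z - zh) / beta
    phi = (np.dot(x - xh, dx) + np.dot(y - yh, dy) + np.dot(z - zh, dz)
           + np.dot(z - zh, r))
    nd2 = np.dot(dx, dx) + np.dot(dy, dy) + np.dot(dz, dz)
    if nd2 < 1e-20:
        break                                        # predictor equals iterate
    alpha = phi / nd2
    x, y, z = x - gamma * alpha * dx, y - gamma * alpha * dy, z - gamma * alpha * dz
```

On this toy instance the iterates drive both distance terms and the residual $\|Ax - y\|$ toward zero.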

Remark 3 In fact, from Lemma 3 we know that $\nabla p_1(x)$ is Lipschitz continuous with constant $L_1$; then the left-hand side of (13) satisfies
\[ \langle x^k - \hat{x}^k, \nabla p_1(x^k) - \nabla p_1(\hat{x}^k) \rangle + \beta \| A x^k - A \hat{x}^k \|^2 \leq \bigl( L_1 + \beta \| A^T A \| \bigr) \| x^k - \hat{x}^k \|^2. \]
So, (13) holds as long as $\tau_k \geq ( L_1 + \beta \| A^T A \| ) / \nu$. Moreover, since $\tau_k \geq \tau_{k-1}$, we have $\inf \{ \tau_k \} \geq \tau_0 > 0$. On the other hand, by a similar analysis as in [[14], Lemma 3.3], $\tau_k \leq \mu ( L_1 + \beta \| A^T A \| ) / \nu$, so we have
\[ \tau_0 \leq \tau_k \leq \tau_{\max} := \frac{\mu ( L_1 + \beta \| A^T A \| )}{\nu}, \quad \forall k \geq 1. \]
(18)
Similarly, we can also have
\[ \sigma_0 \leq \sigma_k \leq \sigma_{\max} := \frac{\mu ( L_2 + \beta )}{\nu}, \quad \forall k \geq 1. \]
(19)

Next, we analyze the convergence of Algorithm 3.1:

Lemma 4 Suppose $\{\omega^k\}$ and $\{\hat{\omega}^k\}$ are generated by Algorithm 3.1, and $\omega^* = (x^*, y^*, z^*)$ is a solution of (12). Then there exists $m > 0$ such that for any $k \geq 1$,
\[ \langle \omega^k - \omega^*, d ( \omega^k, \hat{\omega}^k ) \rangle \geq \varphi ( \omega^k, \hat{\omega}^k ) \geq m \| \omega^k - \hat{\omega}^k \|^2. \]

Proof First, we prove $\langle \omega^k - \omega^*, d ( \omega^k, \hat{\omega}^k ) \rangle \geq \varphi ( \omega^k, \hat{\omega}^k )$.

From the property of the projection operator in Lemma 1,
\[ \langle x - P_S(x), z - P_S(x) \rangle \leq 0, \quad \forall x \in R^n \text{ and } z \in S. \]
Combining it with the definition of $\hat{x}^k$ in Step 2(1), we have
\[ \Bigl\langle x - \hat{x}^k, \hat{x}^k - x^k + \frac{1}{\tau_k} \bigl[ \nabla p_1(x^k) + \beta A^T ( A x^k - y^k ) \bigr] \Bigr\rangle \geq 0, \quad \forall x \in X. \]
Multiplying by $\tau_k$, we get
\[ \bigl\langle x - \hat{x}^k, \tau_k ( \hat{x}^k - x^k ) + \bigl( \nabla p_1(x^k) - \nabla p_1(\hat{x}^k) \bigr) + \nabla p_1(\hat{x}^k) + \beta A^T ( A x^k - y^k ) \bigr\rangle \geq 0, \quad \forall x \in X. \]
And from the definitions of $\xi_x(\omega^k, \hat{\omega}^k)$, $\eta_x(\omega^k)$ in (10) and (11), this is equivalent to
\[ \bigl\langle x - \hat{x}^k, \tau_k ( \hat{x}^k - x^k ) + \xi_x ( \omega^k, \hat{\omega}^k ) + \nabla p_1(\hat{x}^k) + \eta_x ( \omega^k ) \bigr\rangle \geq 0, \quad \forall x \in X. \]
(20)
Similarly, from Step 2(2) and (15), we can also get
\[ \bigl\langle y - \hat{y}^k, \sigma_k ( \hat{y}^k - y^k ) + \xi_y ( \omega^k, \hat{\omega}^k ) + \nabla p_2(\hat{y}^k) + \eta_y ( \omega^k ) \bigr\rangle \geq 0, \quad \forall y \in Y \]
(21)
and
\[ \Bigl\langle z - \hat{z}^k, A \hat{x}^k - \hat{y}^k + \frac{1}{\beta} ( \hat{z}^k - z^k ) \Bigr\rangle \geq 0, \quad \forall z \in R^m. \]
(22)
Adding (20)-(22) and using the notation defined above, we have
\[ \bigl\langle \omega - \hat{\omega}^k, F(\hat{\omega}^k) + \eta(\omega^k) + \xi(\omega^k, \hat{\omega}^k) - M_k ( \omega^k - \hat{\omega}^k ) \bigr\rangle \geq 0, \quad \forall \omega \in W, \]
namely
\[ \bigl\langle \omega - \hat{\omega}^k, q ( \omega^k, \hat{\omega}^k ) - d ( \omega^k, \hat{\omega}^k ) \bigr\rangle \geq 0, \quad \forall \omega \in W. \]
(23)
Note that $F(\omega)$ is monotone on W because of the monotonicity of $\nabla p_1(x)$ and $\nabla p_2(y)$. From (12), we have
\[ \langle \hat{\omega}^k - \omega^*, F(\hat{\omega}^k) \rangle \geq \langle \hat{\omega}^k - \omega^*, F(\omega^*) \rangle \geq 0. \]
(24)
Consequently,
\[ \begin{aligned} \langle \hat{\omega}^k - \omega^*, q ( \omega^k, \hat{\omega}^k ) \rangle &= \langle \hat{\omega}^k - \omega^*, F(\hat{\omega}^k) \rangle + \langle \hat{\omega}^k - \omega^*, \eta(\omega^k) \rangle \\ &\geq \langle \hat{\omega}^k - \omega^*, \eta(\omega^k) \rangle \\ &= \langle \hat{x}^k - x^*, \beta A^T ( A x^k - y^k ) \rangle - \langle \hat{y}^k - y^*, \beta ( A x^k - y^k ) \rangle \\ &= \langle A \hat{x}^k - A x^*, \beta ( A x^k - y^k ) \rangle - \langle \hat{y}^k - y^*, \beta ( A x^k - y^k ) \rangle \\ &= \beta \bigl\langle ( A \hat{x}^k - \hat{y}^k ) - ( A x^* - y^* ), A x^k - y^k \bigr\rangle \\ &= \beta \langle A \hat{x}^k - \hat{y}^k, A x^k - y^k \rangle \\ &= \langle z^k - \hat{z}^k, A x^k - y^k \rangle \\ &= \varphi ( \omega^k, \hat{\omega}^k ) - \langle \omega^k - \hat{\omega}^k, d ( \omega^k, \hat{\omega}^k ) \rangle, \end{aligned} \]
(25)

where the inequality follows from (24); we also used $A x^* = y^*$, the definition (15) of $\hat{z}^k$ and the definition of $\varphi$.

Setting $\omega = \omega^*$ in (23) (note that $\omega^* \in W$), we get
\[ \langle \hat{\omega}^k - \omega^*, d ( \omega^k, \hat{\omega}^k ) \rangle \geq \langle \hat{\omega}^k - \omega^*, q ( \omega^k, \hat{\omega}^k ) \rangle. \]
Then
\[ \begin{aligned} \langle \omega^k - \omega^*, d ( \omega^k, \hat{\omega}^k ) \rangle &= \langle \omega^k - \hat{\omega}^k, d ( \omega^k, \hat{\omega}^k ) \rangle + \langle \hat{\omega}^k - \omega^*, d ( \omega^k, \hat{\omega}^k ) \rangle \\ &\geq \langle \omega^k - \hat{\omega}^k, d ( \omega^k, \hat{\omega}^k ) \rangle + \langle \hat{\omega}^k - \omega^*, q ( \omega^k, \hat{\omega}^k ) \rangle \\ &\geq \varphi ( \omega^k, \hat{\omega}^k ), \end{aligned} \]

where the last inequality follows from (25).

Next, we prove
\[ \varphi ( \omega^k, \hat{\omega}^k ) \geq m \| \omega^k - \hat{\omega}^k \|^2. \]
From the definitions of $\varphi ( \omega^k, \hat{\omega}^k )$ and $d ( \omega^k, \hat{\omega}^k )$, we obtain
\[ \begin{aligned} \varphi ( \omega^k, \hat{\omega}^k ) &= \langle \omega^k - \hat{\omega}^k, d ( \omega^k, \hat{\omega}^k ) \rangle + \langle z^k - \hat{z}^k, A x^k - y^k \rangle \\ &= \langle \omega^k - \hat{\omega}^k, M_k ( \omega^k - \hat{\omega}^k ) \rangle - \langle \omega^k - \hat{\omega}^k, \xi ( \omega^k, \hat{\omega}^k ) \rangle + \langle z^k - \hat{z}^k, A x^k - y^k \rangle \\ &= \| \omega^k - \hat{\omega}^k \|_{M_k}^2 - \langle x^k - \hat{x}^k, \nabla p_1(x^k) - \nabla p_1(\hat{x}^k) \rangle - \langle y^k - \hat{y}^k, \nabla p_2(y^k) - \nabla p_2(\hat{y}^k) \rangle + \langle z^k - \hat{z}^k, A x^k - y^k \rangle \\ &\geq \| \omega^k - \hat{\omega}^k \|_{M_k}^2 - \nu \tau_k \| x^k - \hat{x}^k \|^2 + \beta \| A x^k - A \hat{x}^k \|^2 - \nu \sigma_k \| y^k - \hat{y}^k \|^2 + \beta \| y^k - \hat{y}^k \|^2 + \langle z^k - \hat{z}^k, A \hat{x}^k - \hat{y}^k \rangle + \bigl\langle z^k - \hat{z}^k, ( A x^k - A \hat{x}^k ) - ( y^k - \hat{y}^k ) \bigr\rangle \\ &= \tau_k ( 1 - \nu ) \| x^k - \hat{x}^k \|^2 + \sigma_k ( 1 - \nu ) \| y^k - \hat{y}^k \|^2 + \frac{2}{\beta} \| z^k - \hat{z}^k \|^2 + \beta \| A x^k - A \hat{x}^k \|^2 + \langle z^k - \hat{z}^k, A x^k - A \hat{x}^k \rangle + \beta \| y^k - \hat{y}^k \|^2 - \langle z^k - \hat{z}^k, y^k - \hat{y}^k \rangle, \end{aligned} \]
(26)

where the inequality follows from (13) and (14), and the last equality uses $\langle z^k - \hat{z}^k, A \hat{x}^k - \hat{y}^k \rangle = \frac{1}{\beta} \| z^k - \hat{z}^k \|^2$, which holds by (15).

Note that
\[ \beta \| A x^k - A \hat{x}^k \|^2 + \langle z^k - \hat{z}^k, A x^k - A \hat{x}^k \rangle = \beta \Bigl\| A x^k - A \hat{x}^k + \frac{1}{2\beta} ( z^k - \hat{z}^k ) \Bigr\|^2 - \frac{1}{4\beta} \| z^k - \hat{z}^k \|^2 \]
and
\[ \beta \| y^k - \hat{y}^k \|^2 - \langle z^k - \hat{z}^k, y^k - \hat{y}^k \rangle = \beta \Bigl\| y^k - \hat{y}^k - \frac{1}{2\beta} ( z^k - \hat{z}^k ) \Bigr\|^2 - \frac{1}{4\beta} \| z^k - \hat{z}^k \|^2. \]
Substituting them into (26), we have
\[ \begin{aligned} \varphi ( \omega^k, \hat{\omega}^k ) &\geq \tau_k ( 1 - \nu ) \| x^k - \hat{x}^k \|^2 + \sigma_k ( 1 - \nu ) \| y^k - \hat{y}^k \|^2 + \frac{3}{2\beta} \| z^k - \hat{z}^k \|^2 + \beta \Bigl\| A x^k - A \hat{x}^k + \frac{1}{2\beta} ( z^k - \hat{z}^k ) \Bigr\|^2 + \beta \Bigl\| y^k - \hat{y}^k - \frac{1}{2\beta} ( z^k - \hat{z}^k ) \Bigr\|^2 \\ &\geq \tau_k ( 1 - \nu ) \| x^k - \hat{x}^k \|^2 + \sigma_k ( 1 - \nu ) \| y^k - \hat{y}^k \|^2 + \frac{3}{2\beta} \| z^k - \hat{z}^k \|^2. \end{aligned} \]
Since the sequences $\{\tau_k\}$ and $\{\sigma_k\}$ are bounded by (18) and (19), letting
\[ m = \min \Bigl\{ ( 1 - \nu ) \tau_0, ( 1 - \nu ) \sigma_0, \frac{3}{2\beta} \Bigr\}, \]
we therefore obtain
\[ \varphi ( \omega^k, \hat{\omega}^k ) \geq m \| \omega^k - \hat{\omega}^k \|^2. \]

This completes the proof. □

Next, we prove the sequence ω k is Fejér monotone.

Theorem 1 Suppose $\{\omega^k\}$ and $\{\hat{\omega}^k\}$ are generated by Algorithm 3.1, and $\omega^* = (x^*, y^*, z^*)$ is a solution of (12). Then there exists $C > 0$ such that for any $k \geq 1$,
\[ \| \omega^{k+1} - \omega^* \|^2 \leq \| \omega^k - \omega^* \|^2 - \frac{m^2 \gamma ( 2 - \gamma )}{C^2} \| \omega^k - \hat{\omega}^k \|^2. \]
Proof From (16), we have
\[ \begin{aligned} \| \omega^{k+1} - \omega^* \|^2 &= \| \omega^k - \gamma \alpha_k d ( \omega^k, \hat{\omega}^k ) - \omega^* \|^2 \\ &= \| \omega^k - \omega^* \|^2 - 2 \gamma \alpha_k \langle \omega^k - \omega^*, d ( \omega^k, \hat{\omega}^k ) \rangle + ( \gamma \alpha_k )^2 \| d ( \omega^k, \hat{\omega}^k ) \|^2 \\ &\leq \| \omega^k - \omega^* \|^2 - 2 \gamma \alpha_k \varphi ( \omega^k, \hat{\omega}^k ) + \gamma^2 \alpha_k \varphi ( \omega^k, \hat{\omega}^k ) \\ &= \| \omega^k - \omega^* \|^2 - \gamma ( 2 - \gamma ) \alpha_k \varphi ( \omega^k, \hat{\omega}^k ) \\ &\leq \| \omega^k - \omega^* \|^2 - \gamma ( 2 - \gamma ) \alpha_k m \| \omega^k - \hat{\omega}^k \|^2, \end{aligned} \]

where the inequalities follow from Lemma 4 and (17).

Because $\| \xi_x ( \omega^k, \hat{\omega}^k ) \| \leq L_1 \| x^k - \hat{x}^k \|$ and $\| \xi_y ( \omega^k, \hat{\omega}^k ) \| \leq L_2 \| y^k - \hat{y}^k \|$, and
\[ \| M_k ( \omega^k - \hat{\omega}^k ) \| \leq \tau_k \| x^k - \hat{x}^k \| + \sigma_k \| y^k - \hat{y}^k \| + \frac{1}{\beta} \| z^k - \hat{z}^k \| \leq \tau_{\max} \| x^k - \hat{x}^k \| + \sigma_{\max} \| y^k - \hat{y}^k \| + \frac{1}{\beta} \| z^k - \hat{z}^k \|, \]
we have
\[ \| d ( \omega^k, \hat{\omega}^k ) \| = \| M_k ( \omega^k - \hat{\omega}^k ) - \xi ( \omega^k, \hat{\omega}^k ) \| \leq \| M_k ( \omega^k - \hat{\omega}^k ) \| + \| \xi ( \omega^k, \hat{\omega}^k ) \| \leq ( L_1 + \tau_{\max} ) \| x^k - \hat{x}^k \| + ( L_2 + \sigma_{\max} ) \| y^k - \hat{y}^k \| + \frac{1}{\beta} \| z^k - \hat{z}^k \|. \]

Let $C = \sqrt{3} \max \{ L_1 + \tau_{\max}, L_2 + \sigma_{\max}, \frac{1}{\beta} \}$, where the factor $\sqrt{3}$ bounds the sum of the three norms by $\sqrt{3} \| \omega^k - \hat{\omega}^k \|$; then we can get $\| d ( \omega^k, \hat{\omega}^k ) \| \leq C \| \omega^k - \hat{\omega}^k \|$.

So, from Lemma 4, we obtain
\[ \alpha_k = \frac{\varphi ( \omega^k, \hat{\omega}^k )}{\| d ( \omega^k, \hat{\omega}^k ) \|^2} \geq \frac{m}{C^2}. \]
Therefore,
\[ \| \omega^{k+1} - \omega^* \|^2 \leq \| \omega^k - \omega^* \|^2 - \frac{m^2 \gamma ( 2 - \gamma )}{C^2} \| \omega^k - \hat{\omega}^k \|^2. \]

 □

Theorem 2 The sequence $\{\omega^k\}$ generated by Algorithm 3.1 converges to a solution of (12).

Proof Suppose $\omega^*$ is a solution of (12). It follows from Theorem 1 that
\[ \| \omega^{k+1} - \omega^* \|^2 \leq \| \omega^k - \omega^* \|^2 \leq \cdots \leq \| \omega^0 - \omega^* \|^2, \]
(27)

which means that the sequence ω k is bounded. Thus, it has at least a cluster point.

Furthermore, Theorem 1 also shows that
\[ \frac{m^2 \gamma ( 2 - \gamma )}{C^2} \| \omega^k - \hat{\omega}^k \|^2 \leq \| \omega^k - \omega^* \|^2 - \| \omega^{k+1} - \omega^* \|^2. \]
Summing both sides over all k, we obtain
\[ \frac{m^2 \gamma ( 2 - \gamma )}{C^2} \sum_{k=0}^{\infty} \| \omega^k - \hat{\omega}^k \|^2 \leq \sum_{k=0}^{\infty} \bigl\{ \| \omega^k - \omega^* \|^2 - \| \omega^{k+1} - \omega^* \|^2 \bigr\} \leq \| \omega^0 - \omega^* \|^2, \]
which means that
\[ \lim_{k \to \infty} \| \omega^k - \hat{\omega}^k \|^2 = 0. \]
So, $\{\omega^k\}$ and $\{\hat{\omega}^k\}$ have the same cluster points. Without loss of generality, let $\bar{\omega}$ be a cluster point of $\{\omega^k\}$ and $\{\hat{\omega}^k\}$, and let $\bar{\tau}$ and $\bar{\sigma}$ be cluster points of the bounded sequences $\{\tau_k\}$ and $\{\sigma_k\}$, respectively. Let $\{\omega^{k_j}\}$, $\{\hat{\omega}^{k_j}\}$, $\{\tau_{k_j}\}$, $\{\sigma_{k_j}\}$ be the subsequences converging to them. Then, by taking limits over these subsequences in the updates of Step 2, we have
\[ \bar{x} = P_X \Bigl\{ \bar{x} - \frac{1}{\bar{\tau}} \bigl[ \nabla p_1(\bar{x}) + \beta A^T ( A \bar{x} - \bar{y} ) \bigr] \Bigr\}, \quad \bar{y} = P_Y \Bigl\{ \bar{y} - \frac{1}{\bar{\sigma}} \bigl[ \nabla p_2(\bar{y}) - \beta ( A \bar{x} - \bar{y} ) \bigr] \Bigr\}, \quad \bar{z} = \bar{z} - \beta ( A \bar{x} - \bar{y} ). \]

It then follows from [15] that $\bar{\omega}$ is a solution of (12).

Since $\omega^*$ was an arbitrary solution, we can take $\omega^* = \bar{\omega}$ in (27) and obtain
\[ \| \omega^{k+1} - \bar{\omega} \| \leq \| \omega^k - \bar{\omega} \|, \quad \forall k \geq 1. \]

Therefore, the whole sequence { ω k } converges to ω ¯ . This completes the proof. □

Remark 4 Our iterative method has a simpler form and improves the corresponding result of [10].

4 Applications

The multiple-sets split feasibility problem (MSSFP) is to find a point closest to a family of closed convex sets in one space such that its image under a linear transformation is closest to another family of closed convex sets in the image space. It serves as a model for real-world inverse problems where constraints are imposed on the solution both in the domain of a linear operator and in the operator's range.

In this paper, our algorithm converges to a solution of the multiple-sets split feasibility problem (MSSFP) for any starting vector $\omega^0 = (x^0, y^0, z^0)$, whenever the MSSFP has a solution. In the inconsistent case, it finds a point that least violates feasibility in the sense of being 'closest' to all the sets, as measured by the proximity function.

In the general case, computing the projections in the MSSFP exactly is difficult to implement. Yang [7] therefore solved this problem by the relaxed CQ algorithm. Without loss of generality, take the two-sets split feasibility problem for instance. He assumes the sets C and Q are nonempty and given by
\[ C = \bigl\{ x \in R^N \mid c(x) \leq 0 \bigr\} \quad \text{and} \quad Q = \bigl\{ y \in R^M \mid q(y) \leq 0 \bigr\}, \]

where $c : R^N \to R$ and $q : R^M \to R$ are convex functions. Here subgradient projections are used instead of the orthogonal projections. This substitution makes the split feasibility problem computationally practical, since each subgradient projection has an explicit formula.
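The subgradient projection replaces $P_C$ by the projection onto the half-space $\{u : c(x) + \langle g, u - x \rangle \leq 0\}$, where g is any subgradient of c at x; this half-space contains C, and its projection is explicit. A minimal sketch, with an assumed quadratic c (so that $C$ is the unit ball):

```python
import numpy as np

# Subgradient projection for a level set C = {u : c(u) <= 0}, the device the
# relaxed CQ algorithm uses in place of the exact projection. The convex
# function c below (unit ball) is an assumed example.

def subgrad_proj(x, c, grad_c):
    """One subgradient projection of x with respect to {u : c(u) <= 0}."""
    v = c(x)
    if v <= 0:
        return x                      # already inside the relaxed half-space
    g = grad_c(x)
    # Projection onto the half-space {u : c(x) + <g, u - x> <= 0}, which
    # contains C by convexity of c.
    return x - v * g / np.dot(g, g)

c = lambda u: np.dot(u, u) - 1.0      # c(u) <= 0  <=>  ||u|| <= 1
grad_c = lambda u: 2.0 * u

x = np.array([3.0, 4.0])
for _ in range(30):
    x = subgrad_proj(x, c, grad_c)
```

For this quadratic c the iteration shrinks the radius toward 1 very quickly, so the final point essentially lies in the unit ball.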

Lastly, we note that our work is related to significant real-world applications. The multiple-sets split feasibility problem has been applied to the inverse problem of intensity-modulated radiation therapy (IMRT). In this field, beams of penetrating radiation are directed at the lesion (tumor) from external sources in order to eradicate the tumor without causing irreparable damage to the surrounding healthy tissue; see, e.g., [4].

Declarations

Acknowledgements

We wish to thank the referees for their helpful comments and suggestions. This research was supported by the National Natural Science Foundation of China, under the Grant No.11071279.

Authors’ Affiliations

(1)
Department of Mathematics, Tianjin Polytechnic University, Tianjin, 300387, China

References

  1. Censor Y, Segal A: The split common fixed point problem for directed operators. J. Convex Anal. 2009, 16: 587–600.
  2. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. 10.1007/BF02142692
  3. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006
  4. Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21: 2071–2084. 10.1088/0266-5611/21/6/017
  5. Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353–2365. 10.1088/0031-9155/51/10/001
  6. Byrne C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18: 441–453. 10.1088/0266-5611/18/2/310
  7. Yang Q: The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 2004, 20: 1261–1266. 10.1088/0266-5611/20/4/014
  8. Qu B, Xiu N: A new half-space-relaxation projection method for the split feasibility problem. Linear Algebra Appl. 2008, 428: 1218–1229. 10.1016/j.laa.2007.03.002
  9. Xu HK: A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22: 2021–2034. 10.1088/0266-5611/22/6/007
  10. Zhang WX, Han DR, Yuan XM: An efficient simultaneous method for the constrained multiple-sets split feasibility problem. Comput. Optim. Appl. 2012. doi:10.1007/s10589-011-9429-8
  11. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems, Vol. I. Springer, Berlin; 2003.
  12. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems, Vol. II. Springer, Berlin; 2003.
  13. Baillon J, Haddad G: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 1977, 26: 137–150. 10.1007/BF03007664
  14. Zhang WX, Han DR, Li ZB: A self-adaptive projection method for solving the multiple-sets split feasibility problem. Inverse Probl. 2009, 25: Article ID 115001. doi:10.1088/0266-5611/25/11/115001
  15. Bertsekas DP, Tsitsiklis JN: Parallel and Distributed Computation: Numerical Methods. Prentice Hall, Englewood Cliffs; 1989.

Copyright

© Du and Chen; licensee Springer 2012

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
