
Strong convergence of a self-adaptive method for the split feasibility problem

Fixed Point Theory and Applications 2013, 2013:201

https://doi.org/10.1186/1687-1812-2013-201

  • Received: 21 May 2013
  • Accepted: 10 July 2013

Abstract

Self-adaptive methods, which allow the step-size to be selected adaptively, are effective for solving some important problems, e.g., variational inequality problems. We devote this paper to developing and improving self-adaptive methods for solving the split feasibility problem. A new, improved self-adaptive method is introduced for solving the split feasibility problem. As a special case, the minimum norm solution of the split feasibility problem can be approached iteratively.

MSC: 47J25, 47J20, 49N45, 65J15.

Keywords

  • split feasibility problem
  • self-adaptive method
  • projection
  • minimization problem
  • minimum-norm

1 Introduction

As is well known, the split feasibility problem (SFP) was first introduced by Censor and Elfving [1] and has received much attention since its inception in 1994. This is due to its applications in signal processing and image reconstruction, with particular progress in intensity-modulated radiation therapy; see [2–6].

Since the SFP is a special case of the convex feasibility problem (CFP), which is to find a point in the nonempty intersection of finitely many closed and convex sets, we next briefly review some historic approaches related to the CFP. The CFP is an important problem because many real-world inversion or estimation problems in engineering as well as in mathematics can be cast into this framework; see, e.g., Combettes [7], Bauschke and Borwein [8] and Kiwiel [9]. Traditionally, iterative projection methods for solving the CFP employ orthogonal projections onto convex sets (i.e., nearest point projections with respect to the Euclidean distance function); see, e.g., [10–14]. Much work has also been done with generalized distance functions and the generalized projections associated with them, as suggested by Bregman [15].

In 1994, Censor and Elfving [1] investigated the use of different kinds of generalized projections in a single iterative process for solving the SFP. Their proposal is an iterative algorithm that involves the computation of the inverse of a matrix, which is known to be a difficult task. That is why Byrne [16, 17] proposed the so-called CQ algorithm, which generates a sequence by a recursive procedure with a suitable step-size. The CQ algorithm only involves the computations of the projections onto the sets $C$ and $Q$, respectively, and is therefore implementable in the case where these projections have closed-form expressions (e.g., when $C$ and $Q$ are closed balls or half-spaces). There is a large number of references on the CQ method in the literature; see, for instance, [18–34]. However, we have to remark that the determination of the step-size depends on the operator (matrix) norm (or the dominant eigenvalue of a matrix product). This means that in order to implement the CQ algorithm, one has first to compute (or, at least, estimate) the matrix norm of an operator, which is in general not an easy task in practice.

To overcome the above difficulty, the so-called self-adaptive method, which permits the step-size to be selected self-adaptively, was developed. Note that this method is the application of the projection method of Goldstein [35] and Levitin and Polyak [36] to a suitable variational inequality problem, and it is among the simplest numerical methods for solving variational inequality problems. Nevertheless, the efficiency of this projection method depends strongly on the choice of the step-size parameter. If one chooses a parameter small enough to guarantee convergence of the iterative sequence, the recursion converges slowly; on the other hand, if one chooses a large step-size to improve the speed of convergence, the generated sequence may not converge at all. In real applications to variational inequality problems, the Lipschitz constant may be difficult to estimate, even if the underlying mapping is linear, as is the case for the SFP. Several self-adaptive methods for solving variational inequality problems have been developed from the original Goldstein-Levitin-Polyak method [35, 36]; see, e.g., [37–45].

Motivated by the self-adaptive strategy, Zhang et al. [45] proposed a method using variable step-sizes instead of the fixed step-sizes of Censor et al. [46]. A self-adaptive projection method adopting Armijo-like searches was also introduced by Zhao and Yang [29]. The advantage of these algorithms lies in the fact that neither prior information about the matrix norm $\|A\|$ nor any other conditions on $Q$ and $A$ are required, while convergence is still guaranteed.

In this paper, we further develop and improve self-adaptive methods for solving the SFP: an improved self-adaptive method is introduced and, as a special case, the minimum norm solution of the SFP can be approached iteratively.

2 Framework and preliminary results

Let $H_1$ and $H_2$ be two Hilbert spaces, and let $C$ and $Q$ be two closed and convex subsets of $H_1$ and $H_2$, respectively. Let $A : H_1 \to H_2$ be a bounded linear operator. The split feasibility problem (SFP) is to find a point $x$ such that

$$x \in C \quad \text{and} \quad Ax \in Q. \tag{1}$$

Next, we use $\Gamma$ to denote the solution set of the SFP, i.e., $\Gamma = \{ x \in C : Ax \in Q \}$.

In 1994, Censor and Elfving [1] investigated the use of different kinds of generalized projections in a single iterative process for solving the SFP. They were the first to propose the following algorithm, which involves the computation of the inverse $A^{-1}$:

$$x_{k+1} = A^{-1} P_Q \bigl( P_{A(C)} (A x_k) \bigr), \quad k \ge 0,$$

where $C$ and $Q$ are closed and convex sets in $\mathbb{R}^n$, $A$ is a full rank $n \times n$ matrix, and $A(C) = \{ y \in \mathbb{R}^n \mid y = Ax, \, x \in C \}$. Note that computing $A^{-1}$ is not an easy task. Consequently, Byrne [16, 17] proposed the so-called CQ algorithm, which generates a sequence $\{x_n\}$ by the recursive procedure

$$x_{n+1} = P_C \bigl( x_n - \tau_n A^\ast (I - P_Q) A x_n \bigr), \tag{2}$$

where the step-size $\tau_n$ is chosen in the interval $(0, 2/\|A\|^2)$. It is remarkable that the CQ algorithm only involves the computations of the projections $P_C$ and $P_Q$ onto the sets $C$ and $Q$, respectively, and is therefore implementable in the case where $P_C$ and $P_Q$ have closed-form expressions (e.g., when $C$ and $Q$ are closed balls or half-spaces). However, we observe that the determination of the step-size $\tau_n$ depends on the operator (matrix) norm $\|A\|$ (or the largest eigenvalue of $A^\ast A$). This means that for practical implementation of the CQ algorithm, one has first to compute (or, at least, estimate) the norm of $A$, which is in general not an easy task in practice.
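For concreteness, the following is a minimal NumPy sketch of iteration (2), under the assumption that $C$ and $Q$ admit closed-form projections (here a box and a half-space, both illustrative choices), with the constant admissible step-size $\tau = 1/\|A\|^2 \in (0, 2/\|A\|^2)$. All function and variable names are ours, not from the cited sources.

```python
import numpy as np

def proj_box(x, lo, hi):
    # Closed-form projection onto the box C = {x : lo <= x <= hi}.
    return np.clip(x, lo, hi)

def proj_halfspace(y, a, b):
    # Closed-form projection onto the half-space Q = {y : <a, y> <= b}.
    viol = a @ y - b
    return y if viol <= 0 else y - (viol / (a @ a)) * a

def cq_algorithm(A, proj_C, proj_Q, x0, n_iter=500):
    # CQ iteration (2): x_{n+1} = P_C(x_n - tau * A^T (I - P_Q) A x_n),
    # with tau = 1/||A||^2; note this requires knowing (or estimating)
    # the spectral norm of A, which motivates the self-adaptive methods below.
    tau = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        Ax = A @ x
        grad = A.T @ (Ax - proj_Q(Ax))   # = A^T (I - P_Q) A x
        x = proj_C(x - tau * grad)
    return x

# Illustrative run: find x in [-1, 1]^5 with <(1,1,1), Ax> <= -0.5.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
x = cq_algorithm(A,
                 proj_C=lambda v: proj_box(v, -1.0, 1.0),
                 proj_Q=lambda y: proj_halfspace(y, np.ones(3), -0.5),
                 x0=np.zeros(5))
```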

To overcome the above difficulty, the so-called self-adaptive method, which permits the step-size $\tau_n$ to be selected self-adaptively, was developed. If we set

$$f(x) := \frac{1}{2} \| Ax - P_Q A x \|^2,$$

then the convex objective $f$ is differentiable and has a Lipschitz continuous gradient given by

$$\nabla f(x) = A^\ast (I - P_Q) A x.$$

Thus, the CQ algorithm (2) can be obtained by solving the following convex minimization problem:

$$\min_{x \in C} f(x). \tag{3}$$

We know that a point $x^\ast \in C$ is a stationary point of problem (3) if it satisfies

$$\langle \nabla f(x^\ast), x - x^\ast \rangle \ge 0, \quad \forall x \in C. \tag{4}$$

Thus, we can use the gradient projection algorithm below to solve the SFP:

$$x_{n+1} = P_C \bigl( x_n - \tau_n \nabla f(x_n) \bigr), \tag{5}$$

where $\tau_n$, the step-size at iteration $n$, is chosen in the interval $(0, 2/L)$, with $L$ the Lipschitz constant of $\nabla f$.
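Although not computed explicitly above, a standard estimate (our addition) shows that one may take $L = \|A\|^2$, so the step-size ranges in (2) and (5) agree; the one-line verification uses the nonexpansivity of $I - P_Q$:

$$\|\nabla f(x) - \nabla f(y)\| = \|A^\ast (I - P_Q) A x - A^\ast (I - P_Q) A y\| \le \|A\| \, \|(I - P_Q) A x - (I - P_Q) A y\| \le \|A\| \, \|Ax - Ay\| \le \|A\|^2 \|x - y\|.$$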

The above method (5) can be viewed as the application of the projection method of Goldstein [35] and Levitin and Polyak [36] to the variational inequality problem (4), and it is among the simplest numerical methods for solving variational inequality problems. Nevertheless, the efficiency of this projection method depends greatly on the choice of the parameter $\tau_n$. A small $\tau_n$ guarantees the convergence of the iterative sequence, but leads to slow convergence; a large step-size improves the speed of convergence, but the generated sequence may fail to converge. In real applications, the Lipschitz constant may be difficult to estimate, even if the underlying mapping is linear, as is the case for the SFP.

The methods in Zhang et al. [45] and Censor et al. [46] were proposed for solving the multiple-sets split feasibility problem.

Algorithm 2.1 S1. Given a nonnegative sequence $\{\tau_n\}$ such that $\sum_{n=0}^{\infty} \tau_n < \infty$, $\delta \in (0,1)$, $\mu \in (0,1)$, $\rho \in (0,1)$, $\epsilon > 0$, $\beta_0 > 0$, and an arbitrary initial point $x_0$. Set $\gamma_0 = \beta_0$ and $n = 0$.

S2. Find the smallest nonnegative integer $l_n$ such that $\beta_{n+1} = \mu^{l_n} \gamma_n$ and

$$x_{n+1} = P_C \bigl( x_n - \beta_{n+1} \nabla f(x_n) \bigr)$$

satisfies

$$\beta_{n+1} \| \nabla f(x_n) - \nabla f(x_{n+1}) \|^2 \le (2 - \delta) \langle x_n - x_{n+1}, \nabla f(x_n) - \nabla f(x_{n+1}) \rangle.$$

S3. If

$$\beta_{n+1} \| \nabla f(x_n) - \nabla f(x_{n+1}) \|^2 \le \rho \langle x_n - x_{n+1}, \nabla f(x_n) - \nabla f(x_{n+1}) \rangle,$$

then set $\gamma_{n+1} = (1 + \tau_{n+1}) \beta_{n+1}$; otherwise, set $\gamma_{n+1} = \beta_{n+1}$.

S4. If $e(x_n, \beta_n) \le \epsilon$, stop; otherwise, set $n := n + 1$ and go to S2.

The following self-adaptive projection method, which adopts Armijo-like searches, was introduced by Zhao and Yang [29].

Algorithm 2.2 Given constants $\beta > 0$, $\sigma \in (0,1)$ and $\gamma \in (0,1)$. Let $x_0$ be arbitrary. For $n = 0, 1, \ldots$, calculate

$$x_{n+1} = P_C \bigl( x_n - \tau_n \nabla f(x_n) \bigr),$$

where $\tau_n = \beta \gamma^{l_n}$ and $l_n$ is the smallest nonnegative integer $l$ such that

$$f \bigl( P_C ( x_n - \beta \gamma^l \nabla f(x_n) ) \bigr) \le f(x_n) - \sigma \bigl\langle \nabla f(x_n), \, x_n - P_C ( x_n - \beta \gamma^l \nabla f(x_n) ) \bigr\rangle.$$
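A minimal Python sketch of one step of Algorithm 2.2 (illustrative names; f, grad_f and proj_C are assumed to be supplied by the caller, e.g. built from the closed-form projections in the CQ sketch above):

```python
def armijo_step(x, f, grad_f, proj_C, beta=1.0, sigma=0.5, gamma=0.5, l_max=50):
    # One step of Algorithm 2.2: backtrack over l = 0, 1, 2, ... until
    #   f(P_C(x - beta*gamma^l * grad_f(x)))
    #     <= f(x) - sigma * <grad_f(x), x - P_C(x - beta*gamma^l * grad_f(x))>,
    # then return x_{n+1} and the accepted step-size tau_n = beta*gamma^l.
    # No knowledge of ||A|| is needed; l_max is a safeguard we add.
    g = grad_f(x)
    fx = f(x)
    for l in range(l_max):
        tau = beta * gamma ** l
        x_new = proj_C(x - tau * g)
        if f(x_new) <= fx - sigma * (g @ (x - x_new)):
            return x_new, tau
    return x_new, tau  # safeguard: return the last trial point
```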

The advantage of Algorithm 2.1 and Algorithm 2.2 lies in the fact that neither prior information about the matrix norm $\|A\|$ nor any other conditions on $Q$ and $A$ are required, while convergence is still guaranteed.

We shall introduce our improved self-adaptive method for solving the SFP. To this end, we need the following ingredients.

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. A mapping $T : C \to C$ is called nonexpansive if

$$\| Tx - Ty \| \le \| x - y \|, \quad \forall x, y \in C.$$

A mapping $\psi : C \to C$ is said to be $\delta$-contractive if there exists a constant $\delta \in [0, 1)$ such that

$$\| \psi(x) - \psi(y) \| \le \delta \| x - y \|, \quad \forall x, y \in C.$$

Recall that the (nearest point or metric) projection from $H$ onto $C$, denoted by $P_C$, assigns to each $x \in H$ the unique point $P_C(x) \in C$ with the property

$$\| x - P_C(x) \| = \inf \{ \| x - y \| : y \in C \}.$$
It is well known that the metric projection $P_C$ of $H$ onto $C$ has the following basic properties:

(a) $\| P_C(x) - P_C(y) \| \le \| x - y \|$ for all $x, y \in H$;

(b) $\langle x - y, P_C(x) - P_C(y) \rangle \ge \| P_C(x) - P_C(y) \|^2$ for every $x, y \in H$;

(c) $\langle x - P_C(x), y - P_C(x) \rangle \le 0$ for all $x \in H$ and $y \in C$.
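Property (c) characterizes $P_C(x)$ and is used repeatedly below; as a quick sanity check, here is an illustrative numerical verification (our construction) for the closed-form projection onto a Euclidean ball:

```python
import numpy as np

def proj_ball(x, center, r):
    # Closed-form projection onto the ball C = {y : ||y - center|| <= r}.
    d = x - center
    nd = np.linalg.norm(d)
    return x.copy() if nd <= r else center + (r / nd) * d

rng = np.random.default_rng(1)
center, r = np.zeros(4), 1.0
x = 5.0 * rng.standard_normal(4)      # a point (almost surely) outside the ball
p = proj_ball(x, center, r)
for _ in range(1000):
    y = proj_ball(5.0 * rng.standard_normal(4), center, r)  # arbitrary y in C
    assert (x - p) @ (y - p) <= 1e-10  # property (c): <x - P_C(x), y - P_C(x)> <= 0
```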

Next we adopt the following notation:

  • $x_n \to x$ means that $x_n$ converges strongly to $x$;

  • $x_n \rightharpoonup x$ means that $x_n$ converges weakly to $x$;

  • $\omega_w(x_n) := \{ x : x_{n_j} \rightharpoonup x \text{ for some subsequence } \{x_{n_j}\} \subset \{x_n\} \}$ is the weak $\omega$-limit set of the sequence $\{x_n\}$.

Recall that a function $f : H \to \mathbb{R}$ is called convex if

$$f(\lambda x + (1 - \lambda) y) \le \lambda f(x) + (1 - \lambda) f(y), \quad \forall \lambda \in (0,1), \, \forall x, y \in H.$$

It is known that a differentiable function $f$ is convex if and only if the following relation holds:

$$f(z) \ge f(x) + \langle \nabla f(x), z - x \rangle, \quad \forall z \in H.$$

Recall that an element $g \in H$ is said to be a subgradient of $f : H \to \mathbb{R}$ at $x$ if

$$f(z) \ge f(x) + \langle g, z - x \rangle, \quad \forall z \in H.$$

If the function $f : H \to \mathbb{R}$ has at least one subgradient at $x$, it is said to be subdifferentiable at $x$. The set of subgradients of $f$ at the point $x$ is called the subdifferential of $f$ at $x$ and is denoted by $\partial f(x)$. A function $f$ is called subdifferentiable if it is subdifferentiable at all $x \in H$. If $f$ is convex and differentiable, then its gradient and subgradient coincide. A function $f : H \to \mathbb{R}$ is said to be weakly lower semi-continuous (w-lsc) at $x$ if $x_n \rightharpoonup x$ implies

$$f(x) \le \liminf_{n \to \infty} f(x_n).$$

$f$ is said to be w-lsc on $H$ if it is w-lsc at every point $x \in H$.

The first lemma is easy to prove.

Lemma 2.1 [14]

Let $f(x) := \frac{1}{2} \| Ax - P_Q A x \|^2$. Then

(i) $f$ is convex and differentiable;

(ii) $f$ is w-lsc on $C$.

Lemma 2.2 [47]

Given $x^\ast \in H_1$, the point $x^\ast$ solves the SFP if and only if $x^\ast$ solves the fixed point equation

$$x^\ast = P_C \bigl( x^\ast - \gamma A^\ast (I - P_Q) A x^\ast \bigr),$$

where $\gamma > 0$.
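A sketch of why Lemma 2.2 holds (our summary, not the proof from [47]): by property (c), $x^\ast = P_C(x^\ast - \gamma \nabla f(x^\ast))$ is equivalent to

$$\langle x^\ast - \gamma \nabla f(x^\ast) - x^\ast, \, x - x^\ast \rangle \le 0 \quad \forall x \in C \iff \langle \nabla f(x^\ast), \, x - x^\ast \rangle \ge 0 \quad \forall x \in C,$$

which is the stationarity condition (4). Since $f$ is convex, this says that $x^\ast$ minimizes $f$ over $C$; when the SFP is consistent, the minimum value is $0$, i.e., $Ax^\ast \in Q$.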

Lemma 2.3 [48]

Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that

$$a_{n+1} \le (1 - \gamma_n) a_n + \delta_n,$$

where $\{\gamma_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence such that

(1) $\sum_{n=1}^{\infty} \gamma_n = \infty$;

(2) $\limsup_{n \to \infty} \delta_n / \gamma_n \le 0$ or $\sum_{n=1}^{\infty} |\delta_n| < \infty$.

Then $\lim_{n \to \infty} a_n = 0$.

Lemma 2.4 [49]

Let $\{s_n\}$ be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence $\{s_{n_i}\}$ of $\{s_n\}$ such that $s_{n_i} \le s_{n_i+1}$ for all $i \ge 0$. For every $n \ge n_0$, define the integer sequence $\{\tau(n)\}$ as

$$\tau(n) = \max \{ k \le n : s_k \le s_{k+1} \}.$$

Then $\tau(n) \to \infty$ as $n \to \infty$, and for all $n \ge n_0$,

$$\max \{ s_{\tau(n)}, s_n \} \le s_{\tau(n)+1}.$$

3 Main results

In this section we state and prove our main results.

Let $C$ and $Q$ be nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively. Let $\psi : C \to H_1$ be a $\delta$-contraction with $\delta \in [0, \sqrt{2}/2)$. Let $A : H_1 \to H_2$ be a bounded linear operator.

Algorithm 3.1 For given $x_0 \in C$, assume that $\{x_n\}$ has been constructed. If $\nabla f(x_n) = 0$, then stop; $x_n$ is a solution of SFP (1). Otherwise, continue and compute $x_{n+1}$ by the recursion

$$x_{n+1} = P_C \Bigl[ \alpha_n \psi(x_n) + (1 - \alpha_n) \Bigl( x_n - \rho_n \frac{f(x_n)}{\| \nabla f(x_n) \|^2} \nabla f(x_n) \Bigr) \Bigr], \quad n \ge 0, \tag{6}$$

where $\{\alpha_n\} \subset (0,1)$ and $\{\rho_n\} \subset (0,2)$.
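A minimal NumPy sketch of recursion (6) (our illustrative implementation, with the admissible choices $\alpha_n = 1/(n+2)$ and $\rho_n \equiv 1$, for which conditions (a) and (b) of Theorem 3.1 below hold; f, grad_f and proj_C as in the CQ sketch above):

```python
import numpy as np

def algorithm_3_1(psi, f, grad_f, proj_C, x0, n_iter=1000, tol=1e-14):
    # Recursion (6):
    #   x_{n+1} = P_C[ alpha_n * psi(x_n)
    #       + (1 - alpha_n) * (x_n - rho_n * f(x_n)/||grad f(x_n)||^2 * grad f(x_n)) ]
    # with alpha_n = 1/(n+2)  (so alpha_n -> 0 and sum alpha_n = infinity)
    # and rho_n = 1           (so inf rho_n(2 - rho_n) = 1 > 0).
    x = np.array(x0, dtype=float)
    for n in range(n_iter):
        g = grad_f(x)
        g2 = g @ g
        if g2 <= tol:  # grad f(x_n) = 0: stop; x_n solves the SFP when it is consistent
            return x
        alpha = 1.0 / (n + 2)
        step = x - (f(x) / g2) * g          # rho_n = 1
        x = proj_C(alpha * psi(x) + (1.0 - alpha) * step)
    return x
```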

Theorem 3.1 Suppose that the SFP is consistent, that is, $\Gamma \ne \emptyset$. Assume that the following conditions hold:

(a) $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$;

(b) $\inf_n \rho_n (2 - \rho_n) > 0$.

Then $\{x_n\}$ defined by (6) converges strongly to $z$, which solves the following variational inequality:

$$z \in \Gamma \quad \text{such that} \quad \langle z - \psi(z), z - x \rangle \le 0 \quad \text{for all } x \in \Gamma. \tag{7}$$
Proof First, it is obvious that the solution of the variational inequality (7) is unique (by the strong monotonicity of $I - \psi$, according to related results on variational inequalities); denote it by $z$. Then $z = P_\Gamma(\psi(z))$. We may assume that the sequence $\{x_n\}$ is infinite, that is, Algorithm 3.1 does not terminate in a finite number of iterations. Thus, $\nabla f(x_n) \ne 0$ for all $n$. From (6), we have

$$\begin{aligned} \|x_{n+1} - z\|^2 &= \Bigl\| P_C \Bigl[ \alpha_n \psi(x_n) + (1-\alpha_n) \Bigl( x_n - \rho_n \frac{f(x_n)}{\|\nabla f(x_n)\|^2} \nabla f(x_n) \Bigr) \Bigr] - z \Bigr\|^2 \\ &\le \Bigl\| \alpha_n (\psi(x_n) - z) + (1-\alpha_n) \Bigl( x_n - \rho_n \frac{f(x_n)}{\|\nabla f(x_n)\|^2} \nabla f(x_n) - z \Bigr) \Bigr\|^2 \\ &\le \alpha_n \|\psi(x_n) - z\|^2 + (1-\alpha_n) \Bigl\| x_n - \rho_n \frac{f(x_n)}{\|\nabla f(x_n)\|^2} \nabla f(x_n) - z \Bigr\|^2 \\ &\le (1-\alpha_n) \Bigl[ \|x_n - z\|^2 + \rho_n^2 \frac{f^2(x_n)}{\|\nabla f(x_n)\|^2} - 2\rho_n \frac{f(x_n)}{\|\nabla f(x_n)\|^2} \langle \nabla f(x_n), x_n - z \rangle \Bigr] \\ &\quad + \alpha_n \bigl( \|\psi(x_n) - \psi(z)\| + \|\psi(z) - z\| \bigr)^2. \end{aligned} \tag{8}$$
By the convexity of $f$ (Lemma 2.1) and the fact that $f(z) = 0$ for $z \in \Gamma$, we deduce that

$$f(x_n) = f(x_n) - f(z) \le \langle \nabla f(x_n), x_n - z \rangle. \tag{9}$$
Using the inequality $(a+b)^2 \le 2(a^2 + b^2)$ for all $a, b \in \mathbb{R}$, we have

$$\bigl( \|\psi(x_n) - \psi(z)\| + \|\psi(z) - z\| \bigr)^2 \le 2\|\psi(x_n) - \psi(z)\|^2 + 2\|\psi(z) - z\|^2 \le 2\delta^2 \|x_n - z\|^2 + 2\|\psi(z) - z\|^2. \tag{10}$$
From (8)-(10), we get

$$\begin{aligned} \|x_{n+1} - z\|^2 &\le (1-\alpha_n) \Bigl[ \|x_n - z\|^2 - \rho_n (2-\rho_n) \frac{f^2(x_n)}{\|\nabla f(x_n)\|^2} \Bigr] + 2\delta^2 \alpha_n \|x_n - z\|^2 + 2\alpha_n \|\psi(z) - z\|^2 \\ &\le \bigl[ 1 - (1 - 2\delta^2) \alpha_n \bigr] \|x_n - z\|^2 + (1 - 2\delta^2)\alpha_n \frac{2\|\psi(z) - z\|^2}{1 - 2\delta^2} \\ &\le \max \Bigl\{ \|x_n - z\|^2, \, \frac{2\|\psi(z) - z\|^2}{1 - 2\delta^2} \Bigr\}. \end{aligned}$$

By induction, we deduce

$$\|x_{n+1} - z\|^2 \le \max \Bigl\{ \|x_0 - z\|^2, \, \frac{2\|\psi(z) - z\|^2}{1 - 2\delta^2} \Bigr\}.$$

Hence, $\{x_n\}$ is bounded.

By using the firm nonexpansivity of $P_C$ (property (b)), we derive that

$$\begin{aligned} \|x_{n+1} - z\|^2 &= \Bigl\| P_C \Bigl[ \alpha_n \psi(x_n) + (1-\alpha_n) \Bigl( x_n - \rho_n \frac{f(x_n)}{\|\nabla f(x_n)\|^2} \nabla f(x_n) \Bigr) \Bigr] - P_C z \Bigr\|^2 \\ &\le \alpha_n \langle \psi(x_n) - z, x_{n+1} - z \rangle + (1-\alpha_n) \Bigl\langle x_n - \rho_n \frac{f(x_n)}{\|\nabla f(x_n)\|^2} \nabla f(x_n) - z, \, x_{n+1} - z \Bigr\rangle \\ &= \alpha_n \langle \psi(x_n) - \psi(z), x_{n+1} - z \rangle + \alpha_n \langle \psi(z) - z, x_{n+1} - z \rangle \\ &\quad + (1-\alpha_n) \Bigl\langle x_n - \rho_n \frac{f(x_n)}{\|\nabla f(x_n)\|^2} \nabla f(x_n) - z, \, x_{n+1} - z \Bigr\rangle \\ &\le \alpha_n \delta \|x_n - z\| \|x_{n+1} - z\| + \alpha_n \langle \psi(z) - z, x_{n+1} - z \rangle \\ &\quad + (1-\alpha_n) \Bigl\| x_n - \rho_n \frac{f(x_n)}{\|\nabla f(x_n)\|^2} \nabla f(x_n) - z \Bigr\| \|x_{n+1} - z\| \\ &= \Bigl( \alpha_n \delta \|x_n - z\| + (1-\alpha_n) \Bigl\| x_n - \rho_n \frac{f(x_n)}{\|\nabla f(x_n)\|^2} \nabla f(x_n) - z \Bigr\| \Bigr) \|x_{n+1} - z\| + \alpha_n \langle \psi(z) - z, x_{n+1} - z \rangle \\ &\le \frac{1}{2} \Bigl( \alpha_n \delta \|x_n - z\| + (1-\alpha_n) \Bigl\| x_n - \rho_n \frac{f(x_n)}{\|\nabla f(x_n)\|^2} \nabla f(x_n) - z \Bigr\| \Bigr)^2 + \frac{1}{2} \|x_{n+1} - z\|^2 + \alpha_n \langle \psi(z) - z, x_{n+1} - z \rangle. \end{aligned}$$

It follows that

$$\begin{aligned} \|x_{n+1} - z\|^2 &\le \Bigl( \alpha_n \delta \|x_n - z\| + (1-\alpha_n) \Bigl\| x_n - \rho_n \frac{f(x_n)}{\|\nabla f(x_n)\|^2} \nabla f(x_n) - z \Bigr\| \Bigr)^2 + 2\alpha_n \langle \psi(z) - z, x_{n+1} - z \rangle \\ &\le \alpha_n \delta^2 \|x_n - z\|^2 + (1-\alpha_n) \Bigl\| x_n - \rho_n \frac{f(x_n)}{\|\nabla f(x_n)\|^2} \nabla f(x_n) - z \Bigr\|^2 + 2\alpha_n \langle \psi(z) - z, x_{n+1} - z \rangle \\ &\le \alpha_n \delta^2 \|x_n - z\|^2 + (1-\alpha_n) \Bigl[ \|x_n - z\|^2 - \rho_n(2-\rho_n) \frac{f^2(x_n)}{\|\nabla f(x_n)\|^2} \Bigr] + 2\alpha_n \langle \psi(z) - z, x_{n+1} - z \rangle \\ &= \bigl[ 1 - (1 - \delta^2) \alpha_n \bigr] \|x_n - z\|^2 + 2\alpha_n \langle \psi(z) - z, x_{n+1} - z \rangle - (1-\alpha_n) \rho_n (2-\rho_n) \frac{f^2(x_n)}{\|\nabla f(x_n)\|^2}. \end{aligned} \tag{11}$$
Next, we will prove that $x_n \to z$, following the ideas in [49]. Set $s_n = \|x_n - z\|^2$ for all $n \ge 0$. Since $\alpha_n \to 0$ and $\inf_n \rho_n(2-\rho_n) > 0$, we may assume, without loss of generality, that $(1-\alpha_n)\rho_n(2-\rho_n) \ge \sigma$ for some $\sigma > 0$. Thus, we can rewrite (11) as

$$s_{n+1} - s_n + (1-\delta^2)\alpha_n s_n + \sigma \frac{f^2(x_n)}{\|\nabla f(x_n)\|^2} \le 2\alpha_n \langle \psi(z) - z, x_{n+1} - z \rangle. \tag{12}$$

Now, we consider two possible cases.

Case 1. Assume that $\{s_n\}$ is eventually decreasing, i.e., there exists $N > 0$ such that $\{s_n\}$ is decreasing for $n \ge N$. In this case, $\{s_n\}$ must be convergent, and from (12) it follows that

$$0 \le \sigma \frac{f^2(x_n)}{\|\nabla f(x_n)\|^2} \le s_n - s_{n+1} - (1-\delta^2)\alpha_n s_n + 2\alpha_n \|\psi(z) - z\| \|x_{n+1} - z\| \le s_n - s_{n+1} + M \alpha_n, \tag{13}$$

where $M > 0$ is a constant such that $\sup_n \{ 2\|\psi(z) - z\| \|x_{n+1} - z\| \} \le M$. Letting $n \to \infty$ in (13), we get

$$\lim_{n \to \infty} f(x_n) = 0.$$

Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ converging weakly to a point $\tilde{x} \in C$.

From the weak lower semicontinuity of $f$, we have

$$0 \le f(\tilde{x}) \le \liminf_{k \to \infty} f(x_{n_k}) = \lim_{n \to \infty} f(x_n) = 0.$$

Hence, $f(\tilde{x}) = 0$, i.e., $A\tilde{x} \in Q$. This indicates that

$$\omega_w(x_n) \subset \Gamma.$$

Furthermore, by property (c) of the projection,

$$\limsup_{n \to \infty} \langle \psi(z) - z, x_{n+1} - z \rangle = \max_{\omega \in \omega_w(x_n)} \langle \psi(z) - P_\Gamma(\psi(z)), \omega - P_\Gamma(\psi(z)) \rangle \le 0.$$
From (12), we obtain

$$s_{n+1} \le \bigl[ 1 - (1-\delta^2)\alpha_n \bigr] s_n + 2\alpha_n \langle \psi(z) - z, x_{n+1} - z \rangle. \tag{14}$$

Applying Lemma 2.3 to (14), we get $s_n \to 0$.

Case 2. Assume that $\{s_n\}$ is not eventually decreasing. That is, there exists an integer $n_0$ such that $s_{n_0} \le s_{n_0+1}$. Thus, we can define an integer sequence $\{\tau(n)\}$ for all $n \ge n_0$ as follows:

$$\tau(n) = \max \{ k \in \mathbb{N} \mid n_0 \le k \le n, \, s_k \le s_{k+1} \}.$$

Clearly, $\{\tau(n)\}$ is a non-decreasing sequence such that $\tau(n) \to +\infty$ as $n \to \infty$ and

$$s_{\tau(n)} \le s_{\tau(n)+1}$$

for all $n \ge n_0$.
for all n n 0 . In this case, we derive from (13) that
σ f 2 ( x τ ( n ) ) f ( x τ ( n ) ) 2 M α τ ( n ) 0 .
It follows that
lim n f ( x τ ( n ) ) = 0 .

This implies that every weak cluster point of { x τ ( n ) } is in the solution set Γ; i.e., ω w ( x τ ( n ) ) Γ .

On the other hand, we note that

$$\|x_{\tau(n)+1} - x_{\tau(n)}\| \le \alpha_{\tau(n)} \|\psi(x_{\tau(n)}) - x_{\tau(n)}\| + (1-\alpha_{\tau(n)}) \rho_{\tau(n)} \frac{f(x_{\tau(n)})}{\|\nabla f(x_{\tau(n)})\|} \to 0,$$

from which we can deduce that

$$\limsup_{n \to \infty} \langle \psi(z) - z, x_{\tau(n)+1} - z \rangle = \limsup_{n \to \infty} \langle \psi(z) - z, x_{\tau(n)} - z \rangle = \max_{\omega \in \omega_w(x_{\tau(n)})} \langle \psi(z) - P_\Gamma(\psi(z)), \omega - P_\Gamma(\psi(z)) \rangle \le 0. \tag{15}$$
Since $s_{\tau(n)} \le s_{\tau(n)+1}$, we have from (12) that

$$s_{\tau(n)} \le \frac{2}{1-\delta^2} \langle \psi(z) - z, x_{\tau(n)+1} - z \rangle. \tag{16}$$
Combining (15) and (16) yields

$$\limsup_{n \to \infty} s_{\tau(n)} \le 0,$$

and hence

$$\lim_{n \to \infty} s_{\tau(n)} = 0.$$

From (14), we have

$$\limsup_{n \to \infty} s_{\tau(n)+1} \le \limsup_{n \to \infty} s_{\tau(n)}.$$

Thus,

$$\lim_{n \to \infty} s_{\tau(n)+1} = 0.$$

From Lemma 2.4, we have

$$0 \le s_n \le \max \{ s_{\tau(n)}, s_{\tau(n)+1} \}.$$

Therefore, $s_n \to 0$; that is, $x_n \to z$. This completes the proof. □

From Theorem 3.1, we can easily deduce the following algorithm and its convergence result.

Algorithm 3.2 For given $x_0 \in C$, assume that $\{x_n\}$ has been constructed. If $\nabla f(x_n) = 0$, then stop; $x_n$ is a solution of SFP (1). Otherwise, continue and compute $x_{n+1}$ by the recursion

$$x_{n+1} = P_C \Bigl[ (1 - \alpha_n) \Bigl( x_n - \rho_n \frac{f(x_n)}{\| \nabla f(x_n) \|^2} \nabla f(x_n) \Bigr) \Bigr], \quad n \ge 0, \tag{17}$$

where $\{\alpha_n\} \subset (0,1)$ and $\{\rho_n\} \subset (0,2)$.
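Recursion (17) is exactly (6) with $\psi \equiv 0$; in the illustrative sketch of Algorithm 3.1 above, this corresponds to

```python
x_min_norm = algorithm_3_1(psi=lambda v: np.zeros_like(v),
                           f=f, grad_f=grad_f, proj_C=proj_C, x0=x0)
```

where f, grad_f, proj_C and x0 are assumed to be defined as before.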

Theorem 3.2 Suppose that the SFP is consistent, that is, $\Gamma \ne \emptyset$. Assume that the following conditions hold:

(a) $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$;

(b) $\inf_n \rho_n (2 - \rho_n) > 0$.

Then $\{x_n\}$ defined by (17) converges strongly to the minimum norm solution of the SFP.

4 Concluding remarks

This work has been devoted to developing and improving self-adaptive methods for solving the split feasibility problem. We have introduced an improved self-adaptive method for solving the split feasibility problem; as a special case, the minimum norm solution of the split feasibility problem can be approached iteratively. This study is motivated by relevant applications to many real-world problems that give rise to mathematical models in the sphere of variational inequality problems.

Declarations

Acknowledgements

The first author was supported in part by NSFC 11071279 and NSFC 71161001-G0105. The third author was partially supported by NSC 101-2628-E-230-001-MY3.

Authors’ Affiliations

(1)
Department of Mathematics, Tianjin Polytechnic University, Tianjin, 300387, China
(2)
Faculty of Applied Sciences, University ‘Politehnica’ of Bucharest, Splaiul Independentei 313, Bucharest, 060042, Romania
(3)
Department of Information Management, Cheng Shiu University, Kaohsiung, 833, Taiwan

References

  1. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. doi:10.1007/BF02142692
  2. Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353–2365. doi:10.1088/0031-9155/51/10/001
  3. Stark H (Ed): Image Recovery: Theory and Applications. Academic Press, San Diego; 1987.
  4. Ceng LC, Ansari QH, Yao JC: An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 2012, 64: 633–642. doi:10.1016/j.camwa.2011.12.074
  5. Ceng LC, Ansari QH, Yao JC: Mann type iterative methods for finding a common solution of split feasibility and fixed point problems. Positivity 2012, 16: 471–495. doi:10.1007/s11117-012-0174-8
  6. Ceng LC, Ansari QH, Yao JC: Relaxed extragradient methods for finding minimum-norm solutions of the split feasibility problem. Nonlinear Anal. 2012, 75: 2116–2125. doi:10.1016/j.na.2011.10.012
  7. Combettes PL: The foundations of set theoretic estimation. Proc. IEEE 1993, 81: 182–208.
  8. Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367–426. doi:10.1137/S0036144593251710
  9. Kiwiel KC: Block-iterative surrogate projection methods for convex feasibility problems. Technical report, Systems Research Institute, Polish Academy of Sciences, Newelska 6, 01-447, Warsaw, Poland (December 1992).
  10. Butnariu D, Censor Y: On the behavior of a block-iterative projection method for solving convex feasibility problems. Int. J. Comput. Math. 1990, 34: 79–94. doi:10.1080/00207169008803865
  11. Censor Y: Row-action methods for huge and sparse systems and their applications. SIAM Rev. 1981, 23: 444–464. doi:10.1137/1023097
  12. Gubin LG, Polyak BT, Raik EV: The method of projection for finding the common point of convex sets. U.S.S.R. Comput. Math. Math. Phys. 1967, 7: 1–24.
  13. Han SP: A successive projection method. Math. Program. 1988, 40: 1–14.
  14. Iusem AN, De Pierro AR: On the convergence of Han's method for convex programming with quadratic objective. Math. Program. 1991, 52: 265–284. doi:10.1007/BF01582891
  15. Bregman LM: The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. U.S.S.R. Comput. Math. Math. Phys. 1967, 7: 200–217.
  16. Byrne C: Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 2002, 18: 441–453. doi:10.1088/0266-5611/18/2/310
  17. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. doi:10.1088/0266-5611/20/1/006
  18. Byrne C: Bregman-Legendre multidistance projection algorithms for convex feasibility and optimization. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Edited by: Butnariu D, Censor Y, Reich S. Elsevier, Amsterdam; 2001: 87–100.
  19. Censor Y, Segal A: The split common fixed point problem for directed operators. J. Convex Anal. 2009, 16: 587–600.
  20. Dang Y, Gao Y: The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 2011, 27: Article ID 015007.
  21. Fukushima M: A relaxed projection method for variational inequalities. Math. Program. 1986, 35: 58–70. doi:10.1007/BF01589441
  22. Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21: 1655–1665. doi:10.1088/0266-5611/21/5/009
  23. Wang F, Xu HK: Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010. doi:10.1155/2010/102085
  24. Wang F, Xu HK: Cyclic algorithms for split feasibility problems in Hilbert spaces. Nonlinear Anal. 2011, 74: 4105–4111. doi:10.1016/j.na.2011.03.044
  25. Wang Z, Yang Q, Yang Y: The relaxed inexact projection methods for the split feasibility problem. Appl. Math. Comput. 2010. doi:10.1016/j.amc.2010.11.058
  26. Xu HK: A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22: 2021–2034. doi:10.1088/0266-5611/22/6/007
  27. Xu HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150: 360–378. doi:10.1007/s10957-011-9837-z
  28. Yang Q, Zhao J: Several solution methods for the split feasibility problem. Inverse Probl. 2005, 21: 1791–1799. doi:10.1088/0266-5611/21/5/017
  29. Zhao J, Yang Q: Self-adaptive projection methods for the multiple-sets split feasibility problem. Inverse Probl. 2011, 27: Article ID 035009.
  30. Yao Y, Wu J, Liou YC: Regularized methods for the split feasibility problem. Abstr. Appl. Anal. 2012, 2012: Article ID 140679.
  31. Ceng LC, Ansari QH, Yao JC: Extragradient-projection method for solving constrained convex minimization problems. Numer. Algebra Control Optim. 2011, 1: 341–359.
  32. Ceng LC, Ansari QH, Yao JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74: 5286–5302. doi:10.1016/j.na.2011.05.005
  33. Ceng LC, Ansari QH, Wen CF: Multi-step implicit iterative methods with regularization for minimization problems and fixed point problems. J. Inequal. Appl. 2013, 2013: Article ID 240. doi:10.1186/1029-242X-2013-240
  34. Ceng LC, Ansari QH, Wen CF: Implicit relaxed and hybrid methods with regularization for minimization problems and asymptotically strict pseudocontractive mappings in the intermediate sense. Abstr. Appl. Anal. 2013, 2013: Article ID 854297.
  35. Goldstein AA: Convex programming in Hilbert space. Bull. Am. Math. Soc. 1964, 70: 709–710. doi:10.1090/S0002-9904-1964-11178-2
  36. Levitin ES, Polyak BT: Constrained minimization problems. U.S.S.R. Comput. Math. Math. Phys. 1966, 6: 1–50.
  37. Han D: Solving linear variational inequality problems by a self-adaptive projection method. Appl. Math. Comput. 2006, 182: 1765–1771. doi:10.1016/j.amc.2006.06.013
  38. Han D: Inexact operator splitting methods with self-adaptive strategy for variational inequality problems. J. Optim. Theory Appl. 2007, 132: 227–243. doi:10.1007/s10957-006-9060-5
  39. Han D, Sun W: A new modified Goldstein-Levitin-Polyak projection method for variational inequality problems. Comput. Math. Appl. 2004, 47: 1817–1825. doi:10.1016/j.camwa.2003.12.002
  40. He BS, He X, Liu H, Wu T: Self-adaptive projection method for co-coercive variational inequalities. Eur. J. Oper. Res. 2009, 196: 43–48. doi:10.1016/j.ejor.2008.03.004
  41. He BS, Yang H, Meng Q, Han D: Modified Goldstein-Levitin-Polyak projection method for asymmetric strong monotone variational inequalities. J. Optim. Theory Appl. 2002, 112: 129–143. doi:10.1023/A:1013048729944
  42. He BS, Yang H, Wang SL: Alternating direction method with self-adaptive penalty parameters for monotone variational inequalities. J. Optim. Theory Appl. 2000, 106: 337–356. doi:10.1023/A:1004603514434
  43. Liao LZ, Wang SL: A self-adaptive projection and contraction method for monotone symmetric linear variational inequalities. Comput. Math. Appl. 2002, 43: 41–48. doi:10.1016/S0898-1221(01)00269-3
  44. Yang Q: On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302: 166–179. doi:10.1016/j.jmaa.2004.07.048
  45. Zhang W, Han D, Li Z: A self-adaptive projection method for solving the multiple-sets split feasibility problem. Inverse Probl. 2009, 25: Article ID 115001.
  46. Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21: 2071–2084. doi:10.1088/0266-5611/21/6/017
  47. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26: Article ID 105018.
  48. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256. doi:10.1112/S0024610702003332
  49. Mainge PE: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16: 899–912. doi:10.1007/s11228-008-0102-z
