 Research
 Open Access
Strong convergence of a self-adaptive method for the split feasibility problem
 Yonghong Yao^{1},
 Mihai Postolache^{2} and
 Yeong-Cheng Liou^{3}
https://doi.org/10.1186/1687-1812-2013-201
© Yao et al.; licensee Springer 2013
 Received: 21 May 2013
 Accepted: 10 July 2013
 Published: 25 July 2013
Abstract
Self-adaptive methods, which permit the stepsize to be selected self-adaptively, are effective for solving some important problems, e.g., variational inequality problems. This paper is devoted to developing and improving self-adaptive methods for solving the split feasibility problem. A new improved self-adaptive method is introduced for solving the split feasibility problem. As a special case, the minimum-norm solution of the split feasibility problem can be approached iteratively.
MSC: 47J25, 47J20, 49N45, 65J15.
Keywords
 split feasibility problem
 self-adaptive method
 projection
 minimization problem
 minimum-norm
1 Introduction
As is well known, the split feasibility problem (SFP) was first introduced by Censor and Elfving [1] and has received much attention since its inception in 1994. This is due to its applications in signal processing and image reconstruction, with particular progress in intensity-modulated radiation therapy; see [2–6].
Since the SFP is a special case of the convex feasibility problem (CFP), which is to find a point in the nonempty intersection of finitely many closed convex sets, we briefly review some historical approaches to the CFP. The CFP is an important problem because many real-world inversion and estimation problems in engineering as well as in mathematics can be cast into this framework; see, e.g., Combettes [7], Bauschke and Borwein [8] and Kiwiel [9]. Traditionally, iterative projection methods for solving the CFP employ orthogonal projections onto convex sets (i.e., nearest-point projections with respect to the Euclidean distance); see, e.g., [10–14]. Much work has also been done with generalized distance functions and their associated generalized projections, as suggested by Bregman [15].
In 1994, Censor and Elfving [1] investigated the use of different kinds of generalized projections in a single iterative process for solving the SFP. Their proposal is an iterative algorithm that involves the computation of the inverse of a matrix, which is known to be a difficult task. For this reason, Byrne [16, 17] proposed the so-called CQ algorithm, which generates a sequence by a recursive procedure with a suitable stepsize. The CQ algorithm involves only the computation of the projections onto the sets C and Q, respectively, and is therefore implementable whenever these projections have closed-form expressions (e.g., when C and Q are closed balls or half-spaces). There is a large literature on the CQ method; see, for instance, [18–34]. However, we remark that the determination of the stepsize depends on the operator (matrix) norm (or the dominant eigenvalue of a matrix product). This means that, in order to implement the CQ algorithm, one first has to compute (or at least estimate) the norm of the matrix, which is in general not an easy task in practice.
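To make the discussion concrete, the CQ iteration $x_{k+1}=P_C\bigl(x_k-\tau A^{T}(I-P_Q)Ax_k\bigr)$ can be sketched in a few lines of Python for the illustrative case where C and Q are closed balls, so both projections have closed-form expressions. The sets, the matrix A and the iteration count below are hypothetical choices for illustration, not data from the paper:

```python
import numpy as np

def project_ball(x, center, radius):
    """Closed-form projection onto the closed ball B(center, radius)."""
    d = x - center
    nd = np.linalg.norm(d)
    return x if nd <= radius else center + (radius / nd) * d

def cq_algorithm(A, proj_C, proj_Q, x0, tau, n_iter=500):
    """Byrne's CQ iteration with a fixed stepsize tau in (0, 2/||A||^2)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        Ax = A @ x
        x = proj_C(x - tau * (A.T @ (Ax - proj_Q(Ax))))
    return x

# Hypothetical instance: C and Q are unit balls centered at the origin.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
proj_C = lambda x: project_ball(x, np.zeros(2), 1.0)
proj_Q = lambda y: project_ball(y, np.zeros(2), 1.0)
tau = 1.0 / np.linalg.norm(A, 2) ** 2       # safely inside (0, 2/||A||^2)
x = cq_algorithm(A, proj_C, proj_Q, np.array([3.0, 3.0]), tau)
```

Note that computing `tau` already requires knowledge of $\|A\|$, which is precisely the practical difficulty discussed in the sequel.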
To overcome this difficulty, the so-called self-adaptive method, which permits the stepsize to be selected self-adaptively, was developed. Note that this method is the application of the projection method of Goldstein [35] and Levitin and Polyak [36] to a suitable variational inequality problem, and it is among the simplest numerical methods for solving variational inequality problems. Nevertheless, the efficiency of this projection method depends strongly on the choice of the stepsize parameter. If one chooses a parameter small enough to guarantee convergence of the iterative sequence, the recursion converges slowly. On the other hand, if one chooses a large stepsize to improve the speed of convergence, the generated sequence may fail to converge. In real applications to variational inequality problems, the Lipschitz constant may be difficult to estimate, even when the underlying mapping is linear, as is the case for the SFP. Several self-adaptive methods for solving variational inequality problems have been developed from the original Goldstein-Levitin-Polyak method [35, 36]; see, e.g., [37–45].
Motivated by the self-adaptive strategy, Zhang et al. [45] proposed a method using variable stepsizes instead of the fixed stepsizes of Censor et al. [46]. Also, Zhao and Yang [29] introduced a self-adaptive projection method that adopts Armijo-like searches. The advantage of these algorithms lies in the fact that neither prior information about the norm of the matrix A nor any other conditions on Q and A are required, while convergence is still guaranteed.
In this paper, we further develop and improve self-adaptive methods for solving the SFP. An improved self-adaptive method is introduced for solving the SFP. As a special case, the minimum-norm solution of the SFP can be approached iteratively.
2 Framework and preliminary results
Next, we use Γ to denote the solution set of the SFP, i.e., $\mathrm{\Gamma}=\{x\in C:Ax\in Q\}$.
where the stepsize $\tau_n$ is chosen in the interval $(0,2/\|A\|^{2})$. It is remarkable that the CQ algorithm involves only the computation of the projections $P_C$ and $P_Q$ onto the sets C and Q, respectively, and is therefore implementable whenever $P_C$ and $P_Q$ have closed-form expressions (e.g., when C and Q are closed balls or half-spaces). However, we observe that the determination of the stepsize $\tau_n$ depends on the operator (matrix) norm $\|A\|$ (or the largest eigenvalue of $A^{\ast}A$). This means that, for practical implementation of the CQ algorithm, one first has to compute (or at least estimate) the norm of A, which is in general not an easy task in practice.
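One common workaround in practice, before turning to self-adaptive stepsizes, is to estimate the largest eigenvalue of $A^{\ast}A$ numerically, e.g., by power iteration. The following sketch (with a hypothetical matrix and iteration count) illustrates the idea:

```python
import numpy as np

def spectral_norm_sq(A, n_iter=100, seed=0):
    """Estimate ||A||^2, i.e. the largest eigenvalue of A^T A, by power iteration."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = A.T @ (A @ v)          # one application of A^T A
        v = w / np.linalg.norm(w)
    return v @ (A.T @ (A @ v))     # Rayleigh quotient at the final iterate

A = np.array([[3.0, 1.0], [0.0, 2.0]])
est = spectral_norm_sq(A)
# Any fixed stepsize tau in (0, 2/est) is then admissible for the CQ algorithm.
```

This only shifts the difficulty, however: for large or implicitly given operators even such an estimate can be expensive, which motivates the self-adaptive strategies below.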
where ${\tau}_{n}$, the stepsize at iteration n, is chosen in the interval $(0,2/L)$, where L is the Lipschitz constant of ∇f.
The above method (5) can be viewed as the application of the projection method of Goldstein [35] and Levitin and Polyak [36] to the variational inequality problem (4), and it is among the simplest numerical methods for solving variational inequality problems. Nevertheless, the efficiency of this projection method depends greatly on the choice of the parameter $\tau_n$. A small $\tau_n$ guarantees convergence of the iterative sequence, but the recursion is slow; a large stepsize improves the speed of convergence, but the generated sequence may fail to converge. In real applications to variational inequality problems, the Lipschitz constant may be difficult to estimate, even when the underlying mapping is linear, as is the case for the SFP.
The methods in Zhang et al. [45] and Censor et al. [46] were proposed for solving the multiple-sets split feasibility problem.
Algorithm 2.1 S1. Given a nonnegative sequence $\{\tau_n\}$ such that $\sum_{n=0}^{\infty}\tau_n<\infty$, $\delta\in(0,1)$, $\mu\in(0,1)$, $\rho\in(0,1)$, $\epsilon>0$, $\beta_0>0$, and an arbitrary initial point $x_0$, set $\gamma_0=\beta_0$ and $n=0$.
then set ${\gamma}_{n+1}=(1+{\tau}_{n+1}){\beta}_{n+1}$; otherwise, set ${\gamma}_{n+1}={\beta}_{n+1}$.
S4. If $\parallel e({x}_{n},{\beta}_{n})\parallel \le \u03f5$, stop; otherwise, set $n:=n+1$ and go to S2.
The following self-adaptive projection method, which adopts Armijo-like searches, was introduced by Zhao and Yang [29].
The advantage of Algorithm 2.1 and Algorithm 2.2 lies in the fact that neither prior information about the norm of the matrix A nor any other conditions on Q and A are required, while convergence is still guaranteed.
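The Armijo-like search underlying such self-adaptive methods can be sketched as follows: starting from a trial stepsize, backtrack by a fixed factor until a sufficient-decrease-type condition holds, so that no knowledge of $\|A\|$ is needed. The acceptance rule and the parameter values below are an illustrative variant of this idea, not the exact rule of [29]:

```python
import numpy as np

def grad_f(A, proj_Q, x):
    """Gradient of f(x) = 0.5 * ||(I - P_Q) A x||^2, namely A^T (I - P_Q) A x."""
    Ax = A @ x
    return A.T @ (Ax - proj_Q(Ax))

def armijo_step(A, proj_C, proj_Q, x, beta0=1.0, mu=0.5, rho=0.5, max_back=50):
    """Backtrack beta until beta * ||grad f(x) - grad f(z)|| <= mu * ||x - z||,
    where z = P_C(x - beta * grad f(x)); return the stepsize and next iterate."""
    g = grad_f(A, proj_Q, x)
    beta = beta0
    for _ in range(max_back):
        z = proj_C(x - beta * g)
        if np.linalg.norm(x - z) == 0.0:          # x is already a fixed point
            return beta, z
        if beta * np.linalg.norm(g - grad_f(A, proj_Q, z)) <= mu * np.linalg.norm(x - z):
            return beta, z
        beta *= rho                               # shrink the trial stepsize
    return beta, proj_C(x - beta * g)

# Hypothetical instance: C and Q are unit balls, A is a diagonal matrix.
proj_ball = lambda y: y if np.linalg.norm(y) <= 1.0 else y / np.linalg.norm(y)
A = np.array([[2.0, 0.0], [0.0, 1.0]])
x = np.array([3.0, 3.0])
for _ in range(500):
    _, x = armijo_step(A, proj_ball, proj_ball, x)
# x approaches the solution set {x in C : Ax in Q}; no norm of A was used.
```

Since $\nabla f$ is Lipschitz with constant $\|A\|^{2}$, the backtracking loop terminates once $\beta\le\mu/\|A\|^{2}$, so the accepted stepsizes stay bounded away from zero.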
We shall now introduce our improved self-adaptive method for solving the SFP. To this end, we need the following ingredients.
 (a)
$\|P_C(x)-P_C(y)\|\le \|x-y\|$ for all $x,y\in H$;
 (b)
$\langle x-y, P_C(x)-P_C(y)\rangle \ge \|P_C(x)-P_C(y)\|^{2}$ for every $x,y\in H$;
 (c)
$\langle x-P_C(x), y-P_C(x)\rangle \le 0$ for all $x\in H$ and $y\in C$.
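These three standard properties of the metric projection can be checked numerically for a projection available in closed form, e.g., onto a half-space $\{u : \langle a,u\rangle \le b\}$, for which $P_C(x)=x-\max\bigl(0,(\langle a,x\rangle-b)/\|a\|^{2}\bigr)a$. The particular half-space and test points below are arbitrary illustrative choices:

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Closed-form projection onto the half-space {u : <a, u> <= b}."""
    return x - max(0.0, (a @ x - b) / (a @ a)) * a

rng = np.random.default_rng(1)
a, b = np.array([1.0, 2.0, -1.0]), 0.5
x, y = rng.standard_normal(3), rng.standard_normal(3)
Px, Py = proj_halfspace(x, a, b), proj_halfspace(y, a, b)

# (a) nonexpansiveness: ||P_C(x) - P_C(y)|| <= ||x - y||
assert np.linalg.norm(Px - Py) <= np.linalg.norm(x - y) + 1e-12
# (b) firm nonexpansiveness: <x - y, P_C(x) - P_C(y)> >= ||P_C(x) - P_C(y)||^2
assert (x - y) @ (Px - Py) >= np.linalg.norm(Px - Py) ** 2 - 1e-12
# (c) variational characterization: <x - P_C(x), c - P_C(x)> <= 0 for c in C
c = proj_halfspace(rng.standard_normal(3), a, b)   # an arbitrary point of C
assert (x - Px) @ (c - Px) <= 1e-12
```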
Next we adopt the following notation:

${x}_{n}\to x$ means that ${x}_{n}$ converges strongly to x;

${x}_{n}\rightharpoonup x$ means that ${x}_{n}$ converges weakly to x;

${\omega}_{w}({x}_{n}):=\{x:\mathrm{\exists}{x}_{{n}_{j}}\rightharpoonup x\}$ is the weak ωlimit set of the sequence $\{{x}_{n}\}$.
f is said to be w-lsc (weakly lower semicontinuous) on H if it is w-lsc at every point $x\in H$.
The first lemma is easy to prove.
Lemma 2.1 [14]
 (i)
f is convex and differentiable;
 (ii)
f is w-lsc on C.
Lemma 2.2 [47]
where $\gamma >0$.
Lemma 2.3 [48]
 (1)
${\sum}_{n=1}^{\mathrm{\infty}}{\gamma}_{n}=\mathrm{\infty}$;
 (2)
${lim\hspace{0.17em}sup}_{n\to \mathrm{\infty}}\frac{{\delta}_{n}}{{\gamma}_{n}}\le 0$ or ${\sum}_{n=1}^{\mathrm{\infty}}{\delta}_{n}<\mathrm{\infty}$.
Then ${lim}_{n\to \mathrm{\infty}}{a}_{n}=0$.
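Lemma 2.3 is the standard tool behind strong convergence arguments of this type. A quick numerical illustration with the hypothetical choices $\gamma_n=1/(n+1)$ (so $\sum\gamma_n=\infty$) and $\delta_n=1/(n+1)^{2}$ (so $\sum\delta_n<\infty$) shows the decay the lemma guarantees:

```python
# Worst case of Lemma 2.3: take equality in a_{n+1} <= (1 - gamma_n) a_n + delta_n.
# Any nonnegative sequence satisfying the inequality is bounded above by this one.
a = 1.0
for n in range(1, 200_000):
    gamma = 1.0 / (n + 1)        # gamma_n in (0, 1), sum gamma_n = infinity
    delta = 1.0 / (n + 1) ** 2   # sum delta_n < infinity
    a = (1.0 - gamma) * a + delta
print(a)   # close to 0, as the lemma predicts
```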
Lemma 2.4 [49]
3 Main results
In this section we state and prove our main results.
Let C and Q be nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively. Let $\psi:C\to H_1$ be a δ-contraction with $\delta\in[0,\frac{\sqrt{2}}{2})$. Let $A:H_1\to H_2$ be a bounded linear operator.
where $\{{\alpha}_{n}\}\subset (0,1)$ and $\{{\rho}_{n}\}\subset (0,2)$.
 (a)
${lim}_{n\to \mathrm{\infty}}{\alpha}_{n}=0$ and ${\sum}_{n=1}^{\mathrm{\infty}}{\alpha}_{n}=\mathrm{\infty}$;
 (b)
${inf}_{n}{\rho}_{n}(2{\rho}_{n})>0$.
Hence, $\{{x}_{n}\}$ is bounded.
Now, we consider two possible cases.
Since $\{{x}_{n}\}$ is bounded, there exists a subsequence $\{{x}_{{n}_{k}}\}$ of $\{{x}_{n}\}$ converging weakly to $\tilde{x}\in C$.
Applying Lemma 2.3 to (14), we get ${s}_{n}\to 0$.
This implies that every weak cluster point of $\{{x}_{\tau (n)}\}$ is in the solution set Γ; i.e., ${\omega}_{w}({x}_{\tau (n)})\subset \mathrm{\Gamma}$.
Therefore, ${s}_{n}\to 0$. That is, ${x}_{n}\to z$. This completes the proof. □
From Theorem 3.1, we can deduce easily the following algorithm and corollary.
where $\{{\alpha}_{n}\}\subset (0,1)$ and $\{{\rho}_{n}\}\subset (0,2)$.
 (a)
${lim}_{n\to \mathrm{\infty}}{\alpha}_{n}=0$ and ${\sum}_{n=1}^{\mathrm{\infty}}{\alpha}_{n}=\mathrm{\infty}$;
 (b)
${inf}_{n}{\rho}_{n}(2{\rho}_{n})>0$.
Then $\{{x}_{n}\}$ defined by (17) converges strongly to the minimum norm solution of the SFP.
4 Concluding remarks
In this work we have developed and improved self-adaptive methods for solving the split feasibility problem. We have introduced an improved self-adaptive method for solving this problem and, as a special case, shown that the minimum-norm solution of the split feasibility problem can be approached iteratively. This study is motivated by applications to many real-world problems that give rise to mathematical models in the sphere of variational inequality problems.
Declarations
Acknowledgements
The first author was supported in part by NSFC 11071279 and NSFC 71161001G0105. The third author was partially supported by NSC 1012628E230001MY3.
References
 Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. 10.1007/BF02142692
 Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353–2365. 10.1088/0031-9155/51/10/001
 Stark H (Ed): Image Recovery: Theory and Applications. Academic Press, San Diego; 1987.
 Ceng LC, Ansari QH, Yao JC: An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 2012, 64: 633–642. 10.1016/j.camwa.2011.12.074
 Ceng LC, Ansari QH, Yao JC: Mann type iterative methods for finding a common solution of split feasibility and fixed point problems. Positivity 2012, 16: 471–495. 10.1007/s11117-012-0174-8
 Ceng LC, Ansari QH, Yao JC: Relaxed extragradient methods for finding minimum-norm solutions of the split feasibility problem. Nonlinear Anal. 2012, 75: 2116–2125. 10.1016/j.na.2011.10.012
 Combettes PL: The foundations of set theoretic estimation. Proc. IEEE 1993, 81: 182–208.
 Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367–426. 10.1137/S0036144593251710
 Kiwiel KC: Block-iterative surrogate projection methods for convex feasibility problems. Technical report, Systems Research Inst., Polish Academy of Sciences, Newelska 6, 01-447 Warsaw, Poland (December 1992)
 Butnariu D, Censor Y: On the behavior of a block-iterative projection method for solving convex feasibility problems. Int. J. Comput. Math. 1990, 34: 79–94. 10.1080/00207169008803865
 Censor Y: Row-action methods for huge and sparse systems and their applications. SIAM Rev. 1981, 23: 444–464. 10.1137/1023097
 Gubin LG, Polyak BT, Raik EV: The method of projection for finding the common point of convex sets. U.S.S.R. Comput. Math. Math. Phys. 1967, 7: 1–24.
 Han SP: A successive projection method. Math. Program. 1988, 40: 1–14.
 Iusem AN, De Pierro AR: On the convergence of Han's method for convex programming with quadratic objective. Math. Program. 1991, 52: 265–284. 10.1007/BF01582891
 Bregman LM: The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. U.S.S.R. Comput. Math. Math. Phys. 1967, 7: 200–217.
 Byrne C: Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 2002, 18: 441–453. 10.1088/0266-5611/18/2/310
 Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006
 Byrne C: Bregman-Legendre multidistance projection algorithms for convex feasibility and optimization. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Edited by: Butnariu D, Censor Y, Reich S. Elsevier, Amsterdam; 2001:87–100.
 Censor Y, Segal A: The split common fixed point problem for directed operators. J. Convex Anal. 2009, 16: 587–600.
 Dang Y, Gao Y: The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 2011, 27: Article ID 015007
 Fukushima M: A relaxed projection method for variational inequalities. Math. Program. 1986, 35: 58–70. 10.1007/BF01589441
 Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21: 1655–1665. 10.1088/0266-5611/21/5/009
 Wang F, Xu HK: Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010. 10.1155/2010/102085
 Wang F, Xu HK: Cyclic algorithms for split feasibility problems in Hilbert spaces. Nonlinear Anal. 2011, 74: 4105–4111. 10.1016/j.na.2011.03.044
 Wang Z, Yang Q, Yang Y: The relaxed inexact projection methods for the split feasibility problem. Appl. Math. Comput. 2010. 10.1016/j.amc.2010.11.058
 Xu HK: A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22: 2021–2034. 10.1088/0266-5611/22/6/007
 Xu HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150: 360–378. 10.1007/s10957-011-9837-z
 Yang Q, Zhao J: Several solution methods for the split feasibility problem. Inverse Probl. 2005, 21: 1791–1799. 10.1088/0266-5611/21/5/017
 Zhao J, Yang Q: Self-adaptive projection methods for the multiple-sets split feasibility problem. Inverse Probl. 2011, 27: Article ID 035009
 Yao Y, Wu J, Liou YC: Regularized methods for the split feasibility problem. Abstr. Appl. Anal. 2012, 2012: Article ID 140679
 Ceng LC, Ansari QH, Yao JC: Extragradient-projection method for solving constrained convex minimization problems. Numer. Algebra Control Optim. 2011, 1: 341–359.
 Ceng LC, Ansari QH, Yao JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74: 5286–5302. 10.1016/j.na.2011.05.005
 Ceng LC, Ansari QH, Wen CF: Multi-step implicit iterative methods with regularization for minimization problems and fixed point problems. J. Inequal. Appl. 2013, 2013: Article ID 240. 10.1186/1029-242X-2013-240
 Ceng LC, Ansari QH, Wen CF: Implicit relaxed and hybrid methods with regularization for minimization problems and asymptotically strict pseudocontractive mappings in the intermediate sense. Abstr. Appl. Anal. 2013, 2013: Article ID 854297
 Goldstein AA: Convex programming in Hilbert space. Bull. Am. Math. Soc. 1964, 70: 709–710. 10.1090/S0002-9904-1964-11178-2
 Levitin ES, Polyak BT: Constrained minimization problems. U.S.S.R. Comput. Math. Math. Phys. 1966, 6: 1–50.
 Han D: Solving linear variational inequality problems by a self-adaptive projection method. Appl. Math. Comput. 2006, 182: 1765–1771. 10.1016/j.amc.2006.06.013
 Han D: Inexact operator splitting methods with self-adaptive strategy for variational inequality problems. J. Optim. Theory Appl. 2007, 132: 227–243. 10.1007/s10957-006-9060-5
 Han D, Sun W: A new modified Goldstein-Levitin-Polyak projection method for variational inequality problems. Comput. Math. Appl. 2004, 47: 1817–1825. 10.1016/j.camwa.2003.12.002
 He BS, He X, Liu H, Wu T: Self-adaptive projection method for co-coercive variational inequalities. Eur. J. Oper. Res. 2009, 196: 43–48. 10.1016/j.ejor.2008.03.004
 He BS, Yang H, Meng Q, Han D: Modified Goldstein-Levitin-Polyak projection method for asymmetric strong monotone variational inequalities. J. Optim. Theory Appl. 2002, 112: 129–143. 10.1023/A:1013048729944
 He BS, Yang H, Wang SL: Alternating direction method with self-adaptive penalty parameters for monotone variational inequalities. J. Optim. Theory Appl. 2000, 106: 337–356. 10.1023/A:1004603514434
 Liao LZ, Wang SL: A self-adaptive projection and contraction method for monotone symmetric linear variational inequalities. Comput. Math. Appl. 2002, 43: 41–48. 10.1016/S0898-1221(01)00269-3
 Yang Q: On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302: 166–179. 10.1016/j.jmaa.2004.07.048
 Zhang W, Han D, Li Z: A self-adaptive projection method for solving the multiple-sets split feasibility problem. Inverse Probl. 2009, 25: Article ID 115001
 Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21: 2071–2084. 10.1088/0266-5611/21/6/017
 Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26: Article ID 105018
 Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256. 10.1112/S0024610702003332
 Mainge PE: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16: 899–912. 10.1007/s11228-008-0102-z
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.