Iterative methods for constrained convex minimization problem in Hilbert spaces
 Ming Tian^{1} and
 LiHua Huang^{1}
https://doi.org/10.1186/1687-1812-2013-105
© Tian and Huang; licensee Springer 2013
Received: 14 December 2012
Accepted: 4 April 2013
Published: 18 April 2013
Abstract
In this paper, based on Yamada’s hybrid steepest descent method, a general iterative method is proposed for solving a constrained convex minimization problem. It is proved that the sequences generated by the proposed implicit and explicit schemes converge strongly to a solution of the constrained convex minimization problem, which also solves a certain variational inequality.
MSC: 58E35, 47H09, 65J15.
1 Introduction
Let H be a real Hilbert space with inner product $\langle \cdot ,\cdot \rangle $ and induced norm $\parallel \cdot \parallel $. Let C be a nonempty, closed and convex subset of H. We first recall the classes of nonlinear operators used throughout the paper.
Let $T,A:H\to H$ be nonlinear operators.

T is nonexpansive if $\parallel Tx-Ty\parallel \le \parallel x-y\parallel $ for all $x,y\in H$.

T is Lipschitz continuous if there exists a constant $L>0$ such that $\parallel Tx-Ty\parallel \le L\parallel x-y\parallel $ for all $x,y\in H$.

$A:H\to H$ is monotone if $\langle x-y,Ax-Ay\rangle \ge 0$ for all $x,y\in H$.

Given a number $\eta >0$, $A:H\to H$ is η-strongly monotone if $\langle x-y,Ax-Ay\rangle \ge \eta {\parallel x-y\parallel}^{2}$ for all $x,y\in H$.

Given a number $\upsilon >0$, $A:H\to H$ is υ-inverse strongly monotone (υ-ism) if $\langle x-y,Ax-Ay\rangle \ge \upsilon {\parallel Ax-Ay\parallel}^{2}$ for all $x,y\in H$.
Inverse strongly monotone operators have been studied widely (see [1–3]) and applied to practical problems in various fields, for instance, traffic assignment problems (see [4, 5]).

$T:H\to H$ is said to be an averaged mapping if $T=(1-\alpha )I+\alpha S$, where α is a number in $(0,1)$ and $S:H\to H$ is nonexpansive. In particular, projections are ($1/2$)-averaged mappings.
Averaged mappings have been investigated extensively; see [6–10].
Consider the constrained convex minimization problem

${\mathrm{min}}_{x\in C}f(x),$
(1.1)

where $f:C\to \mathbb{R}$ is a real-valued convex function. Assume that the minimization problem (1.1) is consistent, and let S denote its solution set. The gradient-projection algorithm is one of the powerful methods for solving the minimization problem (1.1) (see [11–18]). Since (1.1) may have more than one solution, regularization is needed; the idea of regularization can be used to design an iterative algorithm for finding the minimum-norm solution of (1.1).
${x}_{\mathrm{min}}$ can be obtained in two steps. First, observing that the gradient $\mathrm{\nabla}{f}_{\alpha}=\mathrm{\nabla}f+\alpha I$ is $(L+\alpha )$-Lipschitzian and α-strongly monotone, the mapping ${Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{\alpha})$ is a contraction with coefficient $\sqrt{1-\gamma (2\alpha -\gamma {(L+\alpha )}^{2})}\le 1-\frac{1}{2}\alpha \gamma $, where $0<\gamma \le \frac{\alpha}{{(L+\alpha )}^{2}}$. So, the regularized problem (1.2) has a unique solution, denoted ${x}_{\alpha}\in C$, which can be obtained via the Banach contraction principle. Secondly, letting $\alpha \to 0$ yields ${x}_{\alpha}\to {x}_{\mathrm{min}}$ in norm. The following result shows that, for suitable choices of γ and α, the minimum-norm solution ${x}_{\mathrm{min}}$ can be obtained in a single step.
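As a concrete illustration of the first step (assuming illustrative toy data that does not come from the paper: a box C and a quadratic f), the following Python sketch runs the Banach fixed-point iteration for the contraction ${Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{\alpha})$:

```python
import numpy as np

# Illustrative data (not from the paper): minimize f(x) = 0.5*||Ax - b||^2
# over the box C = [0, 1]^2, so grad f = A^T(Ax - b) with Lipschitz
# constant L = ||A^T A|| (spectral norm).
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([3.0, -1.0])
L = np.linalg.norm(A.T @ A, 2)

def grad_f(x):
    return A.T @ (A @ x - b)

def proj_C(x):
    # Projection onto the box [0, 1]^2
    return np.clip(x, 0.0, 1.0)

def regularized_solution(alpha, tol=1e-12, max_iter=200000):
    # Fixed-point iteration of the contraction Proj_C(I - gamma*grad f_alpha),
    # where grad f_alpha = grad f + alpha*I and 0 < gamma <= alpha/(L+alpha)^2.
    gamma = alpha / (L + alpha) ** 2
    x = np.zeros(2)
    for _ in range(max_iter):
        x_new = proj_C(x - gamma * (grad_f(x) + alpha * x))
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

x_alpha = regularized_solution(alpha=1e-3)
```

For this data the minimizer of f over C is unique, namely $(1,0)$, so ${x}_{\alpha}$ is already close to it for small α; when (1.1) has several solutions, letting $\alpha \to 0$ selects the minimum-norm one.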
Theorem 1.1 [9]
 (i)
$0<{\gamma}_{n}\le {\alpha}_{n}/{(L+{\alpha}_{n})}^{2}$ for all n;
 (ii)
${\alpha}_{n}\to 0$ (and ${\gamma}_{n}\to 0$) as $n\to \mathrm{\infty}$;
 (iii)
${\sum}_{n=1}^{\mathrm{\infty}}{\alpha}_{n}{\gamma}_{n}=\mathrm{\infty}$;
 (iv)
$(|{\gamma}_{n}-{\gamma}_{n-1}|+|{\alpha}_{n}{\gamma}_{n}-{\alpha}_{n-1}{\gamma}_{n-1}|)/{({\alpha}_{n}{\gamma}_{n})}^{2}\to 0$ as $n\to \mathrm{\infty}$.
Then ${x}_{n}\to {x}_{\mathrm{min}}$ as $n\to \mathrm{\infty}$.
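The displayed scheme of Theorem 1.1 is not reproduced above; the sketch below assumes it is the regularized gradient-projection step of Xu [9], ${x}_{n+1}={Proj}_{C}({x}_{n}-{\gamma}_{n}(\mathrm{\nabla}f({x}_{n})+{\alpha}_{n}{x}_{n}))$, and runs it on an illustrative problem (not from the paper) whose solution set is a whole segment, with minimum-norm solution $(0.5,0.5)$. One can check that ${\alpha}_{n}={n}^{-1/4}$ and ${\gamma}_{n}={\alpha}_{n}/{(L+{\alpha}_{n})}^{2}$ satisfy conditions (i)–(iv).

```python
import numpy as np

# Illustrative problem (not from the paper): f(x) = 0.5*(x1 + x2 - 1)^2
# over C = [-2, 2]^2. The solution set of (1.1) is the segment
# {x in C : x1 + x2 = 1}; its minimum-norm element is (0.5, 0.5).
a = np.array([1.0, 1.0])
L = 2.0                                # Lipschitz constant of grad f = a*(a.x - 1)

def grad_f(x):
    return a * (a @ x - 1.0)

def proj_C(x):
    return np.clip(x, -2.0, 2.0)

x = np.array([2.0, -1.0])              # a solution of (1.1), but not minimum-norm
for n in range(1, 30001):
    alpha = n ** -0.25                 # alpha_n -> 0
    gamma = alpha / (L + alpha) ** 2   # condition (i); sum(alpha_n*gamma_n) diverges
    x = proj_C(x - gamma * (grad_f(x) + alpha * x))
```

Starting from the solution $(2,-1)$, the regularization term steers the iterates along the solution set toward the minimum-norm solution, exactly the selection effect the theorem describes.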
In the assumptions of Theorem 1.1, the sequence $\{{\gamma}_{n}\}$ is forced to tend to zero. If instead it is kept constant, only weak convergence is obtained, as follows.
Theorem 1.2 [19]
Assume that $0<\gamma <2/L$ and ${\sum}_{n=1}^{\mathrm{\infty}}{\alpha}_{n}<\mathrm{\infty}$. Then ${\{{x}_{n}\}}_{n=0}^{\mathrm{\infty}}$ converges weakly to a solution of the minimization problem (1.1).
In this paper, we introduce a modification of algorithm (1.4) which is based on Yamada’s method. It is proved that the sequence generated by our proposed algorithm converges strongly to a minimizer of (1.1), which is also a solution of a certain variational inequality.
2 Preliminaries
In this section, we introduce some useful properties and lemmas which will be used in the proofs for the main results in the next section.
 (i)
If $T=(1-\alpha )S+\alpha V$ for some $\alpha \in (0,1)$, and if S is averaged and V is nonexpansive, then T is averaged.
 (ii)
The composition of finitely many averaged mappings is averaged. That is, if each of the mappings ${\{{T}_{i}\}}_{i=1}^{N}$ is averaged, then so is the composite ${T}_{1}\cdots {T}_{N}$. In particular, if ${T}_{1}$ is ${\alpha}_{1}$-averaged and ${T}_{2}$ is ${\alpha}_{2}$-averaged, where ${\alpha}_{1},{\alpha}_{2}\in (0,1)$, then the composite ${T}_{1}{T}_{2}$ is α-averaged, where $\alpha ={\alpha}_{1}+{\alpha}_{2}-{\alpha}_{1}{\alpha}_{2}$.
 (iii)
If the mappings ${\{{T}_{i}\}}_{i=1}^{N}$ are averaged and have a common fixed point, then $\bigcap _{i=1}^{N}Fix({T}_{i})=Fix({T}_{1}\cdots {T}_{N})$.
Here, the notation $Fix(T)$ denotes the set of fixed points of the mapping T; that is, $Fix(T):=\{x\in H:Tx=x\}$.
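The composition rule in (ii) can be checked numerically. In the illustrative sketch below (not from the paper), ${T}_{1}$ and ${T}_{2}$ are projections onto a disk and a half-plane, each ($1/2$)-averaged, so ${T}_{1}{T}_{2}$ should be α-averaged with $\alpha =1/2+1/2-1/4=3/4$; equivalently, $S=({T}_{1}{T}_{2}-(1-\alpha )I)/\alpha $ must be nonexpansive.

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_disk(x):
    # Projection onto the closed unit disk; projections are (1/2)-averaged.
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def proj_halfplane(x):
    # Projection onto the half-plane {x : x1 >= 0}; also (1/2)-averaged.
    return np.array([max(x[0], 0.0), x[1]])

alpha = 0.5 + 0.5 - 0.5 * 0.5          # composite coefficient: 3/4

def S(x):
    # If T1 T2 = (1 - alpha)*I + alpha*S, then S should be nonexpansive.
    return (proj_disk(proj_halfplane(x)) - (1.0 - alpha) * x) / alpha

# Largest expansion ratio of S over random pairs of points.
worst = 0.0
for _ in range(2000):
    x, y = rng.normal(size=2) * 3.0, rng.normal(size=2) * 3.0
    worst = max(worst, np.linalg.norm(S(x) - S(y)) / np.linalg.norm(x - y))
```

The observed ratio never exceeds 1 (up to rounding), consistent with the lemma.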
 (i)
T is nonexpansive, if and only if the complement $I-T$ is ($1/2$)-ism;
 (ii)
If T is υ-ism, then for $\gamma >0$, γT is ($\upsilon /\gamma $)-ism;
 (iii)
T is averaged, if and only if the complement $I-T$ is υ-ism for some $\upsilon >1/2$; indeed, for $\alpha \in (0,1)$, T is α-averaged, if and only if $I-T$ is ($\frac{1}{2\alpha}$)-ism.
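Property (i) above admits a quick numerical sanity check: for a nonexpansive T (below, an illustrative composition of a rotation and a projection, not taken from the paper), the complement $I-T$ should be ($1/2$)-ism, i.e. $\langle x-y,(I-T)x-(I-T)y\rangle \ge \frac{1}{2}{\parallel (I-T)x-(I-T)y\parallel}^{2}$.

```python
import numpy as np

rng = np.random.default_rng(1)

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def T(x):
    # Nonexpansive map: rotation (an isometry) followed by projection
    # onto the closed unit disk.
    y = R @ x
    n = np.linalg.norm(y)
    return y if n <= 1.0 else y / n

# Check the (1/2)-ism inequality for I - T on random pairs of points.
worst_gap = 0.0
for _ in range(2000):
    x, y = rng.normal(size=2) * 2.0, rng.normal(size=2) * 2.0
    u = (x - T(x)) - (y - T(y))
    lhs = np.dot(x - y, u)
    rhs = 0.5 * np.dot(u, u)
    worst_gap = max(worst_gap, rhs - lhs)   # positive gap would violate the inequality
```

No violation beyond rounding error is observed, in agreement with property (i).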
The so-called demiclosed principle for nonexpansive mappings will often be used.
Lemma 2.3 (Demiclosed Principle [21])
Let C be a closed and convex subset of a Hilbert space H and let $T:C\to C$ be a nonexpansive mapping with $Fix(T)\ne \mathrm{\varnothing}$. If ${\{{x}_{n}\}}_{n=1}^{\mathrm{\infty}}$ is a sequence in C weakly converging to x and if ${\{(I-T){x}_{n}\}}_{n=1}^{\mathrm{\infty}}$ converges strongly to y, then $(I-T)x=y$. In particular, if $y=0$, then $x\in Fix(T)$.
${Proj}_{C}$ is characterized as follows.
 (i)
${\sum}_{n=0}^{\mathrm{\infty}}{\gamma}_{n}=\mathrm{\infty}$;
 (ii)
either ${lim\hspace{0.17em}sup}_{n\to \mathrm{\infty}}{\delta}_{n}\le 0$ or ${\sum}_{n=0}^{\mathrm{\infty}}{\gamma}_{n}{\delta}_{n}<\mathrm{\infty}$;
 (iii)
${\sum}_{n=0}^{\mathrm{\infty}}{\beta}_{n}<\mathrm{\infty}$.
Then ${lim}_{n\to \mathrm{\infty}}{a}_{n}=0$.
We adopt the following notation:

${x}_{n}\to x$ means that ${x}_{n}\to x$ strongly;

${x}_{n}\rightharpoonup x$ means that ${x}_{n}\to x$ weakly.
3 Main results
Recall that, throughout this paper, S denotes the solution set of the constrained convex minimization problem (1.1).
 (i)
${Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{{\lambda}_{s}})=(1-{\theta}_{s})I+{\theta}_{s}{T}_{{\lambda}_{s}}$ and $\gamma \in (0,2/L)$;
 (ii)
${\theta}_{s}=\frac{2+\gamma (L+{\lambda}_{s})}{4}$;
 (iii)
${\lambda}_{s}$ is continuous with respect to s and ${\lambda}_{s}=o(s)$.
The following proposition summarizes the properties of the net $\{{x}_{s}\}$.
 (a)
$\{{x}_{s}\}$ is bounded for $s\in (0,1)$;
 (b)
${lim}_{s\to 0}\parallel {x}_{s}{T}_{{\lambda}_{s}}{x}_{s}\parallel =0$;
 (c)
${x}_{s}$ defines a continuous curve from $(0,1)$ into C.
where $0<\gamma <2/L$ is a constant. It is clear that $\tilde{x}=T\tilde{x}$, i.e., $\tilde{x}\in S=Fix(T)$.
where ${\theta}_{s}=\frac{2+\gamma (L+{\lambda}_{s})}{4}$ and $\theta =\frac{2+\gamma L}{4}$.
Hence, $\{{x}_{s}\}$ is bounded.
where ${\theta}_{s}=\frac{2+\gamma (L+{\lambda}_{s})}{4}$ and ${\theta}_{{s}_{0}}=\frac{2+\gamma (L+{\lambda}_{{s}_{0}})}{4}$.
Since $\{F{T}_{{\lambda}_{s}}({x}_{s})\}$ is bounded, and ${\lambda}_{s}$ is continuous with respect to s, ${x}_{s}$ defines a continuous curve from $(0,1)$ into C. □
The following theorem shows that the net $\{{x}_{s}\}$ converges strongly as $s\to 0$ to a minimizer of (1.1), which solves some variational inequality.
 (i)
${Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{{\lambda}_{s}})=(1-{\theta}_{s})I+{\theta}_{s}{T}_{{\lambda}_{s}}$ and $\gamma \in (0,2/L)$;
 (ii)
${\theta}_{s}=\frac{2+\gamma (L+{\lambda}_{s})}{4}$;
 (iii)
${\lambda}_{s}$ is continuous with respect to s and ${\lambda}_{s}=o(s)$.
Equivalently, we have ${Proj}_{S}(I-\mu F){x}^{\ast}={x}^{\ast}$.
The strong monotonicity of F implies that $\tilde{x}=\hat{x}$, and the uniqueness is proved. Below we use ${x}^{\ast}\in S$ to denote the unique solution of the variational inequality (3.4).
Note that ${Proj}_{C}(I-\gamma \mathrm{\nabla}f)z=z$ and ${Proj}_{C}(I-\gamma \mathrm{\nabla}f)=\frac{2-\gamma L}{4}I+\frac{2+\gamma L}{4}T$, so we get $z=Tz$, i.e., $z\in S=Fix(T)$.
Since $\{{x}_{s}\}$ is bounded, we may take a sequence $\{{s}_{n}\}$ in $(0,1)$ such that ${s}_{n}\to 0$ and ${x}_{{s}_{n}}\rightharpoonup \overline{x}$.
So, by Lemma 2.3, we get $\overline{x}\in Fix(T)=S$.
Since ${\lambda}_{s}=o(s)$, we obtain from (3.8) that ${x}_{{s}_{n}}\to \overline{x}\in S$.
Since ${T}_{{\lambda}_{s}}$ is nonexpansive, $I-{T}_{{\lambda}_{s}}$ is monotone. Note that, for any given $z\in S$, $z=Tz$ and $\langle {Proj}_{C}{y}_{s}-{y}_{s},{Proj}_{C}{y}_{s}-z\rangle \le 0$.
So $\overline{x}\in S$ is a solution of the variational inequality (3.4). We get $\overline{x}={x}^{\ast}$ by uniqueness. Therefore, ${x}_{s}\to {x}^{\ast}$ as $s\to 0$.
 (i)
${Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{{\lambda}_{n}})=(1-{\theta}_{n})I+{\theta}_{n}{T}_{{\lambda}_{n}}$ and $0<\gamma <2/L$;
 (ii)
${\theta}_{n}=\frac{2+\gamma (L+{\lambda}_{n})}{4}$;
 (iii)
${\lambda}_{n}=o({s}_{n})$.
It is proved that the sequence ${\{{x}_{n}\}}_{n=0}^{\mathrm{\infty}}$ converges strongly to a minimizer ${x}^{\ast}\in S$ of (1.1), which also solves the variational inequality (3.4). □
 (C1)
${Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{{\lambda}_{n}})=(1-{\theta}_{n})I+{\theta}_{n}{T}_{{\lambda}_{n}}$ and $\gamma \in (0,2/L)$;
 (C2)
${\theta}_{n}=\frac{2+\gamma (L+{\lambda}_{n})}{4}$ for all n;
 (C3)
${lim}_{n\to \mathrm{\infty}}{s}_{n}=0$ and ${\sum}_{n=0}^{\mathrm{\infty}}{s}_{n}=\mathrm{\infty}$;
 (C4)
${\sum}_{n=0}^{\mathrm{\infty}}|{s}_{n+1}-{s}_{n}|<\mathrm{\infty}$;
 (C5)
${\lambda}_{n}=o({s}_{n})$ and ${\sum}_{n=0}^{\mathrm{\infty}}|{\lambda}_{n+1}-{\lambda}_{n}|<\mathrm{\infty}$.
Then the sequence $\{{x}_{n}\}$ generated by the explicit scheme (3.10) converges strongly to a minimizer ${x}^{\ast}$ of (1.1), which is also a solution of the variational inequality (3.4).
 (a)
$\tilde{x}\in C$ solves the minimization problem (1.1) if and only if $\tilde{x}$ solves the fixed-point equation $\tilde{x}={Proj}_{C}(I-\gamma \mathrm{\nabla}f)\tilde{x}=\frac{2-\gamma L}{4}\tilde{x}+\frac{2+\gamma L}{4}T\tilde{x},$
 (b)
the gradient ∇f is $(1/L)$-ism.
 (c)
${Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{{\lambda}_{n}})$ is $\frac{2+\gamma (L+{\lambda}_{n})}{4}$-averaged for $\gamma \in (0,2/L)$; in particular, the following relation holds: ${Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{{\lambda}_{n}})=\frac{2-\gamma (L+{\lambda}_{n})}{4}I+\frac{2+\gamma (L+{\lambda}_{n})}{4}{T}_{{\lambda}_{n}}=(1-{\theta}_{n})I+{\theta}_{n}{T}_{{\lambda}_{n}}.$
Consequently, $\{{x}_{n}\}$ is bounded, which implies that $\{{T}_{{\lambda}_{n}}({x}_{n})\}$ is also bounded.
By Lemma 2.5, we obtain $\parallel {x}_{n+1}{x}_{n}\parallel \to 0$.
where ${x}^{\ast}\in S$ is a solution of the variational inequality (3.4).
Without loss of generality, we may assume that ${x}_{{n}_{k}}\rightharpoonup \tilde{x}$.
By (3.14), we get $\parallel {x}_{n}T{x}_{n}\parallel \to 0$.
In terms of Lemma 2.3, we get $\tilde{x}\in Fix(T)=S$.
Finally, we show that ${x}_{n}\to {x}^{\ast}$.
Then, ${x}_{n+1}=({Proj}_{C}{y}_{n}-{y}_{n})+{y}_{n}$.
where ${\delta}_{n}=\frac{2}{1+{s}_{n}\tau}\langle -\mu F({x}^{\ast}),{x}_{n+1}-{x}^{\ast}\rangle +\frac{2{\lambda}_{n}}{{s}_{n}}{L}^{\mathrm{\prime}}$.
By (3.15) and ${\lambda}_{n}=o({s}_{n})$, we get ${lim\hspace{0.17em}sup}_{n\to \mathrm{\infty}}{\delta}_{n}\le 0$. Now applying Lemma 2.5 to (3.17) concludes that ${x}_{n}\to {x}^{\ast}$ as $n\to \mathrm{\infty}$. □
4 Application
In this section, we give an application of Theorem 3.3 to the split feasibility problem (SFP for short), which was introduced by Censor and Elfving [22]. Since its inception in 1994, the SFP has received much attention (see [7, 23, 24]) due to its applications in signal processing and image reconstruction, particularly in intensity-modulated radiation therapy.
where C and Q are nonempty, closed and convex subsets of Hilbert spaces ${H}_{1}$ and ${H}_{2}$, respectively, and $B:{H}_{1}\to {H}_{2}$ is a bounded linear operator.
where $0<\gamma <2/{\parallel B\parallel}^{2}$. He obtained that the sequence $\{{x}_{n}\}$ generated by (4.3) converges weakly to a solution of the SFP.
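Equation (4.3) is not reproduced above; the sketch below assumes Byrne's standard CQ iteration ${x}_{n+1}={Proj}_{C}({x}_{n}-\gamma {B}^{\ast}(I-{Proj}_{Q})B{x}_{n})$, which is consistent with the step-size bound $0<\gamma <2/{\parallel B\parallel}^{2}$; the problem data below is purely illustrative.

```python
import numpy as np

# Illustrative SFP instance (not from the paper): C = [0, 1]^2 and
# Q = [0.9, 1.1] x [0.4, 0.6]; x* = (0.5, 0.5) is a solution since
# B @ x* = (1.0, 0.5) lies in Q.
B = np.array([[1.0, 1.0], [0.0, 1.0]])
qlo = np.array([0.9, 0.4])
qhi = np.array([1.1, 0.6])

def proj_C(x):
    return np.clip(x, 0.0, 1.0)

def proj_Q(y):
    return np.clip(y, qlo, qhi)

L = np.linalg.norm(B.T @ B, 2)     # L = ||B||^2 (spectral norm of B^T B)
gamma = 1.0 / L                    # any gamma in (0, 2/L) is admissible

x = np.zeros(2)
for _ in range(5000):
    y = B @ x
    x = proj_C(x - gamma * (B.T @ (y - proj_Q(y))))
```

After the loop, x lies in C by construction, and the residual $\parallel Bx-{Proj}_{Q}Bx\parallel $ is driven to zero, so x approximately solves the SFP.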
 (C1)
${Proj}_{C}(I-\gamma ({B}^{\ast}(I-{Proj}_{Q})B+{\lambda}_{n}I))=(1-{\theta}_{n})I+{\theta}_{n}{T}_{{\lambda}_{n}}$ and $\gamma \in (0,2/L)$;
 (C2)
${\theta}_{n}=\frac{2+\gamma (L+{\lambda}_{n})}{4}$ for all n,
where $F:C\to H$ is a k-Lipschitzian and η-strongly monotone operator with constants $k>0$, $\eta >0$ such that $0<\mu <2\eta /{k}^{2}$. We show that the sequence $\{{x}_{n}\}$ generated by (4.4) converges strongly to a solution of the SFP (4.1) provided the sequences $\{{s}_{n}\}\subset (0,1)$ and $\{{\lambda}_{n}\}$ of parameters satisfy appropriate conditions.
Applying Theorem 3.3, we obtain the following result.
Theorem 4.1 Assume that the split feasibility problem (4.1) is consistent, and let the sequence $\{{x}_{n}\}$ be generated by (4.4), where the sequences $\{{s}_{n}\}\subset (0,1)$ and $\{{\lambda}_{n}\}$ satisfy the conditions (C3)–(C5). Then $\{{x}_{n}\}$ converges strongly to a solution of the split feasibility problem (4.1).
where $L={\parallel B\parallel}^{2}$.
 (C1)
${Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{{\lambda}_{n}})=(1-{\theta}_{n})I+{\theta}_{n}{T}_{{\lambda}_{n}}$ and $\gamma \in (0,2/L)$;
 (C2)
${\theta}_{n}=\frac{2+\gamma (L+{\lambda}_{n})}{4}$ for all n.
The conclusion now follows immediately from Theorem 3.3. □
Declarations
Acknowledgements
The authors wish to thank the referees for their helpful comments, which notably improved the presentation of this manuscript. This work was supported in part by The Fundamental Research Funds for the Central Universities (the Special Fund of Science in Civil Aviation University of China: No. ZXH2012K001), and by the Science Research Foundation of Civil Aviation University of China (No. 2012KYM03).
References
 Brezis H: Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert. North-Holland, Amsterdam; 1973.
 Jaiboon C, Kumam P: A hybrid extragradient viscosity approximation method for solving equilibrium problems and fixed point problems of infinitely many nonexpansive mappings. Fixed Point Theory Appl. 2009. doi:10.1155/2009/374815
 Jitpeera T, Katchang P, Kumam P: A viscosity of Cesàro mean approximation methods for a mixed equilibrium, variational inequalities, and fixed point problems. Fixed Point Theory Appl. 2011. doi:10.1155/2011/945051
 Bertsekas DP, Gafni EM: Projection methods for variational inequalities with applications to the traffic assignment problem. Math. Program. Stud. 1982, 17: 139–159. doi:10.1007/BFb0120965
 Han D, Lo HK: Solving non-additive traffic assignment problems: a descent method for co-coercive variational inequalities. Eur. J. Oper. Res. 2004, 159: 529–544. doi:10.1016/S0377-2217(03)00423-5
 Bauschke H, Borwein J: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367–426. doi:10.1137/S0036144593251710
 Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. doi:10.1088/0266-5611/20/1/006
 Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53: 475–504. doi:10.1080/02331930412331327157
 Xu HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150: 360–378. doi:10.1007/s10957-011-9837-z
 Yamada I: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Edited by: Butnariu D, Censor Y, Reich S. Elsevier, New York; 2001:473–504.
 Levitin ES, Polyak BT: Constrained minimization methods. Zh. Vychisl. Mat. Mat. Fiz. 1966, 6: 787–823.
 Calamai PH, Moré JJ: Projected gradient methods for linearly constrained problems. Math. Program. 1987, 39: 93–116. doi:10.1007/BF02592073
 Polyak BT: Introduction to Optimization. Optimization Software, New York; 1987.
 Su M, Xu HK: Remarks on the gradient-projection algorithm. J. Nonlinear Anal. Optim. 2010, 1: 35–43.
 Jung JS: A general iterative approach to variational inequality problems and optimization problems. Fixed Point Theory Appl. 2011. doi:10.1155/2011/284363
 Jung JS: A general composite iterative method for generalized mixed equilibrium problems, variational inequality problems and optimization problems. J. Inequal. Appl. 2011. doi:10.1186/1029-242X-2011-51
 Jitpeera T, Kumam P: A new iterative algorithm for solving common solutions of generalized mixed equilibrium problems, fixed point problems and variational inclusion problems with minimization problems. Fixed Point Theory Appl. 2012, 2012: Article ID 111. doi:10.1186/1687-1812-2012-111
 Witthayarat U, Jitpeera T, Kumam P: A new modified hybrid steepest-descent by using a viscosity approximation method with a weakly contractive mapping for a system of equilibrium problems and fixed point problems with minimization problems. Abstr. Appl. Anal. 2012, 2012: Article ID 206345.
 Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26: Article ID 105018.
 Martinez-Yanes C, Xu HK: Strong convergence of the CQ method for fixed-point iteration processes. Nonlinear Anal. 2006, 64: 2400–2411. doi:10.1016/j.na.2005.08.018
 Hundal H: An alternating projection that does not converge in norm. Nonlinear Anal. 2004, 57: 35–61. doi:10.1016/j.na.2003.11.004
 Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. doi:10.1007/BF02142692
 López G, Martín-Márquez V, Wang FH, Xu HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28: Article ID 085004.
 Zhao JL, Zhang YJ, Yang QZ: Modified projection methods for the split feasibility problem and the multiple-set split feasibility problem. Appl. Math. Comput. 2012, 219: 1644–1653. doi:10.1016/j.amc.2012.08.005
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.