Mathematical programming for the sum of two convex functions with applications to lasso problem, split feasibility problems, and image deblurring problem
- Chih Sheng Chuang^{1},
- Zenn-Tsun Yu^{2} and
- Lai-Jiu Lin^{3}
https://doi.org/10.1186/s13663-015-0388-0
© Chuang et al. 2015
- Received: 14 May 2015
- Accepted: 23 July 2015
- Published: 19 August 2015
Abstract
In this paper, two iteration processes are used to find solutions of the mathematical programming problem for the sum of two convex functions. In an infinite dimensional Hilbert space, we establish two strong convergence theorems for this problem. As applications of our results, we give strong convergence theorems for the split feasibility problem with a modified CQ method, a strong convergence theorem for the lasso problem, and strong convergence theorems for mathematical programming with a modified proximal point algorithm and a modified gradient-projection method in an infinite dimensional Hilbert space. We also apply our result on the lasso problem to the image deblurring problem. Some numerical examples are given to demonstrate our results. The main result of this paper entails a unified study of many types of optimization problems. Our algorithms for these problems differ from those in the existing literature. Some results of this paper are original, and some improve, extend, and unify comparable existing results in the literature.
Keywords
- lasso problem
- mathematical programming for the sum of two functions
- split feasibility problem
- gradient-projection algorithm
- proximal point algorithm
MSC
- 90C33
- 90C34
- 90C59
1 Introduction
Due to the involvement of the \(\ell_{1}\) norm, which promotes the sparsity observed in many real world problems arising from image/signal processing, statistical regression, machine learning, and so on, the lasso has received much attention (see Combettes and Wajs [2], Xu [3], and Wang and Xu [4]).
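For readers implementing lasso-type methods, the sparsity mechanism of the \(\ell_{1}\) norm is visible in its proximal operator, the entrywise soft-thresholding map \(\operatorname{prox}_{\lambda\|\cdot\|_{1}}(x)_{i}=\operatorname{sign}(x_{i})\max(|x_{i}|-\lambda,0)\). The following is a minimal illustrative sketch, not part of the algorithms of this paper:

```python
import numpy as np

def prox_l1(x, lam):
    """Proximal operator of lam * ||.||_1: entrywise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Soft-thresholding shrinks every entry toward zero and sets small entries
# exactly to zero -- the mechanism by which the l1 norm promotes sparsity.
x = np.array([3.0, -0.2, 1.5, 0.0])
print(prox_l1(x, 1.0))        # entries shrunk by 1, small entries zeroed
```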
Theorem 1.1
(Douglas-Rachford-algorithm) [5]
- (i)
\(\operatorname{prox}_{\gamma g} x\in\arg\min_{x\in H}(f+g)(x)\);
- (ii)
\(\{y_{n}-z_{n}\}_{n\in\mathbb{N}}\) converges strongly to 0;
- (iii)
\(\{x_{n}\}_{n\in\mathbb{N}}\) converges weakly to x;
- (iv)
\(\{y_{n}\}_{n\in\mathbb{N}}\) and \(\{z_{n}\}_{n\in\mathbb{N}}\) converge weakly to \(\operatorname{prox}_{\gamma g}x\);
- (v) suppose that one of the following holds:
- (a)
f is uniformly convex on every nonempty bounded subset of \(\operatorname{dom}\partial f\);
- (b)
g is uniformly convex on every nonempty bounded subset of \(\operatorname{dom}\partial g\).
Theorem 1.2
(Forward-backward algorithm) [6]
- (i)
\((x_{n})_{n\in\mathbb{N}}\) converges weakly to a point in \(\arg\min_{x\in H}(f+g)(x)\);
- (ii) suppose that \(\inf_{n\in\mathbb{N}}\lambda_{n}\in (0,\infty)\) and one of the following holds:
- (a)
f is uniformly convex on every nonempty bounded subset of \(\operatorname{dom}\partial f\);
- (b)
g is uniformly convex on every nonempty bounded subset of H.
Theorem 1.3
(Tseng’s algorithm) [7]
- (a)
\(\{x_{n}\}_{n\in\mathbb{N}}\) and \(\{z_{n}\}_{n\in\mathbb{N}}\) converge weakly to a point in \(C\cap\arg\min_{x\in H}(f+g)(x)\);
- (b)
suppose that f or g is uniformly convex on every nonempty subset of \(\operatorname{dom}\partial f\).
- (a)
\(x_{n+1}:=\operatorname{prox}_{\lambda_{n}g}((1-\gamma_{n})x_{n}-\delta _{n}\nabla f(x_{n}))\) for all \(n\in\mathbb{N}\);
- (b)
\(x_{n+1}:=(I-\gamma_{n}\operatorname{prox}_{\lambda_{n}g}(I-\delta _{n}\nabla f))x_{n}\) for all \(n\in\mathbb{N}\).
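As a numerical illustration of iteration (a), consider the one dimensional problem \(\min_{x}\frac{1}{2}(x-3)^{2}+|x|\), whose exact minimizer is 2. The sketch below uses the illustrative parameter choices \(\delta_{n}=\lambda_{n}=0.5\) and \(\gamma_{n}=\frac{1}{10n}\); these are for demonstration only and are not the conditions assumed in our theorems:

```python
# A 1-D sketch of iteration (a),
#   x_{n+1} = prox_{lambda_n g}((1 - gamma_n) x_n - delta_n grad_f(x_n)),
# applied to f(x) = (x - 3)^2 / 2 and g(x) = |x|; the minimizer of f + g is 2.

def soft(x, lam):
    """Proximal operator of lam * |.| (soft-thresholding)."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

x = 0.0
delta = 0.5                                # step size delta_n, held constant here
for n in range(1, 201):
    gamma = 1.0 / (10 * n)                 # vanishing coefficient gamma_n
    x = soft((1 - gamma) * x - delta * (x - 3.0), delta)

print(x)                                   # close to the minimizer 2
```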
Let I be the identity mapping on H, and let \(f_{1}: C\times C\rightarrow \mathbb{R}\) be a bifunction. Let \(g_{1}:H\rightarrow (-\infty ,\infty]\) be a proper convex Fréchet differentiable function with Fréchet derivative \(\nabla g_{1}\) on \(\operatorname{int}(\operatorname{dom}(g_{1}))\), where \(C \subset \operatorname{int}(\operatorname{dom}(g_{1}))\), and let \(h_{1}:H\rightarrow (-\infty,\infty]\) be a proper convex lower semicontinuous function. Let \(P_{C}\) be the metric projection of H onto C. Throughout this paper, we use these notations unless specified otherwise.
Motivated by the results of the above problems, in this paper, we introduce the following iterations to study problem (1.1).
Iteration (I)
Iteration (II)
Then we establish two strong convergence theorems without the uniform convexity assumption on the functions we consider. Our results improve Combettes and Wajs [2], Xu [3], Douglas-Rachford [5], Theorem 1.2, and Tseng [7]. Our results are also different from those of Wang and Xu [4].
- (AP1) Split feasibility problem:$$ \text{Find } \bar{x}\in H \text{ such that } \bar{x}\in C \text{ and }A\bar{x}\in Q, $$(SFP)where \(A:H\rightarrow H_{1}\) is a bounded linear operator.
In 1994, the split feasibility problem (SFP) in finite dimensional Hilbert spaces was first introduced by Censor and Elfving [8] for modeling inverse problems which arise from phase retrieval and medical image reconstruction. Since then, the split feasibility problem (SFP) has received much attention due to its applications in signal processing and image reconstruction, with particular progress in intensity-modulated radiation therapy, as well as in approximation theory, control theory, biomedical engineering, communications, and geophysics. For examples, one can refer to [8–13] and the related literature.
- (AP2) Lasso problem:$$ \mathop{\arg\min}_{x\in\mathbb{R}^{n}}\frac{1}{2}\|Ax-b\|^{2}_{2}+ \gamma\|x\|_{1}. $$
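A standard way to approach (AP2) numerically is the proximal-gradient (ISTA) iteration \(x_{k+1}=\operatorname{prox}_{t\gamma\|\cdot\|_{1}}(x_{k}-tA^{T}(Ax_{k}-b))\) with step size \(t=1/\|A\|^{2}\). The sketch below uses synthetic, purely illustrative data:

```python
import numpy as np

# Proximal-gradient (ISTA) sketch for the lasso:
#   x_{k+1} = prox_{t*gamma*||.||_1}(x_k - t * A^T (A x_k - b)),  t = 1/||A||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])    # a sparse signal to recover
b = A @ x_true

gamma = 0.1                                      # regularization weight
t = 1.0 / np.linalg.norm(A, 2) ** 2              # 1 / Lipschitz constant of the gradient
x = np.zeros(5)
for _ in range(2000):
    z = x - t * (A.T @ (A @ x - b))              # forward (gradient) step
    x = np.sign(z) * np.maximum(np.abs(z) - t * gamma, 0.0)  # backward (prox) step

print(np.round(x, 3))                            # a sparse approximation of x_true
```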
- (AP3) Mathematical programming for a convex function:$$ \mathop{\arg\min}_{x\in H}h_{1}(x), \mbox{ where }h_{1}\in\Gamma_{0}(H). $$
- (AP4) Mathematical programming for a convex Fréchet differentiable function:$$ \mbox{Find } \mathop{\arg\min}_{x\in H}g_{1}(x), \mbox{ where } g_{1}:H\rightarrow \mathbb {R} \mbox{ is a Fr\'{e}chet differentiable function}. $$
A special case of one of our iterations is a modified gradient-projection algorithm. We use this modified gradient-projection algorithm to establish a strong convergence theorem for problem (AP4), and our results improve recent results given by Xu in [16].
In this paper, we apply a recent result of Yu and Lin [17] to find solutions of the mathematical programming problem for the sum of two convex functions; we then apply our results on this problem to study the above problems. We establish strong convergence theorems for these problems and apply our result on the lasso problem to the image deblurring problem. Some numerical examples are given to demonstrate our results. The main result of this paper gives a unified study of many types of optimization problems. Our algorithms for these problems differ from those in the existing literature. Some results of this paper are original, and some improve, extend, and unify comparable existing results in the literature.
2 Preliminaries
- (i)
T is called nonexpansive if \(\|Tx-Ty\|\leq\|x-y\|\) for all \(x,y\in C\).
- (ii)
T is strongly monotone if there exists \(\bar{\gamma}> 0\) such that \(\langle x-y, Tx-Ty\rangle\geq\bar{\gamma}\|x-y\|^{2}\) for all \(x, y\in C\).
- (iii)
T is Lipschitz continuous if there exists \(L > 0\) such that \(\|Tx-Ty\|\leq L\|x-y\|\) for all \(x,y\in C\).
- (iv)
Let \(\alpha>0\). Then T is α-inverse-strongly monotone if \(\langle x-y,Tx-Ty\rangle \geq\alpha\|Tx-Ty\|^{2}\) for all \(x,y\in C\). We say T is α-ism if T is α-inverse-strongly monotone.
- (v) T is firmly nonexpansive if$$\|Tx-Ty\|^{2}\leq\|x-y\|^{2}-\bigl\Vert (I-T)x-(I-T)y\bigr\Vert ^{2}\quad \text{for every }x,y\in C, $$that is,$$\|Tx-Ty\|^{2}\leq\langle x-y,Tx-Ty\rangle\quad \text{for every }x,y\in C. $$
- (i)
B is a monotone operator on H if \(\langle x-y,u-v\rangle \geq0\) for all \(x, y\in D(B)\), \(u\in Bx\) and \(v\in By\).
- (ii)
B is a maximal monotone operator on H if B is a monotone operator on H and its graph is not properly contained in the graph of any other monotone operator on H.
Lemma 2.1
[18]
- (i)
For each \(r>0\), \(J_{r}^{G_{1}}\) is single-valued and firmly nonexpansive.
- (ii)
\(\mathcal{D}(J_{r}^{G_{1}})=H_{1}\) and \(\operatorname{Fix}(J_{r}^{G_{1}})=\{ x\in \mathcal{D}(G_{1}):0\in G_{1}x\}\).
- (A1)
\(g(x,x)=0\) for each \(x\in C\);
- (A2)
g is monotone, i.e., \(g(x, y) +g(y, x)\leq0\) for any \(x,y\in C\);
- (A3)
for each \(x,y,z\in C\), \(\limsup_{t\downarrow0}g(tz+(1-t)x,y)\leq g(x,y)\);
- (A4)
for each \(x\in C\), the scalar function \(y\rightarrow g(x,y)\) is convex and lower semicontinuous.
We have the following result from Blum and Oettli [20].
Lemma 2.2
[20]
In 2005, Combettes and Hirstoaga [21] established the following important properties of the resolvent operator.
Lemma 2.3
[21]
- (i)
\(T_{r}^{g}\) is single-valued;
- (ii)
\(T_{r}^{g}\) is firmly nonexpansive, that is, \(\|T_{r}^{g}x-T_{r}^{g}y\|^{2}\leq\langle x-y,T_{r}^{g}x-T_{r}^{g}y\rangle\) for all \(x,y\in H\);
- (iii)
\(\{x\in H: T_{r}^{g}x=x\}=\{x\in C: g(x,y)\geq0, \forall y\in C\}\);
- (iv)
\(\{x\in C: g(x,y)\geq0, \forall y\in C\}\) is a closed and convex subset of C.
We call such \(T_{r}^{g}\) the resolvent of g for \(r>0\).
Takahashi et al. [22] gave the following lemma.
Lemma 2.4
[22]
Let \(f:H\rightarrow (-\infty,\infty]\) be proper. The subdifferential ∂f of f is the set-valued operator defined by \(\partial f(x)= \{u\in H: f(y)\geq f(x)+\langle y-x,u\rangle\text{ for all } y\in H\}\).
Let \(x\in H\). Then f is subdifferentiable at x if \(\partial f(x)\neq\emptyset\). The elements of \(\partial f(x)\) are called the subgradients of f at x.
Let \(x\in \operatorname{dom} f\). If the directional derivative \(f'(x,y)\) exists for every \(y\in H\) and \(y\mapsto f'(x,y)\) is linear and continuous, then f is said to be Gâteaux differentiable at x. By the Riesz representation theorem, there exists a unique vector \(\nabla f(x)\in H\) such that \(f'(x,y)=\langle y,\nabla f(x)\rangle\) for all \(y\in H\).
Lemma 2.5
[4]
- (i)
If C is a nonempty, closed, convex subset of H and \(g=i_{C}\) is the indicator function of C, then the proximal operator \(\operatorname{prox}_{\lambda g}=P_{C}\) for all \(\lambda\in(0,\infty)\), where \(P_{C}\) is the metric projection operator from H to C.
- (ii)
\(\operatorname{prox}_{\lambda g}\) is firmly nonexpansive.
- (iii)
\(\operatorname{prox}_{\lambda g}=(I+\lambda\partial g)^{-1}=J_{\lambda }^{\partial g}\).
Lemma 2.6
[3]
Let \(f,g\in\Gamma _{0}(H)\). Let \(x^{*}\in H\) and \(\lambda\in(0,\infty)\). Assume that f is finite valued and Fréchet differentiable function on H with Fréchet derivative ∇f. Then \(x^{*}\) is a solution to the problem \(\arg\min_{x\in H}f(x)+g(x)\) if and only if \(x^{*}=\operatorname{prox}_{\lambda g}(I-\lambda\nabla f)x^{*}\).
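Lemma 2.6 can be checked numerically on a simple example. With \(f(x)=\frac{1}{2}(x-3)^{2}\) and \(g(x)=|x|\) on \(\mathbb{R}\), the minimizer of \(f+g\) is \(x^{*}=2\), and it is a fixed point of \(\operatorname{prox}_{\lambda g}(I-\lambda\nabla f)\) for every \(\lambda>0\), while non-minimizers are not; an illustrative sketch:

```python
# A 1-D numerical check of the fixed-point characterization in Lemma 2.6:
# with f(x) = (x - 3)^2 / 2 and g(x) = |x|, the minimizer of f + g is x* = 2.

def soft(x, lam):
    """Proximal operator of lam * |.| (soft-thresholding)."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def T(x, lam):
    """The prox-gradient map prox_{lam g}(x - lam * f'(x)), with f'(x) = x - 3."""
    return soft(x - lam * (x - 3.0), lam)

for lam in (0.3, 0.7, 1.5):
    print(lam, T(2.0, lam))   # always 2.0: x* = 2 is a fixed point for every lam
print(T(1.0, 0.7))            # 1.7: x = 1 is not a fixed point, hence not a minimizer
```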
Lemma 2.7
[23]
Let \(C\subset H\) be a nonempty, closed, convex subset, let \(A:H\rightarrow H\), and let \(f:H\rightarrow \mathbb{R}\) be convex and Fréchet differentiable. Let A be the Fréchet derivative of f. Then \(\operatorname{VI}(C, A)=\arg\min_{x\in C}f(x)\).
Lemma 2.9
[6]
- (i)
\(\operatorname{dom} f\cap \operatorname{int}(\operatorname{dom} g)\neq\emptyset\);
- (ii)
\(\operatorname{dom} g=H\).
A mapping \(T_{\alpha}:H\rightarrow H\) is said to be averaged if \(T_{\alpha}=(1-\alpha)I +\alpha T\), where \(\alpha\in(0, 1)\) and \(T: H\rightarrow H \) is nonexpansive. In this case, we say that \(T_{\alpha}\) is α-averaged. Clearly, a firmly nonexpansive mapping is \(\frac{1}{2}\)-averaged.
Lemma 2.10
[24]
- (i)
T is nonexpansive if and only if the complement \((I-T)\) is \(1/2\)-ism;
- (ii)
if S is υ-ism, then for \(\gamma>0\), γS is \(\upsilon/\gamma\)-ism;
- (iii)
S is averaged if and only if the complement \(I-S\) is υ-ism for some \(\upsilon> 1/2\);
- (iv)
if S and T are both averaged, then the product (composite) ST is averaged;
- (v)
if the mappings \(\{T_{i}\}_{i=1}^{n}\) are averaged and have a common fixed point, then \(\bigcap_{i=1}^{n}\operatorname{Fix}(T_{i}) = \operatorname{Fix}(T_{1}\cdots T_{n})\).
Lemma 2.11
[6]
Let \(f:H\rightarrow (-\infty,\infty]\) be proper and convex. Suppose that f is Gâteaux differentiable at x. Then \(\partial f(x)=\{\nabla f(x)\}\).
3 Common solution of variational inequality problem, fixed point, and Ky Fan inequalities problem
For each \(i=1,2\), let \(F_{i}:C\rightarrow H\) be a \(\kappa _{i}\)-inverse-strongly monotone mapping of C into H with \(\kappa_{i}>0\). For each \(i=1,2\), let \(G_{i}\) be a maximal monotone mapping on H such that the domain of \(G_{i}\) is included in C and define the set \(G_{i}^{-1}0\) as \(G_{i}^{-1}0= \{x\in H: 0\in G_{i}x\}\). Let \(J_{\lambda}^{G_{1}}=(I+\lambda G_{1})^{-1}\) and \(J_{r}^{G_{2}}=(I+r G_{2})^{-1}\) for each \(\lambda >0\) and \(r>0\). Let \(\{\theta_{n}\}\subset H\) be a sequence. Let V be a \(\bar{\gamma}\)-strongly monotone and L-Lipschitz continuous operator with \(\bar{\gamma}>0\) and \(L > 0\). Let \(T : C\rightarrow H\) be a nonexpansive mapping. Throughout this paper, we use these notations and assumptions unless specified otherwise.
- (i)
\(0<\liminf_{n\rightarrow\infty}\alpha_{n}\leq\limsup_{n\rightarrow\infty}\alpha_{n}<1\);
- (ii)
\(\lim_{n\rightarrow \infty}\beta_{n}=0\), and \(\sum_{n=1}^{\infty}\beta_{n}=\infty\);
- (iii)
\(\lim_{n\rightarrow\infty}\theta_{n}=0\).
The following strong convergence theorem is needed in this paper.
Theorem 3.1
[17]
Theorem 3.2
Proof
As a simple consequence of Theorem 3.2, we study the common solution of the Ky Fan inequalities problems.
Theorem 3.3
Proof
Let \(I|_{C}\) and \(i_{C}\) be the restriction of the identity function to C and the indicator function of C, respectively, and let \(T=I|_{C}\), \(f_{2}=i_{C}\) in Theorem 3.2; then Theorem 3.3 follows from Theorem 3.2. □
Theorem 3.4
4 Mathematical programming for the sum of two convex functions
Theorem 4.1
Proof
Example 4.1
Let \(h_{1}(x)=x^{2}\), \(g_{1}(x)=x^{2}+2x+1\), \(C=[-1,1]\), \(H=\mathbb{R}\), \(\lambda=1\), \(V=I\), \(\alpha_{n}=\frac{1}{2}\) for all \(n\in \mathbb{N}\), \(\beta_{n}=\frac{1}{1\text{,}000n}\), \(\theta_{n}=0\), and \(f_{1}(x,y)=\langle y-x,\nabla g_{1}(x)\rangle+h_{1}(y)-h_{1}(x)\). Then \(f_{1}(x,y)=(y-x)(3x+y+2)\); this implies that \(f_{1}(-\frac{1}{2},y)=(y+\frac{1}{2})^{2}\geq0\) for all \(y\in[-1,1]\), and hence \(-\frac{1}{2}\in H-\operatorname{VI}([-1,1],\nabla g_{1},h_{1})\neq\emptyset\).
We also see \(A_{f_{1}}(-1)=(-\infty,-2]\), \(A_{f_{1}}(1)=[6,\infty)\), and \(A_{f_{1}}(x)=4x+2\) if \(x\in(-1,1)\). Let \(y_{n}=J_{1}^{A_{f_{1}}}P_{C}x_{n}\), \(x_{n+1}=\frac{1}{2}x_{n}+\frac{1}{2}(1-\frac{1}{1\text{,}000n})y_{n}\).
It is easy to see that \(P_{C}x_{n}=5y_{n}+2\) and \(y_{n}=\frac{1}{5}(P_{C}x_{n}-2)\).
Hence \(x_{n+1}=\frac{1}{2}x_{n}+\frac{1}{10}(1-\frac{1}{1\text{,}000n})(P_{C}x_{n}-2)\). It is easy to see all the conditions of Theorem 4.1 are satisfied.
Let \(x_{1}=0\); then \(x_{2}=-0.1998\), \(x_{3}=-0.31977001\), \(x_{4}=-0.39177967\), \(x_{5}=-0.435008\), … , and we see \(\lim_{n\rightarrow \infty} x_{n}=\bar{x}=-\frac {1}{2}\in\arg\min_{x\in [-1,1]}(g_{1}(x)+h_{1}(x))\).
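The iterates reported above can be reproduced directly; a short illustrative script:

```python
# Reproducing the iterates of Example 4.1:
#   x_{n+1} = x_n / 2 + (1/10)(1 - 1/(1000 n))(P_C x_n - 2), C = [-1, 1], x_1 = 0.

def P_C(x):
    """Metric projection onto C = [-1, 1]."""
    return max(-1.0, min(1.0, x))

x = 0.0                       # x_1
iterates = [x]
for n in range(1, 1001):
    x = 0.5 * x + 0.1 * (1.0 - 1.0 / (1000 * n)) * (P_C(x) - 2.0)
    iterates.append(x)

print(iterates[1])            # x_2 = -0.1998
print(iterates[2])            # x_3 = -0.31977001
print(iterates[-1])           # approaches the solution -1/2
```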
Corollary 4.1
Proof
By Lemma 2.7, we know that \(\operatorname{VI}(C,\nabla g_{1})=\arg\min_{y\in C}g_{1}(y)\). Therefore, Corollary 4.1 follows immediately from Theorem 4.1 by letting \(h_{1}=0\). □
Corollary 4.2
Theorem 4.2
We give two different proofs for this theorem.
Proof I
Proof II
Remark 4.1
- (a)
- (b)
Example 4.2
We see all conditions of Theorem 4.2 are satisfied.
We obtain \((I-\nabla g_{1})x_{n}=(I+\partial h_{1})y_{n}=y_{n}+2y_{n}=3y_{n}=x_{n}-2x_{n}-2\).
Corollary 4.3
Remark 4.2
Theorem 4.3
Proof
Remark 4.3
Corollary 4.4
5 Split feasibility problems and lasso problems
In the following theorem, a modified Byrne CQ iteration is used to find the solution of the following split feasibility problem: Find \(\bar{x}\in C\) such that \(A\bar{x}\in Q\).
Theorem 5.1
Proof
Remark 5.1
In 2010, Xu [14] used various algorithms to establish weak convergence theorems in infinite dimensional Hilbert spaces for the split feasibility problem (see Theorems 3.1, 3.3, 3.4, 4.1 and 5.7 of [14]). Also, Xu [14] established a strong convergence theorem for this problem in the infinite dimensional Hilbert space (see Theorem 5.5 of [14]). Now, Theorem 5.1 gives an algorithm for the split feasibility problem which converges strongly to a solution of that problem, and this result improves Byrne's CQ algorithm [9] and the results given by Xu [14].
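For comparison, the classical Byrne CQ iteration \(x_{n+1}=P_{C}(x_{n}-\gamma A^{*}(Ax_{n}-P_{Q}Ax_{n}))\) with \(\gamma\in(0,2/\|A\|^{2})\) can be sketched on a toy problem; the sets, matrix, and step size below are illustrative choices, not the modified scheme of Theorem 5.1:

```python
import numpy as np

# Byrne CQ sketch for the SFP with C = [0,1]^2, Q = [2,3], A = [1 1]:
# find x in C with A x in Q; here x = (1, 1) with A x = 2 is a solution.
A = np.array([[1.0, 1.0]])
P_C = lambda x: np.clip(x, 0.0, 1.0)       # projection onto the box C
P_Q = lambda y: np.clip(y, 2.0, 3.0)       # projection onto the interval Q

gam = 0.5                                  # gamma < 2 / ||A||^2 = 1
x = np.array([0.0, 0.3])
for _ in range(100):
    r = A @ x - P_Q(A @ x)                 # residual A x - P_Q(A x)
    x = P_C(x - gam * (A.T @ r))

print(x, A @ x)                            # x in C with A x in Q
```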
By Theorem 5.1, we get the following result for split feasibility problem.
Corollary 5.1
Proof
Let \(\theta_{n}=\theta=0\) for all \(n\in \mathbb{N}\) and \(V=I\) in Theorem 5.1, then Corollary 5.1 follows from Theorem 5.1. □
Theorem 5.2
Proof
The following is a special case of Theorem 5.2.
Corollary 5.2
Proof
Let \(\theta_{n}=\theta=0\) for all \(n\in\mathbb{N}\) and \(V=I\) in Theorem 5.2, then Corollary 5.2 follows from Theorem 5.2. □
Applying Corollary 4.1, an iteration is used to find a solution of the split feasibility problem: find \(\bar{x}\in C\) such that \(A\bar{x}\in Q\).
Theorem 5.3
Proof
Let \(g_{1}(x)=\frac{\|Ax-P_{Q}Ax\|^{2}_{2}}{2}\). Then \(g_{1}\) is Fréchet differentiable with Fréchet derivative \(\nabla g_{1}=A^{*}(A-P_{Q}A)\), and \(\nabla g_{1}\) is a Lipschitz function with Lipschitz constant \(\|A\|^{2}\). Applying Corollary 4.1 and following the same argument as in Theorem 5.1, we can prove Theorem 5.3. □
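The closed form \(\nabla g_{1}=A^{*}(A-P_{Q}A)\) used above can be verified numerically by finite differences on a toy problem; the matrix and the set Q below are illustrative:

```python
import numpy as np

# Finite-difference check that g1(x) = ||A x - P_Q(A x)||^2 / 2
# has gradient A^T (A x - P_Q(A x)).
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
P_Q = lambda y: np.clip(y, 0.0, 1.0)       # Q = [0,1]^3, an illustrative choice

g1 = lambda x: 0.5 * np.sum((A @ x - P_Q(A @ x)) ** 2)
grad = lambda x: A.T @ (A @ x - P_Q(A @ x))

x = rng.standard_normal(4)
eps = 1e-6
num = np.array([(g1(x + eps * e) - g1(x - eps * e)) / (2 * eps)
                for e in np.eye(4)])       # central differences, coordinate by coordinate
print(np.max(np.abs(num - grad(x))))       # small: the closed form matches
```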
6 Image deblurring problem
This section mainly focuses on the image deblurring problem, which has received a lot of attention in recent years. To date, researchers have proposed many novel algorithms for this problem based on different deblurring models; for example, see [26]. Now, by Corollary 5.2, we can consider the image deblurring problem.
All pixels of the original images described in the examples were first scaled into the range between 0 and 1.
Remark 6.1
In the literature, we may observe that there are many fast algorithms for the image deblurring problem. Here, we show that we can also consider this problem by Corollary 5.2.
7 Conclusion and remarks
In this paper, we apply a recent fixed point theorem in [17] to study mathematical programming for the sum of two convex functions, mathematical programming of convex function, the split feasibility problem, and the lasso problem. We establish strong convergence theorems as regards these problems. The study of such problems will give many other applications in science, nonlinear analysis, and statistics.
Declarations
Acknowledgements
Prof. CS Chuang was supported by the National Science Council of the Republic of China.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
- Tibshirani, R: Regression shrinkage and selection via the lasso. J. R. Stat. Soc., Ser. B 58, 267-288 (1996)
- Combettes, PL, Wajs, VR: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4, 1168-1200 (2005)
- Xu, HK: Properties and iterative methods for the lasso and its variants. Chin. Ann. Math., Ser. B 35, 501-518 (2014)
- Wang, Y, Xu, HK: Strong convergence for the proximal gradient methods. J. Nonlinear Convex Anal. 15, 581-593 (2014)
- Douglas, J, Rachford, HH: On the numerical solution of heat conduction problems in two and three space variables. Trans. Am. Math. Soc. 82, 421-439 (1956)
- Bauschke, HH, Combettes, PL: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011)
- Tseng, P: Further applications of a splitting algorithm to decomposition in variational inequalities and convex programming. Math. Program., Ser. B 48, 249-263 (1990)
- Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)
- Byrne, C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441-453 (2002)
- Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)
- Censor, Y, Bortfeld, T, Martin, B, Trofimov, A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353-2365 (2006)
- López, G, Martín-Márquez, V, Xu, HK: Iterative algorithms for the multiple-sets split feasibility problem. In: Censor, Y, Jiang, M, Wang, G (eds.) Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems, pp. 243-279. Medical Physics Publishing, Madison (2010)
- Stark, H: Image Recovery: Theory and Applications. Academic Press, San Diego (1987)
- Xu, HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 26, 105018 (2010)
- Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877-898 (1976)
- Xu, HK: Averaged mappings and the gradient projection algorithm. J. Optim. Theory Appl. 150, 360-378 (2011)
- Yu, ZT, Lin, LJ: Hierarchical problems with applications to mathematical programming with multiple sets split feasibility constraints. Fixed Point Theory Appl. 2013, 283 (2013)
- Takahashi, W: Nonlinear Functional Analysis: Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama (2000)
- Fan, K: A minimax inequality and applications. In: Shisha, O (ed.) Inequalities III, pp. 103-113. Academic Press, San Diego (1972)
- Blum, E, Oettli, W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123-146 (1994)
- Combettes, PL, Hirstoaga, SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6, 117-136 (2005)
- Takahashi, S, Takahashi, W, Toyoda, M: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 147, 27-41 (2010)
- Ekeland, I, Temam, R: Convex Analysis and Variational Problems. North-Holland, Amsterdam (1976)
- Combettes, PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 53(5-6), 475-504 (2004)
- Baillon, JB, Haddad, G: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 26, 137-150 (1977)
- Chambolle, A: An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 20, 89-97 (2004)