
Iterative methods for solving the multiple-sets split feasibility problem with splitting self-adaptive step size

Abstract

We introduce an iterative algorithm for solving the multiple-sets split feasibility problem with splitting self-adaptive step size. This step size is calculated directly from the iteration process without need to know the spectral norm of linear operators. We also generalize the chosen step size to a relaxed iterative algorithm. Theoretical convergence is proved in an infinite dimensional Hilbert space. Some numerical experiments are presented to verify the effectiveness of our proposed methods.

1 Introduction

Linear inverse problems arise in many real-world applications such as signal and image processing, medical image reconstruction, etc. In 2005, Censor et al. [1] first introduced the multiple-sets split feasibility problem, motivated by the inverse problem of intensity-modulated radiation therapy (IMRT). The multiple-sets split feasibility problem (MSSFP for short) is to find a point in the intersection of a family of closed convex sets in one space such that its image under a linear transformation A lies in the intersection of another family of closed convex sets in the image space. The MSSFP includes as a special case the two-set split feasibility problem, which was originally proposed by Censor and Elfving [2] and is usually called the split feasibility problem (SFP) for simplicity. Byrne [3, 4] proposed the so-called CQ algorithm to solve the SFP. It has been proved that the CQ algorithm is equivalent to a gradient projection algorithm for a suitable constrained optimization problem; see, for example, Xu [5].

The CQ algorithm uses a fixed step size that relies on the spectral norm of the linear operator A. Qu and Xiu [6] improved it by using an Armijo-like search to solve the SFP, for which no prior knowledge of the norm of A is needed. They also applied the Armijo-like search to a relaxed CQ algorithm. The relaxed CQ algorithm with constant step size was introduced by Yang [7] to solve the SFP in the case where the closed convex sets are level sets of convex functions. In comparison with the CQ algorithm, the relaxed CQ algorithm uses orthogonal projections onto half-spaces instead of projections onto the original convex sets. Since projections onto half-spaces can be computed in closed form, this greatly reduces the work of computing projections. Lopez et al. [8] introduced a new self-adaptive step size to improve the CQ and relaxed CQ algorithms, which likewise does not require knowledge of the norm of A. This step size is a modification of that of Yang [9]. They proved the convergence of an iterative sequence with the new self-adaptive step size and weakened the assumptions used by Yang [9].

Since the intersection of a family of closed convex sets is itself a closed convex set, the MSSFP can also be viewed as an SFP. However, iterative projection methods for the SFP cannot be transferred directly to the MSSFP, because one would need the projection onto the intersection of a family of sets instead of onto a single set. To solve the MSSFP, Censor et al. [1] defined a function (see (3)) measuring the distance of a point to all the sets and proposed a gradient projection algorithm. In [10], Xu reformulated the MSSFP as finding a common fixed point of a finite family of averaged mappings and, inspired by fixed point methods, proposed several iterative algorithms to solve it. Note that these algorithms also use a fixed step size. To overcome this shortcoming, Zhang et al. [11] proposed a self-adaptive projection method for solving the MSSFP, inspired by the work of He et al. [12]. Zhao and Yang [13] generalized the method of [6] to the MSSFP and also gave a self-adaptive projection method. These algorithms share a common feature: an inner iteration must be carried out before each update of the iterate. See also [14–16]. Recently, Zhao and Yang [17] introduced a simple self-adaptive step-size rule for solving the MSSFP, in which the step size is computed from the objective function and its gradient information without any inner iteration. Inspired by this idea, Wen et al. [18] suggested a self-adaptive step size improving the results of Xu [10] and proposed cyclic and simultaneous iterative algorithms with self-adaptive step size to solve the MSSFP.

In this paper, we propose a new iterative algorithm for solving the multiple-sets split feasibility problem. The algorithm employs a splitting self-adaptive step size, so no prior knowledge of the norm of the linear operator is needed. Further, we give a relaxed version of the algorithm for the case where the closed convex sets are level sets of convex functions. The convergence results are proved in infinite dimensional Hilbert spaces. To verify the effectiveness of the proposed methods, we also report some numerical experiments.

The paper is organized as follows. In the next section, we introduce some definitions and lemmas that will be used in the sequel. In Section 3, we propose an iterative algorithm with splitting self-adaptive step size and prove its convergence. A relaxed iterative algorithm with splitting self-adaptive step size is proposed in Section 4. In Section 5, we present some preliminary numerical experiments to test the proposed methods and compare them with existing ones. We give some conclusions in the final section.

2 Preliminaries

Let H be a real Hilbert space with inner product \(\langle\cdot, \cdot\rangle\) and norm \(\|\cdot\|\). We adopt the following notations: Ω denotes the solution set of the MSSFP; \(x^{k} \rightarrow x\) (\(x^{k} \rightharpoonup x\)) means that \(\{x^{k}\}\) converges strongly (weakly) to x; \(\omega_{w}(x^{k})\) denotes the set of weak cluster points of the sequence \(\{x^{k}\}\).

In this section, we introduce some definitions and basic results.

Definition 2.1

([19])

Let T be a mapping from \(C\subseteq H\) into H. Then

(i) T is said to be nonexpansive if \(\|Tx-Ty\|\leq\|x-y\|\), \(\forall x,y\in C\);

(ii) T is said to be firmly nonexpansive if \(\langle x-y, Tx - Ty \rangle\geq\|Tx-Ty\|^{2}\), \(\forall x,y \in C\);

(iii) T is said to be an averaged mapping if there exist a nonexpansive mapping S and a real number \(t \in(0,1)\) satisfying \(T = (1-t)I + t S\), where I represents the identity mapping.

It is easily seen that a firmly nonexpansive mapping is nonexpansive due to the Cauchy-Schwarz inequality. Recall that the orthogonal projection \(P_{C}\) from H onto a nonempty closed convex subset \(C\subset H\) is defined by the following:

$$P_{C}(x) = \arg\min_{y\in C} \|x-y\|. $$
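For example, when C is the nonnegative orthant \(\{x\in\mathbb{R}^{n} : x_{l}\geq0,\ l=1,\ldots,n\}\), the projection is simply the componentwise truncation \((P_{C}(x))_{l}=\max\{x_{l},0\}\).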

The orthogonal projection has the following well-known properties.

Lemma 2.1

([19])

Let C be a nonempty, closed and convex set in H. Then, for all \(x, y\in H\) and \(z\in C\),

(i) \(\langle x - P_{C}x, z- P_{C}x \rangle\leq0\);

(ii) \(\|P_{C}x - P_{C}y\|^{2} \leq\langle P_{C}x - P_{C}y, x-y \rangle\);

(iii) \(\|P_{C}x - z\|^{2} \leq\|x-z\|^{2} - \|P_{C}x -x\|^{2}\).

We see from Lemma 2.1 that the orthogonal projection mapping is firmly nonexpansive, hence nonexpansive. Moreover, it is not hard to show that \(I-P_{C}\) is also firmly nonexpansive, hence nonexpansive.

The mathematical form of the MSSFP can be formulated as finding a point \(x^{*}\) with the property

$$ x^{*}\in\bigcap_{i=1}^{t}C_{i} \text{ such that } Ax^{*} \in\bigcap_{j=1}^{r}Q_{j}, $$
(1)

where \(t, r\geq1\) are positive integers, \(\{C_{i}\}_{i=1}^{t}\subseteq H_{1}\) and \(\{Q_{j}\}_{j=1}^{r}\subseteq H_{2}\) are closed convex sets of Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, and \(A:H_{1} \rightarrow H_{2}\) is a bounded linear operator. Letting \(t=r=1\), the MSSFP reduces to the split feasibility problem (SFP):

$$ \text{Find a point } x^{*}\in C \text{ such that } Ax^{*}\in Q, $$
(2)

where \(C\subseteq H_{1}\) and \(Q\subseteq H_{2}\) are nonempty, closed and convex sets, respectively.

Censor et al. [1] first defined the following function \(g(x)\) to measure the distance of a point to all the sets:

$$ g(x) := \frac{1}{2}\sum_{i=1}^{t} \alpha_{i} \bigl\Vert x-P_{C_{i}}(x)\bigr\Vert ^{2} + \frac{1}{2}\sum_{j=1}^{r} \beta_{j} \bigl\Vert Ax - P_{Q_{j}}(Ax)\bigr\Vert ^{2}, $$
(3)

where \(\alpha_{i} >0\), \(\beta_{j} >0\) for all i and j, respectively, and \(\sum_{i=1}^{t}\alpha_{i} + \sum_{j=1}^{r}\beta_{j} =1\). They proved the following results.

Proposition 2.1

([1])

Suppose that the solution set of the MSSFP is nonempty, then the following statements hold:

(i) \(x^{*}\) is a solution of the MSSFP if and only if \(g(x^{*})=0\);

(ii) the proximity function \(g(x)\) is convex and differentiable with gradient

$$ \nabla g(x) = \sum_{i=1}^{t} \alpha_{i} (x-P_{C_{i}}x) + \sum_{j=1}^{r} \beta_{j} A^{*}(I-P_{Q_{j}}) (Ax), $$
(4)

and \(\nabla g(x)\) is Lipschitz continuous with constant \(L=\sum_{i=1}^{t}\alpha_{i} + \|A\|^{2}\sum_{j=1}^{r}\beta_{j}\).
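To make the roles of (3) and (4) concrete, the following minimal NumPy sketch evaluates the proximity function and its gradient in the finite dimensional case. The callables realizing \(P_{C_{i}}\) and \(P_{Q_{j}}\) and all names are illustrative assumptions, not from the original paper:

```python
import numpy as np

def proximity(x, A, projs_C, projs_Q, alpha, beta):
    """Evaluate the proximity function g(x) of (3) and its gradient (4).

    A           : (m, n) real matrix, so A^* = A.T;
    projs_C     : callables realizing P_{C_i} on R^n;
    projs_Q     : callables realizing P_{Q_j} on R^m;
    alpha, beta : positive weights with sum(alpha) + sum(beta) = 1.
    """
    x = np.asarray(x, dtype=float)
    Ax = A @ x
    g, grad = 0.0, np.zeros_like(x)
    for a, P in zip(alpha, projs_C):
        r = x - P(x)                 # x - P_{C_i}(x)
        g += 0.5 * a * (r @ r)
        grad += a * r
    for b, P in zip(beta, projs_Q):
        s = Ax - P(Ax)               # Ax - P_{Q_j}(Ax)
        g += 0.5 * b * (s @ s)
        grad += b * (A.T @ s)        # A^*(I - P_{Q_j})(Ax)
    return g, grad
```

By Proposition 2.1(i), driving `g` to zero is equivalent to solving the MSSFP.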

Fejér monotone sequences are very useful in the convergence analysis of iterative optimization algorithms.

Definition 2.2

([19])

Let C be a nonempty subset of H and let \(\{x^{k}\}\) be a sequence in H. Then \(\{x^{k}\}\) is Fejér monotone with respect to C if

$$\bigl\Vert x^{k+1}-z\bigr\Vert \leq\bigl\Vert x^{k} -z \bigr\Vert ,\quad \forall z\in C. $$

It is easy to see that a Fejér monotone sequence \(\{x^{k}\}\) is bounded and that, for each \(z\in C\), the limit \(\lim_{k\rightarrow\infty}\|x^{k} - z\|\) exists.

The demiclosedness principle for nonexpansive mappings is well known in Hilbert spaces.

Lemma 2.2

(Demiclosedness principle of nonexpansive mappings [19])

Let C be a closed convex subset of H, \(T:C\rightarrow C\) be a nonexpansive mapping with a nonempty fixed point set. If \(\{x^{k}\}\) is a sequence in C converging weakly to x and \(\{(I-T)x^{k}\}\) converges strongly to y, then \((I-T)x=y\). In particular, if \(y=0\), then \(x=Tx\).

The following lemma is essential in establishing theoretical convergence results for some iteration methods.

Lemma 2.3

([19])

Let K be a nonempty, closed and convex subset of a Hilbert space H. Let \(\{x^{k}\}\) be a sequence in H satisfying the properties:

(i) \(\lim_{k\rightarrow\infty}\|x^{k} - x\|\) exists for each \(x\in K\);

(ii) \(\omega_{w}(x^{k})\subset K\).

Then \(\{x^{k}\}\) converges weakly to a point in K.

We will use convex functions to define the closed convex sets \(\{C_{i}\}_{i=1}^{t}\) and \(\{Q_{j}\}_{j=1}^{r}\). Recall that a function \(\varphi: H\rightarrow\mathbb{R}\) is said to be convex if

$$ \varphi\bigl(\lambda x + (1-\lambda)y\bigr) \leq\lambda\varphi(x) + (1-\lambda) \varphi(y) $$
(5)

for all \(\lambda\in[0,1]\) and for all \(x,y\in H\). Let \(x_{0}\in H\). We say that φ is subdifferentiable at \(x_{0}\) if there exists \(\xi\in H\) such that

$$ \varphi(y) \geq\varphi(x_{0}) + \langle\xi, y - x_{0} \rangle\quad \text{for all } y\in H. $$
(6)

The subdifferential of φ at \(x_{0}\), denoted by \(\partial\varphi(x_{0})\), consists of all ξ satisfying relation (6).
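For example, for the absolute value function \(\varphi(x)=|x|\) on \(\mathbb{R}\),

$$\partial\varphi(x) = \textstyle\begin{cases} \{1\}, & x>0, \\ [-1,1], & x=0, \\ \{-1\}, & x< 0, \end{cases} $$

so φ is subdifferentiable everywhere although it fails to be differentiable at 0.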

The following lemma provides an important boundedness property of the subdifferential in finite-dimensional Hilbert spaces.

Lemma 2.4

([20])

Suppose that \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is a convex function, then it is subdifferentiable everywhere and its subdifferentials are uniformly bounded on any bounded subset of \(\mathbb{R}^{n}\).

3 An iterative algorithm with splitting self-adaptive step size for solving the MSSFP

In this section, we propose a new iteration method with splitting self-adaptive step size for solving the MSSFP.

Theorem 3.1

Assume that MSSFP (1) is consistent (i.e., the solution set Ω is nonempty). For any initial \(x_{0}\in H_{1}\), the iteration scheme \(\{x^{k}\}\) with splitting self-adaptive step size is defined by the following:

$$\begin{aligned} x^{k+1} = &x^{k} + \frac{\rho_{1}^{k} \sum_{i=1}^{t}\alpha_{i} \| P_{C_{i}}(x^{k}) - x^{k} \|^{2} }{\| \sum_{i=1}^{t}\alpha_{i} (P_{C_{i}}(x^{k}) - x^{k} ) \|^{2}} \sum _{i=1}^{t}\alpha_{i} \bigl( P_{C_{i}}\bigl(x^{k}\bigr) - x^{k} \bigr) \\ &{} + \frac{\rho_{2}^{k} \sum_{j=1}^{r}\beta_{j} \| P_{Q_{j}}(Ax^{k}) - Ax^{k} \|^{2} }{\|\sum_{j=1}^{r}\beta_{j} A^{*}(P_{Q_{j}}(Ax^{k}) - Ax^{k} )\|^{2}} \sum_{j=1}^{r} \beta_{j} A^{*} \bigl(P_{Q_{j}}\bigl(Ax^{k} \bigr) - Ax^{k} \bigr), \end{aligned}$$
(7)

where \(0< \underline{\rho}_{1} \leq\rho_{1}^{k} \leq \overline{\rho}_{1} < 1\), \(0< \underline{\rho}_{2} \leq\rho_{2}^{k} \leq\overline{\rho}_{2} < 1\), and the parameters satisfy \(\alpha_{i} > 0\), \(i=1,\ldots,t\), and \(\beta_{j} > 0\), \(j=1,\ldots,r\). Then the iterative sequence \(\{x^{k}\}\) converges weakly to a solution of the MSSFP.

Proof

In order to facilitate our proof, we introduce some notations first. Let \(d_{C_{i}}^{k} = P_{C_{i}}(x^{k}) - x^{k}\) and \(d_{Q_{j}}^{k} = P_{Q_{j}}(Ax^{k}) - Ax^{k}\) for \(i=1, 2, \ldots, t \) and \(j=1, 2, \ldots, r\), respectively. Define

$$\lambda_{1}^{k} = \frac{\rho_{1}^{k} \sum_{i=1}^{t}\alpha_{i} \|d_{C_{i}}^{k}\|^{2} }{\| \sum_{i=1}^{t}\alpha_{i} d_{C_{i}}^{k} \|^{2}}\quad \text{and} \quad \lambda_{2}^{k} = \frac{\rho_{2}^{k} \sum_{j=1}^{r}\beta_{j} \| d_{Q_{j}}^{k} \|^{2} }{ \| \sum_{j=1}^{r}\beta_{j} A^{*}d_{Q_{j}}^{k} \|^{2} }. $$

Then the iterative sequence \(\{x^{k}\}\) in (7) can be rewritten as follows:

$$ x^{k+1} = x^{k} + \lambda_{1}^{k} \sum_{i=1}^{t}\alpha_{i} d_{C_{i}}^{k} + \lambda_{2}^{k} \sum _{j=1}^{r}\beta_{j} A^{*} d_{Q_{j}}^{k}. $$
(8)

Let \(p\in\Omega\) (recall that Ω is the solution set of MSSFP (1)). By (8), we have

$$\begin{aligned} \bigl\Vert x^{k+1} - p \bigr\Vert ^{2} =& \Biggl\Vert x^{k} + \lambda_{1}^{k}\sum _{i=1}^{t}\alpha_{i} d_{C_{i}}^{k} + \lambda_{2}^{k}\sum_{j=1}^{r} \beta_{j} A^{*}d_{Q_{j}}^{k} - p \Biggr\Vert ^{2} \\ =& \bigl\Vert x^{k}-p\bigr\Vert ^{2} + 2 \Biggl\langle x^{k} - p, \lambda_{1}^{k}\sum _{i=1}^{t}\alpha_{i} d_{C_{i}}^{k} \Biggr\rangle \\ &{}+ 2 \Biggl\langle x^{k} - p, \lambda_{2}^{k} \sum_{j=1}^{r}\beta_{j} A^{*}d_{Q_{j}}^{k} \Biggr\rangle + \Biggl\Vert \lambda_{1}^{k}\sum_{i=1}^{t} \alpha_{i} d_{C_{i}}^{k} \Biggr\Vert ^{2} \\ &{}+ \Biggl\Vert \lambda_{2}^{k} \sum _{j=1}^{r}\beta_{j} A^{*}d_{Q_{j}}^{k} \Biggr\Vert ^{2} + 2 \Biggl\langle \lambda_{1}^{k} \sum_{i=1}^{t}\alpha_{i} d_{C_{i}}^{k}, \lambda_{2}^{k}\sum _{j=1}^{r}\beta_{j} A^{*}d_{Q_{j}}^{k} \Biggr\rangle \\ \leq&\bigl\Vert x^{k}-p\bigr\Vert ^{2} + 2 \Biggl\langle x^{k} - p, \lambda_{1}^{k}\sum _{i=1}^{t}\alpha_{i} d_{C_{i}}^{k} \Biggr\rangle \\ &{}+ 2 \Biggl\langle x^{k} - p, \lambda_{2}^{k} \sum_{j=1}^{r}\beta_{j} A^{*}d_{Q_{j}}^{k} \Biggr\rangle + 2 \Biggl\Vert \lambda_{1}^{k}\sum_{i=1}^{t} \alpha_{i} d_{C_{i}}^{k} \Biggr\Vert ^{2} \\ &{}+ 2 \Biggl\Vert \lambda_{2}^{k} \sum _{j=1}^{r}\beta_{j} A^{*}d_{Q_{j}}^{k} \Biggr\Vert ^{2}. \end{aligned}$$
(9)

In order to prove that the iterative sequence \(\{x^{k}\}\) is Fejér monotone with respect to Ω, we make the following estimates, based on the projection property in Lemma 2.1(i):

$$\begin{aligned} \Biggl\langle x^{k} - p, \lambda_{1}^{k} \sum_{i=1}^{t}\alpha_{i} d_{C_{i}}^{k} \Biggr\rangle =& \lambda_{1}^{k}\sum_{i=1}^{t} \alpha_{i} \bigl\langle x^{k} - p, d_{C_{i}}^{k} \bigr\rangle \\ =& \lambda_{1}^{k} \sum_{i=1}^{t} \alpha_{i} \bigl( \bigl\langle x^{k} - P_{C_{i}} \bigl(x^{k}\bigr), d_{C_{i}}^{k} \bigr\rangle + \bigl\langle P_{C_{i}}\bigl(x^{k}\bigr) - p, d_{C_{i}}^{k} \bigr\rangle \bigr) \\ \leq&- \lambda_{1}^{k}\sum _{i=1}^{t}\alpha_{i} \bigl\Vert d_{C_{i}}^{k} \bigr\Vert ^{2} \end{aligned}$$
(10)

and

$$\begin{aligned}& \Biggl\langle x^{k} - p, \lambda_{2}^{k} \sum_{j=1}^{r}\beta_{j} A^{*}d_{Q_{j}}^{k} \Biggr\rangle \\& \quad = \lambda_{2}^{k} \sum_{j=1}^{r} \beta_{j} \bigl\langle Ax^{k} - Ap, d_{Q_{j}}^{k} \bigr\rangle \\& \quad = \lambda_{2}^{k} \sum_{j=1}^{r} \beta_{j} \bigl( \bigl\langle Ax^{k} - P_{Q_{j}} \bigl(Ax^{k}\bigr), d_{Q_{j}}^{k} \bigr\rangle + \bigl\langle P_{Q_{j}}\bigl(Ax^{k}\bigr) - Ap, d_{Q_{j}}^{k} \bigr\rangle \bigr) \\& \quad \leq-\lambda_{2}^{k} \sum _{j=1}^{r}\beta_{j} \bigl\Vert d_{Q_{j}}^{k}\bigr\Vert ^{2}. \end{aligned}$$
(11)

Inserting (10) and (11) into (9) yields

$$\begin{aligned} \bigl\Vert x^{k+1}-p\bigr\Vert ^{2} \leq& \bigl\Vert x^{k}-p\bigr\Vert ^{2} - 2 \lambda_{1}^{k}\sum_{i=1}^{t} \alpha_{i} \bigl\Vert d_{C_{i}}^{k}\bigr\Vert ^{2} - 2 \lambda_{2}^{k} \sum _{j=1}^{r}\beta_{j} \bigl\Vert d_{Q_{j}}^{k}\bigr\Vert ^{2} \\ &{} + 2 \Biggl\Vert \lambda_{1}^{k}\sum _{i=1}^{t}\alpha_{i} d_{C_{i}}^{k} \Biggr\Vert ^{2} + 2 \Biggl\Vert \lambda_{2}^{k} \sum_{j=1}^{r}\beta_{j} A^{*}d_{Q_{j}}^{k} \Biggr\Vert ^{2} \\ = &\bigl\Vert x^{k}-p\bigr\Vert ^{2} - 2 \rho_{1}^{k}\bigl(1-\rho_{1}^{k}\bigr) \frac{ ( \sum_{i=1}^{t}\alpha_{i} \|d_{C_{i}}^{k}\|^{2} )^{2}}{\Vert \sum_{i=1}^{t}\alpha_{i} d_{C_{i}}^{k} \Vert ^{2}} \\ &{} - 2 \rho_{2}^{k} \bigl(1-\rho_{2}^{k} \bigr) \frac{ ( \sum_{j=1}^{r}\beta_{j} \|d_{Q_{j}}^{k}\|^{2} )^{2} }{ \Vert \sum_{j=1}^{r}\beta_{j} A^{*}d_{Q_{j}}^{k} \Vert ^{2} }. \end{aligned}$$
(12)

Since \(0 < \underline{\rho}_{1} \leq\rho_{1}^{k} \leq \overline{\rho}_{1} < 1\) and \(0 < \underline{\rho}_{2} \leq \rho_{2}^{k} \leq\overline{\rho}_{2} < 1 \), it follows from (12) that

$$ \bigl\Vert x^{k+1} - p\bigr\Vert \leq\bigl\Vert x^{k} - p\bigr\Vert . $$

Therefore, the iterative sequence \(\{x^{k}\}\) is Fejér monotone with respect to Ω. As a consequence, for every \(p\in\Omega\), \(\lim_{k\rightarrow \infty}\|x^{k} - p\|\) exists.

Noticing that \(\rho_{1}^{k}\in[\underline{\rho}_{1}, \overline{\rho}_{1}] \subset(0,1)\), we can obtain from (12) that

$$\begin{aligned} 2 \underline{\rho}_{1} (1-\overline{ \rho}_{1})\frac{ ( \sum_{i=1}^{t}\alpha_{i} \|d_{C_{i}}^{k}\|^{2} )^{2}}{ \Vert \sum_{i=1}^{t}\alpha_{i} d_{C_{i}}^{k} \Vert ^{2} } & \leq2 \rho_{1}^{k} \bigl(1-\rho_{1}^{k}\bigr) \frac{ ( \sum_{i=1}^{t}\alpha_{i} \|d_{C_{i}}^{k}\|^{2} )^{2}}{ \Vert \sum_{i=1}^{t}\alpha_{i} d_{C_{i}}^{k} \Vert ^{2} } \\ & \leq\bigl\Vert x^{k}-p\bigr\Vert ^{2} - \bigl\Vert x^{k+1}-p\bigr\Vert ^{2}. \end{aligned}$$
(13)

This implies that

$$ \lim_{k\rightarrow\infty}\frac{ ( \sum_{i=1}^{t}\alpha_{i} \Vert d_{C_{i}}^{k} \Vert ^{2} )^{2}}{ \Vert \sum_{i=1}^{t}\alpha_{i} d_{C_{i}}^{k} \Vert ^{2} } = 0. $$
(14)

Since \(I-P_{C_{i}}\) is nonexpansive and the sequence \(\{x^{k}\}\) is bounded, the directions \(d_{C_{i}}^{k}\) are bounded, so there exists a constant \(M>0\) such that \(\Vert \sum_{i=1}^{t}\alpha_{i} d_{C_{i}}^{k} \Vert ^{2} \leq M\). Then we can deduce from (14) that

$$\lim_{k\rightarrow\infty}\sum_{i=1}^{t} \alpha_{i} \bigl\Vert d_{C_{i}}^{k} \bigr\Vert ^{2} = 0. $$

Hence, for \(i =1 , 2, \ldots, t\), we obtain

$$ \lim_{k\rightarrow\infty}\bigl\Vert d_{C_{i}}^{k} \bigr\Vert = 0. $$
(15)

Similarly, we can prove that

$$ \lim_{k\rightarrow\infty} \bigl\Vert d_{Q_{j}}^{k} \bigr\Vert = 0 $$
(16)

for any \(j= 1, 2, \ldots, r\).

Next, we show that the weak limit points of \(\{x^{k}\}\) belong to Ω, i.e., \(\omega_{w}(x^{k})\subset\Omega\). Indeed, since the iterative sequence \(\{x^{k}\}\) is bounded, \(\omega_{w}(x^{k}) \neq\emptyset\). Let \(\hat{x}\in\omega_{w}(x^{k})\) and let \(\{x^{k_{n}}\}\) be a subsequence of \(\{x^{k}\}\) converging weakly to x̂. Applying the demiclosedness principle (Lemma 2.2) to the nonexpansive mappings \(P_{C_{i}}\) and \(P_{Q_{j}}\), we conclude from (15) and (16) that

$$\hat{x}\in\bigcap_{i=1}^{t}C_{i} \quad \text{and}\quad A\hat{x} \in\bigcap_{j=1}^{r}Q_{j}, $$

i.e., \(\hat{x}\in\Omega\). So \(\omega_{w}(x^{k})\subset \Omega\).

Since conditions (i) and (ii) of Lemma 2.3 are satisfied (with \(K=\Omega\)), the iterative sequence \(\{x^{k}\}\) converges weakly to a solution of MSSFP (1). This completes the proof. □
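As a concrete illustration of Theorem 3.1, the following NumPy sketch implements one step of iteration (7) in the finite dimensional case. The projector callables, the helper name `mssfp_step`, and the small guard `eps` (for the case where an iterate is already feasible for one family of sets) are our illustrative additions, not part of the original scheme:

```python
import numpy as np

def mssfp_step(x, A, projs_C, projs_Q, alpha, beta,
               rho1=0.9, rho2=0.9, eps=1e-12):
    """One step of iteration (7), mapping x^k to x^{k+1}."""
    x = np.asarray(x, dtype=float)
    # First splitting direction: sum_i alpha_i (P_{C_i}(x) - x).
    dC = [P(x) - x for P in projs_C]
    sC = sum(a * d for a, d in zip(alpha, dC))
    den1 = float(sC @ sC)
    num1 = sum(a * float(d @ d) for a, d in zip(alpha, dC))
    lam1 = rho1 * num1 / den1 if den1 > eps else 0.0  # guard: zero direction
    # Second splitting direction: sum_j beta_j A^*(P_{Q_j}(Ax) - Ax).
    Ax = A @ x
    dQ = [P(Ax) - Ax for P in projs_Q]
    sQ = A.T @ sum(b * d for b, d in zip(beta, dQ))
    den2 = float(sQ @ sQ)
    num2 = sum(b * float(d @ d) for b, d in zip(beta, dQ))
    lam2 = rho2 * num2 / den2 if den2 > eps else 0.0  # guard: zero direction
    return x + lam1 * sC + lam2 * sQ
```

Starting from any point and iterating `x = mssfp_step(x, ...)` until \(\|x^{k+1}-x^{k}\|\) falls below a tolerance reproduces the scheme of Theorem 3.1; the defaults \(\rho_{1}^{k}=\rho_{2}^{k}=0.9\) are the values used in Section 5.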

Letting \(r=t=1\) in Theorem 3.1, we obtain a new iteration process for solving the SFP.

Corollary 3.2

Assume that SFP (2) is consistent. For any initial \(x_{0}\in H_{1}\), the iteration scheme \(\{x^{k}\}\) with splitting self-adaptive step size is defined by the following:

$$\begin{aligned} x^{k+1} =& x^{k} + \rho_{1}^{k} \bigl( P_{C}\bigl(x^{k}\bigr) - x^{k} \bigr) \\ &{} + \rho_{2}^{k} \frac{ \| P_{Q}(Ax^{k}) - Ax^{k} \|^{2} }{\| A^{*}(P_{Q}(Ax^{k}) - Ax^{k} )\|^{2}} A^{*} \bigl(P_{Q}\bigl(Ax^{k}\bigr) - Ax^{k} \bigr), \end{aligned}$$
(17)

where \(0< \underline{\rho}_{1} \leq\rho_{1}^{k} \leq \overline{\rho}_{1} < 1\) and \(0< \underline{\rho}_{2} \leq\rho_{2}^{k} \leq\overline{\rho}_{2} < 1\). Then the iterative sequence \(\{x^{k}\}\) converges weakly to a solution of the SFP.

4 A relaxed iterative algorithm with splitting self-adaptive step size for solving the MSSFP

In this section, we give a relaxed version of the projection scheme (7). The relaxed scheme uses orthogonal projections onto half-spaces instead of projections onto the original closed convex sets. In what follows, we assume that the convex sets \(\{C_{i}\}_{i=1}^{t}\) and \(\{Q_{j}\}_{j=1}^{r}\) satisfy the following assumptions (A1) and (A2).

(A1) Define the closed convex sets \(\{C_{i}\}_{i=1}^{t}\) as the level sets:

$$C_{i} = \bigl\{ x\in H_{1} : c_{i}(x)\leq0 \bigr\} , $$

where \(c_{i} : H_{1}\rightarrow\mathbb{R}\), \(i=1, 2, \ldots, t\), are convex functions. The sets \(\{Q_{j}\}_{j=1}^{r}\) are given by

$$Q_{j} = \bigl\{ y\in H_{2} : q_{j} (y)\leq0 \bigr\} , $$

where \(q_{j}: H_{2}\rightarrow\mathbb{R}\), \(j=1, 2, \ldots, r\), are convex functions. Assume that both \(c_{i}\) and \(q_{j}\) are subdifferentiable on \(H_{1}\) and \(H_{2}\), respectively, and that \(\partial c_{i}\) and \(\partial q_{j}\) are bounded operators (i.e., bounded on bounded sets). These assumptions are automatically satisfied if \(H_{1}\) and \(H_{2}\) are finite dimensional Hilbert spaces (see Lemma 2.4). Namely, the subdifferentials

$$\partial c_{i}(x) = \bigl\{ \xi_{i} \in H_{1} : c_{i}(z) \geq c_{i}(x) + \langle\xi_{i}, z -x \rangle, \forall z \in H_{1} \bigr\} $$

are nonempty for every \(x\in H_{1}\), \(i=1, 2, \ldots, t\), and

$$\partial q_{j}(y) = \bigl\{ \eta_{j} \in H_{2} : q_{j}(u) \geq q_{j}(y) + \langle\eta_{j}, u -y \rangle, \forall u\in H_{2} \bigr\} $$

are nonempty for every \(y\in H_{2}\), \(j=1, 2, \ldots, r\).

(A2) Define \(C_{i}^{k}\) and \(Q_{j}^{k}\) to be the following half-spaces:

$$C_{i}^{k} = \bigl\{ x\in H_{1} : c_{i}\bigl(x^{k}\bigr) + \bigl\langle \xi_{i}^{k}, x - x^{k} \bigr\rangle \leq0 \bigr\} , $$

where \(\xi_{i}^{k}\in\partial c_{i}(x^{k})\), \(i=1, 2, \ldots, t\), and

$$Q_{j}^{k} = \bigl\{ y\in H_{2} : q_{j} \bigl(Ax^{k}\bigr) + \bigl\langle \eta_{j}^{k}, y - Ax^{k} \bigr\rangle \leq0 \bigr\} , $$

where \(\eta_{j}^{k}\in\partial q_{j}(Ax^{k})\), \(j=1, 2, \ldots, r\).

By the definition of the subgradient, it is clear that \(C_{i} \subseteq C_{i}^{k}\), \(Q_{j} \subseteq Q_{j}^{k}\), and the orthogonal projections onto \(C_{i}^{k}\) and \(Q_{j}^{k}\) can be directly calculated.
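Indeed, for a general half-space \(H=\{x : \langle a, x\rangle \leq b\}\) with \(a\neq0\), the projection has the explicit form

$$P_{H}(x) = x - \frac{\max\{0, \langle a, x\rangle - b\}}{\|a\|^{2}}\, a; $$

applying this with \(a=\xi_{i}^{k}\) and \(b=\langle\xi_{i}^{k}, x^{k}\rangle - c_{i}(x^{k})\) (and similarly for \(Q_{j}^{k}\)) yields \(P_{C_{i}^{k}}\) in closed form.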

Since the projections onto the half-spaces \(C_{i}^{k}\) and \(Q_{j}^{k}\) have closed-form expressions, the following algorithm is easy to implement.

Theorem 4.1

Assume that MSSFP (1) is consistent (i.e., the solution set Ω is nonempty). For any initial \(x_{0}\in H_{1}\), define the iteration scheme with splitting self-adaptive step size as follows:

$$\begin{aligned} x^{k+1} =& x^{k} + \frac{\rho_{1}^{k} \sum_{i=1}^{t}\alpha_{i} \| P_{C_{i}^{k}}(x^{k}) - x^{k} \|^{2} }{\| \sum_{i=1}^{t}\alpha_{i} (P_{C_{i}^{k}}(x^{k}) - x^{k} ) \|^{2}} \sum _{i=1}^{t}\alpha_{i} \bigl( P_{C_{i}^{k}}\bigl(x^{k}\bigr) - x^{k} \bigr) \\ &{} + \frac{\rho_{2}^{k} \sum_{j=1}^{r}\beta_{j} \| P_{Q_{j}^{k}}(Ax^{k}) - Ax^{k} \|^{2} }{\|\sum_{j=1}^{r}\beta_{j} A^{*}(P_{Q_{j}^{k}}(Ax^{k}) - Ax^{k} )\|^{2}} \sum_{j=1}^{r} \beta_{j} A^{*} \bigl(P_{Q_{j}^{k}}\bigl(Ax^{k} \bigr) - Ax^{k} \bigr), \end{aligned}$$
(18)

where \(0< \underline{\rho}_{1} \leq\rho_{1}^{k} \leq \overline{\rho}_{1} < 1\), \(0< \underline{\rho}_{2} \leq\rho_{2}^{k} \leq\overline{\rho}_{2} < 1\), and the parameters satisfy \(\alpha_{i} > 0\), \(i=1,\ldots,t\), and \(\beta_{j} > 0\), \(j=1,\ldots,r\). Assume that conditions (A1) and (A2) hold. Then the iterative sequence \(\{x^{k}\}\) converges weakly to a solution of the MSSFP.

Proof

For convenience, we define some notations first. Let \(d_{C_{i}^{k}}^{k} = P_{C_{i}^{k}}(x^{k}) - x^{k}\) for \(i= 1, 2, \ldots, t\), \(d_{Q_{j}^{k}}^{k} = P_{Q_{j}^{k}}(Ax^{k}) - Ax^{k}\) for \(j = 1, 2, \ldots, r\), and

$$\lambda_{1}^{k} = \frac{\rho_{1}^{k}\sum_{i=1}^{t}\alpha_{i} \Vert d_{C_{i}^{k}}^{k} \Vert ^{2} }{ \Vert \sum_{i=1}^{t}\alpha_{i} d_{C_{i}^{k}}^{k} \Vert ^{2} }, \qquad \lambda_{2}^{k} = \frac{\rho_{2}^{k} \sum_{j=1}^{r}\beta_{j} \Vert d_{Q_{j}^{k}}^{k} \Vert ^{2}}{ \Vert \sum_{j=1}^{r}\beta_{j} A^{*}d_{Q_{j}^{k}}^{k} \Vert ^{2} }, $$

respectively.

Then the iterative sequence \(\{x^{k}\}\) of (18) can be reformulated as follows:

$$ x^{k+1} = x^{k} + \lambda_{1}^{k}\sum _{i=1}^{t}\alpha_{i} d_{C_{i}^{k}}^{k} + \lambda_{2}^{k} \sum _{j=1}^{r}\beta_{j} A^{*} d_{Q_{j}^{k}}^{k}. $$
(19)

Let \(p\in\Omega\). By the same argument as in the proof of Theorem 3.1, we obtain the following inequality:

$$\begin{aligned} \bigl\Vert x^{k+1} - p\bigr\Vert ^{2} \leq&\bigl\Vert x^{k}-p\bigr\Vert ^{2} - 2 \rho_{1}^{k}\bigl(1-\rho_{1}^{k}\bigr) \frac{ ( \sum_{i=1}^{t}\alpha_{i} \Vert d_{C_{i}^{k}}^{k} \Vert ^{2} )^{2}}{\Vert \sum_{i=1}^{t}\alpha_{i} d_{C_{i}^{k}}^{k} \Vert ^{2}} \\ &{} - 2 \rho_{2}^{k} \bigl(1-\rho_{2}^{k} \bigr) \frac{ ( \sum_{j=1}^{r}\beta_{j} \Vert d_{Q_{j}^{k}}^{k} \Vert ^{2} )^{2} }{ \Vert \sum_{j=1}^{r}\beta_{j} A^{*}d_{Q_{j}^{k}}^{k} \Vert ^{2} }. \end{aligned}$$
(20)

Noticing the conditions on \(\{\rho_{1}^{k}\}\) and \(\{\rho_{2}^{k}\}\), we see from (20) that the iterative sequence \(\{x^{k}\}\) is Fejér monotone with respect to Ω and that the limit \(\lim_{k\rightarrow\infty}\|x^{k}-p\|\) exists.

From (20) and \(0< \underline{\rho}_{1} \leq\rho_{1}^{k} \leq\overline{\rho}_{1}<1\), we obtain

$$\begin{aligned} \begin{aligned}[b] 2 \underline{\rho}_{1} (1- \overline{ \rho}_{1} ) \frac{ ( \sum_{i=1}^{t}\alpha_{i} \Vert d_{C_{i}^{k}}^{k} \Vert ^{2} )^{2}}{\Vert \sum_{i=1}^{t}\alpha_{i} d_{C_{i}^{k}}^{k} \Vert ^{2}} & \leq2 \rho_{1}^{k} \bigl(1- \rho_{1}^{k} \bigr) \frac{ ( \sum_{i=1}^{t}\alpha_{i} \Vert d_{C_{i}^{k}}^{k} \Vert ^{2} )^{2}}{\Vert \sum_{i=1}^{t}\alpha_{i} d_{C_{i}^{k}}^{k} \Vert ^{2}} \\ & \leq\bigl\Vert x^{k}-p\bigr\Vert ^{2} - \bigl\Vert x^{k+1}-p\bigr\Vert ^{2}. \end{aligned} \end{aligned}$$
(21)

Since \(\lim_{k\rightarrow\infty}\|x^{k}-p\|\) exists, the right-hand side of (21) tends to zero as \(k\rightarrow\infty\), and hence

$$ \lim_{k\rightarrow\infty}\frac{ ( \sum_{i=1}^{t}\alpha_{i} \Vert d_{C_{i}^{k}}^{k} \Vert ^{2} )^{2}}{\Vert \sum_{i=1}^{t}\alpha_{i} d_{C_{i}^{k}}^{k} \Vert ^{2}} = 0. $$
(22)

Similarly, we can get that

$$ \lim_{k\rightarrow\infty} \frac{ ( \sum_{j=1}^{r}\beta_{j} \Vert d_{Q_{j}^{k}}^{k} \Vert ^{2} )^{2}}{\Vert \sum_{j=1}^{r}\beta_{j} A^{*}d_{Q_{j}^{k}}^{k} \Vert ^{2}} = 0. $$
(23)

Since \(I-P_{C_{i}^{k}}\) and \(I-P_{Q_{j}^{k}}\) are nonexpansive and the iterative sequence \(\{x^{k}\}\) is bounded, the sequences \(\sum_{i=1}^{t}\alpha_{i} d_{C_{i}^{k}}^{k}\) and \(\sum_{j=1}^{r}\beta_{j} A^{*}d_{Q_{j}^{k}}^{k}\) are bounded. Therefore, we can get from (22) and (23) that

$$ \lim_{k\rightarrow\infty} \bigl\Vert d_{C_{i}^{k}}^{k} \bigr\Vert = 0\quad \text{for } i =1, 2, \ldots, t $$
(24)

and

$$ \lim_{k\rightarrow\infty} \bigl\Vert d_{Q_{j}^{k}}^{k} \bigr\Vert =0\quad \text{for } j =1, 2, \ldots, r. $$
(25)

Next, we will prove that \(\omega_{w}(x^{k})\subset\Omega\). For each \(j=1, 2, \ldots, r\), since \(\partial q_{j}\) is bounded on bounded sets, there exists \(\eta>0\) such that \(\|\eta_{j}^{k}\| \leq\eta\), where \(\eta_{j}^{k}\in\partial q_{j} (Ax^{k})\). Then, for \(j=1, 2, \ldots, r\), noticing that \(P_{Q_{j}^{k}}(Ax^{k})\in Q_{j}^{k}\), we have

$$\begin{aligned} q_{j}\bigl(Ax^{k}\bigr) & \leq \bigl\langle \eta_{j}^{k}, Ax^{k} - P_{Q_{j}^{k}} \bigl(Ax^{k}\bigr) \bigr\rangle \\ & \leq\eta\bigl\Vert Ax^{k}- P_{Q_{j}^{k}}\bigl(Ax^{k} \bigr) \bigr\Vert . \end{aligned}$$
(26)

By (25), we know that

$$ \limsup_{k\rightarrow\infty}q_{j} \bigl(Ax^{k}\bigr) \leq0 $$
(27)

for any \(j=1, 2, \ldots, r\).

Let \(\hat{x}\in\omega_{w}(x^{k})\). Then there exists a subsequence \(\{x^{k_{n}}\}\subset\{x^{k}\}\) such that \(x^{k_{n}}\rightharpoonup \hat{x}\) as \(n\rightarrow\infty\). By the weak lower semicontinuity of the convex function \(q_{j}\) and (27), we have

$$q_{j}(A\hat{x}) \leq\liminf_{n\rightarrow \infty}q_{j} \bigl(Ax^{k_{n}}\bigr) \leq0, $$

which means that \(A\hat{x} \in Q_{j}\) for \(j=1 ,2, \ldots, r\), i.e., \(A\hat{x} \in\bigcap_{j=1}^{r}Q_{j}\).

Similarly, since the subgradients \(\xi_{i}^{k} \in\partial c_{i}(x^{k})\) are bounded and \(P_{C_{i}^{k}}(x^{k})\in C_{i}^{k}\), we obtain from (24) that, for each \(i=1, 2, \ldots, t\),

$$\begin{aligned} \begin{aligned}[b] c_{i}\bigl(x^{k}\bigr) & \leq \bigl\langle \xi_{i}^{k}, x^{k}- P_{C_{i}^{k}} \bigl(x^{k}\bigr) \bigr\rangle \\ & \leq\bigl\Vert \xi_{i}^{k} \bigr\Vert \bigl\Vert x^{k} - P_{C_{i}^{k}}\bigl(x^{k}\bigr) \bigr\Vert \\ & \rightarrow0 \quad \text{as } k \rightarrow\infty. \end{aligned} \end{aligned}$$
(28)

By the weak lower semicontinuity of the convex function \(c_{i}\), we get

$$ c_{i}(\hat{x})\leq\liminf_{n\rightarrow\infty}c_{i} \bigl(x^{k_{n}}\bigr)\leq 0. $$
(29)

Consequently, \(\hat{x}\in C_{i}\), \(i=1,2,\ldots,t\), and therefore \(\hat{x} \in\Omega\). Since, for any \(p\in\Omega\), \(\lim_{k\rightarrow\infty}\|x^{k} - p\|\) exists and \(\omega_{w}(x^{k})\subset\Omega\), Lemma 2.3 yields that the whole iterative sequence \(\{x^{k}\}\) converges weakly to a solution of MSSFP (1). This completes the proof. □
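Relative to iteration (7), the only new computational ingredient in (18) is the half-space projector built from a subgradient. A minimal sketch, assuming `c` evaluates \(c_{i}\) and `subgrad_c` returns some element of \(\partial c_{i}\) (all names illustrative):

```python
import numpy as np

def proj_halfspace_level(x, xk, c, subgrad_c, eps=1e-12):
    """Project x onto C^k = {z : c(x^k) + <xi, z - x^k> <= 0},
    where xi = subgrad_c(x^k) is a subgradient of c at x^k, cf. (A2)."""
    x, xk = np.asarray(x, dtype=float), np.asarray(xk, dtype=float)
    xi = subgrad_c(xk)
    slack = c(xk) + xi @ (x - xk)   # value of the affine minorant at x
    nrm2 = float(xi @ xi)
    # If xi = 0, then x^k minimizes c; consistency forces c(x^k) <= 0,
    # so C^k is the whole space and the projection is the identity.
    if slack <= 0.0 or nrm2 <= eps:
        return x
    return x - (slack / nrm2) * xi
```

In iteration (18) this projector is applied at \(x=x^{k}\) itself, where `slack` reduces to \(c_{i}(x^{k})\); the analogous construction with \(q_{j}\) and \(Ax^{k}\) realizes \(P_{Q_{j}^{k}}\).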

Based on Theorem 4.1, we can obtain the following corollary immediately.

Corollary 4.2

Assume that SFP (2) is consistent. For any initial \(x_{0}\in H_{1}\), define the iteration scheme with splitting self-adaptive step size as follows:

$$\begin{aligned} x^{k+1} =& x^{k} + \rho_{1}^{k} \bigl( P_{C^{k}}\bigl(x^{k}\bigr) - x^{k} \bigr) \\ &{} + \rho_{2}^{k}\frac{ \| P_{Q^{k}}(Ax^{k}) - Ax^{k} \|^{2} }{\| A^{*}(P_{Q^{k}}(Ax^{k}) - Ax^{k} )\|^{2}} A^{*} \bigl(P_{Q^{k}}\bigl(Ax^{k}\bigr) - Ax^{k} \bigr), \end{aligned}$$
(30)

where \(0< \underline{\rho}_{1} \leq\rho_{1}^{k} \leq \overline{\rho}_{1} < 1\), \(0< \underline{\rho}_{2} \leq\rho_{2}^{k} \leq\overline{\rho}_{2} < 1\). Assume that conditions (A1) and (A2) hold, where \(r=t=1\). Then the iterative sequence \(\{x^{k}\}\) converges weakly to a solution of the SFP.

5 Numerical experiments

In this section, we present some preliminary numerical results and show the efficiency of our proposed methods. All the experiments are performed on a Lenovo ThinkStation with an Intel Core i3-4150 CPU (3.5 GHz) and 4.00 GB of RAM.

We conduct two groups of experiments: the first concerns the LASSO problem; the second concerns two examples of the MSSFP introduced by Zhao and Yang [17].

5.1 LASSO problem

In this part, we consider the least absolute shrinkage and selection operator (LASSO) problem

$$ \min_{x} \frac{1}{2}\|Ax-b \|_{2}^{2}\quad \text{subject to } \|x\|_{1} \leq \epsilon, $$
(31)

where A is an \(m\times n\) feature matrix, b is an \(m\times1\) vector of observed data, x is an \(n\times1\) vector of predictors, and \(\epsilon>0\) is a tuning parameter. LASSO (31) was first introduced by Tibshirani [21] and performs well for variable selection in ordinary linear regression models. Some properties of the LASSO model were established in Xu [22].

A problem closely related to LASSO is the basis pursuit denoising (BPDN) problem [23],

$$ \min_{x} \|x\|_{1}\quad \text{subject to } \frac{1}{2}\|Ax-b\|_{2}^{2} \leq t, $$
(32)

where \(t>0\).

It has been proved that both LASSO and BPDN can be solved via the following unconstrained optimization problem under a proper choice of the parameter \(\lambda>0\):

$$ \min_{x} \frac{1}{2}\|Ax-b\|_{2}^{2} + \lambda\|x\|_{1}. $$
(33)

However, there is no reliable rule for choosing the regularization parameter λ in practice. The constrained formulation (31) therefore remains attractive, especially when a bound ε on \(\|x\|_{1}\) is known in advance.

When the optimal objective value is zero, the LASSO problem is a special case of SFP (2) with \(Q :=\{b\}\) and \(C := \{x : \|x\|_{1} \leq\epsilon\}\). Thus, \(P_{C}(\cdot)\) is the Euclidean projection onto the \(\ell_{1}\)-ball; a sketch of this projection is given below. We compare three methods for solving the LASSO problem:

(1) The CQ algorithm with constant step size of Byrne [4].

(2) The CQ algorithm with the self-adaptive step size of Lopez et al. [8].

(3) The proposed iterative algorithm (17) with splitting self-adaptive step size.

The data are generated from the model \(b=Ax\), where the entries of \(A_{m\times n}\) are drawn randomly from the standard normal distribution \(N(0,1)\), and the K-sparse signal \(x_{n\times1}\) has K nonzero entries drawn from the uniform distribution on the interval \([-2,2]\). The stopping criterion for all methods is \(\|x^{k+1}-x^{k}\|_{2} \leq10^{-6}\). The comparison results of the three iterative algorithms are reported in Table 1.
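Implementing the three methods above only requires \(P_{Q}(y)=b\), which is trivial, and \(P_{C}\), the projection onto the \(\ell_{1}\)-ball. A standard \(O(n\log n)\) sorting-based sketch of the latter (the function name and tolerance handling are ours):

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto the ball {x : ||x||_1 <= radius}."""
    v = np.asarray(v, dtype=float)
    if np.abs(v).sum() <= radius:
        return v.copy()                     # already inside the ball
    u = np.sort(np.abs(v))[::-1]            # magnitudes, descending
    cssv = np.cumsum(u)
    k = np.arange(1, u.size + 1)
    rho = np.nonzero(u * k > cssv - radius)[0][-1]
    theta = (cssv[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```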

Table 1 The performance of the three iterative algorithms for solving the LASSO problem, in terms of objective function values

Figures 1, 2 and 3 show a simulated example with \(m=240\), \(n=1024\) and \(K=20\) for the above three iterative methods. Since all three methods exhibit nearly the same performance in recovering the K-sparse signal, we report the objective function values at the moment the iteration is stopped under the given criterion. We can see from Table 1 that our proposed iterative sequence reaches smaller objective values than the other two methods.

Figure 1: The random sparse signal recovered by our proposed iterative sequence (17).

Figure 2: The random sparse signal recovered by the method of Byrne [4].

Figure 3: The random sparse signal recovered by the method of Lopez et al. [8].

5.2 Two MSSFP problems

In this part, we present two examples of the MSSFP and compare the following iterative algorithms:

(1) The self-adaptive methods proposed by Zhao and Yang [17].

(2) The cyclic and simultaneous iterative algorithms with self-adaptive step size proposed by Wen et al. [18].

(3) Our proposed iterative algorithms (7) and (18).

In what follows, we denote by \(\mathbf{e}_{\mathbf{1}}=(1, 1, \ldots, 1)^{T}\) the vector of all ones and choose \(\rho_{1}^{k} = 0.9\) and \(\rho_{2}^{k} = 0.9\) in the iterative sequences (7) and (18). The iterative parameters in Zhao and Yang [17] and Wen et al. [18] are chosen as suggested by the authors.

Example 5.1

The MSSFP with \(C_{i} = \{x\in\mathbb{R}^{n} | \|x-d_{i}\|_{2} \leq r_{i} \}\), \(i=1, 2, \ldots, t\), and \(Q_{j} = \{y\in\mathbb{R}^{m} | L_{j} \leq y \leq U_{j} \}\), \(j=1, 2, \ldots, r\). Let \(A=(a_{ij})_{m\times n}\) and \(a_{ij}\in[0,1]\), where \(d_{i}\in [\mathbf{e}_{\mathbf{0}}, 10\mathbf{e}_{\mathbf{1}}]\), \(r_{i}\in[40, 60]\), \(L_{j}\in [10\mathbf{e}_{\mathbf{1}}, 40\mathbf{e}_{\mathbf{1}}]\) and \(U_{j}\in [50\mathbf{e}_{\mathbf{1}}, 100\mathbf{e}_{\mathbf{1}}]\) are all generated randomly.
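Both projections in Example 5.1 have closed forms, which is why iteration (7) applies directly: for the ball \(C_{i}\) and the box \(Q_{j}\),

$$P_{C_{i}}(x) = d_{i} + \frac{r_{i}}{\max\{r_{i}, \|x-d_{i}\|_{2}\}}(x-d_{i}), \qquad \bigl(P_{Q_{j}}(y)\bigr)_{l} = \min\bigl\{ \max\bigl\{y_{l}, (L_{j})_{l}\bigr\}, (U_{j})_{l} \bigr\} . $$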

Example 5.2

The MSSFP with \(C_{i} = \{x\in\mathbb{R}^{n} | \|x-d_{i}\|_{2} \leq r_{i} \}\), \(i=1, 2, \ldots, t\), and \(Q_{j} = \{y\in \mathbb{R}^{m} | \frac{1}{2}y^{T}B_{j} y + b_{j}^{T}y + c_{j} \leq0 \}\), \(j=1, 2, \ldots, r\), where \(d_{i}\in(6\mathbf{e}_{\mathbf{1}}, 16\mathbf{e}_{\mathbf{1}})\), \(r_{i}\in(100, 120)\), \(b_{j}\in (-30\mathbf{e}_{\mathbf{1}}, -20\mathbf{e}_{\mathbf{1}})\), \(c_{j}\in(-60, -50)\), and all elements of the matrix \(B_{j}\) (in the interval \((2, 10)\)) are generated randomly. The matrix A is the same as in Example 5.1.
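In contrast to Example 5.1, \(P_{Q_{j}}\) here has no closed form, which makes the relaxed scheme (18) the natural choice; the subgradient required in assumption (A2) is simply the gradient

$$\nabla q_{j}(y) = \tfrac{1}{2}\bigl(B_{j} + B_{j}^{T}\bigr)y + b_{j}, $$

which reduces to \(B_{j}y + b_{j}\) when \(B_{j}\) is symmetric.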

The numerical results are reported in Table 2 and Table 3, respectively.

Table 2 Example 5.1 with \(\pmb{t=r=20}\) , \(\pmb{m=60}\) , \(\pmb{n=80}\)
Table 3 Example 5.2 with \(\pmb{t=r=30}\) , \(\pmb{m=50}\) , \(\pmb{n=60}\)

From Table 2 and Table 3 we find that the (relaxed) cyclic iterative sequence of Wen et al. [18] converges in fewer iterations than the other iterative sequences. This is because, in the (relaxed) cyclic method, the iteration counter is updated only after all the constraint sets have been processed. Our proposed iterative sequence (7) and relaxed iterative sequence (18) perform better than the simultaneous iterative sequence of Wen et al. [18] in Example 5.1 and than the method of Zhao and Yang [17] in Example 5.2, respectively. From a practical point of view, it is advisable to try all the proposed methods on the problem at hand and then select a suitable one.

6 Conclusions

There has been much interest in the MSSFP in the past few years, and many efficient iterative algorithms have been proposed to solve it. In this paper, we proposed a new iterative algorithm with splitting self-adaptive step size. The new self-adaptive step size differs from those of Zhao and Yang [17] and Wen et al. [18], and it also provides a new self-adaptive way to solve the SFP. Under mild assumptions, we proved the convergence of the proposed algorithms in an infinite dimensional Hilbert space. Further, we gave a relaxed projection algorithm with the new step size and proved its convergence. Numerical experiments on the LASSO problem and two MSSFP examples showed that our proposed methods compare favorably with existing methods in several respects.

References

1. Censor, Y, Elfving, T, Kopf, N, Bortfeld, T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 21, 2071-2084 (2005)
2. Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)
3. Byrne, C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441-453 (2002)
4. Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)
5. Xu, HK: Iterative methods for the split feasibility problem in infinite dimensional Hilbert spaces. Inverse Probl. 26, 105018 (2010)
6. Qu, B, Xiu, N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 21, 1655-1665 (2005)
7. Yang, Q: The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 20, 1261-1266 (2004)
8. Lopez, G, Martin-Marquez, V, Wang, F, Xu, HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 28, 085004 (2012)
9. Yang, Q: On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 302, 166-179 (2005)
10. Xu, HK: A variable Krasnoselskii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 22, 2021-2034 (2006)
11. Zhang, W, Han, D, Li, Z: A self-adaptive projection method for solving the multiple-sets split feasibility problem. Inverse Probl. 25, 115001 (2009)
12. He, BS, He, XZ, Liu, HX, Wu, T: Self-adaptive projection method for co-coercive variational inequalities. Eur. J. Oper. Res. 196, 43-48 (2009)
13. Zhao, J, Yang, Q: Self-adaptive projection methods for the multiple-sets split feasibility problem. Inverse Probl. 27, 035009 (2011)
14. Chen, Y, Guo, YS, Yu, YR, Chen, RD: Self-adaptive and relaxed self-adaptive projection methods for solving the multiple-set split feasibility problem. Abstr. Appl. Anal. 2012, 958040 (2012)
15. Zhao, JL, Yang, QZ: Several acceleration schemes for solving the multiple-sets split feasibility problem. Linear Algebra Appl. 437, 1648-1657 (2012)
16. Zhao, JL, Zhang, YJ, Yang, Q: Modified projection methods for the split feasibility problem and the multiple-sets split feasibility problem. Appl. Math. Comput. 219, 1644-1653 (2012)
17. Zhao, JL, Yang, Q: A simple projection method for solving the multiple-sets split feasibility problem. Inverse Probl. Sci. Eng. 21, 537-546 (2013)
18. Wen, M, Peng, JG, Tang, YC: A cyclic and simultaneous iterative method for solving the multiple-sets split feasibility problem. J. Optim. Theory Appl. 166(3), 844-860 (2015)
19. Bauschke, HH, Combettes, PL: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, London (2011)
20. Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877-898 (1976)
21. Tibshirani, R: Regression shrinkage and selection via the lasso. J. R. Stat. Soc., Ser. B, Stat. Methodol. 58, 267-288 (1996)
22. Xu, HK: Properties and iterative methods for the lasso and its variants. Chin. Ann. Math., Ser. B 35(3), 501-518 (2014)
23. Chen, S, Donoho, D, Saunders, M: Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20, 33-61 (1998)


Acknowledgements

The authors wish to thank the editor and referees for their helpful comments and suggestions. This work was supported by the National Natural Science Foundation of China (11361042, 11071108, 11401293, 11461046) and the Natural Science Foundation of Jiangxi Province (20151BAB211010, 20142BAB211016, 20132BAB201001).

Author information

Corresponding author

Correspondence to Yuchao Tang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cite this article

Tang, Y., Zhu, C. & Yu, H. Iterative methods for solving the multiple-sets split feasibility problem with splitting self-adaptive step size. Fixed Point Theory Appl 2015, 178 (2015). https://doi.org/10.1186/s13663-015-0430-2
