Weak convergence theorems for split feasibility problems on zeros of the sum of monotone operators and fixed point sets in Hilbert spaces
Fixed Point Theory and Applications volume 2017, Article number: 6 (2016)
Abstract
In this paper, we consider a type of split feasibility problem by focusing on the solution sets of two important problems in the setting of Hilbert spaces: the problem of finding zeros of the sum of monotone operators and the fixed point problem. Assuming the existence of solutions, we provide a suitable algorithm for finding a solution point. Some important applications and numerical experiments of the considered problem and constructed algorithm are also discussed.
Introduction
Many applications of the split feasibility problem (SFP), which was first introduced by Censor and Elfving [1], have appeared in various fields of science and technology, such as in signal processing, medical image reconstruction and intensity-modulated radiation therapy; for more information, see [2, 3] and the references therein. In fact, Censor and Elfving [1] studied the SFP in a finite-dimensional space, by considering the problem of finding a point
\(x^{\ast}\in C \quad\text{such that}\quad Ax^{\ast}\in Q,\)
where C and Q are nonempty closed convex subsets of \(\mathbb {R}^{n}\), and A is an \(n\times n\) matrix. They proposed the following algorithm: for arbitrary \(x_{1}\in\mathbb{R}^{n}\),
where \(A(C)=\{y \in\mathbb{R}^{n}\vert y=Ax, \text{ for some } x\in C\}\) and \(P_{Q}\), \(P_{A(C)}\) denote the metric projections onto Q and \(A(C)\), respectively. It was noticed that the algorithm involves the complicated computation of matrix inverses, which may be expensive. Consequently, Byrne [2] suggested a new algorithm, which generates a sequence \(\{x_{n}\}\) by using the transpose of the matrix A instead of its inverse: for arbitrary \(x_{1}\in\mathbb{R}^{n}\),
\(x_{n+1}=P_{C} \bigl(x_{n}-\gamma A^{t}(I-P_{Q})Ax_{n} \bigr),\quad n\in\mathbb{N},\)
where \(\gamma\in(0,2/\Vert A\Vert^{2})\), A is a real \(m\times n\) matrix, \(A^{t}\) is the transpose of the considered matrix A, and \(P_{C}\) and \(P_{Q}\) denote the metric projections onto C and Q, respectively. Observe that not only is the algorithm (1.2) computationally less expensive, it can also be used for solving the problem (1.1) when C and Q belong to different Euclidean spaces. Later on, inspired by the algorithm (1.2), Xu [4] considered the SFP in infinite-dimensional Hilbert spaces: let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces and \(L:H_{1}\rightarrow H_{2}\) be a bounded linear operator. Let C and Q be nonempty closed convex subsets of \(H_{1}\) and \(H_{2}\), respectively. Xu [4] proposed the following algorithm: for a given \(x_{1}\in H_{1}\),
\(x_{n+1}=P_{C} \bigl(x_{n}-\gamma L^{\ast}(I-P_{Q})Lx_{n} \bigr),\quad n\in\mathbb{N},\)
where \(\gamma\in(0,2/\Vert L\Vert^{2})\) and \(L^{\ast}\) is the adjoint operator of L. The weak convergence of the sequence \(\{x_{n}\}\) to a solution of SFP was considered; see also [5] for related work.
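In finite dimensions, the CQ iteration (1.2) is easy to run once the two projections are available in closed form. The following sketch is our own illustration, not code from any of the cited papers; the box C, the ball Q, and the matrix A below are hypothetical choices.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, gamma, iters=500):
    """Byrne's CQ iteration: x_{n+1} = P_C(x_n - gamma * A^T (I - P_Q) A x_n)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - gamma * (A.T @ (Ax - proj_Q(Ax))))
    return x

# Illustrative data (ours): C = [0,1]^2, Q = ball of radius 0.5 around
# (0.5, 1.0), A = diag(1, 2).
A = np.diag([1.0, 2.0])
center, radius = np.array([0.5, 1.0]), 0.5

def proj_C(x):                  # projection onto the box [0,1]^2
    return np.clip(x, 0.0, 1.0)

def proj_Q(y):                  # projection onto the closed ball
    d = y - center
    n = np.linalg.norm(d)
    return y if n <= radius else center + radius * d / n

gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # lies inside (0, 2/||A||^2)
x = cq_algorithm(A, proj_C, proj_Q, np.array([1.0, 0.0]), gamma)
# x lies in C and A @ x lies (approximately) in Q
```

The same loop implements Xu's Hilbert-space iteration (1.3) when `A.T` is replaced by the adjoint \(L^{\ast}\).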
On the other hand, variational inclusion problems are used as mathematical programming models to study a large number of optimization problems arising in finance, economics, networks, transportation and engineering science. The formal form of a variational inclusion problem is the problem of finding \(x^{\ast}\in H\) such that
\(0\in Bx^{\ast},\)
where \(B:H\rightarrow2^{H}\) is a set-valued operator. If B is a maximal monotone operator, the elements of the solution set of the problem (1.4) are called the zeros of the maximal monotone operator. This problem was introduced by Martinet [6], and later it has been studied by many authors. A popular iterative method for solving the problem (1.4) is the following proximal point algorithm: for a given \(x_{1}\in H\),
\(x_{n+1}=J_{\lambda_{n}}^{B}x_{n},\quad n\in\mathbb{N},\)
where \(\{\lambda_{n}\}\subset(0,\infty)\) and \(J_{\lambda_{n}}^{B}=(I+\lambda_{n}B)^{-1}\) is the resolvent of the considered maximal monotone operator B corresponding to \(\lambda_{n}\); see also [7–11] for more details. Subsequently, inspired by the concept of the SFP, Byrne et al. [12] introduced and studied the following split null point problem (SNPP): given set-valued mappings \(B_{1}:H_{1}\rightarrow2^{H_{1}}\) and \(B_{2}:H_{2}\rightarrow 2^{H_{2}}\), and a bounded linear operator \(L:H_{1}\rightarrow H_{2}\), the SNPP is the problem of finding a point \(x^{\ast}\in H_{1}\) such that
\(0\in B_{1}x^{\ast} \quad\text{and}\quad 0\in B_{2}(Lx^{\ast}).\)
They considered the following iterative algorithm: for \(\lambda>0\) and an arbitrary \(x_{1}\in H_{1}\),
\(x_{n+1}=J_{\lambda}^{B_{1}} \bigl(x_{n}-\gamma L^{\ast}\bigl(I-J_{\lambda}^{B_{2}}\bigr)Lx_{n} \bigr),\quad n\in\mathbb{N},\)
where \(L^{\ast}\) is the adjoint of L, \(\gamma\in(0,2/\Vert L\Vert ^{2})\), and \(J_{\lambda}^{B_{1}}\) and \(J_{\lambda}^{B_{2}}\) are the resolvents of the maximal monotone operators \(B_{1}\) and \(B_{2}\), respectively. They proved, under some suitable control conditions, that \(\{x_{n}\}\) converges weakly to a point \(x^{\ast}\) in the solution set of problem (1.5).
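When the resolvent is available in closed form, the proximal point algorithm above takes only a few lines. As a minimal illustration (our own example, not from the cited papers): for \(B=\partial\vert\cdot\vert\) on \(H=\mathbb{R}\), the resolvent \(J_{\lambda}^{B}\) is the soft-thresholding map.

```python
import math

def soft_threshold(lam, x):
    """Resolvent J_lambda^B of B = the subdifferential of |.| on the real
    line, i.e. soft-thresholding."""
    return math.copysign(max(abs(x) - lam, 0.0), x)

def proximal_point(resolvent, x0, lambdas):
    """Proximal point algorithm x_{n+1} = J_{lambda_n}^B x_n."""
    x = x0
    for lam in lambdas:
        x = resolvent(lam, x)
    return x

# Starting from x_1 = 5 with lambda_n = 1, the iterates 5, 4, 3, 2, 1, 0
# reach the unique zero of B after five steps and then stay there.
x = proximal_point(soft_threshold, 5.0, [1.0] * 6)
```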
A topic related to the above variational inclusion problem: it is well known that fixed point theory is a very powerful and important tool in the study of mathematical models. Many authors have studied the approximation of fixed points of nonlinear mappings by iterative methods, and applied the obtained results to many important problems, such as the equilibrium problem, the null point problem, the variational inequality problem, optimization problems, etc.; see [13–15] for example and the references therein. In view of the SFP and the fixed point problem, Takahashi et al. [16] considered the problem of finding a point \(x^{\ast}\in H_{1}\) such that
\(0\in Bx^{\ast} \quad\text{and}\quad Lx^{\ast}\in F(T),\)
where \(B:H_{1}\rightarrow2^{H_{1}}\) is a maximal monotone operator, \(L:H_{1}\rightarrow H_{2}\) is a bounded linear operator and \(T:H_{2}\rightarrow H_{2}\) is a nonexpansive mapping. They considered the following iterative algorithm: for any \(x_{1}\in H_{1}\),
\(x_{n+1}=J_{\lambda_{n}}^{B} \bigl(x_{n}-\gamma_{n}L^{\ast}(I-T)Lx_{n} \bigr),\quad n\in\mathbb{N},\)
where \(\{\lambda_{n}\}\) and \(\{\gamma_{n}\}\) satisfy some suitable control conditions, and \(J_{\lambda_{n}}^{B}\) is the resolvent of the maximal monotone operator B associated to \(\lambda_{n}\), and proved weak convergence of the algorithm (1.7) to a point \(x^{\ast}\in B^{-1}0\cap L^{-1}F(T)\).
One may note that finding the zeros of a maximal monotone operator can be achieved via a fixed point of its resolvent operator. This is because \(0\in Bx^{\ast}\) if and only if \(J_{\lambda}^{B}x^{\ast}=x^{\ast}\), when \(B:H\rightarrow2^{H}\) is a maximal monotone operator and \(\lambda >0\). Thus the problem of type (1.6) contains the SNPP (1.5) as a special case in some sense.
Now, let us return to the variational inclusion problem (1.4). A type of generalization of the problem (1.4) is to find a point \(x^{\ast}\in H\) such that
\(0\in(A+B)x^{\ast},\)
where \(A:H\rightarrow H\) is a single-valued mapping and \(B:H\rightarrow 2^{H}\) is a set-valued operator. It is well known that there are many kinds of applications of the problem (1.8), such as evolution equations, complementarity problems, minimax problems, variational inequalities and optimization problems, etc.; see [17–20] for example and the references therein. We will discuss some of them in Section 4. In the case that \(B:H\rightarrow 2^{H}\) is a set-valued monotone operator and \(A:H\rightarrow H\) is a single-valued monotone operator, the elements of the solution set of the problem (1.8) are called the zeros of the sum of monotone operators.
In this paper, motivated and inspired by the above literature, we are going to consider the problem of finding a point \(x^{\ast}\in H_{1}\) such that
\(0\in(A+B)x^{\ast} \quad\text{and}\quad Lx^{\ast}\in F(T),\)
where \(A:H_{1}\rightarrow H_{1}\) is a monotone operator, \(B:H_{1}\rightarrow2^{H_{1}}\) is a maximal monotone operator, \(L:H_{1}\rightarrow H_{2}\) is a bounded linear operator and \(T:H_{2}\rightarrow H_{2}\) is a nonexpansive mapping. We will write \(\Omega_{L,T}^{A+B}\) for the solution set of the problem (1.9), and show that the algorithm
\(x_{n+1}=J_{\lambda_{n}}^{B} \bigl((I-\lambda_{n}A)-\gamma_{n}L^{\ast}(I-T)L \bigr)x_{n},\quad n\in\mathbb{N},\)
converges weakly to an element in \(\Omega_{L,T}^{A+B}\). We would like to point out that, in fact, the problem (1.9) can be written in the form of the problem (1.6). Moreover, as in our consideration, B is a maximal monotone operator and A is a continuous and monotone operator, so the operator \(A+B\) is a maximal monotone operator (see [21]), and hence one may try to find a solution of the problem (1.9) by using the algorithm (1.7). However, it has been observed that the inverse of \(I+\lambda(A+B)\) may be hard to compute; see [18] for example. For this reason, a popular iterative method for solving problems of type (1.8) is the forward-backward splitting method, which defines a sequence \(\{x_{n}\}\) by the following algorithm: for any \(x_{1}\in H\),
\(x_{n+1}=J_{\lambda_{n}}^{B}(I-\lambda_{n}A)x_{n},\quad n\in\mathbb{N},\)
where \(\{\lambda_{n}\}\) is a sequence of positive real numbers, and \(A:H\rightarrow H\) and \(B:H\rightarrow2^{H}\) are maximal monotone operators; see Passty [22]. Of course, our proposed algorithm (1.10) is also motivated by (1.11).
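As an illustration of the forward-backward iteration (1.11), consider the hypothetical instance (our own, not from the cited papers) \(A=\nabla g\) with \(g(x)=\frac{1}{2}\Vert x-b\Vert^{2}\) (a 1-ism mapping) and \(B=\partial(\mu\Vert\cdot\Vert_{1})\), whose resolvent is componentwise soft-thresholding; the zero of \(A+B\) is then the soft-thresholded vector.

```python
import numpy as np

def soft(lam, x):
    """Componentwise resolvent of lam * (subdifferential of the l1-norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def forward_backward(grad, prox, x0, lambdas):
    """Forward-backward splitting: x_{n+1} = J_{lambda_n}^B (x_n - lambda_n * A x_n)."""
    x = np.asarray(x0, dtype=float)
    for lam in lambdas:
        x = prox(lam, x - lam * grad(x))
    return x

# Illustrative data (ours): the zero of A + B is soft(mu, b) = [2, 0, 0].
b, mu = np.array([3.0, -0.5, 1.0]), 1.0
grad = lambda x: x - b                 # A = grad of (1/2)||x - b||^2, 1-ism
prox = lambda lam, x: soft(lam * mu, x)
x = forward_backward(grad, prox, np.zeros(3), [0.9] * 200)   # 0.9 in (0, 2*beta)
```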
Preliminaries
Throughout this paper, we denote by \(\mathbb{N}\) the set of positive integers, and by \(\mathbb{R}\) the set of real numbers. Let H be a real Hilbert space with the inner product \(\langle\cdot,\cdot\rangle\) and the norm \(\Vert\cdot\Vert\), respectively. When \(\{x_{n}\}\) is a sequence in H, we denote the weak convergence of \(\{x_{n}\}\) to x in H by \(x_{n}\rightharpoonup x\).
Let \(T:H\rightarrow H\) be a mapping. We say that T is a Lipschitz mapping if there exists \(L\geq0\) such that
\(\Vert Tx-Ty\Vert\leq L\Vert x-y\Vert,\quad\forall x,y\in H.\)
The number L, associated with T, is called a Lipschitz constant. If \(L=1\), we say that T is a nonexpansive mapping, that is,
\(\Vert Tx-Ty\Vert\leq\Vert x-y\Vert,\quad\forall x,y\in H.\)
We will say that T is firmly nonexpansive if
\(\Vert Tx-Ty\Vert^{2}\leq\langle x-y,Tx-Ty\rangle,\quad\forall x,y\in H.\)
Note that T is firmly nonexpansive if and only if \(T=(I+V)/2\) for some nonexpansive mapping V; see [23], Proposition 11.2.
The set of fixed points of T will be denoted by \(F(T)\), that is, \(F(T)=\{x\in H : Tx=x\}\). It is well known that, if T is nonexpansive, then \(F(T)\) is closed and convex. Moreover, if T is a firmly nonexpansive mapping of H into itself with \(F(T)\neq\emptyset \), then we have
\(\bigl\langle x-x^{\ast},(I-T)x\bigr\rangle\geq\bigl\Vert(I-T)x\bigr\Vert^{2},\quad\forall x\in H, x^{\ast}\in F(T);\)
see [16].
A mapping \(T:H\rightarrow H\) is said to be an averaged mapping if it can be written as an average of the identity I and a nonexpansive mapping, that is,
\(T=(1-\alpha)I+\alpha S,\)
where \(\alpha\in(0,1)\) and \(S:H\rightarrow H\) is a nonexpansive mapping; see [24]. More precisely, when (2.2) holds, we say that T is α-averaged. It should be observed that firmly nonexpansive mappings are \(\frac{1}{2}\)-averaged mappings.
Let \(A:H\rightarrow H\) be a single-valued mapping. For a positive real number β, we will say that A is β-inverse strongly monotone (β-ism) if
\(\langle x-y,Ax-Ay\rangle\geq\beta\Vert Ax-Ay\Vert^{2},\quad\forall x,y\in H.\)
The class of inverse strongly monotone mappings has been studied by many authors; see [17, 25, 26].
We now collect some important properties, which are needed in this work.
Lemma 2.1
We have

(i)
The composite of finitely many averaged mappings is averaged. In particular, if \(T_{i}\) is \(\alpha_{i}\)-averaged, where \(\alpha_{i}\in(0,1)\) for \(i=1,2\), then the composite \(T_{1}T_{2}\) is α-averaged, where \(\alpha=\alpha_{1}+\alpha_{2}-\alpha _{1}\alpha_{2}\).

(ii)
If A is β-ism and \(r\in(0,\beta]\), then \(T:=I-rA\) is firmly nonexpansive.

(iii)
A mapping \(T:H\rightarrow H\) is nonexpansive if and only if \(I-T\) is \(\frac{1}{2}\)-ism.

(iv)
If A is β-ism, then, for \(\gamma>0\), γA is \(\frac{\beta}{\gamma}\)-ism.

(v)
T is averaged if and only if the complement \(I-T\) is β-ism for some \(\beta>\frac{1}{2}\). Indeed, for \(\alpha\in (0,1)\), T is α-averaged if and only if \(I-T\) is \(\frac {1}{2\alpha}\)-ism.
The following result can be found in [27]; here we modify the presentation to show a finer conclusion for the considered mapping T.
Lemma 2.2
[27]
Let \(T=(1-\alpha)A+\alpha N\) for some \(\alpha\in(0,1)\). If A is β-averaged and N is nonexpansive, then T is \((\alpha +(1-\alpha)\beta)\)-averaged.
Proof
Since A is a β-averaged mapping, there is a nonexpansive mapping S such that \(A=(1-\beta)I+\beta S\). We see that
\(T=(1-\alpha) \bigl((1-\beta)I+\beta S \bigr)+\alpha N=(1-\delta)I+\delta \bigl((1-\alpha)\beta\delta^{-1}S+\alpha\delta^{-1}N \bigr),\)
where \(\delta:=\alpha+(1-\alpha)\beta\). Note that \((1-\alpha)\beta \delta ^{-1}+\alpha\delta^{-1}=1\); it follows that \((1-\alpha)\beta \delta ^{-1}S+\alpha\delta^{-1}N\) is a nonexpansive mapping, being a convex combination of nonexpansive mappings. This means that T is a δ-averaged mapping. □
Let \(B:H\rightarrow2^{H}\) be a set-valued mapping. The effective domain of B is denoted by \(D(B)\), that is, \(D(B)=\{x\in H:Bx\neq \emptyset\}\). Recall that B is said to be monotone if
\(\langle x-y,u-v\rangle\geq0,\quad\text{whenever } u\in Bx, v\in By.\)
A monotone mapping B is said to be maximal if its graph is not properly contained in the graph of any other monotone operator. For a maximal monotone operator \(B:H\rightarrow2^{H}\) and \(r>0\), its resolvent \(J_{r}^{B}\) is defined by
\(J_{r}^{B}x=(I+rB)^{-1}x,\quad x\in H.\)
It is well known that, if B is a maximal monotone operator and r is a positive number, then the resolvent \(J_{r}^{B}\) is single-valued and firmly nonexpansive, and \(F(J_{r}^{B})=B^{-1}0\equiv\{x \in H: 0\in Bx\} \), \(\forall r>0\); see [16, 23, 28].
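A concrete example: for \(B=N_{[0,1]}\) on the real line, the resolvent \(J_{r}^{B}\) is the metric projection onto \([0,1]\) for every \(r>0\), and its firm nonexpansiveness can be sanity-checked numerically (our own sketch, not code from the paper).

```python
def resolvent_normal_cone(x, lo=0.0, hi=1.0):
    """J_r^B for B = N_[lo,hi] on the real line: the metric projection
    onto [lo, hi], independent of r > 0."""
    return min(max(x, lo), hi)

# Firm nonexpansiveness |Jx - Jy|^2 <= (Jx - Jy)(x - y) on a few sample pairs.
pairs = [(-2.0, 3.0), (0.25, 0.75), (-1.0, 0.5), (1.5, 2.5)]
checks = [
    (resolvent_normal_cone(x) - resolvent_normal_cone(y)) ** 2
    <= (resolvent_normal_cone(x) - resolvent_normal_cone(y)) * (x - y) + 1e-12
    for x, y in pairs
]
```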
We use the following lemmas for proving the main result.
Lemma 2.3
[16]
Let \(H_{1}\) and \(H_{2}\) be Hilbert spaces. Let \(L:H_{1}\rightarrow H_{2}\) be a nonzero bounded linear operator and \(T:H_{2}\rightarrow H_{2}\) be a nonexpansive mapping. If \(B:H_{1}\rightarrow2^{H_{1}}\) is a maximal monotone operator, then

(i)
\(L^{\ast}(I-T)L\) is \(\frac{1}{2\Vert L\Vert^{2}}\)-ism,

(ii)
for \(0< r<\frac{1}{\Vert L\Vert^{2}}\),

(iia)
\(I-rL^{\ast}(I-T)L\) is \(r\Vert L\Vert ^{2}\)-averaged,

(iib)
\(J_{\lambda}^{B}(I-rL^{\ast}(I-T)L)\) is \(\frac {1+r\Vert L\Vert^{2}}{2}\)-averaged, for \(\lambda>0\),


(iii)
if \(r=1/\Vert L\Vert^{2}\), then \(I-rL^{\ast}(I-T)L\) is nonexpansive.
Lemma 2.4
[29]
Let \(B:H\rightarrow2^{H}\) be a maximal monotone operator with the resolvent \(J_{\lambda}^{B}=(I+\lambda B)^{-1}\) for \(\lambda >0\). Then we have the following resolvent identity:
\(J_{\lambda}^{B}x=J_{\mu}^{B} \biggl(\frac{\mu}{\lambda}x+ \biggl(1-\frac{\mu}{\lambda} \biggr)J_{\lambda}^{B}x \biggr),\)
for all \(\mu>0\) and \(x\in H\).
Lemma 2.5
[30]
Let C be a closed convex subset of a Hilbert space H and let T be a nonexpansive mapping of C into itself. Then \(U:=IT\) is demiclosed, i.e., \(x_{n}\rightharpoonup x_{0}\) and \(Ux_{n}\rightarrow y_{0}\) imply \(Ux_{0}=y_{0}\).
Lemma 2.6
[16]
Let H be a Hilbert space and let \(\{x_{n}\}\) be a sequence in H such that there exists a nonempty closed convex subset \(C\subset H\) satisfying the properties:

(i)
for every \(x^{\ast}\in C\), \(\lim_{n\rightarrow\infty }\Vert x_{n}x^{\ast}\Vert\) exists;

(ii)
if a subsequence \(\{x_{n_{j}}\}\subset\{x_{n}\}\) converges weakly to \(x^{\ast}\), then \(x^{\ast}\in C\).
Then there exists \(x_{0}\in C\) such that \(x_{n}\rightharpoonup x_{0}\).
The following fundamental identity is also used in our proof:
\(\bigl\Vert\lambda x+(1-\lambda)y\bigr\Vert^{2}=\lambda\Vert x\Vert^{2}+(1-\lambda)\Vert y\Vert^{2}-\lambda(1-\lambda)\Vert x-y\Vert^{2},\)
for all \(x,y\in H\) and \(\lambda\in\mathbb{R}\); see [28].
Main results
We start by considering an equivalence theorem.
Theorem 3.1
Let \(H_{1}\) and \(H_{2}\) be Hilbert spaces. Let \(A:H_{1}\rightarrow H_{1}\) be a β-ism mapping, \(B:H_{1}\rightarrow2^{H_{1}}\) a maximal monotone operator, \(T:H_{2}\rightarrow H_{2}\) a nonexpansive mapping and \(L:H_{1}\rightarrow H_{2}\) a bounded linear operator. If \(\Omega _{L,T}^{A+B}\neq\emptyset\) then the following are equivalent:

(i)
\(z\in\Omega_{L,T}^{A+B}\),

(ii)
\(z=J_{\lambda}^{B} ((I-\lambda A)-\gamma L^{\ast }(I-T)L )z\),

(iii)
\(0\in L^{\ast}(I-T)Lz+(A+B)z\),
where λ, \(\gamma>0\) and \(z\in H_{1}\).
Proof
Since \(\Omega_{L,T}^{A+B}\neq\emptyset\), there exists \(z_{0}\in D(B)\) such that \(0\in(A+B)z_{0}\) and \(Lz_{0}\in F(T)\). Let us put \(S=\frac{1}{2}(I+T)\). It follows that S is a firmly nonexpansive mapping and \(F(T)=F(S)\). Moreover, we have \(L^{\ast}(I-T)L=2L^{\ast}(I-S)L\).
(i) ⇒ (ii) Assume that \(z\in\Omega_{L,T}^{A+B}\). It follows that \(Lz\in F(T)\) and \(z\in(A+B)^{-1}0\). These imply \(L^{\ast}(I-T)Lz=0\) and \(J_{\lambda}^{B}(I-\lambda A)z=z\). Thus we have
\(J_{\lambda}^{B} \bigl((I-\lambda A)-\gamma L^{\ast}(I-T)L \bigr)z=J_{\lambda}^{B}(I-\lambda A)z=z.\)
(ii) ⇒ (iii) By (ii) and the above statement, we have \(z=J_{\lambda}^{B} ((I-\lambda A)-2\gamma L^{\ast}(I-S)L )z\). This means \((I+\lambda B)z\ni ((I-\lambda A)-2\gamma L^{\ast }(I-S)L )z\), which implies
\(-\frac{2\gamma}{\lambda}L^{\ast}(I-S)Lz\in(A+B)z.\)
Since \(A+B\) is monotone and \(0\in(A+B)z_{0}\), we obtain
\(\biggl\langle z-z_{0},-\frac{2\gamma}{\lambda}L^{\ast}(I-S)Lz\biggr\rangle\geq0.\)
Subsequently, we have
\(\bigl\langle Lz-Lz_{0},(I-S)Lz\bigr\rangle\leq0. \quad(3.1)\)
On the other hand, since S is a firmly nonexpansive mapping and \(Lz_{0}\in F(S)\), in view of (2.1) we see that
\(\bigl\langle Lz-Lz_{0},(I-S)Lz\bigr\rangle\geq\bigl\Vert(I-S)Lz\bigr\Vert^{2}. \quad(3.2)\)
Adding up (3.1) and (3.2), we have
\(\bigl\Vert(I-S)Lz\bigr\Vert^{2}\leq0.\)
That is, \(Lz\in F(S)\), and it follows that \((I-T)Lz=0\), which implies \(L^{\ast}(I-T)Lz=0\). Using this, we see that (ii) reduces to \(z=J_{\lambda}^{B}(I-\lambda A)z\), which is equivalent to \(0\in(A+B)z\). Thus we conclude that \(0\in L^{\ast}(I-T)Lz+(A+B)z\).
(iii) ⇒ (i) By (iii), we have \(-L^{\ast}(I-T)Lz\in(A+B)z\), equivalently, \(-2L^{\ast}(I-S)Lz\in(A+B)z\). By the monotonicity of \(A+B\), we get
\(\bigl\langle z-z_{0},-2L^{\ast}(I-S)Lz\bigr\rangle\geq0,\)
since \(0\in(A+B)z_{0}\). This implies
\(\bigl\langle Lz-Lz_{0},(I-S)Lz\bigr\rangle\leq0. \quad(3.3)\)
Adding (3.2) and (3.3), we get
\(\bigl\Vert(I-S)Lz\bigr\Vert^{2}\leq0.\)
This shows \(Lz\in F(S)\). Then it follows that \(z\in L^{-1}F(T)\) and \(L^{\ast}(I-T)Lz=0\). Hence the assumption \(0\in L^{\ast}(I-T)Lz+(A+B)z\) reduces to \(0\in(A+B)z\), that is, \(z\in(A+B)^{-1}0\). Consequently, we have \(z\in\Omega_{L,T}^{A+B}\). This completes the proof. □
Now, in view of Theorem 3.1, we are in a position to present our main algorithm and show its convergence theorem.
Theorem 3.2
Let \(H_{1}\) and \(H_{2}\) be Hilbert spaces. Let \(A:H_{1}\rightarrow H_{1}\) be a β-ism mapping, \(B:H_{1}\rightarrow2^{H_{1}}\) a maximal monotone operator, \(T:H_{2}\rightarrow H_{2}\) a nonexpansive mapping and \(L:H_{1}\rightarrow H_{2}\) a bounded linear operator. For any \(x_{1}\in H_{1}\), define
\(x_{n+1}=J_{\lambda_{n}}^{B} \bigl((I-\lambda_{n}A)-\gamma_{n}L^{\ast}(I-T)L \bigr)x_{n},\quad n\in\mathbb{N},\)
where the sequences \(\{\lambda_{n}\}\) and \(\{\gamma_{n}\}\) satisfy the following control conditions:

(i)
\(0< a\leq\lambda_{n}\leq b_{1}< \frac{\beta}{2}\),

(ii)
\(0< a\leq\gamma_{n}\leq b_{2}< \frac{1}{2\Vert L\Vert^{2}}\),
for some \(a, b_{1}, b_{2}\in\mathbb{R}\). If \(\Omega _{L,T}^{A+B}\neq\emptyset\) then \(\{x_{n}\}\) converges weakly to an element in \(\Omega_{L,T}^{A+B}\).
Proof
Set
\(T_{n}:=J_{\lambda_{n}}^{B} \bigl((I-\lambda_{n}A)-\gamma_{n}L^{\ast}(I-T)L \bigr),\)
for each \(n\in\mathbb{N}\). By Theorem 3.1, we have \(\Omega _{L,T}^{A+B}=F(T_{n})\), for all \(n\in\mathbb{N}\).
Note that, for each \(n\in\mathbb{N}\), we have
\((I-\lambda_{n}A)-\gamma_{n}L^{\ast}(I-T)L=\frac{1}{2}(I-2\lambda_{n}A)+\frac{1}{2} \bigl(I-2\gamma_{n}L^{\ast}(I-T)L \bigr).\)
Also, by condition (i) and Lemma 2.1(ii), we know that \(I-2\lambda_{n}A\) is a firmly nonexpansive mapping, and hence \(I-2\lambda_{n}A\) is a nonexpansive mapping. On the other hand, by Lemma 2.3(iia), we know that \(I-2\gamma_{n}L^{\ast }(I-T)L\) is \(2\gamma_{n}\Vert L\Vert^{2}\)-averaged. Thus, by Lemma 2.2, we see that \((I-\lambda_{n}A)-\gamma_{n}L^{\ast}(I-T)L\) is \(\frac{1+2\gamma_{n}\Vert L\Vert^{2}}{2}\)-averaged. Consequently, since \(J_{\lambda_{n}}^{B}\) is \(\frac{1}{2}\)-averaged, by Lemma 2.1(i) we see that \(T_{n}\) is \(\frac{3+2\gamma_{n}\Vert L\Vert ^{2}}{4}\)-averaged. Thus, for each \(n\in\mathbb{N}\), we can write
\(T_{n}=(1-\alpha_{n})I+\alpha_{n}V_{n},\)
where \(\alpha_{n}:=\frac{3+2\gamma_{n}\Vert L\Vert^{2}}{4}\) and \(V_{n}\) is a nonexpansive mapping.
Subsequently, we also have \(\Omega_{L,T}^{A+B}=F(T_{n})=F(V_{n})\), for all \(n\in\mathbb{N}\). Using this fact, for each \(x^{\ast}\in\Omega _{L,T}^{A+B}\), we see that
\(\Vert x_{n+1}-x^{\ast}\Vert^{2}\leq\Vert x_{n}-x^{\ast}\Vert^{2}-\alpha_{n}(1-\alpha_{n})\Vert x_{n}-V_{n}x_{n}\Vert^{2} \quad(3.6)\)
for each \(n\in\mathbb{N}\). Since \(I-T_{n}=\alpha_{n}(I-V_{n})\), in view of (3.6) we get
\(\Vert x_{n+1}-x^{\ast}\Vert^{2}\leq\Vert x_{n}-x^{\ast}\Vert^{2}-\frac{1-\alpha_{n}}{\alpha_{n}}\Vert x_{n}-T_{n}x_{n}\Vert^{2} \quad(3.7)\)
for each \(n\in\mathbb{N}\). Thus
\(\Vert x_{n+1}-x^{\ast}\Vert\leq\Vert x_{n}-x^{\ast}\Vert\)
for each \(n\in\mathbb{N}\). Since \(\alpha_{n}=\frac{3+2\gamma_{n}\Vert L\Vert^{2}}{4}\in (\frac{3}{4},1 )\), we see that (3.7) implies
\(\Vert x_{n+1}-x^{\ast}\Vert^{2}\leq\Vert x_{n}-x^{\ast}\Vert^{2}-\frac{1-2\gamma_{n}\Vert L\Vert^{2}}{4}\Vert x_{n}-T_{n}x_{n}\Vert^{2} \quad(3.8)\)
for each \(n\in\mathbb{N}\). Thus, by (3.7) and (3.8), we obtain

(a)
for each \(x^{\ast}\in\Omega_{L,T}^{A+B}\), \(\lim_{n\rightarrow\infty }\Vert x_{n}-x^{\ast}\Vert\) exists;

(b)
\(\sum_{n=1}^{\infty}(1-2\gamma_{n}\Vert L\Vert^{2})\Vert x_{n}-T_{n}x_{n}\Vert^{2}<\infty\).
Consequently, by condition (ii) and (b), we must have \(\sum_{n=1}^{\infty}\Vert x_{n}-T_{n}x_{n}\Vert^{2}<\infty\). It turns out that
\(\lim_{n\rightarrow\infty}\Vert x_{n}-T_{n}x_{n}\Vert=0. \quad(3.9)\)
Next, we will write \(\omega_{w}(x_{n})\) for the set of all weak cluster points of \(\{x_{n}\}\). Let \(\{x_{n_{j}}\}\) be a subsequence of \(\{x_{n}\}\) such that \(x_{n_{j}}\rightharpoonup\hat{x}\), for some \(\hat {x}\in\omega_{w}(x_{n})\). Passing to a further subsequence if necessary, we may assume that \(\lambda_{n_{j}}\rightarrow\hat{\lambda}\in (0,\frac{\beta}{2})\) and \(\gamma_{n_{j}}\rightarrow\hat{\gamma }\in (0,\frac{1}{2\Vert L\Vert^{2}})\).
Set
\(\hat{T}:=J_{\hat{\lambda}}^{B} \bigl((I-\hat{\lambda}A)-\hat{\gamma}L^{\ast}(I-T)L \bigr).\)
It follows that T̂ is \(\frac{3+2\hat{\gamma}\Vert L\Vert ^{2}}{4}\)-averaged and \(F(\hat{T})=\Omega_{L,T}^{A+B}\).
Consider, for each \(j\in\mathbb{N}\),
where \(z_{j}= ((I-\lambda_{n_{j}}A)-\gamma_{n_{j}}L^{\ast }(I-T)L )x_{n_{j}}\). Now, the last term in (3.10) is estimated as follows:
for each \(j\in\mathbb{N}\). Thus, it follows that
Next, by using Lemma 2.4, we estimate
for each \(j\in\mathbb{N}\). This suggests to consider the following computation, for each \(j\in\mathbb{N}\):
This implies that \(\{\Vert J_{\lambda _{n_{j}}}^{B}z_{j}-z_{j}\Vert \}\) is a bounded sequence. Consequently, in view of (3.12), we have
Substituting (3.9), (3.11) and (3.13) into (3.10), we get
Thus, by Lemma 2.5, it follows that \(\hat{x}\in F(\hat {T})=\Omega_{L,T}^{A+B}\). This shows \(\omega_{w}(x_{n})\subset\Omega _{L,T}^{A+B}\). Using this one and (a) we can apply Lemma 2.6 to conclude that the sequence \(\{x_{n}\}\) converges weakly to an element in \(\Omega _{L,T}^{A+B}\). This completes the proof. □
Remark 3.3

(i)
If \(A:=0\), the zero operator, then our presented algorithm (3.4) and algorithm (1.7) coincide.

(ii)
If \(H_{1}=H_{2}\) and L is the identity operator, then the problem (1.9) is reduced to the problem of finding an element of the set \((A+B)^{1}0\cap F(T)\). This type of problem was also studied and considered by many authors; see [31–34] for example.
We will discuss more applications of our main Theorem 3.2 in the next section.
Applications
In this section, we will show some applications of the problem (1.9) and Theorem 3.2.
Variational inequality problem
Recall that the normal cone to C at \(u\in C\) is defined as
\(N_{C}(u)= \bigl\{z\in H:\langle z,v-u\rangle\leq0, \forall v\in C \bigr\}.\)
It is well known that \(N_{C}\) is a maximal monotone operator. In the case \(B:=N_{C}:H\rightarrow2^{H}\), we can verify that the problem (1.8) reduces to the problem of finding \(x^{\ast}\in C\) such that
\(\bigl\langle Ax^{\ast},v-x^{\ast}\bigr\rangle\geq0,\quad\forall v\in C.\)
We will write \(\operatorname{VIP}(C,A)\) for the solution set of the problem (4.1). Also, in this case, we have \(J_{\lambda}^{B}=P_{C}\) (the metric projection of H onto C). With the above setting, the problem (1.9) reduces to finding
\(x^{\ast}\in\operatorname{VIP}(C,A) \quad\text{such that}\quad Lx^{\ast}\in F(T).\)
We denote by \(\Omega_{L,T}^{A,C}\) the solution set of the problem (4.2). By applying Theorem 3.2, we obtain the following convergence theorem.
Theorem 4.1
Let \(H_{1}\) and \(H_{2}\) be Hilbert spaces and let C be a nonempty closed convex subset of \(H_{1}\). Let \(A:H_{1}\rightarrow H_{1}\) be a β-ism mapping, \(T:H_{2}\rightarrow H_{2}\) be a nonexpansive mapping and \(L:H_{1}\rightarrow H_{2}\) be a bounded linear operator. For any \(x_{1}\in H_{1}\), define
\(x_{n+1}=P_{C} \bigl((I-\lambda_{n}A)-\gamma_{n}L^{\ast}(I-T)L \bigr)x_{n},\quad n\in\mathbb{N},\)
where the sequences \(\{\lambda_{n}\}\) and \(\{\gamma_{n}\}\) satisfy the following conditions:

(i)
\(0< a\leq\lambda_{n}\leq b_{1}< \frac{\beta}{2}\),

(ii)
\(0< a\leq\gamma_{n}\leq b_{2}< \frac{1}{2\Vert L\Vert^{2}}\).
If \(\Omega_{L,T}^{A,C}\neq\emptyset\) then \(\{x_{n}\}\) converges weakly to an element in \(\Omega_{L,T}^{A,C}\).
Remark 4.2
If \(H_{1}=H_{2}\) and L is the identity operator, the problem (4.2) was also considered by Takahashi and Toyoda in [35].
Convex minimization problem
We will consider a convex function \(f:H\rightarrow\mathbb{R}\), which is also Fréchet differentiable. Let C be a given closed convex subset of H. In this case, by setting \(A:=\nabla f\), the gradient of f, and \(B:=N_{C}\), the problem of finding \(x^{\ast}\in(A+B)^{-1}0\) is equivalent to finding a point \(x^{\ast}\in C\) such that
\(\bigl\langle\nabla f\bigl(x^{\ast}\bigr),v-x^{\ast}\bigr\rangle\geq0,\quad\forall v\in C.\)
Note that (4.3) is equivalent to the following minimization problem: find \(x^{\ast}\in C\) such that
\(f\bigl(x^{\ast}\bigr)=\min_{v\in C}f(v).\)
Thus, in this situation, the problem (1.9) reduces to the problem of finding
\(x^{\ast}\in C \text{ with } f\bigl(x^{\ast}\bigr)=\min_{v\in C}f(v) \quad\text{and}\quad Lx^{\ast}\in F(T).\)
We will denote by \(\Omega_{L,T}^{f,C}\) the solution set of the problem (4.4). Then, by applying Theorem 3.2, we obtain the following result.
Theorem 4.3
Let \(H_{1}\) and \(H_{2}\) be Hilbert spaces and let C be a nonempty closed convex subset of \(H_{1}\). Let \(f:H_{1}\rightarrow\mathbb{R}\) be convex and Fréchet differentiable with ∇f α-Lipschitz, \(T:H_{2}\rightarrow H_{2}\) be a nonexpansive mapping and \(L:H_{1}\rightarrow H_{2}\) be a bounded linear operator. For any \(x_{1}\in H_{1}\), define
\(x_{n+1}=P_{C} \bigl((I-\lambda_{n}\nabla f)-\gamma_{n}L^{\ast}(I-T)L \bigr)x_{n},\quad n\in\mathbb{N},\)
where the sequences \(\{\lambda_{n}\}\) and \(\{\gamma_{n}\}\) satisfy the following conditions:

(i)
\(0< a\leq\lambda_{n}\leq b_{1}< \frac{1}{2\alpha }\),

(ii)
\(0< a\leq\gamma_{n}\leq b_{2}< \frac{1}{2\Vert L\Vert^{2}}\).
If \(\Omega_{L,T}^{f,C}\neq\emptyset\) then \(\{x_{n}\}\) converges weakly to an element in \(\Omega_{L,T}^{f,C}\).
Proof
Note that if \(f:H\rightarrow\mathbb{R}\) is convex and \(\nabla f:H\rightarrow H\) is α-Lipschitz continuous for \(\alpha>0\), then ∇f is \(\frac{1}{\alpha}\)-ism (see [36]). Thus, the required result is obtained immediately from Theorem 3.2. □
Remark 4.4
The problem of finding an element in \(\Omega_{L,T}^{f,C}\), as in Theorem 4.3, was studied by Iiduka [37], when L is the identity operator on \(H_{1}\).
Split common fixed point problem
Let \(V:H_{1}\rightarrow H_{1}\) be a nonexpansive mapping. Then, by Lemma 2.1(iii), we know that \(A:=I-V\) is \(\frac {1}{2}\)-ism. Furthermore, since \(Ax^{\ast}=0\) if and only if \(x^{\ast }\in F(V)\), we see that the problem (1.9) can be reduced to the problem of finding
\(x^{\ast}\in F(V) \quad\text{such that}\quad Lx^{\ast}\in F(T),\)
where \(T:H_{2}\rightarrow H_{2}\) and \(L:H_{1}\rightarrow H_{2}\). We will denote by \(\Omega_{L,T}^{V}\) the solution set of the problem (4.5). This problem is called the split common fixed point problem (SCFP), and was studied by many authors; see [38–41] for example. By applying Theorem 3.2, we can obtain the following result.
Theorem 4.5
Let \(H_{1}\) and \(H_{2}\) be Hilbert spaces. Let \(V:H_{1}\rightarrow H_{1}\) and \(T:H_{2}\rightarrow H_{2}\) be nonexpansive mappings and \(L:H_{1}\rightarrow H_{2}\) a bounded linear operator. For any \(x_{1}\in H_{1}\), define
\(x_{n+1}= \bigl(\bigl(I-\lambda_{n}(I-V)\bigr)-\gamma_{n}L^{\ast}(I-T)L \bigr)x_{n},\quad n\in\mathbb{N},\)
where the sequences \(\{\lambda_{n}\}\) and \(\{\gamma_{n}\}\) satisfy the following conditions:

(i)
\(0< a\leq\lambda_{n}\leq b_{1}< \frac{1}{4}\),

(ii)
\(0< a\leq\gamma_{n}\leq b_{2}< \frac{1}{2\Vert L\Vert^{2}}\).
If \(\Omega_{L,T}^{V}\neq\emptyset\) then \(\{x_{n}\}\) converges weakly to an element in \(\Omega_{L,T}^{V}\).
Proof
We consider \(B:=0\), the zero operator. The required result follows from the fact that the zero operator is monotone and continuous, hence maximal monotone. Moreover, in this case, \(J_{\lambda}^{B}\) is the identity operator on \(H_{1}\), for each \(\lambda>0\). Thus the algorithm (3.4) reduces to (4.6), by setting \(A:=I-V\) and \(B:=0\). □
Remark 4.6
The algorithm (4.6), with \(V:=P_{C}\) and \(T:=P_{Q}\), respectively, can be applied to solving the problem (1.1), which was considered by Xu [4]. In fact, by using Lemma 2.2 in [16], we can show that \(\{P_{C}x_{n}\}\) converges strongly to a solution of the problem (1.1).
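With \(V=P_{C}\) and \(T=P_{Q}\), the iteration (4.6) reads \(x_{n+1}=x_{n}-\lambda_{n}(x_{n}-P_{C}x_{n})-\gamma_{n}L^{\ast}(Lx_{n}-P_{Q}Lx_{n})\). The sketch below runs it on a small hypothetical instance; the sets, the matrix L, and the step sizes are our own choices satisfying the conditions of Theorem 4.5.

```python
import numpy as np

# Hypothetical instance (ours): C = [0,1]^2 in H1 = R^2, L = 2I (so ||L|| = 2),
# and Q a ball of radius 3 around (1, 1) in H2 = R^2.
L = 2.0 * np.eye(2)
center, radius = np.array([1.0, 1.0]), 3.0

def proj_C(x):                      # V = P_C, projection onto the box
    return np.clip(x, 0.0, 1.0)

def proj_Q(y):                      # T = P_Q, projection onto the ball
    d = y - center
    n = np.linalg.norm(d)
    return y if n <= radius else center + radius * d / n

lam, gam = 0.24, 0.12               # lam < 1/4, gam < 1/(2*||L||^2) = 1/8
x = np.array([2.0, -1.0])
for _ in range(300):
    Lx = L @ x
    # x_{n+1} = x_n - lam*(I - P_C)x_n - gam * L^T (I - P_Q) L x_n
    x = x - lam * (x - proj_C(x)) - gam * (L.T @ (Lx - proj_Q(Lx)))
# x approaches a point of C whose image under L lies in Q
```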
Numerical experiments
In this section, we will show some numerical results and discuss possible good choices of the step size parameters \(\lambda_{n}\) and \(\gamma _{n}\) that satisfy the control conditions in Theorem 3.2.
Let \(H_{1}=\mathbb{R}^{2}\) and \(H_{2}=\mathbb{R}^{3}\) be equipped with the Euclidean norm. Let \(\hat{x}:=\bigl( {\scriptsize\begin{matrix}{} 3\cr 2 \end{matrix}}\bigr) \in H_{1} \) be fixed. We consider a 1-ism operator \(P_{C}\), where C is the following convex subset of \(H_{1}\):
Next, for each \(x:= \bigl({\scriptsize\begin{matrix}{} x_{1}\cr x_{2} \end{matrix}}\bigr) \in H_{1} \), we will also concern ourselves with the following two norms:
Consider a function \(f:H_{1}\rightarrow\mathbb{R}\) which is defined by
We know that f is a convex function and its subdifferential, ∂f, is
Moreover, since f is a convex function, it is well known that \(\partial f(\cdot)\) must be a maximal monotone operator.
On the other hand, let
be two fixed vectors in \(H_{2}\). We consider a nonempty convex subset \(Q_{1}\cap Q_{2}\) of \(H_{2}\), where \(Q_{1}:=\{x\in H_{2}:\Vert\tilde{x}-x\Vert\leq3\}\) and \(Q_{2}:=\{x\in H_{2}:\langle\bar{x},x\rangle\leq2\}\). We notice that \(P_{Q_{1}}P_{Q_{2}}\) is a nonexpansive single-valued mapping on \(H_{2}\). Furthermore, since \(Q_{1}\cap Q_{2}\) is a nonempty set, we also know that \(F(P_{Q_{1}}P_{Q_{2}})= Q_{1}\cap Q_{2}\).
Now, let us consider a \(3\times2\) matrix
We see that L is a bounded linear operator on \(H_{1}\) into \(H_{2}\) with \(\Vert L\Vert=1.3330\).
Based on the above settings, we will present some numerical experiments to show the efficiency of the constructed algorithm (3.4). That is, we are going to show that the algorithm (3.4) converges to a point \(x^{\ast}\in H_{1}\) such that
\(0\in(P_{C}+\partial f)x^{\ast} \quad\text{and}\quad Lx^{\ast}\in Q_{1}\cap Q_{2}.\)
We will consider the following five cases of the step size parameters \(\lambda_{n}\) and \(\gamma_{n}\), with the initial vectors \(\bigl({\scriptsize\begin{matrix}{} -1\cr -1 \end{matrix}}\bigr) \), \(\bigl({\scriptsize\begin{matrix}{} 0\cr 0 \end{matrix}}\bigr) \) and \(\bigl({\scriptsize\begin{matrix}{} 1\cr 1 \end{matrix}}\bigr) \) in \(H_{1}\):
 Case 1.:

\(\lambda_{n}=0.25\), \(\gamma_{n}=0.14\).
 Case 2.:

\(\lambda_{n}=1.0e^{-04}+\frac{1}{10n}\), \(\gamma _{n}=1.0e^{-04}+\frac{1}{10n}\).
 Case 3.:

\(\lambda_{n}=1.0e^{-04}+\frac{1}{10n}\), \(\gamma _{n}=0.2799-\frac{1}{10n}\).
 Case 4.:

\(\lambda_{n}=0.4999-\frac{1}{10n}\), \(\gamma _{n}=1.0e^{-04}+\frac{1}{10n}\).
 Case 5.:

\(\lambda_{n}=0.4999-\frac{1}{10n}\), \(\gamma _{n}=0.2799-\frac{1}{10n}\).
Note that the solution set of the problem (5.1) is \(\bigl\{ \bigl({\scriptsize\begin{matrix}{} x\cr\frac{2x-1}{3} \end{matrix}}\bigr) \in H_{1}: \frac{1}{2}\le x\le\frac{79}{56} \bigr\}\). Tables 1, 2 and 3 suggest that larger values of the step size parameters \(\lambda_{n}\) provide faster convergence, while the step size parameters \(\gamma_{n}\) seem to have less impact on the speed with which algorithm (3.4) reaches a solution of the problem (5.1).
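As a self-contained illustration of algorithm (3.4), the following sketch runs it on a different, hypothetical instance (not the data of this section) in which every ingredient is explicit: \(Ax=x-\hat{x}\) (a 1-ism mapping), \(B=N_{C}\) with \(C=[0,1]^{2}\) (so \(J_{\lambda}^{B}=P_{C}\)), \(T=P_{Q}\) for a ball \(Q\subset\mathbb{R}^{3}\) centered at \(L\hat{x}\), and a fixed \(3\times2\) matrix L with \(\Vert L\Vert^{2}=3\).

```python
import numpy as np

xhat = np.array([0.5, 0.5])
L = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # ||L||^2 = 3
center, radius = L @ xhat, 1.0                        # Q = ball around L xhat

def proj_C(x):                     # J_lambda^B for B = N_C, C = [0,1]^2
    return np.clip(x, 0.0, 1.0)

def proj_Q(y):                     # T = P_Q, projection onto the ball
    d = y - center
    n = np.linalg.norm(d)
    return y if n <= radius else center + radius * d / n

lam, gam = 0.45, 0.16      # lam < beta/2 = 1/2, gam < 1/(2*||L||^2) = 1/6
x = np.array([3.0, -2.0])
for _ in range(200):
    Lx = L @ x
    # x_{n+1} = J_lam^B((I - lam*A)x_n - gam * L^T (I - T) L x_n)
    x = proj_C(x - lam * (x - xhat) - gam * (L.T @ (Lx - proj_Q(Lx))))
# For this instance the solution set is the single point xhat, and the
# iterates converge to it.
```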
Remark 5.1
Note that, for each \(x:= \bigl({\scriptsize\begin{matrix}{} x_{1}\cr x_{2} \end{matrix}}\bigr) \in H_{1}\) and \(\lambda>0\), we have
where f is defined as above and sgn denotes the signum function. On the other hand, one can see that computing \(J_{\lambda}^{P_{C}+\partial f}\) directly would be harder.
Concluding remarks
This paper can be considered a refinement of the work of Takahashi et al. [16], providing an algorithm for finding a solution of the main problem (1.9), which is a generalization of the problem considered in [16]. Some sufficient conditions for the weak convergence of the introduced algorithm are given. Also, in order to show the significance of the considered problem, some important applications are discussed. Since this paper focuses on the weak convergence of the constructed algorithm, a natural direction for future research is to study algorithms and sufficient conditions for strong convergence.
References
Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in product space. Numer. Algorithms 8, 221239 (1994)
Byrne, C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441453 (2002)
Censor, Y, Bortfeld, T, Martin, B, Trofimov, A: A unified approach for inversion problems in intensitymodulated radiation therapy. Phys. Med. Biol. 51, 23532365 (2006)
Xu, HK: Iterative methods for the split feasibility problem in infinitedimensional Hilbert spaces. Inverse Probl. 26, Article ID 105018 (2010)
Masad, E, Reich, S: A note on the multipleset split convex feasibility problem in Hilbert space. J. Nonlinear Convex Anal. 8, 367371 (2007)
Martinet, B: Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Opér. 3, 154158 (1970)
Bruck, RE, Reich, S: Nonexpansive projections and resolvents of accretive operators in Banach spaces. Houst. J. Math. 3, 459470 (1977)
Eckstein, J, Bertsckas, DP: On the Douglas Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55, 293318 (1992)
Marino, G, Xu, HK: Convergence of generalized proximal point algorithm. Commun. Pure Appl. Anal. 3, 791808 (2004)
Xu, HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240256 (2002)
Yao, Y, Noor, MA: On convergence criteria of generalized proximal point algorithms. J. Comput. Appl. Math. 217, 4655 (2008)
Byrne, C, Censor, Y, Gibali, A, Reich, S: Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. 13, 759775 (2012)
Cegielski, A: Iterative Methods for Fixed Point Problems in Hilbert Spaces. Lecture Notes in Math., vol. 2057. Springer, Heidelberg (2012)
Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14(5), 877-898 (1976)
Zhang, L, Hao, Y: Fixed point methods for solving solutions of a generalized equilibrium problem. J. Nonlinear Sci. Appl. 9, 149-159 (2016)
Takahashi, W, Xu, HK, Yao, JC: Iterative methods for generalized split feasibility problems in Hilbert spaces. Set-Valued Var. Anal. 23, 205-221 (2015)
Boikanyo, OA: The viscosity approximation forward-backward splitting method for zeros of the sum of monotone operators. Abstr. Appl. Anal. 2016, Article ID 2371857 (2016)
Moudafi, A, Thera, M: Finding a zero of the sum of two maximal monotone operators. J. Optim. Theory Appl. 94(2), 425-448 (1997)
Qin, X, Cho, SY, Wang, L: A regularization method for treating zero points of the sum of two monotone operators. Fixed Point Theory Appl. 2014, Article ID 75 (2014)
Tseng, P: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431-446 (2000)
Rockafellar, RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 149, 75-88 (1970)
Passty, GB: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72, 383-390 (1979)
Goebel, K, Reich, S: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York (1984)
Baillon, JB, Bruck, RE, Reich, S: On the asymptotic behavior of nonexpansive mappings and semigroups in Banach spaces. Houst. J. Math. 4, 1-9 (1978)
Kassay, G, Reich, S, Sabach, S: Iterative methods for solving systems of variational inequalities in reflexive Banach spaces. SIAM J. Optim. 21, 1319-1344 (2011)
Xu, HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360-378 (2011)
Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)
Takahashi, W: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama (2009)
Barbu, V: Nonlinear Semigroups and Differential Equations in Banach Spaces. Noordhoff International Publishing, Leiden (1976)
Takahashi, W: Nonlinear Functional Analysis: Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama (2000)
Li, D, Zhao, J: Approximation of solutions of quasi-variational inclusion and fixed points of nonexpansive mappings. J. Nonlinear Sci. Appl. 9, 152-159 (2016)
Takahashi, S, Takahashi, W, Toyoda, M: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 147, 27-41 (2010)
Yao, Z, Cho, SY, Kang, SM, Zhu, LJ: Approximating iterations for nonexpansive and maximal monotone operators. Abstr. Appl. Anal. 2015, Article ID 451320 (2015)
Zhang, S, Lee, JHW, Chan, CK: Algorithms of common solutions to quasi-variational inclusion and fixed point problems. Appl. Math. Mech. 29(5), 571-581 (2008)
Takahashi, W, Toyoda, M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 118(2), 417-428 (2003)
Baillon, JB, Haddad, G: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 26(2), 137-150 (1977)
Iiduka, H: Fixed point optimization algorithm and its application to network bandwidth allocation. J. Comput. Appl. Math. 236, 1733-1742 (2012)
Cui, H, Wang, F: Iterative methods for the split common fixed point problem in Hilbert spaces. Fixed Point Theory Appl. 2014, Article ID 78 (2014)
Moudafi, A: A note on the split common fixed-point problem for quasi-nonexpansive operators. Nonlinear Anal., Theory Methods Appl. 74, 4083-4087 (2011)
Shimizu, T, Takahashi, W: Strong convergence to common fixed points of families of nonexpansive mappings. J. Math. Anal. Appl. 211, 71-83 (1997)
Zhao, J, He, S: Strong convergence of the viscosity approximation process for the split common fixed-point problem of quasi-nonexpansive mappings. J. Appl. Math. 2012, Article ID 438023 (2012)
Acknowledgements
The authors are thankful to the referees and the editor for their constructive comments and suggestions which have been useful for the improvement of the paper. This research has been funded by Naresuan University and the Thailand Research Fund under the project RTA5780007.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Suwannaprapa, M., Petrot, N. & Suantai, S. Weak convergence theorems for split feasibility problems on zeros of the sum of monotone operators and fixed point sets in Hilbert spaces. Fixed Point Theory Appl 2017, 6 (2016). https://doi.org/10.1186/s13663-017-0599-7
DOI: https://doi.org/10.1186/s13663-017-0599-7
MSC
 26A18
 47H04
 47H05
 47H10
 54A20
Keywords
 split feasibility problems
 maximal monotone operators
 inverse strongly monotone operator
 fixed point problems
 weak convergence theorems