
Weak convergence theorems for split feasibility problems on zeros of the sum of monotone operators and fixed point sets in Hilbert spaces

Abstract

In this paper, we consider a type of split feasibility problem that involves the solution sets of two important problems in the setting of Hilbert spaces: the problem of finding zeros of the sum of monotone operators and the fixed point problem. Assuming the existence of solutions, we provide a suitable algorithm for finding a solution point. Some important applications and numerical experiments of the considered problem and the constructed algorithm are also discussed.

1 Introduction

Many applications of the split feasibility problem (SFP), which was first introduced by Censor and Elfving [1], have appeared in various fields of science and technology, such as signal processing, medical image reconstruction and intensity-modulated radiation therapy; for more information, see [2, 3] and the references therein. In fact, Censor and Elfving [1] studied the SFP in a finite-dimensional space by considering the problem of finding a point

$$\begin{aligned} x^{\ast}\in C \quad\text{such that } Ax^{\ast }\in Q, \end{aligned}$$
(1.1)

where C and Q are nonempty closed convex subsets of \(\mathbb {R}^{n}\), and A is an \(n\times n\) matrix. They proposed the following algorithm: for arbitrary \(x_{1}\in\mathbb{R}^{n}\),

$$\begin{aligned} x_{n+1}=A^{-1}P_{Q}\bigl(P_{A(C)}(Ax_{n}) \bigr), \quad\forall n\in \mathbb{N}, \end{aligned}$$

where \(A(C)=\{y \in\mathbb{R}^{n}\vert y=Ax, \text{ for some } x\in C\}\) and \(P_{Q}\), \(P_{A(C)}\) denote the metric projections onto Q and \(A(C)\), respectively. Notice that this algorithm involves computing matrix inverses, which can be computationally expensive. Consequently, Byrne [2] suggested a new algorithm, which generates a sequence \(\{x_{n}\}\) by using the transpose of the matrix A instead of its inverse: for arbitrary \(x_{1}\in\mathbb{R}^{n}\),

$$\begin{aligned} x_{n+1}=P_{C}\bigl(x_{n}+\gamma A^{t}(P_{Q}-I)Ax_{n}\bigr),\quad \forall n \in\mathbb{N}, \end{aligned}$$
(1.2)

where \(\gamma\in(0,2/\Vert A\Vert^{2})\), A is a real \(m\times n\) matrix, \(A^{t}\) is the transpose of A, and \(P_{C}\) and \(P_{Q}\) denote the metric projections onto C and Q, respectively. Observe that not only is algorithm (1.2) computationally simpler, it can also be used for solving problem (1.1) when C and Q belong to different Euclidean spaces. Later on, inspired by algorithm (1.2), Xu [4] considered the SFP in infinite-dimensional Hilbert spaces: let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces and \(L:H_{1}\rightarrow H_{2}\) be a bounded linear operator. Let C and Q be nonempty closed convex subsets of \(H_{1}\) and \(H_{2}\), respectively. Xu [4] proposed the following algorithm: for a given \(x_{1}\in H_{1}\),

$$\begin{aligned} x_{n+1}=P_{C}\bigl(x_{n}+\gamma L^{\ast}(P_{Q}-I)Lx_{n}\bigr),\quad\forall n\in\mathbb{N}, \end{aligned}$$
(1.3)

where \(\gamma\in(0,2/\Vert L\Vert^{2})\) and \(L^{\ast}\) is the adjoint operator of L. The weak convergence of the sequence \(\{x_{n}\}\) to a solution of SFP was considered; see also [5] for related work.
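For readers who wish to experiment, the following is a minimal numerical sketch of algorithm (1.3); it is an illustration under assumptions of our own, not code from [4]. The concrete sets C (a closed ball) and Q (a half-space), the operator L, the step size and the iteration count are hypothetical choices made only so that the projections have closed forms.

```python
import numpy as np

# A minimal sketch of algorithm (1.3): x_{n+1} = P_C(x_n + gamma*L^T(P_Q - I)L x_n).
# C, Q, L and all numbers below are hypothetical illustration data.

def proj_ball(x, center, radius):
    # Metric projection onto the closed ball B(center, radius).
    d = x - center
    dist = np.linalg.norm(d)
    return x if dist <= radius else center + radius * d / dist

def proj_halfspace(x, a, b):
    # Metric projection onto the half-space {y : <a, y> <= b}.
    s = a @ x - b
    return x if s <= 0 else x - s * a / (a @ a)

L = np.array([[1.0, 0.5],
              [0.5, 1.0 / 3],
              [1.0 / 3, 0.25]])
gamma = 1.0 / np.linalg.norm(L, 2) ** 2   # any gamma in (0, 2/||L||^2)
a, b = np.array([1.0, 1.0, -1.0]), 2.0    # Q = {y : <a, y> <= b}

x = np.array([5.0, -3.0])                 # arbitrary starting point x_1
for _ in range(200):
    y = L @ x
    x = proj_ball(x + gamma * (L.T @ (proj_halfspace(y, a, b) - y)),
                  np.zeros(2), 4.0)       # C = closed ball of radius 4
print(x, L @ x)                           # approximately: x in C and Lx in Q
```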

On the other hand, variational inclusion problems are used as mathematical programming models for a large number of optimization problems arising in finance, economics, networks, transportation and engineering science. The formal statement of a variational inclusion problem is the problem of finding \(x^{\ast}\in H\) such that

$$\begin{aligned} 0\in Bx^{\ast}, \end{aligned}$$
(1.4)

where \(B:H\rightarrow2^{H}\) is a set-valued operator. If B is a maximal monotone operator, the elements of the solution set of problem (1.4) are called the zeros of the maximal monotone operator. This problem was introduced by Martinet [6] and has since been studied by many authors. A popular iterative method for solving problem (1.4) is the following proximal point algorithm: for a given \(x_{1}\in H\),

$$\begin{aligned} x_{n+1}=J_{\lambda_{n}}^{B}x_{n},\quad \forall n\in\mathbb{N}, \end{aligned}$$

where \(\{\lambda_{n}\}\subset(0,\infty)\) and \(J_{\lambda _{n}}^{B}=(I+\lambda_{n}B)^{-1}\) is the resolvent of the considered maximal monotone operator B corresponding to \(\lambda_{n}\); see also [7–11] for more details. Subsequently, inspired by the concept of the SFP, Byrne et al. [12] introduced and studied the following split null point problem (SNPP): given set-valued mappings \(B_{1}:H_{1}\rightarrow2^{H_{1}}\) and \(B_{2}:H_{2}\rightarrow 2^{H_{2}}\) and a bounded linear operator \(L:H_{1}\rightarrow H_{2}\), the SNPP is the problem of finding a point \(x^{\ast}\in H_{1}\) such that

$$\begin{aligned} 0\in B_{1}\bigl(x^{\ast}\bigr)\quad \text{and} \quad 0\in B_{2}\bigl(Lx^{\ast}\bigr). \end{aligned}$$
(1.5)

They considered the following iterative algorithm: for \(\lambda>0\) and an arbitrary \(x_{1}\in H_{1}\),

$$\begin{aligned} x_{n+1}=J_{\lambda}^{B_{1}}\bigl(x_{n}- \gamma L^{\ast}\bigl(I-J_{\lambda }^{B_{2}}\bigr)Lx_{n} \bigr), \quad \forall n\in\mathbb{N}, \end{aligned}$$

where \(L^{\ast}\) is the adjoint of L, \(\gamma\in(0,2/\Vert L\Vert ^{2})\), and \(J_{\lambda}^{B_{1}}\) and \(J_{\lambda}^{B_{2}}\) are the resolvents of the maximal monotone operators \(B_{1}\) and \(B_{2}\), respectively. They proved, under some suitable control conditions, that \(\{x_{n}\}\) converges weakly to a point \(x^{\ast}\) in the solution set of problem (1.5).
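Returning briefly to the proximal point algorithm recalled above, a simple one-dimensional illustration (a toy example of our own, not taken from [6] or [12]) may be helpful: let \(H=\mathbb{R}\) and \(Bx=\{x\}\), a maximal monotone operator whose unique zero is \(x^{\ast}=0\). Then

$$J_{\lambda_{n}}^{B}x=(I+\lambda_{n}B)^{-1}x=\frac{x}{1+\lambda_{n}}, \qquad x_{n+1}=\frac{x_{n}}{1+\lambda_{n}}=x_{1}\prod_{k=1}^{n}\frac{1}{1+\lambda_{k}}, $$

so the iterates converge to \(0\in B^{-1}0\) whenever \(\sum_{n=1}^{\infty}\lambda_{n}=\infty\).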

Turning to a topic related to the above variational inclusion problem, it is well known that fixed point theory is a very powerful and important tool in the study of mathematical models. Many authors have studied the approximation of fixed points of nonlinear mappings by iterative methods and applied the obtained results to many important problems, such as equilibrium problems, null point problems, variational inequality problems and optimization problems; see, for example, [13–15] and the references therein. In view of the SFP and the fixed point problem, Takahashi et al. [16] considered the problem of finding a point \(x^{\ast}\in H_{1}\) such that

$$\begin{aligned} 0\in Bx^{\ast}\quad \text{and}\quad Lx^{\ast}\in F(T), \end{aligned}$$
(1.6)

where \(B:H_{1}\rightarrow2^{H_{1}}\) is a maximal monotone operator, \(L:H_{1}\rightarrow H_{2}\) is a bounded linear operator and \(T:H_{2}\rightarrow H_{2}\) is a nonexpansive mapping. They considered the following iterative algorithm: for any \(x_{1}\in H_{1}\),

$$\begin{aligned} x_{n+1}=J_{\lambda_{n}}^{B}\bigl(I- \gamma_{n}L^{\ast}(I-T)L\bigr)x_{n},\quad \forall n\in\mathbb{N}, \end{aligned}$$
(1.7)

where \(\{\lambda_{n}\}\) and \(\{\gamma_{n}\}\) satisfy some suitable control conditions and \(J_{\lambda_{n}}^{B}\) is the resolvent of the maximal monotone operator B associated to \(\lambda_{n}\), and proved weak convergence of algorithm (1.7) to a point \(x^{\ast}\in B^{-1}0\cap L^{-1}F(T)\).

One may note that finding the zeros of a maximal monotone operator amounts to finding a fixed point of its resolvent. This is because \(0\in Bx^{\ast}\) if and only if \(J_{\lambda}^{B}x^{\ast}=x^{\ast}\), when \(B:H\rightarrow2^{H}\) is a maximal monotone operator and \(\lambda>0\). Thus problems of type (1.6) contain the SNPP (1.5) as a special case in some sense.

Now, let us return to the variational inclusion problem (1.4). A generalization of problem (1.4) is to find a point \(x^{\ast}\in H\) such that

$$\begin{aligned} 0\in Ax^{\ast}+Bx^{\ast}, \end{aligned}$$
(1.8)

where \(A:H\rightarrow H\) is a single-valued mapping and \(B:H\rightarrow 2^{H}\) is a set-valued operator. It is well known that problem (1.8) has many kinds of applications, such as evolution equations, complementarity problems, minimax problems, variational inequalities and optimization problems; see, for example, [17–20] and the references therein. We will discuss some of them in Section 4. In the case that \(B:H\rightarrow 2^{H}\) is a set-valued monotone operator and \(A:H\rightarrow H\) is a single-valued monotone operator, the elements of the solution set of problem (1.8) are called the zeros of the sum of monotone operators.

In this paper, motivated and inspired by the above literature, we are going to consider the problem of finding a point \(x^{\ast}\in H\) such that

$$\begin{aligned} 0\in(A+B)x^{\ast}\quad \text{and}\quad Lx^{\ast }\in F(T), \end{aligned}$$
(1.9)

where \(A:H_{1}\rightarrow H_{1}\) is a monotone operator, \(B:H_{1}\rightarrow2^{H_{1}}\) is a maximal monotone operator, \(L:H_{1}\rightarrow H_{2}\) is a bounded linear operator and \(T:H_{2}\rightarrow H_{2}\) is a nonexpansive mapping. We will write \(\Omega_{L,T}^{A+B}\) for the solution set of the problem (1.9), and show that the algorithm

$$\begin{aligned} x_{n+1} = J_{\lambda_{n}}^{B} \bigl((I- \lambda_{n}A)-\gamma _{n}L^{\ast }(I-T)L \bigr)x_{n}, \quad\forall n\in\mathbb{N}, \end{aligned}$$
(1.10)

converges weakly to an element in \(\Omega_{L,T}^{A+B}\). We would like to point out that, in fact, the problem (1.9) can be written in the form of the problem (1.6). Moreover, since in our setting B is a maximal monotone operator and A is a continuous monotone operator, the operator \(A+B\) is maximal monotone (see [21]), and hence one may try to find a solution of the problem (1.9) by using the algorithm (1.7). However, it has been observed that the inverse of \(I+\lambda(A+B)\) may be hard to compute; see [18] for example. Consequently, a popular iterative method used for solving problems of type (1.8) is the forward-backward splitting method, which defines a sequence \(\{x_{n}\}\) by the following algorithm: for any \(x_{1}\in H\),

$$\begin{aligned} x_{n+1}=J_{\lambda_{n}}^{B}(I- \lambda_{n}A)x_{n}, \quad \forall n\in\mathbb{N}, \end{aligned}$$
(1.11)

where \(\{\lambda_{n}\}\) is a sequence of positive real numbers, \(A:H\rightarrow H\) is a single-valued monotone operator and \(B:H\rightarrow2^{H}\) is a maximal monotone operator; see Passty [22]. Of course, our proposed algorithm (1.10) is also motivated by (1.11).
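As an illustration of the forward-backward splitting method (1.11), the following sketch applies it to a lasso-type problem; this instance is a choice of ours (the classical ISTA setting), not the general framework of Passty [22]. Here \(A=\nabla g\) with \(g(x)=\frac{1}{2}\Vert Mx-b\Vert^{2}\), which is \(\Vert M\Vert^{-2}\)-ism by the Baillon-Haddad theorem [36], and \(B=\partial(\mu\Vert\cdot\Vert_{1})\), whose resolvent is the componentwise soft-thresholding operator; the data M, b, μ and the step size are hypothetical.

```python
import numpy as np

# Forward-backward splitting (1.11) for min 0.5*||M x - b||^2 + mu*||x||_1:
# A := grad g = M^T(M x - b), and J_lam^B = soft-thresholding at level lam*mu.

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
mu = 0.1

def soft_threshold(x, t):
    # Resolvent J_t^B of B = subdifferential of mu*||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t * mu, 0.0)

lam = 1.0 / np.linalg.norm(M, 2) ** 2     # constant step size lambda_n
x = np.zeros(10)
for _ in range(500):
    x = soft_threshold(x - lam * (M.T @ (M @ x - b)), lam)   # iteration (1.11)
print(x)
```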

2 Preliminaries

Throughout this paper, we denote by \(\mathbb{N}\) the set of positive integers, and by \(\mathbb{R}\) the set of real numbers. Let H be a real Hilbert space with the inner product \(\langle\cdot,\cdot\rangle\) and the norm \(\Vert\cdot\Vert\), respectively. When \(\{x_{n}\}\) is a sequence in H, we denote the weak convergence of \(\{x_{n}\}\) to x in H by \(x_{n}\rightharpoonup x\).

Let \(T:H\rightarrow H\) be a mapping. We say that T is a Lipschitz mapping if there exists \(L\geq0\) such that

$$\|Tx-Ty\|\leq L\|x-y\|, \quad\forall x,y\in H. $$

The number L, associated with T, is called a Lipschitz constant. If \(L=1\), we say that T is a nonexpansive mapping, that is,

$$\|Tx-Ty\|\leq\|x-y\|,\quad \forall x,y\in H. $$

We will say that T is firmly nonexpansive if

$$\langle Tx-Ty,x-y\rangle\geq\|Tx-Ty\|^{2}, \quad\forall x,y\in H. $$

Note that T is firmly nonexpansive if and only if \(T=(I+V)/2\) for some nonexpansive mapping V; see [23], Proposition 11.2.

The set of fixed points of T will be denoted by \(F(T)\), that is, \(F(T)=\{x\in H : Tx=x\}\). It is well known that, if T is nonexpansive, then \(F(T)\) is closed and convex. Moreover, if T is a firmly nonexpansive mapping on H into itself with \(F(T)\neq\emptyset \), then we have

$$\begin{aligned} \langle x-Tx,y-Tx\rangle\leq0 \quad \text{for all } x\in H \text{ and } y\in F(T); \end{aligned}$$
(2.1)

see [16].

A mapping \(T:H\rightarrow H\) is said to be an averaged mapping if it can be written as the average of the identity I and a nonexpansive mapping, that is,

$$\begin{aligned} T=(1-\alpha)I+\alpha S, \end{aligned}$$
(2.2)

where \(\alpha\in(0,1)\) and \(S:H\rightarrow H\) is a nonexpansive mapping; see [24]. More precisely, when (2.2) holds, we say that T is α-averaged. It should be observed that firmly nonexpansive mappings are \(\frac{1}{2}\)-averaged mappings.

Let \(A:H\rightarrow H\) be a single-valued mapping. For a positive real number β, we will say that A is β-inverse strongly monotone (β-ism) if

$$\langle Ax-Ay,x-y\rangle\geq\beta\|Ax-Ay\|^{2},\quad\forall x,y\in H. $$

The class of inverse strongly monotone mappings has been studied by many authors; see [17, 25, 26].

We now collect some important properties, which are needed in this work.

Lemma 2.1

[17, 26]

We have

  1. (i)

    The composite of finitely many averaged mappings is averaged. In particular, if \(T_{i}\) is \(\alpha_{i}\)-averaged, where \(\alpha_{i}\in(0,1)\) for \(i=1,2\), then the composite \(T_{1}T_{2}\) is α-averaged, where \(\alpha=\alpha_{1}+\alpha_{2}-\alpha _{1}\alpha_{2}\).

  2. (ii)

    If A is β-ism and \(r\in(0,\beta]\), then \(T:=I-rA\) is firmly nonexpansive.

  3. (iii)

    A mapping \(T:H\rightarrow H\) is nonexpansive if and only if \(I-T\) is \(\frac{1}{2}\)-ism.

  4. (iv)

    If A is β-ism, then, for \(\gamma>0\), γA is \(\frac{\beta}{\gamma}\)-ism.

  5. (v)

    T is averaged if and only if the complement \(I-T\) is β-ism for some \(\beta>\frac{1}{2}\). Indeed, for \(\alpha\in (0,1)\), T is α-averaged if and only if \(I-T\) is \(\frac {1}{2\alpha}\)-ism.

The following result can be found in [27]; here we modify the presentation to show a finer conclusion for the considered mapping T.

Lemma 2.2

[27]

Let \(T=(1-\alpha)A+\alpha N\) for some \(\alpha\in(0,1)\). If A is β-averaged and N is nonexpansive then T is \(\alpha +(1-\alpha)\beta\)-averaged.

Proof

Since A is a β-averaged mapping, there is a nonexpansive mapping S such that \(A=(1-\beta)I+\beta S\). We see that

$$\begin{aligned} T =&(1-\alpha)A+\alpha N \\ =&(1-\alpha) \bigl[(1-\beta)I+\beta S \bigr]+\alpha N \\ =&(1-\alpha) (1-\beta)I+(1-\alpha)\beta S+\alpha N \\ =&(1-\delta)I+\delta \bigl[(1-\alpha)\beta\delta^{-1}S+\alpha \delta ^{-1}N \bigr], \end{aligned}$$

where \(\delta:=\alpha+(1-\alpha)\beta\). Note that \((1-\alpha)\beta \delta ^{-1}+\alpha\delta^{-1}=1\); it follows that \((1-\alpha)\beta \delta ^{-1}S+\alpha\delta^{-1}N\), being a convex combination of nonexpansive mappings, is nonexpansive. This means that T is a δ-averaged mapping. □

Let \(B:H\rightarrow2^{H}\) be a set-valued mapping. The effective domain of B is denoted by \(D(B)\), that is, \(D(B)=\{x\in H:Bx\neq \emptyset\}\). Recall that B is said to be monotone if

$$\langle x-y,u-v\rangle\geq0, \quad\forall x,y\in D(B),u\in Bx,v\in By. $$

A monotone mapping B is said to be maximal if its graph is not properly contained in the graph of any other monotone operator. For a maximal monotone operator \(B:H\rightarrow2^{H}\) and \(r>0\), the resolvent \(J_{r}^{B}\) is defined by

$$J_{r}^{B}:= (I+rB)^{-1}:H\rightarrow D(B). $$

It is well known that, if B is a maximal monotone operator and r is a positive number, then the resolvent \(J_{r}^{B}\) is single-valued and firmly nonexpansive, and \(F(J_{r}^{B})=B^{-1}0\equiv\{x \in H: 0\in Bx\} \), \(\forall r>0\); see [16, 23, 28].
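As a concrete example (a standard fact that previews Section 4.1): if \(B=N_{C}\) is the normal cone operator of a nonempty closed convex subset C of H, then, for every \(r>0\) and \(x\in H\),

$$u=J_{r}^{N_{C}}x \quad\Longleftrightarrow\quad x-u\in rN_{C}(u) \quad\Longleftrightarrow\quad u\in C \text{ and } \langle x-u,y-u\rangle\leq0,\ \forall y\in C, $$

which is precisely the characterization of the metric projection; hence \(J_{r}^{N_{C}}=P_{C}\) for all \(r>0\).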

We use the following lemmas for proving the main result.

Lemma 2.3

[16]

Let \(H_{1}\) and \(H_{2}\) be Hilbert spaces. Let \(L:H_{1}\rightarrow H_{2}\) be a nonzero bounded linear operator and \(T:H_{2}\rightarrow H_{2}\) be a nonexpansive mapping. If \(B:H_{1}\rightarrow2^{H_{1}}\) is a maximal monotone operator, then

  1. (i)

    \(L^{\ast}(I-T)L\) is \(\frac{1}{2\Vert L\Vert^{2}}\)-ism,

  2. (ii)

    for \(0< r<\frac{1}{\Vert L\Vert^{2}}\),

    1. (iia)

      \(I-rL^{\ast}(I-T)L\) is \(r\Vert L\Vert ^{2}\)-averaged,

    2. (iib)

      \(J_{\lambda}^{B}(I-rL^{\ast}(I-T)L)\) is \(\frac {1+r\Vert L\Vert^{2}}{2}\)-averaged, for \(\lambda>0\),

  3. (iii)

    if \(r=\Vert L\Vert^{-2}\), then \(I-rL^{\ast}(I-T)L\) is nonexpansive.

Lemma 2.4

[29]

Let \(B:H\rightarrow2^{H}\) be a maximal monotone operator with the resolvent \(J_{\lambda}^{B}=(I+\lambda B)^{-1}\) for \(\lambda >0\). Then we have the following resolvent identity:

$$J_{\lambda}^{B}x=J_{\mu}^{B} \biggl( \frac{\mu}{\lambda}x+ \biggl(1-\frac{\mu }{\lambda} \biggr)J_{\lambda}^{B}x \biggr), $$

for all \(\mu>0\) and \(x\in H\).

Lemma 2.5

[30]

Let C be a closed convex subset of a Hilbert space H and let T be a nonexpansive mapping of C into itself. Then \(U:=I-T\) is demiclosed, i.e., \(x_{n}\rightharpoonup x_{0}\) and \(Ux_{n}\rightarrow y_{0}\) imply \(Ux_{0}=y_{0}\).

Lemma 2.6

[16]

Let H be a Hilbert space and let \(\{x_{n}\}\) be a sequence in H such that there exists a nonempty closed convex subset \(C\subset H\) satisfying the properties:

  1. (i)

    for every \(x^{\ast}\in C\), \(\lim_{n\rightarrow\infty }\Vert x_{n}-x^{\ast}\Vert\) exists;

  2. (ii)

    if a subsequence \(\{x_{n_{j}}\}\subset\{x_{n}\}\) converges weakly to \(x^{\ast}\), then \(x^{\ast}\in C\).

Then there exists \(x_{0}\in C\) such that \(x_{n}\rightharpoonup x_{0}\).

The following fundamental identity is also used in our proof:

$$\begin{aligned} \big\Vert \lambda x+(1-\lambda)y\big\Vert ^{2}=\lambda\Vert x\Vert ^{2}+(1-\lambda )\Vert y\Vert^{2}-\lambda(1-\lambda)\Vert x-y \Vert^{2}, \end{aligned}$$
(2.3)

for all \(x,y\in H\) and \(\lambda\in\mathbb{R}\); see [28].
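For completeness, identity (2.3) can be checked by expanding both sides in terms of inner products:

$$\big\Vert \lambda x+(1-\lambda)y\big\Vert ^{2}=\lambda^{2}\Vert x\Vert^{2}+2\lambda(1-\lambda)\langle x,y\rangle+(1-\lambda)^{2}\Vert y\Vert^{2}, $$

while, using \(\Vert x-y\Vert^{2}=\Vert x\Vert^{2}-2\langle x,y\rangle+\Vert y\Vert^{2}\) together with \(\lambda-\lambda(1-\lambda)=\lambda^{2}\) and \((1-\lambda)-\lambda(1-\lambda)=(1-\lambda)^{2}\), the right-hand side of (2.3) reduces to the same expression.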

3 Main results

We start by considering an equivalence theorem.

Theorem 3.1

Let \(H_{1}\) and \(H_{2}\) be Hilbert spaces. Let \(A:H_{1}\rightarrow H_{1}\) be a β-ism, \(B:H_{1}\rightarrow2^{H_{1}}\) a maximal monotone operator, \(T:H_{2}\rightarrow H_{2}\) a nonexpansive mapping and \(L:H_{1}\rightarrow H_{2}\) a bounded linear operator. If \(\Omega _{L,T}^{A+B}\neq\emptyset\) then the following are equivalent:

  1. (i)

    \(z\in\Omega_{L,T}^{A+B}\),

  2. (ii)

    \(z=J_{\lambda}^{B} ((I-\lambda A)-\gamma L^{\ast }(I-T)L )z\),

  3. (iii)

    \(0\in L^{\ast}(I-T)Lz+(A+B)z\),

where λ, \(\gamma>0\) and \(z\in H_{1}\).

Proof

Since \(\Omega_{L,T}^{A+B}\neq\emptyset\), there exists \(z_{0}\in D(B)\) such that \(0\in(A+B)z_{0}\) and \(Lz_{0}\in F(T)\). Let us put \(S=\frac{1}{2}(I+T)\). It follows that S is a firmly nonexpansive mapping and \(F(T)=F(S)\). Moreover, we have \(L^{\ast}(I-T)L=2L^{\ast}(I-S)L\).

(i) ⇒ (ii) Assume that \(z\in\Omega_{L,T}^{A+B}\). It follows that \(Lz\in F(T)\) and \(z\in(A+B)^{-1}0\). These findings imply that \(L^{\ast}(I-T)Lz=0\) and \(J_{\lambda}^{B}(I-\lambda A)z=z\). Thus we have

$$J_{\lambda}^{B} \bigl((I-\lambda A)-\gamma L^{\ast}(I-T)L \bigr)z=J_{\lambda }^{B}(I-\lambda A)z=z. $$

(ii) ⇒ (iii) By (ii) and the above statement, we have \(z=J_{\lambda}^{B} ((I-\lambda A)-2\gamma L^{\ast}(I-S)L )z\). This means \((I+\lambda B)z\ni ((I-\lambda A)-2\gamma L^{\ast }(I-S)L )z\), which implies

$$-\frac{2\gamma}{\lambda}L^{\ast}(I-S)Lz\in(A+B)z. $$

Since \(A+B\) is monotone and \(0\in(A+B)z_{0}\), we obtain

$$\biggl\langle -\frac{2\gamma}{\lambda}L^{\ast}(I-S)Lz,z-z_{0} \biggr\rangle \geq0. $$

Subsequently, we have

$$\begin{aligned} \langle Lz-SLz,Lz-Lz_{0}\rangle \leq& 0. \end{aligned}$$
(3.1)

On the other hand, since S is a firmly nonexpansive mapping and \(Lz_{0}\in F(S)\), in view of (2.1) we see that

$$\begin{aligned} \langle Lz-SLz,Lz_{0}-SLz\rangle \leq& 0. \end{aligned}$$
(3.2)

Adding up (3.1) and (3.2), we have

$$\|Lz-SLz\|^{2}=\langle Lz-SLz,Lz-SLz\rangle\leq0. $$

That is, \(Lz\in F(S)\), and it follows that \((I-T)Lz=0\), which implies \(L^{\ast}(I-T)Lz=0\). Using this, we see that (ii) reduces to \(z=J_{\lambda}^{B}(I-\lambda A)z\), which is equivalent to \(0\in(A+B)z\). Thus we conclude that \(0\in L^{\ast}(I-T)Lz+(A+B)z\).

(iii) ⇒ (i) By (iii), we have \(-L^{\ast}(I-T)Lz\in(A+B)z\), equivalently, \(-2L^{\ast}(I-S)Lz\in(A+B)z\). By the monotonicity of \(A+B\), we get

$$\bigl\langle -2L^{\ast}(I-S)Lz,z-z_{0}\bigr\rangle \geq0, $$

since \(0\in(A+B)z_{0}\). This implies

$$\begin{aligned} \langle Lz-SLz,Lz-Lz_{0}\rangle\leq0. \end{aligned}$$
(3.3)

Adding (3.2) and (3.3), we get

$$\|Lz-SLz\|^{2}=\langle Lz-SLz,Lz-SLz\rangle\leq0. $$

This shows \(Lz\in F(S)\). Then it follows that \(z\in L^{-1}F(T)\) and \(L^{\ast}(I-T)Lz=0\). Hence the assumption \(0\in L^{\ast}(I-T)Lz+(A+B)z\) is reduced to the relation \(0\in(A+B)z\), that is, \(z\in(A+B)^{-1}0\). Consequently, we have \(z\in\Omega_{L,T}^{A+B}\). These results complete the proof. □

Now, in view of Theorem 3.1, we are in a position to present our main algorithm and show its convergence theorem.

Theorem 3.2

Let \(H_{1}\) and \(H_{2}\) be Hilbert spaces. Let \(A:H_{1}\rightarrow H_{1}\) be a β-ism, \(B:H_{1}\rightarrow2^{H_{1}}\) a maximal monotone operator, \(T:H_{2}\rightarrow H_{2}\) a nonexpansive mapping and \(L:H_{1}\rightarrow H_{2}\) a bounded linear operator. For any \(x_{1}\in H_{1}\), define

$$\begin{aligned} x_{n+1} = J_{\lambda_{n}}^{B} \bigl((I- \lambda_{n}A)-\gamma _{n}L^{\ast }(I-T)L \bigr)x_{n},\quad\forall n\in\mathbb{N}, \end{aligned}$$
(3.4)

where the sequences \(\{\lambda_{n}\}\) and \(\{\gamma_{n}\}\) satisfy the following control conditions:

  1. (i)

    \(0< a\leq\lambda_{n}\leq b_{1}< \frac{\beta}{2}\),

  2. (ii)

    \(0< a\leq\gamma_{n}\leq b_{2}< \frac{1}{2\|L\| ^{2}}\),

for some \(a, b_{1}, b_{2}\in\mathbb{R}\). If \(\Omega _{L,T}^{A+B}\neq\emptyset\) then \(\{x_{n}\}\) converges weakly to an element in \(\Omega_{L,T}^{A+B}\).

Proof

Set

$$\begin{aligned} T_{n}:=J_{\lambda_{n}}^{B} \bigl((I- \lambda_{n}A)-\gamma_{n}L^{\ast }(I-T)L \bigr), \end{aligned}$$
(3.5)

for each \(n\in\mathbb{N}\). By Theorem 3.1, we have \(\Omega _{L,T}^{A+B}=F(T_{n})\), for all \(n\in\mathbb{N}\).

Note that, for each \(n\in\mathbb{N}\), we have

$$\begin{aligned} (I-\lambda_{n}A)-\gamma_{n}L^{\ast}(I-T)L =& \frac{1}{2}(I-2\lambda _{n}A)+\frac{1}{2} \bigl(I-2 \gamma_{n}L^{\ast}(I-T)L \bigr). \end{aligned}$$

Also, by condition (i) and Lemma 2.1(ii), we know that \(I-2\lambda_{n}A\) is a firmly nonexpansive mapping and this implies that \(I-2\lambda_{n}A\) must be a nonexpansive mapping. On the other hand, by Lemma 2.3(iia), we know that \(I-2\gamma_{n}L^{\ast }(I-T)L\) is \(2\gamma_{n}\|L\|^{2}\)-averaged. Thus, by Lemma 2.2, we see that \((I-\lambda_{n}A)-\gamma_{n}L^{\ast}(I-T)L\) is \(\frac{1+2\gamma_{n}\|L\|^{2}}{2}\)-averaged. Consequently, since \(J_{\lambda_{n}}^{B}\) is \(\frac{1}{2}\)-averaged, by Lemma 2.1(i) we see that \(T_{n}\) is \(\frac{3+2\gamma_{n}\|L\| ^{2}}{4}\)-averaged. Thus, for each \(n\in\mathbb{N}\), we can write

$$T_{n}=(1-\alpha_{n})I+\alpha_{n}V_{n}, $$

where \(\alpha_{n}:=\frac{3+2\gamma_{n}\|L\|^{2}}{4}\) and \(V_{n}\) is a nonexpansive mapping.

Subsequently, we also have \(\Omega_{L,T}^{A+B}=F(T_{n})=F(V_{n})\), for all \(n\in\mathbb{N}\). Using this fact, for each \(x^{\ast}\in\Omega _{L,T}^{A+B}\), we see that

$$\begin{aligned} \big\| x_{n+1}-x^{\ast}\big\| ^{2} =& \big\| T_{n}x_{n}-x^{\ast}\big\| ^{2} \\ =&\big\| (1-\alpha_{n})x_{n}+\alpha_{n}V_{n}x_{n}-x^{\ast} \big\| ^{2} \\ =&\big\| (1-\alpha_{n}) \bigl(x_{n}-x^{\ast} \bigr)+\alpha_{n}\bigl(V_{n}x_{n}-x^{\ast } \bigr)\big\| ^{2} \\ =&(1-\alpha_{n})\big\| x_{n}-x^{\ast} \big\| ^{2}+\alpha_{n}\big\| V_{n}x_{n}-x^{\ast } \big\| ^{2}-\alpha_{n}(1-\alpha_{n})\|x_{n}-V_{n}x_{n} \|^{2} \\ \leq&\big\| x_{n}-x^{\ast}\big\| ^{2}-\alpha_{n}(1- \alpha_{n})\| x_{n}-V_{n}x_{n} \|^{2}, \end{aligned}$$
(3.6)

for each \(n\in\mathbb{N}\). Since \(I-T_{n}=\alpha_{n}(I-V_{n})\), in view of (3.6) we get

$$\big\| x_{n+1}-x^{\ast}\big\| ^{2}\leq\big\| x_{n}-x^{\ast} \big\| ^{2}-\frac{1-\alpha _{n}}{\alpha_{n}}\|x_{n}-T_{n}x_{n} \|^{2}, $$

for each \(n\in\mathbb{N}\). Thus

$$\begin{aligned} \frac{1-\alpha_{n}}{\alpha_{n}}\|x_{n}-T_{n}x_{n} \|^{2} \leq&\big\| x_{n}-x^{\ast}\big\| ^{2}- \big\| x_{n+1}-x^{\ast}\big\| ^{2}, \end{aligned}$$
(3.7)

for each \(n\in\mathbb{N}\). Since \(\alpha_{n}=\frac{3+2\gamma_{n}\| L\| ^{2}}{4}\in (\frac{3}{4},1 )\), we see that (3.7) implies

$$\begin{aligned} 0\leq\big\| x_{n}-x^{\ast}\big\| ^{2}- \big\| x_{n+1}-x^{\ast}\big\| ^{2}, \end{aligned}$$
(3.8)

for each \(n\in\mathbb{N}\). Thus, by (3.7) and (3.8), we obtain

  1. (a)

    for each \(x^{\ast}\in\Omega_{L,T}^{A+B}\), \(\lim_{n\rightarrow\infty}\|x_{n}-x^{\ast}\|\) exists;

  2. (b)

    \(\sum_{n=1}^{\infty}(1-2\gamma_{n}\|L\|^{2})\| x_{n}-T_{n}x_{n}\|^{2}<\infty\).

Consequently, by condition (ii) and (b), we must have \(\sum_{n=1}^{\infty}\|x_{n}-T_{n}x_{n}\|^{2}<\infty\). It turns out that

$$\begin{aligned} \lim_{n\rightarrow\infty}\|x_{n}-x_{n+1}\|= \lim_{n\rightarrow \infty}\| x_{n}-T_{n}x_{n}\|=0. \end{aligned}$$
(3.9)

Next, we denote by \(\omega_{w}(x_{n})\) the set of all weak cluster points of \(\{x_{n}\}\). Let \(\{x_{n_{j}}\}\) be a subsequence of \(\{x_{n}\}\) with \(x_{n_{j}}\rightharpoonup\hat{x}\), for some \(\hat{x}\in\omega_{w}(x_{n})\). By passing to a further subsequence if necessary, we may assume that \(\lambda_{n_{j}}\rightarrow\hat{\lambda}\in (0,\frac{\beta}{2})\) and \(\gamma_{n_{j}}\rightarrow\hat{\gamma}\in (0,\frac{1}{2\Vert L\Vert^{2}})\).

Set

$$\hat{T}=J_{\hat{\lambda}}^{B} \bigl((I-\hat{\lambda}A)-\hat{\gamma }L^{\ast }(I-T)L \bigr). $$

It follows that \(\hat{T}\) is \(\frac{3+2\hat{\gamma}\|L\|^{2}}{4}\)-averaged and \(F(\hat{T})=\Omega_{L,T}^{A+B}\).

Consider, for each \(j\in\mathbb{N}\),

$$\begin{aligned} \|x_{n_{j}}-\hat{T}x_{n_{j}}\| \leq& \|x_{n_{j}}-x_{n_{j}+1}\|+\| T_{n_{j}}x_{n_{j}}- \hat{T}x_{n_{j}}\| \\ \leq&\|x_{n_{j}}-x_{n_{j}+1}\|+\big\| J_{\lambda _{n_{j}}}^{B}z_{j}-J_{\hat {\lambda}}^{B}z_{j} \big\| \\ &+\big\| J_{\hat{\lambda}}^{B}z_{j}-\hat{T}x_{n_{j}}\big\| , \end{aligned}$$
(3.10)

where \(z_{j}= ((I-\lambda_{n_{j}}A)-\gamma_{n_{j}}L^{\ast }(I-T)L )x_{n_{j}}\). Now, the last term in (3.10) is estimated as follows:

$$\begin{aligned} \big\| J_{\hat{\lambda}}^{B}z_{j}-\hat{T}x_{n_{j}} \big\| =&\big\| J_{\hat{\lambda }}^{B} \bigl((I-\lambda_{n_{j}}A)- \gamma_{n_{j}}L^{\ast}(I-T)L \bigr)x_{n_{j}} -J_{\hat{\lambda}}^{B} \bigl((I-\hat{\lambda}A)-\hat{\gamma }L^{\ast }(I-T)L \bigr)x_{n_{j}}\big\| \\ \leq&\big\Vert \bigl((I-\lambda_{n_{j}}A)-\gamma_{n_{j}}L^{\ast }(I-T)L \bigr)x_{n_{j}}- \bigl((I-\hat{\lambda}A)-\hat{\gamma}L^{\ast}(I-T)L \bigr)x_{n_{j}}\big\Vert \\ \leq&\big\Vert (\lambda_{n_{j}}- \hat{\lambda})Ax_{n_{j}}\big\Vert +\big\Vert (\gamma _{n_{j}}-\hat{ \gamma})L^{\ast}(I-T)Lx_{n_{j}}\big\Vert \\ \leq&\vert\lambda_{n_{j}}-\hat{ \lambda}\vert\Vert Ax_{n_{j}}\Vert +2\vert \gamma_{n_{j}}-\hat{\gamma} \vert\big\Vert L^{\ast}\big\Vert \Vert L\Vert \big\Vert x_{n_{j}}-x^{\ast} \big\Vert , \end{aligned}$$

for each \(j\in\mathbb{N}\). Thus, it follows that

$$\begin{aligned} \lim_{j\rightarrow\infty}\big\| J_{\hat{\lambda}}^{B}z_{j}- \hat {T}x_{n_{j}}\big\| =0. \end{aligned}$$
(3.11)

Next, by using Lemma 2.4, we estimate

$$\begin{aligned} \big\| J_{\lambda_{n_{j}}}^{B}z_{j}-J_{\hat{\lambda}}^{B}z_{j} \big\| =& \biggl\Vert J_{\hat{\lambda}}^{B} \biggl(\frac{\hat{\lambda}}{\lambda_{n_{j}}}z_{j}+ \biggl(1-\frac{\hat{\lambda}}{\lambda_{n_{j}}} \biggr)J_{\lambda_{n_{j}}}^{B}z_{j} \biggr)-J_{\hat{\lambda}}^{B}z_{j} \biggr\Vert \\ \leq& \biggl\Vert \frac{\hat{\lambda}}{\lambda_{n_{j}}}z_{j}+ \biggl(1-\frac{\hat{\lambda}}{\lambda_{n_{j}}} \biggr)J_{\lambda_{n_{j}}}^{B}z_{j}-z_{j} \biggr\Vert \\ =& \biggl\Vert \biggl(1-\frac{\hat{\lambda}}{\lambda_{n_{j}}} \biggr)J_{\lambda_{n_{j}}}^{B}z_{j}- \biggl(1-\frac{\hat{\lambda}}{\lambda_{n_{j}}} \biggr)z_{j} \biggr\Vert \\ =& \biggl\Vert \biggl(1-\frac{\hat{\lambda}}{\lambda_{n_{j}}} \biggr) \bigl(J_{\lambda_{n_{j}}}^{B}z_{j}-z_{j}\bigr) \biggr\Vert \\ =& \biggl\vert 1-\frac{\hat{\lambda}}{\lambda_{n_{j}}} \biggr\vert \bigl\Vert J_{\lambda_{n_{j}}}^{B}z_{j}-z_{j} \bigr\Vert , \end{aligned}$$
(3.12)

for each \(j\in\mathbb{N}\). This suggests to consider the following computation, for each \(j\in\mathbb{N}\):

$$\begin{aligned} \bigl\Vert J_{\lambda_{n_{j}}}^{B}z_{j}-z_{j} \bigr\Vert =&\Vert T_{n_{j}}x_{n_{j}}-z_{j}\Vert \\ =&\big\Vert x_{n_{j}+1}-x_{n_{j}}+\lambda_{n_{j}}Ax_{n_{j}}+ \gamma _{n_{j}}L^{\ast}(I-T)Lx_{n_{j}}\big\Vert \\ \leq&\Vert x_{n_{j}+1}-x_{n_{j}}\Vert+ \lambda_{n_{j}}\Vert Ax_{n_{j}}\Vert+\gamma_{n_{j}}\big\Vert L^{\ast}(I-T)Lx_{n_{j}}\big\Vert \\ \leq&\Vert x_{n_{j}+1}-x_{n_{j}}\Vert+ \lambda_{n_{j}}\Vert Ax_{n_{j}}\Vert+2\gamma_{n_{j}}\big\Vert L^{\ast}\big\Vert \Vert L\Vert \big\Vert x_{n_{j}}-x^{\ast}\big\Vert . \end{aligned}$$

This implies that \(\{\Vert J_{\lambda_{n_{j}}}^{B}z_{j}-z_{j}\Vert\}\) is a bounded sequence. Consequently, in view of (3.12), we have

$$\begin{aligned} \lim_{j\rightarrow\infty}\big\| J_{\lambda_{n_{j}}}^{B}z_{j}-J_{\hat {\lambda }}^{B}z_{j} \big\| = 0. \end{aligned}$$
(3.13)

Substituting (3.9), (3.11) and (3.13) into (3.10), we get

$$\begin{aligned} \lim_{j\rightarrow\infty}\|x_{n_{j}}- \hat{T}x_{n_{j}}\|=0. \end{aligned}$$
(3.14)

Thus, by Lemma 2.5, it follows that \(\hat{x}\in F(\hat {T})=\Omega_{L,T}^{A+B}\). This shows \(\omega_{w}(x_{n})\subset\Omega _{L,T}^{A+B}\). Using this and (a), we can apply Lemma 2.6 to conclude that the sequence \(\{x_{n}\}\) converges weakly to an element in \(\Omega _{L,T}^{A+B}\). This completes the proof. □
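For illustration, the following is a minimal computational sketch of algorithm (3.4) in the finite-dimensional case; it is a template under the stated assumptions, not a definitive implementation. The resolvent \(J_{\lambda}^{B}\), the β-ism operator A and the nonexpansive mapping T are supplied by the user as callables, and the step size functions are assumed to satisfy conditions (i)-(ii) of Theorem 3.2.

```python
import numpy as np

# Generic sketch of algorithm (3.4):
#   x_{n+1} = J_{lam_n}^B((I - lam_n*A) - gam_n*L^T(I - T)L) x_n.
# resolvent(lam, x) implements J_lam^B; A(x) is a beta-ism operator; T(y)
# is nonexpansive on H_2; L is the matrix of the bounded linear operator.
# lam(n), gam(n) should satisfy 0 < a <= lam(n) <= b1 < beta/2 and
# 0 < a <= gam(n) <= b2 < 1/(2*||L||^2), as in Theorem 3.2.

def algorithm_34(x1, A, resolvent, T, L, lam, gam, n_iter=1000):
    x = x1
    for n in range(n_iter):
        y = L @ x
        x = resolvent(lam(n), x - lam(n) * A(x) - gam(n) * (L.T @ (y - T(y))))
    return x
```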

Remark 3.3

  1. (i)

    If \(A:=0\), the zero operator, then our presented algorithm (3.4) and algorithm (1.7) coincide.

  2. (ii)

    If \(H_{1}=H_{2}\) and L is the identity operator, then the problem (1.9) is reduced to the problem of finding an element of the set \((A+B)^{-1}0\cap F(T)\). This type of problem was also studied and considered by many authors; see [31–34] for example.

We will discuss more applications of our main Theorem 3.2 in the next section.

4 Applications

In this section, we will show some applications of the problem (1.9) and Theorem 3.2.

4.1 Variational inequality problem

Recall that the normal cone to C at \(u\in C\) is defined as

$$N_{C}(u)=\bigl\{ z\in H:\langle z,y-u\rangle\leq0, \forall y\in C\bigr\} . $$

It is well known that \(N_{C}\) is a maximal monotone operator. In the case \(B:=N_{C}:H\rightarrow2^{H}\) we can verify that the problem (1.8) is reduced to the problem of finding \(x^{\ast}\in C\) such that

$$\begin{aligned} \bigl\langle Ax^{\ast},x-x^{\ast}\bigr\rangle \geq0,\quad \forall x\in C. \end{aligned}$$
(4.1)

We denote by \(\operatorname{VIP}(C,A)\) the solution set of problem (4.1). Also, in this case, we have \(J_{\lambda}^{B}=P_{C}\) (the metric projection of H onto C) for every \(\lambda>0\). With the above setting, problem (1.9) reduces to finding

$$\begin{aligned} x^{\ast}\in \operatorname{VIP}(C,A)\quad \text{such that } Lx^{\ast}\in F(T). \end{aligned}$$
(4.2)

Here, we denote by \(\Omega_{L,T}^{A,C}\) the solution set of problem (4.2). Subsequently, by applying Theorem 3.2, we obtain the following convergence theorem.

Theorem 4.1

Let \(H_{1}\) and \(H_{2}\) be Hilbert spaces and let C be a nonempty closed convex subset of \(H_{1}\). Let \(A:H_{1}\rightarrow H_{1}\) be a β-ism, \(T:H_{2}\rightarrow H_{2}\) be a nonexpansive mapping and \(L:H_{1}\rightarrow H_{2}\) be a bounded linear operator. For any \(x_{1}\in H_{1}\), define

$$x_{n+1} = P_{C} \bigl((I-\lambda_{n}A)- \gamma_{n}L^{\ast}(I-T)L \bigr)x_{n},\quad\forall n\in\mathbb{N}, $$

where the sequences \(\{\lambda_{n}\}\) and \(\{\gamma_{n}\}\) satisfy the following conditions:

  1. (i)

    \(0< a\leq\lambda_{n}\leq b_{1}< \frac{\beta}{2}\),

  2. (ii)

    \(0< a\leq\gamma_{n}\leq b_{2}< \frac{1}{2\|L\| ^{2}}\).

If \(\Omega_{L,T}^{A,C}\neq\emptyset\) then \(\{x_{n}\}\) converges weakly to an element in \(\Omega_{L,T}^{A,C}\).
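In terms of the generic sketch given after the proof of Theorem 3.2, the algorithm of Theorem 4.1 is obtained simply by taking the resolvent to be the metric projection onto C, independently of λ. A minimal instantiation, with a hypothetical half-space C of our own choosing:

```python
import numpy as np

# J_lam^{N_C} = P_C for every lam > 0; here C = {y : <a, y> <= b} is a
# hypothetical half-space, whose projection has a closed form.
a_vec, b_val = np.array([1.0, 2.0]), -1.0

def proj_C(x):
    s = a_vec @ x - b_val
    return x if s <= 0 else x - s * a_vec / (a_vec @ a_vec)

resolvent = lambda lam, x: proj_C(x)   # plug into the sketch of algorithm (3.4)
```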

Remark 4.2

If \(H_{1}=H_{2}\) and L is the identity operator, the problem (4.2) was also considered by Takahashi and Toyoda in [35].

4.2 Convex minimization problem

We consider a convex function \(f:H\rightarrow\mathbb{R}\) which is Fréchet differentiable. Let C be a given nonempty closed convex subset of H. In this case, setting \(A:=\nabla f\), the gradient of f, and \(B:=N_{C}\), the problem of finding \(x^{\ast}\in(A+B)^{-1}0\) is equivalent to finding a point \(x^{\ast}\in C\) such that

$$\begin{aligned} \bigl\langle \nabla f\bigl(x^{\ast}\bigr),x-x^{\ast} \bigr\rangle \geq0, \quad\forall x\in C. \end{aligned}$$
(4.3)

Note that (4.3) is equivalent to the following minimization problem: find \(x^{\ast}\in C\) such that

$$\begin{aligned} x^{\ast}\in\arg\min_{x\in C} f(x). \end{aligned}$$

Thus, in this situation, the problem (1.9) is reduced to the problem of finding

$$\begin{aligned} x^{\ast}\in\arg\min_{x\in C} f(x) \quad \text{such that } Lx^{\ast}\in F(T). \end{aligned}$$
(4.4)

We will denote by \(\Omega_{L,T}^{f,C}\) the solution set of the problem (4.4). Then, by applying Theorem 3.2, we obtain the following result.

Theorem 4.3

Let \(H_{1}\) and \(H_{2}\) be Hilbert spaces and let C be a nonempty closed convex subset of \(H_{1}\). Let \(f:H_{1}\rightarrow\mathbb{R}\) be convex and Fréchet differentiable, ∇f be α-Lipschitz, \(T:H_{2}\rightarrow H_{2}\) be a nonexpansive mapping and \(L:H_{1}\rightarrow H_{2}\) be a bounded linear operator. For any \(x_{1}\in H_{1}\), define

$$x_{n+1} = P_{C} \bigl((I-\lambda_{n}\nabla f)- \gamma_{n}L^{\ast }(I-T)L \bigr)x_{n},\quad \forall n\in\mathbb{N}, $$

where the sequences \(\{\lambda_{n}\}\) and \(\{\gamma_{n}\}\) satisfy the following conditions:

  1. (i)

    \(0< a\leq\lambda_{n}\leq b_{1}< \frac{1}{2\alpha }\),

  2. (ii)

    \(0< a\leq\gamma_{n}\leq b_{2}< \frac{1}{2\|L\| ^{2}}\).

If \(\Omega_{L,T}^{f,C}\neq\emptyset\) then \(\{x_{n}\}\) converges weakly to an element in \(\Omega_{L,T}^{f,C}\).

Proof

Note that if \(f:H\rightarrow\mathbb{R}\) is convex and \(\nabla f:H\rightarrow H\) is α-Lipschitz continuous for \(\alpha>0\) then ∇f is \(\frac{1}{\alpha}\)-ism (see [36]). Thus, the required result can be obtained immediately from Theorem 3.2. □
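To make the constants in Theorem 4.3 concrete, consider the hypothetical quadratic objective \(f(x)=\frac{1}{2}\Vert Mx-b\Vert^{2}\) (our own example): then \(\nabla f(x)=M^{t}(Mx-b)\) is \(\Vert M\Vert^{2}\)-Lipschitz, hence \(\Vert M\Vert^{-2}\)-ism, and condition (i) becomes \(\lambda_{n}<\frac{1}{2\Vert M\Vert^{2}}\).

```python
import numpy as np

# Hypothetical smooth objective f(x) = 0.5*||M x - b||^2 for Theorem 4.3.
rng = np.random.default_rng(1)
M, b = rng.standard_normal((5, 3)), rng.standard_normal(5)

grad_f = lambda x: M.T @ (M @ x - b)   # A := grad f
alpha = np.linalg.norm(M, 2) ** 2      # Lipschitz constant of grad f
lam_bound = 1.0 / (2 * alpha)          # condition (i): lam_n < 1/(2*alpha)
```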

Remark 4.4

The problem of finding an element in \(\Omega_{L,T}^{f,C}\), as in Theorem 4.3, was studied by Iiduka [37], when L is the identity operator on \(H_{1}\).

4.3 Split common fixed point problem

Let \(V:H_{1}\rightarrow H_{1}\) be a nonexpansive mapping. Then, by Lemma 2.1(iii), we know that \(A:=I-V\) is a \(\frac {1}{2}\)-ism. Furthermore, since \(Ax^{\ast}=0\) if and only if \(x^{\ast }\in F(V)\), we may see that the problem (1.9) can be reduced to the problem of finding

$$\begin{aligned} x^{\ast}\in F(V) \quad\text{such that } Lx^{\ast }\in F(T), \end{aligned}$$
(4.5)

where \(T:H_{2}\rightarrow H_{2}\) and \(L:H_{1}\rightarrow H_{2}\). We will denote by \(\Omega_{L,T}^{V}\) the solution set of the problem (4.5). This problem is called the split common fixed point problem (SCFP), and was studied by many authors; see [38–41] for example. By applying Theorem 3.2, we can obtain the following result.

Theorem 4.5

Let \(H_{1}\) and \(H_{2}\) be Hilbert spaces. Let \(V:H_{1}\rightarrow H_{1}\) and \(T:H_{2}\rightarrow H_{2}\) be nonexpansive mappings and \(L:H_{1}\rightarrow H_{2}\) a bounded linear operator. For any \(x_{1}\in H_{1}\), define

$$\begin{aligned} x_{n+1} = (1-\lambda_{n})x_{n}+ \lambda_{n}Vx_{n}-\gamma_{n}L^{\ast }(I-T)Lx_{n},\quad \forall n\in\mathbb{N}, \end{aligned}$$
(4.6)

where the sequences \(\{\lambda_{n}\}\) and \(\{\gamma_{n}\}\) satisfy the following conditions:

  1. (i)

    \(0< a\leq\lambda_{n}\leq b_{1}< \frac{1}{4}\),

  2. (ii)

    \(0< a\leq\gamma_{n}\leq b_{2}< \frac{1}{2\|L\| ^{2}}\).

If \(\Omega_{L,T}^{V}\neq\emptyset\) then \(\{x_{n}\}\) converges weakly to an element in \(\Omega_{L,T}^{V}\).

Proof

We consider \(B:=0\), the zero operator. The required result follows from the fact that the zero operator is monotone and continuous, and hence maximal monotone. Moreover, in this case, \(J_{\lambda}^{B}\) is the identity operator on \(H_{1}\), for each \(\lambda>0\). Thus the algorithm (3.4) reduces to (4.6), by setting \(A:=I-V\) and \(B:=0\). □

Remark 4.6

The algorithm (4.6), with \(V:=P_{C}\) and \(T:=P_{Q}\), can be applied to solve the problem (1.1), which was considered by Xu [4]. In fact, by using Lemma 2.2 in [16], one can show that \(\{P_{C}x_{n}\}\) converges strongly to a solution of the problem (1.1).
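Following Remark 4.6, here is a minimal sketch of algorithm (4.6) with \(V:=P_{C}\) and \(T:=P_{Q}\); the form of the iteration is taken from (4.6), while C and Q (two balls), L and the step sizes are hypothetical choices of ours.

```python
import numpy as np

# Sketch of algorithm (4.6) with V := P_C and T := P_Q (cf. Remark 4.6).
# C, Q, L and the step sizes below are hypothetical illustration data.

def proj_ball(x, c, r):
    d = x - c
    dist = np.linalg.norm(d)
    return x if dist <= r else c + r * d / dist

L = np.array([[1.0, 0.5], [0.5, 1.0 / 3], [1.0 / 3, 0.25]])
lam = 0.2                                      # condition (i): lam_n < 1/4
gam = 0.9 / (2 * np.linalg.norm(L, 2) ** 2)    # condition (ii)

x = np.array([3.0, -2.0])
for _ in range(1000):
    y = L @ x
    x = ((1 - lam) * x + lam * proj_ball(x, np.zeros(2), 2.0)
         - gam * (L.T @ (y - proj_ball(y, np.zeros(3), 1.0))))
print(x, L @ x)                                # expect x in C and Lx in Q
```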

5 Numerical experiments

In this section, we present some numerical results and discuss possible good choices of the step size parameters \(\lambda_{n}\) and \(\gamma_{n}\) that satisfy the control conditions in Theorem 3.2.

Let \(H_{1}=\mathbb{R}^{2}\) and \(H_{2}=\mathbb{R}^{3}\) be equipped with the Euclidean norm. Let \(\hat{x}:=\bigl(\begin{smallmatrix}3\\2\end{smallmatrix}\bigr)\in H_{1}\) be fixed. We consider the 1-ism operator \(P_{C}\), where C is the following convex subset of \(H_{1}\):

$$C:=\bigl\{ y\in H_{1} :\langle\hat{x}, y\rangle\leq-5\bigr\} . $$

Next, for each \(x:=\bigl(\begin{smallmatrix}x_{1}\\x_{2}\end{smallmatrix}\bigr)\in H_{1}\), we will also concern ourselves with the following two norms:

$$\Vert x\Vert_{1}=|x_{1}|+|x_{2}|\quad \text{and} \quad \Vert x\Vert_{\infty}=\max\bigl\{ |x_{1}|, |x_{2}|\bigr\} . $$

Consider a function \(f:H_{1}\rightarrow\mathbb{R}\) which is defined by

$$f(x)=\Vert x\Vert_{1}, \quad\text{for all } x \in H_{1}. $$

We know that f is a convex function and its subdifferential, ∂f, is

$$\begin{aligned} \partial f(x)= \bigl\{ z\in H_{1}:\langle x,z\rangle=\Vert x \Vert_{1}, \Vert z\Vert_{\infty}\leq1 \bigr\} ,\quad \text{for all } x\in H_{1}. \end{aligned}$$

Moreover, since f is a convex function, it is well known that \(\partial f(\cdot)\) must be a maximal monotone operator.

On the other hand, let

$$\tilde{x}= \begin{pmatrix} 1\\1\\-2 \end{pmatrix} \quad\text{and}\quad \bar{x}= \begin{pmatrix} 1\\1\\-1 \end{pmatrix} $$

be two fixed vectors in \(H_{2}\). We consider a nonempty convex subset \(Q_{1}\cap Q_{2}\) of \(H_{2}\), where \(Q_{1}:=\{x\in H_{2}:\Vert\tilde{x}-x\Vert\leq3\}\) and \(Q_{2}:=\{x\in H_{2}:\langle\bar{x},x\rangle\leq2\}\). We notice that \(P_{Q_{1}}P_{Q_{2}}\) is a single-valued nonexpansive mapping on \(H_{2}\). Furthermore, since \(Q_{1}\cap Q_{2}\) is a nonempty set, we also know that \(F(P_{Q_{1}}P_{Q_{2}})= Q_{1}\cap Q_{2}\).

Now, let us consider a \(3\times2\) matrix

$$L:= \begin{bmatrix} 1 & \frac{1}{2}\\ \frac{1}{2} & \frac{1}{3}\\ \frac{1}{3} & \frac{1}{4} \end{bmatrix} . $$

We see that L is a bounded linear operator from \(H_{1}\) into \(H_{2}\) with \(\Vert L\Vert\approx1.3330\).

Based on the above settings, we present some numerical experiments showing the efficiency of the constructed algorithm (3.4). That is, we are going to show that the algorithm (3.4) converges to a point \(x^{\ast}\in H_{1}\) such that

$$\begin{aligned} 0\in(P_{C}+\partial f) \bigl(x^{\ast}\bigr) \quad \text{and}\quad Lx^{\ast}\in Q_{1}\cap Q_{2}. \end{aligned}$$
(5.1)

We will consider the following five cases of the step size parameters \(\lambda_{n}\) and \(\gamma_{n}\), with the initial vectors \(\bigl(\begin{smallmatrix}-1\\1\end{smallmatrix}\bigr)\), \(\bigl(\begin{smallmatrix}0\\0\end{smallmatrix}\bigr)\) and \(\bigl(\begin{smallmatrix}1\\-1\end{smallmatrix}\bigr)\) in \(H_{1}\):

Case 1: \(\lambda_{n}=0.25\), \(\gamma_{n}=0.14\).

Case 2: \(\lambda_{n}=10^{-4}+\frac{1}{10n}\), \(\gamma_{n}=10^{-4}+\frac{1}{10n}\).

Case 3: \(\lambda_{n}=10^{-4}+\frac{1}{10n}\), \(\gamma_{n}=0.2799-\frac{1}{10n}\).

Case 4: \(\lambda_{n}=0.4999-\frac{1}{10n}\), \(\gamma_{n}=10^{-4}+\frac{1}{10n}\).

Case 5: \(\lambda_{n}=0.4999-\frac{1}{10n}\), \(\gamma_{n}=0.2799-\frac{1}{10n}\).

Note that the solution set of the problem (5.1) is \(\bigl\{\bigl(\begin{smallmatrix}x\\ \frac{2x-1}{3}\end{smallmatrix}\bigr)\in H_{1}: \frac{1}{2}\le x\le\frac{79}{56}\bigr\}\). From Tables 1, 2 and 3, we may suggest that larger step size parameters \(\lambda_{n}\) provide faster convergence, while the step size parameters \(\gamma_{n}\) seem to have less impact on the speed of convergence of algorithm (3.4) to a solution of the problem (5.1).

Table 1 Influence of the step size parameters \(\lambda_{n}\) and \(\gamma_{n}\) for the initial vector \((-1,1)\), to 4 decimal places
Table 2 Influence of the step size parameters \(\lambda_{n}\) and \(\gamma_{n}\) for the initial vector \((0,0)\), to 4 decimal places
Table 3 Influence of the step size parameters \(\lambda_{n}\) and \(\gamma_{n}\) for the initial vector \((1,-1)\), to 4 decimal places

Remark 5.1

Note that, for each \(x:=\bigl(\begin{smallmatrix}x_{1}\\x_{2}\end{smallmatrix}\bigr)\in H_{1}\) and \(\lambda>0\), we have

$$J_{\lambda}^{\partial f}(x)= \left\{ \begin{pmatrix} u_{1}\\u_{2} \end{pmatrix} \in H_{1}: u_{i}= x_{i}- \bigl(\min \bigl\{ |x_{i}|, \lambda\bigr\} \bigr)\operatorname{sgn}(x_{i}), \text{ for } i=1, 2 \right\}, $$

where f is defined as above and sgn denotes the signum function. On the other hand, one can see that computing \(J_{\lambda}^{P_{C}+\partial f}\) directly would be much harder.
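To make the experiment reproducible, the following sketch codes the setting of this section. The data \(\hat{x}\), C, f, \(Q_{1}\), \(Q_{2}\), L, the resolvent formula of Remark 5.1 and the Case 1 step sizes come from the text; the iteration count is our own choice.

```python
import numpy as np

# Experiment of Section 5: A := P_C, B := df (subdifferential of ||.||_1),
# T := P_{Q_1} P_{Q_2}, iterated by algorithm (3.4).

x_hat = np.array([3.0, 2.0])
L = np.array([[1.0, 1 / 2], [1 / 2, 1 / 3], [1 / 3, 1 / 4]])
x_til = np.array([1.0, 1.0, -2.0])
x_bar = np.array([1.0, 1.0, -1.0])

def proj_halfspace(x, a, b):
    # Metric projection onto {y : <a, y> <= b}.
    s = a @ x - b
    return x if s <= 0 else x - s * a / (a @ a)

P_C = lambda x: proj_halfspace(x, x_hat, -5.0)   # the 1-ism operator A

def P_Q1(y):
    # Projection onto the ball {x : ||x_til - x|| <= 3}.
    d = y - x_til
    dist = np.linalg.norm(d)
    return y if dist <= 3.0 else x_til + 3.0 * d / dist

P_Q2 = lambda y: proj_halfspace(y, x_bar, 2.0)

def J_df(x, lam):
    # Resolvent of B := df from Remark 5.1 (componentwise shrinkage).
    return x - np.sign(x) * np.minimum(np.abs(x), lam)

x = np.array([-1.0, 1.0])          # one of the initial vectors
lam, gam = 0.25, 0.14              # Case 1 step sizes
for _ in range(2000):
    y = L @ x
    x = J_df(x - lam * P_C(x) - gam * (L.T @ (y - P_Q1(P_Q2(y)))), lam)
print(x)   # a point (x, (2x-1)/3) with 1/2 <= x <= 79/56 is expected
```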

6 Concluding remarks

This paper can be considered as a refinement of the work of Takahashi et al. [16]: we provide an algorithm for finding a solution of the main problem (1.9), which generalizes the problem considered in [16]. Some sufficient conditions for the weak convergence of the introduced algorithm are given. Also, in order to show the significance of the considered problem, some important applications are discussed. Since this paper focuses on the weak convergence of the constructed algorithm, a natural direction for future research is to study algorithms and sufficient conditions that guarantee strong convergence.

References

  1. Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in product space. Numer. Algorithms 8, 221-239 (1994)


  2. Byrne, C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441-453 (2002)


  3. Censor, Y, Bortfeld, T, Martin, B, Trofimov, A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353-2365 (2006)


  4. Xu, HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 26, Article ID 105018 (2010)


  5. Masad, E, Reich, S: A note on the multiple-set split convex feasibility problem in Hilbert space. J. Nonlinear Convex Anal. 8, 367-371 (2007)


  6. Martinet, B: Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Opér. 3, 154-158 (1970)


  7. Bruck, RE, Reich, S: Nonexpansive projections and resolvents of accretive operators in Banach spaces. Houst. J. Math. 3, 459-470 (1977)


  8. Eckstein, J, Bertsekas, DP: On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55, 293-318 (1992)


  9. Marino, G, Xu, HK: Convergence of generalized proximal point algorithm. Commun. Pure Appl. Anal. 3, 791-808 (2004)


  10. Xu, HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240-256 (2002)


  11. Yao, Y, Noor, MA: On convergence criteria of generalized proximal point algorithms. J. Comput. Appl. Math. 217, 46-55 (2008)


  12. Byrne, C, Censor, Y, Gibali, A, Reich, S: Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. 13, 759-775 (2012)


  13. Cegielski, A: Iterative Methods for Fixed Point Problems in Hilbert Spaces. Lecture Notes in Math., vol. 2057. Springer, Heidelberg (2012)


  14. Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14(5), 877-898 (1976)


  15. Zhang, L, Hao, Y: Fixed point methods for solving solutions of a generalized equilibrium problem. J. Nonlinear Sci. Appl. 9, 149-159 (2016)


  16. Takahashi, W, Xu, HK, Yao, JC: Iterative methods for generalized split feasibility problems in Hilbert spaces. Set-Valued Var. Anal. 23, 205-221 (2015)


  17. Boikanyo, OA: The viscosity approximation forward-backward splitting method for zeros of the sum of monotone operators. Abstr. Appl. Anal. 2016, Article ID 2371857 (2016)


  18. Moudafi, A, Thera, M: Finding a zero of the sum of two maximal monotone operators. J. Optim. Theory Appl. 94(2), 425-448 (1997)


  19. Qin, X, Cho, SY, Wang, L: A regularization method for treating zero points of the sum of two monotone operators. Fixed Point Theory Appl. 2014, Article ID 75 (2014)


  20. Tseng, P: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431-446 (2000)


  21. Rockafellar, RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 149, 75-88 (1970)


  22. Passty, GB: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72, 383-390 (1979)


  23. Goebel, K, Reich, S: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York (1984)


  24. Baillon, JB, Bruck, RE, Reich, S: On the asymptotic behavior of nonexpansive mappings and semigroups in Banach spaces. Houst. J. Math. 4, 1-9 (1978)


  25. Kassay, G, Reich, S, Sabach, S: Iterative methods for solving systems of variational inequalities in reflexive Banach spaces. SIAM J. Optim. 21, 1319-1344 (2011)


  26. Xu, HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360-378 (2011)


  27. Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)


  28. Takahashi, W: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama (2009)


  29. Barbu, V: Nonlinear Semigroups and Differential Equations in Banach Spaces. Noordhoff International Publishing, Leiden (1976)


  30. Takahashi, W: Nonlinear Functional Analysis: Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama (2000)


  31. Li, D, Zhao, J: Approximation of solutions of quasi-variational inclusion and fixed points of nonexpansive mappings. J. Nonlinear Sci. Appl. 9, 152-159 (2016)


  32. Takahashi, S, Takahashi, W, Toyoda, M: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 147, 27-41 (2010)


  33. Yao, Z, Cho, SY, Kang, SM, Zhu, LJ: Approximating iterations for nonexpansive and maximal monotone operators. Abstr. Appl. Anal. 2015, Article ID 451320 (2015)


  34. Zhang, S, Lee, JHW, Chan, CK: Algorithms of common solutions to quasi variational inclusion and fixed point problems. Appl. Math. Mech. 29(5), 571-581 (2008)


  35. Takahashi, W, Toyoda, M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 118(2), 417-428 (2003)


  36. Baillon, JB, Haddad, G: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 26(2), 137-150 (1977)


  37. Iiduka, H: Fixed point optimization algorithm and its application to network bandwidth allocation. J. Comput. Appl. Math. 236, 1733-1742 (2012)


  38. Cui, H, Wang, F: Iterative methods for the split common fixed point problem in Hilbert spaces. Fixed Point Theory Appl. 2014, Article ID 78 (2014)


  39. Moudafi, A: A note on the split common fixed-point problem for quasi-nonexpansive operators. Nonlinear Anal., Theory Methods Appl. 74, 4083-4087 (2011)


  40. Shimizu, T, Takahashi, W: Strong convergence to common fixed points of families of nonexpansive mappings. J. Math. Anal. Appl. 211, 71-83 (1997)


  41. Zhao, J, He, S: Strong convergence of the viscosity approximation process for the split common fixed-point problem of quasi-nonexpansive mappings. J. Appl. Math. 2012, Article ID 438023 (2012)



Acknowledgements

The authors are thankful to the referees and the editor for their constructive comments and suggestions which have been useful for the improvement of the paper. This research has been funded by Naresuan University and the Thailand Research Fund under the project RTA5780007.

Author information


Correspondence to Narin Petrot.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Cite this article

Suwannaprapa, M., Petrot, N. & Suantai, S. Weak convergence theorems for split feasibility problems on zeros of the sum of monotone operators and fixed point sets in Hilbert spaces. Fixed Point Theory Appl 2017, 6 (2016). https://doi.org/10.1186/s13663-017-0599-7

