Weak convergence theorems for split feasibility problems on zeros of the sum of monotone operators and fixed point sets in Hilbert spaces
Montira Suwannaprapa^{1},
Narin Petrot^{1} (corresponding author) and
Suthep Suantai^{2}
https://doi.org/10.1186/s13663-017-0599-7
© The Author(s) 2017
 Received: 20 December 2016
 Accepted: 20 April 2017
 Published: 28 April 2017
Abstract
In this paper, we consider a type of split feasibility problem in the setting of Hilbert spaces by focusing on the solution sets of two important problems: the problem of finding zeros of the sum of monotone operators and the fixed point problem. Assuming the existence of solutions, we provide a suitable algorithm for finding a solution point. Some important applications of the considered problem, together with numerical experiments for the constructed algorithm, are also discussed.
Keywords
 split feasibility problems
 maximal monotone operators
 inverse strongly monotone operator
 fixed point problems
 weak convergence theorems
MSC
 26A18
 47H04
 47H05
 47H10
 54A20
1 Introduction
One may note that finding the zeros of a maximal monotone operator can be accomplished by finding a fixed point of its resolvent operator. This is because \(0\in Bx^{\ast}\) if and only if \(J_{\lambda}^{B}x^{\ast}=x^{\ast}\), when \(B:H\rightarrow2^{H}\) is a maximal monotone operator and \(\lambda>0\). Thus a problem of type (1.6) contains the problem SCNPP as a special case in some sense.
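As a concrete illustration of this equivalence (a toy example of our own, not taken from the paper), take \(B=\partial\vert\cdot\vert\) on \(\mathbb{R}\): its resolvent \(J_{\lambda}^{B}\) is the soft-thresholding map, and iterating the resolvent drives any starting point to the unique zero of B, consistent with \(0\in Bx^{\ast}\Leftrightarrow J_{\lambda}^{B}x^{\ast}=x^{\ast}\).

```python
import numpy as np

def resolvent_abs(x, lam):
    # Resolvent J_lambda^B of B = subdifferential of |.| on R:
    # the soft-thresholding map, i.e. the unique y with 0 in (y - x)/lam + sign(y)
    return np.sign(x) * max(abs(x) - lam, 0.0)

x = 5.0
for _ in range(200):          # Picard iteration of the resolvent
    x = resolvent_abs(x, lam=0.1)

print(x)   # -> 0.0, the unique zero of B, and the fixed point of J_lambda^B
```

Here the zero of B is also the only fixed point of the resolvent, which is the mechanism the paper exploits.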
2 Preliminaries
Throughout this paper, we denote by \(\mathbb{N}\) the set of positive integers, and by \(\mathbb{R}\) the set of real numbers. Let H be a real Hilbert space with the inner product \(\langle\cdot,\cdot\rangle\) and the norm \(\Vert\cdot\Vert\), respectively. When \(\{x_{n}\}\) is a sequence in H, we denote the weak convergence of \(\{x_{n}\}\) to x in H by \(x_{n}\rightharpoonup x\).
We now collect some important properties, which are needed in this work.
Lemma 2.1
 (i)
The composite of finitely many averaged mappings is averaged. In particular, if \(T_{i}\) is \(\alpha_{i}\)-averaged, where \(\alpha_{i}\in(0,1)\) for \(i=1,2\), then the composite \(T_{1}T_{2}\) is α-averaged, where \(\alpha=\alpha_{1}+\alpha_{2}-\alpha_{1}\alpha_{2}\).
 (ii)
If A is β-ism and \(r\in(0,\beta]\), then \(T:=I-rA\) is firmly nonexpansive.
 (iii)
A mapping \(T:H\rightarrow H\) is nonexpansive if and only if \(I-T\) is \(\frac{1}{2}\)-ism.
 (iv)
If A is β-ism, then, for \(\gamma>0\), γA is \(\frac{\beta}{\gamma}\)-ism.
 (v)
T is averaged if and only if the complement \(I-T\) is β-ism for some \(\beta>\frac{1}{2}\). Indeed, for \(\alpha\in(0,1)\), T is α-averaged if and only if \(I-T\) is \(\frac{1}{2\alpha}\)-ism.
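Properties such as (ii) are easy to sanity-check numerically. In the sketch below (our own illustration, not from the paper), \(A(x)=Mx\) with \(M=\operatorname{diag}(1,2)\) is β-ism with \(\beta=1/\Vert M\Vert=1/2\) (a standard estimate for positive semidefinite linear maps), and we verify the firm nonexpansiveness inequality \(\Vert Tx-Ty\Vert^{2}\leq\langle x-y,Tx-Ty\rangle\) for \(T=I-rA\) at \(r=\beta\).

```python
import numpy as np

rng = np.random.default_rng(0)
M = np.diag([1.0, 2.0])              # A(x) = Mx is beta-ism with beta = 1/||M||
beta = 1.0 / np.linalg.norm(M, 2)    # spectral norm -> beta = 1/2
r = beta                             # any r in (0, beta] is allowed by (ii)
T = np.eye(2) - r * M                # T = I - rA

ok = True
for _ in range(1000):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    d, Td = x - y, T @ x - T @ y
    # firm nonexpansiveness: ||Tx - Ty||^2 <= <x - y, Tx - Ty>
    ok &= Td @ Td <= d @ Td + 1e-12
print(ok)   # True: the inequality holds on all sampled pairs
```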
The following result can be found in [27]; here we modify the presentation to exhibit a finer conclusion about the considered mapping T.
Lemma 2.2
[27]
Let \(T=(1-\alpha)A+\alpha N\) for some \(\alpha\in(0,1)\). If A is β-averaged and N is nonexpansive, then T is \((\alpha+(1-\alpha)\beta)\)-averaged.
Proof
We will use the following lemmas in proving the main results.
Lemma 2.3
[16]
 (i)
\(L^{\ast}(I-T)L\) is \(\frac{1}{2\Vert L\Vert^{2}}\)-ism,
 (ii)
for \(0< r<\frac{1}{\Vert L\Vert^{2}}\),
 (iia)
\(I-rL^{\ast}(I-T)L\) is \(r\Vert L\Vert^{2}\)-averaged,
 (iib)
\(J_{\lambda}^{B}(I-rL^{\ast}(I-T)L)\) is \(\frac{1+r\Vert L\Vert^{2}}{2}\)-averaged, for \(\lambda>0\),
 (iii)
if \(r=\frac{1}{\Vert L\Vert^{2}}\), then \(I-rL^{\ast}(I-T)L\) is nonexpansive.
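Property (i) of Lemma 2.3 can also be checked empirically. Below (a toy check of our own, not from the paper) T is the metric projection onto the unit ball, which is nonexpansive, and we test the inverse strong monotonicity inequality \(\langle x-y,Gx-Gy\rangle\geq\beta\Vert Gx-Gy\Vert^{2}\) for \(G=L^{\ast}(I-T)L\) with \(\beta=\frac{1}{2\Vert L\Vert^{2}}\).

```python
import numpy as np

rng = np.random.default_rng(1)
L = np.array([[2.0, 0.0], [1.0, 1.0]])
beta = 1.0 / (2.0 * np.linalg.norm(L, 2) ** 2)   # 1 / (2 ||L||^2)

def T(u):
    # metric projection onto the unit ball: a nonexpansive mapping
    n = np.linalg.norm(u)
    return u if n <= 1 else u / n

def G(x):
    # G = L^* (I - T) L
    return L.T @ (L @ x - T(L @ x))

ok = True
for _ in range(1000):
    x, y = 3 * rng.standard_normal(2), 3 * rng.standard_normal(2)
    g = G(x) - G(y)
    # beta-ism: <x - y, Gx - Gy> >= beta ||Gx - Gy||^2
    ok &= (x - y) @ g >= beta * (g @ g) - 1e-10
print(ok)   # True: G is (1 / (2||L||^2))-ism on all sampled pairs
```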
Lemma 2.4
[29]
Lemma 2.5
[30]
Let C be a closed convex subset of a Hilbert space H and let T be a nonexpansive mapping of C into itself. Then \(U:=I-T\) is demiclosed, i.e., \(x_{n}\rightharpoonup x_{0}\) and \(Ux_{n}\rightarrow y_{0}\) imply \(Ux_{0}=y_{0}\).
Lemma 2.6
[16]
 (i)
for every \(x^{\ast}\in C\), \(\lim_{n\rightarrow\infty }\Vert x_{n}x^{\ast}\Vert\) exists;
 (ii)
if a subsequence \(\{x_{n_{j}}\}\subset\{x_{n}\}\) converges weakly to \(x^{\ast}\), then \(x^{\ast}\in C\).
3 Main results
We start by considering an equivalence theorem.
Theorem 3.1
 (i)
\(z\in\Omega_{L,T}^{A+B}\),
 (ii)
\(z=J_{\lambda}^{B} ((I-\lambda A)-\gamma L^{\ast}(I-T)L )z\),
 (iii)
\(0\in L^{\ast}(I-T)Lz+(A+B)z\).
Proof
Since \(\Omega_{L,T}^{A+B}\neq\emptyset\), there exists \(z_{0}\in D(B)\) such that \(0\in(A+B)z_{0}\) and \(Lz_{0}\in F(T)\). Let us put \(S=\frac{1}{2}(I+T)\). It follows that S is a firmly nonexpansive mapping and \(F(T)=F(S)\). Moreover, we have \(L^{\ast}(I-T)L=2L^{\ast}(I-S)L\).
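Algorithm (3.4) itself lies outside this excerpt, so the sketch below only implements the fixed-point iteration suggested by characterization (ii) of Theorem 3.1, \(x_{n+1}=J_{\lambda}^{B}((I-\lambda A)x_{n}-\gamma L^{\ast}(I-T)Lx_{n})\), on a toy instance of our own: \(A=I\) (which is 1-ism), B the normal cone of the nonnegative orthant (so \(J_{\lambda}^{B}\) is the orthant projection), T the projection onto the unit ball, and \(L=I\). For this instance the solution set is \(\{0\}\).

```python
import numpy as np

proj_orthant = lambda x: np.maximum(x, 0.0)   # J_lambda^B for B = normal cone of R^2_+

def T(u):
    # nonexpansive: metric projection onto the unit ball
    n = np.linalg.norm(u)
    return u if n <= 1 else u / n

A = lambda x: x        # A(x) = x is beta-ism with beta = 1
L = np.eye(2)          # ||L|| = 1

x = np.array([4.0, -3.0])
lam, gam = 0.25, 0.25  # lam < beta/2 = 0.5 and gam < 1/(2||L||^2) = 0.5
for n in range(500):
    # x_{n+1} = J_lambda^B((I - lam*A)x_n - gam * L^*(I - T)L x_n)
    x = proj_orthant((x - lam * A(x)) - gam * (L.T @ (L @ x - T(L @ x))))
print(x)   # approaches the solution z = 0
```

The step sizes are chosen to satisfy the control conditions of Theorem 3.2 for this instance; the iterates converge to the unique point of \(\Omega_{L,T}^{A+B}\).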
Now, in view of Theorem 3.1, we are in a position to present our main algorithm and show its convergence theorem.
Theorem 3.2
 (i)
\(0< a\leq\lambda_{n}\leq b_{1}< \frac{\beta}{2}\),
 (ii)
\(0< a\leq\gamma_{n}\leq b_{2}< \frac{1}{2\Vert L\Vert^{2}}\),
Proof
 (a)
for each \(x^{\ast}\in\Omega_{L,T}^{A+B}\), \(\lim_{n\rightarrow\infty}\Vert x_{n}-x^{\ast}\Vert\) exists;
 (b)
\(\sum_{n=1}^{\infty}(1-2\gamma_{n}\Vert L\Vert^{2})\Vert x_{n}-T_{n}x_{n}\Vert^{2}<\infty\).
In what follows, we write \(\omega_{w}(x_{n})\) for the set of all weak cluster points of \(\{x_{n}\}\). Let \(\{x_{n_{j}}\}\) be a subsequence of \(\{x_{n}\}\) with \(x_{n_{j}}\rightharpoonup\hat{x}\) for some \(\hat{x}\in\omega_{w}(x_{n})\). Also, we assume that \(\lambda_{n_{j}}\rightarrow\hat{\lambda}\in(0,\frac{\beta}{2})\) and \(\gamma_{n_{j}}\rightarrow\hat{\gamma}\in(0,\frac{1}{2\Vert L\Vert^{2}})\).
Remark 3.3
We will discuss more applications of our main Theorem 3.2 in the next section.
4 Applications
In this section, we will show some applications of the problem (1.9) and Theorem 3.2.
4.1 Variational inequality problem
Theorem 4.1
 (i)
\(0< a\leq\lambda_{n}\leq b_{1}< \frac{\beta}{2}\),
 (ii)
\(0< a\leq\gamma_{n}\leq b_{2}< \frac{1}{2\Vert L\Vert^{2}}\).
4.2 Convex minimization problem
Theorem 4.3
 (i)
\(0< a\leq\lambda_{n}\leq b_{1}< \frac{1}{2\alpha }\),
 (ii)
\(0< a\leq\gamma_{n}\leq b_{2}< \frac{1}{2\Vert L\Vert^{2}}\).
4.3 Split common fixed point problem
Theorem 4.5
 (i)
\(0< a\leq\lambda_{n}\leq b_{1}< \frac{1}{4}\),
 (ii)
\(0< a\leq\gamma_{n}\leq b_{2}< \frac{1}{2\Vert L\Vert^{2}}\).
Proof
We consider \(B:=0\), the zero operator. The required result follows from the fact that the zero operator is monotone and continuous, hence maximal monotone. Moreover, in this case, \(J_{\lambda}^{B}\) is the identity operator on \(H_{1}\) for each \(\lambda>0\). Thus the algorithm (3.4) reduces to (4.6) by setting \(A:=I-V\) and \(B:=0\). □
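The claim that \(J_{\lambda}^{B}\) is the identity when \(B:=0\) can be seen directly from \(J_{\lambda}^{B}=(I+\lambda B)^{-1}\). A quick numerical check (our own, restricted to single-valued linear monotone B, where the resolvent is an explicit matrix inverse):

```python
import numpy as np

def resolvent(x, lam, M):
    # For a single-valued linear monotone B(x) = Mx,
    # the resolvent is J_lambda^B x = (I + lam*M)^{-1} x
    return np.linalg.solve(np.eye(len(x)) + lam * M, x)

x = np.array([1.0, -2.0])
Z = np.zeros((2, 2))                          # the zero operator B := 0
print(np.allclose(resolvent(x, 0.7, Z), x))   # prints True: J_lambda^0 = I
```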
5 Numerical experiments
In this section, we present some numerical results and discuss possible good choices of the step size parameters \(\lambda_{n}\) and \(\gamma_{n}\) that satisfy the control conditions of Theorem 3.2.
 Case 1: \(\lambda_{n}=0.25\), \(\gamma_{n}=0.14\).
 Case 2: \(\lambda_{n}=1.0e^{-04}+\frac{1}{10n}\), \(\gamma_{n}=1.0e^{-04}+\frac{1}{10n}\).
 Case 3: \(\lambda_{n}=1.0e^{-04}+\frac{1}{10n}\), \(\gamma_{n}=0.2799-\frac{1}{10n}\).
 Case 4: \(\lambda_{n}=0.4999-\frac{1}{10n}\), \(\gamma_{n}=1.0e^{-04}+\frac{1}{10n}\).
 Case 5: \(\lambda_{n}=0.4999-\frac{1}{10n}\), \(\gamma_{n}=0.2799-\frac{1}{10n}\).
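The paper's actual test problem is not reproduced in this excerpt, so as a stand-in the sketch below runs the five step-size schedules on a small toy instance of the Theorem 3.2 iteration of our own devising (\(A=I\) so \(\beta=1\), B the normal cone of the nonnegative orthant, T the projection onto the unit ball, \(L=I\), solution \(z=0\)). This lets one compare convergence speed across the cases in the same spirit as the tables below.

```python
import numpy as np

def T(u):
    # nonexpansive: metric projection onto the unit ball
    n = np.linalg.norm(u)
    return u if n <= 1 else u / n

# the five step-size schedules (lambda_n, gamma_n) from the cases above
schedules = {
    1: lambda n: (0.25, 0.14),
    2: lambda n: (1e-4 + 1 / (10 * n), 1e-4 + 1 / (10 * n)),
    3: lambda n: (1e-4 + 1 / (10 * n), 0.2799 - 1 / (10 * n)),
    4: lambda n: (0.4999 - 1 / (10 * n), 1e-4 + 1 / (10 * n)),
    5: lambda n: (0.4999 - 1 / (10 * n), 0.2799 - 1 / (10 * n)),
}

for case, sched in schedules.items():
    x = np.array([1.0, 1.0])
    for n in range(1, 2001):
        lam, gam = sched(n)
        # x_{n+1} = P_{R^2_+}((I - lam*A)x_n - gam*(I - T)x_n) with A = I, L = I
        x = np.maximum((x - lam * x) - gam * (x - T(x)), 0.0)
    print(case, np.linalg.norm(x))   # distance to the solution z = 0 after 2000 steps
```

On this toy instance the near-constant schedules (Cases 1, 4, 5) converge much faster than the vanishing ones (Cases 2, 3), mirroring the qualitative behavior reported in the tables.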
Influence of the step size parameters \(\pmb{\lambda_{n}}\) and \(\pmb{\gamma_{n}}\) for the initial vector \(\pmb{(1,1)}\), to 4 decimal places
Case →  1  2  3  4  5  

#(Iters) ↓  \(\boldsymbol{x_{\mathrm{Iter}}}\)  Errors  \(\boldsymbol{x_{\mathrm{Iter}}}\)  Errors  \(\boldsymbol{x_{\mathrm{Iter}}}\)  Errors  \(\boldsymbol{x_{\mathrm{Iter}}}\)  Errors  \(\boldsymbol{x_{\mathrm{Iter}}}\)  Errors 
50  \(\begin{pmatrix} 0.4924\\0 \end{pmatrix} \)  0.0076  \(\begin{pmatrix} 0.0343\\0.5622 \end{pmatrix} \)  0.7300  \(\begin{pmatrix} 0.0343\\0.5622 \end{pmatrix} \)  0.7300  \(\begin{pmatrix} 0.4999\\0 \end{pmatrix} \)  \(1.0e^{-04}\)  \(\begin{pmatrix} 0.4999\\0 \end{pmatrix} \)  \(1.0e^{-04}\) 
60  \(\begin{pmatrix} 0.4966\\0 \end{pmatrix} \)  0.0034  \(\begin{pmatrix} 0.0420\\0.5505 \end{pmatrix} \)  0.7161  \(\begin{pmatrix} 0.0420\\0.5505 \end{pmatrix} \)  0.7161  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0 
120  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.0708\\0.5073 \end{pmatrix} \)  0.6645  \(\begin{pmatrix} 0.0708\\0.5073 \end{pmatrix} \)  0.6645  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0 
250,000  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.4999\\0 \end{pmatrix} \)  \(1.0e^{-04}\)  \(\begin{pmatrix} 0.4999\\0 \end{pmatrix} \)  \(1.0e^{-04}\)  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0 
275,000  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0 
Influence of the step size parameters \(\pmb{\lambda_{n}}\) and \(\pmb{\gamma_{n}}\) for the initial vector \(\pmb{(0,0)}\), to 4 decimal places
Case →  1  2  3  4  5  

#(Iters) ↓  \(\boldsymbol{x_{\mathrm{Iter}}}\)  Errors  \(\boldsymbol{x_{\mathrm{Iter}}}\)  Errors  \(\boldsymbol{x_{\mathrm{Iter}}}\)  Errors  \(\boldsymbol{x_{\mathrm{Iter}}}\)  Errors  \(\boldsymbol{x_{\mathrm{Iter}}}\)  Errors 
50  \(\begin{pmatrix} 0.4901\\0 \end{pmatrix} \)  0.0099  \(\begin{pmatrix} 0.0654\\0 \end{pmatrix} \)  0.4346  \(\begin{pmatrix} 0.0654\\0 \end{pmatrix} \)  0.4346  \(\begin{pmatrix} 0.4998\\0 \end{pmatrix} \)  \(2.0e^{-04}\)  \(\begin{pmatrix} 0.4998\\0 \end{pmatrix} \)  \(2.0e^{-04}\) 
60  \(\begin{pmatrix} 0.4956\\0 \end{pmatrix} \)  0.0044  \(\begin{pmatrix} 0.0680\\0 \end{pmatrix} \)  0.4320  \(\begin{pmatrix} 0.0680\\0 \end{pmatrix} \)  0.4320  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0 
120  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.0779\\0 \end{pmatrix} \)  0.4221  \(\begin{pmatrix} 0.0779\\0 \end{pmatrix} \)  0.4221  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0 
275,000  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.4999\\0 \end{pmatrix} \)  \(1.0e^{-04}\)  \(\begin{pmatrix} 0.4999\\0 \end{pmatrix} \)  \(1.0e^{-04}\)  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0 
300,000  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.5000\\0 \end{pmatrix} \)  0 
Influence of the step size parameters \(\pmb{\lambda_{n}}\) and \(\pmb{\gamma_{n}}\) for the initial vector \(\pmb{(1,1)}\), to 4 decimal places
Case →  1  2  3  4  5  

#(Iters) ↓  \(\boldsymbol{x_{\mathrm{Iter}}}\)  Errors  \(\boldsymbol{x_{\mathrm{Iter}}}\)  Errors  \(\boldsymbol{x_{\mathrm{Iter}}}\)  Errors  \(\boldsymbol{x_{\mathrm{Iter}}}\)  Errors  \(\boldsymbol{x_{\mathrm{Iter}}}\)  Errors 
50  \(\begin{pmatrix} 0.6997\\0.1331 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.8477\\0.0097 \end{pmatrix} \)  0.1848  \(\begin{pmatrix} 0.8477\\0.0097 \end{pmatrix} \)  0.1848  \(\begin{pmatrix} 0.6758\\0.1172 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.6758\\0.1172 \end{pmatrix} \)  0 
60  \(\begin{pmatrix} 0.6997\\0.1331 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.8457\\0.0126 \end{pmatrix} \)  0.1813  \(\begin{pmatrix} 0.8457\\0.0126 \end{pmatrix} \)  0.1813  \(\begin{pmatrix} 0.6758\\0.1172 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.6758\\0.1172 \end{pmatrix} \)  0 
120  \(\begin{pmatrix} 0.6997\\0.1331 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.8384\\0.0236 \end{pmatrix} \)  0.1681  \(\begin{pmatrix} 0.8384\\0.0236 \end{pmatrix} \)  0.1681  \(\begin{pmatrix} 0.6758\\0.1172 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.6758\\0.1172 \end{pmatrix} \)  0 
50,000  \(\begin{pmatrix} 0.6997\\0.1331 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.7455\\0.1629 \end{pmatrix} \)  \(6.3791e^{-04}\)  \(\begin{pmatrix} 0.7455\\0.1629 \end{pmatrix} \)  \(6.3791e^{-04}\)  \(\begin{pmatrix} 0.6758\\0.1172 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.6758\\0.1172 \end{pmatrix} \)  0 
75,000  \(\begin{pmatrix} 0.6997\\0.1331 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.7452\\0.1634 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.7452\\0.1634 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.6758\\0.1172 \end{pmatrix} \)  0  \(\begin{pmatrix} 0.6758\\0.1172 \end{pmatrix} \)  0 
Remark 5.1
6 Concluding remarks
This paper can be considered a refinement of the work of Takahashi et al. [16]: we provide an algorithm for finding a solution of the main problem (1.9), which generalizes the problem considered in [16]. Some sufficient conditions for the weak convergence of the introduced algorithm are given. Also, to show the significance of the considered problem, some important applications are discussed. Since this paper focuses on algorithms of weak convergence type, a natural direction for future research is to study algorithms and sufficient conditions for strong convergence.
Declarations
Acknowledgements
The authors are thankful to the referees and the editor for their constructive comments and suggestions which have been useful for the improvement of the paper. This research has been funded by Naresuan University and the Thailand Research Fund under the project RTA5780007.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
 Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)
 Byrne, C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441-453 (2002)
 Censor, Y, Bortfeld, T, Martin, B, Trofimov, A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353-2365 (2006)
 Xu, HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 26, Article ID 105018 (2010)
 Masad, E, Reich, S: A note on the multiple-set split convex feasibility problem in Hilbert space. J. Nonlinear Convex Anal. 8, 367-371 (2007)
 Martinet, B: Régularisation d'inéquations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Opér. 3, 154-158 (1970)
 Bruck, RE, Reich, S: Nonexpansive projections and resolvents of accretive operators in Banach spaces. Houst. J. Math. 3, 459-470 (1977)
 Eckstein, J, Bertsekas, DP: On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55, 293-318 (1992)
 Marino, G, Xu, HK: Convergence of generalized proximal point algorithms. Commun. Pure Appl. Anal. 3, 791-808 (2004)
 Xu, HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240-256 (2002)
 Yao, Y, Noor, MA: On convergence criteria of generalized proximal point algorithms. J. Comput. Appl. Math. 217, 46-55 (2008)
 Byrne, C, Censor, Y, Gibali, A, Reich, S: Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. 13, 759-775 (2012)
 Cegielski, A: Iterative Methods for Fixed Point Problems in Hilbert Spaces. Lecture Notes in Math., vol. 2057. Springer, Heidelberg (2012)
 Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14(5), 877-898 (1976)
 Zhang, L, Hao, Y: Fixed point methods for solving solutions of a generalized equilibrium problem. J. Nonlinear Sci. Appl. 9, 149-159 (2016)
 Takahashi, W, Xu, HK, Yao, JC: Iterative methods for generalized split feasibility problems in Hilbert spaces. Set-Valued Var. Anal. 23, 205-221 (2015)
 Boikanyo, OA: The viscosity approximation forward-backward splitting method for zeros of the sum of monotone operators. Abstr. Appl. Anal. 2016, Article ID 2371857 (2016)
 Moudafi, A, Thera, M: Finding a zero of the sum of two maximal monotone operators. J. Optim. Theory Appl. 94(2), 425-448 (1997)
 Qin, X, Cho, SY, Wang, L: A regularization method for treating zero points of the sum of two monotone operators. Fixed Point Theory Appl. 2014, Article ID 75 (2014)
 Tseng, P: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431-446 (2000)
 Rockafellar, RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 149, 75-88 (1970)
 Passty, GB: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72, 383-390 (1979)
 Goebel, K, Reich, S: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York (1984)
 Baillon, JB, Bruck, RE, Reich, S: On the asymptotic behavior of nonexpansive mappings and semigroups in Banach spaces. Houst. J. Math. 4, 1-9 (1978)
 Kassay, G, Reich, S, Sabach, S: Iterative methods for solving systems of variational inequalities in reflexive Banach spaces. SIAM J. Optim. 21, 1319-1344 (2011)
 Xu, HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360-378 (2011)
 Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)
 Takahashi, W: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama (2009)
 Barbu, V: Nonlinear Semigroups and Differential Equations in Banach Spaces. Noordhoff International Publishing, Leiden (1976)
 Takahashi, W: Nonlinear Functional Analysis: Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama (2000)
 Li, D, Zhao, J: Approximation of solutions of quasi-variational inclusion and fixed points of nonexpansive mappings. J. Nonlinear Sci. Appl. 9, 152-159 (2016)
 Takahashi, S, Takahashi, W, Toyoda, M: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 147, 27-41 (2010)
 Yao, Z, Cho, SY, Kang, SM, Zhu, LJ: Approximating iterations for nonexpansive and maximal monotone operators. Abstr. Appl. Anal. 2015, Article ID 451320 (2015)
 Zhang, S, Lee, JHW, Chan, CK: Algorithms of common solutions to quasi variational inclusion and fixed point problems. Appl. Math. Mech. 29(5), 571-581 (2008)
 Takahashi, W, Toyoda, M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 118(2), 417-428 (2003)
 Baillon, JB, Haddad, G: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 26(2), 137-150 (1977)
 Iiduka, H: Fixed point optimization algorithm and its application to network bandwidth allocation. J. Comput. Appl. Math. 236, 1733-1742 (2012)
 Cui, H, Wang, F: Iterative methods for the split common fixed point problem in Hilbert spaces. Fixed Point Theory Appl. 2014, Article ID 78 (2014)
 Moudafi, A: A note on the split common fixed-point problem for quasi-nonexpansive operators. Nonlinear Anal., Theory Methods Appl. 74, 4083-4087 (2011)
 Shimizu, T, Takahashi, W: Strong convergence to common fixed points of families of nonexpansive mappings. J. Math. Anal. Appl. 211, 71-83 (1997)
 Zhao, J, He, S: Strong convergence of the viscosity approximation process for the split common fixed-point problem of quasi-nonexpansive mappings. J. Appl. Math. 2012, Article ID 438023 (2012)