Hybrid projected subgradient-proximal algorithms for solving split equilibrium problems and split common fixed point problems of nonexpansive mappings in Hilbert spaces
Fixed Point Theory and Applications volume 2018, Article number: 5 (2018)
Abstract
In this paper, we propose two strongly convergent algorithms, combining the diagonal subgradient method, the projection method, and the proximal method, for solving split equilibrium problems and split common fixed point problems of nonexpansive mappings in real Hilbert spaces: fixed point set constrained split equilibrium problems (FPSCSEPs). The computations in the first algorithm require prior knowledge of the operator norm, which is not always easy to estimate. We therefore propose a second iterative algorithm with a step-size selection rule whose implementation needs no prior information about the operator norm. The strong convergence of both algorithms is established under mild assumptions on the equilibrium bifunctions. We also report some applications and numerical results to compare and illustrate the convergence of the proposed algorithms.
1 Introduction
In 1994 Censor and Elfving [1] introduced the notion of the split feasibility problem, which is to find an element of a closed convex subset of a Euclidean space whose image under a linear operator belongs to another closed convex subset of a Euclidean space. Then, in 2009 Censor and Segal [2] introduced the split common fixed point problem (SCFPP), of which the split feasibility problem is a special case. Many convex optimization problems in a Hilbert space can be written in the form of an SCFPP, and SCFPPs have played an important role in the study of several unrelated problems arising in physics, finance, economics, network analysis, elasticity, optimization, water resources, medical imaging, structural analysis, image analysis and several other real-world applications (see, e.g., [3, 4]). Owing to this wide range of applications, SCFPPs have emerged as an interesting and fascinating research area of mathematics.
Let Δ be a nonempty closed convex subset of a real Hilbert space H equipped with the inner product \(\langle\cdot,\cdot\rangle\) and with the corresponding norm \(\|\cdot\|\) and let \(U:\Delta\rightarrow \Delta\) be an operator. We denote by \(\operatorname{Fix}U=\{x\in\Delta:Ux=x\}\) the subset of fixed points of U. We say that U is nonexpansive if \(\|U(x)-U(y)\|\leq\|x-y\|\) \(\forall x,y\in\Delta\).
Throughout the paper, unless otherwise stated, we assume that \(H_{1}\) and \(H_{2}\) are two real Hilbert spaces and \(A: H_{1}\rightarrow {H_{2}}\) is a nonzero bounded linear operator. Let C be a nonempty closed convex subset of \(H_{1}\) and \(T:C\rightarrow C\) be a nonexpansive operator, and let D be a nonempty closed convex subset of \(H_{2}\) and \(V:D\rightarrow D\) be a nonexpansive operator. Given two bifunctions \(f:C\times C\rightarrow{\mathbb{R}}\) and \(g:D\times D\rightarrow {\mathbb{R}}\), the notation \(\operatorname{EP}(f,C)\) represents the following equilibrium problem: find \(x^{*}\in C\) such that \(f(x^{*},y)\geq0\) \(\forall y\in C\), and \(\operatorname{SEP}(f,C)\) represents its solution set. Many problems in physics, optimization, and economics can be reduced to finding a solution of the equilibrium problem \(\operatorname{EP}(f,C)\); see, e.g., [5]. Combettes and Hirstoaga [6] introduced an iterative scheme for finding a solution of \(\operatorname{EP}(f,C)\) under the assumption that \(\operatorname{SEP}(f,C)\) is nonempty. Later on, many iterative algorithms were proposed to find an element of \(\operatorname{Fix}T \cap \operatorname{SEP}(f,C)\); see [7–10]. In 2013, Kazmi and Rizvi [11] considered the following split equilibrium problem (SEP): find \(x^{*}\in C\) such that \(f(x^{*},y)\geq0\) \(\forall y\in C\), and such that \(u^{*}=Ax^{*}\in D\) solves \(g(u^{*},v)\geq0\) \(\forall v\in D\).
They introduced an iterative scheme which converges strongly to a common solution of the split equilibrium problem, the variational inequality problem and the fixed point problem for a nonexpansive mapping. Many researchers have also proposed algorithms for finding a solution of the SEP; see, for example, [12–14] and the references therein. Hieu [14] proposed an algorithm for solving the SEP which combines three methods: the projection method, the proximal method and the diagonal subgradient method. Recently, Dinh, Son, and Anh [15] considered the following fixed point set-constrained split equilibrium problem (FPSCSEP): find \(x^{*}\in C\cap\operatorname{Fix}T\) such that \(f(x^{*},y)\geq0\) \(\forall y\in C\), and such that \(Ax^{*}\in D\cap\operatorname{Fix}V\) satisfies \(g(Ax^{*},v)\geq0\) \(\forall v\in D\). (1)
Let \(\operatorname{SFPSCSEP}(f,C,T;g,D,V)\), or simply S, denote the solution set of FPSCSEP (1). The problem (1) includes two fixed point set-constrained equilibrium problems (FPSCEPs). Consider the following fixed point set-constrained equilibrium problem (\(\operatorname{FPSCEP}(f, C,T)\)): find \(x^{*}\in C\cap\operatorname{Fix}T\) such that \(f(x^{*},y)\geq0\) \(\forall y\in C\), (2)
and let \(\operatorname{SFPSCEP}(f,C,T)\), or simply \({S_{1}}\), denote its solution set. Similarly, let \(\operatorname{FPSCEP}(g, D,V)\) denote the fixed point set-constrained equilibrium problem: find \(u^{*}\in D\cap\operatorname{Fix}V\) such that \(g(u^{*},v)\geq0\) \(\forall v\in D\), (3)
and \(\operatorname{SFPSCEP}(g,D,V)\), or simply \(S_{2}\), denotes its solution set. Therefore, from (1), (2), and (3) we have \(S=\{x^{*}\in S_{1}:Ax^{*}\in S_{2}\}\). Moreover, \(S_{1}=\{ x^{*}\in C: x^{*}\in \operatorname{SEP}(f,C)\cap \operatorname{Fix}T\}\). Similarly, \(S_{2}=\{u^{*}\in D:u^{*}\in \operatorname{SEP}(g,D)\cap \operatorname{Fix}V\}\). In [15], Dinh, Son, and Anh proposed extragradient algorithms for finding a solution of the problem (FPSCSEP). Under certain conditions on the parameters, the proposed iteration sequences are proved to be weakly and strongly convergent to a solution of (FPSCSEP). Furthermore, Dinh, Son, Jiao and Kim [16] proposed a linesearch algorithm which combines the extragradient method with the Armijo linesearch rule for solving the problem (FPSCSEP) in real Hilbert spaces, under the assumptions that the first bifunction is pseudomonotone with respect to its solution set, the second bifunction is monotone, and the fixed point mappings are nonexpansive. To obtain a strong convergence result, they combined the proposed algorithm with a hybrid cutting technique. The main advantage of the two mentioned extragradient methods is that they work with pseudomonotone bifunctions and their subproblems can be solved numerically more easily than the subproblems in the proximal method. However, solving the strongly convex optimization subproblems and finding the shrinking projections in [15, 16] is expensive except in special cases where the feasible set has a simple structure.
In this paper, we propose two strongly convergent algorithms for finding a solution of the problem (FPSCSEP). In the first algorithm, two projections onto the feasible set and a projected subgradient step followed by a proximal step must be computed at each iteration. In the second algorithm, we propose a modification of the first in which the second projection is still performed onto the feasible set, while the first projection onto C is replaced by a projection onto a tangent plane to C in order to reduce the number of optimization subproblems to be solved. Moreover, in the second algorithm, a rule for selecting an adaptive step-size in the second projection allows us to avoid prior knowledge of the operator norm. Compared with the algorithms in [15, 16], the proposed algorithms have a simpler structure, and the metric projection is, in general, simpler than solving strongly convex optimization subproblems over the same feasible set and finding shrinking projections.
The paper is organized as follows. In the next section we collect the properties and lemmas which will be used in the convergence proofs of the proposed algorithms. The algorithms and their convergence analysis are presented in the third section. Finally, in the last section we present applications supported by an example and numerical results.
2 Preliminaries
To investigate the convergence of our proposed algorithms, in this section we introduce notations and recall properties and technical lemmas which will be used in the sequel. We write \(x_{n}\rightharpoonup{x}\) to indicate that the sequence \(\{ x_{n}\}\) converges weakly to x as \(n\rightarrow{\infty}\), and \(x_{n}\rightarrow{x}\) means that \(\{x_{n}\}\) converges strongly to x. It is well known that the adjoint operator \(A^{*}\) of a bounded linear operator \(A: H_{1}\rightarrow{H_{2}}\) exists.
Let Δ be a subset of a real Hilbert space H and \(f:\Delta \times\Delta\rightarrow{\mathbb{R}}\) be a bifunction. Then f is said to be
-
(i)
strongly monotone on Δ iff there exists \(M > 0\) (briefly, M-strongly monotone on Δ) such that
$$f(x,y)+f(y,x)\leq{-M\|y-x\|^{2}},\quad \forall{x,y\in\Delta}; $$ -
(ii)
monotone on Δ iff
$$f(x,y)+f(y,x)\leq{0},\quad \forall{x,y\in\Delta}; $$ -
(iii)
pseudomonotone on Δ with respect to \(x\in \Delta\) iff
$$f(x,y)\geq{0} \quad \mbox{implies}\quad f(y,x)\leq{0},\quad \forall{y\in \Delta}. $$
We say that f is pseudomonotone on Δ with respect to \(\Psi \subset\Delta\) if it is pseudomonotone on Δ with respect to every \(x\in\Psi\). When \(\Psi=\Delta\), f is called pseudomonotone on Δ. Clearly, \((\mathrm{i})\Rightarrow(\mathrm{ii})\Rightarrow(\mathrm{iii})\) for every \(x\in\Delta\).
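As a simple illustration of (i) (a standard example, not from the original), the bifunction \(f(x,y)=\langle x, y-x\rangle\) on \(\Delta=H\) is 1-strongly monotone:

```latex
f(x,y)+f(y,x)
  = \langle x, y-x\rangle + \langle y, x-y\rangle
  = -\langle x-y, x-y\rangle
  = -\|x-y\|^{2},
```

so it is in particular monotone and pseudomonotone, consistent with the chain \((\mathrm{i})\Rightarrow(\mathrm{ii})\Rightarrow(\mathrm{iii})\).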
Definition 2.1
Let Δ be a nonempty closed convex subset of a real Hilbert space H. The metric projection onto Δ is the mapping \(P_{\Delta} : H\rightarrow{\Delta}\) defined by

$$P_{\Delta}(x)=\operatorname{argmin}\bigl\{ \Vert x-y \Vert : y\in\Delta\bigr\} . $$
Properties
Let Δ be a nonempty closed convex subset of a real Hilbert space H and let \(P_{\Delta}\) be the metric projection onto Δ. Since Δ is nonempty, closed and convex, \(P_{\Delta}(x)\) exists and is unique for every \(x\in H\). From the definition of \(P_{\Delta}\), it is easy to show that \(P_{\Delta}\) has the following characteristic properties.
-
(i)
For all \(x\in H\) and \(y\in\Delta\),
$$\bigl\Vert P_{\Delta}(x)-x \bigr\Vert \leq{ \Vert x-y \Vert }. $$ -
(ii)
For all \(x,y\in{H}\),
$$\bigl\Vert P_{\Delta}(x)-P_{\Delta}(y) \bigr\Vert ^{2} \leq\bigl\langle P_{\Delta }(x)-P_{\Delta}(y), x-y\bigr\rangle . $$ -
(iii)
For all \(x\in{\Delta}\), \(y\in{H}\),
$$\bigl\Vert x-P_{\Delta}(y) \bigr\Vert ^{2}+ \bigl\Vert P_{\Delta}(y)-y \bigr\Vert ^{2}\leq{ \Vert x-y \Vert ^{2}}. $$ -
(iv)
\(z=P_{\Delta}(x)\) if and only if \(\langle x-z, y-z\rangle\leq0\), \(\forall y\in{\Delta}\).
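These characteristic properties are easy to verify numerically. The following self-contained Python snippet (our own illustration, not from the original) checks (ii)–(iv) for the projection onto the half-space \(\Delta=\{x\in\mathbb{R}^{2}:x_{1}\geq1\}\), for which \(P_{\Delta}\) has the obvious closed form.

```python
import math

def proj(x):
    # Metric projection onto the half-space Delta = {x in R^2 : x[0] >= 1}
    return (max(x[0], 1.0), x[1])

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def sub(a, b):
    return tuple(ai - bi for ai, bi in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

x, y = (-2.0, 3.0), (0.5, -1.0)
px, py = proj(x), proj(y)

# (ii) firm nonexpansiveness: ||Px - Py||^2 <= <Px - Py, x - y>
assert norm(sub(px, py)) ** 2 <= dot(sub(px, py), sub(x, y)) + 1e-12

# (iii) for z in Delta: ||z - Px||^2 + ||Px - x||^2 <= ||z - x||^2
z = (2.0, 0.0)  # a point of Delta
assert norm(sub(z, px)) ** 2 + norm(sub(px, x)) ** 2 <= norm(sub(z, x)) ** 2 + 1e-12

# (iv) characterization: <x - Px, w - Px> <= 0 for points w in Delta (spot checks)
for w in [(1.0, 0.0), (3.0, 5.0), (1.5, -2.0)]:
    assert dot(sub(x, px), sub(w, px)) <= 1e-12
```

Property (ii) holds with equality here because the two projected points differ only in the unconstrained coordinate.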
Definition 2.2
Let H be a Hilbert space and \(f:\Delta\times\Delta\rightarrow\mathbb{R}\) be a bifunction such that \(f(x,\cdot)\) is a convex function for each x in Δ. Then for \(\epsilon\geq0\) the ϵ-subdifferential (ϵ-diagonal subdifferential) of f at x, denoted by \(\partial_{\epsilon }f(x,\cdot)(x)\) or \(\partial_{\epsilon}f(x,x)\), is given by

$$\partial_{\epsilon}f(x,\cdot) (x)=\bigl\{ w\in H: f(x,y)\geq f(x,x)+\langle w, y-x\rangle-\epsilon, \forall y\in\Delta\bigr\} . $$
Lemma 2.1
Given \(\lambda\in[0,1]\) and \(x,y\in H\), where H is a Hilbert space. Then

$$\bigl\Vert \lambda x+(1-\lambda)y \bigr\Vert ^{2}=\lambda \Vert x \Vert ^{2}+(1-\lambda) \Vert y \Vert ^{2}-\lambda(1-\lambda) \Vert x-y \Vert ^{2}. $$
Lemma 2.2
(Opial’s condition)
For any sequence \(\{x^{k}\}\) in the Hilbert space H with \(x^{k}\rightharpoonup{x}\), the inequality

$$\liminf_{k\rightarrow\infty} \bigl\Vert x^{k}-x \bigr\Vert < \liminf_{k\rightarrow\infty} \bigl\Vert x^{k}-y \bigr\Vert $$

holds for each \(y\in H\) with \(y\neq x\).
The next lemma will be a useful tool to obtain the boundedness of the sequences generated by the algorithms and also to obtain the convergence of the whole sequence to the solution.
Lemma 2.3
If \(\{a_{k}\}_{k=0}^{\infty}\) and \(\{b_{k}\}_{k=0}^{\infty}\) are two nonnegative real sequences such that

$$a_{k+1}\leq a_{k}+b_{k},\quad \forall k\geq0, $$

with \(\sum_{k=0}^{\infty}b_{k} < \infty\), then the sequence \(\{a_{k}\}_{k=0}^{\infty}\) converges.
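A quick numerical illustration of Lemma 2.3 (our own sketch, not from the original): the recursion \(a_{k+1}=0.9a_{k}+b_{k}\) with \(b_{k}=2^{-k}\) satisfies \(a_{k+1}\leq a_{k}+b_{k}\) with summable \(b_{k}\), so the sequence converges.

```python
# Numerical illustration of Lemma 2.3: a nonnegative sequence with
# a_{k+1} <= a_k + b_k and summable b_k converges.  Here
# a_{k+1} = 0.9*a_k + b_k <= a_k + b_k with b_k = 2**-k.
a, tail = 5.0, []
for k in range(500):
    b = 2.0 ** (-k)
    a = 0.9 * a + b          # satisfies a_{k+1} <= a_k + b_k
    tail.append(a)

# The last iterates are numerically indistinguishable: the sequence converges.
assert abs(tail[-1] - tail[-2]) < 1e-12
```

In this particular run the limit happens to be 0, but the lemma only guarantees existence of a limit, not its value.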
Lemma 2.4
Let Δ be closed and convex subset of a Hilbert space H. If \(U:\Delta\rightarrow\Delta\) is nonexpansive, then FixU is closed and convex.
Now, we assume that the bifunctions \(g:D\times D\rightarrow{\mathbb {R}}\) and \(f:C\times C\rightarrow{\mathbb{R}}\) satisfy the following assumptions, Condition A and Condition B, respectively.
Condition A
-
(A1)
\(g(u,u) = 0\), for all \(u\in D\).
-
(A2)
g is monotone on D, i.e., \(g(u, v)+g(v,u)\leq0\), for all \(u,v\in D\).
-
(A3)
For each \(u,v,w\in D\),
$$\limsup_{\alpha\downarrow{0}}g\bigl(\alpha w+(1-\alpha)u,v\bigr)\leq g(u,v). $$ -
(A4)
\(g(u,\cdot)\) is convex and lower semicontinuous on D for each \(u\in D\).
Condition B
-
(B1)
\(f(x,x)=0\) for all \(x\in C\).
-
(B2)
f is pseudomonotone on C with respect to \(x\in \operatorname{SEP}(f,C)\), i.e., if \(x\in \operatorname{SEP}(f,C)\) then \(f(x,y)\geq0\) implies \(f(y,x)\leq0\), \(\forall y\in C\).
-
(B3)
f satisfies the following condition, called the strict paramonotonicity property:
$$x\in \operatorname{SEP}(f,C), y\in C, \quad f(y,x)=0\Rightarrow y\in \operatorname{SEP}(f,C). $$ -
(B4)
f is jointly weakly upper semicontinuous on \(C\times C\) in the sense that, if \(x,y\in C\) and \(\{x^{k}\}, \{y^{k}\}\subset C\) converge weakly to x and y, respectively, then \(f(x^{k},y^{k})\rightarrow f(x,y)\) as \(k\rightarrow\infty\).
-
(B5)
\(f(x,\cdot)\) is convex, lower semicontinuous and subdifferentiable on C, for all \(x\in C\).
-
(B6)
If \(\{x^{k}\}\) is a bounded sequence in C and \(\epsilon _{k}\rightarrow0\), then the sequence \(\{w^{k}\}\) with \(w^{k}\in \partial_{\epsilon_{k}}f(x^{k},\cdot)(x^{k})\) is bounded.
The following three results are from equilibrium programming in Hilbert spaces.
Lemma 2.5
([17, Lemma 2.12])
Let g satisfy Condition A. Then, for each \(r>0\) and \(u\in H_{2}\), there exists \(w\in D\) such that

$$g(w,v)+\frac{1}{r}\langle v-w, w-u\rangle\geq0,\quad \forall v\in D. $$
Lemma 2.6
([17, Lemma 2.12])
Let g satisfy Condition A. Then, for each \(r>0\) and \(u\in H_{2}\), define a mapping (called the resolvent of g) by

$$T_{r}^{g}(u)=\biggl\{ w\in D: g(w,v)+\frac{1}{r}\langle v-w, w-u\rangle\geq0, \forall v\in D\biggr\} . $$
Then the following holds:
-
(i)
\(T_{r}^{g}\) is single-valued;
-
(ii)
\(T_{r}^{g}\) is firmly nonexpansive, i.e., for all \(u,v\in H_{2}\),
$$\bigl\Vert T_{r}^{g}(u)-T_{r}^{g}(v) \bigr\Vert ^{2}\leq\bigl\langle T_{r}^{g}(u)-T_{r}^{g}(v), u-v\bigr\rangle ; $$ -
(iii)
\(\operatorname{Fix}(T_{r}^{g})=\operatorname{SEP}(g,D)\), where \(\operatorname{Fix}(T_{r}^{g})\) is the fixed point set of \(T_{r}^{g}\);
-
(iv)
\(\operatorname{SEP}(g,D)\) is closed and convex.
Lemma 2.7
([17, Lemma 2.12])
Let \(r,s>0\) and \(u,v\in H_{2}\). Under the assumptions of Lemma 2.6,

$$\bigl\Vert T_{r}^{g}(u)-T_{s}^{g}(v) \bigr\Vert \leq \Vert u-v \Vert +\frac{ \vert s-r \vert }{s} \bigl\Vert T_{s}^{g}(v)-v \bigr\Vert . $$
3 Main result
In this section, we propose two strongly convergent algorithms for solving FPSCSEP (1) which combine three methods: the projection method, the proximal method and the diagonal subgradient method.
3.1 Projected subgradient-proximal algorithm
Algorithm 3.1
Initialization: Choose \(x^{0}\in C\). Take \(\{\rho _{k}\}\), \(\{\beta_{k}\}\), \(\{\epsilon_{k}\}\), \(\{r_{k}\}\), \(\{\delta _{k}\}\) and \(\{\mu_{k}\}\) such that
Step 1: Take \(w^{k}\in H_{1}\) such that \(w^{k}\in \partial_{\epsilon_{k}}f(x^{k},\cdot)(x^{k})\).
Step 2: Calculate

$$\eta_{k}:=\max\bigl\{ \rho_{k}, \bigl\Vert w^{k} \bigr\Vert \bigr\} ,\qquad \alpha_{k}=\frac{\beta_{k}}{\eta_{k}}, $$

and

$$y^{k}=P_{C}\bigl(x^{k}-\alpha_{k}w^{k}\bigr). $$

Step 3: Evaluate

$$t^{k}=\delta_{k}x^{k}+(1-\delta_{k})T\bigl(y^{k}\bigr). $$

Step 4: Evaluate

$$u^{k}=T_{r_{k}}^{g}\bigl(At^{k}\bigr). $$

Step 5: Evaluate

$$x^{k+1}=P_{C}\bigl(t^{k}+\mu_{k}A^{*}\bigl(V\bigl(u^{k}\bigr)-At^{k}\bigr)\bigr). $$
Step 6: Set \(k:=k+1\) and go to Step 1.
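To make the structure of Algorithm 3.1 concrete, here is a minimal Python sketch of one iteration, specialized to \(H_{1}=H_{2}=\mathbb{R}\) for readability. The update formulas follow the convergence analysis below; `proj_C`, `T`, `V`, `A`, `A_star` and `resolvent_g` are hypothetical placeholders supplied by the user, and the toy run at the end is our own illustration, not one of the paper's experiments.

```python
# A sketch of one iteration of Algorithm 3.1, specialized to H1 = H2 = R.
# proj_C, T, V, A, A_star and resolvent_g are user-supplied placeholders.
def iterate(x, w, proj_C, T, V, A, A_star, resolvent_g,
            rho, beta, delta, mu, r):
    # Step 1 is assumed done: w is an eps_k-diagonal subgradient of f(x, .) at x.
    eta = max(rho, abs(w))                # eta_k = max{rho_k, ||w^k||}
    alpha = beta / eta                    # alpha_k = beta_k / eta_k
    y = proj_C(x - alpha * w)             # projected subgradient step
    t = delta * x + (1 - delta) * T(y)    # relaxation with the fixed-point map T
    u = resolvent_g(r, A(t))              # proximal step: u^k = T_r^g(A t^k)
    return proj_C(t + mu * A_star(V(u) - A(t)))   # x^{k+1}

# Toy run: C = D = [1, oo), f(x, y) = 2y - 2x (so w = 2), g(u, v) = v - u,
# T(x) = (x + 1)/2, V = A = identity; the unique solution is x* = 1.
proj = lambda z: max(z, 1.0)
resolvent = lambda r, u: max(u - r, 1.0)  # prox of v -> r*v over [1, oo)
x = 3.0
for k in range(100):
    x = iterate(x, 2.0, proj, lambda z: (z + 1) / 2, lambda z: z,
                lambda z: z, lambda z: z, resolvent,
                rho=1.0, beta=1.0 / (k + 1), delta=0.5, mu=0.5, r=0.5)
print(x)  # converges toward the solution 1.0
```

Here \(\mu=0.5\) respects the bound \(\mu_{k}<1/\|A\|^{2}=1\) for the identity operator, and \(\beta_{k}=1/(k+1)\) gives \(\sum\beta_{k}^{2}<\infty\) and \(\sum\beta_{k}/\rho_{k}=\infty\).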
Remark 3.1
Since \(f(x,\cdot)\) is a lower semicontinuous convex function and \(C\subset \operatorname{dom}f(x,\cdot)\) for every \(x\in C\), the \(\epsilon_{k}\)-diagonal subdifferential \(\partial _{\epsilon_{k}}f(x^{k},\cdot)(x^{k}) \neq\emptyset\) for every \(\epsilon_{k} > 0\). Moreover, \(\rho _{k}\geq\rho > 0\). Therefore, each step of the algorithm is well defined, implying that Algorithm 3.1 is well defined.
Remark 3.2
Since f is pseudomonotone on C with respect to \(\operatorname{SEP}(f,C)\), under Condition B ((B1) and (B4)) the set \(\operatorname{SEP}(f,C)\) is closed and convex.
Therefore, by Lemma 2.4, Remark 3.2, and the linearity of the operator A, the solution set S of the FPSCSEP is closed and convex. In this paper, the solution set S is assumed to be nonempty.
Lemma 3.1
Let \(\{y^{k}\}\), \(\{t^{k}\}\) and \(\{x^{k}\}\) be sequences generated by Algorithm 3.1. For \(x^{*}\in S\),

$$\bigl\Vert t^{k}-x^{*} \bigr\Vert ^{2}\leq \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}+2\alpha_{k}(1-\delta_{k})f\bigl(x^{k},x^{*}\bigr)-L_{k}+\xi_{k}, $$

where

$$L_{k}=(1-\delta_{k}) \bigl\Vert x^{k}-y^{k} \bigr\Vert ^{2}+\delta_{k}(1-\delta_{k}) \bigl\Vert T\bigl(y^{k}\bigr)-x^{k} \bigr\Vert ^{2} $$

and

$$\xi_{k}=2(1-\delta_{k})\frac{\beta_{k}\epsilon_{k}}{\rho_{k}}+2(1-\delta_{k})\beta_{k}^{2}. $$
Proof
Let \(x^{*}\in S\). From \(y^{k}=P_{C}(x^{k}-\frac{\beta _{k}}{\eta_{k}}w^{k})\) and \(x^{*}\in S\) we have
implying that
But also \(x^{k}\in C\). Thus,
and this together with (4) gives us
That is,
Thus,
Since \(x^{k}\in C\) and \(w^{k}\in\partial_{\epsilon _{k}}f(x^{k},\cdot)(x^{k})\) we have
Using the definitions of \(\alpha_{k}\) and \(\eta_{k}\) we obtain
But
Then by definition of \(t^{k}\) we have
and this together with (10) gives
That is,
where
and
 □
Remark 3.3
Since \(x^{*}\in \operatorname{SEP}(C,f)\) we have \(f(x^{*},x)\geq0\) for all \(x\in C\), and by pseudomonotonicity of f with respect to \(\operatorname{SEP}(C,f)\) we have \(f(x,x^{*})\leq0\) for all \(x\in C\). Thus since the sequence \(\{x^{k}\}\) is in C we have \(f(x^{k},x^{*})\leq0\). Thus, we can also have
Lemma 3.2
Let \(\{y^{k}\}\), \(\{u^{k}\}\), and \(\{x^{k}\}\) be sequences generated by Algorithm 3.1. Let \(x^{*}\in S\). Then
where
and
Proof
Let \(x^{*}\in S\). By Lemma 2.6, we have
That is,
In view of (12), we have
Thus,
which gives
Hence,
Then from (13) and (15) we have
That is,
Therefore, from Lemma 3.1 and from (16) we have
That is,
where
and
 □
Lemma 3.3
Let \(\{y^{k}\}\), \(\{t^{k}\}\), \(\{u^{k}\}\), and \(\{x^{k}\}\) be sequences generated by Algorithm 3.1. Then:
-
(i)
For \(x^{*}\in S\), the limit of the sequence \(\{\|x^{k}-x^{*}\|^{2}\} \) exists (and \(\{x^{k}\}\) is bounded).
-
(ii)
\(\limsup_{k\rightarrow\infty} f(x^{k},x)=0\) for all \(x\in S\).
-
(iii)
$$\begin{aligned}& \lim_{k\rightarrow\infty} \bigl\Vert V\bigl(u^{k} \bigr)-At^{k} \bigr\Vert =\lim_{k\rightarrow \infty} \bigl\Vert u^{k}-At^{k} \bigr\Vert =0, \\& \lim_{k\rightarrow\infty} \bigl\Vert x^{k}-y^{k} \bigr\Vert =\lim_{k\rightarrow\infty } \bigl\Vert T\bigl(y^{k} \bigr)-x^{k} \bigr\Vert =0. \end{aligned}$$
-
(iv)
$$\lim_{k\rightarrow\infty} \bigl\Vert t^{k}-x^{k} \bigr\Vert =\lim_{k\rightarrow\infty } \bigl\Vert T\bigl(x^{k} \bigr)-x^{k} \bigr\Vert = \lim_{k\rightarrow\infty} \bigl\Vert V \bigl(u^{k}\bigr)-u^{k} \bigr\Vert =0. $$
Proof
(i) Let \(x^{*}\in S\). Since \(f(x^{k},x^{*})\leq0\), \(K_{k}\geq0\), from Lemma 3.2 we can have
Observing that \(\xi_{k}=2(1-\delta_{k})\frac{\beta_{k}\epsilon_{k}}{\rho _{k}}+2(1-\delta_{k})\beta_{k}^{2}\leq2\frac{\beta_{k}\epsilon _{k}}{\rho_{k}}+2\beta_{k}^{2}\) and using the initialization condition of the parameters we can see that \(\sum_{k=0}^{\infty}\xi _{k}<\infty\).
Therefore, \(\lim_{k\rightarrow\infty} \|x^{k}-x^{*}\|^{2}\) exists and this implies that the sequence \(\{x^{k}\}\) is bounded.
(ii) From Lemma 3.2 we have
Summing up the above inequalities for every N, we obtain
This will yield
Letting \(N\rightarrow+\infty\), we have
Hence,
and
Since the sequence \(\{x^{k}\}\) is bounded, by Condition B(B6) the sequence \(\{w^{k}\}\) is also bounded. Thus, there is a real number \(w\geq\rho\) such that \(\|w^{k}\|\leq w\). Thus,
Noting
we have
That is,
Since \(\sum_{k=0}^{\infty}\frac{\beta_{k}}{\rho_{k}}=+\infty\) and \(-f(x^{*},x^{k})\leq0\) we can conclude that
for all \(x\in S\).
(iii) From (19) and since \(0 < c\leq\mu_{k}\leq b < \frac{1}{\|A\|^{2}}\), \(0<\delta_{k}<1\) we have
Hence, the result follows.
(iv) The result follows from (iii) and from the following inequalities:
and \(\|V(u^{k})-u^{k}\|\leq\|V(u^{k})-At^{k}\|+\|u^{k}-At^{k}\|\). □
Theorem 3.4
Assume Condition A and Condition B are satisfied and let \(\{y^{k}\}\), \(\{t^{k}\}\), \(\{u^{k}\}\), and \(\{x^{k}\}\) be sequences generated by Algorithm 3.1. Then the sequences \(\{ y^{k}\}\), \(\{t^{k}\}\) and \(\{x^{k}\}\) converge strongly to a point \(p\in S\) and \(\{u^{k}\}\) converges strongly to the point \(Ap\in S_{2}\). Moreover, \(p=\lim_{k\rightarrow+\infty}P_{S}(x^{k})\).
Proof
Let \(x^{*}\in S\). From Lemma 3.3(i) we have seen that the sequence \(\{x^{k}\}\) is bounded. There exists a subsequence \(\{ x^{k_{j}}\}\) of \(\{x^{k}\}\) such that \(x^{k_{j}}\rightharpoonup p\) as \(j\rightarrow+\infty\), where \(p\in C\) and
But by the weakly upper semicontinuity of \(f(\cdot,x^{*})\) and by Lemma 3.3(ii) we have
Since \(x^{*}\in S\) and \(p\in C\) we have \(f(x^{*},p)\geq0\). As f is pseudomonotone we have \(f(p,x^{*})\leq0\). Thus, this together with the above fact gives \(f(x^{*},p)=0\). Hence, by Condition B(B3) we have \(p\in \operatorname{SEP}(f,C)\).
Since
and using \(\lim_{k\rightarrow{+\infty}}\|x^{k}-y^{k}\|=0\) from Lemma 3.3 we have \(y^{k_{j}}\rightharpoonup p\) as \(j\rightarrow +\infty\). Therefore, \(Ay^{k_{j}}\rightharpoonup Ap\) as \(j\rightarrow +\infty\). Similarly, we can have \(t^{k_{j}}\rightharpoonup p\) as \(j\rightarrow+\infty\) and hence \(At^{k_{j}}\rightharpoonup Ap\) as \(j\rightarrow+\infty\).
Assume \(p\notin \operatorname{Fix}T\), that is, \(T(p)\neq p\). Thus, using Opial’s condition and Lemma 3.3
which is a contradiction. Hence, it must be the case that \(p\in \operatorname{Fix}T\).
Hence, \(p\in \operatorname{SEP}(f,C)\cap\operatorname{Fix}T=S_{1}\). (22)
Since \(\lim_{k\rightarrow{+\infty}}\|u^{k}-At^{k}\|=0\) and
we have \(u^{k_{j}}\rightharpoonup Ap\) as \(j\rightarrow+\infty\). Assume \(Ap\notin \operatorname{Fix}V\). Thus, using Opial’s condition and Lemma 3.2
which is a contradiction. Hence, it must be the case that \(Ap\in \operatorname{Fix}V\). Let \(r>0\). Assume \(Ap\notin \operatorname{Fix}(T_{r}^{g})\). Thus, \(T_{r}^{g}(Ap)\neq Ap\). Thus, using Opial’s condition, Lemma 3.2, Lemma 3.3 we obtain the following:
which is a contradiction. Hence, it must be the case that \(Ap\in \operatorname{Fix}(T_{r}^{g})\). By Lemma 2.6(iii) we have \(Ap\in \operatorname{SEP}(g,D)\). Therefore, \(Ap\in \operatorname{SEP}(g,D)\cap\operatorname{Fix}V=S_{2}\). (23)
Therefore, from (22) and (23) we have \(p\in S\). That is, \(p\in S\) and p is a weak cluster point of the sequence \(\{ x^{k}\}\). By Lemma 3.3, \(\{\|x^{k}-p\|^{2}\}\) converges. Hence, we conclude that the sequence \(\{x^{k}\}\) converges strongly to p. As a result, it is easy to see that \(t^{k}\rightarrow p\) and \(y^{k}\rightarrow p\) as \(k\rightarrow+\infty\). Moreover, \(Ay^{k}\rightarrow Ap\), \(At^{k}\rightarrow Ap\), and \(Ax^{k}\rightarrow Ap\). From
we have \(u^{k}\rightarrow Ap\). We will end the proof by showing \(p=\lim_{k\rightarrow+\infty}P_{S}(x^{k})\). From Lemma 3.2 we have
Let \(z^{k}=P_{S}(x^{k})\). Since \(P_{S}(x^{k})\in S\) we have
But by property of metric projection we have
Thus,
Since \(\sum_{k=0}^{\infty}\xi_{k}<\infty\), by Lemma 2.3 we see that \(\lim_{k\rightarrow+\infty}\|x^{k}-z^{k}\|^{2}\) exists. Using the definition of a metric projection we can have
Let \(m\geq n\). Then using (24) and (27) we have
Since \(\sum_{k=0}^{\infty}\xi_{k}<\infty\) and \(\lim_{k\rightarrow+\infty}\|x^{k}-z^{k}\|^{2}\) exists, letting \(m,n\rightarrow+\infty\) we see that \(\|z^{n}-z^{m}\| ^{2}\rightarrow0\). This implies that the sequence \(\{z^{k}\}\) is a Cauchy sequence, and hence it converges to some point z in S. Since \(z^{k}=P_{S}(x^{k})\) we have
Thus
Thus,
Hence, \(p=z\) and \(\lim_{k\rightarrow+\infty}P_{S}(x^{k})=p\). □
Let Id represent the identity operator. If \(T=\mathrm{Id}\) and \(V=\mathrm{Id}\), then FPSCSEP (1) reduces to the SEP. Hence, Algorithm 3.1 can be rewritten as follows.
Algorithm 3.1B
Initialization: Choose \(x^{0}\in C\). Take \(\{\rho_{k}\}\), \(\{\beta_{k}\}\), \(\{\epsilon_{k}\}\), \(\{r_{k}\}\), \(\{ \delta_{k}\}\) and \(\{\mu_{k}\}\) such that
Step 1: Take \(w^{k}\in H_{1}\) such that \(w^{k}\in \partial_{\epsilon_{k}}f(x^{k},\cdot)(x^{k})\).
Step 2: Calculate

$$\eta_{k}:=\max\bigl\{ \rho_{k}, \bigl\Vert w^{k} \bigr\Vert \bigr\} ,\qquad \alpha_{k}=\frac{\beta_{k}}{\eta_{k}}, $$

and

$$y^{k}=P_{C}\bigl(x^{k}-\alpha_{k}w^{k}\bigr). $$

Step 3: Evaluate

$$t^{k}=\delta_{k}x^{k}+(1-\delta_{k})y^{k}. $$

Step 4: Evaluate

$$u^{k}=T_{r_{k}}^{g}\bigl(At^{k}\bigr). $$

Step 5: Evaluate

$$x^{k+1}=P_{C}\bigl(t^{k}+\mu_{k}A^{*}\bigl(u^{k}-At^{k}\bigr)\bigr). $$
Step 6: Set \(k:=k+1\) and go to Step 1.
The following corollary is an immediate consequence of Theorem 3.4.
Corollary 3.5
Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces and \(A: H_{1}\rightarrow{H_{2}}\) be a nonzero bounded linear operator. Let C be a nonempty closed convex subset of \(H_{1}\), D be a nonempty closed convex subset of \(H_{2}\), and \(f:C\times C\rightarrow{\mathbb{R}}\) and \(g:D\times D\rightarrow{\mathbb{R}}\) be bifunctions. Assume Condition A and Condition B are satisfied and let \(\{y^{k}\}\), \(\{ t^{k}\}\), \(\{u^{k}\}\), and \(\{x^{k}\}\) be sequences generated by Algorithm 3.1B. If \(S=\{x^{*}\in \operatorname{SEP}(f,C):Ax^{*}\in \operatorname{SEP}(g,D)\}\neq \emptyset\), then the sequences \(\{y^{k}\}\), \(\{t^{k}\}\) and \(\{x^{k}\}\) converge strongly to a point \(p\in S\) and \(\{u^{k}\}\) converges strongly to the point \(Ap\in \operatorname{SEP}(g,D)\).
3.2 Modified projected subgradient-proximal algorithm
The computation of Algorithm 3.1 involves the evaluation of two projections onto the feasible set C and an estimate of the operator norm \(\|A\|\). It is not an easy task to calculate, or even to estimate, the operator norm \(\|A\|\). Based on Algorithm 3.1, we propose an algorithm with a step-size selection rule such that its implementation does not need any prior information about the operator norm; moreover, the algorithm involves only one projection onto the feasible set C.
For any \(\alpha > 0\) define \(h_{\alpha}(x)=\frac{1}{2}\| VT^{g}_{\alpha}A(x)-A(x)\|^{2}\) for all \(x\in H_{1}\), and so \(\nabla h_{\alpha}(x)=A^{*}(VT^{g}_{\alpha}A(x)-A(x))\).
Algorithm 3.2
Initialization: Choose \(x^{0}\in C\). Take \(\{\rho _{k}\}\), \(\{\beta_{k}\}\), \(\{\epsilon_{k}\}\), \(\{r_{k}\}\), \(\{\delta _{k}\}\) and \(\{\eta_{k}\}\) such that
Step 1: Find \(w^{k}\in H_{1}\) such that \(w^{k}\in \partial_{\epsilon_{k}}f(x^{k},\cdot)(x^{k})\).
Step 2: Evaluate \(y^{k}=P_{T_{k}}(x^{k}-\alpha _{k}w^{k})\) where \(\alpha_{k}=\frac{\beta_{k}}{\eta_{k}}\), \(\eta_{ k}:=\max\{\rho_{k},\|w^{k}\|\}\), and \(T_{0}=C\), \(T_{k}= \{z\in H_{1}:\langle t^{k-1}+\mu _{k-1}\nabla h_{r}(t^{k-1})-x^{k},z-x^{k}\rangle\leq0\} \) for \(k=1,2,3,\ldots\) .
Step 3: Evaluate \(t^{k}=\delta_{k}x^{k}+(1-\delta _{k})T(y^{k})\).
Step 4: Evaluate \(u^{k}=T_{r}^{g}(At^{k})\).
Step 5: Evaluate

$$x^{k+1}=P_{C}\bigl(t^{k}+\mu_{k}\nabla h_{r}\bigl(t^{k}\bigr)\bigr), $$

where

$$\mu_{k}= \textstyle\begin{cases} \frac{\eta_{k}h_{r}(t^{k})}{ \Vert \nabla h_{r}(t^{k}) \Vert ^{2}} & \mbox{if } \nabla h_{r}(t^{k})\neq0, \\ 0 & \mbox{otherwise}. \end{cases} $$
Step 6: Set \(k=k+1\) and go to Step 1.
Remark 3.4
By the definition of \(T_{k}\), we see that \(T_{k}\) is either a half-space or the whole space \(H_{1}\). Therefore, for each k, \(T_{k}\) is a closed convex set, and the computation of the projection \(y^{k}=P_{T_{k}}(x^{k}-\alpha_{k}w^{k})\) in Step 2 of Algorithm 3.2 is explicit and easier than the computation of the projection \(y^{k}=P_{C}(x^{k}-\alpha_{k}w^{k})\) in Step 2 of Algorithm 3.1 when C has a complex structure. Moreover, by reasoning similar to that for Algorithm 3.1, Algorithm 3.2 is well defined, and the solution set S of the FPSCSEP is obviously convex and closed.
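The explicitness mentioned in Remark 3.4 comes from the closed form of the projection onto a half-space. A small Python sketch (our own illustration; the vector a plays the role of \(t^{k-1}+\mu_{k-1}\nabla h_{r}(t^{k-1})-x^{k}\) and x the role of \(x^{k}\)):

```python
# Projecting a point p onto the half-space T = {z : <a, z - x> <= 0} needs no
# optimization subproblem: if p is feasible it is its own projection, otherwise
# subtract the violating component along a.
def proj_halfspace(p, a, x):
    # <a, p - x> measures how far p violates the constraint
    viol = sum(ai * (pi - xi) for ai, pi, xi in zip(a, p, x))
    if viol <= 0.0:
        return tuple(p)                     # already in T: projection is p itself
    a2 = sum(ai * ai for ai in a)           # ||a||^2
    return tuple(pi - (viol / a2) * ai for pi, ai in zip(p, a))

# Example: a = (1, 0), x = (0, 0) gives T = {z : z1 <= 0}.
print(proj_halfspace((2.0, 3.0), (1.0, 0.0), (0.0, 0.0)))  # -> (0.0, 3.0)
```

When \(a=0\), \(T_{k}\) is the whole space and the projection is the identity, matching the two cases in Remark 3.4.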
Lemma 3.6
Let \(\{y^{k}\}\), \(\{t^{k}\}\) and \(\{x^{k}\}\) be sequences generated by Algorithm 3.2.
-
(i)
\(C\subset T_{k}\) for all \(k\geq0\).
-
(ii)
For \(x^{*}\in S\),
$$\bigl\Vert t^{k}-x^{*} \bigr\Vert ^{2}\leq \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}+2 \alpha_{k}(1-\delta _{k})f\bigl(x^{k},x^{*} \bigr)-L_{k}+\xi_{k}, $$where
$$L_{k}=(1-\delta_{k}) \bigl\Vert x^{k}-y^{k} \bigr\Vert ^{2}+\delta_{k}(1-\delta_{k}) \bigl\Vert T\bigl(y^{k}\bigr)-x^{k} \bigr\Vert ^{2} $$and
$$\xi_{k}=2(1-\delta_{k})\frac{\beta_{k}\epsilon_{k}}{\rho _{k}}+2(1- \delta_{k})\beta_{k}^{2}. $$
Proof
(i) From \(x^{k}=P_{C}(t^{k-1}+\mu_{k-1}\nabla h_{r}(t^{k-1}))\) and by property of metric projection we have
which together with the definition of \(T_{k}\) implies that \(C\subset T_{k}\).
(ii) Let \(x^{*}\in S\). From \(y^{k}=P_{T_{k}}(x^{k}-\frac{\beta _{k}}{\eta_{k}}w^{k})\) and \(x^{*},x^{k}\in C\subset T_{k}\) we have
Then, with a similar proof as for Lemma 3.1 we have
where
and
 □
Lemma 3.7
Let \(\{y^{k}\}\), \(\{u^{k}\}\), and \(\{x^{k}\}\) be sequences generated by Algorithm 3.2. For \(x^{*}\in S\)
where
and
Proof
Let \(x^{*}\in S\). By Lemma 2.6,
That is,
In view of (28) we get
Hence,
Using (29) we have
That is,
By Lemma 2.6 and (30), we have
That is,
Therefore, using (31) and Lemma 3.6, we have
where
and
Note that by the definition of \(\mu_{k}\) we have
 □
Lemma 3.8
Let \(\{y^{k}\}\), \(\{t^{k}\}\), \(\{u^{k}\}\), and \(\{x^{k}\}\) be sequences generated by Algorithm 3.2. Then:
-
(i)
For \(x^{*}\in S\), the limit of the sequence \(\{\|x^{k}-x^{*}\|^{2}\} \) exists (and \(\{x^{k}\}\) is bounded).
-
(ii)
\(\limsup_{k\rightarrow\infty} f(x^{k},x)=0\) for all \(x\in S\).
-
(iii)
$$\begin{aligned}& \lim_{k\rightarrow\infty} \bigl\Vert u^{k}-At^{k} \bigr\Vert =\lim_{k\rightarrow \infty} \bigl\Vert x^{k}-y^{k} \bigr\Vert =\lim_{k\rightarrow\infty} \bigl\Vert T\bigl(y^{k} \bigr)-x^{k} \bigr\Vert =0, \\& \lim_{k\rightarrow\infty} \bigl\Vert t^{k}-x^{k} \bigr\Vert =\lim_{k\rightarrow\infty } \bigl\Vert T\bigl(x^{k} \bigr)-x^{k} \bigr\Vert =0. \end{aligned}$$
-
(iv)
$$\lim_{k\rightarrow\infty} h_{r}\bigl(t^{k}\bigr)=\lim _{k\rightarrow\infty} \bigl\Vert V\bigl(u^{k}\bigr)-u^{k} \bigr\Vert =0. $$
Proof
(i) Let \(x^{*}\in S\). Since \(f(x^{k},x^{*})\leq0\), \(K_{k}\geq0\), and \(\omega_{k}\geq0\), from Lemma 3.7 we have
Therefore, the result follows.
(ii) From Lemma 3.7 we can have
Summing up the above inequalities for every N, we obtain
This will yield
Letting \(N\rightarrow+\infty\), we have
Hence,
In the same way as in the proof of Lemma 3.3 the result follows.
(iii) From \(\sum_{k=0}^{\infty}K_{k}<+\infty\) and \(0<\delta _{k}<1\) we have
The remaining result follows from the following inequalities:
and
(iv) From (32) we have \(\sum_{k=0}^{\infty} [4\mu _{k}h_{r}(t^{k})-(\mu_{k}\|\nabla h_{r}(t^{k})\|)^{2}] < +\infty\). Without loss of generality, we can assume that \(\nabla h_{r}(t^{k})\neq0\) for all k. Thus, \(\sum_{k=0}^{\infty} [4\mu_{k}h_{r}(t^{k})-(\mu_{k}\|\nabla h_{r}(t^{k})\|)^{2}] < +\infty\) implies that
Since \(0 < \eta\leq\eta_{k}\leq4-\eta\) we have
Since \(\lim_{k\rightarrow\infty}\|t^{k}-x^{k}\|=0\) and \(\{ x^{k}\}\) is bounded, \(\{t^{k}\}\) is also bounded. Thus, it follows from the Lipschitz continuity of \(\nabla h_{r}(\cdot)\) that \(\{\|\nabla h_{r}(t^{k})\|^{2}\}\) is bounded. This together with the last relation implies that \(\lim_{k\rightarrow\infty}h_{r}(t^{k})=0\). The inequality \(\|V(u^{k})-u^{k}\|\leq(2h_{r}(t^{k}))^{\frac{1}{2}}+\| u^{k}-At^{k}\|\) yields
 □
Theorem 3.9
Assume Condition A and Condition B are satisfied and let \(\{y^{k}\}\), \(\{t^{k}\}\), \(\{u^{k}\}\), and \(\{x^{k}\}\) be sequences generated by Algorithm 3.2. Then the sequences \(\{y^{k}\}\), \(\{t^{k}\}\) and \(\{ x^{k}\}\) converge strongly to a point \(p\in S\) and \(\{u^{k}\}\) converges strongly to the point \(Ap\in S_{2}\). Moreover, \(p=\lim_{k\rightarrow+\infty}P_{S}(x^{k})\).
Proof
With consideration of the definition of \(h_{r}(t^{k})\) the proof remains the same as for Theorem 3.4. □
For any \(\alpha > 0\) define \(h_{\alpha}(x)=\frac{1}{2}\| T^{g}_{\alpha}A(x)-A(x)\|^{2}\) for all \(x\in H_{1}\), and so \(\nabla h_{\alpha}(x)=A^{*}(T^{g}_{\alpha}A(x)-A(x))\). Setting \(T=\mathrm{Id}\) and \(V=\mathrm{Id}\), the FPSCSEP (1) is reduced to SEP. Hence, Algorithm 3.2 can be rewritten as follows:
Algorithm 3.2B
Initialization: Choose \(x^{0}\in C\). Take \(\{\rho_{k}\}\), \(\{\beta_{k}\}\), \(\{\epsilon_{k}\}\), \(\{r_{k}\}\), \(\{ \delta_{k}\}\) and \(\{\eta_{k}\}\) such that
Step 1: Find \(w^{k}\in H_{1}\) such that \(w^{k}\in \partial_{\epsilon_{k}}f(x^{k},\cdot)(x^{k})\).
Step 2: Evaluate \(y^{k}=P_{T_{k}}(x^{k}-\alpha _{k}w^{k})\) where \(\alpha_{k}=\frac{\beta_{k}}{\eta_{k}}\), \(\eta_{ k}:=\max\{\rho_{k},\|w^{k}\|\}\) and

$$T_{0}=C,\qquad T_{k}= \bigl\{ z\in H_{1}:\bigl\langle t^{k-1}+\mu_{k-1}\nabla h_{r}\bigl(t^{k-1}\bigr)-x^{k},z-x^{k}\bigr\rangle \leq0 \bigr\} \quad \mbox{for } k=1,2,3,\ldots. $$
Step 3: Evaluate \(t^{k}=\delta_{k}x^{k}+(1-\delta _{k})y^{k}\).
Step 4: Evaluate \(u^{k}=T_{r}^{g}(At^{k})\).
Step 5: Evaluate

$$x^{k+1}=P_{C}\bigl(t^{k}+\mu_{k}\nabla h_{r}\bigl(t^{k}\bigr)\bigr), $$

where

$$\mu_{k}= \textstyle\begin{cases} \frac{\eta_{k}h_{r}(t^{k})}{ \Vert \nabla h_{r}(t^{k}) \Vert ^{2}} & \mbox{if } \nabla h_{r}(t^{k})\neq0, \\ 0 & \mbox{otherwise}. \end{cases} $$
Step 6: Set \(k=k+1\) and go to Step 1.
The following corollary is an immediate consequence of Theorem 3.9.
Corollary 3.10
Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces and \(A: H_{1}\rightarrow{H_{2}}\) be a nonzero bounded linear operator. Let C be a nonempty closed convex subset of \(H_{1}\), D be a nonempty closed convex subset of \(H_{2}\), and \(f:C\times C\rightarrow{\mathbb{R}}\) and \(g:D\times D\rightarrow{\mathbb{R}}\) be bifunctions. Assume Condition A and Condition B are satisfied and let \(\{y^{k}\}\), \(\{ t^{k}\}\), \(\{u^{k}\}\), and \(\{x^{k}\}\) be sequences generated by Algorithm 3.2B. If \(S=\{x^{*}\in \operatorname{SEP}(f,C):Ax^{*}\in \operatorname{SEP}(g,D)\}\neq \emptyset\), then the sequences \(\{y^{k}\}\), \(\{t^{k}\}\) and \(\{x^{k}\}\) converge strongly to a point \(p\in S\) and \(\{u^{k}\}\) converges strongly to the point \(Ap\in \operatorname{SEP}(g,D)\).
4 Application and numerical result
In this section we present some applications and perform several numerical experiments to illustrate the computational performance of the proposed algorithms (Algorithm 3.1 and Algorithm 3.2) and to compare their convergence.
Let \(A:H_{1}\rightarrow H_{2}\) be a nonzero bounded linear operator, where \(H_{1}\) and \(H_{2}\) are two real Hilbert spaces, and let C and D be two nonempty closed convex subsets of \(H_{1}\) and \(H_{2}\), respectively. Let \(\psi:C\rightarrow\mathbb{R}\) and \(\phi:D\rightarrow\mathbb {R}\) be functions such that ψ and ϕ are convex and lower semicontinuous, and ψ is upper semicontinuous and ϵ-subdifferentiable at every point of C. Consider the following optimization problem: find \(x^{*}\in C\) with \(\psi(x^{*})\leq\psi(x)\) for all \(x\in C\) such that \(u^{*}=Ax^{*}\in D\) satisfies \(\phi(u^{*})\leq\phi(u)\) for all \(u\in D\). (33)
Set \(f(x,y)=\psi(y)-\psi(x)\) and \(g(u,v)=\phi(v)-\phi(u)\). Then f satisfies Condition B and g satisfies Condition A as a consequence of the conditions imposed on ψ and ϕ. Therefore, optimization problem (33) is an SEP, which is a particular case of FPSCSEP, and Algorithm 3.1B and Algorithm 3.2B solve (33).
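The reduction above rests on a simple equivalence: \(x^{*}\) minimizes ψ over C if and only if \(f(x^{*},y)=\psi(y)-\psi(x^{*})\geq 0\) for all \(y\in C\). A minimal sketch checking this numerically for the objective \(\psi(x)=2x+5\) on \(C=[1,\infty)\) used later in Example 4.1 (function names are illustrative assumptions):

```python
def make_bifunction(psi):
    """f(x, y) = psi(y) - psi(x): the equilibrium points of f over C
    are exactly the minimizers of psi over C."""
    return lambda x, y: psi(y) - psi(x)

psi = lambda x: 2 * x + 5          # objective on C = [1, inf)
f = make_bifunction(psi)

# x* = 1 is an equilibrium point: f(1, y) >= 0 for every sampled y in C
samples = [1 + 0.1 * i for i in range(100)]
print(all(f(1.0, y) >= 0 for y in samples))   # True
print(any(f(2.0, y) < 0 for y in samples))    # True: x = 2 is not one
```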
Let H be a real Hilbert space and let C be a nonempty closed convex subset of H. Let \(\psi:C\rightarrow\mathbb{R}\) and \(\phi:C\rightarrow\mathbb{R}\) be convex, lower semicontinuous, and upper semicontinuous functions that are ϵ-subdifferentiable at every point of C. Consider the following multi-objective optimization problem:
Therefore, multi-objective optimization problem (34) is an equilibrium problem, which is again a particular case of FPSCSEP. Next we consider a simple optimization problem and its numerical results as an application. The algorithms were coded in Matlab R2017a (9.2.0.556344) and run on a MacBook with a 1.1 GHz Intel Core m3 processor and 8 GB of 1867 MHz LPDDR3 memory.
Example 4.1
Consider the fixed point constrained optimization problem
where \(\mathbb{R}=H_{1}\), \(\mathbb{R}^{2}=H_{2}\), \(A:H_{1}\rightarrow H_{2}\) is given by \(A(x)=(-\frac{x}{2},\frac{x}{2})\), \(C=\{x\in\mathbb{R}:x\geq1\}\), \(D=\{(u_{1},u_{2})\in\mathbb{R}^{2}:u_{2}-u_{1}\geq 1\}\), \(\psi:C\rightarrow\mathbb{R}\) is given by \(\psi(x)=2x+5\), \(\phi:D\rightarrow\mathbb{R}\) is given by \(\phi(u)=\phi(u_{1},u_{2})=u_{2}-u_{1}\), and the nonexpansive mappings \(T:C\rightarrow C\) and \(V:D\rightarrow D\) are given by \(T(x)=\frac{x+1}{2}\) and \(V(u)=V(u_{1},u_{2})=(-u_{2},-u_{1})\).
Set \(f(x,y)=\psi(y)-\psi(x)=2y-2x\) and \(g(u,v)=\phi(v)-\phi (u)=(v_{2}-v_{1})-(u_{2}-u_{1})\).
It is easy to check that f and g satisfy Condition B and Condition A, respectively. It is also easy to see that \(A^{*}(u)=A^{*}(u_{1},u_{2})=-\frac{1}{2}u_{1}+\frac{1}{2}u_{2}\) and \(\|A\|=\frac{1}{\sqrt{2}}\). Hence, \(\operatorname{Fix}T=\{1\}\), \(\operatorname{SEP}(f,C)=\{1\}\), \(\operatorname{Fix}V=\{(u_{1},u_{2})\in D: u_{2}=-u_{1}\}\), and \(\operatorname{SEP}(g,D)=\{(u_{1},u_{2})\in D: u_{2}-u_{1}=1\}\). Therefore, \(\operatorname{SFPSCEP}(f,C,T)=\{1\}\) and \(\operatorname{SFPSCEP}(g,D,V)=\{(-\frac{1}{2},\frac{1}{2})\}\). Since \(A(1)=(-\frac{1}{2},\frac{1}{2})\), the solution set of this problem is the singleton \(S=\{p\}\) with \(p=1\).
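The adjoint formula and the operator norm of A admit a quick numerical sanity check. Note that \(\|Ax\|=\sqrt{\frac{x^{2}}{4}+\frac{x^{2}}{4}}=\frac{|x|}{\sqrt{2}}\), so the Euclidean operator norm is \(1/\sqrt{2}\). A small sketch (names are illustrative):

```python
import math
import random

A  = lambda x: (-x / 2.0, x / 2.0)        # A : R -> R^2
At = lambda u: (-u[0] + u[1]) / 2.0       # claimed adjoint A*

# Verify the adjoint identity <Ax, u> = <x, A*u> at random test points.
for _ in range(100):
    x = random.uniform(-10, 10)
    u = (random.uniform(-10, 10), random.uniform(-10, 10))
    Ax = A(x)
    lhs = Ax[0] * u[0] + Ax[1] * u[1]
    rhs = x * At(u)
    assert abs(lhs - rhs) < 1e-9

# Operator norm: ||A(1)|| = sqrt(1/4 + 1/4) = 1/sqrt(2).
norm_A = math.hypot(*A(1.0))
print(norm_A)   # ~0.7071
```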
Initialization for Algorithm 3.1: Take \(\rho_{k}=1\), \(\epsilon_{k}=0\), \(\mu_{k}=\frac{1}{2}\), \(r_{k}=\frac{1}{1000}\), \(\beta_{k}=\frac{\log(k+4)}{8k+16}\) and \(\delta_{k}=\frac {3^{k+1}+100}{100(3^{k+1})}\).
Initialization for Algorithm 3.2: Take \(\rho_{k}=1\), \(\epsilon_{k}=0\), \(\eta_{k}=1\), \(r_{k}=r=\frac{1}{1000}\), \(\beta_{k}=\frac{\log(k+4)}{8k+16}\) and \(\delta_{k}=\frac{3^{k+1}+100}{100(3^{k+1})}\).
Note that these choices of parameters satisfy the initialization conditions of the respective algorithms. Choose \(x^{0}\in C\), and let \(x^{k}\), \(w^{k}\), \(y^{k}\), \(t^{k}\), x, y be in \(\mathbb{R}\), and \(u^{k}=(u_{1}^{k},u_{2}^{k})\), \(v=(v_{1},v_{2})\) be in \(\mathbb{R}^{2}\). For this example Algorithm 3.1 reduces to the iteration
and Algorithm 3.2 reduces to the iteration
Using Matlab, we computed the numerical results of iterations (35) and (36) with the respective parameter sequences given above and the same initial point \(x^{0}=100\in C\).
Let \(\{z^{k}\}\) be a sequence in C and set \(D_{k}^{z^{k}}=D_{k}=\|z^{k}-p\|\). The convergence of the sequences \(\{D_{k}^{y^{k}}\}\), \(\{D_{k}^{t^{k}}\}\), and \(\{D_{k}^{x^{k}}\}\) to 0 implies that \(\{y^{k}\}\), \(\{t^{k}\}\), and \(\{x^{k}\}\) converge to the solution p of the problem. Hence, from Figures 1 and 2 we see that the sequences \(\{y^{k}\}\), \(\{t^{k}\}\), and \(\{x^{k}\}\) converge to 1, and from Figure 3 we see that \(\{u_{1}^{k}\}\) converges to \(-\frac{1}{2}\) and \(\{u_{2}^{k}\}\) converges to \(\frac{1}{2}\), so that \(\{u^{k}\}\) converges to \(A(1)=(-\frac{1}{2},\frac{1}{2})\). Moreover, for the control parameter values and initialization given above, iteration (36) converges to the solution faster than iteration (35).
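The error measure \(D_{k}=\|z^{k}-p\|\) used in the figures can be computed with a small helper that handles both the scalar iterates in \(H_{1}=\mathbb{R}\) and the pairs in \(H_{2}=\mathbb{R}^{2}\) (a sketch; the function name is an assumption):

```python
import math

def D(z_k, p):
    """Distance D_k = ||z^k - p|| in R or R^2."""
    if isinstance(z_k, tuple):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(z_k, p)))
    return abs(z_k - p)

# Initial error for x^0 = 100 with solution p = 1, and an error
# in R^2 against the image solution A(1) = (-1/2, 1/2).
print(D(100.0, 1.0))                 # 99.0
print(D((-0.4, 0.6), (-0.5, 0.5)))   # ~0.1414
```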
5 Conclusion
We have proposed two strongly convergent algorithms based on a projected subgradient-proximal method for solving the fixed point set constrained split equilibrium problem \(\operatorname{FPSCSEP}(f,C,T;g,D,V)\) in real Hilbert spaces, in which the bifunction f is pseudomonotone on C with respect to its solution set, the bifunction g is monotone on D, and T and V are nonexpansive mappings. The strong convergence of the iteration sequences generated by the algorithms to a solution of this problem is established. Finally, we have presented an application to optimization problems together with numerical results that analyze and compare the convergence speed of the algorithms on a particular example.
References
Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8(2), 221-239 (1994)
Censor, Y, Segal, A: The split common fixed point problem for directed operators. J. Convex Anal. 16(2), 587-600 (2009)
Censor, Y, Elfving, T, Kopf, N, Bortfeld, T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 21(6), 2071-2084 (2005)
Yukawa, M, Slavakis, K, Yamada, I: Multi-domain adaptive filtering by feasibility splitting. In: IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), pp. 3814-3817 (2010)
Chang, S-S, Lee, HWJ, Chan, CK: A new method for solving equilibrium problem fixed point problem and variational inequality problem with application to optimization. Nonlinear Anal., Theory Methods Appl. 70(9), 3307-3319 (2009)
Flåm, SD, Antipin, AS: Equilibrium programming using proximal-like algorithms. Math. Program. 78(1), 29-41 (1996)
Anh, PN, Muu, LD: A hybrid subgradient algorithm for nonexpansive mappings and equilibrium problems. Optim. Lett. 8(2), 727-738 (2014)
Tada, A, Takahashi, W: Weak and strong convergence theorems for a nonexpansive mapping and an equilibrium problem. J. Optim. Theory Appl. 133(3), 359-370 (2007)
Takahashi, S, Takahashi, W: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 331(1), 506-515 (2007)
Takahashi, S, Takahashi, W: Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space. Nonlinear Anal., Theory Methods Appl. 69(3), 1025-1033 (2008)
Kazmi, KR, Rizvi, SH: Iterative approximation of a common solution of a split equilibrium problem, a variational inequality problem and a fixed point problem. J. Egypt. Math. Soc. 21(1), 44-51 (2013)
Quoc, TD, Dung, ML, Nguyen, VH: Extragradient algorithms extended to equilibrium problems. Optimization 57(6), 749-776 (2008)
Dinh, BV, Muu, LD: A projection algorithm for solving pseudomonotone equilibrium problems and its application to a class of bilevel equilibria. Optimization 64(3), 559-575 (2015)
Hieu, DV: Two hybrid algorithms for solving split equilibrium problems. Int. J. Comput. Math. 95, 561-583 (2018)
Dinh, BV, Son, DX, Anh, TV: Extragradient algorithms for split equilibrium problem and nonexpansive mapping. arXiv preprint (2015). arXiv:1508.04914
Dinh, BV, Son, DX, Jiao, L, Kim, DS: Linesearch algorithms for split equilibrium problems and nonexpansive mappings. Fixed Point Theory Appl. 2016, 27 (2016)
Combettes, PL, Hirstoaga, SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6(1), 117-136 (2005)
Acknowledgements
This research was partially supported by Naresuan University.
Author information
Contributions
The authors contributed equally and significantly in writing this article. The authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Gebrie, A.G., Wangkeeree, R. Hybrid projected subgradient-proximal algorithms for solving split equilibrium problems and split common fixed point problems of nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl 2018, 5 (2018). https://doi.org/10.1186/s13663-018-0630-7