
Linesearch algorithms for split equilibrium problems and nonexpansive mappings

Abstract

In this paper, we first propose a weak convergence algorithm, called the linesearch algorithm, for solving a split equilibrium problem and nonexpansive mapping (SEPNM) in real Hilbert spaces, in which the first bifunction is pseudomonotone with respect to its solution set, the second bifunction is monotone, and the fixed point mappings are nonexpansive. In this algorithm, we combine the extragradient method, incorporating an Armijo-type linesearch rule, for solving equilibrium problems with the Mann method for finding a fixed point of a nonexpansive mapping. We then combine the proposed algorithm with the hybrid cutting technique to obtain a strong convergence algorithm for SEPNM. Special cases of these algorithms are also given.

1 Introduction

Throughout the paper, unless otherwise stated, we assume that \(\mathbb{H}_{1}\) and \(\mathbb{H}_{2}\) are real Hilbert spaces endowed with inner products and induced norms denoted by \(\langle\cdot, \cdot \rangle\) and \(\| \cdot\|\), respectively, whereas \(\mathbb{H}\) refers to any of these spaces. We write \(x^{k} \to x\) or \(x^{k} \rightharpoonup x\) if \(x^{k}\) converges strongly or weakly, respectively, to x as \(k \to \infty\). Let C, Q be nonempty closed convex subsets in \(\mathbb{H}_{1}\), \(\mathbb {H}_{2}\), respectively, and let \(A: \mathbb{H}_{1} \to \mathbb{H}_{2}\) be a bounded linear operator. The split feasibility problem (SFP) in the sense of Censor and Elfving [1] is to find \(x^{*} \in C\) such that \(Ax^{*} \in Q\). It turns out that the SFP provides a unified framework for the study of many significant real-world problems, such as signal processing, medical image reconstruction, and intensity-modulated radiation therapy; see, for example, [2–5]. To find a solution of the SFP in finite-dimensional Hilbert spaces, a basic scheme proposed by Byrne [6], called the CQ-algorithm, is defined as follows:

$$x^{k+1} = P_{C}\bigl(x^{k} + \gamma A^{T}(P_{Q} - I) Ax^{k}\bigr), $$

where I is the identity mapping, and \(P_{C}\) is the metric projection onto C. Xu [7] studied the SFP in the setting of infinite-dimensional Hilbert spaces. In this case, the CQ-algorithm becomes

$$x^{k+1} = P_{C}\bigl(x^{k} + \gamma A^{*}(P_{Q} - I) Ax^{k}\bigr), $$

where \(A^{*}\) is the adjoint operator of A.
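For illustration, here is a minimal numerical sketch of the CQ-algorithm in finite dimensions. It assumes, purely for simplicity, that C and Q are boxes (so that both projections reduce to coordinate-wise clipping); the matrix A, the box bounds, and the step size γ below are illustrative choices, not data taken from the cited works.

```python
import numpy as np

# Sketch of the CQ-algorithm x^{k+1} = P_C(x^k + gamma * A^T (P_Q - I) A x^k),
# assuming C and Q are boxes so that P_C and P_Q are coordinate-wise clipping.

def cq_algorithm(A, proj_C, proj_Q, x0, gamma=None, max_iter=500, tol=1e-10):
    if gamma is None:
        gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # a step size in (0, 2/||A||^2)
    x = x0.copy()
    for _ in range(max_iter):
        r = proj_Q(A @ x) - A @ x                 # (P_Q - I) A x^k
        x_new = proj_C(x + gamma * A.T @ r)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# Hypothetical data: C = [0, 1]^3, Q = [0, 0.5]^2
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
proj_C = lambda x: np.clip(x, 0.0, 1.0)
proj_Q = lambda y: np.clip(y, 0.0, 0.5)
x_star = cq_algorithm(A, proj_C, proj_Q, x0=np.ones(3))
print(x_star, A @ x_star)                          # x* in C with A x* approximately in Q
```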

The split feasibility problem in which C or Q is the fixed point set of a mapping, or the set of common fixed points of mappings and solutions of variational inequality problems, was considered in several recent research papers; see, for instance, [8–15]. Recently, Moudafi [16] (see also [17–20]) considered the split equilibrium problem (SEP), stated as follows:

Let \(f : C \times C \to\mathbb{R}\), \(g : Q \times Q \to\mathbb{R}\) be equilibrium bifunctions, that is, \(f(x, x) = g(u, u) = 0\) for all \(x \in C\) and \(u \in Q\). The split equilibrium problem takes the form

$$ \text{Find } x^{*} \in C \text{ such that } x^{*} \in\operatorname{Sol}(C, f) \text{ and }Ax^{*} \in\operatorname{Sol}(Q, g), $$

where \(\operatorname{Sol}(C, f)\) is the solution set of the following equilibrium problem (\(\operatorname{EP}(C, f)\)):

$$ \text{Find } \bar{x} \in C \text{ such that } f(\bar{x}, y) \geq0, \forall y \in C, $$

and \(\operatorname{Sol}(Q, g)\) is the solution set of the equilibrium problem \(\operatorname{EP}(Q, g)\). See [21, 22] for more detail on equilibrium problems.
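The equilibrium problem format is quite flexible. For instance, taking \(f(x, y) = \langle F(x), y - x\rangle\) for a mapping \(F: C \to\mathbb{H}_{1}\), \(\operatorname{EP}(C, f)\) becomes the variational inequality

$$ \text{Find } \bar{x} \in C \text{ such that } \bigl\langle F(\bar{x}), y - \bar{x}\bigr\rangle \geq0, \quad \forall y \in C, $$

whereas taking \(f(x, y) = h(y) - h(x)\) for a convex function h recovers the problem of minimizing h over C.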

For obtaining a solution of SEP, He [23] introduced an iterative method, which generates a sequence \(\{x^{k}\}\) by

$$ \textstyle\begin{cases} x^{0} \in C, \{r_{k}\} \subset(0, +\infty), \quad \mu> 0, \\ f(y^{k}, y) + \frac{1}{r_{k}} \langle y - y^{k}, y^{k} - x^{k} \rangle\geq0,\quad \forall y \in C, \\ g(u^{k}, v) + \frac{1}{r_{k}} \langle v - u^{k}, u^{k} - Ay^{k} \rangle\geq0,\quad \forall v \in Q, \\ x^{k+1} = P_{C}(y^{k} + \mu A^{*}(u^{k} - Ay^{k})), \quad \forall k \geq0. \end{cases} $$

Under certain conditions on the bifunctions and parameters, the author shows that \(\{x^{k}\}\) and \(\{y^{k}\}\) converge weakly to a solution of SEP, provided that f and g are monotone on C and Q, respectively.
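In the resolvent notation introduced in Lemma 7 below (applied to g on Q, and analogously to the monotone bifunction f on C), one iteration of this method can be written compactly as

$$ y^{k} = T_{r_{k}}^{f}\bigl(x^{k}\bigr), \qquad u^{k} = T_{r_{k}}^{g}\bigl(Ay^{k}\bigr), \qquad x^{k+1} = P_{C}\bigl(y^{k} + \mu A^{*}\bigl(u^{k} - Ay^{k}\bigr)\bigr), $$

so each iteration consists of two resolvent evaluations and one projection step.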

On the other hand, many researchers have proposed numerical algorithms for finding a common element of the set of solutions of monotone equilibrium problems and the set of fixed points of nonexpansive mappings; see, for example, [24–26] and the references therein.

This paper focuses mainly on a split equilibrium problem and nonexpansive mapping involving pseudomonotone and monotone equilibrium bifunctions in real Hilbert spaces. In detail, let \(f : C \times C \to \mathbb{R}\) be a pseudomonotone bifunction with respect to its solution set, \(g : Q \times Q \to\mathbb{R}\) be a monotone bifunction, and \(S: C \to C\) and \(T: Q \to Q\) be nonexpansive mappings. The problem considered in this paper can be stated as follows (\(\operatorname{SEPNM}(C, Q, A, f, g, S, T)\) or SEPNM for short):

$$ \text{Find } x^{*} \in C \text{ such that } x^{*} \in\operatorname {Sol}(C, f) \cap\operatorname{Fix}(S) \text{ and } Ax^{*} \in\operatorname{Sol}(Q, g) \cap\operatorname{Fix}(T), $$

where \(\operatorname{Fix}(S)\) and \(\operatorname{Fix}(T)\) are the fixed point sets of the mappings S and T, respectively.

It should be noticed that, under the monotonicity assumption on f and g, the solution sets \(\operatorname{Sol}(C, f)\) and \(\operatorname{Sol}(Q, g)\) of the equilibrium problems \(\operatorname{EP}(C, f)\) and \(\operatorname{EP}(Q, g)\) are closed convex sets whenever f and g are lower semicontinuous and convex with respect to the second variable. In addition, the nonexpansiveness of S and T implies that \(\operatorname{Fix}(S)\) and \(\operatorname{Fix}(T)\) are closed convex sets. Hence, \(\operatorname{Sol}(C, f) \cap\operatorname{Fix}(S)\) and \(\operatorname {Sol}(Q, g) \cap\operatorname{Fix}(T)\) are closed convex sets. However, the main difficulty is that, even though these sets are convex, they are not given explicitly as in a standard mathematical programming problem; therefore the projections onto them cannot be computed, and consequently, available methods (see, e.g., [2, 27, 28] and the references therein) cannot be applied to SEPNM directly.

In this paper, we first propose a weak convergence algorithm for solving SEPNM by combining the extragradient method with an Armijo-type linesearch rule for an equilibrium problem [29] (see also [30–32] for more detail on extragradient algorithms) and the Mann method [33] (see also [34, 35]) for a fixed point problem. We then combine this algorithm with the hybrid cutting technique [36] (see also [37]) to obtain a strong convergence algorithm for SEPNM.

The paper is organized as follows. The next section presents some preliminary results. A weak convergence algorithm and its special case are presented in Section 3. In the last section, we combine the method presented in Section 3 with the hybrid projection method for obtaining a strong convergence algorithm for SEPNM.

2 Preliminaries

Let \(\mathbb{H}\) be a real Hilbert space, and C a nonempty closed convex subset of \(\mathbb{H}\). By \(P_{C}\) we denote the metric projection operator onto C, that is,

$$ P_{C}(x) \in C\mbox{:}\quad \bigl\Vert x - P_{C}(x)\bigr\Vert \leq \Vert x - y \Vert , \quad \forall y \in C. $$
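Although \(P_{C}\) has no closed form for a general closed convex set, it does for several simple sets that are convenient in computations. The following helpers (in a hypothetical finite-dimensional setting) are the ones assumed in the algorithm sketches later in the paper.

```python
import numpy as np

# Closed-form metric projections onto three simple convex sets in R^n.

def proj_box(x, lo, hi):
    """P_C for the box C = {x : lo <= x <= hi} (coordinate-wise clipping)."""
    return np.clip(x, lo, hi)

def proj_ball(x, center, radius):
    """P_C for the closed ball C = B(center, radius)."""
    d = x - center
    dist = np.linalg.norm(d)
    return x.copy() if dist <= radius else center + radius * d / dist

def proj_halfspace(x, a, b):
    """P_C for the halfspace C = {x : <a, x> <= b}."""
    violation = a @ x - b
    return x.copy() if violation <= 0 else x - (violation / (a @ a)) * a
```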

The following well-known results will be used in the sequel.

Lemma 1

Suppose that C is a nonempty closed convex subset in \(\mathbb{H}\). Then \(P_{C}\) has the following properties:

  1. (a)

    \(P_{C}(x)\) is well defined and single-valued for every \(x \in\mathbb{H}\);

  2. (b)

    \(z=P_{C}(x)\) if and only if \(\langle x-z, y-z\rangle\leq0\), \(\forall y\in C\);

  3. (c)

    \(\Vert P_{C}(x)-P_{C}(y) \Vert^{2} \leq\langle P_{C}(x) - P_{C}(y), x-y \rangle\), \(\forall x, y \in\mathbb{H}\);

  4. (d)

    \(\Vert P_{C}(x)-P_{C}(y)\Vert^{2} \leq\Vert x-y \Vert^{2} - \Vert x-P_{C}(x) - y + P_{C}(y) \Vert^{2}\), \(\forall x, y \in\mathbb{H}\).

Lemma 2

Let \(\mathbb{H}\) be a real Hilbert space. Then, for all \(x, y\in\mathbb {H}\) and \(\alpha\in[0, 1]\), we have

$$\bigl\Vert \alpha x + (1-\alpha)y \bigr\Vert ^{2} = \alpha \Vert x \Vert ^{2}+ (1-\alpha ) \Vert y \Vert ^{2}-\alpha(1- \alpha)\Vert x-y\Vert ^{2}. $$

Lemma 3

(Opial’s condition)

For any sequence \(\{x^{k}\}\subset\mathbb{H}\) with \(x^{k}\rightharpoonup x\), we have the inequality

$$\liminf_{k\rightarrow +\infty}\bigl\Vert x^{k}-x \bigr\Vert < \liminf_{k \rightarrow +\infty} \bigl\Vert x^{k}-y \bigr\Vert $$

for all \(y\in\mathbb{H}\) such that \(y \neq x\).

Definition 1

We say that an operator \(T: \mathbb{H} \to\mathbb{H}\) is demiclosed at 0 if, for any sequence \(\{x^{k}\}\) such that \(x^{k} \rightharpoonup x\) and \(Tx^{k} \to0\) as \(k \to\infty\), we have \(Tx = 0\).

It is well known that, for a nonexpansive operator \(T: \mathbb{H} \to \mathbb{H}\), the operator \(I - T\) is demiclosed at 0; see [38], Lemma 2.

Now, we assume that the equilibrium bifunctions \(g : Q \times Q \to\mathbb {R}\) and \(f: C \times C \to\mathbb{R}\) satisfy the following assumptions, respectively.

Assumption A

(A1):

g is monotone on Q, that is, \(g(u, v) + g(v, u) \leq 0\) for all \(u, v \in Q\);

(A2):

\(g(u, \cdot)\) is convex and lower semicontinuous on Q for each \(u \in Q\);

(A3):

for all \(u, v, w \in Q\),

$$\limsup_{\lambda\downarrow0}g\bigl(\lambda w+(1-\lambda)u, v\bigr)\leq g(u, v). $$

Assumption B

(B1):

f is pseudomonotone on C, that is, for all \(x, y\in C\), \(f(x, y)\geq 0\) implies \(f(y, x)\leq0\);

(B2):

\(f(x, \cdot)\) is convex and subdifferentiable on C for all \(x\in C\);

(B3):

f is jointly weakly continuous on \(C\times C\) in the sense that, if \(x, y\in C\) and \(\{x^{k}\}, \{y^{k}\}\subset C\) converge weakly to x and y, respectively, then \(f(x^{k}, y^{k}) \to f(x, y)\) as \(k \to+\infty\).

Let φ be an equilibrium bifunction defined on \(C \times C\). For \(x, y \in C\), we denote by \(\partial_{2}\varphi(x, y)\) the subdifferential of the convex function \(\varphi(x, \cdot)\) at y, that is,

$$\partial_{2} \varphi(x, y) := \bigl\{ \hat{\xi}\in\mathbb{H} : \varphi(x, z) \geq\varphi(x, y) + \langle\hat{\xi}, z - y \rangle, \forall z \in C \bigr\} . $$

In particular,

$$\partial_{2} \varphi(x , x) = \bigl\{ \hat{\xi}\in\mathbb{H} : \varphi(x, z) \geq \langle\hat{\xi}, z - x \rangle, \forall z \in C \bigr\} . $$
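For example, if \(\varphi(x, y) = \langle F(x), y - x \rangle\) for some mapping \(F: C \to\mathbb{H}\), then \(\varphi(x, \cdot)\) is affine, and one may always take

$$ \hat{\xi}= F(x) \in\partial_{2}\varphi(x, y), \quad \forall x, y \in C; $$

this is the choice of subgradient used in the illustrative sketch of Algorithm 1 in Section 3.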

Let Δ be an open convex set containing C. The next lemma can be considered as an infinite-dimensional version of Theorem 24.5 in [39].

Lemma 4

([40], Proposition 4.3)

Let \(\varphi: \varDelta \times \varDelta \to\mathbb{R}\) be an equilibrium bifunction satisfying conditions (A1) on Δ and (A2) on C. Let \(\bar{x}, \bar{y} \in \varDelta \), and let \(\{x^{k}\}\), \(\{y^{k}\}\) be two sequences in Δ converging weakly to x̄, ȳ, respectively. Then, for any \(\varepsilon > 0\), there exist \(\eta>0\) and \(k_{\varepsilon } \in\mathbb{N}\) such that

$$\partial_{2} \varphi\bigl(x^{k}, y^{k}\bigr) \subset\partial_{2}\varphi(\bar{x}, \bar {y}) + \frac{\varepsilon }{\eta}B $$

for every \(k \geq k_{\varepsilon }\), where B denotes the closed unit ball in \(\mathbb{H}\).

Lemma 5

Let the equilibrium bifunction φ satisfy assumptions (A1) on Δ and (A2) on C, and \(\{x^{k} \} \subset C \), \(0 < \underline{\rho} \leq\bar{\rho} \), \(\{\rho_{k}\} \subset[\underline{\rho} , \bar{\rho}] \). Consider the sequence \(\{ y^{k}\}\) defined as

$$y^{k} = \arg\min \biggl\{ \varphi\bigl(x^{k}, y\bigr) + \frac{1}{2\rho_{k}} \bigl\Vert y-x^{k}\bigr\Vert ^{2}: y\in C \biggr\} . $$

If \(\{x^{k} \}\) is bounded, then \(\{y^{k}\}\) is also bounded.

Proof

First, we show that if \(\{x^{k} \}\) converges weakly to \(x^{*}\), then \(\{ y^{k}\}\) is bounded. Indeed,

$$y^{k} = \operatorname{arg}\min \biggl\{ \varphi\bigl(x^{k}, y\bigr) + \frac{1}{2\rho_{k}} \bigl\Vert y-x^{k}\bigr\Vert ^{2}: y\in C \biggr\} $$

and

$$\varphi\bigl(x^{k}, x^{k}\bigr) + \frac{1}{2\rho_{k}} \bigl\Vert x^{k}-x^{k}\bigr\Vert ^{2} = 0. $$

Therefore,

$$\varphi\bigl(x^{k}, y^{k}\bigr) + \frac{1}{2\rho_{k}} \bigl\Vert y^{k}-x^{k}\bigr\Vert ^{2} \leq0,\quad \forall k. $$

In addition, for all \(\hat{\xi}^{k} \in\partial_{2}\varphi(x^{k}, x^{k})\), we have

$$\varphi\bigl(x^{k}, y^{k}\bigr) + \frac{1}{2\rho_{k}} \bigl\Vert y^{k}-x^{k}\bigr\Vert ^{2} \geq\bigl\langle \hat{\xi}^{k}, y^{k} - x^{k} \bigr\rangle + \frac{1}{2\rho_{k}} \bigl\Vert y^{k} - x^{k}\bigr\Vert ^{2}. $$

This implies

$$-\bigl\Vert \hat{\xi}^{k}\bigr\Vert \bigl\Vert y^{k} - x^{k} \bigr\Vert + \frac{1}{2\rho_{k}} \bigl\Vert y^{k} - x^{k}\bigr\Vert ^{2} \leq0. $$

Hence,

$$\bigl\Vert y^{k} - x^{k}\bigr\Vert \leq2 \rho_{k} \bigl\Vert \hat{\xi}^{k}\bigr\Vert , \quad \forall k. $$

Since \(\{x^{k}\}\) converges weakly to \(x^{*}\) and \(\hat{\xi}^{k} \in\partial_{2}\varphi(x^{k}, x^{k})\), Lemma 4 implies that the sequence \(\{\hat{\xi}^{k}\}\) is bounded. Combining this with the boundedness of \(\{\rho_{k}\}\), we conclude from the last inequality that \(\{y^{k}\}\) is also bounded.

Now we prove the general case. Suppose that \(\{y^{k}\}\) is unbounded, that is, there exists a subsequence \(\{y^{k_{i}}\} \subseteq \{y^{k}\}\) such that \(\lim_{i \to\infty}\|y^{k_{i}}\| = + \infty\). Since \(\{x^{k}\}\) is bounded, so is \(\{x^{k_{i}}\}\), and without loss of generality we may assume that \(\{x^{k_{i}}\}\) converges weakly to some \(x^{*}\). By the same argument as before, \(\{y^{k_{i}}\}\) is bounded, a contradiction. Therefore, \(\{y^{k}\}\) is bounded. □

The following lemmas are well known in the theory of monotone equilibrium problems.

Lemma 6

([21])

Let g satisfy Assumption  A. Then, for all \(\alpha>0\) and \(u \in \mathbb{H}\), there exists \(w \in Q\) such that

$$g(w, v)+ \frac{1}{\alpha}\langle v-w, w-u\rangle\geq0,\quad \forall v \in Q. $$

Lemma 7

([41])

Under the assumptions of Lemma  6, the mapping \(T_{\alpha}^{g}\) defined on \(\mathbb{H}\) as

$$ T_{\alpha}^{g}(u)= \biggl\{ w\in Q: g(w, v)+ \frac{1}{\alpha}\langle v-w, w-u\rangle\geq0, \forall v\in Q \biggr\} $$

has the following properties:

  1. (i)

    \(T_{\alpha}^{g}\) is single-valued;

  2. (ii)

    \(T_{\alpha}^{g}\) is firmly nonexpansive, that is, for any \(u, v\in\mathbb{H}\),

    $$\bigl\Vert T_{\alpha}^{g}(u)-T_{\alpha}^{g}(v) \bigr\Vert ^{2}\leq\bigl\langle T_{\alpha}^{g}(u)-T_{\alpha}^{g}(v), u-v\bigr\rangle ; $$
  3. (iii)

    \(\operatorname{Fix}(T_{\alpha}^{g}) = \operatorname{Sol}(Q, g)\);

  4. (iv)

    \(\operatorname{Sol}(Q, g)\) is closed and convex.

Lemma 8

([23])

Under the assumptions of Lemma  7, for \(\alpha, \beta>0\) and \(u, v\in\mathbb{H}\), we have

$$\bigl\Vert T_{\alpha}^{g}(u)-T_{\beta}^{g}(v) \bigr\Vert \leq \Vert v-u\Vert + \frac{|\beta -\alpha|}{\beta}\bigl\Vert T_{\beta}^{g}(v)-v\bigr\Vert . $$
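To make the resolvent \(T_{\alpha}^{g}\) more concrete, consider the special case \(g(u, v) = h(v) - h(u)\) with h convex and lower semicontinuous on Q (such a g satisfies Assumption A). Then the defining inequality of \(T_{\alpha}^{g}(u)\) is exactly the optimality condition of a strongly convex program, so

$$ T_{\alpha}^{g}(u) = \arg\min \biggl\{ h(v) + \frac{1}{2\alpha}\Vert v - u\Vert ^{2}: v \in Q \biggr\} , $$

that is, \(T_{\alpha}^{g}\) is the proximal mapping of αh over Q. In particular, for \(g \equiv0\) we simply have \(T_{\alpha}^{g} = P_{Q}\), and for the illustrative choice \(h(v) = \frac{1}{2}\Vert v - b\Vert ^{2}\) with Q a box, \(T_{\alpha}^{g}(u)\) is the coordinate-wise clipping of \((u + \alpha b)/(1 + \alpha)\) onto Q.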

3 A weak convergence algorithm

Algorithm 1

  • Initialization. Pick \(x^{0} \in C\) and choose the parameters \(\beta, \eta, \theta\in(0, 1)\), \(0 < \underline{\rho} \leq\bar{\rho}\), \(\{ \rho_{k} \} \subset[\underline{\rho}, \bar{\rho }]\), \(0 < \underline{\gamma} \leq\bar{\gamma} < 2\), \(\{ \gamma_{k} \} \subset[\underline{\gamma}, \bar{\gamma} ] \), \(0 < \alpha\), \(\{\alpha _{k}\} \subset [\alpha, +\infty)\), \(\mu\in(0, \frac{1}{\|A\|^{2}})\).

  • Iteration k (\(k = 0, 1, 2, \ldots \)). Having \(x^{k}\), do the following steps:

    1. Step 1.

      Solve the strongly convex program

      $$CP\bigl(x^{k}\bigr)\quad \min \biggl\{ f\bigl(x^{k}, y\bigr) + \frac{1}{2\rho_{k}} \bigl\Vert y-x^{k}\bigr\Vert ^{2}: y \in C \biggr\} $$

      to obtain its unique solution \(y^{k}\).

      If \(y^{k} = x^{k}\), then set \(u^{k} = x^{k}\) and go to Step 4. Otherwise, go to Step 2.

    2. Step 2.

      (Armijo linesearch rule) Find \(m_{k}\) as the smallest positive integer m such that

      $$ \textstyle\begin{cases} z^{k,m} = (1 - \eta^{m})x^{k} + \eta^{m} y^{k}, \\ f(z^{k,m}, x^{k}) - f(z^{k,m}, y^{k}) \geq\frac{\theta}{2 \rho_{k}}\|x^{k} - y^{k}\|^{2}. \end{cases} $$
      (3.1)

      Set \(\eta_{k} = \eta^{m_{k}}\), \(z^{k} = z^{k, m_{k}}\).

    3. Step 3.

      Select \(\xi^{k} \in\partial_{2}f(z^{k}, x^{k})\) and compute \(\sigma_{k} = \frac{f(z^{k}, x^{k})}{\|\xi^{k}\|^{2}}\), \(u^{k} = P_{C}(x^{k} - \gamma_{k}\sigma_{k}\xi^{k})\).

    4. Step 4.
      $$ \textstyle\begin{cases} v^{k}=(1-\beta)u^{k}+\beta Su^{k}, \\ w^{k}=T_{\alpha_{k}}^{g}Av^{k}. \end{cases} $$
    5. Step 5.

      Take \(x^{k+1}=P_{C}(v^{k}+\mu A^{*}(Tw^{k}-Av^{k}))\) and go to iteration k with k replaced by \(k+1\).
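To illustrate the flow of Algorithm 1, the following is a minimal numerical sketch under simplifying assumptions that are not part of the algorithm itself: \(\mathbb{H}_{1}\) and \(\mathbb{H}_{2}\) are finite-dimensional, \(f(x, y) = \langle F(x), y - x\rangle\) (so that \(CP(x^{k})\) reduces to a projection and \(F(z^{k})\) is an admissible choice of \(\xi^{k}\)), and \(g \equiv0\) (so that \(T_{\alpha_{k}}^{g} = P_{Q}\), as noted in Section 2). All function and parameter names below are ours, and the nonexpansive mappings S and T are supplied by the user (e.g., projections).

```python
import numpy as np

# Sketch of Algorithm 1 under illustrative assumptions (see the text above):
#   f(x, y) = <F(x), y - x>  and  g == 0, so that
#   CP(x^k) = P_C(x^k - rho * F(x^k))  and  T_{alpha_k}^g = P_Q.

def algorithm1(F, A, proj_C, proj_Q, S, T, x0,
               rho=1.0, eta=0.5, theta=0.5, beta=0.5, gamma=1.0,
               mu=None, max_iter=200, tol=1e-9):
    if mu is None:
        mu = 0.5 / np.linalg.norm(A, 2) ** 2          # mu in (0, 1/||A||^2)
    x = x0.copy()
    for _ in range(max_iter):
        # Step 1: unique solution of CP(x^k) for this bilinear f
        y = proj_C(x - rho * F(x))
        if np.linalg.norm(y - x) <= tol:
            u = x                                      # y^k = x^k: skip Steps 2-3
        else:
            # Step 2: Armijo linesearch along the segment [x^k, y^k]
            m, z = 1, None
            while m <= 60:                             # cap only as a numerical safeguard
                z = (1 - eta ** m) * x + eta ** m * y
                # f(z, x) - f(z, y) = <F(z), x - y> for this f
                if F(z) @ (x - y) >= theta / (2 * rho) * np.linalg.norm(x - y) ** 2:
                    break
                m += 1
            # Step 3: xi^k = F(z^k), sigma_k = f(z^k, x^k) / ||xi^k||^2
            xi = F(z)
            sigma = (xi @ (x - z)) / (xi @ xi)
            u = proj_C(x - gamma * sigma * xi)
        # Step 4: Mann step for S, then the resolvent of g (here simply P_Q)
        v = (1 - beta) * u + beta * S(u)
        w = proj_Q(A @ v)
        # Step 5
        x = proj_C(v + mu * A.T @ (T(w) - A @ v))
    return x, u, v, w
```

For instance, one may take \(F(x) = Mx + q\) with M positive semidefinite (a monotone, hence pseudomonotone, affine operator), C and Q boxes with the projections sketched in Section 2, and \(S = P_{C}\), \(T = P_{Q}\), which are nonexpansive.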

Lemma 9

Suppose that \(p \in\operatorname{Sol}(C, f)\), \(f(x, \cdot)\) is convex and subdifferentiable on C for all \(x \in C\) and that f is pseudomonotone on C. Then, we have:

  1. (a)

    The Armijo linesearch rule (3.1) is well defined;

  2. (b)

    \(f(z^{k}, x^{k}) > 0\);

  3. (c)

    \(0 \notin\partial_{2}f(z^{k}, x^{k})\);

  4. (d)
    $$ \bigl\Vert u^{k} - p\bigr\Vert ^{2} \leq\bigl\Vert x^{k} - p\bigr\Vert ^{2} - \gamma_{k}( 2 - \gamma_{k}) \bigl(\sigma_{k}\bigl\Vert \xi ^{k} \bigr\Vert \bigr)^{2}. $$

Proof

The proof of Lemma 9 in the case where \(\mathbb{H}_{1}\) is finite-dimensional can be found, for example, in [29]. The infinite-dimensional case follows by the same arguments, so we omit the proof. □

Theorem 1

Let C and Q be two nonempty closed convex subsets in \(\mathbb{H}_{1}\) and \(\mathbb{H}_{2}\), respectively. Let \(S: C \to C\) and \(T: Q \to Q\) be nonexpansive mappings, and let the bifunctions g and f satisfy Assumptions A and B, respectively. Let \(A: \mathbb{H}_{1} \to\mathbb {H}_{2}\) be a bounded linear operator with adjoint \(A^{*}\). If \(\varOmega = \{x^{*} \in\operatorname{Sol}(C, f) \cap\operatorname{Fix}(S): Ax^{*} \in\operatorname{Sol}(Q, g) \cap\operatorname{Fix}(T) \} \neq \emptyset\), then the sequences \(\{x^{k}\}\), \(\{u^{k}\}\), \(\{v^{k}\}\) converge weakly to an element \(p \in \varOmega \), and \(\{w^{k}\}\) converges weakly to \(Ap \in\operatorname{Sol}(Q, g) \cap\operatorname{Fix}(T)\).

Proof

Let \(x^{*} \in \varOmega \). Then \(x^{*} \in\operatorname{Sol}(C, f) \cap \operatorname{Fix}(S)\) and \(Ax^{*} \in\operatorname{Sol}(Q, g) \cap \operatorname{Fix}(T)\).

From Lemma 9(d) we have

$$\begin{aligned} \bigl\Vert u^{k}-x^{*}\bigr\Vert ^{2} & \leq\bigl\Vert x^{k}-x^{*}\bigr\Vert ^{2} - \gamma_{k}( 2 - \gamma _{k}) \bigl(\sigma_{k}\bigl\| \xi^{k}\bigr\| \bigr)^{2} \\ & \leq\bigl\Vert x^{k}-x^{*}\bigr\Vert ^{2}. \end{aligned}$$

By Step 4 we get

$$\begin{aligned} \bigl\Vert v^{k}-x^{*}\bigr\Vert &=\bigl\Vert (1- \beta)u^{k}+\beta Su^{k}-x^{*}\bigr\Vert \\ &=\bigl\Vert (1-\beta) \bigl(u^{k}-x^{*}\bigr)+\beta \bigl(Su^{k}-Sx^{*}\bigr)\bigr\Vert \\ &\leq(1-\beta)\bigl\Vert u^{k}-x^{*}\bigr\Vert +\beta\bigl\Vert Su^{k}-Sx^{*}\bigr\Vert \\ &\leq(1-\beta)\bigl\Vert u^{k}-x^{*}\bigr\Vert +\beta\bigl\Vert u^{k}-x^{*}\bigr\Vert \\ &=\bigl\Vert u^{k}-x^{*}\bigr\Vert . \end{aligned}$$

Thus,

$$ \bigl\Vert v^{k}-x^{*} \bigr\Vert \leq\bigl\Vert u^{k}-x^{*} \bigr\Vert \leq\bigl\Vert x^{k}-x^{*}\bigr\Vert . $$
(3.2)

Assertions (iii) and (ii) in Lemma 7 imply that

$$\begin{aligned} \bigl\Vert T_{\alpha_{k}}^{g}Av^{k}-Ax^{*}\bigr\Vert ^{2}& = \bigl\Vert T_{\alpha_{k}}^{g}Av^{k} - T_{\alpha_{k}}^{g}Ax^{*}\bigr\Vert ^{2} \\ & \leq\bigl\langle T_{\alpha_{k}}^{g}Av^{k} - T_{\alpha_{k}}^{g}Ax^{*}, Av^{k}-Ax^{*}\bigr\rangle \\ & = \bigl\langle T_{\alpha_{k}}^{g}Av^{k} - Ax^{*}, Av^{k} - Ax^{*} \bigr\rangle \\ & = \frac{1}{2} \bigl[\bigl\Vert T_{\alpha_{k}}^{g}Av^{k} - Ax^{*} \bigr\Vert ^{2} + \bigl\Vert Av^{k} - Ax^{*}\bigr\Vert ^{2} -\bigl\Vert T_{\alpha_{k}}^{g}Av^{k} - Av^{k}\bigr\Vert ^{2} \bigr]. \end{aligned}$$

Hence,

$$\bigl\Vert T_{\alpha_{k}}^{g}Av^{k} - Ax^{*} \bigr\Vert ^{2} \leq\bigl\Vert Av^{k}-Ax^{*} \bigr\Vert ^{2} - \bigl\Vert T_{\alpha_{k}}^{g}Av^{k} - Av^{k} \bigr\Vert ^{2}. $$

Because of the nonexpansiveness of the mapping T, we obtain from the last inequality that

$$\begin{aligned} \bigl\Vert Tw^{k} - Ax^{*} \bigr\Vert ^{2} & = \bigl\Vert TT_{\alpha_{k}}^{g}Av^{k}-TAx^{*}\bigr\Vert ^{2} \\ &\leq\bigl\Vert T_{\alpha_{k}}^{g}Av^{k} - Ax^{*}\bigr\Vert ^{2} \\ &\leq\bigl\Vert Av^{k}-Ax^{*}\bigr\Vert ^{2}-\bigl\Vert T_{\alpha_{k}}^{g}Av^{k} - Av^{k}\bigr\Vert ^{2}. \end{aligned}$$
(3.3)

Using (3.3), we have

$$\begin{aligned} \bigl\langle A\bigl(v^{k}-x^{*}\bigr), Tw^{k} - Av^{k}\bigr\rangle = &\bigl\langle A\bigl(v^{k}-x^{*} \bigr)+Tw^{k} - Av^{k} - \bigl(Tw^{k}-Av^{k} \bigr), Tw^{k}-Av^{k}\bigr\rangle \\ =& \bigl\langle Tw^{k}-Ax^{*}, Tw^{k}-Av^{k} \bigr\rangle - \bigl\Vert Tw^{k} - Av^{k}\bigr\Vert ^{2} \\ =& \frac{1}{2} \bigl[\bigl\Vert Tw^{k}-Ax^{*}\bigr\Vert ^{2}+\bigl\Vert Tw^{k}-Av^{k}\bigr\Vert ^{2}-\bigl\Vert Av^{k}-Ax^{*}\bigr\Vert ^{2} \bigr] \\ &{}-\bigl\Vert Tw^{k}-Av^{k}\bigr\Vert ^{2} \\ = &\frac{1}{2} \bigl[\bigl(\bigl\Vert Tw^{k}-Ax^{*}\bigr\Vert ^{2}-\bigl\Vert Av^{k}-Ax^{*}\bigr\Vert ^{2}\bigr)-\bigl\Vert Tw^{k}-Av^{k}\bigr\Vert ^{2} \bigr] \\ \leq&- \frac{1}{2}\bigl\Vert T_{\alpha_{k}}^{g}Av^{k} - Av^{k}\bigr\Vert ^{2}- \frac {1}{2}\bigl\Vert Tw^{k}-Av^{k}\bigr\Vert ^{2}. \end{aligned}$$
(3.4)

By the definition of \(x^{k+1}\) we have

$$\begin{aligned} \bigl\Vert x^{k+1}-x^{*}\bigr\Vert ^{2} & = \bigl\Vert P_{C}\bigl(v^{k}+\mu A^{*}\bigl(Tw^{k}-Av^{k} \bigr)\bigr)-P_{C}\bigl(x^{*}\bigr)\bigr\Vert ^{2} \\ & \leq\bigl\Vert \bigl(v^{k}-x^{*}\bigr) + \mu A^{*} \bigl(Tw^{k}-Av^{k}\bigr)\bigr\Vert ^{2} \\ & = \bigl\Vert v^{k}-x^{*}\bigr\Vert ^{2} + \bigl\Vert \mu A^{*}\bigl(Tw^{k}-Av^{k}\bigr)\bigr\Vert ^{2} + 2\mu \bigl\langle v^{k}-x^{*}, A^{*}\bigl(Tw^{k}-Av^{k} \bigr)\bigr\rangle \\ & \leq\bigl\Vert v^{k}-x^{*}\bigr\Vert ^{2}+ \mu^{2}\bigl\Vert A^{*}\bigr\Vert ^{2}\bigl\Vert Tw^{k}-Av^{k}\bigr\Vert ^{2}+2\mu\bigl\langle A \bigl(v^{k}-x^{*}\bigr), Tw^{k}-Av^{k}\bigr\rangle . \end{aligned}$$

In combination with (3.4) and (3.2), the last inequality becomes

$$\begin{aligned} \bigl\Vert x^{k+1}-x^{*}\bigr\Vert ^{2} \leq& \bigl\Vert v^{k}-x^{*}\bigr\Vert ^{2} + \mu^{2} \bigl\Vert A^{*}\bigr\Vert ^{2}\bigl\Vert Tw^{k}-Av^{k} \bigr\Vert ^{2} \\ &{}- \mu\bigl\Vert Tw^{k}-Av^{k} \bigr\Vert ^{2}- \mu\bigl\Vert T_{\alpha_{k}}^{g}Av^{k}-Av^{k} \bigr\Vert ^{2} \\ =&\bigl\Vert v^{k}-x^{*}\bigr\Vert ^{2}-\mu\bigl(1-\mu \Vert A \Vert ^{2}\bigr)\bigl\Vert Tw^{k}-Av^{k} \bigr\Vert ^{2}- \mu\bigl\Vert w^{k}-Av^{k}\bigr\Vert ^{2} \\ \leq&\bigl\Vert x^{k}-x^{*}\bigr\Vert ^{2}-\mu\bigl(1-\mu \Vert A\Vert ^{2}\bigr)\bigl\Vert Tw^{k}-Av^{k} \bigr\Vert ^{2}-\mu\bigl\Vert w^{k}-Av^{k}\bigr\Vert ^{2}. \end{aligned}$$
(3.5)

In view of (3.2), (3.5), and \(\mu\in(0, \frac{1}{\Vert A\Vert ^{2}})\), we get

$$ \bigl\Vert x^{k+1}-x^{*}\bigr\Vert \leq\bigl\Vert v^{k}-x^{*}\bigr\Vert \leq\bigl\Vert u^{k}-x^{*}\bigr\Vert \leq\bigl\Vert x^{k}-x^{*}\bigr\Vert $$
(3.6)

and

$$ \mu\bigl(1-\mu \Vert A\Vert ^{2}\bigr)\bigl\Vert Tw^{k}-Av^{k}\bigr\Vert ^{2} + \mu\bigl\Vert w^{k}-Av^{k}\bigr\Vert ^{2}\leq\bigl\Vert x^{k}-x^{*}\bigr\Vert ^{2}-\bigl\Vert x^{k+1}-x^{*} \bigr\Vert ^{2}. $$
(3.7)

Therefore, by (3.6) the sequence \(\{\Vert x^{k}-x^{*} \Vert\}\) is nonincreasing and bounded below, so \(\lim_{k \to+\infty} \Vert x^{k}-x^{*} \Vert\) exists, and we get from (3.6) and (3.7) that

$$ \begin{aligned} &\lim_{k \to+\infty}\bigl\Vert x^{k}-x^{*}\bigr\Vert = \lim_{k \to+\infty}\bigl\Vert v^{k}-x^{*}\bigr\Vert = \lim_{k\to+\infty}\bigl\Vert u^{k}-x^{*}\bigr\Vert \quad \text{and} \\ &\lim_{k \to+\infty}\bigl\Vert Tw^{k}-Av^{k}\bigr\Vert =\lim_{k \to +\infty}\bigl\Vert w^{k}-Av^{k} \bigr\Vert =0. \end{aligned} $$
(3.8)

From (3.8) and the inequality

$$\bigl\Vert Tw^{k}-w^{k}\bigr\Vert \leq\bigl\Vert Tw^{k}-Av^{k}\bigr\Vert +\bigl\Vert w^{k} - Av^{k}\bigr\Vert $$

we get

$$ \lim_{k\rightarrow +\infty}\bigl\Vert Tw^{k} - w^{k}\bigr\Vert =0. $$
(3.9)

Besides, Lemma 9(d) implies

$$ \bigl\Vert u^{k} - x^{*}\bigr\Vert ^{2} \leq\bigl\Vert x^{k} - x^{*}\bigr\Vert ^{2} - \gamma_{k}(2- \gamma_{k}) \bigl(\sigma_{k}\bigl\Vert \xi^{k} \bigr\Vert \bigr)^{2}. $$

Hence,

$$\begin{aligned} \gamma_{k}(2-\gamma_{k}) \bigl(\sigma_{k}\bigl\Vert \xi^{k}\bigr\Vert \bigr)^{2} & \leq \bigl\Vert x^{k} - x^{*}\bigr\Vert ^{2} - \bigl\Vert u^{k} - x^{*}\bigr\Vert ^{2} \\ & = \bigl(\bigl\Vert x^{k} - x^{*}\bigr\Vert - \bigl\Vert u^{k} - x^{*}\bigr\Vert \bigr) \bigl(\bigl\Vert x^{k} - x^{*} \bigr\Vert + \bigl\Vert u^{k} - x^{*}\bigr\Vert \bigr). \end{aligned}$$

In view of (3.8), we get

$$ \lim_{k \to+\infty}\sigma_{k}\bigl\Vert \xi^{k}\bigr\Vert = 0. $$
(3.10)

In addition, since \(x^{k} \in C\), \(u^{k} = P_{C}( x^{k} - \gamma _{k}\sigma_{k} \xi^{k} )\), and \(P_{C}\) is nonexpansive, we have

$$\bigl\Vert u^{k}-x^{k} \bigr\Vert \leq \gamma_{k}\sigma_{k}\bigl\Vert \xi^{k} \bigr\Vert . $$

So we get from (3.10) that

$$ \lim_{k \to+\infty}\bigl\Vert u^{k} - x^{k}\bigr\Vert = 0. $$
(3.11)

Using \(v^{k}=(1-\beta)u^{k} + \beta Su^{k}\), Lemma 2, and the nonexpansiveness of S, we have

$$\begin{aligned} \bigl\Vert v^{k}-x^{*}\bigr\Vert ^{2} & = \bigl\Vert (1-\beta)u^{k}+\beta Su^{k}-x^{*}\bigr\Vert ^{2} \\ &=\bigl\Vert (1-\beta) \bigl(u^{k}-x^{*}\bigr)+\beta \bigl(Su^{k}-x^{*}\bigr)\bigr\Vert ^{2} \\ &=(1-\beta)\bigl\Vert u^{k}-x^{*}\bigr\Vert ^{2}+\beta \bigl\Vert Su^{k}-x^{*}\bigr\Vert ^{2}-\beta(1-\beta )\bigl\Vert Su^{k}-u^{k}\bigr\Vert ^{2} \\ &=(1-\beta)\bigl\Vert u^{k}-x^{*}\bigr\Vert ^{2}+\beta \bigl\Vert Su^{k}-Sx^{*}\bigr\Vert ^{2}-\beta (1-\beta)\bigl\Vert Su^{k}-u^{k}\bigr\Vert ^{2} \\ &\leq(1-\beta)\bigl\Vert u^{k}-x^{*}\bigr\Vert ^{2}+\beta \bigl\Vert u^{k}-x^{*}\bigr\Vert ^{2}-\beta (1-\beta)\bigl\Vert Su^{k}-u^{k}\bigr\Vert ^{2} \\ &=\bigl\Vert u^{k}-x^{*}\bigr\Vert ^{2}-\beta(1-\beta) \bigl\Vert Su^{k}-u^{k}\bigr\Vert ^{2}. \end{aligned}$$
(3.12)

Therefore,

$$\beta(1-\beta)\bigl\Vert Su^{k}-u^{k}\bigr\Vert ^{2}\leq\bigl\Vert u^{k}-x^{*}\bigr\Vert ^{2}- \bigl\Vert v^{k}-x^{*}\bigr\Vert ^{2}. $$

Combining the last inequality with (3.8), we obtain that

$$ \lim_{k \to+\infty}\bigl\Vert Su^{k}-u^{k} \bigr\Vert =0. $$
(3.13)

In addition,

$$\begin{aligned} \bigl\Vert v^{k}-x^{k}\bigr\Vert &\leq\bigl\Vert v^{k}-u^{k}\bigr\Vert +\bigl\Vert u^{k}-x^{k} \bigr\Vert \\ &=\beta\bigl\Vert Su^{k}-u^{k} \bigr\Vert + \bigl\Vert u^{k}-x^{k} \bigr\Vert . \end{aligned}$$

Therefore, we get from (3.11) and (3.13) that

$$ \lim_{k\rightarrow +\infty}\bigl\Vert v^{k}-x^{k} \bigr\Vert =0. $$
(3.14)

Because \(\lim_{k \to+\infty}\Vert x^{k}-x^{*}\Vert\) exists, \(\{x^{k}\}\) is bounded. By Lemma 5, \(\{y^{k}\}\) is bounded, and consequently \(\{z^{k}\}\) is bounded. By Lemma 4, \(\{\xi^{k}\}\) is bounded. Step 3 and (3.10) yield

$$ \lim_{k \to\infty} f\bigl(z^{k}, x^{k}\bigr) = \lim_{k \to\infty} \bigl[\sigma_{k} \bigl\Vert \xi ^{k}\bigr\Vert \bigr] \bigl\Vert \xi^{k} \bigr\Vert = 0. $$
(3.15)

We have

$$\begin{aligned} 0 &= f\bigl(z^{k}, z^{k}\bigr) = f\bigl(z^{k}, (1 - \eta_{k}) x^{k} + \eta_{k} y^{k}\bigr) \\ & \leq (1 - \eta_{k})f\bigl(z^{k}, x^{k}\bigr) + \eta_{k} f\bigl(z^{k}, y^{k}\bigr), \end{aligned}$$

so, we get from (3.1) that

$$\begin{aligned} f\bigl(z^{k}, x^{k}\bigr) & \geq\eta_{k} \bigl[f \bigl(z^{k}, x^{k}\bigr) - f\bigl(z^{k}, y^{k}\bigr)\bigr] \\ &\geq \frac{\theta}{2\rho_{k}} \eta_{k} \bigl\Vert x^{k} - y^{k} \bigr\Vert ^{2}. \end{aligned}$$

Combining this with (3.15), we have

$$ \lim_{k \to\infty} \eta_{k}\bigl\Vert x^{k} - y^{k}\bigr\Vert ^{2} = 0. $$
(3.16)

Suppose that p is a weak accumulation point of \(\{x^{k}\}\), that is, there exists a subsequence \(\{x^{k_{j}}\}\) of \(\{x^{k}\}\) such that \(x^{k_{j}}\) converges weakly to \(p \in C\) as \(j \to+\infty\). Then, it follows from (3.11) and (3.14) that \(u^{k_{j}} \rightharpoonup p\), \(v^{k_{j}} \rightharpoonup p\), and \(Av^{k_{j}} \rightharpoonup Ap\).

Since \(\lim_{k \to+\infty}\Vert w^{k}-Av^{k}\Vert=0\), we deduce that \(w^{k_{j}} \rightharpoonup Ap\). Because \(\{w^{k}\} \subset Q\) and Q is closed and convex, we have that \(Ap \in Q\).

From (3.16), applied along the subsequence \(\{x^{k_{j}}\}\) (whose indices we relabel as \(k_{i}\)), we get

$$ \lim_{i \to\infty} \eta_{k_{i}}\bigl\Vert x^{k_{i}} - y^{k_{i}}\bigr\Vert ^{2} = 0. $$
(3.17)

We now consider two distinct cases.

Case 1. \(\limsup_{i \to\infty}\eta_{k_{i}} > 0\).

In this case, there exist \(\bar{\eta} > 0\) and a subsequence of \(\{\eta _{k_{i}}\}\), denoted again by \(\{\eta_{k_{i}}\}\), such that, for some \(i_{0} > 0\), \(\eta_{{k}_{i}} > \bar{\eta}\) for all \(i \geq i_{0} \). Using this fact and (3.17), we have

$$ \lim_{i \to\infty}{\bigl\Vert x^{{k}_{i}}-y^{{k}_{i}} \bigr\Vert } = 0. $$
(3.18)

Recall that \(x^{k_{i}} \rightharpoonup p\); together with (3.18), this implies that \(y^{k_{i}} \rightharpoonup p\) as \(i \to\infty\).

By the definition of \(y^{k_{i}}\),

$$y^{k_{i}} = \arg\min\biggl\{ f\bigl(x^{k_{i}}, y\bigr) + \frac{1}{2\rho_{k_{i}}}\bigl\Vert y-x^{k_{i}}\bigr\Vert ^{2}: y \in C\biggr\} , $$

we have

$$0 \in\partial_{2} f\bigl(x^{k_{i}}, y^{k_{i}}\bigr) + \frac{1}{\rho_{k_{i}}}\bigl(y^{k_{i}} - x^{k_{i}}\bigr) + N_{C}\bigl(y^{k_{i}}\bigr), $$

where \(N_{C}(y^{k_{i}})\) denotes the normal cone to C at \(y^{k_{i}}\); hence, there exists \(\hat{\xi}^{k_{i}} \in\partial_{2}f(x^{k_{i}}, y^{k_{i}})\) such that

$$\bigl\langle \hat{\xi}^{k_{i}}, y-y^{k_{i}}\bigr\rangle + \frac{1}{\rho_{k_{i}}}\bigl\langle y^{k_{i}} - x^{k_{i}} , y - y^{k_{i}} \bigr\rangle \geq0,\quad \forall y \in C. $$

Combining this with

$$f\bigl(x^{k_{i}}, y\bigr) - f\bigl(x^{k_{i}}, y^{k_{i}} \bigr) \geq\bigl\langle \hat{\xi}^{k_{i}}, y-y^{k_{i}}\bigr\rangle , \quad \forall y \in C, $$

yields

$$ f\bigl(x^{k_{i}}, y\bigr) - f\bigl(x^{k_{i}}, y^{k_{i}}\bigr) + \frac{1}{\rho_{k_{i}}}\bigl\langle y^{k_{i}} - x^{k_{i}} , y - y^{k_{i}} \bigr\rangle \geq0, \quad \forall y \in C. $$
(3.19)

Since

$$\bigl\langle y^{k_{i}} - x^{k_{i}} , y - y^{k_{i}} \bigr\rangle \leq\bigl\Vert y^{k_{i}} - x^{k_{i}} \bigr\Vert \bigl\Vert y - y^{k_{i}} \bigr\Vert , $$

from (3.19) we get that

$$ f\bigl(x^{k_{i}}, y\bigr) - f\bigl(x^{k_{i}}, y^{k_{i}}\bigr) + \frac{1}{\rho_{k_{i}}} \bigl\Vert y^{k_{i}} - x^{k_{i}} \bigr\Vert \bigl\Vert y - y^{k_{i}} \bigr\Vert \geq0. $$
(3.20)

Letting \(i \to\infty\), by the weak continuity of f and (3.18), from (3.20) we obtain in the limit that

$$f(p, y) - f(p, p) \geq0. $$

Hence,

$$f(p,y) \geq0,\quad \forall y \in C, $$

which means that p is a solution of \(\operatorname{EP}(C, f)\).

Case 2. \(\lim_{i \to\infty}{\eta_{k_{i}}} = 0\).

From the boundedness of \(\{y^{k_{i}}\}\), without loss of generality, we may assume that \(y^{k_{i}} \rightharpoonup \bar{y}\) as \(i \to\infty\).

Replacing y by \(x^{k_{i}}\) in (3.19), we get

$$ f\bigl(x^{k_{i}}, y^{k_{i}}\bigr) \leq- \frac{1}{\rho_{k_{i}}} \bigl\Vert y^{k_{i}} - x^{k_{i}} \bigr\Vert ^{2}. $$
(3.21)

On the other hand, by the Armijo linesearch rule (3.1), for \(m_{k_{i}} - 1\), we have

$$ f\bigl(z^{k_{i}, m_{k_{i}} - 1}, x^{k_{i}}\bigr) - f \bigl(z^{k_{i}, m_{k_{i}} - 1}, y^{k_{i}}\bigr) < \frac{\theta}{2\rho_{k_{i}}} \bigl\Vert y^{k_{i}}-x^{k_{i}}\bigr\Vert ^{2}. $$

Combining this with (3.21), we get

$$ f\bigl(x^{k_{i}}, y^{k_{i}}\bigr) \leq- \frac{1}{\rho_{k_{i}}} \bigl\Vert y^{k_{i}} - x^{k_{i}} \bigr\Vert ^{2} \leq\frac{2}{\theta} \bigl[f\bigl(z^{k_{i}, m_{k_{i}} - 1}, y^{k_{i}}\bigr) - f\bigl(z^{k_{i}, m_{k_{i}} - 1}, x^{k_{i}}\bigr) \bigr]. $$
(3.22)

According to the algorithm, we have \(z^{k_{i}, m_{k_{i}} - 1} = (1-\eta ^{m_{k_{i}} - 1})x^{k_{i}} + \eta^{m_{k_{i}} - 1}y^{k_{i}}\). Since \(\eta^{m_{k_{i}} - 1} \to0\), \(x^{k_{i}} \) converges weakly to p, and \(y^{k_{i}}\) converges weakly to ȳ, it follows that \(z^{k_{i}, m_{k_{i}} - 1} \rightharpoonup p\) as \(i \to\infty\). Besides, \(\{\frac{1}{\rho _{k_{i}}}\|y^{k_{i}} - x^{k_{i}}\|^{2}\}\) is bounded, so without loss of generality we may assume that \(\lim_{i \to+\infty}\frac{1}{\rho _{k_{i}}}\|y^{k_{i}} - x^{k_{i}}\|^{2}\) exists. Hence, passing to the limit in (3.22), we get

$$ f(p, \bar{y}) \leq- \lim_{i \to+\infty}\frac{1}{\rho_{k_{i}}} \bigl\Vert y^{k_{i}} - x^{k_{i}}\bigr\Vert ^{2} \leq \frac{2}{\theta}f(p, \bar{y}). $$

Therefore, \(f(p, \bar{y}) = 0\) and \(\lim_{i \to+\infty}\|y^{k_{i}} - x^{k_{i}}\|^{2} = 0\). By Case 1 we get \(p \in\operatorname{Sol}(C, f)\).

Besides that, (3.13) implies that \(\|Su^{k_{j}} - u^{k_{j}}\| \to0\) as \(j \to\infty\); together with \(u^{k_{j}} \rightharpoonup p \) and the demiclosedness of \(I - S\), we get \(p \in\operatorname{Fix}(S)\).

Therefore,

$$ p \in\operatorname{Sol}(C, f) \cap\operatorname{Fix}(S). $$
(3.23)

Next, we need to show that \(Ap \in\operatorname{Sol}(Q, g) \cap \operatorname{Fix}(T)\).

Indeed, we have \(\operatorname{Sol}(Q, g)= \operatorname{Fix}(T_{\beta}^{g})\). So, if \(T_{\beta}^{g}Ap \neq Ap\), then, using Opial’s condition, we have

$$\begin{aligned} \liminf_{j \to+\infty}\bigl\Vert Av^{k_{j}}-Ap\bigr\Vert &< \liminf_{j \to+\infty }\bigl\Vert Av^{k_{j}}-T_{\beta}^{g}Ap \bigr\Vert \\ &=\liminf_{j \to+\infty}\bigl\Vert Av^{k_{j}}-w^{k_{j}}+w^{k_{j}}-T_{\beta}^{g}Ap\bigr\Vert \\ &\leq\liminf_{j \to+\infty}\bigl(\bigl\Vert Av^{k_{j}}-w^{k_{j}} \bigr\Vert +\bigl\Vert T_{\beta}^{g}Ap-w^{k_{j}}\bigr\Vert \bigr). \end{aligned}$$

So it follows from (3.8) and Lemma 8 that

$$\begin{aligned} \liminf_{j \to+\infty}\bigl\Vert Av^{k_{j}}-Ap\bigr\Vert & < \liminf_{j \to +\infty}\bigl\Vert T_{\beta}^{g}Ap-w^{k_{j}} \bigr\Vert \\ &=\liminf_{j \to+\infty}\bigl\Vert T_{\beta}^{g}Ap-T_{\alpha _{k_{j}}}^{g}Av^{k_{j}} \bigr\Vert \\ &\leq\liminf_{j \to+\infty} \biggl\{ \bigl\Vert Av^{k_{j}}-Ap \bigr\Vert + \frac{|\alpha _{k_{j}}-\beta|}{\alpha_{k_{j}}}\bigl\Vert T_{\alpha _{k_{j}}}^{g}Av^{k_{j}}-Av^{k_{j}} \bigr\Vert \biggr\} \\ &=\liminf_{j \to+\infty} \biggl\{ \bigl\Vert Av^{k_{j}}-Ap\bigr\Vert + \frac{|\alpha _{k_{j}}-\beta|}{\alpha_{k_{j}}}\bigl\Vert w^{k_{j}}-Av^{k_{j}}\bigr\Vert \biggr\} \\ &=\liminf_{j \to+\infty}\bigl\Vert Av^{k_{j}}-Ap\bigr\Vert , \end{aligned}$$

a contradiction. Thus, \(Ap \in\operatorname{Fix}(T_{\beta}^{g}) = \operatorname{Sol}(Q, g)\).

Moreover, (3.9) shows that \(\lim_{j \to\infty} \|Tw^{k_{j}} - w^{k_{j}}\| = 0\). Combining this with \(w^{k_{j}} \rightharpoonup Ap\) and the fact that \(I - T\) is demiclosed at 0, it is immediate that \(Ap \in \operatorname{Fix}(T)\). Therefore,

$$ Ap \in\operatorname{Sol}(Q, g) \cap\operatorname{Fix}(T). $$
(3.24)

From (3.23) and (3.24) we obtain that \(p \in \varOmega \).

To complete the proof, we must show that the whole sequence \(\{x^{k}\}\) converges weakly to p. Indeed, if there exists a subsequence \(\{ x^{l_{i}}\}\) of \(\{x^{k}\}\) such that \(x^{l_{i}} \rightharpoonup q\) with \(q \neq p \), then we have \(q \in \varOmega \). By Opial’s condition this yields

$$\begin{aligned} \liminf_{i \to+\infty}\bigl\Vert x^{l_{i}} - q \bigr\Vert & < \liminf_{i \to +\infty}\bigl\Vert x^{l_{i}} - p\bigr\Vert \\ &=\lim_{k \to+\infty}\bigl\Vert x^{k} - p \bigr\Vert \\ &= \liminf_{j \to+\infty}\bigl\Vert x^{k_{j}}-p \bigr\Vert \\ &< \liminf_{j \to+\infty}\bigl\Vert x^{k_{j}}-q \bigr\Vert \\ &=\lim_{k \to+\infty}\bigl\Vert x^{k}-q \bigr\Vert \\ &=\liminf_{i \to+\infty}\bigl\Vert x^{l_{i}}-q\bigr\Vert , \end{aligned}$$

a contradiction. Hence, \(\{x^{k}\}\) converges weakly to p.

Combining this with (3.11), (3.14), and (3.8), it is immediate that \(\{u^{k}\} \) and \(\{ v^{k}\}\) also converge weakly to p and \(w^{k} \rightharpoonup Ap \in \operatorname{Sol}(Q, g) \cap\operatorname{Fix}(T)\). □

A particular case of the problem SEPNM is the split equilibrium problem SEP, obtained by taking \(S = I_{\mathbb{H}_{1}}\) and \(T = I_{\mathbb{H}_{2}}\). In this case, we have the following linesearch algorithm for SEP.

Algorithm 2

  • Initialization. Pick \(x^{0} \in C\) and choose the parameters \(\eta, \theta\in(0, 1)\), \(0 < \underline{\rho} \leq\bar {\rho}\), \(\{ \rho_{k} \} \subset[\underline{\rho}, \bar{\rho}]\), \(0 < \underline{\gamma} \leq\bar{\gamma} < 2\), \(\{ \gamma_{k} \} \subset [\underline{\gamma}, \bar{\gamma} ] \), \(0 < \alpha\), \(\{\alpha_{k}\} \subset [\alpha, +\infty)\), \(\mu\in(0, \frac{1}{\|A\|^{2}})\).

  • Iteration k (\(k = 0, 1, 2, \ldots \)). Having \(x^{k}\), do the following steps:

    1. Step 1.

      Solve the strongly convex program

      $$CP\bigl(x^{k}\bigr)\quad \min \biggl\{ f\bigl(x^{k}, y\bigr) + \frac{1}{2\rho_{k}} \bigl\Vert y-x^{k}\bigr\Vert ^{2}: y \in C \biggr\} $$

      to obtain its unique solution \(y^{k}\).

      If \(y^{k} = x^{k}\), then set \(u^{k} = x^{k}\) and go to Step 4. Otherwise, go to Step 2.

    2. Step 2.

      (Armijo linesearch rule) Find \(m_{k}\) as the smallest positive integer m such that

      $$ \textstyle\begin{cases} z^{k,m} = (1 - \eta^{m})x^{k} + \eta^{m} y^{k}, \\ f(z^{k,m}, x^{k}) - f(z^{k,m}, y^{k}) \geq\frac{\theta}{2 \rho_{k}}\|x^{k} - y^{k}\|^{2}. \end{cases} $$

      Set \(\eta_{k} = \eta^{m_{k}}\), \(z^{k} = z^{k, m_{k}}\).

    3. Step 3.

      Select \(\xi^{k} \in\partial_{2}f(z^{k}, x^{k})\) and compute \(\sigma_{k} = \frac{f(z^{k}, x^{k})}{\|\xi^{k}\|^{2}}\), \(u^{k} = P_{C}(x^{k} - \gamma_{k}\sigma_{k}\xi^{k})\).

    4. Step 4.

      \(w^{k}=T_{\alpha_{k}}^{g}Au^{k}\).

    5. Step 5.

      Take \(x^{k+1}=P_{C}(u^{k}+\mu A^{*}(w^{k}-Au^{k}))\) and go to iteration k with k replaced by \(k+1\).
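In the notation of the sketch given after Algorithm 1, Algorithm 2 corresponds to choosing both nonexpansive mappings as identity maps, since the Mann step in Step 4 then reduces to \(v^{k} = u^{k}\); for example (with the illustrative data F, A, proj_C, proj_Q, x0 as before):

```python
x, u, v, w = algorithm1(F, A, proj_C, proj_Q, S=lambda x: x, T=lambda y: y, x0=x0)
```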

The following corollary is an immediate consequence of Theorem 1.

Corollary 1

Suppose that g, f are bifunctions satisfying Assumptions A and B, respectively. Let \(A: \mathbb{H}_{1} \to\mathbb{H}_{2}\) be a bounded linear operator with its adjoint \(A^{*}\). If \(\varOmega = \{x^{*} \in\operatorname{Sol}(C, f): Ax^{*} \in\operatorname {Sol}(Q, g) \} \neq\emptyset\), then the sequences \(\{x^{k}\}\) and \(\{ u^{k}\}\) converge weakly to an element \(p \in \varOmega \), and \(\{w^{k}\}\) converges weakly to \(Ap \in\operatorname{Sol}(Q, g)\).

4 A strong convergence algorithm

Algorithm 3

  • Initialization. Pick \(x^{g} \in C_{0} = C\) and choose the parameters \(\beta, \eta, \theta\in(0, 1)\), \(0 < \underline{\rho} \leq \bar{\rho} \), \(\{ \rho_{k} \} \subset[\underline{\rho} , \bar{\rho } ]\), \(0 < \underline{\gamma} \leq\bar{\gamma} < 2 \), \(\{ \gamma_{k} \} \subset[ \underline{\gamma}, \bar{\gamma}] \), \(0 < \alpha\), \(\{\alpha _{k}\} \subset [\alpha, +\infty)\), \(\mu\in(0, \frac{1}{\|A\|^{2}})\).

  • Iteration k (\(k = 0, 1, 2, \ldots \)). Having \(x^{k}\), do the following steps:

    1. Step 1.

      Solve the strongly convex program

      $$CP\bigl(x^{k}\bigr)\quad \min \biggl\{ f\bigl(x^{k}, y\bigr) + \frac{1}{2\rho_{k}} \bigl\Vert y-x^{k}\bigr\Vert ^{2}: y \in C \biggr\} $$

      to obtain its unique solution \(y^{k}\).

      If \(y^{k} = x^{k}\), then set \(u^{k} = x^{k}\) and go to Step 4. Otherwise, go to Step 2.

    2. Step 2.

      (Armijo linesearch rule) Find \(m_{k}\) as the smallest positive integer m such that

      $$ \textstyle\begin{cases} z^{k,m} = (1 - \eta^{m})x^{k} + \eta^{m} y^{k}, \\ f(z^{k,m}, x^{k}) - f(z^{k,m}, y^{k}) \geq\frac{\theta}{2 \rho_{k}}\|x^{k} - y^{k}\|^{2}. \end{cases} $$
      (4.1)

      Set \(\eta_{k} = \eta^{m_{k}}\), \(z^{k} = z^{k, m_{k}}\).

    3. Step 3.

      Select \(\xi^{k} \in\partial_{2}f(z^{k}, x^{k})\) and compute \(\sigma_{k} = \frac{f(z^{k}, x^{k})}{\|\xi^{k}\|^{2}}\), \(u^{k} = P_{C}(x^{k} - \gamma_{k}\sigma_{k}\xi^{k})\).

    4. Step 4.
      $$ \textstyle\begin{cases} v^{k}=(1-\beta)u^{k} + \beta Su^{k}, \\ w^{k}=T_{\alpha_{k}}^{g}Av^{k}. \end{cases} $$
    5. Step 5.

      \(t^{k}=P_{C}(v^{k}+\mu A^{*}(Tw^{k}-Av^{k}))\).

    6. Step 6.

      Define \(C_{k+1} = \{x \in C_{k}: \|x - t^{k}\| \leq \| x - v^{k}\| \leq\|x - x^{k}\| \}\). Compute \(x^{k+1} = P_{C_{k+1}}(x^{g})\) and go to iteration k with k replaced by \(k+1\).
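Note that each constraint defining \(C_{k+1}\) in Step 6 is a halfspace: for fixed \(t^{k}\) and \(v^{k}\),

$$ \bigl\Vert x - t^{k}\bigr\Vert \leq\bigl\Vert x - v^{k}\bigr\Vert \quad \Longleftrightarrow\quad 2\bigl\langle x, v^{k} - t^{k}\bigr\rangle \leq\bigl\Vert v^{k}\bigr\Vert ^{2} - \bigl\Vert t^{k}\bigr\Vert ^{2}, $$

and similarly for \(\Vert x - v^{k}\Vert \leq\Vert x - x^{k}\Vert \). Hence \(C_{k+1}\) is the intersection of C with finitely many halfspaces, and computing \(x^{k+1} = P_{C_{k+1}}(x^{g})\) amounts to solving a strongly convex quadratic program, which is straightforward when C itself is polyhedral.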

Theorem 2

Let C and Q be two nonempty closed convex subsets in \(\mathbb{H}_{1}\) and \(\mathbb{H}_{2}\), respectively. Let \(S: C \to C\); \(T: Q \to Q\) be nonexpansive mappings, and let bifunctions g and f satisfy Assumptions A and B, respectively. Let \(A: \mathbb{H}_{1} \to\mathbb {H}_{2}\) be a bounded linear operator with its adjoint \(A^{*}\). If \(\varOmega = \{x^{*} \in\operatorname{Sol}(C, f) \cap\operatorname {Fix}(S): Ax^{*} \in\operatorname{Sol}(Q, g) \cap\operatorname{Fix}(T) \} \neq\emptyset\), then the sequences \(\{x^{k}\}\), \(\{u^{k}\}\), \(\{v^{k}\}\) converge strongly to an element \(p \in \varOmega \), and \(\{w^{k}\}\) converges strongly to \(Ap \in\operatorname{Sol}(Q, g) \cap\operatorname{Fix}(T)\).

Proof

First, we observe that the linesearch rule (4.1) is well defined by Lemma 9. Let \(x^{*} \in \varOmega \). From (3.5), (3.12), and (3.2) we have

$$\begin{aligned} \bigl\Vert t^{k}-x^{*}\bigr\Vert ^{2} \leq& \bigl\Vert v^{k}-x^{*} \bigr\Vert ^{2} - \mu\bigl(1-\mu \Vert A\Vert ^{2}\bigr)\bigl\Vert Tw^{k}-Av^{k} \bigr\Vert ^{2}-\mu\bigl\Vert w^{k}-Av^{k}\bigr\Vert ^{2} \\ \leq&\bigl\Vert u^{k}-x^{*}\bigr\Vert ^{2}-\beta(1-\beta) \bigl\Vert Su^{k}-u^{k}\bigr\Vert ^{2} \\ &{}-\mu \bigl(1-\mu \Vert A\Vert ^{2}\bigr)\bigl\Vert Tw^{k}-Av^{k} \bigr\Vert ^{2}-\mu\bigl\Vert w^{k}-Av^{k}\bigr\Vert ^{2} \\ \leq&\bigl\Vert x^{k}-x^{*}\bigr\Vert ^{2}-\beta(1-\beta) \bigl\Vert Su^{k}-u^{k}\bigr\Vert ^{2} \\ &{}-\mu \bigl(1-\mu \Vert A\Vert ^{2}\bigr)\bigl\Vert Tw^{k}-Av^{k} \bigr\Vert ^{2}-\mu\bigl\Vert w^{k}-Av^{k}\bigr\Vert ^{2}. \end{aligned}$$
(4.2)

Since \(\mu\in (0, \frac{1}{\Vert A\Vert^{2}} )\), (4.2) implies that

$$ \bigl\Vert t^{k}-x^{*}\bigr\Vert \leq\bigl\Vert v^{k}-x^{*}\bigr\Vert \leq\bigl\Vert u^{k}-x^{*}\bigr\Vert \leq\bigl\Vert x^{k}-x^{*}\bigr\Vert , \quad \forall k. $$
(4.3)

Since \(x^{*} \in C_{0}\), from (4.3) we get by induction that \(x^{*} \in C_{k}\) for all \(k \in\mathbb{N}^{*}\) and, consequently, \(\varOmega \subset C_{k}\) for all k.

By setting

$$D_{k} = \bigl\{ x \in\mathbb{H}_{1}: \bigl\Vert x - t^{k} \bigr\Vert \leq\bigl\Vert x - v^{k} \bigr\Vert \leq\bigl\Vert x - x^{k} \bigr\Vert \bigr\} ,\quad k\in\mathbb{N}, $$

it is clear that \(D_{k}\) is closed and convex for all k. In addition, \(C_{0} = C\) is also closed and convex, and \(C_{k+1}=C_{k} \cap D_{k}\). Hence, \(C_{k}\) is closed and convex for all k.

From the definition of \(x^{k + 1}\) we have \(x^{k+1}\in C_{k+1}\subset C_{k}\) and \(x^{k}=P_{C_{k}}(x^{g})\), so

$$\bigl\Vert x^{k}-x^{g}\bigr\Vert \leq\bigl\Vert x^{k+1}-x^{g} \bigr\Vert \quad \text{for all } k. $$

Since \(x^{*} \in C_{k+1}\), this implies that

$$\bigl\Vert x^{k+1}-x^{g} \bigr\Vert \leq\bigl\Vert x^{*}-x^{g} \bigr\Vert . $$

Thus,

$$\bigl\Vert x^{k}-x^{g} \bigr\Vert \leq\bigl\Vert x^{k+1}-x^{g} \bigr\Vert \leq\bigl\Vert x^{*}-x^{g} \bigr\Vert ,\quad \forall k. $$

Consequently, \(\{\Vert x^{k}-x^{g}\Vert\}\) is nondecreasing and bounded, so \(\lim_{k \to+\infty}\Vert x^{k}-x^{g}\Vert\) does exist.

Combining this with (4.3), we obtain that \(\{t^{k}\}\) and \(\{v^{k}\}\) are also bounded.

For all \(m > n\), we have that \(x^{m} \in C_{m} \subset C_{n}\) and \(x^{n}=P_{C_{n}}(x^{g})\). Combining this fact with Lemma 1, we get

$$\begin{aligned} \bigl\Vert x^{m}-x^{n} \bigr\Vert ^{2} & \leq \bigl\Vert x^{m}-x^{g} \bigr\Vert ^{2}- \bigl\Vert x^{n}-x^{g}\bigr\Vert ^{2} \\ & = \bigl(\bigl\Vert x^{m}-x^{g} \bigr\Vert - \bigl\Vert x^{n}-x^{g} \bigr\Vert \bigr) \bigl(\bigl\Vert x^{m}-x^{g} \bigr\Vert + \bigl\Vert x^{n}-x^{g} \bigr\Vert \bigr). \end{aligned}$$

Since \(\lim_{k \to+\infty}\Vert x^{k}-x^{g}\Vert\) exists, this implies that \(\lim_{m,n \to\infty} \Vert x^{m}-x^{n} \Vert = 0\), i.e., \(\{x^{k}\} \) is a Cauchy sequence, so

$$ \lim_{k \to\infty}x^{k} = p. $$
(4.4)

By Step 6 we get

$$\bigl\Vert t^{k}-x^{k+1} \bigr\Vert \leq\bigl\Vert v^{k}-x^{k+1} \bigr\Vert \leq\bigl\Vert x^{k}-x^{k+1}\bigr\Vert . $$

Therefore,

$$\begin{aligned} \bigl\Vert t^{k}-x^{k}\bigr\Vert &\leq \bigl\Vert t^{k}-x^{k+1}\bigr\Vert +\bigl\Vert x^{k+1}-x^{k}\bigr\Vert \\ &\leq\bigl\Vert x^{k}-x^{k+1}\bigr\Vert +\bigl\Vert x^{k}-x^{k+1}\bigr\Vert \\ &=2\bigl\Vert x^{k}-x^{k+1}\bigr\Vert \end{aligned}$$
(4.5)

and

$$\begin{aligned} \bigl\Vert v^{k}-x^{k}\bigr\Vert &\leq \bigl\Vert v^{k}-x^{k+1}\bigr\Vert +\bigl\Vert x^{k+1}-x^{k}\bigr\Vert \\ &\leq\bigl\Vert x^{k}-x^{k+1}\bigr\Vert +\bigl\Vert x^{k}-x^{k+1}\bigr\Vert \\ &=2\bigl\Vert x^{k}-x^{k+1}\bigr\Vert . \end{aligned}$$
(4.6)

So, from (4.5), (4.6), and (4.4) we get that

$$ \lim_{k \to\infty} \bigl\Vert t^{k} - x^{k} \bigr\Vert = \lim_{k \to\infty} \bigl\Vert v^{k} - x^{k} \bigr\Vert = 0. $$
(4.7)

In view of (4.2) and (4.7), we have

$$\begin{aligned}& \beta(1-\beta)\bigl\Vert Su^{k}-u^{k}\bigr\Vert ^{2}+\mu\bigl(1-\mu \Vert A\Vert ^{2}\bigr)\bigl\Vert Tw^{k}-Av^{k}\bigr\Vert ^{2}+\mu\bigl\Vert w^{k}-Av^{k}\bigr\Vert ^{2} \\& \quad \leq\bigl\Vert x^{k}-x^{*}\bigr\Vert ^{2}-\bigl\Vert t^{k}-x^{*}\bigr\Vert ^{2} \\& \quad =\bigl(\bigl\Vert x^{k}-x^{*}\bigr\Vert +\bigl\Vert t^{k}-x^{*}\bigr\Vert \bigr) \bigl(\bigl\Vert x^{k}-x^{*} \bigr\Vert -\bigl\Vert t^{k}-x^{*}\bigr\Vert \bigr) \\& \quad \leq\bigl\Vert x^{k}-t^{k}\bigr\Vert \bigl(\bigl\Vert x^{k}-x^{*}\bigr\Vert +\bigl\Vert t^{k}-x^{*}\bigr\Vert \bigr) \to0\quad \text{as } k \to\infty. \end{aligned}$$
(4.8)

Since \(\beta\in(0, 1)\) and \(\mu\in(0, \frac{1}{\|A\|^{2}})\), we deduce from (4.8) that

$$ \begin{aligned} &\lim_{k \to+\infty}\bigl\Vert Su^{k}-u^{k}\bigr\Vert =0, \qquad \lim_{k \to+\infty }\bigl\Vert Tw^{k}-Av^{k}\bigr\Vert = 0, \quad \text{and} \\ &\lim_{k \to+\infty}\bigl\Vert w^{k}-Av^{k}\bigr\Vert =0. \end{aligned} $$
(4.9)

In addition, from the inequality

$$\bigl\Vert Tw^{k}-w^{k}\bigr\Vert \leq\bigl\Vert Tw^{k}-Av^{k}\bigr\Vert +\bigl\Vert w^{k}-Av^{k} \bigr\Vert , $$

combined with (4.9), we get

$$ \lim_{k \to+\infty}\bigl\Vert Tw^{k}-w^{k} \bigr\Vert =0. $$
(4.10)

Besides, (3.11), (4.6), and \(\lim_{k \to+\infty}x^{k}=p\) imply

$$ \lim_{k \to+\infty}u^{k}=p,\qquad \lim _{k \to+\infty}v^{k}=p. $$
(4.11)

Since

$$\begin{aligned} \Vert Sp-p\Vert &\leq\bigl\Vert Sp-Su^{k}\bigr\Vert +\bigl\Vert Su^{k}-u^{k}\bigr\Vert +\bigl\Vert u^{k}-p\bigr\Vert \\ &\leq\bigl\Vert p-u^{k}\bigr\Vert +\bigl\Vert Su^{k}-u^{k}\bigr\Vert +\bigl\Vert u^{k}-p\bigr\Vert \\ &=2\bigl\Vert u^{k}-p\bigr\Vert +\bigl\Vert Su^{k}-u^{k} \bigr\Vert , \end{aligned}$$

from (4.9) and (4.11) we get that \(\|Sp - p\| = 0\), that is, \(p \in\operatorname{Fix}(S)\).

From (3.16) we have

$$ \lim_{k \to\infty} \eta_{k}\bigl\Vert x^{k} - y^{k}\bigr\Vert ^{2} = 0. $$
(4.12)

We now consider two distinct cases.

Case 1. \(\limsup_{k \to\infty}\eta_{k} > 0\).

Then there exist \(\bar{\eta} > 0\) and a subsequence \(\{ \eta_{k_{i}} \} \subset\{ \eta_{k} \}\) such that \(\eta_{{k}_{i}} > \bar{\eta}\) for all i. So we get from (4.12) that

$$ \lim_{i \to\infty}{\bigl\Vert x^{{k}_{i}}-y^{{k}_{i}} \bigr\Vert } = 0. $$
(4.13)

Since \(x^{k} \to p\), (4.13) implies that \(y^{k_{i}} \to p\) as \(i \to\infty\).

For each \(y \in C\), we get from (3.20) that

$$ f\bigl(x^{k_{i}}, y\bigr) - f\bigl(x^{k_{i}}, y^{k_{i}}\bigr) + \frac{1}{\rho_{k_{i}}} \bigl\Vert y^{k_{i}} - x^{k_{i}} \bigr\Vert \bigl\Vert y - y^{k_{i}} \bigr\Vert \geq0. $$
(4.14)

Letting \(i \to\infty\) and using the continuity of f together with \(x^{k_{i}} \to p\) and \(y^{k_{i}} \to p\), from (4.14) we obtain that

$$f(p,y) - f(p, p) \geq0. $$

Hence,

$$f(p,y) \geq0, \quad \forall y \in C, $$

so p is a solution of \(\operatorname{EP}(C, f)\).

Case 2. \(\lim_{k \to\infty}{\eta_{k}} = 0\).

From the boundedness of \(\{y^{k}\}\) we deduce that there exists \(\{ y^{k_{i}}\} \subset\{y^{k}\} \) such that \(y^{k_{i}} \rightharpoonup \bar {y}\) as \(i \to\infty\).

Replacing y by \(x^{k_{i}}\) in (3.19), we get

$$ f\bigl(x^{k_{i}}, y^{k_{i}}\bigr) + \frac{1}{\rho_{k_{i}}}\bigl\Vert y^{k_{i}} - x^{k_{i}} \bigr\Vert ^{2} \leq0. $$
(4.15)

On the other hand, by the Armijo linesearch rule (4.1), for \(m_{k_{i}} - 1\), there exists \(z^{k_{i}, m_{k_{i}}-1} \) such that

$$ f\bigl(z^{k_{i}, m_{k_{i}}-1}, x^{k_{i}}\bigr) - f \bigl(z^{k_{i}, m_{k_{i}}-1}, y^{k_{i}}\bigr) < \frac {\theta}{2\rho_{k_{i}}}\bigl\Vert y^{k_{i}}-x^{k_{i}}\bigr\Vert ^{2}. $$

Combining this with (4.15), we get

$$ f\bigl(z^{k_{i}, m_{k_{i}} - 1}, y^{k_{i}} \bigr) - f \bigl(z^{k_{i}, m_{k_{i}}-1}, x^{k_{i}}\bigr) > - \frac{\theta}{2\rho_{k_{i}}} \bigl\Vert y^{k_{i}}-x^{k_{i}}\bigr\Vert ^{2} \geq \frac{\theta}{2} f\bigl(x^{k_{i}}, y^{k_{i}}\bigr) . $$
(4.16)

According to the algorithm, we have \(z^{k_{i}, m_{k_{i}} - 1} = (1-\eta ^{m_{k_{i}} - 1})x^{k_{i}} + \eta^{m_{k_{i}} - 1}y^{k_{i}}\). Since \(\eta^{m_{k_{i}} - 1} \to0\), \(x^{k_{i}} \) converges strongly to p, and \(y^{k_{i}}\) converges weakly to ȳ, it follows that \(z^{k_{i}, m_{k_{i}} - 1} \to p\) as \(i \to\infty\). Besides, \(\{\frac{1}{\rho _{k_{i}}}\|y^{k_{i}} - x^{k_{i}}\|^{2}\}\) is bounded, so, without loss of generality, we may assume that \(\lim_{i \to+\infty}\frac{1}{\rho _{k_{i}}}\|y^{k_{i}} - x^{k_{i}}\|^{2}\) exists. Hence, passing to the limit in (4.16), we get

$$ f(p, \bar{y}) \geq- \frac{\theta}{2} \lim_{i \to+\infty}\frac{1}{\rho_{k_{i}}}\bigl\Vert y^{k_{i}} - x^{k_{i}}\bigr\Vert ^{2} \geq\frac{\theta}{2} f(p, \bar{y}). $$

Therefore, \(f(p, \bar{y}) = 0\) and \(\lim_{i \to+\infty}\|y^{k_{i}} - x^{k_{i}}\|^{2} = 0\). By Case 1 it is immediate that \(p \in\operatorname{Sol}(C, f)\). So

$$ p \in\operatorname{Sol}(C, f) \cap\operatorname{Fix}(S). $$
(4.17)

We obtain from (4.11) that \(\lim_{k \to+\infty}Av^{k}=Ap\). Combining this with (4.9) yields

$$ \lim_{k\rightarrow +\infty}w^{k}=Ap. $$
(4.18)

Moreover,

$$\begin{aligned} \Vert TAp-Ap\Vert & \leq\bigl\Vert TAp-Tw^{k}\bigr\Vert + \bigl\Vert Tw^{k}-w^{k}\bigr\Vert +\bigl\Vert w^{k}-Ap\bigr\Vert \\ & \leq\bigl\Vert Ap-w^{k}\bigr\Vert + \bigl\Vert Tw^{k}-w^{k}\bigr\Vert +\bigl\Vert w^{k}-Ap \bigr\Vert \\ &=2\bigl\Vert w^{k}-Ap \bigr\Vert + \bigl\Vert Tw^{k} - w^{k}\bigr\Vert . \end{aligned}$$

In view of (4.10) and (4.18), we obtain \(\|TAp - Ap\| = 0\). Hence, \(Ap \in\operatorname{Fix}(T)\).

In addition,

$$\begin{aligned} \bigl\Vert T_{\beta}^{g}Ap - Ap \bigr\Vert & \leq\bigl\Vert T_{\beta}^{g}Ap-T_{\alpha _{k}}^{g}Av^{k} \bigr\Vert +\bigl\Vert T_{\alpha_{k}}^{g}Av^{k}-Av^{k} \bigr\Vert + \bigl\Vert Av^{k}-Ap\bigr\Vert \\ &= \bigl\Vert T_{\beta}^{g}Ap - T_{\alpha_{k}}^{g}Av^{k} \bigr\Vert + \bigl\Vert w^{k} - Av^{k}\bigr\Vert + \bigl\Vert Av^{k}-Ap\bigr\Vert \\ & \leq\bigl\Vert Av^{k}-Ap \bigr\Vert + \frac{|\alpha_{k}-\beta|}{\alpha_{k}}\bigl\Vert T_{\alpha_{k}}^{g}Av^{k} - Av^{k} \bigr\Vert + \bigl\Vert w^{k}-Av^{k}\bigr\Vert + \bigl\Vert Av^{k}-Ap\bigr\Vert \\ &=2\bigl\Vert Av^{k}-Ap \bigr\Vert + \frac{|\alpha_{k}-\beta|}{\alpha_{k}}\bigl\Vert w^{k} - Av^{k}\bigr\Vert +\bigl\Vert w^{k}-Av^{k}\bigr\Vert , \end{aligned}$$

where the last inequality comes from Lemma 8. Letting \(k \to \infty\) and recalling that \(\lim_{k \to+\infty}Av^{k}=Ap\), from (4.9) we get

$$\bigl\Vert T_{\beta}^{g}Ap - Ap\bigr\Vert = 0. $$

Therefore, \(Ap \in\operatorname{Fix}(T_{\beta}^{g}) = \operatorname {Sol}(Q, g)\).

Hence,

$$ Ap \in\operatorname{Sol}(Q, g) \cap\operatorname{Fix}(T). $$

Combining this with (4.17), we conclude that \(p \in \varOmega \). The proof is completed. □

When \(S=I_{\mathbb{H}_{1}}\) and \(T=I_{\mathbb{H}_{2}}\), Algorithm 3 reduces to the following algorithm.

Algorithm 4

  • Initialization. Pick \(x^{g} \in C_{0} = C\) and choose the parameters \(\eta, \theta\in(0, 1)\), \(0 < \underline{\rho} \leq \bar {\rho} \), \(\{ \rho_{k} \} \subset[\underline{\rho} , \bar{\rho} ]\), \(0 < \underline{\gamma} \leq\bar{\gamma} < 2 \), \(\{ \gamma_{k} \} \subset[ \underline{\gamma}, \bar{\gamma}] \), \(0 < \alpha\), \(\{\alpha _{k}\} \subset [\alpha, +\infty)\), \(\mu\in(0, \frac{1}{\|A\|^{2}})\).

  • Iteration k (\(k = 0, 1, 2, \ldots \)). Having \(x^{k}\), do the following steps:

    1. Step 1.

      Solve the strongly convex program

      $$CP\bigl(x^{k}\bigr)\quad \min \biggl\{ f\bigl(x^{k}, y\bigr) + \frac{1}{2\rho_{k}} \bigl\Vert y-x^{k}\bigr\Vert ^{2}: y \in C \biggr\} $$

      to obtain its unique solution \(y^{k}\).

      If \(y^{k} = x^{k}\), then set \(u^{k} = x^{k}\) and go to Step 4. Otherwise, go to Step 2.

    2. Step 2.

      (Armijo linesearch rule) Find \(m_{k}\) as the smallest positive integer m such that

      $$ \textstyle\begin{cases} z^{k,m} = (1 - \eta^{m})x^{k} + \eta^{m} y^{k}, \\ f(z^{k,m}, x^{k}) - f(z^{k,m}, y^{k}) \geq\frac{\theta}{2 \rho_{k}}\|x^{k} - y^{k}\|^{2}. \end{cases} $$

      Set \(\eta_{k} = \eta^{m_{k}}\), \(z^{k} = z^{k, m_{k}}\).

    3. Step 3.

      Select \(\xi^{k} \in\partial_{2}f(z^{k}, x^{k})\) and compute \(\sigma_{k} = \frac{f(z^{k}, x^{k})}{\|\xi^{k}\|^{2}}\), \(u^{k} = P_{C}(x^{k} - \gamma_{k}\sigma_{k}\xi^{k})\).

    4. Step 4.

      \(w^{k}=T_{\alpha_{k}}^{g}Au^{k}\).

    5. Step 5.

      \(t^{k}=P_{C}(u^{k}+\mu A^{*}(w^{k}-Au^{k}))\).

    6. Step 6.

      Define \(C_{k+1} = \{x \in C_{k}: \|x - t^{k}\| \leq \| x - u^{k}\| \leq\|x - x^{k}\| \}\). Compute \(x^{k+1} = P_{C_{k+1}}(x^{g})\) and go to iteration k with k replaced by \(k+1\).

The following result is an immediate consequence of Theorem 2.

Corollary 2

Let \(g: Q \times Q \to\mathbb{R}\) be a bifunction satisfying Assumption A, and \(f: C \times C \to\mathbb{R}\) be a bifunction satisfying Assumption B. Let \(A: \mathbb{H}_{1} \to\mathbb{H}_{2}\) be a bounded linear operator with its adjoint \(A^{*}\). If \(\varOmega =\{x^{*}\in\operatorname{Sol}(C, f): Ax^{*} \in\operatorname {Sol}(Q, g)\}\neq\emptyset\), then the sequences \(\{x^{k}\}\) and \(\{u^{k}\} \) converge strongly to an element \(p \in \varOmega \), and \(\{w^{k}\}\) converges strongly to \(Ap \in\operatorname{Sol}(Q, g)\).

5 Conclusion

Two linesearch algorithms for solving the split equilibrium problem and nonexpansive mapping problem \(\operatorname{SEPNM}(C, Q, A, f, g, S, T)\) in Hilbert spaces have been proposed, in which the bifunction f is pseudomonotone on C with respect to its solution set, the bifunction g is monotone on Q, and S and T are nonexpansive mappings. The weak and strong convergence of the iteration sequences generated by these algorithms to a solution of the problem has been established.

References

  1. Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)


  2. Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 18, 103-120 (2004)


  3. Censor, Y, Bortfeld, T, Martin, B, Trofimov, A: A unified approach for inversion problem in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353-2365 (2006)


  4. Censor, Y, Elfving, T, Kopf, N, Bortfeld, T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 21, 2071-2084 (2005)


  5. Censor, Y, Motova, XA, Segal, A: Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 327, 1244-1256 (2007)


  6. Byrne, C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441-453 (2002)


  7. Xu, HK: Iterative methods for the split feasibility problem in infinite dimensional Hilbert spaces. Inverse Probl. 26, 105018 (2010)


  8. Ansari, QH, Rehan, A: An iterative method for split hierarchical monotone variational inclusions. Fixed Point Theory Appl. 2015, 121 (2015)


  9. Ansari, AH, Rehan, A, Wen, CF: Split hierarchical variational inequality problems and fixed point problems for nonexpansive mappings. Fixed Point Theory Appl. 2015, 274 (2015)


  10. Censor, Y, Segal, A: The split common fixed point problem for directed operators. J. Convex Anal. 16, 587-600 (2009)


  11. Cui, H, Wang, F: Iterative methods for the split common fixed point problem in Hilbert spaces. Fixed Point Theory Appl. 2014, 78 (2014)


  12. Eslamian, M: General algorithms for split common fixed point problem of demicontractive mappings. Optimization (2015). doi:10.1080/02331934.2015.1053883


  13. Kraikaew, R, Saejung, S: On split common fixed point problems. J. Math. Anal. Appl. 415, 513-524 (2014)


  14. Moudafi, A: The split common fixed point problem for demicontractive mappings. Inverse Probl. 26, 055007 (2010)


  15. Sitthithakerngkiet, K, Deepho, J, Kumam, P: A hybrid viscosity algorithm via modify the hybrid steepest descent method for solving the split variational inclusion and fixed point problems. Appl. Math. Comput. 250, 986-1001 (2015)


  16. Moudafi, A: Split monotone variational inclusions. J. Optim. Theory Appl. 150, 275-283 (2011)


  17. Censor, Y, Gibali, A, Reich, S: Algorithms for the split variational inequality problem. Numer. Algorithms 59, 301-323 (2012)


  18. Deepho, J, Kumam, W, Kumam, P: A new hybrid projection algorithm for solving the split generalized equilibrium problems and the system of variational inequality problems. J. Math. Model. Algorithms 13, 405-423 (2014)


  19. Deepho, J, Martínez-Moreno, J, Kumam, P: A viscosity of Cesàro mean approximation method for split generalized equilibrium, variational inequality and fixed point problems. J. Nonlinear Sci. Appl. 9, 1475-1496 (2016)


  20. Deepho, J, Martínez-Moreno, J, Sitthithakerngkiet, K, Kumam, P: Convergence analysis of hybrid projection with Cesàro mean method for the split equilibrium and general system of finite variational inequalities. J. Comput. Appl. Math. (2015). doi:10.1016/j.cam.2015.10.006


  21. Blum, E, Oettli, W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 127-149 (1994)


  22. Muu, LD, Oettli, W: Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. TMA 18, 1159-1166 (1992)


  23. He, Z: The split equilibrium problem and its convergence algorithms. J. Inequal. Appl. (2012). doi:10.1186/1029-242X-2012-162


  24. Bnouhachem, A, Al-Homidan, S, Ansari, QH: An iterative method for common solutions of equilibrium problems and hierarchical fixed point problems. Fixed Point Theory Appl. 2014, 194 (2014)


  25. Kumam, W, Piri, H, Kumam, P: Solutions of system of equilibrium and variational inequality problems on fixed points of infinite family of nonexpansive mappings. Appl. Math. Comput. 248, 441-455 (2014)


  26. Kumam, W, Witthayarat, U, Kumam, P, Suantai, S, Wattanawitoon, K: Convergence theorem for equilibrium problem and Bregman strongly nonexpansive mappings in Banach spaces. Optimization 65, 265-280 (2016)


  27. Bauschke, HH, Borwein, JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38, 367-426 (1996)


  28. López, G, Martín-Márquez, V, Wang, F, Xu, HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 28, 085004 (2012)


  29. Tran, DQ, Muu, LD, Nguyen, VH: Extragradient algorithms extended to equilibrium problems. Optimization 57, 749-776 (2008)


  30. Dinh, BV, Muu, LD: A projection algorithm for solving pseudomonotone equilibrium problems and it’s application to a class of bilevel equilibria. Optimization 64, 559-575 (2015)


  31. Facchinei, F, Pang, JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, New York (2003)


  32. Korpelevich, GM: An extragradient method for finding saddle points and other problems. Èkon. Mat. Metody 12, 747-756 (1976)


  33. Mann, WR: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506-510 (1953)


  34. Anh, TV, Muu, LD: A projection-fixed point method for a class of bilevel variational inequalities with split fixed point constraints. Optimization (2015). doi:10.1080/02331934.2015.1101599


  35. Moudafi, A: Krasnoselski-Mann iteration for hierarchical fixed-point problems. Inverse Probl. 23, 1635-1640 (2007)


  36. Takahashi, W, Takeuchi, Y, Kubota, R: Strong convergence theorems by hybrid methods for families of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 341, 276-286 (2008)


  37. Censor, Y, Gibali, A, Reich, S: Strong convergence of subgradient extragradient methods for variational inequality problem in Hilbert space. Optim. Methods Softw. 26, 827-845 (2011)


  38. Opial, Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73, 591-597 (1967)


  39. Rockafellar, RT: Convex Analysis. Princeton University Press, Princeton (1970)


  40. Vuong, PT, Strodiot, JJ, Nguyen, VH: Extragradient methods and linesearch algorithms for solving Ky Fan inequalities and fixed point problems. J. Optim. Theory Appl. 155, 605-627 (2013)


  41. Combettes, PL, Hirstoaga, A: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6, 117-136 (2005)



Acknowledgements

This work was supported by a Research Grant of Pukyong National University (2016).

Author information


Corresponding author

Correspondence to Do Sang Kim.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Dinh, B.V., Son, D.X., Jiao, L. et al. Linesearch algorithms for split equilibrium problems and nonexpansive mappings. Fixed Point Theory Appl 2016, 27 (2016). https://doi.org/10.1186/s13663-016-0518-3

