Extra-gradient methods for solving split feasibility and fixed point problems

Abstract

The purpose of this paper is to study the extra-gradient methods for solving split feasibility and fixed point problems involved in pseudo-contractive mappings in real Hilbert spaces. We propose an Ishikawa-type extra-gradient iterative algorithm for finding a solution of the split feasibility and fixed point problems involved in pseudo-contractive mappings with Lipschitz assumption. Moreover, we also suggest a Mann-type extra-gradient iterative algorithm for finding a solution of the split feasibility and fixed point problems involved in pseudo-contractive mappings without Lipschitz assumption. It is proven that under suitable conditions, the sequences generated by the proposed iterative algorithms converge weakly to a solution of the split feasibility and fixed point problems. The results presented in this paper extend and improve some corresponding ones in the literature.

1 Introduction

Let \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) be two real Hilbert spaces and \(C\subset\mathcal{H}_{1}\) and \(Q\subset\mathcal{H}_{2}\) be two nonempty closed convex sets. Let \(A: \mathcal{H}_{1}\rightarrow\mathcal{H}_{2}\) be a bounded linear operator with its adjoint \(A^{*}\). Let \(S: \mathcal {H}_{2}\rightarrow\mathcal{H}_{2}\) and \(T: \mathcal{H}_{1}\rightarrow \mathcal{H}_{1}\) be two nonlinear mappings.

The purpose of this paper is to study the following split feasibility and fixed point problems:

$$ \mbox{Find } x^{*}\in C\cap \operatorname{Fix}(T) \mbox{ such that } Ax^{*}\in Q\cap \operatorname{Fix}(S). $$
(1.1)

We use Γ to denote the set of solutions of (1.1), that is,

$$ \Gamma=\bigl\{ x^{*}: x^{*}\in C\cap \operatorname{Fix}(T),Ax^{*}\in Q\cap \operatorname{Fix}(S)\bigr\} . $$

In the sequel, we assume \(\Gamma\neq\emptyset\).

A special case of the split feasibility and fixed point problems is the split feasibility problem (SFP):

$$ \mbox{Find } x^{*}\in C \mbox{ such that } Ax^{*}\in Q. $$
(1.2)

We use \(\Gamma_{0}\) to denote the set of solutions of (1.2), that is,

$$ \Gamma_{0}=\bigl\{ x^{*}: x^{*}\in C,Ax^{*}\in Q\bigr\} . $$

The SFP in finite-dimensional Hilbert spaces was first introduced by Censor and Elfving [1] for modeling inverse problems which arise from phase retrievals and in medical image reconstruction [2]. Recently, it has been found that the SFP can be applied to study intensity-modulated radiation therapy (IMRT) [3–5]. In the recent past, a wide variety of iterative algorithms have been used in signal processing and image reconstruction and for solving the SFP; see, for example, [2, 4, 6–10] and the references therein (see also [11–13] for relevant projection methods for solving image recovery problems).

The original algorithm given in [1] involves the computation of the inverse \(A^{-1}\) (assuming that A is invertible) and has therefore not become popular. A more popular algorithm for solving the SFP is the CQ algorithm presented by Byrne [2]:

$$ x_{n+1}=P_{C} \bigl(I-\gamma A^{*}(I-P_{Q})A \bigr)x_{n},\quad n\geq0, $$
(1.3)

where the initial guess \(x_{0}\in\mathcal{H}_{1}\) and \(\gamma\in(0,\frac {2}{\lambda})\), with λ being the largest eigenvalue of the matrix \(A^{*}A\). Algorithm (1.3) is found to be a gradient-projection method (GPM) in convex minimization. It is also a special case of the proximal forward-backward splitting method [14]. The CQ algorithm only involves the computations of the projections \(P_{C}\) and \(P_{Q}\) onto the sets C and Q, respectively.
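For concreteness, the CQ iteration (1.3) can be sketched in a few lines. The following Python fragment is only an illustration: the finite-dimensional data, the choice of C as a box and Q as a Euclidean ball (so that both projections have closed forms), and the step size \(\gamma=1/\lambda\) are assumptions made here, not part of [2].

import numpy as np

# Minimal sketch of the CQ iteration (1.3):
#   x_{n+1} = P_C( (I - gamma * A^T (I - P_Q) A) x_n ).

def proj_box(x, lo, hi):                 # P_C for C = {x : lo <= x_i <= hi}
    return np.clip(x, lo, hi)

def proj_ball(y, center, radius):        # P_Q for Q = closed Euclidean ball
    d = y - center
    nrm = np.linalg.norm(d)
    return y if nrm <= radius else center + radius * d / nrm

def cq_algorithm(A, x0, proj_C, proj_Q, gamma=None, iters=500):
    if gamma is None:
        lam = np.linalg.norm(A, 2) ** 2  # largest eigenvalue of A^T A
        gamma = 1.0 / lam                # any value in (0, 2/lam) is admissible
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Ax = A @ x
        grad = A.T @ (Ax - proj_Q(Ax))   # A^T (I - P_Q) A x
        x = proj_C(x - gamma * grad)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
x = cq_algorithm(A, rng.standard_normal(5),
                 lambda v: proj_box(v, 0.0, 1.0),
                 lambda v: proj_ball(v, np.zeros(3), 1.0))
print(x)  # an (approximate) solution of the SFP for this toy instance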

Many authors have continued the study of the CQ algorithm and its variants; see [15–21]. In 2010, Xu [15] applied a Mann-type iterative algorithm to the SFP and proposed an averaged CQ algorithm which was proven to be weakly convergent to a solution of the SFP. He derived a weak convergence result which shows that, for suitable choices of the iterative parameters, the sequence generated by the algorithm converges weakly to a solution of the SFP.

On the other hand, in 1976, to study the saddle point problem, Korpelevich [22] introduced the so-called extra-gradient method:

$$ \textstyle\begin{cases} y_{n}=P_{C} (x_{n}-\lambda\mathcal{A}x_{n} ), \\ x_{n+1}=P_{C} (x_{n}-\lambda\mathcal{A}y_{n} ),\quad n\geq0, \end{cases} $$

where \(\lambda>0\) and the operator \(\mathcal{A}\) is monotone and Lipschitz continuous.
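A minimal sketch of this two-projection iteration, for the variational inequality over a set C, may be helpful. The concrete data below (C a box, \(\mathcal{A}\) an affine monotone map, and the particular step size) are illustrative assumptions and are not taken from [22].

import numpy as np

# Korpelevich's extra-gradient iteration:
#   y_n = P_C(x_n - lam * F(x_n)),  x_{n+1} = P_C(x_n - lam * F(y_n)).

def proj_box(x, lo, hi):
    return np.clip(x, lo, hi)

def extragradient(F, proj_C, x0, lam, iters=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = proj_C(x - lam * F(x))       # prediction step
        x = proj_C(x - lam * F(y))       # correction step re-evaluates F at y
    return x

M = np.array([[0.0, 1.0], [-1.0, 0.0]])  # skew-symmetric: monotone, not strongly monotone
q = np.array([1.0, -1.0])
F = lambda x: M @ x + q
lam = 0.5 / np.linalg.norm(M, 2)         # step size below 1/L, L the Lipschitz constant
print(extragradient(F, lambda v: proj_box(v, -5.0, 5.0), np.zeros(2), lam))
# approaches (-1, -1), the solution of the corresponding variational inequality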

Very recently, Ceng et al. [23] studied an extra-gradient method for finding a common element of the solution set \(\Gamma_{0}\) of the SFP and the set \(\operatorname{Fix}(S)\) of fixed points of a nonexpansive mapping S in the setting of infinite-dimensional Hilbert spaces. Motivated and inspired by Nadezhkina and Takahashi [24], they proposed the following iterative algorithm:

$$ \textstyle\begin{cases} x_{0}\in C \quad \mbox{chosen arbitrarily}, \\ y_{n}=P_{C} (I-\lambda_{n} \nabla f )x_{n}, \\ x_{n+1}=\beta_{n}x_{n}+(1-\beta_{n})SP_{C} (x_{n}-\lambda_{n} \nabla f(y_{n}) ), \quad n\geq0, \end{cases} $$
(1.4)

where \(\nabla f=A^{*}(I-P_{Q})A\) is the gradient of \(f(x)=\frac{1}{2}\Vert (I-P_{Q})Ax\Vert ^{2}\), \(\{\lambda_{n}\}\subset[a,b]\) for some \(a, b\in (0, \frac{1}{\|A\|^{2}} )\) and \(\{\beta_{n}\}\subset[c, d]\) for some \(c, d\in(0, 1)\). The authors proved that the sequences generated by (1.4) converge weakly to an element \(x\in\Gamma_{0}\cap \operatorname{Fix}(S)\).

In 2014, Yao et al. [25] studied the split feasibility and fixed point problems. They constructed an iterative algorithm in the following way:

$$ \textstyle\begin{cases} u_{n}=P_{C} (\alpha_{n}u+(1-\alpha_{n}) (x_{n}-\delta A^{*}(I-SP_{Q})Ax_{n} ) ), \\ x_{n+1}=(1-\beta_{n})u_{n}+\beta_{n}T((1-\gamma_{n})u_{n}+\gamma_{n}Tu_{n}),\quad n\geq0, \end{cases} $$
(1.5)

where \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), \(\{\gamma_{n}\}\) are three real number sequences in \((0,1)\) and δ is a constant in \((0,\frac {1}{\|A\|^{2}} )\). The authors proved that the sequences generated by (1.5) converge strongly to a solution of the split feasibility and fixed point problems.

In this paper, motivated by the work of Ceng et al. [23] and Yao et al. [25], we propose an Ishikawa-type extra-gradient iterative algorithm for finding a solution of the split feasibility and fixed point problems involved in pseudo-contractive mappings with Lipschitz assumption. On the other hand, we also suggest a Mann-type extra-gradient iterative algorithm for finding a solution of the split feasibility and fixed point problems involved in pseudo-contractive mappings without Lipschitz assumption. We establish weak convergence theorems for the sequences generated by the proposed iterative algorithms. Our results substantially improve and extend the corresponding results in [15, 23–25], for example, [15], Theorem 3.6, [23], Theorem 3.2, [24], Theorem 3.1 and [25], Theorem 3.2.

2 Preliminaries

Let \(\mathcal{H}\) be a real Hilbert space whose inner product and norm are denoted by \(\langle\cdot,\cdot\rangle\) and \(\|\cdot\|\), respectively. Let C be a nonempty closed convex subset of \(\mathcal {H}\). We write \(x_{n}\rightharpoonup x\) to indicate that the sequence \(\{x_{n}\}\) converges weakly to x and \(x_{n}\rightarrow x\) to indicate that the sequence \(\{x_{n}\}\) converges strongly to x. Moreover, we use \(\omega_{w}(x_{n})\) to denote the weak ω-limit set of the sequence \(\{x_{n}\}\), that is,

$$ \omega_{w}(x_{n})=\bigl\{ x:x_{n_{i}}\rightharpoonup x \mbox{ for some subsequence } \{x_{n_{i}}\} \mbox{ of } \{x_{n}\}\bigr\} . $$

Projections are an important tool for our work in this paper. Recall that the (nearest point or metric) projection from \(\mathcal{H}\) onto C, denoted by \(P_{C}\), is defined in such a way that, for each \(x\in \mathcal{H}\), \(P_{C}x\) is the unique point in C with the property

$$ \Vert x-P_{C}x\Vert =\min\bigl\{ \Vert x-y\Vert :y\in C\bigr\} . $$

Some important properties of projections are gathered in the following proposition.

Proposition 2.1

Given \(x\in\mathcal{H}\) and \(z\in C\),

  (1)

    \(z=P_{C}x\Leftrightarrow\langle x-z, y-z\rangle\leq0\) for all \(y\in C\);

  (2)

    \(z=P_{C}x\Leftrightarrow\|x-z\|^{2}\leq\|x-y\|^{2}-\|y-z\|^{2}\) for all \(y\in C\);

  (3)

    \(\langle x-y, P_{C}x-P_{C}y \rangle\geq \Vert P_{C}x-P_{C}y \Vert ^{2}\) for all \(y\in\mathcal{H}\), which hence implies that \(P_{C}\) is nonexpansive.
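Since all the algorithms below are assembled from such projections, the following small Python sketch may help; the concrete sets (a box and a half-space) are chosen only because their projections have closed forms, and the last lines numerically check the characterization in Proposition 2.1(1).

import numpy as np

def proj_box(x, lo, hi):
    """P_C for C = {x : lo <= x_i <= hi componentwise}."""
    return np.clip(x, lo, hi)

def proj_halfspace(x, a, b):
    """P_C for C = {x : <a, x> <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

# check of Proposition 2.1(1): <x - P_C x, y - P_C x> <= 0 for every y in C
x = np.array([2.0, -3.0, 0.5])
p = proj_box(x, 0.0, 1.0)
y = np.array([0.2, 0.9, 0.1])            # an arbitrary point of C
print(np.dot(x - p, y - p) <= 1e-12)     # True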

We also need other sorts of nonlinear operators which are stated as follows.

Definition 2.1

A nonlinear operator \(T: \mathcal{H}\rightarrow\mathcal {H}\) is said to be

  (1)

    L-Lipschitzian if there exists \(L>0\) such that

    $$ \Vert Tx-Ty\Vert \leq L\|x-y\|,\quad \forall x,y\in\mathcal{H}; $$

    if \(L=1\), we call T nonexpansive;

  (2)

    firmly nonexpansive if \(2T-I\) is nonexpansive, or equivalently,

    $$ \langle x-y, Tx-Ty \rangle\geq \Vert Tx-Ty\Vert ^{2},\quad \forall x,y\in\mathcal{H}; $$

    alternatively, T is firmly nonexpansive if and only if T can be expressed as

    $$ T=\frac{1}{2}(I+S), $$

    where \(S:\mathcal{H}\rightarrow\mathcal{H}\) is nonexpansive;

  (3)

    monotone if

    $$ \langle x-y, Tx-Ty \rangle\geq0,\quad \forall x,y\in\mathcal{H}; $$
  (4)

    β-strongly monotone, with \(\beta>0\), if

    $$ \langle x-y, Tx-Ty \rangle\geq\beta\|x-y\|^{2},\quad \forall x,y\in \mathcal{H}; $$
  (5)

    ν-inverse strongly monotone (ν-ism), with \(\nu>0\), if

    $$ \langle x-y, Tx-Ty \rangle\geq\nu \Vert Tx-Ty\Vert ^{2},\quad \forall x,y\in\mathcal{H}. $$

Inverse strongly monotone (also referred to as co-coercive) operators have been widely applied in solving practical problems in various fields, for instance, in traffic assignment problems; see, for example, [26, 27].

It is well known that the metric projection \(P_{C}:\mathcal{H}\rightarrow C\) is firmly nonexpansive, that is,

$$\begin{aligned}& \langle x-y, P_{C}x-P_{C}y \rangle\geq \Vert P_{C}x-P_{C}y\Vert ^{2} \\ & \quad \Leftrightarrow\quad \Vert P_{C}x-P_{C}y\Vert ^{2}\leq\|x-y\|^{2}-\bigl\Vert (I-P_{C} )x- (I-P_{C} )y\bigr\Vert ^{2}, \quad \forall x,y\in \mathcal{H}. \end{aligned}$$
(2.1)

For all \(x, y\in\mathcal{H}\), the following conclusions hold:

$$ \bigl\Vert tx+(1-t)y\bigr\Vert ^{2}=t\|x \|^{2}+(1-t)\|y\|^{2}-t(1-t)\|x-y\|^{2},\quad t \in[0,1] $$
(2.2)

and

$$ \|x+y\|^{2}=\|x\|^{2}+2\langle x,y\rangle+\|y \|^{2}. $$
(2.3)

On the other hand, in a real Hilbert space \(\mathcal{H}\), a mapping \(T:C\rightarrow C\) is called pseudo-contractive if

$$ \langle Tx-Ty, x-y \rangle\leq\|x-y\|^{2}, \quad \forall x,y\in C. $$

It is well known that T is a pseudo-contractive mapping if and only if

$$ \Vert Tx-Ty\Vert ^{2}\leq\|x-y\|^{2}+\bigl\Vert (I-T )x- (I-T )y\bigr\Vert ^{2} , \quad \forall x,y\in C. $$
(2.4)

The notation \(\operatorname{Fix}(T)\) denotes the set of fixed points of the mapping T, that is, \(\operatorname{Fix}(T)=\{x\in\mathcal{H}:Tx=x\}\).

Proposition 2.2

[6]

Let \(T: \mathcal{H}\rightarrow\mathcal{H}\) be a given mapping.

  (1)

    T is nonexpansive if and only if the complement \(I-T\) is \(\frac {1}{2}\)-ism.

  (2)

    If T is ν-ism, then for \(\gamma>0\), γT is \(\frac{\nu }{\gamma}\)-ism.

  (3)

    T is averaged if and only if the complement \(I-T\) is ν-ism for some \(\nu>\frac{1}{2}\). Indeed, for \(\alpha\in(0,1)\), T is α-averaged if and only if \(I-T\) is \(\frac{1}{2\alpha}\)-ism.

Proposition 2.3

Let T be a pseudo-contractive mapping with a nonempty fixed point set \(\operatorname{Fix}(T)\). Then the following conclusion holds:

$$ \bigl\langle Ty-y, Ty-x^{*} \bigr\rangle \leq \Vert Ty-y\Vert ^{2}, \quad \forall y\in C, \forall x^{*}\in \operatorname{Fix}(T). $$

Proof

From the definition of a pseudo-contractive mapping T, we have

$$\begin{aligned} \bigl\langle Ty-y, Ty-x^{*}\bigr\rangle &=\|Ty-y\|^{2}+\bigl\langle Ty-y, y-x^{*}\bigr\rangle \\ &=\|Ty-y\|^{2}+\bigl\langle Ty-x^{*}, y-x^{*}\bigr\rangle -\bigl\Vert y-x^{*}\bigr\Vert ^{2} \\ &\leq\|Ty-y\|^{2}. \end{aligned}$$

 □

Generally speaking, pseudo-contractive mappings are assumed to be L-Lipschitzian with \(L>1\). Next, in order to dispense with the Lipschitz assumption, we assume instead that the pseudo-contractive mapping T satisfies the following condition:

$$ \bigl\langle Ty-y, Ty-x^{*}\bigr\rangle \leq0,\quad \forall y\in C, \forall x^{*}\in \operatorname{Fix}(T). $$
(2.5)

The following demiclosedness principle for pseudo-contractive mappings will often be used in the sequel.

Lemma 2.1

[28]

Let \(\mathcal{H}\) be a real Hilbert space, C be a closed convex subset of \(\mathcal{H}\). Let \(T:C\rightarrow C\) be a continuous pseudo-contractive mapping. Then

  (1)

    \(\operatorname{Fix}(T)\) is a closed convex subset of C;

  (2)

    \((I-T)\) is demiclosed at zero.

The following result is useful when we prove weak convergence of a sequence.

Lemma 2.2

[29]

Let \(\mathcal{H}\) be a Hilbert space and \(\{x_{n}\}\) be a bounded sequence in \(\mathcal{H}\) such that there exists a nonempty closed convex subset C of \(\mathcal{H}\) satisfying:

  (1)

    for every \(w\in C\), \(\lim_{n\rightarrow\infty}\|x_{n}-w\|\) exists;

  (2)

    each weak-cluster point of the sequence \(\{x_{n}\}\) is in C.

Then \(\{x_{n}\}\) converges weakly to a point in C.

We can use fixed point algorithms to solve the SFP on the basis of the following observation.

Let \(\lambda>0\) and assume that \(x^{*}\) solves the SFP. Then \(Ax^{*}\in Q\), which implies that \((I-P_{Q})Ax^{*}=0\), and thus, \(\lambda(I-P_{Q})Ax^{*}=0\). Hence, we have the fixed point equation \(x^{*}= (I-\lambda A^{*}(I-P_{Q})A )x^{*}\). Requiring that \(x^{*}\in C\), we consider the fixed point equation

$$ x^{*}=P_{C} \bigl(I-\lambda A^{*}(I-P_{Q})A \bigr)x^{*}=P_{C} (I-\lambda\nabla f )x^{*}. $$
(2.6)

It is proven in [15] that the solutions of the fixed point equation (2.6) are exactly the solutions of the SFP; namely, for given \(x^{*}\in\mathcal{H}_{1}\), \(x^{*}\) solves the SFP if and only if \(x^{*}\) solves the fixed point equation (2.6).
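As a quick numerical counterpart of this equivalence, one can measure the residual of the fixed point equation (2.6) at a candidate point; it vanishes exactly at solutions of the SFP. The sketch below is illustrative and assumes, as in the CQ sketch of Section 1, that the projections are supplied as closed-form callables.

import numpy as np

# Residual of (2.6):  r(x) = || x - P_C( x - lam * A^T (I - P_Q) A x ) ||.

def sfp_residual(A, x, proj_C, proj_Q, lam):
    Ax = A @ x
    grad = A.T @ (Ax - proj_Q(Ax))       # nabla f(x) = A^T (I - P_Q) A x
    return np.linalg.norm(x - proj_C(x - lam * grad))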

3 Ishikawa-type extra-gradient iterative algorithm involved in pseudo-contractive mappings with Lipschitz assumption

We are now in a position to propose an Ishikawa-type extra-gradient iterative algorithm for solving the split feasibility and fixed point problems involved in pseudo-contractive mappings with Lipschitz assumption.

Theorem 3.1

Let \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) be two real Hilbert spaces and let \(C\subset\mathcal{H}_{1}\) and \(Q\subset\mathcal{H}_{2}\) be two nonempty closed convex sets. Let \(A:\mathcal{H}_{1}\rightarrow\mathcal{H}_{2}\) be a bounded linear operator. Let \(S:Q\rightarrow Q\) be a nonexpansive mapping and let \(T:C\rightarrow C\) be an L-Lipschitzian pseudo-contractive mapping with \(L>1\). For \(x_{0}\in\mathcal{H}_{1}\) arbitrarily, let \(\{x_{n}\}\) be a sequence defined by the following Ishikawa-type extra-gradient iterative algorithm:

$$ \textstyle\begin{cases} y_{n}=P_{C} (x_{n}-\lambda_{n} A^{*}(I-SP_{Q})Ax_{n} ), \\ z_{n}=P_{C} (x_{n}-\lambda_{n} A^{*}(I-SP_{Q})Ay_{n} ), \\ w_{n}=(1-\alpha_{n})z_{n}+\alpha_{n}Tz_{n}, \\ x_{n+1}=(1-\beta_{n})z_{n}+\beta_{n}Tw_{n},\quad n\geq0, \end{cases} $$
(3.1)

where \(\{\lambda_{n}\}\subset (0, \frac{1}{2\|A\|^{2}} )\) and \(\{ \alpha_{n}\}, \{\beta_{n}\}\subset(0, 1)\) such that \(0< a<\beta_{n}<c<\alpha _{n}<b<\frac{1}{\sqrt{1+L^{2}}+1}\).

Then the sequence \(\{x_{n}\}\) generated by algorithm (3.1) converges weakly to an element of Γ.
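Before turning to the proof, we note that algorithm (3.1) can be transcribed directly into code. The following Python sketch is illustrative only: the problem data A, \(P_{C}\), \(P_{Q}\), S, T are passed in as a matrix and callables, and any concrete instantiation (one is given in Section 5) is an assumption of the user rather than part of the theorem.

import numpy as np

# Direct transcription of algorithm (3.1); lam should lie in (0, 1/(2||A||^2))
# and 0 < beta < alpha < 1/(sqrt(1+L^2)+1) as in Theorem 3.1.

def ishikawa_extragradient(A, proj_C, proj_Q, S, T, x0, lam, alpha, beta, iters=200):
    AT = A.T

    def grad(u):                              # A^T (I - S P_Q) A u
        Au = A @ u
        return AT @ (Au - S(proj_Q(Au)))

    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = proj_C(x - lam * grad(x))         # prediction projection
        z = proj_C(x - lam * grad(y))         # extra-gradient correction projection
        w = (1 - alpha) * z + alpha * T(z)    # inner Ishikawa step
        x = (1 - beta) * z + beta * T(w)      # outer Ishikawa step
    return x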

Proof

Taking \(x^{*}\in\Gamma\), we have \(x^{*}\in C\cap \operatorname{Fix}(T)\) and \(Ax^{*}\in Q\cap \operatorname{Fix}(S)\). For simplicity, we write \(\nabla f^{S}=A^{*}(I-SP_{Q})A\), \(v_{n}=P_{Q}Ax_{n}\), \(u_{n}=x_{n}-\lambda_{n} A^{*}(I-SP_{Q})Ax_{n}\) for all \(n\geq0\). Thus, we have \(y_{n}=P_{C}u_{n}\) for all \(n\geq0\). By (2.1), we get

$$\begin{aligned} \bigl\Vert Sv_{n}-Ax^{*}\bigr\Vert ^{2} &= \bigl\Vert SP_{Q}Ax_{n}-SP_{Q}Ax^{*}\bigr\Vert ^{2} \\ &\leq\bigl\Vert P_{Q}Ax_{n}-P_{Q}Ax^{*}\bigr\Vert ^{2} \\ &\leq\bigl\Vert Ax_{n}-Ax^{*}\bigr\Vert ^{2}-\Vert v_{n}-Ax_{n}\Vert ^{2}. \end{aligned}$$
(3.2)

Since \(P_{C}\) is nonexpansive, using (2.3), we get

$$\begin{aligned} \bigl\Vert y_{n}-x^{*}\bigr\Vert ^{2}&= \bigl\Vert P_{C}u_{n}-x^{*}\bigr\Vert ^{2}\leq \bigl\Vert u_{n}-x^{*}\bigr\Vert ^{2} \\ &=\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+2 \lambda_{n}\bigl\langle x_{n}-x^{*}, A^{*}(Sv_{n}-Ax_{n}) \bigr\rangle +\lambda_{n}^{2}\bigl\Vert A^{*}(Sv_{n}-Ax_{n})\bigr\Vert ^{2}. \end{aligned}$$
(3.3)

Since A is a linear operator with its adjoint \(A^{*}\), we have

$$\begin{aligned} \bigl\langle x_{n}-x^{*}, A^{*}(Sv_{n}-Ax_{n}) \bigr\rangle &=\bigl\langle Ax_{n}-Ax^{*}, Sv_{n}-Ax_{n} \bigr\rangle \\ &=\bigl\langle Ax_{n}-Ax^{*}+Sv_{n}-Ax_{n}-(Sv_{n}-Ax_{n}), Sv_{n}-Ax_{n}\bigr\rangle \\ &=\bigl\langle Sv_{n}-Ax^{*}, Sv_{n}-Ax_{n}\bigr\rangle -\|Sv_{n}-Ax_{n}\|^{2}. \end{aligned}$$
(3.4)

Again using (2.3), we obtain

$$ \bigl\langle Sv_{n}-Ax^{*}, Sv_{n}-Ax_{n} \bigr\rangle =\frac{1}{2} \bigl(\bigl\Vert Sv_{n}-Ax^{*}\bigr\Vert ^{2}+\Vert Sv_{n}-Ax_{n}\Vert ^{2}-\bigl\Vert Ax_{n}-Ax^{*}\bigr\Vert ^{2} \bigr). $$
(3.5)

From (3.2), (3.4) and (3.5), we get

$$\begin{aligned}& \bigl\langle x_{n}-x^{*}, A^{*}(Sv_{n}-Ax_{n}) \bigr\rangle \\& \quad =\frac{1}{2} \bigl(\bigl\Vert Sv_{n}-Ax^{*}\bigr\Vert ^{2}+\Vert Sv_{n}-Ax_{n}\Vert ^{2}- \bigl\Vert Ax_{n}-Ax^{*}\bigr\Vert ^{2} \bigr)-\Vert Sv_{n}-Ax_{n}\Vert ^{2} \\& \quad \leq\frac{1}{2} \bigl(\bigl\Vert Ax_{n}-Ax^{*}\bigr\Vert ^{2}-\Vert v_{n}-Ax_{n}\Vert ^{2}+\Vert Sv_{n}-Ax_{n}\Vert ^{2}- \bigl\Vert Ax_{n}-Ax^{*}\bigr\Vert ^{2} \bigr) \\& \qquad {} -\Vert Sv_{n}-Ax_{n}\Vert ^{2} \\& \quad =-\frac{1}{2}\Vert v_{n}-Ax_{n}\Vert ^{2}-\frac{1}{2}\Vert Sv_{n}-Ax_{n} \Vert ^{2}. \end{aligned}$$
(3.6)

Substituting (3.6) into (3.3) and by the assumption of \(\{\lambda_{n}\} \), we deduce

$$\begin{aligned}& \bigl\Vert y_{n}-x^{*}\bigr\Vert ^{2} \\& \quad \leq\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}+2 \lambda_{n} \biggl(-\frac{1}{2}\Vert v_{n}-Ax_{n} \Vert ^{2}-\frac {1}{2}\Vert Sv_{n}-Ax_{n} \Vert ^{2} \biggr)+\lambda_{n}^{2}\Vert A \Vert ^{2}\Vert Sv_{n}-Ax_{n}\Vert ^{2} \\& \quad =\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}- \lambda_{n}\Vert v_{n}-Ax_{n}\Vert ^{2}-\lambda_{n} \bigl(1-\lambda _{n}\Vert A \Vert ^{2} \bigr)\Vert Sv_{n}-Ax_{n}\Vert ^{2} \\& \quad \leq\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}. \end{aligned}$$
(3.7)

Since S and \(P_{Q}\) are nonexpansive, we know that composition operator \(SP_{Q}\) is still nonexpansive. By Proposition 2.2(1) the complement \(I-SP_{Q}\) is \(\frac{1}{2}\)-ism. Therefore, it is easy to see that \(\nabla f^{S}=A^{*}(I-SP_{Q})A\) is \(\frac{1}{2\|A\|^{2}}\)-ism, that is,

$$ \bigl\langle x-y, \nabla f^{S}(x)-\nabla f^{S}(y) \bigr\rangle \geq\frac{1}{2\|A\|^{2}}\bigl\Vert \nabla f^{S}(x)-\nabla f^{S}(y)\bigr\Vert ^{2}. $$
(3.8)

This together with Proposition 2.1(2) implies that

$$\begin{aligned}& \bigl\Vert z_{n}-x^{*}\bigr\Vert ^{2} \\& \quad \leq\bigl\Vert x_{n}-\lambda_{n}\nabla f^{S}(y_{n})-x^{*}\bigr\Vert ^{2}-\bigl\Vert x_{n}-\lambda_{n}\nabla f^{S}(y_{n})-z_{n} \bigr\Vert ^{2} \\& \quad =\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-\Vert x_{n}-z_{n}\Vert ^{2}+2\lambda_{n} \bigl\langle \nabla f^{S}(y_{n}), x^{*}-z_{n}\bigr\rangle \\& \quad =\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-\Vert x_{n}-z_{n}\Vert ^{2}+2\lambda_{n} \bigl(\bigl\langle \nabla f^{S}(y_{n})-\nabla f^{S} \bigl(x^{*}\bigr), x^{*}-y_{n}\bigr\rangle \\& \qquad {} +\bigl\langle \nabla f^{S}\bigl(x^{*}\bigr), x^{*}-y_{n}\bigr\rangle +\bigl\langle \nabla f^{S}(y_{n}), y_{n}-z_{n}\bigr\rangle \bigr) \\& \quad \leq\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-\Vert x_{n}-z_{n}\Vert ^{2}+2\lambda_{n} \bigl\langle \nabla f^{S}(y_{n}), y_{n}-z_{n} \bigr\rangle \\& \quad =\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-\Vert x_{n}-y_{n}\Vert ^{2}-\Vert y_{n}-z_{n}\Vert ^{2}+2\bigl\langle x_{n}-\lambda _{n}\nabla f^{S}(y_{n})-y_{n}, z_{n}-y_{n}\bigr\rangle . \end{aligned}$$

Further, by Proposition 2.1(1) and (3.8), we have

$$\begin{aligned}& \bigl\langle x_{n}-\lambda_{n}\nabla f^{S}(y_{n})-y_{n}, z_{n}-y_{n} \bigr\rangle \\& \quad = \bigl\langle x_{n}-\lambda_{n}\nabla f^{S}(x_{n})-y_{n}, z_{n}-y_{n} \bigr\rangle +\lambda_{n} \bigl\langle \nabla f^{S}(x_{n})- \nabla f^{S}(y_{n}), z_{n}-y_{n} \bigr\rangle \\& \quad \leq\lambda_{n} \bigl\langle \nabla f^{S}(x_{n})- \nabla f^{S}(y_{n}), z_{n}-y_{n} \bigr\rangle \\& \quad \leq\lambda_{n}\bigl\Vert \nabla f^{S}(x_{n})- \nabla f^{S}(y_{n})\bigr\Vert \| z_{n}-y_{n} \| \\& \quad \leq2\lambda_{n}\|A\|^{2}\|x_{n}-y_{n} \|\|z_{n}-y_{n}\|. \end{aligned}$$

By the assumption of \(\{\lambda_{n}\}\), we obtain

$$\begin{aligned}& \bigl\Vert z_{n}-x^{*}\bigr\Vert ^{2} \\& \quad \leq\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-\Vert x_{n}-y_{n}\Vert ^{2}-\Vert y_{n}-z_{n}\Vert ^{2}+2 \bigl\langle x_{n}-\lambda _{n}\nabla f^{S}(y_{n})-y_{n}, z_{n}-y_{n} \bigr\rangle \\& \quad \leq\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-\Vert x_{n}-y_{n}\Vert ^{2}-\Vert y_{n}-z_{n}\Vert ^{2}+4\lambda_{n} \Vert A\Vert ^{2}\Vert x_{n}-y_{n}\Vert \Vert z_{n}-y_{n}\Vert \\& \quad \leq\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-\Vert x_{n}-y_{n}\Vert ^{2}-\Vert y_{n}-z_{n}\Vert ^{2}+\Vert z_{n}-y_{n}\Vert ^{2}+4\lambda _{n}^{2}\Vert A\Vert ^{4}\Vert x_{n}-y_{n}\Vert ^{2} \\& \quad =\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}- \bigl(1-4 \lambda_{n}^{2}\Vert A\Vert ^{4} \bigr)\Vert x_{n}-y_{n}\Vert ^{2} \\& \quad \leq\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}. \end{aligned}$$
(3.9)

Similarly, we have

$$\begin{aligned} \bigl\Vert z_{n}-x^{*}\bigr\Vert ^{2} &\leq\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}- \bigl(1-4\lambda_{n}^{2} \Vert A\Vert ^{4} \bigr)\Vert z_{n}-y_{n} \Vert ^{2} \\ &\leq\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}. \end{aligned}$$

From (2.4), we have

$$ \bigl\Vert Tz_{n}-x^{*}\bigr\Vert ^{2}\leq \bigl\Vert z_{n}-x^{*}\bigr\Vert ^{2}+\|z_{n}-Tz_{n} \|^{2} $$
(3.10)

and

$$\begin{aligned} \bigl\Vert Tw_{n}-x^{*}\bigr\Vert ^{2} =& \bigl\Vert T \bigl((1-\alpha_{n})z_{n}+ \alpha_{n}Tz_{n} \bigr)-x^{*}\bigr\Vert ^{2} \\ \leq&\bigl\Vert (1-\alpha_{n}) \bigl(z_{n}-x^{*}\bigr)+ \alpha_{n} \bigl(Tz_{n}-x^{*} \bigr)\bigr\Vert ^{2} \\ &{} +\bigl\Vert (1-\alpha_{n})z_{n}+ \alpha_{n}Tz_{n}-T \bigl((1-\alpha_{n})z_{n}+ \alpha _{n}Tz_{n} \bigr)\bigr\Vert ^{2}. \end{aligned}$$
(3.11)

Applying equality (2.2), we have

$$\begin{aligned}& \bigl\Vert (1-\alpha_{n})z_{n}+ \alpha_{n}Tz_{n}-T \bigl((1-\alpha_{n})z_{n}+ \alpha _{n}Tz_{n} \bigr)\bigr\Vert ^{2} \\& \quad =\bigl\Vert (1-\alpha_{n}) \bigl(z_{n}-T\bigl((1- \alpha_{n})z_{n}+\alpha_{n}Tz_{n} \bigr) \bigr)+\alpha_{n} \bigl(Tz_{n}-T\bigl((1- \alpha_{n})z_{n}+\alpha_{n}Tz_{n} \bigr) \bigr)\bigr\Vert ^{2} \\& \quad =(1-\alpha_{n})\bigl\Vert z_{n}-T \bigl((1- \alpha_{n})z_{n}+\alpha_{n}Tz_{n} \bigr)\bigr\Vert ^{2}+\alpha_{n}\bigl\Vert Tz_{n}-T \bigl((1-\alpha_{n})z_{n}+\alpha _{n}Tz_{n} \bigr)\bigr\Vert ^{2} \\& \qquad {} -\alpha_{n}(1-\alpha_{n})\|z_{n}-Tz_{n} \|^{2}. \end{aligned}$$
(3.12)

Since T is L-Lipschitzian and \(z_{n}- ((1-\alpha_{n})z_{n}+\alpha _{n}Tz_{n} )=\alpha_{n}(z_{n}-Tz_{n})\), by (3.12), we have

$$\begin{aligned}& \bigl\Vert (1-\alpha_{n})z_{n}+ \alpha_{n}Tz_{n}-T\bigl((1-\alpha_{n})z_{n}+ \alpha _{n}Tz_{n}\bigr)\bigr\Vert ^{2} \\& \quad \leq(1-\alpha_{n})\bigl\Vert z_{n}-T\bigl((1- \alpha_{n})z_{n}+\alpha_{n}Tz_{n} \bigr)\bigr\Vert ^{2}+\alpha_{n}^{3}L^{2} \Vert z_{n}-Tz_{n}\Vert ^{2} \\& \qquad {} -\alpha_{n}(1-\alpha_{n})\|z_{n}-Tz_{n} \|^{2} \\& \quad =(1-\alpha_{n})\bigl\Vert z_{n}-T\bigl((1- \alpha_{n})z_{n}+\alpha_{n}Tz_{n} \bigr)\bigr\Vert ^{2}+ \bigl(\alpha _{n}^{3}L^{2}+ \alpha_{n}^{2}-\alpha_{n} \bigr) \|z_{n}-Tz_{n}\|^{2}. \end{aligned}$$
(3.13)

By (2.2) and (3.10), we have

$$\begin{aligned}& \bigl\Vert (1-\alpha_{n}) \bigl(z_{n}-x^{*} \bigr)+\alpha_{n}\bigl(Tz_{n}-x^{*}\bigr)\bigr\Vert ^{2} \\& \quad =(1-\alpha_{n})\bigl\Vert z_{n}-x^{*}\bigr\Vert ^{2}+\alpha_{n}\bigl\Vert Tz_{n}-x^{*}\bigr\Vert ^{2}-\alpha_{n}(1-\alpha _{n})\Vert z_{n}-Tz_{n}\Vert ^{2} \\& \quad \leq(1-\alpha_{n})\bigl\Vert z_{n}-x^{*}\bigr\Vert ^{2}+\alpha_{n} \bigl(\bigl\Vert z_{n}-x^{*}\bigr\Vert ^{2}+\Vert z_{n}-Tz_{n}\Vert ^{2} \bigr)-\alpha_{n}(1-\alpha_{n})\Vert z_{n}-Tz_{n}\Vert ^{2} \\& \quad =\bigl\Vert z_{n}-x^{*}\bigr\Vert ^{2}+ \alpha_{n}^{2}\Vert z_{n}-Tz_{n} \Vert ^{2}. \end{aligned}$$
(3.14)

From (3.11), (3.13) and (3.14), we deduce

$$\begin{aligned} \bigl\Vert Tw_{n}-x^{*}\bigr\Vert ^{2} =& \bigl\Vert T\bigl((1-\alpha_{n})z_{n}+ \alpha_{n}Tz_{n}\bigr)-x^{*}\bigr\Vert ^{2} \\ \leq&\bigl\Vert z_{n}-x^{*}\bigr\Vert ^{2}+(1- \alpha_{n})\bigl\Vert z_{n}-T\bigl((1- \alpha_{n})z_{n}+\alpha_{n}Tz_{n} \bigr)\bigr\Vert ^{2} \\ &{} -\alpha_{n} \bigl(1-2\alpha_{n}-\alpha_{n}^{2}L^{2} \bigr)\Vert z_{n}-Tz_{n}\Vert ^{2}. \end{aligned}$$
(3.15)

Since \(\alpha_{n}< b<\frac{1}{\sqrt{1+L^{2}}+1}\) and the positive root of \(1-2t-t^{2}L^{2}=0\) is \(t=\frac{\sqrt{1+L^{2}}-1}{L^{2}}=\frac{1}{\sqrt{1+L^{2}}+1}\), we derive that

$$ 1-2\alpha_{n}-\alpha_{n}^{2}L^{2}>0, \quad n\geq0. $$

This together with (3.15) implies that

$$\begin{aligned} \bigl\Vert Tw_{n}-x^{*}\bigr\Vert ^{2}&= \bigl\Vert T\bigl((1-\alpha_{n})z_{n}+ \alpha_{n}Tz_{n}\bigr)-x^{*}\bigr\Vert ^{2} \\ &\leq\bigl\Vert z_{n}-x^{*}\bigr\Vert ^{2}+(1- \alpha_{n})\bigl\Vert z_{n}-T\bigl((1- \alpha_{n})z_{n}+\alpha_{n}Tz_{n} \bigr)\bigr\Vert ^{2}. \end{aligned}$$
(3.16)

By (2.2), (3.1) and (3.16), we have

$$\begin{aligned} \begin{aligned}[b] \bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2}={}& \bigl\Vert (1-\beta_{n})z_{n}+\beta_{n}Tw_{n}-x^{*} \bigr\Vert ^{2} \\ ={}&\bigl\Vert (1-\beta_{n})z_{n}+\beta_{n}T \bigl((1-\alpha_{n})z_{n}+\alpha_{n}Tz_{n} \bigr)-x^{*}\bigr\Vert ^{2} \\ ={}&(1-\beta_{n})\bigl\Vert z_{n}-x^{*}\bigr\Vert ^{2}+\beta_{n}\bigl\Vert T\bigl((1-\alpha_{n})z_{n}+ \alpha _{n}Tz_{n}\bigr)-x^{*}\bigr\Vert ^{2} \\ &{} -\beta_{n}(1-\beta_{n})\bigl\Vert z_{n}-T \bigl((1-\alpha_{n})z_{n}+ \alpha_{n}Tz_{n} \bigr)\bigr\Vert ^{2} \\ \leq{}&\bigl\Vert z_{n}-x^{*}\bigr\Vert ^{2}- \beta_{n}(\alpha_{n}-\beta_{n})\bigl\Vert z_{n}-T\bigl((1-\alpha _{n})z_{n}+ \alpha_{n}Tz_{n}\bigr)\bigr\Vert ^{2} \\ \leq{}&\bigl\Vert z_{n}-x^{*}\bigr\Vert ^{2}. \end{aligned} \end{aligned}$$
(3.17)

This together with (3.9) implies that

$$ \bigl\Vert x_{n+1}-x^{*}\bigr\Vert \leq\bigl\Vert x_{n}-x^{*}\bigr\Vert $$

for every \(x^{*}\in\Gamma\) and for all \(n\geq0\). Thus, the sequence \(\{x_{n}\}\) generated by algorithm (3.1) is Fejér monotone with respect to Γ. In particular, \(\{\|x_{n}-x^{*}\|\}\) is monotonically decreasing, so \(\lim_{n\rightarrow\infty}\|x_{n}-x^{*}\|\) exists and \(\{x_{n}\}\) is bounded. Additionally, we get the boundedness of \(\{y_{n}\}\) and \(\{z_{n}\}\) from (3.7) and (3.9) immediately.

Returning to (3.9) and (3.17), we have

$$\begin{aligned} \bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2}&\leq\bigl\Vert z_{n}-x^{*}\bigr\Vert ^{2} \\ &\leq\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}- \bigl(1-4 \lambda _{n}^{2}\Vert A\Vert ^{4} \bigr)\Vert x_{n}-y_{n}\Vert ^{2}. \end{aligned}$$

Hence,

$$ \bigl(1-4\lambda_{n}^{2}\Vert A\Vert ^{4} \bigr)\Vert x_{n}-y_{n}\Vert ^{2}\leq\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-\bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2}, $$

which implies that

$$ \lim_{n\rightarrow\infty}\|x_{n}-y_{n} \|=0. $$
(3.18)

Similarly, we have

$$ \lim_{n\rightarrow\infty}\|z_{n}-y_{n}\|=0. $$

From (3.7) and (3.18), we have

$$\begin{aligned}& \lambda_{n}\Vert v_{n}-Ax_{n}\Vert ^{2}+\lambda_{n} \bigl(1-\lambda_{n}\Vert A \Vert ^{2} \bigr)\Vert Sv_{n}-Ax_{n}\Vert ^{2} \\& \quad \leq\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-\bigl\Vert y_{n}-x^{*}\bigr\Vert ^{2} \\& \quad \leq \bigl(\bigl\Vert x_{n}-x^{*}\bigr\Vert +\bigl\Vert y_{n}-x^{*}\bigr\Vert \bigr)\Vert x_{n}-y_{n} \Vert , \end{aligned}$$

which implies that

$$ \lim_{n\rightarrow\infty}\|v_{n}-Ax_{n}\|=\lim _{n\rightarrow \infty}\|Sv_{n}-Ax_{n}\|=0. $$

So,

$$ \lim_{n\rightarrow\infty}\|v_{n}-Sv_{n}\|=0. $$

From (3.9) and (3.17), we deduce

$$\begin{aligned} \bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} &\leq\bigl\Vert z_{n}-x^{*}\bigr\Vert ^{2}-\beta_{n}( \alpha_{n}-\beta_{n})\bigl\Vert z_{n}-T \bigl((1-\alpha _{n})z_{n}+\alpha_{n}Tz_{n} \bigr)\bigr\Vert ^{2} \\ &\leq\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}- \beta_{n}(\alpha_{n}-\beta_{n})\bigl\Vert z_{n}-T\bigl((1-\alpha _{n})z_{n}+ \alpha_{n}Tz_{n}\bigr)\bigr\Vert ^{2}. \end{aligned}$$

It follows that

$$ \beta_{n}(\alpha_{n}-\beta_{n})\bigl\Vert z_{n}-T\bigl((1-\alpha_{n})z_{n}+ \alpha_{n}Tz_{n}\bigr)\bigr\Vert ^{2} \leq\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-\bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2}. $$

Therefore,

$$ \lim_{n\rightarrow\infty}\bigl\Vert z_{n}-T \bigl((1-\alpha_{n})z_{n}+\alpha _{n}Tz_{n} \bigr)\bigr\Vert =0. $$
(3.19)

Observe that

$$\begin{aligned} \Vert z_{n}-Tz_{n}\Vert &\leq\bigl\Vert z_{n}-T\bigl((1-\alpha_{n})z_{n}+ \alpha_{n}Tz_{n}\bigr)\bigr\Vert +\bigl\Vert T\bigl((1- \alpha _{n})z_{n}+\alpha_{n}Tz_{n} \bigr)-Tz_{n}\bigr\Vert \\ &\leq\bigl\Vert z_{n}-T\bigl((1-\alpha_{n})z_{n}+ \alpha_{n}Tz_{n}\bigr)\bigr\Vert +\alpha_{n} L \Vert z_{n}-Tz_{n}\Vert . \end{aligned}$$

Thus,

$$ \|z_{n}-Tz_{n}\|\leq\frac{1}{1-\alpha_{n}L}\bigl\Vert z_{n}-T\bigl((1-\alpha_{n})z_{n}+\alpha _{n}Tz_{n}\bigr)\bigr\Vert . $$

This together with (3.19) implies that

$$ \lim_{n\rightarrow\infty}\|z_{n}-Tz_{n}\|=0. $$

Using the firm nonexpansiveness of \(P_{C}\), (2.1) and (3.3), we have

$$\begin{aligned} \bigl\Vert y_{n}-x^{*}\bigr\Vert ^{2}&=\bigl\Vert P_{C}u_{n}-x^{*}\bigr\Vert ^{2}\leq\bigl\Vert u_{n}-x^{*}\bigr\Vert ^{2}-\Vert P_{C}u_{n}-u_{n} \Vert ^{2} \\ &\leq\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-\Vert y_{n}-u_{n}\Vert ^{2}. \end{aligned}$$

It follows that

$$\begin{aligned} \Vert y_{n}-u_{n}\Vert ^{2}&\leq\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-\bigl\Vert y_{n}-x^{*} \bigr\Vert ^{2} \\ &\leq\bigl(\bigl\Vert x_{n}-x^{*}\bigr\Vert +\bigl\Vert y_{n}-x^{*}\bigr\Vert \bigr)\Vert x_{n}-y_{n} \Vert . \end{aligned}$$

From (3.18), we deduce

$$ \lim_{n\rightarrow\infty}\|y_{n}-u_{n}\|=0. $$

Since the sequence \(\{x_{n}\}\) is bounded, we can choose a subsequence \(\{x_{n_{i}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{i}}\rightharpoonup\hat {x}\). Consequently, we derive from the above conclusions that

$$ \left \{ \textstyle\begin{array}{l} x_{n_{i}}\rightharpoonup\hat{x}, \\ y_{n_{i}}\rightharpoonup\hat{x}, \\ u_{n_{i}}\rightharpoonup\hat{x}, \\ z_{n_{i}}\rightharpoonup\hat{x} \end{array}\displaystyle \right .\quad \mbox{and} \quad \left \{ \textstyle\begin{array}{l} Ax_{n_{i}}\rightharpoonup A\hat{x}, \\ v_{n_{i}}\rightharpoonup A\hat{x}. \end{array}\displaystyle \right . $$
(3.20)

Applying Lemma 2.1, we deduce

$$ \hat{x}\in \operatorname{Fix}(T) \quad \mbox{and} \quad A\hat{x}\in \operatorname{Fix}(S). $$

Note that \(y_{n_{i}}=P_{C}u_{n_{i}}\in C\) and \(v_{n_{i}}=P_{Q}Ax_{n_{i}}\). From (3.20), we deduce

$$ \hat{x}\in C \quad \mbox{and}\quad A\hat{x}\in Q. $$

Consequently, we deduce

$$ \hat{x}\in C\cap \operatorname{Fix}(T) \quad \mbox{and} \quad A\hat{x}\in Q\cap \operatorname{Fix}(S). $$

That is to say, \(\hat{x}\in\Gamma\). This shows that \(\omega_{w}(x_{n})\subset\Gamma\). Since \(\lim_{n\rightarrow\infty}\|x_{n}-x^{*}\|\) exists for every \(x^{*}\in\Gamma\), the weak convergence of the whole sequence \(\{x_{n}\}\) to a point of Γ follows by applying Lemma 2.2. This completes the proof. □

Remark 3.1

Theorem 3.1 improves, extends and develops [15], Theorem 3.6, [23], Theorem 3.2, [24], Theorem 3.1 and [25], Theorem 3.2 in the following aspects.

  • Theorem 3.1 extends the extra-gradient method due to Nadezhkina and Takahashi [24], Theorem 3.1.

  • The corresponding iterative algorithms in [15], Theorem 3.6 and [23], Theorem 3.2 are extended for developing our Ishikawa-type extra-gradient iterative algorithm involved in pseudo-contractive mappings with Lipschitz assumption in Theorem 3.1.

  • The technique of proving weak convergence in Theorem 3.1 is different from those in [15], Theorem 3.6 and [23], Theorem 3.2 because our technique depends only on the demiclosedness principle for pseudo-contractive mappings in Hilbert spaces.

  • The problem of finding an element of Γ is more general than the problem of finding a solution of the SFP in [15], Theorem 3.6 and the problem of finding an element of \(\Gamma_{0}\cap \operatorname{Fix}(S)\) with \(S:C\rightarrow C\) being a nonexpansive mapping in [23], Theorem 3.2.

  • Algorithm 3.1 of Yao et al. [25] is extended to develop the Ishikawa-type extra-gradient iterative algorithm in our Theorem 3.1 by virtue of the extra-gradient method.

Furthermore, we can immediately obtain the following weak convergence results.

Corollary 3.1

Let \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) be two real Hilbert spaces and let \(C\subset\mathcal{H}_{1}\) and \(Q\subset\mathcal{H}_{2}\) be two nonempty closed convex sets. Let \(A:\mathcal{H}_{1}\rightarrow\mathcal{H}_{2}\) be a bounded linear operator. Let \(S:Q\rightarrow Q\) be a nonexpansive mapping and let \(T:C\rightarrow C\) be an L-Lipschitzian pseudo-contractive mapping with \(L>1\). For \(x_{0}\in\mathcal{H}_{1}\) arbitrarily, let \(\{x_{n}\}\) be a sequence defined by the following Ishikawa-type iterative algorithm:

$$ \textstyle\begin{cases} y_{n}=P_{C} (x_{n}-\lambda_{n} A^{*}(I-SP_{Q})Ax_{n} ), \\ z_{n}=(1-\alpha_{n})y_{n}+\alpha_{n}Ty_{n}, \\ x_{n+1}=(1-\beta_{n})y_{n}+\beta_{n}Tz_{n}, \quad n\geq0, \end{cases} $$
(3.21)

where \(\{\lambda_{n}\}\subset (0, \frac{1}{\|A\|^{2}} )\) and \(\{ \alpha_{n}\}, \{\beta_{n}\}\subset(0, 1)\) such that \(0< a<\beta_{n}<c<\alpha _{n}<b<\frac{1}{\sqrt{1+L^{2}}+1}\).

Then the sequence \(\{x_{n}\}\) generated by algorithm (3.21) converges weakly to an element of Γ.

Proof

Taking \(x^{*}\in\Gamma\), we have \(x^{*}\in C\cap \operatorname{Fix}(T)\) and \(Ax^{*}\in Q\cap \operatorname{Fix}(S)\). For simplicity, we write \(v_{n}=P_{Q}Ax_{n}\), \(u_{n}=x_{n}-\lambda_{n} A^{*}(I-SP_{Q})Ax_{n}\) for all \(n\geq0\). Thus, we have \(y_{n}=P_{C}u_{n}\) for all \(n\geq0\). Similarly to Theorem 3.1, we have

$$ \bigl\Vert y_{n}-x^{*}\bigr\Vert ^{2} \leq \bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-\lambda_{n} \Vert v_{n}-Ax_{n}\Vert ^{2}- \lambda_{n} \bigl(1-\lambda _{n}\Vert A\Vert ^{2} \bigr)\Vert Sv_{n}-Ax_{n}\Vert ^{2} $$
(3.22)

and

$$ \bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2}\leq \bigl\Vert y_{n}-x^{*}\bigr\Vert ^{2}. $$
(3.23)

Thus, as in the proof of Theorem 3.1, the sequence \(\{x_{n}\}\) is Fejér monotone with respect to Γ, \(\lim_{n\rightarrow\infty}\|x_{n}-x^{*}\|\) exists and \(\{x_{n}\}\) is bounded.

From (3.22) and (3.23), we have

$$ \lim_{n\rightarrow\infty}\|v_{n}-Ax_{n} \|=\lim_{n\rightarrow \infty}\|Sv_{n}-Ax_{n}\|=0. $$
(3.24)

So,

$$ \lim_{n\rightarrow\infty}\|v_{n}-Sv_{n}\|=0. $$

Using the firm nonexpansiveness of \(P_{C}\), we have

$$\begin{aligned} \bigl\Vert y_{n}-x^{*}\bigr\Vert ^{2}&=\bigl\Vert P_{C}u_{n}-x^{*}\bigr\Vert ^{2}\leq\bigl\Vert u_{n}-x^{*}\bigr\Vert ^{2}-\Vert P_{C}u_{n}-u_{n} \Vert ^{2} \\ &\leq\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-\Vert y_{n}-u_{n}\Vert ^{2}. \end{aligned}$$

This together with (3.23) implies that

$$ \Vert y_{n}-u_{n}\Vert ^{2}\leq\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-\bigl\Vert x_{n+1}-x^{*} \bigr\Vert ^{2}. $$

Hence,

$$ \lim_{n\rightarrow\infty}\|y_{n}-u_{n} \|=0. $$
(3.25)

By setting \(u_{n}=x_{n}-\lambda_{n} A^{*}(I-SP_{Q})Ax_{n}\), it follows from (3.24) that

$$ \lim_{n\rightarrow\infty}\|u_{n}-x_{n}\|=0. $$

This together with (3.25) implies that

$$ \lim_{n\rightarrow\infty}\|x_{n}-y_{n}\|=0. $$

As in the proof of Theorem 3.1, we have \(\lim_{n\rightarrow\infty}\| y_{n}-Ty_{n}\|=0\).

Therefore, all the conditions in Theorem 3.1 are satisfied. The conclusion of Corollary 3.1 can be obtained from Theorem 3.1 immediately. □

Corollary 3.2

Let \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) be two real Hilbert spaces and let \(C\subset\mathcal{H}_{1}\) and \(Q\subset\mathcal{H}_{2}\) be two nonempty closed convex sets. Let \(A:\mathcal{H}_{1}\rightarrow\mathcal{H}_{2}\) be a bounded linear operator. Let \(T:C\rightarrow C\) be an L-Lipschitzian pseudo-contractive mapping with \(L>1\) such that \(\Gamma_{0}\cap \operatorname{Fix}(T)\neq\emptyset\). For \(x_{0}\in\mathcal{H}_{1}\) arbitrarily, let \(\{x_{n}\}\) be a sequence defined by the following Ishikawa-type extra-gradient iterative algorithm:

$$ \textstyle\begin{cases} y_{n}=P_{C} (x_{n}-\lambda_{n} A^{*}(I-P_{Q})Ax_{n} ), \\ z_{n}=P_{C} (x_{n}-\lambda_{n} A^{*}(I-P_{Q})Ay_{n} ), \\ w_{n}=(1-\alpha_{n})z_{n}+\alpha_{n}Tz_{n}, \\ x_{n+1}=(1-\beta_{n})z_{n}+\beta_{n}Tw_{n},\quad n\geq0, \end{cases} $$
(3.26)

where \(\{\lambda_{n}\}\subset (0, \frac{1}{\|A\|^{2}} )\) and \(\{ \alpha_{n}\}, \{\beta_{n}\}\subset(0, 1)\) such that \(0< a<\beta_{n}<c<\alpha _{n}<b<\frac{1}{\sqrt{1+L^{2}}+1}\).

Then the sequence \(\{x_{n}\}\) generated by algorithm (3.26) converges weakly to an element of \(\Gamma_{0}\cap \operatorname{Fix}(T)\).

Remark 3.2

Corollary 3.2 improves, extends and develops [15], Theorem 3.6 and [23], Theorem 3.2 in the following aspects.

  • Corollary 3.2 essentially coincides with [23], Theorem 3.2 whenever \(\alpha_{n}=0\) and T is a nonexpansive mapping in the scheme (3.26).

  • The problem of finding an element of \(\Gamma_{0}\cap \operatorname{Fix}(T)\) with \(T:C\rightarrow C\) being a pseudo-contractive mapping is more general than the problem of finding a solution of the SFP in [15], Theorem 3.6 and the problem of finding an element of \(\Gamma_{0}\cap \operatorname{Fix}(S)\) with \(S:C\rightarrow C\) being a nonexpansive mapping in [23], Theorem 3.2.

4 Mann-type extra-gradient iterative algorithm involved in pseudo-contractive mappings without Lipschitz assumption

We are now in a position to propose a Mann-type extra-gradient iterative algorithm for solving the split feasibility and fixed point problems involved in pseudo-contractive mappings without Lipschitz assumption.

Theorem 4.1

Let \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) be two real Hilbert spaces and let \(C\subset\mathcal{H}_{1}\) and \(Q\subset\mathcal{H}_{2}\) be two nonempty closed convex sets. Let \(A:\mathcal{H}_{1}\rightarrow\mathcal{H}_{2}\) be a bounded linear operator. Let \(S:Q\rightarrow Q\) be a nonexpansive mapping and let \(T:C\rightarrow C\) be a continuous pseudo-contractive mapping. For \(x_{0}\in\mathcal{H}_{1}\) arbitrarily, let \(\{x_{n}\}\) be a sequence defined by the following Mann-type extra-gradient iterative algorithm:

$$ \textstyle\begin{cases} y_{n}=P_{C} (x_{n}-\lambda_{n} A^{*}(I-SP_{Q})Ax_{n} ), \\ z_{n}=P_{C} (x_{n}-\lambda_{n} A^{*}(I-SP_{Q})Ay_{n} ), \\ x_{n+1}=(1-\alpha_{n})z_{n}+\alpha_{n}Tz_{n}, \quad n\geq0, \end{cases} $$
(4.1)

where \(\{\lambda_{n}\}\subset (0, \frac{1}{2\|A\|^{2}} )\) and \(\{ \alpha_{n}\}\subset(0, 1)\) such that \(\liminf_{n\rightarrow\infty}\alpha _{n}(1-\alpha_{n})>0\).

Then the sequence \(\{x_{n}\}\) generated by algorithm (4.1) converges weakly to an element of Γ.
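As with (3.1), algorithm (4.1) admits a direct transcription; the sketch below differs from the Ishikawa-type sketch in Section 3 only in that the final two steps are replaced by a single Mann step. The concrete problem data are again supplied as callables and are illustrative assumptions.

import numpy as np

# Direct transcription of algorithm (4.1); lam should lie in (0, 1/(2||A||^2))
# and alpha in (0, 1) with liminf alpha_n (1 - alpha_n) > 0.

def mann_extragradient(A, proj_C, proj_Q, S, T, x0, lam, alpha, iters=200):
    AT = A.T

    def grad(u):                              # A^T (I - S P_Q) A u
        Au = A @ u
        return AT @ (Au - S(proj_Q(Au)))

    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = proj_C(x - lam * grad(x))
        z = proj_C(x - lam * grad(y))
        x = (1 - alpha) * z + alpha * T(z)    # Mann step applied to the correction point
    return x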

Proof

Taking \(x^{*}\in\Gamma\), we have \(x^{*}\in C\cap \operatorname{Fix}(T)\) and \(Ax^{*}\in Q\cap \operatorname{Fix}(S)\). For simplicity, we write \(v_{n}=P_{Q}Ax_{n}\) and \(u_{n}=x_{n}-\lambda_{n} A^{*}(I-SP_{Q})Ax_{n}\) for all \(n\geq0\). Thus, we have \(y_{n}=P_{C}u_{n}\) for all \(n\geq0\). As is proven in Theorem 3.1,

$$ \bigl\Vert y_{n}-x^{*}\bigr\Vert ^{2} \leq \bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-\lambda_{n} \Vert v_{n}-Ax_{n}\Vert ^{2}- \lambda_{n} \bigl(1-\lambda _{n}\Vert A\Vert ^{2} \bigr)\Vert Sv_{n}-Ax_{n}\Vert ^{2} $$
(4.2)

and

$$ \bigl\Vert z_{n}-x^{*}\bigr\Vert ^{2} \leq \bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}- \bigl(1-4 \lambda_{n}^{2}\Vert A\Vert ^{4} \bigr)\Vert x_{n}-y_{n}\Vert ^{2}. $$
(4.3)

Similarly, we have

$$ \bigl\Vert z_{n}-x^{*}\bigr\Vert ^{2} \leq\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}- \bigl(1-4\lambda_{n}^{2} \Vert A\Vert ^{4} \bigr)\Vert z_{n}-y_{n} \Vert ^{2}. $$

From (2.2), (2.5), (4.1) and (4.3), we obtain that

$$\begin{aligned} \bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2} =& \bigl\Vert (1-\alpha_{n})z_{n}+\alpha_{n}Tz_{n}-x^{*} \bigr\Vert ^{2} \\ =&(1-\alpha_{n})\bigl\Vert z_{n}-x^{*}\bigr\Vert ^{2}+\alpha_{n}\bigl\Vert Tz_{n}-x^{*}\bigr\Vert ^{2}-\alpha_{n}(1-\alpha _{n})\Vert z_{n}-Tz_{n}\Vert ^{2} \\ =&(1-\alpha_{n})\bigl\Vert z_{n}-x^{*}\bigr\Vert ^{2}+\alpha_{n} \bigl\langle Tz_{n}-z_{n}, Tz_{n}-x^{*} \bigr\rangle \\ &{} +\alpha_{n} \bigl\langle z_{n}-x^{*}, Tz_{n}-x^{*} \bigr\rangle -\alpha_{n}(1-\alpha _{n})\Vert z_{n}-Tz_{n}\Vert ^{2} \\ \leq&\bigl\Vert z_{n}-x^{*}\bigr\Vert ^{2}- \alpha_{n}(1-\alpha_{n})\Vert z_{n}-Tz_{n} \Vert ^{2} \\ \leq&\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}- \bigl(1-4 \lambda_{n}^{2}\Vert A\Vert ^{4} \bigr)\Vert x_{n}-y_{n}\Vert ^{2}-\alpha_{n}(1- \alpha_{n})\Vert z_{n}-Tz_{n}\Vert ^{2}. \end{aligned}$$
(4.4)

It follows from the assumption of \(\{\lambda_{n}\}\) that

$$ \bigl\Vert x_{n+1}-x^{*}\bigr\Vert \leq\bigl\Vert x_{n}-x^{*}\bigr\Vert . $$

By the same argument as in Theorem 3.1, the sequence \(\{x_{n}\}\) is Fejér monotone with respect to Γ, \(\lim_{n\rightarrow\infty}\|x_{n}-x^{*}\|\) exists and \(\{x_{n}\}\) is bounded.

Returning to (4.4), we have

$$\begin{aligned}& \alpha_{n}(1-\alpha_{n})\Vert z_{n}-Tz_{n} \Vert ^{2}+ \bigl(1-4\lambda_{n}^{2}\Vert A \Vert ^{4} \bigr)\Vert x_{n}-y_{n}\Vert ^{2} \\& \quad \leq\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-\bigl\Vert x_{n+1}-x^{*}\bigr\Vert ^{2}. \end{aligned}$$

Therefore, by the assumption of \(\{\alpha_{n}\}\), we have

$$ \lim_{n\rightarrow\infty}\|z_{n}-Tz_{n}\|=0, $$

and by the assumption of \(\{\lambda_{n}\}\), we have

$$ \lim_{n\rightarrow\infty}\|x_{n}-y_{n} \|=0. $$
(4.5)

Similarly, we have

$$ \lim_{n\rightarrow\infty}\|z_{n}-y_{n}\|=0. $$

Returning to (4.2), we have

$$\begin{aligned} \begin{aligned} &\lambda_{n}\Vert v_{n}-Ax_{n}\Vert ^{2}+\lambda_{n} \bigl(1-\lambda_{n}\Vert A \Vert ^{2} \bigr)\Vert Sv_{n}-Ax_{n}\Vert ^{2} \\ &\quad \leq\bigl\Vert x_{n}-x^{*}\bigr\Vert ^{2}-\bigl\Vert y_{n}-x^{*}\bigr\Vert ^{2} \\ &\quad \leq\bigl(\bigl\Vert x_{n}-x^{*}\bigr\Vert +\bigl\Vert y_{n}-x^{*}\bigr\Vert \bigr)\Vert x_{n}-y_{n} \Vert . \end{aligned} \end{aligned}$$

From (4.5) and by the assumption of \(\{\lambda_{n}\}\), we have

$$ \lim_{n\rightarrow\infty}\|v_{n}-Ax_{n} \|=\lim_{n\rightarrow \infty}\|Sv_{n}-Ax_{n}\|=0. $$
(4.6)

So,

$$ \lim_{n\rightarrow\infty}\|v_{n}-Sv_{n}\|=0. $$

By setting \(u_{n}=x_{n}-\lambda_{n} A^{*}(I-SP_{Q})Ax_{n}\), it follows from (4.6) that

$$ \lim_{n\rightarrow\infty}\|u_{n}-x_{n}\|=0. $$

This together with (4.5) implies that

$$ \lim_{n\rightarrow\infty}\|u_{n}-y_{n}\|=0. $$

Therefore, all the limits used in the concluding argument of the proof of Theorem 3.1 have been established, and the conclusion of Theorem 4.1 follows in the same way. □

Remark 4.1

Theorem 4.1 improves, extends and develops [15], Theorem 3.6, [23], Theorem 3.2, [24], Theorem 3.1 and [25], Theorem 3.2 in the following aspects.

  • Theorem 4.1 extends the extra-gradient method due to Nadezhkina and Takahashi [24], Theorem 3.1.

  • The corresponding iterative algorithms in [15], Theorem 3.6 and [23], Theorem 3.2 are extended for developing our Mann-type extra-gradient iterative algorithm involved in pseudo-contractive mappings without Lipschitz assumption in Theorem 4.1.

  • The technique of proving weak convergence in Theorem 4.1 is different from those in [15], Theorem 3.6 and [23], Theorem 3.2 because our technique depends on the demiclosedness principle for pseudo-contractive mappings and bases on condition (2.5) in Hilbert spaces.

  • The problem of finding an element of Γ is more general than the problem of finding a solution of the SFP in [15], Theorem 3.6 and the problem of finding an element of \(\Gamma_{0}\cap \operatorname{Fix}(S)\) with \(S:C\rightarrow C\) being a nonexpansive mapping in [23], Theorem 3.2.

  • In Algorithm 3.1 of [25], Yao et al. proposed the following iterative algorithm:

    $$\begin{aligned}& u_{n}=P_{C} \bigl(\alpha_{n}u+(1- \alpha_{n}) \bigl(x_{n}-\delta A^{*}(I-SP_{Q})Ax_{n} \bigr) \bigr), \\& x_{n+1}=(1-\beta_{n})u_{n}+\beta_{n}T \bigl((1-\gamma_{n})u_{n}+\gamma_{n}Tu_{n} \bigr), \quad n\geq0. \end{aligned}$$

    Replacing the first iterative step by the extra-gradient step and the second iterative step by a Mann-type step, we obtain the Mann-type extra-gradient iterative algorithm (4.1) in Theorem 4.1.

Utilizing Theorem 4.1, we have the following two new results in the setting of real Hilbert spaces.

Corollary 4.1

Let \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) be two real Hilbert spaces and let \(C\subset\mathcal{H}_{1}\) and \(Q\subset\mathcal{H}_{2}\) be two nonempty closed convex sets. Let \(A:\mathcal{H}_{1}\rightarrow\mathcal{H}_{2}\) be a bounded linear operator. Let \(S:Q\rightarrow Q\) be a nonexpansive mapping and let \(T:C\rightarrow C\) be a continuous pseudo-contractive mapping. For \(x_{0}\in\mathcal{H}_{1}\) arbitrarily, let \(\{x_{n}\}\) be a sequence defined by the following Mann-type iterative algorithm:

$$ \textstyle\begin{cases} y_{n}=P_{C} (x_{n}-\lambda_{n} A^{*}(I-SP_{Q})Ax_{n} ), \\ x_{n+1}=(1-\alpha_{n})y_{n}+\alpha_{n}Ty_{n},\quad n\geq0, \end{cases} $$
(4.7)

where \(\{\lambda_{n}\}\subset (0, \frac{1}{\|A\|^{2}} )\) and \(\{ \alpha_{n}\}\subset(0, 1)\) such that \(\liminf_{n\rightarrow\infty}\alpha _{n}(1-\alpha_{n})>0\).

Then the sequence \(\{x_{n}\}\) generated by algorithm (4.7) converges weakly to an element of Γ.

Proof

Taking \(x^{*}\in\Gamma\), we have \(x^{*}\in C\cap \operatorname{Fix}(T)\) and \(Ax^{*}\in Q\cap \operatorname{Fix}(S)\). For simplicity, we write \(v_{n}=P_{Q}Ax_{n}\) and \(u_{n}=x_{n}-\lambda_{n} A^{*}(I-SP_{Q})Ax_{n}\) for all \(n\geq0\). Thus, we have \(y_{n}=P_{C}u_{n}\) for all \(n\geq0\). Similarly to Theorem 4.1,

$$ \lim_{n\rightarrow\infty}\|y_{n}-Ty_{n}\|=0 $$

and

$$ \lim_{n\rightarrow\infty}\|v_{n}-Ax_{n}\|=\lim _{n\rightarrow \infty}\|Sv_{n}-Ax_{n}\| =\lim _{n\rightarrow\infty}\|v_{n}-Sv_{n}\|=0. $$

Similarly to Corollary 3.1,

$$ \lim_{n\rightarrow\infty}\|u_{n}-y_{n}\|=\lim _{n\rightarrow \infty}\|x_{n}-y_{n}\|=0. $$

Therefore, all the conditions in Theorem 4.1 are satisfied. The conclusion of Corollary 4.1 can be obtained from Theorem 4.1 immediately. □

Corollary 4.2

Let \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) be two real Hilbert spaces and let \(C\subset\mathcal{H}_{1}\) and \(Q\subset\mathcal{H}_{2}\) be two nonempty closed convex sets. Let \(A:\mathcal{H}_{1}\rightarrow\mathcal{H}_{2}\) be a bounded linear operator. Let \(T:C\rightarrow C\) be a continuous pseudo-contractive mapping such that \(\Gamma_{0}\cap \operatorname{Fix}(T)\neq\emptyset\). For \(x_{0}\in\mathcal{H}_{1}\) arbitrarily, let \(\{x_{n}\}\) be a sequence defined by the following Mann-type extra-gradient iterative algorithm:

$$ \textstyle\begin{cases} y_{n}=P_{C} (x_{n}-\lambda_{n} A^{*}(I-P_{Q})Ax_{n} ), \\ z_{n}=P_{C} (x_{n}-\lambda_{n} A^{*}(I-P_{Q})Ay_{n} ), \\ x_{n+1}=(1-\alpha_{n})z_{n}+\alpha_{n}Tz_{n}, \quad n\geq0, \end{cases} $$
(4.8)

where \(\{\lambda_{n}\}\subset (0, \frac{1}{\|A\|^{2}} )\) and \(\{ \alpha_{n}\}\subset(0, 1)\) such that \(\liminf_{n\rightarrow\infty}\alpha _{n}(1-\alpha_{n})>0\).

Then the sequence \(\{x_{n}\}\) generated by algorithm (4.8) converges weakly to an element of \(\Gamma_{0}\cap \operatorname{Fix}(T)\).

Remark 4.2

Corollary 4.2 improves, extends and develops [15], Theorem 3.6 and [23], Theorem 3.2 in the following aspects.

  • Corollary 4.2 essentially coincides with [23], Theorem 3.2 whenever T is a nonexpansive mapping. Hence our Corollary 4.2 includes [23], Theorem 3.2 as a special case.

  • The problem of finding an element of \(\Gamma_{0}\cap \operatorname{Fix}(T)\) with \(T:C\rightarrow C\) being a pseudo-contractive mapping is more general than the problem of finding a solution of the SFP in [15], Theorem 3.6 and the problem of finding an element of \(\Gamma_{0}\cap \operatorname{Fix}(S)\) with \(S:C\rightarrow C\) being a nonexpansive mapping in [23], Theorem 3.2.

Example 4.1

[25]

Let \(\mathcal{H}=\mathbf{R}\) with the inner product defined by \(\langle x, y\rangle=xy\) for all \(x, y\in\mathbf{R}\) and the absolute-value norm \(|\cdot|\). Let \(C=[0, +\infty)\) and \(Tx=x-1+\frac{4}{x+1}\) for all \(x\in C\). Obviously, \(\operatorname{Fix}(T)=\{3\}\). It is easy to see that

$$\begin{aligned} \langle Tx-Ty, x-y\rangle&=\biggl\langle x-1+\frac{4}{x+1}-y+1- \frac{4}{y+1}, x-y\biggr\rangle \\ &\leq \biggl(1-\frac{4}{(x+1)(y+1)} \biggr)| x-y|^{2} \\ &\leq|x-y|^{2} \end{aligned}$$

and

$$\begin{aligned} |Tx-Ty|&\leq\biggl\vert x-1+\frac{4}{x+1}-y+1-\frac{4}{y+1}\biggr\vert \\ &\leq\biggl\vert 1-\frac{4}{(x+1)(y+1)}\biggr\vert |x-y| \\ &\leq5 | x-y| \end{aligned}$$

for all \(x, y\in C\). But

$$ \biggl\vert T\biggl(\frac{1}{4}\biggr)-T(0)\biggr\vert = \frac{11}{20}>\frac{1}{4}. $$

Thus, T is a Lipschitzian pseudo-contractive mapping but not a nonexpansive one.

The above example satisfies condition (2.5). Indeed, note that

$$\begin{aligned} \langle Tx-3, x-3\rangle&=\biggl\langle x-1+\frac{4}{x+1}-3, x-3\biggr\rangle \\ &\leq \biggl(1-\frac{1}{x+1} \biggr)|x-3|^{2} \end{aligned}$$

for all \(x\in C\). Hence, we have

$$\begin{aligned} \begin{aligned} \langle Tx-x, Tx-3\rangle&=| Tx-x|^{2}+\langle Tx-3, x-3\rangle- \langle x-3, x-3\rangle \\ &\leq|Tx-x|^{2}+ \biggl(1-\frac{1}{x+1} \biggr)|x-3|^{2}-|x-3|^{2} \\ &=|Tx-x|^{2}-\frac{1}{x+1}|x-3|^{2} \leq|Tx-x|^{2} \end{aligned} \end{aligned}$$

for all \(x\in C\), which is precisely the estimate of Proposition 2.3 for this example.

Carrying the computation one step further, it follows that

$$\begin{aligned} \langle Tx-x, Tx-3\rangle&=| Tx-x|^{2}+\langle Tx-3, x-3\rangle- \langle x-3, x-3\rangle \\ &\leq|Tx-x|^{2}+ \biggl(1-\frac{1}{x+1} \biggr)|x-3|^{2}-|x-3|^{2} \\ &=|Tx-x|^{2}-\frac{1}{x+1}|x-3|^{2} \\ &= \biggl(1-\frac{4}{x+1} \biggr)^{2}-\frac{1}{x+1}|x-3|^{2} \\ &=-\frac{x^{3}-6x^{2}+9x}{x^{2}+2x+1}\leq0 \end{aligned}$$

for all \(x\in C\). So, it is reasonable that we introduce condition (2.5) in Theorem 4.1. Thus, we can use condition (2.5) to replace the Lipschitz assumption of pseudo-contractive mappings when we study a split feasibility problem or other problems involved in pseudo-contractive mappings.

5 Numerical example

In this section, we consider the following example to illustrate the theoretical result.

Let \(\mathcal{H}_{1}=\mathcal{H}_{2}=\mathbf{R}\) with the inner product defined by \(\langle x, y\rangle=xy\) for all \(x, y\in\mathbf{R}\) and the standard norm \(|\cdot|\). Let \(C=[0, +\infty)\) and \(Tx=x-1+\frac{4}{x+1}\) for all \(x\in C\). Let \(Q=\mathbf{R}\) and \(Sx=\frac{x}{3}+1\) for all \(x\in Q\). Let \(Ax=\frac{1}{2}x\) for all \(x\in\mathbf{R}\). Let \(\lambda_{n}=1\), \(\alpha_{n}=\frac{1}{7}\), \(\beta_{n}=\frac{1}{8}\). If the sequence \(\{x_{n}\}\) is generated iteratively by (3.1), then \(\{x_{n}\}\) converges to 3.

Solution: It is easy to see that A is a bounded linear operator with adjoint \(A^{*}=A\), \(\operatorname{Fix}(T)=\{3\}\) and \(\operatorname{Fix}(S)=\{\frac{3}{2}\}\). It can be observed that all the assumptions of Theorem 3.1 are satisfied. It is also easy to check that \(\Gamma=\{3\}\). We now rewrite (3.1) as follows:

$$ \textstyle\begin{cases} y_{n}=P_{C} (\frac{5x_{n}}{6}+\frac{1}{2} ), \\ z_{n}=P_{C} (x_{n}-\frac{y_{n}}{6}+\frac{1}{2} ), \\ w_{n}=z_{n}+\frac{4}{7(z_{n}+1)}-\frac{1}{7}, \\ x_{n+1}=z_{n}+\frac{1}{14(z_{n}+1)}+\frac{1}{2z_{n}+\frac{8}{7(z_{n}+1)}+\frac {12}{7}}-\frac{1}{7},\quad n\geq0. \end{cases} $$

Choosing the initial values \(x_{0}=8\) and \(x_{0}=-2\), respectively, we see that the figures (Figures 1 and 2) and the numerical results (Tables 1 and 2) illustrate the convergence asserted in Theorem 3.1.
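For reproducibility, the recursion displayed above can be run with the following short script; it is a plain transcription of the rewritten form of (3.1), and the iteration count is an arbitrary choice.

# Section 5 data: T t = t - 1 + 4/(t+1), S t = t/3 + 1, A t = t/2,
# lambda_n = 1, alpha_n = 1/7, beta_n = 1/8, C = [0, +infinity).

def P_C(t):
    return max(t, 0.0)

def T(t):
    return t - 1.0 + 4.0 / (t + 1.0)

def iterate(x0, n_iters=50):
    x = x0
    for _ in range(n_iters):
        y = P_C(5.0 * x / 6.0 + 0.5)          # y_n = P_C(5 x_n/6 + 1/2)
        z = P_C(x - y / 6.0 + 0.5)            # z_n = P_C(x_n - y_n/6 + 1/2)
        w = (6.0 / 7.0) * z + (1.0 / 7.0) * T(z)
        x = (7.0 / 8.0) * z + (1.0 / 8.0) * T(w)
    return x

print(iterate(8.0), iterate(-2.0))            # both iterates approach 3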

Figure 1: The convergence of \(\{x_{n}\}\) with initial value 8.

Figure 2: The convergence of \(\{x_{n}\}\) with initial value −2.

Table 1: The initial value is 8.
Table 2: The initial value is −2.

References

  1. Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)

  2. Byrne, C: Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 18, 441-453 (2002)

  3. Censor, Y, Bortfeld, T, Martin, B, Trofimov, A: A unified approach for inversion problems in intensity modulated radiation therapy. Phys. Med. Biol. 51, 2353-2365 (2006)

  4. Censor, Y, Elfving, T, Kopf, N, Bortfeld, T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 6, 2071-2084 (2005)

  5. Censor, Y, Motova, A, Segal, A: Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 327, 1244-1256 (2007)

  6. Byrne, C: A unified treatment of some iterative algorithm in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)

  7. Qu, B, Xiu, N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 21, 1655-1665 (2005)

  8. Xu, HK: A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 22, 2021-2034 (2006)

  9. Yang, Q: The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 20, 1261-1266 (2004)

  10. Zhao, J, Yang, Q: Several solution methods for the split feasibility problem. Inverse Probl. 21, 1791-1799 (2005)

  11. Sezan, M, Stark, H: Applications of convex projection theory to image recovery in tomography and related areas. In: Stark, H (ed.) Image Recovery Theory and Applications, pp. 415-462. Academic Press, Orlando (1987)

  12. Youla, D: Mathematical theory of image restoration by the method of convex projection. In: Stark, H (ed.) Image Recovery Theory and Applications, pp. 29-77. Academic Press, Orlando (1987)

  13. Youla, D: On deterministic convergence of iterations of relaxed projection operators. J. Vis. Commun. Image Represent. 1, 12-20 (1990)

  14. Combettes, P, Wajs, V: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4, 1168-1200 (2005)

  15. Xu, HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 26, 105018 (2010)

  16. Yu, X, Shahzad, N, Yao, Y: Implicit and explicit algorithm for solving the split feasibility problem. Optim. Lett. 6, 1447-1462 (2012)

  17. Wang, F, Xu, HK: Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010, Article ID 102085 (2010)

  18. Ceng, LC, Petruşel, A, Yao, JC: Relaxed extragradient methods with regularization for general system of variational inequalities with constraints of split feasibility and fixed point problems. Abstr. Appl. Anal. 2013, Article ID 891232 (2013)

  19. Yao, Y, Wu, J, Liou, Y: Regularized methods for the split feasibility problem. Abstr. Appl. Anal. 2012, Article ID 140679 (2012)

  20. Ceng, LC, Ansari, QH, Yao, JC: Relaxed extragradient methods for finding minimum-norm solutions of the split feasibility problem. Nonlinear Anal. 75, 2116-2125 (2012)

  21. Yao, Y, Postolache, M, Liou, Y: Strong convergence of a self-adaptive method for the split feasibility problem. Fixed Point Theory Appl. 2013, Article ID 201 (2013)

  22. Korpelevich, G: An extragradient method for finding saddle points and for other problems. Èkon. Mat. Metody 12, 747-756 (1976)

  23. Ceng, LC, Ansari, QH, Yao, JC: An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 64, 633-642 (2012)

  24. Nadezhkina, N, Takahashi, W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 128, 191-201 (2006)

  25. Yao, Y, Agarwal, RP, Postolache, M, Liou, YC: Algorithms with strong convergence for the split common solution of the feasibility problem and fixed point problem. Fixed Point Theory Appl. 2014, Article ID 183 (2014)

  26. Bertsekas, D, Gafni, E: Projection methods for variational inequalities with applications to the traffic assignment problem. Math. Program. Stud. 17, 139-159 (1982)

  27. Han, D, Lo, H: Solving non-additive traffic assignment problems: a descent method for co-coercive variational inequalities. Eur. J. Oper. Res. 159, 529-544 (2004)

  28. Zhou, H: Strong convergence of an explicit iterative algorithm for continuous pseudo-contractives in Banach spaces. Nonlinear Anal. 70, 4039-4046 (2009)

  29. Kong, ZR, Ceng, LC, Wen, CF: Some modified extragradient methods for solving split feasibility and fixed point problems. Abstr. Appl. Anal. 2012, Article ID 975981 (2012)


Acknowledgements

This work was supported by the Innovation Program of Shanghai Municipal Education Commission (15ZZ068), PhD Program Foundation of the Ministry of Education of China (20123127110002) and Program for Shanghai Outstanding Academic Leaders in Shanghai City (15XD1503100).

Author information

Correspondence to Lu-Chuan Ceng.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Cite this article

Chen, JZ., Ceng, LC., Qiu, YQ. et al. Extra-gradient methods for solving split feasibility and fixed point problems. Fixed Point Theory Appl 2015, 192 (2015). https://doi.org/10.1186/s13663-015-0441-z
