
An inertial S-iteration process

Abstract

In this paper, we establish a new iteration method, called an InerSP (an inertial S-iteration process), by combining a modified S-iteration process with the inertial extrapolation. This strategy is for speeding up the convergence of the algorithm. We then prove the convergence theorems of a sequence generated by our new method for finding a common fixed point of nonexpansive mappings in a Banach space. We also present numerical examples to illustrate that the acceleration of our algorithm is effective.

1 Introduction

Over the last half century, mathematicians have studied approximation methods for fixed point problems and various iteration schemes for several classes of nonexpansive mappings in order to solve mathematical problems such as convex optimization problems, convex feasibility problems, and variational inequality problems. The details of those studies can be found in [1,2,3,4,5,6,7,8,9,10,11,12].

In 2008, Mainge [13] studied the convergence of the inertial Mann algorithm, obtained by combining the Mann algorithm with the inertial extrapolation:

$$ \begin{aligned} &w_{n} = x_{n} +\alpha _{n}(x_{n}-x_{n-1}), \\ &x_{n+1}= w_{n} +\beta _{n} \bigl[S(w_{n})-w_{n} \bigr], \end{aligned} $$
(1)

for each \(n\geq 1\). The inertial term is intended to speed up the convergence of the algorithm. The author showed that, under certain assumptions, the sequence \(\{x_{n}\}\) converges weakly to a fixed point of the mapping S, and applied the method to convex feasibility problems, fixed point problems, and monotone inclusions.
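For illustration, the following is a minimal Python/NumPy sketch of iteration (1); the mapping `S`, the parameter schedules and the iteration count below are placeholder choices of ours and are not taken from [13].

```python
import numpy as np

def inertial_mann(S, x0, x1, alpha, beta, n_iters=100):
    """Iteration (1): w_n = x_n + alpha_n (x_n - x_{n-1});
    x_{n+1} = w_n + beta_n [S(w_n) - w_n]."""
    x_prev, x = np.asarray(x0, dtype=float), np.asarray(x1, dtype=float)
    for n in range(1, n_iters + 1):
        w = x + alpha(n) * (x - x_prev)
        x_prev, x = x, w + beta(n) * (S(w) - w)
    return x

# Illustrative run: S is the (nonexpansive) metric projection onto the unit ball.
S = lambda z: z if np.linalg.norm(z) <= 1 else z / np.linalg.norm(z)
x = inertial_mann(S, x0=[5.0, -3.0], x1=[4.0, -2.0],
                  alpha=lambda n: 0.3, beta=lambda n: 0.5)
```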

Dong et al. [14] introduced a modified inertial Mann algorithm and an inertial CQ-algorithm by unifying the accelerated Mann algorithm with the inertial extrapolation as follows: Let \(T:\mathcal{H}\to \mathcal{H}\) be a nonexpansive mapping such that \(\operatorname{Fix}(T)\neq \emptyset \). Choose \(\mu \in (0,1)\), \(\lambda > 0\) and \(x_{0},x_{1}\in \mathcal{H}\) arbitrarily and set \(d_{0}:=(T(x_{0})-x_{0})/\lambda \). Compute \(d_{n+1}\) and \(x_{n+1}\) as follows:

$$ \begin{aligned} &w_{n} = x_{n} +\alpha _{n}(x_{n}-x_{n-1}), \\ &d_{n+1}= \frac{1}{\lambda }\bigl(T(w_{n})-w_{n} \bigr)+\beta _{n}d_{n}, \\ &y_{n}= w_{n} + \lambda d_{n+1}, \\ &x_{n+1}= \mu \gamma _{n} w_{n} +(1-\mu \gamma _{n})y_{n}, \end{aligned} $$
(2)

for each \(n\geq 1\). Under some conditions they proved that the sequence \(\{x_{n}\}\) generated by this algorithm converges weakly to a fixed point of T. They also studied an inertial CQ-algorithm by combining the CQ-algorithm and the inertial extrapolation defined as follows: Let \(\mathcal{H}\) be a Hilbert space and \(T:\mathcal{H}\to \mathcal{H}\) be a nonexpansive mapping such that \(\operatorname{Fix}(T)\neq \emptyset \). Let \(\{\alpha _{n}\}_{n=0}^{\infty }\subset [\alpha _{1}, \alpha _{2}]\), \(\alpha _{1}\in (-\infty ,0]\), \(\alpha _{2}\in [0, \infty )\), \(\{\beta _{n}\}_{n=0}^{\infty }\subset [\beta _{1}, 1]\), \(\beta _{1} \in (0,1)\). Set \(x_{0}, x_{1}\in \mathcal{H}\) arbitrarily. Define the iterative sequence \(\{x_{n}\}\) by the following iteration process:

$$ \begin{aligned} &w_{n} = x_{n} +\alpha _{n}(x_{n}-x_{n-1}), \\ &y_{n}= (1- \beta _{n})w _{n} + \beta _{n} Tw_{n}, \\ &C_{n}= \bigl\{ z\in \mathcal{H} : \Vert y_{n} -z \Vert \leq \Vert w_{n} -z \Vert \bigr\} , \\ &Q_{n} = \bigl\{ z\in \mathcal{H} : \langle x _{n} - z, x_{n} -x_{0} \rangle \leq 0 \bigr\} , \\ &x_{n+1}= P_{C_{n} \cap Q_{n}}x_{0}. \end{aligned} $$
(3)

They showed that the sequence \(\{x_{n}\}\) converges in norm to \(P_{\operatorname{Fix}(T)}(x_{0})\). They also performed numerical experiments illustrating that the modified inertial Mann algorithm and the inertial CQ-algorithm significantly reduce the running time compared with some previous methods without the inertial extrapolation. Further studies of inertial algorithms can be found in [15,16,17,18,19,20,21,22,23].
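To fix ideas, a sketch of the modified inertial Mann iteration (2) is given below (the projection steps of the CQ-algorithm (3) are not sketched); the parameter schedules are left as callables because [14] imposes conditions on them that we do not restate here.

```python
import numpy as np

def modified_inertial_mann(T, x0, x1, lam, mu, alpha, beta, gamma, n_iters=100):
    """Iteration (2): w_n, d_{n+1}, y_n, x_{n+1} as displayed above,
    with d_0 = (T(x_0) - x_0) / lambda."""
    x_prev, x = np.asarray(x0, dtype=float), np.asarray(x1, dtype=float)
    d = (T(x_prev) - x_prev) / lam                 # d_0
    for n in range(1, n_iters + 1):
        w = x + alpha(n) * (x - x_prev)
        d = (T(w) - w) / lam + beta(n) * d         # d_{n+1}
        y = w + lam * d
        x_prev, x = x, mu * gamma(n) * w + (1 - mu * gamma(n)) * y
    return x
```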

Suparatulatorn et al. [24] introduced a modified S-iteration process defined as follows: \(x_{0}\in C\) and

$$ \begin{aligned} &y_{n} = (1-\beta _{n})x_{n} +\beta _{n} S_{1} x_{n}, \\ &x_{n+1}=(1- \alpha _{n})S_{1}(x_{n}) + \alpha _{n} S_{2}(y_{n}), \end{aligned} $$
(4)

\(n\geq 0\), where C is a nonempty subset of a real Banach space, the two sequences \(\{\alpha _{n}\}\) and \(\{\beta _{n}\}\) are in the interval \((0,1)\), and \(S_{1},S_{2}:C \to C\) are G-nonexpansive mappings. Under some given conditions, they proved weak and strong convergence theorems of this iteration process for finding common fixed points of two G-nonexpansive mappings in a uniformly convex Banach space. They also provided a numerical example suggesting that the sequence generated by the modified S-iteration converges faster than the one generated by the Ishikawa iteration. This motivates combining the modified S-iteration process with the inertial extrapolation to obtain an even faster algorithm.
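Since the modified S-iteration (4) serves as the baseline (MSP) in Section 4, we record a short NumPy sketch of it here; the mappings S1, S2 and the parameter schedules are supplied by the caller, and the function name is ours.

```python
import numpy as np

def msp(S1, S2, x0, alpha, beta, n_iters=100):
    """Modified S-iteration (4): y_n = (1 - beta_n) x_n + beta_n S1(x_n);
    x_{n+1} = (1 - alpha_n) S1(x_n) + alpha_n S2(y_n)."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_iters):
        y = (1 - beta(n)) * x + beta(n) * S1(x)
        x = (1 - alpha(n)) * S1(x) + alpha(n) * S2(y)
    return x
```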

Therefore, in this article, we combine the modified S-iteration process with the inertial extrapolation to obtain a new method that accelerates the approximation of common fixed points of nonexpansive mappings in a Banach space, defined as follows: Let \(\mathcal{H}\) be a Banach space and \(S_{1}, S_{2}:\mathcal{H}\to \mathcal{H} \) be nonexpansive mappings such that \(F= \operatorname{Fix}(S_{1})\cap \operatorname{Fix}(S _{2})\neq \emptyset \). Define

$$ \begin{aligned} &\omega _{n} = x_{n}+ \gamma _{n}(x_{n}-x_{n-1}), \\ &y_{n}=(1-\beta _{n}) \omega _{n}+ \beta _{n} S_{1}(\omega _{n}), \\ &x_{n+1}=(1-\alpha _{n})S _{1}(\omega _{n})+ \alpha _{n} S_{2}(y_{n}), \end{aligned} $$
(5)

\(n\geq 1 \), where \(\{\gamma _{n}\}\), \(\{\alpha _{n}\}\) and \(\{ \beta _{n} \}\) satisfy:

  (D1) \(\sum_{n=1}^{\infty }\gamma _{n}<\infty \), \(\{\gamma _{n}\} \subset [0, \gamma ]\), \(0\leq \gamma < 1\), \(\{\alpha _{n}\}\), \(\{\beta _{n}\}\subset [\delta , 1-\delta ]\) for some \(\delta \in (0, 0.5)\);

  (D2) \(\{S_{i}(\omega _{n})-\omega _{n}\}\) is bounded for \(i=1,2\);

  (D3) \(\{S_{i}(\omega _{n})-y\}\) is bounded for any \(y\in F\) and \(i=1,2\).

We prove, under some assumptions, the weak and strong convergence of our new iteration process for finding common fixed points of \(S_{1}\) and \(S_{2}\).
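A minimal NumPy sketch of the proposed InerSP scheme (5) follows; the schedules `gamma`, `alpha`, `beta` must be chosen by the user so that (D1)–(D3) hold, and the function name is ours.

```python
import numpy as np

def inersp(S1, S2, x0, x1, gamma, alpha, beta, n_iters=100):
    """InerSP (5): omega_n = x_n + gamma_n (x_n - x_{n-1});
    y_n = (1 - beta_n) omega_n + beta_n S1(omega_n);
    x_{n+1} = (1 - alpha_n) S1(omega_n) + alpha_n S2(y_n)."""
    x_prev, x = np.asarray(x0, dtype=float), np.asarray(x1, dtype=float)
    for n in range(1, n_iters + 1):
        w = x + gamma(n) * (x - x_prev)
        y = (1 - beta(n)) * w + beta(n) * S1(w)
        x_prev, x = x, (1 - alpha(n)) * S1(w) + alpha(n) * S2(y)
    return x
```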

2 Preliminaries

In this section we review some definitions and lemmas which will be used in the next section. We start with the following identity that will be used several times in the paper:

$$ \bigl\Vert \alpha x+(1-\alpha )y \bigr\Vert ^{2}=\alpha \Vert x \Vert ^{2}+(1-\alpha ) \Vert y \Vert ^{2}- \alpha (1- \alpha ) \Vert x-y \Vert ^{2}, $$
(6)

for all \(\alpha \in \mathbb{R}\), \(x,y\in \mathcal{H}\).
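Identity (6) is the standard norm identity of an inner-product space; the following quick numerical check, in \(\mathbb{R}^{5}\) with the Euclidean norm (an illustrative choice of ours), confirms it for several values of α, including values outside \([0,1]\).

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)
for a in (-0.7, 0.3, 1.4):                      # identity (6) is stated for all real alpha
    lhs = np.linalg.norm(a * x + (1 - a) * y) ** 2
    rhs = (a * np.linalg.norm(x) ** 2 + (1 - a) * np.linalg.norm(y) ** 2
           - a * (1 - a) * np.linalg.norm(x - y) ** 2)
    assert np.isclose(lhs, rhs)
```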

Definition 2.1

A Banach space X is said to have Opial’s property if whenever a sequence \(\{x_{n}\} \) in X converges weakly to x, then

$$ \liminf_{n\to \infty } \Vert x_{n}-x \Vert < \liminf _{n\to \infty } \Vert x_{n}-y \Vert , $$

for all \(y\in X\), \(y\neq x\).

Definition 2.2

([25])

Let C be a nonempty closed convex subset of a real uniformly convex Banach space X. The mappings \(S_{1}\) and \(S_{2}\) on C are said to satisfy Condition B if there exists a nondecreasing function \(f: [0, \infty )\to [0, \infty )\) with \(f(0)=0\) and \(f(r)>0\) for \(r>0\) such that, for all \(x\in C\),

$$ \max \bigl\{ \bigl\Vert x-S_{1}(x) \bigr\Vert , \bigl\Vert x-S_{2}(x) \bigr\Vert \bigr\} \geq f\bigl(d(x, F)\bigr), $$

where we denote \(F= \operatorname{Fix}(S_{1})\cap \operatorname{Fix}(S_{2})\) and \(\operatorname{Fix}(S_{i})\) is the set of fixed points of \(S_{i}\) for all \(i=1,2\).

Definition 2.3

([25])

Let C be a subset of a metric space \((X, d)\). A mapping \(S: C\to C\) is semicompact if for a sequence \(\{x_{n}\}\) in C with \(\lim_{n\to \infty } d(x_{n},S(x_{n}))=0\), there exists a subsequence \(\{x_{n_{i}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{i}}\to p\in C\).

Lemma 2.4

([26])

Let X be a uniformly convex Banach space, and let \(\{\alpha _{n}\}\) be a sequence in \([\delta , 1-\delta ]\) for some \(\delta \in (0,1)\). Suppose that the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) in X are such that \(\limsup_{n\to \infty } \|x_{n}\|\leq c\), \(\limsup_{n\to \infty } \|y_{n}\|\leq c\) and \(\lim_{n\to \infty } \|\alpha _{n}x_{n}+(1-\alpha _{n})y_{n}\|=c\) for some \(c\geq 0\). Then \(\lim_{n\to \infty } \|x_{n}-y_{n}\|=0\).

In 2002, Berinde [27] compared the rate of convergence of two iterative methods by using the following definition.

Definition 2.5

Let \(\{a_{n}\}\) and \(\{b_{n}\}\) be two sequences of positive numbers that converge to a and b, respectively. Assume that the following limit exists:

$$ \lim_{n\to \infty } \frac{ \vert a_{n}-a \vert }{ \vert b_{n}-b \vert }=l. $$
  i. If \(l=0\), then we say that the sequence \(\{a_{n}\}\) converges to a faster than the sequence \(\{b_{n}\}\) converges to b.

  ii. If \(0< l<\infty \), then we say that the sequences \(\{a_{n}\}\) and \(\{b_{n}\}\) have the same rate of convergence.
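In practice, Definition 2.5 is checked by inspecting the ratios \(|a_{n}-a|/|b_{n}-b|\) along the computed iterates, which is how the comparisons in Section 4 are read; a small illustrative sketch (with made-up sample sequences) follows.

```python
import numpy as np

def rate_ratios(a, b, a_lim, b_lim):
    """Ratios |a_n - a| / |b_n - b| from Definition 2.5 for finite sample sequences."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.abs(a - a_lim) / np.abs(b - b_lim)

# Illustration: a_n = 2^{-n} converges to 0 faster than b_n = 1/n (the ratios tend to 0).
n = np.arange(1, 20)
print(rate_ratios(0.5 ** n, 1.0 / n, 0.0, 0.0))
```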

Lemma 2.6

([28])

Let X be a Banach space that has Opial’s property, and let \(\{x_{n}\}\) be a sequence in X. Let x, y in X be such that \(\lim_{n\to \infty } \|x_{n}-x\|\) and \(\lim_{n\to \infty } \|x_{n}-y\|\) exist. If \(\{x_{n_{j}}\}\) and \(\{x_{n_{k}}\}\) are subsequences of \(\{x_{n}\}\) that converge weakly to x and y, respectively, then \(x=y\).

Lemma 2.7

([29])

Let \(\{\psi _{n}\}\), \(\{\delta _{n}\}\), and \(\{\alpha _{n} \}\) be sequences in \([0, \infty )\) such that \(\psi _{n+1}\leq \psi _{n}+ \alpha _{n}(\psi _{n}-\psi _{n-1})+\delta _{n}\) for all \(n\geq 1\), \(\sum_{n=1}^{\infty }\delta _{n} <\infty \) and there exists a real number α with \(0\leq \alpha _{n}\leq \alpha <1\) for all \(n\geq 1\). Then the following hold:

  1. \(\sum_{n\geq 1}[\psi _{n}-\psi _{n-1}]_{+} < \infty \), where \([t]_{+}= \max \{t, 0\}\);

  2. There exists \(\psi ^{*}\in [0, \infty ) \) such that \(\lim_{n\to \infty }\psi _{n}= \psi ^{*}\).

Lemma 2.8

([30])

Let C be a nonempty subset of a real Hilbert space \(\mathcal{H}\) and \(\{x_{n}\}\) a sequence in \(\mathcal{H}\) such that the following two conditions hold:

  1. For any \(x\in C\), \(\lim_{n\to \infty } \|x_{n}-x\|\) exists;

  2. Every sequential weak cluster point of \(\{x_{n}\}\) is in C.

Then \(\{x_{n}\}\) converges weakly to a point in C.

Lemma 2.9

([30])

Let C be a nonempty closed convex subset of a real Hilbert space \(\mathcal{H}\), \(T:C\to \mathcal{H}\) a nonexpansive mapping. Let \(\{x_{n}\}\) be a sequence in C and \(x\in \mathcal{H}\) such that \(x_{n} \rightharpoonup x\) and \(Tx_{n}-x_{n}\to 0\) as \(n\to \infty \). Then \(x\in \operatorname{Fix}(T)\).

3 Results and discussions

In this section we prove the weak and strong convergence of a sequence generated by the proposed algorithm for finding a common fixed point of two nonexpansive mappings.

Theorem 3.1

Let \(\mathcal{H}\) be a uniformly convex Banach space and let \(y\in F=\operatorname{Fix}(S_{1}) \cap \operatorname{Fix}(S_{2})\). Let \(\{x_{n}\}\) be the sequence defined by (5). If (D1), (D2) and (D3) hold, then

  1. \(\lim_{n\to \infty } \|x_{n}-y\|\) exists;

  2. \(\lim_{n\to \infty } \|x_{n}-S_{1}(x_{n})\|=0=\lim_{n\to \infty } \|x _{n}-S_{2}(x_{n})\|\).

Proof

  1. By the triangle inequality and the nonexpansiveness of \(S_{1}\), we have

    $$ \begin{aligned}[b] \Vert y_{n}-y \Vert &= \bigl\Vert (1-\beta _{n})\omega _{n}+ \beta _{n} S_{1}(\omega _{n})-y \bigr\Vert \\ &\leq (1-\beta _{n}) \Vert \omega _{n}-y \Vert + \beta _{n} \bigl\Vert \bigl(S_{1}(\omega _{n})-y \bigr) \bigr\Vert \\ &\leq (1-\beta _{n}) \Vert \omega _{n}-y \Vert + \beta _{n} \Vert \omega _{n}-y \Vert \\ &= \Vert \omega _{n}-y \Vert . \end{aligned} $$
    (7)

    So

    $$ \begin{aligned}[b] \Vert x_{n+1}-y \Vert &= \bigl\Vert (1-\alpha _{n})S_{1}(\omega _{n})+ \alpha _{n} S_{2}(y _{n})-y \bigr\Vert \\ &= \bigl\Vert (1-\alpha _{n}) \bigl(S_{1}(\omega _{n})-y\bigr)+ \alpha _{n} \bigl(S _{2}(y_{n})-y \bigr) \bigr\Vert \\ &\leq (1-\alpha _{n}) \bigl\Vert \bigl(S_{1}(\omega _{n})-y\bigr) \bigr\Vert + \alpha _{n} \bigl\Vert \bigl(S_{2}(y_{n})-y\bigr) \bigr\Vert .\end{aligned} $$
    (8)

    Using the nonexpansiveness of \(S_{1}\), \(S_{2}\) and (7), we have

    $$ \begin{aligned}[b] \Vert x_{n+1}-y \Vert &\leq (1-\alpha _{n}) \bigl\Vert \bigl(S_{1}(\omega _{n})-y \bigr) \bigr\Vert + \alpha _{n} \bigl\Vert \bigl(S_{2}(y_{n})-y \bigr) \bigr\Vert \\ &\leq (1-\alpha _{n}) \Vert \omega _{n}-y \Vert + \alpha _{n} \Vert y_{n}-y \Vert \\ &\leq (1-\alpha _{n}) \Vert \omega _{n}-y \Vert + \alpha _{n} \Vert \omega _{n}-y \Vert \\ &= \Vert \omega _{n}-y \Vert . \end{aligned} $$
    (9)

    It is not difficult to see that \(\{ \omega _{n}-y\}\) is bounded. Indeed, by the conditions (D2) and (D3) and the triangle inequality,

    $$ \begin{aligned}[b] \Vert \omega _{n}-y \Vert &= \bigl\Vert \omega _{n}-S_{1}(\omega _{n})+S_{1}( \omega _{n})-y \bigr\Vert \\ &\leq \bigl\Vert S_{1}(\omega _{n})-\omega _{n} \bigr\Vert + \bigl\Vert S_{1}(\omega _{n})-y \bigr\Vert \\ &\leq K, \end{aligned} $$
    (10)

    for some \(K\in [0,\infty )\). That is, \(\{ \omega _{n}-y\}\) is bounded. Hence, by (9) and (10), \(\{ x_{n}-y\}\) and \(\{ x_{n}-x_{n-1}\}\) are bounded. By the identity in (6),

    $$ \begin{aligned}[b] \Vert \omega _{n}-y \Vert ^{2} &= \bigl\Vert (1+\gamma _{n}) (x_{n}-y)- \gamma _{n}(x_{n-1}-y) \bigr\Vert ^{2} \\ &=(1+\gamma _{n}) \Vert x_{n}-y \Vert ^{2}- \gamma _{n} \Vert x_{n-1}-y \Vert ^{2}+ \gamma _{n}(1+\gamma _{n}) \Vert x_{n}-x_{n-1} \Vert ^{2}. \end{aligned} $$
    (11)

    This implies that

    $$ \begin{aligned}[b] \Vert x_{n+1}-y \Vert ^{2} & \leq \Vert \omega _{n}-y \Vert ^{2} \\ &=(1+\gamma _{n}) \Vert x_{n}-y \Vert ^{2}- \gamma _{n} \Vert x_{n-1}-y \Vert ^{2}+\gamma _{n}(1+\gamma _{n}) \Vert x_{n}-x _{n-1} \Vert ^{2}. \end{aligned} $$
    (12)

    Denote \(\varPsi _{n}:= \|x_{n}-y\|^{2}\). Then (12) becomes

    $$ \begin{aligned} \varPsi _{n+1} &\leq \varPsi _{n}+\gamma _{n}(\varPsi _{n}-\varPsi _{n-1})+\delta _{n}, \end{aligned} $$
    (13)

    where \(\delta _{n}=\gamma _{n}(1+\gamma _{n})\|x_{n}-x_{n-1}\|^{2}\). Observe that by (D1),

    $$ \begin{aligned}[b] \sum_{n=1}^{\infty } \delta _{n} &= \sum_{n=1}^{\infty } \gamma _{n}(1+ \gamma _{n}) \Vert x_{n}-x_{n-1} \Vert ^{2} \\ &\leq \sum_{n=1}^{\infty }\gamma _{n}(1+\gamma ) (2K)^{2} \\ &< \infty . \end{aligned} $$
    (14)

    By Lemma 2.7(2), there exists \(\varPsi ^{*}\in [0, \infty ) \) such that \(\lim_{n\to \infty }\varPsi _{n}= \varPsi ^{*}\). This means that \(\lim_{n\to \infty } \|x_{n}-y\|^{2}\) exists and, therefore, \(\lim_{n\to \infty } \|x_{n}-y\|\) exists. This completes the proof of 1.

  2. Set \(c= \lim_{n\to \infty } \|x_{n}-y\|\). By the nonexpansiveness of \(S_{1}\) and \(S_{2}\), we get

    $$ \begin{aligned}[b] \bigl\Vert x_{n}-S_{i}(x_{n}) \bigr\Vert &\leq \Vert x_{n}-y \Vert + \bigl\Vert S_{i}(x_{n})-y \bigr\Vert \\ &\leq \Vert x _{n}-y \Vert + \Vert x_{n}-y \Vert \\ &=2 \Vert x_{n}-y \Vert . \end{aligned} $$
    (15)

    So, if \(c=0\), then \(\|x_{n}-S_{i}(x_{n})\|\to 0\). Now assume that \(c>0\). Note that \(\sum_{n=1}^{\infty }\gamma _{n}<\infty \) implies \(\lim_{n\to \infty }\gamma _{n}=0\). It follows from (11)

    $$\begin{aligned} \lim_{n\to \infty } \Vert \omega _{n}-y \Vert ^{2} =&\lim_{n\to \infty }\bigl((1+ \gamma _{n}) \Vert x_{n}-y \Vert ^{2}-\gamma _{n} \Vert x_{n-1}-y \Vert ^{2} \\ &{} +\gamma _{n}(1+\gamma _{n}) \Vert x_{n}-x_{n-1} \Vert ^{2}\bigr) \\ =&\lim_{n\to \infty } \Vert x_{n}-y \Vert ^{2} \\ =&c ^{2}. \end{aligned}$$
    (16)

    That is, \(\lim_{n\to \infty }\|\omega _{n}-y\|=c\). Together with (7), this forces \(\limsup_{n\to \infty } \|y_{n}-y\|\leq \limsup_{n\to \infty } \|\omega_{n}-y\|=c\). Next we claim that \(\liminf_{n\to \infty } \|y_{n}-y\|\geq c\). Since \(S_{1}\) and \(S_{2}\) are nonexpansive, by (6) we have

    $$ \begin{aligned}[b] \Vert x_{n+1}-y \Vert ^{2} ={}& (1-\alpha _{n}) \bigl\Vert \bigl(S_{1}(\omega _{n})-y\bigr) \bigr\Vert ^{2}+ \alpha _{n} \bigl\Vert \bigl(S_{2}(y_{n})-y\bigr) \bigr\Vert ^{2} \\ &{}-\alpha _{n}(1-\alpha _{n}) \bigl\Vert S_{1}( \omega _{n})-S_{2}(y_{n}) \bigr\Vert ^{2} \\ \leq{}& (1-\alpha _{n}) \Vert \omega _{n}-y \Vert ^{2}+ \alpha _{n} \Vert y_{n}-y \Vert ^{2}. \end{aligned} $$
    (17)

    Rearranging (17) and by (D1), we have

    $$ \begin{aligned}[b] \Vert \omega _{n}-y \Vert ^{2} &\leq \Vert y_{n}-y \Vert ^{2} + \frac{1}{\alpha _{n}} \bigl( \Vert \omega _{n}-y \Vert ^{2}- \Vert x_{n+1}-y \Vert ^{2} \bigr) \\ & \leq \Vert y_{n}-y \Vert ^{2} + \frac{1}{\delta } \bigl( \Vert \omega _{n}-y \Vert ^{2}- \Vert x_{n+1}-y \Vert ^{2} \bigr). \end{aligned} $$
    (18)

    Letting \(n\to \infty \) in (18) and using (16), we obtain \(\liminf_{n\to \infty } \|y_{n}-y\|^{2}\geq c^{2}\) and so \(\liminf_{n\to \infty } \|y_{n}-y\|\geq c\). Since

    $$ c\leq \liminf_{n\to \infty } \Vert y_{n}-y \Vert \leq \limsup_{n\to \infty } \Vert y_{n}-y \Vert \leq c, $$

    it follows that \(\lim_{n\to \infty } \|y_{n}-y\|= c\).

    Since

    $$\begin{aligned}& \limsup_{n\to \infty } \bigl\Vert S_{1}(\omega _{n})-y \bigr\Vert \leq \limsup_{n\to \infty } \Vert \omega _{n}-y \Vert \leq c, \\& \limsup_{n\to \infty } \bigl\Vert S_{2}(y_{n})-y \bigr\Vert \leq \limsup_{n\to \infty } \Vert y_{n}-y \Vert \leq c, \\& \lim_{n\to \infty } \bigl\Vert (1-\beta _{n}) (\omega _{n}-y)+ \beta _{n}\bigl( S_{1}( \omega _{n})-y\bigr) \bigr\Vert =\lim_{n\to \infty } \Vert y_{n}-y \Vert =c, \end{aligned}$$

    and

    $$ \lim_{n\to \infty } \bigl\Vert (1-\alpha _{n}) \bigl(S_{1}(\omega _{n})-y\bigr)+ \alpha _{n} \bigl( S_{2}(y_{n})-y\bigr) \bigr\Vert =\lim _{n\to \infty } \Vert x_{n+1}-y \Vert =c, $$

    by Lemma 2.4,

    $$ \lim_{n\to \infty } \bigl\Vert S_{1}(\omega _{n})-\omega _{n} \bigr\Vert =0 $$
    (19)

    and

    $$ \lim_{n\to \infty } \bigl\Vert S_{1}(\omega _{n})-S_{2}(y_{n}) \bigr\Vert =0. $$
    (20)

    Note that \(y_{n}-\omega _{n}= \beta _{n}(S_{1}(\omega _{n})- \omega _{n})\) and \(\omega _{n}-x_{n}= \gamma _{n}(x_{n}-x_{n-1})\), which yield

    $$ \begin{aligned}[b] 0 &\leq \lim_{n\to \infty } \Vert y_{n}-\omega _{n} \Vert \\ &=\lim_{n\to \infty }\beta _{n} \bigl\Vert S_{1}(\omega _{n})-\omega _{n} \bigr\Vert \\ &\leq \lim_{n\to \infty } \bigl\Vert S_{1}(\omega _{n})-\omega _{n} \bigr\Vert \\ &=0 \end{aligned} $$
    (21)

    and

    $$ \begin{aligned}[b] \lim_{n\to \infty } \Vert \omega _{n}-x_{n} \Vert &=\lim_{n\to \infty }\gamma _{n} \Vert x_{n}-x_{n-1} \Vert \\ &=0. \end{aligned} $$
    (22)

    Note that, by (19), (22) and the triangle inequality, we have

    $$ \begin{aligned}[b] \lim_{n\to \infty } \bigl\Vert S_{1}(\omega _{n})-x_{n} \bigr\Vert &\leq \lim _{n\to \infty } \bigl\Vert S_{1}(\omega _{n})- \omega _{n} \bigr\Vert +\lim_{n\to \infty }\gamma _{n} \Vert x _{n}-x_{n-1} \Vert \\ &=0. \end{aligned} $$
    (23)

    By (20), (21), (22), (23) and the nonexpansiveness of \(S_{1}\) and \(S_{2}\), we have

    $$\begin{aligned} 0 \leq& \lim_{n\to \infty } \bigl\Vert S_{1}(x_{n})-x_{n} \bigr\Vert \\ \leq &\lim_{n\to \infty } \bigl\Vert S_{1}(x_{n})-S_{1}( \omega _{n}) \bigr\Vert +\lim_{n\to \infty } \bigl\Vert S_{1}(\omega _{n})-x_{n} \bigr\Vert \\ \leq& \lim_{n\to \infty } \Vert x _{n}-\omega _{n} \Vert +\lim_{n\to \infty } \bigl\Vert S_{1}(\omega _{n})-x_{n} \bigr\Vert \\ =&0 \end{aligned}$$
    (24)

    and

    $$\begin{aligned} 0 \leq &\lim_{n\to \infty } \bigl\Vert S_{2}(x_{n})-x_{n} \bigr\Vert \\ \leq& \lim_{n\to \infty } \bigl\Vert S_{2}(x_{n})-S_{2}(y_{n}) \bigr\Vert + \lim_{n\to \infty } \bigl\Vert S_{2}(y_{n})-S_{1}( \omega _{n}) \bigr\Vert +\lim_{n\to \infty } \bigl\Vert S_{1}(\omega _{n})-x_{n} \bigr\Vert \\ \leq& \lim_{n\to \infty } \Vert x_{n}-y_{n} \Vert + \lim_{n\to \infty } \bigl\Vert S_{2}(y_{n})-S_{1}( \omega _{n}) \bigr\Vert +\lim_{n\to \infty } \bigl\Vert S_{1}(\omega _{n})-x_{n} \bigr\Vert \\ \leq& \lim_{n\to \infty } \Vert x _{n}-\omega _{n} \Vert +\lim_{n\to \infty } \Vert \omega _{n}-y_{n} \Vert \\ =0. \end{aligned}$$
    (25)

Therefore, \(\lim_{n\to \infty }\|S_{1}(x_{n})-x_{n}\|=0= \lim_{n\to \infty }\|S_{2}(x_{n})-x_{n}\|\) as desired. □

Theorem 3.2

Let \(\mathcal{H}\) be a Banach space having Opial’s property. Suppose that \(S_{1}, S_{2}: \mathcal{H}\to \mathcal{H}\) are two nonexpansive mappings with \(F=\operatorname{Fix}(S_{1})\cap \operatorname{Fix}(S_{2})\neq \emptyset \). Then the sequence \(\{x_{n}\}\) in (5) converges weakly to a common fixed point of \(S_{1}\) and \(S_{2}\).

Proof

Let \(y\in F\). By Theorem 3.1(1), \(\lim_{n\to \infty } \|x_{n}-y\|\) exists. Hence \(\{x_{n}\}\) is bounded. Let \(\{x_{n_{k}}\}\) and \(\{x_{n_{j}}\}\) be subsequences of \(\{x_{n}\}\) with weak limits \(q_{1}\) and \(q_{2}\), respectively. By Theorem 3.1(2), \(\lim_{k\to \infty } \|x_{n_{k}}-S_{i}(x_{n_{k}})\|=0\) and \(\lim_{j\to \infty } \|x_{n_{j}}-S_{i}(x_{n_{j}})\|=0\) for \(i=1,2\). By Lemma 2.9, \(S_{i}(q_{1})=q_{1}\) and \(S_{i}(q_{2})=q_{2}\) for \(i=1,2\). That is, \(q_{1}, q_{2}\in F\). Applying Theorem 3.1(1) again, we see that \(\lim_{n\to \infty } \|x_{n}-q_{1}\|\) and \(\lim_{n\to \infty } \|x_{n}-q_{2}\|\) exist, and that \(\{x_{n_{k}}\}\) and \(\{x_{n_{j}}\}\) converge weakly to \(q_{1}\) and \(q_{2}\), respectively. By Lemma 2.6, \(q_{1}=q_{2}\). Therefore, \(\{x_{n}\}\) converges weakly to a common fixed point in F. □

Under certain conditions, we can deduce the strong convergence theorem as follows.

Theorem 3.3

Let \(\mathcal{H}\) be a uniformly convex Banach space. Suppose that \(S_{1}, S_{2}: \mathcal{H}\to \mathcal{H}\) are two nonexpansive mappings with \(F=\operatorname{Fix}(S_{1})\cap \operatorname{Fix}(S_{2})\neq \emptyset \) which satisfy Condition B. Then the sequence \(\{x_{n}\}\) in (5) converges strongly to a common fixed point of \(S_{1}\) and \(S_{2}\).

Proof

Let \(y\in F\). Now by (12), we get

$$\begin{aligned} \inf_{y\in F}\bigl\{ \Vert x_{n+1}-y \Vert ^{2}\bigr\} \leq& \inf_{y\in F} \bigl\{ \Vert \omega _{n}-y \Vert ^{2}\bigr\} \\ =&\inf_{y\in F}\bigl\{ (1+\gamma _{n}) \Vert x_{n}-y \Vert ^{2}\bigr\} +\inf_{y \in F} \bigl\{ -\gamma _{n} \Vert x_{n-1}-y \Vert ^{2} \bigr\} \\ &{} +\inf_{y\in F}\bigl\{ \gamma _{n}(1+\gamma _{n}) \Vert x_{n}-x_{n-1} \Vert ^{2} \bigr\} \\ \leq &\inf_{y\in F}\bigl\{ \Vert x_{n}-y \Vert ^{2}\bigr\} +\gamma _{n}\inf_{y\in F}\bigl\{ \Vert x _{n}-y \Vert ^{2}\bigr\} -\gamma _{n} \inf_{y\in F}\bigl\{ \Vert x_{n-1}-y \Vert ^{2}\bigr\} \\ &{}+\gamma _{n}(1+\gamma _{n}) \Vert x_{n}-x_{n-1} \Vert ^{2} \\ =&\inf_{y\in F}\bigl\{ \Vert x _{n}-y \Vert ^{2}\bigr\} +\gamma _{n}\Bigl[\inf_{y\in F} \bigl\{ \Vert x_{n}-y \Vert ^{2}\bigr\} -\inf _{y \in F}\bigl\{ \Vert x_{n-1}-y \Vert ^{2} \bigr\} \Bigr] \\ &{} +\gamma _{n}(1+\gamma _{n}) \Vert x_{n}-x_{n-1} \Vert ^{2}. \end{aligned}$$
(26)

Denote \(\varPsi _{n}:= \inf_{y\in F}\{\|x_{n}-y\|^{2}\}\). Then (26) becomes

$$ \begin{aligned} \varPsi _{n+1} &\leq \varPsi _{n}+\gamma _{n}(\varPsi _{n}-\varPsi _{n-1})+\delta _{n}, \end{aligned} $$
(27)

where \(\delta _{n}=\gamma _{n}(1+\gamma _{n})\|x_{n}-x_{n-1}\|^{2}\). Observe that by (D1)

$$ \begin{aligned}[b] \sum_{n=1}^{\infty } \delta _{n} &= \sum_{n=1}^{\infty } \gamma _{n}(1+ \gamma _{n}) \Vert x_{n}-x_{n-1} \Vert ^{2} \\ &\leq \sum_{n=1}^{\infty }\gamma _{n}(1+\gamma ) (2K)^{2} \\ &< \infty . \end{aligned} $$
(28)

By Lemma 2.7(2), there exists \(\varPsi ^{*}\in [0, \infty ) \) such that \(\lim_{n\to \infty }\varPsi _{n}= \varPsi ^{*}\).

That is, \(\lim_{n\to \infty } \inf_{y\in F}\{\|x_{n}-y\|^{2}\}\) exists and, therefore, \(\lim_{n\to \infty } \inf_{y\in F}\{\|x_{n}-y\|\}\) exists. Since \(S_{1}\) and \(S_{2}\) satisfy Condition B, Theorem 3.1(2) implies that

$$ \lim_{n\to \infty } f\Bigl(\inf_{y\in F}\bigl\{ \Vert x_{n}-y \Vert \bigr\} \Bigr)=0 $$

and, thus,

$$ \lim_{n\to \infty } \inf_{y\in F}\bigl\{ \Vert x_{n}-y \Vert \bigr\} =0. $$

So, we can find a subsequence \(\{x_{n_{j}}\}\) of \(\{x_{n}\}\) and a sequence \(\{x^{*}_{j}\}\subset F\) satisfying \(\|x_{n_{j}}- x^{*}_{j} \|< \frac{1}{2^{j}}\). Next we will show that \(\{x^{*}_{j}\}\) is a Cauchy sequence. Let \(\epsilon >0\). Since \(\lim_{n\to \infty } \inf_{y \in F}\{\|x_{n}-y\|\}=0\), there is \(N\in \mathbb{N}\) such that \(\inf_{y\in F}\{\|x_{n}-y\|\}< \frac{\epsilon }{6}\) for all \(n\geq N\). For all \(m,n\geq N\), we have

$$ \Vert x_{m}-x_{n} \Vert \leq \Vert x_{m}-y \Vert + \Vert x_{n}-y \Vert $$

for all \(y\in F\). Thus,

$$ \Vert x_{m}-x_{n} \Vert \leq \inf_{y\in F} \bigl\{ \Vert x_{m}-y \Vert + \Vert x_{n}-y \Vert \bigr\} = \inf_{y\in F}\bigl\{ \Vert x_{m}-y \Vert \bigr\} + \inf_{y\in F}\bigl\{ \Vert x_{n}-y \Vert \bigr\} < \frac{\epsilon }{6}+ \frac{\epsilon }{6}= \frac{\epsilon }{3} $$

for all \(m,n\geq N\). Also, there is \(j_{0}\in \mathbb{N}\) such that \(\frac{1}{2^{j_{0}}}< \frac{\epsilon }{3}\). Choose \(M= \max \{N, j_{0}\}\). Then, for all \(j>k\geq M\), we have

$$ \bigl\Vert x^{*}_{j}-x^{*}_{k} \bigr\Vert \leq \bigl\Vert x^{*}_{j}-x_{n_{j}} \bigr\Vert + \Vert x_{n_{j}}-x_{n _{k}} \Vert + \bigl\Vert x_{n_{k}}-x^{*}_{k} \bigr\Vert < \frac{\epsilon }{3}+ \frac{\epsilon }{3}+ \frac{\epsilon }{3}=\epsilon . $$

Therefore, \(\{x^{*}_{j}\}\) is a Cauchy sequence and so there exists \(q\in \mathcal{H}\) such that \(x^{*}_{j}\) converges to q. Since F is closed, \(q\in F\). As a result, we see that \(x_{n_{j}}\) converges to q. Since \(\lim_{n\to \infty } \|x_{n}-q\|\) exists by Theorem 3.1(1), the conclusion follows. □

Theorem 3.4

Let \(\mathcal{H}\) be a uniformly convex Banach space. Suppose that \(S_{1}, S_{2}: \mathcal{H}\to \mathcal{H}\) are two nonexpansive mappings with \(F=\operatorname{Fix}(S_{1})\cap \operatorname{Fix}(S_{2})\neq \emptyset \) and that one of the \(S_{i}\) is semicompact. Then the sequence \(\{x_{n}\}\) in (5) converges strongly to a common fixed point of \(S_{1}\) and \(S_{2}\).

Proof

From Theorem 3.1, \(\{x_{n}\}\) is bounded and \(\lim_{n\to \infty } \|x_{n}-S_{1}(x_{n})\|=0=\lim_{n\to \infty } \|x _{n}-S_{2}(x_{n})\|\). By the semicompactness of one of the \(S_{i}\), there exist \(q\in \mathcal{H}\) and a subsequence \(\{x_{n_{j}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{j}}\to q\) as \(j\to \infty \). Then

$$ \begin{aligned}[b] \bigl\Vert q-S_{i}(q) \bigr\Vert &\leq \Vert q-x_{n_{j}} \Vert + \bigl\Vert x_{n_{j}}-S_{i}(x_{n_{j}}) \bigr\Vert + \bigl\Vert S _{i}(x_{n_{j}})-S_{i}(q) \bigr\Vert \\ &\leq \Vert q-x_{n_{j}} \Vert + \bigl\Vert x_{n_{j}}-S_{i}(x _{n_{j}}) \bigr\Vert + \Vert x_{n_{j}}-q \Vert \\ &\to 0\quad \text{as } j\to \infty . \end{aligned} $$
(29)

Thus, \(q\in F\). As in the proof of Theorem 3.3, \(\lim_{n\to \infty } \inf_{y\in F}\{\|x_{n}-y\|\}\) exists. We observe that \(\inf_{y\in F}\{\|x_{n_{j}}-y\|\}\leq \|x_{n_{j}}-q\|\to 0 \) as \(j\to \infty \), hence \(\lim_{n\to \infty } \inf_{y\in F}\{\|x_{n}-y\|\}=0\). It follows, as in the proof of Theorem 3.3, that \(\{x_{n}\}\) converges strongly to a common fixed point of \(S_{1}\) and \(S_{2}\). This completes the proof. □

4 Numerical illustrations

We next demonstrate the efficiency of the InerSP iteration and compare it with the MSP iteration defined in [24] through some numerical examples. All computations were carried out in MATLAB R2017a on an Intel Core i7 processor with 8.00 GB of RAM running Windows 7. First, we apply our method to the following convex feasibility problem (see [31]).

Problem 1

([31])

For nonempty closed convex sets \(C_{i}\subset \mathbb{R}^{N}\), \(i=0, 1,\ldots , m\),

$$ \text{if } C:= \bigcap^{m}_{i=0}C_{i} \neq\emptyset ,\quad \text{find } x^{*}\in C. $$

Define a mapping \(T: \mathbb{R}^{N}\to \mathbb{R}^{N}\) by

$$ T:= P_{0} \Biggl( \frac{1}{m}\sum ^{m}_{i=1}P_{i} \Biggr), $$

where \(P_{i}= P_{C_{i}}\) is the metric projection onto \(C_{i}\) for \(i=0, 1, 2, \ldots , m\). Note that \(P_{i}\) is nonexpansive for all \(i=0, 1, 2, \ldots , m\), which implies that the mapping T is also nonexpansive. Moreover, it is straightforward to check that

$$ \operatorname{Fix}(T)=\operatorname{Fix}(P_{0})\cap \bigcap ^{m}_{i=1} \operatorname{Fix}(P_{i})= C_{0}\cap \bigcap^{m}_{i=1} C_{i} =C. $$

We use the inertial S-iteration process (InerSP) and the modified S-iteration process (MSP) to solve Problem 1. For InerSP, we set \(S_{1}=S_{2}=T\), \(\gamma =0.98\), \(\delta = 0.1\), \(\gamma _{n}= \begin{cases} 0.95, &n\leq 10^{10}, \\ \frac{1}{(n+1)^{2}}, &n>10^{10}, \end{cases}\) and \(\beta _{n}=\alpha _{n}= 0.65+ \frac{1}{(n+1)^{0.25}}\), where n denotes the iteration number. For MSP, the control parameters are the same as for InerSP, except that γ and \(\gamma _{n}\) do not appear in MSP. In the experiment, we set \(m=30\) and take each \(C_{i}\), \(i=0, 1, \ldots , m\), to be a closed ball with center \(c_{i}\in \mathbb{R}^{N}\) and radius \(r_{i}>0\). Thus, for each i, \(P_{i}\) can be computed as

$$ P_{i}(x):= \textstyle\begin{cases} c_{i}+ \frac{r_{i}}{ \Vert c_{i}-x \Vert }(x-c_{i}) &\text{if } \Vert c_{i}-x \Vert >r_{i}, \\ x &\text{if } \Vert c_{i}-x \Vert \leq r_{i}. \end{cases} $$
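The displayed formula for \(P_{i}\) translates directly into code; a minimal NumPy version (the function name is ours) is

```python
import numpy as np

def proj_ball(x, c, r):
    """Metric projection of x onto the closed ball with center c and radius r."""
    d = np.linalg.norm(x - c)
    return x if d <= r else c + (r / d) * (x - c)
```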

Choose \(r_{i}:=1\) for all \(i=0,1,\ldots , m\), \(c_{0}:=0\), \(c_{1}=[1, 0, \ldots , 0]\), and \(c_{2}=[-1, 0, \ldots , 0]\). For \(3\leq i\leq m\), the components of \(c_{i}\) are chosen randomly from \((-1/\sqrt{N}, 1/\sqrt{N})\). From the choice of \(c_{1}\), \(c_{2}\) and \(r_{1}\), \(r_{2}\), we have \(\operatorname{Fix}(T)=\{0\}\). We select initial points \(x_{0}= \operatorname{rand}(N,10)\) and \(x_{1}= x_{0}+ \frac{\operatorname{rand}(N,1)}{10,000}\), where \(N=30\). Since \(\operatorname{Fix}(T)=\{0\}\), we can consider the error as

$$ \Vert x_{n} \Vert _{\infty }= \max \bigl\{ \bigl\vert x_{n}(1) \bigr\vert , \bigl\vert x_{n}(2) \bigr\vert , \ldots , \bigl\vert x_{n}(N) \bigr\vert \bigr\} < \epsilon =0.01, $$

and take it as the stopping criterion. In Table 1, n denotes the number of iterations, and \(\{x_{n}\}\) and \(\{z_{n}\}\) denote the sequences of approximate fixed points generated by InerSP and MSP, respectively.
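For reference, the following sketch reproduces the Problem 1 experiment with the InerSP update (5) and \(S_{1}=S_{2}=T\); the random centers and initial points are drawn afresh here, so the iteration counts and errors will not match Table 1 exactly.

```python
import numpy as np

def proj_ball(x, c, r):                               # projection onto B(c, r), as above
    d = np.linalg.norm(x - c)
    return x if d <= r else c + (r / d) * (x - c)

N, m, eps = 30, 30, 0.01
rng = np.random.default_rng(0)
c = np.zeros((m + 1, N))                              # c_0 = 0
c[1, 0], c[2, 0] = 1.0, -1.0                          # c_1 = [1,0,...,0], c_2 = [-1,0,...,0]
c[3:] = rng.uniform(-1 / np.sqrt(N), 1 / np.sqrt(N), size=(m - 2, N))
r = np.ones(m + 1)

def T(x):
    """T = P_0((1/m) * sum_{i=1}^m P_i(x))."""
    avg = np.mean([proj_ball(x, c[i], r[i]) for i in range(1, m + 1)], axis=0)
    return proj_ball(avg, c[0], r[0])

gamma = lambda n: 0.95 if n <= 1e10 else 1.0 / (n + 1) ** 2
step = lambda n: 0.65 + 1.0 / (n + 1) ** 0.25         # alpha_n = beta_n as above

x_prev = rng.random(N)                                # x_0
x = x_prev + rng.random(N) / 10000                    # x_1
n = 1
while np.max(np.abs(x)) >= eps and n <= 1000:         # stop when ||x_n||_inf < 0.01
    w = x + gamma(n) * (x - x_prev)
    y = (1 - step(n)) * w + step(n) * T(w)
    x_prev, x = x, (1 - step(n)) * T(w) + step(n) * T(y)
    n += 1
print(n, np.max(np.abs(x)))                           # index and error of the final iterate
```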

Table 1 Convergence comparison of MSP and InerSP for the given function in Problem 1

The results are listed in Table 1. They show that the errors of both the MSP iteration and the InerSP iteration decrease, which means that the approximate solutions of both methods converge to the fixed point 0. In addition, from Table 1 and Fig. 1 we can see that \(\|x_{n}\|_{\infty }\leq \|z_{n}\| _{\infty }\) and \(\lim_{n\rightarrow \infty } \frac{\|x_{n}\|_{\infty }}{\|z_{n}\|_{\infty }}=0\), so the InerSP iteration behaves better than MSP and the sequence \(\{x_{n}\}\) converges faster than \(\{z_{n}\}\). Moreover, the CPU time needed to find the fixed point with InerSP is much less than that of MSP.

Figure 1 Error comparison between MSP and InerSP

In the next example, we perform a numerical experiment to find a common fixed point of two nonexpansive mappings.

Problem 2

Define \(S_{1}, S_{2} : \mathbb{R}^{2}\to \mathbb{R}^{2}\) by

$$ S_{1}(x,y)= \biggl( \frac{1+x}{2}, 1+ \frac{y}{2} \biggr) $$

and

$$ S_{2}(x,y)= \biggl(x, 3- \frac{y}{2} \biggr). $$

It is easy to check that both \(S_{1}\) and \(S_{2}\) are nonexpansive on \(\mathbb{R}^{2}\). In this problem, for InerSP we set \(\gamma =0.98\), \(\delta = 0.1\), \(\gamma _{n}= \begin{cases} 0.25, &n\leq 10^{10}, \\ \frac{1}{(n+1)^{2}}, &n>10^{10}, \end{cases}\) and \(\beta _{n}=\alpha _{n}= 0.65+ \frac{1}{(n+1)^{0.25}}\). For MSP we set \(\gamma _{n}=0\) and \(\beta _{n}= \alpha _{n}= 0.65+ \frac{1}{(n+1)^{0.25}}\). We note that \(x^{*}=(1, 2)\) is the common fixed point of \(S_{1}\) and \(S_{2}\). Set \(x_{0}=(500,1000)\) and \(x_{1}=(721,-5)\) as the initial values. Let \(\{z_{n}\}\) and \(\{x_{n}\}\) be the sequences generated by MSP and InerSP, respectively, where \(z_{n}=(z_{1n}, z_{2n})\) and \(x_{n}=(x_{1n}, x_{2n})\) are in \(\mathbb{R}^{2}\). Moreover, we take \(\mathrm{err}=\|x_{n}-x^{*}\|_{2}\) to be the error of the iterative algorithm, where \(\|\cdot \|_{2}\) is the Euclidean norm. The results are shown in Table 2.
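A sketch of the Problem 2 run with InerSP, using the mappings, parameters and initial points stated above; since implementation details may differ, the iteration counts can deviate slightly from Table 2.

```python
import numpy as np

S1 = lambda p: np.array([(1 + p[0]) / 2, 1 + p[1] / 2])
S2 = lambda p: np.array([p[0], 3 - p[1] / 2])
x_star = np.array([1.0, 2.0])                         # common fixed point

gamma = lambda n: 0.25 if n <= 1e10 else 1.0 / (n + 1) ** 2
step = lambda n: 0.65 + 1.0 / (n + 1) ** 0.25         # alpha_n = beta_n

x_prev, x = np.array([500.0, 1000.0]), np.array([721.0, -5.0])
n = 1
while np.linalg.norm(x - x_star) >= 1e-3 and n <= 1000:
    w = x + gamma(n) * (x - x_prev)
    y = (1 - step(n)) * w + step(n) * S1(w)
    x_prev, x = x, (1 - step(n)) * S1(w) + step(n) * S2(y)
    n += 1
print(n, x, np.linalg.norm(x - x_star))               # err = ||x_n - x*||_2
```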

Table 2 Convergence comparison of MSP and InerSP for the given function in Problem 2

From Table 2, we see that both \(\{z_{n}\}\) and \(\{x_{n}\} \) converge to the fixed point \(x^{*}=(1, 2)\). If we iterate until the error is less than 0.001, MSP converges to the fixed point in 29 iterations, whereas InerSP converges in 13 iterations. From Table 2 and Fig. 2, it can be observed that \(\|x_{n}-x^{*}\|\leq \|z_{n}-x^{*}\|\) for all \(n\geq 2\) and \(\lim_{n\rightarrow \infty } \frac{\|x_{n}-x^{*}\|_{2}}{\|z_{n}-x^{*}\|_{2}}=0\), so the sequence \(\{x_{n}\}\) converges faster than \(\{z_{n}\}\). In addition, the running time needed to find the common fixed point with InerSP is 10 times less than that of MSP. As illustrated in the two examples, the InerSP iteration behaves better than the MSP iteration.

Figure 2 Error comparison between MSP and InerSP

5 Conclusions

In this work, we introduced a new iteration method, InerSP, by combining the modified S-iteration process (MSP) with the inertial extrapolation, and we analyzed its convergence behavior. Although each InerSP iteration involves one more step than MSP (the inertial extrapolation step), the numerical examples show that the sequences generated by the InerSP iteration converge to fixed points more rapidly than those generated by the MSP iteration, in terms of both the number of iterations and the CPU running time.

Abbreviations

InerSP:

Inertial S-iteration process

MSP:

Modified S-iteration process

CPU:

Central Processing Unit

References

  1. Ishikawa, S.: Fixed points by a new iteration method. Proc. Am. Math. Soc. 44, 147–150 (1974)

  2. Chen, P., Huang, J., Zhang, X.: A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration. Inverse Probl. 29, 025011 (2013)

  3. Iiduka, H.: Iterative algorithm for triple-hierarchical constrained nonconvex optimization problem and its application to network bandwidth allocation. SIAM J. Optim. 22, 862–878 (2012)

  4. Micchelli, C.A., Shen, L., Xu, Y.: Proximity algorithms for image models: denoising. Inverse Probl. 27(4), 045009 (2011)

  5. Zhou, H., Zhou, Y., Feng, G.: Iterative methods for solving a class of monotone variational inequality problems with applications. J. Inequal. Appl. 2015, 68 (2015)

  6. Noor, M.A.: New approximation schemes for general variational inequalities. J. Math. Anal. Appl. 251, 217–229 (2000)

  7. Chambolle, A.: An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 20, 89–97 (2004)

  8. Mann, W.R.: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506–510 (1953)

  9. Phuengrattana, W., Suantai, S.: On the rate of convergence of Mann, Ishikawa, Noor and SP-iterations for continuous functions on an arbitrary interval. J. Comput. Appl. Math. 235, 3006–3014 (2011)

  10. Phuengrattana, W., Suantai, S.: Comparison of the rate of convergence of various iterative methods for the class of weak contractions in Banach spaces. Thai J. Math. 11, 217–226 (2013)

  11. Yuan, H.: A splitting algorithm in a uniformly convex and 2-uniformly smooth Banach space. J. Nonlinear Funct. Anal. 2018, Article ID 26 (2018)

  12. Zhao, J., Zong, H., Muangchoo, K., Kumam, P., Cho, Y.J.: Algorithms for split common fixed point in Hilbert spaces. J. Nonlinear Var. Anal. 2, 273–286 (2018)

  13. Mainge, P.E.: Convergence theorems for inertial KM-type algorithm. J. Comput. Appl. Math. 219, 223–236 (2008)

  14. Dong, Q.L., Yuan, H.B., Cho, Y.J., Rassias, T.M.: Modified inertial Mann algorithm and inertial CQ-algorithm for nonexpansive mappings. Optim. Lett. 12(1), 87–102 (2018). https://doi.org/10.1007/s11590-016-1102-9

  15. Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9(1–2), 3–11 (2001)

  16. Moudafi, A., Oliny, M.: Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 155, 447–454 (2003)

  17. Lorenz, D.A., Pock, T.: An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 51, 311–325 (2015)

  18. Chan, R.H., Ma, S., Yang, J.F.: Inertial proximal ADMM for linearly constrained separable convex optimization. SIAM J. Imaging Sci. 8(4), 2239–2267 (2015)

  19. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)

  20. Chambolle, A., Dossal, C.: On the convergence of the iterates of the “fast iterative shrinkage/thresholding algorithm”. J. Optim. Theory Appl. 166, 968–982 (2015)

  21. Bot, R.I., Csetnek, E.R., Hendrich, C.: Inertial Douglas–Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 256, 472–487 (2015)

  22. Alvarez, F.: Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 14(3), 773–782 (2004)

  23. Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4(5), 1–17 (1964)

  24. Suparatulatorn, R., Cholamjiak, W., Suantai, S.: A modified S-iteration process for G-nonexpansive mappings in Banach spaces with graphs. Numer. Algorithms 77(2), 479–490 (2018). https://doi.org/10.1007/s11075-017-0324-y

  25. Shahzad, S., Al-Dubiban, R.: Approximating common fixed points of nonexpansive mappings in Banach spaces. Georgian Math. J. 13(3), 529–537 (2006)

  26. Nakajo, K., Takahashi, W.: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 279, 372–379 (2003)

  27. Berinde, V.: Iterative Approximation of Fixed Points. Editura Efemeride, Baia Mare (2002)

  28. Dong, Q.L., Lu, Y.Y.: A new hybrid algorithm for a nonexpansive mapping. Fixed Point Theory Appl. 2015, 37 (2015)

  29. Alvarez, F.: Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 14(3), 773–782 (2004)

  30. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, Berlin (2011)

  31. Sakurai, K., Iiduka, H.: Acceleration of the Halpern algorithm to search for a fixed point of a nonexpansive mapping. Fixed Point Theory Appl. 2014, 202 (2014)


Acknowledgements

The authors would like to thank the referee for helpful and detailed comments.

Availability of data and materials

Not applicable.

Funding

This article was funded by the authors.

Author information

Contributions

All authors read and approved the final manuscript.

Corresponding author

Correspondence to Aniruth Phon-on.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Phon-on, A., Makaje, N., Sama-Ae, A. et al. An inertial S-iteration process. Fixed Point Theory Appl 2019, 4 (2019). https://doi.org/10.1186/s13663-019-0654-7



  • DOI: https://doi.org/10.1186/s13663-019-0654-7
