
On some Mann’s type iterative algorithms

Abstract

First we present some interesting variants of Mann’s method. In the last section, we show that many existing results in the literature are concrete realizations of our general scheme under varying assumptions on the coefficients.

1 Some results on a very small variation of Mann’s method

Let H be a real Hilbert space, \((\alpha_{n})_{n\in\mathbb{N}}\subset (0,\alpha]\subset(0,1)\) and \((\beta_{n})_{n\in\mathbb{N}},(\mu_{n})_{n\in \mathbb{N}}\subset(0,1]\). In the sequel, we will use the following notation:

  • We say that \(\zeta_{n}=o(\eta_{n})\) if \(\frac{\zeta_{n}}{\eta_{n}}\to0\) as \(n\to\infty\).

  • We say that \(\zeta_{n}=O(\eta_{n})\) if there exist \(K,N>0\) such that \(N\leq \vert \frac{\zeta_{n}}{\eta_{n}}\vert \leq K\) for every \(n\in\mathbb{N}\).

Iterative schemes to approximate fixed points of nonlinear mappings have a long history, and they still represent an active research area in nonlinear operator theory.

Here we are interested in the iterative method introduced by Mann [1] in 1953. The method generates a sequence \((x_{n})_{n\in\mathbb {N}}\) via the recursive formula

$$ x_{n+1}=\alpha_{n}x_{n}+(1- \alpha_{n})Tx_{n}, $$
(1.1)

where the coefficient sequence \((\alpha_{n})_{n\in\mathbb{N}}\) lies in the real interval \([0,1]\), T is a self-mapping of a closed and convex subset C of a real Hilbert space H, and the starting value \(x_{0}\in C\) is chosen arbitrarily.
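A minimal numerical sketch of iteration (1.1) may help to fix ideas; the nonexpansive mapping T (here the metric projection of \(\mathbb{R}\) onto \([1,2]\)) and the constant coefficients \(\alpha_{n}=\frac{1}{2}\) are illustrative assumptions, not part of the method itself.

```python
# Sketch of Mann's iteration (1.1) on H = R (illustrative assumptions only).

def mann(T, x0, alpha, n_iter=60):
    """Run x_{n+1} = alpha_n * x_n + (1 - alpha_n) * T(x_n)."""
    x = x0
    for n in range(n_iter):
        a = alpha(n)
        x = a * x + (1 - a) * T(x)
    return x

# Toy nonexpansive map: projection of R onto [1, 2], so Fix(T) = [1, 2].
T = lambda x: min(max(x, 1.0), 2.0)
# Constant alpha_n = 1/2 satisfies sum alpha_n (1 - alpha_n) = +infinity.
print(mann(T, x0=5.0, alpha=lambda n: 0.5))   # converges to the fixed point 2
```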

Mann’s method has been studied in the literature chiefly for nonexpansive mappings T (i.e., \(\|Tx-Ty\|\leq\|x-y\|\) for all \(x,y\in C\)). It is known, due to Reich [2], that if T is nonexpansive and \(\sum_{n}\alpha_{n}(1-\alpha_{n})=+\infty\), then the sequence \((x_{n})_{n\in\mathbb{N}}\) generated by Mann’s algorithm (1.1) converges weakly to a fixed point of T. Thanks to the celebrated counterexample of Genel and Lindenstrauss [3], we also know that Mann’s algorithm may fail, in general, to converge strongly, even in the setting of a Hilbert space.

In order to ensure strong convergence, in the past years, the method has been modified in several directions: by Ishikawa [4] using a double convex-combination, by Halpern [5] using an anchor, by Moudafi [6] using a contraction mapping, by Nakajo and Takahashi [7] using projections. These are just a few (but extremely relevant) of such modifications.

In this section we propose a variation of Mann’s method (Theorem 1.1 and Theorem 1.3) which differs from all those present in the literature and remains closest to the original method (1.1). Moreover, we give several corollaries that are concrete and meaningful applications.

In the next section we show that all these results can be obtained from a very general two-step iterative algorithm. In the last section we give the proof of our main theorem and compare the rate of convergence of our method with that of Halpern’s method in a specific case.

To our knowledge, Theorem 1.1 below provides a method that is almost the Mann method but ensures strong convergence.

Theorem 1.1

Let \(\alpha_{n}, \mu_{n}\in(0,1]\) be such that

  • \(\lim_{n\to\infty}\alpha_{n}=0\), \(\sum_{n\in\mathbb{N}}\alpha_{n}\mu_{n}=\infty\);

  • \(|\mu_{n-1}-\mu_{n}|=o(\mu_{n})\), and \(|\alpha _{n-1}-\alpha_{n}|=o(\alpha_{n}\mu_{n})\).

Then the sequence \((x_{n})_{n\in\mathbb{N}}\) generated by

$$x_{n+1}=\alpha_{n}x_{n}+(1-\alpha_{n})Tx_{n}- \alpha_{n}\mu_{n}x_{n} $$

strongly converges to a point \(x^{*}\in \operatorname{Fix}(T)\) with minimum norm \(\|x^{*}\|=\min_{x\in \operatorname{Fix}(T)}\|x\|\).
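A minimal numerical sketch of the iteration of Theorem 1.1 follows; the mapping T and the coefficients \(\alpha_{n}=\mu_{n}=\frac{1}{\sqrt{n+2}}\) (which satisfy the hypotheses above) are illustrative assumptions.

```python
import math

# Sketch of x_{n+1} = a_n x_n + (1 - a_n) T(x_n) - a_n mu_n x_n (Theorem 1.1).
# Toy instance: T = projection of R onto [1, 2], so Fix(T) = [1, 2] and the
# minimum-norm fixed point is x* = 1. All choices are illustrative assumptions.
T = lambda x: min(max(x, 1.0), 2.0)

x = 5.0
for n in range(100000):
    a = mu = 1.0 / math.sqrt(n + 2)
    x = a * x + (1 - a) * T(x) - a * mu * x
print(x)   # expected to approach 1 (slowly, since the coefficients vanish)
```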

Taking \(\mu_{n}=1\) we obtain the following.

Corollary 1.2

Let \(\alpha_{n}\in(0,1]\) be such that

$$ \lim_{n\to\infty}\alpha_{n}=0, \qquad\sum _{n\in\mathbb{N}}\alpha _{n}=\infty,\qquad |\alpha_{n-1}- \alpha_{n}|=o(\alpha_{n}). $$

Then the sequence \((x_{n})_{n\in\mathbb{N}}\) generated by

$$x_{n+1}=(1-\alpha_{n})Tx_{n} $$

strongly converges to a point \(x^{*}\in \operatorname{Fix}(T)\) with minimum norm \(\|x^{*}\|=\min_{x\in \operatorname{Fix}(T)}\|x\|\).

We can see \(x^{*}\) as the point in \(\operatorname{Fix}(T)\) nearest to \(0\in H\). If we search for the point in \(\operatorname{Fix}(T)\) nearest to an arbitrary \(u\in H\), then we have the following theorem.

Theorem 1.3

Under the same assumptions on the coefficients \(\alpha_{n},\mu_{n}\) of Theorem  1.1, the sequence \((x_{n})_{n\in\mathbb{N}}\) generated by

$$x_{n+1}=\alpha_{n}x_{n}+(1-\alpha_{n})Tx_{n}+ \alpha_{n}\mu_{n}(u-x_{n}) $$

strongly converges to a point \(x_{u}^{*}\in \operatorname{Fix}(T)\) nearest to u, \(\|x_{u}^{*}-u\|=\min_{x\in \operatorname{Fix}(T)}\|x-u\|\).

Taking again \(\mu_{n}=1\), we obtain the following.

Corollary 1.4

(Halpern’s method)

Under the same assumptions on the coefficients \(\alpha_{n}\) of Corollary  1.2, the sequence \((x_{n})_{n\in\mathbb{N}}\) generated by

$$x_{n+1}=\alpha_{n}u+(1-\alpha_{n})Tx_{n} $$

strongly converges to the point \(x_{u}^{*}\in \operatorname{Fix}(T)\) nearest to u, that is, \(\|x_{u}^{*}-u\|=\min_{x\in \operatorname{Fix}(T)}\|x-u\|\).

If A is a δ-inverse strongly monotone operator with \(A^{-1}(0)\neq\emptyset\), then \((I-\delta A)\) is nonexpansive [8, p.419] with fixed points \(\operatorname{Fix}(I-\delta A)=A^{-1}(0)\). By Theorem 1.3 with \(T=(I-\delta A)\), we have the following.

Corollary 1.5

Under the same assumptions on the coefficients \(\alpha_{n}\), \(\mu_{n}\) of Theorem  1.3, the sequence \((x_{n})_{n\in\mathbb{N}}\) generated by

$$x_{n+1}=\alpha_{n}x_{n}+(1-\alpha_{n}) (I-\delta A)x_{n}+\alpha_{n}\mu_{n}(u-x_{n}) $$

strongly converges to a point \(x_{A,u}^{*}\in A^{-1}(0)\) nearest to u, \(\|x_{A,u}^{*}-u\|=\min_{x\in A^{-1}(0)}\|x-u\|\).

The first interesting example of a δ-inverse strongly monotone operator with \(A^{-1}(0)\neq\emptyset\) is the gradient of a convex function. Precisely, let \(\phi:H\to\mathbb{R}\) be a convex and Fréchet differentiable function, and suppose that \(\nabla\phi\) is an L-Lipschitzian mapping. We are interested in approximating solutions of the variational inequality (in the sequel, (VIP))

$$ \bigl\langle \nabla\phi\bigl(x^{*}\bigr),y-x^{*}\bigr\rangle \geq0,\quad \forall y\in H, $$
(1.2)

since it is the optimality condition for the minimum problem

$$\min_{x\in H}\phi(x). $$

Under our hypotheses, \(\nabla\phi\) is a \(\frac{1}{L}\)-inverse strongly monotone operator. Then the mapping \((I-\frac{1}{L} \nabla\phi)\) is nonexpansive, and the following result can be obtained from Corollary 1.5.

Corollary 1.6

Let \(\operatorname{Fix}((I-\frac{1}{L} \nabla\phi))\neq\emptyset\) and \(u\in H\). Let us suppose that

  • \(\lim_{n\to\infty}\alpha_{n}=0\), \(\sum_{n\in\mathbb{N}}\alpha_{n}\mu_{n}=\infty\);

  • \(|\mu_{n-1}-\mu_{n}|=o(\mu_{n})\), and \(|\alpha _{n-1}-\alpha_{n}|=o(\alpha_{n}\mu_{n})\).

Then the sequence generated by the iteration

$$ x_{n+1}=\alpha_{n} x_{n}+ (1-\alpha_{n}) \biggl(I-\frac{1}{L} \nabla\phi \biggr) x_{n} + \alpha_{n}\mu_{n} (u-x_{n}),\quad n\geq1 $$
(1.3)

strongly converges to \(x^{*}\in \operatorname{Fix}((I-\frac{1}{L} \nabla\phi))\) that is the unique solution of the variational inequality

$$\bigl\langle x^{*}-u,y-x^{*}\bigr\rangle \geq0, \quad\forall y\in \operatorname{Fix}\biggl(I-\frac{1}{L} \nabla\phi\biggr), $$
(1.4)

i.e., \(x^{*}\) is the solution of (1.2) nearest to u.
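The following sketch runs iteration (1.3) on a toy convex function; the choice \(\phi(x_{1},x_{2})=\frac{1}{2}x_{1}^{2}\) on \(\mathbb{R}^{2}\) (so that \(L=1\) and the minimizers form the \(x_{2}\)-axis), the anchor u and the coefficients are assumptions of this illustration only.

```python
import numpy as np

# Sketch of iteration (1.3) for phi(x1, x2) = 0.5 * x1**2 (illustrative choice):
# grad phi(x) = (x1, 0), L = 1, and the minimizers are the x2-axis, so
# Corollary 1.6 predicts convergence to the projection of u onto that axis.
grad_phi = lambda z: np.array([z[0], 0.0])
L = 1.0
u = np.array([3.0, 2.0])

x = np.array([5.0, -4.0])
for n in range(1, 100001):
    a = mu = 1.0 / np.sqrt(n + 1)     # coefficients satisfying the hypotheses above
    x = a * x + (1 - a) * (x - grad_phi(x) / L) + a * mu * (u - x)
print(x)   # expected to approach (0, 2)
```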

A further interesting result concerns the Tikhonov-regularized constrained least squares problem

$$ \min_{x\in C} \frac{1}{2}\|Ax-b \|^{2}+\frac{1}{2}\varepsilon\|x\|^{2}, \quad\mbox{where } \varepsilon>0, $$
(1.5)

which approximates the (ill-posed) constrained least squares problem

$$ \min_{x\in C} \frac{1}{2}\|Ax-b \|^{2}, $$
(1.6)

where \(C=\bigcap_{n\in\mathbb{N}}\operatorname{Fix}(T_{n})\), A is a linear and bounded operator on H, \(b\in H\) and \((T_{n})_{n\in\mathbb{N}}\) are nonexpansive satisfying the following:

  1. (h1)

    \(T_{n}:H\to H\) are nonexpansive mappings, uniformly asymptotically regular on bounded subsets \(B\subset H\), i.e.,

    $$\lim_{n\to\infty}\sup_{x\in B}\|T_{n+1}x-T_{n}x \|=0; $$
  2. (h2)

    it is possible to define a nonexpansive mapping \(T:H\to H\) with \(Tx:=\lim_{n\to\infty}T_{n}x\) such that if \(F:=\bigcap_{n\in\mathbb{N}} \operatorname{Fix}(T_{n})\neq\emptyset\), then \(\operatorname{Fix}(T)=F\).
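A simple family satisfying (h1) and (h2) (our own illustrative example, not taken from [9]) is obtained from the metric projection \(P_{C}\) onto a nonempty closed convex set \(C\subset H\) and a convergent sequence \((\lambda_{n})_{n\in\mathbb{N}}\subset(0,1]\) with \(\lambda_{n}\to\lambda\in(0,1]\), by setting

$$T_{n}x:=\lambda_{n}P_{C}x+(1-\lambda_{n})x, \qquad Tx:=\lambda P_{C}x+(1-\lambda)x. $$

Indeed each \(T_{n}\) is nonexpansive with \(\operatorname{Fix}(T_{n})=C\), \(\|T_{n+1}x-T_{n}x\|=|\lambda_{n+1}-\lambda_{n}|\,\|P_{C}x-x\|\to0\) uniformly on bounded sets, and \(\operatorname{Fix}(T)=C=\bigcap_{n\in\mathbb{N}}\operatorname{Fix}(T_{n})\).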

Reich and Xu [9] proved, among other things, that the unique solution of (1.5) strongly converges, as \(\varepsilon\to0\), to the minimum-norm solution of (1.6). The optimality condition for (1.5) is the following variational inequality:

$$ \bigl\langle A^{*}Ax-A^{*}b+\varepsilon x, y-x\bigr\rangle \geq0,\quad \forall y\in\bigcap_{n\in\mathbb{N}} \operatorname{Fix}(T_{n}), $$
(1.7)

where \(A^{*}\) is the adjoint of A.

In light of Reich and Xu’s results, it would be interesting to approximate a solution of (1.7) (for small ε).

Let \(B:=A^{*}A-A^{*}b\). Note that B is firmly nonexpansive, i.e., 1-inverse strongly monotone, so \(I-B\) is firmly nonexpansive [10], hence nonexpansive. We are able to prove the following result.

Theorem 1.7

Assume that

  • \(\lim_{n\to\infty}\alpha_{n}=0\), \(\sum_{n\in\mathbb{N}}\alpha_{n}\mu_{n}=\infty\);

  • \(|\beta_{n}-\beta_{n-1}|=o(\alpha_{n}\beta_{n}\mu_{n})\), \(|\mu_{n}-\mu_{n-1}|=o(\alpha_{n}\beta_{n}\mu_{n})\) and \(|\alpha_{n}-\alpha _{n-1}|=o(\alpha_{n}\beta_{n}\mu_{n})\);

  • \(\vert \frac{1}{\beta_{n}}-\frac{1}{\beta_{n-1}}\vert =O(\alpha_{n}\mu_{n})\).

Let us suppose \(\lim_{n\to\infty}\frac{\beta_{n}}{\alpha _{n}\mu_{n}}=\tau\in(0,+\infty)\).

(These hypotheses are satisfied, for instance, by \(\alpha _{n}=\mu_{n}=\frac{1}{\sqrt[4]{n}}\), \(\beta_{n}=\frac{2}{\sqrt{n}}\), \(n\geq1\).)

Then \((x_{n})_{n\in\mathbb{N}}\) defined by (2.1), i.e.,

$$ \begin{cases} y_{n}=\beta_{n}(I-A^{*}A)x_{n}+(1-\beta_{n})x_{n}+\beta_{n}A^{*}b,\\ x_{n+1}=\alpha_{n} (1-\mu_{n})x_{n} + (1-\alpha_{n})T_{n} y_{n}, & n\geq1 \end{cases} $$
(1.8)

strongly converges to \(\tilde{x}\in \bigcap_{n\in\mathbb{N}} \operatorname{Fix}(T_{n})\) that is the unique solution of the variational inequality (in the notation of Section 2, here \(D=I\) and \(Sx=(I-A^{*}A)x+A^{*}b\))

$$\biggl\langle \frac{1}{\tau}D\tilde{x}+(I-S)\tilde{x},y- \tilde{x}\biggr\rangle \geq 0, \quad\forall y\in\bigcap_{n\in\mathbb{N}} \operatorname{Fix}(T_{n}), $$
(1.9)

i.e.,

$$\biggl\langle \frac{1}{\tau} \tilde{x}+ A^{*}A\tilde{x}-A^{*}b, y-\tilde {x} \biggr\rangle \geq0, \quad\forall y\in\bigcap_{n\in\mathbb{N}} \operatorname{Fix}(T_{n}). $$

Note that, unlike the main theorem in [9] (see also [11, 12]), we do not assume any commutativity hypotheses on the mappings.
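The next sketch runs iteration (1.8) on a toy instance of (1.5)-(1.7); the data A, b, the box C, and the constant family \(T_{n}=P_{C}\) (which trivially satisfies (h1)-(h2)) are all illustrative assumptions.

```python
import numpy as np

# Sketch of iteration (1.8) on a toy instance (illustrative assumptions only):
# A keeps the first coordinate, b = (2, 0), and T_n = P_C is, for every n, the
# projection onto the box C = [0, 1]^2 (a constant family satisfying (h1)-(h2)).
A = np.array([[1.0, 0.0], [0.0, 0.0]])    # ||A|| <= 1, so I - A^*A is nonexpansive
b = np.array([2.0, 0.0])
P_C = lambda z: np.clip(z, 0.0, 1.0)

x = np.array([5.0, 5.0])
for n in range(5, 200001):                # start at n = 5 so that alpha_n, beta_n < 1
    alpha = mu = n ** (-0.25)
    beta = 2.0 * n ** (-0.5)              # hence beta_n / (alpha_n mu_n) -> tau = 2
    y = beta * (x - A.T @ (A @ x)) + (1 - beta) * x + beta * (A.T @ b)
    x = alpha * (1 - mu) * x + (1 - alpha) * P_C(y)
print(x)   # expected to approach the regularized solution, here approximately (1, 0)
```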

2 A general iterative method

In this section we study the convergence of the following general two-step iterative algorithm in a Hilbert space H:

$$ \begin{cases} y_{n}=\beta_{n}Sx_{n}+(1-\beta_{n})x_{n},\\ x_{n+1}=\alpha_{n} (I-\mu_{n}D)x_{n} + (1-\alpha_{n})W_{n} y_{n}, & n\geq1, \end{cases} $$
(2.1)

where

  • \(D:H\to H\) is a σ-strongly monotone and L-Lipschitzian operator on H, i.e., D satisfies

    $$\langle Dx-Dy,x-y\rangle\geq\sigma\|x-y\|^{2} \quad\mbox{and}\quad \|Dx-Dy\| \leq L\|x-y\|. $$
  • \(S:H\to H\) is a nonexpansive mapping.

  • \((W_{n})_{n\in\mathbb{N}}\) is defined on H and such that

    1. (h1)

      \(W_{n}:H\to H\) are nonexpansive mappings, uniformly asymptotically regular on bounded subsets \(B\subset H\), i.e.,

      $$\lim_{n\to\infty}\sup_{x\in B}\|W_{n+1}x-W_{n}x \|=0, $$
    2. (h2)

      it is possible to define a nonexpansive mapping \(W:H\to H\), with \(Wx:=\lim_{n\to\infty}W_{n}x\) such that if \(F:=\bigcap_{n\in\mathbb{N}} \operatorname{Fix}(W_{n})\neq\emptyset\) then \(\operatorname{Fix}(W)=F\).

  • The coefficients \((\alpha_{n})_{n\in\mathbb{N}}\subset(0,\alpha ]\subset(0,1)\), \((\beta_{n})_{n\in\mathbb{N}}\subset(0,1)\) and \((\mu _{n})_{n\in\mathbb{N}}\subset(0,\mu)\), where \(\mu<\frac {2\sigma}{L^{2}}\).

Remark 2.1

If \((T_{n})_{n\in\mathbb{N}}\) does not satisfy (h1) and (h2), then it is always possible to construct a family of nonexpansive mappings \((W_{n})_{n\in\mathbb{N}}\) satisfying (h1) and (h2) and such that \(\bigcap_{n\in\mathbb{N}} \operatorname{Fix}(T_{n})=\bigcap_{n\in\mathbb{N}} \operatorname{Fix}(W_{n})\) (see [13, 14]).
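A compact Python sketch of scheme (2.1), with D, S and the family \((W_{n})_{n\in\mathbb{N}}\) passed as callables, may help to keep track of the two steps; the concrete toy choices at the bottom are assumptions of this illustration only.

```python
def two_step_scheme(D, S, W, x0, alphas, betas, mus, n_iter):
    """Sketch of scheme (2.1):
       y_n     = b_n * S(x_n) + (1 - b_n) * x_n,
       x_{n+1} = a_n * (x_n - m_n * D(x_n)) + (1 - a_n) * W(n, y_n)."""
    x = x0
    for n in range(1, n_iter + 1):
        a, b, m = alphas(n), betas(n), mus(n)
        y = b * S(x) + (1 - b) * x
        x = a * (x - m * D(x)) + (1 - a) * W(n, y)
    return x

# Toy instance on H = R (illustrative assumptions): D = I, S = I and W_n = projection
# onto [1, 2] for every n; Theorem 2.2(1) below then predicts convergence to the
# minimum-norm common fixed point x* = 1.
x = two_step_scheme(D=lambda x: x, S=lambda x: x,
                    W=lambda n, y: min(max(y, 1.0), 2.0),
                    x0=5.0,
                    alphas=lambda n: 1.0 / (n + 1) ** 0.5,
                    betas=lambda n: 1.0 / (n + 1) ** 2,
                    mus=lambda n: 1.0 / (n + 1) ** 0.5,
                    n_iter=100000)
print(x)   # expected to approach 1
```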

All the previous results easily follow from our main theorem below.

Theorem 2.2

Let H be a Hilbert space. Let D, S, \((W_{n})_{n\in\mathbb{N}}\) be defined as above. Then:

(1) Let \(\tau=\lim_{n\to\infty}\frac{\beta _{n}}{\alpha_{n}\mu_{n}}=0\). Assume that

  1. (H1)

    \(\lim_{n\to\infty}\alpha_{n}=0\), \(\sum_{n\in\mathbb{N}}\alpha_{n}\mu_{n}=\infty\);

  2. (H2)

    \(\sup_{z\in B} \|W_{n}z-W_{n-1}z\|=o({\alpha_{n}\mu_{n}})\), where \(B\subset H\) is bounded;

  3. (H3)

    \(|\mu_{n-1}-\mu_{n}|=o(\mu_{n})\) and \(|\alpha_{n-1}-\alpha_{n}|=o(\alpha_{n}\mu_{n})\).

(These hypotheses are satisfied, for instance, by \(\alpha _{n}=\mu_{n}=\frac{1}{\sqrt{n}}\) and \(\beta_{n}=\frac{1}{n^{2}}\), \(n\geq1\).)

Then the sequence generated by iteration (2.1) strongly converges to \(x^{*}\in F\) that is the unique solution of the variational inequality

$$\bigl\langle Dx^{*},y-x^{*}\bigr\rangle \geq0,\quad \forall y\in F. $$
(2.2)

(2) Let us suppose \(\lim_{n\to\infty}\frac{\beta _{n}}{\alpha_{n}\mu_{n}}=\tau\in(0,+\infty)\). Assume that

  1. (H1)

    \(\lim_{n\to\infty}\alpha_{n}=0\), \(\sum_{n\in\mathbb{N}}\alpha_{n}\mu_{n}=\infty\);

  2. (H4)

    \(\sup_{z\in B} \|W_{n}z-W_{n-1}z\|=o(\alpha_{n}\mu_{n}\beta_{n})\), where \(B\subset H\) is bounded;

  3. (H5)

    \(|\beta_{n}-\beta_{n-1}|=o(\alpha_{n}\beta_{n}\mu _{n})\), \(|\mu_{n}-\mu_{n-1}|=o(\alpha_{n}\beta_{n}\mu_{n})\) and \(|\alpha_{n}-\alpha _{n-1}|=o(\alpha_{n}\beta_{n}\mu_{n})\);

  4. (H6)

    \(\vert \frac{1}{\beta_{n}}-\frac{1}{\beta _{n-1}}\vert =O(\alpha_{n}\mu_{n})\).

(These hypotheses are satisfied, for instance, by \(\alpha_{n}=\mu_{n}=\frac{1}{\sqrt[4]{n}}\) and \(\beta_{n}=\frac{2}{\sqrt{n}}\), \(n\geq1\).)

Then \(x_{n}\to\tilde{x}\), as \(n\to\infty\), where \(\tilde{x}\in F\) is the unique solution of the variational inequality

$$ \biggl\langle \frac{1}{\tau}D\tilde{x}+(I-S)\tilde{x},y- \tilde{x}\biggr\rangle \geq 0,\quad \forall y\in F. $$
(2.3)

(3) Let \(\tau=\lim_{n\to\infty}\frac{\beta_{n}}{\alpha_{n}\mu_{n}}=\infty\) and \(\operatorname{Fix}(S)\cap F\neq\emptyset\). Let us suppose that

  1. (H1)

    \(\lim_{n\to\infty}\alpha_{n}=0\), \(\sum_{n\in\mathbb{N}}\alpha_{n}\mu_{n}=\infty\);

  2. (H2)

    \(\sup_{z\in B} \|W_{n}z-W_{n-1}z\|=o({\alpha_{n}\mu_{n}})\), where \(B\subset H\) is bounded;

  3. (H7)

    \(|\mu_{n-1}-\mu_{n}|=o(\mu_{n})\), and \(|\alpha _{n-1}-\alpha_{n}|=o(\alpha_{n}\mu_{n})\).

If \(\beta_{n}\to\beta\neq0\), as \(n\to\infty\), and \(|\beta _{n-1}-\beta_{n}|=o(\alpha_{n}\mu_{n})\), then the sequence generated by iteration (2.1) strongly converges to \(x^{*}\in F\cap \operatorname{Fix}(S)\) that is the unique solution of the variational inequality

$$\bigl\langle Dx^{*},y-x^{*}\bigr\rangle \geq0, \quad\forall y\in F\cap \operatorname{Fix}(S). $$
(2.4)

Proof

We give the proof in the next (and last) section. □

Proof of Theorem 1.1

It follows from Theorem 2.2(1) by choosing \(D=I\), \(\mu=1\), \(W_{n}=T\) and \(S=I\). □

Proof of Theorem 1.3

If we take \(Dx=x-u\), \(S=I\), \(\mu=1\) and \(W_{n}=T\), the proof follows from Theorem 2.2(1). □

Proof of Corollary 1.6

It easily follows from Corollary 1.5 with \(A=\nabla\phi\) and \(\delta=\frac{1}{L}\). □

Remark 2.3

It is interesting to note that the convergence in Corollary 1.6 can be obtained from Theorem 2.2(3) when \(S=W_{n}=(I-\delta A)\).

The last application of our main theorem concerns the problem of minimizing a quadratic function over a closed and convex subset C of H:

$$\min_{x \in C} \frac{1}{2} \langle Ax,x \rangle-h(x), $$
(2.5)

where h is a potential function for a contraction mapping f, i.e., \(h'= f\) on H (for references, one can read [15, 16]).

Let A be a strongly positive bounded linear operator on H, i.e., there exists \(\bar{\gamma}>0\) such that \(\langle Ax,x\rangle\geq \bar{\gamma}\|x\|^{2}\) for all \(x \in H\).

Let us take as the subset C the set of common fixed points of a given semigroup of nonexpansive mappings. Let \(\mathfrak{T}=\{T(t):t\geq0\}\) be a one-parameter continuous semigroup of nonexpansive mappings defined on H with a nonempty common fixed point set F. Let \((\lambda_{n})_{n\in \mathbb{N}}\) be a sequence in \((0,1)\) such that \(\lim_{n\to\infty}\lambda_{n}=\lambda\in (0,1)\).

We know, thanks to a result of Suzuki [17], that \(W_{n}x:=\lambda_{n} T(1)x+(1-\lambda_{n})T(\sqrt{2})x\) is a nonexpansive mapping such that \(\operatorname{Fix}(W_{n})= \operatorname{Fix}(T(1))\cap \operatorname{Fix}(T(\sqrt{2}))=F\). Moreover,

$$\|W_{n+1}x-W_{n}x\|\leq|\lambda_{n+1}- \lambda_{n}|\bigl\| T(1)x-T(\sqrt{2})x\bigr\| . $$

Further, if x lies in a bounded set \(B\subset H\), the uniform asymptotic regularity on B follows. If \(Sx:=\lambda T(1)x+(1-\lambda)T(\sqrt{2})x\), then \(\operatorname{Fix}(S)=F\) and, for all \(x\in H\),

$$\lim_{n\to\infty}W_{n}x=Sx. $$

In light of [16, 18, 19], we consider

$$ \min_{x \in F} \frac{1}{2} \langle Ax,x \rangle-h(x). $$
(2.6)

We are able to prove the following new convergence result.

Theorem 2.4

Let us suppose that

  • \(\lim_{n\to\infty}\alpha_{n}=0\), \(\sum_{n\in\mathbb{N}}\alpha_{n}\mu_{n}=\infty\);

  • \(|\lambda_{n+1}-\lambda_{n}|=o({\alpha_{n}\mu_{n}})\);

  • \(|\mu_{n-1}-\mu_{n}|=o(\mu_{n})\), and \(|\alpha _{n-1}-\alpha_{n}|=o(\alpha_{n}\mu_{n})\).

If \(\beta_{n}\to\beta\neq0\), as \(n\to\infty\), and \(|\beta _{n-1}-\beta_{n}|=o(\alpha_{n}\mu_{n})\), then the sequence generated by iteration (2.1), i.e.,

$$ \begin{cases} x_{n+1}=\alpha_{n} (I-\mu_{n}A)x_{n}+\alpha_{n}\mu_{n} f(x_{n}) + (1-\alpha_{n})W_{n} y_{n}, & n\geq1,\\ y_{n}=\beta_{n}(\lambda T(1)+(1-\lambda) T(\sqrt{2}))x_{n}+(1-\beta_{n})x_{n} \end{cases} $$
(2.7)

strongly converges to \(x^{*}\in F\) that is the unique solution of the variational inequality

$$ \bigl\langle (A-\gamma f)x^{*},y-x^{*}\bigr\rangle \geq0,\quad \forall y\in F, $$
(2.8)

which is the optimality condition to solve

$$ \min_{x \in F} \frac{1}{2} \langle Ax,x\rangle-h(x). $$

Proof of Theorem 2.4

It easily follows from Theorem 2.2(3) with \(S=\lambda T(1)+(1-\lambda)T(\sqrt{2})\), \(W_{n}=\lambda_{n} T(1)+(1-\lambda_{n})T(\sqrt{2})\) and \(D=A-f\) (see [16]). □
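To illustrate Theorem 2.4, the following sketch runs iteration (2.7) for a toy semigroup on \(H=\mathbb{R}^{2}\); the semigroup \(T(t)(x_{1},x_{2})=(e^{-t}x_{1},x_{2})\) (so that \(F=\{0\}\times\mathbb{R}\)), the operator \(A=I\), the contraction \(f(z)=\frac{1}{2}z+(0.2,0.2)\) and the coefficients are illustrative assumptions; with \(D=A-f\), as in the proof above, the corresponding variational inequality on F is solved by \((0,0.4)\) for these data.

```python
import numpy as np

# Sketch of iteration (2.7) on a toy semigroup (illustrative assumptions only).
T = lambda t, z: np.array([np.exp(-t) * z[0], z[1]])   # T(t)(x1, x2) = (e^{-t} x1, x2)
A = lambda z: z                                        # strongly positive (A = I)
f = lambda z: 0.5 * z + np.array([0.2, 0.2])           # a contraction

x = np.array([5.0, -3.0])
for n in range(1, 100001):
    a = m = 1.0 / np.sqrt(n + 1)
    beta = 0.5                              # beta_n -> beta = 1/2 != 0
    lam_n, lam = 0.5 + 1.0 / (n + 2), 0.5   # lambda_n -> lambda = 1/2
    y = beta * (lam * T(1.0, x) + (1 - lam) * T(np.sqrt(2.0), x)) + (1 - beta) * x
    Wn_y = lam_n * T(1.0, y) + (1 - lam_n) * T(np.sqrt(2.0), y)
    x = a * (x - m * A(x)) + a * m * f(x) + (1 - a) * Wn_y
print(x)   # expected to approach approximately (0, 0.4)
```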

3 Proof of Theorem 2.2

Lemma 3.1

Let \((x_{n})_{n\in\mathbb{N}}\) be defined by iteration (2.1) and \((\alpha_{n})_{n\in\mathbb{N}}, (\beta_{n})_{n\in\mathbb{N}}\subset [0,1]\) and \((\mu_{n})_{n\in\mathbb{N}}\subset(0,\mu)\). Assume that

  1. (H0)

    \(\beta_{n}=O(\alpha_{n}\mu_{n})\)

holds. Then \((x_{n})_{n\in\mathbb{N}}\) and \((y_{n})_{n\in\mathbb{N}}\) are bounded.

Proof

Putting \(B_{n}:=(I-\mu_{n}D)\), we then have

$$\bigl\| (I-\mu_{n}D)x-(I-\mu_{n}D)y\bigr\| \leq(1-\mu_{n}\rho) \|x-y\|, $$

i.e., \((I-\mu_{n}D)\) is a \((1-\mu_{n}\rho)\)-contraction for a suitable constant \(\rho>0\) depending only on \(\sigma\), \(L\) and \(\mu\) (see [20]). Let \(z \in F\). By (H0) there exist \(\gamma>0\) and \(N_{0}\in\mathbb{N}\) such that \(\beta_{n}\leq\gamma\alpha_{n}\mu_{n}\) for every \(n\geq N_{0}\); then, for \(n\geq N_{0}\), we have

$$\begin{aligned} \|x_{n+1}-z\| \leq& \alpha_{n}\|B_{n}x_{n}-B_{n}z \|+\alpha_{n}\|B_{n}z-z\|+(1-\alpha_{n}) \|W_{n}y_{n}-z\| \\ \leq& \alpha_{n}(1-\mu_{n}\rho)\|x_{n}-z\|+ \alpha_{n}\mu_{n}\|Dz\|+(1-\alpha _{n}) \beta_{n}\|Sx_{n}-z\| \\ &{}+(1-\alpha_{n}) (1-\beta_{n})\|x_{n}-z\| \\ \leq&\alpha_{n}(1-\mu_{n}\rho)\|x_{n}-z\|+ \alpha_{n}\mu_{n}\|Dz\|+(1-\alpha _{n}) \beta_{n}\|Sz-z\| +(1-\alpha_{n})\|x_{n}-z\| \\ \leq&(1-\mu_{n}\alpha_{n}\rho)\|x_{n}-z\|+ \alpha_{n}\mu_{n}\|Dz\|+\beta_{n}\| Sz-z\| \\ \leq&(1-\mu_{n}\alpha_{n}\rho)\|x_{n}-z\|+ \alpha_{n}\mu_{n}\bigl(\|Dz\|+\gamma\| Sz-z\|\bigr)\\ &{}\mbox{(by convexity of the norm)} \\ \leq& \max \biggl\{ \|x_{n}-z\|,\frac{\|Dz\|+\gamma\|Sz-z\|}{\rho} \biggr\} . \end{aligned}$$

So, by an inductive process, one can see that

$$\|x_{n}-z\|\leq \max \biggl\{ \|x_{i}-z\|, \frac{\|Dz\|+\gamma\|Sz-z\|}{\rho}: i=0,\ldots ,N_{0} \biggr\} . $$

Consequently, \((y_{n})_{n\in\mathbb{N}}\) is bounded too. □

We recall the following lemma.

Lemma 3.2

Under the hypotheses of Theorem 2.2(1), the sequence generated by \(z_{0}\in H\) and the iteration

$$z_{n+1}=\alpha_{n}(I-\mu_{n}D)z_{n}+(1- \alpha_{n})W_{n}z_{n} $$

strongly converges to \(x^{*}\in F\) that is the unique solution of the variational inequality

$$ \bigl\langle Dx^{*},y-x^{*}\bigr\rangle \geq0,\quad \forall y\in F. $$
(3.1)

Proof

The proof is given in [13, Theorem 2.6]. □

Proof of Theorem 2.2

Proof of 1. Note that, since \(\tau=0\), (H0) holds, and hence \((x_{n})_{n\in\mathbb{N}}\) is bounded by Lemma 3.1. Let us consider the iteration generated by

$$ \begin{cases} z_{0}=x_{0},\\ z_{n+1}=\alpha_{n} (I-\mu_{n}D)z_{n} + (1-\alpha_{n})W_{n} z_{n}, & n\geq1. \end{cases} $$
(3.2)

By Lemma 3.2, \((z_{n})_{n\in\mathbb{N}}\) strongly converges to the unique solution of VIP (3.1). Then if we compute

$$\begin{aligned} \|x_{n+1}-z_{n+1}\| \leq&\alpha_{n} \|B_{n}x_{n}-B_{n}z_{n}\|+(1- \alpha_{n})\| W_{n}y_{n}-W_{n}z_{n} \| \\ \leq&\alpha_{n}(1-\mu_{n}\rho)\|x_{n}-z_{n} \|+(1-\alpha_{n})\|y_{n}-z_{n}\| \\ \leq&\alpha_{n}(1-\mu_{n}\rho)\|x_{n}-z_{n} \|+(1-\alpha_{n})\beta_{n}\|Sx_{n}-z_{n}\| +(1-\alpha_{n})\|x_{n}-z_{n}\| \\ \leq&(1-\alpha_{n}\mu_{n}\rho)\|x_{n}-z_{n} \|+(1-\alpha_{n})\beta_{n}\|Sx_{n}-z_{n}\| \\ \leq&(1-\alpha_{n}\mu_{n}\rho)\|x_{n}-z_{n} \|+\beta_{n}O(1). \end{aligned}$$

Calling \(s_{n}:=\|x_{n}-z_{n}\|\), \(a_{n}:=\alpha_{n}\mu_{n}\rho\), we have that

$$s_{n+1}\leq(1-a_{n})s_{n}+\beta_{n}O(1). $$

Since \(\sum_{n}\alpha_{n}\mu_{n}=\infty\) and \(\tau=0\), we can apply Xu’s Lemma 2.5 in [19] to obtain the required result.
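For the reader's convenience, we recall the statement of Xu's Lemma 2.5 in [19], in the form in which it is applied here and in the rest of the proof: if \((s_{n})_{n\in\mathbb{N}}\) is a sequence of nonnegative real numbers such that

$$s_{n+1}\leq(1-a_{n})s_{n}+a_{n}\delta_{n},\quad n\geq0, $$

where \((a_{n})_{n\in\mathbb{N}}\subset(0,1)\), \(\sum_{n}a_{n}=\infty\) and either \(\limsup_{n\to\infty}\delta_{n}\leq0\) or \(\sum_{n}|a_{n}\delta_{n}|<\infty\), then \(s_{n}\to0\). In the estimate above it is applied with \(a_{n}=\alpha_{n}\mu_{n}\rho\) and \(\delta_{n}=\frac{\beta_{n}}{\alpha_{n}\mu_{n}\rho}O(1)\), which tends to 0 precisely because \(\tau=0\).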

Proof of 2. It is not difficult to observe that, by Byrne [10], \((I-S)\) is a \(\frac{1}{2}\)-inverse strongly monotone operator, so \((\frac{1}{\tau}D+(I-S) )\) is a \(\frac{\sigma}{\tau}\)-strongly monotone operator. Then (VIP) (2.3) has a unique solution by the celebrated results of Browder and Petryshyn [21] and by Theorem 13.1 in Deimling [22].
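Indeed, for all \(x,y\in H\),

$$\biggl\langle \biggl(\frac{1}{\tau}D+(I-S) \biggr)x- \biggl(\frac{1}{\tau}D+(I-S) \biggr)y,x-y \biggr\rangle \geq\frac{\sigma}{\tau}\|x-y\|^{2}, $$

since \(\langle Dx-Dy,x-y\rangle\geq\sigma\|x-y\|^{2}\) and \(\langle (I-S)x-(I-S)y,x-y\rangle\geq0\).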

We next prove that \((x_{n})_{n\in\mathbb{N}}\) is asymptotically regular with respect to \((\beta_{n})_{n\in\mathbb{N}}\), i.e.,

$$\lim_{n\to\infty}\frac{\|x_{n+1}-x_{n}\|}{\beta_{n}}=0. $$

In order to prove the previous limit, we first compute

$$\begin{aligned} \|x_{n+1}-x_{n}\| \leq& \alpha_{n}\|B_{n}x_{n}-B_{n-1}x_{n-1} \|+\| B_{n-1}x_{n-1}-W_{n-1}y_{n-1}\|| \alpha_{n}-\alpha_{n-1}| \\ &{}+(1-\alpha_{n})\|W_{n}y_{n}-W_{n-1}y_{n-1} \| \\ \leq&\alpha_{n}\|B_{n}x_{n}-B_{n}x_{n-1} \|+\alpha_{n}\| B_{n}x_{n-1}-B_{n-1}x_{n-1} \| \\ &{}+\|B_{n-1}x_{n-1}-W_{n-1}y_{n-1} \||\alpha_{n}-\alpha_{n-1}| \\ &{}+(1-\alpha_{n})\|W_{n}y_{n}-W_{n}y_{n-1} \|+(1-\alpha_{n})\| W_{n}y_{n-1}-W_{n-1}y_{n-1} \| \\ \leq&\alpha_{n}(1-\mu_{n}\rho) \|x_{n}-x_{n-1}\|+\alpha_{n}|\mu_{n}-\mu _{n-1}|\|Dx_{n-1}\| \\ &{}+\|B_{n-1}x_{n-1}-W_{n-1}y_{n-1}\|| \alpha_{n}-\alpha_{n-1}| \\ &{}+(1-\alpha_{n})\|y_{n}-y_{n-1}\|+ \|W_{n}y_{n-1}-W_{n-1}y_{n-1}\|. \end{aligned}$$
(3.3)

By definition of \(y_{n}\) one obtains that

$$\begin{aligned} \|y_{n}-y_{n-1}\| \leq& \beta_{n}\|Sx_{n}-Sx_{n-1}\|+\|Sx_{n-1}-x_{n-1} \| |\beta_{n}-\beta_{n-1}| +(1-\beta_{n})\|x_{n}-x_{n-1}\| \\ \leq&\beta_{n}\|x_{n}-x_{n-1}\|+ \|Sx_{n-1}-x_{n-1}\||\beta_{n}-\beta _{n-1}| \\ &{}+(1-\beta_{n})\|x_{n}-x_{n-1}\| \\ =&\|x_{n}-x_{n-1}\|+\|Sx_{n-1}-x_{n-1}\|| \beta_{n}-\beta_{n-1}|. \end{aligned}$$
(3.4)

So, substituting (3.4) in (3.3), we obtain

$$\begin{aligned} \|x_{n+1}-x_{n}\| \leq& \alpha_{n}(1-\mu_{n}\rho)\|x_{n}-x_{n-1}\| \\ &{}+\bigl(\alpha_{n}|\mu_{n}-\mu_{n-1}|+| \alpha_{n}-\alpha_{n-1}|+|\beta _{n}- \beta_{n-1}|\bigr)O(1) \\ &{}+(1-\alpha_{n})\|x_{n}-x_{n-1}\|+ \|W_{n}y_{n-1}-W_{n-1}y_{n-1}\| \\ =&(1-\mu_{n}\alpha_{n}\rho)\|x_{n}-x_{n-1} \|+\|W_{n}y_{n-1}-W_{n-1}y_{n-1}\| \\ &{}+\bigl(\alpha_{n}|\mu_{n}-\mu_{n-1}|+| \alpha_{n}-\alpha_{n-1}|+|\beta _{n}- \beta_{n-1}|\bigr)O(1). \end{aligned}$$
(3.5)

Let us observe that by (H5)

$$\lim_{n\to\infty} \frac{\alpha_{n}|\mu_{n}-\mu_{n-1}|+|\alpha_{n}-\alpha _{n-1}|+|\beta_{n}-\beta_{n-1}|}{\alpha_{n}\mu_{n}}=0 $$

and (H4) guarantees that

$$\lim_{n\to\infty}\frac{\|W_{n}y_{n-1}-W_{n-1}y_{n-1}\|}{\alpha_{n}\mu_{n}}=0. $$

Putting \(s_{n}:=\|x_{n}-x_{n-1}\|\), \(a_{n}:=\mu_{n}\alpha_{n}\rho\) and \(b_{n}=\| W_{n}y_{n-1}-W_{n-1}y_{n-1}\|+(\alpha_{n}|\mu_{n}-\mu_{n-1}|+|\alpha_{n}-\alpha _{n-1}|+|\beta_{n}-\beta_{n-1}|)O(1)\), we can write (3.5) as

$$s_{n+1}\leq(1-a_{n})s_{n}+b_{n}. $$

Thus (H1), (H4) and (H5) are enough to apply Xu’s Lemma 2.5 in [19] to assure that \((x_{n})_{n\in\mathbb{N}}\) is asymptotically regular. Moreover, dividing by \(\beta_{n}\) in (3.5), one observes that

$$\begin{aligned} \frac{\|x_{n+1}-x_{n}\|}{\beta_{n}} \leq&(1-\mu_{n}\alpha_{n}\rho) \frac{\| x_{n}-x_{n-1}\|}{\beta_{n}}+\frac{\|W_{n}y_{n-1}-W_{n-1}y_{n-1}\|}{\beta_{n}}\\ &{}+\frac{\alpha_{n}|\mu_{n}-\mu_{n-1}|+|\alpha_{n}-\alpha _{n-1}|+|\beta_{n}-\beta_{n-1}|}{\beta_{n}}M\\ \leq&(1-\mu_{n}\alpha_{n}\rho)\frac{\|x_{n}-x_{n-1}\|}{\beta_{n}}+\| x_{n-1}-x_{n}\|\biggl\vert \frac{1}{\beta_{n}}- \frac{1}{\beta_{n-1}}\biggr\vert \\ &{}+\frac{\|W_{n}y_{n-1}-W_{n-1}y_{n-1}\|}{\beta_{n}}\\ &{}+M \biggl[\frac{|\alpha_{n}-\alpha_{n-1}|}{\beta_{n}}+\frac{\alpha_{n}|\mu _{n}-\mu_{n-1}|}{\beta_{n}}+\frac{|\beta_{n}-\beta_{n-1}|}{\beta_{n}} \biggr]\\ \mbox{by (H6) } \leq&(1-\mu_{n}\alpha_{n}\rho) \frac{\|x_{n}-x_{n-1}\|}{\beta _{n-1}}+O(\alpha_{n}\mu_{n})\|x_{n-1}-x_{n} \|\\ &{}+\frac{\|W_{n}y_{n-1}-W_{n-1}y_{n-1}\|}{\beta_{n}} \\ &{}+M \biggl[\frac{|\alpha_{n}-\alpha_{n-1}|}{\beta_{n}}+\frac{\alpha_{n}|\mu _{n}-\mu_{n-1}|}{\beta_{n}}+\frac{|\beta_{n}-\beta_{n-1}|}{\beta_{n}} \biggr]. \end{aligned}$$

Since (H1), (H4) and (H5) hold, by using again Xu’s Lemma 2.5 in [19], we have

$$\lim_{n\to\infty}\frac{\|x_{n+1}-x_{n}\|}{\beta_{n}}=0. $$

Moreover, exploiting the asymptotic regularity of \((x_{n})_{n\in\mathbb{N}}\), we show that the set of weak cluster points satisfies \(\omega_{w}(x_{n})\subset F\). Let \(p_{0}\in\omega_{w}(x_{n})\) and let \((x_{n_{k}})_{k\in\mathbb{N}}\) be a subsequence of \((x_{n})_{n\in\mathbb{N}}\) weakly converging to \(p_{0}\). If \(p_{0}\notin F\), then by the Opial property of a Hilbert space

$$\begin{aligned} \liminf_{k\to\infty}\|x_{n_{k}}-p_{0}\| < &\liminf _{k\to\infty}\| x_{n_{k}}-Wp_{0}\| \\ \leq&\liminf_{k\to\infty} \bigl[\|x_{n_{k}}-x_{n_{k}+1}\|+\| x_{n_{k}+1}-W_{n_{k}}y_{n_{k}}\| \\ &{} +\|W_{n_{k}}y_{n_{k}}-W_{n_{k}}p_{0}\|+ \|W_{n_{k}}p_{0}-Wp_{0}\| \bigr] \\ \leq&\liminf_{k\to\infty} \bigl[\|x_{n_{k}}-x_{n_{k}+1}\|+ \alpha_{n_{k}}\| B_{n_{k}}x_{n_{k}}-W_{n_{k}}y_{n_{k}} \| \\ &{} +\|y_{n_{k}}-p_{0}\|+\|W_{n_{k}}p_{0}-Wp_{0} \| \bigr]\leq\liminf_{k\to \infty}\|y_{n_{k}}-p_{0}\| \\ \leq&\liminf_{k\to\infty}\bigl[\beta_{n_{k}}\|Sx_{n_{k}}-p_{0} \|+\|x_{n_{k}}-p_{0}\|\bigr]. \end{aligned}$$

Since \(\beta_{n}\to0\) as \(n\to\infty\), it follows that

$$\liminf_{k\to\infty}\|x_{n_{k}}-p_{0}\|< \liminf _{k\to\infty}\| x_{n_{k}}-Wp_{0}\|\leq\liminf _{k\to\infty}\|x_{n_{k}}-p_{0}\|, $$

which is absurd. Then \(p_{0}\in F\).

On the other hand,

$$\begin{aligned} x_{n+1}-x_{n} =&\alpha_{n}(B_{n}x_{n}-x_{n})+(1- \alpha_{n}) (W_{n}y_{n}-x_{n}) \\ =&-\alpha_{n}\mu_{n}Dx_{n}+(1- \alpha_{n}) (W_{n}y_{n}-y_{n})+(1- \alpha_{n}) (y_{n}-x_{n}) \\ =&-\alpha_{n}\mu_{n}Dx_{n}+(1- \alpha_{n}) (W_{n}y_{n}-y_{n})+(1- \alpha_{n})\beta_{n}(Sx_{n}-x_{n}), \end{aligned}$$

so that we define

$$\begin{aligned} v_{n}:=\frac{x_{n}-x_{n+1}}{(1-\alpha_{n})\beta_{n}}=(I-S)x_{n}+ \frac{1}{\beta _{n}}(I-W_{n})y_{n}+\frac{\alpha_{n}\mu_{n}}{(1-\alpha_{n})\beta_{n}}Dx_{n}. \end{aligned}$$
(3.6)

Since \((x_{n})_{n\in\mathbb{N}}\) is asymptotically regular with respect to \((\beta_{n})_{n\in\mathbb{N}}\) and \(\alpha_{n}\leq\alpha<1\), the sequence \(v_{n}=\frac{x_{n}-x_{n+1}}{(1-\alpha_{n})\beta_{n}}\) is a null sequence as \(n\to\infty\).

Now we prove that \(\omega_{w}(x_{n})=\omega_{s}(x_{n})\), i.e., every weak limit is a strong limit too. We only need to prove that \(\omega _{w}(x_{n})\subset\omega_{s}(x_{n})\).

Let us fix \(z\in \omega_{w}(x_{n})\); then \(z\in F\), and by (3.6) it follows that

$$\begin{aligned} \langle v_{n},x_{n}-z\rangle =&\bigl\langle (I-S)x_{n},x_{n}-z\bigr\rangle +\frac{1}{\beta_{n}}\bigl\langle (I-W_{n})y_{n},x_{n}-z\bigr\rangle +\frac{\alpha_{n}\mu_{n}}{(1-\alpha_{n})\beta_{n}}\langle Dx_{n},x_{n}-z\rangle \\ =&\bigl\langle (I-S)x_{n}-(I-S)z,x_{n}-z\bigr\rangle +\bigl\langle (I-S)z,x_{n}-z\bigr\rangle \\ &{}+\frac{1}{\beta_{n}}\bigl\langle (I-W_{n})y_{n},x_{n}-y_{n} \bigr\rangle +\frac{1}{\beta _{n}}\bigl\langle (I-W_{n})y_{n},y_{n}-z \bigr\rangle \\ &{}+\frac{\alpha_{n}\mu_{n}}{(1-\alpha_{n})\beta_{n}}\langle Dx_{n}-Dz,x_{n}-z\rangle+ \frac{\alpha_{n}\mu_{n}}{(1-\alpha_{n})\beta_{n}}\langle Dz,x_{n}-z\rangle. \end{aligned}$$

Since the operator \((I-W_{n})\) is monotone for all \(n\in\mathbb{N}\), we obtain that

$$\begin{aligned} \langle v_{n},x_{n}-z\rangle \geq&\bigl\langle (I-S)z,x_{n}-z \bigr\rangle +\frac{1}{\beta _{n}}\bigl\langle (I-W_{n})y_{n},x_{n}-y_{n} \bigr\rangle \\ &{}+\frac{1}{\beta_{n}}\bigl\langle (I-W_{n})y_{n}-(I-W_{n})z,y_{n}-z \bigr\rangle \\ &{}+\frac{\alpha_{n}\mu_{n}}{(1-\alpha_{n})\beta_{n}}\langle Dz,x_{n}-z\rangle +\frac{\alpha_{n}\mu_{n}\sigma}{(1-\alpha_{n})\beta_{n}} \|x_{n}-z\|^{2} \\ \geq&\bigl\langle (I-S)z,x_{n}-z\bigr\rangle +\bigl\langle (I-W_{n})y_{n},x_{n}-Sx_{n}\bigr\rangle \\ &{}+\frac{\alpha_{n}\mu_{n}\sigma}{(1-\alpha_{n})\beta_{n}}\|x_{n}-z\|^{2}+\frac {\alpha_{n}\mu_{n}}{(1-\alpha_{n})\beta_{n}} \langle Dz,x_{n}-z\rangle, \end{aligned}$$

and so we can write

$$\begin{aligned} \|x_{n}-z\|^{2} \leq&\frac{(1-\alpha_{n})\beta_{n}}{\alpha_{n}\mu_{n}\sigma }\bigl[\langle v_{n},x_{n}-z\rangle-\bigl\langle (I-S)z,x_{n}-z \bigr\rangle -\bigl\langle (I-W_{n})y_{n},x_{n}-Sx_{n} \bigr\rangle \bigr]\\ &{} -\frac{1}{\sigma}\langle Dz,x_{n}-z\rangle. \end{aligned}$$

Let us note that

$$\begin{aligned} \|y_{n}-W_{n}y_{n}\| \leq&\|y_{n}-x_{n} \|+\|x_{n}-x_{n+1}\|+\|x_{n+1}-W_{n}y_{n} \| \\ \leq&\beta_{n}\|Sx_{n}-x_{n}\|+ \|x_{n}-x_{n+1}\|+\alpha_{n}\|B_{n}x_{n}-W_{n}y_{n} \| \\ \leq&\bigl(\beta_{n}+\|x_{n}-x_{n+1}\|+ \alpha_{n}\bigr)O(1). \end{aligned}$$

So, by the hypotheses, \(\|y_{n}-W_{n}y_{n}\|\to0\) as \(n\to\infty\). Hence, if \((x_{n_{k}})_{k}\) is a subsequence weakly converging to z, it follows that

$$\begin{aligned} \|x_{n_{k}}-z\|^{2} \leq&\frac{(1-\alpha_{n_{k}})\beta_{n_{k}}}{\alpha_{n_{k}}\mu _{n_{k}}\sigma}\bigl[\langle v_{n_{k}},x_{n_{k}}-z\rangle-\bigl\langle (I-S)z,x_{n_{k}}-z \bigr\rangle -\bigl\langle (I-W_{n_{k}})y_{n_{k}},x_{n_{k}}-Sx_{n_{k}} \bigr\rangle \bigr]\\ &{} -\frac{1}{\sigma}\langle Dz,x_{n_{k}}-z\rangle. \end{aligned}$$

Since \(v_{n}\to0\) and \((I-W_{n})y_{n}\to0\) as \(n\to\infty\), then every weak cluster point of \((x_{n})_{n\in\mathbb{N}}\) (that lies in F) is also a strong cluster point.

We now prove that \(\omega_{w}(x_{n})=\omega_{s}(x_{n})\) is a singleton. By the boundedness of \((x_{n})_{n\in\mathbb{N}}\), let \((x_{n_{k}})_{k\in\mathbb{N}}\) be a subsequence of \((x_{n})_{n\in \mathbb{N}}\) converging (weakly and strongly) to \(x'\). For all \(z\in F\), again by (3.6)

$$\begin{aligned} \langle Dx_{n_{k}},x_{n_{k}}-z\rangle =&\frac{(1-\alpha_{n_{k}})\beta _{n_{k}}}{\alpha_{n_{k}}\mu_{n_{k}}}\langle v_{n_{k}},x_{n_{k}}-z\rangle-\frac{(1-\alpha_{n_{k}})\beta_{n_{k}}}{\alpha _{n_{k}}\mu_{n_{k}}}\bigl\langle (I-S)x_{n_{k}},x_{n_{k}}-z\bigr\rangle \\ &{}-\frac{(1-\alpha_{n_{k}})}{\alpha_{n_{k}}\mu_{n_{k}}}\bigl\langle (I-W_{n_{k}})y_{n_{k}},x_{n_{k}}-z \bigr\rangle \\ \mbox{(by monotonicity) } \leq&\frac{(1-\alpha_{n_{k}})\beta_{n_{k}}}{\alpha _{n_{k}}\mu_{n_{k}}}\langle v_{n_{k}},x_{n_{k}}-z \rangle-\frac{(1-\alpha_{n_{k}})\beta_{n_{k}}}{\alpha _{n_{k}}\mu_{n_{k}}}\bigl\langle (I-S)z,x_{n_{k}}-z\bigr\rangle \\ &{}-\frac{(1-\alpha_{n_{k}})\beta_{n_{k}}}{\alpha_{n_{k}}\mu_{n_{k}}}\bigl\langle (I-W_{n_{k}})y_{n_{k}},x_{n_{k}}-y_{n_{k}} \bigr\rangle \\ \leq&\frac{(1-\alpha_{n_{k}})\beta_{n_{k}}}{\alpha_{n_{k}}\mu_{n_{k}}}\langle v_{n_{k}},x_{n_{k}}-z\rangle- \frac{(1-\alpha_{n_{k}})\beta_{n_{k}}}{\alpha _{n_{k}}\mu_{n_{k}}}\bigl\langle (I-S)z,x_{n_{k}}-z\bigr\rangle \\ &{}-\frac{(1-\alpha_{n_{k}})\beta_{n_{k}}}{\alpha_{n_{k}}\mu_{n_{k}}}\bigl\langle (I-W_{n_{k}})y_{n_{k}},x_{n_{k}}-Sx_{n_{k}} \bigr\rangle . \end{aligned}$$

Passing to limit as \(k\to\infty\), we obtain

$$\bigl\langle Dx',x'-z\bigr\rangle \leq-\tau\bigl\langle (I-S)z,x'-z\bigr\rangle \quad \forall z\in F, $$

that is, (2.3) holds. Thus, since (2.3) cannot have more than one solution, it follows that \(\omega_{w}(x_{n})=\omega_{s}(x_{n})=\{\tilde{x}\}\) and this, of course, ensures that \(x_{n}\to\tilde{x}\) as \(n\to\infty\).

Now we investigate the case

$$\tau:=\lim_{n\to\infty}\frac{\beta_{n}}{\alpha_{n}\mu_{n}}=+\infty. $$

Proof of 3. Let \(z \in F\cap \operatorname{Fix}(S)\). Then

$$\begin{aligned} \|x_{n+1}-z\| \leq& \alpha_{n}\|B_{n}x_{n}-B_{n}z \|+\alpha_{n}\|B_{n}z-z\|+(1-\alpha_{n}) \|W_{n}y_{n}-z\| \\ \leq& \alpha_{n}(1-\mu_{n}\rho)\|x_{n}-z\|+ \alpha_{n}\mu_{n}\|Dz\|+(1-\alpha _{n}) \beta_{n}\|Sx_{n}-z\| \\ &{}+(1-\alpha_{n}) (1-\beta_{n})\|x_{n}-z\| \\ \leq&\alpha_{n}(1-\mu_{n}\rho)\|x_{n}-z\|+ \alpha_{n}\mu_{n}\|Dz\|+(1-\alpha_{n})\| x_{n}-z\| \\ \leq&(1-\mu_{n}\alpha_{n}\rho)\|x_{n}-z\|+ \alpha_{n}\mu_{n}\|Dz\|. \end{aligned}$$

So, by an inductive process, one can see that \((x_{n})_{n\in\mathbb{N}}\) is bounded; indeed

$$\|x_{n}-z\|\leq\max \biggl\{ \|x_{0}-z\|,\frac{\|Dz\|}{\rho} \biggr\} =:r. $$

By (3.5) in Proof of 2, we have

$$\begin{aligned} \|x_{n+1}-x_{n}\| \leq&(1-\mu_{n} \alpha_{n}\rho)\|x_{n}-x_{n-1}\|+\| W_{n}y_{n-1}-W_{n-1}y_{n-1}\| \\ &{}+\bigl(\alpha_{n}|\mu_{n}-\mu_{n-1}|+| \alpha_{n}-\alpha_{n-1}|+|\beta _{n}- \beta_{n-1}|\bigr)O(1). \end{aligned}$$
(3.7)

Let us observe that by (H7) we have

$$\lim_{n\to\infty} \frac{\alpha_{n}|\mu_{n}-\mu_{n-1}|+|\alpha_{n}-\alpha _{n-1}|+|\beta_{n}-\beta_{n-1}|}{\alpha_{n}\mu_{n}}=0, $$

and (H2) guarantees that

$$\lim_{n\to\infty}\frac{\|W_{n}y_{n-1}-W_{n-1}y_{n-1}\|}{\alpha_{n}\mu_{n}}=0. $$

Then, calling \(s_{n}=\|x_{n}-x_{n-1}\|\), \(a_{n}=\mu_{n}\alpha_{n}\rho\) and \(b_{n}=\| W_{n}y_{n-1}-W_{n-1}y_{n-1}\|+(\alpha_{n}|\mu_{n}-\mu_{n-1}|+|\alpha_{n}-\alpha _{n-1}|+|\beta_{n}-\beta_{n-1}|)O(1)\), we can write (3.7) as

$$s_{n+1}\leq(1-a_{n})s_{n}+b_{n}, $$

and (H1), (H2) and (H7) are enough to apply Xu’s Lemma 2.5 in [19] and to assure that \((x_{n})_{n\in\mathbb{N}}\) is asymptotically regular.

For every \(v\in \operatorname{Fix}(S)\cap F\), we have

$$\begin{aligned} \|x_{n+1}-v\|^{2} \leq& \alpha_{n}\|B_{n}x_{n}-v\|^{2}+ \|y_{n}-v\|^{2} \\ \leq& \alpha_{n}\|B_{n}x_{n}-v \|^{2}+\beta_{n}\|Sx_{n}-v\|^{2} \\ &{}+(1- \beta_{n})\| x_{n}-v\|^{2}-\beta_{n}(1- \beta_{n})\|Sx_{n}-x_{n}\|^{2} \\ \leq& \alpha_{n}\|B_{n}x_{n}-v\|^{2}+ \|x_{n}-v\|^{2}-\beta_{n}(1-\beta_{n}) \|Sx_{n}-x_{n}\|^{2}. \end{aligned}$$
(3.8)

So, by the boundedness we get

$$\begin{aligned} \beta_{n}(1-\beta_{n}) \|Sx_{n}-x_{n}\|^{2} \leq& \alpha_{n} \|B_{n}x_{n}-v\| ^{2}+\|x_{n}-v \|^{2}-\|x_{n+1}-v\|^{2} \\ \leq&\bigl(\alpha_{n}+\|x_{n}-x_{n+1}\|\bigr)O(1). \end{aligned}$$
(3.9)

Then \(\|Sx_{n}-x_{n}\|\to0\) as \(n\to\infty\) and, by the demiclosedness principle, the weak cluster points of \((x_{n})_{n\in\mathbb{N}}\) are fixed points of S, i.e., \(\omega_{w}(x_{n})\subset \operatorname{Fix}(S)\). Let us show that, moreover, \(\omega_{w}(x_{n})\subset F\). If not, let \(p_{0}\in\omega_{w}(x_{n})\) with \(p_{0}\notin F\). By the Opial property of a Hilbert space, we have

$$\begin{aligned} \liminf_{k\to\infty}\|x_{n_{k}}-p_{0}\| < &\liminf_{k\to\infty}\|x_{n_{k}}-Wp_{0}\| \\ \leq&\liminf_{k\to\infty} \bigl[\|x_{n_{k}}-x_{n_{k}+1}\|+\|x_{n_{k}+1}-W_{n_{k}}y_{n_{k}}\| \\ &{} +\|W_{n_{k}}y_{n_{k}}-W_{n_{k}}p_{0}\|+\|W_{n_{k}}p_{0}-Wp_{0}\| \bigr] \\ \leq&\liminf_{k\to\infty} \bigl[\|x_{n_{k}}-x_{n_{k}+1}\|+\alpha_{n_{k}}\|B_{n_{k}}x_{n_{k}}-W_{n_{k}}y_{n_{k}}\| \\ &{} +\|y_{n_{k}}-p_{0}\|+\|W_{n_{k}}p_{0}-Wp_{0}\| \bigr]\leq\liminf_{k\to\infty}\|y_{n_{k}}-p_{0}\| \\ \leq&\liminf_{k\to\infty}\bigl[\beta_{n_{k}}\|Sx_{n_{k}}-p_{0}\|+(1-\beta_{n_{k}})\|x_{n_{k}}-p_{0}\|\bigr] \\ =&\liminf_{k\to\infty}\|x_{n_{k}}-p_{0}\|, \end{aligned}$$

which is absurd, so \(p_{0}\in F\). To conclude, if z is the unique solution of VIP (2.4), then

$$\begin{aligned} \|x_{n+1}-z\|^{2} =&\bigl\| \alpha_{n} \bigl(B_{n}(x_{n})-B_{n}z\bigr)+\alpha_{n} (B_{n}z-z)+(1-\alpha_{n}) (W_{n}y_{n}-z) \bigr\| ^{2} \\ \leq&\bigl\| \alpha_{n}(B_{n}x_{n}-B_{n}z)+(1- \alpha_{n}) (W_{n}y_{n}-z)\bigr\| ^{2} \\ &{}+2\alpha_{n}\mu_{n}\langle-Dz,x_{n+1}-z\rangle \\ \leq&\alpha_{n}(1-\mu_{n}\rho)\|x_{n}-z \|^{2}+(1-\alpha_{n})\|y_{n}-z\|^{2} \\ &{}+2\alpha_{n}\mu_{n}\langle-Dz,x_{n+1}-z\rangle \\ =&(1-\alpha_{n}\mu_{n}\rho)\|x_{n}-z \|^{2}+2\alpha_{n}\mu_{n}\langle -Dz,x_{n+1}-z\rangle. \end{aligned}$$

Since every weak cluster point of \((x_{n})_{n\in \mathbb{N}}\) lies in \(F\cap \operatorname{Fix}(S)\), then for an opportune subsequence \((x_{n_{k}})\rightharpoonup p_{0}\)

$$\limsup_{n\to\infty}\langle -Dz,x_{n+1}-z\rangle=\lim _{k\to\infty}\langle -Dz,x_{n_{k}}-z\rangle=\langle -Dz,p_{0}-z\rangle\leq0. $$

Thus, calling \(s_{n}:=\|x_{n}-z\|^{2}\), \(a_{n}= \alpha_{n}\mu_{n}\rho\) and \(b_{n}=2\alpha_{n}\mu_{n}\langle-Dz,x_{n+1}-z\rangle\), we can write

$$s_{n+1}\leq(1-a_{n})s_{n}+b_{n}, $$

and by Xu’s Lemma 2.5 in [19], \(x_{n}\to z\) as \(n\to\infty\). □

We are aware that the applications of Theorem 2.2 concern well-known problems and that several iterative approaches to approximate their solutions already exist. Nevertheless, our iterative scheme summarizes many of them under very simple hypotheses on the numerical sequences, and it can be applied to a wide class of mappings thanks to hypotheses (h1) and (h2). The reader could still ask for a comparison between scheme (2.1) and the well-known iterative approaches cited here. We do not know the rate of convergence of our method, and the numerical examples in [13, 23] suggest that, in general, two iterative schemes cannot be compared. However, for the sake of completeness, we include a very simple case which shows that our scheme is faster than Halpern’s scheme.

Example 3.3

Let \(H=\mathbb{R}\), \(u=1\), \(W_{n}x:=-x\) (hence \(F=\{0\}\)), \(Dx:=x-1\), \(Sx:=x-1\). Let \(\alpha_{n}=\frac{1}{2n}\), \(\mu_{n}=1\) and \(z_{1}=2\). Then Halpern’s iterative method

$$z_{n+1}=\alpha_{n} u+(1-\alpha_{n})W_{n} z_{n} $$

becomes

$$z_{n+1}=\frac{1}{2n}- \biggl(1-\frac{1}{2n} \biggr)z_{n}. $$

If \(\beta_{n}=\frac{1}{n^{2}}\), from our scheme (2.1) we obtain

$$x_{n+1}=\frac{1}{2n}- \biggl(1-\frac{1}{2n} \biggr) \biggl(x_{n}-\frac {1}{n^{2}} \biggr). $$

Thus our iterative scheme is slightly faster as shown in Table 1 (see also [24]).

Table 1 Comparison of convergence rate of Halpern’s iteration and iteration (2.1)
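The two recursions of Example 3.3 can be computed with a few lines of Python (a minimal sketch; taking \(x_{1}=z_{1}=2\) and printing ten iterates are choices made only for this illustration):

```python
# Minimal sketch of the two recursions of Example 3.3 (x_1 = z_1 = 2 assumed).
z = x = 2.0
for n in range(1, 11):
    z = 1.0 / (2 * n) - (1 - 1.0 / (2 * n)) * z                    # Halpern
    x = 1.0 / (2 * n) - (1 - 1.0 / (2 * n)) * (x - 1.0 / n ** 2)   # scheme (2.1)
    print(n + 1, z, x)   # both sequences tend to the common fixed point 0
```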

References

  1. Mann, WR: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506-510 (1953)


  2. Reich, S: Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 67(2), 274-276 (1979)


  3. Genel, A, Lindenstrauss, J: An example concerning fixed points. Isr. J. Math. 22(1), 81-86 (1975)


  4. Ishikawa, S: Fixed points and iteration of a nonexpansive mapping in a Banach space. Proc. Am. Math. Soc. 59, 65-71 (1976)


  5. Halpern, B: Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 73, 957-961 (1967)


  6. Moudafi, A: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 241, 46-55 (2000)


  7. Nakajo, K, Takahashi, W: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 279(2), 372-379 (2003)


  8. Takahashi, W, Toyoda, M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 118(2), 417-428 (2003)


  9. Reich, S, Xu, H-K: An iterative approach to a constrained least squares problem. Abstr. Appl. Anal. 2003, 503-512 (2003)


  10. Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Problems 20, 103-120 (2004)


  11. Hussain, N, Marino, G, Abdu, AAN: On Mann’s method with viscosity for nonexpansive and nonspreading mappings in Hilbert spaces. Abstr. Appl. Anal. 2014, Article ID 152530 (2014)


  12. Hussain, N, Takahashi, W: Weak and strong convergence theorems for semigroups of mappings without continuity in Hilbert spaces. J. Nonlinear Convex Anal. 14(4), 769-783 (2013)


  13. Marino, G, Muglia, L: On the auxiliary mappings generated by a family of mappings and solutions of variational inequalities problems. Optim. Lett. (2013). 10.1007/s11590-013-0705-7


  14. Marino, G, Muglia, L, Yao, Y: The uniform asymptotical regularity of families of mappings and solutions of variational inequality problems. J. Nonlinear Convex Anal. 15(3), 477-492 (2014)


  15. Colao, V, Marino, G, Xu, H-K: An iterative method for finding common solutions of equilibrium and fixed point problems. J. Math. Anal. Appl. 344(1), 340-352 (2008)


  16. Marino, G, Xu, HK: A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 318(1), 43-52 (2006)


  17. Suzuki, T: The set of common fixed points of a one-parameter continuous semigroup of mappings is \(F(T(1))\cap F(T(\sqrt{2}))\). Proc. Am. Math. Soc. 134(3), 673-681 (2005)


  18. Cianciaruso, F, Marino, G, Muglia, L: Iterative methods for equilibrium and fixed point problems for nonexpansive semigroups in Hilbert spaces. J. Optim. Theory Appl. 146(2), 491-509 (2010)


  19. Xu, HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2, 1-17 (2002)


  20. Xu, HK, Kim, TH: Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 119(1), 185-201 (2003)


  21. Browder, FE, Petryshyn, VW: Construction of fixed points of nonlinear mappings in Hilbert space. J. Math. Anal. Appl. 20(2), 197-228 (1967)


  22. Deimling, K: Nonlinear Functional Analysis. Dover, New York (2010); first published by Springer, Berlin (1985)


  23. Cianciaruso, F, Marino, G, Muglia, L, Yao, Y: On a two-step algorithm for hierarchical fixed point problems and variational inequalities. J. Inequal. Appl. 2009, Article ID 208692 (2009). doi:10.1155/2009/208692


  24. Khan, AR, Kumar, V, Hussain, N: Analytical and numerical treatment of Jungck-type iterative schemes. Appl. Math. Comput. 231, 521-535 (2014)



Acknowledgements

The authors are grateful to the anonymous referees for their useful comments and suggestions. This project was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University under grant No. (29-130-35-HiCi). The authors, therefore, acknowledge technical and financial support of KAU.

Author information


Corresponding author

Correspondence to Nawab Hussain.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Rights and permissions

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.


About this article


Cite this article

Hussain, N., Marino, G., Muglia, L. et al. On some Mann’s type iterative algorithms. Fixed Point Theory Appl 2015, 17 (2015). https://doi.org/10.1186/s13663-015-0267-8

