
Viscosity approximation method with Meir-Keeler contractions for common zero of accretive operators in Banach spaces

Abstract

The purpose of this paper is to introduce a new iteration, obtained by combining the viscosity approximation method with Meir-Keeler contractions and the proximal point algorithm, for finding common zeros of a finite family of accretive operators in a Banach space with a uniformly Gâteaux differentiable norm. The results of this paper improve and extend the corresponding well-known results of many other authors.

1 Introduction

Let E be a real Banach space and let J be the normalized duality mapping from E into \(2^{E^{*}}\) given by

$$J(x)= \bigl\{ f\in E^{*}: \langle x,f\rangle=\|x\|^{2}=\|f \|^{2} \bigr\} ,\quad \forall x\in E, $$

where \(E^{*}\) denotes the dual space of E and \(\langle\cdot,\cdot\rangle\) denotes the generalized duality pairing. It is well known that if \(E^{*}\) is strictly convex then J is single-valued. In the sequel, we denote the single-valued normalized duality mapping by j. For an operator \(A: E\to2^{E}\), we define its domain, range, and graph as follows:

$$\begin{aligned} &D(A)=\{x\in E: Ax\neq\emptyset\},\\ &R(A)=\bigcup \bigl\{ Az: z\in D(A) \bigr\} , \end{aligned}$$

and

$$G(A)= \bigl\{ (x,y)\in E\times E: x\in D(A), y\in Ax \bigr\} , $$

respectively. The inverse \(A^{-1}\) of A is defined by

$$x\in A^{-1}y,\quad\mbox{if and only if}\quad y\in Ax. $$

An operator A is said to be accretive if, for each \(x,y\in D(A)\), there exists \(j(x-y)\in J(x-y)\) such that

$$\bigl\langle u-v,j(x-y) \bigr\rangle \geq0, $$

for all \(u\in Ax\) and \(v\in Ay\). We denote by I the identity operator on E. An accretive operator A is said to be maximal accretive if there is no proper accretive extension of A and A is said to be m-accretive if \(R(I+\lambda A)=E\), for all \(\lambda>0\). If A is m-accretive, then it is maximal, but generally, the converse is not true. If A is accretive, then we can define, for each \(\lambda>0\), a nonexpansive single-valued mapping \(J_{\lambda}^{A}: R(I+\lambda A)\to D(A)\) by

$$J_{\lambda}^{A}=(I+\lambda A)^{-1}. $$

It is called the resolvent of A and is denoted simply by \(J^{A}\) when \(\lambda=1\).
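For a concrete feel for the resolvent (an illustration of our own, not part of the original text), take the simple Hilbert-space case \(E=\mathbb{R}\) and \(A=\partial\psi\) with \(\psi(x)=|x|\); then \(J_{\lambda}^{A}\) is the soft-thresholding map, and its nonexpansiveness can be checked numerically. The sketch below assumes exactly these toy choices.

```python
import numpy as np

def resolvent_abs(x, lam):
    """Resolvent J_lambda^A = (I + lambda*A)^(-1) for A = subdifferential of |.| on R.

    Solving x = y + lam*s with s in the subdifferential of |.| at y
    gives the classical soft-thresholding formula.
    """
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Nonexpansiveness of the resolvent on a pair of sample points.
x, y, lam = 3.0, -1.2, 0.7
print(resolvent_abs(x, lam), resolvent_abs(y, lam))
print(abs(resolvent_abs(x, lam) - resolvent_abs(y, lam)) <= abs(x - y))
```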

Let \(A: E\to2^{E}\) be an m-accretive operator. It is well known that many problems in nonlinear analysis and optimization can be formulated as the problem: Find \(x\in E\) such that

$$0\in A(x). $$

One popular method of solving the equation \(0\in A(x)\), where A is a maximal monotone operator in a Hilbert space H, is the proximal point algorithm. The proximal point algorithm generates, for any starting point \(x_{0}=x\in H\), a sequence \(\{x_{n}\}\) by the rule

$$ x_{n+1}=J_{r_{n}}^{A}(x_{n}), $$
(1.1)

for all \(n\in\mathbb{N}\), where \(\{r_{n}\}\) is a regularization sequence of positive real numbers, \(J_{r_{n}}^{A}=(I+r_{n}A)^{-1}\) is the resolvent of A, and \(\mathbb{N}\) is the set of all natural numbers. Many authors have studied the convergence of this algorithm: some of them deal with the weak convergence of the sequence \(\{x_{n}\}\) generated by (1.1), while others prove strong convergence theorems by imposing additional assumptions on A.
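As a hedged numerical sketch of (1.1) (our own toy data, not from the paper): take \(H=\mathbb{R}^{2}\) and \(A=\nabla\psi\) with \(\psi(x)=\frac{1}{2}x^{\top}Qx\), so that each resolvent step is a linear solve and the iterates approach the zero of A.

```python
import numpy as np

def proximal_point(x0, Q, r, n_iter=50):
    """Proximal point algorithm (1.1): x_{n+1} = J_{r_n}^A(x_n) with constant r_n = r.

    Assumed toy operator: A = gradient of psi(x) = 0.5*x'Qx, so that
    J_r^A(x) = (I + r*Q)^{-1} x is a linear solve.
    """
    I = np.eye(len(x0))
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        x = np.linalg.solve(I + r * Q, x)   # resolvent step
    return x

Q = np.array([[2.0, 0.0], [0.0, 0.5]])        # positive definite, so A^{-1}0 = {0}
print(proximal_point([1.0, -3.0], Q, r=1.0))  # iterates approach the zero of A
```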

Note that algorithm (1.1) can be rewritten as

$$ x_{n+1}-x_{n}+r_{n}A(x_{n+1})\ni0, $$
(1.2)

for all \(n\in\mathbb{N}\). This algorithm was first introduced by Martinet [1]. If \(\psi: H\to\mathbb{R}\cup\{\infty\}\) is a proper lower semicontinuous convex function and \(A=\partial\psi\) is its subdifferential, then the algorithm reduces to

$$x_{n+1}=\mathop{\operatorname{argmin}}\limits_{y\in H} \biggl\{ \psi(y)+\frac{1}{2r_{n}} \|x_{n}-y\| ^{2} \biggr\} , $$

for all \(n\in\mathbb{N}\). Moreover, Rockafellar [2] has given a more practical, inexact variant of the method:

$$ x_{n+1}+r_{n}Ax_{n+1}\ni x_{n}+e_{n}, $$
(1.3)

for all \(n\in\mathbb{N}\), where \(\{e_{n}\}\) is regarded as an error sequence and \(\{r_{n}\}\) is a sequence of positive regularization parameters. Note that the algorithm (1.3) can be rewritten as

$$ x_{n+1}=J_{r_{n}}^{A}(x_{n}+e_{n}), $$
(1.4)

for all \(n\in\mathbb{N}\). This method is called the inexact proximal point algorithm. It was shown by Rockafellar [2] that if \(e_{n}\to0\) quickly enough, in the sense that \(\sum_{n=1}^{\infty}\|e_{n}\|<\infty\), then \(x_{n}\rightharpoonup z\in H\) with \(0\in Az\).
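The inexact scheme (1.4) can be simulated in the same toy quadratic setting (again under assumptions of our own, not Rockafellar's analysis), with a summable error sequence as required above.

```python
import numpy as np

def inexact_ppa(x0, Q, r, n_iter=200):
    """Inexact scheme (1.4): x_{n+1} = J_r^A(x_n + e_n) for A x = Qx (Q positive definite).

    The errors e_n are chosen summable, ||e_n|| proportional to 1/n^2, matching
    Rockafellar's condition sum ||e_n|| < infinity.
    """
    I = np.eye(len(x0))
    x = np.array(x0, dtype=float)
    for n in range(1, n_iter + 1):
        e = np.full(len(x0), 1.0 / n**2)          # summable error sequence
        x = np.linalg.solve(I + r * Q, x + e)
    return x

Q = np.diag([1.0, 3.0])
print(inexact_ppa([2.0, -1.0], Q, r=0.5))         # still close to the zero of A
```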

Further, Rockafellar [2] posed the open question of whether the sequence generated by (1.1) converges strongly. In 1991, Güler [3] gave an example showing that Rockafellar’s proximal point algorithm does not, in general, converge strongly.

An example of Bauschke et al. [4] also shows that the proximal point algorithm may converge only weakly and not strongly.

When A is maximal monotone in a Hilbert space H, Lehdili and Moudafi [5] obtained the convergence of the sequence \(\{x_{n}\}\) generated by the algorithm

$$ x_{n+1}=J^{A_{n}}_{r_{n}}(x_{n}), $$
(1.5)

where \(A_{n}=\mu_{n}I+A\), with \(\mu_{n}>0\), is viewed as a Tikhonov regularization of A. Next, in 2006, Xu [6] and, in 2009, Song and Yang [7] used the technique of nonexpansive mappings to obtain convergence theorems for \(\{x_{n}\}\) defined by the perturbed version of algorithm (1.4) in the form

$$ x_{n+1}=J^{A}_{r_{n}} \bigl(t_{n}u+(1-t_{n})x_{n}+e_{n} \bigr). $$
(1.6)

Note that algorithm (1.6) can be rewritten as

$$ r_{n}A(x_{n+1})+x_{n+1}\ni t_{n}u+(1-t_{n})x_{n} + e_{n},\quad n \geq0. $$
(1.7)

In [8], Tuyen studied an extension of the results of Xu [6] to the case where A is an m-accretive operator in a uniformly smooth Banach space E which has a weakly sequentially continuous normalized duality mapping j from E to \(E^{*}\) (cf. [9]). At the same time, in [10], Sahu and Yao also extended the results of Xu [6] to the zeros of an accretive operator in a Banach space with a uniformly Gâteaux differentiable norm by combining the prox-Tikhonov method and the viscosity approximation method. They introduced the following iterative method to define the sequence \(\{x_{n}\}\):

$$ x_{n+1}=J_{r_{n}}^{A} \bigl((1-\alpha_{n})x_{n}+ \alpha_{n} f(x_{n}) \bigr), $$
(1.8)

for all \(n\in\mathbb{N}\), where A is an accretive operator such that \(S=A^{-1}0\neq\emptyset\) and \(\overline{D(A)}\subset C\subset\bigcap_{t>0}R(I+tA)\), and f is a contraction mapping on C.
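A minimal sketch of an iteration of the form (1.8), under assumptions of our own choosing: \(A=\partial|\cdot|\) on \(\mathbb{R}\), the contraction \(f(x)=0.5x+1\), \(r_{n}\equiv1\), and \(\alpha_{n}=1/(n+1)\). It is only meant to illustrate the shape of the scheme, not the authors' implementation.

```python
import numpy as np

def soft_threshold(y, lam):
    """Resolvent J_lam^A of A = subdifferential of |.| on R (soft-thresholding)."""
    return np.sign(y) * max(abs(y) - lam, 0.0)

def prox_tikhonov_viscosity(x0, n_iter=200):
    """Scheme (1.8): x_{n+1} = J_{r_n}^A((1 - a_n)*x_n + a_n*f(x_n)).

    Assumed toy data: A = subdifferential of |.| (so A^{-1}0 = {0}),
    contraction f(x) = 0.5*x + 1, r_n = 1 and a_n = 1/(n+1).
    """
    f = lambda x: 0.5 * x + 1.0
    x = float(x0)
    for n in range(n_iter):
        a = 1.0 / (n + 1)
        x = soft_threshold((1 - a) * x + a * f(x), 1.0)
    return x

print(prox_tikhonov_viscosity(5.0))   # approaches the zero of A, here 0
```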

Zegeye and Shahzad [11] studied the problem of finding a common zero of a finite family of m-accretive operators (cf. [12, 13]). More precisely, they proved the following result.

Theorem 1.1

[11]

Let E be a strictly convex and reflexive Banach space with a uniformly Gâteaux differentiable norm, K be a nonempty, closed, and convex subset of E and \(A_{i}: K\to E \) be an m-accretive operator, for each \(i=1,2,\ldots,N\) with

$$\bigcap_{i=1}^{N}A_{i}^{-1}0 \neq\emptyset. $$

For any \(u,x_{0}\in K\), let \(\{x_{n}\}\) be a sequence in K generated by the algorithm:

$$ x_{n+1}=\alpha_{n} u+(1-\alpha_{n})S_{N}(x_{n}),\quad \forall n\ge0, $$
(1.9)

where \(S_{N}:=a_{0}I+a_{1}J^{A_{1}}+a_{2}J^{A_{2}}+\cdots+a_{N}J^{A_{N}}\) with \(J^{A_{i}}=(I+A_{i})^{-1}\) for \(0< a_{i}<1\), \(i=0,1,2,\ldots,N\), \(\sum_{i=0}^{N}a_{i}=1\), and \(\{\alpha_{n}\}\) is a real sequence which satisfies the following conditions:

(i) \(\lim_{n\rightarrow\infty}\alpha_{n}=0\), \(\sum_{n=1}^{\infty }\alpha_{n}=\infty\),

(ii) \(\sum_{n=1}^{\infty}|\alpha_{n}-\alpha_{n-1}|<\infty\) or \(\lim_{n\rightarrow\infty}\frac{|\alpha_{n}-\alpha_{n-1}|}{\alpha_{n}}=0\).

If every nonempty, bounded, closed, and convex subset of E has the fixed point property for nonexpansive mappings, then \(\{x_{n}\}\) converges strongly to a common solution of the equations \(A_{i}(x)=0\) for \(i=1,2,\ldots,N\).
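To make the operator \(S_{N}\) and the Halpern-type scheme (1.9) concrete, here is a toy sketch with two linear monotone operators on \(\mathbb{R}^{2}\) (a far more restrictive setting than the theorem's, and entirely our own choice of data).

```python
import numpy as np

def make_S_N(mats, a):
    """Build S_N := a_0*I + a_1*J^{A_1} + ... + a_N*J^{A_N}, with J^{A_i} = (I + A_i)^{-1}.

    Assumed toy setting: each A_i is the linear map x -> M_i x with M_i positive
    semidefinite, so each resolvent is a matrix inverse.
    """
    I = np.eye(mats[0].shape[0])
    resolvents = [np.linalg.inv(I + M) for M in mats]
    def S_N(x):
        y = a[0] * x
        for ai, J in zip(a[1:], resolvents):
            y = y + ai * (J @ x)
        return y
    return S_N

# Two monotone operators whose only common zero is the origin.
M1, M2 = np.diag([1.0, 0.0]), np.diag([0.0, 2.0])
S = make_S_N([M1, M2], a=[0.2, 0.4, 0.4])

# Halpern-type scheme (1.9): x_{n+1} = alpha_n*u + (1 - alpha_n)*S_N(x_n).
u, x = np.array([0.3, -0.3]), np.array([4.0, 2.0])
for n in range(1, 2000):
    alpha = 1.0 / (n + 1)
    x = alpha * u + (1 - alpha) * S(x)
print(x)   # approaches the common zero (0, 0)
```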

Motivated by Xu [6] and Zegeye and Shahzad [11], Tuyen [14] introduced the following iterative algorithm:

$$ \left \{ \begin{array}{@{}l} x_{0}\in C,\\ x_{n+1}=S_{N} (\alpha_{n} f(x_{n})+(1-\alpha_{n})x_{n} ),\quad \forall n \ge0, \end{array} \right . $$
(1.10)

where \(S_{N}:=a_{0}I+a_{1}J^{A_{1}}+a_{2}J^{A_{2}}+\cdots+a_{N}J^{A_{N}}\) with \(a_{0},a_{1},\ldots,a_{N}\) in \((0,1)\) such that \(\sum_{i=0}^{N}a_{i}=1\), and \(\{\alpha_{n}\}\subset(0,1)\) is a sequence of positive real numbers. The result of Tuyen [14] reads as follows.

Theorem 1.2

[14]

Let E be a strictly convex and reflexive Banach space which has a weakly continuous duality mapping \(J_{\varphi}\) with gauge φ. Let C be a nonempty, closed, and convex subset of E and f be a contraction mapping of C into itself with the contractive coefficient \(c\in(0,1)\). Let \(A_{i}: C\rightarrow E\) be an m-accretive operator, for each \(i=1,2,\ldots,N\) with

$$\bigcap_{i=1}^{N}A_{i}^{-1}0 \neq\emptyset. $$

Let \(J^{A_{i}}=(I+A_{i})^{-1}\) for \(i=1,2,\ldots,N\). For any \(x_{0}\in C\), let \(\{x_{n}\}\) be a sequence generated by algorithm (1.10). If the sequence \(\{\alpha_{n}\}\) satisfies the following conditions:

(i) \(\lim_{n\rightarrow\infty}\alpha_{n}=0\), \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\),

(ii) \(\sum_{n=1}^{\infty}|\alpha_{n}-\alpha_{n-1}|<\infty\) or \(\lim_{n\rightarrow\infty}\frac{|\alpha_{n}-\alpha_{n-1}|}{\alpha_{n}}=0\),

then \(\{x_{n}\}\) converges strongly to a common solution of the equations \(A_{i}(x)=0\) for \(i=1,2,\ldots,N\).

In this paper, we combine the proximal point method [9] and the viscosity approximation method [15] with Meir-Keeler contractions to get strong convergence theorems for the problem of finding a common zero of a finite family of accretive operators in Banach spaces. We also give some applications of our results for the convex minimization problem and the variational inequality problem in Hilbert spaces.

2 Preliminaries

Let E be a real Banach space and \(M\subseteq E\). We denote by \(F(T)\) the set of all fixed points of the mapping \(T: M\to M\).

Recall that a mapping \(\phi: (X,d)\to(X,d)\) from the metric space \((X,d)\) into itself is said to be a Meir-Keeler contraction, if, for every \(\varepsilon>0\), there exists \(\delta>0\) such that \(d(x,y)<\varepsilon+\delta\) implies

$$d(\phi x,\phi y)< \varepsilon, $$

for all \(x,y\in X\). We know that if \((X,d)\) is a complete metric space, then ϕ has a unique fixed point [16]. In the sequel, we always use \(\Sigma_{M}\) to denote the collection of all Meir-Keeler contractions on M and \(S_{E}\) to denote the unit sphere \(S_{E}=\{x\in E: \|x\|=1\}\).

A Banach space E is said to be strictly convex if, for all \(x,y\in S_{E}\) with \(x\neq y\) and all \(t\in(0,1)\),

$$\bigl\| (1-t)x+ty\bigr\| < 1. $$

A Banach space E is said to be smooth provided the limit

$$\lim_{t\to0}\frac{\|x+ty\|-\|x\|}{t} $$

exists for each x and y in \(S_{E}\). In this case, the norm of E is said to be Gâteaux differentiable. It is said to be uniformly Gâteaux differentiable if for each \(y\in S_{E}\), this limit is attained uniformly for \(x\in S_{E}\). It is well known that every uniformly smooth Banach space has a uniformly Gâteaux differentiable norm.

A closed convex subset C of a Banach space E is said to have the fixed point property for nonexpansive mappings if every nonexpansive mapping of a nonempty, closed, and convex subset M of C into itself has a fixed point in M.

A subset C of a Banach space E is called a retract of E if there is a continuous mapping P from E onto C such that \(Px=x\), for all \(x\in C\). We call such P a retraction of E onto C. It follows that if P is a retraction, then \(Py=y\), for all y in the range of P. A retraction P is said to be sunny if \(P(Px+t(x-Px))=Px\), for all \(x\in E\) and \(t\geq 0\). If a sunny retraction P is also nonexpansive, then C is said to be a sunny nonexpansive retract of E.

An accretive operator A defined on a Banach space E is said to satisfy the range condition if \(\overline{D(A)}\subset R(I+\lambda A)\), for all \(\lambda>0\), where \(\overline{D(A)}\) denotes the closure of the domain of A. We know that for an accretive operator A which satisfies the range condition, \(A^{-1}0=F(J_{\lambda}^{A})\), for all \(\lambda>0\).

Let f be a continuous linear functional on \(l_{\infty}\). We use \(f_{n}(x_{n+m})\) to denote

$$f(x_{m+1},x_{m+2},\ldots,x_{m+n},\ldots), $$

for \(m=0,1,2,\ldots \) . A continuous linear functional f on \(l_{\infty}\) is called a Banach limit if \(\|f\|=f(e)=1\), where \(e=(1,1,1,\ldots)\), and \(f_{n}(x_{n})=f_{n}(x_{n+1})\) for each \(x=(x_{1},x_{2},\ldots)\) in \(l_{\infty}\). Fix any Banach limit and denote it by \(LIM\). Note that \(\|LIM\|=1\), and, for all \(\{x_{n}\} \in l_{\infty}\),

$$ \liminf_{n\to\infty}x_{n}\leq LIM_{n}x_{n}\leq\limsup_{n\to\infty}x_{n}. $$
(2.1)

The following lemmas play crucial roles for the proof of main theorems in this paper.

Lemma 2.1

[17]

Let ϕ be a Meir-Keeler contraction on a convex subset C of a Banach space E. Then for each \(\varepsilon>0\), there exists \(r\in (0,1)\) such that, for all \(x,y\in C\), \(\|x-y\|\geq\varepsilon\) implies

$$ \|\phi x-\phi y\|\leq r\|x-y\|. $$
(2.2)

Remark 2.2

From Lemma 2.1, for each \(\varepsilon>0\), there exists \(r\in(0,1)\) such that

$$ \|\phi x -\phi y\|\leq\max\bigl\{ \varepsilon, r\|x-y\|\bigr\} , $$
(2.3)

for all \(x,y\in C\).
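As a small numerical illustration of Lemma 2.1 and inequality (2.3) (an example of our own, not taken from [16, 17]): the map \(\phi(x)=x/(1+x)\) on \([0,\infty)\) is a Meir-Keeler contraction that admits no global contraction constant, yet for each fixed \(\varepsilon>0\) a constant \(r<1\) works for all pairs with \(\|x-y\|\geq\varepsilon\).

```python
import numpy as np

def phi(x):
    """phi(x) = x/(1+x) on [0, inf): contractive with no global constant r < 1
    (its slope tends to 1 at 0), yet a Meir-Keeler contraction."""
    return x / (1.0 + x)

eps = 0.5
r = 1.0 / (1.0 + eps)   # works because max(x, y) >= eps whenever |x - y| >= eps

rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 10.0, size=10_000)
ys = rng.uniform(0.0, 10.0, size=10_000)
mask = np.abs(xs - ys) >= eps                     # pairs with |x - y| >= eps
lhs = np.abs(phi(xs[mask]) - phi(ys[mask]))
rhs = r * np.abs(xs[mask] - ys[mask])
print(np.all(lhs <= rhs))                         # inequality (2.2) holds on the sample
```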

Lemma 2.3

[17]

Let C be a convex subset of a Banach space E. Let T be a nonexpansive mapping on C and ϕ be a Meir-Keeler contraction on C. Then, for each \(t\in(0,1)\), a mapping \(x\mapsto(1-t)Tx+t\phi x\) is also a Meir-Keeler contraction on C.

Lemma 2.4

[18]

Let C be a convex subset of a smooth Banach space E, D a nonempty subset of C, and P a retraction from C onto D. Then the following statements are equivalent:

(i) P is sunny nonexpansive.

(ii) \(\langle x-Px,j(z-Px)\rangle\leq0\), for all \(x\in C\), \(z\in D\).

(iii) \(\langle x-y,j(Px-Py)\rangle\geq\|Px-Py\|^{2}\), for all \(x,y\in C\).

We can easily prove the following lemma from Lemma 1 in [19].

Lemma 2.5

[19]

Let E be a Banach space with a uniformly Gâteaux differentiable norm, C a nonempty, closed, and convex subset of E and \(\{x_{n}\}\) a bounded sequence in E. Let \(LIM\) be a Banach limit and \(y\in C\) such that

$$LIM_{n}\|x_{n}-y\|^{2}=\inf_{x\in C}LIM_{n} \|x_{n}-x\|^{2}. $$

Then \(LIM_{n}\langle x-y,j(x_{n}-y)\rangle\leq0\), for all \(x\in C\).

Lemma 2.6

[20]

Let \(\{a_{n}\}\), \(\{b_{n}\}\), \(\{\sigma_{n}\}\) be sequences of positive numbers satisfying the inequality:

$$a_{n+1}\leq(1-b_{n})a_{n}+\sigma_{n},\quad b_{n}< 1. $$

If \(\sum_{n=0}^{\infty}b_{n}=+\infty\) and \(\lim_{n\to\infty}\sigma_{n}/b_{n}=0\), then \(\lim_{n\rightarrow\infty}a_{n}=0\).
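For intuition only (a toy simulation of our own, not part of the original text), the recursion in Lemma 2.6 can be run with sample sequences satisfying its hypotheses.

```python
def lemma_2_6_demo(n_iter=100000):
    """Simulate a_{n+1} = (1 - b_n)*a_n + sigma_n with sample choices satisfying the lemma.

    Assumed sequences: b_n = 1/(n+1) (so sum b_n diverges) and sigma_n = 1/(n+1)^2
    (so sigma_n / b_n -> 0); the lemma then predicts a_n -> 0.
    """
    a = 1.0
    for n in range(n_iter):
        b = 1.0 / (n + 1)
        sigma = 1.0 / (n + 1) ** 2
        a = (1 - b) * a + sigma
    return a

print(lemma_2_6_demo())   # small, and it tends to 0 as n_iter grows
```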

Lemma 2.7

[21]

Let E be a Banach space with a uniformly Gâteaux differentiable norm and let C be a nonempty, closed, and convex subset of E with fixed point property for nonexpansive self-mappings. Let \(A: D(A)\subset E\to2^{E}\) be an accretive operator such that \(A^{-1}0\neq\emptyset\) and \(\overline{D(A)}\subset \bigcap_{t>0}R(I+tA)\). Then \(A^{-1}0\) is a sunny nonexpansive retract of C.

Lemma 2.8

[11]

Let C be a nonempty, closed, and convex subset of a strictly convex Banach space E. Let \(A_{i}:C\to E \) be an m-accretive operator for each \(i=1,2,\ldots,N\) with \(\bigcap_{i=1}^{N} N(A_{i})\neq\emptyset\), where \(N(A_{i}):=A_{i}^{-1}0\) denotes the zero set of \(A_{i}\). Let \(a_{0},a_{1},\ldots,a_{N}\) be real numbers in \((0,1)\) such that \(\sum_{i=0}^{N} a_{i}=1\) and let \(S_{N}:=a_{0}I+a_{1}J^{A_{1}}+a_{2}J^{A_{2}}+\cdots+a_{N}J^{A_{N}}\), where \(J^{A_{i}}:=(I+A_{i})^{-1}\). Then \(S_{N}\) is a nonexpansive mapping and \(F(S_{N})=\bigcap_{i=1}^{N} N(A_{i})\).

3 Main results

Now, we are in a position to introduce and prove the main theorems.

Proposition 3.1

Let E be a reflexive Banach space with a uniformly Gâteaux differentiable norm and let C be a closed convex subset of E which has the fixed point property for nonexpansive mappings. Let T be a nonexpansive mapping on C with \(F(T)\neq\emptyset\). Then, for each \(\phi\in\Sigma_{C}\) and every \(t\in(0,1)\), there exists a unique fixed point \(v_{t}\in C\) of the Meir-Keeler contraction \(C\ni v\mapsto t\phi v +(1-t)Tv\). Moreover, \(\{v_{t}\}\) converges strongly, as \(t\to0\), to a point \(x^{*}\in F(T)\) which solves the variational inequality:

$$ \bigl\langle x^{*}-\phi x^{*}, j \bigl(x^{*} -x \bigr) \bigr\rangle \leq0, $$
(3.1)

for all \(x \in F(T)\).

Proof

By Lemma 2.3, the mapping \(C\ni v\mapsto t\phi v +(1-t)Tv\) is a Meir-Keeler contraction on C. So, there is a unique \(v_{t}\in C\) which satisfies

$$v_{t}=t\phi v_{t} +(1-t)Tv_{t}. $$

Now we show that \(\{v_{t}\}\) is bounded. Indeed, take a \(p\in F(T)\) and a number \(\varepsilon>0\).

Case 1. Let \(\|v_{t}-p\|\leq\varepsilon\). Then we can see easily that \(\{v_{t}\}\) is bounded.

Case 2. Let \(\|v_{t}-p\|\geq\varepsilon\). Then, by Lemma 2.1, there exists \(r\in(0,1)\) such that

$$\|\phi v_{t}-\phi p\|\leq r\|v_{t}-p\|. $$

So, we have

$$\begin{aligned} \|v_{t}-p\|&=\bigl\| t\phi v_{t} +(1-t)Tv_{t}-p\bigr\| \\ &\leq t\|\phi v_{t}-\phi p\|+t\|\phi p-p\|+(1-t)\|v_{t}-p \|\\ &\leq rt\|v_{t}-p\|+t\|\phi p-p\|+(1-t)\|v_{t}-p\|. \end{aligned}$$

Therefore,

$$\|v_{t}-p\|\leq\frac{\|\phi p-p\|}{1-r}. $$

Hence, we conclude that \(\{v_{t}\}\) is bounded and \(\{\phi v_{t}\}\), \(\{ Tv_{t}\}\) are also bounded.

By the boundedness of \(\{v_{t}\}\), \(\{\phi v_{t}\}\), and \(\{Tv_{t}\}\), we have

$$\|v_{t}-Tv_{t}\|=t\|\phi v_{t}-Tv_{t} \|\to0\quad\mbox{as }t\to0. $$

Assume \(t_{n}\to0\). Set \(v_{n}:=v_{t_{n}}\) and define \(\varphi: C\to \mathbb{R}^{+}\) by

$$\varphi(x)=LIM_{n}\|v_{n}-x\|^{2}, $$

for all \(x\in C\) and let

$$M= \Bigl\{ y\in C: \varphi(y)=\inf_{x\in C}\varphi(x) \Bigr\} . $$

Since E is reflexive, \(\varphi(x)\to\infty\) as \(\|x\|\to\infty\), and φ is a continuous convex function, from Barbu and Precupanu [22], we know that M is a nonempty subset of C. By Takahashi [23], we see that M is also closed, convex, and bounded.

For all \(x\in M\), from \(\|v_{n}-Tv_{n}\|\to\)0 as \(n\to\infty\), we have

$$\begin{aligned} \varphi(Tx)&=LIM_{n}\|v_{n}-Tx \|^{2} \\ &\leq LIM_{n} \bigl(\|v_{n}-Tv_{n}\|+ \|Tv_{n}-Tx\| \bigr)^{2} \\ &\leq LIM_{n} \|Tv_{n}-Tx\|^{2} \\ &\leq LIM_{n}\|v_{n}-x\|^{2} \\ &=\varphi(x). \end{aligned}$$

So, M is invariant under T, i.e., \(T(M)\subset M\). By assumption, we have \(M\cap F(T)\neq\emptyset\). Let \(x^{*}\in M\cap F(T)\). By Lemma 2.5, we obtain

$$ LIM_{n} \bigl\langle x-x^{*},j \bigl(v_{n}-x^{*} \bigr) \bigr\rangle \leq0, $$
(3.2)

for all \(x\in C\). In particular,

$$ LIM_{n} \bigl\langle \phi x^{*}-x^{*},j \bigl(v_{n}-x^{*} \bigr) \bigr\rangle \leq0. $$
(3.3)

Suppose that \(LIM_{n}\|v_{n}-x^{*}\|^{2}\geq\varepsilon>0\). By (2.1),

$$\limsup_{n\rightarrow\infty}\bigl\| v_{n}-x^{*}\bigr\| ^{2}\geq \varepsilon. $$

So, there exists a subsequence \(\{v_{n_{k}}\}\) of \(\{v_{n}\}\) such that, for all \(k\geq1\),

$$\bigl\| v_{n_{k}}-x^{*}\bigr\| \geq\varepsilon_{0}, $$

where \(\varepsilon_{0}\in(0,\sqrt{\varepsilon} )\). By Lemma 2.1, there is \(r_{0}\in(0,1)\) such that

$$\bigl\| \phi v_{n_{k}}-\phi x^{*}\bigr\| \leq r_{0}\bigl\| v_{n_{k}}-x^{*}\bigr\| . $$

From

$$\bigl\langle Tv_{n_{k}}-v_{n_{k}},j \bigl(v_{n_{k}}-x^{*} \bigr) \bigr\rangle \leq0, $$

for all \(k\geq1\), we have

$$\begin{aligned} \bigl\| v_{n_{k}}-x^{*}\bigr\| ^{2}&=t_{n_{k}} \bigl\langle \phi v_{n_{k}}-x^{*}, j \bigl(v_{n_{k}}-x^{*} \bigr) \bigr\rangle +(1-t_{n_{k}}) \bigl\langle Tv_{n_{k}}-x^{*},j \bigl(v_{n_{k}}-x^{*} \bigr) \bigr\rangle \\ &\leq t_{n_{k}} \bigl\langle \phi v_{n_{k}}-x^{*}, j \bigl(v_{n_{k}}-x^{*} \bigr) \bigr\rangle +(1-t_{n_{k}})\bigl\| v_{n_{k}}-x^{*}\bigr\| ^{2}, \end{aligned}$$

which implies that

$$\begin{aligned}[b] \bigl\| v_{n_{k}}-x^{*}\bigr\| ^{2}&\leq \bigl\langle \phi v_{n_{k}}-x^{*}, j \bigl(v_{n_{k}}-x^{*} \bigr) \bigr\rangle \\ &= \bigl\langle \phi v_{n_{k}}-\phi x, j \bigl(v_{n_{k}}-x^{*} \bigr) \bigr\rangle + \bigl\langle \phi x-x^{*}, j \bigl(v_{n_{k}}-x^{*} \bigr) \bigr\rangle , \end{aligned} $$

for all \(x\in C\). So, from (3.2), we get

$$\begin{aligned} LIM_{n}\bigl\| v_{n_{k}}-x^{*} \bigr\| ^{2}&\leq LIM_{n} \bigl\langle \phi v_{n_{k}}-\phi x, j \bigl(v_{n_{k}}-x^{*} \bigr) \bigr\rangle +LIM_{n} \bigl\langle \phi x-x^{*}, j \bigl(v_{n_{k}}-x^{*} \bigr) \bigr\rangle \\ &\leq LIM_{n}\|\phi v_{n_{k}}-\phi x\| \bigl\| v_{n_{k}}-x^{*}\bigr\| , \end{aligned}$$

for all \(x\in C\). In particular,

$$\begin{aligned} LIM_{n}\bigl\| v_{n_{k}}-x^{*} \bigr\| ^{2}&\leq LIM_{n}\bigl\| \phi v_{n_{k}}-\phi x^{*}\bigr\| \bigl\| v_{n_{k}}-x^{*}\bigr\| \\ &\leq r_{0} LIM_{n}\bigl\| v_{n_{k}}-x^{*}\bigr\| ^{2}, \end{aligned}$$

which is a contradiction. Hence, \(LIM_{n}\|v_{n}-x^{*}\|=0\) and there exists a subsequence \(\{v_{n_{k}}\}\) of \(\{v_{n}\}\) such that \(v_{n_{k}}\to x^{*}\) as \(k\to\infty\).

Assume that \(\{v_{n_{l}}\}\) is another subsequence of \(\{v_{n}\}\) such that \(v_{n_{l}}\to y^{*}\) with \(y^{*}\neq x^{*}\). It is easy to see that \(y^{*}\in F(T)\). By Lemma 2.3, there exists \(r_{1}\in(0,1)\) such that

$$ \bigl\| \phi x^{*}-\phi y^{*}\bigr\| \leq r_{1}\bigl\| x^{*}-y^{*}\bigr\| . $$
(3.4)

Observe that

$$\begin{aligned} &\bigl| \bigl\langle v_{n}-\phi v_{n}, j \bigl(v_{n}-y^{*} \bigr) \bigr\rangle - \bigl\langle x^{*}-\phi x^{*}, j \bigl(x^{*}-y^{*} \bigr) \bigr\rangle \bigr| \\ &\quad\leq\bigl| \bigl\langle v_{n}-\phi v_{n}, j \bigl(v_{n}-y^{*} \bigr) \bigr\rangle - \bigl\langle x^{*}-\phi x^{*}, j \bigl(v_{n}-y^{*} \bigr) \bigr\rangle \bigr| \\ &\qquad{} +\bigl| \bigl\langle x^{*}-\phi x^{*}, j \bigl(v_{n}-y^{*} \bigr) \bigr\rangle - \bigl\langle x^{*}-\phi x^{*}, j \bigl(x^{*}-y^{*} \bigr) \bigr\rangle \bigr| \\ &\quad\leq\bigl\| v_{n}-\phi v_{n} - \bigl(x^{*}-\phi x^{*} \bigr)\bigr\| \bigl\| v_{n}-y^{*}\bigr\| +\bigl| \bigl\langle x^{*}-\phi x^{*},j \bigl(v_{n}-y^{*} \bigr)-j \bigl(x^{*}-y^{*} \bigr) \bigr\rangle \bigr|, \end{aligned}$$

for all \(n\in\mathbb{N}\). Since \(v_{n_{k}}\to x^{*}\) and j is norm to weak* uniformly continuous, we obtain

$$\bigl\langle x^{*}-\phi x^{*},j \bigl(x^{*}-y^{*} \bigr) \bigr\rangle \leq0. $$

Similarly, we have

$$\bigl\langle y^{*}-\phi y^{*},j \bigl(y^{*}-x^{*} \bigr) \bigr\rangle \leq0. $$

Adding the above two inequalities yields

$$\bigl\langle x^{*}-y^{*}- \bigl(\phi x^{*}-\phi y^{*} \bigr), j \bigl(x^{*}-y^{*} \bigr) \bigr\rangle \leq0, $$

and combining with (3.4) implies that

$$\bigl\| x^{*}-y^{*}\bigr\| \leq r_{1}\bigl\| x^{*}-y^{*}\bigr\| , $$

which is a contradiction. Hence \(\{v_{t_{n}}\}\) converges strongly to \(x^{*}\).

Now, we prove that the net \(\{v_{t}\}\) converges strongly to \(x^{*}\) as \(t\to0\). Assume that there is another sequence \(\{s_{n}\}\), with \(s_{n}\in(0,1)\) for all n and \(s_{n}\to0\) as \(n\to\infty\), such that \(v_{s_{n}}\to z^{*}\) as \(n\to\infty\). Then we have \(z^{*}\in F(T)\). For each t and \(z\in F(T)\), we have

$$\bigl\langle v_{t}-\phi v_{t},j(v_{t}-z) \bigr\rangle =\frac{1-t}{t} \bigl\langle Tv_{t}-v_{t},j(v_{t}-z) \bigr\rangle \leq0. $$

So, we obtain

$$\bigl\langle v_{t_{n}}-\phi v_{t_{n}},j \bigl(v_{t_{n}}-z^{*} \bigr) \bigr\rangle \leq0 $$

and similarly, we have

$$\bigl\langle v_{s_{n}}-\phi v_{s_{n}},j \bigl(v_{s_{n}}-x^{*} \bigr) \bigr\rangle \leq0, $$

which implies that

$$\bigl\langle x^{*}-\phi x^{*},j \bigl(x^{*}-z^{*} \bigr) \bigr\rangle \leq0 $$

and

$$\bigl\langle z^{*}-\phi z^{*},j \bigl(z^{*}-x^{*} \bigr) \bigr\rangle \leq0. $$

Adding the last two inequalities and arguing as above (using Lemma 2.1), we obtain \(x^{*}=z^{*}\). Therefore, \(\{v_{t}\}\) converges strongly to \(x^{*}\) and it is easy to see that \(x^{*}\) solves the variational inequality

$$ \bigl\langle x^{*}-\phi x^{*}, j \bigl(x^{*} -x \bigr) \bigr\rangle \leq0, $$

for all \(x\in F(T)\). This completes the proof. □
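To give a concrete feel for the path \(\{v_{t}\}\) in Proposition 3.1, here is a one-dimensional toy example of our own (a much simpler setting than the proposition's): T is the projection onto \([1,2]\) and ϕ is a strict contraction; for each t the fixed point \(v_{t}\) is computed by Picard iteration, and \(v_{t}\) approaches a point of \(F(T)\) as \(t\to0\), consistent with Remark 3.2 below.

```python
def T(x):
    """Nonexpansive map on R: metric projection onto the interval [1, 2], so F(T) = [1, 2]."""
    return min(max(x, 1.0), 2.0)

def phi(x):
    """A strict contraction, hence a Meir-Keeler contraction: phi(x) = x/2."""
    return 0.5 * x

def v(t, n_iter=2000):
    """Fixed point v_t of x -> t*phi(x) + (1 - t)*T(x), found by Picard iteration
    (the map is a Meir-Keeler contraction by Lemma 2.3, so the iteration converges)."""
    x = 0.0
    for _ in range(n_iter):
        x = t * phi(x) + (1 - t) * T(x)
    return x

for t in [0.5, 0.1, 0.01, 0.001]:
    print(t, v(t))   # v_t approaches x* = 1, the point of F(T) closest to phi(x*)
```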

Remark 3.2

Let Q be a sunny nonexpansive retraction from C onto \(F(T)\). By the uniqueness of Q, inequality (3.1) and Lemma 2.4, we obtain \(Q\phi x^{*} =x^{*}\).

Proposition 3.3

Let C be a closed convex subset of a reflexive Banach space E with a uniformly Gâteaux differentiable norm and let T be a nonexpansive mapping on C with \(F(T)\neq\emptyset\). Assume \(\{x_{n}\}\) is a bounded sequence such that \(x_{n}-Tx_{n}\to0\) as \(n\to \infty\). Let \(x_{t}=t\phi x_{t}+(1-t)Tx_{t}\), for all \(t\in(0,1)\), where \(\phi\in\Sigma_{C}\). Assume that \(x^{*}=\lim_{t\rightarrow0}x_{t}\) exists. Then we have

$$ \limsup_{n\rightarrow\infty} \bigl\langle (\phi-I)x^{*},j \bigl(x_{n}-x^{*} \bigr) \bigr\rangle \leq0. $$
(3.5)

Proof

Set \(M=\sup\{\|x_{n}-x_{t}\|: t\in(0,1), n\geq0\}\). Then we have

$$\begin{aligned} \|x_{t}-x_{n}\|^{2}={}&t \bigl\langle \phi x_{t}-x_{n}, j(x_{t}-x_{n}) \bigr\rangle +(1-t) \bigl\langle Tx_{t}-x_{n}, j(x_{t}-x_{n}) \bigr\rangle \\ ={}&t \bigl\langle \phi x_{t}-x_{t},j(x_{t}-x_{n}) \bigr\rangle +(1-t) \bigl\langle Tx_{t}-Tx_{n}, j(x_{t}-x_{n}) \bigr\rangle \\ &{} +(1-t) \bigl\langle Tx_{n}-x_{n}, j(x_{t}-x_{n}) \bigr\rangle \\ \leq{}& t \bigl\langle \phi x_{t}-x_{t},j(x_{t}-x_{n}) \bigr\rangle +t\|x_{t}-x_{n}\|^{2} \\ &{} +(1-t)\|x_{t}-x_{n}\|^{2}+M \|x_{n}-Tx_{n}\|, \end{aligned}$$

which implies that

$$\bigl\langle \phi x_{t}-x_{t},j(x_{n}-x_{t}) \bigr\rangle \leq\frac{M}{t}\|x_{n}-Tx_{n}\|. $$

Fixing t and letting \(n\rightarrow\infty\) yields \(\limsup_{n\rightarrow\infty}\langle\phi x_{t}-x_{t},j(x_{n}-x_{t})\rangle\leq0\). Since \(x_{t}\to x^{*}\) as \(t\to0\) and j is norm-to-weak* uniformly continuous on bounded sets, letting \(t\to0\) gives

$$\limsup_{n\rightarrow\infty} \bigl\langle (\phi-I)x^{*},j \bigl(x_{n}-x^{*} \bigr) \bigr\rangle \leq0. $$

This completes the proof. □

Now, let E be a reflexive and strictly convex Banach space with a uniformly Gâteaux differentiable norm and C a closed convex subset of E which has the fixed point property for nonexpansive mappings. Let \(A_{i}: E\to2^{E}\) be an accretive operator, for each \(i=1,2,\ldots,N\) such that

$$S=\bigcap_{i=1}^{N}A_{i}^{-1}0 \neq\emptyset $$

and

$$\overline{D(A_{i})}\subset C\subset\bigcap _{r>0}R(I+rA_{i}), $$

for all \(i=1,2,\ldots,N\).

For each \(\phi\in\Sigma_{C}\), we study the strong convergence of the sequence \(\{z_{n}\}\) defined by

$$ \left \{ \begin{array}{@{}l} z_{0}\in C,\\ z_{n+1}=S_{N} (\alpha_{n} \phi z_{n}+(1-\alpha_{n})z_{n} ),\quad \forall n\ge0, \end{array} \right . $$
(3.6)

where \(S_{N}:=a_{0}I+a_{1}J^{A_{1}}+a_{2}J^{A_{2}}+\cdots+a_{N}J^{A_{N}}\), with \(a_{0},a_{1},\ldots,a_{N}\) real numbers in \((0,1)\) such that \(\sum_{i=0}^{N}a_{i}=1\), and \(\{\alpha_{n}\}\subset(0,1)\) is a sequence of positive real numbers, under the following conditions:

(C1) \(\lim_{n\to\infty}\alpha_{n}=0\), \(\sum_{n=1}^{\infty }\alpha_{n}=\infty\),

(C2) \(\sum_{n=1}^{\infty}|\alpha_{n}-\alpha_{n-1}|<\infty\) or \(\lim_{n\rightarrow\infty}\frac{|\alpha_{n}-\alpha_{n-1}|}{\alpha_{n}}=0\).

Then we have the following theorem.

Theorem 3.4

If the sequence \(\{\alpha_{n}\}\) satisfies the conditions (C1)-(C2), then the sequence \(\{x_{n}\}\) generated by

$$ x_{n+1}=S_{N} \bigl(\alpha_{n} u+(1- \alpha_{n})x_{n} \bigr),\quad \forall n\ge0, $$
(3.7)

converges strongly to Qu, where \(u\in C\) and Q is a sunny nonexpansive retraction from C onto S.

Proof

By Lemma 2.8, we have \(F(S_{N})=\bigcap_{i=1}^{N}A_{i}^{-1}0\neq\emptyset\). Now, for each \(p\in F(S_{N})\), we have

$$ \begin{aligned}[b] \|x_{n+1}-p\|&=\bigl\| S_{N} \bigl(\alpha_{n} u+(1-\alpha_{n})x_{n} \bigr)-S_{N}(p)\bigr\| \\ &\leq(1-\alpha_{n})\|x_{n}-p\|+\alpha_{n}\|u-p \| \\ &\leq\max \bigl\{ \|x_{n}-p\|,\|u-p\| \bigr\} \\ & \vdots\\ &\leq\max \bigl\{ \|x_{0}-p\|,\|u-p\| \bigr\} . \end{aligned} $$
(3.8)

Hence \(\{x_{n}\}\) is bounded. Choose \(K>0\) such that \(\max \{\sup_{n}\|x_{n}\|, \|u\| \}\le K\). It follows that

$$\begin{aligned} \bigl\| x_{n+1}-S_{N}(x_{n})\bigr\| &= \bigl\| S_{N} \bigl(\alpha_{n} u+(1- \alpha_{n})x_{n} \bigr)-S_{N}(x_{n})\bigr\| \\ &\le\alpha_{n}\|u-x_{n}\| \to0, \quad\mbox{as }n\rightarrow\infty. \end{aligned}$$
(3.9)

From (3.7), we get

$$\begin{aligned} \|x_{n+1}-x_{n}\|&= \bigl\| S_{N} \bigl(\alpha_{n} u+(1-\alpha_{n})x_{n} \bigr)-S_{N} \bigl(\alpha_{n-1} u+(1-\alpha_{n-1})x_{n-1} \bigr)\bigr\| \\ &\le(1-\alpha_{n})\|x_{n}-x_{n-1}\|+ \alpha_{n}\beta_{n}, \end{aligned}$$

where \(\beta_{n}=2K\frac{|\alpha_{n}-\alpha_{n-1}|}{\alpha_{n}}\).

We consider two cases of condition (C2).

First, suppose that \(\sum_{n=1}^{\infty}|\alpha_{n}-\alpha_{n-1}|<\infty \). Then

$$\|x_{n+1}-x_{n}\|\le(1-\alpha_{n}) \|x_{n}-x_{n-1}\|+\sigma_{n}, $$

where \(\sigma_{n}=2K|\alpha_{n}-\alpha_{n-1}|\). So, we have \(\sum_{n=1}^{\infty}\sigma_{n}<\infty\).

Second, suppose that \(\lim_{n\to\infty}\frac{|\alpha_{n}-\alpha _{n-1}|}{\alpha_{n}}=0\). Then

$$\|x_{n+1}-x_{n}\|\le(1-\alpha_{n}) \|x_{n}-x_{n-1}\|+\sigma_{n}, $$

where \(\sigma_{n}=\alpha_{n}\beta_{n}\). So, we have \(\sigma_{n}=o(\alpha_{n})\).

In either case, we have \(\|x_{n+1}-x_{n}\|\to0\) as \(n\to\infty\), by Lemma 2.6. By (3.9) we obtain

$$ \|x_{n}-S_{N}x_{n}\|\le\|x_{n+1}-x_{n} \|+\|x_{n+1}-S_{N}x_{n}\|\to0 \quad\mbox{as }n\to\infty. $$
(3.10)

Let \(y_{n}=\alpha_{n} u+(1-\alpha_{n})x_{n}\). Then we have

$$\|y_{n}-x_{n}\|=\alpha_{n}\|u-x_{n} \|\to0\quad\mbox{as }n\to\infty, $$

and it follows that

$$\begin{aligned} \|y_{n}-S_{N}y_{n}\|& \leq\|y_{n}-x_{n}\|+\|x_{n}-S_{N}x_{n} \|+\|S_{N}x_{n}-S_{N}y_{n}\| \\ &\leq2\|y_{n}-x_{n}\|+\|x_{n}-S_{N}x_{n} \|\to0\quad\mbox{as }n\to\infty. \end{aligned}$$

For each \(t\in(0,1)\), let \(x_{t}=tu+(1-t)S_{N}x_{t}\). Applying Proposition 3.1 with \(\phi x =u\), for all \(x\in C\) (a constant mapping is a Meir-Keeler contraction), we see that \(\{x_{t}\}\) converges strongly to \(x^{*}\in F(S_{N})\) with \(Qu=x^{*}\). It follows from Proposition 3.3 that

$$\limsup_{n\to\infty} \bigl\langle u-x^{*},j \bigl(y_{n}-x^{*} \bigr) \bigr\rangle \leq0. $$

Observe that

$$\begin{aligned}[b] \bigl\| y_{n}-x^{*}\bigr\| ^{2}&= \bigl\langle \alpha_{n} u+(1-\alpha _{n})x_{n}-x^{*},j \bigl(y_{n}-x^{*} \bigr) \bigr\rangle \\ &\leq(1-\alpha_{n})\bigl\| x_{n}-x^{*}\bigr\| \bigl\| y_{n}-x^{*} \bigr\| +\alpha_{n} \bigl\langle u-x^{*},j \bigl(y_{n}-x^{*} \bigr) \bigr\rangle \\ &\leq\frac{(1-\alpha_{n})}{2} \bigl(\bigl\| x_{n}-x^{*}\bigr\| ^{2}+ \bigl\| y_{n}-x^{*}\bigr\| ^{2} \bigr)+\alpha_{n} \bigl\langle u-x^{*},j \bigl(y_{n}-x^{*} \bigr) \bigr\rangle . \end{aligned} $$

Hence, we have

$$\bigl\| y_{n}-x^{*}\bigr\| ^{2}\leq(1-\alpha_{n})\bigl\| x_{n}-x^{*}\bigr\| ^{2} +2\alpha_{n} \bigl\langle u-x^{*},j \bigl(y_{n}-x^{*} \bigr) \bigr\rangle . $$

Next, we have

$$ \bigl\| x_{n+1}-x^{*}\bigr\| ^{2}\leq(1-\alpha_{n})\bigl\| x_{n}-x^{*}\bigr\| ^{2} +2\alpha_{n} \bigl\langle u-x^{*},j \bigl(y_{n}-x^{*} \bigr) \bigr\rangle . $$
(3.11)

From Lemma 2.6, we have the desired result. That is, the sequence \(\{ x_{n}\}\) converges strongly to \(Qu=x^{*}\). This completes the proof. □

The following is a strong convergence theorem for the sequence \(\{z_{n}\} \) in (3.6).

Theorem 3.5

If the sequence \(\{\alpha_{n}\}\) satisfies the conditions (C1)-(C2), then the sequence \(\{z_{n}\}\) generated by (3.6) converges strongly to \(x^{*}\in S\), which satisfies \(Q\phi x^{*}=x^{*}\), where Q is a sunny nonexpansive retraction from C onto S.

Proof

Let \(x^{*}\) be the unique fixed point of \(Q\phi\), that is, \(Q\phi x^{*}=x^{*}\). Let \(\{x_{n}\}\) be a sequence defined by

$$x_{n+1}=S_{N} \bigl(\alpha_{n} \phi x^{*} +(1- \alpha_{n})x_{n} \bigr), \quad\mbox{for all }n\geq0. $$

By Theorem 3.4, \(x_{n}\to Q\phi x^{*}=x^{*}\) as \(n\to\infty\).

Now, we prove that \(\|z_{n}-x_{n}\|\to0\) as \(n\to\infty\). Assume that

$$\limsup_{n\rightarrow\infty}\|z_{n}-x_{n}\|>0. $$

Then we choose ε with \(\varepsilon\in(0,\limsup_{n\to \infty}\|z_{n}-x_{n}\|)\). By Lemma 2.1, there exists \(r\in(0,1)\) satisfying (2.2). We also choose \(n_{1}\in\mathbb{N}\) such that

$$\frac{r\|x_{n}-x^{*}\|}{1-r}< \varepsilon, $$

for all \(n\geq n_{1}\). We divide this into the following two cases:

(i) There exists \(n_{2}\in\mathbb{N}\) satisfying \(n_{2}\geq n_{1}\) and \(\|z_{n_{2}}-x_{n_{2}}\|\leq\varepsilon\).

(ii) \(\|z_{n}-x_{n}\| >\varepsilon\), for all \(n\geq n_{1}\).

In the case of (i), we have

$$\begin{aligned} \|z_{n_{2}+1}-x_{n_{2}+1}\|\leq{}&(1- \alpha_{n_{2}})\|z_{n_{2}}-x_{n_{2}}\|+\alpha _{n_{2}} \bigl\| \phi z_{n_{2}}-\phi x^{*}\bigr\| \\ \leq{}&(1-\alpha_{n_{2}})\|z_{n_{2}}-x_{n_{2}}\|+ \alpha_{n_{2}}\max \bigl\{ r\bigl\| z_{n_{2}}-x^{*}\bigr\| ,\varepsilon \bigr\} \\ \leq{}&\max \biggl\{ (1-\alpha_{n_{2}}+r\alpha_{n_{2}}) \|z_{n_{2}}-x_{n_{2}}\| +\alpha_{n_{2}}(1-r) \frac{r\|x_{n_{2}}-x^{*}\|}{1-r}, \\ &{} (1-\alpha_{n_{2}})\|z_{n_{2}}-x_{n_{2}}\|+\alpha _{n_{2}}\varepsilon \biggr\} \\ \leq{}&\varepsilon. \end{aligned}$$

By induction, we can show that \(\|z_{n}-x_{n}\|\leq\varepsilon\), for all \(n\geq n_{2}\). This is a contradiction to the fact that \(\varepsilon<\limsup_{n\rightarrow\infty}\|z_{n}-x_{n}\|\).

In the case of (ii), for each \(n\geq n_{1}\), we have

$$\begin{aligned} \|z_{n+1}-x_{n+1}\|&\leq(1- \alpha_{n})\|z_{n}-x_{n}\|+\alpha_{n} \bigl\| \phi z_{n}-\phi x^{*}\bigr\| \\ &\leq(1-\alpha_{n})\|z_{n}-x_{n}\|+ \alpha_{n}\|\phi z_{n}-\phi x_{n}\| + \alpha_{n}\bigl\| \phi x_{n}-\phi x^{*}\bigr\| \\ &\leq \bigl[1-\alpha_{n} (1-r) \bigr]\|z_{n}-x_{n} \|+\alpha_{n} \bigl\| \phi x_{n}-\phi x^{*}\bigr\| . \end{aligned}$$

So, by Lemma 2.6, we get \(\lim_{n\rightarrow\infty}\|z_{n}-x_{n}\|=0\) (note that \(\|\phi x_{n}-\phi x^{*}\|\to0\), since \(x_{n}\to x^{*}\) and ϕ is continuous). This is a contradiction. Therefore \(\lim_{n\rightarrow\infty}\|z_{n}-x_{n}\|=0\). Thus we obtain

$$\lim_{n\rightarrow\infty}\bigl\| z_{n}-x^{*}\bigr\| \leq\lim _{n\rightarrow\infty}\| z_{n}-x_{n}\| +\lim _{n\rightarrow\infty}\bigl\| x_{n}-x^{*}\bigr\| =0. $$

Hence \(\{z_{n}\}\) converges strongly to \(Q\phi x^{*}=x^{*}\). This completes the proof. □
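The following is a hedged numerical sketch of the main scheme (3.6) with toy data of our own choosing (two linear monotone operators on \(\mathbb{R}^{2}\) and a strict contraction ϕ), far simpler than the Banach-space setting of Theorem 3.5; it only illustrates the shape of the iteration.

```python
import numpy as np

# Toy data (for illustration only): A_i x = M_i x on R^2, common zero set {0}.
M1, M2 = np.diag([1.0, 0.0]), np.diag([0.0, 2.0])
I = np.eye(2)
J1, J2 = np.linalg.inv(I + M1), np.linalg.inv(I + M2)
a0, a1, a2 = 0.2, 0.4, 0.4

def S_N(x):
    """S_2 := a_0*I + a_1*J^{A_1} + a_2*J^{A_2}; nonexpansive with F(S_2) = common zeros (Lemma 2.8)."""
    return a0 * x + a1 * (J1 @ x) + a2 * (J2 @ x)

def phi(x):
    """A Meir-Keeler contraction on R^2 (here simply a strict contraction)."""
    return 0.5 * x + np.array([0.1, -0.2])

# Viscosity scheme (3.6): z_{n+1} = S_N(alpha_n*phi(z_n) + (1 - alpha_n)*z_n), alpha_n = 1/(n+1).
z = np.array([3.0, -4.0])
for n in range(1, 5000):
    alpha = 1.0 / (n + 1)
    z = S_N(alpha * phi(z) + (1 - alpha) * z)
print(z)   # approaches the common zero (0, 0)
```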

Corollary 3.6

Let E be a reflexive and strictly convex Banach space with a uniformly Gâteaux differentiable norm and let C be a closed convex subset of E which has the fixed point property for nonexpansive mappings. Let \(A_{i}: E\to2^{E}\) be an m-accretive operator, for each \(i=1,2,\ldots,N\) such that

$$S=\bigcap_{i=1}^{N}A_{i}^{-1}0 \neq\emptyset. $$

For each \(\phi\in\Sigma_{C}\), let \(\{z_{n}\}\) be a sequence generated by (3.6). If the sequence \(\{\alpha_{n}\}\) satisfies the conditions (C1)-(C2), then the sequence \(\{z_{n}\}\) converges strongly to \(x^{*}\in S\) which satisfies \(Q\phi x^{*}=x^{*}\), where Q is a sunny nonexpansive retraction from C onto S.

Proof

Since for each \(i=1,2,\ldots,N\), \(A_{i}\) is an m-accretive operator, the condition \(\overline{D(A_{i})}\subset C\subset\bigcap_{r>0}R(I+rA_{i})\) is satisfied, for all \(i=1,2,\ldots,N\). By the assumption and Theorem 3.5, we have \(z_{n}\to x^{*}\) as \(n\to\infty \) which satisfies \(Q\phi x^{*}=x^{*}\). This completes the proof. □

Remark 3.7

Corollary 3.6 is a generalization of the results of Tuyen [14], Zegeye and Shahzad [11] and Jung [24].

Remark 3.8

If we take \(N=1\), then we may take \(S_{1}:=J^{A}=(I+A)^{-1}\), and the strict convexity of E and the real constants \(a_{i}\), \(i=0,1\), are not needed.

Corollary 3.9

Let E be a reflexive and strictly convex Banach space with a uniformly Gâteaux differentiable norm and let C be a closed convex subset of E which has the fixed point property for nonexpansive mappings. Let \(A: E\to2^{E}\) be an m-accretive operator such that \(S=A^{-1}0\neq\emptyset\). For each \(\phi\in\Sigma_{C}\), let \(\{z_{n}\}\) be a sequence defined by

$$ \left \{ \begin{array}{@{}l} z_{0}\in C,\\ z_{n+1}=J^{A} (\alpha_{n} \phi z_{n}+(1-\alpha_{n})z_{n} ), \end{array} \right . $$
(3.12)

for all \(n\geq0\). If the sequence \(\{\alpha_{n}\}\) satisfies the conditions (C1)-(C2), then the sequence \(\{z_{n}\}\) converges strongly to \(x^{*}\in S\) which satisfies \(Q\phi x^{*}=x^{*}\), where Q is a sunny nonexpansive retraction from C onto S.

Remark 3.10

Corollary 3.9 is a generalization of the results of Tuyen in [8].

4 Applications

In this section, we give some applications in the framework of Hilbert spaces. We first apply Corollary 3.9 to the convex minimization problem.

Theorem 4.1

Let H be a Hilbert space and let \(f: H\to (-\infty,\infty]\) be a proper lower semicontinuous convex function such that \((\partial f)^{-1}0\neq \emptyset\), where ∂f denotes the subdifferential mapping of f. Let \(\{x_{n}\}\) be a sequence defined as follows:

$$ \left \{ \begin{array}{@{}l} x_{0}\in H,\\ y_{n}=\alpha_{n} \phi x_{n}+(1- \alpha_{n})x_{n},\\ x_{n+1}=\operatorname{argmin}_{z\in H} \{ f(z)+ \frac{1}{2}\|z-y_{n}\| ^{2} \}, \end{array} \right . $$
(4.1)

for all \(n\geq0\), where \(\{\alpha_{n}\}\) is a sequence of positive real numbers and \(\phi\in\Sigma_{H} \). If the sequence \(\{\alpha_{n}\}\) satisfies the conditions (C1)-(C2), then the sequence \(\{x_{n}\}\) converges strongly to \(x^{*}\) in \((\partial f)^{-1}0\).

Proof

By the Rockafellar theorem [25] (cf. [26]), the subdifferential mapping ∂f is maximal monotone in H. So,

$$x_{n+1}=\mathop{\operatorname{argmin}}\limits_{z\in H} \biggl\{ f(z)+ \frac{1}{2}\|z-y_{n}\| ^{2} \biggr\} $$

is equivalent to \(\partial f(x_{n+1})+x_{n+1}\ni y_{n}\). Using Corollary 3.9, \(\{x_{n}\}\) converges strongly to an element \(x^{*}\) in \((\partial f)^{-1}0\). This completes the proof. □
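A minimal sketch of scheme (4.1), assuming concrete choices of our own: \(f(z)=|z|\) on \(H=\mathbb{R}\) (so the argmin step is soft-thresholding with unit parameter), a strict contraction ϕ, and \(\alpha_{n}=1/(n+1)\).

```python
import numpy as np

def prox_abs(y):
    """argmin_z { |z| + 0.5*(z - y)^2 }, i.e. soft-thresholding with parameter 1."""
    return np.sign(y) * max(abs(y) - 1.0, 0.0)

def phi(x):
    """A Meir-Keeler contraction on R (here a strict contraction)."""
    return 0.5 * x + 0.3

# Scheme (4.1): y_n = alpha_n*phi(x_n) + (1 - alpha_n)*x_n,  x_{n+1} = argmin step.
x = 10.0
for n in range(1, 500):
    alpha = 1.0 / (n + 1)
    y = alpha * phi(x) + (1 - alpha) * x
    x = prox_abs(y)
print(x)   # approaches 0, the unique zero of the subdifferential of |.|
```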

We next apply Proposition 3.3 to the variational inequality problem. Let C be a nonempty, closed, and convex subset of a Hilbert space H and let \(A : C\to H\) be a single-valued monotone operator which is hemicontinuous. Then a point \(u \in C\) is said to be a solution of the variational inequality for A if

$$ \langle y-u, Au\rangle\geq0, $$
(4.2)

for all \(y\in C\). We denote by \(VI(C,A)\) the set of all solutions of the variational inequality (4.2) for A. We also denote by \(N_{C}(x)\) the normal cone for C at a point \(x \in C\), that is,

$$N_{C}(x)= \bigl\{ z\in H: \langle y-x,z\rangle\leq0, \mbox{ for all } y \in C \bigr\} . $$

Theorem 4.2

Let C be a nonempty, closed, and convex subset of a Hilbert space H and let \(A : C\to H\) be a single-valued, hemicontinuous, monotone operator such that \(VI(C,A)\neq\emptyset\). Let \(\{x_{n}\}\) be a sequence defined as follows:

$$ \left \{ \begin{array}{@{}l} x_{0}\in H,\\ y_{n}=\alpha_{n} \phi x_{n}+(1- \alpha_{n})x_{n},\\ x_{n+1}=VI(C,A+I-y_{n}), \end{array} \right . $$
(4.3)

for all \(n\geq0\), where \(\{\alpha_{n}\}\) is a sequence of positive real numbers and \(\phi\in\Sigma_{H} \). If the sequence \(\{\alpha_{n}\}\) satisfies the conditions (C1)-(C2), then the sequence \(\{x_{n}\}\) converges strongly to \(x^{*}\) in \(VI(C,A)\).

Proof

Define a mapping \(T\subset H\times H\) by

$$Tx= \begin{cases} Ax+N_{C}(x),& x\in C,\\ \emptyset,& x\notin C. \end{cases} $$

By the Rockafellar theorem [27], we know that T is maximal monotone and \(T^{-1}0=VI(C,A)\).

Note that

$$x_{n+1}=VI(C,A+I-y_{n}) $$

if and only if

$$\langle y-x_{n+1}, Ax_{n+1}+x_{n+1}-y_{n} \rangle\geq0, $$

for all \(y\in C\), that is,

$$-Ax_{n+1}-x_{n+1}+y_{n}\in N_{C}(x_{n+1}). $$

This implies that

$$x_{n+1}=J^{T} \bigl(\alpha_{n}\phi x_{n} +(1-\alpha_{n})x_{n} \bigr). $$

Using Corollary 3.9, \(\{x_{n}\}\) converges strongly to an element \(x^{*}\) in \(VI(C,A)\). This completes the proof. □
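To make (4.3) concrete, here is a sketch under assumptions of our own (not the authors' implementation): \(C=[0,1]^{2}\subset\mathbb{R}^{2}\), a monotone linear A, and the inner problem \(x_{n+1}=VI(C,A+I-y_{n})\) solved by a projected fixed-point iteration, which converges because \(A+I\) is strongly monotone.

```python
import numpy as np

A_mat = np.array([[2.0, 1.0], [-1.0, 2.0]])   # monotone: its symmetric part is positive definite
lo, hi = np.zeros(2), np.ones(2)

def proj_C(x):
    """Metric projection onto the box C = [0, 1]^2."""
    return np.clip(x, lo, hi)

def inner_vi(y, gamma=0.2, n_iter=500):
    """Solve VI(C, A + I - y): find x in C with <v - x, Ax + x - y> >= 0 for all v in C,
    via the projected fixed-point iteration x <- P_C(x - gamma*(Ax + x - y)),
    which converges because A + I is strongly monotone and Lipschitz."""
    x = proj_C(y)
    for _ in range(n_iter):
        x = proj_C(x - gamma * (A_mat @ x + x - y))
    return x

def phi(x):
    """A Meir-Keeler contraction (here a strict contraction)."""
    return 0.5 * x + np.array([0.1, 0.1])

# Scheme (4.3): y_n = alpha_n*phi(x_n) + (1 - alpha_n)*x_n,  x_{n+1} = VI(C, A + I - y_n).
x = np.array([1.0, 1.0])
for n in range(1, 300):
    alpha = 1.0 / (n + 1)
    x = inner_vi(alpha * phi(x) + (1 - alpha) * x)
print(x)   # approaches the unique solution of VI(C, A), here (0, 0)
```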

References

  1. Martinet, B: Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Opér. 4, 154-158 (1970)

  2. Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 887-897 (1976)

  3. Güler, O: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 29, 403-419 (1991)

  4. Bauschke, HH, Matoušková, E, Reich, S: Projection and proximal point methods: convergence results and counterexamples. Nonlinear Anal. TMA 56, 715-738 (2004)

  5. Lehdili, N, Moudafi, A: Combining the proximal algorithm and Tikhonov regularization. Optimization 37, 239-252 (1996)

  6. Xu, H-K: A regularization method for the proximal point algorithm. J. Glob. Optim. 36, 115-125 (2006)

  7. Song, Y, Yang, C: A note on a paper: A regularization method for the proximal point algorithm. J. Glob. Optim. 43, 171-174 (2009)

  8. Tuyen, TM: A regularization proximal point algorithm for zeros of accretive operators in Banach spaces. Afr. Diaspora J. Math. 13, 62-73 (2012)

  9. Kim, JK, Tuyen, TM: Regularization proximal point algorithm for finding a common fixed point of a finite family of nonexpansive mappings in Banach spaces. Fixed Point Theory Appl. 2011, 52 (2011)

  10. Sahu, DR, Yao, JC: The prox-Tikhonov regularization method for the proximal point algorithm in Banach spaces. J. Glob. Optim. 51, 641-655 (2011)

  11. Zegeye, H, Shahzad, N: Strong convergence theorems for a common zero of a finite family of m-accretive mappings. Nonlinear Anal. TMA 66, 1161-1169 (2007)

  12. Yao, Y, Liou, YC, Wong, MM, Yao, JC: Hierarchical convergence to the zero point of maximal monotone operators. Fixed Point Theory 13, 293-306 (2012)

  13. Ceng, LC, Ansari, QH, Schaible, S, Yao, JC: Hybrid viscosity approximation method for zeros of m-accretive operators in Banach spaces. Numer. Funct. Anal. Optim. 32(11), 1127-1150 (2011)

  14. Tuyen, TM: Strong convergence theorem for a common zero of m-accretive mappings in Banach spaces by viscosity approximation methods. Nonlinear Funct. Anal. Appl. 17, 187-197 (2012)

  15. Witthayarat, U, Kim, JK, Kumam, P: A viscosity hybrid steepest-descent methods for a system of equilibrium problems and fixed point for an infinite family of strictly pseudo-contractive mappings. J. Inequal. Appl. 2012, 224 (2012)

  16. Meir, A, Keeler, E: A theorem on contraction mappings. J. Math. Anal. Appl. 28, 326-329 (1969)

  17. Suzuki, T: Moudafi’s viscosity approximations with Meir-Keeler contractions. J. Math. Anal. Appl. 325, 342-352 (2007)

  18. Goebel, K, Reich, S: Uniform Convexity, Hyperbolic Geometry and Nonexpansive Mappings. Dekker, New York (1984)

  19. Ha, KS, Jung, JS: Strong convergence theorems for accretive operators in Banach space. J. Math. Anal. Appl. 147, 330-339 (1990)

  20. Xu, H-K: Strong convergence of an iterative method for nonexpansive and accretive operators. J. Math. Anal. Appl. 314, 631-643 (2006)

  21. Wong, NC, Sahu, DR, Yao, JC: Solving variational inequalities involving nonexpansive type mappings. Nonlinear Anal. TMA 69, 4732-4753 (2008)

  22. Barbu, V, Precupanu, T: Convexity and Optimization in Banach Spaces. Editura Academiei R.S.R., Bucharest (1978)

  23. Takahashi, W: Nonlinear Functional Analysis. Fixed Point Theory and Applications. Yokohama Publishers, Yokohama (2009)

  24. Jung, JS: Strong convergence of an iterative method for finding common zeros of a finite family of accretive operators. Commun. Korean Math. Soc. 24(3), 381-393 (2009)

  25. Rockafellar, RT: Characterization of the subdifferentials of convex functions. Pac. J. Math. 17, 497-510 (1966)

  26. Jung, JS: Some results on Rockafellar-type iterative algorithms for zeros of accretive operators. J. Inequal. Appl. 2013, 255 (2013). doi:10.1186/1029-242X-2013-255

  27. Rockafellar, RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 149, 75-88 (1970)


Acknowledgements

This work was supported by the Basic Science Research Program through the National Research Foundation grant funded by the Ministry of Education of the Republic of Korea (2014046293).

Author information


Corresponding author

Correspondence to Jong Kyu Kim.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly to writing this article. All authors read and approved the final manuscript.

Rights and permissions

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.


About this article


Cite this article

Kim, J.K., Tuyen, T.M. Viscosity approximation method with Meir-Keeler contractions for common zero of accretive operators in Banach spaces. Fixed Point Theory Appl 2015, 9 (2015). https://doi.org/10.1186/s13663-014-0256-3
