# A regularization algorithm for zero points of accretive operators

## Abstract

A regularization algorithm with a computational error for treating accretive operators is investigated. A strong convergence theorem for zero points of accretive operators is established in a reflexive Banach space.

## 1 Introduction

In this paper, we are concerned with the problem of finding zero points of a mapping $A:E\to {2}^{{E}^{\ast }}$; that is, finding a point x in the domain of A such that $0\in Ax$. The domain of a mapping A is defined by the set $\left\{x\in E:Ax\ne \mathrm{\varnothing }\right\}$. Many important problems can be reformulated as zero-point problems, for instance, evolution equations, complementarity problems, minimax problems, variational inequalities and optimization problems. It is well known that minimizing a convex function f can be reduced to finding zero points of the subdifferential mapping $A=\partial f$. One of the most popular techniques for solving the inclusion problem goes back to the work of Browder [1]. One of the basic ideas in the case of a Hilbert space H is to reduce the above inclusion problem to a fixed point problem of the operator ${R}_{A}$ defined by ${R}_{A}={\left(I+A\right)}^{-1}$, which is called the classical resolvent of A. If A satisfies some monotonicity conditions, the classical resolvent of A has full domain and is firmly nonexpansive, that is, ${\parallel {R}_{A}x-{R}_{A}y\parallel }^{2}\le 〈{R}_{A}x-{R}_{A}y,x-y〉$, $\mathrm{\forall }x,y\in H$. This property of the resolvent ensures that the Picard iterative algorithm ${x}_{n+1}={R}_{A}{x}_{n}$ converges weakly to a fixed point of ${R}_{A}$, which is necessarily a zero point of A. Rockafellar introduced this iteration method and called it the proximal point algorithm; for more detail, see [2–4] and the references therein. Methods for finding zero points of monotone mappings in the framework of Hilbert spaces rely on the good properties of the resolvent ${R}_{A}$, but these properties are not available in the framework of Banach spaces.
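The resolvent machinery above can be made concrete with a toy computation. The sketch below (our illustration, not part of the paper) runs the Picard iteration ${x}_{n+1}={R}_{A}{x}_{n}$ for the monotone operator $Ax=x-b$ on the real line, whose classical resolvent is ${R}_{A}x=\left(x+b\right)/2$ and whose unique zero is b; the value of b and all parameters are our choices.

```python
# Proximal point algorithm for the monotone operator A(x) = x - b on R
# (a toy instance; b is our choice). The classical resolvent R_A = (I + A)^{-1}
# solves z + (z - b) = x, i.e. R_A(x) = (x + b) / 2, and A^{-1}(0) = {b}.

def resolvent(x, b):
    """Classical resolvent R_A(x) = (x + b) / 2 of A(x) = x - b."""
    return (x + b) / 2.0

def proximal_point(x0, b, n_iter=60):
    """Picard iteration x_{n+1} = R_A(x_n)."""
    x = x0
    for _ in range(n_iter):
        x = resolvent(x, b)
    return x

print(abs(proximal_point(10.0, 3.0) - 3.0) < 1e-12)  # True: iterates reach the zero of A
```

Here the firm nonexpansiveness of ${R}_{A}$ makes each step halve the distance to the zero point; in infinite-dimensional spaces the same iteration is, in general, only weakly convergent, which is the issue the paper addresses.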

In this paper, we study a viscosity algorithm with a computational error. A strong convergence theorem for zero points of accretive operators is established in a reflexive Banach space. The organization of this paper is as follows. In Section 2, we provide some necessary preliminaries. In Section 3, a strong convergence theorem is established in a reflexive Banach space. Two applications of the main results are also discussed in this section.

## 2 Preliminaries

In what follows, we always assume that E is a Banach space with the dual ${E}^{\ast }$. Let ${U}_{E}=\left\{x\in E:\parallel x\parallel =1\right\}$. E is said to be smooth or is said to have a Gâteaux differentiable norm if the limit ${lim}_{t\to 0}\frac{\parallel x+ty\parallel -\parallel x\parallel }{t}$ exists for each $x,y\in {U}_{E}$. E is said to have a uniformly Gâteaux differentiable norm if for each $y\in {U}_{E}$, the limit is attained uniformly for all $x\in {U}_{E}$. E is said to be uniformly smooth or is said to have a uniformly Fréchet differentiable norm if the limit is attained uniformly for $x,y\in {U}_{E}$. Let $〈\cdot ,\cdot 〉$ denote the pairing between E and ${E}^{\ast }$. The normalized duality mapping $J:E\to {2}^{{E}^{\ast }}$ is defined by $J\left(x\right)=\left\{f\in {E}^{\ast }:〈x,f〉={\parallel x\parallel }^{2}={\parallel f\parallel }^{2}\right\}$, $\mathrm{\forall }x\in E$. In the sequel, we use j to denote the single-valued normalized duality mapping. It is known that if the norm of E is uniformly Gâteaux differentiable, then the duality mapping j is single-valued and uniformly norm-to-weak${}^{\ast }$ continuous on each bounded subset of E.

Let C be a nonempty closed convex subset of E. Let $T:C\to C$ be a mapping. In this paper, we use $F\left(T\right)$ to denote the set of fixed points of T. Recall that T is said to be α-contractive if there exists a constant $\alpha \in \left(0,1\right)$ such that $\parallel Tx-Ty\parallel \le \alpha \parallel x-y\parallel$, $\mathrm{\forall }x,y\in C$. T is said to be nonexpansive if the above inequality holds with $\alpha =1$. T is said to be pseudocontractive if there exists some $j\left(x-y\right)\in J\left(x-y\right)$ such that $〈Tx-Ty,j\left(x-y\right)〉\le {\parallel x-y\parallel }^{2}$, $\mathrm{\forall }x,y\in C$.

Recall that a closed convex subset C of a Banach space E is said to have normal structure if for each bounded closed convex subset K of C which contains at least two points, there exists an element x of K which is not a diametral point of K, i.e., $sup\left\{\parallel x-y\parallel :y\in K\right\}<d\left(K\right)$, where $d\left(K\right)$ is the diameter of K. Let D be a nonempty subset of C, and let $Q:C\to D$. Q is said to be a retraction if ${Q}^{2}=Q$; sunny if for each $x\in C$ and $t\in \left(0,1\right)$, we have $Q\left(tx+\left(1-t\right)Qx\right)=Qx$; a sunny nonexpansive retraction if Q is sunny, nonexpansive, and a retraction. D is said to be a nonexpansive retract of C if there exists a nonexpansive retraction from C onto D; for more details, see [5] and the references therein.

Let I denote the identity operator on E. An operator $A\subset E×E$ with domain $D\left(A\right)=\left\{z\in E:Az\ne \mathrm{\varnothing }\right\}$ and range $R\left(A\right)=\bigcup \left\{Az:z\in D\left(A\right)\right\}$ is said to be accretive if for each ${x}_{i}\in D\left(A\right)$ and ${y}_{i}\in A{x}_{i}$, $i=1,2$, there exists $j\left({x}_{1}-{x}_{2}\right)\in J\left({x}_{1}-{x}_{2}\right)$ such that $〈{y}_{1}-{y}_{2},j\left({x}_{1}-{x}_{2}\right)〉\ge 0$. An accretive operator A is said to be m-accretive if $R\left(I+rA\right)=E$ for all $r>0$. In a real Hilbert space, an operator A is m-accretive if and only if A is maximal monotone. In this paper, we use ${A}^{-1}\left(0\right)$ to denote the set of zero points of A. For an accretive operator A, we can define a nonexpansive single-valued mapping ${J}_{r}:R\left(I+rA\right)\to D\left(A\right)$ by ${J}_{r}={\left(I+rA\right)}^{-1}$ for each $r>0$, which is called the resolvent of A.
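As a concrete one-dimensional instance (our example, not drawn from the paper): on $E=\mathbb{R}$ the subdifferential $A=\partial |\cdot |$ is m-accretive, and its resolvent ${J}_{r}={\left(I+rA\right)}^{-1}$ is the familiar soft-thresholding map.

```python
def soft_threshold(x, r):
    """Resolvent J_r = (I + r*A)^{-1} of the m-accretive operator A = d|.| on R."""
    if x > r:
        return x - r
    if x < -r:
        return x + r
    return 0.0  # the whole interval [-r, r] is mapped onto the zero point of A

# J_r is single-valued and nonexpansive: |J_r x - J_r y| <= |x - y|.
print(soft_threshold(2.5, 1.0))   # 1.5
print(soft_threshold(-0.3, 1.0))  # 0.0
```

Note that ${A}^{-1}\left(0\right)=\left\{0\right\}$ here, and ${J}_{r}$ fixes exactly that zero point for every $r>0$.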

One of the classical methods for studying the problem $0\in Ax$, where $A\subset E×E$ is an accretive operator, is the proximal point algorithm (PPA), which was initiated by Martinet [6] and further developed by Rockafellar [3]. It is known that the PPA is in general only weakly convergent; see Güler [7]. In many disciplines, including economics, image recovery, quantum physics, and control theory, problems arise in infinite-dimensional spaces. In such problems, strong convergence (norm convergence) is often much more desirable than weak convergence, since it translates into the physically tangible property that the energy $\parallel {x}_{n}-x\parallel$ of the error between the iterate ${x}_{n}$ and the solution x eventually becomes arbitrarily small. The importance of strong convergence is also underlined in [7], where a convex function f is minimized via the proximal point algorithm: it is shown that the rate of convergence of the value sequence $\left\{f\left({x}_{n}\right)\right\}$ is better when $\left\{{x}_{n}\right\}$ converges strongly than when it converges weakly. Such properties have a direct impact when the process is executed directly in the underlying infinite-dimensional space.

Regularization methods recently have been investigated for treating zero points of accretive operators; see [8–22] and the references therein. In this paper, zero points of m-accretive operators are investigated based on a viscosity iterative algorithm with a computational error. A strong convergence theorem for zero points of m-accretive operators is established in a reflexive Banach space.

In order to state our main results, we need the following lemmas.

Lemma 2.1 [23]

Let $\left\{{x}_{n}\right\}$ and $\left\{{y}_{n}\right\}$ be bounded sequences in a Banach space E. Let $\left\{{\beta }_{n}\right\}$ be a sequence in $\left(0,1\right)$ with $0<{lim inf}_{n\to \mathrm{\infty }}{\beta }_{n}\le {lim sup}_{n\to \mathrm{\infty }}{\beta }_{n}<1$. Suppose that ${x}_{n+1}=\left(1-{\beta }_{n}\right){y}_{n}+{\beta }_{n}{x}_{n}$, $\mathrm{\forall }n\ge 1$ and ${lim sup}_{n\to \mathrm{\infty }}\left(\parallel {y}_{n+1}-{y}_{n}\parallel -\parallel {x}_{n+1}-{x}_{n}\parallel \right)\le 0$. Then ${lim}_{n\to \mathrm{\infty }}\parallel {y}_{n}-{x}_{n}\parallel =0$.

Lemma 2.2 [21]

Let E be a real reflexive Banach space with the uniformly Gâteaux differentiable norm and the normal structure, and let C be a nonempty closed convex subset of E. Let $T:C\to C$ be a nonexpansive mapping with a fixed point, and let $f:C\to C$ be a fixed contraction with the coefficient $\alpha \in \left(0,1\right)$. Let $\left\{{x}_{t}\right\}$ be the net generated by ${x}_{t}=tf\left({x}_{t}\right)+\left(1-t\right)T{x}_{t}$, where $t\in \left(0,1\right)$. Then $\left\{{x}_{t}\right\}$ converges strongly as $t\to 0$ to a fixed point ${x}^{\ast }$ of T, which is the unique solution in $F\left(T\right)$ to the following variational inequality $〈f\left({x}^{\ast }\right)-{x}^{\ast },j\left({x}^{\ast }-p\right)〉\ge 0$, $\mathrm{\forall }p\in F\left(T\right)$.
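Numerically, the implicit iterate ${x}_{t}$ of Lemma 2.2 can be obtained by a Banach iteration, since $x↦tf\left(x\right)+\left(1-t\right)Tx$ is a contraction with factor $1-t\left(1-\alpha \right)$. The sketch below (a toy example of ours) uses $Tx=-x$, which is nonexpansive with $F\left(T\right)=\left\{0\right\}$, and the $\frac{1}{2}$-contraction $f\left(x\right)=\left(x+4\right)/2$.

```python
def T(x):
    return -x                     # nonexpansive on R, F(T) = {0}

def f(x):
    return (x + 4.0) / 2.0        # a 1/2-contraction

def implicit_path(t, n_iter=5000):
    """Compute x_t solving x_t = t*f(x_t) + (1 - t)*T(x_t).
    The right-hand side is a (1 - t/2)-contraction in x, so plain
    Banach iteration converges for every t in (0, 1)."""
    x = 0.0
    for _ in range(n_iter):
        x = t * f(x) + (1 - t) * T(x)
    return x

# As t -> 0, x_t approaches 0, the fixed point of T (here the closed
# form is x_t = 2t / (2 - 1.5t)).
print([implicit_path(t) for t in (0.5, 0.1, 0.01)])
```

The printed values shrink toward 0 as $t\to 0$, illustrating the strong convergence asserted by the lemma in this one-dimensional setting.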

Lemma 2.3 [24]

Let E be a Banach space, and let A be an m-accretive operator. For $\lambda >0$, $\mu >0$, and $x\in E$, we have ${J}_{\lambda }x={J}_{\mu }\left(\frac{\mu }{\lambda }x+\left(1-\frac{\mu }{\lambda }\right){J}_{\lambda }x\right)$, where ${J}_{\lambda }={\left(I+\lambda A\right)}^{-1}$ and ${J}_{\mu }={\left(I+\mu A\right)}^{-1}$.
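The resolvent identity of Lemma 2.3 can be checked numerically. Below (our sketch) we take $A=\partial |\cdot |$ on ℝ, whose resolvent is soft-thresholding, and verify that ${J}_{\lambda }x={J}_{\mu }\left(\frac{\mu }{\lambda }x+\left(1-\frac{\mu }{\lambda }\right){J}_{\lambda }x\right)$ holds for several values of λ, μ and x.

```python
def J(r, x):
    """Resolvent (I + r*d|.|)^{-1} on R: soft-thresholding by r."""
    if x > r:
        return x - r
    if x < -r:
        return x + r
    return 0.0

def identity_gap(lam, mu, x):
    """Lemma 2.3: J_lam(x) should equal J_mu((mu/lam)*x + (1 - mu/lam)*J_lam(x))."""
    lhs = J(lam, x)
    rhs = J(mu, (mu / lam) * x + (1 - mu / lam) * lhs)
    return abs(lhs - rhs)

# The gap vanishes for every lam, mu > 0 and every x.
print(max(identity_gap(2.0, 0.5, x) for x in (-3.0, -1.2, 0.0, 1.5, 4.0)))  # 0.0
```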

Lemma 2.4 [25]

Let $\left\{{a}_{n}\right\}$ be a sequence of nonnegative numbers satisfying the condition ${a}_{n+1}\le \left(1-{t}_{n}\right){a}_{n}+{t}_{n}{b}_{n}+{c}_{n}$, $\mathrm{\forall }n\ge 0$, where $\left\{{t}_{n}\right\}$ is a number sequence in $\left(0,1\right)$ such that ${lim}_{n\to \mathrm{\infty }}{t}_{n}=0$ and ${\sum }_{n=0}^{\mathrm{\infty }}{t}_{n}=\mathrm{\infty }$, $\left\{{b}_{n}\right\}$ is a number sequence such that ${lim sup}_{n\to \mathrm{\infty }}{b}_{n}\le 0$, and $\left\{{c}_{n}\right\}$ is a positive number sequence such that ${\sum }_{n=0}^{\mathrm{\infty }}{c}_{n}<\mathrm{\infty }$. Then ${lim}_{n\to \mathrm{\infty }}{a}_{n}=0$.
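A quick numerical sanity check (ours, with one admissible choice of sequences) illustrates Lemma 2.4: with ${t}_{n}=1/\left(n+2\right)$, ${b}_{n}=1/\left(n+1\right)$ and ${c}_{n}={2}^{-n}$, all three hypotheses hold, and running the recursion at equality drives ${a}_{n}$ to 0.

```python
def lemma24_sequence(a0=5.0, n=100000):
    """Run a_{n+1} = (1 - t_n)*a_n + t_n*b_n + c_n with
    t_n = 1/(n+2) (t_n -> 0, sum t_n = inf), b_n = 1/(n+1) (limsup <= 0),
    c_n = 2**(-n) (summable); Lemma 2.4 forces a_n -> 0."""
    a = a0
    for k in range(n):
        t = 1.0 / (k + 2)
        b = 1.0 / (k + 1)
        c = 2.0 ** (-k) if k < 60 else 0.0  # tail of c_n is numerically negligible
        a = (1 - t) * a + t * b + c
    return a

print(lemma24_sequence() < 1e-3)  # True
```

The decay is slow (roughly logarithmic over n here), which is consistent with the lemma guaranteeing convergence but no rate.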

## 3 Main results

Theorem 3.1 Let E be a real reflexive Banach space with the uniformly Gâteaux differentiable norm, and let A be an m-accretive operator in E. Assume that $C:=\overline{D\left(A\right)}$ is convex and has the normal structure. Let $f:C\to C$ be a fixed α-contraction. Let $\left\{{x}_{n}\right\}$ be a sequence generated in the following manner: ${x}_{0}\in C$ and

${x}_{n+1}={\beta }_{n}{x}_{n}+\left(1-{\beta }_{n}\right){J}_{{r}_{n}}\left({\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right){x}_{n}+{e}_{n+1}\right),\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\ge 0,$

where $\left\{{\alpha }_{n}\right\}$ and $\left\{{\beta }_{n}\right\}$ are real number sequences in $\left(0,1\right)$, $\left\{{e}_{n}\right\}$ is a sequence in E, $\left\{{r}_{n}\right\}$ is a positive real number sequence, and ${J}_{{r}_{n}}={\left(I+{r}_{n}A\right)}^{-1}$. Assume that ${A}^{-1}\left(0\right)$ is not empty and the above control sequences satisfy the following restrictions:

1. (a)

${lim}_{n\to \mathrm{\infty }}{\alpha }_{n}=0$ and ${\sum }_{n=1}^{\mathrm{\infty }}{\alpha }_{n}=\mathrm{\infty }$;

2. (b)

$0<{lim inf}_{n\to \mathrm{\infty }}{\beta }_{n}\le {lim sup}_{n\to \mathrm{\infty }}{\beta }_{n}<1$;

3. (c)

${\sum }_{n=1}^{\mathrm{\infty }}\parallel {e}_{n}\parallel <\mathrm{\infty }$;

4. (d)

${r}_{n}\ge r>0$ and ${lim}_{n\to \mathrm{\infty }}|{r}_{n}-{r}_{n+1}|=0$.

Then the sequence $\left\{{x}_{n}\right\}$ converges strongly to $\overline{x}\in {A}^{-1}\left(0\right)$, which is the unique solution to the following variational inequality $〈f\left(\overline{x}\right)-\overline{x},j\left(p-\overline{x}\right)〉\le 0$, $\mathrm{\forall }p\in {A}^{-1}\left(0\right)$.
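Before the proof, here is a minimal numerical sketch of the scheme (our toy instance; the theorem itself is stated in a general Banach space). We take $Ax=x-b$ on ℝ, so that ${J}_{r}x=\left(x+rb\right)/\left(1+r\right)$ and ${A}^{-1}\left(0\right)=\left\{b\right\}$, together with the contraction $f\left(x\right)=x/2$ and control sequences satisfying (a)-(d).

```python
# Sketch of the Theorem 3.1 iteration for A(x) = x - b on R (our data):
# resolvent J_r(x) = (x + r*b)/(1 + r), zero set A^{-1}(0) = {b},
# contraction f(x) = x/2, summable errors e_n = 2^{-n}.

def run_theorem31(b=3.0, x0=10.0, n=20000):
    x = x0
    for k in range(n):
        alpha = 1.0 / (k + 2)                      # (a): alpha_n -> 0, sum = inf
        beta = 0.5                                 # (b): bounded away from 0 and 1
        e = 2.0 ** (-(k + 1)) if k < 60 else 0.0   # (c): summable errors
        r = 1.0                                    # (d): r_n >= r > 0
        y = alpha * (x / 2.0) + (1 - alpha) * x + e
        x = beta * x + (1 - beta) * (y + r * b) / (1 + r)
    return x

print(abs(run_theorem31() - 3.0) < 1e-2)
```

Since ${A}^{-1}\left(0\right)$ is a singleton here, the variational inequality is trivially satisfied and the iterates converge to b regardless of the choice of f.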

Proof Fixing $p\in {A}^{-1}\left(0\right)$, we find that

$\begin{array}{rcl}\parallel {x}_{n+1}-p\parallel & \le & {\beta }_{n}\parallel {x}_{n}-p\parallel +\left(1-{\beta }_{n}\right)\parallel {J}_{{r}_{n}}\left({\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right){x}_{n}+{e}_{n+1}\right)-p\parallel \\ \le & {\beta }_{n}\parallel {x}_{n}-p\parallel +\left(1-{\beta }_{n}\right)\left({\alpha }_{n}\parallel f\left({x}_{n}\right)-p\parallel +\left(1-{\alpha }_{n}\right)\parallel {x}_{n}-p\parallel +\parallel {e}_{n+1}\parallel \right)\\ \le & \left(1-{\alpha }_{n}\left(1-{\beta }_{n}\right)\left(1-\alpha \right)\right)\parallel {x}_{n}-p\parallel +{\alpha }_{n}\left(1-{\beta }_{n}\right)\parallel f\left(p\right)-p\parallel +\parallel {e}_{n+1}\parallel \\ \le & max\left\{\parallel {x}_{n}-p\parallel ,\frac{\parallel f\left(p\right)-p\parallel }{1-\alpha }\right\}+\parallel {e}_{n+1}\parallel \\ ⋮\\ \le & max\left\{\parallel {x}_{0}-p\parallel ,\frac{\parallel f\left(p\right)-p\parallel }{1-\alpha }\right\}+\sum _{i=1}^{n+1}\parallel {e}_{i}\parallel \\ \le & max\left\{\parallel {x}_{0}-p\parallel ,\frac{\parallel f\left(p\right)-p\parallel }{1-\alpha }\right\}+\sum _{i=1}^{\mathrm{\infty }}\parallel {e}_{i}\parallel <\mathrm{\infty }.\end{array}$

This proves that the sequence $\left\{{x}_{n}\right\}$ is bounded. Put ${y}_{n}={\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right){x}_{n}+{e}_{n+1}$. It follows that

$\begin{array}{rcl}\parallel {y}_{n+1}-{y}_{n}\parallel & \le & {\alpha }_{n+1}\parallel f\left({x}_{n+1}\right)-f\left({x}_{n}\right)\parallel +|{\alpha }_{n+1}-{\alpha }_{n}|\parallel f\left({x}_{n}\right)-{x}_{n}\parallel \\ & & +\left(1-{\alpha }_{n+1}\right)\parallel {x}_{n+1}-{x}_{n}\parallel +\parallel {e}_{n+2}\parallel +\parallel {e}_{n+1}\parallel \\ & \le & \parallel {x}_{n+1}-{x}_{n}\parallel +|{\alpha }_{n+1}-{\alpha }_{n}|\parallel f\left({x}_{n}\right)-{x}_{n}\parallel +\parallel {e}_{n+2}\parallel +\parallel {e}_{n+1}\parallel .\end{array}$
(3.1)

In view of Lemma 2.3, we find that

$\begin{array}{rcl}\parallel {J}_{{r}_{n+1}}{y}_{n+1}-{J}_{{r}_{n}}{y}_{n}\parallel & =& \parallel {J}_{{r}_{n}}\left(\frac{{r}_{n}}{{r}_{n+1}}{y}_{n+1}+\left(1-\frac{{r}_{n}}{{r}_{n+1}}\right){J}_{{r}_{n+1}}{y}_{n+1}\right)-{J}_{{r}_{n}}{y}_{n}\parallel \\ & \le & \parallel \frac{{r}_{n}}{{r}_{n+1}}{y}_{n+1}+\left(1-\frac{{r}_{n}}{{r}_{n+1}}\right){J}_{{r}_{n+1}}{y}_{n+1}-{y}_{n}\parallel \\ & \le & \parallel {y}_{n+1}-{y}_{n}\parallel +\frac{|{r}_{n+1}-{r}_{n}|}{r}M,\end{array}$
(3.2)

where M is an appropriate constant such that $M\ge {sup}_{n\ge 0}\left\{\parallel {J}_{{r}_{n+1}}{y}_{n+1}-{y}_{n+1}\parallel \right\}$. Substituting (3.1) into (3.2), we find that

$\begin{array}{c}\parallel {J}_{{r}_{n+1}}{y}_{n+1}-{J}_{{r}_{n}}{y}_{n}\parallel -\parallel {x}_{n+1}-{x}_{n}\parallel \hfill \\ \phantom{\rule{1em}{0ex}}\le |{\alpha }_{n+1}-{\alpha }_{n}|\parallel f\left({x}_{n}\right)-{x}_{n}\parallel +\parallel {e}_{n+2}\parallel +\parallel {e}_{n+1}\parallel +\frac{|{r}_{n+1}-{r}_{n}|}{r}M.\hfill \end{array}$

In view of the restrictions (a), (c) and (d), we find that

$\underset{n\to \mathrm{\infty }}{lim sup}\left(\parallel {J}_{{r}_{n+1}}{y}_{n+1}-{J}_{{r}_{n}}{y}_{n}\parallel -\parallel {x}_{n}-{x}_{n+1}\parallel \right)\le 0.$

It follows from Lemma 2.1 that

$\underset{n\to \mathrm{\infty }}{lim}\parallel {J}_{{r}_{n}}{y}_{n}-{x}_{n}\parallel =0.$
(3.3)

Notice that $\parallel {y}_{n}-{x}_{n}\parallel \le {\alpha }_{n}\parallel f\left({x}_{n}\right)-{x}_{n}\parallel +\parallel {e}_{n+1}\parallel$. It follows from the restrictions (a) and (c) that

$\underset{n\to \mathrm{\infty }}{lim}\parallel {y}_{n}-{x}_{n}\parallel =0.$
(3.4)

In view of $\parallel {J}_{{r}_{n}}{y}_{n}-{y}_{n}\parallel \le \parallel {J}_{{r}_{n}}{y}_{n}-{x}_{n}\parallel +\parallel {x}_{n}-{y}_{n}\parallel$, we find from (3.3) and (3.4) that

$\underset{n\to \mathrm{\infty }}{lim}\parallel {J}_{{r}_{n}}{y}_{n}-{y}_{n}\parallel =0.$
(3.5)

Take a fixed number s such that $r>s>0$. It follows from Lemma 2.3 that

$\begin{array}{rcl}\parallel {y}_{n}-{J}_{s}{y}_{n}\parallel & \le & \parallel {y}_{n}-{J}_{{r}_{n}}{y}_{n}\parallel +\parallel {J}_{s}\left(\frac{s}{{r}_{n}}{y}_{n}+\left(1-\frac{s}{{r}_{n}}\right){J}_{{r}_{n}}{y}_{n}\right)-{J}_{s}{y}_{n}\parallel \\ \le & \parallel {y}_{n}-{J}_{{r}_{n}}{y}_{n}\parallel +\parallel \left(1-\frac{s}{{r}_{n}}\right)\left({J}_{{r}_{n}}{y}_{n}-{y}_{n}\right)\parallel \\ \le & 2\parallel {y}_{n}-{J}_{{r}_{n}}{y}_{n}\parallel .\end{array}$

This implies from (3.5) that

$\underset{n\to \mathrm{\infty }}{lim}\parallel {y}_{n}-{J}_{s}{y}_{n}\parallel =0.$
(3.6)

Now, we are in a position to claim that ${lim sup}_{n\to \mathrm{\infty }}〈\overline{x}-f\left(\overline{x}\right),j\left({y}_{n}-\overline{x}\right)〉\le 0$, where $\overline{x}={lim}_{t\to 0}{x}_{t}$, and ${x}_{t}$ solves the fixed point equation ${x}_{t}=tf\left({x}_{t}\right)+\left(1-t\right){J}_{s}{x}_{t}$, $\mathrm{\forall }t\in \left(0,1\right)$. It follows that

$\begin{array}{rcl}{\parallel {x}_{t}-{y}_{n}\parallel }^{2}& \le & \left(1-t\right)\left({\parallel {x}_{t}-{y}_{n}\parallel }^{2}+\parallel {J}_{s}{y}_{n}-{y}_{n}\parallel \parallel {x}_{t}-{y}_{n}\parallel \right)\\ +t〈f\left({x}_{t}\right)-{x}_{t},j\left({x}_{t}-{y}_{n}\right)〉+t{\parallel {x}_{t}-{y}_{n}\parallel }^{2}\\ \le & {\parallel {x}_{t}-{y}_{n}\parallel }^{2}+\parallel {J}_{s}{y}_{n}-{y}_{n}\parallel \parallel {x}_{t}-{y}_{n}\parallel +t〈f\left({x}_{t}\right)-{x}_{t},j\left({x}_{t}-{y}_{n}\right)〉.\end{array}$

This implies that $〈{x}_{t}-f\left({x}_{t}\right),j\left({x}_{t}-{y}_{n}\right)〉\le \frac{1}{t}\parallel {J}_{s}{y}_{n}-{y}_{n}\parallel \parallel {x}_{t}-{y}_{n}\parallel$, $\mathrm{\forall }t\in \left(0,1\right)$. In view of (3.6), we find that

$\underset{n\to \mathrm{\infty }}{lim sup}〈{x}_{t}-f\left({x}_{t}\right),j\left({x}_{t}-{y}_{n}\right)〉\le 0.$
(3.7)

Since ${x}_{t}\to \overline{x}$ as $t\to 0$ and j is uniformly norm-to-weak${}^{\ast }$ continuous on bounded subsets of E, we see that $〈{x}_{t}-f\left({x}_{t}\right),j\left({x}_{t}-{y}_{n}\right)〉\to 〈\overline{x}-f\left(\overline{x}\right),j\left(\overline{x}-{y}_{n}\right)〉$ as $t\to 0$.

Hence, for any $ϵ>0$, there exists $\lambda >0$ such that $\mathrm{\forall }t\in \left(0,\lambda \right)$ the following inequality holds $〈f\left(\overline{x}\right)-\overline{x},j\left({y}_{n}-\overline{x}\right)〉\le 〈{x}_{t}-f\left({x}_{t}\right),j\left({x}_{t}-{y}_{n}\right)〉+ϵ$. Taking ${lim sup}_{n\to \mathrm{\infty }}$ in the above inequality, we find that ${lim sup}_{n\to \mathrm{\infty }}〈f\left(\overline{x}\right)-\overline{x},j\left({y}_{n}-\overline{x}\right)〉\le {lim sup}_{n\to \mathrm{\infty }}〈{x}_{t}-f\left({x}_{t}\right),j\left({x}_{t}-{y}_{n}\right)〉+ϵ$. Since ϵ is arbitrary, we obtain from (3.7) that ${lim sup}_{n\to \mathrm{\infty }}〈f\left(\overline{x}\right)-\overline{x},j\left({y}_{n}-\overline{x}\right)〉\le 0$.

Finally, we prove that ${x}_{n}\to \overline{x}$ as $n\to \mathrm{\infty }$. Note that

${\parallel {y}_{n}-\overline{x}\parallel }^{2}\le 2{\alpha }_{n}〈f\left({x}_{n}\right)-\overline{x},j\left({y}_{n}-\overline{x}\right)〉+\left(1-{\alpha }_{n}\right){\parallel {x}_{n}-\overline{x}\parallel }^{2}+2\parallel {e}_{n+1}\parallel \parallel {y}_{n}-\overline{x}\parallel .$
(3.8)

On the other hand, we have

$\begin{array}{rcl}{\parallel {x}_{n+1}-\overline{x}\parallel }^{2}& \le & {\beta }_{n}〈{x}_{n}-\overline{x},j\left({x}_{n+1}-\overline{x}\right)〉+\left(1-{\beta }_{n}\right)〈{J}_{{r}_{n}}{y}_{n}-\overline{x},j\left({x}_{n+1}-\overline{x}\right)〉\\ \le & \frac{{\beta }_{n}}{2}\left({\parallel {x}_{n}-\overline{x}\parallel }^{2}+{\parallel {x}_{n+1}-\overline{x}\parallel }^{2}\right)+\frac{1-{\beta }_{n}}{2}\left({\parallel {y}_{n}-\overline{x}\parallel }^{2}+{\parallel {x}_{n+1}-\overline{x}\parallel }^{2}\right).\end{array}$

It follows from (3.8) that

$\begin{array}{rcl}{\parallel {x}_{n+1}-\overline{x}\parallel }^{2}& \le & \left(1-{\alpha }_{n}\left(1-{\beta }_{n}\right)\right){\parallel {x}_{n}-\overline{x}\parallel }^{2}+2{\alpha }_{n}\left(1-{\beta }_{n}\right)〈f\left({x}_{n}\right)-\overline{x},j\left({y}_{n}-\overline{x}\right)〉\\ +2\parallel {e}_{n+1}\parallel \parallel {y}_{n}-\overline{x}\parallel .\end{array}$

In view of Lemma 2.4, we find the desired conclusion immediately. □

## 4 Applications

In this section, we give two applications of our main result in the framework of Hilbert spaces.

First, we consider, in the framework of Hilbert spaces, solutions of a Ky Fan inequality, which is known as an equilibrium problem in the terminology of Blum and Oettli; see [26] and [27] and the references therein.

Let C be a nonempty closed and convex subset of a Hilbert space H. Let F be a bifunction of $C×C$ into $\mathbb{R}$, where $\mathbb{R}$ denotes the set of real numbers. Recall the following equilibrium problem:

Find $x\in C$ such that $F\left(x,y\right)\ge 0$, $\mathrm{\forall }y\in C$.
(4.1)

To study equilibrium problem (4.1), we may assume that F satisfies the following restrictions:

1. (A1)

$F\left(x,x\right)=0$ for all $x\in C$;

2. (A2)

F is monotone, i.e., $F\left(x,y\right)+F\left(y,x\right)\le 0$ for all $x,y\in C$;

3. (A3)

for each $x,y,z\in C$, ${lim sup}_{t↓0}F\left(tz+\left(1-t\right)x,y\right)\le F\left(x,y\right)$;

4. (A4)

for each $x\in C$, $y↦F\left(x,y\right)$ is convex and lower semi-continuous.

The following lemma can be found in [27].

Lemma 4.1 Let C be a nonempty, closed, and convex subset of H and $F:C×C\to \mathbb{R}$ be a bifunction satisfying (A1)-(A4). Then, for any $s>0$ and $x\in H$, there exists $z\in C$ such that $F\left(z,y\right)+\frac{1}{s}〈y-z,z-x〉\ge 0$, $\mathrm{\forall }y\in C$. Further, define

${T}_{s}x=\left\{z\in C:F\left(z,y\right)+\frac{1}{s}〈y-z,z-x〉\ge 0,\mathrm{\forall }y\in C\right\}$
(4.2)

for all $s>0$ and $x\in H$. Then (1) ${T}_{s}$ is single-valued and firmly nonexpansive; (2) $F\left({T}_{s}\right)=\mathit{EP}\left(F\right)$ is closed and convex.

Lemma 4.2 [28]

Let F be a bifunction from $C×C$ to $\mathbb{R}$ which satisfies (A1)-(A4), and let ${A}_{F}$ be a multivalued mapping of H into itself defined by

${A}_{F}x=\left\{\begin{array}{cc}\left\{z\in H:F\left(x,y\right)\ge 〈y-x,z〉,\mathrm{\forall }y\in C\right\},\hfill & x\in C,\hfill \\ \mathrm{\varnothing },\hfill & x\notin C.\hfill \end{array}$
(4.3)

Then ${A}_{F}$ is a maximal monotone operator with domain $D\left({A}_{F}\right)\subset C$, $\mathit{EP}\left(F\right)={A}_{F}^{-1}\left(0\right)$, where $\mathit{EP}\left(F\right)$ stands for the solution set of (4.1), and

${T}_{s}x={\left(I+s{A}_{F}\right)}^{-1}x,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in H,s>0,$

where ${T}_{s}$ is defined as in (4.2).

Theorem 4.3 Let $F:C×C\to \mathbb{R}$ be a bifunction satisfying (A1)-(A4). Let $f:C\to C$ be a fixed α-contraction. Let $\left\{{x}_{n}\right\}$ be a sequence generated in the following manner: ${x}_{0}\in C$ and

${x}_{n+1}={\beta }_{n}{x}_{n}+\left(1-{\beta }_{n}\right){T}_{{r}_{n}}\left({\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right){x}_{n}+{e}_{n+1}\right),\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\ge 0,$

where $\left\{{\alpha }_{n}\right\}$ and $\left\{{\beta }_{n}\right\}$ are real number sequences in $\left(0,1\right)$, $\left\{{e}_{n}\right\}$ is a sequence in H, $\left\{{r}_{n}\right\}$ is a positive real number sequence, and ${T}_{{r}_{n}}={\left(I+{r}_{n}{A}_{F}\right)}^{-1}$. Assume that $\mathit{EP}\left(F\right)$ is not empty and the above control sequences satisfy the restrictions (a), (b), (c) and (d) in Theorem 3.1. Then the sequence $\left\{{x}_{n}\right\}$ converges strongly to $\overline{x}\in \mathit{EP}\left(F\right)$, which is the unique solution to the following variational inequality $〈f\left(\overline{x}\right)-\overline{x},p-\overline{x}〉\le 0$, $\mathrm{\forall }p\in \mathit{EP}\left(F\right)$.
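A toy instance of Theorem 4.3 (our example, not the paper's generality): the bifunction $F\left(x,y\right)=\left(x-a\right)\left(y-x\right)$ on $C=H=\mathbb{R}$ satisfies (A1)-(A4), solving $F\left(z,y\right)+\frac{1}{s}\left(y-z\right)\left(z-x\right)\ge 0$ for all y forces $\left(z-a\right)+\left(z-x\right)/s=0$, so ${T}_{s}x=\left(x+sa\right)/\left(1+s\right)$ and $\mathit{EP}\left(F\right)=\left\{a\right\}$.

```python
# Theorem 4.3 iteration on R with F(x, y) = (x - a)*(y - x), whose resolvent
# is T_s(x) = (x + s*a)/(1 + s) and whose equilibrium set is {a} (our data).

def T_s(s, x, a):
    return (x + s * a) / (1 + s)   # resolvent of the bifunction F

def run_theorem43(a=2.0, x0=-5.0, n=20000):
    x = x0
    for k in range(n):
        alpha = 1.0 / (k + 2)                      # restriction (a)
        beta = 0.5                                 # restriction (b)
        e = 2.0 ** (-(k + 1)) if k < 60 else 0.0   # restriction (c)
        r = 1.0                                    # restriction (d)
        y = alpha * (x / 2.0) + (1 - alpha) * x + e   # f(x) = x/2
        x = beta * x + (1 - beta) * T_s(r, y, a)
    return x

print(abs(run_theorem43() - 2.0) < 1e-2)
```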

Next, we consider the problem of finding a minimizer of a proper convex lower semicontinuous function.

For a proper lower semicontinuous convex function $g:H\to \left(-\mathrm{\infty },\mathrm{\infty }\right]$, the subdifferential mapping ∂g of g is defined by

$\partial g\left(x\right)=\left\{{x}^{\ast }\in H:g\left(x\right)+〈y-x,{x}^{\ast }〉\le g\left(y\right),\mathrm{\forall }y\in H\right\},\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in H.$

Rockafellar [2] proved that ∂g is a maximal monotone operator. It is easy to verify that $0\in \partial g\left(v\right)$ if and only if $g\left(v\right)={min}_{x\in H}g\left(x\right)$.

Theorem 4.4 Let $g:H\to \left(-\mathrm{\infty },+\mathrm{\infty }\right]$ be a proper convex lower semicontinuous function such that ${\left(\partial g\right)}^{-1}\left(0\right)$ is not empty. Let $f:H\to H$ be a fixed κ-contraction, and let $\left\{{x}_{n}\right\}$ be a sequence generated in H by the following process: ${x}_{0}\in H$ and

$\left\{\begin{array}{c}{y}_{n}=arg{min}_{z\in H}\left\{g\left(z\right)+\frac{{\parallel z-{\alpha }_{n}f\left({x}_{n}\right)-\left(1-{\alpha }_{n}\right){x}_{n}-{e}_{n+1}\parallel }^{2}}{2{r}_{n}}\right\},\hfill \\ {x}_{n+1}={\beta }_{n}{x}_{n}+\left(1-{\beta }_{n}\right){y}_{n},\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\ge 0,\hfill \end{array}$

where $\left\{{\alpha }_{n}\right\}$ and $\left\{{\beta }_{n}\right\}$ are real number sequences in $\left(0,1\right)$, $\left\{{e}_{n}\right\}$ is a sequence in H, and $\left\{{r}_{n}\right\}$ is a positive real number sequence. Assume that the above control sequences satisfy the restrictions in Theorem 3.1. Then the sequence $\left\{{x}_{n}\right\}$ converges strongly to $\overline{x}\in {\left(\partial g\right)}^{-1}\left(0\right)$, which is the unique solution to the following variational inequality $〈f\left(\overline{x}\right)-\overline{x},p-\overline{x}〉\le 0$, $\mathrm{\forall }p\in {\left(\partial g\right)}^{-1}\left(0\right)$.
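A minimal sketch of Theorem 4.4 (our choice of data): with $g\left(z\right)=|z|$ on $H=\mathbb{R}$, the minimization step $arg{min}_{z}\left\{|z|+{\left(z-w\right)}^{2}/2r\right\}$ is soft-thresholding of w by r, and ${\left(\partial g\right)}^{-1}\left(0\right)=\left\{0\right\}$.

```python
# Theorem 4.4 with g(z) = |z| on R (our toy data): the inner argmin is
# soft-thresholding of w_n = alpha_n*f(x_n) + (1 - alpha_n)*x_n + e_{n+1},
# and the minimizer set of g is {0}.

def prox_abs(w, r):
    """argmin_z |z| + (z - w)**2 / (2*r): soft-thresholding by r."""
    if w > r:
        return w - r
    if w < -r:
        return w + r
    return 0.0

def run_theorem44(x0=10.0, n=500):
    x = x0
    for k in range(n):
        alpha = 1.0 / (k + 2)
        beta = 0.5
        e = 2.0 ** (-(k + 1)) if k < 60 else 0.0
        r = 1.0
        w = alpha * (x / 2.0) + (1 - alpha) * x + e   # f(x) = x/2, kappa = 1/2
        x = beta * x + (1 - beta) * prox_abs(w, r)    # y_n is the prox step
    return x

print(abs(run_theorem44()) < 1e-6)
```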

Proof Since $g:H\to \left(-\mathrm{\infty },\mathrm{\infty }\right]$ is a proper convex and lower semicontinuous function, the subdifferential ∂g of g is maximal monotone. Note that

${y}_{n}=arg\underset{z\in H}{min}\left\{g\left(z\right)+\frac{{\parallel z-{\alpha }_{n}f\left({x}_{n}\right)-\left(1-{\alpha }_{n}\right){x}_{n}-{e}_{n+1}\parallel }^{2}}{2{r}_{n}}\right\}$

is equivalent to

$0\in \partial g\left({y}_{n}\right)+\frac{1}{{r}_{n}}\left({y}_{n}-{\alpha }_{n}f\left({x}_{n}\right)-\left(1-{\alpha }_{n}\right){x}_{n}-{e}_{n+1}\right).$

It follows that

${\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right){x}_{n}+{e}_{n+1}\in {y}_{n}+{r}_{n}\partial g\left({y}_{n}\right).$

Following the proof of Theorem 3.1, we draw the desired conclusion immediately. □

## References

1. Browder FE: Existence and approximation of solutions of nonlinear variational inequalities. Proc. Natl. Acad. Sci. USA 1966, 56: 1080–1086. 10.1073/pnas.56.4.1080

2. Rockafellar RT: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1976, 1: 97–116. 10.1287/moor.1.2.97

3. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898. 10.1137/0314056

4. Solodov MV, Svaiter BF: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program. 2000, 87: 189–202.

5. Bruck RE: Nonexpansive projections on subsets of Banach spaces. Pac. J. Math. 1973, 47: 341–355. 10.2140/pjm.1973.47.341

6. Martinet B: Regularisation d’inequations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Oper. 1970, 4: 154–158.

7. Güler O: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 1991, 29: 403–419. 10.1137/0329022

8. Cho SY, Kang SM: Approximation of fixed points of pseudocontraction semigroups based on a viscosity iterative process. Appl. Math. Lett. 2011, 24: 224–228. 10.1016/j.aml.2010.09.008

9. Cho SY, Kang SM: Zero point theorems for m -accretive operators in a Banach space. Fixed Point Theory 2012, 13: 49–58.

10. Yang S: Zero theorems of accretive operators in reflexive Banach spaces. J. Nonlinear Funct. Anal. 2013., 2013: Article ID 2

11. Wen M, Hu C: Strong convergence of a new iterative method for a zero of accretive operator and nonexpansive mapping. Fixed Point Theory Appl. 2012., 2012: Article ID 98

12. Luo H, Wang Y: Iterative approximation for the common solutions of an infinite variational inequality system for inverse-strongly accretive mappings. J. Math. Comput. Sci. 2012, 2: 1660–1670.

13. Cho SY, Qin X, Kang SM: Iterative processes for common fixed points of two different families of mappings with applications. J. Glob. Optim. 2013, 57: 1429–1446. 10.1007/s10898-012-0017-y

14. Cho SY: Strong convergence of an iterative algorithm for sums of two monotone operators. J. Fixed Point Theory 2013., 2013: Article ID 6

15. Jung JS: Strong convergence of viscosity approximation methods for finding zeros of accretive operators in Banach spaces. Nonlinear Anal. 2010, 72: 449–459. 10.1016/j.na.2009.06.079

16. Zegeye H, Shahzad N: Strong convergence theorem for a common point of solution of variational inequality and fixed point problem. Adv. Fixed Point Theory 2012, 2: 374–397.

17. Yang S: A proximal point algorithm for zeros of monotone operators. Math. Finance Lett. 2013., 2013: Article ID 7

18. Cho SY, Qin X, Kang SM: Hybrid projection algorithms for treating common fixed points of a family of demicontinuous pseudocontractions. Appl. Math. Lett. 2012, 25: 854–857. 10.1016/j.aml.2011.10.031

19. Cho SY, Kang SM: Approximation of common solutions of variational inequalities via strict pseudocontractions. Acta Math. Sci. 2012, 32: 1607–1618. 10.1016/S0252-9602(12)60127-1

20. Wu C, Lv S: Bregman projection methods for zeros of monotone operators. J. Fixed Point Theory 2013., 2013: Article ID 7

21. Qin X, Cho S, Wang L: Iterative algorithms with errors for zero points of m -accretive operators. Fixed Point Theory Appl. 2013., 2013: Article ID 148

22. Qin X, Su Y: Approximation of a zero point of accretive operator in Banach spaces. J. Math. Anal. Appl. 2007, 329: 415–424. 10.1016/j.jmaa.2006.06.067

23. Suzuki T: Strong convergence of Krasnoselskii and Mann’s type sequences for one-parameter nonexpansive semigroups without Bochner integrals. J. Math. Anal. Appl. 2005, 305: 227–239. 10.1016/j.jmaa.2004.11.017

24. Barbu V: Nonlinear Semigroups and Differential Equations in Banach Space. Noordhoff, Groningen; 1976.

25. Liu L: Ishikawa-type and Mann-type iterative processes with errors for constructing solutions of nonlinear equations involving m -accretive operators in Banach spaces. Nonlinear Anal. 1998, 34: 307–317. 10.1016/S0362-546X(97)00579-8

26. Fan K: A minimax inequality and applications. In Inequalities III. Edited by: Shisha O. Academic Press, New York; 1972:103–113.

27. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.

28. Takahashi S, Takahashi W, Toyoda M: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 2010, 147: 27–41. 10.1007/s10957-010-9713-2

## Author information


### Corresponding author

Correspondence to Yuan Qing.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

Both authors contributed equally to this manuscript. Both authors read and approved the final manuscript.


Qing, Y., Cho, S.Y. A regularization algorithm for zero points of accretive operators. Fixed Point Theory Appl 2013, 341 (2013). https://doi.org/10.1186/1687-1812-2013-341


### Keywords

• accretive operator
• fixed point
• nonexpansive mapping
• regularization algorithm
• zero point