# Approximation of a zero point of monotone operators with nonsummable errors

## Abstract

In this paper, we study an iterative scheme for two different types of resolvents of a monotone operator defined on a Banach space. These resolvents are generalizations of resolvents of a monotone operator in a Hilbert space. Using the shrinking projection method with errors in a Banach space, we obtain iterative approximations of a zero point of a monotone operator. We also discuss some applications of our result.

## Introduction

Let H be a real Hilbert space and let $$A \subset H \times H$$ be a maximal monotone operator. Then the zero point problem is to find $$u \in H$$ such that

$$0 \in A u.$$
(1.1)

Such a $$u \in H$$ is called a zero point (or a zero) of A. The set of zero points of A is denoted by $$A^{-1}0$$. This problem is connected with many problems in Nonlinear Analysis and Optimization, such as convex minimization problems, variational inequality problems, equilibrium problems, and so on. A well-known method for solving (1.1) is the proximal point algorithm: $$x_{1} \in H$$ and

$$x_{n+1}=J_{r_{n}}x_{n}, \quad n=1,2, \ldots,$$
(1.2)

where $$\{r_{n}\} \subset\mathopen]0, \infty\mathclose[$$ and $$J_{r_{n}}=(I+r_{n}A)^{-1}$$. This algorithm was first introduced by Martinet [1]. In 1976, Rockafellar [2] proved that if $$\liminf_{n} r_{n} > 0$$ and $$A^{-1}0 \ne\emptyset$$, then the sequence $$\{x_{n}\}$$ defined by (1.2) converges weakly to a solution of the zero point problem. Since then, many researchers have studied this problem; see [3–9] and others.
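As a small numerical illustration of scheme (1.2), consider the simplest Hilbert space $$H=\mathbb {R}$$ and the maximal monotone operator $$Ax=x-3$$ (a hypothetical example chosen so that the resolvent has a closed form); its unique zero point is 3 and $$J_{r}x=(x+3r)/(1+r)$$.

```python
# Numerical sketch of the proximal point algorithm (1.2) in the
# simplest Hilbert space H = R (an illustration, not the general
# setting).  We take the maximal monotone operator A(x) = x - 3,
# whose unique zero point is 3, so that
#   J_r x = (I + rA)^{-1} x = (x + 3r) / (1 + r).

def resolvent(x, r):
    # Solve z + r * (z - 3) = x for z.
    return (x + 3.0 * r) / (1.0 + r)

def proximal_point(x1, r_seq):
    x = x1
    for r in r_seq:
        x = resolvent(x, r)   # x_{n+1} = J_{r_n} x_n
    return x

x_final = proximal_point(x1=0.0, r_seq=[1.0] * 60)
print(x_final)  # approaches the zero point 3
```

With the constant choice $$r_{n}=1$$ the iterates satisfy $$x_{n+1}=(x_{n}+3)/2$$, so the error halves at every step, matching Rockafellar's weak (here: strong) convergence result.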

On the other hand, Kimura [10] introduced the following iterative scheme for finding a fixed point of nonexpansive mappings by the shrinking projection method with error in a Hilbert space:

### Theorem 1.1

(Kimura [10])

Let C be a bounded closed convex subset of a Hilbert space H with $$D= \operatorname {diam}C =\sup_{x,y\in C}\Vert x-y\Vert < \infty$$, and let $$T:C\to H$$ be a nonexpansive mapping having a fixed point. Let $$\{\epsilon_{n}\}$$ be a nonnegative real sequence such that $$\epsilon_{0}=\limsup_{n} \epsilon_{n} < \infty$$. For a given point $$u\in H$$, generate an iterative sequence $$\{x_{n}\}$$ as follows: $$x_{1} \in C$$ such that $$\Vert x_{1}-u\Vert <\epsilon_{1}$$, $$C_{1} =C$$,

\begin{aligned} & C_{n+1} = \bigl\{ z \in C : \Vert z-Tx_{n}\Vert \leq \Vert z-x_{n}\Vert \bigr\} \cap C_{n}, \\ & x_{n+1} \in C_{n+1} \quad \textit{such that}\quad \Vert u-x_{n+1} \Vert ^{2} \leq d(u,C_{n+1})^{2}+ \epsilon^{2}_{n+1} \end{aligned}

for all $$n \in \mathbb {N}$$. Then

$$\limsup_{n\to\infty} \Vert x_{n}-Tx_{n}\Vert \leq2\epsilon_{0}.$$

Further, if $$\epsilon_{0}=0$$, then $$\{x_{n}\}$$ converges strongly to $$P_{F(T)}u \in F(T)$$.

We remark that the original result of the theorem above deals with a family of nonexpansive mappings, and the shrinking projection method was first introduced by Takahashi et al. [11]. This result was extended to more general Banach spaces by Kimura [12] (see also Ibaraki and Kimura [13]).
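For illustration, the scheme of Theorem 1.1 can be run numerically in the error-free case $$\epsilon_{n}=0$$ (taking $$x_{1}=P_{C}u$$): we use the toy setting $$H=\mathbb {R}$$, $$C=[-1,1]$$, and the nonexpansive mapping $$Tx=x/2$$ with $$F(T)=\{0\}$$, where each half-space $$\{z:\Vert z-Tx_{n}\Vert \leq\Vert z-x_{n}\Vert \}$$ reduces to $$z\leq3x_{n}/4$$ or $$z\geq3x_{n}/4$$, so every $$C_{n}$$ is an interval.

```python
# Error-free (eps_n = 0) run of the shrinking projection method of
# Theorem 1.1 with H = R, C = [-1, 1], T x = x / 2 (toy example).
# Since T x_n = x_n / 2, the new half-space constraint is
# z <= 3 x_n / 4 when x_n > 0 and z >= 3 x_n / 4 when x_n < 0.

def clamp(u, lo, hi):
    return min(max(u, lo), hi)

def shrinking_projection(u, steps):
    lo, hi = -1.0, 1.0          # C_1 = C
    x = clamp(u, lo, hi)        # x_1 = P_C u
    for _ in range(steps):
        if x > 0:
            hi = min(hi, 0.75 * x)
        elif x < 0:
            lo = max(lo, 0.75 * x)
        x = clamp(u, lo, hi)    # x_{n+1} = P_{C_{n+1}} u
    return x

print(shrinking_projection(u=0.9, steps=60))   # tends to P_{F(T)} u = 0
```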

In this paper, we study the shrinking projection method with error introduced by Kimura [10] (see also [12, 14]). We obtain an iterative approximation of a zero point of a monotone operator generated by the shrinking projection method with errors in a Banach space. Using our result, we discuss some applications.

## Preliminaries

Let E be a real Banach space with its dual $$E^{*}$$. The normalized duality mapping J from E into $$E^{*}$$ is defined by

$$Jx=\bigl\{ x^{*} \in E^{*}: \bigl\langle x, x^{*} \bigr\rangle = \Vert x \Vert ^{2} = \bigl\Vert x^{*}\bigr\Vert ^{2}\bigr\}$$

for each $$x \in E$$. The following properties are well known; see [15, 16] for more details.

1. (1) $$Jx \ne\emptyset$$ for each $$x \in E$$;

2. (2) if E is reflexive, then J is surjective;

3. (3) if E is smooth, then the duality mapping J is single-valued;

4. (4) if E is strictly convex, then J is one-to-one and satisfies $$\langle x-y, x^{*}-y^{*} \rangle> 0$$ for each $$x,y \in E$$ with $$x \neq y$$, $$x^{*} \in Jx$$ and $$y^{*} \in Jy$$;

5. (5) if E is reflexive, smooth, and strictly convex, then the duality mapping $$J_{*}: E^{*} \to E$$ is the inverse of J, that is, $$J_{*} = J^{-1}$$;

6. (6) if E is uniformly smooth, then J is uniformly norm-to-norm continuous on each bounded subset of E.
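As a concrete instance of the properties above: for $$E=\mathbb {R}^{2}$$ with the p-norm ($$1<p<\infty$$), E is smooth and the single-valued duality mapping is known to be $$Jx=\Vert x\Vert _{p}^{2-p}(\vert x_{i}\vert ^{p-1}\operatorname {sign}x_{i})_{i}$$, an element of $$E^{*}$$ carrying the conjugate q-norm. The sketch below (an illustration only) checks the defining identities numerically.

```python
# Duality mapping in E = R^2 with the p-norm (1 < p < infinity):
# Jx = ||x||_p^{2-p} * (|x_i|^{p-1} sign(x_i))_i, with the dual norm
# being the conjugate q-norm, 1/p + 1/q = 1.  (Illustrative sketch.)

def p_norm(x, p):
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

def sign(t):
    return (t > 0) - (t < 0)

def duality_map(x, p):
    n = p_norm(x, p)
    if n == 0.0:
        return [0.0] * len(x)
    return [n ** (2.0 - p) * abs(t) ** (p - 1.0) * sign(t) for t in x]

p = 3.0
q = p / (p - 1.0)                              # conjugate exponent
x = [1.0, -2.0]
jx = duality_map(x, p)
pairing = sum(a * b for a, b in zip(x, jx))    # <x, Jx>
print(pairing, p_norm(x, p) ** 2, p_norm(jx, q) ** 2)  # all three agree
```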

Let E be a reflexive and strictly convex Banach space and let C be a nonempty closed convex subset of E. It is well known that for each $$x\in E$$ there exists a unique point $$z \in C$$ such that $$\Vert x-z\Vert =\min\{\Vert x-y\Vert : y \in C\}$$. Such a point z is denoted by $$P_{C} x$$ and $$P_{C}$$ is called the metric projection of E onto C. The following result is well known; see, for instance, [16].

### Lemma 2.1

Let E be a reflexive, smooth, and strictly convex Banach space, let C be a nonempty closed convex subset of E, let $$P_{C}$$ be the metric projection of E onto C, let $$x \in E$$ and let $$x_{0} \in C$$. Then $$x_{0} = P_{C} x$$ if and only if

$$\bigl\langle x_{0}-y, J(x-x_{0}) \bigr\rangle \geq0$$

for all $$y \in C$$.

Let C be a nonempty closed convex subset of a smooth Banach space E. A mapping $$T: C \to E$$ is said to be of type (P) [17] if

$$\bigl\langle Tx-Ty, J(x-Tx)-J(y-Ty) \bigr\rangle \geq0$$

for each $$x,y \in C$$. A mapping $$T: C \to E$$ is said to be of type (Q) [17, 18] if

$$\bigl\langle Tx-Ty, (Jx-JTx)-(Jy-JTy) \bigr\rangle \geq0$$

for each $$x,y \in C$$. We denote by $$F(T)$$ the set of fixed points of T. A point p in C is said to be an asymptotic fixed point of T if C contains a sequence $$\{x_{n}\}$$ such that $$x_{n}\rightharpoonup p$$ and $$x_{n} -Tx_{n} \to0$$. The set of all asymptotic fixed points of T is denoted by $$\hat{F}(T)$$. It is clear that if $$T: C \to E$$ is of type (P) and $$F(T)$$ is nonempty, then

$$\bigl\langle Tx-p, J(x-Tx) \bigr\rangle \geq0$$
(2.1)

for each $$x \in C$$ and $$p \in F(T)$$. Let E be a reflexive, smooth, and strictly convex Banach space and let C be a nonempty closed convex subset of E. It is well known that the metric projection $$P_{C}$$ of E onto C is a mapping of type (P). We also know that if $$T: C \to E$$ is of type (Q) and $$F(T)$$ is nonempty, then

$$\langle Tx-p, Jx-JTx \rangle\geq0$$
(2.2)

for each $$x \in C$$ and $$p \in F(T)$$.

The following results describe the relation between the set of fixed points and that of asymptotic fixed points for each type of mapping.

### Lemma 2.2

(Aoyama-Kohsaka-Takahashi [19])

Let E be a smooth Banach space, let C be a nonempty closed convex subset of E and let $$T: C \to E$$ be a mapping of type (P). If $$F(T)$$ is nonempty, then $$F(T)$$ is closed and convex and $$F(T)=\hat{F}(T)$$.

### Lemma 2.3

(Kohsaka-Takahashi [18])

Let E be a strictly convex Banach space whose norm is uniformly Gâteaux differentiable, let C be a nonempty closed convex subset of E and let $$T: C \to E$$ be a mapping of type (Q). If $$F(T)$$ is nonempty, then $$F(T)$$ is closed and convex and $$F(T)=\hat{F}(T)$$.

In 1984, Tsukada [20] proved the following theorem for the metric projections in a Banach space. For the exact definition of Mosco limit $$\mathrm {M}\text{-}\!\lim _{n} C_{n}$$, see [21].

### Theorem 2.4

Let E be a reflexive and strictly convex Banach space and let $$\{C_{n}\}$$ be a sequence of nonempty closed convex subsets of E. If $$C_{0} =\mathrm {M}\text{-}\!\lim _{n} C_{n}$$ exists and is nonempty, then for each $$x \in E$$, $$\{P_{C_{n}}x\}$$ converges weakly to $$P_{C_{0}}x$$, where $$P_{C_{n}}$$ is the metric projection of E onto $$C_{n}$$. Moreover, if E has the Kadec-Klee property, the convergence is in the strong topology.

One of the simplest examples of a sequence $$\{C_{n}\}$$ satisfying the condition of the theorem above is a sequence decreasing with respect to inclusion: $$C_{n+1}\subset C_{n}$$ for each $$n\in \mathbb {N}$$. In this case, $$\mathrm {M}\text{-}\!\lim _{n} C_{n} =\bigcap_{n=1}^{\infty} C_{n}$$ (see [7, 12, 21, 22] for more details).

Let E be a smooth Banach space and consider the function $$V: E \times E \to \mathbb {R}$$ defined by

$$V(x, y) = \Vert x\Vert ^{2} - 2\langle x, Jy \rangle+ \Vert y\Vert ^{2}$$
(2.3)

for each $$x,y \in E$$. We know the following properties:

1. (1) $$(\Vert x\Vert -\Vert y\Vert )^{2} \leq V(x,y) \leq(\Vert x\Vert +\Vert y\Vert )^{2}$$ for each $$x,y \in E$$;

2. (2) $$V(x,y) + V(y,x) = 2 \langle x-y, Jx-Jy \rangle$$ for each $$x,y \in E$$;

3. (3) $$V(x,y) = V(x,z) + V(z,y) + 2 \langle x-z, Jz-Jy \rangle$$ for each $$x,y,z \in E$$;

4. (4) if E is additionally assumed to be strictly convex, then $$V(x,y)=0$$ if and only if $$x=y$$.
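In a Hilbert space J is the identity, so $$V(x,y)=\Vert x-y\Vert ^{2}$$ and properties (2) and (3) become elementary identities. The following sketch checks them numerically in $$\mathbb {R}^{2}$$ (an illustration only).

```python
# In a Hilbert space J = I and V(x, y) = ||x - y||^2.  Numerical
# check of properties (2) and (3) of V in R^2 (illustration only).

def dot(a, b):
    return sum(s * t for s, t in zip(a, b))

def sub(a, b):
    return [s - t for s, t in zip(a, b)]

def V(x, y):
    # ||x||^2 - 2<x, Jy> + ||y||^2, with Jy = y in a Hilbert space
    return dot(x, x) - 2.0 * dot(x, y) + dot(y, y)

x, y, z = [1.0, 2.0], [-3.0, 0.5], [0.25, -1.0]

# property (2): V(x,y) + V(y,x) = 2 <x - y, Jx - Jy>
lhs2, rhs2 = V(x, y) + V(y, x), 2.0 * dot(sub(x, y), sub(x, y))

# property (3): V(x,y) = V(x,z) + V(z,y) + 2 <x - z, Jz - Jy>
lhs3, rhs3 = V(x, y), V(x, z) + V(z, y) + 2.0 * dot(sub(x, z), sub(z, y))

print(lhs2, rhs2, lhs3, rhs3)
```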

### Lemma 2.5

(Kamimura-Takahashi [23])

Let E be a smooth and uniformly convex Banach space and let $$\{x_{n}\}$$ and $$\{y_{n}\}$$ be sequences in E such that either $$\{x_{n}\}$$ or $$\{y_{n}\}$$ is bounded. If $$\lim_{n} V(x_{n}, y_{n})=0$$, then $$\lim_{n} \Vert x_{n}-y_{n} \Vert =0$$.

The following results show the existence of mappings $$\underline{g}_{r}$$ and $$\overline{g}_{r}$$, related to the convex structures of a Banach space E. These mappings play important roles in our result.

### Theorem 2.6

(Xu [24])

Let E be a Banach space, $$r\in\mathopen]0, \infty\mathclose[$$ and $$B_{r}= \{x\in E : \Vert x \Vert \leq r\}$$. Then

1. (i) if E is uniformly convex, then there exists a continuous, strictly increasing, and convex function $$\underline{g}_{r}:[0,2r] \to\mathopen[0,\infty\mathclose[$$ with $$\underline{g}_{r}(0)=0$$ such that

    $$\bigl\Vert \alpha x +(1-\alpha) y\bigr\Vert ^{2} \leq\alpha \Vert x\Vert ^{2}+(1-\alpha)\Vert y \Vert ^{2} -\alpha(1- \alpha)\underline{g}_{r}\bigl(\Vert x-y \Vert \bigr)$$

    for all $$x,y\in B_{r}$$ and $$\alpha\in[0,1]$$;

2. (ii) if E is uniformly smooth, then there exists a continuous, strictly increasing, and convex function $$\overline{g}_{r}:[0,2r] \to\mathopen[0,\infty\mathclose[$$ with $$\overline{g}_{r}(0)=0$$ such that

    $$\bigl\Vert \alpha x +(1-\alpha) y\bigr\Vert ^{2} \geq\alpha \Vert x\Vert ^{2}+(1-\alpha)\Vert y \Vert ^{2} -\alpha(1- \alpha)\overline{g}_{r}\bigl(\Vert x-y \Vert \bigr)$$

    for all $$x,y\in B_{r}$$ and $$\alpha\in[0,1]$$.
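In a Hilbert space both estimates of Theorem 2.6 hold with $$\underline{g}_{r}(t)=\overline{g}_{r}(t)=t^{2}$$, and in fact with equality. The following numerical spot check in $$\mathbb {R}^{2}$$ (an illustration only) verifies this identity.

```python
# In a Hilbert space the two estimates of Theorem 2.6 both hold with
# g_r(t) = t^2, and in fact with equality:
#   ||a x + (1-a) y||^2
#     = a ||x||^2 + (1-a) ||y||^2 - a (1-a) ||x - y||^2.
# Numerical spot check in R^2 (illustration only).

def dot(a, b):
    return sum(s * t for s, t in zip(a, b))

x, y, alpha = [2.0, -1.0], [0.5, 3.0], 0.3
combo = [alpha * s + (1 - alpha) * t for s, t in zip(x, y)]
diff = [s - t for s, t in zip(x, y)]

lhs = dot(combo, combo)
rhs = (alpha * dot(x, x) + (1 - alpha) * dot(y, y)
       - alpha * (1 - alpha) * dot(diff, diff))
print(lhs, rhs)  # equal up to rounding
```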

### Theorem 2.7

(Kimura [12])

Let E be a uniformly smooth and uniformly convex Banach space and let $$r>0$$. Then the functions $$\underline{g}_{r}$$ and $$\overline{g}_{r}$$ in Theorem  2.6 satisfy

$$\underline{g}_{r}\bigl(\Vert x-y\Vert \bigr)\leq V(x,y)\leq \overline{g}_{r}\bigl(\Vert x-y\Vert \bigr)$$

for all $$x,y\in B_{r}$$.

## Approximation theorem for the resolvents of type (P)

In this section, we discuss an iterative scheme of resolvents of a monotone operator defined on a Banach space. Let E be a reflexive, smooth, and strictly convex Banach space. An operator $$A \subset E \times E^{*}$$ with domain $$D(A)=\{ x \in E: Ax \ne\emptyset\}$$ and range $$R(A)=\bigcup\{Ax: x \in D(A)\}$$ is said to be monotone if $$\langle x-y, x^{*}-y^{*} \rangle\geq0$$ for any $$(x, x^{*}), (y, y^{*}) \in A$$. A monotone operator A is said to be maximal if $$A=B$$ whenever $$B \subset E \times E^{*}$$ is a monotone operator such that $$A \subset B$$. We denote by $$A^{-1}0$$ the set $$\{z\in D(A): 0\in Az\}$$.

Let C be a nonempty closed convex subset of E, let $$r>0$$ and let $$A\subset E\times E^{*}$$ be a monotone operator satisfying

$$D(A) \subset C \subset R\bigl(I+rJ^{-1}A\bigr)$$
(3.1)

for $$r>0$$. It is well known that if A is a maximal monotone operator, then $$R(I+rJ^{-1}A)=E$$; see [25–27]. Hence, if A is maximal monotone, then (3.1) holds for $$C=\overline{D(A)}$$. We also know that $$\overline{D(A)}$$ is convex; see [28]. If A satisfies (3.1) for $$r>0$$, then we can define the resolvent (of type (P)) $$P_{r}:C\to D(A)$$ of A by

$$P_{r} x =\bigl\{ z\in E : 0\in J(z-x)+rAz \bigr\}$$
(3.2)

for all $$x\in C$$. In other words, $$P_{r}x=(I+rJ^{-1}A)^{-1}x$$ for all $$x\in C$$. The Yosida approximation $$A_{r}:C\to E^{*}$$ is also defined by $$A_{r}x=J(x-P_{r}x)/r$$ for all $$x\in C$$. We know the following; see, for instance, [15, 17, 19]:

1. (1) $$P_{r}$$ is a mapping of type (P) from C into $$D(A)$$;

2. (2) $$(P_{r} x, A_{r}x)\in A$$ for all $$x \in C$$;

3. (3) $$\Vert A_{r}x \Vert \leq \vert Ax \vert :=\inf\{\Vert x^{*}\Vert : x^{*} \in Ax\}$$ for all $$x \in D(A)$$;

4. (4) $$F(P_{r})=A^{-1}0$$.
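These properties can be seen concretely in the Hilbert space $$E=\mathbb {R}$$, where $$J=I$$ and $$P_{r}=(I+rA)^{-1}$$. The operator $$A(x)=x-2$$ below is a hypothetical example chosen so that everything has a closed form.

```python
# Concrete check of properties (2)-(4) in the Hilbert space E = R,
# where J = I and P_r = (I + rA)^{-1}.  We use the illustrative
# monotone operator A(x) = x - 2, whose only zero point is 2:
#   P_r x = (x + 2r) / (1 + r),   A_r x = (x - P_r x) / r.

def A(x):
    return x - 2.0

def P(x, r):
    # Solve z + r * A(z) = x for z.
    return (x + 2.0 * r) / (1.0 + r)

def yosida(x, r):
    return (x - P(x, r)) / r

x, r = 5.0, 0.7
print(yosida(x, r), A(P(x, r)))        # property (2): A_r x = A(P_r x)
print(abs(yosida(x, r)) <= abs(A(x)))  # property (3): ||A_r x|| <= |Ax|
print(P(2.0, r))                       # property (4): the zero point 2 is fixed
```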

We obtain an approximation theorem for a zero point of a monotone operator in a smooth and uniformly convex Banach space by using the resolvent of type (P).

### Theorem 3.1

Let E be a smooth and uniformly convex Banach space and let $$A\subset E\times E^{*}$$ be a monotone operator with $$A^{-1}0 \ne \emptyset$$. Let $$\{r_{n}\}$$ be a positive real sequence such that $$\liminf_{n} r_{n} >0$$, let C be a nonempty bounded closed convex subset of E satisfying

$$D(A) \subset C \subset R\bigl(I+r_{n}J^{-1}A\bigr)$$

for all $$n\in \mathbb {N}$$ and let $$r\in\mathopen]0,\infty\mathclose[$$ such that $$C \subset B_{r}$$. Let $$\{\delta_{n}\}$$ be a nonnegative real sequence and let $$\delta_{0}=\limsup_{n} \delta_{n}$$. For a given point $$u\in E$$, generate a sequence $$\{x_{n}\}$$ by $$x_{1} = x \in C$$, $$C_{1} =C$$, and

\begin{aligned} & y_{n} = P_{r_{n}} x_{n}, \\ & C_{n+1} = \bigl\{ z \in C : \bigl\langle y_{n} - z, J(x_{n} - y_{n}) \bigr\rangle \geq0\bigr\} \cap C_{n}, \\ & x_{n+1} \in\bigl\{ z \in C : \Vert u-z\Vert ^{2} \leq d(u,C_{n+1})^{2}+\delta _{n+1}\bigr\} \cap C_{n+1}, \end{aligned}

for all $$n \in \mathbb {N}$$. Then

$$\limsup_{n\to\infty} \Vert x_{n}-y_{n}\Vert \leq\underline {g}_{r}^{-1}(\delta_{0}).$$

Moreover, if $$\delta_{0}=0$$, then $$\{x_{n}\}$$ converges strongly to $$P_{A^{-1}0}u$$.

### Proof

Since, by (2.1), every $$p\in A^{-1}0=F(P_{r_{n}})$$ satisfies $$\langle y_{n}-p, J(x_{n}-y_{n})\rangle\geq0$$, an induction shows that $$C_{n}$$ includes $$A^{-1}0\ne\emptyset$$ for all $$n\in \mathbb {N}$$. Hence $$\{C_{n}\}$$ is a sequence of nonempty closed convex subsets and, by definition, it is decreasing with respect to inclusion. Let $$p_{n}=P_{C_{n}}u$$ for all $$n\in \mathbb {N}$$. Then, by Theorem 2.4, we see that $$\{p_{n}\}$$ converges strongly to $$p_{0}=P_{C_{0}}u$$, where $$C_{0}=\bigcap_{n=1}^{\infty}C_{n}$$. Since $$x_{n}\in C_{n}$$ and $$d(u,C_{n})=\Vert u-p_{n} \Vert$$, we see that

$$\Vert u-x_{n} \Vert ^{2} \leq \Vert u-p_{n} \Vert ^{2} +\delta_{n}$$

for every $$n\in \mathbb {N}\setminus\{1\}$$. From Theorem 2.6(i), we see that for $$\alpha\in \mathopen]0,1\mathclose[$$,

\begin{aligned} \Vert p_{n} -u \Vert ^{2} &\leq\bigl\Vert \alpha p_{n} +(1-\alpha)x_{n}-u \bigr\Vert ^{2} \\ &\leq\alpha \Vert p_{n} -u \Vert ^{2} +(1-\alpha)\Vert x_{n}-u \Vert ^{2} - \alpha(1-\alpha)\underline{g}_{r} \bigl(\Vert p_{n} -x_{n} \Vert \bigr) \end{aligned}

and thus

$$\alpha\underline{g}_{r} \bigl(\Vert p_{n} -x_{n} \Vert \bigr) \leq \Vert x_{n}-u \Vert ^{2} - \Vert p_{n} -u \Vert ^{2} \leq \delta_{n}.$$

As $$\alpha\to1$$, we see that $$\underline{g}_{r} (\Vert p_{n} -x_{n} \Vert )\leq\delta_{n}$$ and thus $$\Vert p_{n} -x_{n} \Vert \leq\underline{g}_{r}^{-1}(\delta_{n})$$. Using the definition of $$p_{n}$$, we see that $$p_{n+1}\in C_{n+1}$$ and thus

$$\bigl\langle y_{n} - p_{n+1}, J(x_{n} - y_{n}) \bigr\rangle \geq0,$$

or equivalently,

$$\bigl\langle x_{n} - p_{n+1}, J(x_{n} - y_{n}) \bigr\rangle \geq \Vert x_{n}-y_{n}\Vert ^{2}.$$

Hence we obtain

$$\Vert x_{n}-y_{n}\Vert \leq \Vert x_{n}-p_{n+1} \Vert \leq \Vert x_{n}-p_{n}\Vert + \Vert p_{n}-p_{n+1}\Vert \leq\underline{g}_{r}^{-1}( \delta_{n}) + \Vert p_{n}-p_{n+1}\Vert$$

for every $$n\in \mathbb {N}\setminus\{1\}$$. Since $$\lim_{n} p_{n}=p_{0}$$ and $$\limsup_{n} \delta_{n}=\delta_{0}$$, we see that

$$\limsup_{n\to\infty} \Vert x_{n}-y_{n}\Vert \leq\underline {g}_{r}^{-1}(\delta_{0}).$$

For the latter part of the theorem, suppose that $$\delta_{0}=0$$. Then we see that

$$\limsup_{n\to\infty} \Vert x_{n}-y_{n}\Vert \leq\underline{g}_{r}^{-1}(0)=0$$

and

$$\limsup_{n\to\infty}\underline{g}_{r}\bigl(\Vert x_{n}-p_{n}\Vert \bigr) \leq\limsup_{n\to\infty} \delta_{n}=0.$$

Therefore, we obtain

$$\lim_{n\to\infty} \Vert x_{n}-y_{n}\Vert =0 \quad \text{and}\quad \lim_{n\to\infty} \Vert x_{n}-p_{n} \Vert =0.$$

Hence, we also obtain

$$\lim_{n\to\infty} x_{n}=p_{0} \quad \text{and} \quad \lim_{n\to\infty} y_{n}=p_{0}.$$
(3.3)

So, from

$$\Vert y_{n} - P_{r_{1}} y_{n} \Vert = r_{1}\Vert A_{r_{1}} y_{n} \Vert \leq r_{1}\vert A y_{n} \vert \leq r_{1}\biggl\Vert \frac{J(x_{n} - y_{n})}{r_{n}}\biggr\Vert = r_{1}\biggl\Vert \frac{x_{n} - y_{n}}{r_{n}}\biggr\Vert$$

and $$\liminf_{n} r_{n} > 0$$, we see that $$\lim_{n}\Vert y_{n}-P_{r_{1}}y_{n}\Vert =0$$. Then, by Lemma 2.2 and (3.3), we obtain $$x_{n}\to p_{0} \in\hat{F}(P_{r_{1}})=F(P_{r_{1}})=A^{-1}0$$. Since $$A^{-1}0\subset C_{0}$$, we get $$p_{0}=P_{C_{0}}u=P_{A^{-1}0}u$$, which completes the proof. □
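A minimal numerical sketch of the scheme of Theorem 3.1, in the error-free case $$\delta_{n}=0$$: we take $$E=\mathbb {R}$$ (so $$J=I$$), $$C=[-1,1]$$, and the toy monotone operator $$A(x)=x-0.5$$ restricted to C, so that $$y_{n}=P_{r_{n}}x_{n}=(x_{n}+0.5r_{n})/(1+r_{n})$$ and every $$C_{n}$$ is an interval.

```python
# Error-free (delta_n = 0) run of the scheme of Theorem 3.1 with
# E = R (so J = I), C = [-1, 1], and the toy monotone operator
# A(x) = x - 0.5 restricted to C, whose zero point is 0.5.  Here
# y_n = P_{r_n} x_n = (x_n + 0.5 r_n) / (1 + r_n), and the half-space
# {z : <y_n - z, x_n - y_n> >= 0} is a one-sided interval constraint.

def scheme(u, x1, r_seq):
    lo, hi = -1.0, 1.0                    # C_1 = C
    x = x1
    for r in r_seq:
        y = (x + 0.5 * r) / (1.0 + r)     # y_n = P_{r_n} x_n
        if x > y:                         # x_n - y_n > 0: keep z <= y_n
            hi = min(hi, y)
        elif x < y:                       # x_n - y_n < 0: keep z >= y_n
            lo = max(lo, y)
        x = min(max(u, lo), hi)           # x_{n+1} = P_{C_{n+1}} u
    return x

print(scheme(u=2.0, x1=1.0, r_seq=[1.0] * 60))  # converges to P_{A^{-1}0} u = 0.5
```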

## Approximation theorem for the resolvents of type (Q)

We next consider an iterative scheme for resolvents of a monotone operator of a different type from that in Section 3, in a Banach space. Let C be a nonempty closed convex subset of a reflexive, smooth, and strictly convex Banach space E, let $$r>0$$ and let $$A\subset E\times E^{*}$$ be a monotone operator satisfying

$$D(A) \subset C \subset J^{-1}R(J+rA)$$
(4.1)

for $$r>0$$. It is well known that if A is a maximal monotone operator, then $$J^{-1}R(J+rA)=E$$; see [25–27]. Hence, if A is maximal monotone, then (4.1) holds for $$C=\overline{D(A)}$$. We also know that $$\overline{D(A)}$$ is convex; see [28]. If A satisfies (4.1) for $$r>0$$, then we can define the resolvent (of type (Q)) $$Q_{r}:C\to D(A)$$ of A by

$$Q_{r} x =\{z\in E : Jx\in Jz+rAz \}$$
(4.2)

for all $$x\in C$$. In other words, $$Q_{r}x=(J+rA)^{-1}Jx$$ for all $$x\in C$$. We know the following; see, for instance, [17, 18]:

1. (1) $$Q_{r}$$ is a mapping of type (Q) from C into $$D(A)$$;

2. (2) $$(Jx-JQ_{r}x)/r \in AQ_{r} x$$ for all $$x \in C$$;

3. (3) $$F(Q_{r})=A^{-1}0$$.

Before proving our result, we need the following lemma.

### Lemma 4.1

Let E be a reflexive, smooth, and strictly convex Banach space, and let $$A\subset E\times E^{*}$$ be a monotone operator. Let $$r>0$$ and C be a closed convex subset of E satisfying (4.1) for $$r>0$$. Then the following holds:

$$V(x,Q_{r}x)+V(Q_{r}x,x)\leq2r\bigl\langle x-Q_{r}x, x^{*}\bigr\rangle$$

for all $$(x,x^{*})\in A$$.

### Proof

Let $$(x,x^{*})\in A$$. Since $$(Jx-JQ_{r} x)/r \in AQ_{r} x$$, we see that

\begin{aligned}& 0 \leq \biggl\langle x-Q_{r} x, x^{*}-\frac{Jx-JQ_{r} x}{r} \biggr\rangle , \\& \biggl\langle x-Q_{r} x,\frac{Jx-JQ_{r} x}{r} \biggr\rangle \leq \bigl\langle x-Q_{r} x, x^{*}\bigr\rangle , \\& \langle x-Q_{r} x,Jx-JQ_{r} x\rangle \leq r\bigl\langle x-Q_{r} x, x^{*}\bigr\rangle . \end{aligned}

From the property of V, we see that

$$V(x,Q_{r}x)+V(Q_{r}x,x) = 2\langle x-Q_{r}x, Jx-JQ_{r}x\rangle \leq2r\bigl\langle x-Q_{r} x, x^{*}\bigr\rangle$$

for all $$(x,x^{*})\in A$$. □
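As a hedged sanity check of Lemma 4.1: in the Hilbert space $$E=\mathbb {R}$$ we have $$J=I$$, $$Q_{r}=(I+rA)^{-1}$$ and $$V(x,y)=(x-y)^{2}$$; the operator $$A(x)=x-2$$ below is an illustrative choice.

```python
# Sanity check of Lemma 4.1 in the Hilbert space E = R, where J = I,
# Q_r = (I + rA)^{-1} and V(x, y) = (x - y)^2.  The monotone operator
# A(x) = x - 2 (single-valued, so x* = A(x)) is an illustrative choice.

def A(x):
    return x - 2.0

def Q(x, r):
    # Solve z + r * A(z) = x for z.
    return (x + 2.0 * r) / (1.0 + r)

def sides(x, r):
    qx = Q(x, r)
    lhs = 2.0 * (x - qx) ** 2           # V(x, Q_r x) + V(Q_r x, x)
    rhs = 2.0 * r * (x - qx) * A(x)     # 2 r <x - Q_r x, x*>
    return lhs, rhs

for x in (-3.0, 0.0, 5.0):
    lhs, rhs = sides(x, 0.8)
    print(lhs <= rhs + 1e-12)           # the inequality of Lemma 4.1
```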

We obtain an approximation theorem for a zero point of a monotone operator in a smooth and uniformly convex Banach space by using the resolvent of type (Q).

### Theorem 4.2

Let E be a uniformly smooth and uniformly convex Banach space and let $$A\subset E\times E^{*}$$ be a monotone operator with $$A^{-1}0\ne \emptyset$$. Let $$\{r_{n}\}$$ be a sequence of positive real numbers such that $$\liminf_{n} r_{n} >0$$, let C be a nonempty bounded closed convex subset of E satisfying

$$D(A) \subset C \subset J^{-1}R(J+r_{n}A)$$

for all $$n\in \mathbb {N}$$ and let $$r\in\mathopen]0,\infty\mathclose[$$ such that $$C \subset B_{r}$$. Let $$\{\delta_{n}\}$$ be a nonnegative real sequence and let $$\delta_{0}=\limsup_{n} \delta_{n}$$. For a given point $$u\in E$$, generate a sequence $$\{x_{n}\}$$ by $$x_{1} = x \in C$$, $$C_{1} =C$$, and

\begin{aligned} &y_{n}=Q_{r_{n}}x_{n}, \\ &C_{n+1} = \bigl\{ z \in C : \langle y_{n} - z, Jx_{n} - Jy_{n} \rangle\geq0\bigr\} \cap C_{n}, \\ &x_{n+1} \in\bigl\{ z \in C : \Vert u-z\Vert ^{2} \leq d(u,C_{n+1})^{2}+\delta _{n+1}\bigr\} \cap C_{n+1}, \end{aligned}

for all $$n \in \mathbb {N}$$. Then

$$\limsup_{n\to\infty} \Vert x_{n}-y_{n}\Vert \leq\underline{g}_{r}^{-1}\bigl(\overline{g}_{r} \bigl(\underline{g}_{r}^{-1}(\delta_{0})\bigr) \bigr).$$

Moreover, if $$\delta_{0}=0$$, then $$\{x_{n}\}$$ converges strongly to $$P_{A^{-1}0}u$$.

### Proof

Since, by (2.2), every $$p\in A^{-1}0=F(Q_{r_{n}})$$ satisfies $$\langle y_{n}-p, Jx_{n}-Jy_{n}\rangle\geq0$$, an induction shows that $$C_{n}$$ includes $$A^{-1}0\ne\emptyset$$ for all $$n\in \mathbb {N}$$. Hence $$\{C_{n}\}$$ is a sequence of nonempty closed convex subsets and, by definition, it is decreasing with respect to inclusion. Let $$p_{n}=P_{C_{n}}u$$ for all $$n\in \mathbb {N}$$. Then, by Theorem 2.4, we see that $$\{p_{n}\}$$ converges strongly to $$p_{0}=P_{C_{0}}u$$, where $$C_{0}=\bigcap_{n=1}^{\infty}C_{n}$$. Since $$x_{n}\in C_{n}$$ and $$d(u,C_{n})=\Vert u-p_{n} \Vert$$, we see that

$$\Vert u-x_{n} \Vert ^{2} \leq \Vert u-p_{n} \Vert ^{2} +\delta_{n}$$

for every $$n\in \mathbb {N}\setminus\{1\}$$. From Theorem 2.6(i), we see that for $$\alpha\in\mathopen]0,1\mathclose[$$,

\begin{aligned} \Vert p_{n} -u \Vert ^{2} &\leq\bigl\Vert \alpha p_{n} +(1-\alpha)x_{n}-u \bigr\Vert ^{2} \\ &\leq\alpha \Vert p_{n} -u \Vert ^{2} +(1-\alpha)\Vert x_{n}-u \Vert ^{2} - \alpha(1-\alpha)\underline{g}_{r} \bigl(\Vert p_{n} -x_{n} \Vert \bigr) \end{aligned}

and thus

$$\alpha\underline{g}_{r} \bigl(\Vert p_{n} -x_{n} \Vert \bigr) \leq \Vert x_{n}-u \Vert ^{2} - \Vert p_{n} -u \Vert ^{2} \leq \delta_{n}.$$

As $$\alpha\to1$$, we see that $$\underline{g}_{r} (\Vert p_{n} -x_{n} \Vert )\leq\delta_{n}$$ and thus $$\Vert p_{n} -x_{n} \Vert \leq\underline{g}_{r}^{-1}(\delta_{n})$$. Using the definition of $$p_{n}$$, we see that $$p_{n+1}\in C_{n+1}$$ and thus

$$\langle y_{n} - p_{n+1}, Jx_{n} - Jy_{n} \rangle\geq0.$$

From the property of the function V, we see that

\begin{aligned} 0 &\leq2\langle y_{n} - p_{n+1}, Jx_{n} - Jy_{n} \rangle \\ &= 2\langle p_{n+1} - y_{n}, Jy_{n} - Jx_{n} \rangle \\ &= V(p_{n+1}, x_{n})-V(p_{n+1}, y_{n})-V(y_{n}, x_{n}) \\ &\leq V(p_{n+1}, x_{n})-V(y_{n}, x_{n}). \end{aligned}

By Theorem 2.7, we obtain

\begin{aligned} V(y_{n}, x_{n}) &\leq V(p_{n+1}, x_{n}) \\ &= V(p_{n+1}, p_{n})+V(p_{n}, x_{n})+2 \langle p_{n+1}-p_{n}, Jp_{n}-Jx_{n}\rangle \\ &\leq V(p_{n+1}, p_{n})+\overline{g}_{r}\bigl( \Vert p_{n}-x_{n}\Vert \bigr) +2\langle p_{n+1}-p_{n}, Jp_{n}-Jx_{n}\rangle \\ &\leq V(p_{n+1}, p_{n})+\overline{g}_{r}\bigl( \underline{g}_{r}^{-1}(\delta_{n})\bigr) +2\langle p_{n+1}-p_{n}, Jp_{n}-Jx_{n}\rangle. \end{aligned}

Since $$\limsup_{n}\delta_{n}=\delta_{0}$$ and $$p_{n} \to p_{0}$$, we see that

$$\limsup_{n\to\infty} V(y_{n}, x_{n}) \leq \overline{g}_{r}\bigl(\underline {g}_{r}^{-1}( \delta_{0})\bigr).$$

Therefore, by Theorem 2.7, we see that

$$\limsup_{n\to\infty} \Vert x_{n}-y_{n}\Vert \leq\limsup_{n\to\infty} \underline{g}_{r}^{-1} \bigl(V(y_{n}, x_{n})\bigr) \leq\underline{g}_{r}^{-1} \bigl(\overline{g}_{r}\bigl(\underline{g}_{r}^{-1}( \delta_{0})\bigr)\bigr).$$

For the latter part of the theorem, suppose that $$\delta_{0}=0$$. Then we see that

$$\limsup_{n\to\infty} \Vert x_{n}-y_{n}\Vert \leq \underline{g}_{r}^{-1}\bigl(\overline{g}_{r} \bigl(\underline{g}_{r}^{-1}(0)\bigr)\bigr)=0$$

and

$$\limsup_{n\to\infty}\underline{g}_{r}\bigl(\Vert x_{n}-p_{n}\Vert \bigr) \leq\limsup_{n\to\infty} \delta_{n}=0.$$

Therefore, we obtain

$$\lim_{n\to\infty} \Vert x_{n}-y_{n}\Vert =0 \quad \text{and}\quad \lim_{n\to\infty} \Vert x_{n}-p_{n} \Vert =0.$$

Hence, we also obtain

$$\lim_{n\to\infty} x_{n} = p_{0} \quad \text{and}\quad \lim_{n\to\infty} y_{n} = p_{0}.$$
(4.3)

Since E is uniformly smooth, the duality mapping J is uniformly norm-to-norm continuous on each bounded subset of E. Therefore, we obtain

$$\lim_{n\to\infty} \Vert Jx_{n}-Jy_{n} \Vert =0.$$
(4.4)

From Lemma 4.1 we see that

$$V(y_{n}, Q_{r_{1}}y_{n}) \leq V(y_{n}, Q_{r_{1}}y_{n})+V(Q_{r_{1}}y_{n}, y_{n}) \leq2r_{1}\bigl\langle y_{n}-Q_{r_{1}}y_{n}, x^{*}\bigr\rangle$$

for all $$x^{*}\in Ay_{n}$$. Since $$y_{n}, Q_{r_{1}}y_{n} \in D(A) \subset C \subset B_{r}$$ and $$(Jx_{n}-Jy_{n})/r_{n} \in Ay_{n}$$, we see that

\begin{aligned} V(y_{n}, Q_{r_{1}}y_{n}) \leq& 2r_{1} \biggl\langle y_{n}-Q_{r_{1}}y_{n}, \frac{Jx_{n}-Jy_{n}}{r_{n}} \biggr\rangle \\ \leq& 2r_{1}\Vert y_{n}-Q_{r_{1}}y_{n} \Vert \biggl\Vert \frac{Jx_{n}-Jy_{n}}{r_{n}}\biggr\Vert \\ \leq& 2r_{1} \bigl(\Vert y_{n}\Vert +\Vert Q_{r_{1}}y_{n} \Vert \bigr) \biggl\Vert \frac{Jx_{n}-Jy_{n}}{r_{n}} \biggr\Vert \\ \leq& 4r_{1} r \biggl\Vert \frac{Jx_{n}-Jy_{n}}{r_{n}}\biggr\Vert . \end{aligned}

Since $$\liminf_{n} r_{n} > 0$$, it follows from (4.4) that

$$\limsup_{n\to\infty} V(y_{n},Q_{r_{1}}y_{n}) \leq0.$$

This implies $$\lim_{n} V(y_{n},Q_{r_{1}}y_{n}) = 0$$. From Lemma 2.5, we see that

$$\lim_{n\to\infty} \Vert y_{n} -Q_{r_{1}}y_{n} \Vert =0.$$

Then, by Lemma 2.3 and (4.3), we see that $$x_{n}\to p_{0} \in\hat{F}(Q_{r_{1}})=F(Q_{r_{1}})=A^{-1}0$$. Since $$A^{-1}0\subset C_{0}$$, we get $$p_{0}=P_{C_{0}}u=P_{A^{-1}0}u$$, which completes the proof. □

## Applications

In this section, we give some applications of Theorems 3.1 and 4.2. We first study the convex minimization problem: Let E be a reflexive, smooth, and strictly convex Banach space with its dual $$E^{*}$$ and let $$f:E \to\mathopen]-\infty, \infty\mathclose]$$ be a proper lower semicontinuous convex function. Then the subdifferential ∂f of f is defined as follows:

$$\partial f (x) = \bigl\{ x^{*} \in E^{*}: f (x) + \bigl\langle y-x, x^{*} \bigr\rangle \leq f(y), \forall y \in E\bigr\}$$

for all $$x \in E$$. By Rockafellar’s theorem [29, 30], the subdifferential $$\partial f \subset E \times E^{*}$$ is maximal monotone. It is easy to see that $$(\partial f)^{-1}0=\mathop{\mathrm{argmin}}\{f(x):x \in E\}$$. It is also known (see, for instance, [15, 27, 28]) that

$$D(\partial f)\subset D(f) \subset\overline{D(\partial f)} .$$
(5.1)

As a direct consequence of Theorems 3.1 and 4.2, we can show the following corollaries.

### Corollary 5.1

Let E be a smooth and uniformly convex Banach space, let $$f:E \to\mathopen]-\infty, \infty\mathclose]$$ be a proper lower semicontinuous convex function with $$D(f)$$ being bounded, and let $$r\in\mathopen]0,\infty\mathclose[$$ such that $$D(f) \subset B_{r}$$. Let $$\{\delta_{n}\}$$ be a nonnegative real sequence and let $$\delta_{0}=\limsup_{n} \delta_{n}$$. For a given point $$u\in E$$, generate a sequence $$\{x_{n}\}$$ by $$x_{1} = x \in\overline{D(f)}$$, $$C_{1} =\overline{D(f)}$$, and

\begin{aligned} & y_{n} = \mathop{\mathrm{argmin}}_{y \in E} \biggl\{ f(y) + \frac {1}{2r_{n}} \Vert y-x_{n}\Vert ^{2} \biggr\} , \\ & C_{n+1} = \bigl\{ z \in\overline{D(f)} : \bigl\langle y_{n} - z, J(x_{n} - y_{n}) \bigr\rangle \geq0\bigr\} \cap C_{n}, \\ & x_{n+1} \in\bigl\{ z \in\overline{D(f)} : \Vert u-z\Vert ^{2} \leq d(u,C_{n+1})^{2}+\delta_{n+1} \bigr\} \cap C_{n+1}, \end{aligned}

for all $$n \in \mathbb {N}$$, where $$\{r_{n}\}\subset\mathopen]0,\infty\mathclose[$$ is such that $$\liminf_{n} r_{n} >0$$. If $$(\partial f)^{-1}0$$ is nonempty, then

$$\limsup_{n\to\infty} \Vert x_{n}-y_{n}\Vert \leq\underline {g}_{r}^{-1}(\delta_{0}).$$

Moreover, if $$\delta_{0}=0$$, then $$\{x_{n}\}$$ converges strongly to $$P_{(\partial f)^{-1}0}u$$.

### Proof

Put $$C=\overline{D(f)}$$. Since the subdifferential $$\partial f \subset E \times E^{*}$$ is maximal monotone, we have $$E=R(I+r\partial f)$$ for all $$r>0$$ and hence, from (5.1), we see that

$$D(\partial f)\subset\overline{D(\partial f)} =\overline{D(f)} =C \subset E= R(I+r\partial f)$$

for all $$r>0$$.

Fix $$r>0$$ and $$z\in C$$, and let $$P_{r}$$ be the resolvent (of type (P)) of ∂f. Then we also know that

$$P_{r} z = \mathop{\mathrm{argmin}}_{y \in E} \biggl\{ f(y) + \frac {1}{2r} \Vert y-z\Vert ^{2} \biggr\} .$$

Therefore, we obtain the desired result by Theorem 3.1. □
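In the Hilbert space $$\mathbb {R}$$, the resolvent formula in the proof above is the classical proximal operator. For $$f(y)=\vert y\vert$$ it reduces to soft thresholding; the sketch below compares the closed form with a brute-force grid minimization. (This f does not satisfy the boundedness assumption of Corollary 5.1; it only illustrates the formula for $$P_{r}$$.)

```python
# In the Hilbert space R, the formula for P_r above is the classical
# proximal operator prox_{rf}(z) = argmin_y { f(y) + (y - z)^2 / (2r) }.
# For f(y) = |y| this is soft thresholding (illustration only; this f
# does not have the bounded domain assumed in Corollary 5.1).

def soft_threshold(z, r):
    # Closed form of prox_{r |.|}
    mag = max(abs(z) - r, 0.0)
    return mag if z >= 0 else -mag

def prox_numeric(z, r, lo=-10.0, hi=10.0, n=20001):
    # Brute-force grid minimization of f(y) + (y - z)^2 / (2r).
    step = (hi - lo) / (n - 1)
    best_y, best_val = lo, float("inf")
    for i in range(n):
        y = lo + i * step
        val = abs(y) + (y - z) ** 2 / (2.0 * r)
        if val < best_val:
            best_y, best_val = y, val
    return best_y

z, r = 3.0, 1.25
print(soft_threshold(z, r), prox_numeric(z, r))  # both near 1.75
```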

### Corollary 5.2

Let E be a uniformly smooth and uniformly convex Banach space, let $$f:E \to\mathopen]-\infty, \infty\mathclose]$$ be a proper lower semicontinuous convex function with $$D(f)$$ being bounded and let $$r\in\mathopen]0,\infty\mathclose[$$ such that $$D(f) \subset B_{r}$$. Let $$\{\delta_{n}\}$$ be a nonnegative real sequence and let $$\delta_{0}=\limsup_{n} \delta_{n}$$. For a given point $$u\in E$$, generate a sequence $$\{x_{n}\}$$ by $$x_{1} = x \in\overline{D(f)}$$, $$C_{1} =\overline{D(f)}$$, and

\begin{aligned} & y_{n}=\mathop{\mathrm{argmin}}_{y \in E} \biggl\{ f(y) + \frac {1}{2r_{n}} \Vert y\Vert ^{2} -\frac{1}{r_{n}}\langle y, Jx_{n} \rangle \biggr\} , \\ & C_{n+1} = \bigl\{ z \in\overline{D(f)} : \langle y_{n} - z, Jx_{n} - Jy_{n} \rangle\geq0\bigr\} \cap C_{n}, \\ & x_{n+1} \in\bigl\{ z \in\overline{D(f)} : \Vert u-z\Vert ^{2} \leq d(u,C_{n+1})^{2}+\delta_{n+1} \bigr\} \cap C_{n+1}, \end{aligned}

for all $$n \in \mathbb {N}$$, where $$\{r_{n}\}\subset\mathopen]0,\infty\mathclose[$$ is such that $$\liminf_{n} r_{n} >0$$. If $$(\partial f)^{-1}0$$ is nonempty, then

$$\limsup_{n\to\infty} \Vert x_{n}-y_{n}\Vert \leq\underline{g}_{r}^{-1}\bigl(\overline{g}_{r} \bigl(\underline{g}_{r}^{-1}(\delta_{0})\bigr) \bigr).$$

Moreover, if $$\delta_{0}=0$$, then $$\{x_{n}\}$$ converges strongly to $$P_{(\partial f)^{-1}0}u$$.

### Proof

Fix $$r>0$$ and $$z\in C$$, and let $$Q_{r}$$ be the resolvent (of type (Q)) of ∂f. Then we also know that

$$Q_{r} z = \mathop{\mathrm{argmin}}_{y \in E} \biggl\{ f(y) + \frac {1}{2r} \Vert y\Vert ^{2} -\frac{1}{r}\langle y, Jz \rangle \biggr\} .$$

In the same way as Corollary 5.1, we obtain the desired result by Theorem 4.2. □

Next, we study the approximation of fixed points for mappings of type (P) and (Q). Before showing our applications, we need the following results.

### Lemma 5.3

([17])

Let E be a reflexive, smooth, and strictly convex Banach space, let C be a nonempty subset of E, let $$T:C\to E$$ be a mapping, and let $$A_{T}\subset E\times E^{*}$$ be an operator defined by $$A_{T}=J(T^{-1}-I)$$. Then T is a mapping of type (P) if and only if $$A_{T}$$ is monotone. In this case, $$T=(I+J^{-1}A_{T})^{-1}$$.

### Lemma 5.4

([31])

Let E be a reflexive, smooth, and strictly convex Banach space, let C be a nonempty subset of E, let $$T:C\to E$$ be a mapping, and let $$A_{T}\subset E\times E^{*}$$ be an operator defined by $$A_{T}=JT^{-1}-J$$. Then T is a mapping of type (Q) if and only if $$A_{T}$$ is monotone. In this case, $$T=(J+A_{T})^{-1}J$$.

As a direct consequence of Theorems 3.1 and 4.2, we can show the following corollaries.

### Corollary 5.5

Let E be a smooth and uniformly convex Banach space, let C be a bounded closed convex subset of E. Let $$T:C \to C$$ be a mapping of type (P) with $$F(T)$$ being nonempty and let $$r\in\mathopen]0,\infty\mathclose[$$ such that $$C \subset B_{r}$$. Let $$\{\delta_{n}\}$$ be a nonnegative real sequence and let $$\delta_{0}=\limsup_{n} \delta_{n}$$. For a given point $$u\in E$$, generate a sequence $$\{x_{n}\}$$ by $$x_{1} = x \in C$$, $$C_{1} =C$$, and

\begin{aligned} &C_{n+1} = \bigl\{ z \in C : \bigl\langle Tx_{n} - z, J(x_{n} - Tx_{n}) \bigr\rangle \geq0\bigr\} \cap C_{n}, \\ &x_{n+1} \in\bigl\{ z \in C : \Vert u-z\Vert ^{2} \leq d(u,C_{n+1})^{2}+\delta _{n+1}\bigr\} \cap C_{n+1}, \end{aligned}

for all $$n \in \mathbb {N}$$. Then

$$\limsup_{n\to\infty} \Vert x_{n}-Tx_{n}\Vert \leq\underline {g}_{r}^{-1}(\delta_{0}).$$

Moreover, if $$\delta_{0}=0$$, then $$\{x_{n}\}$$ converges strongly to $$P_{F(T)}u$$.

### Proof

Put $$A_{T}=J(T^{-1}-I)$$ and $$r_{n}=1$$ for all $$n\in \mathbb {N}$$. From Lemma 5.3, we see that T is the resolvent (of type (P)) of $$A_{T}$$ for $$r=1$$ and

$$D(A_{T}) =R(T) \subset C=D(T)=R\bigl(I+J^{-1}A_{T} \bigr).$$

Therefore, we obtain the desired result by Theorem 3.1. □

### Corollary 5.6

Let E be a uniformly smooth and uniformly convex Banach space, let C be a bounded closed convex subset of E. Let $$T:C \to C$$ be a mapping of type (Q) with $$F(T)$$ being nonempty and let $$r\in\mathopen]0,\infty\mathclose[$$ such that $$C \subset B_{r}$$. Let $$\{\delta_{n}\}$$ be a nonnegative real sequence and let $$\delta_{0}=\limsup_{n} \delta_{n}$$. For a given point $$u\in E$$, generate a sequence $$\{x_{n}\}$$ by $$x_{1} = x \in C$$, $$C_{1} =C$$, and

\begin{aligned} &C_{n+1} = \bigl\{ z \in C : \langle Tx_{n} - z, Jx_{n} - JTx_{n} \rangle\geq0\bigr\} \cap C_{n}, \\ &x_{n+1} \in\bigl\{ z \in C : \Vert u-z\Vert ^{2} \leq d(u,C_{n+1})^{2}+\delta _{n+1}\bigr\} \cap C_{n+1}, \end{aligned}

for all $$n \in \mathbb {N}$$. Then

$$\limsup_{n\to\infty} \Vert x_{n}-Tx_{n}\Vert \leq\underline{g}_{r}^{-1}\bigl(\overline{g}_{r} \bigl(\underline{g}_{r}^{-1}(\delta_{0})\bigr) \bigr).$$

Moreover, if $$\delta_{0}=0$$, then $$\{x_{n}\}$$ converges strongly to $$P_{F(T)}u$$.

### Proof

In the same way as in the proof of Corollary 5.5, we obtain the desired result by Lemma 5.4 and Theorem 4.2. □
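In a Hilbert space, $$J=I$$, so the correspondences $$A_{T}=J(T^{-1}-I)$$ and $$A_{T}=JT^{-1}-J$$ used in the two proofs both collapse to $$A_{T}=T^{-1}-I$$, and T is recovered as the resolvent $$(I+A_{T})^{-1}$$. The following sketch checks this numerically under an illustrative assumption not taken from the paper: the maximal monotone operator $$B(y)=2y$$ on $$\mathbb{R}$$, whose resolvent is $$T(x)=x/3$$.

```python
# In a Hilbert space J = I, so A_T = T^{-1} - I and T = (I + A_T)^{-1}.
# Assumed example (not from the paper): the maximal monotone operator
# B(y) = 2*y on R, whose resolvent T = (I + B)^{-1} is T(x) = x/3.

def B(y):
    # Monotone: (B(y1) - B(y2)) * (y1 - y2) = 2*(y1 - y2)**2 >= 0.
    return 2.0 * y

def T(x):
    # Resolvent T = (I + B)^{-1}: solve y + B(y) = x, i.e. 3*y = x.
    return x / 3.0

def A_T(y):
    # A_T = T^{-1} - I, with T^{-1}(y) = 3*y for this T.
    return 3.0 * y - y

# A_T recovers B, and (I + A_T)^{-1} recovers T, at a few sample points.
for x in (-2.0, 0.0, 1.5, 7.0):
    assert abs(A_T(x) - B(x)) < 1e-12
    y = T(x)
    assert abs(y + A_T(y) - x) < 1e-12   # (I + A_T)(T(x)) = x
print("resolvent correspondence verified")
```

This is the Hilbert-space shadow of Lemmas 5.3 and 5.4: the fixed points of T are exactly the zeros of $$A_{T}$$, which is what lets the corollaries inherit the conclusions of Theorems 3.1 and 4.2.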

## References

1. Martinet, B: Régularisation d’inéquations variationnelles par approximations successives. Rev. Française Informat. Recherche Opérationnelle 4, 154-158 (1970) (in French)

2. Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877-898 (1976)

3. Lions, PL: Une méthode itérative de résolution d’une inéquation variationnelle. Isr. J. Math. 31, 204-208 (1978)

4. Güler, O: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 29, 403-419 (1991)

5. Ibaraki, T: Strong convergence theorems for zero point problems and equilibrium problems in a Banach space. In: Nonlinear Analysis and Convex Analysis I, pp. 115-126. Yokohama Publishers, Yokohama, Japan (2013)

6. Kamimura, S, Takahashi, W: Approximating solutions of maximal monotone operators in Hilbert spaces. J. Approx. Theory 106, 226-240 (2000)

7. Kimura, Y, Takahashi, W: On a hybrid method for a family of relatively nonexpansive mappings in a Banach space. J. Math. Anal. Appl. 357, 356-363 (2009)

8. Ohsawa, S, Takahashi, W: Strong convergence theorems for resolvents of maximal monotone operators in Banach spaces. Arch. Math. 81, 439-445 (2003)

9. Solodov, MV, Svaiter, BF: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program., Ser. A 87, 189-202 (2000)

10. Kimura, Y: Approximation of a common fixed point of a finite family of nonexpansive mappings with nonsummable errors in a Hilbert space. J. Nonlinear Convex Anal. 15, 429-436 (2014)

11. Takahashi, W, Takeuchi, Y, Kubota, R: Strong convergence theorems by hybrid methods for families of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 341, 276-286 (2008)

12. Kimura, Y: Approximation of a fixed point of nonlinear mappings with nonsummable errors in a Banach space. In: Maligranda, L, Kato, M, Suzuki, T (eds.) Proceedings of the International Symposium on Banach and Function Spaces IV, (Kitakyushu, Japan), pp. 303-311 (2014)

13. Ibaraki, T, Kimura, Y: Approximation of a fixed point of generalized firmly nonexpansive mappings with nonsummable errors. Linear Nonlinear Anal. (to appear)

14. Kimura, Y: Approximation of a fixed point of nonexpansive mapping with nonsummable errors in a geodesic space. In: Proceedings of the 10th International Conference on Fixed Point Theory and Its Applications, pp. 157-164 (2012)

15. Barbu, V: Nonlinear Semigroups and Differential Equations in Banach Spaces. Editura Academiei Republicii Socialiste România, Bucharest (1976)

16. Takahashi, W: Nonlinear Functional Analysis - Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama, Japan (2000)

17. Aoyama, K, Kohsaka, F, Takahashi, W: Three generalizations of firmly nonexpansive mappings: their relations and continuity properties. J. Nonlinear Convex Anal. 10, 131-147 (2009)

18. Kohsaka, F, Takahashi, W: Existence and approximation of fixed points of firmly nonexpansive type mappings in Banach spaces. SIAM J. Optim. 19, 824-835 (2008)

19. Aoyama, K, Kohsaka, F, Takahashi, W: Strong convergence theorems for a family of mappings of type (P) and applications. In: Proceedings of Asian Conference on Nonlinear Analysis and Optimization, pp. 1-17. Yokohama Publishers, Yokohama, Japan (2009)

20. Tsukada, M: Convergence of best approximations in a smooth Banach space. J. Approx. Theory 40, 301-309 (1984)

21. Mosco, U: Convergence of convex sets and of solutions of variational inequalities. Adv. Math. 3, 510-585 (1969)

22. Beer, G: Topologies on Closed and Closed Convex Sets. Kluwer Academic, Dordrecht (1993)

23. Kamimura, S, Takahashi, W: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 13, 938-945 (2002)

24. Xu, HK: Inequalities in Banach spaces with applications. Nonlinear Anal. 16, 1127-1138 (1991)

25. Browder, FE: Nonlinear maximal monotone operators in Banach space. Math. Ann. 175, 89-113 (1968)

26. Rockafellar, RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 149, 75-88 (1970)

27. Takahashi, W: Convex Analysis and Approximation of Fixed Points. Yokohama Publishers, Yokohama, Japan (2000) (in Japanese)

28. Rockafellar, RT: On the virtual convexity of the domain and range of a nonlinear maximal monotone operator. Math. Ann. 185, 81-90 (1970)

29. Rockafellar, RT: Characterization of the subdifferentials of convex functions. Pac. J. Math. 17, 497-510 (1966)

30. Rockafellar, RT: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 33, 209-216 (1970)

31. Kohsaka, F, Takahashi, W: Fixed point theorems for a class of nonlinear mappings related to maximal monotone operators in Banach spaces. Arch. Math. (Basel) 91, 166-177 (2008)

## Acknowledgements

The author is supported by Grant-in-Aid for Young Scientists (B) No. 24740075 from the Japan Society for the Promotion of Science.

## Author information

### Corresponding author

Correspondence to Takanori Ibaraki.

### Competing interests

The author declares that he has no competing interests.


Ibaraki, T. Approximation of a zero point of monotone operators with nonsummable errors. Fixed Point Theory Appl 2016, 48 (2016). https://doi.org/10.1186/s13663-016-0535-2


### Mathematics Subject Classification

• 47H05
• 47H09
• 47J25

### Keywords

• resolvent
• monotone operator
• metric projection