• Research
• Open Access

# Viscosity approximation methods for multivalued nonexpansive mappings in geodesic spaces

Fixed Point Theory and Applications 2015, 2015:114

https://doi.org/10.1186/s13663-015-0356-8

• Accepted: 18 June 2015

## Abstract

We prove strong convergence of the viscosity approximation method for multivalued nonexpansive mappings in $$\operatorname{CAT}(0)$$ spaces. Our results generalize the results of Dhompongsa et al. (2012), Wangkeeree and Preechasilp (2013) and many others. Some related results in $$\mathbb{R}$$-trees are also given.

## Keywords

• viscosity approximation method
• fixed point
• strong convergence
• multivalued nonexpansive mapping
• $$\operatorname{CAT}(0)$$ space

## 1 Introduction

One of the most successful approximation methods for finding fixed points of nonexpansive mappings was given by Moudafi . Let E be a nonempty closed convex subset of a Hilbert space H and $$t : E \to E$$ be a nonexpansive mapping with a nonempty fixed point set $$\operatorname{Fix}(t)$$. The following scheme is known as the viscosity approximation method, or Moudafi's viscosity approximation method:
\begin{aligned}& x_{1}\in E \quad \text{arbitrarily chosen}, \\& x_{n+1}=\alpha_{n} f(x_{n})+(1-\alpha _{n})t(x_{n}),\quad n\in\mathbb{N}, \end{aligned}
(1)
where $$f : E\to E$$ is a contraction and $$\{\alpha_{n}\}$$ is a sequence in $$(0, 1)$$. In , under some suitable assumptions, the author proved that the sequence $$\{x_{n}\}$$ defined by (1) converges strongly to a point z in $$\operatorname{Fix}(t)$$ which satisfies the following variational inequality:
$$\bigl\langle f(z)-z, z-x \bigr\rangle \geq0,\quad x\in \operatorname{Fix}(t).$$
We note that the Halpern approximation method ,
$$x_{n+1}=\alpha_{n} u+(1-\alpha_{n}) t(x_{n}),\quad n\in\mathbb{N},$$
where u is a fixed element in E, is a special case of (1) in which f is the constant mapping $$f\equiv u$$. Notice also that Moudafi's viscosity approximation method can be applied to convex optimization, linear programming, monotone inclusions, and elliptic differential equations.
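To make the scheme concrete, the following is a small numerical sketch of (1) in the Hilbert space $$\mathbb{R}^{2}$$, which is a CAT(0) space. The specific maps t and f and the stepsizes below are illustrative choices of ours, not taken from the literature.

```python
# Illustrative run of Moudafi's scheme (1) in R^2.  The nonexpansive map t is
# the metric projection onto the x-axis, so Fix(t) is the whole axis; f is a
# contraction with constant k = 1/2.  All concrete choices are ours.

def t(p):
    """Nonexpansive map: metric projection of R^2 onto the x-axis."""
    return (p[0], 0.0)

def f(p):
    """Contraction with constant 1/2."""
    return (0.5 * p[0] + 0.2, 0.5 * p[1] + 0.3)

x = (5.0, 5.0)
for n in range(1, 1_000_001):
    a = 1.0 / (n + 1)          # alpha_n -> 0 with divergent sum
    fx, tx = f(x), t(x)
    x = (a * fx[0] + (1 - a) * tx[0], a * fx[1] + (1 - a) * tx[1])

# The limit z in Fix(t) satisfying the variational inequality is the point
# with z = P_Fix(t)(f(z)), i.e. z1 = 0.5*z1 + 0.2, giving z = (0.4, 0).
print(x)
```

The iterate drifts to the predicted point $$(0.4, 0)$$, the unique solution of $$z=P_{\operatorname{Fix}(t)}(f(z))$$ for these choices.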
The first extension of Moudafi’s result to the so-called $$\operatorname{CAT}(0)$$ space was proved by Shi and Chen . However, they assumed that the space $$(X, \rho)$$ must satisfy the property $$\mathcal{P}$$, i.e., for $$x, u, y_{1}, y_{2}\in X$$, one has
$$\rho(x, m_{1})\rho(x, y_{1})\leq\rho(x, m_{2}) \rho(x, y_{2}) + \rho(x, u)\rho (y_{1}, y_{2}),$$
where $$m_{1}$$ and $$m_{2}$$ are the unique nearest points of u on the segments $$[x, y_{1}]$$ and $$[x, y_{2}]$$, respectively. By using the concept of quasi-linearization introduced by Berg and Nikolaev , Wangkeeree and Preechasilp  could omit the property $$\mathcal{P}$$ from Shi and Chen’s result. Precisely, they obtained the following theorems.

### Theorem 1.1

(Theorem 3.1 of )

Let E be a nonempty closed convex subset of a complete $$\operatorname{CAT}(0)$$ space X, $$t : E \to E$$ be a nonexpansive mapping with $$\operatorname{Fix}(t)\neq\emptyset$$, and $$f:E\to E$$ be a contraction with constant $$k\in[0,1)$$. For each $$s\in(0,1)$$, let $$x_{s}$$ be given by
$$x_{s}=s f(x_{s})\oplus(1-s)t(x_{s}).$$
Then $$\{x_{s}\}$$ converges strongly as $$s\to0$$ to $$\tilde{x}$$ such that $$\tilde{x}=P_{\operatorname{Fix}(t)}(f(\tilde{x}))$$, which is equivalent to the variational inequality:
$$\bigl\langle \overrightarrow{\tilde{x}f(\tilde{x})}, \overrightarrow{x \tilde {x}}\bigr\rangle \geq0,\quad x\in \operatorname{Fix}(t).$$

### Theorem 1.2

(Theorem 3.4 of )

Let E be a nonempty closed convex subset of a complete $$\operatorname{CAT}(0)$$ space X, $$t : E \to E$$ be a nonexpansive mapping with $$\operatorname{Fix}(t)\neq\emptyset$$, and $$f:E\to E$$ be a contraction with constant $$k\in[0,1)$$. Suppose that $$x_{1}\in E$$ is arbitrarily chosen and $$\{x_{n}\}$$ is iteratively generated by
$$x_{n+1}=\alpha_{n} f(x_{n})\oplus(1- \alpha_{n})t(x_{n}),\quad n\in\mathbb{N},$$
where $$\{\alpha_{n}\}$$ is a sequence in $$(0, 1)$$ satisfying:
1. (C1)

$$\lim_{n\to\infty} \alpha_{n} =0$$;

2. (C2)

$$\sum^{\infty}_{n=1} \alpha_{n} = \infty$$;

3. (C3)

$$\sum^{\infty}_{n=1}|\alpha_{n}-\alpha_{n+1}|<\infty$$ or $$\lim_{n\to\infty} (\frac{\alpha_{n}}{\alpha_{n+1}} )=1$$.

Then $$\{x_{n}\}$$ converges strongly to $$\tilde{x}$$, where $$\tilde{x}=P_{\operatorname{Fix}(t)}(f(\tilde{x}))$$.

However, on p.14 of , in order to conclude that $$\alpha'_{n}=\frac{2(1-k)\alpha_{n}}{1-k\alpha_{n}}\in (0,1)$$, the sequence $$\{\alpha_{n}\}$$ must be contained in $$(0,\frac{1}{2-k} )$$. Therefore, Theorem 1.2 should be rewritten as follows.

### Theorem 1.3

Let E be a nonempty closed convex subset of a complete $$\operatorname{CAT}(0)$$ space X, $$t : E \to E$$ be a nonexpansive mapping with $$\operatorname{Fix}(t)\neq\emptyset$$, and $$f:E\to E$$ be a contraction with constant $$k\in[0,1)$$. Suppose that $$x_{1}\in E$$ is arbitrarily chosen and $$\{x_{n}\}$$ is iteratively generated by
$$x_{n+1}=\alpha_{n} f(x_{n})\oplus(1- \alpha_{n})t(x_{n}),\quad n\in\mathbb{N},$$
where $$\{\alpha_{n}\}$$ is a sequence in $$(0,\frac{1}{2-k} )$$ satisfying (C1), (C2), and (C3). Then $$\{x_{n}\}$$ converges strongly to $$\tilde{x}=P_{\operatorname{Fix}(t)}(f(\tilde{x}))$$.

Fixed point theory for multivalued mappings has many useful applications in applied sciences, in particular, in game theory and optimization theory. Thus, it is natural to study the extension of the known fixed point results for single-valued mappings to the setting of multivalued mappings.

Let E be a closed convex subset of a complete $$\operatorname{CAT}(0)$$ space, $$f:E\to E$$ be a contraction, and T be a nonexpansive mapping on E whose values are nonempty bounded closed subsets of E. For each $$s\in(0,1)$$, we can define a multivalued contraction $$G_{s}$$ on E by
$$G_{s}(x)=s f(x)\oplus(1-s)T(x),\quad x\in E.$$
Applying Nadler’s theorem , $$G_{s}$$ has a (not necessarily unique) fixed point $$x_{s}\in E$$, that is,
$$x_{s}\in s f(x_{s})\oplus(1-s)T(x_{s}).$$
(2)
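When T is singleton-valued, (2) reduces to the implicit scheme of Theorem 1.1, so the net $$\{x_{s}\}$$ converges, as $$s\to0$$, to $$\tilde{x}=P_{\operatorname{Fix}(T)}(f(\tilde{x}))$$. The following sketch (all concrete choices ours) solves (2) numerically on $$\mathbb{R}$$ for $$T(x)=\{\cos x\}$$, which is nonexpansive and trivially satisfies the endpoint condition, and $$f(x)=x/2$$.

```python
import math

# For each s, x_s solves x = s*f(x) + (1-s)*cos(x); the right-hand side is a
# contraction on R, so plain fixed-point iteration finds x_s.  Example only.

def x_s(s, x0=0.5, iters=500):
    x = x0
    for _ in range(iters):
        x = s * (0.5 * x) + (1 - s) * math.cos(x)
    return x

# As s -> 0, x_s approaches the unique fixed point d of cos (d ~ 0.739085);
# here Fix(T) = {d}, so d = P_Fix(T)(f(d)) as Theorem 1.1 predicts.
for s in (0.1, 0.01, 0.001):
    print(s, x_s(s))
```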

Recently, Bo and Yi  extended Theorems 1.1 and 1.2 to multivalued nonexpansive mappings. We observe that there are several gaps in the proofs of . In fact, Theorem 3.1 of  is false, since there exist a closed convex subset E of the Euclidean plane $$\mathbb{R}^{2}$$ and a nonexpansive mapping on E such that the net $$\{x_{s}\}$$ defined by (2) does not converge (see , p.697).

The purpose of this paper is to extend Theorems 1.1 and 1.2 correctly to the multivalued setting. Of course, a condition such as the endpoint condition must be added. Our main results are Theorems 3.1 and 3.3.

## 2 Preliminaries

Throughout this paper, $$\mathbb{N}$$ stands for the set of natural numbers and $$\mathbb{R}$$ stands for the set of real numbers. Let $$[0,l]$$ be a closed interval in $$\mathbb{R}$$ and x, y be two points in a metric space $$(X,\rho)$$. A geodesic joining x to y is a map $$\xi:[0,l]\to X$$ such that $$\xi(0)=x$$, $$\xi(l)=y$$, and $$\rho(\xi(s),\xi(t))=|s-t|$$ for all $$s, t\in[0,l]$$. The image of ξ is called a geodesic segment joining x and y, which when unique is denoted by $$[x,y]$$. The space $$(X,\rho)$$ is said to be a geodesic space if every two points in X are joined by a geodesic, and X is said to be uniquely geodesic if there is exactly one geodesic joining x and y for each $$x,y\in X$$. A subset E of X is said to be convex if every pair of points $$x,y\in E$$ can be joined by a geodesic in X and the image of every such geodesic is contained in E.

A geodesic triangle $$\triangle(p, q, r)$$ in a geodesic space $$(X,\rho)$$ consists of three points p, q, r in X and a choice of three geodesic segments $$[p, q]$$, $$[q, r]$$, $$[r, p]$$ joining them. A comparison triangle for geodesic triangle $$\triangle(p, q, r)$$ in X is a triangle $$\overline{\triangle}(\bar{p}, \bar{q}, \bar{r})$$ in the Euclidean plane $$\mathbb{R}^{2}$$ such that $$d_{\mathbb{R}^{2}} ( \bar{p},\bar{q} ) =\rho(p, q)$$, $$d_{\mathbb{R}^{2}} ( \bar{q},\bar{r} ) =\rho(q, r)$$, and $$d_{\mathbb{R}^{2}} ( \bar{r},\bar{p} ) =\rho(r, p)$$. A point $$\bar{u}\in[\bar{p}, \bar{q}]$$ is called a comparison point of $$u\in[p, q]$$ if $$\rho(p, u)=d_{\mathbb{R}^{2}}(\bar{p},\bar{u})$$. Comparison points on $$[\bar{q}, \bar{r}]$$ and $$[\bar{r}, \bar{p}]$$ are defined in the same way.

### Definition 2.1

A geodesic triangle $$\triangle(p, q, r)$$ in $$(X,\rho)$$ is said to satisfy the $$\operatorname{CAT}(0)$$ inequality if for any $$u,v\in\triangle(p, q, r)$$ and for their comparison points $$\bar{u}, \bar{v}\in \overline{\triangle}(\bar{p}, \bar{q}, \bar{r})$$, one has
$$\rho(u,v)\leq d_{\mathbb{R}^{2}}(\bar{u}, \bar{v}).$$
A geodesic space X is said to be a $$\operatorname{CAT}(0)$$ space if all of its geodesic triangles satisfy the $$\operatorname{CAT}(0)$$ inequality. For other equivalent definitions and basic properties of $$\operatorname{CAT}(0)$$ spaces, we refer the reader to standard texts, such as [9, 10]. It is well known that every $$\operatorname{CAT}(0)$$ space is uniquely geodesic. Notice also that pre-Hilbert spaces, $$\mathbb{R}$$-trees, and Euclidean buildings are examples of $$\operatorname{CAT}(0)$$ spaces (see [9, 11]). Let E be a nonempty closed convex subset of a complete $$\operatorname{CAT}(0)$$ space $$(X,\rho)$$. It follows from Proposition 2.4 of  that for each $$x\in X$$, there exists a unique point $$x_{0}\in E$$ such that
$$\rho(x, x_{0})=\inf\bigl\{ \rho(x,y) : y\in E\bigr\} .$$
In this case, $$x_{0}$$ is called the unique nearest point of x in E. The metric projection of X onto E is the mapping $$P_{E}:X\to E$$ defined by
$$P_{E}(x):= \text{the unique nearest point of } x \text{ in } E.$$
By Lemma 2.1 of , for each $$x,y\in X$$ and $$t\in[0,1]$$, there exists a unique point $$z\in[x,y]$$ such that
$$\rho(x,z)=(1-t)\rho(x,y)\quad \text{and} \quad \rho (y,z)=t \rho(x,y).$$
(3)
We shall denote by $$tx\oplus(1-t)y$$ the unique point z satisfying (3). Now, we collect some elementary facts about $$\operatorname{CAT}(0)$$ spaces.
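In the Euclidean plane, the model CAT(0) space, the point $$tx\oplus(1-t)y$$ is the ordinary convex combination, and (3) can be verified directly. A minimal sketch (example points ours):

```python
import math

# Sanity check of (3) in R^2, where t x (+) (1-t) y = t*x + (1-t)*y.

def rho(p, q):
    return math.dist(p, q)

def combine(t, p, q):
    """The point t p (+) (1-t) q on the segment [p, q]."""
    return tuple(t * a + (1 - t) * b for a, b in zip(p, q))

p, q, t = (1.0, 2.0), (4.0, -2.0), 0.3
z = combine(t, p, q)
# (3): rho(p, z) = (1-t) rho(p, q) and rho(q, z) = t rho(p, q)
print(rho(p, z), (1 - t) * rho(p, q))
print(rho(q, z), t * rho(p, q))
```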

### Lemma 2.2

(Lemma 2.4 of )

Let $$(X,\rho)$$ be a $$\operatorname{CAT}(0)$$ space. Then
$$\rho\bigl(tx\oplus(1-t)y,z\bigr)\leq t\rho(x,z)+(1-t)\rho(y,z)$$
for all $$x,y,z\in X$$ and $$t\in[0,1]$$.

### Lemma 2.3

(Lemma 2.5 of )

Let $$(X,\rho)$$ be a $$\operatorname{CAT}(0)$$ space. Then
$$\rho^{2}\bigl(t x\oplus(1-t)y,z\bigr)\leq t \rho^{2}(x,z)+(1-t) \rho^{2}(y,z)-t(1-t)\rho^{2}(x,y)$$
for all $$x,y,z\in X$$ and $$t\in[0,1]$$.

### Lemma 2.4

(Lemma 3 of )

Let $$(X,\rho)$$ be a $$\operatorname{CAT}(0)$$ space. Then
$$\rho\bigl(tx\oplus(1-t)z,ty\oplus(1-t)z\bigr)\leq t \rho(x,y)$$
for all $$x,y,z\in X$$ and $$t\in[0,1]$$.
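The convexity inequalities of Lemmas 2.2-2.4 can be spot-checked numerically in $$\mathbb{R}^{2}$$, a CAT(0) space in which $$tx\oplus(1-t)y$$ is the usual convex combination. A random test like the following (our own sketch) is of course only a sanity check, not a proof.

```python
import math, random

def rho(p, q):
    return math.dist(p, q)

def comb(t, p, q):
    return tuple(t * a + (1 - t) * b for a, b in zip(p, q))

random.seed(0)
for _ in range(1000):
    x, y, z = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(3)]
    t = random.random()
    m = comb(t, x, y)
    # Lemma 2.2: rho(m, z) <= t rho(x,z) + (1-t) rho(y,z)
    assert rho(m, z) <= t * rho(x, z) + (1 - t) * rho(y, z) + 1e-9
    # Lemma 2.3: rho^2(m, z) <= t rho^2(x,z) + (1-t) rho^2(y,z)
    #            - t(1-t) rho^2(x,y)   (an identity in Hilbert spaces)
    assert rho(m, z)**2 <= t * rho(x, z)**2 + (1 - t) * rho(y, z)**2 \
        - t * (1 - t) * rho(x, y)**2 + 1e-9
    # Lemma 2.4: rho(t x (+) (1-t) z, t y (+) (1-t) z) <= t rho(x, y)
    assert rho(comb(t, x, z), comb(t, y, z)) <= t * rho(x, y) + 1e-9
print("all inequalities hold on 1000 random samples")
```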
Let $$\{x_{n}\}$$ be a bounded sequence in X. For $$x\in X$$, we set
$$r \bigl( x,\{x_{n}\} \bigr) = \limsup_{n\rightarrow\infty} \rho ( x,x_{n} ) .$$
The asymptotic radius $$r ( \{x_{n}\} )$$ of $$\{x_{n}\}$$ is given by
$$r \bigl( \{x_{n}\} \bigr) =\inf \bigl\{ r \bigl( x,\{x_{n} \} \bigr) :x\in X \bigr\} ,$$
and the asymptotic center $$A ( \{x_{n}\} )$$ of $$\{x_{n}\}$$ is the set
$$A \bigl( \{x_{n}\} \bigr) = \bigl\{ x\in X:r \bigl( x, \{x_{n}\} \bigr) =r \bigl( \{x_{n}\} \bigr) \bigr\} .$$

It is well known from Proposition 7 of  that in a $$\operatorname{CAT}(0)$$ space, $$A(\{x_{n}\})$$ consists of exactly one point. A sequence $$\{x_{n}\}$$ in X is said to Δ-converge to $$x\in X$$ if $$A(\{x_{n_{k}}\})=\{x\}$$ for every subsequence $$\{x_{n_{k}}\}$$ of $$\{x_{n}\}$$. In this case we write $$\Delta\mbox{-} \lim_{n\to\infty}x_{n}=x$$ and call x the Δ-limit of $$\{x_{n}\}$$.

### Lemma 2.5

(, p.3690)

Every bounded sequence in a complete $$\operatorname{CAT}(0)$$ space always has a Δ-convergent subsequence.

### Lemma 2.6

(Proposition 2.1 of )

If E is a closed convex subset of a complete $$\operatorname{CAT}(0)$$ space and if $$\{x_{n}\}$$ is a bounded sequence in E, then the asymptotic center of $$\{x_{n}\}$$ is in E.

The concept of quasi-linearization was introduced by Berg and Nikolaev . Let $$(X,\rho)$$ be a metric space. We denote a pair $$(a, b) \in X\times X$$ by $$\overrightarrow{ab}$$ and call it a vector. The quasi-linearization is a map $$\langle\cdot, \cdot\rangle: (X \times X)\times(X\times X) \to \mathbb{R}$$ defined by
$$\langle\overrightarrow{ab}, \overrightarrow{cd}\rangle=\frac {1}{2} \bigl( \rho^{2}(a, d) + \rho^{2}(b, c) -\rho^{2}(a, c)- \rho^{2}(b, d) \bigr) \quad \text{for all } a, b, c, d \in X.$$
It is easy to see that $$\langle\overrightarrow{ab}, \overrightarrow{cd}\rangle=\langle\overrightarrow{cd}, \overrightarrow{ab}\rangle$$, $$\langle\overrightarrow{ab}, \overrightarrow{cd}\rangle=- \langle\overrightarrow{ba}, \overrightarrow{cd}\rangle$$, and $$\langle\overrightarrow{ax}, \overrightarrow{cd}\rangle+\langle\overrightarrow{xb}, \overrightarrow{cd}\rangle=\langle\overrightarrow{ab}, \overrightarrow{cd}\rangle$$ for all $$a, b, c, d, x \in X$$. We say that $$(X,\rho)$$ satisfies the Cauchy-Schwarz inequality if
$$\bigl\vert \langle\overrightarrow{ab}, \overrightarrow{cd}\rangle\bigr\vert \leq\rho(a, b)\rho(c, d)\quad \text{for all } a, b, c, d \in X.$$
It is well known from Corollary 3 of  that a geodesic space X is a $$\operatorname{CAT}(0)$$ space if and only if it satisfies the Cauchy-Schwarz inequality. Some other properties of quasi-linearization are included as follows.
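In a Hilbert space the quasi-linearization reduces to the inner product of difference vectors, $$\langle\overrightarrow{ab}, \overrightarrow{cd}\rangle=\langle b-a, d-c\rangle$$, and the Cauchy-Schwarz inequality above becomes the classical one. A quick numerical check of both facts in $$\mathbb{R}^{3}$$ (an example of ours):

```python
import math, random

def rho(p, q):
    return math.dist(p, q)

def quasi(a, b, c, d):
    """Quasi-linearization <ab, cd> from the defining formula."""
    return 0.5 * (rho(a, d)**2 + rho(b, c)**2 - rho(a, c)**2 - rho(b, d)**2)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

diff = lambda p, q: tuple(qi - pi for pi, qi in zip(p, q))

random.seed(1)
a, b, c, d = [tuple(random.uniform(-3, 3) for _ in range(3)) for _ in range(4)]

print(quasi(a, b, c, d), dot(diff(a, b), diff(c, d)))          # equal values
assert abs(quasi(a, b, c, d)) <= rho(a, b) * rho(c, d) + 1e-9  # Cauchy-Schwarz
```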

### Lemma 2.7

(Theorem 3.1 of )

Let E be a nonempty closed convex subset of a complete $$\operatorname{CAT}(0)$$ space X, $$u\in X$$ and $$v\in E$$. Then
$$v=P_{E}(u)\quad \textit{if and only if}\quad \langle\overrightarrow{v u}, \overrightarrow{w v}\rangle\geq0 \quad \textit{for all } w\in E.$$

### Lemma 2.8

(Lemma 2.9 of )

Let X be a $$\operatorname{CAT}(0)$$ space. Then
$$\rho^{2}(x,z) \leq\rho^{2}(y,z) + 2 \langle \overrightarrow{xy}, \overrightarrow{xz} \rangle \quad \textit{for all } x, y, z \in X.$$

### Lemma 2.9

(Lemma 2.10 of )

Let u and v be two points in a $$\operatorname{CAT}(0)$$ space X. For each $$t\in [0, 1]$$, we set $$u_{t} = t u\oplus(1-t)v$$. Then, for each $$x, y\in X$$, we have
1. (i)

$$\langle\overrightarrow{u_{t} x}, \overrightarrow{u_{t} y}\rangle\leq t\langle\overrightarrow{ux}, \overrightarrow{u_{t} y}\rangle+(1-t)\langle\overrightarrow{vx}, \overrightarrow{u_{t} y}\rangle$$;

2. (ii)

$$\langle\overrightarrow{u_{t} x}, \overrightarrow{uy}\rangle\leq t\langle\overrightarrow{ux}, \overrightarrow{uy}\rangle+(1-t)\langle\overrightarrow{vx}, \overrightarrow{uy}\rangle$$ and $$\langle\overrightarrow{u_{t} x}, \overrightarrow{vy}\rangle\leq t\langle\overrightarrow{ux}, \overrightarrow{vy}\rangle+(1-t)\langle\overrightarrow{vx}, \overrightarrow{vy}\rangle$$.
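Lemmas 2.8 and 2.9 can likewise be spot-checked in $$\mathbb{R}^{2}$$, where the quasi-linearization is computed directly from its defining formula. The random test below is our own illustration.

```python
import math, random

def rho(p, q):
    return math.dist(p, q)

def quasi(a, b, c, d):
    return 0.5 * (rho(a, d)**2 + rho(b, c)**2 - rho(a, c)**2 - rho(b, d)**2)

def comb(t, p, q):
    return tuple(t * a + (1 - t) * b for a, b in zip(p, q))

random.seed(2)
for _ in range(500):
    x, y, z, u, v = [(random.uniform(-4, 4), random.uniform(-4, 4))
                     for _ in range(5)]
    t = random.random()
    # Lemma 2.8: rho^2(x,z) <= rho^2(y,z) + 2 <xy, xz>
    assert rho(x, z)**2 <= rho(y, z)**2 + 2 * quasi(x, y, x, z) + 1e-9
    # Lemma 2.9(i) with u_t = t u (+) (1-t) v
    ut = comb(t, u, v)
    lhs = quasi(ut, x, ut, y)
    rhs = t * quasi(u, x, ut, y) + (1 - t) * quasi(v, x, ut, y)
    assert lhs <= rhs + 1e-9
print("Lemmas 2.8 and 2.9(i) verified on 500 random samples")
```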

### Lemma 2.10

(Theorem 2.6 of )

Let X be a complete $$\operatorname{CAT}(0)$$ space, $$\{x_{n}\}$$ be a sequence in X, and $$x\in X$$. Then $$\{x_{n}\}$$ Δ-converges to x if and only if $$\limsup_{n\to\infty}\langle\overrightarrow{x_{n} x}, \overrightarrow{y x}\rangle\leq0$$ for all $$y\in X$$.

Recall that a continuous linear functional μ on $$\ell_{\infty}$$, the Banach space of bounded real sequences, is called a Banach limit if $$\|\mu\|=\mu(1,1,\ldots)=1$$ and $$\mu_{n}(a_{n})=\mu_{n}(a_{n+1})$$ for all $$\{a_{n}\}\in\ell_{\infty}$$.

### Lemma 2.11

(Proposition 2 of )

Let α be a real number and let $$(a_{1}, a_{2}, \ldots) \in \ell_{\infty}$$ be such that $$\mu_{n}(a_{n}) \leq\alpha$$ for all Banach limits μ and $$\limsup_{n}(a_{n+1} - a_{n}) \leq0$$. Then $$\limsup_{n} a_{n} \leq\alpha$$.

### Lemma 2.12

(Lemma 2.1 of )

Let $$\{c_{n}\}$$ be a sequence of non-negative real numbers satisfying
$$c_{n+1}\leq(1-\gamma_{n})c_{n}+\gamma_{n} \eta_{n}\quad \textit{for all } n\in\mathbb{N},$$
where $$\{\gamma_{n}\}\subset(0,1)$$ and $$\{\eta_{n}\}\subset\mathbb{R}$$ such that
1. (i)

$$\sum_{n=1}^{\infty} \gamma_{n}=\infty$$;

2. (ii)

$$\sum_{n=1}^{\infty} |\gamma_{n}\eta_{n}| < \infty$$ or $$\limsup_{n\to\infty}\eta_{n} \leq0$$.

Then $$\{c_{n}\}$$ converges to zero as $$n\to\infty$$.
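A small numerical illustration of Lemma 2.12 (the example parameters are ours): with $$\gamma_{n}=\frac{1}{n+1}$$, whose sum diverges, and $$\eta_{n}=\frac{1}{\sqrt{n}}\to0$$, the recursion drives $$c_{n}$$ to zero.

```python
import math

# c_{n+1} = (1 - gamma_n) c_n + gamma_n * eta_n with the choices above.
c = 10.0
for n in range(1, 1_000_001):
    gamma = 1.0 / (n + 1)
    eta = 1.0 / math.sqrt(n)
    c = (1 - gamma) * c + gamma * eta

print(c)   # close to 0
```

For this particular $$\gamma_{n}$$ the recursion telescopes to $$(N+1)c_{N+1}=c_{1}+\sum_{n=1}^{N}\eta_{n}$$, so $$c_{N}$$ decays like $$2/\sqrt{N}$$ here.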
Let E be a nonempty subset of a $$\operatorname{CAT}(0)$$ space $$(X,\rho)$$. We shall denote the family of nonempty bounded closed subsets of E by $$\mathit{BC}(E)$$, the family of nonempty bounded closed convex subsets of E by $$\mathit{BCC}(E)$$, and the family of nonempty compact subsets of E by $$K(E)$$. Let $$H(\cdot,\cdot)$$ be the Hausdorff distance on $$\mathit{BC}(X)$$, i.e.,
$$H(A,B)=\max \Bigl\{ \sup_{a\in A}\operatorname{dist}(a,B),\sup _{b\in B}\operatorname {dist}(b,A) \Bigr\} ,\quad A,B\in \mathit{BC}(X),$$
where $$\operatorname{dist}(a,B) := \inf\{\rho(a,b):b \in B\}$$ is the distance from the point a to the set B.
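For finite sets the Hausdorff distance can be computed directly from this definition; a short sketch with example sets of our own choosing:

```python
# Hausdorff distance between two finite subsets of R, straight from the
# definition: the larger of the two one-sided excesses.

def dist_point_set(a, B):
    return min(abs(a - b) for b in B)

def hausdorff(A, B):
    return max(max(dist_point_set(a, B) for a in A),
               max(dist_point_set(b, A) for b in B))

A, B = {0.0, 1.0, 2.0}, {0.5, 2.0, 5.0}
print(hausdorff(A, B))   # -> 3.0: the point 5.0 in B is 3.0 away from A
```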

### Definition 2.13

A multivalued mapping $$T:E\rightarrow \mathit{BC}(X)$$ is said to be a contraction if there exists a constant $$k\in[0,1)$$ such that
$$H\bigl(T(x),T(y)\bigr)\leq k\rho(x,y),\quad x, y\in E.$$
(4)
If (4) holds with $$k=1$$, then T is called nonexpansive. A point $$x\in E$$ is called a fixed point of T if $$x\in T(x)$$. We shall denote by $$\operatorname{Fix}(T)$$ the set of all fixed points of T. A multivalued mapping T is said to satisfy the endpoint condition  if $$\operatorname{Fix}(T)\neq\emptyset$$ and $$T(x)=\{x\}$$ for all $$x\in \operatorname{Fix}(T)$$.

The following fact is a consequence of Lemma 3.2 in . Notice also that it is an extension of Proposition 3.7 in .

### Lemma 2.14

If E is a closed convex subset of a complete $$\operatorname{CAT}(0)$$ space X and $$T : E\to K(X)$$ is a nonexpansive mapping, then the conditions $$\{x_{n}\}$$ Δ-converges to x and $$\operatorname{dist}(x_{n}, T(x_{n}))\to0$$ imply $$x\in \operatorname{Fix}(T)$$.

The following fact is also needed.

### Lemma 2.15

(Lemma 3.1 of )

Let E be a closed convex subset of a complete $$\operatorname{CAT}(0)$$ space X and $$T:E\to \mathit{BC}(X)$$ be a nonexpansive mapping. If T satisfies the endpoint condition, then $$\operatorname{Fix}(T)$$ is closed and convex.

## 3 Main results

### Theorem 3.1

Let E be a nonempty closed convex subset of a complete $$\operatorname{CAT}(0)$$ space X, $$T : E \to K(E)$$ be a nonexpansive mapping satisfying the endpoint condition, and $$f:E\to E$$ be a contraction with constant $$k\in[0,1)$$. Then the following statements hold:
1. (i)

$$\{x_{s}\}$$ defined by (2) converges strongly to $$\tilde{x}$$ as $$s\to0$$, where $$\tilde{x}=P_{\operatorname{Fix}(T)}(f(\tilde{x}))$$.

2. (ii)
If $$\{x_{n}\}$$ is a bounded sequence in E such that $$\lim_{n \to\infty}\operatorname{dist}(x_{n},T(x_{n}))=0$$, then
$$\rho^{2}\bigl(f(\tilde{x}),\tilde{x}\bigr) \leq\mu_{n} \rho^{2}\bigl(f(\tilde{x}), x_{n}\bigr)$$
for all Banach limits μ.

### Proof

(i) We first show that $$\{x_{s}\}$$ is bounded. From (2), for each $$x_{s}$$, there exists $$y_{s}\in T(x_{s})$$ such that $$x_{s}=s f(x_{s})\oplus(1-s) y_{s}$$. By the endpoint condition, for each $$p\in \operatorname{Fix}(T)$$, we have
\begin{aligned} \rho(x_{s}, p) \leq& s\rho\bigl(f(x_{s}),p\bigr)+(1-s)\rho (y_{s},p) \\ = & s\rho\bigl(f(x_{s}),p\bigr)+(1-s)\operatorname{dist} \bigl(y_{s},T(p)\bigr) \\ \leq& s\rho\bigl(f(x_{s}),p\bigr)+(1-s) H\bigl(T(x_{s}),T(p) \bigr) \\ \leq& s\rho\bigl(f(x_{s}),p\bigr)+(1-s) \rho(x_{s}, p), \end{aligned}
which implies
\begin{aligned} \rho(x_{s}, p) \leq& \rho\bigl(f(x_{s}),p\bigr)\leq\rho \bigl(f(x_{s}),f(p)\bigr)+\rho\bigl(f(p),p\bigr) \\ \leq& k \rho(x_{s},p)+\rho\bigl(f(p),p\bigr). \end{aligned}
Thus $$\rho(x_{s}, p) \leq \frac{1}{1-k} \rho(f(p),p)$$. Hence, $$\{x_{s}\}$$ is bounded and so are $$\{f(x_{s})\}$$ and $$\{y_{s}\}$$. We note that
$$\operatorname{dist}\bigl(x_{s}, T(x_{s})\bigr) \leq\rho(x_{s}, y_{s}) \leq s \rho\bigl(f(x_{s}), y_{s}\bigr)\to0\quad \text{as } s\to0.$$
(5)
Next, we show that $$\{x_{s}\}$$ converges strongly to $$\tilde{x}$$, where $$\tilde{x}=P_{\operatorname{Fix}(T)}(f(\tilde{x}))$$. Let $$\{s_{n}\}$$ be a sequence in $$(0,1)$$ converging to 0 and put $$x_{n} := x_{s_{n}}$$. It suffices to show that there exists a subsequence of $$\{x_{n}\}$$ converging to $$\tilde{x}=P_{\operatorname{Fix}(T)}(f(\tilde{x}))$$. By Lemmas 2.5 and 2.14, there exist a subsequence $$\{x_{n_{k}}\}$$ of $$\{x_{n}\}$$ and a point $$\tilde{x}$$ in $$\operatorname{Fix}(T)$$ such that $$\Delta\mbox{-} \lim_{k\to\infty}x_{n_{k}}=\tilde{x}$$. It follows from the endpoint condition and Lemma 2.9(i) that
\begin{aligned} \rho^{2}(x_{n_{k}}, \tilde{x}) = & \langle \overrightarrow{x_{n_{k}}\tilde{x}}, \overrightarrow{x_{n_{k}}\tilde {x}}\rangle \\ \leq& s_{n_{k}}\bigl\langle \overrightarrow{f(x_{n_{k}}) \tilde{x}}, \overrightarrow{x_{n_{k}}\tilde{x}}\bigr\rangle +(1-s_{n_{k}}) \langle \overrightarrow{y_{n_{k}}\tilde{x}}, \overrightarrow{x_{n_{k}} \tilde {x}}\rangle \\ \leq& s_{n_{k}}\bigl\langle \overrightarrow{f(x_{n_{k}}) \tilde{x}}, \overrightarrow{x_{n_{k}}\tilde{x}}\bigr\rangle +(1-s_{n_{k}}) \rho(y_{n_{k}},\tilde {x})\rho(x_{n_{k}},\tilde{x}) \\ = & s_{n_{k}}\bigl\langle \overrightarrow{f(x_{n_{k}})\tilde{x}}, \overrightarrow{x_{n_{k}}\tilde{x}}\bigr\rangle +(1-s_{n_{k}}) \operatorname{dist}\bigl(y_{n_{k}},T(\tilde{x})\bigr)\rho(x_{n_{k}}, \tilde{x}) \\ \leq& s_{n_{k}}\bigl\langle \overrightarrow{f(x_{n_{k}}) \tilde{x}}, \overrightarrow{x_{n_{k}}\tilde{x}}\bigr\rangle +(1-s_{n_{k}}) H\bigl(T(x_{n_{k}}),T(\tilde{x})\bigr)\rho(x_{n_{k}},\tilde{x}) \\ \leq& s_{n_{k}}\bigl\langle \overrightarrow{f(x_{n_{k}}) \tilde{x}}, \overrightarrow{x_{n_{k}}\tilde{x}}\bigr\rangle +(1-s_{n_{k}}) \rho^{2}(x_{n_{k}},\tilde{x}), \end{aligned}
which implies
\begin{aligned} \rho^{2}(x_{n_{k}}, \tilde{x}) \leq& \bigl\langle \overrightarrow{f(x_{n_{k}})\tilde{x}}, \overrightarrow{x_{n_{k}} \tilde {x}}\bigr\rangle \\ = & \bigl\langle \overrightarrow{f(x_{n_{k}})f(\tilde{x})}, \overrightarrow {x_{n_{k}}\tilde{x}}\bigr\rangle +\bigl\langle \overrightarrow{f(\tilde{x})\tilde {x}}, \overrightarrow{x_{n_{k}} \tilde{x}}\bigr\rangle \\ \leq& \rho\bigl(f(x_{n_{k}}),f(\tilde{x})\bigr)\rho(x_{n_{k}}, \tilde{x})+\bigl\langle \overrightarrow{f(\tilde{x})\tilde{x}}, \overrightarrow{x_{n_{k}}\tilde {x}}\bigr\rangle \\ \leq& k\rho^{2}(x_{n_{k}},\tilde{x})+\bigl\langle \overrightarrow{f(\tilde{x})\tilde{x}}, \overrightarrow{x_{n_{k}}\tilde{x}} \bigr\rangle . \end{aligned}
Thus
$$\rho^{2}(x_{n_{k}}, \tilde{x}) \leq \frac {1}{1-k}\bigl\langle \overrightarrow{f(\tilde{x})\tilde{x}}, \overrightarrow {x_{n_{k}}\tilde{x}}\bigr\rangle .$$
(6)
Since $$\Delta\mbox{-} \lim_{k\to\infty}x_{n_{k}}=\tilde{x}$$, by Lemma 2.10 we have
$$\limsup_{k\to\infty} \bigl\langle \overrightarrow{f(\tilde{x}) \tilde{x}}, \overrightarrow{x_{n_{k}}\tilde{x}}\bigr\rangle \leq0.$$
This, together with (6), implies that $$\{x_{n_{k}}\}$$ converges strongly to $$\tilde{x}$$.
Next, we show that $$\tilde{x}=P_{\operatorname{Fix}(T)}(f(\tilde{x}))$$. Since T satisfies the endpoint condition, we have
\begin{aligned} \operatorname{dist}\bigl(f(x_{n_{k}}), T(x_{n_{k}})\bigr) \leq& \rho \bigl(f(x_{n_{k}}), f(\tilde{x})\bigr)+\rho\bigl(f(\tilde{x}), \tilde{x}\bigr)+ \operatorname{dist}\bigl(\tilde {x}, T(x_{n_{k}})\bigr) \\ \leq& \rho(x_{n_{k}}, \tilde{x})+\rho\bigl(f(\tilde{x}), \tilde{x} \bigr)+ H\bigl(T(\tilde{x}), T(x_{n_{k}})\bigr) \\ \leq& \rho\bigl(f(\tilde{x}), \tilde{x}\bigr)+ 2\rho(x_{n_{k}}, \tilde{x}) \end{aligned}
and
\begin{aligned} \rho\bigl(f(\tilde{x}), \tilde{x}\bigr) = & \operatorname{dist}\bigl(f( \tilde{x}), T(\tilde{x})\bigr) \\ \leq&\rho\bigl(f(\tilde{x}), f(x_{n_{k}})\bigr)+\operatorname{dist} \bigl(f(x_{n_{k}}), T(x_{n_{k}})\bigr) + H\bigl(T(x_{n_{k}}), T(\tilde{x})\bigr) \\ \leq& \operatorname{dist}\bigl(f(x_{n_{k}}), T(x_{n_{k}})\bigr) + 2\rho(x_{n_{k}}, \tilde{x}). \end{aligned}
Thus
$$\bigl\vert \operatorname{dist}\bigl(f(x_{n_{k}}), T(x_{n_{k}})\bigr)-\rho \bigl(f(\tilde{x}), \tilde{x}\bigr) \bigr\vert \leq2\rho(x_{n_{k}}, \tilde{x}).$$
(7)
Applying Lemma 2.3, for any $$q\in \operatorname{Fix}(T)$$, we have
\begin{aligned} \rho^{2}(x_{n_{k}}, q) = & \rho^{2} \bigl(s_{n_{k}} f(x_{n_{k}})\oplus(1-s_{n_{k}}) y_{n_{k}}, q\bigr) \\ \leq& s_{n_{k}} \rho^{2}\bigl(f(x_{n_{k}}), q\bigr) + (1-s_{n_{k}})\rho^{2}(y_{n_{k}}, q) -s_{n_{k}}(1-s_{n_{k}}) \rho^{2}\bigl(f(x_{n_{k}}), y_{n_{k}}\bigr) \\ \leq& s_{n_{k}} \rho^{2}\bigl(f(x_{n_{k}}), q\bigr) + (1-s_{n_{k}})H^{2}\bigl(T(x_{n_{k}}), T(q)\bigr) -s_{n_{k}}(1-s_{n_{k}})\rho^{2}\bigl(f(x_{n_{k}}), y_{n_{k}}\bigr) \\ \leq& s_{n_{k}} \rho^{2}\bigl(f(x_{n_{k}}), q\bigr) + (1-s_{n_{k}})\rho^{2}(x_{n_{k}}, q) -s_{n_{k}}(1-s_{n_{k}}) \rho^{2}\bigl(f(x_{n_{k}}), y_{n_{k}}\bigr). \end{aligned}
This implies that
\begin{aligned} \begin{aligned} \rho^{2}(x_{n_{k}}, q) & \leq \rho^{2} \bigl(f(x_{n_{k}}), q\bigr) - (1-s_{n_{k}})\rho^{2} \bigl(f(x_{n_{k}}), y_{n_{k}}\bigr) \\ & \leq \rho^{2}\bigl(f(x_{n_{k}}), q\bigr) - (1-s_{n_{k}}) \bigl[\operatorname{dist}\bigl(f(x_{n_{k}}), T(x_{n_{k}})\bigr) \bigr]^{2}. \end{aligned} \end{aligned}
Taking $$k \to\infty$$, together with (7), we get
$$\rho^{2}(\tilde{x}, q) \leq \rho^{2}\bigl(f(\tilde{x}), q \bigr) - \rho^{2}\bigl(f(\tilde {x}), \tilde{x}\bigr).$$
Hence
$$0\leq\frac{1}{2} \bigl[\rho^{2}(\tilde{x}, \tilde{x}) + \rho^{2}\bigl(f(\tilde {x}), q\bigr) - \rho^{2}(\tilde{x}, q) - \rho^{2}\bigl(f(\tilde{x}), \tilde{x}\bigr) \bigr]=\bigl\langle \overrightarrow{\tilde{x}f(\tilde{x})}, \overrightarrow{q\tilde{x}}\bigr\rangle \quad \text{for all } q\in \operatorname{Fix}(T).$$
By Lemma 2.7, $$\tilde{x}=P_{\operatorname{Fix}(T)}(f(\tilde{x}))$$ and this completes the proof.
(ii) Let $$\{x_{n}\}$$ be a bounded sequence in E such that $$\lim_{n \to\infty}\operatorname{dist}(x_{n},T(x_{n}))=0$$ and let μ be a Banach limit. Suppose to the contrary that $$\mu_{n} \rho^{2}(f(\tilde{x}),x_{n} ) < \eta< \gamma< \rho^{2}(f(\tilde{x}),\tilde{x})$$ for some $$\eta, \gamma\in\mathbb{R}$$. Then there exists a subsequence $$\{x_{n_{k}}\}$$ of $$\{x_{n}\}$$ such that
$$\rho^{2}\bigl(f(\tilde{x}),x_{n_{k}} \bigr) < \gamma\quad \text{for all } k\in\mathbb{N}.$$
(8)
Otherwise $$\rho^{2}(f(\tilde{x}),x_{n}) \geq\gamma$$ for all large n, which implies that $$\mu_{n} \rho^{2}(f(\tilde{x}),x_{n}) \geq\gamma > \eta$$, a contradiction, and therefore (8) holds. By Lemmas 2.5 and 2.14, we can assume that $$\{x_{n_{k}}\}$$ Δ-converges to a point p in $$\operatorname{Fix}(T)$$. Then by (8) and Lemma 2.6, p is contained in the closed ball centered at $$f(\tilde{x})$$ of radius $$\sqrt{\gamma}$$. This contradicts the fact that $$\tilde{x}$$ is the unique nearest point of $$f(\tilde{x})$$ in $$\operatorname{Fix}(T)$$, and hence the proof is complete. □

As an immediate consequence of Theorem 3.1, we obtain the following.

### Corollary 3.2

(Theorem 3.4 of )

Let E be a nonempty closed convex subset of a complete $$\operatorname{CAT}(0)$$ space X, and $$T : E \to K(E)$$ be a nonexpansive mapping satisfying the endpoint condition. Fix $$u\in E$$, and for each $$s\in(0,1)$$ let $$x_{s}$$ be a fixed point of $$G_{s}: E\to K(E)$$ defined by $$G_{s}(x)=s u\oplus(1-s) T(x)$$, that is, $$x_{s}\in E$$ and $$x_{s}\in s u \oplus(1-s) T(x_{s})$$. Then the following statements hold:
1. (i)

$$\{x_{s}\}$$ converges strongly to $$\tilde{x}$$ as $$s\to0$$, where $$\tilde{x}=P_{\operatorname{Fix}(T)}(u)$$.

2. (ii)
If $$\{x_{n}\}$$ is a bounded sequence in E such that $$\lim_{n \to\infty}\operatorname{dist}(x_{n},T(x_{n}))=0$$, then
$$\rho^{2}(u,\tilde{x}) \leq\mu_{n}\rho^{2}(u, x_{n})$$
for all Banach limits μ.

Now, we define an explicit approximation method for multivalued nonexpansive mappings. Let $$T:E\to K(E)$$ be a nonexpansive mapping, $$f: E\to E$$ be a contraction, and $$\{\alpha_{n}\}$$ be a sequence in $$(0,1)$$. Fix $$x_{1}\in E$$ and $$y_{1}\in T(x_{1})$$. Let
$$x_{2}=\alpha_{1} f(x_{1})\oplus(1- \alpha_{1})y_{1}.$$
By the definition of Hausdorff distance and the nonexpansiveness of T, we can choose $$y_{2} \in T(x_{2})$$ such that
$$\rho(y_{1},y_{2})\leq\rho(x_{1},x_{2}).$$
Inductively, we have
$$x_{n+1}=\alpha_{n} f(x_{n})\oplus(1- \alpha_{n})y_{n},\quad y_{n}\in T(x_{n}),$$
(9)
and $$\rho(y_{n},y_{n+1})\leq\rho(x_{n},x_{n+1})$$ for all $$n\in\mathbb{N}$$.

### Theorem 3.3

Let E be a nonempty closed convex subset of a complete $$\operatorname{CAT}(0)$$ space X and $$T : E \to K(E)$$ be a nonexpansive mapping satisfying the endpoint condition. Let $$f:E\to E$$ be a contraction with constant $$k\in [0,\frac{1}{2} )$$ and $$\{\alpha_{n}\}$$ be a sequence in $$(0,\frac{1}{2-k} )$$ satisfying:
1. (C1)

$$\lim_{n\rightarrow\infty}\alpha_{n}=0$$;

2. (C2)

$$\sum_{n=1}^{\infty}\alpha_{n}=\infty$$;

3. (C3)

$$\sum^{\infty}_{n=1}|\alpha_{n}-\alpha_{n+1}|<\infty$$ or $$\lim_{n\to\infty} (\frac{\alpha_{n}}{\alpha_{n+1}} )=1$$.

Then the sequence $$\{x_{n}\}$$ defined by (9) converges strongly to $$\tilde{x}$$, where $$\tilde{x}=P_{\operatorname{Fix}(T)}(f(\tilde{x}))$$.

### Proof

We divide the proof into three steps.

Step 1. We show that $$\{x_{n}\}$$, $$\{y_{n}\}$$, and $$\{f(x_{n})\}$$ are bounded sequences. Let $$p\in \operatorname{Fix}(T)$$. By Lemma 2.2, we have
\begin{aligned} \rho(x_{n+1}, p) \leq& \alpha_{n} \rho\bigl(f(x_{n}), p\bigr)+(1-\alpha_{n}) \rho(y_{n}, p) \\ \leq& \alpha_{n} \bigl[\rho\bigl(f(x_{n}), f(p)\bigr)+\rho \bigl(f(p),p\bigr) \bigr]+(1-\alpha _{n}) H\bigl(T(x_{n}), T(p)\bigr) \\ \leq& \max \biggl\{ \rho(x_{n},p), \frac{\rho(f(p), p)}{1-k} \biggr\} . \end{aligned}
By induction, we also have
$$\rho(x_{n},p)\leq\max \biggl\{ \rho(x_{1},p), \frac{\rho(f(p), p)}{1-k} \biggr\} \quad \text{for all } n\in\mathbb{N}.$$
Hence, $$\{x_{n}\}$$ is bounded and so are $$\{y_{n}\}$$ and $$\{f(x_{n})\}$$.
Step 2. We show that $$\lim_{n\to\infty} \rho(x_{n+1}, x_{n})=0$$. Observe that
\begin{aligned} \rho(x_{n+1},x_{n}) \leq& \rho \bigl(\alpha_{n} f(x_{n})\oplus(1-\alpha_{n}) y_{n}, \alpha_{n-1} f(x_{n-1})\oplus(1-\alpha _{n-1}) y_{n-1} \bigr) \\ \leq& \rho \bigl(\alpha_{n} f(x_{n})\oplus(1- \alpha_{n}) y_{n}, \alpha_{n} f(x_{n}) \oplus(1-\alpha_{n}) y_{n-1} \bigr) \\ &{}+\rho \bigl(\alpha_{n} f(x_{n})\oplus(1- \alpha_{n}) y_{n-1}, \alpha_{n} f(x_{n-1}) \oplus(1-\alpha_{n}) y_{n-1} \bigr) \\ &{}+\rho \bigl(\alpha_{n} f(x_{n-1})\oplus(1- \alpha_{n}) y_{n-1}, \alpha _{n-1} f(x_{n-1}) \oplus(1-\alpha_{n-1}) y_{n-1} \bigr) \\ \leq& (1-\alpha_{n})\rho( y_{n}, y_{n-1})+ \alpha_{n}\rho\bigl(f(x_{n}), f(x_{n-1})\bigr) \\ &{}+\vert \alpha_{n}-\alpha_{n-1}\vert \rho \bigl(f(x_{n-1}), y_{n-1}\bigr) \\ \leq& \bigl(1-\alpha_{n}(1-k) \bigr)\rho(x_{n}, x_{n-1})+\vert \alpha _{n}-\alpha_{n-1}\vert \rho \bigl(f(x_{n-1}), y_{n-1}\bigr). \end{aligned}
Putting, in Lemma 2.12, $$c_{n}=\rho(x_{n}, x_{n-1})$$, $$\gamma_{n}=(1-k)\alpha_{n}$$ and $$\eta_{n}=\frac{1}{1-k}\vert 1-\frac{\alpha_{n-1}}{\alpha_{n}}\vert \rho (f(x_{n-1}), y_{n-1})$$, we get by (C2) and (C3) that $$\lim_{n\to\infty} \rho(x_{n+1},x_{n})=0$$.
Step 3. We show that $$\{x_{n}\}$$ converges strongly to a point $$\tilde{x}\in \operatorname{Fix}(T)$$ with $$\tilde{x}=P_{\operatorname{Fix}(T)}(f(\tilde{x}))$$. For each $$s\in(0,1)$$, let $$x_{s}$$ be defined by (2). By Theorem 3.1, $$\{x_{s}\}$$ converges strongly to a point $$\tilde{x}\in \operatorname{Fix}(T)$$ and $$\tilde{x}=P_{\operatorname{Fix}(T)}(f(\tilde{x}))$$. We note that
\begin{aligned} \operatorname{dist}\bigl(x_{n}, T(x_{n})\bigr) \leq& \rho(x_{n},y_{n}) \\ \leq& \rho(x_{n},x_{n+1})+\rho(x_{n+1},y_{n}) \\ \leq& \rho(x_{n},x_{n+1})+\alpha_{n}\rho \bigl(f(x_{n}),y_{n}\bigr) \to 0 \quad \text{as } n\to\infty. \end{aligned}
Again by Theorem 3.1, we have $$\mu_{n}(\rho^{2}(f(\tilde{x}),\tilde{x})- \rho^{2}(f(\tilde{x}),x_{n})) \leq0$$ for all Banach limits μ. Moreover, since $$\lim_{n\to\infty} \rho(x_{n+1},x_{n})=0$$,
$$\limsup_{n\rightarrow\infty} \bigl[ \bigl(\rho^{2}\bigl(f( \tilde{x}),\tilde {x}\bigr)-\rho^{2}\bigl(f(\tilde{x}),x_{n+1} \bigr) \bigr)- \bigl(\rho^{2}\bigl(f(\tilde {x}),\tilde{x}\bigr)- \rho^{2}\bigl(f(\tilde{x}),x_{n}\bigr) \bigr) \bigr]=0.$$
It follows from Lemma 2.11 that
$$\limsup_{n\rightarrow\infty} \bigl(\rho^{2}\bigl(f( \tilde{x}),\tilde {x}\bigr)-\rho^{2}\bigl(f(\tilde{x}),x_{n} \bigr) \bigr)\leq 0.$$
(10)
For each $$n\in\mathbb{N}$$, we set $$z_{n}=\alpha_{n} \tilde{x}\oplus (1-\alpha_{n}) y_{n}$$. It follows from Lemmas 2.8 and 2.9 that
\begin{aligned} \rho^{2}(x_{n+1},\tilde{x}) \leq& \rho^{2}(z_{n}, \tilde{x})+2 \langle\overrightarrow{x_{n+1}z_{n}}, \overrightarrow{x_{n+1}\tilde{x}}\rangle \\ \leq& (1-\alpha_{n})^{2}\rho^{2}(y_{n}, \tilde{x}) +2 \bigl[\alpha_{n}\bigl\langle \overrightarrow{f(x_{n})z_{n}}, \overrightarrow{x_{n+1}\tilde{x}}\bigr\rangle +(1-\alpha_{n}) \langle \overrightarrow{y_{n}z_{n}}, \overrightarrow{x_{n+1}\tilde{x}}\rangle \bigr] \\ \leq& (1-\alpha_{n})^{2}H^{2} \bigl(T(x_{n}),T(\tilde{x})\bigr) +2 \bigl[\alpha_{n}^{2} \bigl\langle \overrightarrow{f(x_{n})\tilde{x}}, \overrightarrow{x_{n+1}\tilde{x}}\bigr\rangle \\ & {}+\alpha_{n}(1-\alpha_{n})\bigl\langle \overrightarrow{f(x_{n})y_{n}}, \overrightarrow{x_{n+1}\tilde{x}}\bigr\rangle +\alpha_{n}(1-\alpha_{n})\langle \overrightarrow{y_{n}\tilde{x}}, \overrightarrow{x_{n+1}\tilde{x}}\rangle\bigr] \\ \leq& (1-\alpha_{n})^{2}\rho^{2}(x_{n}, \tilde{x}) +2 \bigl[\alpha_{n}^{2}\bigl\langle \overrightarrow{f(x_{n})\tilde{x}}, \overrightarrow{x_{n+1}\tilde{x}}\bigr\rangle +\alpha_{n}(1-\alpha_{n})\bigl\langle \overrightarrow{f(x_{n})\tilde{x}}, \overrightarrow{x_{n+1}\tilde{x}}\bigr\rangle \bigr] \\ =& (1-\alpha_{n})^{2}\rho^{2}(x_{n}, \tilde{x}) +2 \alpha_{n}\bigl\langle \overrightarrow{f(x_{n})\tilde{x}}, \overrightarrow{x_{n+1}\tilde{x}}\bigr\rangle \\ =& (1-\alpha_{n})^{2}\rho^{2}(x_{n}, \tilde{x}) +2 \alpha_{n}\bigl\langle \overrightarrow{f(x_{n})f(\tilde{x})}, \overrightarrow{x_{n+1}\tilde{x}}\bigr\rangle +2 \alpha_{n}\bigl\langle \overrightarrow{f(\tilde{x})\tilde{x}}, \overrightarrow{x_{n+1}\tilde{x}}\bigr\rangle \\ \leq& (1-\alpha_{n})^{2}\rho^{2}(x_{n}, \tilde{x}) +2k\alpha_{n} \rho(x_{n},\tilde{x}) \rho(x_{n+1},\tilde{x})+2 \alpha_{n}\bigl\langle \overrightarrow{f(\tilde{x})\tilde{x}}, \overrightarrow{x_{n+1}\tilde{x}} \bigr\rangle \\ \leq& (1-\alpha_{n})^{2}\rho^{2}(x_{n}, \tilde{x}) +k\alpha_{n} \bigl[ \rho^{2}(x_{n}, \tilde{x})+\rho^{2}(x_{n+1},\tilde{x}) \bigr] \\ & {}+\alpha_{n} \bigl[\rho^{2}\bigl(f(\tilde{x}),\tilde{x} \bigr)+\rho^{2}(x_{n+1},\tilde{x})-\rho^{2}\bigl(f(\tilde{x}),x_{n+1}\bigr) \bigr], \end{aligned}
yielding
\begin{aligned} \rho^{2}(x_{n+1},\tilde{x}) \leq&\frac{1-(2-k)\alpha _{n}+\alpha^{2}_{n}}{1-(1+k)\alpha_{n}} \rho^{2}(x_{n},\tilde{x}) +\frac{\alpha_{n}}{1-(1+k)\alpha_{n}} \bigl[ \rho^{2}\bigl(f(\tilde{x}),\tilde {x}\bigr)-\rho^{2}\bigl(f( \tilde{x}),x_{n+1}\bigr) \bigr] \\ \leq&\frac{1-(2-k)\alpha_{n}}{1-(1+k)\alpha_{n}}\rho^{2}(x_{n},\tilde {x})+ \frac{\alpha^{2}_{n}}{1-(1+k)\alpha_{n}}M \\ &{}+\frac{\alpha_{n}}{1-(1+k)\alpha_{n}} \bigl[\rho^{2}\bigl(f(\tilde{x}),\tilde {x} \bigr)-\rho^{2}\bigl(f(\tilde{x}),x_{n+1}\bigr) \bigr], \end{aligned}
where $$M\geq \sup_{n\in\mathbb{N}} \{\rho^{2}(x_{n},\tilde{x}) \}$$. It follows that
$$\rho^{2}(x_{n+1},\tilde{x})\leq (1- \gamma_{n})\rho^{2}(x_{n},\tilde{x})+ \gamma_{n}\eta_{n},$$
(11)
where
$$\gamma_{n}=\frac{(1-2k)\alpha_{n}}{1-(1+k)\alpha_{n}}\quad \text{and}\quad \eta_{n} =\frac{\alpha_{n}}{1-2k}M+\frac{1}{1-2k} \bigl[\rho^{2}\bigl(f( \tilde{x}),\tilde {x}\bigr)-\rho^{2}\bigl(f(\tilde{x}),x_{n+1} \bigr) \bigr].$$
Since $$\alpha_{n}\in (0,\frac{1}{2-k} )$$ and $$k\in [0,\frac{1}{2} )$$, we have $$\gamma_{n}\in(0,1)$$. By (C1) and (10), $$\limsup_{n} \eta_{n} \leq0$$. Applying Lemma 2.12 to the inequality (11), we can conclude that $$x_{n}\to\tilde{x}$$ as $$n\to\infty$$. Therefore, the proof is complete. □
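To see numerically why (C1) and (C2) force convergence in a recursion of the form (11), here is a minimal sketch, assuming Lemma 2.12 has the usual form: $$s_{n+1}\leq(1-\gamma_{n})s_{n}+\gamma_{n}\eta_{n}$$ with $$\gamma_{n}\in(0,1)$$, $$\sum_{n}\gamma_{n}=\infty$$, and $$\limsup_{n}\eta_{n}\leq0$$ together imply $$s_{n}\to0$$. The particular choices of $$\gamma_{n}$$ and $$\eta_{n}$$ below are hypothetical:

```python
# Numeric sketch of a recursion of the shape (11):
#     s_{n+1} <= (1 - gamma_n) s_n + gamma_n * eta_n.
# Hypothetical choices: gamma_n = 1/(n+2), so sum gamma_n = infinity,
# and eta_n = 1/(n+1), so limsup eta_n <= 0.

def recursion_bound(s0, n_steps):
    s = s0
    for n in range(n_steps):
        gamma = 1.0 / (n + 2)   # in (0, 1), non-summable
        eta = 1.0 / (n + 1)     # limsup eta_n = 0
        s = (1 - gamma) * s + gamma * eta
    return s

print(recursion_bound(100.0, 10**5))  # small: s_n tends to 0
```

Even from a large initial value, the sequence is driven to 0, which is exactly the mechanism that closes the proof above.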

### Corollary 3.4

Let E be a nonempty closed convex subset of a complete $$\operatorname{CAT}(0)$$ space X, and $$T : E \to K(E)$$ be a nonexpansive mapping satisfying the endpoint condition. Suppose that $$u, x_{1}\in E$$ are arbitrarily chosen and $$\{x_{n}\}$$ is defined by
$$x_{n+1}=\alpha_{n} u\oplus(1-\alpha_{n})y_{n},$$
where $$y_{n}\in T(x_{n})$$ is such that $$\rho(y_{n},y_{n+1})\leq\rho(x_{n},x_{n+1})$$ for all $$n\in\mathbb{N}$$ and $$\{\alpha_{n}\}$$ is a sequence in $$(0,\frac{1}{2} )$$ satisfying (C1), (C2), and (C3). Then $$\{x_{n}\}$$ converges strongly to the unique nearest point of u in $$\operatorname{Fix}(T)$$.

### Proof

We define $$f:E\to E$$ by $$f(x)=u$$ for all $$x\in E$$. Then f is a contraction with $$k=0$$. The conclusion follows immediately from Theorem 3.3. □
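The Halpern-type scheme of Corollary 3.4 can be illustrated on the real line, which is a complete $$\operatorname{CAT}(0)$$ space where a ⊕-combination is the usual convex combination. All data below are hypothetical: $$T(x)=[-|x|/2,|x|/2]$$ is nonexpansive, satisfies the endpoint condition ($$T(0)=\{0\}$$), and has $$\operatorname{Fix}(T)=\{0\}$$; the selection $$y_{n}=x_{n}/2\in T(x_{n})$$ satisfies $$|y_{n}-y_{n+1}|\leq|x_{n}-x_{n+1}|$$:

```python
# Halpern-type iteration x_{n+1} = alpha_n * u + (1 - alpha_n) * y_n
# on the real line.  Hypothetical data: T(x) = [-|x|/2, |x|/2],
# selection y_n = x_n / 2, and alpha_n = 1/(n+2) in (0, 1/2),
# which satisfies (C1), (C2), and (C3).

def halpern(u, x1, n_steps):
    x = x1
    for n in range(1, n_steps + 1):
        alpha = 1.0 / (n + 2)
        y = x / 2.0                 # a selection y_n in T(x_n)
        x = alpha * u + (1 - alpha) * y
    return x

print(halpern(u=4.0, x1=10.0, n_steps=2000))  # approaches 0
```

Here $$\operatorname{Fix}(T)=\{0\}$$, so the unique nearest point of u in $$\operatorname{Fix}(T)$$ is 0, matching the limit observed numerically.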

### Remark 3.5

There are some results in Banach spaces related to our work (see, e.g., ). Notice that our approach is quite different from that of .

As we have observed, Theorem 3.3 can be viewed as an extension of Theorem 1.3 for a contraction f with $$k\in [0,\frac{1}{2} )$$. It remains an open question whether Theorem 3.3 holds for $$k\in [\frac{1}{2}, 1 )$$.

### Question 3.6

Let E be a nonempty closed convex subset of a complete $$\operatorname{CAT}(0)$$ space X and $$T : E \to K(E)$$ be a nonexpansive mapping satisfying the endpoint condition. Let $$f:E\to E$$ be a contraction with $$k\in[0,1)$$, let $$\{\alpha_{n}\}$$ be a sequence in $$(0,1)$$ satisfying (C1), (C2), and (C3), and let $$\{x_{n}\}$$ be the sequence defined by (9). Does $$\{x_{n}\}$$ converge to $$\tilde{x}=P_{\operatorname{Fix}(T)}(f(\tilde{x}))$$?

## 4 $$\mathbb{R}$$-Trees

### Definition 4.1

An $$\mathbb{R}$$-tree is a geodesic space X such that:
1. (i)

there is a unique geodesic segment $$[x, y]$$ joining each pair of points $$x, y \in X$$;

2. (ii)

if $$[y,x]\cap[x,z]=\{x\}$$, then $$[y,x]\cup[x,z]=[y,z]$$.

By (i) and (ii) we have
1. (iii)

if $$u,v,w\in X$$, then $$[u,v]\cap[u,w]=[u,z]$$ for some $$z\in X$$.

It is well known that every $$\mathbb{R}$$-tree is a $$\operatorname{CAT}(0)$$ space which does not contain the Euclidean plane. To avoid the endpoint condition, we prefer to work in $$\mathbb{R}$$-trees. Although the $$\mathbb{R}$$-tree structure is not strong enough to guarantee that every nonexpansive mapping satisfies the endpoint condition (see Example 5.3 in ), it is strong enough for our theorems to hold without this condition.
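A concrete model may help: the "spider" obtained by gluing rays at a common hub is a standard example of a complete $$\mathbb{R}$$-tree (this model is illustrative and not taken from the paper). A point is a pair (branch, t) with $$t\geq0$$, all points (b, 0) being identified with the hub; two points on different rays are joined by the unique geodesic through the hub, which reflects properties (i) and (ii):

```python
# The "spider" R-tree: rays glued at a hub.  A point is (branch, t),
# t >= 0, and every (b, 0.0) represents the same hub point.
# Illustrative model, not taken from the paper.

def dist(p, q):
    (bp, tp), (bq, tq) = p, q
    if bp == bq:
        return abs(tp - tq)   # same ray: ordinary segment
    return tp + tq            # different rays: geodesic passes the hub

y, z = ("b", 3.0), ("c", 1.0)
hub = ("a", 0.0)
# Geodesics between distinct rays concatenate at the hub, as in (ii):
print(dist(y, z) == dist(y, hub) + dist(hub, z))  # True
```

The concatenation identity shown is exactly the gluing behavior described by (ii): two segments meeting only at a point combine into a single geodesic.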

Let E be a closed convex subset of a complete $$\mathbb{R}$$-tree $$(X,\rho)$$ and $$T:E\to \mathit{BCC}(E)$$ a multivalued mapping. Then, by Theorem 4.1 of , there exists a single-valued mapping $$t:E\to E$$ such that $$t(x)\in T(x)$$ and
$$\rho\bigl(t(x),t(y)\bigr)\leq H\bigl(T(x),T(y)\bigr)\quad \text{for all } x,y\in E.$$
(12)
In this case, we call t a nonexpansive selection of T.
Let $$f:E\to E$$ be a contraction and fix $$x_{1}\in E$$. We define a sequence $$\{x_{n}\}$$ in E by
$$x_{n+1}=\alpha_{n} f(x_{n})\oplus (1- \alpha_{n})y_{n},$$
(13)
where $$y_{n}=t(x_{n})\in T(x_{n})$$ for all $$n\in\mathbb{N}$$.
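Scheme (13) can be sketched numerically on the real line, the simplest complete $$\mathbb{R}$$-tree. The data below are hypothetical: $$T(x)=[-|x|/2,|x|/2]$$ with nonexpansive selection $$t(x)=|x|/2$$ (so (12) holds with equality), contraction $$f(x)=x/4+1$$ with constant $$k=1/4$$, and $$\alpha_{n}=1/(n+2)\in (0,\frac{1}{2-k} )$$ satisfying (C1), (C2), and (C3):

```python
# Viscosity scheme (13): x_{n+1} = alpha_n * f(x_n) + (1 - alpha_n) * t(x_n)
# on the real line.  Hypothetical data: T(x) = [-|x|/2, |x|/2],
# selection t(x) = |x|/2, contraction f(x) = x/4 + 1 (k = 1/4),
# alpha_n = 1/(n+2) < 1/(2-k) = 4/7.  Here Fix(T) = {0}.

def viscosity(x1, n_steps):
    x = x1
    for n in range(1, n_steps + 1):
        alpha = 1.0 / (n + 2)
        y = abs(x) / 2.0               # y_n = t(x_n) in T(x_n)
        x = alpha * (x / 4.0 + 1.0) + (1 - alpha) * y
    return x

print(viscosity(x1=5.0, n_steps=5000))  # approaches 0
```

Since $$\operatorname{Fix}(T)=\{0\}$$ in this toy model, the limit predicted by Theorem 4.2 below is $$\tilde{x}=P_{\operatorname{Fix}(T)}(f(\tilde{x}))=0$$, which the iteration approaches.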

### Theorem 4.2

Let E be a nonempty closed convex subset of a complete $$\mathbb{R}$$-tree X, and $$T : E \to \mathit{BCC}(E)$$ be a nonexpansive mapping with $$\operatorname{Fix}(T)\neq\emptyset$$. Let $$f:E\to E$$ be a contraction with $$k \in [0,1)$$ and $$\{\alpha_{n}\}$$ be a sequence in $$(0,\frac{1}{2-k} )$$ satisfying:
1. (C1)

$$\lim_{n\rightarrow\infty}\alpha_{n}=0$$;

2. (C2)

$$\sum_{n=1}^{\infty}\alpha_{n}=\infty$$;

3. (C3)

$$\sum^{\infty}_{n=1}|\alpha_{n}-\alpha_{n+1}|<\infty$$ or $$\lim_{n\to\infty} (\frac{\alpha_{n}}{\alpha_{n+1}} )=1$$.

Then the sequence $$\{x_{n}\}$$ defined by (13) converges strongly to $$\tilde{x}=P_{\operatorname{Fix}(T)}(f(\tilde{x}))$$.

### Proof

By Theorem 4.2 of  (see also Theorem 2 of ), $$\operatorname{Fix}(t)=\operatorname{Fix}(T)$$. This set is closed and convex by Proposition 1 of , and t is nonexpansive by (12) since T is nonexpansive. The conclusion now follows from Theorem 1.3. □

### Corollary 4.3

Let E be a nonempty closed convex subset of a complete $$\mathbb{R}$$-tree X, and $$T : E \to \mathit{BCC}(E)$$ be a nonexpansive mapping with $$\operatorname{Fix}(T)\neq\emptyset$$. Let $$\{\alpha_{n}\}$$ be a sequence in $$(0,\frac{1}{2})$$ satisfying (C1), (C2), and (C3). Fix $$x_{1}\in E$$ and let $$\{x_{n}\}$$ be a sequence defined by
$$x_{n+1}=\alpha_{n} u\oplus(1-\alpha_{n})t(x_{n}), \quad n\in\mathbb{N},$$
where $$t:E\to E$$ is a nonexpansive selection of T with $$\operatorname{Fix}(t)=\operatorname{Fix}(T)$$. Then $$\{x_{n}\}$$ converges strongly to the unique nearest point of u in $$\operatorname{Fix}(T)$$.

## Declarations

### Acknowledgements

This research was supported by Chiang Mai University and Thailand Research Fund under Grant RTA5780007. 