
Existence and convergence theorems of fixed points for multi-valued SCC-, SKC-, KSC-, SCS- and C-type mappings in hyperbolic spaces

Abstract

The purpose of this paper is to introduce the concepts of multi-valued SCC-, SKC-, KSC-, SCS- and C-type mappings and propose a classical Kuhfitting-type iteration (Kuhfitting in Pac. J. Math. 97(1):137-139, 1981) for finding a common fixed point of the SKC-, KSC-, SCS- and C-type multi-valued mappings in the setting of hyperbolic spaces. Under suitable conditions some Δ-convergence theorems and strong convergence theorems for the iterative sequence generated by the proposed scheme to approximate a common fixed point of a finite family of SKC-, KSC-, SCS- and C-type multi-valued mappings are proved. The results presented in the paper extend and improve some recent results announced in the current literature.

1 Introduction and preliminaries

In 2008, Suzuki [1] introduced a class of single-valued mappings satisfying the following condition (C):

$$ \frac{1}{2}\|x - Tx\| \le\|x - y\|\quad \mbox{implies}\quad \|Tx - Ty\|\le\|x - y\|. $$
(C)

Such mappings lie between the classes of nonexpansive and quasi-nonexpansive mappings. Later, mappings of this kind were called Suzuki-type nonexpansive mappings (or single-valued C-type generalized nonexpansive mappings). In [1], the author proved the existence of fixed points for such mappings.

In 2010, Nanjaras et al. [2] gave some characterizations of fixed points for mappings satisfying condition (C) in the framework of \(\operatorname{CAT}(0)\) spaces. In 2012, Dhompongsa et al. [3] proved some strong convergence theorems for multi-valued nonexpansive mappings in \(\operatorname{CAT}(0)\) spaces. In 2011, the notion of the C-condition was generalized by Karapinar and Tas [4], and some new fixed point theorems were obtained in the setting of Banach spaces.

More recently, Ghoncheh and Razani [5] generalized the C-condition introduced in [4] to the multi-valued setting and proved some fixed point existence theorems for these mappings in Ptolemy metric spaces.

The purpose of this paper is first to introduce the concepts of multi-valued SCC-, SKC-, KSC-, SCS- and C-type mappings and then to propose a classical Kuhfitting-type iteration [6] for finding a common fixed point of such multi-valued mappings in the setting of hyperbolic spaces (see the definition below). Under suitable conditions some Δ-convergence theorems and strong convergence theorems are proved for the iterative sequence generated by the proposed scheme to approximate a common fixed point. The results presented in the paper extend and improve some recent results announced in the current literature [1–15].

For this purpose, let us first recall some definitions, notations, and conclusions which will be needed in proving our main results.

A hyperbolic space is a metric space \((X,d)\) together with a mapping \(W: X^{2}\times[0,1]\rightarrow X\) satisfying

(i) \(d(u,W(x,y,\alpha))\leq(1-\alpha)d(u,x)+\alpha d(u,y)\);

(ii) \(d(W(x,y,\alpha),W(x,y,\beta))=|\alpha-\beta|d(x,y)\);

(iii) \(W(x,y,\alpha)=W(y,x,(1-\alpha))\);

(iv) \(d(W(x,z,\alpha),W(y,w,\alpha))\leq(1-\alpha)d(x,y)+\alpha d(z,w)\),

for all \(x,y,z,w\in X\) and \(\alpha,\beta\in[0,1]\).
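Although hyperbolic spaces are far more general, a quick sanity check of these axioms is possible in the linear model: in a normed space, the choice \(W(x,y,\alpha)=(1-\alpha)x+\alpha y\) (with the convention, also used in the proofs of Section 2, that α weights the second argument) satisfies (i)-(iv). The following sketch is purely illustrative and spot-checks the axioms numerically in \(\mathbb{R}^{2}\).

```python
import itertools
import random

def d(p, q):
    # Euclidean metric on R^2
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5

def W(x, y, a):
    # convex-combination model of W(x, y, alpha) in a normed space
    return tuple((1 - a) * xi + a * yi for xi, yi in zip(x, y))

random.seed(0)
pts = [tuple(random.uniform(-1, 1) for _ in range(2)) for _ in range(4)]
alphas = [0.0, 0.25, 0.5, 0.75, 1.0]
tol = 1e-12

for x, y, z, w in itertools.permutations(pts):
    for a in alphas:
        for u in pts:
            # axiom (i)
            assert d(u, W(x, y, a)) <= (1 - a) * d(u, x) + a * d(u, y) + tol
        for b in alphas:
            # axiom (ii)
            assert abs(d(W(x, y, a), W(x, y, b)) - abs(a - b) * d(x, y)) <= tol
        # axiom (iii)
        assert d(W(x, y, a), W(y, x, 1 - a)) <= tol
        # axiom (iv)
        assert d(W(x, z, a), W(y, w, a)) <= (1 - a) * d(x, y) + a * d(z, w) + tol
print("axioms (i)-(iv) verified on sample points")
```

The assertions mirror each axiom term by term; any other normed space and sample set would serve equally well.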

A nonempty subset K of a hyperbolic space X is said to be convex if \(W(x,y,\alpha)\in K\) for all \(x,y\in K\) and \(\alpha\in[0,1]\). The class of hyperbolic spaces contains normed spaces and convex subsets thereof, the Hilbert ball equipped with the hyperbolic metric [16], Hadamard manifolds as well as \(\operatorname{CAT}(0)\) spaces in the sense of Gromov (see [17]).

A hyperbolic space is uniformly convex [18] if for any given \(r>0\) and \(\epsilon\in(0,2]\), there exists \(\delta\in(0,1]\) such that for all \(u,x,y\in X\),

$$d\biggl(W\biggl(x,y,\frac{1}{2}\biggr),u\biggr)\leq(1-\delta)r, $$

provided \(d(x,u)\leq r\), \(d(y,u)\leq r\) and \(d(x,y)\geq\epsilon r\).

A map \(\eta:(0,\infty)\times(0,2]\rightarrow(0,1]\), which provides such \(\delta=\eta(r,\epsilon)\) for given \(r>0\) and \(\epsilon\in (0,2]\), is known as a modulus of uniform convexity of X. η is said to be monotone if it decreases with r (for fixed ϵ), i.e., for any given \(\epsilon>0\) and for any \(r_{2}\geq r_{1}>0\), we have \(\eta(r_{2},\epsilon)\leq\eta(r_{1},\epsilon)\).

In the sequel, let \((X,d)\) be a metric space and K be a nonempty subset of X. We shall denote by \(F(T)=\{x\in K: Tx=x\}\) the fixed point set of a mapping T.

If \(T: K \to2^{K}\) is a multi-valued mapping, we shall use \(F(T) =\{x\in K: x\in Tx\}\) to denote the fixed point set of T in K.

K is said to be proximal, if for each \(x\in X\), there exists an element \(y\in K\) such that

$$d(x,y)=d(x,K):=\inf_{z\in K}d(x,z). $$

Remark 1.1

It is well known that each weakly compact convex subset of a Banach space is proximal; likewise, each closed convex subset of a uniformly convex Banach space is proximal.

In the sequel, we denote by \(\mathit{CB}(X)\) and \(P(X)\) the collection of all nonempty closed bounded subsets of X and the collection of all nonempty proximal closed bounded subsets of X, respectively. The Hausdorff metric H on \(\mathit{CB}(X)\) is defined by

$$H(A,B):=\max\Bigl\{ \sup_{x\in A}d(x,B),\sup_{y\in B}d(y,A) \Bigr\} ,\quad \forall A,B\in \mathit{CB}(X). $$
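For finite point sets, both suprema in this definition are maxima and H can be computed directly from the formula. The sketch below is illustrative only; it uses the \(l^{\infty}\) metric, which also appears later in Example 1.7.

```python
def point_to_set(x, B, dist):
    """d(x, B) = inf_{z in B} dist(x, z); a minimum for finite B."""
    return min(dist(x, z) for z in B)

def hausdorff(A, B, dist):
    """H(A, B) = max{ sup_{x in A} d(x, B), sup_{y in B} d(y, A) }."""
    return max(max(point_to_set(x, B, dist) for x in A),
               max(point_to_set(y, A, dist) for y in B))

# Example with the l-infinity metric used in Example 1.7:
linf = lambda p, q: max(abs(p[0] - q[0]), abs(p[1] - q[1]))
A = {(0, 1)}
B = {(1, 1), (0, 0)}
print(hausdorff(A, B, linf))  # both one-sided suprema equal 1, so H = 1
```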

Definition 1.2

A multi-valued mapping \(T: K\to \mathit{CB}(K)\) is said to be

(i) nonexpansive if for all \(x,y\in K\),

$$H(Tx,Ty)\leq d(x,y); $$

(ii) quasi-nonexpansive if \(F(T)\neq\emptyset\) and

$$H(Tx,Tp)\leq d(x,p),\quad \forall p\in F(T), x\in K. $$

Definition 1.3

Let \(T: K \to \mathit{CB}(K)\) be a multi-valued mapping.

(i) T is said to be SCC-type if

$$\frac{1}{2}d(x, Tx)\le d(x, y) \quad \mbox{implies}\quad H(Tx, Ty) \le M(x, y),\quad \forall x, y \in K, $$

where

$$M(x, y) = \max \bigl\{ d(x, y), d(x, Tx), d(y,Ty), d(x, Ty), d(y, Tx)\bigr\} ; $$

(ii) T is said to be SKC-type if

$$\frac{1}{2}d(x, Tx) \le d(x, y)\quad \mbox{implies}\quad H(Tx, Ty) \le N(x, y),\quad \forall x, y \in K, $$

where

$$N(x, y) = \max \biggl\{ d(x, y), \frac{1}{2}\bigl\{ d(x, Tx)+ d(y,Ty)\bigr\} , \frac {1}{2}\bigl\{ d(x, Ty)+ d(y,Tx)\bigr\} \biggr\} ; $$

(iii) T is said to be KSC-type if

$$\frac{1}{2}d(x, Tx) \le d(x, y) \quad \mbox{implies}\quad H(Tx, Ty) \le \frac{1}{2}\bigl\{ d(x, Tx)+ d(y,Ty)\bigr\} , \quad \forall x, y \in K; $$

(iv) T is said to be CSC-type if

$$\frac{1}{2}d(x, Tx) \le d(x, y) \quad \mbox{implies} \quad H(Tx, Ty) \le \frac{1}{2}\bigl\{ d(x, Ty)+ d(y,Tx)\bigr\} ,\quad \forall x, y \in K; $$

(v) T is said to be C-type if

$$\frac{1}{2}d(x, Tx) \le d(x, y) \quad \mbox{implies}\quad H(Tx, Ty) \le d(x,y),\quad \forall x, y \in K. $$

Remark 1.4

1. It is obvious that each SKC-type, KSC-type, CSC-type and C-type multi-valued mapping is a special case of an SCC-type multi-valued mapping; likewise, each KSC-type, CSC-type and C-type multi-valued mapping is a special case of an SKC-type multi-valued mapping.

2. Each multi-valued nonexpansive mapping is a special case of a C-type multi-valued mapping, and hence also a special case of SKC-type and SCC-type multi-valued mappings.

Proposition 1.5

If \(T : K \to \mathit{CB}(K)\) is a multi-valued SKC-type mapping with \(F(T) \neq \emptyset\), then T is a multi-valued quasi-nonexpansive mapping.

Proof

For any \(p \in F(T)\) and \(x \in K\), we have

$$\frac{1}{2}d(p, Tp) = 0 \le d(p,x). $$

Since T is a multi-valued SKC-type mapping, it follows that

$$\begin{aligned} H(Tp, Tx) \le& N(p,x) \\ : =& \max\biggl\{ d(p, x), \frac{1}{2}\bigl\{ d(p, Tp)+ d(x,Tx)\bigr\} , \frac{1}{2}\bigl\{ d(x, Tp)+ d(p,Tx)\bigr\} \biggr\} . \end{aligned}$$
(1.1)

(a) If \(N(p,x) = d(p, x)\), then \(H(Tp, Tx) \le d(p, x)\). The conclusion holds.

(b) If \(N(p,x) = \frac{1}{2}\{ d(p, Tp)+ d(x,Tx)\}\), then

$$\begin{aligned} H(Tp, Tx) &\le\frac{1}{2}\bigl\{ d(p, Tp)+ d(x,Tx)\bigr\} = \frac {1}{2}d(x,Tx) \\ & \le\frac{1}{2}\bigl\{ d(x,Tp) + H(Tp, Tx)\bigr\} \le\frac{1}{2} \bigl\{ d(x,p) + H(Tp, Tx)\bigr\} . \end{aligned}$$

Simplifying we have

$$H(Tp, Tx) \le d(p, x). $$

The conclusion holds.

(c) If \(N(p,x) = \frac{1}{2}\{ d(x, Tp)+ d(p,Tx)\}\), then

$$\begin{aligned} H(Tp, Tx) &\le\frac{1}{2}\bigl\{ d(x, Tp)+ d(p,Tx)\bigr\} \le \frac{1}{2}\bigl\{ d(x,p) + d(p,Tp) + H(Tp, Tx)\bigr\} \\ & \le\frac{1}{2}\bigl\{ d(x,p) + H(Tp, Tx)\bigr\} . \end{aligned}$$

Simplifying we have

$$H(Tp, Tx) \le d(p, x). $$

This completes the proof of Proposition 1.5. □

Remark 1.6

Since each KSC-type, CSC-type and C-type multi-valued mapping is an SKC-type multi-valued mapping, it follows from Proposition 1.5 that if their fixed point set is nonempty, then all of them are multi-valued quasi-nonexpansive mappings.

Example 1.7

(Examples of SKC-type multi-valued mapping [4])

Consider the space

$$\begin{aligned}& X = \bigl\{ (0,0), (0,1), (1,1), (1, 0)\bigr\} \\& \quad \mbox{with }l^{\infty}\mbox{ metric: } d\bigl((x_{1}, y_{1}), (x_{2}, y_{2})\bigr) = \max \bigl\{ |x_{1} - x_{2}|, |y_{1} - y_{2}|\bigr\} , \\& \quad \forall (x_{1}, y_{1}), (x_{2}, y_{2} ) \in X. \end{aligned}$$
(1.2)

Define a multi-valued mapping T on X by

$$ T(a, b) = \left \{ \textstyle\begin{array}{l@{\quad}l} \{(1, 1), (0, 0)\}& \mbox{if }(a, b) \neq (0,0), \\ \{(0, 1)\} &\mbox{if }(a, b ) = (0, 0). \end{array}\displaystyle \right . $$
(1.3)

Now we prove that T is an SKC-type mapping. In fact, take \(x = (0,0)\) and \(y = (1,1)\); then \(Tx = \{(0, 1)\}\), \(Ty = \{(1,1), (0,0)\}\), and \((1,1)\) is a fixed point of T in X. Hence we have

$$ \begin{aligned} &\frac{1}{2}d(x, Tx) = \frac{1}{2}d\bigl((0,0), (0, 1) \bigr) = \frac{1}{2} \le d(x, y)= d\bigl((0,0), (1,1)\bigr)= 1, \\ &H(Tx, Ty)= H\bigl(\bigl\{ (0, 1)\bigr\} , \bigl\{ (1,1), (0,0)\bigr\} \bigr) = d \bigl((0, 1), \bigl\{ (1,1), (0,0)\bigr\} \bigr) =1 \end{aligned} $$
(1.4)

and

$$\begin{aligned} N(x, y) =& \max \biggl\{ d\bigl((0,0), (1,1)\bigr), \frac{1}{2}\bigl\{ d \bigl((0,0), T\bigl((0,0)\bigr)\bigr) + d\bigl((1,1), T(1,1)\bigr)\bigr\} , \\ & \frac{1}{2}\bigl\{ d\bigl((0,0), T\bigl((1,1)\bigr)\bigr) + d \bigl((1,1), T(0,0)\bigr)\bigr\} \biggr\} \\ =& \max \biggl\{ 1, \frac{1}{2}\bigl\{ d\bigl((0,0), (0,1)\bigr) + d \bigl((1,1), \bigl\{ (1,1), (0,0)\bigr\} \bigr)\bigr\} , \\ & \frac{1}{2}\bigl\{ d\bigl((0,0), \bigl\{ (1,1), (0,0)\bigr\} \bigr) + d \bigl((1,1), (0,1)\bigr)\bigr\} \biggr\} \\ =& \max \biggl\{ 1, \frac{1}{2}, \frac{1}{2}\biggr\} = 1. \end{aligned}$$
(1.5)

Therefore we have

$$H(Tx, Ty) \le N(x, y). $$

In the same way one can verify that the SKC condition holds for all the other pairs of points in X. This implies that T is an SKC-type multi-valued mapping and that \((1,1)\) is the unique fixed point of T in X. Therefore T is also a quasi-nonexpansive mapping.
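The case-by-case verification can also be done mechanically. The following illustrative sketch (not part of the paper) brute-forces the SKC implication for every pair \((x,y)\) in the four-point space X, computing \(d(x,S)\), H, and N directly from their definitions.

```python
from itertools import product

X = [(0, 0), (0, 1), (1, 1), (1, 0)]

def d(p, q):
    # l-infinity metric on the plane
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def T(p):
    # the mapping of Example 1.7
    return [(0, 1)] if p == (0, 0) else [(1, 1), (0, 0)]

def d_set(x, S):
    # d(x, S) = min over the finite set S
    return min(d(x, z) for z in S)

def H(A, B):
    # Hausdorff metric for finite sets
    return max(max(d_set(a, B) for a in A), max(d_set(b, A) for b in B))

def N(x, y):
    return max(d(x, y),
               0.5 * (d_set(x, T(x)) + d_set(y, T(y))),
               0.5 * (d_set(x, T(y)) + d_set(y, T(x))))

for x, y in product(X, X):
    if 0.5 * d_set(x, T(x)) <= d(x, y):      # SKC premise
        assert H(T(x), T(y)) <= N(x, y)      # SKC conclusion
print("SKC condition holds for all pairs in X")
```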

In order to introduce the concept of Δ-convergence in the general setting of hyperbolic spaces [19], we first recall some definitions and conclusions.

Let \(\{x_{n}\}\) be a bounded sequence in a hyperbolic space X. For \(x\in X\), we define a continuous functional \(r(\cdot,\{x_{n}\}): X\to [0,\infty)\) by

$$ r\bigl(x,\{x_{n}\}\bigr)=\limsup_{n\to\infty} d(x_{n}, x). $$
(1.6)

The asymptotic radius \(r(\{x_{n}\})\) of \(\{x_{n}\}\) is given by

$$ r\bigl(\{x_{n}\}\bigr)=\inf\bigl\{ r\bigl(x,\{x_{n}\}\bigr): x\in X\bigr\} . $$
(1.7)

The asymptotic center \(A_{K}(\{x_{n}\})\) of a bounded sequence \(\{x_{n}\}\) with respect to \(K\subset X\) is the set

$$A_{K}\bigl(\{x_{n}\}\bigr)=\bigl\{ x\in K: r\bigl(x, \{x_{n}\}\bigr)\leq r\bigl(y,\{x_{n}\}\bigr), \forall y \in K\bigr\} . $$

This shows that the asymptotic center \(A_{K}(\{x_{n}\})\) of a bounded sequence is the set of minimizers of the functional \(r(\cdot,\{x_{n}\})\) on K. If the asymptotic center is taken with respect to X, then it is simply denoted by \(A(\{x_{n}\})\).

It is known that each uniformly convex Banach space and each \(\operatorname{CAT}(0)\) space enjoy the property that ‘each bounded sequence has a unique asymptotic center with respect to closed convex subsets.’ This property also holds in a complete uniformly convex hyperbolic space. This can be seen from the following.

Lemma 1.8

[20]

Let \((X,d,W)\) be a complete uniformly convex hyperbolic space with a monotone modulus of uniform convexity η. Then every bounded sequence \(\{x_{n}\}\) in X has a unique asymptotic center with respect to any nonempty closed convex subset K of X.

Recall that a sequence \(\{x_{n}\}\) in X is said to ‘Δ-converge to \(x\in X\)’ if x is the unique asymptotic center of \(\{u_{n}\}\) for every subsequence \(\{u_{n}\}\) of \(\{x_{n}\}\). In this case, we write \(\Delta\mbox{-} \lim_{n\to\infty}x_{n}=x\) and call x the Δ-limit of \(\{x_{n}\}\).
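As a toy illustration of these notions (an assumption-laden sketch, not from the paper), on the real line one can approximate \(r(x,\{x_{n}\})=\limsup_{n} |x_{n}-x|\) by a maximum over a late tail of the sequence and minimize it over a grid of candidate centers; for a sequence oscillating between points near −1 and +1, the asymptotic center is 0.

```python
def r(x, seq, tail=500):
    # approximate r(x, {x_n}) = limsup_n |x_n - x| by the max over a late tail
    return max(abs(s - x) for s in seq[-tail:])

# x_n oscillates between points near -1 and +1, so r(x, {x_n}) is roughly
# max(|x - 1|, |x + 1|), which is minimized at x = 0: the asymptotic center.
seq = [(-1) ** n * (1 + 1 / (n + 1)) for n in range(2000)]
K = [i / 100 for i in range(-200, 201)]   # grid of candidate centers
center = min(K, key=lambda x: r(x, seq))
print(center)  # prints 0.0
```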

Lemma 1.9

[15]

Let \((X,d,W)\) be a uniformly convex hyperbolic space with a monotone modulus of uniform convexity η. Let \(x\in X\) and \(\{\alpha_{n}\}\) be a sequence in \([a,b]\) for some \(a,b\in(0,1)\). If \(\{x_{n}\}\) and \(\{y_{n}\}\) are sequences in X such that \(\limsup_{n\to\infty}d(x_{n},x)\leq c\), \(\limsup_{n\to\infty }d(y_{n},x)\leq c\), \(\lim_{n\to\infty}d(W(x_{n},y_{n},\alpha_{n}),x)=c\) for some \(c\geq0\), then

$$\lim_{n\to\infty}d(x_{n},y_{n})=0. $$

Lemma 1.10

Let \((X,d, W)\) be a complete uniformly convex hyperbolic space with a monotone modulus of uniform convexity η. Then X possesses the Opial property, i.e., for any sequence \(\{x_{n}\} \subset X\) with \(\Delta\mbox{-} \lim_{n \to\infty} x_{n} = x\) and for any \(y\in X\) with \(y\neq x\), we have

$$\limsup_{n\to\infty} d(x_{n},x)< \limsup _{n\to\infty}d(x_{n},y). $$

Proof

In fact, the conclusion of Lemma 1.10 can be obtained from the uniqueness of the asymptotic center for each complete uniformly convex hyperbolic space with monotone modulus of uniform convexity. □

By a similar method as given in [19], we can also prove the following lemma.

Lemma 1.11

Let \((X,d, W)\) be a complete uniformly convex hyperbolic space with a monotone modulus of uniform convexity η and \(\{x_{n}\}\) be a bounded sequence in X with \(A(\{x_{n}\})=\{p\}\). Suppose that \(\{u_{n}\}\) is a subsequence of \(\{x_{n}\}\) with \(A(\{u_{n}\})=\{u\}\), and the sequence \(\{d(x_{n},u)\}\) is convergent, then \(p=u\).

In the sequel, we always assume that \((X,d, W)\) is a complete uniformly convex hyperbolic space with a monotone modulus of uniform convexity η, K is a nonempty closed subset of X, and \(T: K\to P(K)\) is an SKC-type multi-valued mapping. For any given \(x, y \in K\), since Tx and Ty are both nonempty bounded proximal subsets of K, there exist \(u_{x} \in Tx\), \(u_{y} \in Ty\) such that

$$ d(x, u_{x}) = d(x, Tx),\qquad d(y, u_{y}) = d(y, Ty). $$
(1.8)

Lemma 1.12

Let \((X,d, W)\), K, T be the same as above. For any given \(x, y \in K\), let \(u_{x} \in Tx\), \(u_{y} \in Ty\) be the points satisfying (1.8). Then the following conclusions hold:

(1) \(H(Tx,Tu_{x}) \le d(x,Tx) = d(x,u_{x})\);

(2) either \(\frac{1}{2}d(x, Tx) \le d(x, y)\) or \(\frac{1}{2}d(u_{x}, Tu_{x}) \le d(y, u_{x})\);

(3) either \(H(Tx, Ty) \le N(x,y)\) or \(H(Ty, Tu_{x}) \le N(y, u_{x})\), where

    $$ \left \{ \textstyle\begin{array}{l} N(x,y) : = \max\{d(x,y), \frac{1}{2}\{d(x, Tx) + d(y, Ty)\}, \frac {1}{2}\{d(x, Ty) + d(y, Tx)\}\}, \\ N(y, u_{x}): = \max\{d(y, u_{x}), \frac{1}{2}\{d(y, Ty) + d(u_{x}, Tu_{x})\}, \\ \hphantom{N(y, u_{x}): =} \frac{1}{2}\{d(y, Tu_{x}) + d(u_{x}, Ty)\}\}. \end{array}\displaystyle \right . $$
    (1.9)

Proof

For given \(x\in K\), since Tx is a nonempty bounded and proximal subset of K, there exists \(u_{x}\in Tx\) such that \(d(x,u_{x})=d(x,Tx)\). Since \(\frac{1}{2}d(x,Tx)\leq d(x,u_{x})\) and T is an SKC-type multi-valued mapping, we have

$$\begin{aligned} H(Tx,Tu_{x}) &\leq N(x,u_{x}) \\ & = \max\biggl\{ d(x, u_{x}), \frac{1}{2}\bigl\{ d(x, Tx) + d(u_{x}, Tu_{x})\bigr\} , \frac {1}{2}\bigl\{ d(x, Tu_{x}) + d(u_{x}, Tx)\bigr\} \biggr\} \\ & = \max\biggl\{ d(x, u_{x}), \frac{1}{2}\bigl\{ d(x, Tx) + d(u_{x}, Tu_{x})\bigr\} , \frac {1}{2}d(x, Tu_{x})\biggr\} . \end{aligned}$$
(1.10)

(a) If \(N(x,u_{x}) = d(x, u_{x})\), then \(H(Tx,Tu_{x})\leq d(x, u_{x})\). The conclusion of Lemma 1.12 holds.

(b) If \(N(x,u_{x}) = \frac{1}{2}\{d(x, Tx) + d(u_{x}, Tu_{x})\}\), then we have

$$H(Tx,Tu_{x})\leq\frac{1}{2}\bigl\{ d(x, Tx) + d(u_{x}, Tx) + H(Tx, Tu_{x})\bigr\} = \frac{1}{2} \bigl\{ d(x, Tx) + H(Tx, Tu_{x})\bigr\} . $$

Simplifying we have

$$H(Tx,Tu_{x})\le d(x, Tx). $$

(c) If \(N(x,u_{x}) = \frac{1}{2}d(x, Tu_{x})\), then we have

$$H(Tx,Tu_{x})\leq\frac{1}{2}d(x,Tu_{x}) \le \frac{1}{2}\bigl\{ d(x,Tx) + H(Tx, Tu_{x})\bigr\} . $$

Simplifying we have

$$H(Tx,Tu_{x})\leq d(x,Tx). $$

Conclusion (1) is proved.

It is obvious that conclusion (3) is a consequence of conclusion (2).

Next we prove conclusion (2).

In fact, if \(\frac{1}{2}d(x, Tx) > d(x, y)\) and \(\frac{1}{2}d(u_{x}, Tu_{x}) > d(u_{x}, y)\), then from conclusion (1) we have

$$\begin{aligned} d(x, u_{x}) & \le d(x,y) + d(y, u_{x}) \\ &< \frac{1}{2} \bigl\{ d(x, Tx) + d(u_{x}, Tu_{x})\bigr\} \\ & \le\frac{1}{2}\bigl\{ d(x, Tx) + d(u_{x}, Tx) + H(Tx, Tu_{x})\bigr\} \\ & \le\frac{1}{2}\bigl\{ d(x, Tx) + 0 + d(x, Tx)\bigr\} \ ( \mbox{by the conclusion (1)}) = d(x, Tx), \end{aligned}$$

which is a contradiction. Therefore the conclusion of Lemma 1.12 is proved. □

Lemma 1.13

Let \((X,d, W)\), K, T be the same as in Lemma  1.12. For any given \(x, y \in K\), the following conclusion holds:

$$ d(x, Ty)\le7 d(x, Tx) + d(x,y). $$
(1.11)

Proof

By conclusion (3) in Lemma 1.12, for any \(x, y \in K\), we have

$$\mbox{either}\quad H(Tx, Ty) \le N(x,y)\quad \mbox{or}\quad H(Ty, Tu_{x}) \le N(y, u_{x}), $$

where

$$\begin{aligned}& N(x,y): = \max\biggl\{ d(x,y), \frac{1}{2}\bigl\{ d(x, Tx) + d(y, Ty)\bigr\} , \frac {1}{2}\bigl\{ d(x, Ty) + d(y, Tx)\bigr\} \biggr\} , \\& N(y, u_{x}): = \max\biggl\{ d(y, u_{x}), \frac{1}{2} \bigl\{ d(y, Ty) + d(u_{x}, Tu_{x})\bigr\} , \frac{1}{2} \bigl\{ d(y, Tu_{x}) + d(u_{x}, Ty)\bigr\} \biggr\} , \end{aligned}$$

and \(u_{x} \in Tx\), \(u_{y} \in Ty \) are the points satisfying (1.8).

(I) Now we consider the first case: \(H(Tx, Ty) \le N(x,y)\).

(a) If \(N(x,y) = d(x,y)\), then \(H(Tx, Ty) \le d(x,y)\). Hence we have

$$d(x, Ty) \le d(x, Tx) + H(Tx, Ty) \le d(x, Tx) + d(x, y). $$

(b) If \(N(x,y) = \frac{1}{2}\{d(x, Tx) + d(y, Ty)\}\), then \(H(Tx, Ty) \le\frac{1}{2}\{d(x, Tx) + d(y, Ty)\}\). Hence we have

$$\begin{aligned} d(x, Ty) &\le d(x, Tx) + H(Tx, Ty) \le d(x, Tx) + \frac{1}{2}\bigl\{ d(x, Tx) + d(y, Ty)\bigr\} \\ & = \frac{3}{2} d(x, Tx) + \frac{1}{2}\bigl\{ d(x, Ty) + d(x, y)\bigr\} . \end{aligned}$$

Simplifying we have

$$d(x, Ty) \le3 d(x, Tx) + d(x, y). $$

(c) If \(N(x,y) = \frac{1}{2}\{d(x, Ty) + d(y, Tx)\}\), then we have \(H(Tx, Ty) \le\frac{1}{2}\{d(x, Ty) + d(y, Tx)\}\). This implies that

$$\begin{aligned} \begin{aligned} d(x, Ty) &\le d(x, Tx) + H(Tx, Ty) \le d(x, Tx) + \frac{1}{2}\bigl\{ d(x, Ty) + d(y, Tx)\bigr\} \\ &\le d(x, Tx) + \frac{1}{2}\bigl\{ d(x, Ty) + d(x, y) + d(x, Tx)\bigr\} . \end{aligned} \end{aligned}$$

Simplifying we have

$$d(x, Ty) \le3 d(x, Tx) + d(x, y). $$

This implies that in the first case, the conclusion of Lemma 1.13 is true.

(II) Now we consider the second case, i.e., \(H(Ty, Tu_{x}) \le N(y, u_{x})\).

(a) If \(N(y, u_{x}) = d(y, u_{x})\), then \(H(Ty, Tu_{x}) \le d(y, u_{x})\). By using Lemma 1.12(1), we have

$$\begin{aligned} d(x, Ty) &\le d(x, Tx) + H(Tx, Tu_{x}) + H(Tu_{x}, Ty) \le d(x, Tx) + d(x, Tx) + d(y, u_{x}) \\ &\le2 d(x, Tx) + d(x, y) + d(x, u_{x}) = 3d(x, Tx) + d(x, y). \end{aligned}$$

(b) If \(N(y, u_{x}) = \frac{1}{2}\{d(y, Ty) + d(u_{x}, Tu_{x})\}\), then \(H(Ty, Tu_{x}) \le\frac{1}{2}\{d(y, Ty) + d(u_{x}, Tu_{x})\}\). By using Lemma 1.12(1) again, we have

$$\begin{aligned} d(x, Ty) &\le d(x, Tx) + H(Tx, Tu_{x}) + H(Tu_{x}, Ty) \\ & \le d(x, Tx) + d(x, Tx) + \frac{1}{2}\bigl\{ d(y, Ty) + d(u_{x}, Tu_{x})\bigr\} \\ & \le2d(x, Tx) + \frac{1}{2}\bigl\{ d(y, x) + d(x, Ty) + d(u_{x},Tx) + H(Tx, Tu_{x})\bigr\} \\ &\le 2d(x, Tx) + \frac{1}{2}\bigl\{ d(y, x) + d(x, Ty) + 0 + d(x, Tx)\bigr\} . \end{aligned}$$

Simplifying we have

$$d(x, Ty)\le5d(x, Tx) + d(x, y). $$

(c) If \(N(y, u_{x}) = \frac{1}{2}\{d(y, Tu_{x}) + d(u_{x}, Ty)\}\), then \(H(Ty, Tu_{x}) \le \frac{1}{2}\{d(y, Tu_{x}) + d(u_{x}, Ty)\}\). By using Lemma 1.12(1) again, we have

$$\begin{aligned} d(x, Ty) &\le d(x, Tx) + H(Tx, Tu_{x}) + H(Tu_{x}, Ty) \\ & \le d(x, Tx) + d(x, Tx) + \frac{1}{2}\bigl\{ d(y, Tu_{x}) + d(u_{x}, Ty)\bigr\} \\ & \le2d(x, Tx) + \frac{1}{2}\bigl\{ d(y, x) + d(x,Tx) + H(Tx, Tu_{x}) + d(x, u_{x}) + d(x, Ty)\bigr\} \\ &= 2d(x, Tx) + \frac{1}{2}\bigl\{ d(x, y) + d(x,Tx) + d(x, Tx) + d(x, Tx) + d(x, Ty) \bigr\} . \end{aligned}$$

Simplifying we have

$$d(x, Ty)\le7d(x, Tx) + d(x, y). $$

This completes the proof of Lemma 1.13. □
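As an illustrative numerical sanity check (not part of the proof), inequality (1.11) can be verified directly on the four-point space and the SKC-type mapping T of Example 1.7:

```python
X = [(0, 0), (0, 1), (1, 1), (1, 0)]
d = lambda p, q: max(abs(p[0] - q[0]), abs(p[1] - q[1]))   # l-infinity metric
T = lambda p: [(0, 1)] if p == (0, 0) else [(1, 1), (0, 0)]  # Example 1.7
d_set = lambda x, S: min(d(x, z) for z in S)               # d(x, S)

# check d(x, Ty) <= 7 d(x, Tx) + d(x, y) for every pair in X
for x in X:
    for y in X:
        assert d_set(x, T(y)) <= 7 * d_set(x, T(x)) + d(x, y)
print("inequality (1.11) holds for all pairs in X")
```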

Lemma 1.14

Let \((X,d, W)\) be a complete uniformly convex hyperbolic space with a monotone modulus of uniform convexity η, K be a nonempty closed and convex subset of X and \(T: K\to P(K)\) be an SKC-type multi-valued mapping with convex values. Suppose \(\{x_{n}\}\) is a sequence in K such that \(\Delta\mbox{-} \lim_{n\to\infty}x_{n}=z\) and \(\lim_{n\to\infty}d(x_{n},Tx_{n})=0\). Then z is a fixed point of T.

Proof

By the assumption, Tz is a convex and proximal subset of K. Hence, for each \(x_{n}\), \(n\geq1\), there exists a point \(u_{z_{n}}\in Tz\) such that

$$d(x_{n},u_{z_{n}})=d(x_{n},Tz),\quad \forall n\geq1. $$

Taking \(x = x_{n}\) and \(y = z\) in Lemma 1.13, we have

$$ d(x_{n},u_{z_{n}})=d(x_{n}, Tz)\le7 d(x_{n}, Tx_{n}) + d(x_{n}, z). $$
(1.12)

Since \(\{u_{z_{n}}\}\) is a bounded sequence in Tz, by Lemma 1.8, there exists a subsequence \(\{u_{z_{n_{k}}}\} \subset\{u_{z_{n}}\}\) such that \(\Delta\mbox{-} \lim_{k \to\infty} u_{z_{n_{k}}} = u_{z} \in Tz\). Hence we have

$$d(x_{n_{k}}, u_{z})\le d(x_{n_{k}}, u_{z_{n_{k}}}) + d(u_{z_{n_{k}}}, u_{z}) \le7 d(x_{n_{k}}, Tx_{n_{k}}) + d(x_{n_{k}}, z) + d(u_{z_{n_{k}}}, u_{z}). $$

Taking the superior limit on both sides of the above inequality, we get

$$\begin{aligned} \limsup_{k\to\infty}d(x_{n_{k}}, u_{z})&\leq\limsup _{k\to\infty}\bigl\{ 7 d(x_{n_{k}}, Tx_{n_{k}}) + d(x_{n_{k}}, z)\bigr\} \\ & \le\limsup_{k\to\infty} d(x_{n_{k}}, z). \end{aligned}$$

Hence by Lemma 1.10, \(u_{z}=z\). Thus \(z\in Tz\) and the proof is completed. □

2 Main results

Now we are in a position to give the following existence and approximation results.

Theorem 2.1

Let \((X, d, W)\) be a complete uniformly convex hyperbolic space with a monotone modulus of uniform convexity η and K be a nonempty closed convex subset of X. Let \(T_{i}: K\to P(K)\) (\(i=1,2,\ldots,m\)) be a finite family of SKC-type multi-valued mappings with convex values. Suppose that \(\mathcal{F}=\bigcap^{m}_{i=1}F(T_{i})\neq\emptyset\) and \(T_{i}(p)=\{p\}\) for each \(p\in\mathcal{F}\). Let \(\{\alpha_{n,i}\}\subset [a,b]\subset(0,1)\) (\(i=1,2,\ldots,m\)). For arbitrarily chosen \(x_{1}\in K\), let \(\{x_{n}\}\) be the classical Kuhfitting-type iteration [6] defined by

$$ \left \{ \textstyle\begin{array}{l} y_{n,1}=W(x_{n},z_{n,1},\alpha_{n,1}), \\ y_{n,2}=W(x_{n},z_{n,2},\alpha_{n,2}), \\ \vdots \\ y_{n,m-1}=W(x_{n},z_{n,m-1},\alpha_{n,m-1}), \\ x_{n+1}=W(x_{n}, z_{n,m},\alpha_{n,m}),\quad n\geq1, \end{array}\displaystyle \right . $$
(2.1)

where \(z_{n,1}\in T_{1}(x_{n})\) and \(z_{n,k}\in T_{k}(y_{n,k-1})\) for \(k=2,3,\ldots,m\). Then the sequence \(\{x_{n}\}\) defined by (2.1) Δ-converges to a point in \(\mathcal{F}\).

Proof

The proof of Theorem 2.1 is divided into three steps as follows.

Step 1. First we prove that \(\lim_{n\to\infty}d(x_{n},p)\) exists for each \(p\in\mathcal{F}\).

In fact, it follows from Proposition 1.5 that each SKC-type multi-valued mapping T with \(F(T) \neq \emptyset\) is a multi-valued quasi-nonexpansive mapping. Hence, for any \(p\in\mathcal {F}\), by (2.1) we have

$$\begin{aligned} d(y_{n,1},p)&=d\bigl(W(x_{n},z_{n,1}, \alpha_{n,1}),p\bigr) \\ &\leq(1-\alpha_{n,1})d(x_{n},p)+\alpha_{n,1}d(z_{n,1},p) \\ &\leq (1-\alpha_{n,1})d(x_{n},p)+\alpha_{n,1}d(z_{n,1},T_{1}p) \\ &\leq(1-\alpha_{n,1})d(x_{n},p)+\alpha_{n,1}H \bigl(T_{1}(x_{n}),T_{1}p\bigr) \\ &\leq d(x_{n},p) \end{aligned}$$
(2.2)

and

$$\begin{aligned} d(y_{n,2},p)&=d\bigl(W(x_{n},z_{n,2}, \alpha_{n,2}),p\bigr) \\ &\leq(1-\alpha_{n,2})d(x_{n},p)+\alpha_{n,2}d(z_{n,2},p) \\ &\leq (1-\alpha_{n,2})d(x_{n},p)+\alpha_{n,2}H \bigl(T_{2}(y_{n,1}),T_{2}p\bigr) \\ &\leq(1-\alpha_{n,2})d(x_{n},p)+\alpha_{n,2}d(y_{n,1},p) \\ &\leq d(x_{n},p). \end{aligned}$$
(2.3)

Similarly, we can also have

$$ d(y_{n,k},p)\leq d(x_{n},p)\quad (k=3, 4,\ldots, m-1) $$
(2.4)

and

$$\begin{aligned} d(x_{n+1},p)&=d\bigl(W(x_{n}, z_{n,m}, \alpha_{n,m}),p\bigr) \\ &\leq(1-\alpha_{n,m})d(x_{n},p)+\alpha_{n,m}d(z_{n,m},p) \\ &\leq (1-\alpha_{n,m})d(x_{n},p)+\alpha_{n,m}H \bigl(T_{m}(y_{n,m-1}),T_{m}p\bigr) \\ &\leq(1-\alpha_{n,m})d(x_{n},p)+\alpha_{n,m}d(y_{n,m-1},p) \\ &\leq d(x_{n},p). \end{aligned}$$
(2.5)

This implies that \(\lim_{n\to\infty}d(x_{n},p)\) exists for each \(p\in \mathcal{F}\), and so \(\{x_{n}\}\) is bounded.

Step 2. Now we prove that

$$ \lim_{n\to\infty}d(x_{n},T_{i}x_{n})=0, \quad \mbox{for each }i=1,2,\ldots, m. $$
(2.6)

In fact, since for each \(p\in\mathcal{F}\) the limit \(\lim_{n\to\infty}d(x_{n},p)\) exists, without loss of generality, we may assume that \(\lim_{n\to\infty}d(x_{n},p)=c\geq0\). If \(c=0\), then

$$\begin{aligned} \begin{aligned} d(x_{n},T_{i}x_{n})&\leq d(x_{n},p)+d(p,T_{i}x_{n})=d(x_{n},p)+H(T_{i}p,T_{i}x_{n}) \\ &\leq2d(x_{n},p)\to0\quad (\mbox{as }n\to\infty). \end{aligned} \end{aligned}$$

Therefore conclusion (2.6) holds. If \(c > 0\), from (2.4) we have

$$ \limsup_{n \to\infty} d(y_{n,k},p)\leq c\quad (k=1,2, \ldots,m-1). $$
(2.7)

Furthermore, since for \(k=1,2,\ldots,m\) (with the convention \(y_{n,0}:=x_{n}\)),

$$d(z_{n,k},p)=d(z_{n,k},T_{k}p)\leq H \bigl(T_{k}(y_{n,k-1}),T_{k}p\bigr)\leq d(y_{n,k-1},p), $$

we have

$$ \limsup_{n\to\infty} d(z_{n,k},p)\leq c\quad (k=1,2, \ldots,m). $$
(2.8)

Since

$$ \lim_{n\to\infty}d(x_{n+1},p)= \lim_{n\to\infty} d \bigl(W(x_{n}, z_{n,m},\alpha_{n,m}),p\bigr)=c, $$
(2.9)

it follows from (2.8), (2.9) and Lemma 1.9 that

$$ \lim_{n\to\infty}d(x_{n},z_{n,m})=0. $$
(2.10)

On the other hand, it follows from (2.5) that

$$\begin{aligned} d(x_{n},p)&\leq\frac{d(x_{n},p)-d(x_{n+1},p)}{\alpha _{n,m}}+d(y_{n,m-1},p) \\ &\leq\frac{d(x_{n},p)-d(x_{n+1},p)}{a}+d(y_{n,m-1},p). \end{aligned}$$
(2.11)

Letting \(n\to\infty\) and taking the lower limit on both sides of the above inequality, we have

$$ c\leq\liminf_{n\to\infty}d(y_{n,m-1},p). $$
(2.12)

By (2.7) and (2.12) we have that

$$ \lim_{n\to\infty}d(y_{n,m-1},p)=c. $$
(2.13)

It follows from (2.8), (2.13) and Lemma 1.9 that

$$ \lim_{n\to\infty}d(x_{n},z_{n,m-1})=0. $$
(2.14)

Since

$$d(y_{n,m-1},p)\leq (1-\alpha_{n,m-1})d(x_{n},p)+ \alpha_{n,m-1}d(y_{n,m-2},p), $$

we have

$$\begin{aligned} \begin{aligned}[b] d(x_{n},p)&\leq\frac{d(x_{n},p)-d(y_{n,m-1},p)}{\alpha _{n,m-1}}+d(y_{n,m-2},p) \\ &\leq\frac{d(x_{n},p)-d(y_{n,m-1},p)}{a}+d(y_{n,m-2},p). \end{aligned} \end{aligned}$$
(2.15)

Letting \(n\to\infty\) and taking the lower limit on both sides of the above inequality, by (2.13) we have

$$ c\leq\liminf_{n\to\infty}d(y_{n,m-2},p). $$
(2.16)

By (2.7) and (2.16) we have that

$$ \lim_{n\to\infty}d(y_{n,m-2},p)=c. $$
(2.17)

It follows from (2.8), (2.17) and Lemma 1.9 that

$$ \lim_{n\to\infty}d(x_{n},z_{n,m-2})=0. $$
(2.18)

Similarly, we can also have

$$ \lim_{n\to\infty}d(y_{n,k},p)=c\quad (k=1,2,\ldots,m-3) $$
(2.19)

and

$$ \lim_{n\to\infty}d(x_{n},z_{n,k})=0\quad (k=1,2, \ldots,m-2). $$
(2.20)

Thus, for each \(k=1,2,\ldots,m-1\), we have

$$ d(y_{n,k},x_{n})=d\bigl(W(x_{n}, z_{n,k}, \alpha_{n,k}),x_{n}\bigr)\leq \alpha_{n,k}d(z_{n,k},x_{n}) \to0\quad (\mbox{as }n\to\infty) $$
(2.21)

and

$$ \lim_{n\to\infty}d\bigl(x_{n},T_{k}(y_{n,k-1}) \bigr)\leq \lim_{n\to\infty}d(x_{n},z_{n,k})=0\quad (k=1,2,\ldots,m, \mbox{ where } y_{n,0}:=x_{n}). $$
(2.22)

By virtue of (2.14), (2.18), (2.21)-(2.22) and Lemma 1.13, we have

$$\begin{aligned} d\bigl(x_{n}, T_{k}(x_{n})\bigr) &\leq d(x_{n}, y_{n,k-1})+ d\bigl(y_{n,k-1},T_{k}(x_{n}) \bigr) \\ &\leq d(x_{n}, y_{n,k-1})+ 7d( y_{n,k-1}, T_{k} y_{n,k-1}) + d(y_{n,k-1}, x_{n}) \\ &\leq2 d(x_{n}, y_{n,k-1})+ 7\bigl\{ d( y_{n,k-1}, x_{n}) + d(x_{n}, T_{k} y_{n,k-1}) \bigr\} \\ &\to0\quad (\mbox{as }n\to\infty)\ (\mbox{for }k=1,2,\ldots, m, \mbox{ with } y_{n,0}:=x_{n}). \end{aligned}$$
(2.23)

This completes the proof of (2.6).

Step 3. Finally, we prove that the sequence \(\{x_{n}\} \Delta\)-converges to a common fixed point of \(\mathcal{F}\).

Denote by \(W_{\omega}(x_{n})=\bigcup_{\{u_{n}\}\subset \{x_{n}\}}A(\{u_{n}\})\). Firstly, we show that \(W_{\omega}(x_{n})\subset\mathcal{F}\). Indeed, if \(u\in W_{\omega}(x_{n})\), then there exists a subsequence \(\{u_{n}\}\) of \(\{x_{n}\}\) such that \(A(\{u_{n}\})=\{u\}\). By Lemma 1.8, there exists a subsequence \(\{u_{n_{k}}\}\) of \(\{u_{n}\}\) such that \(\Delta\mbox{-} \lim_{k\to\infty}u_{n_{k}}=p\in K\). Since \(\lim_{n\to\infty}d(x_{n},T_{i}x_{n})=0\) (\(i=1,2,\ldots,m\)), it follows from Lemma 1.14 that \(p\in\mathcal{F}\). So \(\lim_{n\to\infty}d(x_{n},p)\) exists. By Lemma 1.11, we have that \(p=u\in\mathcal{F}\). This implies that \(W_{\omega}(x_{n})\subset\mathcal{F}\).

Next, let \(\{u_{n}\}\) be a subsequence of \(\{x_{n}\}\) with \(A(\{u_{n}\})=\{u\}\) and \(A(\{x_{n}\})=\{v\}\). Since \(u\in W_{\omega}(x_{n})\subset\mathcal{F}\) and \(\lim_{n\to\infty}d(x_{n},u)\) exists, by Lemma 1.11 it follows that \(v=u\). This implies that \(W_{\omega}(x_{n})\) consists of exactly one point. Since, moreover, \(W_{\omega}(x_{n})\subset\mathcal{F}\) and \(\lim_{n\to\infty}d(x_{n},q)\) exists for each \(q\in \mathcal{F}\), we know that \(\{x_{n}\}\) Δ-converges to a common fixed point of \(T_{i}\) (\(i=1,2,\ldots,m\)). The proof is completed. □
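As a purely illustrative sanity check of scheme (2.1) (not part of the proof), the iteration can be run on the real line with m = 2 single-valued nonexpansive maps, viewed as multi-valued maps with singleton values; the maps T1, T2 and the constant parameters \(\alpha_{n,i}=0.5\) below are hypothetical choices sharing the common fixed point 0.

```python
def W(x, y, a):
    # convex combination playing the role of W(x, y, alpha), as in (2.2)
    return (1 - a) * x + a * y

T1 = lambda x: x / 2          # nonexpansive, F(T1) = {0}
T2 = lambda x: -x             # an isometry, F(T2) = {0}

x = 5.0
for n in range(50):
    z1 = T1(x)                # z_{n,1} in T1(x_n)
    y1 = W(x, z1, 0.5)        # y_{n,1} = W(x_n, z_{n,1}, alpha_{n,1})
    z2 = T2(y1)               # z_{n,2} in T2(y_{n,1})
    x = W(x, z2, 0.5)         # x_{n+1} = W(x_n, z_{n,2}, alpha_{n,2})
print(abs(x))  # essentially 0: the iterates approach the common fixed point
```

Here each step contracts by the factor 0.125, so the scheme converges geometrically for this toy pair; the theorem, of course, covers far more general multi-valued mappings.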

Theorem 2.2

Let \((X, d, W)\) be a complete uniformly convex hyperbolic space with a monotone modulus of uniform convexity η and K be a nonempty compact convex subset of X. Let \(T_{i}: K\to \mathit{CB}(K)\) (\(i=1,2,\ldots,m\)) be a finite family of SKC-type multi-valued mappings with nonempty convex-values. Suppose that \(\mathcal{F}=\bigcap^{m}_{i=1}F(T_{i})\neq\emptyset\) and \(T_{i}(p)=\{p\}\) for each \(p\in\mathcal{F}\). Let \(\{\alpha_{n,i}\}\subset [a,b]\subset(0,1)\) (\(i=1,2,\ldots,m\)) and \(\{x_{n}\}\) be the sequence as given in Theorem  2.1. Then \(\{x_{n}\}\) converges strongly to a point in \(\mathcal{F}\).

Proof

By assumption, for each \(x \in K\) and each \(i = 1, 2, \ldots, m\), \(T_{i} x\) is a bounded closed and convex subset of K. Since K is compact, each \(T_{i}x\) is a nonempty compact convex subset, hence a bounded proximal subset of K, i.e., \(T_{i}: K\to P(K)\) (\(i=1,2,\ldots, m\)). Therefore all conditions in Theorem 2.1 are satisfied. It follows from (2.5) and (2.6) that for each \(p\in \mathcal{F}\) and each \(i= 1,2,\ldots, m\),

$$ \lim_{n\to\infty}d(x_{n},p) \mbox{ exists} \quad \mbox{and} \quad \lim_{n\to\infty}d(x_{n},T_{i}x_{n})=0. $$
(2.24)

Furthermore, since K is compact, there exists a subsequence \(\{ x_{n_{k}}\} \subset\{x_{n}\}\) such that \(x_{n_{k}} \to p^{*}\) (some point in K). Since \(T_{i}\), \(i = 1, 2, \ldots, m\), is an SKC-type multi-valued mapping, it follows from Lemma 1.13 that

$$d\bigl(x_{n_{k}}, T_{i} p^{*}\bigr) \le7 d(x_{n_{k}}, T_{i}x_{n_{k}}) + d\bigl(x_{n_{k}}, p^{*}\bigr). $$

Letting \(k \to\infty\), from (2.24) we have \(d(p^{*}, T_{i} p^{*})=0\). Hence \(p^{*} \in T_{i} p^{*}\) for each \(i = 1, 2, \ldots, m\), i.e., \(p^{*} \in\mathcal{F}\). Moreover, since \(\lim_{n\to\infty}d(x_{n},p^{*})\) exists by (2.24) and \(x_{n_{k}} \to p^{*}\), the whole sequence \(x_{n} \to p^{*}\).

This completes the proof. □

Lemma 2.3

[14]

Let \((X, d, W)\) be a complete hyperbolic space, K be a nonempty closed convex subset of X. Let \(T : K \to P(K)\) be a multi-valued mapping with \(F(T)\neq\emptyset\). Let \(P_{T}: K \to2^{K}\) be a multi-valued mapping defined by

$$ P_{T}(x): = \bigl\{ y\in Tx: d(x, y) = d(x, Tx)\bigr\} , \quad x \in K. $$
(2.25)

Then the following conclusions hold:

  1. \(F(T) = F(P_{T})\);

  2. \(P_{T} (p) = \{p\}\) for each \(p \in F(T)\);

  3. for each \(x \in K\), \(P_{T}(x)\) is a closed subset of \(T(x)\);

  4. \(d(x, Tx) = d(x, P_{T} (x))\) for each \(x \in K\);

  5. \(P_{T}\) is a multi-valued mapping from K to \(P(K)\).
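The conclusions above can be illustrated concretely on the real line. The interval-valued map below (the map \(T(x) = [0, x/7]\) used later in Section 4, restricted to its interval branch) is an illustrative assumption, not part of the lemma; for it, \(P_{T}(x)\) is the singleton containing the nearest point of x in \(T(x)\).

```python
# Illustration of Lemma 2.3 on the real line (an assumed example, not
# part of the lemma): T(x) = [0, x/7] for x >= 0.

def T(x):
    """Interval-valued map: T(x) = [0, x/7] (x >= 0)."""
    return (0.0, x / 7.0)

def dist_to_set(x, interval):
    """d(x, Tx): distance from the point x to a closed interval."""
    lo, hi = interval
    return 0.0 if lo <= x <= hi else min(abs(x - lo), abs(x - hi))

def P_T(x):
    """The projection map (2.25): nearest points of x in T(x)."""
    lo, hi = T(x)
    return {min(max(x, lo), hi)}   # a singleton for an interval

# Conclusion (2): P_T(p) = {p} at the fixed point p = 0.
assert P_T(0.0) == {0.0}

# Conclusion (4): d(x, Tx) = d(x, P_T(x)).
x = 2.0
assert dist_to_set(x, T(x)) == abs(x - next(iter(P_T(x))))
```

Here \(F(T) = F(P_{T}) = \{0\}\), matching conclusion (1): \(x \in [0, x/7]\) forces \(x = 0\).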

Theorem 2.4

Let \((X, d, W)\) be a complete uniformly convex hyperbolic space with a monotone modulus of uniform convexity η and K be a nonempty closed convex subset of X. Let \(T_{i}: K\to \mathit{CB}(K)\) (\(i=1,2,\ldots,m\)) be a finite family of multi-valued mappings with convex values and \(\mathcal{F}=\bigcap^{m}_{i=1}F(T_{i})\neq\emptyset\). Suppose that each \(P_{T_{i}}\), \(i = 1,2, \ldots, m\), defined by (2.25), is an SKC-type multi-valued mapping. Let \(\{\alpha_{n,i}\}\subset [a,b]\subset(0,1)\) (\(i=1,2,\ldots,m\)). For arbitrarily chosen \(x_{1}\in K\), let \(\{x_{n}\}\) be the classical Kuhfitting-type iteration defined by

$$ \left \{ \textstyle\begin{array}{l} y_{n,1}=W(x_{n},z_{n,1},\alpha_{n,1}), \\ y_{n,2}=W(x_{n},z_{n,2},\alpha_{n,2}), \\ \vdots \\ y_{n,m-1}=W(x_{n},z_{n,m-1},\alpha_{n,m-1}), \\ x_{n+1}=W(x_{n}, z_{n,m},\alpha_{n,m}),\quad n\geq1, \end{array}\displaystyle \right . $$
(2.26)

where \(z_{n,1}\in P_{T_{1}}(x_{n})\) and \(z_{n,k}\in P_{T_{k}}(y_{n,k-1})\) for \(k=2,3,\ldots,m\). Then the sequence \(\{x_{n}\}\) Δ-converges to a point in \(\mathcal{F}\).

Proof

By virtue of Lemma 2.3, for each \(i = 1,2, \ldots, m\) the mapping \(P_{T_{i}}\) defined by (2.25) satisfies \(P_{T_{i}}: K \to P(K)\), \(\mathcal{F} = \bigcap_{i=1}^{m} F(P_{T_{i}}) = \bigcap_{i=1}^{m} F(T_{i})\neq\emptyset\), and \(P_{T_{i}}(p) = \{p\}\) for each \(p \in \mathcal{F}\). Moreover, by hypothesis each \(P_{T_{i}}\) is an SKC-type multi-valued mapping. Replacing the mappings \(T_{i}\) by \(P_{T_{i}}\), \(i = 1, 2, \ldots, m\), in Theorem 2.1, all the conditions of Theorem 2.1 are satisfied. Therefore the conclusion of Theorem 2.4 follows immediately from Theorem 2.1. □
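The scheme (2.26) can be sketched numerically on the real line, where \(W(x, y, \alpha) = \alpha x + (1-\alpha)y\). The two interval-valued maps below and the choice of each \(z_{n,k}\) as the nearest point of \(y_{n,k-1}\) in \(T_{k}(y_{n,k-1})\) (i.e., the element of \(P_{T_{k}}(y_{n,k-1})\)) are illustrative assumptions, not part of the theorem.

```python
# A numerical sketch of the Kuhfitting-type scheme (2.26) on (R, |.|),
# with W(x, y, alpha) = alpha*x + (1 - alpha)*y.  The maps are assumed
# examples with common fixed point 0.

def W(x, y, alpha):
    """Convex combination playing the role of W(x, y, alpha)."""
    return alpha * x + (1.0 - alpha) * y

def kuhfitting_step(x, maps, alphas):
    """One pass of (2.26): returns x_{n+1} from x_n."""
    y = x                                # y_{n,0} = x_n
    for T, alpha in zip(maps, alphas):
        lo, hi = T(y)                    # T_k(y_{n,k-1}) is an interval
        z = min(max(y, lo), hi)          # z_{n,k} in P_{T_k}(y_{n,k-1})
        y = W(x, z, alpha)               # y_{n,k} = W(x_n, z_{n,k}, alpha_{n,k})
    return y                             # the final y is x_{n+1}

# Two illustrative interval-valued maps with common fixed point 0.
maps = [lambda t: (0.0, t / 7.0), lambda t: (0.0, t / 5.0)]

x = 1.0
for _ in range(60):
    x = kuhfitting_step(x, maps, [0.5, 0.5])
print(x)  # the iterates shrink toward the common fixed point 0
```

Each pass contracts the iterate by a fixed factor here, so the sequence converges strongly to \(0 \in \mathcal{F}\), as the theorem predicts.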

3 An application to the image recovery

The image recovery problem is formulated as finding the nearest point, to a given point, in the intersection of a family of closed convex subsets, by using the metric projection onto each subset. In this section, we consider this problem for two subsets of a complete \(\operatorname{CAT}(0)\) space.

Theorem 3.1

Let \((X, d)\) be a complete \(\operatorname{CAT}(0)\) space. Let \(C_{1}\) and \(C_{2}\) be nonempty closed convex subsets of X such that \(C_{1}\cap C_{2} \neq \emptyset\). Let \(P_{1}\) and \(P_{2}\) be metric projections from X onto \(C_{1}\) and \(C_{2}\), respectively. Let \(\{\alpha_{n,i}\}\subset [a,b]\subset(0,1)\) (\(i=1,2\)). For arbitrarily chosen \(x_{1}\in X\), let \(\{x_{n}\}\) be the iteration defined by

$$ \left \{ \textstyle\begin{array}{l} y_{n,1}= \alpha_{n,1}x_{n} \oplus(1- \alpha_{n,1})P_{1} x_{n}, \\ x_{n+1}= \alpha_{n,2}x_{n} \oplus(1- \alpha_{n,2})P_{2} y_{n,1},\quad n \geq1. \end{array}\displaystyle \right . $$
(3.1)

Then \(\{x_{n}\}\) Δ-converges to a point of \(C_{1}\cap C_{2}\).

Proof

Since \((X,d)\) is a \(\operatorname{CAT}(0)\) space, it is a complete uniformly convex hyperbolic space with the monotone modulus of uniform convexity \(\eta(r, \varepsilon)= \frac{\varepsilon^{2}}{8}\), and \(W(x, y, \alpha) = \alpha x \oplus(1-\alpha)y\) for all \(x, y \in X\) and \(\alpha\in[0, 1]\). Moreover, since \(P_{1}\) and \(P_{2}\) are metric projections onto closed convex sets, they are single-valued SKC-type mappings with \(F(P_{1}) = C_{1}\) and \(F(P_{2}) = C_{2}\). Thus, letting \(T_{1} = P_{1}\) and \(T_{2} = P_{2}\), all conditions in Theorem 2.1 are satisfied, and the desired result follows from Theorem 2.1 immediately. □
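Scheme (3.1) can be sketched on the real line (a \(\operatorname{CAT}(0)\) space, where \(\oplus\) is ordinary convex combination). The sets \(C_{1} = [0,2]\), \(C_{2} = [1,3]\) and the constant parameters \(\alpha_{n,i} = \frac{1}{2}\) below are illustrative assumptions.

```python
# A sketch of the image recovery scheme (3.1) on R with two overlapping
# intervals C1 = [0, 2] and C2 = [1, 3] (assumed example sets).

def proj(x, lo, hi):
    """Metric projection of x onto the closed interval [lo, hi]."""
    return min(max(x, lo), hi)

def step(x, a1=0.5, a2=0.5):
    """One step of (3.1): compute y_{n,1}, then x_{n+1}."""
    y = a1 * x + (1.0 - a1) * proj(x, 0.0, 2.0)      # P_1 projects onto C1
    return a2 * x + (1.0 - a2) * proj(y, 1.0, 3.0)   # P_2 projects onto C2

x = 5.0                      # start outside both sets
for _ in range(200):
    x = step(x)
print(x)  # approaches a point of C1 ∩ C2 = [1, 2]
```

For this starting point the iterates decrease monotonically into \([2, 3]\), where the update reduces to \(x \mapsto \frac{3}{4}x + \frac{1}{2}\), whose fixed point \(2\) lies in \(C_{1}\cap C_{2}\).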

4 A numerical example

Let \((X, d) = \mathbb{R}\) with \(d(x,y)= |x-y|\) and \(K = [0, \frac {7}{2}]\). Define

$$W(x,y,\alpha):= \alpha x + (1-\alpha)y, \quad \forall x, y \in X \mbox{ and } \alpha\in[0, 1]. $$

Then \((X, d, W)\) is a complete uniformly convex hyperbolic space with a monotone modulus of uniform convexity, and K is a nonempty closed convex subset of X. Let \(T: K \to P(K)\) be the multi-valued mapping defined by

$$ T(x) = \left \{ \textstyle\begin{array}{l@{\quad}l} [0, \frac{x}{7}] &\mbox{if }x \neq \frac{7}{2}; \\ \{1\}& \mbox{if }x = \frac{7}{2}. \end{array}\displaystyle \right . $$
(4.1)

It is easy to verify (see also [5, 21]) that \(T: K \to P(K)\) is a C-type multi-valued mapping with convex values, that \(0 \in K\) is the unique fixed point of T in K, and that \(T(0) = \{0\}\). Therefore T is an SKC-type multi-valued mapping with convex values and satisfies all the conditions of Theorem 2.1. Let \(\{\alpha_{n} \}\) be the constant sequence \(\alpha_{n} =\frac{1}{2}\), \(\forall n \ge1\), and fix \(x_{0} \in[0, \frac{7}{2}]\) (for the sake of simplicity, we take \(x_{0} = 1\)). By the same method as given in (2.1) (with \(m=1\)), we can define a sequence \(\{x_{n}\}\) as follows.

For \(x_{0} = 1\), we have \(Tx_{0} = T(1) = [0, \frac{1}{7}]\). Taking \(z_{0} = \frac{1}{7}\in T(x_{0})\), we define

$$x_{1} = \frac{1}{2}x_{0} + \frac{1}{2}z_{0} = \frac{1}{2} \biggl(1 + \frac{1}{7}\biggr). $$

For \(x_{1} = \frac{1}{2} (1 + \frac{1}{7})\), we have

$$T(x_{1}) = \biggl[0, \frac{x_{1}}{7}\biggr] = \biggl[0, \frac{1 + \frac{1}{7}}{2\times7}\biggr]. $$

Taking \(z_{1} = \frac{1}{2 \times7^{2}}\in Tx_{1}\), we define

$$x_{2} = \frac{1}{2}x_{1} + \frac{1}{2}z_{1} = \frac{1}{2^{2}} \biggl(1 + \frac {1}{7} + \frac{1}{7^{2}}\biggr). $$

For \(x_{2}\), we have \(Tx_{2} = [0, \frac{x_{2}}{7}] = [0,\frac{1}{7}(\frac {1}{2^{2}} (1 + \frac{1}{7} + \frac{1}{7^{2}}))]\). Taking \(z_{2} = \frac{1}{2^{2}\times7^{3}}\in Tx_{2} \), we define

$$x_{3} = \frac{1}{2}x_{2} + \frac{1}{2}z_{2} = \frac{1}{2^{3}} \biggl(1 + \frac {1}{7} + \frac{1}{7^{2}} + \frac{1}{7^{3}}\biggr). $$

Inductively, we can define

$$ x_{n+1}=\frac{1}{2}x_{n} + \frac{1}{2}z_{n} = \frac{1}{2^{n+1}}\sum_{i=0}^{n+1} \frac{1}{7^{i}}, \quad \forall n \ge0, $$
(4.2)

where

$$ \left \{ \textstyle\begin{array}{l} x_{n} = \frac{1}{2^{n}} (1 + \frac{1}{7} + \frac{1}{7^{2}} + \frac {1}{7^{3}} + \cdots+ \frac{1}{7^{n}}), \\ z_{n} = \frac{1}{2^{n} \times7^{n+1}}\in Tx_{n}. \end{array}\displaystyle \right . $$
(4.3)

From (4.2), since \(\sum_{i=0}^{\infty}\frac{1}{7^{i}} = \frac{7}{6}\) is finite, we have \(\lim_{n \to\infty}x_{n} = 0 \in F(T)\); that is, the iterates converge to the unique fixed point of T, as Theorem 2.1 asserts.
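The recursion of Section 4 can be checked exactly with rational arithmetic. The following sketch (an illustration, using Python's `fractions` module) verifies that the update \(x_{n+1} = \frac{1}{2}x_{n} + \frac{1}{2}z_{n}\) with the choice of \(z_{n}\) from (4.3) reproduces the closed form (4.2), and that each \(z_{n}\) indeed lies in \(Tx_{n} = [0, \frac{x_{n}}{7}]\).

```python
# Exact check of the recursion (4.2)-(4.3) with rational arithmetic.

from fractions import Fraction as F

def closed_form(n):
    """x_n from (4.3): (1/2^n) * (1 + 1/7 + ... + 1/7^n)."""
    return sum(F(1, 7)**i for i in range(n + 1)) / F(2)**n

x = F(1)                              # x_0 = 1
for n in range(10):
    z = F(1, 2**n * 7**(n + 1))       # z_n as chosen in (4.3)
    assert 0 <= z <= x / 7            # z_n really lies in T x_n = [0, x_n/7]
    x = x / 2 + z / 2                 # x_{n+1} = W(x_n, z_n, 1/2)
    assert x == closed_form(n + 1)    # matches the closed form (4.2)

print(float(x))  # x_10, already close to the fixed point 0
```

Because \(x_{n} \le \frac{1}{2^{n}}\cdot\frac{7}{6}\), the convergence to 0 is geometric.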

References

  1. Suzuki, T: Fixed point theorems and convergence theorems for some generalized nonexpansive mappings. J. Math. Anal. Appl. 340, 1088-1095 (2008)

  2. Nanjaras, B, Panyanak, B, Phuengrattana, W: Fixed point theorems and convergence theorems for Suzuki-generalized nonexpansive mappings in \(\operatorname{CAT}(0)\) spaces. Nonlinear Anal. Hybrid Syst. 4, 25-31 (2010)

  3. Dhompongsa, S, Kaewkhao, A, Panyanak, B: On Kirk’s strong convergence theorem for multivalued nonexpansive mappings on \(\operatorname{CAT}(0)\) spaces. Nonlinear Anal. 75, 459-468 (2012)

  4. Karapinar, E, Tas, K: Generalized (C)-conditions and related fixed point theorems. Comput. Math. Appl. 61, 3370-3380 (2011)

  5. Ghoncheh, SJH, Razani, A: Multi-valued version of SCC, SKC, KSC, and CSC conditions in Ptolemy metric spaces. J. Inequal. Appl. 2014, 471 (2014)

  6. Kuhfitting, PKF: Common fixed points of nonexpansive mappings by iteration. Pac. J. Math. 97(1), 137-139 (1981)

  7. Chang, S, Tang, Y, Wang, L, Xu, Y, Zhao, Y, Wang, G: Convergence theorems for some multi-valued generalized nonexpansive mappings. Fixed Point Theory Appl. 2014, 33 (2014)

  8. Khan, AR, Khamsi, MA, Fukhar-ud-din, H: Strong convergence of a general iteration scheme in \(\operatorname{CAT}(0)\) spaces. Nonlinear Anal. 74, 783-791 (2011)

  9. Kirk, W, Panyanak, B: A concept of convergence in geodesic spaces. Nonlinear Anal. 68, 3689-3696 (2008)

  10. Sahin, A, Basarir, M: On the strong convergence of a modified S-iteration process for asymptotically quasi-nonexpansive mappings in a \(\operatorname{CAT}(0)\) space. Fixed Point Theory Appl. 2013, 12 (2013). doi:10.1186/1687-1812-2013-12

  11. Agarwal, RP, O’Regan, D, Sahu, DR: Iterative construction of fixed points of nearly asymptotically nonexpansive mappings. J. Nonlinear Convex Anal. 8(1), 61-79 (2007)

  12. Chang, SS, Cho, YJ, Zhou, H: Demiclosed principle and weak convergence problems for asymptotically nonexpansive mappings. J. Korean Math. Soc. 38(6), 1245-1260 (2001)

  13. Chang, SS, Wang, L, Lee, HWJ: Demiclosed principle and Δ-convergence theorems for total asymptotically nonexpansive mappings in \(\operatorname{CAT}(0)\) spaces. Appl. Math. Comput. 219, 2611-2617 (2012)

  14. Chang, S, Wang, G, Wang, L, Tang, YK, Ma, ZL: Δ-Convergence theorems for multi-valued nonexpansive mappings in hyperbolic spaces. Appl. Math. Comput. 249, 535-540 (2014)

  15. Khan, AR, Fukhar-ud-din, H, Kuan, MAA: An implicit algorithm for two finite families of nonexpansive maps in hyperbolic spaces. Fixed Point Theory Appl. 2012, 54 (2012). doi:10.1186/1687-1812-2012-54

  16. Reich, S, Shafrir, I: Nonexpansive iterations in hyperbolic spaces. Nonlinear Anal. 15, 537-558 (1990)

  17. Goebel, K, Reich, S: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York (1984)

  18. Bridson, MR, Haefliger, A: Metric Spaces of Non-Positive Curvature. Springer, Berlin (1999)

  19. Dhompongsa, S, Panyanak, B: On Δ-convergence theorems in \(\operatorname{CAT}(0)\) spaces. Comput. Math. Appl. 56(10), 2572-2579 (2008)

  20. Leustean, L: Nonexpansive iterations in uniformly convex W-hyperbolic spaces. In: Leizarowitz, A, Mordukhovich, BS, Shafrir, I, Zaslavski, A (eds.) Nonlinear Analysis and Optimization I: Nonlinear Analysis. Contemporary Mathematics, vol. 513, pp. 193-209. Am. Math. Soc., Providence (2010)

  21. Abkar, A, Eslamian, M: Generalized nonexpansive multivalued mappings in strictly convex Banach spaces. Fixed Point Theory 14, 269-280 (2013)


Acknowledgements

The authors would like to express their thanks to the referees for their helpful comments and advice. This work was supported by the National Natural Science Foundation of China (Grant No. 11361070).

Author information

Correspondence to Ravi P Agarwal.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Chang, Ss., Agarwal, R.P. & Wang, L. Existence and convergence theorems of fixed points for multi-valued SCC-, SKC-, KSC-, SCS- and C-type mappings in hyperbolic spaces. Fixed Point Theory Appl 2015, 83 (2015). https://doi.org/10.1186/s13663-015-0339-9
