
Some generalized fixed point results in a b-metric space and application to matrix equations

Abstract

We prove a generalized Presic-Hardy-Rogers contraction principle and a Ciric-Presic type contraction principle for two mappings in a b-metric space. As an application, we derive some convergence results for a class of nonlinear matrix equations. Numerical experiments are also presented to illustrate the convergence algorithms.

1 Introduction

Several generalizations of the famous Banach contraction principle appear in the literature. One such generalization was given by Presic [1, 2] as follows.

Theorem 1.1

[2]

Let \((X, d)\) be a metric space, k be a positive integer, \(T : X^{k}\rightarrow X\) be a mapping satisfying the following condition:

$$\begin{aligned}& d\bigl(T(x_{1}, x_{2}, \ldots , x_{k}),T(x_{2}, x_{3}, \ldots , x_{k+1}) \bigr) \\ & \quad \le q_{1}\cdot d(x_{1}, x_{2}) + q_{2}\cdot d(x_{2}, x_{3}) + \cdots + q_{k}\cdot d(x_{k}, x_{k+1}), \end{aligned}$$
(1.1)

where \(x_{1}, x_{2}, \ldots , x_{k+1}\) are arbitrary elements in X and \(q_{1}, q_{2}, \ldots , q_{k}\) are nonnegative constants such that \(q_{1} + q_{2} + \cdots + q_{k} < 1\). Then there exists some \(x \in X\) such that \(x = T(x, x, \ldots, x)\). Moreover, if \(x_{1}, x_{2}, \ldots , x_{k}\) are arbitrary points in X and for \(n \in N\), \(x_{n+k} = T(x_{n}, x_{n+1}, \ldots, x_{n+k-1})\), then the sequence \(\langle x_{n}\rangle\) is convergent and \(\lim x_{n} = T(\lim x_{n}, \lim x_{n}, \ldots, \lim x_{n})\).

Note that for \(k=1\) the above theorem reduces to the well-known Banach contraction principle. Generalizing the above theorem, Ciric and Presic [3] proved the following.

Theorem 1.2

[3]

Let \((X, d)\) be a metric space, k be a positive integer, \(T : X^{k}\rightarrow X\) be a mapping satisfying the following condition:

$$\begin{aligned} \begin{aligned}[b] &d\bigl(T(x_{1}, x_{2}, \ldots , x_{k}),T(x_{2}, x_{3}, \ldots , x_{k+1}) \bigr) \\ &\quad \le\lambda\cdot \max\bigl\{ d(x_{1}, x_{2}), d(x_{2}, x_{3}),\ldots, d(x_{k}, x_{k+1}) \bigr\} , \end{aligned} \end{aligned}$$
(1.2)

where \(x_{1}, x_{2}, \ldots , x_{k+1}\) are arbitrary elements in X and \(\lambda\in(0, 1)\). Then there exists some \(x \in X\) such that \(x = T(x, x, \ldots, x)\). Moreover, if \(x_{1}, x_{2}, \ldots , x_{k}\) are arbitrary points in X and for \(n \in N\), \(x_{n+k} = T(x_{n}, x_{n+1}, \ldots, x_{n+k-1})\), then the sequence \(\langle x_{n}\rangle\) is convergent and \(\lim x_{n} = T(\lim x_{n}, \lim x_{n}, \ldots, \lim x_{n})\). If in addition T satisfies \(d(T(u, u, \ldots, u), T(v, v, \ldots, v)) < d(u, v)\) for all \(u, v \in X\), then x is the unique point satisfying \(x = T(x, x, \ldots, x)\).
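
To make the iterative scheme in Theorems 1.1 and 1.2 concrete, here is a minimal Python sketch (our own toy illustration, not taken from [1–3]) of the k-step iteration \(x_{n+k}=T(x_{n},x_{n+1},\ldots,x_{n+k-1})\) for \(k=2\), using \(T(x,y)=\frac{x+y}{4}+1\), which satisfies (1.1) with \(q_{1}=q_{2}=\frac{1}{4}\) and whose unique point with \(x=T(x,x)\) is 2.

```python
# k-step Presic iteration x_{n+k} = T(x_n, ..., x_{n+k-1}).
# Toy mapping T(x, y) = (x + y)/4 + 1 satisfies condition (1.1) with
# q1 = q2 = 1/4 (so q1 + q2 < 1); the unique point with x = T(x, x) is 2.

def T(x, y):
    return (x + y) / 4.0 + 1.0

def presic_iterate(T, initial, n_steps=60):
    """Extend the k starting points by x_{n+k} = T(x_n, ..., x_{n+k-1})."""
    xs = list(initial)
    k = len(xs)
    for _ in range(n_steps):
        xs.append(T(*xs[-k:]))
    return xs

if __name__ == "__main__":
    seq = presic_iterate(T, initial=[10.0, -3.0])
    print(seq[-1])                              # approximately 2.0
    print(abs(seq[-1] - T(seq[-1], seq[-1])))   # approximately 0.0
```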

In [4, 5] Pacurar gave further generalizations of the above results. Later these results were extended and generalized by many authors (see [6–14]). Generalizing the concept of a metric space, Bakhtin [15] introduced the concept of a b-metric space, which is not necessarily Hausdorff, and proved the Banach contraction principle in the setting of a b-metric space. Since then several papers have dealt with fixed point theory or the variational principle for single-valued and multi-valued operators in b-metric spaces (see [16–23] and the references therein). In this paper we prove common fixed point theorems for the generalized Presic-Hardy-Rogers contraction and the Ciric-Presic contraction for two mappings in a b-metric space. Our results extend and generalize many well-known results. As an application, we derive some convergence results for a class of nonlinear matrix equations. Numerical experiments are also presented to illustrate the convergence algorithms.

2 Preliminaries

Definition 2.1

[15]

Let X be a nonempty set and \(d: X\times X\rightarrow[0,\infty )\) satisfy:

  1. (bM1)

    \(d(x,y)=0\) if and only if \(x=y\) for all \(x,y\in X\);

  2. (bM2)

    \(d(x,y)=d(y,x)\) for all \(x,y\in X\);

  3. (bM3)

    there exists a real number \(s \geq1\) such that \(d(x,y)\leq s[d(x,z)+d(z,y)]\) for all \(x,y,z\in X\).

Then d is called a b-metric on X and \((X,d)\) is called a b-metric space (in short bMS) with coefficient s.
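
For orientation, a standard illustrative example (not taken from [15]): for any real number \(p\geq1\), the function \(d(x,y)=|x-y|^{p}\) on \(R\) is a b-metric with coefficient \(s=2^{p-1}\), since \((a+b)^{p}\leq2^{p-1}(a^{p}+b^{p})\) for all \(a,b\geq0\), while for \(p>1\) it need not be a metric. The case \(p=3\), \(s=4\) is used in Example 3.11 below.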

Convergence, Cauchy sequences and completeness in a b-metric space are defined as follows.

Definition 2.2

[15]

Let \((X,d)\) be a b-metric space, \(\{x_{n}\}\) be a sequence in X and \(x\in X\). Then:

  1. (a)

    The sequence \(\{x_{n}\}\) is said to be convergent in \((X,d)\), and it converges to x if for every \(\varepsilon>0\) there exists \(n_{0}\in\mathbb{N}\) such that \(d(x_{n},x)<\varepsilon\) for all \(n>n_{0}\), and this fact is represented by \(\lim_{n\rightarrow \infty}x_{n}=x\) or \(x_{n}\rightarrow x\) as \(n\rightarrow\infty\).

  2. (b)

    The sequence \(\{x_{n}\}\) is said to be a Cauchy sequence in \((X,d)\) if for every \(\varepsilon>0\) there exists \(n_{0}\in\mathbb{N}\) such that \(d(x_{n},x_{n+p})<\varepsilon\) for all \(n>n_{0}\), \(p>0\) or, equivalently, if \(\lim_{n\rightarrow\infty }d(x_{n},x_{n+p})=0\) for all \(p>0\).

  3. (c)

    \((X,d)\) is said to be a complete b-metric space if every Cauchy sequence in X converges to some \(x\in X\).

Definition 2.3

[9]

Let \((X,d)\) be a metric space, k be a positive integer, \(T:X^{k}\rightarrow X\) and \(f:X\rightarrow X\) be mappings.

  1. (a)

    An element \(x \in X\) is said to be a coincidence point of f and T if and only if \(f(x)=T(x,x,\ldots,x)\). If \(x = f(x)=T(x,x,\ldots,x)\), then we say that x is a common fixed point of f and T. If \(w = f(x)=T(x,x,\ldots,x)\), then w is called a point of coincidence of f and T.

  2. (b)

    Mappings f and T are said to be commuting if and only if \(f(T(x,x,\ldots, x))=T(fx,fx,\ldots, fx)\) for all \(x\in X\).

  3. (c)

    Mappings f and T are said to be weakly compatible if and only if they commute at their coincidence points.

Remark 2.4

For \(k=1\) the above definitions reduce to the usual definitions of commuting and weakly compatible mappings in a metric space.

The set of coincidence points of f and T is denoted by \(C(f,T)\).

Lemma 2.5

[24]

Let X be a nonempty set, k be a positive integer and \(f:X^{k}\rightarrow X\), \(g:X\rightarrow X\) be two weakly compatible mappings. If f and g have a unique point of coincidence \(y=f(x,x,\ldots,x)=g(x)\), then y is the unique common fixed point of f and g.

Khan et al. [8] considered a function \(\theta:[0,\infty)^{4}\rightarrow[0,\infty)\) satisfying the following conditions:

  1. 1.

    θ is continuous,

  2. 2.

    for all \(t_{1},t_{2},t_{3},t_{4}\in[0,\infty)\), \(\theta(t_{1},t_{2},t_{3},t_{4})=0\Leftrightarrow t_{1}t_{2}t_{3}t_{4}=0\).
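
For instance (an observation added here for illustration), \(\theta(t_{1},t_{2},t_{3},t_{4})=\min\{t_{1},t_{2},t_{3},t_{4}\}\) satisfies both conditions; this is precisely the choice appearing in Remark 3.4 below.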

3 Main results

Throughout this paper we assume that the b-metric \(d: X\times X\rightarrow[0,\infty)\) is continuous on \(X^{2}\).

Theorem 3.1

Let \((X,d)\) be a b-metric space with coefficient \(s \ge1\). For any positive integer k, let \(f:X^{k} \rightarrow X\) and \(g:X\rightarrow X\) be mappings satisfying the following conditions:

$$\begin{aligned}& f\bigl(X^{k}\bigr) \subseteq g(X), \end{aligned}$$
(3.1)
$$\begin{aligned}& d\bigl(f(x_{1},x_{2},\ldots, x_{k}), f(x_{2},x_{3},\ldots, x_{k+1})\bigr) \\& \quad \leq\sum_{i=1}^{k} \alpha_{i}d(gx_{i},gx_{i+1})+\sum _{i=1}^{k+1}\sum_{j=1}^{k+1} \beta_{i,j}d\bigl(gx_{i},f(x_{j},x_{j}, \ldots, x_{j})\bigr) \\& \qquad {}+ L\cdot\theta \bigl(d\bigl(gx_{1},f(x_{k+1},x_{k+1},x_{k+1}, \ldots,x_{k+1})\bigr),d\bigl(gx_{k+1},f(x_{1},x_{1},x_{1}, \ldots,x_{1})\bigr), \\& \qquad d\bigl(gx_{1},f(x_{1},x_{1}, \ldots,x_{1})\bigr),d\bigl(gx_{k+1},f(x_{k+1},x_{k+1}, \ldots,x_{k+1})\bigr)\bigr), \end{aligned}$$
(3.2)

where \(x_{1},x_{2},\ldots, x_{k+1}\) are arbitrary elements in X and \(\alpha _{i}\), \(\beta_{ij}\), L are nonnegative constants such that \(\sum_{n=1}^{k}s^{k+3-n}[\alpha_{n}+\sum_{i=1}^{k+1}\sum_{j=1}^{k+1}\beta _{i,j}]<1\) and

$$ g(X) \textit{ is complete}. $$
(3.3)

Then f and g have a coincidence point and a unique point of coincidence; in particular, \(C(f, g) \neq \emptyset\). In addition, if f and g are weakly compatible, then f and g have a unique common fixed point. Moreover, for any \(x_{1}\in X\), the sequence \(\{y_{n}\}\) defined by \(y_{n} = g(x_{n}) = f(x_{n-1}, x_{n-1}, \ldots, x_{n-1})=Fx_{n-1}\), where \(Fx=f(x,x,\ldots,x)\), converges to the common fixed point of f and g.

Proof

Let \(x_{1}\in X\); then \(f(x_{1},x_{1},\ldots,x_{1})\in f(X^{k})\subseteq g(X)\), so there exists \(x_{2}\in X\) such that \(f(x_{1},x_{1},\ldots,x_{1})=g(x_{2})\). Now \(f(x_{2},x_{2},\ldots,x_{2})\in f(X^{k})\subseteq g(X)\), and so there exists \(x_{3}\in X\) such that \(f(x_{2},x_{2},\ldots,x_{2})=g(x_{3})\). Continuing this process, we define the sequence \(\langle y_{n}\rangle\) in \(g(X)\) by \(y_{n} = g(x_{n}) = f(x_{n-1}, x_{n-1}, \ldots, x_{n-1})=Fx_{n-1}\), \(n = 2,3,\ldots\) , where \(F:X\rightarrow X\), \(Fx=f(x,x,\ldots,x)\), is the associated operator of f. Let \(d_{n} = d(y_{n}, y_{n+1})=d(gx_{n}, gx_{n+1})\) and \(D_{ij}=d(gx_{i},f(x_{j},x_{j},\ldots,x_{j}))\).

Then we have

$$\begin{aligned} d_{n+1} =&d\bigl(g(x_{n+1}),g(x_{n+2})\bigr) \\ =& d(Fx_{n},Fx_{n+1}) \\ =& d\bigl(f(x_{n},x_{n},\ldots,x_{n}),f(x_{n+1},x_{n+1}, \ldots,x_{n+1})\bigr) \\ \leq& sd\bigl(f(x_{n},x_{n},\ldots,x_{n}),f(x_{n},x_{n}, \ldots,x_{n+1})\bigr) \\ &{}+s^{2}d\bigl(f(x_{n},x_{n}, \ldots,x_{n+1}),f(x_{n},x_{n},\ldots, x_{n+1},x_{n+1})\bigr) \\ &{}+s^{3}d\bigl(f(x_{n},x_{n},\ldots, x_{n+1},x_{n+1}), f(x_{n},\ldots,x_{n+1},x_{n+1},x_{n+1}) \bigr)+\cdots \\ &{}+s^{k}d\bigl(f(x_{n},x_{n+1}, \ldots,x_{n+1},x_{n+1}), f(x_{n+1}, \ldots,x_{n+1},x_{n+1},x_{n+1})\bigr). \end{aligned}$$

Using (3.2) we get

$$\begin{aligned} d_{n+1} \leq&s\Biggl\{ \alpha_{k} d_{n}+\Biggl[\sum _{j=1}^{k}\beta_{1,j}+\sum _{j=1}^{k}\beta_{2,j}+\cdots+\sum _{j=1}^{k}\beta_{kj}\Biggr]D_{n,n} \\ &{}+\Biggl[\sum_{i=1}^{k} \beta_{i,k+1}\Biggr]D_{n,n+1}+\Biggl[\sum _{j=1}^{k}\beta _{k+1,j}\Biggr]D_{n+1,n}+ \beta_{k+1,k+1}D_{n+1,n+1}\Biggr\} \\ &{}+s^{2}\Biggl\{ \alpha_{k-1} d_{n}+\Biggl[\sum _{j=1}^{k-1}\beta_{1,j}+\sum _{j=1}^{k-1}\beta_{2,j}+\cdots+\sum _{j=1}^{k-1}\beta_{k-1,j}\Biggr]D_{n,n} \\ &{}+\Biggl[\sum_{i=1}^{k-1} \beta_{i,k}+\sum_{i=1}^{k-1}\beta _{i,k+1}\Biggr]D_{n,n+1}+\Biggl[\sum_{j=1}^{k-1} \beta_{k,j}+\sum_{j=1}^{k-1} \beta_{k+1,j}\Biggr]D_{n+1,n} \\ &{}+\Biggl[\sum_{j=k}^{k+1} \beta_{k,j}+\sum_{j=k}^{k+1}\beta _{k+1,j}\Biggr]D_{n+1,n+1}\Biggr\} \\ &{}+\cdots+s^{k}\Biggl\{ \alpha_{1} d_{n}+\beta _{1,1}D_{n,n}+\Biggl[\sum_{j=2}^{k+1} \beta_{1,j}\Biggr]D_{n,n+1}+\Biggl[\sum _{i=2}^{k+1}\beta_{i,1}\Biggr]D_{n+1,n} \\ &{}+\Biggl[\sum_{j=2}^{k+1}\beta _{2,j}+\sum_{j=2}^{k+1} \beta_{3,j}+\cdots+\sum_{j=2}^{k+1} \beta _{k+1,j}\Biggr]D_{n+1,n+1}\Biggr\} \\ &{}+ L\cdot \theta \bigl(d\bigl(gx_{n},(fx_{n+1},x_{n+1},x_{n+1}, \ldots,x_{n+1})\bigr),d\bigl(gx_{n+1},f(x_{n},x_{n},x_{n}, \ldots,x_{n})\bigr), \\ &d\bigl(gx_{n},f(x_{n},x_{n}, \ldots,x_{n})\bigr),d\bigl(gx_{n+1},f(x_{n+1},x_{n+1}, \ldots,x_{n+1})\bigr)\bigr), \end{aligned}$$

i.e.,

$$\begin{aligned} d_{n+1} \leq&\bigl[s\alpha_{k}+s^{2} \alpha_{k-1}+s^{3}\alpha _{k-2}+\cdots+s^{k} \alpha_{1}\bigr]d_{n}+s\Biggl\{ \Biggl[\sum _{i=1}^{k}\sum_{j=1}^{k} \beta _{i,j}\Biggr]D_{n,n} \\ &{}+\Biggl[\sum_{i=1}^{k} \beta_{i,k+1}\Biggr]D_{n,n+1}+\Biggl[\sum _{j=1}^{k}\beta _{k+1,j}\Biggr]D_{n+1,n}+ \beta_{k+1,k+1}D_{n+1,n+1}\Biggr\} \\ &{}+s^{2}\Biggl\{ \Biggl[\sum_{i=1}^{k-1} \sum_{j=1}^{k-1}\beta_{i,j} \Biggr]D_{n,n}+\Biggl[\sum_{i=1}^{k-1} \sum_{j=k}^{k+1}\beta_{i,j} \Biggr]D_{n,n+1}+\Biggl[\sum_{i=k}^{k+1} \sum_{j=1}^{k-1}\beta_{i,j} \Biggr]D_{n+1,n} \\ &{}+\Biggl[\sum_{i=k}^{k+1}\sum _{j=k}^{k+1}\beta_{i,j}\Biggr]D_{n+1,n+1} \Biggr\} +\cdots+s^{k}\Biggl\{ \beta_{1,1}D_{n,n}+ \Biggl[\sum_{j=2}^{k+1}\beta_{1,j} \Biggr]D_{n,n+1} \\ &{}+\Biggl[\sum_{i=2}^{k+1} \beta_{i,1}\Biggr]D_{n+1,n}+\Biggl[\sum _{i=2}^{k+1}\sum_{j=2}^{k+1} \beta_{i,j}\Biggr]D_{n+1,n+1}\Biggr\} +L\cdot 0, \end{aligned}$$

i.e.,

$$\begin{aligned} d_{n+1} \leq& \bigl[s\alpha_{k}+s^{2} \alpha_{k-1}+s^{3}\alpha _{k-2}+\cdots+s^{k} \alpha_{1}\bigr]d_{n} \\ &{}+\Biggl[s\sum_{i=1}^{k}\sum _{j=1}^{k}\beta _{i,j}+s^{2}\sum _{i=1}^{k-1}\sum_{j=1}^{k-1} \beta _{i,j}+\cdots+s^{k-1}\sum_{i=1}^{2} \sum_{j=1}^{2}\beta_{i,j}+s^{k} \beta _{1,1}\Biggr]D_{n,n} \\ &{}+\Biggl[s\sum_{i=1}^{k} \beta_{i,k+1}+s^{2}\sum_{i=1}^{k-1} \sum_{j=k}^{k+1}\beta_{i,j}+ \cdots+s^{k-1}\sum_{i=1}^{2}\sum _{j=3}^{k+1}\beta_{i,j}+s^{k} \sum_{j=2}^{k+1}\beta_{1,j} \Biggr]D_{n,n+1} \\ &{}+\Biggl[s\sum_{j=1}^{k} \beta_{k+1,j}+s^{2}\sum_{i=k}^{k+1} \sum_{j=1}^{k-1}\beta_{i,j}+ \cdots+s^{k-1}\sum_{i=3}^{k+1}\sum _{j=1}^{2}\beta_{i,j}+s^{k} \sum_{i=2}^{k+1}\beta_{i,1} \Biggr]D_{n+1,n} \\ &{}+\Biggl[s^{k}\sum_{i=2}^{k+1} \sum_{j=2}^{k+1}\beta_{i,j}+s^{k-1} \sum_{i=3}^{k+1}\sum _{j=3}^{k+1}\beta_{i,j}+\cdots+s^{2} \sum_{i=k}^{k+1}\sum _{j=k}^{k+1}\beta_{i,j}+s\beta_{k+1,k+1} \Biggr]D_{n+1,n+1} \\ =&Ad_{n}+BD_{n,n}+CD_{n,n+1}+ED_{n+1,n}+FD_{n+1,n+1}, \end{aligned}$$

where A, B, C, E and F are the coefficients of \(d_{n}\), \(D_{n,n}\), \(D_{n,n+1}\), \(D_{n+1,n}\) and \(D_{n+1,n+1}\) respectively in the above inequality. By the definition, \(D_{n,n}=d(gx_{n},f(x_{n},x_{n},\ldots,x_{n}))= d(gx_{n},gx_{n+1})=d_{n}\), \(D_{n,n+1}=d(gx_{n},f(x_{n+1},x_{n+1},\ldots,x_{n+1}))=d(gx_{n},gx_{n+2})\), \(D_{n+1,n}=d(gx_{n+1},f(x_{n},x_{n},\ldots,x_{n}))=d(gx_{n+1},gx_{n+1})=0\), \(D_{n+1,n+1}=d(gx_{n+1},f(x_{n+1},x_{n+1},\ldots,x_{n+1}))=d(gx_{n+1},gx_{n+2})=d_{n+1}\); therefore,

$$\begin{aligned} d_{n+1} \leq& Ad_{n}+Bd_{n}+Cd(gx_{n},gx_{n+2})+Fd_{n+1} \\ \leq&Ad_{n}+Bd_{n}+Csd(gx_{n},gx_{n+1})+Csd(gx_{n+1},gx_{n+2})+Fd_{n+1} \\ =&(A+B+Cs)d_{n}+(Cs+F)d_{n+1}, \end{aligned}$$

i.e., \((1-Cs-F)d_{n+1}\leq(A+B+Cs)d_{n}\). Again, interchanging the role of \(x_{n}\) and \(x_{n+1}\) and repeating the above process, we obtain \((1-Es-B)d_{n+1}\leq(A+F+Es)d_{n}\). It follows that

$$\begin{aligned} \bigl(2-(C+E)s-F-B\bigr)d_{n+1} \leq& \bigl(2A+B+F+s(C+E) \bigr)d_{n} \\ d_{n+1} \leq&\frac{2A+B+F+s(C+E)}{2-B-F-(C+E)s}d_{n} \\ d_{n+1} \leq&\lambda d_{n}, \end{aligned}$$

where \(\lambda=\frac{2A+B+F+s(C+E)}{2-B-F-(C+E)s}\). Thus we have

$$ d_{n+1}\leq\lambda^{n+1}d_{0} \quad \mbox{for all } n\geq0 . $$
(3.4)

We will show that \(\lambda< 1\) and \(s\lambda< 1\). We have

$$\begin{aligned}& A+B+F+s(C+E) \\& \quad \le s[A+B+C+E+F] \\& \quad = s\bigl[s\alpha_{k}+s^{2}\alpha_{k-1}+s^{3} \alpha_{k-2}+\cdots+s^{k}\alpha_{1}\bigr] \\& \qquad {} + s\Biggl[s\sum_{i=1}^{k}\sum _{j=1}^{k}\beta_{i,j}+s^{2} \sum_{i=1}^{k-1}\sum _{j=1}^{k-1}\beta_{i,j}+\cdots+s^{k-1} \sum_{i=1}^{2}\sum _{j=1}^{2}\beta_{i,j}+s^{k} \beta_{1,1}\Biggr] \\& \qquad {} + s\Biggl[s\sum_{i=1}^{k} \beta_{i,k+1}+s^{2}\sum_{i=1}^{k-1} \sum_{j=k}^{k+1}\beta_{i,j}+ \cdots+s^{k-1}\sum_{i=1}^{2}\sum _{j=3}^{k+1}\beta_{i,j}+s^{k} \sum_{j=2}^{k+1}\beta_{1,j}\Biggr] \\& \qquad {} + s\Biggl[s\sum_{j=1}^{k} \beta_{k+1,j}+s^{2}\sum_{i=k}^{k+1} \sum_{j=1}^{k-1}\beta_{i,j}+ \cdots+s^{k-1}\sum_{i=3}^{k+1}\sum _{j=1}^{2}\beta_{i,j}+s^{k} \sum_{i=2}^{k+1}\beta_{i,1}\Biggr] \\& \qquad {} + s\Biggl[s\beta_{k+1,k+1}+s^{2}\sum _{i=k}^{k+1}\sum_{j=k}^{k+1} \beta _{i,j}+\cdots+s^{k-1}\sum_{i=3}^{k+1} \sum_{j=3}^{k+1}\beta_{i,j}+ s^{k}\sum_{i=2}^{k+1}\sum _{j=2}^{k+1}\beta_{i,j}\Biggr] \\& \quad = \bigl[s^{2}\alpha_{k}+s^{3} \alpha_{k-1}+s^{4}\alpha_{k-2}+\cdots+s^{k+1} \alpha _{1}\bigr]+s^{2}\sum_{i=1}^{k+1} \sum_{j=1}^{k+1}\beta_{i,j}+s^{3} \sum_{i=1}^{k+1}\sum _{j=1}^{k+1}\beta_{i,j} \\& \qquad {} + s^{4}\sum_{i=1}^{k+1} \sum_{j=1}^{k+1}\beta_{i,j}+\cdots +s^{k+1}\sum_{i=1}^{k+1}\sum _{j=1}^{k+1}\beta_{i,j} \\& \quad = \bigl[s^{2}\alpha_{k}+s^{3} \alpha_{k-1}+s^{4}\alpha_{k-2}+\cdots+s^{k+1} \alpha _{1}\bigr] \\ & \qquad {}+ \bigl[s^{2}+s^{3}+s^{4}+ \cdots+s^{k+1}\bigr]\sum_{i=1}^{k+1} \sum_{j=1}^{k+1}\beta _{i,j} \\ & \quad \leq \bigl[s^{3}\alpha_{k}+s^{4} \alpha_{k-1}+s^{5}\alpha_{k-2}+\cdots +s^{k+2}\alpha_{1}\bigr] \\ & \qquad {}+ \bigl[s^{3}+s^{4}+s^{5}+ \cdots+s^{k+2}\bigr]\sum_{i=1}^{k+1} \sum_{j=1}^{k+2}\beta _{i,j} \\ & \quad = \sum_{n=1}^{k}s^{k+3-n} \Biggl[\alpha_{n}+\sum_{i=1}^{k+1} \sum_{j=1}^{k+1}\beta_{i,j}\Biggr]< 1, \end{aligned}$$

and so \(\lambda<1\). We also have \(sA+sB+sF+s(C+E) = s(A+B+F+C+E) < 1\) (proved above) and

$$\begin{aligned}& sA + B + F +s^{2}(C+E) \\& \quad \le s^{2}[A+B+C+E+F] \\& \quad = s^{2}\bigl[s\alpha_{k}+s^{2} \alpha_{k-1}+s^{3}\alpha_{k-2}+\cdots+s^{k} \alpha_{1}\bigr] \\& \qquad {} + s^{2}\Biggl[s\sum_{i=1}^{k} \sum_{j=1}^{k}\beta_{i,j}+s^{2} \sum_{i=1}^{k-1}\sum _{j=1}^{k-1}\beta_{i,j}+\cdots+s^{k-1} \sum_{i=1}^{2}\sum _{j=1}^{2}\beta_{i,j}+s^{k} \beta_{1,1}\Biggr] \\& \qquad {} + s^{2}\Biggl[s\sum_{i=1}^{k} \beta_{i,k+1}+s^{2}\sum_{i=1}^{k-1} \sum_{j=k}^{k+1}\beta_{i,j}+ \cdots+s^{k-1}\sum_{i=1}^{2}\sum _{j=3}^{k+1}\beta_{i,j}+s^{k} \sum_{j=2}^{k+1}\beta_{1,j}\Biggr] \\& \qquad {} + s^{2}\Biggl[s\sum_{j=1}^{k} \beta_{k+1,j}+s^{2}\sum_{i=k}^{k+1} \sum_{j=1}^{k-1}\beta_{i,j}+ \cdots+s^{k-1}\sum_{i=3}^{k+1}\sum _{j=1}^{2}\beta_{i,j}+s^{k} \sum_{i=2}^{k+1}\beta_{i,1}\Biggr] \\& \qquad {} + s^{2}\Biggl[s\beta_{k+1,k+1}+s^{2}\sum _{i=k}^{k+1}\sum_{j=k}^{k+1} \beta _{i,j}+\cdots+s^{k-1}\sum_{i=3}^{k+1} \sum_{j=3}^{k+1}\beta_{i,j}+ s^{k}\sum_{i=2}^{k+1}\sum _{j=2}^{k+1}\beta_{i,j}\Biggr] \\& \quad = \bigl[s^{3}\alpha_{k}+s^{4} \alpha_{k-1}+s^{5}\alpha_{k-2}+\cdots+s^{k+2} \alpha _{1}\bigr]+s^{3}\sum_{i=1}^{k+1} \sum_{j=1}^{k+1}\beta_{i,j}+s^{4} \sum_{i=1}^{k+1}\sum _{j=1}^{k+1}\beta_{i,j} \\& \qquad {} + s^{5}\sum_{i=1}^{k+1} \sum_{j=1}^{k+1}\beta_{i,j}+\cdots +s^{k+2}\sum_{i=1}^{k+1}\sum _{j=1}^{k+1}\beta_{i,j} \\& \quad = \bigl[s^{3}\alpha_{k}+s^{4} \alpha_{k-1}+s^{5}\alpha_{k-2}+\cdots+s^{k+2} \alpha _{1}\bigr] \\& \qquad {} + \bigl[s^{3}+s^{4}+s^{5}+ \cdots+s^{k+2}\bigr]\sum_{i=1}^{k+1} \sum_{j=1}^{k+1}\beta _{i,j} \\& \quad = \sum_{n=1}^{k}s^{k+3-n} \Biggl[\alpha_{n}+\sum_{i=1}^{k+1} \sum_{j=1}^{k+1}\beta_{i,j}\Biggr]< 1, \end{aligned}$$

and so \(s\lambda<1\).

Thus, for all \(n,p\in N\),

$$\begin{aligned} d(gx_{n},gx_{n+p}) \leq& sd(gx_{n},gx_{n+1})+s^{2}d(gx_{n+1},gx_{n+2})+ \cdots+s^{p-1}d(gx_{n+(p-1)},gx_{n+p}) \\ =&sd_{n}+s^{2}d_{n+1}+\cdots+s^{p-1}d_{n+(p-1)} \\ \leq&s\lambda^{n}d_{0}+s^{2} \lambda^{n+1}d_{0}+\cdots+s^{p-1}\lambda ^{n+(p-1)}d_{0} \\ \leq& \frac{s\lambda^{n}}{1-s\lambda}d_{0} \rightarrow0\quad \mbox{as } n \rightarrow\infty. \end{aligned}$$

Thus \(\{gx_{n}\}\) is a Cauchy sequence. By completeness of \(g(X)\), there exists \(u\in X\) such that

$$ \lim_{n\rightarrow\infty}gx_{n}=u \mbox{ and there exists } p\in X \mbox{ such that } g(p)=u. $$
(3.5)

We shall show that u is a point of coincidence of f and g. Using a process similar to the one used in the calculation of \(d_{n+1}\), we obtain

$$\begin{aligned} d\bigl(g(p),f(p,\ldots,p)\bigr) \leq&s\bigl[d\bigl(g(p),y_{n+1}\bigr)+d \bigl(y_{n+1},f(p,p,\ldots, p)\bigr)\bigr] \\ \leq&s\bigl[d\bigl(g(p),y_{n+1}\bigr)+d(Fx_{n},Fp)\bigr] \\ \leq&s\bigl[d\bigl(g(p),y_{n+1}\bigr)+Ad(gx_{n},gp)+Bd \bigl(gx_{n},f(x_{n},x_{n},\ldots,x_{n}) \bigr) \\ &{}+Cd\bigl(gx_{n},f(p,p,\ldots,p)\bigr) \\ &{}+Ed\bigl(gp,f(x_{n},x_{n},\ldots,x_{n}) \bigr)+Fd\bigl(gp,f(p,p,\ldots,p)\bigr)\bigr]. \end{aligned}$$

It follows from (3.5) that

$$ d\bigl(g(p),f(p,\ldots,p)\bigr)\leq s(C+F)d\bigl(gp,f(p,p,\ldots,p)\bigr). $$
(3.6)

As \(s(C+F)<1\), we obtain \(F(p)=g(p)=f(p,p,\ldots,p)=u\). Thus, u is a point of coincidence of f and g. If \(u'\) is another point of coincidence of f and g, then there exists \(p'\in X\) such that \(F(p')=g(p')=f(p',p',\ldots,p')=u'\).

Then we have

$$\begin{aligned} d\bigl(u,u'\bigr) =&d\bigl(Fp,Fp'\bigr) \\ \leq&Ad\bigl(gp,gp'\bigr)+Bd\bigl(gp,f(p,p,\ldots,p)\bigr) \\ &{}+Cd\bigl(gp,f\bigl(p',p',\ldots,p' \bigr)\bigr) \\ &{}+Ed\bigl(gp',f(p,p,\ldots,p)\bigr)+Fd\bigl(gp',f \bigl(p',p',\ldots,p'\bigr)\bigr) \\ =&Ad\bigl(u,u'\bigr)+Bd(u,u)+Cd\bigl(u,u'\bigr)+Ed \bigl(u',u\bigr)+Fd\bigl(u',u\bigr) \\ =&(A+C+E+F)d\bigl(u,u'\bigr). \end{aligned}$$

As \(A+C+E+F<1\), we obtain from the above inequality that \(d(u,u')=0\), that is, \(u=u'\). Thus the point of coincidence u is unique. Further, if f and g are weakly compatible, then by Lemma 2.5, u is the unique common fixed point of f and g. □

Remark 3.2

Taking \(s=1\), \(g=I\) and \(\theta(t_{1},t_{2},t_{3},t_{4})=0\) in Theorem 3.1, we get Theorem 4 of Shukla et al. [13].

Remark 3.3

For \(s=1\), \(g=I\), \(\beta_{ij}=\delta_{k+1}\) for \(i=j\) (for all i), and \(L=1\), we obtain Theorem 2.1 of Khan et al. [8].

Remark 3.4

For \(s=1\), \(g=I\), \(\beta_{ij}=0\), \(\forall i,j \in\{1,2,\ldots,k+1\}\) and \(\theta(t_{1},t_{2},t_{3},t_{4})=\min\{t_{1},t_{2},t_{3},t_{4}\}\), we obtain the result of Pacurar [5].

Remark 3.5

For \(s=1\), \(g=I\), \(\alpha_{i}=0\), \(\beta_{ij}=a\) for \(i=j\), and \(L=0\), we obtain the result of Pacurar [4].

Remark 3.6

For \(s=1\), \(g=I\), \(\beta_{ij}=0\), \(\forall i,j \in\{1,2,\ldots ,k+1\}\), \(L=0\), we obtain the result of Presic [2].

Next we prove a generalized Ciric-Presic type fixed point theorem in a b-metric space. Consider a function \(\phi: R^{k}\rightarrow R\) such that

  1. 1.

    ϕ is an increasing function, i.e., \(x_{1}< y_{1},x_{2}< y_{2},\ldots,x_{k}< y_{k}\) implies \(\phi(x_{1},x_{2},\ldots,x_{k})< \phi (y_{1},y_{2},\ldots,y_{k})\);

  2. 2.

    \(\phi(t,t,\ldots,t)\leq t\) for all \(t\in R\);

  3. 3.

    ϕ is continuous in all variables.
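
For instance, \(\phi(x_{1},x_{2},\ldots,x_{k})=\max\{x_{1},x_{2},\ldots,x_{k}\}\) satisfies conditions 1-3; this is the choice used in Remark 3.8 below.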

Theorem 3.7

Let \((X,d)\) be a b-metric space with \(s \ge1\). For any positive integer k, let \(f:X^{k} \rightarrow X\) and \(g:X\rightarrow X\) be mappings satisfying the following conditions:

$$\begin{aligned}& f\bigl(X^{k}\bigr) \subseteq g(X), \end{aligned}$$
(3.7)
$$\begin{aligned}& d\bigl(f(x_{1},x_{2},\ldots, x_{k}), f(x_{2},x_{3},\ldots, x_{k+1})\bigr) \\& \quad \leq\lambda\phi\bigl(d(gx_{1},gx_{2}),d(gx_{2},gx_{3}),d(gx_{3},gx_{4}), \ldots ,d(gx_{k},gx_{k+1})\bigr), \end{aligned}$$
(3.8)

where \(x_{1},x_{2},\ldots,x_{k+1}\) are arbitrary elements in X, \(\lambda \in(0,\frac{1}{s^{k}})\),

$$ g(X)\textit{ is complete} $$
(3.9)

and

$$ d\bigl(f(u,u,\ldots,u),f(v,v,\ldots,v)\bigr)< d(gu,gv) $$
(3.10)

for all \(u,v\in X\). Then f and g have a coincidence point, i.e., \(C(f, g) \neq\emptyset\). In addition, if f and g are weakly compatible, then f and g have a unique common fixed point. Moreover, for arbitrary \(x_{1},x_{2},\ldots,x_{k}\in X\), the sequence \(\{y_{n}\}\) defined by \(y_{n}=gx_{n}\) for \(n=1,2,\ldots,k\) and \(y_{n+k} = g(x_{n+k}) = f(x_{n}, x_{n+1}, \ldots, x_{n+k-1})\) for \(n\geq1\) converges to the common fixed point of f and g.

Proof

For arbitrary \(x_{1},x_{2},\ldots,x_{k}\) in X, let

$$ R=\max\biggl(\frac{d(gx_{1},gx_{2})}{\theta },\frac{d(gx_{2},gx_{3})}{\theta^{2}},\ldots, \frac {d(gx_{k},f(x_{1},x_{2},\ldots,x_{k}))}{\theta^{k}}\biggr), $$
(3.11)

where \(\theta=\lambda^{\frac{1}{k}}\). By (3.7) we define the sequence \(\langle y_{n}\rangle\) in \(g(X)\) as \(y_{n}=gx_{n}\) for \(n = 1,2,\ldots, k\) and \(y_{n+k} = g(x_{n+k}) = f(x_{n}, x_{n+1}, \ldots, x_{n+k-1})\), \(n = 1,2,\ldots\) .

Let \(\alpha_{n} = d(y_{n}, y_{n+1})\). By the method of mathematical induction, we will prove that

$$ \alpha_{n}\leq R\theta^{n} \quad \mbox{for all }n. $$
(3.12)

Clearly, by the definition of R, (3.12) is true for \(n = 1,2, \ldots, k\). Let the k inequalities \(\alpha_{n}\leq R\theta^{n} ,\alpha_{n+1}\leq R\theta^{n+1},\ldots,\alpha _{n+k-1}\leq R\theta^{n+k-1}\) be the induction hypothesis. Then we have

$$\begin{aligned} \alpha_{n+k} =& d(y_{n+k}, y_{n+k+1}) \\ =&d\bigl(f(x_{n}, x_{n+1}, \ldots, x_{n+k-1}), f(x_{n+1}, x_{n+2}, \ldots, x_{n+k})\bigr) \\ \leq&\lambda\phi \bigl(d(gx_{n},gx_{n+1}),d(gx_{n+1},gx_{n+2}), \ldots,d(gx_{n+k-1},gx_{n+k})\bigr) \\ =&\lambda\phi({ \alpha_{n}, \alpha_{n+1},\ldots, \alpha_{n+k-1}}) \\ \leq&\lambda\phi\bigl({R\theta^{n}, R\theta^{n+1},\ldots,R \theta^{n+k-1}}\bigr) \\ \leq&\lambda\phi\bigl({R\theta^{n}, R\theta^{n},\ldots,R \theta^{n}}\bigr) \\ \leq&\lambda R\theta^{n} \\ =&R\theta^{n+k}. \end{aligned}$$

Thus the inductive proof of (3.12) is complete. Now, for \(n,p \in N\), we have

$$\begin{aligned} \begin{aligned} d(y_{n}, y_{n+p}) &\leq sd(y_{n}, y_{n+1})+s^{2}d(y_{n+1},y_{n+2})+ \cdots+s^{p-1}d(y_{n+p-1},y_{n+p}), \\ &\leq sR\theta^{n}+s^{2}R\theta^{n+1}+ \cdots+s^{p-1}R\theta^{n+p-1} \\ &\leq sR\theta^{n}\bigl(1+s\theta+s^{2} \theta^{2}+\cdots\bigr) \\ &= {sR\theta^{n} \over 1-s\theta}. \end{aligned} \end{aligned}$$

Hence the sequence \(\langle y_{n}\rangle\) is a Cauchy sequence in \(g(X)\) and since \(g(X)\) is complete, there exist \(v, u\in X\) such that \(\lim_{n\rightarrow\infty}y_{n} = v = g(u)\),

$$\begin{aligned} d\bigl(gu, f(u,u,\ldots,u)\bigr) \leq& s\bigl[d(gu, y_{n+k}) + d \bigl(y_{n+k}, f(u,u,\ldots ,u)\bigr)\bigr] \\ =&s\bigl[d(gu,y_{n+k})+d\bigl(f(x_{n}, x_{n+1}, \ldots, x_{n+k-1}),f(u,u,\ldots,u)\bigr)\bigr] \\ =& sd(gu, y_{n+k}) + sd\bigl(f(x_{n}, x_{n+1}, \ldots,x_{n+k-1}), f(u,u,\ldots,u)\bigr) \\ \leq& sd(gu,y_{n+k})+s^{2}d\bigl(f(u,u,\ldots,u),f(u,u, \ldots,x_{n})\bigr) \\ &{}+s^{3}d\bigl(f(u,u,\ldots,x_{n}),f(u,u, \ldots,x_{n},x_{n+1})\bigr) \\ &{}+\cdots+s^{k-1}d\bigl(f(u,x_{n},\ldots ,x_{n+k-2}),f(x_{n},x_{n+1},\ldots,x_{n+k-1}) \bigr) \\ \leq& sd(gu,y_{n+k})+s^{2}\lambda\phi\bigl\{ d(gu,gu),d(gu,gu),\ldots ,d(gu,gx_{n})\bigr\} \\ &{}+s^{3}\lambda\phi\bigl\{ d(gu, gu),d(gu, gu),\ldots,d(gu, gx_{n}),d(gx_{n}, gx_{n+1})\bigr\} +\cdots \\ &{}+s^{k-1}\lambda\phi\bigl\{ d(gu, gx_{n}),d(gx_{n}, gx_{n+1}),\ldots ,d(gx_{n+k-2}, gx_{n+k-1})\bigr\} \\ =&sd(gu,y_{n+k})+ s^{2}\lambda\phi\bigl(0,0, \ldots,d(gu,gx_{n})\bigr) \\ &{}+s^{3} \lambda\phi\bigl(0,0,\ldots,d(gu, gx_{n}),d(gx_{n}, gx_{n+1})\bigr)+\cdots \\ &{}+ s^{k-1}\lambda\phi\bigl(d(gu, gx_{n}),d(gx_{n}, gx_{n+1}),\ldots ,d(gx_{n+k-2}, gx_{n+k-1})\bigr). \end{aligned}$$

Taking the limit as n tends to infinity, we obtain \(d(gu, f(u,u,\ldots, u))\leq0\). Thus \(gu = f(u,u,\ldots,u)\), i.e., \(C(g,f)\neq\emptyset\), and \(\lim_{n\rightarrow\infty}y_{n} = v = g(u)=f(u,u,\ldots,u)\). Since g and f are weakly compatible, \(g(f(u,u,\ldots,u)) = f(gu,gu,\ldots,gu)\). By (3.10) we have

$$\begin{aligned} d(ggu,gu) =&d\bigl(gf(u,u,\ldots,u),f(u,u,\ldots,u)\bigr) \\ =&d\bigl(f(gu,gu,gu,\ldots,gu),f(u,u,\ldots,u)\bigr) \\ < &d(ggu,gu) \end{aligned}$$

implies \(d(ggu,gu)=0\) and so \(ggu=gu\). Hence we have \(gu=ggu=g(f(u,u,\ldots,u))=f(gu,gu, gu,\ldots,gu)\), i.e., gu is a common fixed point of g and f, and \(\lim_{n\rightarrow\infty} y_{n}=g(u)\). Now suppose that x, y are two fixed points of g and f. Then

$$\begin{aligned} d(x,y) =& d\bigl(f(x,x,x,\ldots,x),f(y,y,y,\ldots,y)\bigr) \\ < &d(gx,gy) \\ =&d(x,y). \end{aligned}$$

This implies \(x=y\). Hence the common fixed point is unique. □

Remark 3.8

Taking \(s=1\), \(g=I\) and \(\phi(x_{1},x_{2}, \ldots, x_{k}) = \max\{ x_{1},x_{2}, \ldots, x_{k}\}\) in Theorem 3.7, we obtain Theorem 1.2, i.e., the result of Ciric and Presic [3].

Remark 3.9

For \(\lambda\in(0,\frac{1}{s^{k+1}})\), we can drop the condition (3.10) of Theorem 3.7. In fact we have the following.

Theorem 3.10

Let \((X,d)\) be a b-metric space with \(s\ge 2\). For any positive integer k, let \(f:X^{k} \rightarrow X\) and \(g:X\rightarrow X\) be mappings satisfying conditions (3.7), (3.8) and (3.9) with \(\lambda\in(0,\frac{1}{s^{k+1}})\). Then all conclusions of Theorem  3.7 hold.

Proof

As proved in Theorem 3.7, there exist \(v, u\in X\) such that \(\lim_{n\rightarrow\infty}y_{n} = v = g(u)=f(u,u,\ldots,u)\), i.e., \(C(g,f)\neq\emptyset\). Since g and f are weakly compatible, \(g(f(u,u,\ldots,u)) = f(gu,gu,gu,\ldots,gu)\). By (3.8) we have

$$\begin{aligned} d(ggu,gu) =&d\bigl(gf(u,u,\ldots,u),f(u,u,\ldots,u)\bigr) \\ =&d\bigl(f(gu,gu,gu,\ldots,gu),f(u,u,\ldots,u)\bigr) \\ \leq& sd\bigl(f(gu,gu,gu,\ldots,gu),f(gu,gu,\ldots,gu,u)\bigr) \\ &{}+s^{2}d\bigl(f(gu,gu,\ldots,gu,u),f(gu,gu,\ldots,u,u)\bigr) \\ &{}+\cdots+s^{k-1}d\bigl(f(gu,gu,\ldots,u,u),f(u,u,\ldots,u)\bigr) \\ &{}+s^{k-1}d\bigl(f(gu,u,\ldots,u,u),f(u,u,\ldots,u)\bigr) \\ \leq& s\lambda\phi\bigl(d(ggu,ggu),\ldots,d(ggu,ggu),d(ggu,gu)\bigr) \\ &{}+s^{2}\lambda\phi\bigl(d(ggu,ggu),\ldots,d(ggu,gu),d(gu,gu)\bigr) \\ &{}+ \cdots +s^{k-1}\lambda\phi\bigl(d(ggu,gu),\ldots,d(gu,gu),d(gu,gu) \bigr) \\ =&s\lambda\phi\bigl(0,0,0,\ldots,d(ggu,gu)\bigr)+s^{2}\lambda\phi \bigl(0,0,\ldots,0,d(ggu,gu),0\bigr) \\ &{}+\cdots+s^{k-1}\lambda\phi\bigl(d(ggu,gu),0,0,\ldots,0\bigr) \\ =&s\lambda\bigl[1+s+s^{2}+s^{3}+\cdots+s^{k-2}+s^{k-2} \bigr]d(ggu,gu) \\ \leq&s\lambda\bigl[1+s+s^{2}+s^{3}+\cdots+s^{k-2}+s^{k-1} \bigr]d(ggu,gu) \\ =&s\lambda\frac{s^{k}-1}{s-1}d(ggu,gu). \end{aligned}$$

Since \(\lambda<\frac{1}{s^{k+1}}\) and \(s\geq2\), we have \(s\lambda\frac{s^{k}-1}{s-1}<\frac{s^{k}-1}{s^{k}(s-1)}\leq\frac{s^{k}-1}{s^{k}}<1\), which implies \(d(ggu,gu)=0\) and so \(ggu=gu\). Hence we have \(gu=ggu=g(f(u,u, \ldots,u))=f(gu,gu,gu,\ldots,gu)\), i.e., gu is a common fixed point of g and f, and \(\lim_{n\rightarrow\infty} y_{n}=g(u)\). Now suppose that x, y are two fixed points of g and f. Then

$$\begin{aligned} d(x,y) =& d\bigl(f(x,x,x,\ldots,x),f(y,y,y,\ldots,y)\bigr) \\ \leq& sd\bigl(f(x,x,\ldots,x),f(x,x,\ldots,x,y)\bigr)+s^{2} d \bigl(f(x,x,\ldots,x,y), \\ &f(x,x,x,\ldots,x,y,y)\bigr) +\cdots+s^{k-1}d\bigl(f(x,x,y, \ldots,y),f(y,y,\ldots,y)\bigr) \\ &{}+s^{k-1}d\bigl(f(x,y,y,\ldots,y),f(y,y,\ldots,y)\bigr) \\ \leq&s \lambda\phi\bigl\{ d(fx,fx),d(fx,fx),\ldots,d(fx,fy)\bigr\} +s^{2}\lambda \phi\bigl\{ d(fx, fx), \\ &d(fx, fx),\ldots,d(fx,fy),d(fy, fy)\bigr\} \\ &{}+\cdots+s^{k-1}\lambda\phi\bigl\{ d(fx,fy),d(fy,fy),\ldots,d(fy,fy) \bigr\} \\ =& s\lambda\phi\bigl(0,0,\ldots,d(fx,fy)\bigr)+s^{2} \lambda\phi \bigl(0,0,\ldots,d(fx, fy),0\bigr)+\cdots \\ &{}+s^{k-1} \lambda\phi\bigl(d(fx, fy),0,0,0,\ldots,0\bigr) \\ =&\lambda\bigl[s+s^{2}+s^{3}+\cdots+s^{k-1}+s^{k-1} \bigr] d(fx, fy) \\ =&s\lambda\bigl[1+s+s^{2}+s^{3}+\cdots+s^{k-2}+s^{k-2} \bigr] d(fx, fy) \\ \leq&s\lambda\bigl[1+s+s^{2}+s^{3}+\cdots+s^{k-2}+s^{k-1} \bigr] d(fx, fy) \\ =&s\lambda\frac{s^{k}-1}{s-1}d(fx, fy). \\ =&s\lambda\frac{s^{k}-1}{s-1}d(x, y). \end{aligned}$$

This implies \(x=y\). Hence the common fixed point is unique. □

Example 3.11

Let \(X=R\) and \(d:X\times X\rightarrow[0,\infty)\) be defined by \(d(x,y)=| x-y|^{3}\). Then d is a b-metric on X with \(s=4\). Let \(f:X^{2} \rightarrow X\) and \(g:X \rightarrow X\) be defined as follows:

$$\begin{aligned}& f(x,y) = \frac{x^{2}+y^{2}}{13}+\frac{18}{13} \quad \mbox{if }(x,y) \in R^{2}, \\& gx = x^{2}-2\quad \mbox{if }x \in R. \end{aligned}$$

We will prove that f and g satisfy condition (3.8):

$$\begin{aligned} d\bigl(f(x,y),f(y,z)\bigr) =& \bigl\vert {f(x,y)-f(y,z)}\bigr\vert ^{3} \\ =&\biggl\vert \frac{x^{2}-z^{2}}{13}\biggr\vert ^{3}= \biggl\vert \frac {x^{2}-y^{2}+y^{2}-z^{2}}{13}\biggr\vert ^{3} \\ \leq&4\biggl(\biggl\vert \frac{x^{2}-y^{2}}{13}\biggr\vert ^{3}+ \biggl\vert \frac{y^{2}-z^{2}}{13}\biggr\vert ^{3}\biggr) \\ =& \frac{4}{13^{3}}\bigl[\bigl\vert x^{2}-y^{2}\bigr\vert ^{3}+ \bigl\vert y^{2}-z^{2}\bigr\vert ^{3}\bigr] \\ =& \frac{8}{13^{3}}\frac{1}{2}\bigl[\bigl\vert x^{2}-y^{2} \bigr\vert ^{3}+ \bigl\vert y^{2}-z^{2}\bigr\vert ^{3}\bigr] \\ \leq& \frac{8}{13^{3}}\max \bigl\{ \bigl\vert x^{2}-y^{2}\bigr\vert ^{3}, \bigl\vert y^{2}-z^{2}\bigr\vert ^{3}\bigr\} \\ =& \frac{8}{13^{3}}\max \bigl\{ d(gx,gy),d(gy,gz)\bigr\} . \end{aligned}$$

Thus, f and g satisfy condition (3.8) with \(\phi=\max\) and \(\lambda= \frac{8}{13^{3}}\in(0,\frac{1}{4^{3}})\). Clearly \(2\in C(f, g)\), and f and g commute at 2. Finally, 2 is the unique common fixed point of f and g. But f and g do not satisfy condition (3.10): at \(x=-1\) and \(y=1\) we have \(d(f(x,x),f(y,y))=d(f(-1,-1),f(1,1))=d(\frac{2}{13}+\frac {18}{13},\frac{2}{13}+\frac{18}{13})=0=d(-1,-1)=d(g(-1),g(1))=d(gx,gy)\), so the strict inequality in (3.10) fails; thus Theorem 3.10 (with \(k=2\), \(s=4\)) applies here, while Theorem 3.7 does not.
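
As a quick numerical illustration of Example 3.11 (our own sketch, added here), the scheme of Theorem 3.7 with \(k=2\) reads \(y_{n+2}=g(x_{n+2})=f(x_{n},x_{n+1})\); since \(f>0\), we may choose the nonnegative g-preimage \(x_{n+2}=\sqrt{f(x_{n},x_{n+1})+2}\), and the iterates approach the common fixed point 2.

```python
# Numerical check of Example 3.11: f(x, y) = (x^2 + y^2)/13 + 18/13, g(x) = x^2 - 2.
# Scheme of Theorem 3.7 with k = 2: y_{n+2} = g(x_{n+2}) = f(x_n, x_{n+1}).
# Since f is positive, the nonnegative g-preimage is x_{n+2} = sqrt(f(x_n, x_{n+1}) + 2).
import math

def f(x, y):
    return (x * x + y * y) / 13.0 + 18.0 / 13.0

def g(x):
    return x * x - 2.0

xs = [5.0, -7.0]                    # arbitrary starting points x_1, x_2
for _ in range(60):
    xs.append(math.sqrt(f(xs[-2], xs[-1]) + 2.0))

print(xs[-1], g(xs[-1]), f(xs[-1], xs[-1]))   # all three values approach 2.0
```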

4 Application to matrix equation

In this section we apply Theorem 3.7 to study the existence of solutions of the nonlinear matrix equation

$$ X=Q+\sum_{i=1}^{m}A_{i}X^{\delta _{i}}A_{i}^{*}, \quad 0< |\delta_{i}|< 1, $$
(4.1)

where Q is an \(n\times n\) positive semidefinite matrix and \(A_{i}\)’s are nonsingular \(n\times n\) matrices, or Q is an \(n\times n\) positive definite matrix and \(A_{i}\)’s are arbitrary \(n\times n\) matrices, and a positive definite solution X is sought. Here \(A_{i}^{*}\) denotes the conjugate transpose of the matrix \(A_{i}\). The existence and uniqueness of positive definite solutions and numerical methods for finding a solution of (4.1) have recently been studied by many authors (see [25–30]). The Thompson metric on the open convex cone \(P(N)\) (\(N\geq2\)), the set of all \(N\times N\) Hermitian positive definite matrices, is defined by

$$ d(A,B)=\max\bigl\{ \log M(A/B),\log M(B/A)\bigr\} , $$
(4.2)

where \(M(A/B)=\inf\{\lambda>0:A\leq\lambda B\}=\lambda _{\mathrm{max}}(B^{-1/2}AB^{-1/2})\), the maximal eigenvalue of \(B^{-1/2}AB^{-1/2}\). Here \(X\leq Y\) means that \(Y-X\) is positive semidefinite and \(X< Y\) means that \(Y-X\) is positive definite. Thompson [31] has proved that \(P(N)\) is a complete metric space with respect to the Thompson metric d and \(d(A,B)=\| \log(A^{-1/2}BA^{-1/2})\|\), where \(\|\cdot\|\) stands for the spectral norm. The Thompson metric exists on any open normal convex cone of real Banach spaces [31, 32]; in particular, the open convex cone of positive definite operators of a Hilbert space. It is invariant under the matrix inversion and congruence transformations:

$$ d(A,B)=d\bigl(A^{-1},B^{-1}\bigr)=d \bigl(MAM^{*},MBM^{*}\bigr) $$
(4.3)

for any nonsingular matrix M. One remarkable and useful result is the nonpositive curvature property of the Thompson metric:

$$ d\bigl(X^{r},Y^{r}\bigr)\leq rd(X,Y), \quad r \in[0,1]. $$
(4.4)

By the invariant properties of the metric, we then have

$$ d\bigl(MX^{r}M^{*},MY^{r}M^{*}\bigr)\leq\vert r \vert d(X,Y),\quad r\in[-1,1] $$
(4.5)

for any \(X,Y\in P(N)\) and any nonsingular matrix M. Proceeding as in [30], we prove the following lemma.

Lemma 4.1

For any \(A_{1},A_{2},\ldots,A_{k}\in P(N)\) and \(B_{1},B_{2},\ldots ,B_{k}\in P(N)\), we have \(d(A_{1}+A_{2}+\cdots+A_{k},B_{1}+B_{2}+\cdots+B_{k})\leq \max\{ d(A_{1},B_{1}),d(A_{2},B_{2}),\ldots,d(A_{k},B_{k})\}\).

Proof

Without loss of generality we can assume that \(d(A_{1},B_{1})\leq d(A_{2},B_{2})\leq\cdots\leq d(A_{k},B_{k})=\log r\). Then \(B_{i}\leq rA_{i}\) and \(A_{i}\leq rB_{i}\) for \(i=1,2,\ldots,k\). Summing these inequalities, we get \(A_{1}+A_{2}+\cdots+A_{k}\leq r[B_{1}+B_{2}+\cdots+B_{k}]\) and \(B_{1}+B_{2}+\cdots+B_{k}\leq r[A_{1}+A_{2}+\cdots+A_{k}]\). Hence \(d(A_{1}+A_{2}+\cdots+A_{k},B_{1}+B_{2}+\cdots+B_{k})\leq \log r=d(A_{k},B_{k})=\max\{ d(A_{1},B_{1}),d(A_{2},B_{2}),\ldots,d(A_{k},B_{k})\}\). □
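
For readers who want to experiment numerically, the following short Python sketch (ours, assuming NumPy and SciPy) evaluates the Thompson metric via \(d(A,B)=\|\log(A^{-1/2}BA^{-1/2})\|\) in the spectral norm and spot-checks the inequality of Lemma 4.1 on random positive definite matrices.

```python
# Thompson metric d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||  (spectral norm),
# with a numerical spot-check of Lemma 4.1:
#   d(A1 + A2, B1 + B2) <= max{ d(A1, B1), d(A2, B2) }.
import numpy as np
from scipy.linalg import fractional_matrix_power, logm

def thompson(A, B):
    S = fractional_matrix_power(A, -0.5)          # A^{-1/2}
    return np.linalg.norm(logm(S @ B @ S), 2)     # spectral norm of the matrix logarithm

def random_spd(n, rng):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)                # symmetric positive definite

rng = np.random.default_rng(0)
A1, A2, B1, B2 = (random_spd(3, rng) for _ in range(4))
lhs = thompson(A1 + A2, B1 + B2)
rhs = max(thompson(A1, B1), thompson(A2, B2))
print(lhs, rhs, lhs <= rhs + 1e-10)               # Lemma 4.1: the inequality holds
```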

For arbitrarily chosen positive definite matrices \(X_{n-r}, X_{n-(r-1)},\ldots, X_{n}\), consider the iterative sequence of matrices, given by

$$ X_{n+1}=Q+A_{1}^{*}X_{n-r}^{\alpha _{1}}A_{1}+A_{2}^{*}X_{n-(r-1)}^{\alpha_{2}}A_{2}+ \cdots+A_{r+1}^{*}X_{n}^{\alpha _{r+1}}A_{r+1}, $$
(4.6)

where \(\alpha_{1}, \alpha_{2},\ldots,\alpha_{r+1}\) are real numbers.

Theorem 4.2

Suppose that \(\lambda=\max\{\vert\alpha _{1}\vert,\vert\alpha_{2}\vert,\ldots,\vert\alpha_{r+1}\vert\} \in(0,1)\).

  1. (i)

    Equation (4.6) has a unique equilibrium point in \(P(N)\), that is, there exists a unique \(U\in P(N)\) such that

    $$ U=Q+A_{1}^{*}U^{\alpha_{1}}A_{1}+A_{2}^{*}U^{\alpha _{2}}A_{2}+ \cdots+A_{r+1}^{*}U^{\alpha_{r+1}}A_{r+1}. $$
    (4.7)
  2. (ii)

    The iterative sequence \(\{X_{n}\}\) defined by (4.6) converges to the unique solution of (4.1).

Proof

Define the mapping \(f:P(N)\times P(N)\times \cdots\times P(N)\rightarrow P(N)\) (with \(k=r+1\) factors) by

$$ f(X_{1},X_{2},\ldots, X_{k})=Q+A_{1}^{*}X_{1}^{\alpha_{1}}A_{1}+A_{2}^{*}X_{2}^{\alpha_{2}}A_{2}+ \cdots +A_{r+1}^{*}X_{k}^{\alpha_{r+1}}A_{r+1}, $$
(4.8)

where \(X_{1},X_{2},\ldots, X_{k}\in P(N)\).

For all \(X_{n-r},X_{n-(r-1)},X_{n-(r-2)},\ldots, X_{n+1}\in P(N)\), we have

$$\begin{aligned}& d\bigl(f(X_{n-r},X_{n-(r-1)},\ldots, X_{n}),f(X_{n-(r-1)},X_{n-(r-2)},\ldots, X_{n+1})\bigr) \\& \quad =d\bigl(Q+A_{1}^{*}X_{n-r}^{\alpha_{1}}A_{1}+A_{2}^{*}X_{n-(r-1)}^{\alpha_{2}}A_{2}+\cdots+A_{r+1}^{*}X_{n}^{\alpha_{r+1}}A_{r+1}, \\& \qquad Q+A_{1}^{*}X_{n-(r-1)}^{\alpha_{1}}A_{1}+A_{2}^{*}X_{n-(r-2)}^{\alpha_{2}}A_{2}+\cdots+A_{r+1}^{*}X_{n+1}^{\alpha_{r+1}}A_{r+1}\bigr) \\& \quad \leq\max\bigl\{ d\bigl(A_{1}^{*}X_{n-r}^{\alpha_{1}}A_{1},A_{1}^{*}X_{n-(r-1)}^{\alpha_{1}}A_{1}\bigr),d\bigl(A_{2}^{*}X_{n-(r-1)}^{\alpha_{2}}A_{2},A_{2}^{*}X_{n-(r-2)}^{\alpha_{2}}A_{2}\bigr), \\& \qquad \ldots, d\bigl(A_{r+1}^{*}X_{n}^{\alpha_{r+1}}A_{r+1},A_{r+1}^{*}X_{n+1}^{\alpha_{r+1}}A_{r+1}\bigr)\bigr\} \quad (\mbox{by Lemma 4.1}) \\& \quad \leq\max\bigl\{ \vert\alpha_{1}\vert d(X_{n-r},X_{n-(r-1)}),\vert\alpha_{2}\vert d(X_{n-(r-1)},X_{n-(r-2)}),\ldots,\vert\alpha_{r+1}\vert d(X_{n},X_{n+1})\bigr\} \quad (\mbox{by (4.5)}) \\& \quad \leq\max\bigl\{ \vert \alpha_{1}\vert ,\vert \alpha_{2}\vert,\ldots,\vert \alpha_{r+1}\vert\bigr\} \max \bigl\{ d(X_{n-r},X_{n-(r-1)}),d(X_{n-(r-1)},X_{n-(r-2)}),\ldots, d(X_{n},X_{n+1})\bigr\} \\& \quad =\lambda \max \bigl\{ d(X_{n-r},X_{n-(r-1)}),d(X_{n-(r-1)},X_{n-(r-2)}),\ldots, d(X_{n},X_{n+1})\bigr\} \end{aligned}$$
(4.9)

for all \(X_{n-r},X_{n-(r-1)},X_{n-(r-2)},\ldots, X_{n+1}\in P(N)\). Also, for all \(X,Y\in P(N)\), we have

$$\begin{aligned}& d\bigl(f(X,X,\ldots, X),f(Y,Y,\ldots, Y)\bigr) \\& \quad = d\bigl(Q+A_{1}^{*}X^{\alpha_{1}}A_{1}+A_{2}^{*}X^{\alpha_{2}}A_{2}+ \cdots +A_{r+1}^{*}X^{\alpha_{r+1}}A_{r+1}, \\& \qquad Q+A_{1}^{*}Y^{\alpha_{1}}A_{1}+A_{2}^{*}Y^{\alpha_{2}}A_{2}+ \cdots +A_{r+1}^{*}Y^{\alpha_{r+1}}A_{r+1}\bigr) \\& \quad \leq \max\bigl\{ d\bigl(A_{1}^{*}X^{\alpha_{1}}A_{1},A_{1}^{*}Y^{\alpha_{1}}A_{1}\bigr),d\bigl(A_{2}^{*}X^{\alpha_{2}}A_{2},A_{2}^{*}Y^{\alpha_{2}}A_{2}\bigr), \\& \qquad \ldots, d\bigl(A_{r+1}^{*}X^{\alpha_{r+1}}A_{r+1},A_{r+1}^{*}Y^{\alpha_{r+1}}A_{r+1}\bigr)\bigr\} \\& \quad \leq \max\bigl\{ \vert\alpha_{1}\vert d(X,Y),\vert \alpha_{2}\vert d(X,Y), \ldots,\vert\alpha_{r+1}\vert d(X,Y)\bigr\} \\& \quad = \max\bigl\{ \vert \alpha_{1}\vert ,\vert \alpha_{2}\vert,\ldots,\vert \alpha_{r+1}\vert\bigr\} d(X,Y) \\& \quad = \lambda d(X,Y) \\& \quad < d(X,Y). \end{aligned}$$

Since \(\lambda\in(0,1)\), (i) and (ii) follow immediately from Theorem 3.7 with \(s=1\), \(g=I\) and \(\phi=\max\). □

Numerical experiment illustrating the above convergence algorithm

Consider the nonlinear matrix equation

$$ X=Q+A^{*}X^{\frac{1}{2}}A+B^{*}X^{\frac {1}{3}}B+C^{*}X^{\frac{1}{4}}C, $$
(4.10)

where

$$\begin{aligned}& A=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 14/3 &1/3& 1/4 \\ 2/15 &1/12 &1/23 \\ 3/10 &9/20& 11/4 \end{array}\displaystyle \right ),\qquad B=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 2/5& 3/2& 4/6 \\ 10/4& 6/13& 7/46 \\ 5/2& 4/7& 6/13 \end{array}\displaystyle \right ) , \\& C=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 1/3& 19/24 &22/55 \\ 17/10& 27/15 &45/17 \\ 13/8& 1/3& 1/4 \end{array}\displaystyle \right ), \qquad Q=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 1 &2& 3 \\ 2 &6 &4 \\ 1 &2 &7 \end{array}\displaystyle \right ). \end{aligned}$$

We define the iterative sequence \(\{X_{n}\}\) by

$$ X_{n+1}=Q+A^{*}X_{n-2}^{\frac {1}{2}}A+B^{*}X_{n-1}^{\frac{1}{3}}B+C^{*}X_{n}^{\frac{1}{4}}C. $$
(4.11)

Let \(R_{m}\) (\(m\geq2\)) be the residual error at the iteration m, that is, \(R_{m}=\| X_{m+1}-(Q+A^{*}X_{m+1}^{\frac {1}{2}}A+B^{*}X_{m+1}^{\frac{1}{3}}B+C^{*}X_{m+1}^{\frac{1}{4}}C)\| \), where \(\|\cdot\|\) is the spectral norm. For initial values

$$\begin{aligned}& X_{0}=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 1 &0& 0 \\ 0 &1 &0 \\ 0 &0 &1 \end{array}\displaystyle \right ),\qquad X_{1}=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 1 &1& 0 \\ 1 &1 &0 \\ 1 &0 &1 \end{array}\displaystyle \right ), \\& X_{2}=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 1 &1&-1 \\ -1 &1 &1 \\ -1 &1 &1 \end{array}\displaystyle \right ), \end{aligned}$$

we computed the successive iterations and the error \(R_{m}\) using MATLAB and found that after thirty-five iterations the sequence given by (4.11) converges to

$$U=X_{35}=\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 639.1810& 54.1681 &107.3574 \\ 54.1285& 44.7768 & 44.1469 \\ 104.3977& 42.1095& 112.5509 \end{array}\displaystyle \right ), $$

which is clearly a solution of (4.10). The convergence history of algorithm (4.11) is given in Figure 1.
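
For readers who wish to reproduce the experiment, here is a minimal Python (NumPy/SciPy) sketch of iteration (4.11); the authors' computation was done in MATLAB, and for simplicity this sketch starts all three initial matrices at the identity rather than the \(X_{0},X_{1},X_{2}\) listed above (by Theorem 4.2 the limit does not depend on the positive definite starting values).

```python
# Iteration (4.11) for equation (4.10), with the matrices A, B, C, Q from the text.
# All data are real, so the conjugate transpose A* is simply A.T here.
import numpy as np
from scipy.linalg import fractional_matrix_power as mpow

A = np.array([[14/3, 1/3, 1/4], [2/15, 1/12, 1/23], [3/10, 9/20, 11/4]])
B = np.array([[2/5, 3/2, 4/6], [10/4, 6/13, 7/46], [5/2, 4/7, 6/13]])
C = np.array([[1/3, 19/24, 22/55], [17/10, 27/15, 45/17], [13/8, 1/3, 1/4]])
Q = np.array([[1, 2, 3], [2, 6, 4], [1, 2, 7]], dtype=float)

def rhs(X2, X1, X0):
    """Right-hand side of (4.11): X2 = X_{n-2}, X1 = X_{n-1}, X0 = X_n."""
    return (Q + A.T @ mpow(X2, 1/2) @ A
              + B.T @ mpow(X1, 1/3) @ B
              + C.T @ mpow(X0, 1/4) @ C)

Xs = [np.eye(3), np.eye(3), np.eye(3)]       # simplified starting values (all identity)
for m in range(40):
    Xs.append(np.real(rhs(Xs[-3], Xs[-2], Xs[-1])))   # drop negligible imaginary round-off

X = Xs[-1]
residual = np.linalg.norm(X - np.real(rhs(X, X, X)), 2)   # residual R_m in the spectral norm
print(np.round(X, 4))
print("residual:", residual)
```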

Figure 1. Convergence history for Equation (4.11).

References

  1. Presic, SB: Sur la convergence des suites. C. R. Acad. Sci. Paris 260, 3828-3830 (1965)

  2. Presic, SB: Sur une classe d’inequations aux differences finite et sur la convergence de certaines suites. Publ. Inst. Math. (Belgr.) 5(19), 75-78 (1965)

  3. Ciric, LB, Presic, SB: On Presic type generalisation of Banach contraction principle. Acta Math. Univ. Comen. LXXVI(2), 143-147 (2007)

  4. Pacurar, M: Approximating common fixed points of Presic-Kannan type operators by a multi-step iterative method. An. Ştiinţ. Univ. ‘Ovidius’ Constanţa 17(1), 153-168 (2009)

  5. Pacurar, M: Fixed points of almost Presic operators by a k-step iterative method. An. Ştiinţ. Univ. ‘Al.I. Cuza’ Iaşi, Mat. LVII, 199-210 (2011)

  6. Rao, KPR, Mustaq Ali, M, Fisher, B: Some Presic type generalizations of the Banach contraction principle. Math. Morav. 15(1), 41-47 (2011)

  7. Pacurar, M: Common fixed points for almost Presic type operators. Carpath. J. Math. 28(1), 117-126 (2012)

  8. Khan, MS, Berzig, M, Samet, B: Some convergence results for iterative sequences of Presic type and applications. Adv. Differ. Equ. (2012). doi:10.1186/1687-1847-2012-38

  9. George, R, Reshma, KP, Rajagopalan, R: A generalised fixed point theorem of Presic type in cone metric spaces and application to Markov process. Fixed Point Theory Appl. (2011). doi:10.1186/1687-1812-2011-85

  10. Shukla, S: Presic type results in 2-Banach spaces. Afr. Math. (2013). doi:10.1007/s13370-013-0174-2

  11. Shukla, S, Fisher, B: A generalization of Presic type mappings in metric-like spaces. J. Oper. 2013, Article ID 368501 (2013)

  12. Shukla, S, Sen, R: Set-valued Presic-Reich type mappings in metric spaces. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. (2012). doi:10.1007/s13398-012-0114-2

  13. Shukla, S, Radenovic, S, Pantelic, S: Some fixed point theorems for Presic-Hardy-Rogers type contractions in metric spaces. J. Math. 2013, Article ID 295093 (2013). doi:10.1155/2013/295093

  14. Chen, YZ: A Presic type contractive condition and its applications. Nonlinear Anal. (2009). doi:10.1016/j.na.2009.03.006

  15. Bakhtin, IA: The contraction mapping principle in quasimetric spaces. Funct. Anal. Unianowsk Gos. Pedagog. Inst. 30, 26-37 (1989)

  16. Boriceanu, M: Strict fixed point theorems for multivalued operators in b-metric spaces. Int. J. Mod. Math. 4(3), 285-301 (2009)

  17. Boriceanu, M, Bota, M, Petrusel, A: Multivalued fractals in b-metric spaces. Cent. Eur. J. Math. 8(2), 367-377 (2010)

  18. Bota, M, Molnar, A, Csaba, V: On Ekeland’s variational principle in b-metric spaces. Fixed Point Theory 12, 21-28 (2011)

  19. Plebaniak, R: New generalized pseudodistance and coincidence point theorem in a b-metric space. Fixed Point Theory Appl. 2013, 270 (2013)

  20. Czerwik, S: Contraction mappings in b-metric spaces. Acta Math. Inform. Univ. Ostrav. 1, 5-11 (1993)

  21. Czerwik, S: Nonlinear set-valued contraction mappings in b-metric spaces. Atti Semin. Mat. Fis. Univ. Modena 46, 263-276 (1998)

  22. Czerwik, S, Dlutek, K, Singh, SL: Round-off stability of iteration procedures for operators in b-metric spaces. J. Natur. Phys. Sci. 11, 87-94 (1997)

  23. Czerwik, S, Dlutek, K, Singh, SL: Round-off stability of iteration procedures for set valued operators in b-metric spaces. J. Natur. Phys. Sci. 15, 1-8 (2001)

  24. Pacurar, M: A multi-step iterative method for approximating common fixed points of Presic-Rus type operators on metric spaces. Stud. Univ. Babeş-Bolyai, Math. LV(1), 149-162 (2010)

  25. Ferrante, A, Levy, B: Hermitian solution of the equation \(X= Q+N^{*}X^{-1}N\). Linear Algebra Appl. 247, 359-373 (1996)

  26. Huang, M, Huang, C, Tsai, T: Application of Hilbert’s projective metric to a class of positive nonlinear operators. Linear Algebra Appl. 413, 202-211 (2006)

  27. Hasanov, VI: Positive definite solutions of matrix equations \(X\pm A^{T}X^{-q}A=Q\). Linear Algebra Appl. 404, 166-182 (2005)

  28. Duan, X, Liao, A, Tang, B: On the nonlinear matrix equation \(X-\sum_{i=1}^{m}A_{i}^{*}X^{\delta_{i}}A_{i}=Q\). Linear Algebra Appl. 429, 110-121 (2008)

  29. Liu, XG, Gao, H: On the positive definite solutions of the matrix equation \(X^{s}\pm A^{T}X^{-t}A=I_{n}\). Linear Algebra Appl. 368, 83-97 (2003)

  30. Lim, Y: Solving the nonlinear matrix equation \(X=Q+\sum_{i=1}^{m}M_{i}X^{\delta_{i}}M_{i}^{*}\) via a contraction principle. Linear Algebra Appl. 430, 1380-1383 (2009)

  31. Thompson, AC: On certain contraction mappings in a partially ordered vector space. Proc. Am. Math. Soc. 14, 438-443 (1963)

  32. Nussbaum, RD: Hilbert’s projective metric and iterated nonlinear maps. Mem. Am. Math. Soc. 75(391) (1988)


Acknowledgements

The authors are thankful to the learned referees for their valuable comments which helped in bringing this paper to its present form. The second, third and fourth authors would like to thank the Deanship of Scientific Research, Salman bin Abdulaziz University, Al Kharj, Kingdom of Saudi Arabia for the financial assistance provided. The research of second, third and fourth authors is supported by the Deanship of Scientific Research, Salman bin Abdulaziz University, Alkharj, Kingdom of Saudi Arabia, Research Grant No. 2014/01/2046.

Author information


Correspondence to Reny George.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally in this work. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Pathak, H.K., George, R., Nabwey, H.A. et al. Some generalized fixed point results in a b-metric space and application to matrix equations. Fixed Point Theory Appl 2015, 101 (2015). https://doi.org/10.1186/s13663-015-0343-0
