# Fixed point theorems in generalized metric spaces with applications to computer science

Maryam A. Alghamdi^{1}, Naseer Shahzad^{2} (email author) and Oscar Valero^{3}

**2013**:118

https://doi.org/10.1186/1687-1812-2013-118

© Alghamdi et al.; licensee Springer 2013

**Received: **3 January 2013

**Accepted: **17 April 2013

**Published: **3 May 2013

## Abstract

In 1994, Matthews introduced the notion of a partial metric space in order to obtain a suitable mathematical tool for program verification (Matthews in Ann. N.Y. Acad. Sci. 728:183-197, 1994). He gave an application of this new structure to formulate a suitable test for lazy data flow deadlock in Kahn’s model of parallel computation by means of a partial metric version of the celebrated Banach fixed point theorem (Matthews in Theor. Comput. Sci. 151:195-205, 1995). In this paper, motivated by the utility of partial metrics in computer science, we discuss whether they are a suitable tool for asymptotic complexity analysis of algorithms. Concretely, we show that the Matthews fixed point theorem does not constitute, in principle, an appropriate tool for the aforementioned purpose. Inspired by the preceding fact, we prove two fixed point theorems which provide the mathematical basis for a new technique to carry out asymptotic complexity analysis of algorithms via partial metrics. Furthermore, in order to illustrate and to validate the developed theory, we apply our results to analyze the asymptotic complexity of two celebrated recursive algorithms.

**MSC:**47H10, 54E50, 54F05, 68Q25, 68W40.


## 1 Introduction

When a computational program uses a recursion process to find the solution to a problem, such a process is characterized by obtaining in each step of the computation an approximation to the aforementioned solution which is better than the approximations obtained in the preceding steps and, in addition, by always obtaining the final approximation to the problem solution as the ‘limit’ of the computing process. A mathematical model for this sort of situation is the so-called Scott model, which is based on ideas from order theory and topology (see [1, 2] for a detailed account of the Scott model and its applications). In particular, the order represents some notion of information in such a way that each step of the computation is identified with an element of the mathematical model which is greater than (or equal to) those associated with the preceding steps, since each approximation gives more information about the final solution than those computed before. The final output of the computational process is seen as the limit of the successive approximations. Thus recursion processes are modeled as increasing sequences of elements of the mathematical model, which is identified with an ordered set, that converge to their least upper bound with respect to the given topology.

In 1994, Matthews introduced the notion of Scott-like topology as a mathematical framework to model increasing information content sequences in computer science in the spirit of Scott [3].

Let us recall, with the aim of recalling the concept of a Scott-like topology, that a pair $(X,\le )$ is said to be an ordered set provided that *X* is a nonempty set and ≤ is a reflexive, antisymmetric and transitive binary relation on *X* [2]. Given a subset $Y\subseteq X$, an upper bound for *Y* is an element $x\in X$ such that $y\le x$ for all $y\in Y$. A least element for *Y* is an element $z\in Y$ such that $z\le y$ for all $y\in Y$. Moreover, a sequence ${({x}_{n})}_{n\in \mathbb{N}}$ in $(X,\le )$ is increasing if ${x}_{n}\le {x}_{n+1}$ for all $n\in \mathbb{N}$.

According to [3], a topology $\mathcal{T}$ over an ordered set $(X,\le )$ is said to be weakly consistent provided that $x\le y\iff x\in cl(y)$ for all $x,y\in X$, where by $cl(y)$ we denote the closure of $\{y\}$ with respect to $\mathcal{T}$. Furthermore, a Scott-like topology over an ordered set $(X,\le )$ is a weakly consistent topology $\mathcal{T}$ over *X* satisfying the following properties:

- (1)
every increasing sequence ${({x}_{n})}_{n\in \mathbb{N}}$ in $(X,\le )$ has a least upper bound, where ℕ denotes the set of positive integers;

- (2)
for every $O\in \mathcal{T}$ containing the least upper bound of an increasing sequence ${({x}_{n})}_{n\in \mathbb{N}}$, there exists ${n}_{0}\in \mathbb{N}$ such that ${x}_{n}\in O$ for all $n>{n}_{0}$.

In the aforesaid reference [3], Matthews introduced also the notion of a partial metric. In order to recall this new concept, let us denote by ${\mathbb{R}}^{+}$ the set of nonnegative real numbers.

A partial metric on a nonempty set *X* is a function $p:X\times X\to {\mathbb{R}}^{+}$ such that for all $x,y,z\in X$:

- (i)
$p(x,x)=p(x,y)=p(y,y)\iff x=y$;

- (ii)
$p(x,x)\le p(x,y)$;

- (iii)
$p(x,y)=p(y,x)$;

- (iv)
$p(x,y)\le p(x,z)+p(z,y)-p(z,z)$.

Of course a partial metric space is a pair $(X,p)$ such that *X* is a nonempty set and *p* is a partial metric on *X*.
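To make axioms (i)-(iv) concrete, the following small Python sketch (ours, not part of the paper) checks them on a finite sample of points for the classical partial metric $p(x,y)=max\{x,y\}$ on the nonnegative reals, which also underlies Example 10 below. Note that self-distances need not be zero: $p(x,x)=x$.

```python
import itertools

def p(x, y):
    """A classical partial metric on the nonnegative reals: p(x, y) = max{x, y}."""
    return max(x, y)

def is_partial_metric(p, points, tol=1e-12):
    """Check axioms (i)-(iv) on a finite sample of points."""
    for x, y in itertools.product(points, repeat=2):
        # (i) T0 separation: p(x,x) = p(x,y) = p(y,y) forces x = y
        if x != y and p(x, x) == p(x, y) == p(y, y):
            return False
        # (ii) small self-distances and (iii) symmetry
        if p(x, x) > p(x, y) + tol or abs(p(x, y) - p(y, x)) > tol:
            return False
    # (iv) the modified triangle inequality
    for x, y, z in itertools.product(points, repeat=3):
        if p(x, y) > p(x, z) + p(z, y) - p(z, z) + tol:
            return False
    return True

points = [0.0, 0.5, 1.0, 2.0, 3.5]
print(is_partial_metric(p, points))   # True for p = max
# The induced order: x <=_p y  <=>  p(x, y) == p(x, x); here y <= x.
print(p(2.0, 1.0) == p(2.0, 2.0))     # True: 2 <=_p 1
print(p(1.0, 2.0) == p(1.0, 1.0))     # False: 1 is not below 2 in <=_p
```

The last two lines evaluate the induced order ${\le}_{p}$ directly from its defining equality, anticipating the discussion below.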

The concept of a partial metric, since its introduction by Matthews, has been widely accepted in computer science (see, for instance, [4–15]). This is due to the fact that partial metric spaces can be used as a mathematical tool to model computational processes in the spirit of Scott. Indeed, each partial metric *p* on *X* generates a ${T}_{0}$ topology $\mathcal{T}(p)$ on *X* which has as a base the family of open *p*-balls $\{{B}_{p}(x,\epsilon ):x\in X,\epsilon >0\}$, where ${B}_{p}(x,\epsilon )=\{y\in X:p(x,y)<p(x,x)+\epsilon \}$ for all $x\in X$ and $\epsilon >0$. Moreover, every partial metric *p* induces an order ${\le}_{p}$ on *X* as follows: $x{\le}_{p}y\iff p(x,y)=p(x,x)$.

The next result reveals the reason for which the partial metric spaces can be used as a suitable mathematical tool to describe Scott processes (see [3] for a deeper discussion).

**Proposition 1** *Let* $(X,p)$ *be a complete partial metric space*. *Then the partial metric topology* $\mathcal{T}(p)$ *is a Scott*-*like topology over* $(X,{\le}_{p})$, *and thus every increasing sequence* ${({x}_{n})}_{n\in \mathbb{N}}$ *in* $(X,{\le}_{p})$ *has least upper bound and converges to it with respect to* $\mathcal{T}(p)$.

**Remark 2** Note that the fact that the sequence ${({x}_{n})}_{n\in \mathbb{N}}$ is ascending with least upper bound *x* provides that ${x}_{n}{\le}_{p}x$ for all $n\in \mathbb{N}$ and, thus, that $p({x}_{n},x)-p({x}_{n},{x}_{n})=0$. So, in the preceding proposition, we actually have that the convergence of ${({x}_{n})}_{n\in \mathbb{N}}$ to *x* is with respect to $\mathcal{T}({p}^{s})$.

Inspired by the applications to program verification, Matthews extended Banach’s fixed point theorem to the framework of partial metric spaces in [3], and he used it to formulate a suitable test for lazy data flow deadlock in Kahn’s model of parallel computation in [4]. The aforementioned extension of Banach’s fixed point theorem can be stated as follows.

**Theorem 3** *Let* *f* *be a mapping of a complete partial metric space* $(X,p)$ *into itself such that there is* $s\in [0,1[$ *such that*

$$p\bigl(f(x),f(y)\bigr)\le s\,p(x,y)$$

*for all* $x,y\in X$. *Then* *f* *has a unique fixed point*. *Moreover*, *if* ${x}^{\ast}\in X$ *is the fixed point of* *f*, *then* $p({x}^{\ast},{x}^{\ast})=0$.

According to [3], a sequence ${({x}_{n})}_{n\in \mathbb{N}}$ in a partial metric space $(X,p)$ converges to a point $x\in X$ with respect to $\mathcal{T}(p)$ if and only if $p(x,x)={lim}_{n\to \mathrm{\infty}}p(x,{x}_{n})$. Moreover, a sequence ${({x}_{n})}_{n\in \mathbb{N}}$ in a partial metric space $(X,p)$ is called a Cauchy sequence if ${lim}_{n,m\to \mathrm{\infty}}p({x}_{n},{x}_{m})$ exists and is finite. Furthermore, a partial metric space $(X,p)$ is said to be complete if every Cauchy sequence ${({x}_{n})}_{n\in \mathbb{N}}$ in *X* converges, with respect to $\mathcal{T}(p)$, to a point $x\in X$ such that $p(x,x)={lim}_{n,m\to \mathrm{\infty}}p({x}_{n},{x}_{m})$.
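As a numerical illustration of Theorem 3 (our own sketch, with illustrative choices of space and mapping): on ${\mathbb{R}}^{+}$ with the partial metric $p(x,y)=max\{x,y\}$, the mapping $f(x)=x/2$ satisfies $p(f(x),f(y))=\frac{1}{2}p(x,y)$, and the Picard iterates converge to the unique fixed point ${x}^{\ast}=0$, whose self-distance is $p({x}^{\ast},{x}^{\ast})=0$ as the theorem predicts.

```python
def p(x, y):
    # partial metric p(x, y) = max{x, y} on the nonnegative reals
    return max(x, y)

def f(x):
    # a contraction in the sense of Theorem 3 with constant s = 1/2:
    # p(f(x), f(y)) = max{x/2, y/2} = p(x, y) / 2
    return x / 2.0

x = 1.0
for _ in range(60):
    x = f(x)

print(x)        # the Picard iterates approach the fixed point 0
print(p(x, x))  # and the fixed point has self-distance p(0, 0) = 0
```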

It is well known (see [3]) that, given a partial metric space $(X,p)$, the function ${p}^{s}:X\times X\to {\mathbb{R}}^{+}$ defined by

$${p}^{s}(x,y)=2p(x,y)-p(x,x)-p(y,y)$$

for all $x,y\in X$ is a metric on *X*. Moreover, a sequence ${({x}_{n})}_{n\in \mathbb{N}}$ in $(X,p)$ converges to $x\in X$ with respect to $\mathcal{T}({p}^{s})$ if and only if ${lim}_{n\to \mathrm{\infty}}p(x,{x}_{n})={lim}_{n\to \mathrm{\infty}}p({x}_{n},{x}_{n})=p(x,x)$. Furthermore, a sequence in $(X,p)$ is Cauchy if and only if it is Cauchy in the metric space $(X,{p}^{s})$, and $(X,p)$ is complete if and only if $(X,{p}^{s})$ is complete.

Since Matthews published Theorem 3, intense research activity on fixed point results in partial metric spaces has developed. A large number of fixed point results in the metric framework have been extended to the partial metric context in [13, 16–41].

In the light of the facts that, on the one hand, partial metric spaces are a useful tool for solving practical problems that arise in several fields of computer science and, on the other hand, the scientific community has a growing interest in partial metric fixed point theory, in this paper our goal is to show that partial metric spaces can be used satisfactorily for the asymptotic complexity analysis of algorithms by means of fixed point techniques. To this end, we discuss whether Theorem 3 can be used for such a purpose. However, we show that, in principle, this question has a negative answer and, for this reason, we delve into the study of fixed point techniques in asymptotic complexity analysis of algorithms. Thus, we present a mathematical technique for discussing the complexity of algorithms whose foundation lies in the use of two new fixed point results that we provide for partial metric spaces. In order to show the potential applicability of the developed theory and to validate our new fixed point technique, we apply it to analyze formally the asymptotic complexity of two celebrated recursive algorithms, namely, Quicksort and Hanoi.

The remainder of the paper is organized as follows. In Section 2 we prove the announced fixed point theorems for self-mappings defined on complete partial metric spaces. Moreover, in the same section, we provide examples that show that the hypotheses in the statements of our results cannot be weakened. Section 3 is devoted to introducing the reader to fixed point techniques for asymptotic complexity analysis of algorithms. Concretely, general and fundamental aspects of asymptotic complexity analysis are recalled and, in addition, the reference fixed point technique to carry out the complexity analysis of algorithms, due to Schellekens and on which our study is based, is presented in detail in order to motivate our subsequent work developed in Section 4. In the latter section, we discuss the feasibility of using the Matthews fixed point theorem as a mathematical tool for the asymptotic complexity analysis of algorithms and, in particular, we show that the aforesaid result is not, in principle, appropriate for such a purpose. Accordingly, in the same section, we introduce a new mathematical technique for the asymptotic complexity analysis of algorithms whose basis is provided by the fixed point results proved in Section 2. We end the section, and thus the paper, by applying the developed fixed point method to analyze the asymptotic complexity of the aforesaid recursive algorithms.

## 2 The fixed point theorems

In this section we present our main results which will play a central role in the application to asymptotic complexity analysis developed in Section 4.1. To this end, let us recall that a mapping *f* from an ordered set $(X,\le )$ into itself is monotone if $f(x)\le f(y)$ whenever $x\le y$. Moreover, according to [42], a mapping from an ordered set $(X,\le )$ into itself is said to be ≤-continuous provided that the least upper bound of ${(f({x}_{n}))}_{n\in \mathbb{N}}$ is $f({x}^{\ast})$ for every increasing sequence ${({x}_{n})}_{n\in \mathbb{N}}$ whose least upper bound exists and is ${x}^{\ast}$. Of course every ≤-continuous mapping is monotone.

**Theorem 4** *Let* $(X,p)$ *be a complete partial metric space and let* $f:X\to X$ *be a* ${\le}_{p}$-*continuous mapping*. *If there exists* ${x}_{0}\in X$ *such that* ${x}_{0}{\le}_{p}f({x}_{0})$, *then* *f* *has a fixed point in* $\uparrow {x}_{0}=\{x\in X:{x}_{0}{\le}_{p}x\}$.

*Proof* Let ${x}_{0}\in X$ be such that ${x}_{0}{\le}_{p}f({x}_{0})$. Since *f* is monotone (recall that every ${\le}_{p}$-continuous mapping is monotone), we have that

$${f}^{n}({x}_{0}){\le}_{p}{f}^{n+1}({x}_{0})$$

for all $n\in \mathbb{N}$.

Observe that we can assume, without loss of generality, that ${x}_{0}\ne f({x}_{0})$ since otherwise we have guaranteed the existence of the fixed point in $\uparrow {x}_{0}$.

Since the sequence ${({f}^{n}({x}_{0}))}_{n\in \mathbb{N}}$ is increasing in $(X,{\le}_{p})$, we have, by Proposition 1 and Remark 2, that there exists ${x}^{\ast}\in X$ such that ${x}^{\ast}$ is the least upper bound of ${({f}^{n}({x}_{0}))}_{n\in \mathbb{N}}$ and, in addition, that ${lim}_{n\to \mathrm{\infty}}{f}^{n}({x}_{0})={x}^{\ast}$ with respect to $\mathcal{T}({p}^{s})$. Since *f* is ${\le}_{p}$-continuous, we have that $f({x}^{\ast})$ is the least upper bound of ${({f}^{n}({x}_{0}))}_{n\in \mathbb{N}}$. Whence we immediately obtain that $f({x}^{\ast})={x}^{\ast}$ and that ${x}^{\ast}\in \uparrow {x}_{0}$. □

**Remark 5** Observe that the proof of Theorem 4 follows by applying arguments similar to those given in the proof of Kleene’s theorem or the Tarski-Kantorovitch theorem for mappings from *ω*-chain-complete ordered sets into themselves (see [42, 43] for more details). However, we have included a detailed proof of the aforementioned result for the sake of completeness and in order to help the reader.

The next example shows that the ${\le}_{p}$-continuity of the mapping cannot be deleted in the statement of Theorem 4.

**Example 6** Let ${p}_{\mathrm{\infty}}$ be the partial metric on $(0,\mathrm{\infty}]$ given by

$${p}_{\mathrm{\infty}}(x,y)=\frac{1}{min\{x,y\}}$$

for all $x,y\in (0,\mathrm{\infty}]$, where we adopt the convention that $\frac{1}{\mathrm{\infty}}=0$. It is not hard to check that the partial metric space $((0,\mathrm{\infty}],{p}_{\mathrm{\infty}})$ is complete and that $x{\le}_{{p}_{\mathrm{\infty}}}y\iff x{\le}_{\mathrm{\infty}}y$, where ${\le}_{\mathrm{\infty}}$ stands for the usual order on the extended real line. Consider the subset $X=\{1,2\}$ of $(0,\mathrm{\infty}]$. Then the partial metric space $(X,{p}_{\mathrm{\infty}})$ is also complete. Now, define the mapping $f:X\to X$ by $f(1)=2$ and $f(2)=1$. Clearly, $1{\le}_{{p}_{\mathrm{\infty}}}f(1)=2$. Observe that *f* is not ${\le}_{{p}_{\mathrm{\infty}}}$-continuous because *f* is not monotone. In fact, $1{\le}_{{p}_{\mathrm{\infty}}}2$ and $2=f(1){\nleqq}_{{p}_{\mathrm{\infty}}}f(2)=1$. It is clear that *f* has no fixed points.

Let us recall that, given a partial metric space $(X,p)$, a mapping $f:X\to X$ is continuous provided that *f* is continuous from $(X,\mathcal{T}(p))$ into itself.

**Remark 7** Note that every monotone and continuous mapping *f* from a complete partial metric space into itself is ${\le}_{p}$-continuous. Indeed, assume that ${({x}_{n})}_{n\in \mathbb{N}}$ is an increasing sequence in $(X,{\le}_{p})$. Since $(X,p)$ is complete, we have guaranteed, by Proposition 1 and Remark 2, that there exists ${x}^{\ast}\in X$ such that ${x}^{\ast}$ is the least upper bound of ${({x}_{n})}_{n\in \mathbb{N}}$ and, in addition, that ${lim}_{n\to \mathrm{\infty}}{x}_{n}={x}^{\ast}$ with respect to $\mathcal{T}({p}^{s})$. Since *f* is continuous, we have that ${lim}_{n\to \mathrm{\infty}}f({x}_{n})=f({x}^{\ast})$ with respect to $\mathcal{T}(p)$. The monotonicity of *f* provides that $f({x}_{n}){\le}_{p}f({x}^{\ast})$ and, thus, that $p(f({x}^{\ast}),f({x}_{n}))-p(f({x}_{n}),f({x}_{n}))=0$. Whence we deduce that ${lim}_{n\to \mathrm{\infty}}f({x}_{n})=f({x}^{\ast})$ with respect to $\mathcal{T}({p}^{s})$. Moreover, since the sequence ${(f({x}_{n}))}_{n\in \mathbb{N}}$ is increasing and the partial metric space $(X,p)$ is complete, we have guaranteed the existence of the least upper bound of ${(f({x}_{n}))}_{n\in \mathbb{N}}$, say ${y}^{\ast}\in X$, in such a way that ${lim}_{n\to \mathrm{\infty}}f({x}_{n})={y}^{\ast}$ with respect to $\mathcal{T}({p}^{s})$. Therefore $f({x}^{\ast})={y}^{\ast}$, as claimed.

In the light of Theorem 4 and Remark 7, we immediately obtain the following result.

**Corollary 8** *Let* $(X,p)$ *be a complete partial metric space and let* $f:X\to X$ *be a monotone and continuous mapping*. *If there exists* ${x}_{0}\in X$ *such that* ${x}_{0}{\le}_{p}f({x}_{0})$, *then* *f* *has a fixed point in* $\uparrow {x}_{0}=\{x\in X:{x}_{0}{\le}_{p}x\}$.

In the following, given a partial metric space $(X,p)$, we will say that a mapping $f:X\to X$ is ${p}^{s}$-continuous provided that *f* is continuous from $(X,\mathcal{T}({p}^{s}))$ into itself. In our subsequent result, the ${p}^{s}$-continuity plays a central role.

**Theorem 9** *Let* $(X,p)$ *be a complete partial metric space and let* $f:X\to X$ *be a monotone and* ${p}^{s}$-*continuous mapping*. *If there exists* ${x}_{0}\in X$ *such that* $f({x}_{0}){\le}_{p}{x}_{0}$, *then* *f* *has a fixed point in* $\downarrow {x}_{0}=\{x\in X:x{\le}_{p}{x}_{0}\}$.

*Proof* Let ${x}_{0}\in X$ be such that $f({x}_{0}){\le}_{p}{x}_{0}$. We assume, without loss of generality, that ${x}_{0}\ne f({x}_{0})$ since otherwise we have guaranteed the existence of the fixed point in $\downarrow {x}_{0}$.

Since *f* is monotone, we have that

$${f}^{n+1}({x}_{0}){\le}_{p}{f}^{n}({x}_{0})$$

for all $n\in \mathbb{N}$. Thus the sequence ${(p({f}^{n}({x}_{0}),{f}^{n}({x}_{0})))}_{n\in \mathbb{N}}$ is decreasing in ${\mathbb{R}}^{+}$. So, there exists $r\in {\mathbb{R}}^{+}$ such that ${lim}_{n\to \mathrm{\infty}}p({f}^{n}({x}_{0}),{f}^{n}({x}_{0}))=r$. Then ${lim}_{m,n\to \mathrm{\infty}}p({f}^{m}({x}_{0}),{f}^{n}({x}_{0}))=r$, since $p({f}^{m}({x}_{0}),{f}^{n}({x}_{0}))=p({f}^{n}({x}_{0}),{f}^{n}({x}_{0}))$ for all $m,n\in \mathbb{N}$ with $n\ge m$. It follows that the sequence ${({f}^{n}({x}_{0}))}_{n\in \mathbb{N}}$ is Cauchy. The fact that $(X,p)$ is complete yields the existence of ${x}^{\ast}\in X$ such that ${lim}_{n\to \mathrm{\infty}}p({f}^{n}({x}_{0}),{f}^{n}({x}_{0}))={lim}_{n\to \mathrm{\infty}}p({f}^{n}({x}_{0}),{x}^{\ast})=p({x}^{\ast},{x}^{\ast})$.

Next we show that ${x}^{\ast}{\le}_{p}{x}_{0}$. By the triangle inequality (iv), we have that

$$p({x}^{\ast},{x}_{0})\le p({x}^{\ast},{f}^{n}({x}_{0}))+p({f}^{n}({x}_{0}),{x}_{0})-p({f}^{n}({x}_{0}),{f}^{n}({x}_{0}))=p({x}^{\ast},{f}^{n}({x}_{0}))$$

since $p({f}^{n}({x}_{0}),{x}_{0})=p({f}^{n}({x}_{0}),{f}^{n}({x}_{0}))$ for all $n\in \mathbb{N}$. By the fact that ${lim}_{n\to \mathrm{\infty}}p({f}^{n}({x}_{0}),{x}^{\ast})=p({x}^{\ast},{x}^{\ast})$, and since always $p({x}^{\ast},{x}^{\ast})\le p({x}^{\ast},{x}_{0})$, we conclude that $p({x}^{\ast},{x}_{0})=p({x}^{\ast},{x}^{\ast})$ and, thus, that ${x}^{\ast}{\le}_{p}{x}_{0}$.

Finally, we show that ${x}^{\ast}$ is a fixed point of *f*. On the one hand, we have, again by the triangle inequality (iv), that

$$p({x}^{\ast},f({x}^{\ast}))\le p({x}^{\ast},{f}^{n}({x}_{0}))+p({f}^{n}({x}_{0}),f({x}^{\ast}))-p({f}^{n}({x}_{0}),{f}^{n}({x}_{0}))$$

for all $n\in \mathbb{N}$. On the other hand, we have that ${lim}_{n\to \mathrm{\infty}}p({f}^{n}({x}_{0}),{f}^{n}({x}_{0}))={lim}_{n\to \mathrm{\infty}}p({f}^{n}({x}_{0}),{x}^{\ast})=p({x}^{\ast},{x}^{\ast})$ and, by the ${p}^{s}$-continuity of *f*, that ${lim}_{n\to \mathrm{\infty}}p({f}^{n}({x}_{0}),f({x}^{\ast}))=p(f({x}^{\ast}),f({x}^{\ast}))$. Thus, we deduce that $p({x}^{\ast},f({x}^{\ast}))\le p(f({x}^{\ast}),f({x}^{\ast}))$ and, hence, that $p({x}^{\ast},f({x}^{\ast}))=p(f({x}^{\ast}),f({x}^{\ast}))$. So, we obtain that $f({x}^{\ast}){\le}_{p}{x}^{\ast}$.

Similarly, we have that

$$p({x}^{\ast},f({x}^{\ast}))\le p({x}^{\ast},{f}^{n}({x}_{0}))+p({f}^{n}({x}_{0}),f({x}^{\ast}))-p({f}^{n}({x}_{0}),{f}^{n}({x}_{0}))$$

for all $n\in \mathbb{N}$. Again, we have that ${lim}_{n\to \mathrm{\infty}}p({f}^{n}({x}_{0}),{x}^{\ast})=p({x}^{\ast},{x}^{\ast})$ and, by the ${p}^{s}$-continuity of *f*, that ${lim}_{n\to \mathrm{\infty}}p({f}^{n}({x}_{0}),{f}^{n}({x}_{0}))={lim}_{n\to \mathrm{\infty}}p({f}^{n}({x}_{0}),f({x}^{\ast}))$. Thus we deduce that $p({x}^{\ast},f({x}^{\ast}))=p({x}^{\ast},{x}^{\ast})$. Whence we obtain that ${x}^{\ast}{\le}_{p}f({x}^{\ast})$.

We conclude from $f({x}^{\ast}){\le}_{p}{x}^{\ast}{\le}_{p}f({x}^{\ast})$ that $f({x}^{\ast})={x}^{\ast}$. □

The next examples show that the ${p}^{s}$-continuity and monotonicity of the mapping cannot be deleted in the statement of Theorem 9.

**Example 10** Consider the partial metric $p:{\mathbb{R}}^{+}\times {\mathbb{R}}^{+}\to {\mathbb{R}}^{+}$ given by $p(x,y)=max\{x,y\}$. It is clear that the partial metric space $({\mathbb{R}}^{+},p)$ is complete and that ${p}^{s}(x,y)=|y-x|$ for all $x,y\in {\mathbb{R}}^{+}$. Moreover, we have that $x{\le}_{p}y\iff y\le x$, where ≤ stands for the usual order on ${\mathbb{R}}^{+}$. Define the mapping $f:{\mathbb{R}}^{+}\to {\mathbb{R}}^{+}$ by

for all $x\in {\mathbb{R}}^{+}$. Then we have that *f* is monotone and that $1=f(0){\le}_{p}0$. It is easy to check that *f* is not ${p}^{s}$-continuous. Moreover, it is clear that *f* has no fixed points.

**Example 11** Let $X=\{0,1\}$. Consider the restriction of the partial metric *p* defined in Example 10 to the set *X*. Denote the aforesaid restriction by *p* again. Of course the partial metric space $(X,p)$ is complete. Now, define the mapping $f:X\to X$ by $f(0)=1$ and $f(1)=0$. It follows easily that *f* is ${p}^{s}$-continuous. Clearly, $1=f(0){\le}_{p}0$. Moreover, *f* is not monotone since $1{\le}_{p}0$ and $0=f(1){\nleqq}_{p}f(0)=1$. Furthermore, *f* has no fixed points.

## 3 Asymptotic complexity analysis of algorithms

### 3.1 Preliminaries

In computer science, the complexity analysis of an algorithm is based on determining mathematically the quantity of resources needed by the algorithm to solve the problem for which it has been designed. A typical resource, playing a central role in complexity analysis, is the execution time or running time of computing. Since there are often many algorithms to solve the same problem, one objective of the complexity analysis is to assess which of them is faster when large inputs are considered. To this end, it is necessary to compare their running time of computing. This is usually done by means of the asymptotic analysis in which the running time of an algorithm is denoted by a function $T:\mathbb{N}\to (0,\mathrm{\infty}]$ in such a way that $T(n)$ represents the time taken by the algorithm to solve the problem under consideration when the input of the algorithm is of size *n*. Let us denote by $\mathcal{RT}$ the set formed by all functions from ℕ into $(0,\mathrm{\infty}]$. Of course the running time of an algorithm does not only depend on the input size *n*, but it depends also on the particular input of the size *n* (and the distribution of the data). Thus the running time of an algorithm is different when the algorithm processes certain instances of input data of the same size *n*. As a consequence, for the purpose of size-based comparisons, it is necessary to distinguish three possible behaviors in the complexity analysis of algorithms. These are the so-called best case, the worst case and the average case. The best case and the worst case for an input of size *n* are defined by the minimum and the maximum running time of computing over all inputs of the size *n*, respectively. The average case for an input of size *n* is defined by the expected value or average over all inputs of size *n*.

In general, to determine exactly the function which describes the running time of computing of an algorithm is an arduous task. However, in most situations it is useful to know the running time of computing of an algorithm in an ‘approximate’ way rather than in an exact one. For this reason, the asymptotic complexity analysis of algorithms is focused on obtaining the ‘approximate’ running time of computing of an algorithm, and this is done by means of the Θ-notation. Let us recall how the Θ-notation allows us to achieve such a goal.

Let $f\in \mathcal{RT}$ denote the running time of computing of an algorithm. Then the statement $f\in \mathcal{O}(g)$, where $g\in \mathcal{RT}$, means that there exist ${n}_{0}\in \mathbb{N}$ and $c\in {\mathbb{R}}^{+}$ such that $f(n){\le}_{\mathrm{\infty}}cg(n)$ for all $n\in \mathbb{N}$ with ${n}_{0}\le n$ (here ${\le}_{\mathrm{\infty}}$ stands for the usual order on the extended real line). So, the function *g* gives an asymptotic upper bound of the running time *f* and, thus, ‘approximate’ information about the running time of the algorithm.

Sometimes in the analysis of the complexity of an algorithm, it is useful to assess an asymptotic lower bound of the running time of computing. In this case, the Ω-notation plays a central role. Thus the statement $f\in \mathrm{\Omega}(g)$ means that there exist ${n}_{0}\in \mathbb{N}$ and $c>0$ such that $cg(n){\le}_{\mathrm{\infty}}f(n)$ for all $n\in \mathbb{N}$ with ${n}_{0}\le n$. Of course, and similarly to the $\mathcal{O}$-notation case, when the exact running time *f* of the algorithm is unknown, the function *g* yields ‘approximate’ information about the running time in the sense that the time taken by the algorithm to solve the problem is bounded below by *g*.

It is clear that the best situation, when the complexity of an algorithm is discussed, matches up with the case in which we can find a function $g\in \mathcal{RT}$ in such a way that the running time *f* satisfies the condition $f\in \mathcal{O}(g)\cap \mathrm{\Omega}(g)$, denoted by $f\in \mathrm{\Theta}(g)$, because, in this case, we obtain a ‘tight’ asymptotic bound of *f* and, thus, a total asymptotic information about the time taken by the algorithm to solve the problem under consideration. From now on, we will say that *f* belongs to the asymptotic complexity class of *g* whenever $f\in \mathrm{\Theta}(g)$.
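The quantifier structure of the $\mathcal{O}$, Ω and Θ conditions can be verified mechanically for concrete witnesses. The sketch below (ours; the functions and constants are hypothetical choices) exhibits $c$ and ${n}_{0}$ for both conditions on a sampled range, showing $f\in \mathrm{\Theta}(g)$ for $f(n)=3n{log}_{2}(n)+5n$ and $g(n)=n{log}_{2}(n)$.

```python
import math

def f(n):
    return 3 * n * math.log2(n) + 5 * n

def g(n):
    return n * math.log2(n)

def witnesses_big_o(f, g, c, n0, n_max=10**4):
    # f in O(g): f(n) <= c * g(n) for all n >= n0 (sampled up to n_max)
    return all(f(n) <= c * g(n) for n in range(n0, n_max))

def witnesses_big_omega(f, g, c, n0, n_max=10**4):
    # f in Omega(g): c * g(n) <= f(n) for all n >= n0 (sampled up to n_max)
    return all(c * g(n) <= f(n) for n in range(n0, n_max))

# c = 8, n0 = 2 witnesses f in O(g); c = 3, n0 = 2 witnesses f in Omega(g),
# so f belongs to the asymptotic complexity class Theta(n log2 n).
print(witnesses_big_o(f, g, c=8, n0=2))       # True
print(witnesses_big_omega(f, g, c=3, n0=2))   # True
```

Indeed, $3n{log}_{2}(n)+5n\le 8n{log}_{2}(n)$ exactly when ${log}_{2}(n)\ge 1$, i.e., for $n\ge 2$.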

In the light of the preceding discussion, to determine, from an asymptotic complexity analysis viewpoint, the running time of an algorithm consists of obtaining its asymptotic complexity class.

For a fuller treatment of asymptotic complexity analysis of algorithms, we refer the reader to [44, 45].

### 3.2 The complexity space approach

In 1995, Schellekens introduced a topological foundation for the asymptotic complexity analysis of algorithms [46]. The aforementioned foundation is based on the notions of quasi-metric and complexity space.

A quasi-metric on a nonempty set *X* is a function $d:X\times X\to {\mathbb{R}}^{+}$ such that for all $x,y,z\in X$:

- (i)
$d(x,y)=d(y,x)=0\iff x=y$;

- (ii)
$d(x,y)\le d(x,z)+d(z,y)$.

A quasi-metric space is a pair $(X,d)$ such that *X* is a nonempty set and *d* is a quasi-metric on *X*.

Each quasi-metric *d* on *X* generates a ${T}_{0}$-topology $\mathcal{T}(d)$ on *X* which has as a base the family of open *d*-balls $\{{B}_{d}(x,\epsilon ):x\in X,\epsilon >0\}$, where ${B}_{d}(x,\epsilon )=\{y\in X:d(x,y)<\epsilon \}$ for all $x\in X$ and $\epsilon >0$.

Given a quasi-metric *d* on *X*, the function ${d}^{s}:X\times X\to {\mathbb{R}}^{+}$ defined by ${d}^{s}(x,y)=max\{d(x,y),d(y,x)\}$ is a metric.

A quasi-metric space $(X,d)$ is called bicomplete if the metric space $(X,{d}^{s})$ is complete.

The complexity space is the pair $(\mathcal{C},{d}_{\mathcal{C}})$, where

$$\mathcal{C}=\left\{f\in \mathcal{RT}:\sum_{n=1}^{\mathrm{\infty}}{2}^{-n}\frac{1}{f(n)}<\mathrm{\infty}\right\}$$

and ${d}_{\mathcal{C}}$ is the quasi-metric on $\mathcal{C}$ defined by

$${d}_{\mathcal{C}}(f,g)=\sum_{n=1}^{\mathrm{\infty}}{2}^{-n}max\left\{\frac{1}{g(n)}-\frac{1}{f(n)},0\right\}.$$

Obviously, we adopt the convention that $\frac{1}{\mathrm{\infty}}=0$.

According to [46], from a complexity analysis point of view, it is possible to associate a function of $\mathcal{C}$ with each algorithm in such a way that such a function represents, as a function of the size of the input data, the running time of computing of the algorithm. Because of this, the elements of $\mathcal{C}$ are called complexity functions. Moreover, given two functions $f,g\in \mathcal{C}$, the numerical value ${d}_{\mathcal{C}}(f,g)$ (the complexity distance from *f* to *g*) can be interpreted as the relative progress made in lowering the complexity by replacing any program *P* with a complexity function *f* by any program *Q* with a complexity function *g*. Therefore, if $f\ne g$, the condition ${d}_{\mathcal{C}}(f,g)=0$ can be read as *f* is ‘at least as efficient’ as *g* on all inputs. In fact, we have that ${d}_{\mathcal{C}}(f,g)=0\iff f(n){\le}_{\mathrm{\infty}}g(n)$ for all $n\in \mathbb{N}$ and, thus, the fact that ${d}_{\mathcal{C}}(f,g)=0$ (${d}_{\mathcal{C}}(g,f)=0$) implies that $f\in \mathcal{O}(g)$ ($f\in \mathrm{\Omega}(g)$).
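This interpretation can be made concrete with a small computation (our own sketch; we assume Schellekens’ series ${d}_{\mathcal{C}}(f,g)=\sum_{n=1}^{\mathrm{\infty}}{2}^{-n}max\{\frac{1}{g(n)}-\frac{1}{f(n)},0\}$ and truncate it numerically):

```python
def d_C(f, g, terms=60):
    """Truncated Schellekens complexity distance (assumes the usual series
    d_C(f, g) = sum_{n>=1} 2^{-n} max{1/g(n) - 1/f(n), 0})."""
    return sum(2.0 ** -n * max(1.0 / g(n) - 1.0 / f(n), 0.0)
               for n in range(1, terms + 1))

linear    = lambda n: float(n)        # complexity function n
quadratic = lambda n: float(n * n)    # complexity function n^2

# linear is 'at least as efficient' as quadratic on all inputs ...
print(d_C(linear, quadratic))         # 0.0
# ... but not conversely: replacing linear by quadratic worsens complexity.
print(d_C(quadratic, linear) > 0)     # True
```

The asymmetry of ${d}_{\mathcal{C}}$ is precisely what encodes the direction of the ‘relative progress’.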

The applicability of the complexity space to the asymptotic complexity analysis of algorithms was illustrated by Schellekens in [46].

In particular, he introduced a method, based on a fixed point theorem for functionals defined on the complexity space into itself, to provide the asymptotic upper bound of those algorithms whose running time of computing satisfies a recurrence equation of Divide and Conquer type. Let us recall the aforenamed method.

A Divide and Conquer algorithm solves a problem of size *n* ($n\in \mathbb{N}$) by splitting it into *a* subproblems of size $\frac{n}{b}$, for some constants *a*, *b* with $a,b\in \mathbb{N}$ and $a,b>1$, and solving them separately by the same algorithm. After obtaining the solutions of the subproblems, the algorithm combines all subproblem solutions to give a global solution to the original problem. The recursive structure of a Divide and Conquer algorithm leads to a recurrence equation for the running time of computing. In many cases the running time of a Divide and Conquer algorithm is the solution to a Divide and Conquer recurrence equation of the form

$$T(n)=\begin{cases}c & \text{if } n=1,\\ aT\left(\frac{n}{b}\right)+h(n) & \text{if } n\in {\mathbb{N}}_{b},\end{cases} \qquad (1)$$

where ${\mathbb{N}}_{b}=\{{b}^{k}:k\in \mathbb{N}\}$, $c>0$ denotes the complexity on the base case (*i.e.*, the problem size is small enough and the solution takes constant time), and $h(n)$ represents the time taken by the algorithm in order to divide the original problem into *a* subproblems and to combine all subproblems solutions into a unique one ($h\in \mathcal{C}$ and $h(n){<}_{\mathrm{\infty}}\mathrm{\infty}$ for all $n\in \mathbb{N}$).

Notice that for Divide and Conquer algorithms with running time satisfying the recurrence equation (1), it is typically sufficient to obtain the complexity on inputs of size *n*, where *n* ranges over the set ${\mathbb{N}}_{b}$ [45, 46].

Typical examples of algorithms whose running time of computing can be obtained by means of the recurrence (1) are Quicksort (best case behavior) and Mergesort (all behaviors) (see [45] for a detailed discussion of both aforesaid algorithms).
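To see the recurrence (1) at work, the sketch below (ours) unfolds the Mergesort-style instance $a=b=2$, $h(n)=n$, $c=1$ on ${\mathbb{N}}_{b}$ and compares the values against $n{log}_{2}(n)$:

```python
import math
from functools import lru_cache

A, B, C = 2, 2, 1        # a = b = 2 subproblems of half size, base-case cost c = 1
h = lambda n: n          # linear divide-and-combine cost (Mergesort-style)

@lru_cache(maxsize=None)
def T(n):
    # the Divide and Conquer recurrence (1), evaluated on n in N_b = {2^k}
    return C if n == 1 else A * T(n // B) + h(n)

for k in (1, 5, 10, 15):
    n = B ** k
    print(n, T(n), T(n) / (n * math.log2(n)))  # ratio tends to 1
```

For this instance the exact solution is $T({2}^{k})={2}^{k}k+{2}^{k}$, so the printed ratios approach 1, in accordance with the asymptotic complexity class $\mathrm{\Theta}(n{log}_{2}(n))$.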

Next, denote by ${\mathcal{C}}_{b,c}$ the set

$${\mathcal{C}}_{b,c}=\{f\in \mathcal{C}:f(1)=c\text{ and }f(n)=\mathrm{\infty}\text{ for all }n\notin {\mathbb{N}}_{b}\cup \{1\}\}$$

and by ${d}_{\mathcal{C}}{|}_{{\mathcal{C}}_{b,c}}$ the restriction of ${d}_{\mathcal{C}}$ to ${\mathcal{C}}_{b,c}$.

We have that the quasi-metric space $({\mathcal{C}}_{b,c},{d}_{\mathcal{C}}{|}_{{\mathcal{C}}_{b,c}})$ is bicomplete since the quasi-metric space $(\mathcal{C},{d}_{\mathcal{C}})$ is bicomplete [48] and the set ${\mathcal{C}}_{b,c}$ is closed in $(\mathcal{C},{d}_{\mathcal{C}}^{s})$.

The recurrence equation (1) induces the functional ${\mathrm{\Phi}}_{T}:{\mathcal{C}}_{b,c}\to {\mathcal{C}}_{b,c}$ given by

$${\mathrm{\Phi}}_{T}(f)(n)=\begin{cases}c & \text{if } n=1,\\ af\left(\frac{n}{b}\right)+h(n) & \text{if } n\in {\mathbb{N}}_{b},\\ \mathrm{\infty} & \text{otherwise},\end{cases} \qquad (2)$$

and it was shown in [46] that ${\mathrm{\Phi}}_{T}$ is a contraction from the metric space $({\mathcal{C}}_{b,c},{d}_{\mathcal{C}}^{s}{|}_{{\mathcal{C}}_{b,c}})$ into itself, that is, there exists $s\in [0,1[$ such that ${d}_{\mathcal{C}}^{s}({\mathrm{\Phi}}_{T}(f),{\mathrm{\Phi}}_{T}(g))\le s{d}_{\mathcal{C}}^{s}(f,g)$ for all $f,g\in {\mathcal{C}}_{b,c}$. So, by Banach’s fixed point theorem for metric spaces, we deduce that the functional ${\mathrm{\Phi}}_{T}:{\mathcal{C}}_{b,c}\to {\mathcal{C}}_{b,c}$ has a unique fixed point and, thus, the recurrence equation (1) has a unique solution.

In order to obtain the asymptotic upper bound of the solution to the recurrence equation (1), Schellekens introduced a special class of functionals known as improvers.

Let $C\subseteq \mathcal{C}$. A functional $\mathrm{\Phi}:C\to C$ is called an improver with respect to a function $f\in C$ provided that Φ is monotone and that $\mathrm{\Phi}(f)(n){\le}_{\mathrm{\infty}}f(n)$ for all $n\in \mathbb{N}$.
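The improver condition can be tested pointwise. The sketch below (ours; it assumes a functional of the form induced by the recurrence (1) with the hypothetical instance $a=b=2$, $h(n)=n$, $c=1$) checks that the candidate upper bound $g(n)=2n{log}_{2}(n)+n$ is improved on ${\mathbb{N}}_{b}$:

```python
import math

A, B, C = 2, 2, 1                     # hypothetical instance of recurrence (1)
h = lambda n: n

def phi(g):
    """Functional associated with the recurrence (1), restricted here to
    inputs n in {1} union N_b (an illustrative sketch)."""
    def phi_g(n):
        return C if n == 1 else A * g(n // B) + h(n)
    return phi_g

def is_improver_on_powers(g, max_k=20):
    # improver condition: phi(g)(n) <= g(n), sampled on n = 1 and n = b^k
    pg = phi(g)
    return all(pg(B ** k) <= g(B ** k) for k in range(0, max_k + 1))

g = lambda n: 2 * n * math.log2(n) + n    # candidate asymptotic upper bound
print(is_improver_on_powers(g))            # True: phi improves g on N_b
```

A short computation confirms this: for $n={2}^{k}$ one gets $\mathrm{\Phi}(g)(n)=2nk\le 2nk+n=g(n)$, so *g* is a valid candidate upper bound for the solution of the recurrence.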

Taking into account the exposed facts, the following result was stated in [46].

**Theorem 12** *The Divide and Conquer recurrence equation* (1) *has a unique solution* ${f}_{T}$ *in* ${\mathcal{C}}_{b,c}$. *Moreover*, *if the monotone functional* ${\mathrm{\Phi}}_{T}$ *associated to* (1), *and given by* (2), *is an improver with respect to some function* $g\in {\mathcal{C}}_{b,c}$, *then the solution to the recurrence equation satisfies that* ${f}_{T}\in \mathcal{O}(g)$.

In [46], Schellekens applied this method to the running time ${f}_{T}^{M}$ of Mergesort (average case behavior), whose associated functional turns out to be an improver with respect to a suitable function ${g}_{\frac{1}{2}}$ with ${g}_{\frac{1}{2}}\in \mathrm{\Theta}(n{log}_{2}(n))$. Therefore, by Theorem 12, we conclude that ${f}_{T}^{M}\in \mathcal{O}({g}_{\frac{1}{2}})$. Hence the complexity function ${g}_{\frac{1}{2}}$, or equivalently $\mathcal{O}(n{log}_{2}(n))$, gives an asymptotic upper bound of the running time of computing of the aforenamed algorithm.

Furthermore, it must be stressed that Schellekens provided the asymptotic lower bound, and thus the asymptotic complexity class, of the running time of Mergesort (average case behavior) in [46]. Concretely, it was obtained that such a running time of computing belongs to $\mathrm{\Omega}(n{log}_{2}(n))$. Nevertheless, the asymptotic lower bound was obtained applying standard arguments that are not based on the use of fixed point techniques. So, Schellekens proved that Mergesort running time (average case behavior) belongs to the asymptotic complexity class $\mathrm{\Theta}(n{log}_{2}(n))$, but the fixed point technique was used only to provide the asymptotic upper bound.

## 4 Asymptotic complexity analysis of algorithms and partial metric spaces

Motivated by the usefulness of partial metric spaces in program verification, we wonder whether this kind of generalized metric space is also useful for asymptotic complexity analysis in the spirit of Schellekens. Of course, it seems natural to attempt to apply Theorem 3 (in Section 1) in order to obtain a fixed point technique for asymptotic complexity analysis of algorithms based on the use of partial metric spaces. However, the following reasoning shows that the aforementioned result cannot be applied for this purpose. To this end, let us recall some additional and useful concepts about partial metrics and complexity spaces.

Obviously, we again make the assumption that $\frac{1}{\mathrm{\infty}}=0$.

Notice that although, in principle, several partial metrics could be defined on $\mathcal{C}$ with the aim of developing a mathematical foundation of asymptotic complexity analysis by means of partial metric spaces, it seems reasonable to consider ${p}_{\mathcal{C}}$ as a (partial metric) complexity distance for the following reasons.

On the one hand, it allows us to retrieve the Schellekens (quasi-metric) complexity distance ${d}_{\mathcal{C}}$. Indeed, the following correspondence between quasi-metric and partial metric spaces was stated in [3].

**Proposition 13** *If* $(X,p)$ *is a partial metric space*, *then the function* ${d}_{p}:X\times X\to {\mathbb{R}}^{+}$ *defined by* ${d}_{p}(x,y)=p(x,y)-p(x,x)$ *is a quasi*-*metric such that* $\mathcal{T}(p)=\mathcal{T}({d}_{p})$.
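A standard concrete instance (not taken from this paper) illustrates Proposition 13: $p(x,y)=max\{x,y\}$ on the nonnegative reals is a partial metric, with non-vanishing self-distance $p(x,x)=x$, and the induced $d_p(x,y)=max\{x,y\}-x=max\{y-x,0\}$ is the usual upper quasi-metric. A quick numeric sanity check:

```python
def p(x, y):
    # The standard 'max' partial metric on the nonnegative reals;
    # note that the self-distance p(x, x) = x need not vanish.
    return max(x, y)

def d_p(x, y):
    # Induced quasi-metric of Proposition 13: d_p(x, y) = p(x, y) - p(x, x).
    return p(x, y) - p(x, x)

pts = [0.0, 0.5, 1.0, 2.5, 4.0]
# d_p coincides with max{y - x, 0} ...
assert all(abs(d_p(x, y) - max(y - x, 0.0)) < 1e-12 for x in pts for y in pts)
# ... and satisfies the quasi-metric axioms on the sample:
# d_p(x, x) = 0 and the triangle inequality d_p(x, z) <= d_p(x, y) + d_p(y, z).
assert all(d_p(x, x) == 0.0 for x in pts)
assert all(d_p(x, z) <= d_p(x, y) + d_p(y, z) + 1e-12
           for x in pts for y in pts for z in pts)
print("d_p is a quasi-metric on the sample")
```

Note the asymmetry: $d_p(1,2.5)=1.5$ while $d_p(2.5,1)=0$, which is precisely the quasi-metric (rather than metric) behavior the proposition describes.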

$f,g\in \mathcal{C}$ and, thus, that ${d}_{{p}_{\mathcal{C}}}(f,g)={d}_{\mathcal{C}}(f,g)$ for all $f,g\in \mathcal{C}$.

for all $n\in \mathbb{N}$. Whence we obtain that $f{\le}_{{p}_{\mathcal{C}}}g$ implies that $f\in \mathcal{O}(g)$. Observe that the last implication recovers the information yielded by the condition ${d}_{\mathcal{C}}(f,g)=0$ in the Schellekens approach.

Furthermore, the completeness of the partial metric space $(\mathcal{C},{p}_{\mathcal{C}})$ is guaranteed by the bicompleteness of the complexity space $(\mathcal{C},{d}_{\mathcal{C}})$ and the following result, which was proved in [49].

**Proposition 14** *If* $(X,p)$ *is a partial metric space*, *then the following assertions are equivalent*:

- (1) $(X,p)$ *is complete*.
- (2) $(X,{d}_{p})$ *is bicomplete*.

Since the partial metric space $(\mathcal{C},{p}_{\mathcal{C}})$ is complete, so is the partial metric space $({\mathcal{C}}_{b,c},{p}_{\mathcal{C}}{|}_{{\mathcal{C}}_{b,c}})$.

for all $f,g\in {\mathcal{C}}_{b,c}$, where ${\mathrm{\Phi}}_{T}$ is the functional given by (2).

Of course, we only consider the case of $s\in \phantom{\rule{0.2em}{0ex}}]0,1[$ because it is evident that the case $s=0$ gives a contradiction.

As a result we deduce that $1\le s<1$, which is a contradiction.

Consequently, when we consider the partial metric ${p}_{\mathcal{C}}$ as a complexity distance instead of the original quasi-metric ${d}_{\mathcal{C}}$, Theorem 3 cannot be applied to the asymptotic complexity analysis of those algorithms whose running time leads to the Divide and Conquer recurrence equation (1).

We want to point out that the above reasoning was first given in [50] in order to show the impossibility of using Theorem 3 to analyze Divide and Conquer recurrence equations. As a consequence, in the aforesaid reference a fixed point technique, which differs from the one that we introduce in the remainder of this section, was developed to discuss the complexity of algorithms, via partial quasi-metrics rather than partial metrics, and a few aspects of language theory.

### 4.1 The fixed point technique for asymptotic complexity analysis based on partial metric spaces

Inspired by the impossibility of developing a fixed point technique for the asymptotic complexity analysis of algorithms based on the use of Theorem 3, we present a new fixed point technique that respects the spirit of the original Schellekens technique and whose foundation lies in the use of Theorems 4 and 9 in Section 2.

where $c>0$, $a\ge 1$ and $h\in \mathcal{C}$ with $h(n){<}_{\mathrm{\infty}}\mathrm{\infty}$ for all $n\in \mathbb{N}$.

where $S(m)=T({b}^{m-1})$ and $r(m)=h({b}^{m-1})$ for all $m\in \mathbb{N}$.
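The change of variable above can be illustrated numerically. Assuming, purely for illustration, that the Divide and Conquer recurrence has the shape $T(1)=c$, $T(n)=aT(n/b)+h(n)$ on powers of *b* (the parameter values below are assumptions, not tied to a concrete algorithm), the function $S(m)=T({b}^{m-1})$ satisfies a linear recurrence of the shape of (5):

```python
from functools import lru_cache

# Illustrative parameters (assumptions, not from a concrete algorithm).
a, b, c = 2, 2, 1.0
h = lambda n: float(n)

@lru_cache(maxsize=None)
def T(n):
    # Assumed divide-and-conquer recurrence on powers of b:
    # T(1) = c, T(n) = a*T(n/b) + h(n).
    return c if n == 1 else a * T(n // b) + h(n)

def S(m):
    # Change of variable S(m) = T(b^(m-1)).
    return T(b ** (m - 1))

def r(m):
    return h(b ** (m - 1))

# S satisfies the linear recurrence S(1) = c, S(m) = a*S(m-1) + r(m),
# which is exactly the shape of recurrence (5).
assert S(1) == c
assert all(abs(S(m) - (a * S(m - 1) + r(m))) < 1e-9 for m in range(2, 15))
print("S(m) = a*S(m-1) + r(m) verified on a sample")
```

The point of the transformation is that the divide and conquer structure in the size *n* becomes a first-order recurrence in the index *m*, to which the fixed point technique of this section applies directly.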

The remainder of this section is devoted to introducing, by means of the partial metric space $(\mathcal{C},{p}_{\mathcal{C}})$, a new fixed point technique in the spirit of Schellekens for yielding the asymptotic complexity class of those recursive algorithms whose running time is the solution to the recurrence equation (5).

#### 4.1.1 The existence and uniqueness of solution

for all $f\in {\mathcal{C}}_{c}$. It is clear that a complexity function in ${\mathcal{C}}_{c}$ is a solution to the recurrence equation (5) if and only if it is a fixed point of the functional ${\mathrm{\Psi}}_{T}$.

In order to prove the announced existence and uniqueness of the solution to the recurrence equation (5), we will need the following auxiliary result.

**Lemma 15** *Let* ${\mathrm{\Psi}}_{T}:{\mathcal{C}}_{c}\to {\mathcal{C}}_{c}$ *be the functional given by* (7) *and let* ${f}_{c,\mathrm{\infty}}\in {\mathcal{C}}_{c}$ *be the complexity function given by*

*Then the following statements hold*:

- (1) ${\mathrm{\Psi}}_{T}$ *is monotone*.
- (2) ${\mathrm{\Psi}}_{T}$ *is continuous and* ${p}^{s}$-*continuous*.
- (3) ${\mathrm{\Psi}}_{T}({f}_{c,\mathrm{\infty}}){\le}_{{p}_{{\mathcal{C}}_{c}}}{f}_{c,\mathrm{\infty}}$.

*Proof* (1) Consider $f,g\in {\mathcal{C}}_{c}$ such that $f{\le}_{{p}_{{\mathcal{C}}_{c}}}g$. Then $f(n){\le}_{\mathrm{\infty}}g(n)$ for all $n\in \mathbb{N}$. Thus we have that ${\mathrm{\Psi}}_{T}(f)(1)={\mathrm{\Psi}}_{T}(g)(1)=c$ and

for all $n\in \mathbb{N}$ with $n\ge 2$. It follows that ${\mathrm{\Psi}}_{T}(f){\le}_{{p}_{{\mathcal{C}}_{c}}}{\mathrm{\Psi}}_{T}(g)$.

which is a contradiction. It follows that ${\mathrm{\Psi}}_{T}$ is continuous from $({\mathcal{C}}_{c},\mathcal{T}({p}_{{\mathcal{C}}_{c}}))$ into itself.

which is a contradiction. Whence we conclude that ${\mathrm{\Psi}}_{T}$ is continuous from $({\mathcal{C}}_{c},\mathcal{T}({p}_{{\mathcal{C}}_{c}}^{s}))$ into itself, *i.e.*, ${\mathrm{\Psi}}_{T}$ is ${p}^{s}$-continuous.

for all $n\in \mathbb{N}$ with $n>2$. Consequently, we obtain that ${\mathrm{\Psi}}_{T}({f}_{c,\mathrm{\infty}})(n){\le}_{\mathrm{\infty}}{f}_{c,\mathrm{\infty}}(n)$ for all $n\in \mathbb{N}$. Therefore ${\mathrm{\Psi}}_{T}({f}_{c,\mathrm{\infty}}){\le}_{{p}_{{\mathcal{C}}_{c}}}{f}_{c,\mathrm{\infty}}$. □

**Remark 16** It should be stressed that, given a partial metric space $(X,p)$, the continuous and ${p}^{s}$-continuous mappings $f:X\to X$ are called properly continuous in [5]. So, the functional ${\mathrm{\Psi}}_{T}$, by assertion (2) in the statement of Lemma 15, is properly continuous.

According to [51], the quasi-metric space $({\mathcal{C}}_{c},{d}_{\mathcal{C}}{|}_{{\mathcal{C}}_{c}})$ is bicomplete. Hence, by Proposition 14, we have that the partial metric space $({\mathcal{C}}_{c},{p}_{{\mathcal{C}}_{c}})$ is complete.

**Theorem 17** *The recurrence equation* (5) *has a unique solution* ${f}_{T}$ *in* ${\mathcal{C}}_{c}$.

*Proof* The completeness of $({\mathcal{C}}_{c},{p}_{{\mathcal{C}}_{c}})$ and Lemma 15 provide the conditions in the statement of Theorem 9. So, we have that ${\mathrm{\Psi}}_{T}$ has a fixed point in $\downarrow {f}_{c,\mathrm{\infty}}$. Since $\downarrow {f}_{c,\mathrm{\infty}}\cap {\mathcal{C}}_{c}={\mathcal{C}}_{c}$, we obtain the existence of a fixed point of ${\mathrm{\Psi}}_{T}$ in ${\mathcal{C}}_{c}$.

Next we prove the uniqueness. Assume that ${g}_{T}\in {\mathcal{C}}_{c}$ is also a solution to the recurrence equation (5); we show that ${f}_{T}={g}_{T}$. Since ${f}_{T}$, ${g}_{T}$ are solutions to the recurrence equation (5), we have that they are fixed points of the functional ${\mathrm{\Psi}}_{T}$,

*i.e.*, ${\mathrm{\Psi}}_{T}({f}_{T})={\mathrm{\Psi}}_{T}({g}_{T})$. Hence we have that

Consequently, ${f}_{T}={g}_{T}$ and, thus, ${\mathrm{\Psi}}_{T}$ has a unique fixed point in ${\mathcal{C}}_{c}$. Therefore the recurrence equation (5) has a unique solution in ${\mathcal{C}}_{c}$. □

#### 4.1.2 The asymptotic complexity class of the solution

In the next result, we obtain the announced method to provide the complexity class of an algorithm whose running time satisfies the recurrence equation (5). To this end, let us recall that, given $C\subseteq \mathcal{C}$, a monotone functional $\mathrm{\Phi}:C\to C$ is called an improver with respect to a function $f\in C$ provided that $\mathrm{\Phi}(f){\le}_{{p}_{{\mathcal{C}}_{c}}}f$ (see Section 3.2). Furthermore, on account of [51], a monotone functional $\mathrm{\Phi}:C\to C$ is said to be a worsener with respect to a function $f\in C$ provided that $f{\le}_{{p}_{{\mathcal{C}}_{c}}}\mathrm{\Phi}(f)$.

Observe that, as discussed in [46], improvers admit a computational interpretation: they correspond to transformations on programs such that the iterative applications of the transformation yield, from a complexity point of view, an improved program at each step of the iteration. Similarly, worseners can be interpreted as those transformations on programs whose iterative applications yield, from a complexity point of view, a worsened program at each step of the iteration.

**Theorem 18** *Let* ${f}_{T}\in {\mathcal{C}}_{c}$ *be the unique solution to the recurrence equation* (5). *Then the following assertions hold*:

- (1) *If the functional* ${\mathrm{\Psi}}_{T}$ *associated to* (5), *and given by* (7), *is a worsener with respect to some complexity function* $g\in {\mathcal{C}}_{c}$, *then* ${f}_{T}\in \mathrm{\Omega}(g)$.
- (2) *If the functional* ${\mathrm{\Psi}}_{T}$ *associated to* (5), *and given by* (7), *is an improver with respect to some complexity function* $g\in {\mathcal{C}}_{c}$, *then* ${f}_{T}\in \mathcal{O}(g)$.

*Proof* (1) Suppose that there exists $g\in {\mathcal{C}}_{c}$ such that $g{\le}_{{p}_{{\mathcal{C}}_{c}}}{\mathrm{\Psi}}_{T}(g)$. Then, by Corollary 8, we deduce the existence of a fixed point ${g}_{T}$ of ${\mathrm{\Psi}}_{T}$ such that ${g}_{T}\in \uparrow g$, *i.e.*, $g{\le}_{{p}_{{\mathcal{C}}_{c}}}{g}_{T}$ and, thus, $g(n){\le}_{\mathrm{\infty}}{g}_{T}(n)$ for all $n\in \mathbb{N}$. So, ${g}_{T}\in \mathrm{\Omega}(g)$. Since ${f}_{T}$ is the unique fixed point of ${\mathrm{\Psi}}_{T}$ in ${\mathcal{C}}_{c}$, we deduce that ${f}_{T}={g}_{T}$ and, hence, that ${f}_{T}\in \mathrm{\Omega}(g)$.

(2) Assume that there exists $g\in {\mathcal{C}}_{c}$ such that ${\mathrm{\Psi}}_{T}(g){\le}_{{p}_{{\mathcal{C}}_{c}}}g$. Then Theorem 9 gives the existence of a fixed point ${g}_{T}$ of ${\mathrm{\Psi}}_{T}$ such that ${g}_{T}\in \downarrow g$, *i.e.*, ${g}_{T}{\le}_{{p}_{{\mathcal{C}}_{c}}}g$ and hence ${g}_{T}(n){\le}_{\mathrm{\infty}}g(n)$ for all $n\in \mathbb{N}$. So, ${g}_{T}\in \mathcal{O}(g)$. The uniqueness of the fixed point of ${\mathrm{\Psi}}_{T}$ in ${\mathcal{C}}_{c}$ allows us to deduce that ${f}_{T}={g}_{T}$ and, thus, that ${f}_{T}\in \mathcal{O}(g)$. □

**Remark 19** Notice that, indeed, Theorem 18 yields the complexity class of algorithms whose running time satisfies the recurrence equation (5): whenever there exist $l\in {\mathcal{C}}_{c}$, $r,t>0$ and ${n}_{0}\in \mathbb{N}$ such that $g(n)=rl(n)$ and $h(n)=tl(n)$ for all $n>{n}_{0}$ and, besides, ${\mathrm{\Psi}}_{T}$ is an improver and a worsener with respect to *g* and *h*, respectively, then ${f}_{T}\in \mathrm{\Theta}(l)$.

### 4.2 Analyzing the running time of two examples

The aim of this section is twofold. On the one hand, we show that the method developed in Section 4.1 is useful to analyze the asymptotic complexity of recursive algorithms. On the other hand, in order to validate the new results, we retrieve, by means of their application, the complexity class of two well-known algorithms in the literature.

Typical examples of algorithms whose running time is the solution to the recurrence equation (5) are Quicksort (worst case behavior) and Hanoi.

with $j>0$ and where *c* is the time taken by the algorithm in the base case.

with $c,d>0$ and where, again, *c* represents the time taken by the algorithm to solve the base case. Note that it does not make sense to distinguish three possible running time behaviors for Hanoi since the distribution of the input data is always the same for each size *n*.

For a deeper discussion about Quicksort and Hanoi, we refer the reader to [44, 45] and [44, 52], respectively.

Next we discuss the running time of the aforesaid algorithms through our results.

**Corollary 20** *The running time of Quicksort* (*worst case behavior*) *is in the complexity class* $\mathrm{\Theta}({n}^{2})$.

*Proof* The running time of Quicksort (worst case behavior) is provided by the solution to the recurrence equation (8). Theorem 17 guarantees the existence and uniqueness of such a solution. Denote it by ${f}_{T}^{Q}$.

*i.e.*,

for all $f\in {\mathcal{C}}_{c}$.

where $r>0$. Then it is not hard to see that ${\mathrm{\Psi}}_{T}^{Q}$ is an improver with respect to ${h}_{r}\in {\mathcal{C}}_{c}$ (*i.e.*, ${\mathrm{\Psi}}_{T}^{Q}({h}_{r}){\le}_{{p}_{{\mathcal{C}}_{c}}}{h}_{r}$) if and only if $r\ge max\{\frac{3j}{5},\frac{c}{4}+\frac{j}{2}\}$. It follows, by statement (2) in Theorem 18, that ${f}_{T}^{Q}\in \mathcal{O}({h}_{max\{\frac{3j}{5},\frac{c}{4}+\frac{j}{2}\}})$.

In order to provide the asymptotic complexity class, it remains to yield an asymptotic lower bound of ${f}_{T}^{Q}$. Now it is routine to check that ${\mathrm{\Psi}}_{T}^{Q}$ is a worsener with respect to the complexity function ${h}_{s}$ (*i.e.*, ${h}_{s}{\le}_{{p}_{{\mathcal{C}}_{c}}}{\mathrm{\Psi}}_{T}^{Q}({h}_{s})$) if and only if $s\le min\{\frac{j}{2},\frac{c}{4}+\frac{j}{2}\}$, whence we deduce, by statement (1) in Theorem 18, that ${f}_{T}^{Q}\in \mathrm{\Omega}({h}_{min\{\frac{j}{2},\frac{c}{4}+\frac{j}{2}\}})$.

Therefore we obtain that ${f}_{T}^{Q}\in \mathcal{O}({h}_{max\{\frac{3j}{5},\frac{c}{4}+\frac{j}{2}\}})\cap \mathrm{\Omega}({h}_{min\{\frac{j}{2},\frac{c}{4}+\frac{j}{2}\}})$. Whence we deduce, by Remark 19, that ${f}_{T}^{Q}\in \mathrm{\Theta}({n}^{2})$, which is in accordance with the Quicksort (worst case behavior) asymptotic complexity class that can be found in the literature [44, 45]. □
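The class $\mathrm{\Theta}({n}^{2})$ obtained in Corollary 20 can also be checked by unfolding the recurrence directly. The sketch below assumes (an assumption on our part, since the display of (8) is not reproduced here) that recurrence (8) has the standard Quicksort worst-case shape $T(1)=c$, $T(n)=T(n-1)+jn$ for $n\ge 2$, which yields the closed form $T(n)=c+j(\frac{n(n+1)}{2}-1)$, clearly in $\mathrm{\Theta}({n}^{2})$.

```python
def quicksort_worst_time(n, c=1.0, j=1.0):
    # Assumed shape of recurrence (8): T(1) = c, T(n) = T(n-1) + j*n.
    t = c
    for k in range(2, n + 1):
        t += j * k
    return t

# Closed form: T(n) = c + j*(n*(n+1)/2 - 1), hence T is in Theta(n^2).
for n in [1, 10, 100]:
    c, j = 1.0, 2.0
    closed = c + j * (n * (n + 1) / 2 - 1)
    assert abs(quicksort_worst_time(n, c, j) - closed) < 1e-6
print("Quicksort worst-case recurrence matches its Theta(n^2) closed form")
```

Such a direct unfolding confirms the bound, but it is the fixed point technique of Theorem 18 that yields both the upper and the lower asymptotic bounds within one framework.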

**Corollary 21** *The running time of Hanoi*, *under the uniform cost criterion*, *is in the complexity class* $\mathrm{\Theta}({2}^{n})$.

*Proof* The running time of Hanoi, under the uniform cost criterion, is provided by the solution to the recurrence equation (9). Theorem 17 guarantees the existence and uniqueness of such a solution. Denote it by ${f}_{T}^{H}$.

*i.e.*,

for all $f\in {\mathcal{C}}_{c}$.

where $r>0$. It is not hard to check that ${\mathrm{\Psi}}_{T}^{H}$ is an improver with respect to ${h}_{r}$ (*i.e.*, ${\mathrm{\Psi}}_{T}^{H}({h}_{r}){\le}_{{p}_{{\mathcal{C}}_{c}}}{h}_{r}$) if and only if $r\ge max\{d,\frac{2c+d}{3}\}$. It follows, by statement (2) in Theorem 18, that ${f}_{T}^{H}\in \mathcal{O}({h}_{max\{d,\frac{2c+d}{3}\}})$.

Next we provide an asymptotic lower bound of ${f}_{T}^{H}$. It is routine to check that ${\mathrm{\Psi}}_{T}^{H}$ is a worsener with respect to the complexity function ${h}_{s}$ (*i.e.*, ${h}_{s}{\le}_{{p}_{{\mathcal{C}}_{c}}}{\mathrm{\Psi}}_{T}^{H}({h}_{s})$) if and only if $s\le min\{d,\frac{2c+d}{3}\}$, whence we deduce, by statement (1) in Theorem 18, that ${f}_{T}^{H}\in \mathrm{\Omega}({h}_{min\{d,\frac{2c+d}{3}\}})$.

Therefore we obtain that ${f}_{T}^{H}\in \mathcal{O}({h}_{max\{d,\frac{2c+d}{3}\}})\cap \mathrm{\Omega}({h}_{min\{d,\frac{2c+d}{3}\}})$. Thus, by Remark 19, we obtain that ${f}_{T}^{H}\in \mathrm{\Theta}({2}^{n})$, which is in accordance with the Hanoi asymptotic complexity class that can be found in the literature [44, 52]. □
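As for Quicksort, the class $\mathrm{\Theta}({2}^{n})$ of Corollary 21 admits a direct numeric check. The sketch assumes (again, an assumption, since the display of (9) is not reproduced here) that recurrence (9) has the standard Hanoi shape $T(1)=c$, $T(n)=2T(n-1)+d$ for $n\ge 2$, whose closed form is $T(n)=(c+d){2}^{n-1}-d$, clearly in $\mathrm{\Theta}({2}^{n})$.

```python
def hanoi_time(n, c=1.0, d=1.0):
    # Assumed shape of recurrence (9): T(1) = c, T(n) = 2*T(n-1) + d.
    t = c
    for _ in range(2, n + 1):
        t = 2 * t + d
    return t

# Closed form: T(n) = (c + d) * 2^(n-1) - d, hence T is in Theta(2^n).
for n in [1, 5, 20]:
    c, d = 3.0, 2.0
    assert hanoi_time(n, c, d) == (c + d) * 2 ** (n - 1) - d
print("Hanoi recurrence matches its Theta(2^n) closed form")
```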

## Declarations

### Acknowledgements

This work was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. 363-001-D1433. The authors, therefore, acknowledge with thanks DSR technical and financial support.

## Authors’ Affiliations

## References

1. Gierz G, Hofmann KH, Keimel K, Lawson JD, Mislove M, Scott DS: *Continuous Lattices and Domains*. Cambridge University Press, Cambridge; 2003.
2. Davey BA, Priestley HA: *Introduction to Lattices and Order*. Cambridge University Press, Cambridge; 1990.
3. Matthews SG: Partial metric topology. *Ann. N.Y. Acad. Sci.* 1994, 728: 183–197. doi:10.1111/j.1749-6632.1994.tb44144.x
4. Matthews SG: An extensional treatment of lazy data flow deadlock. *Theor. Comput. Sci.* 1995, 151: 195–205. doi:10.1016/0304-3975(95)00051-W
5. O’Neill SJ: Two topologies are better than one. Technical report 283, Department of Computer Science, University of Warwick (1995)
6. O’Neill SJ: Partial metrics, valuations, and domain theory. *Ann. N.Y. Acad. Sci.* 1996, 806: 304–315. doi:10.1111/j.1749-6632.1996.tb49177.x
7. Bukatin MA, Scott JS: Towards computing distances between programs via Scott domains. In *Logical Foundations of Computer Science*. LNCS 1234; 1997: 33–43.
8. Heckmann R: Approximation of metric spaces by partial metric spaces. *Appl. Categ. Struct.* 1999, 7: 71–83. doi:10.1023/A:1008684018933
9. Schellekens MP: A characterization of partial metrizability: domains are quantifiable. *Theor. Comput. Sci.* 2003, 305: 409–432. doi:10.1016/S0304-3975(02)00705-3
10. Schellekens MP: The correspondence between partial metrics and semivaluations. *Theor. Comput. Sci.* 2004, 315: 135–149. doi:10.1016/j.tcs.2003.11.016
11. Romaguera S, Valero O: A quantitative computational model for complete partial metric spaces via formal balls. *Math. Struct. Comput. Sci.* 2009, 19: 541–563. doi:10.1017/S0960129509007671
12. Romaguera S, Valero O: Domain theoretic characterizations of quasi-metric completeness in terms of formal balls. *Math. Struct. Comput. Sci.* 2010, 20: 453–472. doi:10.1017/S0960129510000010
13. Seda AK, Hitzler P: Generalized distance functions in the theory of computation. *Comput. J.* 2010, 53: 443–464. doi:10.1093/comjnl/bxm108
14. Romaguera S, Schellekens MP, Valero O: Complexity spaces as quantitative domains of computation. *Topol. Appl.* 2011, 158: 853–860. doi:10.1016/j.topol.2011.01.005
15. Romaguera S, Tirado P, Valero O: Complete partial metric spaces have partially metrizable computational models. *Int. J. Comput. Math.* 2012, 89: 284–290. doi:10.1080/00207160.2011.559229
16. Abbas M, Nazir T, Romaguera S: Fixed point results for generalized cyclic contraction mappings in partial metric spaces. *Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat.* 2012, 106: 287–297. doi:10.1007/s13398-011-0051-5
17. Abdeljawad T, Alzabut JO, Mukheimer A, Zaidan Y: Banach contraction principle for cyclical mappings on partial metric spaces. arXiv:1112.5891v1 [math.GN] (2011)
18. Abdeljawad T, Karapinar E, Tas K: Existence and uniqueness of a common fixed point on partial metric spaces. *Appl. Math. Lett.* 2011, 24: 1900–1904. doi:10.1016/j.aml.2011.05.014
19. Alghamdi MA, Shahzad N, Valero O: On fixed point theory in partial metric spaces. *Fixed Point Theory Appl.* 2012, 2012: Article ID 175. doi:10.1186/1687-1812-2012-175
20. Altun I, Erduran A: Fixed point theorems for monotone mappings on partial metric spaces. *Fixed Point Theory Appl.* 2011, 2011: Article ID 508730.
21. Altun I, Simsek H: Some fixed points theorems on dualistic partial metric spaces. *J. Adv. Math. Stud.* 2008, 1: 1–8.
22. Altun I, Sadarangani K: Corrigendum to ‘Generalized contractions on partial metric spaces’ [Topology Appl. 157 (2010) 2778–2785]. *Topol. Appl.* 2011, 158: 1738–1740. doi:10.1016/j.topol.2011.05.023
23. Altun I, Sola F, Simsek H: Generalized contractions on partial metric spaces. *Topol. Appl.* 2010, 157: 2778–2785. doi:10.1016/j.topol.2010.08.017
24. Agarwal RP, Alghamdi MA, Shahzad N: Fixed point theory for cyclic generalized contractions in partial metric spaces. *Fixed Point Theory Appl.* 2012, 2012: Article ID 40. doi:10.1186/1687-1812-2012-40
25. Aydi H, Abbas M, Vetro C: Partial Hausdorff metric and Nadler’s fixed point theorem on partial metric spaces. *Topol. Appl.* 2012, 159: 3234–3242. doi:10.1016/j.topol.2012.06.012
26. Aydi H, Vetro C, Sintunavarat W, Kumam P: Coincidence and fixed points for contractions and cyclical contractions in partial metric spaces. *Fixed Point Theory Appl.* 2012, 2012: Article ID 124. doi:10.1186/1687-1812-2012-124
27. Ćirić L, Samet B, Aydi H, Vetro C: Common fixed points of generalized contractions on partial metric spaces and an application. *Appl. Math. Comput.* 2011, 218: 2398–2406. doi:10.1016/j.amc.2011.07.005
28. Di Bari C, Vetro P: Common fixed points for *ψ*-contractions on partial metric spaces. *Hacet. J. Math. Stat.* (to appear)
29. Di Bari C, Vetro P: Fixed points for weak *φ*-contractions on partial metric spaces. *Int. J. Eng. Contemp. Math. Sci.* 2011, 1: 5–13.
30. Ilić D, Pavlović V, Rakočević V: Some new extensions of Banach’s contraction principle to partial metric space. *Appl. Math. Lett.* 2011, 24: 1326–1330. doi:10.1016/j.aml.2011.02.025
31. Ilić D, Pavlović V, Rakočević V: Extensions of Zamfirescu theorem to partial metric spaces. *Math. Comput. Model.* 2012, 55: 801–809. doi:10.1016/j.mcm.2011.09.005
32. Karapinar E, Erhan IM: Fixed point theorems for operators on partial metric spaces. *Appl. Math. Lett.* 2011, 24: 1894–1899. doi:10.1016/j.aml.2011.05.013
33. Karapinar E: Fixed point theory for cyclic weak *ϕ*-contraction. *Appl. Math. Lett.* 2011, 24: 822–825. doi:10.1016/j.aml.2010.12.016
34. Karapinar E: Weak *ϕ*-contraction on partial metric spaces and existence of fixed points in partially ordered sets. *Math. Aeterna* 2011, 1: 237–244.
35. Oltra S, Valero O: Banach’s fixed point theorem for partial metric spaces. *Rend. Ist. Mat. Univ. Trieste* 2004, 36: 17–26.
36. Romaguera S: Matkowski’s type theorems for generalized contractions on (ordered) partial metric spaces. *Appl. Gen. Topol.* 2011, 12: 213–220.
37. Romaguera S: Fixed point theorems for generalized contractions on partial metric spaces. *Topol. Appl.* 2012, 159: 194–199. doi:10.1016/j.topol.2011.08.026
38. Rus IA: Fixed point theory in partial metric spaces. *An. Univ. Vest. Timiş., Ser. Mat.-Inform.* 2008, XLVI: 149–160.
39. Valero O: On Banach fixed point theorems for partial metric spaces. *Appl. Gen. Topol.* 2005, 6: 229–240.
40. Vetro F, Radenović S: Nonlinear *ψ*-quasi-contractions of Ćirić-type in partial metric spaces. *Appl. Math. Comput.* 2012, 219: 1594–1600. doi:10.1016/j.amc.2012.07.061
41. Paesano D, Vetro P: Suzuki’s type characterizations of completeness for partial metric spaces and fixed points for partially ordered metric spaces. *Topol. Appl.* 2012, 159: 911–920. doi:10.1016/j.topol.2011.12.008
42. Baranga A: The contraction principle as a particular case of Kleene’s fixed point theorem. *Discrete Math.* 1991, 98: 75–79. doi:10.1016/0012-365X(91)90413-V
43. Dugundji J, Granas A: *Fixed Point Theory*. Polish Sci., Warsaw; 1982.
44. Aho AV, Hopcroft JE, Ullman JD: *The Design and Analysis of Computer Algorithms*. Addison-Wesley, Reading; 1974.
45. Brassard G, Bratley P: *Algorithms: Theory and Practice*. Prentice Hall, New York; 1988.
46. Schellekens MP: The Smyth completion: a common foundation for denotational semantics and complexity analysis. *Electron. Notes Theor. Comput. Sci.* 1995, 1: 211–232.
47. Künzi HPA: Nonsymmetric distances and their associated topologies: about the origins of basic ideas in the area of asymmetric topology. In *Handbook of the History of General Topology*, Vol. 3. Edited by: Aull CE, Lowen R. Kluwer Academic, Dordrecht; 2001: 853–968.
48. Romaguera S, Schellekens MP: Quasi-metric properties of complexity spaces. *Topol. Appl.* 1999, 98: 311–322. doi:10.1016/S0166-8641(98)00102-3
49. Oltra S, Romaguera S, Sánchez-Pérez EA: Bicompleting weightable quasi-metric spaces and partial metric spaces. *Rend. Circ. Mat. Palermo* 2002, 51: 151–162. doi:10.1007/BF02871458
50. Cerdà-Uguet MA, Schellekens MP, Valero O: The Baire partial quasimetric space: a mathematical tool for the asymptotic complexity analysis in computer science. *Theory Comput. Syst.* 2012, 50: 387–399. doi:10.1007/s00224-010-9310-7
51. Romaguera S, Tirado P, Valero O: New results on mathematical foundations of asymptotic complexity analysis of algorithms via complexity spaces. *Int. J. Comput. Math.* 2012, 89: 1728–1741. doi:10.1080/00207160.2012.659246
52. Cull P, Flahive M, Robson R: *Difference Equations: From Rabbits to Chaos*. Springer, New York; 2005.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.