
RETRACTED ARTICLE: Fixed point theorems and explicit estimates for convergence rates of continuous time Markov chains

This article was retracted on 27 February 2020

Abstract

In this paper we give Banach fixed point theorems and explicit estimates of the rate of convergence of the transition function to the stationary distribution for a class of exponentially ergodic Markov chains. Our results differ from earlier estimates based on coupling theory and from those based on stochastic monotonicity, and they show a noticeable improvement on existing results when the Markov chain contains instantaneous states or nonconservative states. The proof combines existing results for discrete time Markov chains with the h-skeleton technique. Finally, we apply this result, together with Ray-Knight compactification and Itô excursion theory, to two examples: a class of singular Markov chains and the Kolmogorov matrix.

1 Introduction

Throughout this paper, unless otherwise specified, let \(\{X_{t};t \in [0,\infty)\}\) be a time homogeneous, continuous time Markov chain with an honest and standard transition function \(p_{ij}(t)\) on the state space \(E=\{1,2,3,\ldots\}\) and with density matrix \(Q=(q_{ij})\), where \(q_{i}=-q_{ii}\). Let \(P^{x} \) and \(E^{x}\) denote the probability law and the expectation of the Markov chain under the initial condition \(X_{0}=x\), where \(x\in E\). Let \(X=(\varOmega ,\mathscr{F},\mathscr{F}_{t},X_{t},\theta_{t},P^{x})\) be the right process associated with \(p_{ij}(t)\).

In this paper we consider Markov chains that are exponentially ergodic; that is, there exist a unique stationary distribution \(\pi=(\pi_{j})\) (\(j\in E\)), constants \(R_{i}<\infty\), and \(\alpha>0\) such that

$$\sum_{j} \bigl|p_{ij}(t)-\pi_{j}\bigr| \leq R_{i} e^{-\alpha t} $$

for all \(i\in E\). Our goal is to find computable bounds on the constants \(R_{i}\) and α, especially α.

There has been considerable recent work on computable bounds for convergence rates of Markov chains. The authors of [1–4] gave bounds on convergence rates using renewal theory and coupling theory, and in [5–7] the authors obtained convergence rates for stochastically monotone Markov chains. These results and methods apply to particular classes of Markov chains or processes.

However, these methods are not suited to general continuous time Markov chains, especially when the symmetry, coupling, or stochastic monotonicity conditions fail, for example for Markov chains with instantaneous states (such as the Kolmogorov matrix) or for the regular birth and death process. In this paper we address this problem.

Let \(i\in E\) and suppose that \(X_{0}=i\), define

$$T_{1}=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} \inf\{t>0|X_{t}\neq i\}&\mbox{if this set is not empty}, \\ +\infty,&\mbox{otherwise} \end{array}\displaystyle \right . $$

to be the sojourn time in state i.

Define

$$\tau^{+}_{j}=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} \inf\{t>T_{1}|X_{t}=j\}&\mbox{if this set is not empty},\\ +\infty,&\mbox{otherwise}. \end{array}\displaystyle \right . $$

Our central result is the following theorem.

Theorem 1

Suppose that \(p_{ij}(t)\) is an irreducible and ergodic transition function with stationary distribution \(\{\pi_{j},j\in E\}\) and that \(m\in E\) is a stable state. If there is a positive constant λ such that \(\lambda< \inf_{k\in E} q_{k}\) and \(E^{i}\{e^{\lambda\tau^{+}_{m}}\}<\infty\) for all \(i \in E\), then \(p_{ij}(t)\) is exponentially ergodic. Moreover, if

$$\alpha< \frac{\lambda^{2}}{\lambda+ (q_{m} -\lambda)(E^{m}\{e^{\lambda \tau^{+}_{m}}\}-1)}, $$

then there exists \(R_{i}<\infty\) for some (and then for all) i such that

$$\sum_{j} \bigl|p_{ij}(t)-\pi_{j}\bigr| \leq R_{i} e^{-\alpha t} . $$
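For intuition, the bound on α in Theorem 1 is easy to evaluate numerically. The following sketch (with hypothetical values of λ, \(q_{m}\), and the return-time moment \(E^{m}\{e^{\lambda\tau^{+}_{m}}\}\); none of these numbers come from the paper) computes the supremum of admissible rates:

```python
def alpha_bound(lam, q_m, E_m):
    """Right-hand side of Theorem 1: lam^2 / (lam + (q_m - lam) * (E_m - 1)).

    lam -- the constant lambda, assumed 0 < lam < q_m
    q_m -- the rate q_m of the stable state m
    E_m -- stands for E^m{e^{lam * tau_m^+}}, assumed finite and >= 1
    """
    assert 0 < lam < q_m and E_m >= 1
    return lam ** 2 / (lam + (q_m - lam) * (E_m - 1))

# Illustrative (hypothetical) values: lambda = 0.5, q_m = 2, moment = 1.2.
print(alpha_bound(0.5, 2.0, 1.2))
```

Any α strictly below this value is an admissible exponential rate in Theorem 1; note that the bound decreases as the return-time moment grows.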

In this paper we first extend the methods of [2] to the continuous time setting, which leads to considerable improvements in convergence rates; the result also applies more widely than the existing results in [5–7]. We then give some fundamental lemmas and the proof of the main theorem. Finally, in Section 3 we apply our result and the Itô excursion theorem to two examples, which illustrate its advantages.

2 Proof of Theorem 1

2.1 Definitions and some fundamental lemmas

Let \(\{Y_{n}\}^{\infty}_{n=0}\) be a time homogeneous Markov chain on the state space E with one-step transition matrix \(\varPi =(\varPi _{ij})\). Suppose that \(\{Y_{n}\}^{\infty}_{n=0}\) is aperiodic, irreducible, and ergodic with stationary distribution \(\pi_{j}\) (\(j\in E\)). Let \((\varPi _{ij}(n))\) be the n-step transition matrix and \(\eta_{i}^{+}=\inf\{n|n\geq1, Y_{n}=i\}\) for all \(i\in E\).

Definition 1

We say that \(\{Y_{n}\}^{\infty}_{n=0}\) is ρ-geometrically ergodic (for short, geometrically ergodic) if there exists a number ρ with \(0<\rho<1\) such that

$$ \bigl|\varPi _{ij}(n)-\pi_{j}\bigr|< C_{ij} \rho^{n} $$
(1)

for any \(n\in\mathbb{N}\) and \(i,j\in E\); ρ is called the ergodic index.

Lemma 1

Suppose \(\varPi _{ij}\) and \(\pi_{j}\) are defined as above, \(m\in E \) is a fixed state, \(a<1\), \(b>0\), and there is a function \(V(x)\geq1\) on E such that

$$ \sum_{j}\varPi _{ij}V(j)\leq a V(i)+bI_{\{m\}}(i) $$
(2)

(called drift inequality). If \(\varPi _{mm}>\delta>0\), then we have

$$ \sum_{j}\bigl|\varPi _{ij}(n)- \pi_{j}\bigr|\leq\frac{\rho}{\rho-(1-M^{-1})}V(i)\rho^{n} $$
(3)

for \(1>\rho>1-M^{-1}\), where

$$ M=\frac{1}{(1-a )^{2}}\biggl\{ 1-a+b+b^{2}+\frac{32-8\delta^{2}}{\delta^{3}} \biggl(\frac {b}{1-a}\biggr)^{2}\bigl[(1-a)b+b^{2}\bigr] \biggr\} . $$
(4)

Proof

Lemma 1 follows from (1) together with Theorems 2.1 and 2.2 in [2]. □
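As a quick numerical illustration of Lemma 1, the constant M of (4) and the prefactor in (3) can be computed directly. The drift parameters a, b, and δ below are hypothetical, not values taken from the paper:

```python
def M_const(a, b, delta):
    """Constant M from (4), for drift parameters a < 1, b > 0 and Pi_mm > delta > 0."""
    assert 0 < a < 1 and b > 0 and 0 < delta < 1
    r = b / (1 - a)
    inner = ((1 - a) + b + b ** 2
             + ((32 - 8 * delta ** 2) / delta ** 3) * r ** 2 * ((1 - a) * b + b ** 2))
    return inner / (1 - a) ** 2

def prefactor(rho, M):
    """Prefactor rho / (rho - (1 - 1/M)) in (3); requires 1 > rho > 1 - 1/M."""
    assert 1 - 1 / M < rho < 1
    return rho / (rho - (1 - 1 / M))

M = M_const(0.5, 0.25, 0.5)     # hypothetical a = 0.5, b = 0.25, delta = 0.5
print(M, prefactor(0.999, M))
```

The admissible range of geometric rates ρ shrinks as M grows, so smaller drift constants give faster provable convergence.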

Definition 2

Given a number \(h>0\), the discrete time Markov chain \(\{X_{nh}\}^{\infty}_{n=0}\) having a one-step transition function \(p_{ij}(h)\) (and therefore an n-step transition function \(p_{ij}(nh)\)) is called the h-skeleton of \(\{X_{t},t\geq0\}\).

Lemma 2

Suppose that \(p_{ij}(t)\) is an irreducible and ergodic transition function and \(m\in E\) is a fixed state; for a constant λ (\(0<\lambda <q_{m}\)), suppose \(E^{m}\{e^{\lambda\tau^{+}_{m}}\}<\infty\). Let

$$\eta^{h}_{m}=\inf\{nh|n\geq1, X_{nh}=m\} $$

for all \(h>0\). If \((1-e^{(\lambda-q_{m})h})E^{m}\{e^{\lambda\tau^{+}_{m}}\}<1\), then we know that

$$\begin{aligned}& E^{i}\bigl\{ e^{\lambda\eta^{h}_{m}}\bigr\} \leq \frac{e^{(\lambda-q_{m})h} E^{i}\{e^{\lambda\tau_{m}^{+}}\}}{1-(1-e^{(\lambda-q_{m})h})E^{m}\{e^{\lambda\tau ^{+}_{m}}\}}\quad ( i\neq m), \end{aligned}$$
(5)
$$\begin{aligned}& E^{m}\bigl\{ e^{\lambda\eta^{h}_{m}}\bigr\} \leq \frac{e^{(\lambda-q_{m})h}}{1-(1- e^{(\lambda-q_{m})h})E^{m}\{e^{\lambda\tau^{+}_{m}}\}}. \end{aligned}$$
(6)

Proof

It is obvious that m is not an absorbing state; otherwise \(p_{ij}(t)\) would be reducible. Suppose that \(E^{i}\{e^{\lambda\tau^{+}_{m}}\}<\infty\). Let

$$\begin{aligned}& \tau_{1}=\inf\{t|X_{t}=m\}, \\& \gamma_{1}=\inf\{t|t>\tau_{1}, X_{t}\neq m\}, \\& \tau_{k+1}=\inf\{t|t>\gamma_{k}, X_{t}=m\} \end{aligned}$$

and

$$\gamma_{k+1}=\inf\{t|t>\tau_{k+1}, X_{t}\neq m\}, $$

where \(k=1,2,\ldots\)\,. Since m is recurrent, the stopping times defined above are almost surely finite and

$$\tau_{1}< \gamma_{1}< \tau_{2}< \gamma_{2}< \cdots. $$

From the strong Markov property of X, it is easily seen that \(\gamma_{1}-\tau_{1}, \gamma_{2}-\tau_{2}, \gamma_{3}-\tau_{3}, \ldots \) are independent identically distributed exponential random variables with parameter \(q_{m}\) (mean \(q_{m}^{-1}\)). So we have \(P^{i}\{\gamma_{k}-\tau_{k}\leq h, \forall k\}=0\).

We can easily get \(\eta^{h}_{m}\leq\tau_{1}+h\) on \(\{\gamma_{1}-\tau_{1}>h\}\) and \(\eta^{h}_{m}\leq \tau_{k+1}+h\) on

$$\{\gamma_{k+1}-\tau_{k+1}>h, \gamma_{n}- \tau_{n}\leq h, \forall n\leq k\}. $$

If \(i\neq m\), then we have

$$\begin{aligned} E^{i}\bigl\{ e^{\lambda\eta^{h}_{m}}\bigr\} =& E^{i} \bigl\{ e^{\lambda\eta^{h}_{m}}; \gamma_{1}-\tau _{1}>h\bigr\} \\ &{}+\sum_{k=1}^{\infty}E^{i}\bigl\{ e^{\lambda\eta^{h}_{m}}; \gamma_{k+1}-\tau _{k+1}>h, \gamma_{n}-\tau_{n}\leq h, \forall n\leq k\bigr\} \\ \leq& e^{\lambda h}E^{i}\bigl\{ e^{\lambda\tau^{+}_{m}}; T_{1} \circ\theta_{\tau^{+}_{m}}>h\bigr\} \\ &{}+ e^{\lambda h}\sum_{k=1}^{\infty}E^{i}\bigl[e^{\lambda\tau_{k+1}}; \gamma _{k+1}- \tau_{k+1}>h, \gamma_{n}-\tau_{n}\leq h, \forall n \leq k\bigr]. \end{aligned}$$
(7)

If \(i=m\), then \(\tau_{1}=0\) and we have

$$\begin{aligned} E^{i}\bigl\{ e^{\lambda\eta^{h}_{m}}\bigr\} =& E^{i} \bigl\{ e^{\lambda\eta^{h}_{m}}; \gamma_{1}-\tau _{1}>h\bigr\} \\ &{}+\sum_{k=1}^{\infty}E^{i}\bigl\{ e^{\lambda\eta^{h}_{m}}; \gamma_{k+1}-\tau _{k+1}>h, \gamma_{n}-\tau_{n}\leq h, \forall n\leq k\bigr\} \\ \leq& e^{\lambda h}P^{i}\{T_{1}>h\} \\ &{}+ e^{\lambda h}\sum_{k=1}^{\infty}E^{i}\bigl[e^{\lambda\tau_{k+1}}; \gamma _{k+1}- \tau_{k+1}>h, \gamma_{n}-\tau_{n}\leq h, \forall n \leq k\bigr]. \end{aligned}$$
(8)

If \(i\neq m\), then we have, for each \(k\geq1\),

$$\begin{aligned} & E^{i}\bigl[e^{\lambda\tau_{k+1}}; \gamma_{k+1}- \tau_{k+1}>h, \gamma_{n}-\tau_{n}\leq h, \forall n \leq k\bigr] \\ &\quad= E^{i}\bigl\{ E^{i}\bigl[e^{\lambda\tau_{k+1}}; \gamma_{k+1}-\tau_{k+1}>h, \gamma_{n}- \tau_{n}\leq h, \forall n\leq k | \mathscr{F}_{\tau _{k+1}}\bigr] \bigr\} \\ &\quad=e^{-q_{m} h}E^{i}\bigl[e^{\lambda\tau_{k+1}}; \gamma_{n}- \tau_{n}\leq h, \forall n\leq k\bigr] \\ &\quad=e^{-q_{m} h}E^{i}\bigl\{ E^{i}\bigl[e^{\lambda\tau_{k+1}}; \gamma_{n}-\tau_{n}\leq h, \forall n\leq k | \mathscr{F}_{\tau_{k}}\bigr] \bigr\} \\ &\quad=e^{-q_{m} h}E^{i}\bigl\{ E^{i}\bigl[e^{\lambda(\tau^{+}_{m}\circ\theta_{\tau_{k}}+\tau _{k})}; T_{1} \circ\theta_{\tau_{k}}\leq h, \ldots| \mathscr{F}_{\tau_{k}} \bigr] \bigr\} \\ &\quad=e^{-q_{m} h}E^{i}\bigl\{ e^{\lambda\tau_{k}}E^{m} \bigl[e^{\lambda\tau^{+}_{m}}; T_{1} \leq h\bigr]; \gamma_{n}- \tau_{n}\leq h, \forall n\leq k-1 \bigr\} \\ &\quad=e^{-q_{m} h} E^{m}\bigl[e^{\lambda\tau^{+}_{m}}; T_{1} \leq h\bigr]E^{i}\bigl\{ e^{\lambda\tau_{k}}; \gamma_{n}- \tau_{n}\leq h, \forall n\leq k-1 \bigr\} \\ &\quad=\cdots \\ &\quad=e^{-q_{m} h}\bigl(E^{m}\bigl[e^{\lambda\tau^{+}_{m}}; T_{1} \leq h\bigr]\bigr)^{k} E^{i}\bigl\{ e^{\lambda\tau_{1}}\bigr\} \\ &\quad=e^{-q_{m} h}\bigl(E^{m}\bigl[e^{\lambda\tau^{+}_{m}}; T_{1} \leq h\bigr]\bigr)^{k} E^{i}\bigl\{ e^{\lambda\tau^{+}_{m}}\bigr\} \end{aligned}$$

and

$$\begin{aligned} E^{m}\bigl[e^{\lambda\tau^{+}_{m}}; T_{1}> h\bigr] =& E^{m}\bigl\{ E^{m}\bigl[e^{\lambda(\tau ^{+}_{m}\circ\theta_{h}+h)}; T_{1}>h | \mathscr{F}_{h} \bigr]\bigr\} \\ =&E^{m}\bigl[e^{\lambda h}E^{m}\bigl\{ e^{\lambda\tau^{+}_{m}} \bigr\} ; T_{1}>h\bigr]\\ =&e^{(\lambda-q_{m}) h} E^{m}\bigl\{ e^{\lambda\tau^{+}_{m}}\bigr\} . \end{aligned}$$

So

$$ E^{m}\bigl[e^{\lambda\tau^{+}_{m}}; T_{1}\leq h \bigr]=E^{m}\bigl[e^{\lambda\tau^{+}_{m}}\bigr]- E^{m} \bigl[e^{\lambda\tau^{+}_{m}}; T_{1}>h\bigr]=\bigl(1-e^{(\lambda-q_{m}) h} \bigr)E^{m}\bigl\{ e^{\lambda\tau^{+}_{m}}\bigr\} . $$
(9)

By (9) and the equations above we have

$$\begin{aligned} & E^{i}\bigl[e^{\lambda\tau_{k+1}}; \gamma_{k+1}- \tau_{k+1}>h, \gamma_{n}-\tau_{n}\leq h, \forall n \leq k\bigr]\\ &\quad= e^{-q_{m} h} \bigl(1-e^{(\lambda-q_{m}) h}\bigr)^{k} \bigl[E^{m}\bigl\{ e^{\lambda\tau^{+}_{m}}\bigr\} \bigr]^{k} E^{i}\bigl\{ e^{\lambda\tau^{+}_{m}}\bigr\} \end{aligned}$$

Substituting this into (7), if

$$\bigl(1-e^{(\lambda-q_{m})h}\bigr)E^{m}\bigl\{ e^{\lambda\tau^{+}_{m}}\bigr\} < 1, $$

then we have

$$\begin{aligned} E^{i}\bigl\{ e^{\lambda\eta^{h}_{m}}\bigr\} \leq \frac{e^{(\lambda-q_{m})h}E^{i}\{e^{\lambda\tau_{m}^{+}}\}}{1-(1-e^{(\lambda -q_{m})h})E^{m}\{e^{\lambda\tau^{+}_{m}}\}}. \end{aligned}$$

And by (8) we have

$$\begin{aligned} E^{m}\bigl\{ e^{\lambda\eta^{h}_{m}}\bigr\} \leq \frac{e^{(\lambda-q_{m})h}}{1-(1- e^{(\lambda-q_{m})h})E^{m}\{e^{\lambda\tau^{+}_{m}}\}}. \end{aligned}$$

So Lemma 2 is proved. □
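The bounds (5) and (6) are straightforward to evaluate. The sketch below uses hypothetical values for λ, \(q_{m}\), the return-time moments, and the step h (only the formulas come from Lemma 2), and checks the admissibility condition \((1-e^{(\lambda-q_{m})h})E^{m}\{e^{\lambda\tau^{+}_{m}}\}<1\):

```python
import math

def skeleton_bounds(lam, q_m, E_i, E_m, h):
    """Bounds (5) and (6) on the h-skeleton moments E^i{e^{lam eta_m^h}} (i != m)
    and E^m{e^{lam eta_m^h}}; E_i and E_m stand for the continuous-time return-time
    moments E^i{e^{lam tau_m^+}} and E^m{e^{lam tau_m^+}}."""
    g = 1 - math.exp((lam - q_m) * h)
    assert g * E_m < 1, "condition of Lemma 2 fails for this h"
    denom = 1 - g * E_m
    return math.exp((lam - q_m) * h) * E_i / denom, math.exp((lam - q_m) * h) / denom

b_i, b_m = skeleton_bounds(0.5, 2.0, 1.5, 1.2, 0.1)   # all inputs hypothetical
print(b_i, b_m)
```

Note that the \(i\neq m\) bound is exactly \(E^{i}\{e^{\lambda\tau^{+}_{m}}\}\) times the \(i=m\) bound, mirroring the common denominator in (5) and (6).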

Remark 1

From (5) we get that when \(h\downarrow0\) and \(i\neq m\),

$$ E^{i}\bigl\{ e^{\lambda\eta^{h}_{m}}\bigr\} \leq \bigl[1+(q_{m}-\lambda) \bigl(E^{m}\bigl\{ e^{\lambda\tau^{+}_{m}} \bigr\} -1\bigr)h+O\bigl(h^{2}\bigr)\bigr]E^{i}\bigl\{ e^{\lambda\tau^{+}_{m}}\bigr\} . $$
(10)

From (6) we see that when \(h\downarrow0\),

$$ E^{m}\bigl\{ e^{\lambda\eta^{h}_{m}}\bigr\} -1\leq(q_{m}- \lambda) \bigl(E^{m}\bigl\{ e^{\lambda\tau^{+}_{m}}\bigr\} -1\bigr)h+O \bigl(h^{2}\bigr). $$
(11)
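The first-order behavior in (11) can be checked numerically against the exact bound (6). In this sketch (hypothetical λ, \(q_{m}\), and moment value), the difference quotient of the bound converges to the coefficient \((q_{m}-\lambda)(E^{m}\{e^{\lambda\tau^{+}_{m}}\}-1)\) as \(h\downarrow0\):

```python
import math

lam, q_m, E_m = 0.5, 2.0, 1.2          # hypothetical parameters
slope = (q_m - lam) * (E_m - 1)        # first-order coefficient in (11): 0.3 here

def bound6(h):
    """Right-hand side of (6) as a function of the skeleton step h."""
    g = 1 - math.exp((lam - q_m) * h)
    return math.exp((lam - q_m) * h) / (1 - g * E_m)

for h in (1e-2, 1e-3, 1e-4):
    print(h, (bound6(h) - 1) / h)      # approaches slope as h shrinks
```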

Proposition 1

(see [8], p.224)

Let \(\{ G_{k}, k=1,2,\ldots\}\) be an at most countable collection of unbounded open subsets of \((0, \infty)\). Then, in any nonempty open subinterval I of \((0, \infty)\), there exists a number h with the property that, for each k, \(nh\in G_{k}\) for infinitely many integers n.

2.2 Proof of Theorem 1

(1) For each \(h>0\) such that \((1-e^{(\lambda-q_{m})h})E^{m}\{e^{\lambda\tau^{+}_{m}}\}<1\), write \(p^{h}_{ij}(n)\) for the n-step transition function of the h-skeleton \(\{X_{nh}\}_{n=1}^{\infty}\). Consider

$$V_{h}(i)=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} E^{i}\{e^{\lambda\eta^{h}_{m}}\}&\mbox{if }i\neq m,\\ 1&\mbox{if }i=m \end{array}\displaystyle \right . $$

and

$$a_{h}=e^{-\lambda h},\qquad b_{h}=e^{-\lambda h} \bigl(E^{m}\bigl\{ e^{\lambda\eta^{h}_{m}}\bigr\} -1\bigr). $$

For any \(i\neq m\), by the Markov property we have

$$\begin{aligned} e^{\lambda h}\sum_{j\in E}p^{h}_{ij}V_{h}(j) =& e^{\lambda h}\sum_{j\neq m}p^{h}_{ij} E^{j}\bigl\{ e^{\lambda\eta^{h}_{m}}\bigr\} + e^{\lambda h}p^{h}_{im}\\ =& E^{i}\bigl[e^{\lambda(\eta^{h}_{m}\circ\theta_{h}+h)}; X_{h}\neq m \bigr]+E^{i}\bigl[e^{\lambda h}; X_{h}=m\bigr]\\ =& E^{i}\bigl\{ e^{\lambda\eta^{h}_{m}}\bigr\} =V_{h}(i). \end{aligned}$$

Similarly we can get

$$\begin{aligned} e^{\lambda h}\sum_{j\in E}p^{h}_{mj}V_{h}(j) =& e^{\lambda h}\sum_{j\neq m}p^{h}_{mj} E^{j}\bigl\{ e^{\lambda\eta^{h}_{m}}\bigr\} + e^{\lambda h}p^{h}_{mm}\\ =& E^{m}\bigl[e^{\lambda(\eta^{h}_{m}\circ\theta_{h}+h)}; X_{h}\neq m \bigr]+E^{m}\bigl[e^{\lambda h}; X_{h}=m\bigr]\\ =&E^{m}\bigl\{ e^{\lambda\eta^{h}_{m}}\bigr\} =1+\bigl(E^{m}\bigl\{ e^{\lambda\eta^{h}_{m}}\bigr\} -1\bigr). \end{aligned}$$

Combining the two displays above, we have

$$\sum_{j\in E}p^{h}_{ij}V_{h}(j)=a_{h}V_{h}(i)+b_{h}I_{\{m\}}(i) $$

for any \(i\in E\); that is, \(p^{h}_{ij}\) satisfies the drift inequality (2) with \(a=a_{h}\), \(b=b_{h}\), and \(V=V_{h}\).

Let \(\delta_{h}=e^{-q_{m} h}\); obviously \(p^{h}_{mm}>\delta_{h}\). Let

$$M_{h}=\frac{1}{(1-a_{h})^{2}}\biggl\{ 1-a_{h}+b_{h}+b_{h}^{2}+\frac{32-8\delta_{h}^{2}}{\delta _{h}^{3}}\biggl(\frac{b_{h}}{1-a_{h}}\biggr)^{2} \bigl[(1-a_{h}) b_{h}+b_{h}^{2}\bigr]\biggr\} . $$

By (3) and (4) we obtain

$$ \sum_{j\in E}\bigl|p_{ij}(nh)- \pi_{j}\bigr|\leq\frac{\rho}{\rho -(1-M^{-1}_{h})}V_{h}(i)\rho^{n} $$
(12)

for any \(1>\rho>1-M_{h}^{-1}\) from Lemma 1.
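Steps (1) and (2) of the proof can be mirrored numerically. The sketch below uses hypothetical values of λ, \(q_{m}\), the moment, and the step h, with the bound (6) as a stand-in for \(E^{m}\{e^{\lambda\eta^{h}_{m}}\}\); it computes \(a_{h}\), \(b_{h}\), \(\delta_{h}\), \(M_{h}\), and compares \(M_{h}^{-1}\) with its small-h approximation \(\lambda^{2}h/(\lambda+(q_{m}-\lambda)(E^{m}\{e^{\lambda\tau^{+}_{m}}\}-1))\):

```python
import math

lam, q_m, E_m, h = 0.5, 2.0, 1.2, 0.01   # all hypothetical

a_h = math.exp(-lam * h)
g = 1 - math.exp((lam - q_m) * h)
Em_eta = math.exp((lam - q_m) * h) / (1 - g * E_m)   # bound (6) on E^m{e^{lam eta_m^h}}
b_h = math.exp(-lam * h) * (Em_eta - 1)
delta_h = math.exp(-q_m * h)

# M_h as in (4), with the skeleton drift parameters substituted.
r = b_h / (1 - a_h)
M_h = ((1 - a_h) + b_h + b_h ** 2
       + ((32 - 8 * delta_h ** 2) / delta_h ** 3) * r ** 2 * ((1 - a_h) * b_h + b_h ** 2)
       ) / (1 - a_h) ** 2

approx = lam ** 2 * h / (lam + (q_m - lam) * (E_m - 1))
print(1 / M_h, approx)    # close for small h
```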

(2) From (11) we have

$$M_{h}\leq\frac{1}{ 1-a_{h} }\biggl[1+\frac{(q_{m}-\lambda)(E^{m}\{e^{\lambda\tau^{+}_{m}}\} -1)}{\lambda}+O(h)\biggr], $$

which gives

$$M^{-1}_{h}\geq\frac{\lambda^{2}}{\lambda+ (q_{m}-\lambda)(E^{m}\{e^{\lambda\tau ^{+}_{m}}\}-1)}h+O\bigl(h^{2}\bigr). $$

Hence, for any

$$\alpha< \frac{\lambda^{2}}{\lambda+ (q_{m}-\lambda)(E^{m}\{e^{\lambda\tau ^{+}_{m}}\}-1)}, $$

there exists \(\varepsilon>0\) such that \(e^{-\alpha h}>1-M_{h}^{-1}\) for all \(0< h<\varepsilon\). By (12) with \(\rho=e^{-\alpha h}\) we get

$$ \sum_{j\in E}\bigl|p_{ij}(nh)- \pi_{j}\bigr|\leq\frac{e^{-\alpha h}}{e^{-\alpha h}-(1-M^{-1}_{h})} V_{h}(i)e^{-\alpha nh}. $$
(13)

(3) For each \(i\in E\), let

$$\beta_{i}=\inf\biggl\{ \beta\Big| e^{\beta t}\sum _{j\in E}\bigl|p_{ij}(t)-\pi_{j}\bigr| \mbox{ is bounded on } (0, \infty)\biggr\} . $$

For any integer \(l>1\), define

$$f_{il}(t)=e^{(\beta_{i} +l^{-1})t}\sum_{j\in E}\bigl|p_{ij}(t)- \pi_{j}\bigr|, $$

which is a continuous and unbounded function on \((0, \infty)\). Then the sets

$$G_{il}=\bigl\{ t|f_{il}(t)>1\bigr\} , $$

for \(i\in E\) and integer \(l>1\), form an at most countable class of nonempty and unbounded open subsets of \((0, \infty)\). By Proposition 1, applied with \(I=(0,\varepsilon)\), there exists \(0< h<\varepsilon\) such that, for every \(G_{il}\), \(nh\in G_{il}\) for infinitely many integers n.

If \(nh\in G_{il}\), then by (13) we have

$$f_{il}(nh)=e^{(\beta_{i} +l^{-1})nh}\sum_{j\in E}\bigl|p_{ij}(nh)- \pi_{j}\bigr| \leq \frac{e^{-\alpha h}}{e^{-\alpha h}-(1-M^{-1}_{h})} V_{h}(i) e^{(\beta_{i}+l^{-1}-\alpha)nh}, $$

which, since \(f_{il}(nh)>1\) for infinitely many n, gives \(\beta_{i} +l^{-1}-\alpha\geq0\).

By the arbitrariness of l and α, we get

$$\beta_{i}\geq\frac{\lambda^{2}}{\lambda+ (q_{m}-\lambda)(E^{m}\{e^{\lambda\tau^{+}_{m}}\}-1)}. $$

From the definition of \(\beta_{i}\), it follows that for any \(\alpha>0\) with

$$\alpha< \frac{\lambda^{2}}{\lambda+ (q_{m}-\lambda)(E^{m}\{e^{\lambda\tau ^{+}_{m}}\}-1)} $$

and any \(i\in E\), there exists \(R_{i}>0\) such that

$$\sum_{j\in E}\bigl|p_{ij}(t)-\pi_{j}\bigr| \leq R_{i} e^{-\alpha t}. $$

This completes the proof of Theorem 1. □

Definition 3

Let

$$\alpha^{*}=\sup\bigl\{ \alpha| \mbox{for all } i,j\in E, \exists R_{ij}>0\mbox{ s.t. } \bigl|p_{ij}(t)-\pi_{j}\bigr| \leq R_{ij}e^{-\alpha t}\bigr\} . $$

The constant \(\alpha^{*}\) is called the maximal exponentially ergodic constant of a transition function \(p_{ij}(t)\).

Remark 2

If \(p_{ij}(t)\) is irreducible, m is a stable state, and there exists \(\lambda>0\) such that \(E^{m}\{e^{\lambda\tau^{+}_{m}}\}<\infty\), then \(p_{ij}(t)\) is ergodic and the conclusion of Theorem 1 remains valid, as the proof of Theorem 1 shows.

Remark 3

From Theorem 1 and Definition 3 we know

$$\alpha^{*}\geq\frac{\lambda^{2}}{\lambda+ (q_{m}-\lambda)(E^{m}\{e^{\lambda\tau^{+}_{m}}\}-1)}. $$

3 Two examples

In this section we compute the maximal exponentially ergodic constants for two types of chains: a kind of singular Markov chain in which no state is conservative, and the Kolmogorov matrix, in which state 1 is an instantaneous state.

3.1 A kind of singular Markov chain

Suppose \(E=\{1,2,\ldots\}\) and

$$ Q= \begin{pmatrix} -q_{1} & 0 & 0&0 & \cdots \\ 0&-q_{2} & 0 &0 & \cdots\\ 0&0 &-q_{3} & 0 & \cdots\\ 0&0&0 &-q_{4} & \cdots\\ \vdots& \vdots& \vdots&\vdots& \cdots \end{pmatrix}, $$
(14)

where \(0< q_{1}< q_{2}<\cdots<q_{n}<\cdots<\infty\) and \(\inf_{i} q^{-1}_{i}=0\). In this case the transition function with the Q-matrix above is not unique (see [9, 10]), but the honest transition function \(p_{ij}(t)\) with this Q-matrix is unique, and its resolvent is

$$ R_{ij}(\lambda)=R^{\min}_{ij}( \lambda)+ E^{i}\bigl\{ e^{-\lambda\sigma}\bigr\} \frac{\sum_{k\in E}a_{k} R^{\min }_{kj}(\lambda)}{\lambda\sum_{k\in E} \sum_{m\in E}a_{k} R^{\min}_{km}(\lambda)}, $$
(15)

where \(a_{k}\) (\(k\in E\)) is a sequence of nonnegative real numbers such that

$$\sum_{k\in E}a_{k}=\infty \quad\mbox{and}\quad \sum _{k\in E}\sum_{m\in E}a_{k} R^{\min}_{km}(1)=1, $$

where \(R^{\min}_{ij}(\lambda)\) is the resolvent of the minimal transition function \(p^{\min}_{ij}(t)\). From [5] it is known that this chain is not symmetric, so its ergodicity cannot be treated by coupling theory, and existing results cannot be adapted to it. Our main methods and result are as follows.

3.1.1 Ray-Knight compactification

Theorem 2

For the Markov chain with Q-matrix (14), the Ray-Knight compactification of the state space E is \(\overline{E}=E\cup{\{\infty\}}\), and \(X=(\varOmega ,\mathscr{F},\mathscr{F}_{t},X_{t},\theta_{t},P^{x})\) is the right process with the transition function \(p_{ij}(t)\).

Proof

Consider

$$\sigma=\inf\bigl\{ t|t>0,\forall\varepsilon>0,\mbox{ there are infinitely many jumps of } X \mbox{ in } (t-\varepsilon,t+\varepsilon)\bigr\} . $$

Then we have

$$\begin{aligned} R^{\min}_{ij}(\lambda) =&E^{i}\biggl[\int _{0}^{\infty} e^{-\lambda t}I_{\{j\} }(X_{t})\,dt \biggr] \\ =&\delta_{ij} E^{i}\biggl[\int_{0}^{T_{1}}e^{-\lambda t}\,dt \biggr] \\ =&\frac{\delta_{ij}}{\lambda+q_{i}} \end{aligned}$$

and

$$\begin{aligned} E^{i}\bigl\{ e^{-\lambda\sigma}\bigr\} =& E^{i}\bigl\{ e^{-\lambda T_{1}}\bigr\} \\ =&\frac{q_{i}}{\lambda+q_{i}}. \end{aligned}$$

Then by (15) we have

$$\begin{aligned} R_{ij}(\lambda)=\frac{\delta_{ij}}{\lambda+q_{i}}+\frac{q_{i}}{\lambda +q_{i}} \frac{\frac{a_{j}}{\lambda+q_{j}}}{\lambda\sum_{k=1}^{\infty }\frac{a_{k}}{\lambda+q_{k}}}, \end{aligned}$$

which gives

$$\lim_{i\rightarrow\infty}R_{ij}(\lambda)=\frac{\frac{a_{j}}{\lambda +q_{j}}}{\lambda\sum_{k=1}^{\infty}\frac{a_{k}}{\lambda+q_{k}}}. $$

So the Ray-Knight compactification based on \(p_{ij}(t)\) is \(\overline {E}=E\cup{\{\infty\}}\) and

$$R_{\infty j}(\lambda)= \frac{\frac{a_{j}}{\lambda+q_{j}}}{\lambda\sum_{k=1}^{\infty}\frac{a_{k}}{\lambda+q_{k}}}. $$

After the Ray-Knight compactification, the Markov chain \(X=(\varOmega ,\mathscr{F},\mathscr{F}_{t},X_{t}, \theta_{t},P^{x})\) is the right process with the transition function \(p_{ij}(t)\). □

Remark 4

This chain also has the strong Markov property.

3.1.2 Excursions from state ∞

For each \(i\in\overline{E}\), let

$$\sigma_{i}=\inf\{t\geq0|X_{t}=i\} $$

be the hitting time of state i. With \(T_{1}\), σ, and \(\sigma_{i}\) defined as above, we have \(P^{i}[\sigma<\infty]=1\) (\(i\in E\)).

Consider the excursions of X from state ∞, and let \(\varphi (x)=E^{x}[e^{-\sigma_{\infty}}]\) for \(x\in E\cup\{\infty\}\); it is easily verified that \(\varphi(\cdot)\) is a 1-excessive function of X.

And then we have the following result.

Theorem 3

There exists a continuous additive functional \(L_{t}\) of X such that

  1. (1)

    \(E^{x}\{\int_{0}^{\infty}e^{-s}\,dL_{s}\}=\varphi(x)\) for any \(x\in E\cup\{ \infty\}\);

  2. (2)

    \(\operatorname{supp}dL=\overline{\{t|X_{t}=\infty\}}\);

  3. (3)

    \(L_{\infty}=\infty\).

Let

$$U=\bigl\{ w(\cdot)\in D_{\bar{E}}[0, \infty) \mid \exists s>0, w(s)\in E, w(\cdot)\equiv\infty \mbox{ on }\bigl[\eta_{\infty}(w), \infty\bigr)\bigr\} , $$

where \(\eta_{\infty}(w)=\inf\{t|t>0, w(t)=\infty\}\). We write \(\mathcal{U}\) for the σ-algebra on U, \(\{W_{t}\}_{t\geq0}\) for the coordinate process, \(\{\mathcal{U}_{t}\}_{t>0}\) for the natural filtration, and \(\theta_{t}\) for the shift operator. \((U,\mathcal{U})\) is called the excursion space.

For any \(t\geq0\), define \(\beta_{t}=\inf\{s|L_{s}>t\}\); \(\{\beta_{t}\}\) is the right-continuous inverse of the local time \(L_{t}\). Let \(D_{p}(\omega)=\{t|\beta_{t-}<\beta_{t}\}\). It is known that the excursions of X from state ∞ are in one-to-one correspondence with \(D_{p}(\omega)\) (see [11, 12]).

Theorem 4

For any \(t\in D_{p}\), let

$$Y_{t}(\omega)=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} X_{\beta_{t-}+s}(\omega),&\textit{if }0\leq s< \beta_{t}-\beta_{t-},\\ \infty,&\textit{if }s\geq\beta_{t}-\beta_{t-}, \end{array}\displaystyle \right . $$

then \(\{Y_{t}; t\in D_{p}\}\) is a Poisson point process on the excursion space \((U,\mathscr{U})\) (see [13]), and the characteristic measure \(\hat{P}(\cdot)\) satisfies

$$\hat{P}\bigl(\{W_{t_{1}}=i_{1},\ldots,W_{t_{n}}=i_{n} \}\bigr)=\sum_{k\in E}a_{k}p^{\min }_{ki_{1}}(t_{1}) \cdots p^{\min}_{i_{n-1}i_{n}} (t_{n}-t_{n-1}). $$

Remark 5

This means that the characteristic measure \(\hat{P}( \cdot)\) has the same distribution as \(\sum_{k\in E}a_{k} P^{k}\{ \cdot \}\).

Remark 6

  1. (1)

    The state space of the right process X is \(E\cup\{\infty\}\), where ∞ is the branching point;

  2. (2)

    The local time \(L_{t}\) of X at state ∞ is continuous;

  3. (3)

    The excursion measure \(\hat{P}( \cdot )\) is σ-finite and satisfies

    $$\hat{P}\bigl(\{W_{0}\notin E\}\bigr)=0 \quad\mbox{and}\quad \hat{P}\bigl( \{W_{0}=k\}\bigr)=a_{k} $$

    for all \(k\in E\).

3.1.3 Maximal exponentially ergodic constant

The following theorem gives the stationary distribution.

Theorem 5

The transition function \(p_{ij}(t)\) defined above is ergodic, and its stationary distribution is

$$\pi_{i}=\frac{a_{i}q^{-1}_{i}}{\sum_{k=1}^{\infty}a_{k} q^{-1}_{k}},\quad\forall{i\in E}. $$

Proof

(1) According to \(R^{\min}_{ij}(\lambda)=\frac{\delta _{ij}}{\lambda+q_{i}}\) and

$$1=\sum_{k\in E}a_{k}\sum _{m\in E}R^{\min}_{km}(1)=\sum _{k\in E}\frac{a_{k}}{1+q_{k}}, $$

we have \(\sum_{k=1}^{\infty}a_{k} q_{k}^{-1}<+\infty\).

(2) The resolvent of \(p_{ij}(t)\) is

$$R_{ij}(\lambda)=\frac{\delta_{ij}}{\lambda+q_{i}}+\frac{q_{i}}{\lambda +q_{i}} \frac{\frac{a_{j}}{\lambda+q_{j}}}{\lambda\sum_{k=1}^{\infty }\frac{a_{k}}{\lambda+q_{k}}}, $$

which gives

$$\pi_{i}=\lim_{\lambda\rightarrow 0}\lambda R_{ii}( \lambda)=\frac{a_{i}q^{-1}_{i}}{\sum_{k=1}^{\infty }a_{k}q_{k}^{-1}}< \infty. $$

This completes the proof. □
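The stationary distribution in Theorem 5 is explicit, so it can be evaluated numerically. The sketch below uses hypothetical data \(q_{k}=k^{2}\) and \(a_{k}=1\), truncated at N states (so that \(\sum_{k}a_{k}\) diverges while \(\sum_{k}a_{k}q_{k}^{-1}\) converges, as required); none of these choices come from the paper:

```python
N = 1000
q = [float(k * k) for k in range(1, N + 1)]   # hypothetical rates q_k = k^2
a = [1.0] * N                                 # hypothetical weights a_k = 1
Z = sum(a[k] / q[k] for k in range(N))        # normalizing constant sum_k a_k / q_k
pi = [a[k] / q[k] / Z for k in range(N)]      # pi_i = (a_i / q_i) / Z
print(sum(pi), pi[0])
```

With these choices the mass concentrates on small states, since a larger \(q_{i}\) means shorter sojourns in state i.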

In the following we discuss conditions for exponential ergodicity and the corresponding convergence rate.

Theorem 6

If

$$ 0< \lambda< \frac{a_{1}}{\sum_{k=2}^{\infty}a_{k} (q_{k}-q_{1})^{-1}}\wedge q_{1}, $$
(16)

then for some (and then for all) \(i\in E\) we have \(E^{i}[e^{\lambda\sigma _{1}}]<\infty\), and \(p_{ij}(t)\) is exponentially ergodic. Moreover, for this λ, if

$$\alpha< \frac{\lambda^{2}}{\lambda+(q_{1}-\lambda)(E^{1}\{e^{\lambda\tau _{1}^{+}}\}-1)}, $$

where

$$E^{1}\bigl\{ e^{\lambda\tau_{1}^{+}}\bigr\} =\frac{q_{1}}{q_{1}-\lambda} \frac{a_{1}}{a_{1}-\sum_{k=2}^{\infty}\frac{a_{k} \lambda}{q_{k}-\lambda}}, $$

then there exists \(R_{i}<\infty\) for any \(i\in E\) such that

$$\sum_{j\in E}\bigl|p_{ij}(t)-\pi_{j}\bigr| \leq R_{i} e^{-\alpha t} . $$

Proof

(1) For any \(i\in E \) (\(i\neq1\)),

$$E^{i}\bigl[e^{\lambda\sigma_{1}}\bigr]=E^{i}\bigl[e^{\lambda(T_{1}+\sigma_{1}\circ\theta _{T_{1}})} \bigr]=\frac{q_{i}}{q_{i}-\lambda}E^{\infty}\bigl[e^{\lambda\sigma_{1}}\bigr]. $$

In the following we compute \(E^{\infty}[e^{\lambda\sigma_{1}}]\).

Consider the coordinate process \(W(s)\) on the excursion space \((U,\mathscr{U})\). For any \(i\in E\), define

$$\eta_{i}=\inf\bigl\{ s|W(s)=i\bigr\} \quad (i\in E),\qquad\eta_{\infty}=\inf \{s|W_{s}=\infty\}. $$

For \(s\geq\eta_{\infty}\), \(W(s)=\infty\); hence \(\eta_{i}>\eta_{\infty}\) is equivalent to \(\eta_{i}=\infty\). Let

$$C_{0}=\{W|W_{0}\neq1\},\qquad C_{1}= \{W|W_{0}=1\}. $$

Obviously \(C_{0}, C_{1}\in\mathscr{U}\) and \(C_{0}\cup C_{1}=U\). Let \(\tau=\inf\{t|\beta_{t}>\sigma_{1}\}\) (\(t>0\)) and

$$Z_{t}^{1}=\sharp\{s|s\in D_{p},s\leq t,Y_{s}\in C_{1}\}, $$

where \(\sharp A\) denotes the cardinality of A. Then we have

$$\begin{aligned} P^{\infty}\{\tau>t\} =&P^{\infty}\bigl\{ Z_{t}^{1}=0 \bigr\} \\ =& e^{-a_{1} t}, \end{aligned}$$

so τ is an exponential random variable with parameter \(a_{1}\).

And hence we have

$$\begin{aligned} E^{\infty}\bigl[e^{\lambda\sigma_{1}}\bigr] =& E^{\infty}\biggl[\exp \biggl\{ \lambda\biggl(\sum_{s\in D_{p}}\bigl(I_{(0,\sigma_{1}]}(s) \sigma_{\infty}I_{\{C_{0}\}}(Y_{s})\bigr)\circ \theta_{s}\biggr)\biggr\} \biggr]\\ =&E^{\infty}\bigl\{ \exp[\lambda\beta_{\tau}]\bigr\} \\ =&\int_{0}^{\infty} E^{\infty}\bigl\{ e^{\lambda\beta_{t}}\bigr\} a_{1} e^{-a_{1}t}\,dt. \end{aligned}$$

From the exponential formula for Poisson point processes, we know that

$$E^{\infty}\bigl\{ e^{\lambda\beta_{t}}\bigr\} =\exp\bigl\{ t\hat{P}\bigl(\bigl(e^{\lambda\sigma_{\infty}}-1\bigr) I_{\{C_{0}\}}(Y_{s}) \bigr)\bigr\} , $$

where

$$\hat{P}\bigl(\bigl(e^{\lambda\sigma_{\infty}}-1\bigr)I_{\{C_{0}\}}(Y_{s}) \bigr) =\sum_{k=2}^{\infty}a_{k} E^{k}\bigl[e^{\lambda\sigma}-1\bigr] =\sum_{k=2}^{\infty} \frac{a_{k} \lambda}{q_{k}-\lambda}. $$

Hence

$$\begin{aligned} E^{\infty}\bigl[e^{\lambda\sigma_{1}}\bigr] =&\int_{0}^{\infty}a_{1} \exp\biggl\{ t\biggl(\sum_{k=2}^{\infty} \frac{a_{k} \lambda}{q_{k}-\lambda}-a_{1}\biggr)\biggr\} \,dt\\ =&\frac{a_{1}}{a_{1}-\sum_{k=2}^{\infty}\frac{a_{k} \lambda}{q_{k}-\lambda}}. \end{aligned}$$

Therefore, if (16) is satisfied, then we have \(E^{\infty}[e^{\lambda \sigma_{1}}]<\infty\), which gives \(E^{i}[e^{\lambda \sigma_{1}}]<\infty\). From [13], Lemma 6.3, we know that \(p_{ij}(t)\) is exponentially ergodic.

(2) If \(i\neq1\), then \(\tau_{1}^{+}=\sigma_{1}\). If λ satisfies (16), then we have

$$E^{i}\bigl\{ e^{\lambda\tau_{1}^{+}}\bigr\} =E^{i} \bigl[e^{\lambda \sigma_{1}}\bigr]< \infty $$

for any \(i\in E\) from Theorems 1 and 5.

Taking \(m=1\) in Theorem 1 and arguing as in (1) above, we have

$$\begin{aligned} E^{1}\bigl\{ e^{\lambda\tau_{1}^{+}}\bigr\} =&E^{1}\bigl\{ e^{\lambda(T_{1}+\sigma_{1}\circ \theta_{T_{1}})}\bigr\} \\ =&E^{1}\bigl[e^{\lambda T_{1}}\bigr]E^{\infty} \bigl[e^{\lambda\sigma_{1}}\bigr] \\ =&\frac{q_{1}}{q_{1}-\lambda}\frac{a_{1}}{a_{1}-\sum_{k=2}^{\infty}\frac{a_{k} \lambda}{q_{k}-\lambda}}. \end{aligned}$$

We complete the proof of this theorem. □
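To see what Theorem 6 yields concretely, the following sketch evaluates the threshold in (16), the moment \(E^{1}\{e^{\lambda\tau_{1}^{+}}\}\) (computed as \(E^{1}[e^{\lambda T_{1}}]E^{\infty}[e^{\lambda\sigma_{1}}]\), where \(E^{1}[e^{\lambda T_{1}}]=q_{1}/(q_{1}-\lambda)\) for an exponential holding time), and the resulting rate bound. The data \(q_{k}=k^{2}\), \(a_{k}=1\), and the truncation level are all hypothetical:

```python
lam = 0.5
N = 100_000                      # truncation of the infinite sums (hypothetical)
q = lambda k: float(k * k)       # hypothetical rates, q_1 = 1
a = lambda k: 1.0                # hypothetical weights, sum a_k diverges

# Condition (16): lam below a_1 / sum_{k>=2} a_k (q_k - q_1)^{-1}, and below q_1.
thresh = min(a(1) / sum(a(k) / (q(k) - q(1)) for k in range(2, N)), q(1))
assert lam < thresh

S = sum(a(k) * lam / (q(k) - lam) for k in range(2, N))
E_inf = a(1) / (a(1) - S)                 # E^infty[e^{lam sigma_1}]
E1 = q(1) / (q(1) - lam) * E_inf          # E^1{e^{lam tau_1^+}}
alpha = lam ** 2 / (lam + (q(1) - lam) * (E1 - 1))
print(thresh, E1, alpha)
```

Every α below the printed value is then an admissible exponential convergence rate for this chain.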

Remark 7

Obviously, the maximal exponentially ergodic constant satisfies

$$\alpha^{*} \geq \frac{\lambda^{2}}{\lambda+(q_{1}-\lambda)(E^{1}\{e^{\lambda \tau_{1}^{+}}\}-1)}. $$

3.2 Kolmogorov matrix

The following example contains an instantaneous state.

Suppose that \(q_{2},q_{3},\ldots \) is a sequence of positive real numbers and consider the following Q-matrix:

$$Q= \begin{pmatrix} -\infty& 1 & 1&1 & \cdots\\ q_{2}&-q_{2} & 0 &0 & \cdots\\ q_{3}&0 &-q_{3} & 0&\cdots\\ q_{4}& 0&0& -q_{4}&\cdots\\ \vdots& \vdots& \vdots& \vdots& \cdots \end{pmatrix}, $$

where \(\sum_{i=2}^{\infty}{q_{i}}^{-1}<\infty\). This matrix is called the Kolmogorov matrix. There are infinitely many dishonest processes with this Q-matrix; it has been shown (see [8, 9, 14]) that the process with the following resolvent is the only honest one.

Let

$$\begin{aligned}& R_{11}(\lambda)=\frac{1}{\lambda}\biggl(1+\sum _{k=2}^{\infty}\frac{1}{\lambda +q_{k}}\biggr)^{-1}, \\& R_{1j}(\lambda)= R_{11}(\lambda)\cdot\frac{1}{\lambda+q_{j}}\quad (j \geq2), \\& R_{i1}(\lambda)= \frac{q_{i}}{\lambda+q_{i}} \cdot R_{11}(\lambda)\quad (i\geq2) \end{aligned}$$

and

$$R_{ij}(\lambda)= \frac{q_{i}}{\lambda+q_{i}}\cdot R_{11}(\lambda) \cdot\frac{1}{\lambda+q_{j}}+\frac{\delta_{ij}}{\lambda +q_{j}}\quad (i,j\geq2), $$

where \(\lambda>0\) and the state space is \(E=\{1,2,3,\ldots\}\).

Obviously, the transition function \(p_{ij}(t)\) corresponding to the resolvent above is the only honest one. Although this chain is weakly symmetric, its convergence rate was previously unknown because of its instantaneous state.
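Honesty can be verified directly from the resolvent: each row should satisfy \(\lambda\sum_{j}R_{ij}(\lambda)=1\). The following sketch checks this for a truncated model with hypothetical rates \(q_{k}=k^{2}\) (the truncated sums play the role of the infinite ones):

```python
lam = 1.0
N = 2000                                        # truncation level (hypothetical)
q = {k: float(k * k) for k in range(2, N + 1)}  # hypothetical rates q_k = k^2

S = sum(1.0 / (lam + q[k]) for k in range(2, N + 1))
R11 = (1.0 / lam) / (1.0 + S)                   # R_11(lam)

def row_sum(i):
    """sum_j R_ij(lam) over the truncated state space, using the formulas above."""
    if i == 1:
        return R11 * (1.0 + S)                  # R_11 + sum_{j>=2} R_11 / (lam + q_j)
    # R_i1 + sum_{j>=2} [ q_i/(lam+q_i) R_11/(lam+q_j) + delta_ij/(lam+q_j) ]
    return q[i] / (lam + q[i]) * R11 * (1.0 + S) + 1.0 / (lam + q[i])

for i in (1, 2, 7):
    print(i, lam * row_sum(i))                  # each is 1 in the truncated model
```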

3.2.1 Ray-Knight compactification

Theorem 7

For the Markov chain with the Q-matrix above, the Ray-Knight compactification of the state space E is still E, and \(X=(\varOmega ,\mathscr{F},\mathscr{F}_{t},X_{t},\theta_{t},P^{x})\) is the right process with the transition function \(p_{ij}(t)\).

Proof

It is obvious that

$$\lim_{i\rightarrow\infty}R_{ij}(\lambda)=R_{1j}( \lambda). $$

By the methods in [11], we can show that \(\overline{E}=E\). In the Ray-Knight topology, the instantaneous state 1 is the limit point of the sequence \(\{2,3,\ldots\}\). So the Markov chain \(X=(\varOmega ,\mathscr{F},\mathscr{F}_{t},X_{t},\theta_{t},P^{x})\) is the right process with the transition function \(p_{ij}(t)\) (see [14, 15]). □

Remark 8

This chain has the strong Markov property.

3.2.2 Excursions from state 1

For each \(i\in E\), define \(T_{1}\), σ, and \(\sigma_{i}\) as above. Then obviously for each \(i\in E\) we have \(P^{i}[\sigma<\infty]=1\), which means that the instantaneous state 1 is a recurrent state of X.

Consider the excursions of X from state 1, and let

$$\varphi(x)=E^{x}\bigl[e^{-\sigma_{1}}\bigr] $$

for all \(x\in E\); it is easily verified that \(\varphi(\cdot)\) is a 1-excessive function of X.

And then we have the following result.

Theorem 8

There exists a continuous additive functional \(L_{t}\) of X such that

  1. (1)

    \(E^{x}\{\int_{0}^{\infty}e^{-s}\,dL_{s}\}=\varphi(x)\) for all \(x\in E\);

  2. (2)

    \(\operatorname{supp}dL=\overline{\{t|X_{t}=1\}}\);

  3. (3)

    \(L_{\infty}=\infty\).

Let

$$U=\bigl\{ w(\cdot)\in D_{E}[0, \infty) \mid \exists s>0, w(s)\in E, w(\cdot)\equiv1 \mbox{ on }\bigl[\eta_{1}(w), \infty\bigr)\bigr\} , $$

where \(\eta_{1}(w)=\inf\{t|t>0, w(t)=1\}\). We write \(\mathcal{U}\) for the σ-algebra on U, \(\{W_{t}\}_{t\geq0}\) for the coordinate process, \(\{\mathcal{U}_{t}\}_{t>0}\) for the natural filtration, and \(\theta_{t}\) for the shift operator. \((U,\mathcal{U})\) is called the excursion space.

For any \(t\geq0\), let \(\beta_{t}=\inf\{s|L_{s}>t\}\); then \(\{\beta_{t}\}\) is the right-continuous inverse of the local time \(L_{t}\). Let \(D_{p}(\omega)=\{t|\beta_{t-}<\beta_{t}\}\). It is known that the excursions of X leaving from state 1 are in one-to-one correspondence with the points of \(D_{p}(\omega)\).
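The role of the right-continuous inverse can be illustrated with a small numerical sketch. This is not from the paper: the sampled local time below is a hypothetical toy function, and the grid-based inverse is only an approximation. Flat stretches of \(L_{t}\) (the chain away from state 1) become jumps of \(\beta_{t}\), which is exactly how \(D_{p}\) indexes the excursion intervals.

```python
import bisect

def make_inverse(s_grid, L_vals):
    """Return beta(t) = inf{s : L(s) > t} for a sampled nondecreasing L."""
    def beta(t):
        i = bisect.bisect_right(L_vals, t)  # first grid index with L > t
        return s_grid[i] if i < len(s_grid) else float("inf")
    return beta

# Hypothetical sampled local time: L increases on [0, 1], is flat on [1, 2]
# (an excursion away from state 1), then increases again.  The flat stretch
# of L shows up as a jump of beta, so t = 1 belongs to D_p.
s_grid = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
L_vals = [0.0, 0.5, 1.0, 1.0, 1.0, 1.5, 2.0]
beta = make_inverse(s_grid, L_vals)
print(beta(0.9), beta(1.0))  # beta jumps across the flat stretch of L
```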

Theorem 9

For any \(t\in D_{p}\), define

$$Y_{t}(\omega) (s)=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} X_{\beta_{t-}+s}(\omega),&\textit{if }0\leq s< \beta_{t}-\beta_{t-};\\ 1,&\textit{if }s\geq\beta_{t}-\beta_{t-}, \end{array}\displaystyle \right . $$

then \(\{Y_{t}; t\in D_{p}\}\) is a Poisson point process on the excursion space \((U,\mathscr{U})\), and its characteristic measure \(\hat{P}(\cdot)\) satisfies

$$\hat{P}\bigl(\{W_{t_{1}}=i_{1},\ldots,W_{t_{n}}=i_{n} \}\bigr)=\sum_{k\in E}p^{\min }_{ki_{1}}(t_{1}) \cdots p^{\min}_{i_{n-1}i_{n}} (t_{n}-t_{n-1}). $$

Remark 9

Theorem 9 means that the characteristic measure \(\hat{P}(\cdot)\) has the same finite-dimensional distributions as \(\sum_{k\in E} P^{k}\{ \cdot \}\).

Remark 10

  1. (1)

    The state space of the right process X is E, and 1 is a branching point;

  2. (2)

    The local time \(L_{t}\) of X at state 1 is continuous;

  3. (3)

    The excursion measure \(\hat{P}( \cdot )\) is σ-finite and satisfies

    $$\hat{P}\bigl(\{W_{0}= 1\}\bigr)=0 \quad\mbox{and}\quad \hat{P}\bigl( \{W_{0}=k\}\bigr)=1 $$

    for all \(k\in E\backslash\{1\}\).

3.2.3 Maximal exponentially ergodic constant

The following theorem gives the stationary distribution.

Theorem 10

The transition function \(p_{ij}(t)\) defined above is ergodic, and its stationary distribution is

$$\pi_{1}=\frac{1}{1+\sum_{k=2}^{\infty}q^{-1}_{k}} \quad\textit{and}\quad \pi_{j}= \frac{q^{-1}_{j}}{1+\sum_{k=2}^{\infty} q^{-1}_{k}} $$

for all \(j\in E\backslash\{1\}\).

Proof

According to Theorem 1.3 in [8], p.157, we have

$$\pi_{j}=\lim_{\lambda\rightarrow0 }\lambda R_{ij}( \lambda)=\lim_{t \rightarrow\infty} p_{ij}(t), $$

which gives

$$\pi_{1}=\lim_{\lambda\rightarrow0 }\lambda R_{i1}( \lambda)=\lim_{\lambda\rightarrow0 }\lambda R_{11}(\lambda)= \frac{1}{1+\sum_{k=2}^{\infty}q^{-1}_{k}}< \infty $$

and

$$\pi_{j}= \lim_{\lambda\rightarrow0 }\lambda R_{ij}( \lambda)=\frac {q^{-1}_{j}}{1+\sum_{k=2}^{\infty} q^{-1}_{k}}< \infty $$

for each \(i\geq2\).

This completes the proof. □
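As a quick numerical sanity check of the formula in Theorem 10 (not part of the original argument), one can evaluate π for an illustrative truncated choice of rates. The rates \(q_{k}=k^{2}\) and the truncation level below are hypothetical, chosen only so that \(\sum_{k\geq2} q_{k}^{-1}\) converges.

```python
# Sanity check of the stationary distribution in Theorem 10 for a
# hypothetical choice of rates q_k = k**2 (k >= 2), truncated at N states.
N = 10_000
q = {k: float(k * k) for k in range(2, N + 1)}

Z = 1.0 + sum(1.0 / q[k] for k in range(2, N + 1))  # normalising constant
pi = {1: 1.0 / Z}
pi.update({j: (1.0 / q[j]) / Z for j in range(2, N + 1)})

# By construction the truncated distribution sums to 1 (up to rounding).
print(abs(sum(pi.values()) - 1.0) < 1e-9)
```

Note that the instantaneous state 1 carries the largest mass, since every other state j contributes only \(q_{j}^{-1}\) to the normalising constant.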

In the following we discuss conditions for exponential ergodicity and the corresponding convergence rates.

Theorem 11

If

$$ 0< \lambda< \frac{1}{\sum_{k=3}^{\infty} (q_{k}-q_{2})^{-1}}\wedge q_{2} , $$
(17)

then for some (and hence for all) \(i\in E\), \(E^{i}[e^{\lambda\sigma_{1}}]<\infty\), so \(p_{ij}(t)\) is exponentially ergodic. Moreover, for this λ, if

$$\alpha< \frac{\lambda^{2}}{\lambda+(q_{2}-\lambda)(E^{2}\{e^{\lambda\tau _{2}^{+}}\}-1)}, $$

where

$$E^{2}\bigl\{ e^{\lambda\tau_{2}^{+}}\bigr\} =\frac{1}{q_{2}-\lambda} \frac{1}{1-\sum_{k=2}^{\infty}\frac{a_{k} \lambda}{q_{k}-\lambda}}, $$

then there exists \(R_{i}<\infty\)for any \(i\in E\)such that

$$\sum_{j\in E}\bigl|p_{ij}(t)-\pi_{j}\bigr| \leq R_{i} e^{-\alpha t} . $$

Proof

(1) For any \(i\in E\) with \(i\geq3\), we have

$$E^{i}\bigl\{ e^{\lambda\sigma_{2}}\bigr\} =E^{i}\bigl\{ e^{\lambda(T_{1}+\sigma_{2}\circ\theta _{T_{1}})}\bigr\} =\frac{1}{q_{i}-\lambda}E^{1}\bigl\{ e^{\lambda\sigma_{2}}\bigr\} . $$

Next we compute \(E^{1}\{e^{\lambda\sigma_{2}}\}\). Consider the coordinate process \(W(s)\) on the excursion space \((U,\mathscr{U})\). For any \(i\in E\), define

$$\eta_{i}=\inf\bigl\{ s|W(s)=i\bigr\} . $$

Let

$$C_{0}=\{W|W_{0}\neq2\} \quad\mbox{and}\quad C_{1}= \{W|W_{0}=2\}, $$

then we know that \(C_{0}, C_{1}\in\mathscr{U}\) and \(C_{0}\cup C_{1}=U\).

Let \(\tau=\inf\{t>0|\beta_{t}>\sigma_{2}\}\) and

$$Z_{t}^{1}=\sharp\{s|s\in D_{p},s\leq t,Y_{s}\in C_{1}\}, $$

we have

$$\begin{aligned} P^{1} \{\tau>t\} =&\hat{P}\bigl\{ Z_{t}^{1}=0 \bigr\} \\ =& e^{-\hat{P}(W_{0}=2)t}=e^{-t}, \end{aligned}$$

so τ is an exponential random variable with mean 1, and

$$\begin{aligned} E^{1}\bigl\{ e^{\lambda\sigma_{2}}\bigr\} =& E^{1}\biggl[\exp \biggl\{ \lambda\biggl(\sum_{s\in G}\bigl(I_{(0,\sigma_{2}]}(s) \sigma_{\infty}I_{\{C_{0}\}}(Y_{s})\bigr)\circ \theta_{s}\biggr)\biggr\} \biggr] \\ =&E^{1}\bigl\{ \exp[\lambda\beta_{\tau}]\bigr\} \\ =&\int_{0}^{\infty} E^{1}\bigl\{ e^{\lambda\beta_{t}}\bigr\} e^{-t}\,dt. \end{aligned}$$

From the computation of the Poisson point process, we have

$$\begin{aligned} E^{1}\bigl\{ e^{\lambda\beta_{t}}\bigr\} =& E^{1}\bigl\{ e^{\lambda z^{1}_{t}}\bigr\} \\ =&\exp\bigl\{ t\hat{P}\bigl(\bigl(e^{\lambda\sigma_{1}}-1\bigr) I_{\{C_{1}\}}(Y_{s}) \bigr)\bigr\} , \end{aligned}$$

which gives

$$\hat{P}\bigl(\bigl(e^{\lambda\sigma_{1}}-1\bigr)I_{\{C_{1}\}}(Y_{s}) \bigr) =\sum_{k=3}^{\infty} E^{k} \bigl[e^{\lambda\sigma_{1}}-1\bigr] =\sum_{k=3}^{\infty} \frac{ \lambda}{q_{k}-\lambda}. $$

So

$$\begin{aligned} E^{1}\bigl\{ e^{\lambda\sigma_{2}}\bigr\} =&\int_{0}^{\infty} \exp\biggl\{ t\biggl(\sum_{k=3}^{\infty} \frac{ \lambda}{q_{k}-\lambda}-1\biggr)\biggr\} \,dt\\ =&\frac{1}{1-\sum_{k=3}^{\infty}\frac{ \lambda}{q_{k}-\lambda}}. \end{aligned}$$

Therefore, if (17) is satisfied, then \(E^{1}[e^{\lambda \sigma_{2}}]<\infty\) and \(E^{i}\{e^{\lambda \sigma_{1}}\}<\infty\). By Lemma 6.3 in [8], p.228, \(p_{ij}(t)\) is exponentially ergodic.

(2) If \(i\geq3\), then we have \(\tau_{2}^{+}=\sigma_{2} \). If λ satisfies (17), then we have

$$E^{i}\bigl\{ e^{\lambda\tau_{1}^{+}}\bigr\} =E^{i}\bigl\{ e^{\lambda \sigma_{2}}\bigr\} < \infty $$

for any \(i\in E\) by Theorem 1.

Taking \(m=2\) in Theorem 1 and arguing as in part (1) above, we have

$$\begin{aligned} E^{2}\bigl\{ e^{\lambda\tau_{1}^{+}}\bigr\} =&E^{2}\bigl\{ e^{\lambda(T_{1}+\sigma_{2}\circ \theta_{T_{1}})}\bigr\} \\ =&E^{2}\bigl[e^{\lambda T_{1}}\bigr]E^{1} \bigl[e^{\lambda\sigma_{2}}\bigr] \\ =&\frac{1}{q_{2}-\lambda}\frac{1}{1-\sum_{k=3}^{\infty}\frac{ \lambda}{q_{k}-\lambda}}. \end{aligned}$$

The result now follows from Theorem 1. □

Remark 11

According to the results above, the maximal exponentially ergodic constant \(\alpha^{*}\) of this example satisfies

$$\alpha^{*} \geq \frac{\lambda^{2}}{\lambda+(q_{2}-\lambda)(E^{2}\{e^{\lambda \tau_{1}^{+}}\}-1)}. $$
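Condition (17) can also be explored numerically. The sketch below (not from the paper) uses the hypothetical rates \(q_{k}=k^{2}\) for \(k\geq2\), picks a λ strictly inside the admissible range, and checks that the series appearing in \(E^{1}\{e^{\lambda\sigma_{2}}\}\) stays below 1, so that the expectation is finite.

```python
# Checking condition (17) and the finiteness of E^1[e^{lam * sigma_2}]
# for a hypothetical family of rates q_k = k**2 (k >= 2), truncated at N.
N = 100_000
q = {k: float(k * k) for k in range(2, N + 1)}

# Condition (17): 0 < lambda < (sum_{k>=3} (q_k - q_2)^{-1})^{-1}, and lambda < q_2.
cap = min(1.0 / sum(1.0 / (q[k] - q[2]) for k in range(3, N + 1)), q[2])
lam = 0.5 * cap  # any value strictly inside the admissible range

# E^1[e^{lam * sigma_2}] = 1 / (1 - S) with S = sum_{k>=3} lam / (q_k - lam);
# since lam < q_2 gives q_k - lam > q_k - q_2, condition (17) forces S < 1.
S = sum(lam / (q[k] - lam) for k in range(3, N + 1))
print(S < 1.0, 1.0 / (1.0 - S))
```

Any λ satisfying (17) then yields an explicit lower bound on \(\alpha^{*}\) through the displayed inequality above.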

Change history

  • 27 February 2020

    The Editors-in-Chief have retracted this article [1] because it showed evidence of peer review manipulation. In addition, the identity of the corresponding author could not be verified: Nagoya University has confirmed that Ikudol Miyamoto has not been affiliated with their Graduate School of Mathematics.

References

  1. Baxendale, PH: Renewal theory and computable convergence rates for geometrically ergodic Markov chains. Ann. Appl. Probab. 15, 700-738 (2005)

  2. Meyn, S, Tweedie, R: Computable bounds for geometric convergence rates of Markov chains. Ann. Appl. Probab. 4, 981-1011 (1994)

  3. Roberts, GO, Tweedie, RL: Bounds on regeneration times and convergence rates for Markov chains. Stoch. Process. Appl. 80, 211-229 (1999)

  4. Rosenthal, JS: Minorization conditions and convergence rates for Markov chain Monte Carlo. J. Am. Stat. Assoc. 90, 558-566 (1995)

  5. Lund, R, Tweedie, R: Geometric convergence rates for stochastically ordered Markov chains. Math. Oper. Res. 21, 182-194 (1996)

  6. Roberts, GO, Tweedie, RL: Rates of convergence of stochastically monotone and continuous time Markov models. J. Appl. Probab. 37, 359-373 (2000)

  7. Scott, D, Tweedie, R: Explicit rates of convergence of stochastically monotone Markov chains. In: Proceedings of the Athens Conference on Applied Probability and Time Series Analysis (Papers in Honour of J. Gani and E. J. Hannan), pp. 176-191. Springer, New York (1996)

  8. Anderson, WJ: Continuous-Time Markov Chains: An Applications-Oriented Approach. Springer, New York (1991)

  9. Hou, ZT, Zou, JZ, et al.: The Q-Matrix Problem for Markov Chains. Hunan Science and Technology Press, Changsha (1994)

  10. Rogers, LCG, Williams, D: Diffusions, Markov Processes, and Martingales, Vol. 2: Itô Calculus. Wiley, Chichester (1987)

  11. Getoor, RK: Excursions of a Markov process. Ann. Probab. 7(2), 244-266 (1979)

  12. Salisbury, TS: On the Itô excursion process. Probab. Theory Relat. Fields 73, 319-350 (1986)

  13. Itô, K: Poisson point processes attached to Markov processes. In: Proc. Sixth Berkeley Symp. on Math. Statist. and Prob., vol. 3, pp. 225-239. University of California Press, Berkeley (1972)

  14. Reuter, GEH: Remarks on a Markov chain example of Kolmogorov. Z. Wahrscheinlichkeitstheor. Verw. Geb. 14, 56-61 (1969)

  15. Getoor, RK: Markov Processes: Ray Processes and Right Processes. Lecture Notes in Mathematics, vol. 440. Springer, Berlin (1975)


Acknowledgements

This paper was written during a short stay by the corresponding author at the Graduate School of Mathematics of Nagoya University as a visiting professor. I would like to thank the Graduate School of Mathematics and its members for their warm hospitality.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Ikudol Miyamoto.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The authors contributed equally to this work and read and approved the final manuscript.

The authors have not responded to any correspondence regarding this retraction.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article


Cite this article

Yan, Z., Yan, G. & Miyamoto, I. RETRACTED ARTICLE: Fixed point theorems and explicit estimates for convergence rates of continuous time Markov chains. Fixed Point Theory Appl 2015, 197 (2015). https://doi.org/10.1186/s13663-015-0443-x
