Generalized contraction mapping principle and generalized best proximity point theorems in probabilistic metric spaces

Abstract

The purpose of this paper is to introduce some basic definitions concerning fixed points and best proximity points in two classes of probabilistic metric spaces and to prove a contraction mapping principle together with relevant best proximity point theorems. The first class is that of the so-called S-probabilistic metric spaces, in which we prove a generalized contraction mapping principle and generalized best proximity point theorems; these results improve and extend the recent results of Su and Zhang (Fixed Point Theory Appl. 2014:170, 2014). The second class is that of Menger probabilistic metric spaces, in which we prove a contraction mapping principle and relevant best proximity point theorems; these results also improve and extend the results of many authors. Some new methods are used to obtain the results of this paper, and some error estimate inequalities are established.

1 Introduction and preliminaries

Probabilistic metric spaces were introduced in 1942 by Menger [1]. In such spaces, the notion of distance between two points x and y is replaced by a distribution function \(F_{x,y}(t)\). Thus one thinks of the distance between points as being probabilistic, with \(F_{x,y}(t)\) representing the probability that the distance between x and y is less than t. Sehgal, in his PhD thesis [2], extended the notion of a contraction mapping to the setting of Menger probabilistic metric spaces: a mapping T is a probabilistic contraction if, for some constant \(0 < k < 1\), the probability that the distance between the image points Tx and Ty is less than kt is at least as large as the probability that the distance between x and y is less than t.

In 1972, Sehgal and Bharucha-Reid proved the following result.

Theorem 1.1

(Sehgal and Bharucha-Reid [3])

Let \((E,F,\triangle)\) be a complete Menger probabilistic metric space for which the triangular norm △ is continuous and satisfies \(\triangle(a,b)=\min (a,b)\). If T is a mapping of E into itself such that for some \(0< k<1\) and all \(x,y \in E\),

$$ F_{Tx,Ty}(t)\geq F_{x,y}\biggl(\frac{t}{k}\biggr), \quad \forall t >0, $$
(1.1)

then T has a unique fixed point \(x^{*}\) in E, and for any given \(x_{0} \in E\), \(T^{n}x_{0}\) converges to \(x^{*}\).

The mapping T satisfying (1.1) is called a k-probabilistic contraction or a Sehgal contraction [3]. The fixed point theorem obtained by Sehgal and Bharucha-Reid is a generalization of the classical Banach contraction principle and has been further investigated by many authors [2, 4–17]. Some results in this theory have found applications in control theory, system theory and optimization problems.
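
As a quick illustration (a standard construction, not a result of this paper), every ordinary metric space \((X,d)\) becomes a Menger probabilistic metric space with \(\triangle=\min\) by setting \(F_{x,y}(t)=H(t-d(x,y))\), where H is the unit step function of Definition 1.4 below; under this identification a Banach contraction with constant k satisfies the Sehgal contraction condition (1.1), since \(d(Tx,Ty)\leq k\,d(x,y)\) gives

$$F_{Tx,Ty}(t)=H\bigl(t-d(Tx,Ty)\bigr)\geq H\biggl(\frac{t}{k}-d(x,y)\biggr)=F_{x,y}\biggl(\frac{t}{k}\biggr), \quad \forall t>0. $$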

Next we recall some well-known definitions and results in the theory of probabilistic metric spaces which are used later in this paper. For more details, we refer the reader to [8].

Definition 1.2

A triangular norm (briefly, a △-norm) is a binary operation △ on \([0,1]\) which satisfies the following conditions:

  (a) △ is associative and commutative;

  (b) △ is continuous;

  (c) \(\triangle(a,1)=a\) for all \(a\in[0,1]\);

  (d) \(\triangle(a,b)\leq\triangle(c,d)\) whenever \(a \leq c\) and \(b \leq d\) for each \(a, b, c, d \in[0,1]\).

The following are the three basic △-norms:

  • \(\triangle_{1}(a,b)=\max(a+b-1,0)\);

  • \(\triangle_{2}(a,b)=a\cdot b\);

  • \(\triangle_{3}(a,b)=\min(a,b)\).

It is easy to check that the above three △-norms have the following relations:

$$\triangle_{1}(a,b)\leq\triangle_{2}(a,b)\leq \triangle_{3}(a,b) $$

for any \(a,b \in[0,1]\).
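
Indeed, for all \(a,b \in[0,1]\),

$$\max(a+b-1,0)\leq ab \quad \bigl(\mbox{since } (1-a) (1-b)\geq0 \mbox{ and } ab\geq0\bigr), \qquad ab\leq\min(a,b) \quad \bigl(\mbox{since } a\leq1 \mbox{ and } b\leq1\bigr). $$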

Definition 1.3

A function \(F(t): (-\infty,+\infty)\rightarrow[0, 1]\) is called a distribution function if it is non-decreasing and left-continuous with \(\lim_{t\rightarrow-\infty}F(t)=0\). If in addition \(F(0)=0\), then F is called a distance distribution function.

Definition 1.4

A distance distribution function F satisfying \(\lim_{t\rightarrow+\infty}F(t)=1\) is called a Menger distance distribution function. The set of all Menger distance distribution functions is denoted by \(D^{+}\). A special Menger distance distribution function is given by

$$H(t)= \begin{cases} 0, & t\leq0, \\ 1, & t>0. \end{cases} $$

Definition 1.5

A probabilistic metric space is a pair \((E, F)\), where E is a nonempty set, F is a mapping from \(E\times E\) into \(D^{+}\) such that, if \(F_{x,y}\) denotes the value of F at the pair \((x, y)\), the following conditions hold:

  (PM-1) \(F_{x,y}(t)=H(t)\) if and only if \(x=y\);

  (PM-2) \(F_{x,y}(t)=F_{y,x}(t)\) for all \(x,y \in E\) and \(t \in (-\infty,+\infty)\);

  (PM-3) \(F_{x,z}(t)=1\) and \(F_{z,y}(s)=1\) imply \(F_{x,y}(t+s)=1\) for all \(x,y,z \in E\) and all \(t,s \in(-\infty,+\infty)\).

Definition 1.6

A Menger probabilistic metric space (abbreviated, Menger PM space) is a triple \((E, F, \triangle )\), where E is a nonempty set, △ is a continuous t-norm and F is a mapping from \(E\times E\) into \(D^{+}\) such that, if \(F_{x,y}\) denotes the value of F at the pair \((x, y)\), the following conditions hold:

  (MPM-1) \(F_{x,y}(t)=H(t)\) if and only if \(x=y\);

  (MPM-2) \(F_{x,y}(t)=F_{y,x}(t)\) for all \(x,y \in E\) and \(t \in(-\infty,+\infty)\);

  (MPM-3) \(F_{x,y}(t+s)\geq\triangle(F_{x,z}(t),F_{z,y}(s))\) for all \(x,y,z \in E\) and \(t>0\), \(s>0\).

In 2014, Su and Zhang [18] gave a new definition of probabilistic metric space, the so-called S-probabilistic metric space. This definition reflects more of the probabilistic meaning and probabilistic background, and in it the triangle inequality is replaced by a new form.

Definition 1.7

([18])

An S-probabilistic metric space is a pair \((E, F)\), where E is a nonempty set, F is a mapping from \(E\times E\) into \(D^{+}\) such that, if \(F_{x,y}\) denotes the value of F at the pair \((x, y)\), the following conditions hold:

  (SPM-1) \(F_{x,y}(t)=H(t)\) if and only if \(x=y\);

  (SPM-2) \(F_{x,y}(t)=F_{y,x}(t)\) for all \(x,y \in E\) and \(t \in (-\infty,+\infty)\);

  (SPM-3) \(F_{x,y}(t)\geq F_{x,z}(t)\ast F_{z,y}(t)\), \(\forall x,y,z \in E\), where \(F_{x,z}(t)\ast F_{z,y}(t)\) is the convolution between \(F_{x,z}(t)\) and \(F_{z,y}(t)\) defined by

    $$F_{x,z}(t)\ast F_{z,y}(t)=\int_{0}^{+\infty}F_{x,z}(t-u) \, dF_{z,y}(u). $$
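
A minimal numerical sketch of this convolution (our illustration; none of the names below come from the paper): for discrete distance distributions, \(F_{x,z}\ast F_{z,y}\) is the distribution function of the sum of two independent random distances, so it can be evaluated by summing \(F_{x,z}(t-u_{j})p_{j}\) over the atoms \((u_{j},p_{j})\) of \(F_{z,y}\).

```python
# Illustrative sketch (not from the paper): Lebesgue-Stieltjes convolution of two
# discrete distance distribution functions, each given by its atoms (value, prob).

def cdf(atoms, t):
    """Left-continuous distribution function F(t) = P(D < t) of a discrete law."""
    return sum(p for v, p in atoms if v < t)

def convolve(atoms_xz, atoms_zy, t):
    """(F_{x,z} * F_{z,y})(t) = integral of F_{x,z}(t - u) dF_{z,y}(u)."""
    return sum(cdf(atoms_xz, t - u) * p for u, p in atoms_zy)

# Toy data: the distance D_{x,z} takes values 1 or 2, D_{z,y} takes values 1 or 3.
F_xz = [(1.0, 0.5), (2.0, 0.5)]
F_zy = [(1.0, 0.7), (3.0, 0.3)]

for t in (1.5, 2.5, 3.5, 5.5):
    print(t, convolve(F_xz, F_zy, t))   # cdf of the sum of the two random distances
```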

Example

([18])

Let X be a nonempty set, let S be a measurable space consisting of some metrics on X, let \((\Omega, P)\) be a complete probability measure space and let \(f : \Omega\rightarrow S\) be a measurable mapping. Then f may be regarded as a random metric on X, and \((X,S)\) is a random metric space. The following expressions for the distribution functions \(F_{x,y}(t)\), \(F_{x,z}(t)\) and \(F_{z,y}(t)\) are natural:

$$F_{x,y}(t)=P\bigl\{ f^{-1}\bigl\{ d \in S; d(x,y)< t\bigr\} \bigr\} $$

and

$$F_{x,z}(t)=P\bigl\{ f^{-1}\bigl\{ d \in S; d(x,z)< t\bigr\} \bigr\} , $$

and

$$F_{z,y}(t)=P\bigl\{ f^{-1}\bigl\{ d \in S; d(z,y)< t\bigr\} \bigr\} $$

for all \(x,y,z \in X\). Since

$$P\bigl\{ f^{-1}\bigl\{ d \in S; d(x,y)< t\bigr\} \bigr\} \geq P\bigl\{ f^{-1}\bigl\{ d \in S; d(x,z)+d(z,y)< t\bigr\} \bigr\} $$

and, by probability theory,

$$P\bigl\{ f^{-1}\bigl\{ d \in S; d(x,z)+d(z,y)< t\bigr\} \bigr\} =F_{x,z}(t)\ast F_{z,y}(t). $$

it follows that

$$F_{x,y}(t)\geq F_{x,z}(t)\ast F_{z,y}(t) ,\quad \forall x,y,z \in X. $$

In addition, the conditions (SPM-1) and (SPM-2) are obvious.

In this paper, the term probabilistic metric space covers both Menger probabilistic metric spaces and S-probabilistic metric spaces.

On the other hand, several problems can be formulated as equations of the form \(Tx = x\), where T is a given self-mapping defined on a subset of a metric space, a normed linear space, a topological vector space or some other suitable space. However, if T is a non-self mapping from A to B, then the aforementioned equation does not necessarily admit a solution. In this case, one seeks an approximate solution x in A for which the error \(d(x, Tx)\) is minimal, where d is the distance function. In view of the fact that \(d(x, Tx)\) is at least \(d(A,B)\), a best proximity point theorem guarantees the global minimization of \(d(x, Tx)\) by requiring that an approximate solution x satisfy the condition \(d(x, Tx) = d(A,B)\). Such optimal approximate solutions are called best proximity points of the mapping T. Interestingly, best proximity point theorems also serve as a natural generalization of fixed point theorems, for a best proximity point becomes a fixed point if the mapping under consideration is a self-mapping. Research on best proximity points is an important topic in nonlinear functional analysis and its applications (see [19–31]).

Let A, B be two nonempty subsets of a complete metric space and consider a mapping \(T:A\rightarrow B\). The best proximity point problem is whether we can find an element \(x_{0}\in A\) such that \(d(x_{0},Tx_{0})=\min\{d(x,Tx): x\in A\}\). Since \(d(x,Tx)\geq d(A,B)\) for any \(x\in A\), in fact, the optimal solution to this problem is the one for which the value \(d(A,B)\) is attained.

Let A, B be two nonempty subsets of a metric space \((X,d)\). We denote by \(A_{0}\) and \(B_{0}\) the following sets:

$$\begin{aligned}& A_{0}=\bigl\{ x\in A: d(x,y)=d(A,B) \mbox{ for some } y\in B \bigr\} , \\& B_{0}=\bigl\{ y\in B: d(x,y)=d(A,B) \mbox{ for some } x\in A \bigr\} , \end{aligned}$$

where \(d(A,B)=\inf\{d(x,y): x\in A \mbox{ and } y\in B \}\).

It is interesting to notice that \(A_{0}\) and \(B_{0}\) are contained in the boundaries of A and B, respectively, provided A and B are closed subsets of a normed linear space such that \(d(A, B)>0\) [19].
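
For finite subsets these sets are easy to compute directly; the following small sketch (ours, purely illustrative) does so for the Euclidean metric in the plane.

```python
# Sketch (illustrative, not from the paper): computing d(A,B), A_0 and B_0
# for finite subsets of the plane with the Euclidean metric.
from math import hypot

def d(p, q):
    return hypot(p[0] - q[0], p[1] - q[1])

A = [(0.0, 0.0), (0.0, 1.0)]
B = [(2.0, 0.0), (3.0, 3.0)]

dAB = min(d(a, b) for a in A for b in B)               # d(A,B) = 2.0 (exact here)
A0 = [a for a in A if any(d(a, b) == dAB for b in B)]  # [(0.0, 0.0)]
B0 = [b for b in B if any(d(a, b) == dAB for a in A)]  # [(2.0, 0.0)]
print(dAB, A0, B0)
```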

In order to study the best proximity point problems, we need the following notions.

Definition 1.8

([30])

Let \((A,B)\) be a pair of nonempty subsets of a metric space \((X,d)\) with \(A_{0}\neq \emptyset\). Then the pair \((A,B)\) is said to have the P-property if and only if, for any \(x_{1},x_{2}\in A_{0}\) and \(y_{1},y_{2}\in B_{0}\),

$$\left \{\begin{array}{l} d(x_{1},y_{1})=d(A,B), \\ d(x_{2},y_{2})=d(A,B) \end{array} \right .\quad \Rightarrow \quad d(x_{1},x_{2})=d(y_{1},y_{2}). $$

In [31], the authors prove that any pair \((A,B)\) of nonempty closed convex subsets of a real Hilbert space H satisfies the P-property.

In [25, 26], the P-property was weakened to the so-called weak P-property, and an example that satisfies the weak P-property but not the P-property can be found there.

Definition 1.9

([25, 26])

Let \((A,B)\) be a pair of nonempty subsets of a metric space \((X,d)\) with \(A_{0}\neq \emptyset\). Then the pair \((A,B)\) is said to have the weak P-property if and only if, for any \(x_{1},x_{2}\in A_{0}\) and \(y_{1},y_{2}\in B_{0}\),

$$\left \{\begin{array}{l} d(x_{1},y_{1})=d(A,B), \\ d(x_{2},y_{2})=d(A,B) \end{array} \right .\quad \Rightarrow \quad d(x_{1},x_{2}) \leq d(y_{1},y_{2}). $$

Recently, many best proximity point problems with applications have been discussed and some best proximity point theorems have been proved. For more details, we refer the reader to [27].

In 2014, Su and Zhang [18] established some definitions and basic concepts of best proximity points in the framework of probabilistic metric spaces.

Definition 1.10

([18])

Let \((E,F)\) be a probabilistic metric space, \(A,B\subset E\) be two nonempty sets. Let

$$F_{A,B}(t)=\sup_{x\in A,y\in B}F_{x,y}(t), \quad \forall t \in (-\infty, +\infty), $$

which is said to be the probabilistic distance of A, B.

Example

([18])

Let X be a nonempty set and \(d_{1}\), \(d_{2}\) be two metrics defined on X with the probabilities \(p_{1}=0.5\), \(p_{2}=0.5\), respectively. Assume that

$$d_{1}(x,y)\leq d_{2}(x,y), \quad \forall x, y \in X. $$

For any \(x,y \in X\), the distance \(d(x,y)\) is a discrete random variable (see Table 1) with the distribution function

$$F_{x,y}(t)= \begin{cases} 0, & t\leq d_{1}(x,y), \\ 0.5, & d_{1}(x,y)< t\leq d_{2}(x,y), \\ 1, & d_{2}(x,y)< t. \end{cases} $$

Let A, B be two nonempty subsets of X. Then \(d(A,B)\) is also a discrete random variable (see Table 2) with the distribution function

$$F_{A,B}(t)= \begin{cases} 0, & t\leq d_{1}(A,B), \\ 0.5, & d_{1}(A,B)< t\leq d_{2}(A,B), \\ 1, & d_{2}(A,B)< t, \end{cases} $$

where

$$d_{i}(A,B)=\inf_{x\in A, y \in B}d_{i}(x,y), \quad i=1,2. $$

It is easy to see that

$$F_{A,B}(t)=\sup_{x\in A,y\in B}F_{x,y}(t), \quad \forall t \in (-\infty, +\infty). $$
Table 1 The random variable \(\pmb{d(x,y)}\)
Table 2 The random variable \(\pmb{d(A,B)}\)
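
The construction above can also be checked numerically. The following sketch is our illustration (with arbitrarily chosen points, and the Chebyshev and taxicab metrics playing the roles of \(d_{1}\leq d_{2}\)); it builds \(F_{x,y}\) and verifies that \(\sup_{x\in A,y\in B}F_{x,y}(t)\) agrees with the displayed step function for \(F_{A,B}\).

```python
# Illustrative sketch (not from the paper): the two-metric example with p1 = p2 = 0.5.
def d1(x, y):                        # Chebyshev metric
    return max(abs(x[0] - y[0]), abs(x[1] - y[1]))

def d2(x, y):                        # taxicab metric; always d1 <= d2
    return abs(x[0] - y[0]) + abs(x[1] - y[1])

def F(x, y, t):
    """Left-continuous distribution function F_{x,y}(t) = P(d(x,y) < t)."""
    return 0.5 * (d1(x, y) < t) + 0.5 * (d2(x, y) < t)

A = [(0.0, 0.0)]
B = [(1.0, 1.0), (0.0, 3.0)]
d1AB = min(d1(a, b) for a in A for b in B)    # = 1
d2AB = min(d2(a, b) for a in A for b in B)    # = 2

for t in (0.5, 1.5, 2.5):
    sup_F = max(F(a, b, t) for a in A for b in B)
    step = 0.5 * (d1AB < t) + 0.5 * (d2AB < t)
    print(t, sup_F, step)                     # the two values coincide
```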

Definition 1.11

([18])

Let \((E,F)\) be a probabilistic metric space, \(A,B\subset E\) be two nonempty subsets and \(T:A\rightarrow B\) be a mapping. We say that \(x^{*}\in A\) is the best proximity point of the mapping T if the following equality holds:

$$F_{x^{*},Tx^{*}}(t)= F_{A,B}(t), \quad \forall t \in (-\infty, + \infty). $$

Example

([18])

Let X be a nonempty set and \(d_{1}\), \(d_{2}\) be two metrics defined on X with the probabilities \(p_{1}=0.5\), \(p_{2}=0.5\), respectively. Let A, B be two nonempty sets of X and \(T:A\rightarrow B\) be a mapping. Assume

$$d_{1}(x,y)\leq d_{2}(x,y), \quad \forall x, y \in X. $$

If there exists a point \(x^{*} \in A\) such that

$$\begin{aligned}& d_{1}\bigl(x^{*},Tx^{*}\bigr)=d_{1}(A,B), \\& d_{2}\bigl(x^{*},Tx^{*}\bigr)=d_{2}(A,B), \end{aligned}$$

then \(d(x^{*},Tx^{*})\) is a discrete random variable (see Table 3) with the distribution function

$$F_{x^{*},Tx^{*}}(t)= \begin{cases} 0, & t\leq d_{1}(x^{*},Tx^{*}), \\ 0.5, & d_{1}(x^{*},Tx^{*})< t\leq d_{2}(x^{*},Tx^{*}), \\ 1, & d_{2}(x^{*},Tx^{*})< t. \end{cases} $$

It is obvious that \(F_{x^{*},Tx^{*}}(t)=F_{A,B}(t)\).

Table 3 The random variable \(\pmb{d(x^{*},Tx^{*})}\)

It is clear that the notion of fixed point coincides with the notion of best proximity point when the underlying mapping is a self-mapping. Let \((E, F)\) be a probabilistic metric space. Suppose that \(A\subset E\) and \(B\subset E\) are nonempty subsets. We define the following sets:

$$\begin{aligned}& A_{0}=\bigl\{ x\in A: F_{x,y}(t)=F_{A,B}(t) \mbox{ for some } y\in B\bigr\} , \\& B_{0}=\bigl\{ y\in B: F_{x,y}(t)=F_{A,B}(t) \mbox{ for some } x\in A\bigr\} . \end{aligned}$$

Definition 1.12

([18])

Let \((A,B)\) be a pair of nonempty subsets of a probabilistic metric space \((E, F)\) with \(A_{0}\neq\emptyset\). Then the pair \((A,B)\) is said to have the P-property if and only if for any \(x_{1}, x_{2} \in A\) and \(y_{1}, y_{2} \in B\),

$$F_{x_{1}, y_{1}}(t) = F_{A,B}(t),\qquad F_{x_{2}, y_{2}}(t) = F_{A,B}(t) \quad \Rightarrow\quad F_{x_{1}, x_{2}}(t) = F_{y_{1}, y_{2}}(t). $$

Definition 1.13

([18])

Let \((A,B)\) be a pair of nonempty subsets of a probabilistic metric space \((E, F)\) with \(A_{0}\neq\emptyset\). Then the pair \((A,B)\) is said to have the weak P-property if and only if for any \(x_{1}, x_{2} \in A\) and \(y_{1}, y_{2} \in B\),

$$F_{x_{1}, y_{1}}(t) = F_{A,B}(t), \qquad F_{x_{2}, y_{2}}(t) = F_{A,B}(t)\quad \Rightarrow\quad F_{x_{1}, x_{2}}(t) \geq F_{y_{1}, y_{2}}(t). $$

Definition 1.14

([3])

Let \((E,F)\) be a probabilistic metric space.

  (1) A sequence \(\{x_{n}\}\) in E is said to converge to \(x\in E\) if for any given \(\varepsilon > 0\) and \(\lambda>0\) there exists a positive integer \(N = N(\varepsilon, \lambda)\) such that \(F_{x_{n},x}(\varepsilon)>1-\lambda\) whenever \(n>N\).

  (2) A sequence \(\{x_{n}\}\) in E is called a Cauchy sequence if for any \(\varepsilon>0\) and \(\lambda>0\) there exists a positive integer \(N = N(\varepsilon,\lambda)\) such that \(F_{x_{n},x_{m}}(\varepsilon)>1-\lambda\) whenever \(n,m >N\).

  (3) \((E,F)\) is said to be complete if each Cauchy sequence in E converges to some point in E.

We denote by \(x_{n}\rightarrow x\) that \(\{x_{n}\}\) converges to x. It is easy to see that \(x_{n}\rightarrow x\) if and only if \(F_{x_{n},x}(t)\rightarrow H(t)\) for any given \(t \in (-\infty,+\infty)\) as \(n\rightarrow\infty\).

The purpose of this paper is to introduce some basic definitions concerning fixed points and best proximity points in two classes of probabilistic metric spaces and to prove a contraction mapping principle together with relevant best proximity point theorems. The first class is that of the so-called S-probabilistic metric spaces, in which we prove a generalized contraction mapping principle and generalized best proximity point theorems; these results improve and extend the recent results of Su and Zhang [18]. The second class is that of Menger probabilistic metric spaces, in which we prove a contraction mapping principle and relevant best proximity point theorems; these results also improve and extend the results of many authors. Some new methods are used to obtain these results, and some error estimate inequalities are established.

2 Contraction mapping principle in S-probabilistic metric spaces

Let \((E, F)\) be an S-probabilistic metric space. For any \(x,y \in E\), we define

$$d_{F}(x,y)=\int_{0}^{+\infty}t\, dF_{x,y}(t). $$

Since t is a continuous function and \(F_{x,y}\) is of bounded variation, the above integral is well defined; in fact, it is just the mathematical expectation of the random distance with distribution function \(F_{x,y}(t)\). Throughout this paper we assume that

$$d_{F}(x,y)=\int_{0}^{+\infty}t\, dF_{x,y}(t)< +\infty, \quad \forall x,y \in E $$

for all probabilistic metric spaces \((E, F)\) presented in this paper.
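
For instance, in the two-metric example of Section 1 (with \(p_{1}=p_{2}=0.5\)), the distribution function \(F_{x,y}\) has jumps of size 0.5 at \(d_{1}(x,y)\) and at \(d_{2}(x,y)\), so

$$d_{F}(x,y)=\int_{0}^{+\infty}t\, dF_{x,y}(t)=0.5\, d_{1}(x,y)+0.5\, d_{2}(x,y). $$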

Theorem 2.1

Let \((E, F)\) be an S-probabilistic metric space. For any \(x,y \in E\), we define

$$d_{F}(x,y)=\int_{0}^{+\infty}t\, dF_{x,y}(t). $$

Then \(d_{F}(x,y)\) is a metric on E.

Proof

Since \(F_{x,y}(t)=H(t)\) (\(\forall t\in R\)) if and only if \(x=y\), and

$$\int_{0}^{+\infty}t\, dH(t)=0, $$

we know that the condition \(d_{F}(x,y)=0 \Leftrightarrow x=y\) holds. The condition \(d_{F}(x,y)=d_{F}(y,x)\), for all \(x,y \in E\), is obvious. Next we prove the triangle inequality. For any \(x,y,z \in E\), from (SPM-3) we have

$$F_{x,y}(t) \geq\int^{+\infty}_{0}F_{x,z}(t-u)\, dF_{ z,y}(u)=F_{ x,z}(t) \ast F_{z,y}(t). $$

Since a pointwise larger distribution function of a nonnegative random variable corresponds to a smaller expectation, and the expectation of the convolution \(F_{x,z}\ast F_{z,y}\) is the sum of the expectations of \(F_{x,z}\) and \(F_{z,y}\), we obtain

$$\int^{+\infty}_{0}t\, dF_{x,y}(t)\leq\int ^{+\infty}_{0}t\, dF_{ x,z}(t)+\int ^{+\infty}_{0}t\, dF_{z,y}(t), $$

which implies that

$$d_{F}(x,y) \leq d_{F}(x,z)+d_{F}(z,y). $$

This completes the proof. □

Now we prove the following generalized contraction mapping principle in the S-probabilistic metric spaces which is a generalized form of the result in [18].

Theorem 2.2

Let \((E, F)\) be a complete S-probabilistic metric space. Let \(T: E \rightarrow E\) be a mapping satisfying the following condition:

$$ F_{Tx,Ty}\bigl(\psi(t)\bigr)\geq F_{x,y}\bigl(\phi(t)\bigr), \quad \forall x, y \in E, \forall t \in R=(-\infty,+\infty), $$
(2.1)

where \(\psi(t)\), \(\phi(t)\) are two functions which satisfy

  (1) \(\psi(t)\), \(\phi(t)\) are strictly monotone increasing and continuous;

  (2) \(\psi(t) < \phi(t)\) for all \(t>0\);

  (3) \(\psi(0) = \phi(0)\).

Then T has a unique fixed point \(x^{*} \in E\) and for any given \(x_{0} \in E\) the iterative sequence \(x_{n+1}=Tx_{n}\) converges to \(x^{*}\).

Proof

By using probabilistic theory we know that for all \(x,y \in E\),

$$\begin{aligned} \begin{aligned} &\psi^{-1}\biggl(\int_{0}^{+\infty}t\, dF_{Tx,Ty}(t)\biggr)=\int_{0}^{+\infty} \psi ^{-1}(t)\, dF_{Tx,Ty}(t)=\int_{0}^{+\infty}t \, dF_{Tx,Ty}\bigl(\psi(t)\bigr), \\ &\phi^{-1}\biggl(\int_{0}^{+\infty}t\, dF_{x,y}(t)\biggr)=\int_{0}^{+\infty} \phi ^{-1}(t)\, dF_{x,y}(t)=\int_{0}^{+\infty}t \, dF_{x,y}\bigl(\phi(t)\bigr), \end{aligned} \end{aligned}$$

which together with (2.1) implies that

$$ \psi^{-1}\bigl(d_{F}(Tx,Ty)\bigr)\leq\phi^{-1} \bigl(d_{F}(x,y)\bigr) $$
(2.2)

for all \(x,y \in E\).

For any given \(x_{0} \in E\), define an iterative sequence as follows:

$$x_{1}= Tx_{0}, \qquad x_{2}= Tx_{1}, \qquad \ldots ,\qquad x_{n+1}= Tx_{n},\qquad \ldots. $$

Then, for each integer \(n\geq1\), from (2.2) we get

$$ \psi^{-1}\bigl(d_{F}(x_{n+1},x_{n}) \bigr)=\psi^{-1}\bigl(d_{F}(Tx_{n},Tx_{n-1}) \bigr)\leq\phi ^{-1}\bigl(d_{F}(x_{n},x_{n-1}) \bigr). $$
(2.3)

Using condition (2) we have

$$d_{F}(x_{n+1},x_{n})\leq d_{F}(x_{n},x_{n-1}) $$

for all \(n\geq1\). Hence the sequence \(\{d_{F}(x_{n+1},x_{n})\}\) is decreasing, and consequently there exists \(r\geq0\) such that

$$d_{F}(x_{n+1},x_{n})\rightarrow r $$

as \(n\rightarrow\infty\). By using conditions (2) and (3) we know \(r=0\).

In what follows, we show that \(\{x_{n}\} \) is a Cauchy sequence in the metric space \((E, d_{F})\). Suppose that \(\{x_{n}\}\) is not a Cauchy sequence. Then there exists \(\varepsilon>0\) for which we can find subsequences \(\{x_{n_{k}}\}\), \(\{ x_{m_{k}}\}\) with \(n_{k}> m_{k}>k\) such that

$$ d_{F}(x_{n_{k}},x_{m_{k}})\geq\varepsilon $$
(2.4)

for all \(k\geq1\). Further, corresponding to \(m_{k}\) we can choose \(n_{k}\) in such a way that it is the smallest integer with \(n_{k}>m_{k}\) satisfying (2.4). Then

$$ d_{F}(x_{n_{k}-1},x_{m_{k}})< \varepsilon. $$
(2.5)

From (2.4) and (2.5), we have

$$\varepsilon\leq d_{F}(x_{n_{k}},x_{m_{k}}) \leq d_{F}(x_{n_{k}},x_{n_{k}-1})+d_{F}(x_{n_{k}-1},x_{m_{k}}) < d_{F}(x_{n_{k}},x_{n_{k}-1})+\varepsilon. $$

Letting \(k\rightarrow\infty\), we get

$$ \lim_{k\rightarrow\infty}d_{F}(x_{n_{k}},x_{m_{k}})= \varepsilon. $$
(2.6)

By using the triangular inequality we have

$$\begin{aligned}& d_{F}(x_{n_{k}},x_{m_{k}})\leq d_{F}(x_{n_{k}},x_{n_{k}-1})+d_{F}(x_{n_{k}-1},x_{m_{k}-1})+d_{F}(x_{m_{k}-1},x_{m_{k}}), \\& d_{F}(x_{n_{k}-1},x_{m_{k}-1})\leq d_{F}(x_{n_{k}-1},x_{n_{k}})+d_{F}(x_{n_{k}},x_{m_{k}})+d_{F}(x_{m_{k}},x_{m_{k}-1}). \end{aligned}$$

Letting \(k\rightarrow\infty\) in the above two inequalities and applying (2.6), we have

$$\lim_{k\rightarrow\infty}d_{F}(x_{n_{k}-1},x_{m_{k}-1})= \varepsilon. $$

Since

$$\psi\bigl(d_{F}(x_{n_{k}},x_{m_{k}})\bigr)\leq\phi \bigl(d_{F}(x_{n_{k}-1},x_{m_{k}-1})\bigr), $$

by using condition (2) we know that \(\varepsilon=0\), which is a contradiction. This shows that \(\{x_{n}\}\) is a Cauchy sequence in the metric space \((E,d_{F})\).

We prove that the sequence \(\{x_{n}\}\) is also a Cauchy sequence in the S-probabilistic metric space \((E,F)\), that is, we need to prove

$$ \lim_{n\rightarrow\infty}F_{x_{n},x_{n+m}}(t)=H(t). $$
(2.7)

If not, there exist numbers \(t_{0}>0\), \(0<\lambda_{0}<1\) and subsequences \(\{n_{k}\}\), \(\{m_{k}\}\) of \(\{n\}\) such that \(F_{x_{n_{k}},x_{n_{k}+m_{k}}}(t_{0})\leq\lambda_{0}\) for all \(k\geq1\). In this case, we have

$$\begin{aligned} d_{F}(x_{n_{k}}, x_{n_{k}+m_{k}}) =&\int_{0}^{+\infty}t \, dF_{x_{n_{k}},x_{n_{k}+m_{k}}}(t) \\ =&\int_{0}^{t_{0}}t\, dF_{x_{n_{k}},x_{n_{k}+m_{k}}}(t) \\ &{}+\int _{t_{0}}^{+\infty }t\, dF_{x_{n_{k}},x_{n_{k}+m_{k}}}(t) \\ \geq&\int_{t_{0}}^{+\infty}t\, dF_{x_{n_{k}},x_{n_{k}+m_{k}}}(t) \\ \geq& t_{0} \bigl(1-F_{x_{n_{k}},x_{n_{k}+m_{k}}}(t_{0})\bigr) \\ \geq& t_{0} (1-\lambda_{0})>0. \end{aligned}$$

This is a contradiction.

From (2.7) we know that \(\{x_{n}\}\) is a Cauchy sequence in the complete S-probabilistic metric space \((E,F)\). Hence there exists a point \(x^{*} \in E\) such that \(\{x_{n}\}\) converges to \(x^{*}\) in the sense that

$$\lim_{n\rightarrow \infty}F_{x_{n},x^{*}}(t)= H(t), \quad \forall t\geq0. $$

Therefore

$$\lim_{n\rightarrow \infty}F_{x_{n},Tx^{*}}\bigl(\psi(t)\bigr)\geq\lim _{n\rightarrow \infty}F_{x_{n-1},x^{*}}\bigl(\phi(t)\bigr)= H(t), \quad \forall t \geq0, $$

which implies that

$$\lim_{n\rightarrow\infty} F_{x_{n},Tx^{*}}(t)=H(t), \quad \forall t\geq0. $$

We claim that \(x^{*}\) is a fixed point of T. In fact, for any \(t>0\), it follows from condition (SPM-3) that

$$\begin{aligned} \begin{aligned} F_{x^{*},Tx^{*}}(t)&\geq \int_{0}^{+\infty}F_{x^{*},x_{n}}(t-u) \, dF_{x_{n},Tx^{*}}(u) \\ &\geq\int_{0}^{\frac{t}{2}}F_{x^{*},x_{n}}(t-u)\, dF_{x_{n},Tx^{*}}(u) \\ &=F_{x^{*},x_{n}}\biggl(\frac{t}{2}\biggr) \biggl(F_{x_{n},Tx^{*}}\biggl( \frac{t}{2}\biggr)-0\biggr)\rightarrow 1 \end{aligned} \end{aligned}$$

as \(n\rightarrow\infty\), which implies \(F_{x^{*},Tx^{*}}(t)=H(t)\) and hence \(x^{*}=Tx^{*}\). Thus \(x^{*}\) is a fixed point of T. If there exists another fixed point \(x^{**}\) of T, we observe that

$$F_{x^{*}, x^{**}}\bigl(\psi(t)\bigr)= F_{Tx^{*}, Tx^{**}}\bigl(\psi(t)\bigr)\geq F_{x^{*}, x^{**}}\bigl(\phi(t)\bigr), $$

which implies \(F_{x^{*}, x^{**}}(t)=H(t)\), \(\forall t\in R\), and hence \(x^{*}=x^{**}\). Then the fixed point of T is unique. Meanwhile, for any given \(x_{0}\), the iterative sequence \(x_{n}=T^{n}x_{0}\) converges to \(x^{*}\). This completes the proof. □
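
In particular, taking \(\psi(t)=kt\) and \(\phi(t)=t\) with a constant \(0<k<1\) (both functions are strictly increasing and continuous, \(\psi(t)<\phi(t)\) for \(t>0\) and \(\psi(0)=\phi(0)=0\)), condition (2.1) becomes the Sehgal-type contraction (1.1), since

$$F_{Tx,Ty}(kt)\geq F_{x,y}(t), \quad \forall t \in R \quad \Longleftrightarrow\quad F_{Tx,Ty}(t)\geq F_{x,y}\biggl(\frac{t}{k}\biggr),\quad \forall t \in R. $$

Thus Theorem 2.2 contains, as a special case, the contraction mapping principle for Sehgal contractions in complete S-probabilistic metric spaces.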

Theorem 2.3

Let \((E, F, \triangle )\) be a complete Menger probabilistic metric space. Assume

$$ \triangle\biggl(F_{x,z}\biggl(\frac{t}{2}\biggr),F_{z,y} \biggl(\frac{t}{2}\biggr)\biggr)\geq \int_{0}^{+\infty}F_{x,z}(t-u) \, dF_{z,y}(u) $$
(2.8)

for all \(x,y,z \in E\), \(t>0\). Let \(T: E \rightarrow E\) be a mapping satisfying the following conditions:

$$ F_{Tx,Ty}\bigl(\psi(t)\bigr)\geq F_{x,y}\bigl(\phi(t)\bigr), \quad \forall x, y \in E, \forall t \in R=(-\infty,+\infty), $$
(2.9)

where \(\psi(t)\), \(\phi(t)\) are two functions which satisfy

  (1) \(\psi(t)\), \(\phi(t)\) are strictly monotone increasing and continuous;

  (2) \(\psi(t) < \phi(t)\) for all \(t>0\);

  (3) \(\psi(0) = \phi(0)\).

Then T has a unique fixed point \(x^{*} \in E\) and for any given \(x_{0} \in E\) the iterative sequence \(x_{n+1}=Tx_{n}\) converges to \(x^{*}\).

Proof

From (MPM-3) and (2.8) we know that \((E,F)\) is an S-probabilistic metric space. This, together with (2.9) and Theorem 2.2, proves the conclusion. □

3 Best proximity point theorems in S-probabilistic metric spaces

We first define the notion of the P-operator \(P: B_{0}\rightarrow A_{0}\), which is very useful for the proof of the theorem. From the definitions of \(A_{0}\) and \(B_{0}\), we know that for any given \(y \in B_{0}\) there exists an element \(x \in A_{0}\) such that \(F_{x,y}(t)=F_{A,B}(t)\). Because \((A,B)\) has the weak P-property, such an x is unique: if \(F_{x_{1},y}(t)=F_{x_{2},y}(t)=F_{A,B}(t)\), then the weak P-property gives \(F_{x_{1},x_{2}}(t)\geq F_{y,y}(t)=H(t)\), and hence \(x_{1}=x_{2}\). We denote by \(x=Py\) the P-operator from \(B_{0}\) into \(A_{0}\).

Theorem 3.1

Let \((E, F)\) be a complete S-probabilistic metric space. Let \((A,B)\) be a pair of nonempty subsets of E such that \(A_{0}\) is nonempty and closed. Suppose that \((A,B)\) satisfies the weak P-property. Let \(T: A \rightarrow B\) be a mapping satisfying the following condition:

$$F_{Tx,Ty}\bigl(\psi(t)\bigr)\geq F_{x,y}\bigl(\phi(t)\bigr), \quad \forall x, y \in A, \forall t \in R=(-\infty,+\infty), $$

where \(\psi(t)\), \(\phi(t)\) are two functions which satisfy

  (1) \(\psi(t)\), \(\phi(t)\) are strictly monotone increasing and continuous;

  (2) \(\psi(t) < \phi(t)\) for all \(t>0\);

  (3) \(\psi(0) = \phi(0)\).

Assume that \(T(A_{0})\subset B_{0}\). Then T has a unique best proximity point \(x^{*} \in A\) and for any given \(x_{0} \in A_{0}\) the iterative sequence \(x_{n+1}=PTx_{n}\) converges to \(x^{*}\).

Proof

Since the pair \((A,B)\) has the weak P-property, we have

$$F_{PTx_{1},PTx_{2}}\bigl(\psi(t)\bigr)\geq F_{Tx_{1},Tx_{2}}\bigl(\psi(t)\bigr) \geq F_{x_{1},x_{2}}\bigl(\phi(t)\bigr),\quad \forall t > 0 $$

for any \(x_{1},x_{2} \in A_{0}\). This shows that \(PT: A_{0} \rightarrow A_{0}\) is a generalized contraction from the complete S-probabilistic metric subspace \(A_{0}\) into itself. Using Theorem 2.2, we know that PT has a unique fixed point \(x^{*}\) and for any given \(x_{0} \in A_{0}\) the iterative sequence \(x_{n+1}=PTx_{n}\) converges to \(x^{*}\). Since \(PTx^{*}=x^{*}\) if and only if \(F_{x^{*},Tx^{*}}(t)=F_{A,B}(t)\), the point \(x^{*}\) is the unique best proximity point of \(T:A\rightarrow B\). This completes the proof. □
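
The following toy computation is entirely our illustration (it is not one of the paper's examples): it runs the iteration \(x_{n+1}=PTx_{n}\) of Theorem 3.1 in the S-probabilistic metric space induced by the Euclidean metric via \(F_{x,y}(t)=H(t-d(x,y))\), where \(F_{x,y}=F_{A,B}\) exactly when \(d(x,y)=d(A,B)\).

```python
# Toy illustration (ours): A is the line {0} x R, B is the line {1} x R, so
# d(A,B) = 1, A_0 = A, B_0 = B, and the pair (A,B) has the (weak) P-property.
from math import hypot

def d(p, q):
    return hypot(p[0] - q[0], p[1] - q[1])

def T(p):                  # non-self mapping T: A -> B; contractive with k = 1/2
    return (1.0, p[1] / 2.0)

def P(q):                  # P-operator: the unique point of A_0 at distance d(A,B) from q
    return (0.0, q[1])

x = (0.0, 8.0)             # any starting point x_0 in A_0
for _ in range(30):
    x = P(T(x))            # the iteration x_{n+1} = P T x_n
print(x, d(x, T(x)))       # converges to the best proximity point (0, 0); d(x, Tx) -> 1
```

Here PT halves the second coordinate at each step, so the iterates converge geometrically to the best proximity point \((0,0)\).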

Theorem 3.2

Let \((E, F, \triangle)\) be a complete Menger probabilistic metric space. Assume that

$$ \triangle\biggl(F_{x,z}\biggl(\frac{t}{2}\biggr),F_{z,y} \biggl(\frac{t}{2}\biggr)\biggr)\geq \int_{0}^{+\infty}F_{x,z}(t-u) \, dF_{z,y}(u) $$
(3.1)

for all \(x,y,z \in E\), \(t>0\). Let \((A,B)\) be a pair of nonempty subsets of E such that \(A_{0}\) is nonempty and closed. Suppose that \((A,B)\) satisfies the weak P-property. Let \(T: A \rightarrow B\) be a mapping satisfying the following condition:

$$F_{Tx,Ty}\bigl(\psi(t)\bigr)\geq F_{x,y}\bigl(\phi(t)\bigr), \quad \forall x, y \in A, \forall t \in R=(-\infty,+\infty), $$

where \(\psi(t)\), \(\phi(t)\) are two functions which satisfy

  (1) \(\psi(t)\), \(\phi(t)\) are strictly monotone increasing and continuous;

  (2) \(\psi(t) < \phi(t)\) for all \(t>0\);

  (3) \(\psi(0) = \phi(0)\).

Assume that \(T(A_{0})\subset B_{0}\). Then T has a unique best proximity point \(x^{*} \in A\) and for any given \(x_{0} \in A_{0}\) the iterative sequence \(x_{n+1}=PTx_{n}\) converges to \(x^{*}\).

Proof

From (MPM-3) and (3.1) we know that \((E,F)\) is an S-probabilistic metric space. By using Theorem 3.1, the conclusion is proved. □

4 Contraction mapping principle in Menger probabilistic metric spaces

Let \((E, F, \triangle )\) be a Menger probabilistic metric space. For any \(x,y \in E\), we define

$$d_{F}(x,y)=\int_{0}^{+\infty}t\, dF_{x,y}(t). $$

Since t is a continuous function and \(F_{x,y}\) is of bounded variation, the above integral is well defined; in fact, it is just the mathematical expectation of the random distance with distribution function \(F_{x,y}(t)\). Throughout this paper we assume that

$$d_{F}(x,y)=\int_{0}^{+\infty}t\, dF_{x,y}(t)< +\infty,\quad \forall x,y \in E $$

for all Menger probabilistic metric spaces \((E, F, \triangle )\).

In 1993, Czerwik [32] presented a notable generalization of the classical Banach fixed point theorem in the so-called b-metric spaces.

Definition 4.1

Let E be a nonempty set and \(s>1\) be a given real number. A function \(d : E \times E \rightarrow R^{+}\) is called a b-metric provided that, for all \(x, y, z \in E\),

  (BM-1) \(d(x, y) = 0\) if and only if \(x=y\);

  (BM-2) \(d(x, y) = d(y, x)\);

  (BM-3) \(d(x, y) \leq s(d(x, z) + d(z, y))\).

\((E,d)\) is called a b-metric space with coefficient s.
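
A standard example (not from this paper): \(d(x,y)=|x-y|^{2}\) on the real line is a b-metric with \(s=2\) which is not a metric, since

$$|x-y|^{2}\leq\bigl(|x-z|+|z-y|\bigr)^{2}\leq2|x-z|^{2}+2|z-y|^{2}, $$

while, for example, \(d(0,2)=4>d(0,1)+d(1,2)=2\).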

The notions of convergence, Cauchy sequence and completeness in b-metric spaces are defined in the same way as in metric spaces. Now we are in a position to present the main results of this section.

Theorem 4.2

Let \((E, F, \triangle _{1})\) be a Menger probabilistic metric space, where \(\triangle_{1}(a,b)=\max \{a+b-1,0\}\). For any \(x,y \in E\), define

$$d_{F}(x,y)=\int_{0}^{+\infty}t\, dF_{x,y}(t). $$

Then \(d_{F}(x,y)\) is a b-metric with \(s=2\) on E.

Proof

Since \(F_{x,y}(t)=H(t)\) (\(\forall t\in R\)) if and only if \(x=y\), and

$$\int_{0}^{+\infty}t\, dH(t)=0, $$

we know that condition (BM-1) holds; condition (BM-2) is obvious. Next we prove condition (BM-3). For any \(x,y,z \in E\), from (MPM-3)

$$F_{x,y}(t)\geq \triangle_{1}\biggl( F_{x,z}\biggl( \frac{t}{2}\biggr), F_{z,y}\biggl(\frac{t}{2}\biggr)\biggr), \quad \forall t \in R=(-\infty,+\infty), $$

by using the property of Lebesgue-Stieltjes integral we have

$$\begin{aligned} d_{F}(x,y) =&\int_{0}^{+\infty}t\, dF_{x,y}(t)\leq \int_{0}^{+\infty} t\, d \triangle_{1}\biggl( F_{x,z}\biggl(\frac{t}{2}\biggr), F_{z,y}\biggl(\frac{t}{2}\biggr)\biggr) \\ =& \int_{0}^{+\infty} t\, d \max\biggl( F_{x,z}\biggl(\frac{t}{2}\biggr)+ F_{z,y}\biggl( \frac{t}{2}\biggr)-1,0\biggr) \\ =& \int_{0}^{+\infty} t\, d \biggl( F_{x,z} \biggl(\frac{t}{2}\biggr)+ F_{z,y}\biggl(\frac{t}{2}\biggr) \biggr) \\ =& \int_{0}^{+\infty} t\, d F_{x,z}\biggl( \frac{t}{2}\biggr)+\int_{0}^{+\infty} t\, d F_{z,y}\biggl(\frac{t}{2}\biggr) \\ =& 2\int_{0}^{+\infty} \frac{t}{2}\, d F_{x,z}\biggl(\frac{t}{2}\biggr)+2\int_{0}^{+\infty} \frac{t}{2}\, d F_{z,y}\biggl(\frac{t}{2}\biggr) \\ =& 2\int_{0}^{+\infty} u\, d F_{x,z}(u)+2\int _{0}^{+\infty} u\, d F_{z,y}(u) \\ =&2d_{F}(x,z)+2d_{F}(z,y). \end{aligned}$$

This completes the proof. □

Theorem 4.3

Let \((E, F, \triangle _{1})\) be a complete Menger probabilistic metric space, where \(\triangle_{1}(a,b)=\max\{a+b-1,0\}\). Let \(T: E \rightarrow E\) be a mapping satisfying the following condition:

$$ F_{Tx,Ty}(t)\geq F_{x,y}\biggl(\frac{t}{h}\biggr), \quad \forall x, y \in E, \forall t \in R=(-\infty,+\infty), $$
(4.1)

where \(0< h<1\) is a constant. Then T has a unique fixed point \(x^{*} \in E\) and for any given \(x_{0} \in E\) the iterative sequence \(x_{n+1}=Tx_{n}\) converges to \(x^{*}\). Further, the error estimate inequality

$$\int_{0}^{+\infty}t\, d F_{T^{n}x_{0},x^{*}}(t) \leq \frac{2h^{L[\frac{n}{L}]}}{1-2h^{L}} \max_{0\leq i < L} \int_{0}^{+\infty}t \, d F_{T^{i}x_{0},T^{i+1}x_{0}}(t) $$

holds for some positive integer L provided \(h^{L}<\frac{1}{2}\).

Proof

For any \(x,y\in E\), from (4.1), by using the property of Lebesgue-Stieltjes integral we have

$$\begin{aligned} d_{F}(Tx,Ty) =&\int_{0}^{+\infty}t\, dF_{Tx,Ty}(t) \\ \leq&\int_{0}^{+\infty}t\, dF_{x,y}\biggl( \frac{t}{h}\biggr)=h \int_{0}^{+\infty } \frac{t}{h}\, dF_{x,y}\biggl(\frac{t}{h}\biggr) \\ =&h \int_{0}^{+\infty}u\, dF_{x,y}(u)=hd_{F}(x,y). \end{aligned}$$

Further, for any positive integer l, we have

$$d_{F}\bigl(T^{l}x,T^{l}y\bigr)\leq hd_{F}\bigl(T^{l-1}x,T^{l-1}y\bigr) \leq hd_{F}\bigl(T^{l-2}x,T^{l-2}y\bigr)\leq\cdots\leq h^{l}d_{F}(x,y). $$

Choose a sufficiently large integer L such that \(h^{L}<\frac{1}{2}\). Then

$$d_{F}\bigl(T^{L}x,T^{L}y\bigr)\leq h^{L}d_{F}(x,y)= g d_{F}(x,y),\quad \forall x,y \in E, $$

where \(g=h^{L}\) and \(0< g<\frac{1}{2}\). For any given \(x_{0} \in E\), define \(x_{n+1}=T^{L}x_{n}\) for all \(n=0,1,2, \ldots \) . Observe that

$$\begin{aligned} d_{F}(x_{n},x_{n+m}) \leq&2d_{F}(x_{n}, x_{n+1})+ 2 d_{F}(x_{n+1}, x_{n+m}) \\ \leq&2d_{F}(x_{n}, x_{n+1})+ 4 d_{F}(x_{n+1}, x_{n+2}) \\ &{} +4d_{F}(x_{n+2}, x_{n+m}) \\ \leq& \bigl(2g^{n}+2^{2}g^{n+1}+2^{3}g^{n+2}+ \cdots+2^{m}g^{n+m-1}\bigr)d_{F}(x_{0},x_{1}). \end{aligned}$$
(4.2)

Since \(0< g<\frac{1}{2}\), we have

$$\bigl(2g^{n}+2^{2}g^{n+1}+2^{3}g^{n+2}+ \cdots +2^{m}g^{n+m-1}\bigr)d_{F}(x_{0},x_{1}) \rightarrow 0 $$

as \(n\rightarrow\infty\). Hence

$$\int_{0}^{+\infty}t\, dF_{x_{n},x_{n+m}}(t)=d_{F}(x_{n},x_{n+m}) \rightarrow0 $$

as \(n\rightarrow\infty\). We claim that

$$ \lim_{n\rightarrow \infty}F_{x_{n}, x_{n+m}}(t)= H(t). $$
(4.3)

If not, there exist numbers \(t_{0}>0\), \(0<\lambda_{0}<1\) and two subsequences \(\{n_{k}\}\), \(\{m_{k}\}\) of \(\{n\}\) such that \(F_{x_{n_{k}},x_{n_{k}+m_{k}}}(t_{0})\leq\lambda_{0}\) for all \(k\geq1\). In this case, we have

$$\begin{aligned} d_{F}(x_{n_{k}}, x_{n_{k}+m_{k}}) =&\int_{0}^{+\infty}t \, dF_{x_{n_{k}},x_{n_{k}+m_{k}}}(t) \\ =&\int_{0}^{t_{0}}t\, dF_{x_{n_{k}},x_{n_{k}+m_{k}}}(t)+\int _{t_{0}}^{+\infty }t\, dF_{x_{n_{k}},x_{n_{k}+m_{k}}}(t) \\ \geq&\int_{t_{0}}^{+\infty}t\, dF_{x_{n_{k}},x_{n_{k}+m_{k}}}(t)\geq t_{0} \bigl(1-F_{x_{n_{k}},x_{n_{k}+m_{k}}}(t_{0})\bigr) \\ \geq& t_{0} (1-\lambda_{0})>0. \end{aligned}$$

This is a contradiction. From (4.3) we know that \(\{x_{n}\}\) is a Cauchy sequence in the complete Menger probabilistic metric space \((E,F,\triangle_{1})\). Hence there exists a point \(x^{*} \in E\) such that \(\{x_{n}\}\) converges to \(x^{*}\) in the sense that

$$\lim_{n\rightarrow \infty}F_{x_{n},x^{*}}(t)= H(t). $$

We claim that \(x^{*}\) is a fixed point of \(T^{L}\). In fact, for any \(t\in R\), it follows from condition (MPM-3) and the properties of the △-norm that

$$\begin{aligned} F_{x^{*},T^{L}x^{*}}(t) \geq&\triangle_{1}\biggl( F_{x^{*},x_{n}}\biggl( \frac{t}{2}\biggr), F_{x_{n},T^{L}x^{*}}\biggl(\frac{t}{2}\biggr)\biggr) \\ =& \triangle_{1}\biggl( F_{x^{*},x_{n}}\biggl(\frac{t}{2} \biggr), F_{T^{L}x_{n-1},T^{L}x^{*}}\biggl(\frac{t}{2}\biggr)\biggr) \\ \geq&\triangle_{1}\biggl( F_{x^{*},x_{n}}\biggl(\frac{t}{2} \biggr), F_{x_{n-1},x^{*}}\biggl(\frac{t}{2g}\biggr)\biggr) \rightarrow H(t) \end{aligned}$$

as \(n\rightarrow\infty\), which implies \(F_{x^{*},T^{L}x^{*}}(t)=H(t)\) and hence \(x^{*}=T^{L}x^{*}\). Thus \(x^{*}\) is a fixed point of \(T^{L}\). If there exists another fixed point \(x^{**}\) of \(T^{L}\), we observe that

$$F_{x^{*}, x^{**}}(t)= F_{T^{L}x^{*}, T^{L}x^{**}}(t)\geq F_{x^{*}, x^{**}}\biggl( \frac{t}{g}\biggr), $$

which implies \(F_{x^{*}, x^{**}}(t)=H(t)\), \(\forall t\in R\), i.e., \(x^{*}=x^{**}\). Hence the fixed point of \(T^{L}\) is unique. On the other hand, it follows from \(x^{*}=T^{L}x^{*}\) that \(Tx^{*}=T^{L}(Tx^{*})\), so by uniqueness \(x^{*}=Tx^{*}\). Then \(x^{*}\) is the unique fixed point of T. Now we prove that, for any given \(x_{0}\), the iterative sequence \(x_{n}=T^{n}x_{0}\) converges to \(x^{*}\). Observe that any positive integer n can be expressed as \(n=mL+i\), where m and i are nonnegative integers with \(0\leq i< L\). In this case, \(T^{n}x_{0}=T^{mL}T^{i}x_{0}\rightarrow x^{*}\) as \(m\rightarrow\infty\). Finally, we prove the error estimate formula. Letting \(m\rightarrow\infty\) in inequality (4.2), we get

$$ d_{F}\bigl(x_{n},x^{*}\bigr)\leq \frac{2g^{n}}{1-2g}d_{F}(x_{0},x_{1}). $$
(4.4)

Since \(x_{n}=T^{nL}x_{0}\) for all \(n\geq0\), the above inequality (4.4) can be rewritten as follows:

$$d_{F}\bigl(T^{nL}x_{0},x^{*}\bigr)\leq \frac{2g^{n}}{1-2g}d_{F}(x_{0},x_{1}). $$

Because any positive integer n can be expressed as \(n=mL+i\), where m and i are nonnegative integers with \(0\leq i< L\), we can get the following inequality:

$$d_{F}\bigl(T^{n}x_{0},x^{*}\bigr)=d_{F} \bigl(T^{mL}T^{i}x_{0},x^{*}\bigr)\leq \frac{2g^{m}}{1-2g}d_{F}\bigl(T^{i}x_{0},T^{i+1}x_{0} \bigr), $$

where \(m=[\frac{n}{L}]\) and \(i=0,1,2, \ldots,L-1\). Finally we can get the error estimate formula

$$\begin{aligned} d_{F}\bigl(T^{n}x_{0},x^{*}\bigr) & \leq \frac{2g^{[\frac{n}{L}]}}{1-2g} \max_{0\leq i < L} \bigl( d_{F} \bigl(T^{i}x_{0},T^{i+1}x_{0}\bigr)\bigr) \\ &= \frac{2h^{L[\frac{n}{L}]}}{1-2h^{L}} \max_{0\leq i < L} \bigl( d_{F} \bigl(T^{i}x_{0},T^{i+1}x_{0}\bigr)\bigr) \\ &= \frac{2h^{L[\frac{n}{L}]}}{1-2h^{L}} \max_{0\leq i < L} \int_{0}^{+\infty}t \, d F_{T^{i}x_{0},T^{i+1}x_{0}}(t). \end{aligned}$$

That is,

$$\int_{0}^{+\infty}t\, d F_{T^{n}x_{0},x^{*}}(t) \leq \frac{2h^{L[\frac{n}{L}]}}{1-2h^{L}} \max_{0\leq i < L} \int_{0}^{+\infty}t \, d F_{T^{i}x_{0},T^{i+1}x_{0}}(t). $$

This completes the proof. □
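
The error estimate of Theorem 4.3 is easy to evaluate numerically. The following sketch (ours, with an arbitrarily chosen h) picks the smallest L with \(h^{L}<\frac{1}{2}\) and tabulates the factor \(\frac{2h^{L[\frac{n}{L}]}}{1-2h^{L}}\) that multiplies \(\max_{0\leq i < L}\int_{0}^{+\infty}t\, d F_{T^{i}x_{0},T^{i+1}x_{0}}(t)\) in the estimate.

```python
# Sketch (ours): numerical size of the error-estimate factor in Theorem 4.3.
h = 0.9                               # contraction constant, 0 < h < 1
L = 1
while h**L >= 0.5:                    # smallest L with h^L < 1/2
    L += 1

for n in (L, 5 * L, 10 * L, 20 * L):
    factor = 2 * h**(L * (n // L)) / (1 - 2 * h**L)
    print(n, factor)                  # the factor decays geometrically in n
```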

Theorem 4.4

Let \((E, F, \triangle )\) be a complete Menger probabilistic metric space. Assume \(\triangle(a,b)\geq\triangle_{1}(a,b)=\max\{a+b-1,0\}\). Let \(T: E \rightarrow E\) be a mapping satisfying the following condition:

$$F_{Tx,Ty}(t)\geq F_{x,y}\biggl(\frac{t}{h}\biggr), \quad \forall x, y \in E, \forall t \in R=(-\infty,+\infty), $$

where \(0< h<1\) is a constant. Then T has a unique fixed point \(x^{*} \in E\) and for any given \(x_{0} \in E\) the iterative sequence \(x_{n+1}=Tx_{n}\) converges to \(x^{*}\). Further, the error estimate inequality

$$\int_{0}^{+\infty}t\, d F_{T^{n}x_{0},x^{*}}(t) \leq \frac{2h^{L[\frac{n}{L}]}}{1-2h^{L}} \max_{0\leq i < L} \int_{0}^{+\infty}t \, d F_{T^{i}x_{0},T^{i+1}x_{0}}(t) $$

holds for some positive integer L provided \(h^{L}<\frac{1}{2}\).

Proof

Since \(\triangle(a,b)\geq \triangle_{1}(a,b)=\max\{a+b-1,0\}\), if \((E, F, \triangle )\) is a complete Menger probabilistic metric space, so is \((E, F, \triangle _{1})\). By using Theorem 4.3, we get the conclusion of Theorem 4.4. This completes the proof. □

Corollary 4.5

(Sehgal and Bharucha-Reid [3], 1972)

Let \((E, F, \mathrm{min})\) be a complete Menger probabilistic metric space. Let \(T: E \rightarrow E\) be a mapping satisfying the following condition:

$$F_{Tx,Ty}(t)\geq F_{x,y}\biggl(\frac{t}{h}\biggr), \quad \forall x, y \in E, \forall t \in R=(-\infty,+\infty), $$

where \(0< h<1\) is a constant. Then T has a unique fixed point \(x^{*} \in E\) and for any given \(x_{0} \in E\) the iterative sequence \(x_{n+1}=Tx_{n}\) converges to \(x^{*}\). Further, the error estimate inequality

$$\int_{0}^{+\infty}t\, d F_{T^{n}x_{0},x^{*}}(t) \leq \frac{2h^{L[\frac{n}{L}]}}{1-2h^{L}} \max_{0\leq i < L} \int_{0}^{+\infty}t \, d F_{T^{i}x_{0},T^{i+1}x_{0}}(t) $$

holds for some positive integer L provided \(h^{L}<\frac{1}{2}\).

5 Best proximity point theorems in Menger probabilistic metric spaces

We first define the notion of the P-operator \(P: B_{0}\rightarrow A_{0}\), which is useful for our best proximity point theorem. From the definitions of \(A_{0}\) and \(B_{0}\), we know that for any given \(y \in B_{0}\) there exists an element \(x \in A_{0}\) such that \(F_{x,y}(t)=F_{A,B}(t)\). Because \((A,B)\) has the weak P-property, such an x is unique. We denote by \(x=Py\) the P-operator from \(B_{0}\) into \(A_{0}\).

Theorem 5.1

Let \((E, F, \triangle _{1})\) be a complete Menger probabilistic metric space, where \(\triangle_{1}(a,b)=\max\{a+b-1,0\}\). Let \((A,B)\) be a pair of nonempty subsets of E such that \(A_{0}\) is nonempty and closed. Suppose that \((A,B)\) satisfies the weak P-property. Let \(T: A \rightarrow B\) be a mapping satisfying the following condition:

$$F_{Tx,Ty}(t)\geq F_{x,y}\biggl(\frac{t}{h}\biggr), \quad \forall x, y \in A, \forall t \in R=(-\infty,+\infty), $$

where \(0< h<1\) is a constant. Assume \(T(A_{0})\subset B_{0}\). Then T has a unique best proximity point \(x^{*} \in A\) and for any given \(x_{0} \in A_{0}\) the iterative sequence \(x_{n+1}=PTx_{n}\) converges to \(x^{*}\). Further, the error estimate inequality

$$\int_{0}^{+\infty}t\, d F_{(PT)^{n}x_{0},x^{*}}(t) \leq \frac{2h^{L[\frac{n}{L}]}}{1-2h^{L}} \max_{0\leq i < L} \int_{0}^{+\infty}t \, d F_{(PT)^{i}x_{0},(PT)^{i+1}x_{0}}(t) $$

holds for some positive integer L provided \(h^{L}<\frac{1}{2}\).

Proof

Since the pair \((A,B)\) has the weak P-property, we have

$$F_{PTx_{1},PTx_{2}}(t)\geq F_{Tx_{1},Tx_{2}}(t) \geq F_{x_{1},x_{2}}\biggl( \frac{t}{h}\biggr) ,\quad \forall t \in R=(-\infty,+\infty) $$

for any \(x_{1},x_{2} \in A_{0}\). This shows that \(PT: A_{0} \rightarrow A_{0}\) is a contraction from the complete Menger probabilistic metric subspace \(A_{0}\) into itself. Using Theorem 4.3, we know that PT has a unique fixed point \(x^{*}\) and for any given \(x_{0} \in A_{0}\) the iterative sequence \(x_{n+1}=PTx_{n}\) converges to \(x^{*}\). Further, the error estimate inequality

$$\int_{0}^{+\infty}t\, d F_{(PT)^{n}x_{0},x^{*}}(t) \leq \frac{2h^{L[\frac{n}{L}]}}{1-2h^{L}} \max_{0\leq i < L} \int_{0}^{+\infty}t \, d F_{(PT)^{i}x_{0},(PT)^{i+1}x_{0}}(t) $$

holds for some positive integer L provided \(h^{L}<\frac{1}{2}\). Since \(PTx^{*}=x^{*}\) if and only if \(F_{x^{*},Tx^{*}}(t)=F_{A,B}(t)\), the point \(x^{*}\) is the unique best proximity point of \(T:A\rightarrow B\). This completes the proof. □

Theorem 5.2

Let \((E, F, \triangle )\) be a complete Menger probabilistic metric space. Assume \(\triangle(a,b)\geq\triangle_{1}(a,b)=\max\{a+b-1,0\}\). Let \((A,B)\) be a pair of nonempty subsets of E such that \(A_{0}\) is nonempty and closed. Suppose that \((A,B)\) satisfies the weak P-property. Let \(T: A \rightarrow B\) be a mapping satisfying the following condition:

$$F_{Tx,Ty}(t)\geq F_{x,y}\biggl(\frac{t}{h}\biggr), \quad \forall x, y \in A, \forall t \in R=(-\infty,+\infty), $$

where \(0< h<1\) is a constant. Assume \(T(A_{0})\subset B_{0}\). Then T has a unique best proximity point \(x^{*} \in A\) and for any given \(x_{0} \in A_{0}\) the iterative sequence \(x_{n+1}=PTx_{n}\) converges to \(x^{*}\). Further, the error estimate inequality

$$\int_{0}^{+\infty}t\, d F_{(PT)^{n}x_{0},x^{*}}(t) \leq \frac{2h^{L[\frac{n}{L}]}}{1-2h^{L}} \max_{0\leq i < L} \int_{0}^{+\infty}t \, d F_{(PT)^{i}x_{0},(PT)^{i+1}x_{0}}(t) $$

holds for some positive integer L provided \(h^{L}<\frac{1}{2}\).

Proof

Because \(\triangle(a,b)\geq\triangle_{1}(a,b)=\max\{a+b-1,0\}\), if \((E, F, \triangle )\) is a complete Menger probabilistic metric space, so is \((E, F, \triangle_{1})\); by using Theorem 5.1, we get the conclusion of Theorem 5.2. This completes the proof. □

Corollary 5.3

Let \((E, F, \mathrm{min})\) be a complete Menger probabilistic metric space. Let \((A,B)\) be a pair of nonempty subsets of E such that \(A_{0}\) is nonempty and closed. Suppose that \((A,B)\) satisfies the weak P-property. Let \(T: A \rightarrow B\) be a mapping satisfying the following condition:

$$F_{Tx,Ty}(t)\geq F_{x,y}\biggl(\frac{t}{h}\biggr), \quad \forall x, y \in A, \forall t \in R=(-\infty,+\infty), $$

where \(0< h<1\) is a constant. Assume that \(T(A_{0})\subset B_{0}\). Then T has a unique best proximity point \(x^{*} \in A\) and for any given \(x_{0} \in A_{0}\) the iterative sequence \(x_{n+1}=PTx_{n}\) converges to \(x^{*}\). Further, the error estimate inequality

$$\int_{0}^{+\infty}t\, d F_{(PT)^{n}x_{0},x^{*}}(t) \leq \frac{2h^{L[\frac{n}{L}]}}{1-2h^{L}} \max_{0\leq i < L} \int_{0}^{+\infty}t \, d F_{(PT)^{i}x_{0},(PT)^{i+1}x_{0}}(t) $$

holds for some positive integer L provided \(h^{L}<\frac{1}{2}\).

Corollary 5.4

Let \((E, F, \triangle_{2})\) be a complete Menger probabilistic metric space, where \(\triangle_{2}(a,b)=a\cdot b\). Let \((A,B)\) be a pair of nonempty subsets of E such that \(A_{0}\) is nonempty and closed. Suppose that \((A,B)\) satisfies the weak P-property. Let \(T: A \rightarrow B\) be a mapping satisfying the following condition:

$$F_{Tx,Ty}(t)\geq F_{x,y}\biggl(\frac{t}{h}\biggr), \quad \forall x, y \in A, \forall t \in R=(-\infty,+\infty), $$

where \(0< h<1\) is a constant. Assume that \(T(A_{0})\subset B_{0}\). Then T has a unique best proximity point \(x^{*} \in A\) and for any given \(x_{0} \in A_{0}\) the iterative sequence \(x_{n+1}=PTx_{n}\) converges to \(x^{*}\). Further, the error estimate inequality

$$\int_{0}^{+\infty}t\, d F_{(PT)^{n}x_{0},x^{*}}(t) \leq \frac{2h^{L[\frac{n}{L}]}}{1-2h^{L}} \max_{0\leq i < L} \int_{0}^{+\infty}t \, d F_{(PT)^{i}x_{0},(PT)^{i+1}x_{0}}(t) $$

holds for some positive integer L provided \(h^{L}<\frac{1}{2}\).

Remark

Research on probabilistic metric spaces (and probabilistic normed spaces) and the relevant fixed point theory is an important topic, and many results have been given by various authors. However, the deep relationship with probability theory has not been studied closely. S-probabilistic metric spaces and the relevant probabilistic methods should play an important role in this theory and its applications.

References

  1. Menger, K: Statistical metrics. Proc. Natl. Acad. Sci. USA 28, 535-537 (1942)

  2. Sehgal, VM: Some fixed point theorems in functional analysis and probability. PhD thesis, Wayne State University, Detroit, MI (1966)

  3. Sehgal, VM, Bharucha-Reid, AT: Fixed points of contraction mappings on PM-spaces. Math. Syst. Theory 6, 97-102 (1972)

  4. Cho, YJ, Park, KS, Chang, SS: Fixed point theorems in metric spaces and probabilistic metric spaces. Int. J. Math. Math. Sci. 19, 243-252 (1996)

  5. Ćirić, LB: Some new results for Banach contractions and Edelstein contractive mappings on fuzzy metric spaces. Chaos Solitons Fractals 42, 146-154 (2009)

  6. Ćirić, LB, Mihet, D, Saadati, R: Monotone generalized contractions in partially ordered probabilistic metric spaces. Topol. Appl. 156, 2838-2844 (2009)

  7. Fang, JX: Common fixed point theorems of compatible and weakly compatible maps in Menger spaces. Nonlinear Anal. 71, 1833-1843 (2009)

  8. Hadžić, O, Pap, E: Fixed Point Theory in Probabilistic Metric Spaces. Kluwer Academic, Dordrecht (2001)

  9. Kamran, T: Common fixed points theorems for fuzzy mappings. Chaos Solitons Fractals 38, 1378-1382 (2008)

  10. Liu, Y, Li, Z: Coincidence point theorems in probabilistic and fuzzy metric spaces. Fuzzy Sets Syst. 158, 58-70 (2007)

  11. Mihet, D: Altering distances in probabilistic Menger spaces. Nonlinear Anal. 71, 2734-2738 (2009)

  12. Mihet, D: Fixed point theorems in probabilistic metric spaces. Chaos Solitons Fractals 41(2), 1014-1019 (2009)

  13. Mihet, D: A note on a paper of Hicks and Rhoades. Nonlinear Anal. 65, 1411-1413 (2006)

  14. O’Regan, D, Saadati, R: Nonlinear contraction theorems in probabilistic spaces. Appl. Math. Comput. 195, 86-93 (2008)

  15. Saadati, R, Sedghi, S, Shobe, N: Modified intuitionistic fuzzy metric spaces and some fixed point theorems. Chaos Solitons Fractals 38, 36-47 (2006)

  16. Schweizer, B, Sklar, A: Probabilistic Metric Spaces. North-Holland, New York (1983)

  17. Sedghi, S, Žikić-Došenović, T, Shobe, N: Common fixed point theorems in Menger probabilistic quasimetric spaces. Fixed Point Theory Appl. 2009, Article ID 546273 (2009)

  18. Su, Y, Zhang, J: Fixed point and best proximity point theorems for contractions in new class of probabilistic metric spaces. Fixed Point Theory Appl. 2014, 170 (2014)

  19. Sadiq Basha, S: Best proximity point theorems generalizing the contraction principle. Nonlinear Anal. 74, 5844-5850 (2011)

  20. Mongkolkeha, C, Cho, YJ, Kumam, P: Best proximity points for Geraghty’s proximal contraction mappings. Fixed Point Theory Appl. 2013, 180 (2013)

  21. Dugundji, J, Granas, A: Weakly contractive mappings and elementary domain invariance theorem. Bull. Greek Math. Soc. 19, 141-151 (1978)

  22. Alghamdi, MA, Alghamdi, MA, Shahzad, N: Best proximity point results in geodesic metric spaces. Fixed Point Theory Appl. 2013, 164 (2013)

  23. Nashine, HK, Kumam, P, Vetro, C: Best proximity point theorems for rational proximal contractions. Fixed Point Theory Appl. 2013, 95 (2013)

  24. Karapinar, E: On best proximity point of ψ-Geraghty contractions. Fixed Point Theory Appl. 2013, 200 (2013)

  25. Zhang, J, Su, J, Cheng, Q: Best proximity point theorems for generalized contractions in partially ordered metric spaces. Fixed Point Theory Appl. 2013, 83 (2013)

  26. Zhang, J, Su, J, Cheng, Q: A note on ‘A best proximity point theorem for Geraghty-contractions’. Fixed Point Theory Appl. 2013, 99 (2013)

  27. Zhang, J, Su, J: Best proximity point theorems for weakly contractive mapping and weakly Kannan mapping in partial metric spaces. Fixed Point Theory Appl. 2014, 50 (2014)

  28. Amini-Harandi, A, Hussain, N, Akbar, F: Best proximity point results for generalized contractions in metric spaces. Fixed Point Theory Appl. 2013, 164 (2013)

  29. Kannan, R: Some results on fixed points. Bull. Calcutta Math. Soc. 60, 71-76 (1968)

  30. Caballero, J, Su, Y, Cheng, Q: A best proximity point theorem for Geraghty-contractions. Fixed Point Theory Appl. (2012). doi:10.1186/1687-1812-2012-231

  31. Kirk, WA, Reich, S, Veeramani, P: Proximinal retracts and best proximity pair theorems. Numer. Funct. Anal. Optim. 24, 851-862 (2003)

  32. Czerwik, S: Contraction mappings in b-metric spaces. Acta Math. Inform. Univ. Ostrav. 1, 5-11 (1993)

Acknowledgements

This project is supported by the National Natural Science Foundation of China under Grant (11071279).

Author information

Corresponding author

Correspondence to Jen-Chih Yao.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Su, Y., Gao, W. & Yao, JC. Generalized contraction mapping principle and generalized best proximity point theorems in probabilistic metric spaces. Fixed Point Theory Appl 2015, 76 (2015). https://doi.org/10.1186/s13663-015-0323-4
