
Fixed point and best proximity point theorems for contractions in new class of probabilistic metric spaces

Abstract

The purpose of this paper is to present some definitions and basic concepts of the best proximity point in a new class of probabilistic metric spaces and to prove best proximity point theorems for contractive and weakly contractive mappings. To obtain these theorems, some new probabilistic contraction mapping principles are proved, and the corresponding error estimate inequalities are established. Furthermore, the method of proof is itself new and interesting: the related problems are studied via the mathematical expectation of the distribution function.

1 Introduction and preliminaries

Probabilistic metric spaces were introduced in 1942 by Menger [1]. In such spaces, the notion of distance between two points x and y is replaced by a distribution function F x , y (t). Thus one thinks of the distance between points as being probabilistic with F x , y (t) representing the probability that the distance between x and y is less than t. Sehgal, in his Ph.D. thesis [2], extended the notion of a contraction mapping to the setting of the Menger probabilistic metric spaces. For example, a mapping T is a probabilistic contraction if T is such that for some constant 0<k<1, the probability that the distance between image points Tx and Ty is less than kt is at least as large as the probability that the distance between x and y is less than t.

In 1972, Sehgal and Bharucha-Reid proved the following result.

Theorem 1.1 (Sehgal and Bharucha-Reid [3], 1972)

Let (E,F,△) be a complete Menger probabilistic metric space for which the triangular norm △ is continuous and satisfies △(a,b)=min(a,b). If T is a mapping of E into itself such that for some 0<k<1 and all x,y∈E,

F T x , T y (t)≥ F x , y ( t k ) ,∀t>0,
(1.1)

then T has a unique fixed point x ∗ in E, and for any given x 0 ∈E, T n x 0 converges to x ∗ .

The mapping T satisfying (1.1) is called a k-probabilistic contraction or a Sehgal contraction [3]. The fixed point theorem obtained by Sehgal and Bharucha-Reid is a generalization of the classical Banach contraction principle and is further investigated by many authors [2, 4–18]. Some results in this theory have found applications to control theory, system theory, and optimization problems.
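To make the Sehgal condition concrete, consider the degenerate case in which an ordinary metric space is viewed as a Menger PM space via F x , y (t)=H(t−|x−y|); the Sehgal condition then reduces to the Banach condition. The following Python sketch (all names are our own and purely illustrative) checks the condition numerically for the mapping Tx=x/2 with k=1/2:

```python
# Hypothetical illustration: a metric space (R, |.|) embeds into a Menger PM
# space via F_{x,y}(t) = H(t - |x - y|), where H is the unit step at 0.
# In this space the Sehgal condition F_{Tx,Ty}(t) >= F_{x,y}(t/k) reduces
# to the Banach condition |Tx - Ty| <= k |x - y|.

def H(t):
    """Special distance distribution function: 0 for t <= 0, 1 for t > 0."""
    return 1.0 if t > 0 else 0.0

def F(x, y):
    """Distribution function of the (deterministic) distance |x - y|."""
    return lambda t: H(t - abs(x - y))

def sehgal_holds(T, x, y, k, ts):
    """Check F_{Tx,Ty}(t) >= F_{x,y}(t/k) on a grid of t values."""
    Fxy, FTxTy = F(x, y), F(T(x), T(y))
    return all(FTxTy(t) >= Fxy(t / k) for t in ts)

T = lambda x: x / 2.0                       # Banach contraction, constant 1/2
ts = [0.01 * i for i in range(1, 500)]
print(sehgal_holds(T, 3.0, -1.0, 0.5, ts))  # True: T is a Sehgal 1/2-contraction
```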

Next we shall recall some well-known definitions and results in the theory of probabilistic metric spaces which are used later on in this paper. For more details, we refer the reader to [8].

Definition 1.2 A triangular norm (shortly, △-norm) is a binary operation △ on [0,1] which satisfies the following conditions:

  (a) △ is associative and commutative;

  (b) △ is continuous;

  (c) △(a,1)=a for all a∈[0,1];

  (d) △(a,b)≤△(c,d) whenever a≤c and b≤d for each a,b,c,d∈[0,1].

The following are the six basic △-norms:

△ 1 (a,b)=max(a+b−1,0);

△ 2 (a,b)=a⋅b;

△ 3 (a,b)=min(a,b);

△ 4 (a,b)=max(a,b);

△ 5 (a,b)=a+b−ab;

△ 6 (a,b)=min(a+b,1).

It is easy to check that the above six △-norms have the following relations:

△ 1 (a,b)≤ △ 2 (a,b)≤ △ 3 (a,b)≤ △ 4 (a,b)≤ △ 5 (a,b)≤ △ 6 (a,b),

for any a,b∈[0,1].

Definition 1.3 A function F:(−∞,+∞)→[0,1] is called a distribution function if it is non-decreasing and left-continuous with lim t → − ∞ F(t)=0. If, in addition, F(0)=0, then F is called a distance distribution function.

Definition 1.4 A distance distribution function F satisfying lim t → + ∞ F(t)=1 is called a Menger distance distribution function. The set of all Menger distance distribution functions is denoted by D + . A special Menger distance distribution function is given by

H(t)={ 0 , t ≤ 0 , 1 , t > 0 .

Definition 1.5 A probabilistic metric space is a pair (E,F), where E is a nonempty set, F is a mapping from E×E into D + such that, if F x , y denotes the value of F at the pair (x,y), the following conditions hold:

(PM-1) F x , y (t)=H(t) if and only if x=y;

(PM-2) F x , y (t)= F y , x (t) for all x,y∈E and t∈(−∞,+∞);

(PM-3) F x , z (t)=1, F z , y (s)=1 implies F x , y (t+s)=1

for all x,y,z∈E and −∞<t<+∞.

Definition 1.6 A Menger probabilistic metric space (abbreviated, Menger PM space) is a triple (E,F,△) where E is a nonempty set, △ is a continuous t-norm and F is a mapping from E×E into D + such that, if F x , y denotes the value of F at the pair (x,y), the following conditions hold:

(MPM-1) F x , y (t)=H(t) if and only if x=y;

(MPM-2) F x , y (t)= F y , x (t) for all x,y∈E and t∈(−∞,+∞);

(MPM-3) F x , y (t+s)≥△( F x , z (t), F z , y (s)) for all x,y,z∈E and t>0, s>0.

Now we give a new definition of a probabilistic metric space, the so-called S-probabilistic metric space. This definition reflects more of the probabilistic meaning and the probabilistic background. In it, the triangle inequality is replaced by a new form.

Definition 1.7 An S-probabilistic metric space is a pair (E,F), where E is a nonempty set, F is a mapping from E×E into D + such that, if F x , y denotes the value of F at the pair (x,y), the following conditions hold:

(SPM-1) F x , y (t)=H(t) if and only if x=y;

(SPM-2) F x , y (t)= F y , x (t) for all x,y∈E and t∈(−∞,+∞);

(SPM-3) F x , y (t)≥ F x , z (t)∗ F z , y (t) ∀x,y,z∈E,

where F x , z (t)∗ F z , y (t) is the convolution between F x , z (t) and F z , y (t) defined by

F x , z (t)∗ F z , y (t)= ∫ 0 + ∞ F x , z (t−u)d F z , y (u).
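The convolution in (SPM-3) can be computed explicitly when the random distances are discrete. The sketch below (a hypothetical illustration; the representation by (value, probability) atoms is our own) computes the atoms of the independent sum d(x,z)+d(z,y), whose distribution function is the right-hand side of (SPM-3):

```python
# Numerical sketch of the convolution in (SPM-3) for discrete distance
# distributions. A distribution is represented by its atoms: a list of
# (value, probability) pairs; F(t) = P(D < t) is then left-continuous.

def cdf(atoms):
    """Left-continuous distribution function F(t) = P(D < t)."""
    return lambda t: sum(p for v, p in atoms if v < t)

def convolve(atoms_f, atoms_g):
    """Atoms of the sum of two independent random distances."""
    out = {}
    for v1, p1 in atoms_f:
        for v2, p2 in atoms_g:
            out[v1 + v2] = out.get(v1 + v2, 0.0) + p1 * p2
    return sorted(out.items())

# Random distances d(x,z) and d(z,y), each taking two values:
F_xz = [(1.0, 0.5), (2.0, 0.5)]
F_zy = [(1.0, 0.5), (3.0, 0.5)]
F_conv = convolve(F_xz, F_zy)   # atoms of d(x,z) + d(z,y)
# (SPM-3) then requires F_{x,y}(t) >= cdf(F_conv)(t) for every t.
print(F_conv)                   # [(2.0, 0.25), (3.0, 0.25), (4.0, 0.25), (5.0, 0.25)]
```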

Example Let X be a nonempty set, let S be a measurable space consisting of some metrics on X, let (Ω,P) be a complete probability measure space, and let f:Ω→S be a measurable mapping. It is natural to regard f as a random metric on X and, accordingly, (X,f) as a random metric space. The following expressions for the distribution functions F x , y (t), F x , z (t), and F z , y (t) are then reasonable:

F x , y ( t ) = P { f − 1 { d ∈ S ; d ( x , y ) < t } } , F x , z ( t ) = P { f − 1 { d ∈ S ; d ( x , z ) < t } } ,

and

F z , y (t)=P { f − 1 { d ∈ S ; d ( z , y ) < t } }

for all x,y,z∈X. Since every metric d∈S satisfies d(x,y)≤d(x,z)+d(z,y), the event {d∈S;d(x,z)+d(z,y)<t} is contained in {d∈S;d(x,y)<t}, so

P { f − 1 { d ∈ S ; d ( x , y ) < t } } ≥P { f − 1 { d ∈ S ; d ( x , z ) + d ( z , y ) < t } } .

When the distances d(x,z) and d(z,y) are treated as independent, it follows from probability theory that

P { f − 1 { d ∈ S ; d ( x , z ) + d ( z , y ) < t } } = F x , z (t)∗ F z , y (t).

Therefore

F x , y (t)≥ F x , z (t)∗ F z , y (t),∀x,y,z∈X.

In addition, the conditions (SPM-1), (SPM-2) are obvious.

In this paper, both Menger probabilistic metric spaces and S-probabilistic metric spaces are regarded as probabilistic metric spaces.

Many problems can be recast as equations of the form Tx=x, where T is a given self-mapping defined on a subset of a metric space, a normed linear space, a topological vector space, or some other suitable space. However, if T is a non-self-mapping from A to B, then the equation Tx=x does not necessarily admit a solution. In this case, one seeks an approximate solution x in A such that the error d(x,Tx) is minimal, where d is the distance function. In view of the fact that d(x,Tx) is at least d(A,B), a best proximity point theorem guarantees the global minimization of d(x,Tx) by requiring that an approximate solution x satisfy the condition d(x,Tx)=d(A,B). Such optimal approximate solutions are called best proximity points of the mapping T. Interestingly, best proximity point theorems also serve as a natural generalization of fixed point theorems, for a best proximity point becomes a fixed point if the mapping under consideration is a self-mapping. Research on best proximity points is an important topic in nonlinear functional analysis and its applications (see [19–31]).

Let A, B be two nonempty subsets of a complete metric space and consider a mapping T:A→B. The best proximity point problem is to find an element x 0 ∈A such that d( x 0 ,T x 0 )=min{d(x,Tx):x∈A}. Since d(x,Tx)≥d(A,B) for any x∈A, the optimal solution is in fact one for which the value d(A,B) is attained.

Let A, B be two nonempty subsets of a metric space (X,d). We denote by A 0 and B 0 the following sets:

A 0 = { x ∈ A : d ( x , y ) = d ( A , B )  for some  y ∈ B } , B 0 = { y ∈ B : d ( x , y ) = d ( A , B )  for some  x ∈ A } ,

where d(A,B)=inf{d(x,y):x∈A and y∈B}.

It is interesting to notice that A 0 and B 0 are contained in the boundaries of A and B, respectively, provided A and B are closed subsets of a normed linear space such that d(A,B)>0 [19].

In order to study the best proximity point problems, we need the following notions.

Definition 1.8 ([30])

Let (A,B) be a pair of nonempty subsets of a metric space (X,d) with A 0 ≠∅. Then the pair (A,B) is said to have the P-property if and only if for any x 1 , x 2 ∈ A 0 and y 1 , y 2 ∈ B 0 ,

{ d ( x 1 , y 1 ) = d ( A , B ) , d ( x 2 , y 2 ) = d ( A , B ) ⇒d( x 1 , x 2 )=d( y 1 , y 2 ).

In [31], the authors prove that any pair (A,B) of nonempty closed convex subsets of a real Hilbert space H satisfies the P-property.

In [25, 26], the P-property has been weakened to the weak P-property. An example that satisfies the weak P-property but not the P-property can be found there.

Definition 1.9 ([25, 26])

Let (A,B) be a pair of nonempty subsets of a metric space (X,d) with A 0 ≠∅. Then the pair (A,B) is said to have the weak P-property if and only if for any x 1 , x 2 ∈ A 0 and y 1 , y 2 ∈ B 0 ,

{ d ( x 1 , y 1 ) = d ( A , B ) , d ( x 2 , y 2 ) = d ( A , B ) ⇒d( x 1 , x 2 )≤d( y 1 , y 2 ).

Recently, many best proximity point problems with applications have been discussed and some best proximity point theorems have been proved. For more details, we refer the reader to [27].

In this paper, we establish some definitions and basic concepts of the best proximity point in the framework of probabilistic metric spaces.

Definition 1.10 Let (E,F) be a probabilistic metric space, A,B⊂E be two nonempty sets. Let

F A , B (t)= sup x ∈ A , y ∈ B F x , y (t),∀t∈(−∞,+∞),

which is said to be the probabilistic distance of A, B.

Example Let X be a nonempty set and d 1 , d 2 be two metrics defined on X with the probabilities p 1 =0.5, p 2 =0.5, respectively. Assume that

d 1 (x,y)≤ d 2 (x,y),∀x,y∈X.

For any x,y∈X, the random distance between x and y, which takes the value d 1 (x,y) with probability 0.5 and the value d 2 (x,y) with probability 0.5, is a discrete random variable with the distribution function

F x , y (t)={ 0 , t ≤ d 1 ( x , y ) , 0.5 , d 1 ( x , y ) < t ≤ d 2 ( x , y ) , 1 , d 2 ( x , y ) < t .

Let A, B be two nonempty sets of X. The random distance between A and B, which takes the value d 1 (A,B) with probability 0.5 and the value d 2 (A,B) with probability 0.5, is also a discrete random variable with the distribution function

F A , B (t)={ 0 , t ≤ d 1 ( A , B ) , 0.5 , d 1 ( A , B ) < t ≤ d 2 ( A , B ) , 1 , d 2 ( A , B ) < t ,

where

d i (A,B)= inf x ∈ A , y ∈ B d i (x,y),i=1,2.

It is easy to see that

F A , B (t)= sup x ∈ A , y ∈ B F x , y (t),∀t∈(−∞,+∞).
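The identity F A , B (t)= sup x ∈ A , y ∈ B F x , y (t) in this example can be verified numerically. The sketch below makes the hypothetical concrete choices d 1 (x,y)=|x−y| and d 2 (x,y)=2|x−y| on X=R, with A={0,1} and B={3,4}:

```python
# Numerical check of F_{A,B}(t) = sup_{x in A, y in B} F_{x,y}(t) for the
# two-metric example, with the (hypothetical) choices d1(x,y) = |x-y| and
# d2(x,y) = 2|x-y| on X = R, each carrying probability 0.5.

d1 = lambda x, y: abs(x - y)
d2 = lambda x, y: 2 * abs(x - y)

def F_pair(x, y):
    """Distribution function of the random distance between x and y."""
    def F(t):
        if t <= d1(x, y):
            return 0.0
        return 0.5 if t <= d2(x, y) else 1.0
    return F

A, B = [0.0, 1.0], [3.0, 4.0]
dA1 = min(d1(x, y) for x in A for y in B)   # d1(A,B) = 2
dA2 = min(d2(x, y) for x in A for y in B)   # d2(A,B) = 4

def F_AB(t):
    """Distribution built from d1(A,B) and d2(A,B) as in the example."""
    if t <= dA1:
        return 0.0
    return 0.5 if t <= dA2 else 1.0

ts = [0.1 * i for i in range(1, 100)]
sup_F = lambda t: max(F_pair(x, y)(t) for x in A for y in B)
print(all(abs(F_AB(t) - sup_F(t)) < 1e-12 for t in ts))  # True
```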

Definition 1.11 Let (E,F) be a probabilistic metric space, A,B⊂E be two nonempty subsets and T:A→B be a mapping. We say that x ∗ ∈A is a best proximity point of the mapping T if the following equality holds:

F x ∗ , T x ∗ (t)= F A , B (t),∀t∈(−∞,+∞).

Example Let X be a nonempty set and d 1 , d 2 be two metrics defined on X with the probabilities p 1 =0.5, p 2 =0.5, respectively. Let A, B be two nonempty sets of X and T:A→B be a mapping. Assume

d 1 (x,y)≤ d 2 (x,y),∀x,y∈X.

If there exists a point x ∗ ∈A, such that

d 1 ( x ∗ , T x ∗ ) = d 1 (A,B), d 2 ( x ∗ , T x ∗ ) = d 2 (A,B),

then the random distance between x ∗ and T x ∗ , which takes the value d 1 ( x ∗ ,T x ∗ ) with probability 0.5 and the value d 2 ( x ∗ ,T x ∗ ) with probability 0.5, is a discrete random variable with the distribution function

F x ∗ , T x ∗ (t)={ 0 , t ≤ d 1 ( x ∗ , T x ∗ ) , 0.5 , d 1 ( x ∗ , T x ∗ ) < t ≤ d 2 ( x ∗ , T x ∗ ) , 1 , d 2 ( x ∗ , T x ∗ ) < t .

It is obvious that F x ∗ , T x ∗ (t)= F A , B (t).

It is clear that the notion of a fixed point coincides with the notion of a best proximity point when the underlying mapping is a self-mapping. Let (E,F) be a probabilistic metric space. Suppose that A⊂E and B⊂E are nonempty subsets. We define the following sets:

A 0 = { x ∈ A : F x , y ( t ) = F A , B ( t )  for some  y ∈ B } , B 0 = { y ∈ B : F x , y ( t ) = F A , B ( t )  for some  x ∈ A } .

Definition 1.12 Let (A,B) be a pair of nonempty subsets of a probabilistic metric space (E,F) with A 0 ≠∅. Then the pair (A,B) is said to have the P-property if and only if for any x 1 , x 2 ∈ A 0 and y 1 , y 2 ∈ B 0 ,

F x 1 , y 1 (t)= F A , B (t), F x 2 , y 2 (t)= F A , B (t)⇒ F x 1 , x 2 (t)= F y 1 , y 2 (t).

Definition 1.13 Let (A,B) be a pair of nonempty subsets of a probabilistic metric space (E,F) with A 0 ≠∅. Then the pair (A,B) is said to have the weak P-property if and only if for any x 1 , x 2 ∈ A 0 and y 1 , y 2 ∈ B 0 ,

F x 1 , y 1 (t)= F A , B (t), F x 2 , y 2 (t)= F A , B (t)⇒ F x 1 , x 2 (t)≥ F y 1 , y 2 (t).

Definition 1.14 Let (E,F) be a probabilistic metric space.

  (1) A sequence { x n } in E is said to converge to x∈E if for any given ε>0 and λ>0, there exists a positive integer N=N(ε,λ) such that F x n , x (ε)>1−λ whenever n>N.

  (2) A sequence { x n } in E is called a Cauchy sequence if for any ε>0 and λ>0, there exists a positive integer N=N(ε,λ) such that F x n , x m (ε)>1−λ whenever n,m>N.

  (3) (E,F) is said to be complete if each Cauchy sequence in E converges to some point in E.

We write x n →x when { x n } converges to x. It is easy to see that x n →x if and only if F x n , x (t)→H(t) for any given t∈(−∞,+∞) as n→∞.

2 Contraction mapping principle in S-probabilistic metric spaces

Let (E,F) be an S-probabilistic metric space. For any x,y∈E we define

d F (x,y)= ∫ 0 + ∞ td F x , y (t).

Since t is continuous and F x , y is of bounded variation, the above integral is well defined. In fact, it is just the mathematical expectation of the random distance with distribution function F x , y (t). Throughout this paper we assume that

d F (x,y)= ∫ 0 + ∞ td F x , y (t)<+∞,∀x,y∈E,

for all probabilistic metric spaces (E,F) presented in this paper.

Next we give a new notion of convergence.

  (1) A sequence { x n } in E is said to converge averagely to x∈E if

    lim n → ∞ ∫ 0 + ∞ td F x n , x (t)=0.

  (2) A sequence { x n } in E is called an average Cauchy sequence if

    lim n , m → ∞ ∫ 0 + ∞ td F x n , x m (t)=0.

  (3) (E,F) is said to be average complete if each average Cauchy sequence in E converges averagely to some point in E.

We write x n ⇒x when { x n } converges averagely to x.

Theorem 2.1 Let (E,F) be an S-probabilistic metric space. For any x,y∈E define

d F (x,y)= ∫ 0 + ∞ td F x , y (t).

Then d F is a metric on E.

Proof Since F x , y (t)=H(t) (∀t∈R) if and only if x=y, and

∫ 0 + ∞ tdH(t)=0,

we know the condition d F (x,y)=0⇔x=y holds. The condition d F (x,y)= d F (y,x), for all x,y∈E, is obvious. Next we will prove the triangle inequality. For any x,y,z∈E, from (SPM-3) we have

F x , y (t)≥ ∫ 0 + ∞ F x , z (t−u)d F z , y (u)= F x , z (t)∗ F z , y (t).

Since F x , y ≥ F x , z ∗ F z , y pointwise, the random distance from x to y is stochastically dominated by the independent sum of the distances from x to z and from z to y; as the expectation of a convolution is the sum of the expectations, it follows that

∫ 0 + ∞ td F x , y (t)≤ ∫ 0 + ∞ td F x , z (t)+ ∫ 0 + ∞ td F z , y (t),

which implies that

d F (x,y)≤ d F (x,z)+ d F (z,y).

This completes the proof. □
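The key probabilistic fact used in the proof above is that expectation is additive under convolution: the expectation of the independent sum of two nonnegative random distances is the sum of their expectations. A numerical sketch for discrete distributions (the atom representation is our own):

```python
# The key step in the proof of Theorem 2.1: if F and G are the distributions
# of two independent nonnegative random distances, then
#   ∫ t d(F*G)(t) = ∫ t dF(t) + ∫ t dG(t).
# Discrete distributions are represented by (value, probability) atoms.

def d_F(atoms):
    """Expectation of a discrete distance distribution: sum of v * p."""
    return sum(v * p for v, p in atoms)

def convolve(atoms_f, atoms_g):
    """Atoms of the independent sum of two discrete random distances."""
    out = {}
    for v1, p1 in atoms_f:
        for v2, p2 in atoms_g:
            out[v1 + v2] = out.get(v1 + v2, 0.0) + p1 * p2
    return sorted(out.items())

F_atoms = [(1.0, 0.5), (3.0, 0.5)]      # expectation 2.0
G_atoms = [(2.0, 0.25), (4.0, 0.75)]    # expectation 3.5
print(d_F(convolve(F_atoms, G_atoms)))  # 5.5 = 2.0 + 3.5
```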

Theorem 2.2 Let (E,F) be a complete S-probabilistic metric space. Let T:E→E be a mapping satisfying the following condition:

F T x , T y (t)≥ F x , y ( t h ) ,∀x,y∈E,∀t∈R=(−∞,+∞),
(2.1)

where 0<h<1 is a constant. Then T has a unique fixed point x ∗ ∈E and for any given x 0 ∈E the iterative sequence x n + 1 =T x n converges to x ∗ . Further, the error estimate inequality

∫ 0 + ∞ td F T n x 0 , x ∗ (t)≤ h n 1 − h ∫ 0 + ∞ td F T x 0 , x 0 (t)

holds for all n≥1.

Proof For any x,y∈E, from (2.1) we have

d F ( T x , T y ) = ∫ 0 + ∞ t d F T x , T y ( t ) ≤ ∫ 0 + ∞ t d F x , y ( t h ) = h ∫ 0 + ∞ t h d F x , y ( t h ) = h ∫ 0 + ∞ u d F x , y ( u ) = h d F ( x , y ) .

For any given x 0 ∈E, define x n + 1 =T x n for all n=0,1,2,… . Observe that

d F ( x n , x n + m ) ≤ d F ( x n , x n + 1 ) + d F ( x n + 1 , x n + m ) ≤ d F ( x n , x n + 1 ) + d F ( x n + 1 , x n + 2 ) + d F ( x n + 2 , x n + m ) ≤ ( h n + h n + 1 + h n + 2 + ⋯ + h n + m − 1 ) d F ( x 0 , x 1 ) .
(2.2)

Since 0<h<1, we have

( h n + h n + 1 + h n + 2 + ⋯ + h n + m − 1 ) d F ( x 0 , x 1 )→0

as n→∞. Hence

∫ 0 + ∞ td F x n , x n + m (t)= d F ( x n , x n + m )→0

as n→∞. We claim that

lim n → ∞ F x n , x n + m (t)=H(t),∀t>0.
(2.3)

If not, there must exist numbers t 0 >0, 0< λ 0 <1, and subsequences { n k }, { m k } of {n} such that F x n k , x n k + m k ( t 0 )≤ λ 0 , for all k≥1. In this case, we have

d F ( x n k , x n k + m k ) = ∫ 0 + ∞ t d F x n k , x n k + m k ( t ) = ∫ 0 t 0 t d F x n k , x n k + m k ( t ) + ∫ t 0 + ∞ t d F x n k , x n k + m k ( t ) ≥ ∫ t 0 + ∞ t d F x n k , x n k + m k ( t ) ≥ t 0 ( 1 − F x n k , x n k + m k ( t 0 ) ) ≥ t 0 ( 1 − λ 0 ) > 0 .

This is a contradiction. From (2.3) we know { x n } is a Cauchy sequence in the complete S-probabilistic metric space (E,F). Hence there exists a point x ∗ ∈E such that { x n } converges to x ∗ in the sense that

lim n → ∞ F x n , x ∗ (t)=H(t),∀t≥0.

Therefore

lim n → ∞ F x n , T x ∗ (t)≥ lim n → ∞ F x n − 1 , x ∗ ( t h ) =H(t),∀t≥0.

We claim x ∗ is a fixed point of T, in fact, for any t>0, it follows from condition (SPM-3) that

F x ∗ , T x ∗ ( t ) ≥ ∫ 0 + ∞ F x ∗ , x n ( t − u ) d F x n , T x ∗ ( u ) ≥ ∫ 0 t 2 F x ∗ , x n ( t − u ) d F x n , T x ∗ ( u ) = F x ∗ , x n ( t 2 ) ( F x n , T x ∗ ( t 2 ) − 0 ) → 1

as n→∞, which implies F x ∗ , T x ∗ (t)=H(t), and hence x ∗ =T x ∗ . Thus x ∗ is a fixed point of T. If x ∗ ∗ is another fixed point of T, we observe that

F x ∗ , x ∗ ∗ (t)= F T x ∗ , T x ∗ ∗ (t)≥ F x ∗ , x ∗ ∗ ( t h ) ,

which implies F x ∗ , x ∗ ∗ (t)=H(t) for all t∈R, and hence x ∗ = x ∗ ∗ . Hence the fixed point of T is unique. Meanwhile, for any given x 0 , the iterative sequence x n = T n x 0 converges to x ∗ . Finally, we prove the error estimate formula. Letting m→∞ in inequality (2.2), we get

d F ( x n , x ∗ ) ≤ h n 1 − h d F ( x 0 , x 1 ),

which can be rewritten as the following error estimate formula:

∫ 0 + ∞ td F T n x 0 , x ∗ (t)≤ h n 1 − h ∫ 0 + ∞ td F T x 0 , x 0 (t).

This completes the proof. □
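In the degenerate case F x , y (t)=H(t−|x−y|) one has d F (x,y)=|x−y|, and Theorem 2.2 reduces to the Banach contraction principle with its classical error estimate. The following sketch (illustrative map and values are our own) verifies the estimate for Tx=x/2+1 with h=1/2 and fixed point x ∗ =2:

```python
# Sketch of Theorem 2.2 in the deterministic case F_{x,y}(t) = H(t - |x-y|),
# where d_F(x,y) = |x-y|. The map T(x) = x/2 + 1 is a probabilistic
# h-contraction with h = 1/2 and fixed point x* = 2; the error estimate
# reads |T^n x0 - x*| <= h**n / (1 - h) * |T x0 - x0|.

def iterate(T, x0, n):
    """Compute the n-th Picard iterate T^n x0."""
    x = x0
    for _ in range(n):
        x = T(x)
    return x

T = lambda x: x / 2.0 + 1.0
h, x0, x_star = 0.5, 10.0, 2.0

for n in range(1, 8):
    err = abs(iterate(T, x0, n) - x_star)
    bound = h ** n / (1.0 - h) * abs(T(x0) - x0)
    assert err <= bound + 1e-12
print("error estimate verified")
```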

Theorem 2.3 Let (E,F,△) be a complete Menger probabilistic metric space. Assume

△ ( F x , z ( t 2 ) , F z , y ( t 2 ) ) ≥ ∫ 0 + ∞ F x , z (t−u)d F z , y (u),
(2.4)

for all x,y,z∈E, t>0. Let T:E→E be a mapping satisfying the following condition:

F T x , T y (t)≥ F x , y ( t h ) ,∀x,y∈E,∀t>0,
(2.5)

where 0<h<1 is a constant. Then T has a unique fixed point x ∗ ∈E and for any given x 0 ∈E the iterative sequence x n + 1 =T x n converges to x ∗ . Further, the error estimate inequality

∫ 0 + ∞ td F T n x 0 , x ∗ (t)≤ h n 1 − h ∫ 0 + ∞ td F T x 0 , x 0 (t)

holds for all n≥1.

Proof From (2.4) we know that (E,F,△) is an S-probabilistic metric space. This, together with (2.5) and Theorem 2.2, proves the conclusion. □

3 Best proximity point theorems for contractions

We first define the notion of the P-operator P: B 0 → A 0 , which is very useful for the proofs of the best proximity point theorems. From the definitions of A 0 and B 0 , we know that for any given y∈ B 0 , there exists an element x∈ A 0 such that F x , y (t)= F A , B (t). Because (A,B) has the weak P-property, such an x is unique. We write x=Py and call P the P-operator from B 0 into A 0 .

Theorem 3.1 Let (E,F) be a complete S-probabilistic metric space. Let (A,B) be a pair of nonempty subsets in E and A 0 be a nonempty closed subset. Suppose (A,B) satisfies the weak P-property. Let T:A→B be a mapping satisfying the following condition:

F T x , T y (t)≥ F x , y ( t h ) ,∀x,y∈A,∀t>0,

where 0<h<1 is a constant. Assume that T( A 0 )⊂ B 0 . Then T has a unique best proximity point x ∗ ∈A and for any given x 0 ∈ A 0 the iterative sequence x n + 1 =PT x n converges to x ∗ . Further, the error estimate inequality

∫ 0 + ∞ td F ( P T ) n x 0 , x ∗ (t)≤ h n 1 − h ∫ 0 + ∞ td F P T x 0 , x 0 (t)

holds for all n≥1.

Proof Since the pair (A,B) has the weak P-property, we have

F P T x 1 , P T x 2 (t)≥ F T x 1 , T x 2 (t)≥ F x 1 , x 2 ( t h ) ,∀t>0,

for any x 1 , x 2 ∈ A 0 . This shows that PT: A 0 → A 0 is a contraction from the complete S-probabilistic metric subspace A 0 into itself. Using Theorem 2.2, we know that PT has a unique fixed point x ∗ and for any given x 0 ∈ A 0 the iterative sequence x n + 1 =PT x n converges to x ∗ . Further, the error estimate inequality

∫ 0 + ∞ td F ( P T ) n x 0 , x ∗ (t)≤ h n 1 − h ∫ 0 + ∞ td F P T x 0 , x 0 (t)

holds for all n≥1. Since PT x ∗ = x ∗ if and only if F x ∗ , T x ∗ (t)= F A , B (t), the point x ∗ is the unique best proximity point of T:A→B. This completes the proof. □
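The iteration x n + 1 =PT x n of Theorem 3.1 can be visualized in the degenerate metric case. As a hypothetical example, take A={0}×R and B={1}×R in the Euclidean plane, so that d(A,B)=1, A 0 =A, B 0 =B, and the pair trivially has the weak P-property; let T(0,y)=(1,y/2), a 1/2-contraction:

```python
# Sketch of the best proximity iteration x_{n+1} = P T x_n in the
# deterministic case, using two parallel lines in R^2 (a hypothetical
# example): A = {0} x R, B = {1} x R, so d(A,B) = 1 and every point of A
# is proximal (A_0 = A, B_0 = B). Take T(0, y) = (1, y/2), a 1/2-contraction.

def T(p):
    """Non-self mapping T : A -> B."""
    _, y = p
    return (1.0, y / 2.0)

def P(q):
    """P-operator B_0 -> A_0: the unique nearest point of A_0."""
    _, y = q
    return (0.0, y)

x = (0.0, 8.0)
for _ in range(30):
    x = P(T(x))
# The y-coordinate shrinks geometrically toward the best proximity
# point (0, 0), where d((0,0), T(0,0)) = 1 = d(A,B).
print(x)
```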

Theorem 3.2 Let (E,F,△) be a complete Menger probabilistic metric space. Assume that

△ ( F x , z ( t 2 ) , F z , y ( t 2 ) ) ≥ ∫ 0 + ∞ F x , z (t−u)d F z , y (u),
(3.1)

for all x,y,z∈E, t>0. Let (A,B) be a pair of nonempty subsets in E and A 0 be a nonempty closed subset. Suppose that (A,B) satisfies the weak P-property. Let T:A→B be a mapping satisfying the following condition:

F T x , T y (t)≥ F x , y ( t h ) ,∀x,y∈A,∀t>0,

where 0<h<1 is a constant. Assume T( A 0 )⊂ B 0 . Then T has a unique best proximity point x ∗ ∈A and for any given x 0 ∈ A 0 the iterative sequence x n + 1 =PT x n converges to x ∗ . Further, the error estimate inequality

∫ 0 + ∞ td F ( P T ) n x 0 , x ∗ (t)≤ h n 1 − h ∫ 0 + ∞ td F P T x 0 , x 0 (t)

holds for all n≥1.

Proof From (3.1) we know that (E,F,△) is an S-probabilistic metric space. By using Theorem 3.1, the conclusion is proved. □

4 Best proximity point theorem for Geraghty-contractions

First, we introduce the class Γ of those functions β:[0,+∞)→[0,1) satisfying the following condition:

β( t n )→1⇒ t n →0.

Definition 4.1 Let (E,F) be a probabilistic metric space. Let (A,B) be a pair of nonempty subsets in E. A mapping T:A→B is said to be a Geraghty-contraction if there exists β∈Γ such that

F T x , T y (t)≥ F x , y ( t β ( d F ( x , y ) ) ) ,∀x,y∈A,∀t>0,
(4.1)

where

d F (x,y)= ∫ 0 + ∞ td F x , y (t).

Theorem 4.2 Let (E,F) be a complete S-probabilistic metric space. Let (A,B) be a pair of nonempty subsets in E and A 0 be a nonempty closed subset. Suppose that (A,B) satisfies the weak P-property. Let T:A→B be a Geraghty-contraction. Assume T( A 0 )⊂ B 0 . Then T has a unique best proximity point x ∗ ∈A and for any given x 0 ∈ A 0 the iterative sequence x n + 1 =PT x n converges to x ∗ .

Proof From (4.1) and the weak P-property of (A,B), we get

d F ( P T x , P T y ) = ∫ 0 + ∞ t d F P T x , P T y ( t ) ≤ ∫ 0 + ∞ t d F T x , T y ( t ) ≤ ∫ 0 + ∞ t d F x , y ( t β ( d F ( x , y ) ) ) = β ( d F ( x , y ) ) ∫ 0 + ∞ t β ( d F ( x , y ) ) d F x , y ( t β ( d F ( x , y ) ) ) = β ( d F ( x , y ) ) d F ( x , y ) , ∀ x , y ∈ A 0 .
(4.2)

We proved in Theorem 2.1 that d F (⋅,⋅) is a metric on E. For any given x 0 ∈ A 0 , define x n + 1 =PT x n , n=0,1,2,… . From (4.2) we have

d F ( x n , x n + 1 ) = d F ( P T x n − 1 , P T x n ) ≤ d F ( T x n − 1 , T x n ) ≤ β ( d F ( x n − 1 , x n ) ) d F ( x n − 1 , x n ) < d F ( x n − 1 , x n ) .
(4.3)

Suppose that there exists n 0 such that d F ( x n 0 , x n 0 + 1 )=0. In this case, PT x n 0 = x n 0 , which implies that x n 0 is a best proximity point of T, and this is the desired result. In the contrary case, suppose that d F ( x n , x n + 1 )>0 for every n≥0. By (4.3), { d F ( x n , x n + 1 )} is a decreasing sequence of nonnegative real numbers, and hence there exists r≥0 such that lim n → ∞ d F ( x n , x n + 1 )=r. In the sequel, we prove that r=0. Assume r>0; then from (4.3) we have

0< d F ( x n , x n + 1 ) d F ( x n − 1 , x n ) ≤β ( d F ( x n − 1 , x n ) ) <1

for all n≥0. The last inequality implies that lim n → ∞ β( d F ( x n − 1 , x n ))=1 and, since β∈Γ, we obtain r=0, which contradicts our assumption. Therefore,

lim n → ∞ d F ( x n , x n + 1 )=0.
(4.4)

In what follows, we prove that { x n } is a Cauchy sequence in metric space (E, d F (â‹…,â‹…)). In the contrary case, there exist two subsequences { x n k }, { x m k } such that

lim k → ∞ d F ( x n k , x m k )>0.
(4.5)

Without loss of generality, we still denote these subsequences by { x n }, { x m }. By the triangle inequality,

d F ( x n , x m ) ≤ d F ( x n , x n + 1 ) + d F ( x n + 1 , x m + 1 ) + d F ( x m + 1 , x m ) ≤ d F ( x n , x n + 1 ) + d F ( P T x n , P T x m ) + d F ( x m + 1 , x m ) ≤ d F ( x n , x n + 1 ) + d F ( T x n , T x m ) + d F ( x m + 1 , x m ) ≤ d F ( x n , x n + 1 ) + β ( d F ( x n , x m ) ) d F ( x n , x m ) + d F ( x m + 1 , x m ) ,

which implies

d F ( x n , x m )≤ 1 1 − β ( d F ( x n , x m ) ) ( d F ( x n , x n + 1 ) + d F ( x m + 1 , x m ) ) .

The last inequality together with (4.4) and (4.5) gives

lim n , m → ∞ 1 1 − β ( d F ( x n , x m ) ) =∞.

Therefore,

lim n , m → ∞ β ( d F ( x n , x m ) ) =1.

Since β∈Γ, we get

lim n , m → ∞ d F ( x n , x m )=0.

This contradicts (4.5). Hence lim n , m → ∞ d F ( x n , x m )=0, and { x n } is a Cauchy sequence in the metric space (E, d F (⋅,⋅)). By the same method as in Theorem 2.2, we know

lim n , m → ∞ F x n , x m (t)=H(t),∀t∈R.

This shows that { x n } is also a Cauchy sequence in the S-probabilistic metric space (E,F). Since (E,F) is complete, there exists a point x ∗ ∈E such that x n → x ∗ as n→∞. By the same method as in Theorem 2.2, we know that x ∗ is the unique fixed point of the mapping PT: A 0 → A 0 . That is, PT x ∗ = x ∗ , which is equivalent to x ∗ being the unique best proximity point of T. This completes the proof. □
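A deterministic sketch of Definition 4.1: on E=[0,+∞) with the ordinary distance, the map Tx=x/(1+x) satisfies the Geraghty condition with β(t)=1/(1+t)∈Γ (this concrete choice of T and β is ours, for illustration only):

```python
# Deterministic sketch of a Geraghty contraction: on E = [0, inf) with
# d(x, y) = |x - y|, the map T(x) = x / (1 + x) satisfies
#   d(Tx, Ty) <= beta(d(x, y)) * d(x, y)   with beta(t) = 1 / (1 + t),
# and beta(t_n) -> 1 forces t_n -> 0, so beta belongs to the class Gamma.

beta = lambda t: 1.0 / (1.0 + t)
T = lambda x: x / (1.0 + x)

# Verify the contraction inequality on a sample of pairs.
pairs = [(0.3, 4.0), (1.0, 1.5), (0.0, 10.0), (2.0, 7.0)]
ok = all(abs(T(x) - T(y)) <= beta(abs(x - y)) * abs(x - y) + 1e-12
         for x, y in pairs)
print(ok)  # True

# The Picard iterates converge (slowly: x_n = x_0 / (1 + n x_0)) to the
# fixed point 0.
x = 5.0
for _ in range(1000):
    x = T(x)
print(x < 1e-2)  # True
```

Note that T is not a Banach contraction for any fixed h<1, which is exactly why the Geraghty class β∈Γ is needed.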

References

  1. Menger K: Statistical metrics. Proc. Natl. Acad. Sci. USA 1942, 28: 535–537. 10.1073/pnas.28.12.535

  2. Sehgal VM: Some fixed point theorems in functional analysis and probability. Ph.D. thesis, Wayne State University, Detroit, MI (1966)

  3. Sehgal VM, Bharucha-Reid AT: Fixed points of contraction mappings on PM-spaces. Math. Syst. Theory 1972, 6: 97–102. 10.1007/BF01706080

  4. Cho YJ, Park KS, Chang SS: Fixed point theorems in metric spaces and probabilistic metric spaces. Int. J. Math. Math. Sci. 1996, 19: 243–252. 10.1155/S0161171296000348

  5. Ćirić LB: Some new results for Banach contractions and Edelstein contractive mappings on fuzzy metric spaces. Chaos Solitons Fractals 2009, 42: 146–154. 10.1016/j.chaos.2008.11.010

  6. Ćirić LB, Mihet D, Saadati R: Monotone generalized contractions in partially ordered probabilistic metric spaces. Topol. Appl. 2009, 156: 2838–2844. 10.1016/j.topol.2009.08.029

  7. Fang JX: Common fixed point theorems of compatible and weakly compatible maps in Menger spaces. Nonlinear Anal. 2009, 71: 1833–1843. 10.1016/j.na.2009.01.018

  8. Hadžić O, Pap E: Fixed Point Theory in Probabilistic Metric Spaces. Kluwer Academic, Dordrecht; 2001.

  9. Kamran T: Common fixed points theorems for fuzzy mappings. Chaos Solitons Fractals 2008, 38: 1378–1382. 10.1016/j.chaos.2008.04.031

  10. Liu Y, Li Z: Coincidence point theorems in probabilistic and fuzzy metric spaces. Fuzzy Sets Syst. 2007, 158: 58–70. 10.1016/j.fss.2006.07.010

  11. Mihet D: Altering distances in probabilistic Menger spaces. Nonlinear Anal. 2009, 71: 2734–2738. 10.1016/j.na.2009.01.107

  12. Mihet D: Fixed point theorems in probabilistic metric spaces. Chaos Solitons Fractals 2009, 41(2):1014–1019. 10.1016/j.chaos.2008.04.030

  13. Mihet D: A note on a paper of Hicks and Rhoades. Nonlinear Anal. 2006, 65: 1411–1413. 10.1016/j.na.2005.10.021

  14. O’Regan D, Saadati R: Nonlinear contraction theorems in probabilistic spaces. Appl. Math. Comput. 2008, 195: 86–93. 10.1016/j.amc.2007.04.070

  15. Saadati R, Sedghi S, Shobe N: Modified intuitionistic fuzzy metric spaces and some fixed point theorems. Chaos Solitons Fractals 2006, 38: 36–47.

  16. Schweizer B, Sklar A: Probabilistic Metric Spaces. Elsevier, Amsterdam; 1983.

  17. Sedghi S, Žikić-Došenović S, Shobe N: Common fixed point theorems in Menger probabilistic quasimetric spaces. Fixed Point Theory Appl. 2009., 2009: Article ID 546273

  18. Hadžić O: Fixed point theorems for multivalued mappings in probabilistic metric spaces. Fuzzy Sets Syst. 1997, 88: 219–226. 10.1016/S0165-0114(96)00072-3

  19. Sadiq Basha S: Best proximity point theorems generalizing the contraction principle. Nonlinear Anal. 2011, 74: 5844–5850. 10.1016/j.na.2011.04.017

  20. Mongkolkeha C, Cho YJ, Kumam P: Best proximity points for Geraghty’s proximal contraction mappings. Fixed Point Theory Appl. 2013., 2013: Article ID 180

  21. Dugundji J, Granas A: Weakly contractive mappings and elementary domain invariance theorem. Bull. Soc. Math. Grèce (N.S.) 1978, 19: 141–151.

  22. Alghamdi MA, Alghamdi MA, Shahzad N: Best proximity point results in geodesic metric spaces. Fixed Point Theory Appl. 2013., 2013: Article ID 164

  23. Nashine HK, Kumam P, Vetro C: Best proximity point theorems for rational proximal contractions. Fixed Point Theory Appl. 2013., 2013: Article ID 95

  24. Karapınar E: On best proximity point of ψ-Geraghty contractions. Fixed Point Theory Appl. 2013., 2013: Article ID 200

  25. Zhang JL, Su Y, Cheng Q: Best proximity point theorems for generalized contractions in partially ordered metric spaces. Fixed Point Theory Appl. 2013., 2013: Article ID 83

  26. Zhang JL, Su Y, Cheng Q: A note on ‘A best proximity point theorem for Geraghty-contractions’. Fixed Point Theory Appl. 2013., 2013: Article ID 99

  27. Zhang JL, Su Y: Best proximity point theorems for weakly contractive mapping and weakly Kannan mapping in partial metric spaces. Fixed Point Theory Appl. 2014., 2014: Article ID 50

  28. Amini-Harandi A, Hussain N, Akbar F: Best proximity point results for generalized contractions in metric spaces. Fixed Point Theory Appl. 2013., 2013: Article ID 164

  29. Kannan R: Some results on fixed points. Bull. Calcutta Math. Soc. 1968, 60: 71–76.

  30. Caballero J, Harjani J, Sadarangani K: A best proximity point theorem for Geraghty-contractions. Fixed Point Theory Appl. 2012. 10.1186/1687-1812-2012-231

  31. Kirk WA, Reich S, Veeramani P: Proximinal retracts and best proximity pair theorems. Numer. Funct. Anal. Optim. 2003, 24: 851–862. 10.1081/NFA-120026380


Acknowledgements

This project is supported by the National Natural Science Foundation of China under grant (11071279).

Author information


Correspondence to Yongfu Su.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cite this article

Su, Y., Zhang, J. Fixed point and best proximity point theorems for contractions in new class of probabilistic metric spaces. Fixed Point Theory Appl 2014, 170 (2014). https://doi.org/10.1186/1687-1812-2014-170