
New generalized pseudodistance and coincidence point theorem in a b-metric space

Abstract

In this paper, in a b-metric space, we introduce the concept of b-generalized pseudodistances, which are an extension of the b-metric. Next, inspired by the ideas of Singh and Prasad, we define a new contractive condition with respect to this b-generalized pseudodistance and establish a condition guaranteeing the existence of coincidence points for four mappings. Examples illustrating the main result are given. The paper also includes a comparison of our result with those existing in the literature.

MSC:47H10, 54H25, 54E50, 54E35, 45M20.

1 Introduction

The study of existence and uniqueness problems by iterative approximation originates from the work of Banach [1] concerning contractive maps.

Theorem 1.1 Let (X,d) be a complete metric space and let T:X→X. If

(B) (Banach [1]) ∃ 0≤λ<1 ∀ x,y∈X {d(T(x),T(y)) ≤ λd(x,y)},

then: (a) T has a unique fixed point w in X; and (b) ∀ w_0∈X {lim_{m→∞} T^{[m]}(w_0) = w}.

The Banach [1] result was an important tool to solve the following equation:

T(x)=x,
(1.1)

where T:X→X and x∈X. If we replace the identity map on X with some map S:X→X on the right-hand side of equation (1.1), then we obtain the following equation:

T(x)=S(x).
(1.2)

Equation (1.2) is called a coincidence point equation and plays a very important role in many physical formulations. To solve equation (1.2), we can use the Jungck [2, 3] iterative procedure, i.e.,

S(x_{n+1}) = T(x_n), n = 0,1,2,… .
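For intuition only, the Jungck procedure can be sketched numerically whenever S is invertible on the range of T; the helper jungck_iteration below and the toy maps T(x)=cos(x), S(x)=2x are illustrative assumptions of this sketch, not part of the results discussed here.

import math

def jungck_iteration(T, S_inverse, x0, tol=1e-12, max_iter=1000):
    # Jungck scheme: S(x_{n+1}) = T(x_n), i.e. x_{n+1} = S^{-1}(T(x_n)),
    # assuming (for this sketch) that S is invertible on the range of T.
    x = x0
    for _ in range(max_iter):
        x_next = S_inverse(T(x))
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Toy coincidence equation T(x) = S(x) with T(x) = cos(x) and S(x) = 2x,
# so S^{-1}(y) = y/2; the iteration approximates a point v with 2v = cos(v).
v = jungck_iteration(math.cos, lambda y: y / 2.0, x0=0.0)
print(v, 2 * v - math.cos(v))  # the residual is close to 0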

In 1998, Czerwik [4] introduced the following definition of a b-metric space.

Definition 1.1 Let X be a nonempty set and let s≥1 be a given real number. A function d:X×X→[0,∞) is a b-metric if the following three conditions are satisfied:

  1. (d1)

    ∀ x,y∈X {d(x,y)=0 ⟺ x=y};

  2. (d2)

    ∀ x,y∈X {d(x,y)=d(y,x)}; and

  3. (d3)

    ∀ x,y,z∈X {d(x,z) ≤ s[d(x,y)+d(y,z)]}.

The pair (X,d) is called a b-metric space (with constant s≥1). It is easy to see that each metric space is a b-metric space.
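As a quick illustration (this concrete choice reappears in Examples 4.2 and 4.3 below), d(x,y)=|x−y|^2 on a real interval is a b-metric with s=2 which is not a metric. The following is a minimal numerical sanity check of (d3), given as an illustrative sketch rather than a proof:

import random

def d(x, y):
    # d(x, y) = |x - y|^2 is a b-metric with constant s = 2; it fails the
    # ordinary triangle inequality, e.g. d(0, 2) = 4 > d(0, 1) + d(1, 2) = 2.
    return abs(x - y) ** 2

s = 2
for _ in range(10000):
    x, y, z = (random.uniform(1, 5) for _ in range(3))
    # relaxed triangle inequality (d3): d(x, z) <= s * [d(x, y) + d(y, z)]
    assert d(x, z) <= s * (d(x, y) + d(y, z)) + 1e-12
print("(d3) holds with s = 2 on all sampled triples")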

In 2008, Singh and Prasad [5] introduced and established the following interesting and important coincidence point theorem for four maps in a b-metric space.

Theorem 1.2 Let (X,d) be a b-metric space (with s≥1), where d:X×X→[0,∞) is continuous on X^2, let Y⊂X, and let A,B,S,T:Y→X be such that T(Y)⊂B(Y), A(Y)⊂S(Y) and the following condition holds: there exists q∈(0,1) such that qs<1 and λs<1 (where λ=max{q, qs/(2−qs)}) and such that for all x,y∈Y, we have

d(T(x),A(y)) ≤ q max{d(S(x),B(y)), d(S(x),T(x)), d(B(y),A(y)), [d(S(x),A(y)) + d(B(y),T(x))]/2}.
(1.3)

If one of the images A(Y), B(Y), S(Y) or T(Y) is a complete subspace of X, then:

  1. (i)

    T and S have a coincidence point, i.e., there exists v∈Y such that S(v)=T(v);

  2. (ii)

    A and B have a coincidence point, i.e., there exists w∈Y such that B(w)=A(w).

It is worth noticing that condition (1.3) is a generalization of the following conditions, which are known in the literature:

∃ q∈(0,1) ∀ x,y∈Y {d(T(x),T(y)) ≤ q d(S(x),S(y))};
(1.4)

and

∃ q∈(0,1) ∀ x,y∈Y {d(T(x),T(y)) ≤ q max{d(S(x),S(y)), d(S(x),T(x)), d(S(y),T(y)), [d(S(x),T(y)) + d(S(y),T(x))]/2}}.
(1.5)

In the literature, the pair of maps S:Y→X, T:Y→X satisfying (1.4) is called the Jungck contraction, and q is called the Jungck constant. Condition (1.5) with Y=X and S=id (the identity map on X) was considered by Rhoades [6].

On the other hand, Banach’s famous result has found extensive applications in many fields of mathematics and applied mathematics and has been extended in many different directions. One of these directions was to replace the metric d by more general maps. In complete metric spaces (X,d), w-distances [7] and τ-distances [8] have been found to have substantial applications in fixed point theory. Due to them, some generalizations of Banach contractions have been introduced. Many interesting extensions of Theorem 1.1 to w-distance and τ-distance settings have been given, and techniques based on these distances have been presented (see, for example, [8, 9]). It is worth noticing that τ-distances generalize w-distances and metrics d. In 2006, Włodarczyk and Plebaniak [10] introduced the concepts of J-families of generalized pseudodistances in uniform spaces, which generalize the distances of Tataru [11], the w-distances of Kada et al. [7], the τ-distances of Suzuki [12, Section 2] and the τ-functions of Lin and Du [13] in metric spaces, and the distances of Vályi [14] in uniform spaces. These distances were further studied in [15–17].

The main interest of this paper is the following.

Question 1.1 Do a new kind of asymmetric distance (extending the b-metric) on b-metric spaces and a new kind of completeness of b-metric spaces exist?

Question 1.2 Does a new kind of contraction of type (1.3) with respect to these new distances exist?

The answer is affirmative. In this paper, in a b-metric space, we introduce the concept of b-generalized pseudodistances, which are an extension of the b-metric. Next, inspired by the ideas of Singh and Prasad [5], we define a new contractive condition of type (1.3) with respect to this b-generalized pseudodistance and establish a condition guaranteeing the existence of coincidence points for four mappings. Examples illustrating the main result are given. The paper also includes a comparison of our result with those existing in the literature.

2 On generalized pseudodistance, b-generalized pseudodistance and admissible b-generalized pseudodistance in b-metric spaces

In the rest of the paper, we assume that the b-metric d:X×X→[0,∞) is continuous on X^2. At the very beginning, in a b-metric space, we introduce the concept of a b-generalized pseudodistance, which is an essential generalization of the b-metric.

Definition 2.1 Let X be a b-metric space (with constant s≥1). The map J:X×X→[0,∞) is said to be a generalized pseudodistance on X if the following two conditions hold:

  1. (J1)

    ∀ x,y,z∈X {J(x,z) ≤ J(x,y)+J(y,z)}; and

  2. (J2)

    For any sequences (x_m : m∈N) and (y_m : m∈N) in X such that

    lim_{n→∞} sup_{m>n} J(x_n, x_m) = 0
    (2.1)

    and

    lim_{m→∞} J(x_m, y_m) = 0,
    (2.2)

    we have

    lim_{m→∞} d(x_m, y_m) = 0.
    (2.3)

Definition 2.2 Let X be a b-metric space (with s≥1). The map J:X×X→[0,∞) is called a b-generalized pseudodistance on X if the conditions (J1′) and (J2) hold, where

(J1′) ∀ x,y,z∈X {J(x,y) ≤ s[J(x,z)+J(z,y)]}.

Now, we introduce the following notation. Let X be a b-metric space (with s≥1), and let J:X×X→[0,∞) be a b-generalized pseudodistance on X. Then

X_J^0 = {x∈X : J(x,x)=0} and X_J^+ = {x∈X : J(x,x)>0}.
(2.4)

Then, of course, X = X_J^0 ∪ X_J^+.

Remark 2.1 (A) If (X,d) is a b-metric space (with s≥1), then the b-metric d:X×X→[0,∞) is a b-generalized pseudodistance on X. However, there exists a b-generalized pseudodistance on X which is not a b-metric (for details, see Example 4.1).

(B) It is clear that if the map J is a generalized pseudodistance on X, then J is a b-generalized pseudodistance on X (for s=1).

(C) From (J1′) and (J2) it follows that if x≠y, x,y∈X, then

J(x,y)>0 ∨ J(y,x)>0.

Indeed, if J(x,y)=0 and J(y,x)=0, then J(x,x)=0, since by (J1′) we get J(x,x) ≤ s[J(x,y)+J(y,x)] = s[0+0] = 0. Now, defining (x_m = x : m∈N) and (y_m = y : m∈N), we conclude that (2.1) and (2.2) hold. Consequently, by (J2), we get (2.3), which implies d(x,y)=0. However, since x≠y, we have d(x,y)≠0, a contradiction.

Now, we apply a b-generalized pseudodistance to establish a new kind of completeness, which is an extension of natural sequential completeness.

Definition 2.3 Let (X,d) be a b-metric space (with s≥1), and let the map J:X×X→[0,∞) be a b-generalized pseudodistance on X. We say that X is J-complete if for every sequence (x_m : m∈N) in X such that

lim_{n→∞} sup_{m>n} J(x_n, x_m) = 0,

there exists x∈X such that

lim_{m→∞} J(x_m, x) = lim_{m→∞} J(x, x_m) = 0.

Remark 2.2 It is worth noticing that if J=d, then by (d2) the definitions of J-completeness and completeness are identical.

Definition 2.4 Let (X,d) be a b-metric space (with s≥1), and let the map J:X×X→[0,∞) be a b-generalized pseudodistance on X. We say that the map J is admissible if for all sequences (x_m : m∈N) and (y_m : m∈N) and points x,y∈X such that: (i) condition (2.1) holds for both sequences, i.e., lim_{n→∞} sup_{m>n} J(x_n, x_m) = 0 and lim_{n→∞} sup_{m>n} J(y_n, y_m) = 0; and (ii) lim_{m→∞} d(x_m, x) = lim_{m→∞} d(y_m, y) = 0; the following property is true:

lim_{m→∞} J(x_m, y_m) = J(x,y).

Remark 2.3 (A) It is clear that if x∈X_J^0, then by (2.4), for the constant sequence (x_m = x : m∈N), we have that

lim_{n→∞} sup_{m>n} J(x_n, x_m) = lim_{n→∞} sup_{m>n} J(x,x) = 0.

(B) Let x∈X_J^0 be arbitrary and fixed, and let (x_m = x : m∈N). Then, of course, by (d1) we obtain lim_{m→∞} d(x_m, x) = 0. Next, from (A) and Definition 2.4 it follows that if a sequence (y_m : m∈N) satisfies the following conditions: (i) lim_{n→∞} sup_{m>n} J(y_n, y_m) = 0; and (ii) lim_{m→∞} d(y_m, y) = 0, then lim_{m→∞} J(x_m, y_m) = J(x,y). Moreover, similarly we can obtain that lim_{m→∞} J(y_m, x_m) = J(y,x).

Remark 2.4 It is worth noticing that if (X,d) is a b-metric space (with s≥1), then the b-metric d:X×X→[0,∞) is an admissible b-generalized pseudodistance on X.

Definition 2.5 Let (X,d) be a b-metric space (with s≥1), Y⊂X. Let T:Y→X and S:Y→X be single-valued maps. A point z∈Y is called a coincidence point of T and S if T(z)=S(z)=u for some u∈X.

The main result of this paper is the following coincidence theorem.

Theorem 2.1 Let (X,d) be a b-metric space (with s≥1), Y⊂X, and let the map J:X×X→[0,∞) be an admissible b-generalized pseudodistance on X. Let A,B,S,T:Y→X be such that T(Y)⊂B(Y), A(Y)⊂S(Y). Let T(Y)⊂X_J^0 and A(Y)⊂X_J^0, and assume that the following condition holds: there exists q∈(0,1) such that λs<1 (where λ=max{q, qs/(2−qs)}) and such that for all x,y∈Y we have

max{J(T(x),A(y)), J(A(y),T(x))} ≤ q max{min{J(S(x),B(y)), J(B(y),S(x))}, J(S(x),T(x)), J(B(y),A(y)), [J(S(x),A(y)) + J(B(y),T(x))]/2}.
(2.5)

If one of the images of Y under the mapping A, B, S or T is a J-complete subspace of X, then:

  1. (i)

    T and S have a coincidence point v∈Y;

  2. (ii)

    A and B have a coincidence point z∈Y.

Moreover, for each w_0∈Y, if we define the sequences (w_n : n∈{0}∪N) and (v_n : n∈{0}∪N) such that for each n∈N we get

v_{2n-1} = B(w_{2n-1}) = T(w_{2n-2}) and v_{2n} = S(w_{2n}) = A(w_{2n-1}),

then the sequence (v_n : n∈{0}∪N) is convergent to u (i.e., lim_{n→∞} d(v_n, u) = 0), where

u=T(v)=S(v)=A(z)=B(z).
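For illustration only, the sequences (w_n) and (v_n) appearing in the theorem can be generated as below whenever preimages under B and S can be selected; the helpers preimage_B and preimage_S are hypothetical selection functions assumed for this sketch, and any choice of preimages is admissible because T(Y)⊂B(Y) and A(Y)⊂S(Y).

def build_sequences(T, A, S, B, preimage_B, preimage_S, w0, steps=10):
    # Constructs w_0, w_1, ..., w_{2*steps} and v_1, v_2, ..., v_{2*steps}
    # following (3.3): v_{2n-1} = B(w_{2n-1}) = T(w_{2n-2}),
    #                  v_{2n}   = S(w_{2n})   = A(w_{2n-1}).
    w, v = [w0], [None]                      # v[0] is a placeholder
    for n in range(1, steps + 1):
        v_odd = T(w[2 * n - 2])              # v_{2n-1} = T(w_{2n-2})
        w.append(preimage_B(v_odd))          # pick w_{2n-1} with B(w_{2n-1}) = v_{2n-1}
        v.append(v_odd)
        v_even = A(w[2 * n - 1])             # v_{2n} = A(w_{2n-1})
        w.append(preimage_S(v_even))         # pick w_{2n} with S(w_{2n}) = v_{2n}
        v.append(v_even)
    return w, v

Any fixed selection rule for the preimages yields an instance of the scheme (3.3) used in the proof below.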

3 Proof of Theorem 2.1

Before starting the proof of Theorem 2.1, we present a simple consequence of property (2.5) and prove an auxiliary lemma. First, we can see that (2.5) implies that

J(T(x),A(y)) ≤ q max{min{J(S(x),B(y)), J(B(y),S(x))}, J(S(x),T(x)), J(B(y),A(y)), [J(S(x),A(y)) + J(B(y),T(x))]/2}
(3.1)

and

J(A(y),T(x)) ≤ q max{min{J(S(x),B(y)), J(B(y),S(x))}, J(S(x),T(x)), J(B(y),A(y)), [J(S(x),A(y)) + J(B(y),T(x))]/2}.
(3.2)

Lemma 3.1 Let w_0∈Y be arbitrary and fixed. Then, for the sequences (w_n : n∈{0}∪N) and (v_n : n∈{0}∪N) defined as follows:

∀ n∈N {v_{2n-1} = B(w_{2n-1}) = T(w_{2n-2}) ∧ v_{2n} = S(w_{2n}) = A(w_{2n-1})},
(3.3)

we have

λ = max{q, qs/(2−qs)} < 1 ∧ ∀ n∈N {J(v_{n+1}, v_{n+2}) ≤ λ J(v_n, v_{n+1})}.

Proof For fixed n∈N, by (3.3) and (3.1) (for x = w_{2n-2} and y = w_{2n-1}), we obtain

J(v_{2n-1}, v_{2n}) = J(T(w_{2n-2}), A(w_{2n-1})) ≤ q max{min{J(S(w_{2n-2}), B(w_{2n-1})), J(B(w_{2n-1}), S(w_{2n-2}))}, J(S(w_{2n-2}), T(w_{2n-2})), J(B(w_{2n-1}), A(w_{2n-1})), [J(S(w_{2n-2}), A(w_{2n-1})) + J(B(w_{2n-1}), T(w_{2n-2}))]/2} = q max{min{J(v_{2n-2}, v_{2n-1}), J(v_{2n-1}, v_{2n-2})}, J(v_{2n-2}, v_{2n-1}), J(v_{2n-1}, v_{2n}), [J(v_{2n-2}, v_{2n}) + J(v_{2n-1}, v_{2n-1})]/2}.
(3.4)

Now, since by (3.3), v_{2n-1} = T(w_{2n-2}) and by assumption T(Y)⊂X_J^0, we have v_{2n-1}∈X_J^0 (i.e., J(v_{2n-1}, v_{2n-1}) = 0). Consequently, by (3.4) we get

J(v_{2n-1}, v_{2n}) ≤ q max{min{J(v_{2n-2}, v_{2n-1}), J(v_{2n-1}, v_{2n-2})}, J(v_{2n-2}, v_{2n-1}), J(v_{2n-1}, v_{2n}), [J(v_{2n-2}, v_{2n}) + J(v_{2n-1}, v_{2n-1})]/2} = q max{min{J(v_{2n-2}, v_{2n-1}), J(v_{2n-1}, v_{2n-2})}, J(v_{2n-2}, v_{2n-1}), J(v_{2n-1}, v_{2n}), J(v_{2n-2}, v_{2n})/2}.
(3.5)

Let us consider the following three cases.

Case 1. If

max{min{J(v_{2n-2}, v_{2n-1}), J(v_{2n-1}, v_{2n-2})}, J(v_{2n-2}, v_{2n-1}), J(v_{2n-1}, v_{2n}), J(v_{2n-2}, v_{2n})/2} = min{J(v_{2n-2}, v_{2n-1}), J(v_{2n-1}, v_{2n-2})}

or

max{min{J(v_{2n-2}, v_{2n-1}), J(v_{2n-1}, v_{2n-2})}, J(v_{2n-2}, v_{2n-1}), J(v_{2n-1}, v_{2n}), J(v_{2n-2}, v_{2n})/2} = J(v_{2n-2}, v_{2n-1}),

then, since min{J(v_{2n-2}, v_{2n-1}), J(v_{2n-1}, v_{2n-2})} ≤ J(v_{2n-2}, v_{2n-1}), in both situations, by (3.5) we obtain

J(v_{2n-1}, v_{2n}) ≤ q J(v_{2n-2}, v_{2n-1}).
(3.6)

Case 2. If

max{min{J(v_{2n-2}, v_{2n-1}), J(v_{2n-1}, v_{2n-2})}, J(v_{2n-2}, v_{2n-1}), J(v_{2n-1}, v_{2n}), J(v_{2n-2}, v_{2n})/2} = J(v_{2n-1}, v_{2n}),

then by (3.5) we have

J(v_{2n-1}, v_{2n}) ≤ q J(v_{2n-1}, v_{2n}) < J(v_{2n-1}, v_{2n}),

which gives

J(v_{2n-1}, v_{2n}) = 0.
(3.7)

Case 3. If

max{min{J(v_{2n-2}, v_{2n-1}), J(v_{2n-1}, v_{2n-2})}, J(v_{2n-2}, v_{2n-1}), J(v_{2n-1}, v_{2n}), J(v_{2n-2}, v_{2n})/2} = J(v_{2n-2}, v_{2n})/2,

then by (3.5) and (J1′) we get

J(v_{2n-1}, v_{2n}) ≤ q J(v_{2n-2}, v_{2n})/2 ≤ qs[J(v_{2n-2}, v_{2n-1}) + J(v_{2n-1}, v_{2n})]/2 = (qs/2) J(v_{2n-2}, v_{2n-1}) + (qs/2) J(v_{2n-1}, v_{2n}).

Hence

J(v_{2n-1}, v_{2n}) − (qs/2) J(v_{2n-1}, v_{2n}) ≤ (qs/2) J(v_{2n-2}, v_{2n-1}),

which gives

[(2−qs)/2] J(v_{2n-1}, v_{2n}) ≤ (qs/2) J(v_{2n-2}, v_{2n-1}).

In consequence, we obtain

J(v_{2n-1}, v_{2n}) ≤ [qs/(2−qs)] J(v_{2n-2}, v_{2n-1}).
(3.8)

Conditions (3.6)-(3.8) imply that

J(v_{2n-1}, v_{2n}) ≤ max{q, qs/(2−qs)} J(v_{2n-2}, v_{2n-1}).
(3.9)

Similarly, for fixed n∈N, by (3.3) and (3.2) (for x = w_{2n} and y = w_{2n-1}), we obtain

J(v_{2n}, v_{2n+1}) = J(A(w_{2n-1}), T(w_{2n})) ≤ q max{min{J(S(w_{2n}), B(w_{2n-1})), J(B(w_{2n-1}), S(w_{2n}))}, J(S(w_{2n}), T(w_{2n})), J(B(w_{2n-1}), A(w_{2n-1})), [J(S(w_{2n}), A(w_{2n-1})) + J(B(w_{2n-1}), T(w_{2n}))]/2} = q max{min{J(v_{2n}, v_{2n-1}), J(v_{2n-1}, v_{2n})}, J(v_{2n}, v_{2n+1}), J(v_{2n-1}, v_{2n}), [J(v_{2n}, v_{2n}) + J(v_{2n-1}, v_{2n+1})]/2}.
(3.10)

Now, since by (3.3), v_{2n} = A(w_{2n-1}) and by assumption A(Y)⊂X_J^0, we have v_{2n}∈X_J^0 (i.e., J(v_{2n}, v_{2n}) = 0). Consequently, since

[J(v_{2n}, v_{2n}) + J(v_{2n-1}, v_{2n+1})]/2 = J(v_{2n-1}, v_{2n+1})/2,

by (3.10) we get

J(v_{2n}, v_{2n+1}) ≤ q max{min{J(v_{2n}, v_{2n-1}), J(v_{2n-1}, v_{2n})}, J(v_{2n}, v_{2n+1}), J(v_{2n-1}, v_{2n}), J(v_{2n-1}, v_{2n+1})/2}.
(3.11)

We will consider the following three cases.

Case 1. If

max{min{J(v_{2n}, v_{2n-1}), J(v_{2n-1}, v_{2n})}, J(v_{2n}, v_{2n+1}), J(v_{2n-1}, v_{2n}), J(v_{2n-1}, v_{2n+1})/2} = min{J(v_{2n}, v_{2n-1}), J(v_{2n-1}, v_{2n})}

or

max{min{J(v_{2n}, v_{2n-1}), J(v_{2n-1}, v_{2n})}, J(v_{2n}, v_{2n+1}), J(v_{2n-1}, v_{2n}), J(v_{2n-1}, v_{2n+1})/2} = J(v_{2n-1}, v_{2n}),

then, since min{J(v_{2n}, v_{2n-1}), J(v_{2n-1}, v_{2n})} ≤ J(v_{2n-1}, v_{2n}), in both situations, by (3.11) we obtain

J(v_{2n}, v_{2n+1}) ≤ q J(v_{2n-1}, v_{2n}).
(3.12)

Case 2. If

max{min{J(v_{2n}, v_{2n-1}), J(v_{2n-1}, v_{2n})}, J(v_{2n}, v_{2n+1}), J(v_{2n-1}, v_{2n}), J(v_{2n-1}, v_{2n+1})/2} = J(v_{2n}, v_{2n+1}),

then by (3.11) we have

J(v_{2n}, v_{2n+1}) ≤ q J(v_{2n}, v_{2n+1}) < J(v_{2n}, v_{2n+1}),

which gives

J(v_{2n}, v_{2n+1}) = 0.
(3.13)

Case 3. If

max{min{J(v_{2n}, v_{2n-1}), J(v_{2n-1}, v_{2n})}, J(v_{2n}, v_{2n+1}), J(v_{2n-1}, v_{2n}), J(v_{2n-1}, v_{2n+1})/2} = J(v_{2n-1}, v_{2n+1})/2,

then, by (3.11) and (J1′), we get

J(v_{2n}, v_{2n+1}) ≤ q J(v_{2n-1}, v_{2n+1})/2 ≤ qs[J(v_{2n-1}, v_{2n}) + J(v_{2n}, v_{2n+1})]/2 = (qs/2) J(v_{2n-1}, v_{2n}) + (qs/2) J(v_{2n}, v_{2n+1}).

Hence

J(v_{2n}, v_{2n+1}) − (qs/2) J(v_{2n}, v_{2n+1}) ≤ (qs/2) J(v_{2n-1}, v_{2n}),

which gives

[(2−qs)/2] J(v_{2n}, v_{2n+1}) ≤ (qs/2) J(v_{2n-1}, v_{2n}).

In consequence, we obtain

J(v_{2n}, v_{2n+1}) ≤ [qs/(2−qs)] J(v_{2n-1}, v_{2n}).
(3.14)

Conditions (3.12)-(3.14) imply that

J(v_{2n}, v_{2n+1}) ≤ max{q, qs/(2−qs)} J(v_{2n-1}, v_{2n}).
(3.15)

Next, conditions (3.9) and (3.15) imply that

λ = max{q, qs/(2−qs)} < 1 ∧ ∀ n∈N {J(v_{n+1}, v_{n+2}) ≤ λ J(v_n, v_{n+1})}.

The proof of the lemma is now completed. □

Now we can start the proof of the main theorem.

Proof of Theorem 2.1 Step I. Let w_0∈Y be arbitrary and fixed. Construct the sequences (w_n : n∈{0}∪N) and (v_n : n∈{0}∪N) as in (3.3), i.e., such that for each n∈N we get

v_{2n-1} = B(w_{2n-1}) = T(w_{2n-2}) and v_{2n} = S(w_{2n}) = A(w_{2n-1}).

Then, by Lemma 3.1, we obtain

λ = max{q, qs/(2−qs)} < 1 ∧ ∀ n∈N {J(v_{n+1}, v_{n+2}) ≤ λ J(v_n, v_{n+1})}.
(3.16)

Step II. We show that the sequence (v_n : n∈{0}∪N) satisfies the following condition:

lim_{n→∞} sup_{m>n} J(v_n, v_m) = 0.
(3.17)

Indeed, for arbitrary and fixed n∈N and all m∈N, m>n, by (J1′), we calculate

J(v_n, v_m) ≤ sJ(v_n, v_{n+1}) + sJ(v_{n+1}, v_m) ≤ sJ(v_n, v_{n+1}) + s(sJ(v_{n+1}, v_{n+2}) + sJ(v_{n+2}, v_m)) = sJ(v_n, v_{n+1}) + s^2 J(v_{n+1}, v_{n+2}) + s^2(sJ(v_{n+2}, v_{n+3}) + sJ(v_{n+3}, v_m)) ≤ ⋯ ≤ Σ_{i=1}^{m−n} s^i J(v_{n+i−1}, v_{n+i}).

Hence, by (3.16), since λ<1, we obtain

J(v_n, v_m) ≤ Σ_{i=1}^{m−n} s^i J(v_{n+i−1}, v_{n+i}) ≤ Σ_{i=1}^{m−n} s^i [λ^{n+i−1} J(v_0, v_1)] = J(v_0, v_1) Σ_{i=1}^{m−n} s^i λ^{n+i−1} = J(v_0, v_1) λ^{n−1} Σ_{i=1}^{m−n} (sλ)^i = J(v_0, v_1) λ^{n−1} Σ_{i=1}^{m−n} γ^i ≤ [J(v_0, v_1) Σ_{i=1}^{∞} γ^i] λ^{n−1},
(3.18)

where γ = sλ < 1. Therefore, by (3.18) we have

sup_{m>n} J(v_n, v_m) ≤ sup_{m>n} {J(v_0, v_1) λ^{n−1} Σ_{i=1}^{m−n} γ^i} ≤ [J(v_0, v_1) Σ_{i=1}^{∞} γ^i] λ^{n−1} = [γ/(1−γ)] J(v_0, v_1) λ^{n−1}.
(3.19)

Since λ < 1, letting n→∞ in (3.19), we obtain

lim_{n→∞} sup_{m>n} J(v_n, v_m) = 0.

Thus, condition (3.17) holds.

Step III. Now we show that if S(Y) is J-complete, then there exists a unique u∈S(Y) such that lim_{n→∞} v_{2n} = u.

Indeed, let S(Y) be J-complete. By (3.17) and Definition 2.3, there exists u∈S(Y) such that

lim_{n→∞} J(v_n, u) = lim_{n→∞} J(u, v_n) = 0.
(3.20)

The facts that lim_{n→∞} v_n = u and that the point u is unique are proved together. Indeed, let us suppose that there exists ũ∈S(Y), ũ≠u, such that

lim_{n→∞} J(v_n, ũ) = lim_{n→∞} J(ũ, v_n) = 0.
(3.21)

Then for the sequences (x_n = v_n : n∈N), (y_n = u : n∈N) and (ỹ_n = ũ : n∈N), by (3.17), (3.20) and (3.21), we have, respectively,

lim_{n→∞} sup_{m>n} J(x_n, x_m) = 0,
(3.22)
lim_{n→∞} J(x_n, y_n) = 0, and
(3.23)
lim_{n→∞} J(x_n, ỹ_n) = 0.
(3.24)

By (3.22) and (3.23), for the sequences (x_n : n∈N) and (y_n : n∈N), properties (2.1) and (2.2) hold, and similarly by (3.22) and (3.24), for the sequences (x_n : n∈N) and (ỹ_n : n∈N), properties (2.1) and (2.2) also hold. Hence, by (J2) we obtain

lim_{n→∞} d(v_n, u) = lim_{n→∞} d(x_n, y_n) = 0
(3.25)

and

lim_{n→∞} d(v_n, ũ) = lim_{n→∞} d(x_n, ỹ_n) = 0.
(3.26)

Now, from (3.25), (3.26) and (d1)-(d3), since ũ≠u, we have that

∀ n∈N {0 < η = d(ũ, u) ≤ s[d(ũ, v_n) + d(v_n, u)] = s d(v_n, ũ) + s d(v_n, u)}.
(3.27)

Finally, by (3.27), (3.26) and (3.25), we have 0 < η = d(ũ, u) ≤ s lim_{n→∞} d(v_n, ũ) + s lim_{n→∞} d(v_n, u) = 0, which is absurd.

Consequently, (3.20) holds for a unique u, and (3.25) gives that lim_{n→∞} v_n = u.

Moreover, by (3.20), using a similar argument, for the subsequences (v_{2n} : n∈N) and (v_{2n+1} : n∈N), we have

lim_{n→∞} J(v_{2n}, u) = lim_{n→∞} J(u, v_{2n}) = 0,

and

lim_{n→∞} J(v_{2n+1}, u) = lim_{n→∞} J(u, v_{2n+1}) = 0.

For clarity of the rest of the proof, let v∈S^{-1}(u), i.e., v∈Y and S(v) = u.

Step IV. We can show that

J(S(v), u) = J(u, S(v)) = 0.
(3.28)

Indeed, by (J1′) we have ∀ n∈N {0 ≤ J(u,u) ≤ sJ(u, v_n) + sJ(v_n, u)}. Hence, by (3.20), we get

0 ≤ J(u,u) ≤ s lim_{n→∞} J(u, v_n) + s lim_{n→∞} J(v_n, u) = 0.

Thus, u∈X_J^0, i.e.,

J(u,u)=0.
(3.29)

Now, since S(v)=u, we obtain that (3.28) holds.

Step V. We can show that

max{J(T(v), u), J(u, T(v))} ≤ q J(S(v), T(v)).
(3.30)

Indeed, from (2.5) and (3.3), for x = v and y = w_{2n+1}, we calculate

∀ n∈N {max{J(T(v), v_{2n+2}), J(v_{2n+2}, T(v))} = max{J(T(v), A(w_{2n+1})), J(A(w_{2n+1}), T(v))} ≤ q max{min{J(S(v), B(w_{2n+1})), J(B(w_{2n+1}), S(v))}, J(S(v), T(v)), J(B(w_{2n+1}), A(w_{2n+1})), [J(S(v), A(w_{2n+1})) + J(B(w_{2n+1}), T(v))]/2} = q max{min{J(S(v), v_{2n+1}), J(v_{2n+1}, S(v))}, J(S(v), T(v)), J(v_{2n+1}, v_{2n+2}), [J(S(v), v_{2n+2}) + J(v_{2n+1}, T(v))]/2}}.
(3.31)

Now, since: (a) T(v)∈T(Y)⊂X_J^0; (b) S(v) = u∈X_J^0; (c) for the sequences (x_n = v_{2n+2} : n∈N), (y_n = v_{2n+1} : n∈N), by Step III we have lim_{n→∞} sup_{m>n} J(x_n, x_m) = 0 and lim_{n→∞} sup_{m>n} J(y_n, y_m) = 0; (d) lim_{n→∞} x_n = lim_{n→∞} y_n = u; thus, using the fact that J is admissible, by Remark 2.3, we have:

  1. (i)

    lim_{n→∞} J(T(v), v_{2n+2}) = lim_{n→∞} J(T(v), x_n) = J(T(v), u);

  2. (ii)

    lim_{n→∞} J(v_{2n+2}, T(v)) = lim_{n→∞} J(x_n, T(v)) = J(u, T(v));

  3. (iii)

    lim_{n→∞} J(S(v), v_{2n+1}) = lim_{n→∞} J(S(v), y_n) = J(S(v), u);

  4. (iv)

    lim_{n→∞} J(v_{2n+1}, S(v)) = lim_{n→∞} J(y_n, S(v)) = J(u, S(v));

  5. (v)

    lim_{n→∞} J(v_{2n+1}, v_{2n+2}) = lim_{n→∞} J(y_n, x_n) = J(u, u) = 0;

  6. (vi)

    lim_{n→∞} J(S(v), v_{2n+2}) = lim_{n→∞} J(S(v), x_n) = J(S(v), u);

  7. (vii)

    lim_{n→∞} J(v_{2n+1}, T(v)) = lim_{n→∞} J(y_n, T(v)) = J(u, T(v)).

Hence, in the limit, (3.31), (3.28) and (3.29) give

max{J(T(v), u), J(u, T(v))} ≤ q max{min{J(S(v), u), J(u, S(v))}, J(S(v), T(v)), J(u, u), [J(S(v), u) + J(u, T(v))]/2} = q max{J(S(v), T(v)), J(S(v), T(v))/2} = q J(S(v), T(v)),

thus (3.30) holds.

Step VI. We claim that

J(S(v), T(v)) = 0 ∧ J(T(v), S(v)) = 0.
(3.32)

First, we can observe that

J ( S ( v ) , T ( v ) ) =0.
(3.33)

Indeed, suppose this claim is not true; then

J ( S ( v ) , T ( v ) ) >0.
(3.34)

By (3.34) and (3.30), since S(v)=u, we obtain that

0 < J(S(v), T(v)) = J(u, T(v)) ≤ max{J(T(v), u), J(u, T(v))} ≤ q J(S(v), T(v)) < J(S(v), T(v)).

Contradiction. Thus (3.33) holds. Now, by (3.30) and (3.33), we obtain

0 ≤ J(T(v), S(v)) = J(T(v), u) ≤ max{J(T(v), u), J(u, T(v))} ≤ q J(S(v), T(v)) = 0.

Hence, by (3.33) we obtain that (3.32) holds.

Step VII. Now, we show that S(v)=T(v).

Indeed, this is the consequence of (3.32) and Remark 2.1(C).

Step VIII. Now, we can show that

J ( B ( z ) , A ( z ) ) =J ( A ( z ) , B ( z ) ) =0
(3.35)

for some z∈Y.

Indeed, since u = S(v), by Step VII we have u = T(v)∈T(Y)⊂B(Y), so there exists z∈Y such that u = B(z). Next, from (2.5) and (3.3) (for x = w_{2n} and y = z), we calculate

∀ n∈N {max{J(v_{2n+1}, A(z)), J(A(z), v_{2n+1})} = max{J(T(w_{2n}), A(z)), J(A(z), T(w_{2n}))} ≤ q max{min{J(S(w_{2n}), B(z)), J(B(z), S(w_{2n}))}, J(S(w_{2n}), T(w_{2n})), J(B(z), A(z)), [J(S(w_{2n}), A(z)) + J(B(z), T(w_{2n}))]/2} = q max{min{J(v_{2n}, B(z)), J(B(z), v_{2n})}, J(v_{2n}, v_{2n+1}), J(B(z), A(z)), [J(v_{2n}, A(z)) + J(B(z), v_{2n+1})]/2}}.
(3.36)

Now, since: (a) A(z)∈A(Y)⊂X_J^0; (b) B(z) = u∈X_J^0; (c) for the sequences (x_n = v_{2n} : n∈N), (y_n = v_{2n+1} : n∈N), by Step III we have lim_{n→∞} sup_{m>n} J(x_n, x_m) = 0 and lim_{n→∞} sup_{m>n} J(y_n, y_m) = 0; (d) lim_{n→∞} x_n = lim_{n→∞} y_n = u; thus, using the fact that J is admissible, by Remark 2.3, we have:

  1. (i)

    lim_{n→∞} J(v_{2n+1}, A(z)) = lim_{n→∞} J(y_n, A(z)) = J(u, A(z));

  2. (ii)

    lim_{n→∞} J(A(z), v_{2n+1}) = lim_{n→∞} J(A(z), y_n) = J(A(z), u);

  3. (iii)

    lim_{n→∞} J(v_{2n}, B(z)) = lim_{n→∞} J(x_n, B(z)) = J(u, B(z));

  4. (iv)

    lim_{n→∞} J(B(z), v_{2n}) = lim_{n→∞} J(B(z), x_n) = J(B(z), u);

  5. (v)

    lim_{n→∞} J(v_{2n}, v_{2n+1}) = lim_{n→∞} J(x_n, y_n) = J(u, u) = 0;

  6. (vi)

    lim_{n→∞} J(v_{2n}, A(z)) = lim_{n→∞} J(x_n, A(z)) = J(u, A(z));

  7. (vii)

    lim_{n→∞} J(B(z), v_{2n+1}) = lim_{n→∞} J(B(z), y_n) = J(B(z), u).

Hence, in the limit, (3.36) gives

max{J(u, A(z)), J(A(z), u)} ≤ q max{min{J(u, B(z)), J(B(z), u)}, J(u, u), J(B(z), A(z)), [J(u, A(z)) + J(B(z), u)]/2},
(3.37)

and consequently, since u=B(z) and u=S(v), by (3.37) and (3.28), we obtain that

max{J(B(z), A(z)), J(A(z), B(z))} ≤ q max{J(u, u), J(u, u), J(B(z), A(z)), [J(u, A(z)) + J(u, u)]/2} = q max{J(u, S(v)), J(u, S(v)), J(B(z), A(z)), [J(B(z), A(z)) + J(u, S(v))]/2} = q max{J(B(z), A(z)), J(B(z), A(z))/2} = q J(B(z), A(z)).
(3.38)

Now, we can observe that

J ( B ( z ) , A ( z ) ) =0.
(3.39)

Indeed, suppose this claim is not true; then

J ( B ( z ) , A ( z ) ) >0.
(3.40)

By (3.40) and (3.38) we obtain that

0 < J(B(z), A(z)) ≤ max{J(B(z), A(z)), J(A(z), B(z))} ≤ q J(B(z), A(z)) < J(B(z), A(z)).

Contradiction. Thus J(B(z),A(z))=0, i.e., (3.39) holds.

Moreover, by (3.38) we get J(A(z), B(z)) ≤ max{J(B(z), A(z)), J(A(z), B(z))} ≤ q J(B(z), A(z)), which by (3.39) gives that

J ( A ( z ) , B ( z ) ) =0.
(3.41)

In consequence of (3.41) and (3.39), we get that (3.35) holds.

Step IX. Now, we show that A(z)=B(z).

Indeed, this is the consequence of (3.35) and Remark 2.1(C).

Step X. Now, we show that lim_{n→∞} d(v_n, u) = 0.

Indeed, since u = S(v) and u = B(z), by Steps VII and IX we obtain that u = S(v) = T(v) = A(z) = B(z). Moreover, by (3.17) and (3.20) we know that for the sequence (v_n : n∈N), conditions (2.1) and (2.2) respectively hold; thus, using (J2), we obtain

lim_{n→∞} d(v_n, u) = 0.

Step XI. If A(Y), B(Y) or T(Y) is J-complete, then the assertions (i) and (ii) hold.

Indeed, if A(Y) is J-complete, then since A(Y)⊂S(Y), the assertions (i) and (ii) are true. If B(Y) or T(Y) is J-complete, then an argument analogous to that in Steps I-IX yields (i) and (ii). □

4 Remarks, examples and comparison

Now, we present some examples illustrating the concepts which have been introduced so far. We will show a fundamental difference between Theorem 1.2 and Theorem 2.1. At the very beginning, we give the following remark.

Remark 4.1 (A) We can observe that if (X,d) is a b-metric space (with s≥1) and J=d, then Theorem 2.1 and Theorem 1.2 are identical. Indeed, if J=d, then:

  1. (1)

    the b-metric d:X×X→[0,∞) is a b-generalized pseudodistance on X (see Remark 2.1(A));

  2. (2)

    the b-metric d:X×X→[0,∞) is an admissible b-generalized pseudodistance on X (see Remark 2.4);

  3. (3)

    from (d1) and (2.4) we have X_d^0 = X, and consequently T(Y)⊂X = X_d^0 and A(Y)⊂X = X_d^0;

  4. (4)

    the definitions of J-completeness and of the usual completeness of the images of Y under the mappings A, B, S or T are identical (see Remark 2.2);

  5. (5)

    from the symmetry of d (property (d2)), we have that

    ∀ x,y∈Y {max{J(T(x), A(y)), J(A(y), T(x))} = max{d(T(x), A(y)), d(A(y), T(x))} = d(T(x), A(y))},

    and, similarly,

    ∀ x,y∈Y {min{J(S(x), B(y)), J(B(y), S(x))} = min{d(S(x), B(y)), d(B(y), S(x))} = d(S(x), B(y))},

so conditions (2.5) and (1.3) are, in this case, identical.

(B) Generally, Theorem 2.1 is an essential extension of Theorem 1.2 (for details, see Example 4.3).

Now we show that Theorem 2.1 is an essential generalization of Theorem 1.2. First, we present an example of a b-generalized pseudodistance.

Example 4.1 Let X be a b-metric space (with a constant s≥1) equipped with a b-metric d:X×X→[0,∞). Let a closed set E⊂X, containing at least two distinct points, be arbitrary and fixed, and let c>0 with c>δ(E), where δ(E) = sup{d(x,y) : x,y∈E}, be arbitrary and fixed. Define the map J:X×X→[0,∞) as follows:

J(x,y) = d(x,y) if {x,y}∩E = {x,y}, and J(x,y) = c if {x,y}∩E ≠ {x,y}.
(4.1)

(I) We show that the map J is a b-generalized pseudodistance on X.

Indeed, it is worth noticing that the condition (J1′) fails only if there exist some x_0, y_0, z_0∈X such that J(x_0, z_0) > s[J(x_0, y_0) + J(y_0, z_0)]. This inequality is equivalent to c > s[d(x_0, y_0) + d(y_0, z_0)], where J(x_0, z_0) = c, J(x_0, y_0) = d(x_0, y_0) and J(y_0, z_0) = d(y_0, z_0). However, by (4.1): J(x_0, z_0) = c shows that there exists v∈{x_0, z_0} such that v∉E; J(x_0, y_0) = d(x_0, y_0) gives {x_0, y_0}⊂E; J(y_0, z_0) = d(y_0, z_0) gives {y_0, z_0}⊂E. This is impossible. Therefore, ∀ x,y,z∈X {J(x,y) ≤ s[J(x,z) + J(z,y)]}, i.e., the condition (J1′) holds.

To prove that (J2) holds, assume that the sequences (x_m : m∈N) and (y_m : m∈N) in X satisfy (2.1) and (2.2). Then, in particular, (2.2) yields

∀ 0<ε<c ∃ m_0 = m_0(ε)∈N ∀ m≥m_0 {J(x_m, y_m) < ε}.
(4.2)

By (4.2) and (4.1), since ε<c, we conclude that

∀ m≥m_0 {E∩{x_m, y_m} = {x_m, y_m}}.
(4.3)

From (4.3), (4.1) and (4.2), we get

∀ 0<ε<c ∃ m_0∈N ∀ m≥m_0 {d(x_m, y_m) < ε}.

Therefore, the sequences ( x m :mN) and ( y m :mN) satisfy (2.3). Consequently, the property (J2) holds.

(II) We will show that J is an admissible b-generalized pseudodistance.

Indeed, let the sequences (x_m : m∈N) and (y_m : m∈N), such that x_m → x and y_m → y as m→∞ and

lim_{n→∞} sup_{m>n} J(x_n, x_m) = 0, and
(4.4)
lim_{n→∞} sup_{m>n} J(y_n, y_m) = 0,
(4.5)

be arbitrary and fixed. Then by (4.4), (4.5) and (4.1) we obtain that

∃ m_0∈N ∀ m≥m_0 {{x_m, y_m}∩E = {x_m, y_m}},

and by (4.1) we obtain

∀ m≥m_0 {J(x_m, y_m) = d(x_m, y_m)}.
(4.6)

Moreover, since the set E is closed and x_m → x, y_m → y as m→∞, we have that {x,y}∩E = {x,y} and, consequently, by (4.1) we have

J(x,y)=d(x,y).
(4.7)

Finally, (4.6), (4.7) and the continuity of d give that

lim_{m→∞} J(x_m, y_m) = lim_{m→∞} d(x_m, y_m) = d(x,y) = J(x,y).
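For intuition, the construction (4.1) can also be probed numerically; the concrete data below (X=[1,5] with d(x,y)=|x−y|^2, E={2,3,4,5} and c=10, matching Example 4.2) are only one sample instance of the scheme, and this sampling check is an illustration, not a proof of (J1′).

import itertools, random

E = {2, 3, 4, 5}
c = 10.0

def d(x, y):
    return abs(x - y) ** 2          # b-metric on X = [1, 5] with s = 2

def J(x, y):
    # the b-generalized pseudodistance (4.1): d(x, y) when both points lie in E,
    # and the constant c otherwise
    return d(x, y) if (x in E and y in E) else c

s = 2
points = [1, 1.3, 2, 2.5, 3, 4, 4.7, 5] + [random.uniform(1, 5) for _ in range(20)]
for x, y, z in itertools.product(points, repeat=3):
    # relaxed triangle inequality (J1'): J(x, y) <= s * [J(x, z) + J(z, y)]
    assert J(x, y) <= s * (J(x, z) + J(z, y)) + 1e-12
print("(J1') holds on all sampled triples")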

In the following, we illustrate how to satisfy condition (2.5) of Theorem 2.1 by an elementary example.

Example 4.2 Let X be a b-metric space (with a constant s=2>1) equipped with the b-metric d:X×X→[0,∞), where X=[1,5] and d(x,y)=|x−y|^2, x,y∈X. Let the set E={2,3,4,5}⊂X, and let J:X^2→[0,∞) be defined by the formula

J(x,y) = d(x,y) if E∩{x,y} = {x,y}, and J(x,y) = 10 if E∩{x,y} ≠ {x,y}, x,y∈X.
(4.8)

Of course, δ(E) = sup{|x−y|^2 : x,y∈E} = 9 < 10, thus by Example 4.1(I) the map J is a b-generalized pseudodistance on X. Moreover, since E is a closed set, by Example 4.1(II) the map J is admissible on X.

Let Y=[1,2]⊂X and let T,A,S,B:Y→X be given by the formulas

T(x) = 3 if x = 1; T(x) = 5 if x∈{5/4, 3/2, 7/4, 2}; T(x) = 4 if x∈Y∖{1, 5/4, 3/2, 7/4, 2},
(4.9)
A(x) = 5 if x∈Y∖{4/3}; A(x) = 4 if x = 4/3,
(4.10)
S(x) = 15/4 for x∈Y∖{5/4, 3/2, 7/4}; S(x) = 5 for x = 7/4; S(x) = 4 for x = 5/4; S(x) = 3 for x = 3/2, and
(4.11)
B(x) = 15/4 for x∈Y∖{5/4, 3/2, 2}; B(x) = 5 for x = 5/4; B(x) = 4 for x = 3/2; B(x) = 3 for x = 2.
(4.12)

First, we can immediately see that T(Y) = {3,4,5} ⊂ {3, 15/4, 4, 5} = B(Y) and A(Y) = {4,5} ⊂ {3, 4, 15/4, 5} = S(Y).

Now, we will show that the maps T,A,S,B:Y→X satisfy condition (2.5) for q = 1/4. Indeed, first we can observe that since s=2, we get qs = 1/2 < 1 and λs = max{q, qs/(2−qs)}·s = 2·max{1/4, (1/2)/(2−1/2)} = 2·max{1/4, 1/3} = 2/3 < 1. Moreover, since T(Y) = {3,4,5} ⊂ E and A(Y) = {4,5} ⊂ E, by (4.8) we get T(Y) ⊂ X_J^0, A(Y) ⊂ X_J^0 and

∀ x,y∈Y {max{J(T(x), A(y)), J(A(y), T(x))} = d(T(x), A(y)) ≤ 2}.
(4.13)

Now, let x,y∈Y be arbitrary and fixed. We consider the following four cases.

Case 1. If {x,y}∩{5/4, 3/2, 7/4, 2} = ∅, then {x,y}⊂Y∖{5/4, 3/2, 7/4, 2}, which by (4.11) and (4.12) gives B(x) = B(y) = S(x) = S(y) = 15/4 and 15/4∉E. By (4.8), we get J(S(x), B(y)) = 10, and consequently, since {S(x), B(y)}∩E = ∅, by (4.8) we have that J(S(x), T(x)) = J(B(y), A(y)) = J(S(x), A(y)) = J(B(y), T(x)) = 10, thus

max{min{J(S(x), B(y)), J(B(y), S(x))}, J(S(x), T(x)), J(B(y), A(y)), [J(S(x), A(y)) + J(B(y), T(x))]/2} = J(S(x), B(y)) = 10.
(4.14)

In consequence, by (4.13) and (4.14) we calculate

max{J(T(x), A(y)), J(A(y), T(x))} ≤ 2 < 10/4 = (1/4)·10 = q max{min{J(S(x), B(y)), J(B(y), S(x))}, J(S(x), T(x)), J(B(y), A(y)), [J(S(x), A(y)) + J(B(y), T(x))]/2}.
(4.15)

Case 2. If {x,y}∩{5/4, 3/2, 7/4, 2} = {x}, then {y}⊂Y∖{5/4, 3/2, 7/4, 2}, which by (4.12) gives B(y) = 15/4 and 15/4∉E. By (4.8), we get J(S(x), B(y)) = J(B(y), S(x)) = 10, and consequently, since all the images T(Y), A(Y), S(Y) and B(Y) are subsets of E∪{15/4}, we have that δ(l) ≤ 10 for each l∈{T(Y), A(Y), S(Y), B(Y)}. Hence, we calculate

max{min{J(S(x), B(y)), J(B(y), S(x))}, J(S(x), T(x)), J(B(y), A(y)), [J(S(x), A(y)) + J(B(y), T(x))]/2} = J(S(x), B(y)) = 10.
(4.16)

In consequence, by (4.13) and (4.16) we calculate

max{J(T(x), A(y)), J(A(y), T(x))} ≤ 2 < 10/4 = (1/4)·10 = q max{min{J(S(x), B(y)), J(B(y), S(x))}, J(S(x), T(x)), J(B(y), A(y)), [J(S(x), A(y)) + J(B(y), T(x))]/2}.
(4.17)

Case 3. If {x,y}∩{5/4, 3/2, 7/4, 2} = {y}, then {x}⊂Y∖{5/4, 3/2, 7/4, 2}, which by (4.11) gives S(x) = 15/4 and 15/4∉E. By (4.8), we get J(S(x), B(y)) = J(B(y), S(x)) = 10, and consequently, since all the images T(Y), A(Y), S(Y) and B(Y) are subsets of E∪{15/4}, we have that δ(l) ≤ 10 for each l∈{T(Y), A(Y), S(Y), B(Y)}. Hence, we calculate

max{min{J(S(x), B(y)), J(B(y), S(x))}, J(S(x), T(x)), J(B(y), A(y)), [J(S(x), A(y)) + J(B(y), T(x))]/2} = J(S(x), B(y)) = 10.
(4.18)

In consequence, by (4.13) and (4.18) we calculate

max{J(T(x), A(y)), J(A(y), T(x))} ≤ 2 < 10/4 = (1/4)·10 = q max{min{J(S(x), B(y)), J(B(y), S(x))}, J(S(x), T(x)), J(B(y), A(y)), [J(S(x), A(y)) + J(B(y), T(x))]/2}.
(4.19)

Case 4. If {x,y}∩{5/4, 3/2, 7/4, 2} = {x,y}, then (4.9) and (4.10) give T(x) = T(y) = A(x) = A(y) = 5 and 5∈E. By (4.8), we get J(T(x), A(y)) = J(A(y), T(x)) = d(T(x), A(y)) = d(5,5) = 0, and consequently,

max{J(T(x), A(y)), J(A(y), T(x))} = 0 ≤ q max{min{J(S(x), B(y)), J(B(y), S(x))}, J(S(x), T(x)), J(B(y), A(y)), [J(S(x), A(y)) + J(B(y), T(x))]/2}.
(4.20)

Consequently, (4.15), (4.17), (4.19) and (4.20) give that condition (2.5) holds.

Finally, we can observe that: T(Y)⊂X_J^0; A(Y)⊂X_J^0; X_J^0 = E; and E is a closed set. Concluding, by (4.8) and Definition 2.3, the sets A(Y) and T(Y) are J-complete subsets of X. All assumptions of Theorem 2.1 are satisfied. The maps T and S have a coincidence point z = 7/4∈Y (i.e., T(7/4) = 5 = S(7/4)), which shows that assertion (i) holds, and B and A have a coincidence point w = 5/4∈Y (i.e., A(5/4) = 5 = B(5/4)), which gives that assertion (ii) holds.
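A small numerical sketch of this example (the function names below are ours) encodes the maps (4.9)-(4.12) and confirms the inclusions T(Y)⊂B(Y), A(Y)⊂S(Y) and the coincidence points listed above; only a finite sample of Y=[1,2] is used.

from fractions import Fraction as F

# Maps (4.9)-(4.12) of Example 4.2 on Y = [1, 2]; exact rational arithmetic
# is used so that points such as 5/4 and 7/4 are matched exactly.
def T(x): return F(3) if x == 1 else (F(5) if x in {F(5, 4), F(3, 2), F(7, 4), F(2)} else F(4))
def A(x): return F(4) if x == F(4, 3) else F(5)
def S(x): return {F(7, 4): F(5), F(5, 4): F(4), F(3, 2): F(3)}.get(x, F(15, 4))
def B(x): return {F(5, 4): F(5), F(3, 2): F(4), F(2): F(3)}.get(x, F(15, 4))

Y_sample = [F(1), F(9, 8), F(5, 4), F(4, 3), F(3, 2), F(7, 4), F(15, 8), F(2)]
print(set(map(T, Y_sample)) <= set(map(B, Y_sample)))   # True: T(Y) is contained in B(Y)
print(set(map(A, Y_sample)) <= set(map(S, Y_sample)))   # True: A(Y) is contained in S(Y)
print(T(F(7, 4)) == S(F(7, 4)) == 5)                    # True: coincidence point of T and S at 7/4
print(A(F(5, 4)) == B(F(5, 4)) == 5)                    # True: coincidence point of A and B at 5/4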

The next example illustrates that Theorem 2.1 is an essential extension of Theorem 1.2.

Example 4.3 Let X be a b-metric space (with constant s=2>1) equipped with the b-metric d:X×X→[0,∞), where X=[1,5] and d(x,y)=|x−y|^2, x,y∈X. Let Y⊂X and A,B,S,T:Y→X be as in Example 4.2. We will show that condition (1.3) does not hold. Indeed, suppose that there exists q∈(0,1) such that qs<1, λs = max{q, qs/(2−qs)}·s < 1 and such that for each x,y∈Y, we have

d(T(x), A(y)) ≤ q max{d(S(x), B(y)), d(S(x), T(x)), d(B(y), A(y)), [d(S(x), A(y)) + d(B(y), T(x))]/2}.
(4.21)

Let x_0 = 1 and y_0 = 4/3. Then by (4.9)-(4.12) we get:

  1. (i)

    d(T(x_0), A(y_0)) = d(T(1), A(4/3)) = d(3,4) = 1;

  2. (ii)

    d(S(x_0), B(y_0)) = d(S(1), B(4/3)) = d(15/4, 15/4) = 0;

  3. (iii)

    d(S(x_0), T(x_0)) = d(S(1), T(1)) = d(15/4, 3) = (3/4)^2 = 9/16;

  4. (iv)

    d(B(y_0), A(y_0)) = d(B(4/3), A(4/3)) = d(15/4, 4) = (1/4)^2 = 1/16;

  5. (v)

    [d(S(x_0), A(y_0)) + d(B(y_0), T(x_0))]/2 = [d(S(1), A(4/3)) + d(B(4/3), T(1))]/2 = [d(15/4, 4) + d(15/4, 3)]/2 = (1/16 + 9/16)/2 = 5/16; and

  6. (vi)

    max{d(S(x_0), B(y_0)), d(S(x_0), T(x_0)), d(B(y_0), A(y_0)), [d(S(x_0), A(y_0)) + d(B(y_0), T(x_0))]/2} = max{0, 1/16, 5/16, 9/16} = 9/16.

Now, since q<1, by (i), (4.21) and (vi) we have

1 = d(T(x_0), A(y_0)) ≤ q max{d(S(x_0), B(y_0)), d(S(x_0), T(x_0)), d(B(y_0), A(y_0)), [d(S(x_0), A(y_0)) + d(B(y_0), T(x_0))]/2} < max{d(S(x_0), B(y_0)), d(S(x_0), T(x_0)), d(B(y_0), A(y_0)), [d(S(x_0), A(y_0)) + d(B(y_0), T(x_0))]/2} = 9/16,

which is absurd. This shows that condition (1.3) does not hold, so the main assumption of Theorem 1.2 is not true.
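The values (i)-(vi) can be checked mechanically; below is a short sketch (with our own variable names) evaluating them for x_0 = 1 and y_0 = 4/3.

from fractions import Fraction as F

def d(x, y):
    return (x - y) ** 2              # the b-metric of Examples 4.2 and 4.3 (s = 2)

# maps (4.9)-(4.12) evaluated at x0 = 1 and y0 = 4/3
Tx0, Sx0 = F(3), F(15, 4)            # T(1) = 3,   S(1) = 15/4
Ay0, By0 = F(4), F(15, 4)            # A(4/3) = 4, B(4/3) = 15/4

lhs = d(Tx0, Ay0)                                            # (i):  equals 1
rhs = max(d(Sx0, By0), d(Sx0, Tx0), d(By0, Ay0),
          (d(Sx0, Ay0) + d(By0, Tx0)) / 2)                   # (vi): equals 9/16
print(lhs, rhs)                      # prints: 1 9/16
print(all(lhs > q * rhs for q in (F(1, 10), F(1, 4), F(99, 100))))
# True: d(T(x0), A(y0)) exceeds q * max{...} for every sampled q < 1, so (1.3) fails here.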

Remark 4.2 Examples 4.2 and 4.3 show that there exist maps and b-metrics for which we cannot use Theorem 1.2, but we can use Theorem 2.1.

References

  1. Banach S: Sur les opérations dans les ensembles abstraits et leurs applications aux équations intégrales. Fundam. Math. 1922, 3: 133–181.

  2. Jungck G: Commuting mappings and fixed points. Am. Math. Mon. 1976, 83: 261–263. 10.2307/2318216

  3. Jungck G: Common fixed points for commuting and compatible maps on compacta. Proc. Am. Math. Soc. 1988, 103: 977–983. 10.1090/S0002-9939-1988-0947693-2

  4. Czerwik S: Nonlinear set-valued contraction mappings in b-metric spaces. Atti Semin. Mat. Fis. Univ. Modena 1998, 46(2): 263–276.

  5. Singh SL, Prasad B: Some coincidence theorems and stability of iterative procedures. Comput. Math. Appl. 2008, 55: 2512–2520. 10.1016/j.camwa.2007.10.026

  6. Rhoades BE: A comparison of various definitions of contractive mappings. Trans. Am. Math. Soc. 1977, 226: 257–290.

  7. Kada O, Suzuki T, Takahashi W: Nonconvex minimization theorems and fixed point theorems in complete metric spaces. Math. Jpn. 1996, 44: 381–391.

  8. Suzuki T: Generalized distance and existence theorems in complete metric spaces. J. Math. Anal. Appl. 2001, 253: 440–458. 10.1006/jmaa.2000.7151

  9. Suzuki T: Several fixed point theorems concerning τ-distance. Fixed Point Theory Appl. 2004, 2004: 195–209.

  10. Włodarczyk K, Plebaniak R: Maximality principle and general results of Ekeland and Caristi types without lower semicontinuity assumptions in cone uniform spaces with generalized pseudodistances. Fixed Point Theory Appl. 2010, 2010: Article ID 175453.

  11. Tataru D: Viscosity solutions of Hamilton-Jacobi equations with unbounded nonlinear terms. J. Math. Anal. Appl. 1992, 163: 345–392. 10.1016/0022-247X(92)90256-D

  12. Suzuki T: Generalized distance and existence theorems in complete metric spaces. J. Math. Anal. Appl. 2001, 253: 440–458. 10.1006/jmaa.2000.7151

  13. Lin L-J, Du W-S: Ekeland’s variational principle, minimax theorems and existence of nonconvex equilibria in complete metric spaces. J. Math. Anal. Appl. 2006, 323: 360–370. 10.1016/j.jmaa.2005.10.005

  14. Vályi I: A general maximality principle and a fixed point theorem in uniform spaces. Period. Math. Hung. 1985, 16: 127–134. 10.1007/BF01857592

  15. Włodarczyk K, Plebaniak R: Periodic point, endpoint, and convergence theorems for dissipative set-valued dynamic systems with generalized pseudodistances in cone uniform and uniform spaces. Fixed Point Theory Appl. 2010, 2010: Article ID 864536.

  16. Włodarczyk K, Plebaniak R, Doliński M: Cone uniform, cone locally convex and cone metric spaces, endpoints, set-valued dynamic systems and quasi-asymptotic contractions. Nonlinear Anal. 2009, 71: 5022–5031. 10.1016/j.na.2009.03.076

  17. Włodarczyk K, Plebaniak R, Obczyński C: Convergence theorems, best approximation and best proximity for set-valued dynamic systems of relatively quasi-asymptotic contractions in cone uniform spaces. Nonlinear Anal. 2010, 72: 794–805. 10.1016/j.na.2009.07.024


Author information

Corresponding author

Correspondence to Robert Plebaniak.

Additional information

Competing interests

The author declares that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Plebaniak, R. New generalized pseudodistance and coincidence point theorem in a b-metric space. Fixed Point Theory Appl 2013, 270 (2013). https://doi.org/10.1186/1687-1812-2013-270
