Open Access

Random semilinear system of differential equations with impulses

Fixed Point Theory and Applications 2017, 2017:27

Received: 12 August 2017

Accepted: 11 November 2017

Published: 10 December 2017


In this paper, we study the existence of solutions for systems of random semilinear impulsive differential equations. The existence results are established by means of a new version of Perov's fixed point theorem and a nonlinear alternative of Leray-Schauder type, combined with a technique based on vector-valued metrics and matrices convergent to zero. We also give a random abstract formulation of Sadovskii's fixed point theorem in a vector-valued Banach space. Examples illustrating the results are included.


Keywords: random variable; mild solution; vector-valued norm; fixed point theorem; matrix; condensing; multivalued; measurable selection



1 Introduction

Many evolution processes are characterized by the fact that at certain moments of time they experience an abrupt change of state, in the form of shocks, harvesting, natural disasters, etc. These phenomena involve short-term perturbations of otherwise continuous and smooth dynamics, whose duration is negligible in comparison with that of the entire evolution. This has been the main reason for the development of a new branch of the theory of ordinary differential equations, called impulsive differential equations. The theory of impulsive differential equations has attracted much attention in recent years (see [1–8] and the references therein).

Unfortunately, in most cases the available data for the description and evaluation of the parameters of a dynamical system are inaccurate, imprecise, or confusing; in other words, the evaluation of the parameters of a dynamical system is not free of uncertainty. Differential equations with random coefficients are used as models in many different applications, due to a combination of uncertainties, complexities, and ignorance on our part which inevitably clouds the mathematical modeling process (see, e.g., Kampé de Feriet [9], Becus [10] and their references). The interest in this theory is also due to its many applications in various applied fields such as control theory, statistics, the biological sciences, and others. For a discussion of such applications, one may consult the books [11–15], the papers [16–21], and the references therein.

In this paper we consider the following system of impulsive differential equations with the random effects (random parameters):
$$ \textstyle\begin{cases} x'(t,\omega)=A_{1}(\omega)x(t,\omega)+f_{1}(t,x(t,\omega),y(t, \omega),\omega), \quad t\in J=[0,b], \\ y'(t,\omega)=A_{2}(\omega)y(t,\omega)+f_{2}(t,x(t,\omega),y(t, \omega),\omega), \quad t\in J=[0,b], \\ x(t_{k}^{+},\omega)- x(t_{k}^{-},\omega)=I_{k}(x(t_{k}^{-},\omega),y(t _{k}^{-},\omega)), \quad k=1,2,\ldots,m, \\ y(t_{k}^{+},\omega)- y(t_{k}^{-},\omega)=\overline{I}_{k}( x(t_{k} ^{-},\omega),y(t_{k}^{-},\omega)),\quad k=1,2,\ldots,m, \\ x(0,\omega)=\varphi_{1} (\omega), \quad \omega\in\Omega, \\ y(0,\omega)=\varphi_{2} (\omega),\quad \omega\in\Omega, \end{cases} $$
where \(f_{i}: J\times X\times X \times\Omega\to X \), \(i=1,2\), are given functions, \(I_{k}, \overline{I_{k}} \in C(X\times X,X)\), \(k=1,2,\ldots,m\), \(0=t_{0}< t_{1}<\cdots<t_{m}<t_{m+1}=b\), \(\varphi_{1}\), \(\varphi_{2}\) are two random maps, X is a separable Banach space, and \(A_{i}: \Omega\times X\to X\), \(i=1,2\), are random operators.

The main goal of this paper is to apply some new random fixed point theorems to a system of impulsive random differential equations. We also give a random version of Sadovskii's fixed point theorem in a separable vector-valued Banach space.

The paper is organized as follows. In Section 2, we introduce all the background material needed, such as generalized metric spaces, some random fixed point theorems, and \(C_{0}\)-semigroups. In Section 3, by some new random versions of Perov's and Leray-Schauder's fixed point theorems in a generalized Banach space, we prove some existence and compactness results for problem (1.1). In Section 4, using the measure of noncompactness, we prove some random versions of Sadovskii's fixed point theorem in a separable generalized Banach space. Finally, in Section 5, an example is given to demonstrate the applicability of our results.

2 Preliminaries

In this section, we recall from the literature some notations, definitions, and auxiliary results which will be used throughout this paper.

2.1 Vector metric space

If \(x,y\in \mathbb{R}^{n}\), with \(x=(x_{1},\ldots,x_{n})\) and \(y=(y_{1}, \ldots,y_{n})\), then by \(x\leq y\) we mean \(x_{i}\leq y_{i}\) for all \(i=1,\ldots,n\). Also we set \(\vert x\vert =(\vert x_{1}\vert ,\ldots, \vert x_{n}\vert )\), \(\max(x,y)=(\max(x_{1},y_{1}),\ldots,\max(x_{n},y_{n}))\), and \(\mathbb{R}_{+}^{n}=\{x\in\mathbb{R}^{n}: x_{i}>0, i=1,\ldots,n\}\). If \(c\in\mathbb{R}\), then \(x\leq c\) means \(x_{i}\leq c\) for each \(i=1,\ldots,n\).

Definition 2.1

Let X be a nonempty set. By a generalized metric on X, we mean a map \(d: X\times X\to\mathbb{R}^{n}\) with the following properties:
  1. (i)

    \(d(u,v)\geq0\) for all \(u,v\in X\); if \(d(u,v)=0\), then \(u=v\);

  2. (ii)

    \(d(u,v)=d(v,u)\) for all \(u,v\in X\);

  3. (iii)

    \(d(u,v)\leq d(u,w)+d(w,v)\) for all \(u,v,w\in X\).

We call the pair \((X,d)\) a generalized metric space, where
$$d(x,y):= \left( \begin{matrix} d_{1}(x,y) \\ \vdots \\ d_{n}(x,y) \end{matrix}\right). $$
Notice that d is a generalized metric (or a vector-valued metric) on X if and only if \(d_{i}\), \(i=1,\ldots,n\), are metrics on X.
For \(r=(r_{1},\ldots,r_{n})\in\mathbb{R}_{+}^{n}\), we will denote by
$$B(x_{0},r)=\bigl\{ x\in X: d(x_{0},x)< r\bigr\} =\bigl\{ x\in X: d_{i}(x_{0},x)< r_{i}, i=1, \ldots,n\bigr\} $$
the open ball centered at \(x_{0}\) with radius r, and
$$\overline{B(x_{0},r)}=\bigl\{ x\in X: d(x_{0},x)\leq r\bigr\} =\bigl\{ x\in X: d_{i}(x _{0},x)\leq r_{i}, i=1, \ldots,n\bigr\} $$
the closed ball centered at \(x_{0}\) with radius r. We mention that for a generalized metric space, the notions of open set, closed set, convergence, Cauchy sequence, and completeness are similar to those in usual metric spaces.
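Since the arguments below repeatedly use componentwise comparisons in \(\mathbb{R}^{n}\), the following small sketch (our own illustration, not part of the paper) shows a vector-valued metric on \(\mathbb{R}^{2}\) and the componentwise membership test for the open ball \(B(x_{0},r)\):

```python
# Illustrative sketch (not from the paper): a vector-valued metric on R^2
# built from two ordinary metrics, with the componentwise open ball B(x0, r).

def d(u, v):
    # d = (d1, d2), where each d_i is the absolute-value metric on a coordinate
    return (abs(u[0] - v[0]), abs(u[1] - v[1]))

def in_open_ball(x0, r, x):
    # x lies in B(x0, r) iff d_i(x0, x) < r_i for EVERY component i
    return all(di < ri for di, ri in zip(d(x0, x), r))

x0, r = (0.0, 0.0), (1.0, 2.0)
print(in_open_ball(x0, r, (0.5, 1.5)))   # inside in both components
print(in_open_ball(x0, r, (0.5, 2.5)))   # fails the second component
```

Note that, unlike in a scalar metric space, a point can be "close" in one component and "far" in another; membership in a ball requires all components to be small at once.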

Definition 2.2

A square matrix M of real numbers is said to be convergent to zero if and only if its spectral radius \(\rho(M)\) is strictly less than 1. In other words, all the eigenvalues of M lie in the open unit disc, i.e., \(\vert \lambda \vert <1\) for every \(\lambda\in\mathbb{C}\) with \(\operatorname{det}(M-\lambda I)=0\), where I denotes the identity matrix of \({\mathcal {M}} _{n\times n}(\mathbb{R})\).

Lemma 2.1

[22] Let \(M \in{ \mathcal {M}}_{n\times n} (\mathbb{R}_{+})\). Then the following assertions are equivalent:
  1. (i)

    M is convergent towards zero;

  2. (ii)

    \(M^{k} \to0\) as \(k \to\infty\);

  3. (iii)
    The matrix \((I-M)\) is nonsingular and
    $$(I-M)^{-1}=I+M+M^{2}+\cdots+M^{k}+\cdots; $$
  4. (iv)

    The matrix \((I-M)\) is nonsingular and \((I-M)^{-1}\) has nonnegative elements.


Remark 2.1

Some examples of matrices convergent to zero are as follows:
  1. 1.

    Any matrix \(M= \bigl({\scriptsize\begin{matrix}{}a&a\cr b&b\end{matrix}}\bigr) \), where \(a,b \in \mathbb{R}_{+}\) and \(a+b<1\).

  2. 2.

    Any matrix \(M=\bigl({\scriptsize\begin{matrix}{} a&b\cr a&b \end{matrix}}\bigr) \), where \(a,b \in \mathbb{R}_{+}\) and \(a+b<1\).

  3. 3.

    Any matrix \(M=\bigl({\scriptsize\begin{matrix}{} a&b\cr 0&c \end{matrix}}\bigr) \), where \(a,b,c \in \mathbb{R}_{+}\) and \(\max\{a,c\}<1\).


For other examples and considerations on matrices which converge to zero, see Precup [23], Rus [24], and Turinici [25].
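The assertions of Lemma 2.1 can be checked numerically for the first matrix of Remark 2.1. The sketch below (our own illustration; the values of a and b are our choice) verifies that \(M^{k}\to0\), that the Neumann series \(I+M+M^{2}+\cdots\) converges to \((I-M)^{-1}\), and that the inverse has nonnegative entries:

```python
# Numerical check of Lemma 2.1 for M = [[a, a], [b, b]] from Remark 2.1:
# its eigenvalues are 0 and a + b, so rho(M) = a + b < 1 when a + b < 1.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_power(A, n):
    P = [[1.0, 0.0], [0.0, 1.0]]  # identity
    for _ in range(n):
        P = matmul(P, A)
    return P

a, b = 0.3, 0.4                   # a + b = 0.7 < 1 (our choice)
M = [[a, a], [b, b]]

# (ii) M^k -> 0: the entries of M^50 are numerically negligible
Mk = mat_power(M, 50)
print(max(abs(Mk[i][j]) for i in range(2) for j in range(2)))

# (iii) Neumann series: I + M + M^2 + ... approximates (I - M)^{-1}
S = [[1.0, 0.0], [0.0, 1.0]]
P = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(200):
    P = matmul(P, M)
    S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]

# Closed-form inverse of I - M = [[1-a, -a], [-b, 1-b]], det = 1 - a - b
det = (1 - a) * (1 - b) - a * b
inv = [[(1 - b) / det, a / det], [b / det, (1 - a) / det]]
err = max(abs(S[i][j] - inv[i][j]) for i in range(2) for j in range(2))
print(err)   # tiny: the series converges because rho(M) < 1

# (iv): every entry of (I - M)^{-1} is nonnegative
print(all(inv[i][j] >= 0 for i in range(2) for j in range(2)))
```

The same computation with \(a+b\geq1\) would show the partial sums diverging, which is why the condition \(\rho(M)<1\) is essential throughout the paper.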

2.2 Random variable and some selection theorems

In this section, we introduce notations, definitions, and preliminary facts from multivalued analysis and random variable which are used throughout this paper. Let \((X,d)\) be a metric space or a generalized metric space and Y be a subset of X. We denote:
  • \({\mathcal {P}}(X)=\{Y \subset X: Y\neq\emptyset\}\) and

  • \({\mathcal {P}}_{p}(X)=\{Y\in{ \mathcal {P}}(X): Y \mbox{ has the property `p'}\}\), where p could be: cl = closed, b = bounded, cp = compact, etc.

  • \({\mathcal {P}}_{cl}(X)=\{Y\in{ \mathcal {P}}(X): Y \mbox{ closed}\}\),

  • \({\mathcal {P}}_{b}(X)=\{Y\in{ \mathcal {P}}(X): Y \mbox{ bounded}\}\),

  • \({\mathcal {P}}_{cv}(X)=\{Y\in{ \mathcal {P}}(X): Y \mbox{ convex}\}\), where X is a Banach space,

  • \({\mathcal {P}}_{cp}(X)=\{Y\in{ \mathcal {P}}(X): Y \mbox{ compact}\}\).

Let \((\Omega, \Sigma)\) be a measurable space and \(F: \Omega\to{ \mathcal {P}}(X)\) be a multivalued mapping. F is called measurable if \(F^{+}(Q)=\{\omega\in\Omega: F(\omega)\subset Q\}\) is measurable for every \(Q\in{ \mathcal {P}}_{cl}(X)\); equivalently, for every open set U of X, the set \(F^{-}(U)=\{\omega\in\Omega: F(\omega)\cap U\neq \emptyset\}\) is measurable.

If X is a metric space, we shall use \({\mathcal {B}}(X)\) to denote the Borel σ-algebra on X. Then \(\Sigma\otimes{ \mathcal {B}}(X)\) denotes the smallest σ-algebra on \(\Omega\times X\) which contains all the sets \(A\times S\), where \(A\in\Sigma\) and \(S\in{ \mathcal {B}}(X)\). Let \(F: X\to{ \mathcal {P}}(Y)\) be a multivalued map. A single-valued map \(f: X \to Y\) is said to be a selection of F, and we write \(f\subset F\), whenever \(f(x)\in F(x)\) for every \(x\in X\).

Definition 2.3

Recall that a mapping \(F: \Omega\times X\to X\) is said to be a random operator if, for any \(x\in X\), \(F(\cdot ,x)\) is measurable.

Definition 2.4

A random fixed point of a random operator F is a measurable function \(y: \Omega \to X\) such that
$$y(\omega)=F\bigl(\omega,y(\omega)\bigr)\quad \mbox{for all } \omega\in\Omega. $$
Equivalently, it is a measurable selection of the multivalued map \(\operatorname{Fix}F: \Omega\to{ \mathcal {P}}(X)\) defined by
$$\operatorname{Fix}F(\omega)=\bigl\{ x\in X: x=F(\omega,x)\bigr\} . $$

Theorem 2.1


Let \((\Omega, \Sigma)\) be a measurable space, Y be a separable metric space, and \(F: \Omega\to{\mathcal{P}}_{cl}(Y)\) be a measurable multivalued map. Then F has a measurable selection.

As a consequence of Kuratowski-Ryll-Nardzewski and Aumann’s selection theorems, we can conclude the following results.

Theorem 2.2


Let \((\Omega, \Sigma)\) be a measurable space, Y be a separable generalized metric space, and \(F: \Omega\to{\mathcal{P}}_{cl}(Y)\) be a measurable multivalued map. Then F has a measurable selection.

Theorem 2.3


Let X be a separable metric space, Y be a metric space, \(f: \Omega\times X\to Y\) be a Carathéodory function, and U be an open subset of Y. Then the multivalued map \(F: \Omega\to{\mathcal{P}}(X)\) defined by
$$F(\omega)= \bigl\{ x\in X: f(\omega,x)\in U \bigr\} $$
is measurable. In particular, if f is real-valued, then for every \(\lambda\in\mathbb{R}\),
$$F_{*}(\omega)= \bigl\{ x\in X: f(\omega,x)>\lambda \bigr\} ,\qquad \widetilde{F}(\omega)= \bigl\{ x\in X: f(\omega,x)< \lambda \bigr\} $$
are measurable.

Next, we present some random fixed point theorems in a separable generalized Banach space.

Theorem 2.4


Let \((\Omega, {\mathcal {F}},\mu)\) be a probability space, X be a real separable generalized Banach space, \(F: \Omega \times X\to X\) be a continuous random operator, and let \(M(\omega) \in{ \mathcal {M}}_{n\times n}(\mathbb{R}_{+})\) be a random variable matrix such that \(M(\omega)\) converges to 0 a.s. and
$$d\bigl(F(\omega,x_{1}),F(\omega,x_{2})\bigr)\leq M( \omega)d(x_{1},x_{2}) \quad \textit{for each } x_{1},x_{2} \in X, \omega\in\Omega. $$
Then there exists a random variable \(x: \Omega\to X\) which is the unique random fixed point of F.

Theorem 2.5


Let \((\Omega, {\mathcal {F}})\) be a measurable space, X be a real separable generalized Banach space and \(F: \Omega\times X\to X\) be a continuous random operator, and let \(M(\omega)\in{ \mathcal {M}}_{n \times n}(\mathbb{R}_{+})\) be a random variable matrix such that, for every \(\omega\in\Omega\), the matrix \(M(\omega)\) converges to 0 and
$$d\bigl(F(\omega,x_{1}),F(\omega,x_{2})\bigr)\leq M( \omega)d(x_{1},x_{2})\quad \textit{for each } x_{1},x_{2} \in X, \omega\in\Omega. $$
Then there exists a random variable \(x: \Omega\to X\) which is the unique random fixed point of F.

Theorem 2.6


Let X be a separable generalized Banach space, and let \(F: \Omega \times X\to X\) be a completely continuous random operator. Then either of the following holds:
  1. (i)

    The random equation \(F(\omega,x) = x\) has a random solution, i.e., there is a measurable function \(x: \Omega\to X\) such that \(F(\omega,x(\omega)) = x(\omega)\) for all \(\omega\in\Omega \), or

  2. (ii)

    The set \({\mathcal {M}} =\{x:\Omega\to X \textit{ measurable}: \lambda(\omega)F(\omega,x(\omega)) = x(\omega)\}\) is unbounded for some measurable \(\lambda: \Omega\to\mathbb{R}\) with \(0 <\lambda( \omega) < 1\) on Ω.


Definition 2.5

A function \(f: [0,b]\times\mathbb{R} \times\Omega\to\mathbb{R}\) is called random Carathéodory if the following conditions are satisfied:
  1. (i)

    The map \((t,\omega)\longmapsto f (t,x,\omega)\) is jointly measurable for all \(x\in\mathbb{R}\),

  2. (ii)

    The map \(x\longmapsto f (t,x,\omega)\) is continuous for all \(t\in[0,b]\) and \(\omega\in\Omega\).


Lemma 2.2


Let X be a separable generalized metric space and \(G: \Omega\times X\to X\) be a mapping such that \(G(\cdot ,x)\) is measurable for all \(x\in X\) and \(G(\omega,\cdot)\) is continuous for all \(\omega \in\Omega\). Then the map \((\omega,x)\to G(\omega,x)\) is jointly measurable.

Proposition 2.1


Let X be a separable Banach space, and D be a dense linear subspace of X. Let \(L: \Omega\times D\to X\) be a closed linear random operator such that, for each \(\omega\in\Omega, L(\omega)\) is one to one and onto. Then the operator \(S: \Omega\times X\to X\) defined by \(S(\omega)x = L^{-1}(\omega)x\) is random.

2.3 \(C_{0}\)-semigroups

Throughout this section, \(B(X)\) denotes the Banach space of bounded linear operators from X into X, with the norm
$$\Vert A\Vert _{B(X)}=\sup\bigl\{ \bigl\vert A(y)\bigr\vert : \vert y\vert =1\bigr\} . $$

Definition 2.6

A one-parameter family \(\{S(t), t\geq0\} \subset B(X)\) is said to be of class \(C_{0}\) if it satisfies the conditions:
  1. (i)

    \(S(0)=I\) (I is the identity operator on X).

  2. (ii)

    \(S(t+s)=S(t)\circ S(s)\) for \(t,s\geq0\) (the semigroup property).

  3. (iii)
    The map \(t\longmapsto S(t)x\) is strongly continuous for each \(x\in X\), i.e.,
    $$\lim_{t\to0}S(t)x=x \quad \mbox{for all } x\in X. $$
A semigroup of bounded linear operators \(S(t)\) is uniformly continuous if
$$\lim_{t\to0} \bigl\Vert S(t)-I\bigr\Vert _{B(X)}=0. $$

Theorem 2.7


Let \(\{S(t), t\geq0\}\) be a \(C_{0}\)-semigroup of bounded linear operators. Then there exist constants \(\alpha\in \mathbb{R}\) and \(K >0\) such that
$$\bigl\Vert S(t)\bigr\Vert _{B(X)}\leq Ke^{\alpha t}\quad \textit{for } t\geq0. $$

Definition 2.7

Let \(S(t)\) be a semigroup of class \(C_{0}\) defined on X. The infinitesimal generator A of \(S(t)\) is the linear operator defined by
$$A(x) = \lim_{h\to0} \frac{S(h)x-x}{h}\quad \mbox{for } x\in D(A), $$
where \(D(A) =\{x\in X \vert \lim_{h\to0} \frac{S(h)x-x}{h} \mbox{ exists in } X\}\).

Theorem 2.8


Let \(S(t)\) be a \(C_{0}\)-semigroup, and let A be its infinitesimal generator. Then, for \(x\in D(A)\), \(S(t)x \in D(A)\) and
$$\frac{d}{dt}S(t)x=AS(t)x=S(t)Ax. $$

More details on evolution systems and their properties can be found in the books of Engel and Nagel [31], Pazy [30], and Vrabie [32].
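Before moving on, the semigroup axioms above can be made concrete in the simplest possible setting: a diagonal generator \(A=\operatorname{diag}(a_{1},a_{2})\) on \(\mathbb{R}^{2}\), for which \(S(t)=\operatorname{diag}(e^{a_{1}t},e^{a_{2}t})\). The sketch below (our own illustration; the diagonal entries are our choice) checks the semigroup property, the growth bound of Theorem 2.7, and the generator limit of Definition 2.7:

```python
# A concrete C0-semigroup sketch: for a diagonal generator A = diag(a1, a2)
# on R^2, S(t) = diag(exp(a1*t), exp(a2*t)). We check S(t+s) = S(t)S(s),
# the bound ||S(t)|| <= K e^{alpha t} (Theorem 2.7), and the generator limit.
import math

a = (-1.0, 0.5)                       # the diagonal of A (our choice)

def S(t):
    return (math.exp(a[0] * t), math.exp(a[1] * t))

def apply(St, x):
    return (St[0] * x[0], St[1] * x[1])

# (i)-(ii): S(0) = I and S(t+s) = S(t)S(s), componentwise
t, s = 0.3, 1.1
lhs = S(t + s)
rhs = (S(t)[0] * S(s)[0], S(t)[1] * S(s)[1])
print(max(abs(l - r) for l, r in zip(lhs, rhs)))   # ~0

# Theorem 2.7: the operator norm of a diagonal matrix is max |entry|,
# so here K = 1 and alpha = max a_i
alpha = max(a)
norm_St = max(abs(v) for v in S(t))
print(norm_St <= math.exp(alpha * t) + 1e-12)

# Definition 2.7: (S(h)x - x)/h -> Ax as h -> 0
x = (2.0, -3.0)
h = 1e-6
diff = tuple((apply(S(h), x)[i] - x[i]) / h for i in range(2))
Ax = (a[0] * x[0], a[1] * x[1])
print(max(abs(diff[i] - Ax[i]) for i in range(2)))  # O(h): small
```

For unbounded generators (the situation relevant to problem (1.1)) the limit defining A exists only on the dense domain \(D(A)\), but the finite-dimensional picture above is a faithful model of all three axioms.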

3 Main results

In this section, we establish the existence, uniqueness, and compactness of the solution set of the random system of impulsive differential equations (1.1).

3.1 Existence of solution

Let \(J_{k}=(t_{k},t_{k+1}]\), \(k=0,\ldots,m\), and let \(y_{k}\) be the restriction of a function y to \(J_{k}\). In order to define mild solutions for problem (1.1), consider the space
$$\begin{aligned} PC =&\bigl\{ y\colon [0,b]\to X, y_{k}\in C(J_{k},X), k=0,\ldots,m, \mbox{ such that } \\ &{}y\bigl(t^{-}_{k}\bigr) \mbox{ and } y\bigl(t^{+}_{k}\bigr) \mbox{ exist and satisfy } y(t^{-}_{k})=y\bigl(t_{k}\bigr) \mbox{ for } k=1,\ldots,m\bigr\} . \end{aligned}$$
Endowed with the norm
$$\Vert y\Vert _{PC}= \max \bigl\{ \Vert y_{k}\Vert _{\infty}, k=0,\ldots,m \bigr\} , $$
PC is a Banach space.
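The PC norm is simply the largest of the sup-norms over the subintervals \(J_{k}\). A minimal computational sketch (our own illustration with sampled values, not part of the paper):

```python
# The PC norm as a max of sup-norms over the subintervals J_k (illustrative).
# We sample a piecewise-continuous function with a jump at t_1 = 1 on [0, 2].

def pc_norm(pieces):
    # pieces: list of lists of sampled values y(t) on each J_k
    return max(max(abs(v) for v in piece) for piece in pieces)

# y(t) = t on [0, 1], y(t) = t - 2 on (1, 2]: sampled coarsely
piece0 = [0.0, 0.5, 1.0]          # sup |y| on [0, t_1] is 1
piece1 = [-0.9, -0.5, 0.0]        # sup |y| on (t_1, b] is 0.9
print(pc_norm([piece0, piece1]))
```

The jump at \(t_{1}\) causes no difficulty because each piece is treated on its own closed-from-the-right interval, which is exactly why PC, unlike \(C([0,b],X)\), can accommodate the impulsive conditions.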

Definition 3.1

A pair of functions \(x,y: \Omega\to PC([0,b],X)\) is called a random mild solution of (1.1) if
$$ \textstyle\begin{cases} x(t,\omega)=S_{1}(\omega,t)\varphi_{1}(\omega)+\int_{0}^{t} S _{1}(\omega,t-s)f_{1}(s,x(s,\omega),y(s,\omega),\omega)\,ds \\ \hphantom{x(t,\omega)=}{}+ \sum _{0< t_{k}< t}S_{1}(\omega,t-t_{k})I_{k}(x(t_{k},\omega),y(t_{k}, \omega)), \quad t\in[0,b], \\ y(t,\omega)= S_{2}(\omega,t)\varphi_{2}(\omega)+\int_{0}^{t}S _{2}(\omega,t-s)f_{2}(s,x(s,\omega),y(s,\omega),\omega)\,ds \\ \hphantom{y(t,\omega)=}{}+ \sum _{0< t_{k}< t}S_{2}(\omega,t-t_{k})\overline{I}_{k}(x(t_{k}, \omega),y(t_{k},\omega)), \quad t\in[0,b], \end{cases} $$
where \(\{S_{1}(\omega,t)\}_{t\geq0}\), \(\{S_{2}(\omega,t)\}_{t \geq0}\) are random \(C_{0}\)-semigroups of bounded linear operators on X with infinitesimal generators \(A_{1}\), \(A_{2}\), respectively.

Theorem 3.1

Let \(f_{1}, f_{2} : J\times X \times X \times\Omega\to X\) be two Carathéodory functions. Assume that the following conditions hold:
  1. (H1)
    There exist random variables \(p_{1}, p_{2}, p_{3}, p_{4}: \Omega\to L ^{1}([0,b],\mathbb{R}_{+})\) such that
    $$\begin{aligned}& \bigl\vert f_{1}(t,x,y,\omega)-f_{1}(t,\widetilde{x}, \widetilde{y},\omega) \bigr\vert \\& \quad \leq p_{1}(\omega, t)\vert x- \widetilde{x}\vert +p_{2}(\omega,t)\vert y-\widetilde{y}\vert ,\quad x,y, \widetilde{x},\widetilde{y}\in X, t\in J, \omega\in\Omega \end{aligned}$$
    $$\begin{aligned}& \bigl\vert f_{2}(t,x,y,\omega)-f_{2}(t,\widetilde{x}, \widetilde{y},\omega)\bigr\vert \\& \quad \leq p_{3}(\omega,t)\vert x- \widetilde{x}\vert +p_{4}(\omega,t)\vert y-\widetilde{y}\vert ,\quad x,y, \widetilde{x},\widetilde{y}\in X, t\in J, \omega\in\Omega. \end{aligned}$$
  2. (H2)
    There exist random variables \(K_{1} , K_{2} : \Omega \longrightarrow(0,+\infty)\) such that
    $$\begin{aligned}& \bigl\Vert S_{1}(\omega,t)\bigr\Vert \leq K_{1}( \omega),\\& \bigl\Vert S_{2}(\omega,t)\bigr\Vert \leq K_{2}( \omega)\quad \textit{for each }\omega\in\Omega. \end{aligned}$$
Then problem (1.1) has a unique random mild solution.


Proof We are going to study problem (1.1) on the intervals \([0,t_{1}]\), \((t_{1},t_{2}]\), \(\ldots\) , \((t_{m},b]\), respectively. The proof will be given in three steps, the last of which proceeds by induction.

Step 1. We consider the following problem:
$$ \textstyle\begin{cases} x'(t,\omega)=A_{1}(\omega)x(t,\omega)+f_{1}(t,x(t,\omega),y(t, \omega),\omega), \quad t\in[0,t_{1}], \\ y'(t,\omega)=A_{2}(\omega)y(t,\omega)+f_{2}(t,x(t,\omega),y(t, \omega),\omega), \quad t\in[0,t_{1}], \\ x(0,\omega)=\varphi_{1} (\omega), \quad \omega\in\Omega, \\ y(0,\omega)=\varphi_{2} (\omega), \quad \omega\in\Omega. \end{cases} $$
Consider the operator \(N_{1}: C([0, t_{1}],X)\times C([0,t_{1}],X) \times\Omega\to C([0, t_{1}],X) \times C([0, t_{1}],X)\),
$$(x,y)\mapsto\bigl(N_{1}^{1}(x,y,\omega),N_{2}^{1}(x,y,\omega)\bigr), $$
$$\begin{aligned} N_{1}^{1}\bigl(x(t,\omega),y(t,\omega),\omega\bigr) =& S_{1}(\omega, t) \varphi_{1}(\omega) \\ &{} + \int_{0}^{t}S_{1}(\omega,t-s)f_{1} \bigl(s,x(s, \omega),y(s,\omega),\omega\bigr)\,ds,\quad t\in[0,t_{1}] \end{aligned}$$
$$\begin{aligned} N_{2}^{1}\bigl(x(t,\omega),y(t,\omega),\omega\bigr) =& S_{2}(\omega, t) \varphi_{2}(\omega) \\ &{}+ \int_{0}^{t} S_{2}(\omega,t-s)f_{2} \bigl(s,x(s, \omega),y(s,\omega),\omega\bigr)\,ds,\quad t\in[0,t_{1}]. \end{aligned}$$

First we show that \(N_{1}\) is a random operator on \(C([0, t_{1}], X) \times C([0, t_{1}],X)\).

Since \(f_{1}\) and \(f_{2}\) are Carathéodory functions, the maps \(\omega\longmapsto f_{1} (t,x,y,\omega)\) and \(\omega\longmapsto f _{2} (t,x,y,\omega)\) are measurable in view of Lemma 2.2. By the Crandall-Liggett formula, we have
$$S_{i}(\omega,t)x= \lim _{n\to\infty} \biggl( I-\frac{t}{n}A_{i}( \omega) \biggr) ^{-n}x,\quad i=1,2. $$
From Proposition 2.1, we know that \(\omega\to ( I- \frac{t}{n}A_{i}(\omega)) ^{-n}x\) are measurable operators, thus \(\omega\to S_{i}(\omega,t)\) are measurable. Using the continuity properties of the semigroups \(S_{1}(\omega,\cdot), S_{2}(\omega,\cdot)\), we get
$$\omega\to S_{i}(\omega,t)\varphi_{i}(\omega) \quad \mbox{and}\quad (s, \omega)\to S_{i}(\omega,t-s)f_{i}\bigl(s,x(s,\omega),y(s, \omega),\omega\bigr) $$
are measurable. Further, the integral is a limit of finite sums of measurable functions; therefore, the maps
$$\omega\longmapsto N_{1}^{1}\bigl(x(t,\omega),y(t,\omega), \omega\bigr),\qquad \omega\longmapsto N_{2}^{1}\bigl(x(t, \omega),y(t,\omega),\omega\bigr) $$
are measurable. As a result, \(N_{1}\) is a random operator from \(C([0, t_{1}],X)\times C([0,t_{1}],X)\times\Omega\) into \(C([0,t_{1}],X) \times C([0,t_{1}],X)\).

We show that \(N_{1}\) satisfies all the conditions of Theorem 2.4 on \(C([0,t_{1}],X)\times C([0,t_{1}],X)\).

Let \((x,y),(\widetilde{x},\widetilde{y})\in C([0,t_{1}],X)\times C([0,t _{1}],X)\), then
$$\begin{aligned}& \bigl\vert N_{1}^{1}\bigl(x(t,\omega),y(t,\omega),\omega \bigr)-N_{1}^{1}\bigl(\widetilde{x}(t,\omega), \widetilde{y}(t,\omega),\omega\bigr)\bigr\vert \\& \quad =\biggl\vert S_{1}(\omega,t)\varphi_{1}(\omega)+ \int_{0}^{t}S_{1}( \omega,t-s)f_{1} \bigl(s,x(s,\omega),y(s,\omega),\omega\bigr)\,ds \\& \qquad {}-S_{1}(\omega,t)\varphi_{1}(\omega)- \int_{0}^{t}S_{1}( \omega,t-s)f_{1} \bigl(s,\widetilde{x}(s,\omega),\widetilde{y}(s,\omega), \omega\bigr)\,ds\biggr\vert \\& \quad \leq \int_{0}^{t}\bigl\Vert S_{1}( \omega,t-s)\bigr\Vert \bigl\vert f_{1}\bigl(s,x(s,\omega),y(s,\omega), \omega\bigr)-f_{1}\bigl(s,\widetilde{x}(s,\omega),\widetilde{y}(s, \omega),\omega\bigr)\bigr\vert \,ds \\& \quad \leq K_{1}(\omega) \biggl( \int_{0}^{t} p_{1}(\omega,s) \bigl\vert x(s,\omega)-\widetilde{x}(s,\omega)\bigr\vert \,ds + \int_{0}^{t}p_{2}(\omega,s)\bigl\vert y(s,\omega)-\widetilde{y}(s,\omega)\bigr\vert \,ds \biggr). \end{aligned}$$
Hence
$$\bigl\Vert N_{1}^{1}(x,y,\omega)-N_{1}^{1}( \widetilde{x},\widetilde{y},\omega)\bigr\Vert _{*}\leq \frac{K_{1}(\omega)}{\tau} \Vert x-\widetilde{x}\Vert _{*} + \frac{K _{1}(\omega)}{\tau} \Vert y-\widetilde{y}\Vert _{*} , $$
where
$$\Vert x\Vert _{*}= \sup _{t\in[0,t_{1}]} e^{-\tau\int_{0}^{t}p(\omega,s)\,ds}\bigl\vert x(t)\bigr\vert ,\qquad p(\omega,t)= \sum _{i=1}^{4}p_{i}(\omega,t),\quad \tau>K_{1}( \omega)+K_{2}(\omega). $$
Similarly, we obtain
$$\bigl\Vert N_{2}^{1}(x,y,\omega)-N_{2}^{1}(\widetilde{x}, \widetilde{y},\omega)\bigr\Vert _{*} \leq\frac{K_{2}(\omega)}{\tau} \Vert x- \widetilde{x}\Vert _{*} +\frac{K_{2}( \omega)}{\tau} \Vert y-\widetilde{y} \Vert _{*} . $$
Consequently,
$$d_{0}\bigl(N_{1}(x,y,\omega),N_{1}( \widetilde{x},\widetilde{y},\omega)\bigr) \leq{M}_{0}( \omega)d_{0}\bigl((x,y),(\widetilde{x},\widetilde{y})\bigr), $$
where
$$d_{0}\bigl((x,y),(\widetilde{x},\widetilde{y})\bigr)= \left( \begin{matrix} \Vert x-\widetilde{x}\Vert _{*} \\ \Vert y-\widetilde{y}\Vert _{*} \end{matrix}\right) \quad \mbox{and}\quad M_{0}(\omega)= \left( \begin{matrix} {\frac{K_{1}(\omega)}{\tau}}&\frac{K_{1}(\omega)}{\tau} \\ {\frac{K_{2}(\omega)}{\tau}} & \frac{K_{2}(\omega)}{\tau} \end{matrix}\right). $$
It is clear that the spectral radius \(\rho(M_{0}(\omega))=\frac{K_{1}( \omega)+K_{2}(\omega)}{\tau}<1\). By Lemma 2.1, \(M_{0}(\omega)\) converges to zero. From Theorem 2.5 there exists a unique random solution of problem (3.2), which we denote by \((x_{1}(t,\omega),y_{1}(t,\omega))\).
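The spectral radius claim for the contraction matrix can be confirmed directly: the rows of \(M_{0}(\omega)\) are proportional, so its determinant vanishes and its eigenvalues are 0 and the trace \((K_{1}+K_{2})/\tau\). A quick numerical check (our own illustration; the values of \(K_{1}\), \(K_{2}\), τ are ours):

```python
# Checking that rho(M_0) = (K1 + K2)/tau for the 2x2 matrix
# M_0 = [[K1/tau, K1/tau], [K2/tau, K2/tau]] used in Step 1.
import math

K1, K2 = 2.0, 3.0
tau = 6.0                      # tau > K1 + K2, as the proof requires
M = [[K1 / tau, K1 / tau], [K2 / tau, K2 / tau]]

# Eigenvalues of a 2x2 matrix from trace and determinant
tr = M[0][0] + M[1][1]         # = (K1 + K2)/tau
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]   # = 0: the rows are proportional
disc = math.sqrt(tr * tr - 4 * det)
eigs = ((tr + disc) / 2, (tr - disc) / 2)     # (tr, 0)
rho = max(abs(e) for e in eigs)
print(rho, (K1 + K2) / tau)    # equal, and < 1 since tau > K1 + K2
```

This is exactly why the Bielecki weight τ is chosen larger than \(K_{1}(\omega)+K_{2}(\omega)\): it forces \(\rho(M_{0}(\omega))<1\) regardless of the size of the Lipschitz data \(p_{i}\).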
Step 2. We consider the problem
$$ \textstyle\begin{cases} x'(t,\omega)=A_{1}(\omega)x(t,\omega)+f_{1}(t,x(t,\omega),y(t, \omega),\omega), \quad t\in(t_{1},t_{2}], \\ y'(t,\omega)=A_{2}(\omega)y(t,\omega)+f_{2}(t,x(t,\omega),y(t, \omega),\omega), \quad t\in(t_{1},t_{2}], \\ x(t_{1}^{+},\omega)=x_{1}(t_{1},\omega)+I_{1}(x_{1}(t_{1},\omega),y _{1}(t_{1},\omega)), \\ y(t_{1}^{+},\omega)=y_{1}(t_{1},\omega)+\bar{I}_{1}(x_{1}(t_{1}, \omega),y_{1}(t_{1},\omega)). \end{cases} $$
Consider the space
$$C_{*}\bigl([t_{1},t_{2}], X\bigr)=\bigl\{ y\in C\bigl((t_{1},t_{2}],X\bigr): y\bigl(t_{1}^{+} \bigr) \mbox{ exists}\bigr\} . $$
Consider the operator \(N_{2}: C_{*}([t_{1}, t_{2}],X) \times C_{*}([t _{1},t_{2}],X)\times\Omega\longrightarrow C_{*}([t_{1},t_{2}],X) \times C_{*}([t_{1},t_{2}],X) \),
$$(x,y)\longmapsto\bigl(N_{2}^{1}(x,y,\omega),N_{2}^{2}(x,y,\omega)\bigr), $$
$$\begin{aligned} N_{2}^{1}(x,y,\omega) =& S_{1}(\omega, t-t_{1}) \bigl(x_{1}(t_{1},\omega)+I _{1}\bigl(x_{1}(t_{1},\omega),y_{1}(t_{1},\omega)\bigr)\bigr) \\ &{}+ \int_{t_{1}}^{t} S_{1}(\omega,t-s)f_{1}\bigl(s, x(s,\omega),y(s,\omega), \omega\bigr)\,ds, \quad t\in(t_{1},t_{2}], \end{aligned}$$
$$\begin{aligned} N_{2}^{2}(x,y,\omega) =&S_{2}(\omega, t-t_{1}) \bigl(y_{1}(t_{1},\omega)+\overline{I}_{1}\bigl(x _{1}(t_{1},\omega),y_{1}(t_{1},\omega)\bigr)\bigr) \\ &{}+\int_{t_{1}}^{t}S_{2}(\omega,t-s)f_{2}\bigl(s,x(s,\omega),y(s,\omega),\omega\bigr)\,ds, \quad t\in(t_{1},t_{2}]. \end{aligned}$$
As in Step 1, \(N_{2}\) is a random operator on \(C_{*}([t_{1},t_{2}],X) \times C_{*}([t _{1},t_{2}],X)\). Now we show that \(N_{2}\) satisfies all the conditions of Theorem 2.4 on \(C_{*}([t_{1},t_{2}],X)\times C_{*}([t _{1},t_{2}],X)\).
Let \((x,y),(\widetilde{x},\widetilde{y})\in C_{*}([t_{1},t_{2}],X) \times C_{*}([t_{1},t_{2}],X)\), then
$$\begin{aligned}& \bigl\vert N_{2}^{1}\bigl(x(\cdot ,\omega),y(\cdot ,\omega),\omega \bigr)-N_{2}^{1}\bigl(\widetilde{x}(\cdot ,\omega), \widetilde{y}(\cdot ,\omega),\omega\bigr)\bigr\vert \\& \quad \leq K_{1}(\omega) \int_{t_{1}}^{t} p_{1}(\omega,s)\bigl\vert x(s,\omega)-\widetilde{x}(s,\omega)\bigr\vert \,ds \\& \qquad {}+K_{1}(\omega) \int_{t_{1}}^{t}p_{2}(\omega,s)\bigl\vert y(s,\omega)-\widetilde{y}(s,\omega)\bigr\vert \,ds. \end{aligned}$$
Hence
$$\begin{aligned}& \bigl\Vert N_{2}^{1}\bigl(x(\cdot ,\omega),y(\cdot ,\omega),\omega \bigr)-N_{2}^{1}\bigl(\widetilde{x}(\cdot ,\omega), \widetilde{y}(\cdot ,\omega),\omega\bigr)\bigr\Vert _{**} \\& \quad \leq \frac{K_{1}(\omega)}{\tau} \Vert x-\widetilde{x}\Vert _{**}+ \frac{K _{1}(\omega)}{\tau} \Vert y-\widetilde{y}\Vert _{**}, \end{aligned}$$
where
$$\Vert x\Vert _{**}= \sup _{t\in(t_{1},t_{2}]} e^{-\tau\int_{t_{1}}^{t}p(\omega,s)\,ds}\bigl\vert x(t)\bigr\vert , \qquad p(\omega,t)= \sum _{i=1}^{4}p_{i}(\omega,t),\quad \tau>K_{1}(\omega)+K_{2}(\omega). $$
Similarly, we have
$$\begin{aligned} \bigl\Vert N_{2}^{2}\bigl(x(\cdot ,\omega),y(\cdot ,\omega),\omega \bigr)-N_{2}^{2}\bigl(\widetilde{x}(\cdot ,\omega),\widetilde{y}(\cdot , \omega),\omega\bigr)\bigr\Vert _{**} \leq&\frac{K_{2}(\omega)}{\tau} \Vert x-\widetilde{x}\Vert _{**}+\frac{K _{2}(\omega)}{\tau} \Vert y-\widetilde{y} \Vert _{**}. \end{aligned}$$
Consequently,
$$d_{1}\bigl(N_{2}(x,y,\omega),N_{2}(\widetilde{x}, \widetilde{y},\omega)\bigr) \leq{M}(\omega)d_{1}\bigl((x,y),( \widetilde{x},\widetilde{y})\bigr), $$
where
$$d_{1}\bigl((x,y),(\widetilde{x},\widetilde{y})\bigr)= \left( \begin{matrix} \Vert x-\widetilde{x}\Vert _{**} \\ \Vert y-\widetilde{y}\Vert _{**} \end{matrix}\right) \quad \mbox{and}\quad M(\omega)= \left( \begin{matrix} {\frac{K_{1}(\omega)}{\tau}}&\frac{K_{1}(\omega)}{\tau} \\ {\frac{K_{2}(\omega)}{\tau}} & \frac{K_{2}(\omega)}{\tau} \end{matrix}\right). $$
From Theorem 2.5 there exists a unique random solution of problem (3.3), which we denote by \((x_{2}(t,\omega),y _{2}(t,\omega))\).
Step 3. We continue this process until we arrive at the random variable \(\omega\to(x_{m+1}(\cdot ,\omega), y_{m+1}(\cdot ,\omega))\), a solution of the problem
$$ \textstyle\begin{cases} x'(t,\omega)=A_{1}(\omega)x(t,\omega)+f_{1}(t,x(t,\omega),y(t, \omega),\omega), \quad t\in(t_{m},b], \\ y'(t,\omega)=A_{2}(\omega)y(t,\omega)+f_{2}(t,x(t,\omega),y(t, \omega),\omega), \quad t\in(t_{m},b], \\ x(t_{m}^{+},\omega)=x_{m}(t_{m},\omega)+I_{m}(x_{m}(t_{m},\omega),y _{m}(t_{m},\omega)), \\ y(t_{m}^{+},\omega)=y_{m}(t_{m},\omega)+\bar{I}_{m}(x_{m}(t_{m}, \omega),y_{m}(t_{m},\omega)). \end{cases} $$
Then a random solution of problem (1.1) is defined by
$$\bigl(x(t,\omega),y(t,\omega)\bigr)= \textstyle\begin{cases} (x_{1}(t,\omega),y_{1}(t,\omega)), & \mbox{if } t\in[0,t_{1}], \\ (x_{2}(t,\omega),y_{2}(t,\omega)), & \mbox{if } t\in(t_{1},t_{2}], \\ \quad \vdots \\ (x_{m+1}(t,\omega),y_{m+1}(t,\omega)), & \mbox{if } t\in(t_{m},b]. \end{cases} $$

3.2 Existence and compactness results

In this subsection, we prove the existence and compactness of the solution set of problem (1.1). To this end, we assume that the \(C_{0}\)-semigroups \(S_{1}(\cdot ,t)\), \(S_{2}(\cdot ,t)\) are compact for \(t>0\).

We consider the following set of hypotheses:
  1. (H3)

    The functions \(f_{1}\) and \(f_{2}\) are random Carathéodory on \([0,b]\times X\times X\times\Omega\).

  2. (H4)
    There exist bounded measurable functions \(\gamma_{1}, \gamma_{2}: \Omega\to L^{1}([0,b],\mathbb{R}_{+})\) and nondecreasing continuous functions \(\psi_{1},\psi_{2}: \mathbb{R}_{+}\to(0, \infty)\) such that
    $$\bigl\vert f_{1}(t,x,y,\omega)\bigr\vert \leq \gamma_{1}(t,\omega)\psi_{1}\bigl(\vert x\vert +\vert y\vert \bigr) \quad \mbox{a.e. } t\in[0,b] $$
    $$\bigl\vert f_{2}(t,x,y,\omega)\bigr\vert \leq\gamma_{2}(t, \omega)\psi_{2}\bigl(\vert x\vert +\vert y\vert \bigr) \quad \mbox{a.e. } t\in[0,b] $$
    for all \(\omega\in\Omega\) and \(x,y\in X\).
  3. (H5)
    There exist constants \(\alpha_{i} \geq0\) and \(\lambda_{i} \geq1\) for \(i\in\{1,2\}\) such that
    $$\bigl\Vert S_{i}(\omega,t)\bigr\Vert \leq\lambda_{i} e^{\alpha_{i} t}\quad \textit{for all } \omega\in\Omega \textit{ and } t\geq0. $$
  4. (H6)
    There exist constants \(c_{k}\), \(\bar{c}_{k} >0\), \(k=1,\ldots,m\), and continuous functions \(\phi_{k}\), \(\overline{ \phi}_{k} : \mathbb{R}_{+}\longrightarrow\mathbb{R}_{+}\) such that
    $$\begin{aligned}& \bigl\vert I_{k}(x,y)\bigr\vert \leq c_{k} \phi_{k}\bigl(\vert x\vert +\vert y\vert \bigr)\quad \mbox{for all } x,y \in X, \\& \bigl\vert \overline{I}_{k}(x,y)\bigr\vert \leq \bar{c}_{k} \overline{\phi}_{k}\bigl(\vert x\vert +\vert y\vert \bigr)\quad \mbox{for all } x,y\in X. \end{aligned}$$
The following result is known as the Gronwall-Bihari theorem.

Lemma 3.1


Let \(u, \bar{g}\colon [a,b]\longrightarrow \mathbb{R}\) be positive real continuous functions. Assume that there exist \(c>0\) and a continuous nondecreasing function \(\phi: \mathbb{R}_{+}\longrightarrow(0,+\infty)\) such that
$$u(t)\le c+ \int_{a}^{t} \bar{g}(s)\phi\bigl(u(s)\bigr)\,ds,\quad \forall t \in[a,b]. $$
Then
$$u(t)\le H^{-1} \biggl( \int_{a}^{t} \bar{g}(s)\,ds \biggr),\quad \forall t\in[a,b], $$
provided that
$$\int_{c}^{+\infty} \frac{dy}{\phi(y)}> \int_{a}^{b}\bar{g}(s)\,ds. $$
Here, \(H^{-1}\) refers to the inverse of the function \(H(u)=\int_{c} ^{u} \frac{dy}{\phi(y)}\) for \(u\ge c\).
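In the classical case \(\phi(u)=u\), the lemma reduces to the usual Gronwall estimate: \(H(u)=\log(u/c)\), \(H^{-1}(v)=ce^{v}\), and the bound becomes \(u(t)\le c\exp(\int_{a}^{t}\bar{g}(s)\,ds)\). The sketch below (our own illustration; the constant kernel is our choice) checks that the extremal solution of the integral equation attains this bound:

```python
# Sanity check of Lemma 3.1 in the classical case phi(u) = u: then
# H(u) = log(u/c), H^{-1}(v) = c*e^v, and the bound is the Gronwall estimate
# u(t) <= c * exp(int_a^t g(s) ds).
import math

c, a_ = 1.0, 0.0
g_const = 0.5                      # constant kernel g(s) = 1/2 (our choice)

def u(t):
    # solution of the integral *equation* u = c + int_a^t g*u ds, i.e. the
    # extremal case in which the lemma's inequality holds with equality
    return c * math.exp(g_const * (t - a_))

def Hinv(v):
    return c * math.exp(v)         # inverse of H(u) = log(u/c)

t = 1.7
bound = Hinv(g_const * (t - a_))   # H^{-1}( int_a^t g(s) ds )
print(abs(u(t) - bound))           # equality case: difference is ~0

# The proviso int_c^inf dy/phi(y) > int_a^b g(s) ds holds here because
# int_c^inf dy/y diverges.
```

For superlinear φ the proviso becomes a genuine restriction: \(\int_{c}^{\infty}dy/\phi(y)\) is finite and the bound is only valid while the right-hand side stays below it, which is exactly the role of the integral condition in Theorem 3.2.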

Now, we give our existence and compactness results for problem (1.1).

Theorem 3.2

Assume that (H3)-(H6) are satisfied and
$$\int_{0}^{b} \Gamma(s,\omega)\,ds < \int_{c} ^{\infty}\frac{du}{ \psi(u)}\quad \textit{for all } \omega\in\Omega, $$
where
$$c=\lambda_{1}e^{\alpha_{1} b} \bigl\vert \varphi_{1}( \omega)\bigr\vert + \lambda_{2}e ^{\alpha_{2} b} \bigl\vert \varphi_{2}(\omega)\bigr\vert , \qquad \psi=\psi_{1}+ \psi_{2} \quad \textit{and} \quad \Gamma=\gamma_{1}+\gamma_{2}. $$
Then problem (1.1) has a random solution defined on \([0,b]\).


Consider the operator \(T: PC([0, b],X) \times PC([0,b],X)\times\Omega \longrightarrow PC([0, b],X) \times PC([0, b],X) \),
$$(x,y)\longmapsto\bigl(T_{1}(x,y,\omega),T_{2}(x,y,\omega) \bigr),\quad (x,y) \in PC\times PC, $$
$$\begin{aligned} T_{1}\bigl(x(t,\omega),y(t,\omega),\omega\bigr) =&S_{1}( \omega, t)\varphi _{1}(\omega) + \int_{0}^{t} S_{1}(\omega,t-s)f_{1} \bigl(s,x(s,\omega),y(s, \omega),\omega\bigr)\,ds \\ &{}+\sum_{0< t_{k}< t} S_{1}(\omega,t-t_{k}) I_{k}\bigl(x(t_{k},\omega),y(t _{k},\omega)\bigr),\quad t\in[0,b] \end{aligned}$$
$$\begin{aligned} T_{2}\bigl(x(t,\omega),y(t,\omega),\omega\bigr) =& S_{2}( \omega, t) \varphi _{2}(\omega)+ \int_{0}^{t} S_{2}(\omega,t-s) f_{2}\bigl(s,x(s,\omega),y(s, \omega),\omega\bigr)\,ds \\ & {}+\sum_{0< t_{k}< t} S_{2}(\omega,t-t_{k}) \bar{I}_{k}\bigl(x(t_{k},\omega),y(t_{k},\omega) \bigr),\quad t\in[0,b]. \end{aligned}$$
Clearly, fixed points of the operator T are random mild solutions of problem (1.1). For fixed \(\omega\in\Omega\), define \(T_{\omega}: PC\times PC \to PC\times PC\) by
$$T_{\omega}\bigl(x(t,\omega),y(t,\omega)\bigr)= \bigl(T_{1} \bigl(x(t,\omega),y(t, \omega),\omega\bigr),T_{2}\bigl(x(t,\omega),y(t, \omega),\omega\bigr)\bigr),\quad (x,y) \in PC\times PC. $$
We shall show that \(T_{\omega}\) satisfies the assumptions of Theorem 2.6. We split the proof into several steps. First we show that \(T_{\omega}\) is completely continuous.

Step 1. \(T_{\omega}\) maps bounded sets into bounded sets in \(PC\times PC\).

Consider the set
$$B_{p} \times B_{q} = \left \{ (x,y)\in PC\times PC: \left \Vert \left( \begin{matrix} x\\ y \end{matrix}\right) \right \Vert \leq \left( \begin{matrix} p\\ q \end{matrix}\right) \right \} , $$
where
$$\left \Vert \left( \begin{matrix} x\\ y \end{matrix}\right) \right \Vert = \left( \begin{matrix} \Vert x\Vert _{PC}\\ \Vert y\Vert _{PC} \end{matrix}\right). $$
Let \((x,y)\in B_{p} \times B_{q}\), then for each \(t\in[0,b]\),
$$\begin{aligned} \bigl\vert T_{1}\bigl(x(t,\omega),y(t,\omega),\omega\bigr)\bigr\vert =&\biggl\vert S_{1}(\omega,t) \varphi_{1}(\omega) + \int_{0}^{t}S_{1}(\omega,t-s) f_{1}\bigl(s,x(s, \omega),y(s,\omega),\omega\bigr)\,ds \\ &{} +\sum_{0< t_{k}< t} S_{1}( \omega,t-t_{k}) I_{k}\bigl(x(t_{k}, \omega),y(t_{k},\omega)\bigr)\biggr\vert \\ \leq& \lambda_{1} e^{\alpha_{1} b}\bigl\vert \varphi_{1}( \omega)\bigr\vert + \lambda _{1}e^{\alpha_{1} b} \int_{0}^{t} \bigl\vert f_{1} \bigl(s,x(s,\omega),y(s,\omega),\omega\bigr)\bigr\vert \,ds \\ &{}+\lambda_{1}e^{\alpha_{1} b}\sum_{k=1}^{m}c_{k} \phi _{k}\bigl(\bigl\vert x(t_{k},\omega)\bigr\vert +\bigl\vert y(t_{k}, \omega)\bigr\vert \bigr) \\ \leq& \lambda_{1} e^{\alpha_{1} b}\bigl\vert \varphi_{1}(\omega) \bigr\vert +\lambda_{1} e^{ \alpha_{1}b} \int_{0}^{t} \gamma_{1}(s,\omega) \psi _{1}\bigl(\bigl\vert x(s,\omega)\bigr\vert + \bigl\vert y(s, \omega)\bigr\vert \bigr)\,ds \\ & {}+\lambda_{1}e^{\alpha_{1} b}\sum_{k=1}^{m}c_{k} \phi_{k}\bigl(\Vert x\Vert _{PC}+\Vert y\Vert _{PC}\bigr). \end{aligned}$$
Thus, setting \(\rho=p+q\), we get
$$\bigl\vert T_{1}\bigl(x(t,\omega),y(t,\omega),\omega\bigr)\bigr\vert \leq \lambda_{1}e^{ \alpha_{1} b} \Biggl(\bigl\vert \varphi_{1}(\omega)\bigr\vert +\psi_{1}(\rho) \Vert \gamma_{1}\Vert _{L_{1}} +\sum_{k=1}^{m} c_{k} \phi_{k}(\rho) \Biggr):=l_{1}< \infty. $$
Similarly, for \(T_{2}\), we have
$$\bigl\Vert T_{2}(x,y,\omega)\bigr\Vert \leq\lambda_{2}e^{\alpha_{2} b} \Biggl(\bigl\vert \varphi_{2}(\omega)\bigr\vert + \psi_{2}(\rho) \Vert \gamma_{2}\Vert _{L_{1}} +\sum _{k=1} ^{m} \bar{c}_{k} \overline{ \phi}_{k}(\rho) \Biggr):=l_{2}< \infty. $$
Step 2. \(T_{\omega}\) maps bounded sets into equicontinuous sets of \(PC\times PC\). We show that the image of the bounded set \(B_{p}\times B_{q}\) of Step 1 is an equicontinuous set of \(PC\times PC\). Let \(\tau_{1}, \tau_{2}\in[0,b]\) with \(0<\tau_{1}<\tau_{2} \leq b\), and let \((x,y)\in B_{p}\times B_{q}\). Then
$$\begin{aligned} \bigl\vert h_{1}(\tau_{1})-h_{1}( \tau_{2})\bigr\vert \leq&\bigl\vert S_{1}(\omega, \tau_{1})\varphi_{1}(\omega) - S_{1}(\omega, \tau_{2}) \varphi_{1}(\omega)\bigr\vert \\ &{}+ \int_{0}^{\tau_{1}}\bigl\Vert S_{1}(\omega, \tau_{1}-s)-S_{1}(\omega,\tau_{2}-s)\bigr\Vert \bigl\vert f_{1}\bigl(s,x(s,\omega),y(s,\omega),\omega\bigr)\bigr\vert \,ds \\ &{}+\sum_{0< t_{k}< \tau_{1}}\bigl\Vert S_{1}(\omega, \tau_{1}-t_{k})-S_{1}(\omega,\tau_{2}-t_{k}) \bigr\Vert \bigl\vert I_{k}\bigl(x(t_{k},\omega),y(t_{k},\omega)\bigr)\bigr\vert \\ &{}+ \int_{\tau_{1}}^{\tau_{2}}\bigl\Vert S_{1}(\omega, \tau_{2}-s)\bigr\Vert \bigl\vert f_{1}\bigl(s,x(s, \omega),y(s,\omega),\omega\bigr)\bigr\vert \,ds \\ &{}+\sum_{\tau_{1}< t_{k}< \tau_{2}} \bigl\Vert S_{1}(\omega, \tau_{2}-t_{k})\bigr\Vert \bigl\vert I_{k} \bigl(x(t_{k},\omega),y(t_{k},\omega)\bigr)\bigr\vert . \end{aligned}$$
Hence
$$\begin{aligned} \bigl\vert h_{1}(\tau_{1})-h_{1}( \tau_{2})\bigr\vert \leq&\bigl\Vert S_{1}( \omega, \tau_{2}-\tau_{1})-I\bigr\Vert \bigl\vert S_{1}(\omega,\tau_{1})\varphi_{1}(\omega)\bigr\vert \\ &{}+ \int_{0}^{\tau_{1}}\bigl\Vert S_{1}(\omega, \tau_{2}-\tau_{1})-I\bigr\Vert \bigl\Vert S_{1}(\omega,\tau_{1}-s)\bigr\Vert \bigl\vert f_{1}\bigl(s,x(s,\omega),y(s,\omega),\omega\bigr)\bigr\vert \,ds \\ &{}+ \int_{\tau_{1}} ^{\tau_{2}} \bigl\Vert S_{1}(\omega, \tau_{2}-s)\bigr\Vert \bigl\vert f_{1}\bigl(s,x(s, \omega),y(s,\omega),\omega\bigr)\bigr\vert \,ds \\ &{}+\sum_{0< t_{k}< \tau_{1}} \bigl\Vert S_{1}(\omega, \tau_{2} - \tau_{1})-I\bigr\Vert \bigl\Vert S_{1}(\omega,\tau_{1}-t_{k})\bigr\Vert \bigl\vert I_{k}\bigl(x(t_{k},\omega),y(t_{k},\omega) \bigr)\bigr\vert \\ &{}+ \sum_{\tau_{1}< t_{k}< \tau_{2}} \bigl\Vert S_{1}(\omega, \tau_{2}-t_{k})\bigr\Vert \bigl\vert I_{k} \bigl(x(t_{k},\omega),y(t_{k},\omega)\bigr)\bigr\vert , \end{aligned}$$
where
$$h_{1}(\tau_{i})=T_{1}\bigl(x(\tau_{i},\omega),y(\tau_{i},\omega),\omega\bigr),\quad i=1,2. $$
By (H5), we get
$$\begin{aligned} \bigl\vert h_{1}(\tau_{1})-h_{1}( \tau_{2})\bigr\vert \leq& \lambda_{1} e^{\alpha_{1} b} \bigl\Vert S_{1}(\omega,\tau_{2}-\tau_{1})-I \bigr\Vert \\ &{}+ \lambda_{1} e^{\alpha_{1} b} \psi_{1}(p+q) \bigl\Vert S_{1}(\omega,\tau_{2}-\tau_{1})-I\bigr\Vert \int_{0}^{\tau_{1}} \gamma_{1}(s,\omega)\,ds \\ &{}+\lambda_{1} e^{\alpha_{1} b}\psi_{1}(p+q) \int_{\tau_{1}}^{\tau_{2}} \gamma_{1}(s,\omega)\,ds \\ &{}+\lambda_{1} e^{\alpha_{1} b} \sum_{0< t_{k}< \tau_{1}} c_{k} \phi _{k}( p+q)\bigl\Vert S_{1}(\omega, \tau_{2}-\tau_{1})-I\bigr\Vert \\ &{}+ \sum_{\tau_{1}< t_{k}< \tau_{2}}c_{k} \phi_{k}( p+q)\bigl\Vert S_{1}(\omega,\tau_{2}-t_{k}) \bigr\Vert . \end{aligned}$$
Since \(\{S_{1}(\omega,t)\}_{t\geq0}\) is uniformly continuous, \(\Vert S_{1}(\omega,h)-I\Vert \to0\) as \(h \to0^{+}\). Thus the right-hand side tends to zero as \(\tau_{2} \to\tau_{1}\). This proves equicontinuity for the case \(t\neq t_{i}\), \(i=1,\ldots,m\).

Now we prove equicontinuity at \(t=t_{i}^{-}\). Fix \(\xi_{1} >0\) such that \(\{t_{k} : k\neq i\}\cap[t_{i}-\xi_{1},t_{i}+\xi_{1}] =\emptyset \).

For \(0< \varepsilon< \xi_{1}\), we get
$$\begin{aligned} \bigl\vert h_{1}(t_{i})-h_{1}(t_{i}- \varepsilon)\bigr\vert \leq& \bigl\vert S_{1}( \omega,t_{i})\varphi_{1}(\omega) - S_{1}( \omega,t_{i}-\varepsilon) \varphi_{1}(\omega)\bigr\vert \\ &{}+\psi_{1}(p+q) \int_{0}^{t_{i}-\varepsilon}\bigl\Vert S_{1}( \omega,t_{i}-s)-S_{1}(\omega,t_{i}-\varepsilon-s) \bigr\Vert \gamma_{1}(s,\omega)\,ds \\ &{}+\sum_{k=1} ^{i-1}\bigl\Vert S_{1}(\omega,t_{i}-t_{k})-S_{1}( \omega,t_{i}-\varepsilon-t_{k})\bigr\Vert c_{k} \phi_{k}(p+q) \\ &{}+\psi_{1}(p+q) \int^{t_{i}} _{t_{i}-\varepsilon} \bigl\Vert S_{1}( \omega,t_{i}-s)\bigr\Vert \gamma_{1}(s, \omega)\,ds. \end{aligned}$$
The right-hand side tends to zero as \(\varepsilon\longrightarrow0\). Next we prove equicontinuity at \(t=t_{i}^{+}\). Fix \(\xi_{2}>0\) such that \(\{t_{k}:k\neq i\}\cap [t_{i}-\xi_{2},t_{i}+\xi_{2}] =\emptyset \). For \(0<\varepsilon< \xi_{2}\), we have
$$\begin{aligned} \bigl\vert h_{1}(t_{i}+\varepsilon)-h_{1}(t_{i}) \bigr\vert \leq& \bigl\vert S_{1}(\omega,t_{i}+ \varepsilon)\varphi_{1}(\omega) - S_{1}( \omega,t_{i}) \varphi_{1}(\omega)\bigr\vert \\ &{}+\psi_{1}(p+q) \int_{0}^{t_{i}}\bigl\Vert S_{1}( \omega,t_{i}+\varepsilon-s)-S_{1}(\omega,t_{i}-s) \bigr\Vert \gamma_{1}(s,\omega)\,ds \\ &{}+\sum_{0< t_{k}< t_{i}} \bigl\Vert S_{1}( \omega,t_{i}+\varepsilon-t_{k})-S_{1}( \omega,t_{i}-t_{k})\bigr\Vert c_{k} \phi_{k}(p+q) \\ &{}+\psi_{1}(p+q) \int_{t_{i}}^{t_{i}+\varepsilon}\bigl\Vert S_{1}( \omega,t_{i}+\varepsilon-s)\bigr\Vert \gamma_{1}(s,\omega)\,ds \\ &{}+\sum_{t_{i}< t_{k}< t_{i}+\varepsilon} c_{k}\bigl\Vert S_{1}(\omega,t_{i}+\varepsilon-t_{k}) \bigr\Vert \phi_{k}(p+q). \end{aligned}$$
The right-hand side tends to zero as \(\varepsilon \longrightarrow0\). In a similar way we can prove the equicontinuity of \(T_{2}(B_{p}\times B_{q})\).
Step 3. Now we prove that \(T_{\omega}(B_{p}\times B_{q})(t)\) is relatively compact in \(X\times X\) for each \(t \in[0,b]\). For \(0< \varepsilon<t\) and \(t\in[0,b]\), let
$$T_{\varepsilon}(x,y,\omega)=\bigl(T_{1} ^{\varepsilon}(x,y,\omega), T _{2} ^{\varepsilon}(x,y,\omega)\bigr), $$
where
$$\begin{aligned}& T_{1} ^{\varepsilon}\bigl(\omega,x(t,\omega),y(t,\omega)\bigr) \\& \quad =S_{1}( \omega, t)\varphi_{1}(\omega) + \int_{0}^{t-\varepsilon} S_{1}( \omega,t-s)f_{1}\bigl(s,x(s,\omega),y(s,\omega),\omega\bigr)\,ds \\& \qquad {}+\sum_{0< t_{k}< t-\varepsilon} S_{1}(\omega,t-t_{k}) I_{k}\bigl(x(t_{k},\omega),y(t_{k},\omega) \bigr) \\& \quad =S_{1}(\omega, t)\varphi_{1}(\omega) +S_{1}( \omega,\varepsilon) \int_{0}^{t-\varepsilon} S_{1}(\omega,t-s- \varepsilon)f_{1}\bigl(s,x(s, \omega),y(s,\omega),\omega\bigr)\,ds \\& \qquad {}+S_{1}(\omega,\varepsilon)\sum_{0< t_{k}< t-\varepsilon} S_{1}( \omega,t-t_{k}-\varepsilon) I_{k}\bigl(x(t_{k}, \omega),y(t_{k},\omega)\bigr) \end{aligned}$$
$$\begin{aligned}& T_{2} ^{\varepsilon}\bigl(\omega,x(t,\omega),y(t,\omega)\bigr) \\& \quad = S_{2}( \omega, t) \varphi_{2}(\omega)+ S_{2}( \omega,\varepsilon) \int _{0}^{t-\varepsilon} S_{2}(\omega,t-s- \varepsilon) f_{2}\bigl(s,x(s,\omega),y(s,\omega),\omega\bigr)\,ds \\& \qquad {} +S_{2}(\omega,\varepsilon)\sum_{0< t_{k}< t-\varepsilon} S_{2}( \omega,t-t_{k}-\varepsilon) \bar{I}_{k} \bigl(x(t_{k},\omega),y(t_{k},\omega)\bigr). \end{aligned}$$
The compactness of the semigroup \(\{S_{i}(\omega,t)\}_{t>0}\) for \(i=1,2\) implies that the set \(T_{\varepsilon}(B_{p}\times B_{q})(t)\) is precompact. Moreover,
$$\begin{aligned}& \bigl\vert T_{1} ^{\varepsilon}\bigl(\omega,x(t,\omega),y(t, \omega)\bigr)-T_{1} \bigl(\omega,x(t,\omega),y(t,\omega)\bigr)\bigr\vert \\& \quad =\biggl\vert \int_{t-\varepsilon} ^{t} S_{1}( \omega,t-s)f_{1}\bigl(s,x(s,\omega),y(s,\omega),\omega \bigr)\,ds \\& \qquad {} +\sum _{t-\varepsilon\leq t_{k}< t} S _{1}(\omega,t-t_{k}) I_{k}\bigl(x(t_{k},\omega), y(t_{k},\omega)\bigr) \biggr\vert \\& \quad \leq \lambda_{1} e^{\alpha_{1} b} \int_{t-\varepsilon} ^{t} \gamma _{1}(s,\omega) \psi_{1}\bigl(\bigl\vert x(s,\omega)\bigr\vert + \bigl\vert y(s,\omega)\bigr\vert \bigr)\,ds \\& \qquad {}+\lambda_{1} e^{\alpha_{1} b} \sum_{t-\varepsilon\leq t_{k}< t} c_{k} \phi_{k}\bigl( \bigl\vert x(t_{k},\omega)\bigr\vert + \bigl\vert y(t_{k},\omega)\bigr\vert \bigr) \\& \quad \leq \lambda_{1} e^{\alpha_{1} b} \psi_{1}(p+q) \int _{t-\varepsilon } ^{t} \gamma_{1}(s,\omega)\,ds +\lambda_{1} e^{\alpha_{1} b} \sum_{t-\varepsilon\leq t_{k}< t} c_{k} \phi_{k}(p+q). \end{aligned}$$
The right-hand side tends to 0 uniformly in \(t\) as \(\varepsilon \longrightarrow0\). This implies that the set \(T_{1}(B_{p}\times B_{q})(t)\) is relatively compact for \(t\in[0,b]\).

In the same way, we can prove that \(T_{2}(B_{p} \times B_{q})(t)\) is also relatively compact. This implies that \(T_{\omega}(B_{p}\times B_{q})(t)\) is relatively compact.

Step 4. Now we show that the operator \(T_{\omega}\) is continuous.

Let \((x_{n},y_{n})\) be a sequence such that \((x_{n},y_{n}) \longrightarrow (x,y)\) in \(PC\times PC\) as \(n\longrightarrow\infty\). By (H4) and (H5) we obtain
$$\begin{aligned}& \bigl\vert T_{1}\bigl(x_{n}(t,\omega),y_{n}(t, \omega),\omega\bigr)-T_{1}\bigl(x(t,\omega),y(t,\omega),\omega\bigr) \bigr\vert \\& \quad \leq \lambda_{1}e^{\alpha_{1} b} \int_{0}^{t} \bigl\vert f_{1} \bigl(s,x_{n}(\omega,s),y_{n}(s,\omega),\omega \bigr)-f_{1}\bigl(s,x(s,\omega),y(s,\omega),\omega\bigr)\bigr\vert \,ds \\& \qquad {}+\lambda_{1}e^{\alpha_{1} b} \sum_{0< t_{k}< t} \bigl\vert I_{k}\bigl(\omega,x_{n}(t_{k}, \omega),y_{n}(t_{k},\omega)\bigr)-I_{k} \bigl(x(t_{k},\omega),y(t_{k},\omega)\bigr)\bigr\vert . \end{aligned}$$
Hence
$$\begin{aligned}& \bigl\Vert T_{1}\bigl(x_{n}(\cdot ,\omega),y_{n}(\cdot , \omega),\omega\bigr)-T_{1}\bigl(x(\cdot ,\omega),y(\cdot ,\omega),\omega\bigr) \bigr\Vert _{PC} \\& \quad \leq \lambda_{1}e^{\alpha_{1} b} \int_{0}^{b} \bigl\vert f_{1} \bigl(s,x_{n}(s,\omega),y_{n}(s,\omega),\omega \bigr)-f_{1}\bigl(s,x(s,\omega),y(s,\omega),\omega\bigr)\bigr\vert \,ds \\& \qquad {}+\lambda_{1}e^{\alpha_{1} b} \sum_{k=1}^{m} \bigl\vert I_{k}\bigl(x_{n}(t_{k}, \omega),y_{n}(t_{k},\omega)\bigr)-I_{k} \bigl(x(t_{k},\omega),y(t_{k},\omega)\bigr)\bigr\vert . \end{aligned}$$
Since \(f_{1}\) is a Carathéodory function, by the Lebesgue dominated convergence theorem and the continuity of \(I_{k}\), we get
$$\bigl\Vert T_{1}(x_{n},y_{n}, \omega)-T_{1}(x,y,\omega)\bigr\Vert _{PC} \longrightarrow 0 \quad \mbox{as } n\longrightarrow\infty. $$
Similarly,
$$\bigl\Vert T_{2}(x_{n},y_{n}, \omega)-T_{2}(x,y,\omega)\bigr\Vert _{PC} \longrightarrow 0 \quad \mbox{as } n\longrightarrow\infty. $$
Thus T is continuous.
Step 5. Now, we show that the set
$${\mathcal {M}} =\bigl\{ (x,y)\in PC\times PC : (x,y)=\lambda(\omega) T(x,y), \lambda( \omega)\in(0,1)\bigr\} $$
is bounded, where \(\lambda:\Omega \longrightarrow \mathbb{R}\) is a measurable function with \(0<\lambda(\omega)<1\).
Let \((x,y) \in{ \mathcal {M}}\), then for each \(t\in[0,t_{1}]\),
$$x(t,\omega)=\lambda(\omega)T_{1}\bigl(x(t,\omega),y(t,\omega),\omega \bigr),\qquad y(t,\omega)=\lambda(\omega) T_{2}\bigl(x(t,\omega),y(t,\omega), \omega\bigr) $$
where
$$\begin{aligned} T_{1}\bigl(x(t,\omega),y(t,\omega),\omega\bigr) =&S_{1}( \omega, t)\varphi _{1}(\omega) + \int_{0}^{t} S_{1}(\omega,t-s)f_{1} \bigl(s,x(s,\omega),y(s, \omega),\omega\bigr)\,ds \end{aligned}$$
$$\begin{aligned} T_{2}\bigl(x(t,\omega),y(t,\omega),\omega\bigr) =& S_{2}( \omega, t) \varphi _{2}(\omega)+ \int_{0}^{t} S_{2}(\omega,t-s) f_{2}\bigl(s,x(s,\omega),y(s, \omega),\omega\bigr)\,ds. \end{aligned}$$
Since \(0<\lambda(\omega)<1\), we have
$$\begin{aligned} \bigl\vert x(t,\omega)\bigr\vert =& \bigl\vert \lambda(\omega) T_{1}\bigl(\omega,x(t,\omega),y(t,\omega)\bigr)\bigr\vert \\ \leq& \bigl\vert \lambda(\omega)\bigr\vert \biggl( \bigl\vert S_{1}(\omega,t)\varphi_{1}(\omega)\bigr\vert + \int_{0}^{t}\bigl\Vert S_{1}( \omega,t-s)\bigr\Vert \bigl\vert f_{1}\bigl(s, x(s,\omega), y(s, \omega),\omega\bigr)\bigr\vert \,ds \biggr) \\ \leq& \lambda_{1}e^{\alpha_{1} b} \bigl\vert \varphi_{1}(\omega)\bigr\vert + \lambda _{1} e^{\alpha_{1} b} \int_{0}^{t} \gamma_{1}(s,\omega) \psi_{1}\bigl(\bigl\vert x(s,\omega)\bigr\vert +\bigl\vert y(s, \omega)\bigr\vert \bigr)\,ds. \end{aligned}$$
Similarly,
$$\bigl\vert y(t,\omega)\bigr\vert \leq\lambda_{2}e^{\alpha_{2} b} \bigl\vert \varphi_{2}(\omega)\bigr\vert + \lambda_{2} e^{\alpha_{2} b} \int_{0}^{t} \gamma_{2}(s,\omega) \psi _{2}\bigl(\bigl\vert x(s,\omega)\bigr\vert +\bigl\vert y(s, \omega)\bigr\vert \bigr)\,ds. $$
By the two above inequalities, we get
$$\bigl\vert x(t,\omega)\bigr\vert +\bigl\vert y(t,\omega)\bigr\vert \leq c + \lambda_{1} e^{\alpha_{1} b} \int_{0}^{t} \Gamma(s,\omega) \psi\bigl(\bigl\vert x(s,\omega)\bigr\vert +\bigl\vert y(s,\omega)\bigr\vert \bigr)\,ds. $$
Applying the Bihari lemma, we obtain
$$ \bigl\vert x(t,\omega)\bigr\vert +\bigl\vert y(t,\omega) \bigr\vert \leq H^{-1} \biggl( \int_{0} ^{t} \Gamma(s, \omega)\,ds \biggr)\quad \mbox{for each } t\in[0,b], $$
where
$$H(u)= \int_{c} ^{u} \frac{dy}{\psi(y)}. $$
Finally from (3.5) there exists a constant \(\sigma>0\) such that
$$\Vert x\Vert _{PC} \leq\sigma \quad \mbox{and}\quad \Vert y\Vert _{PC} \leq\sigma. $$
This shows that \({\mathcal {M}}\) is bounded. Thus, by Theorem 2.6, the operator T has at least one fixed point, which is a random mild solution of problem (1.1). □

4 Random Sadovskii’s fixed point theorem type

In this section, we present a random version of Sadovskii's fixed point theorem in a vector-valued Banach space. First, we give definitions and properties of a measure of noncompactness.

Definition 4.1

Let X be a generalized Banach space and \((\mathcal{A},\leq)\) be a partially ordered set. A map \(\beta:\mathcal{P}(X)\rightarrow \mathcal{A}\times\mathcal{A}\times \ldots\times\mathcal{A}\) is called a generalized measure of noncompactness \((m.n.c.)\) on X if
$$\beta(\overline{\operatorname{co}}\,C)=\beta(C)\quad \mbox{for every }C\in\mathcal{P}(X), $$
where
$$\beta(C):= \left( \begin{matrix}{\beta_{1}(C)} \\ \vdots \\ {\beta_{n}(C)} \end{matrix}\right). $$

Definition 4.2

A measure of noncompactness β is called
  1. (a)

    Monotone if \(C_{0},C_{1}\in\mathcal{P}(X)\), \(C_{0}\subset C_{1}\) implies \(\beta(C_{0})\leq\beta(C_{1})\).

  2. (b)

    Nonsingular if \(\beta(\{a\}\cup C)=\beta(C)\) for every \(a\in X\), \(C\in\mathcal{P}(X)\).

  3. (c)

    Invariant with respect to the union with compact sets if \(\beta(K\cup C)=\beta(C)\) for every relatively compact set \(K\subset X\), and \(C\in\mathcal{P}(X)\).

  4. (d)

Real if \(\mathcal{A}= \overline{\mathbb{R}}_{+}\) and \(\beta_{i}(C)< \infty\) for every \(i=1,\ldots,n\) and every bounded C.

  5. (e)

    Semi-additive if \(\beta(C_{0}\cup C_{1})=\max(\beta(C _{0}),\beta(C_{1}))\) for every \(C_{0},C_{1}\in\mathcal{P}(X)\).

  6. (f)

    Lower-additive if β is real and \(\beta(C_{0}+C_{1}) \leq\beta(C_{0})+\beta(C_{1})\) for every \(C_{0},C_{1}\in \mathcal{P}(X)\).

  7. (g)

    Regular if the condition \(\beta(C)=0\) is equivalent to the relative compactness of C.

A typical example of an \(MNC\) is the Hausdorff measure of noncompactness χ defined, for all bounded \(C\subset X\), by
$$\chi(C):= \inf\bigl\{ \epsilon\in \mathbb{R}_{+}^{n}: C \mbox{ has a finite } \epsilon \mbox{-net}\bigr\} . $$
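For orientation, two standard evaluations of the (classical, scalar-valued) Hausdorff MNC are worth recalling; they are illustrations and not part of the paper's argument.

```latex
% 1. In a finite-dimensional space every bounded set is totally bounded, so
\chi(C)=0 \quad \text{for every bounded } C\subset X \text{ with } \dim X<\infty.
% 2. If X is infinite-dimensional and B_X denotes its closed unit ball, then
\chi(B_X)=1,
% since the single center 0 is an \varepsilon-net of B_X for every \varepsilon\ge 1
% (giving \chi(B_X)\le 1), while the reverse inequality \chi(B_X)\ge 1 in infinite
% dimensions is a classical result going back to Furi and Vignoli.
```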

Definition 4.3

Let X, Y be two generalized normed spaces and \(F: X\rightarrow \mathcal{P}(Y)\) be a multivalued map. F is called an M-contraction (with respect to β) if there exists \(M\in\mathcal{M}_{n\times n}(\mathbb{R}_{+})\) converging to zero such that, for every \(D\in \mathcal{P}(X)\), we have
$$\beta\bigl(F(D)\bigr)\leq M\beta(D). $$
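Whether a nonnegative matrix M "converges to zero" in the sense used here (\(M^{k}\to0\) as \(k\to\infty\)) can be checked via its spectral radius: \(M^{k}\to0\) iff \(\rho(M)<1\). A quick numerical sketch with an illustrative matrix (entries chosen for the example, not taken from the paper):

```python
import numpy as np

# A matrix converging to zero means M^k -> 0, which holds iff rho(M) < 1.
M = np.array([[0.2, 0.3],
              [0.1, 0.4]])   # illustrative entries; eigenvalues are 0.5 and 0.1

rho = max(abs(np.linalg.eigvals(M)))   # spectral radius
assert rho < 1

Mk = np.linalg.matrix_power(M, 60)     # M^60 is numerically negligible
assert np.linalg.norm(Mk) < 1e-12
print(f"rho(M) = {rho:.4f}; ||M^60|| = {np.linalg.norm(Mk):.2e}")
```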

The next result concerns β-condensing maps and M-contractions.

Theorem 4.1


Let \(V\subset X\) be a bounded closed convex subset and \(N: V\rightarrow V\) be a generalized β-condensing continuous mapping, where β is a nonsingular measure of noncompactness defined on the subsets of X. Then the set
$$\operatorname{Fix}(N)=\bigl\{ x\in V: x=N(x)\bigr\} $$
is nonempty.

As a consequence of Theorem 4.1, we present versions of Schaefer's fixed point theorem and of the nonlinear alternative of Leray-Schauder type for β-condensing operators in a generalized Banach space.

Theorem 4.2


Let E be a generalized Banach space and \(N: E\rightarrow E\) be a continuous and β-condensing operator. Moreover, assume that the set
$$A=\bigl\{ x\in E: x=\lambda N(x)\textit{ for some } \lambda\in(0,1)\bigr\} $$
is bounded. Then N has a fixed point.

Theorems 4.1 and 4.2 now allow us to establish the following random fixed point results.

Theorem 4.3

Let \((\Omega, {\mathcal {F}})\) be a measurable space, C be a closed, convex, bounded subset of a separable vector-valued Banach space, and \(F: \Omega\times C\to C\) be a continuous condensing random operator. Then F has at least one random fixed point.


Proof

Let \(\omega\in\Omega\). Consider \(F_{\omega}: C\to C\) defined by \(F_{\omega}(x)=F(\omega,x)\). By Theorem 4.1, there exists \(x(\omega)\in C\) such that
$$x(\omega)=F\bigl(\omega,x(\omega)\bigr). $$
Define \({\mathcal{T}}: \Omega\to{\mathcal{P}}_{cl}(C)\) by
$${\mathcal{T}}(\omega)=\bigl\{ x\in C: x=F(\omega,x)\bigr\} . $$
Since F is a Carathéodory function, then the function \(\Psi: \Omega\times C\to \mathbb{R}^{n}_{+}\) defined by
$$\Psi(\omega,x)=d\bigl(x, F(\omega,x)\bigr) $$
is also a Carathéodory operator. From Theorem 2.3, the multivalued map \(G_{p}\) defined by
$$\begin{aligned} \overline{G_{p}(\omega)} =& \overline{ \bigl\{ x\in C: x-F(\omega,x)\in B ( 0,\epsilon_{p} ) \bigr\} }, \qquad \epsilon_{p}= \left( \begin{matrix} {\frac{1}{p}} \\ \vdots \\ {\frac{1}{p}} \end{matrix}\right),\quad p\in \mathbb{N}, \end{aligned}$$
is measurable, and
$${\mathcal{T}}(\omega)=\bigcap_{p=1}^{\infty} \overline{G_{p}(\omega)},\quad \omega\in\Omega. $$
From Theorem 2.1, there exists a measurable selection \(x: \Omega\to C\) of \({\mathcal{T}}\) which is a random fixed point of F. □

We can also prove the following result.

Theorem 4.4

Let X be a separable generalized Banach space, and let \(F: \Omega \times X\to X\) be a condensing continuous random operator. Then either of the following holds:
  1. (i)

    The random equation \(F(\omega,x) = x\) has a random solution, i.e., there is a measurable function \(x: \Omega\to X\) such that \(F(\omega,x(\omega)) = x(\omega)\) for all \(\omega\in\Omega \), or

  2. (ii)
    The set
    $${\mathcal {M}} =\bigl\{ x:\Omega\to X \textit{ measurable} : \lambda(\omega)F\bigl( \omega,x(\omega)\bigr) = x(\omega)\bigr\} $$
    is unbounded for some measurable function \(\lambda: \Omega\to \mathbb{R}\) with \(0 <\lambda(\omega) < 1\) on Ω.

Lemma 4.1

([35], Theorem 5.1.1)

Let E be a Banach space and \(N\colon L^{1}([a,b],E)\to C([a,b],E)\) be an abstract operator satisfying the following conditions:
\(({\mathcal {S}}_{1})\)
N is ξ-Lipschitz: there exists \(\xi>0\) such that, for every \(f,g\in L^{1}([a,b],E)\),
$$\bigl\vert Nf(t)-Ng(t)\bigr\vert \leq\xi \int _{a}^{b} \bigl\vert f(s)-g(s)\bigr\vert \,ds\quad \textit{for all } t \in[a,b]. $$
\(({\mathcal {S}}_{2})\)

N is weakly-strongly sequentially continuous on compact subsets: for any compact \(K\subset E\) and any sequence \(\{f_{n}\}_{n=1}^{\infty}\subset L^{1}([a,b],E)\) such that \(\{f_{n}(t)\}_{n=1}^{\infty}\subset K\) for a.e. \(t\in[a,b]\), the weak convergence \(f_{n}\rightharpoonup f_{0}\) implies the strong convergence \(N(f_{n})\to N(f_{0})\) as \(n\to+\infty\).

Then, for every semi-compact sequence \(\{f_{n}\}_{n=1}^{\infty} \subset L^{1}([a,b],E)\), the image sequence \(N(\{f_{n}\}_{n=1}^{ \infty})\) is relatively compact in \(C([a,b],E)\).

Corollary 4.1

Let \(N: L^{1}([0,b],E)\to C([0,b],E)\) be defined by
$$N(f) (t)= \int_{0}^{t}S(t-s)f(s)\,ds,\quad t\in[0,b], $$
where \((S(t))_{t\geq0}\) is a \(C_{0}\)-semigroup. Then N satisfies \(({\mathcal {S}}_{1})\) and \(({\mathcal {S}}_{2})\).
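A numerical sketch of property \(({\mathcal {S}}_{1})\) for this operator, taking the illustrative scalar semigroup \(S(t)=e^{-t}\) (an assumption made for the example; then \(\Vert S(t)\Vert\le1\) on \([0,b]\), so \(\xi=1\) works):

```python
import math

# Sketch of (S1) for N(f)(t) = int_0^t S(t-s) f(s) ds with S(t) = exp(-t):
# |Nf(t) - Ng(t)| <= int_0^t e^{-(t-s)} |f-g| ds <= 1 * int_0^b |f-g| ds.
b, n = 1.0, 400
h = b / n
grid = [k * h for k in range(n + 1)]

def N(f):
    """Left Riemann-sum discretization of t -> int_0^t e^{-(t-s)} f(s) ds."""
    vals = []
    for i, t in enumerate(grid):
        vals.append(h * sum(math.exp(-(t - grid[j])) * f(grid[j]) for j in range(i)))
    return vals

f = lambda s: math.sin(5 * s)
g = lambda s: math.cos(3 * s)

lhs = max(abs(a - c) for a, c in zip(N(f), N(g)))   # sup_t |Nf(t) - Ng(t)|
rhs = h * sum(abs(f(s) - g(s)) for s in grid[:-1])  # xi * int_0^b |f - g|, xi = 1
assert lhs <= rhs + 1e-9
print(f"sup|Nf-Ng| = {lhs:.4f} <= xi * int|f-g| = {rhs:.4f}")
```

The discrete inequality holds exactly because the kernel \(e^{-(t-s)}\) never exceeds 1 for \(s\le t\).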

Lemma 4.2

([35], Theorem 5.2.2)

Let an operator \(N\colon L^{1}([a,b],E)\to C([a,b],E)\) satisfy conditions \(({\mathcal {S}}_{1})\)-\(({\mathcal {S}}_{2})\) together with
\(({\mathcal {S}}_{3})\)
There exists \(\eta\in L^{1}([a,b])\) such that, for every integrably bounded sequence \(\{f_{n}\}_{n=1}^{\infty}\), we have
$$\chi\bigl(\bigl\{ f_{n}(t)\bigr\} _{n=1}^{\infty}\bigr) \leq\eta(t)\quad \textit{for a.e. } t \in[a,b], $$
where χ is the Hausdorff \(MNC\).
Then
$$\chi\bigl(\bigl\{ N(f_{n}) (t)\bigr\} _{n=1}^{\infty} \bigr)\leq2\xi \int _{a}^{b}\eta(s)\,ds \quad \textit{for all } t \in[a,b], $$
where ξ is the constant in \(({\mathcal {S}}_{1})\).
Now we give our main existence result for problem (1.1) without assuming compactness of the \(C_{0}\)-semigroup; instead, we assume that there exists \(M>0\) such that
$$\bigl\Vert S(t)\bigr\Vert \leq M\quad \mbox{for all } t\in[0,b]. $$
We will need to introduce the following hypothesis which is assumed thereafter:
  1. (H7)
    There exist random functions \(p_{i}, \overline{p}_{i}: \Omega\to L^{1}([0,b],\mathbb{R}_{+})\) such that, for every bounded D, \(D'\) in X and a.e. \(t\in[0,b]\),
    $$\chi\bigl(f_{i}\bigl(t,D, D',\omega\bigr)\bigr)\leq \overline{p}_{i}(t,\omega)\chi(D)+p _{i}(t,\omega)\chi \bigl(D'\bigr). $$

Theorem 4.5

Under the conditions of Theorem 3.2 and (H7), problem (1.1) has at least one random mild solution.


Proof

We study problem (1.1) successively on the intervals \([0,t_{1}]\), \((t_{1},t_{2}]\), \(\ldots\) , \((t_{m},b]\). The proof is given in three steps and then continued by induction.

Step 1. It is clear that the random mild solutions of problem (3.2) are fixed points of the operator \(N_{1}\) defined in Theorem 3.1. In order to apply Theorem 4.2, we first prove that \(N_{1}\) is a \(\beta_{0,1}\)-condensing operator for a suitable \(MNC\). Given a bounded subset \(D\subset C([0,t_{1}],X)\), let \(\operatorname{mod}_{C}(D)\) denote the modulus of quasi-equicontinuity of the set of functions D:
$$\operatorname{mod}_{C}(D)= \lim _{\delta\to0} \sup _{x\in D} \max _{\vert \tau_{2}-\tau_{1}\vert \leq\delta} \bigl\vert x(\tau_{1})-x(\tau_{2})\bigr\vert . $$
It is well known (see, e.g., Example 2.1.2 in [35]) that \(\operatorname{mod}_{C}(D)\) defines an \(MNC\) in \(C([0,t_{1}],X)\), which satisfies all of the properties in Definition 4.2 except regularity. Given the Hausdorff \(MNC\) χ, let \(\overline{\gamma}_{1}\) be the real \(MNC\) defined on bounded subsets on \(C([0,t_{1}],X)\) by
$$\overline{\gamma}_{1}(D)= \sup _{t\in[0,t_{1}]}e^{-2M\tau \int_{0}^{t}p(s,\omega)\,ds}\chi \bigl(D(t)\bigr),\qquad p(\cdot ,\omega)=p_{1}(\cdot , \omega)+p_{2}(\cdot , \omega). $$
Finally, define the following \(MNC\) on bounded subsets of \(D\times D _{*}\subset C([0,t_{1}],X)\times C([0,t_{1}],X)\) by
$$\begin{aligned}& \beta_{0,1}(D\times D_{*}):= \left( \begin{matrix} {\beta_{1}(D)} \\ {\beta_{1}(D_{*})} \end{matrix}\right), \\& \beta_{1}(D)= \max _{D\in\Delta(C([0,t_{1}],X))}\bigl(\overline{\gamma}_{1}(D),\operatorname{mod}_{C}(D)\bigr), \end{aligned}$$
where \(\Delta(C([0,t_{1}],X))\) is the collection of all denumerable subsets of \(C([0,t_{1}],X)\). Then the \(MNC\) \(\beta_{0,1}\) is monotone, regular, and nonsingular (see Example 2.1.4 in [35]). This measure is also used in [3, 36] in the discussion of semilinear evolution differential inclusions. To show that \(N_{1}\) is \(\beta_{0,1}\)-condensing, let \(B=D\times D_{*}\subset C([0,t_{1}],X) \times C([0,t _{1}],X) \) be a bounded set such that
$$ \beta_{0,1}(B)\leq\beta_{0,1}\bigl(N(B)\bigr). $$
We will show that B is relatively compact. Let \(\{(x_{n},y_{n}): n \in \mathbb{N}\}\subset B\), and let \(N_{1}^{i}=L_{1}^{i}+L_{2}^{i}\), \(i=1,2\), where \(L_{1}^{i}: C([0,t_{1}],X)\to C([0,t_{1}],X)\) is defined by
$$L_{1}^{1}\bigl(x(t,\omega),y(t,\omega) \bigr)=S(t)x_{0}(\omega), \qquad L_{1} ^{2}\bigl(x(t, \omega),y(t,\omega)\bigr)=S(t)y_{0}(\omega), $$
and \(L_{2}^{i}: C([0,t_{1}],X)\to C([0,t_{1}],X)\) is defined by
$$L_{2}^{i}\bigl(x(t,\omega),y(t,\omega)\bigr)= \int _{0}^{t}S(t-s)f_{i}\bigl(s,x(s, \omega),y(s, \omega),\omega\bigr)\,ds,\quad t\in[0,t_{1}], i=1,2. $$
From assumption (H7), it holds that for a.e. \(t\in[0,t_{1}]\),
$$\begin{aligned}& \chi \biggl(f_{1} \biggl(s, \bigcup_{n\in \mathbb{N}} \bigl\{ x_{n}(s,\omega) \bigr\} , \bigcup_{n\in \mathbb{N}} \bigl\{ y _{n}(s,\omega) \bigr\} ,\omega \biggr) \biggr) \\& \quad\leq\chi \bigl(f_{1} \bigl(s, \bigl\{ x_{n}(s,\omega) \bigr\} _{n\in \mathbb{N}}, \bigl\{ y_{n}(s,\omega) \bigr\} _{n\in \mathbb{N}}, \omega \bigr) \bigr) \\& \quad\leq p_{1}(s,\omega)\chi \bigl( \bigl\{ x_{n}(s,\omega) \bigr\} _{n\in \mathbb{N}} \bigr) +\overline{p}_{1}(s,\omega)\chi \bigl( \bigl\{ y_{n}(s,\omega) \bigr\} _{n\in \mathbb{N}} \bigr) \\& \quad\leq p_{1}(s,\omega)e^{2M\tau\int_{0}^{s}p(r,\omega)\,dr} e^{-2M \tau\int_{0}^{s}p(r,\omega)\,dr}\chi \bigl( \bigl\{ x_{n}(s,\omega) \bigr\} _{n\in \mathbb{N}} \bigr) \\& \qquad{}+\overline{p}_{1}(s,\omega)e^{2M\tau\int_{0}^{s}p(r, \omega)\,dr} e^{-2M\tau\int_{0}^{s}p(r,\omega)\,dr} \chi \bigl( \bigl\{ y_{n}(s, \omega) \bigr\} _{n\in \mathbb{N}} \bigr) \\& \quad\leq p_{1}(s,\omega)e^{2M\tau\int_{0}^{s}p(r,\omega)\,dr} \overline{ \gamma}_{1} \bigl( \bigl\{ x_{n} \bigr\} _{n\in \mathbb{N}} \bigr) +\overline{p}_{1}(s,\omega)e^{2M\tau\int_{0}^{s}p(r,\omega)\,dr}\overline{ \gamma}_{1} \bigl( \bigl\{ y_{n} \bigr\} _{n\in \mathbb{N}} \bigr). \end{aligned}$$
Hence
$$\begin{aligned}& \chi \biggl(f_{1} \biggl(s,\bigcup _{n\in \mathbb{N}} \bigl\{ \bigl(x_{n}(s,\omega),y_{n}(s, \omega) \bigr) \bigr\} ,\omega \biggr) \biggr) \\& \quad\leq p_{1}(s,\omega)e^{2M\tau\int_{0}^{s}p(r,\omega)\,dr} \overline{ \gamma}_{1} \bigl( \bigl\{ x_{n} \bigr\} _{n\in \mathbb{N}} \bigr) +\overline{p}_{1}(s,\omega)e^{2M\tau\int_{0}^{s}p(r,\omega )\,dr} \overline{ \gamma}_{1} \bigl( \bigl\{ y_{n} \bigr\} _{n\in \mathbb{N}} \bigr). \end{aligned}$$
Lemmas 4.1 and 4.2 imply that
$$\begin{aligned}& \chi\bigl(\bigl\{ N_{1}^{1}\bigl(x_{n}(t, \omega),y_{n}(t,\omega)\bigr)\bigr\} _{n=1}^{\infty }\bigr) \\& \quad \leq\overline{\gamma}_{1}\bigl(\{x_{n} \}_{n=1}^{\infty}\bigr)2M \int _{0}^{t}{p}_{1}(s, \omega)e^{2M\tau\int_{0}^{s}p(r,\omega)\,dr}\,ds \\& \qquad {}+\overline{\gamma}_{1}\bigl(\{y_{n}\}_{n=1} ^{\infty}\bigr)2M \int _{0}^{t}\overline{p}_{1}(s,\omega)e^{2M\tau\int_{0}^{s}p(r,\omega)\,dr}\,ds \\& \quad \leq\overline{\gamma}_{1}\bigl(\{x_{n} \}_{n=1}^{\infty}\bigr)2M \int _{0} ^{t}p(s, \omega)e^{2M\tau\int_{0}^{s}p(r,\omega)\,dr}\,ds \\& \qquad {}+\overline{\gamma}_{1}\bigl(\{y_{n}\}_{n=1}^{\infty} \bigr)2M \int _{0}^{t}p(s,\omega)e ^{2M\tau\int_{0}^{s}p(r,\omega)\,dr}\,ds. \end{aligned}$$
Hence
$$\begin{aligned} e^{-2M\tau\int_{0}^{t}p(s,\omega)\,ds}\chi\bigl(\bigl\{ N_{1}^{1}\bigl(x_{n}(t, \omega),y_{n}(t, \omega)\bigr)\bigr\} _{n=1}^{\infty} \bigr) \leq&\frac{2M}{\tau}\overline{\gamma }_{1}\bigl( \{x_{n}\}_{n=1}^{\infty}\bigr) +\frac{2M}{\tau} \overline{\gamma} _{1}\bigl(\{y_{n}\}_{n=1}^{\infty} \bigr). \end{aligned}$$
Moreover, since \(L_{1}^{1}\) maps every pair to the single point \(S(t)x_{0}(\omega)\),
$$\begin{aligned} \chi\bigl(L_{1}^{1}\bigl(\bigl\{ x_{n}(t)\bigr\} _{n=1}^{\infty},\bigl\{ y_{n}(t)\bigr\} _{n=1}^{ \infty}\bigr)\bigr)=0. \end{aligned}$$
Therefore,
$$\begin{aligned} \overline{\gamma}_{1}\bigl(\bigl\{ N_{1}^{1} \bigl(x_{n}(\cdot ,\omega),y_{n}(\cdot ,\omega)\bigr) \bigr\} _{n=1}^{\infty}\bigr) \leq&\frac{2M}{\tau}\overline{ \gamma}_{1}\bigl(\{x _{n}\}_{n=1}^{\infty} \bigr) +\frac{2M}{\tau}\overline{\gamma}_{1}\bigl(\{y _{n} \}_{n=1}^{\infty}\bigr). \end{aligned}$$
Similarly, we have
$$\begin{aligned} \overline{\gamma}_{1}\bigl(\bigl\{ N_{2}^{1} \bigl(x_{n}(\cdot ,\omega),y_{n}(\cdot ,\omega)\bigr)\bigr\} _{n=1}^{\infty}\bigr) \leq&\frac{2M}{\tau}\overline{ \gamma}_{1}\bigl(\{x _{n}\}_{n=1}^{\infty} \bigr) +\frac{2M}{\tau}\overline{\gamma}_{1}\bigl(\{y _{n} \}_{n=1}^{\infty}\bigr). \end{aligned}$$
Consequently,
$$\begin{aligned} \left( \begin{matrix} {\overline{\gamma}_{1}(N_{1}^{1}(\{x_{n}(\cdot ,\omega),y_{n}(\cdot ,\omega)\} _{n=1}^{\infty}))} \\ {\overline{\gamma}_{1}(N_{2}^{1}(\{x_{n}(\cdot ,\omega),y_{n}(\cdot ,\omega)\} _{n=1}^{\infty}))} \end{matrix}\right)\leq \left( \begin{matrix} {\frac{2M}{\tau}} &{\frac{2M}{\tau}} \\ {\frac{2M}{\tau}}& {\frac{2M}{\tau}} \end{matrix}\right) \left( \begin{matrix} {\overline{\gamma}_{1}(\{x_{n}(\cdot ,\omega)\}_{n=1}^{\infty})} \\ {\overline{\gamma}_{1}(\{y_{n}(\cdot ,\omega)\}_{n=1}^{\infty})} \end{matrix}\right). \end{aligned}$$
From (4.1), we have
$$\begin{aligned} \left( \begin{matrix}{\overline{\gamma}_{1}(N_{1}^{1}(\{x_{n}(\cdot ,\omega),y_{n}(\cdot ,\omega) \}_{n=1}^{\infty}))} \\ {\overline{\gamma}_{1}(N^{1}_{2}(\{x_{n}(\cdot ,\omega),y_{n}(\cdot ,\omega) \}_{n=1}^{\infty}))} \end{matrix}\right) \leq A \left( \begin{matrix} {\overline{\gamma}_{1}(N_{1}^{1}(\{x_{n}(\cdot ,\omega),y_{n}(\cdot ,\omega) \}_{n=1}^{\infty}))} \\ {\overline{\gamma}_{1}(N_{2}^{1}(\{x_{n}(\cdot ,\omega),y_{n}(\cdot ,\omega) \}_{n=1}^{\infty}))} \end{matrix}\right), \end{aligned}$$
where \(A=\bigl({\scriptsize\begin{matrix}{} \frac{2M}{\tau} &\frac{2M}{\tau} \cr \frac{2M}{\tau}& \frac{2M}{\tau} \end{matrix}}\bigr)\). Since the spectral radius \(\rho(A)=\frac{4M}{\tau}<1\), we get
$$\begin{aligned} \overline{\gamma}_{1}\bigl(N_{1}^{1}\bigl(\bigl\{ x_{n}(\cdot ,\omega),y_{n}(\cdot ,\omega) \bigr\} _{n=1}^{\infty} \bigr)\bigr)=0,\qquad \overline{\gamma}_{1}\bigl(N_{2}^{1} \bigl(\bigl\{ x_{n}(\cdot , \omega),y_{n}(\cdot ,\omega)\bigr\} _{n=1}^{\infty}\bigr)\bigr)=0. \end{aligned}$$
This implies, together with (4.1), that
$$\begin{aligned} \overline{\gamma}_{1}\bigl(\{x_{n} \}_{n=1}^{\infty}\bigr)=0, \qquad \overline{\gamma}_{1}\bigl(\{y_{n} \}_{n=1}^{\infty}\bigr)=0. \end{aligned}$$
Now we show that \(\operatorname{mod}_{C}(B)=0\), i.e., that the set
$$B_{n}=\bigl\{ \bigl(N_{1}\bigl(x_{n}(t, \omega),y_{n}(t,\omega)\bigr),N_{2}\bigl(x_{n}(t, \omega),y_{n}(t,\omega)\bigr)\bigr) : t\in[0,t_{1}], n\in\mathbb{N}\bigr\} $$
is equicontinuous. For this, we proceed as in the proof of Theorem 3.2. It follows that \(\operatorname{mod}_{C}(B_{n})=0\), which implies, by (4.3), that \(\beta_{0,1}(B_{n})=0\). We have proved that B is relatively compact. Hence \(N_{1}: C([0,t_{1}],X)\times C([0,t_{1}],X)\to C([0,t_{1}],X) \times C([0,t_{1}],X)\) is \(\beta_{0,1}\)-condensing. As in Theorem 3.2, \(N_{1}\) is continuous and, for any random variable \(\lambda: \Omega\to(0,1)\), the set
$${\mathcal {M}}_{1} =\bigl\{ (x,y)\in C\bigl([0,t_{1}], X\bigr) \times C\bigl([0,t_{1}], X\bigr): \lambda (\omega)N_{1}( \omega,x,y) = (x,y)\bigr\} $$
is bounded. As a consequence of Theorem 4.4, we deduce that \(N_{1}\) has a fixed point \((x,y)\) in \(C([0,t_{1}],X)\times C([0,t_{1}],X)\), which is a solution to problem (1.1) on \([0,t_{1}]\); denote it by \((x_{0},y_{0})\).
Step 2. We consider problem (1.1) on \((t_{1},t_{2}]\). It is clear that the fixed points of the operator \(N_{2}\) defined in Theorem 3.1 are the solutions of (3.3). Thus we only need to prove that \(N_{2}\) is a \(\beta_{1,2}\)-condensing operator. For a bounded subset \(B\times B \subset C_{*}([t_{1},t_{2}], X)\times C_{*}([t_{1},t_{2}], X)\), let \(\operatorname{mod}_{C}(B)\) be the modulus of quasi-equicontinuity of the set of functions B, and let \(\overline{\gamma}_{2}\) be the real \(MNC\) defined on bounded subsets of \(C_{*}([t_{1},t_{2}], X)\) by
$$\overline{\gamma}_{2}(B)= \sup _{t\in[t_{1},t_{2}]}e^{-2M\tau \int_{t_{1}}^{t}p(r,\omega)\,dr}\chi \bigl(B(t)\bigr), $$
and \(\beta_{2}\) the \(MNC\) defined on \(C_{*}([t_{1},t_{2}], X)\) by
$$\beta_{2}(B)= \max _{D\in\Delta(B)}\bigl(\overline{ \gamma}_{2}(D), \operatorname{mod}_{C}(D)\bigr), $$
where \(\Delta(B)\) is the collection of all denumerable subsets of B. So, we define an \(MNC\) on bounded subsets \(D\times D_{*}\) of \(C_{*}([t_{1},t_{2}], X)\times C_{*}([t_{1},t_{2}], X)\) by
$$\beta_{1,2}(D\times D_{*}):= \left( \begin{matrix}{\beta_{2}(D)} \\ {\beta_{2}(D_{*})} \end{matrix}\right). $$
As in Step 1, we can prove that \(N_{2}\) is continuous and \(\beta_{1,2}\)-condensing. From Theorem 4.4, we deduce that \(N_{2}\) has a fixed point \((x,y)\) in \(C_{*}([t_{1},t_{2}], X) \times C_{*}([t_{1},t_{2}], X)\), denoted by \((x_{1},y_{1})\).
Step 3. We continue this process taking into account that \((x_{m},y_{m}):=(x \vert _{[t_{m},b]}, y \vert _{[t_{m},b]})\) is a solution of problem (3.4). A random mild solution \((x,y)\) of problem (1.1) is ultimately defined by
$$\bigl(x(t,\omega),y(t,\omega)\bigr)= \textstyle\begin{cases} (x_{0}(t,\omega),y_{0}(t,\omega)), & \mbox{if }t\in[0,t_{1}], \\ (x_{1}(t,\omega),y_{1}(t,\omega)), & \mbox{if }t\in(t_{1},t_{2}], \\ \ldots&\ldots \\ (x_{m}(t,\omega), y_{m}(t,\omega)), & \mbox{if }t\in(t_{m},b]. \end{cases} $$
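The construction in Steps 1-3 solves the problem on each subinterval, applies the impulse at each \(t_{k}\), and concatenates the pieces. A toy numerical sketch of this scheme (not the paper's operator setup: a scalar equation \(x'=-x+1\) on \([0,1]\) with one impulse of the saturating form used in Example 5.1, integrated by explicit Euler; all numerical values are illustrative):

```python
import numpy as np

def solve_piece(x0, t0, t1, n=1000):
    """Explicit Euler for the toy equation x' = -x + 1 on [t0, t1]."""
    ts = np.linspace(t0, t1, n + 1)
    xs = np.empty(n + 1)
    xs[0] = x0
    h = (t1 - t0) / n
    for i in range(n):
        xs[i + 1] = xs[i] + h * (-xs[i] + 1.0)
    return ts, xs

K = 0.3                                          # illustrative impulse gain
t1, x1 = solve_piece(x0=0.0, t0=0.0, t1=0.5)     # piece on [0, t_1]
jump = K * x1[-1] / (1 + abs(x1[-1]))            # impulse at t_1 = 0.5
t2, x2 = solve_piece(x0=x1[-1] + jump, t0=0.5, t1=1.0)  # piece on (t_1, b]
```

The second piece starts from the post-impulse value \(x(t_{1}^{-})+I_{1}(x(t_{1}^{-}))\), mirroring how \((x_{1},y_{1})\) continues \((x_{0},y_{0})\) above.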

5 Examples

In this section we apply the abstract results of the previous sections to the existence of mild solutions for random impulsive Stokes-type and hyperbolic differential equations.

Example 5.1

Let \((\Omega, \Sigma)\) be a measurable space and \(G\subset \mathbb{R}^{3}\) be a bounded open domain with smooth boundary ∂G. Consider the following system of random impulsive Stokes-type partial differential equations:
$$ \textstyle\begin{cases} u_{t}(t,\xi,\omega)-P(\Delta u(t,\xi,\omega))=f(t,u(t,\xi, \omega),v(t,\xi,\omega),\omega), & \mbox{a.e. } t \in[0,b], \xi\in G, \\ v_{t}(t,\xi,\omega)-P(\Delta v(t,\xi,\omega))=g(t,u(t,\xi, \omega),v(t,\xi,\omega)), & \mbox{a.e. } t\in[0,b], \xi\in G, \\ u(t_{k}^{+},\xi,\omega)-u(t_{k}^{-},\xi,\omega)= I_{k}(u(t_{k}, \xi,\omega)),& \\ v(t_{k}^{+},\xi,\omega)-v(t_{k}^{-},\xi,\omega)= \overline{I}_{k}(v(t _{k},\xi,\omega)),& k=1,\ldots,m, \\ \nabla\cdot u=\nabla\cdot v=0,& (t,\xi)\in[0,b]\times G, \\ u(t,\xi,\cdot)= v(t,\xi,\cdot)=0,& (t,\xi)\in[0,b]\times\partial G, \\ u(0,\xi,\cdot)=u(b,\xi,\cdot), \qquad v(0,\xi,\cdot)=v(b,\xi,\cdot),& \xi\in G, \end{cases} $$
where \(n(\xi)\) is the outward unit normal to ∂G at the point \(\xi\in \partial G\). Let
$$E=\bigl\{ u\in\bigl(C^{\infty}_{c}(G)\bigr)^{3}: \nabla\cdot u=0 \mbox{ in } G \mbox{ and } n\cdot u=0 \mbox{ on } \partial G\bigr\} , $$
and let \(X=\overline{E}^{(L^{2}(G))^{3}}\) be the closure of E in \((L^{2}(G))^{3}\). It is clear that, endowed with the standard inner product of the space \((L^{2}(G))^{3}\), defined by
$$\langle u,v\rangle= \sum _{i=1}^{3}\langle u_{i},v_{i}\rangle_{L^{2}(G)}, $$
X is a Hilbert space. Let \(P: (L^{2}(G))^{3}\to X\) denote the orthogonal projection of \((L^{2}(G))^{3}\) onto X (the Leray projection). The Stokes operator \(A: D(A)\subset X\to X\) is defined by
$$\textstyle\begin{cases} D(A)=(H^{2}(G)\cap H_{0}^{1}(G))^{3}\cap X, \\ Au=-P(\Delta u), \quad u\in D(A). \end{cases} $$

Lemma 5.1

(Fujita-Kato, Theorem 7.3.4, [32])

The operator A, defined as above, is the generator of a compact and analytic \(C_{0}\)-semigroup of contractions in X.

Let us assume that
(\({\mathcal {K}}_{1}\)): 

\(f,g: [0,b]\times G\times\mathbb{R} \times\mathbb{R}\times\Omega\to\mathbb{R}\) are Carathéodory functions.

(\({\mathcal {K}}_{2}\)): 
\(\phi,\psi: \Omega\to L^{1}([0,b], \mathbb{R}_{+})\) are random functions such that
$$\bigl\vert f(t,\xi,u,v,\omega)\bigr\vert \leq\phi(t,\omega) \quad \mbox{and}\quad \bigl\vert g(t,\xi,u,v,\omega)\bigr\vert \leq\psi(t, \omega) $$
for each \((t,\xi,u,v,\omega)\in[0,b]\times G\times\mathbb{R}\times \mathbb{R}\times\Omega\). Set
$$\begin{aligned}& x(t,\omega) (\xi)=u(t,\xi,\omega),\qquad y(t,\omega) (\xi )=v(t,\xi, \omega), \quad t\in[0,b], \xi\in G, \\& I_{k} \bigl(x(t_{k},\omega) \bigr)=K_{k} \frac {x (t_{k}^{-},\omega )}{1+ \vert x (t_{k}^{-},\omega ) \vert _{X}}, \quad k=1,\ldots, m, \\& \overline{I}_{k} \bigl(y(t_{k},\omega) \bigr)= \bar{K}_{k} \frac {y (t_{k}^{-},\omega )}{1+ \vert y (t_{k}^{-},\omega ) \vert _{X}},\quad k=1,\ldots, m, \\& x(0,\omega) (\xi)=u(0,\xi,\omega)=u(b,\xi,\omega)=x(b,\omega) ( \xi), \\& y(0, \omega) (\xi)=v(0,\xi,\omega)=v(b,\xi,\omega)=y(b, \omega) (\xi), \quad \xi\in G, \end{aligned}$$
where \(K_{k},\bar{K}_{k}\in \mathbb{R}\), \(k=1,\ldots,m\). Assume that \(({\mathcal {K}}_{1})\)-\(({\mathcal {K}}_{2})\) are satisfied. Thus problem (5.1) can be written in the abstract form
$$ \textstyle\begin{cases} x'(t,\omega)-A_{1}x(t,\omega)= f_{1}(t,x(t,\omega),y(t,\omega), \omega), & t\in[0,b], \\ y'(t,\omega)-A_{2}y(t,\omega)= f_{2}(t,x(t,\omega),y(t,\omega), \omega), & t\in[0,b], \\ x(t_{k}^{+},\omega)-x(t_{k}^{-},\omega)= I_{k}(x(t_{k},\omega)), \\ y(t_{k}^{+},\omega)-y(t_{k}^{-},\omega)= \overline{I}_{k}(y(t_{k}, \omega)), & k=1,\ldots,m, \\ x(0,\omega)=x(b,\omega),\qquad y(0,\omega)=y(b,\omega), \end{cases} $$
where \(A_{1}=A_{2}=A\). For each \(k=1,\ldots,m\), we have
$$\bigl\vert I_{k}(x) \bigr\vert = \biggl\vert K_{k} \frac {x}{1+\vert x\vert _{X}} \biggr\vert _{X}\leq \vert K_{k} \vert ,\qquad \bigl\vert \bar{I}_{k}(x) \bigr\vert = \biggl\vert \bar{K}_{k} \frac {x}{1+\vert x\vert _{X}} \biggr\vert _{X}\leq \vert \bar{K}_{k} \vert \quad\mbox{for all } x\in X. $$
Hence Theorem 3.2 ensures that problem (5.1) possesses at least one mild solution.
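The bound on the impulse functions follows from \(\vert K_{k}\,x/(1+\vert x\vert )\vert = \vert K_{k}\vert \,\vert x\vert /(1+\vert x\vert )\leq \vert K_{k}\vert \). A quick numerical check on random vectors in \(\mathbb{R}^{3}\), used here only as a finite-dimensional stand-in for X (the value of \(K_{k}\) and the dimension are illustrative assumptions):

```python
import numpy as np

# Verify |K * x / (1 + |x|)| <= |K| on random test vectors:
# the factor |x| / (1 + |x|) is always strictly less than 1.
rng = np.random.default_rng(0)
K = 2.5                                   # illustrative impulse constant
for _ in range(100):
    x = rng.normal(size=3) * rng.uniform(0.0, 100.0)
    Ix = K * x / (1.0 + np.linalg.norm(x))
    assert np.linalg.norm(Ix) <= abs(K)   # uniform bound, independent of x
```

The uniform bound is what makes the impulse terms harmless in the fixed point estimates: they contribute a constant, not a growing term.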

Example 5.2

Consider the following hyperbolic system of impulsive partial differential equations:
$$ \textstyle\begin{cases} u_{tt}(t,\xi,\omega)-\Delta u(t,\xi,\omega)=f(t,\xi,u(t,\xi, \omega),v(t,\xi,\omega),\omega),& \mbox{a.e. } t \in[0,b], \xi\in G, \\ v_{tt}(t,\xi,\omega)-\Delta v(t,\xi,\omega)=g(t,\xi,u(t,\xi,\omega),v(t,\xi,\omega),\omega), & \mbox{a.e. } t\in[0,b], \xi\in G, \\ u(t_{k}^{+},\xi,\omega)-u(t_{k}^{-},\xi,\omega)= I_{k}(u(t_{k}, \xi,\omega)),& k=1,\ldots,m, \\ v(t_{k}^{+},\xi,\omega)-v(t_{k}^{-},\xi,\omega)= \overline{I}_{k}(v(t_{k}, \xi,\omega)),& k=1,\ldots,m, \\ u(t,\xi,\omega)=v(t,\xi,\omega)=0, & (t,\xi)\in[0,b]\times \partial G, \omega\in\Omega, \\ u(0,\xi,\omega)=u_{0}(\xi,\omega),\qquad u_{t}(0,\xi,\omega)=u_{1}( \xi,\omega),& \xi\in G, \omega\in\Omega, \\ v(0,\xi,\omega)=v_{0}(\xi,\omega),\qquad v_{t}(0,\xi,\omega)=v_{1}( \xi,\omega),& \xi\in G, \omega\in\Omega, \end{cases} $$
where G is a bounded domain in \(\mathbb{R}^{d}\) with a sufficiently regular boundary. Let \(x,y: \Omega\to PC([0,b], L^{2}(G,\mathbb{R}))\) be defined by
$$x(t,\omega) (\xi)=u(t,\xi,\omega),\qquad y(t,\omega) (\xi)=v(t,\xi,\omega) $$
and \(f_{1}, f_{2}: [0,b]\times L^{2}(G,\mathbb{R})\times L^{2}(G,\mathbb{R})\times \Omega\to L^{2}(G,\mathbb{R})\) by
$$f_{1}\bigl(t,x(t,\omega),y(t,\omega),\omega\bigr) (\xi)=f\bigl(t,\xi, u(t,\xi, \omega),v(t,\xi,\omega),\omega\bigr) $$
and
$$f_{2}\bigl(t,x(t,\omega),y(t,\omega),\omega\bigr) (\xi)=g\bigl(t,\xi, u(t,\xi, \omega),v(t,\xi,\omega),\omega\bigr). $$
Hence problem (5.3) can be rewritten in the following form:
$$ \textstyle\begin{cases} x''(t,\omega)-A_{1}x(t,\omega) = f_{1}(t,x(t,\omega),y(t,\omega),\omega),& t\in[0,b], \\ y''(t,\omega)-A_{2}y(t,\omega) = f_{2}(t,x(t,\omega),y(t,\omega),\omega),& t\in[0,b], \\ x(t_{k}^{+},\omega)-x(t_{k}^{-},\omega)= I_{k}(x(t_{k},\omega)), & k=1,\ldots,m, \\ y(t_{k}^{+},\omega)-y(t_{k}^{-},\omega)= \overline{I}_{k}(y(t_{k}, \omega)), & k=1,\ldots,m, \\ x(0,\omega)=x_{0}(\omega),\qquad x'(0,\omega)=\bar{x}_{0}(\omega), \\ y(0,\omega)=y_{0}(\omega),\qquad y'(0,\omega)=\bar{y}_{0}(\omega), \end{cases} $$
where \(A_{1}=A_{2}=A: D(A)=W^{2,2}(G,\mathbb{R})\cap W_{0}^{1,2}(G,\mathbb{R}) \subset L^{2}(G,\mathbb{R})\to L^{2}(G,\mathbb{R})\) is defined by
$$Au=\Delta u,\quad u\in W^{2,2}(G,\mathbb{R})\cap W_{0}^{1,2}(G, \mathbb{R}). $$
We introduce the Hilbert space \(X=W^{1,2}_{0}(G,\mathbb{R})\times L^{2}(G,\mathbb{R})\) with the inner product
$$\left\langle \left( \begin{matrix}u_{1} \\ u_{2} \end{matrix}\right), \left( \begin{matrix} v_{1} \\ v_{2} \end{matrix}\right) \right\rangle= \int_{G}\nabla u_{1}\cdot\nabla v_{1}\,d\xi+ \int_{G}u_{2}v_{2}\,d\xi. $$
The following linear operator
$${\mathcal {A}}= \left( \begin{matrix} 0& I \\ A&0 \end{matrix}\right),\qquad D({\mathcal {A}})=D(A)\times W_{0}^{1,2}(G,\mathbb{R}), $$
generates a strongly continuous semigroup (see [30]). So, we can transform problem (5.4) into the following first-order system of random semilinear differential equations in X:
$$ \textstyle\begin{cases} X'(t,\omega)-{\mathcal {A}}X(t,\omega) = F_{1}(t,X(t,\omega),Y(t,\omega),\omega),& t\in[0,b], \\ Y'(t,\omega)-{\mathcal {A}}Y(t,\omega) = F_{2}(t,X(t,\omega),Y(t,\omega),\omega),& t\in[0,b], \\ X(t_{k}^{+},\omega)-X(t_{k}^{-},\omega)= R_{k}(X(t_{k},\omega)), & k=1,\ldots,m, \\ Y(t_{k}^{+},\omega)-Y(t_{k}^{-},\omega)= \overline{R}_{k}(Y(t_{k}, \omega)), & k=1,\ldots,m, \\ X(0,\omega)=X_{0}(\omega), & \\ Y(0,\omega)=Y_{0}(\omega), \end{cases} $$
where \(F_{1}, F_{2}: [0,b]\times X\times X\times\Omega\to X\) are defined by
$$\begin{aligned}& F_{1}(t,X,Y,\omega)= \left( \begin{matrix} 0 \\ {f_{1}(t,x_{1},y_{1},\omega)} \end{matrix}\right), \\& F_{2}(t,X,Y,\omega)= \left( \begin{matrix} 0 \\ {f_{2}(t,x_{1},y_{1},\omega)} \end{matrix}\right), \\& X= \left( \begin{matrix}x_{1} \\ x_{2} \end{matrix}\right), \qquad Y= \left( \begin{matrix} y_{1} \\ y_{2} \end{matrix}\right), \\& R_{k}\bigl(X(t_{k},\omega)\bigr)= \left( \begin{matrix}0 \\ {I_{k}(x_{1}(t_{k},\omega))} \end{matrix}\right),\qquad \overline{R}_{k}\bigl(Y(t_{k},\omega)\bigr)= \left( \begin{matrix} 0 \\ {\overline{I}_{k}(y_{1}(t_{k},\omega))} \end{matrix}\right), \end{aligned}$$
$$X_{0}(\omega)= \left( \begin{matrix} x_{0}(\omega) \\ \bar{x}_{0}(\omega) \end{matrix}\right),\qquad Y_{0}(\omega)= \left( \begin{matrix} y_{0}(\omega) \\ \bar{y}_{0}(\omega) \end{matrix}\right). $$
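The passage from the second-order system (5.4) to this first-order system is the classical order-reduction for wave-type equations. As a sketch of the computation, writing \(X=(x,x')^{T}\) and using the definitions of \({\mathcal {A}}\) and \(F_{1}\) above, formally,

```latex
X'=\begin{pmatrix} x' \\ x'' \end{pmatrix}
  =\begin{pmatrix} x' \\ Ax+f_{1}(t,x,y,\omega) \end{pmatrix}
  =\begin{pmatrix} 0 & I \\ A & 0 \end{pmatrix}
   \begin{pmatrix} x \\ x' \end{pmatrix}
  +\begin{pmatrix} 0 \\ f_{1}(t,x,y,\omega) \end{pmatrix}
  ={\mathcal {A}}X+F_{1}(t,X,Y,\omega),
```

and similarly for Y, with the impulses entering through \(R_{k}\) and \(\overline{R}_{k}\) as defined above.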
Observe that the semigroup generated by \({\mathcal {A}}\) is noncompact. Assume that \(({\mathcal {K}}_{1})\)-\(({\mathcal {K}}_{2})\) and (H7) are satisfied and that \(I_{k}\), \(\overline{I}_{k}\) are continuous functions. Then, by Theorem 4.5, problem (5.3) has at least one random mild solution.

6 Conclusions

In this paper, we investigated systems of random impulsive semilinear differential equations under various assumptions on the right-hand-side nonlinearity, and we obtained a number of new results on the existence and uniqueness of mild solutions. The main assumptions on the nonlinearity are Carathéodory and Lipschitz conditions and some Nagumo-Bernstein-type growth conditions. We used fixed point theory in vector-valued metric spaces, and we also established a random version of Sadovskii's fixed point theorem in a vector-valued Banach space. We hope this paper contributes to the questions of existence and compactness of solution sets for random systems of impulsive first-order differential equations on a compact interval.



Acknowledgements

This paper was completed while A. Ouahab visited the Instituto de Matemáticas of Santiago de Compostela. The work of JJ Nieto has been partially supported by the AEI of Spain under Grant MTM2016-75140-P, co-financed by the European Community Fund FEDER, and by Xunta de Galicia under grants GRC2015-004 and R2016/022.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

Mascara University, Mascara, Algeria
Laboratory of Mathematics, Sidi-Bel-Abbès University, Sidi-Bel-Abbès, Algeria
Instituto de Matemáticas, Facultad de Matemáticas, Universidad de Santiago de Compostela, Santiago de Compostela, Spain


  1. Bainov, DD, Simeonov, PS: Systems with Impulse Effect. Ellis Horwood, Chichester (1989)
  2. Bainov, DD, Simeonov, PS: Impulsive Differential Equations: Asymptotic Properties of the Solutions. Series on Advances in Mathematics for Applied Sciences, vol. 28. World Scientific, Singapore (1995)
  3. Graef, JR, Henderson, J, Ouahab, A: Impulsive Differential Inclusions: A Fixed Point Approach. De Gruyter Series in Nonlinear Analysis and Applications, vol. 20. de Gruyter, Berlin (2013)
  4. Lakshmikantham, V, Bainov, DD, Simeonov, PS: Theory of Impulsive Differential Equations. World Scientific, Singapore (1989)
  5. Perestyuk, NA, Plotnikov, VA, Samoilenko, AM, Skripnik, NV: Differential Equations with Impulse Effects. Multivalued Right-Hand Sides with Discontinuities. de Gruyter, Berlin (2011)
  6. Rachůnková, I, Tomeček, J: State-Dependent Impulses. Boundary Value Problems on Compact Interval. Atlantis Briefs in Differential Equations, vol. 6. Atlantis Press, Paris (2015)
  7. Stamov, GT: Almost Periodic Solutions of Impulsive Differential Equations. Lecture Notes in Mathematics, vol. 2047. Springer, Heidelberg (2012)
  8. Samoilenko, AM, Perestyuk, NA: Impulsive Differential Equations. World Scientific, Singapore (1995)
  9. Kampé de Feriet, J: Random solutions of partial differential equations. In: Proc. 3rd Berkeley Sympos. Math. Stat. and Probab., vol. III, pp. 199-208. University of California Press, Berkeley (1956)
  10. Becus, G: Random generalized solutions to the heat equation. J. Math. Anal. Appl. 90, 93-102 (1977)
  11. Bharucha-Reid, AT: Random Integral Equations. Academic Press, New York (1972)
  12. Ladde, GS, Lakshmikantham, V: Random Differential Inequalities. Academic Press, New York (1980)
  13. Soong, TT: Random Differential Equations in Science and Engineering. Academic Press, New York (1973)
  14. Strand, JL: Random Ordinary Differential Equations. Reidel, Boston (1985)
  15. Tsokos, CP, Padgett, WJ: Random Integral Equations with Applications in Life Sciences and Engineering. Academic Press, New York (1974)
  16. Benaissa, A, Benchohra, M: Functional differential equations with state-dependent delay and random effects. Rom. J. Math. Comput. Sci. 5, 84-94 (2015)
  17. Benaissa, A, Benchohra, M, Graef, JR: Functional differential equations with delay and random effects. Stoch. Anal. Appl. 33, 1083-1091 (2015)
  18. Dhage, BC: On global existence and attractivity results for nonlinear random integral equations. Panam. Math. J. 19, 97-111 (2009)
  19. Dhage, BC: On nth order nonlinear ordinary random differential equations. Nonlinear Oscil. 13(4), 535-549 (2011)
  20. Stanescu, D, Chen-Charpentier, BM: Random coefficient differential equation models for bacterial growth. Math. Comput. Model. 50, 885-895 (2009)
  21. Yang, X, Cao, J: Synchronization of delayed complex dynamical networks with impulsive and stochastic effects. Nonlinear Anal., Real World Appl. 12, 2252-2266 (2011)
  22. Varga, RS: Matrix Iterative Analysis, Second Revised and Expanded Edition. Springer Series in Computational Mathematics, vol. 27. Springer, Berlin (2000)
  23. Precup, R: The role of matrices that are convergent to zero in the study of semilinear operator systems. Math. Comput. Model. 49, 703-708 (2009)
  24. Rus, IA: Principles and Applications of the Fixed Point Theory. Dacia, Cluj-Napoca (1979) (in Romanian)
  25. Turinici, M: Finite-dimensional vector contractions and their fixed points. Stud. Univ. Babeş-Bolyai, Math. 35(1), 30-42 (1990)
  26. Kuratowski, K, Ryll-Nardzewski, C: A general theorem on selectors. Bull. Acad. Pol. Sci., Sér. Sci. Math. Astron. Phys. 13, 397-403 (1965)
  27. Sinacer, ML, Nieto, JJ, Ouahab, A: Random fixed point theorem in generalized Banach space and applications. Random Oper. Stoch. Equ. 24, 93-112 (2016)
  28. Kisielewicz, M: Differential Inclusions and Optimal Control. Kluwer Academic, Dordrecht (1991)
  29. Krawaritis, D, Stavrakakis, N: Perturbations of maximal monotone random operators. Linear Algebra Appl. 84, 301-310 (1986)
  30. Pazy, A: Semigroups of Linear Operators and Applications to Partial Differential Equations. Springer, New York (1983)
  31. Engel, KJ, Nagel, R: One-Parameter Semigroups for Linear Evolution Equations. Springer, New York (2000)
  32. Vrabie, II: \(C_{0}\)-Semigroups and Applications. North-Holland Mathematics Studies, vol. 191. North-Holland, Amsterdam (2003)
  33. Bihari, I: A generalisation of a lemma of Bellman and its application to uniqueness problems of differential equations. Acta Math. Acad. Sci. Hung. 7, 81-94 (1956)
  34. Henderson, J, Ouahab, A, Slimani, M: Existence results for a semilinear system of discrete equations. Int. J. Difference Equ. 12, 235-253 (2017)
  35. Kamenskii, M, Obukhovskii, V, Zecca, P: Condensing Multi-Valued Maps and Semilinear Differential Inclusions in Banach Spaces. de Gruyter, Berlin (2001)
  36. Djebali, S, Górniewicz, L, Ouahab, A: Solution Sets for Differential Equations and Inclusions. De Gruyter Series in Nonlinear Analysis and Applications, vol. 18. de Gruyter, Berlin (2013)


© The Author(s) 2017