
Strongly regular points of mappings

Abstract

In this paper, we use a robust lower directional derivative and provide some sufficient conditions to ensure the strong regularity of a given mapping at a certain point. Then, we discuss the Hoffman estimation and achieve some results for the estimate of the distance to the set of solutions to a system of linear equalities. The advantage of our estimate is that it allows one to calculate the coefficient of the error bound.

1 Introduction

Let f be a mapping acting between the normed spaces \(\mathbb{X}\) and \(\mathbb{Y}\), whose norms are denoted by the same symbol \(\Vert \cdot \Vert \). To estimate the approximate solutions of the equation \(y=f(x)\), we seek an error bound

$$\begin{aligned} \mathop{\operatorname{dist}}\bigl(x, f^{-1}(y)\bigr)\leq \kappa \bigl\Vert y-f(x) \bigr\Vert , \end{aligned}$$

locally, for all \((x,y)\) near \((\bar{x}, \bar{y} = f(\bar{x}))\), or globally, for all x and y, where κ is some positive constant. The infimum of such κ is called the modulus of regularity of f. For instance, when \(f:\mathbb{R}\to \mathbb{R}\) is smooth and verifies \(f^{\prime }(\bar{x})\neq 0\), it is easily observed that the modulus of regularity of f at \(\bar{x}\) is exactly \(\vert f^{\prime }(\bar{x})\vert ^{-1}\). A first approach to the concept of regularity goes back to a celebrated fundamental result proved in 1934 by Lyusternik [1]:
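As a quick sanity check of this one-dimensional case, the following sketch (our own illustration, not taken from the paper) takes \(f(x)=e^{x}\) and \(\bar{x}=0\), so that \(f^{\prime }(\bar{x})=1\) and the modulus of regularity is 1, and verifies the error bound numerically for a κ slightly above this modulus:

```python
import math

# Hypothetical illustration (not from the paper): f(x) = exp(x), xbar = 0,
# so f'(xbar) = 1 and the modulus of regularity should be |f'(xbar)|**-1 = 1.
f = math.exp
f_inv = math.log          # f is invertible, so dist(x, f^{-1}(y)) = |x - log(y)|

kappa = 1.1               # any kappa above the modulus 1 should work locally
for x in [-0.01, -0.005, 0.0, 0.005, 0.01]:
    for y in [0.99, 0.995, 1.0, 1.005, 1.01]:
        dist = abs(x - f_inv(y))          # distance from x to the solution set of y = f(x)
        assert dist <= kappa * abs(y - f(x)) + 1e-12
print("local error bound verified with kappa =", kappa)
```

By the mean value theorem the ratio \(\vert x-\log y\vert /\vert y-e^{x}\vert \) approaches 1 near \((0,1)\), so any \(\kappa <1\) eventually fails, consistent with the modulus being exactly \(\vert f^{\prime }(\bar{x})\vert ^{-1}\).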

Theorem 1.1

(Lyusternik, [1])

Let f be a mapping from a Banach space \(\mathbb{X}\) to a Banach space \(\mathbb{Y}\). Suppose that f is Fréchet differentiable in a neighborhood of \(\bar{x}\), that its derivative \(f^{\prime }(x)\) is continuous at \(\bar{x}\), and that \(f^{\prime }(\bar{x})\) is surjective. Then, for every \(\varepsilon >0\), there exists \(r>0\) such that

$$\begin{aligned} \mathop{\operatorname{dist}}\bigl(x, f^{-1}(0)\bigr)\leq \varepsilon \Vert x-\bar{x} \Vert , \end{aligned}$$

whenever

$$\begin{aligned} \Vert x-\bar{x} \Vert \leq r \quad\textit{and}\quad f^{\prime }(\bar{x}) (x- \bar{x}) =0. \end{aligned}$$

In other words, the tangent manifold to \(f^{-1}(0)\) is equal to \(\bar{x} +\mathop{\operatorname{Ker}}f^{\prime }(\bar{x})\), where \(\mathop{\operatorname{Ker}}f^{\prime }(\bar{x})\) is the set of those x such that \(f^{\prime }(\bar{x})(x)=0\).
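To make the Lyusternik estimate concrete, here is a small numerical illustration (our own example, with \(f(x,y)=x^{2}+y^{2}-1\) and \(\bar{x}=(1,0)\), so that \(\mathop{\operatorname{Ker}}f^{\prime }(\bar{x})\) is the vertical axis): the distance from a tangent point \(\bar{x}+(0,t)\) to \(f^{-1}(0)\), the unit circle, is \(\sqrt{1+t^{2}}-1=o(\vert t\vert )\), as the theorem predicts.

```python
import math

# Hypothetical illustration (not from the paper): f(x, y) = x**2 + y**2 - 1,
# xbar = (1, 0).  Then f'(xbar) = (2, 0) is surjective onto R and
# Ker f'(xbar) = {(0, t)}.  For points xbar + (0, t) on the tangent line,
# the distance to f^{-1}(0) (the unit circle) is sqrt(1 + t**2) - 1.
for t in [0.1, 0.01, 0.001]:
    dist_to_circle = math.hypot(1.0, t) - 1.0   # distance from (1, t) to the circle
    ratio = dist_to_circle / t                  # dist / ||x - xbar||, shrinks like t/2
    print(t, ratio)
```

The printed ratios shrink roughly like \(t/2\), matching the statement that for every \(\varepsilon >0\) the bound \(\mathop{\operatorname{dist}}(x, f^{-1}(0))\leq \varepsilon \Vert x-\bar{x}\Vert \) holds on a small enough ball.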

We refer to Dontchev [2] for a nice overview on the Lyusternik theorem and to the fact that the Lyusternik theorem can be easily obtained from the Graves theorem. We also refer to the forthcoming book by Thibault [3].

Theorem 1.2

(Graves, [4])

Let \(\mathbb{X}\) and \(\mathbb{Y}\) be Banach spaces, \(\bar{x}\in \mathbb{X}\), and \(f:\mathbb{X}\to \mathbb{Y}\) be a \(C^{1}\)-mapping whose derivative \(f^{\prime }(\bar{x})\) is onto. Then, there exist a neighborhood \(\mathbb{U}\) of \(\bar{x}\) and a constant \(c > 0\) such that for every \(\tau > 0 \) with \(\mathbb{B}(\bar{x}, \tau ) \subset \mathbb{U}\),

$$\begin{aligned} \mathbb{B}\bigl(f(\bar{x}),c\tau \bigr) \subset f\bigl( \mathbb{B}(\bar{x}, \tau )\bigr) \quad\textit{(partial openness property with linear rate)}. \end{aligned}$$

Ioffe and Tihomirov showed in [5] that the original Lyusternik proof may lead to a stronger result and proved that if \(f^{\prime }(\bar{x})\) is surjective, then there are \(\kappa > 0\) and \(\delta > 0\) such that

$$\begin{aligned} \mathop{\operatorname{dist}}\bigl( x, f^{-1}(\bar{y})\bigr) \leq \kappa \bigl\Vert f(x)-f(\bar{x}) \bigr\Vert \quad\text{whenever } \Vert x- \bar{x} \Vert < \delta. \end{aligned}$$
(1.1)

This remark leads to a standard definition:

Definition 1.1

A point \(\bar{x}\in \mathbb{X} \) is said to be a regular point of a mapping \(f:\mathbb{X}\to \mathbb{Y}\) if relation (1.1) is satisfied.

In this note, we will call \(\bar{x}\) a strongly regular point of f if the inequality

$$\begin{aligned} \Vert x-\bar{x} \Vert \leq \kappa \bigl\Vert f(x)-f(\bar{x}) \bigr\Vert , \end{aligned}$$
(1.2)

holds locally, for all x belonging to a neighborhood of \(\bar{x}\), where \(\kappa >0\) is a constant. Next, we will provide sufficient conditions for \(\bar{x}\) to be a strongly regular point. Our results allow us to estimate the constant κ in (1.2). Then, we apply our results to the Hoffman estimate and obtain some results for the estimate of the distance to the set of solutions to a system of linear equalities. The advantage of our estimate is that it allows one to calculate the coefficient of the error bound. In particular, for a finite-dimensional space \(\mathbb{X}\) and a linear (continuous) mapping \(A:\mathbb{X}\rightarrow \mathbb{X}\), we prove that the estimate

$$\begin{aligned} \mathop{\operatorname{dist}}(x,\mathop{\operatorname{Ker}}A)\leq \frac{ \Vert A(x) \Vert }{\inf \lbrace \Vert A(u) \Vert : u\in \mathbb{X}, \mathop{\operatorname{dist}}(u,\mathop{\operatorname{Ker}}A)=1 \rbrace }, \end{aligned}$$

holds for all \(x\in \mathbb{X}\) (Corollary 3.2 below). We can easily see that this estimate is sharp for injective linear mappings, in the sense that, if A is an injective linear mapping and

$$\begin{aligned} \inf \bigl\lbrace \bigl\Vert A(u) \bigr\Vert : u\in \mathbb{X}, \mathop{\operatorname{dist}}(u, \mathop{\operatorname{Ker}}A)=1 \bigr\rbrace < \mu, \end{aligned}$$

then there exists some \(x\in \mathbb{X}\) such that \(\|x\|=\mathop{\operatorname{dist}}(x,\mathop{\operatorname{Ker}}A)>\mu ^{-1} \Vert A(x)\Vert \).

Our work is outlined as follows. In Sect. 1, we recall the famous Lyusternik theorem and briefly survey its relationship with the concept of metric regularity. In Sect. 2, we first introduce the notion of homogeneous continuity of mappings. Then, using an appropriate notion of lower directional derivative, we obtain results ensuring, in finite dimensions, that a given point is a strongly regular point of a mapping. Finally, in Sect. 3, we focus our attention on Hoffman’s estimate of approximate solutions of finite systems of linear inequalities and prove some similar estimates.

2 Sufficient conditions of regularity via generalized derivative

Throughout the paper, we use standard notations. For a normed space \(\mathbb{X}\), we denote its norm by \(\Vert \cdot \Vert \) and by \(\mathbb{X}^{*}\) its (continuous) dual. The symbol \(\mathbb{S}\) stands for the unit sphere, that is, the set of all points of \(\mathbb{X}\) of norm one, while \(\mathbb{B}(x,r)\) and \(\overline{\mathbb{B}}(x,r)\) denote, respectively, the open and closed balls centered at x with radius r. Some other notations are introduced as and when needed.

2.1 Homogeneous continuity

We begin with the following definition.

Definition 2.1

Let \(\mathbb{X}\) and \(\mathbb{Y}\) be normed spaces and \(\mathbb{E}\subset \mathbb{X}\). The mapping \(f:\mathbb{X}\rightarrow \mathbb{Y}\) is said to be homogeneously continuous at \(\bar{x}\in \mathbb{X}\) on \(\mathbb{E}\) if for every \(\epsilon >0\) there exist \(\delta >0\) and \(0<\beta \leq 1\) such that

$$\begin{aligned} \Vert x-y \Vert < \delta \quad\Longrightarrow \quad\bigl\Vert f(\bar{x}+tx)-f(\bar{x}+ty) \bigr\Vert < t\epsilon, \end{aligned}$$

for all \(0< t\leq \beta \) and all \(x,y\in \mathbb{E}\).

We are going to provide some sufficient conditions under which a mapping f is homogeneously continuous. Let us recall that a mapping \(f:\mathbb{X}\rightarrow \mathbb{Y}\) is said to be locally Lipschitz around \(\bar{x}\in \mathbb{X}\) if there exist a neighborhood \(\mathbb{O}\) of \(\bar{x}\) and a real number \(\lambda >0\) such that

$$\begin{aligned} \bigl\Vert f(x)-f(y) \bigr\Vert \leq \lambda \Vert x-y \Vert , \end{aligned}$$
(2.1)

for all \(x,y\in \mathbb{O}\).

Lemma 2.1

Suppose that \(\mathbb{X}\) and \(\mathbb{Y}\) are normed spaces. If \(f:\mathbb{X}\rightarrow \mathbb{Y}\) is locally Lipschitz around \(\bar{x}\in \mathbb{X}\), then f is homogeneously continuous at \(\bar{x}\) on some closed ball \(\overline{\mathbb{B}}(0,r)\).

Proof

By hypothesis, there exist a constant \(\lambda >0\) and a neighborhood \(\mathbb{O}\) of \(\bar{x}\) in \(\mathbb{X}\) such that (2.1) holds for all \(x,y\in \mathbb{O}\). Choose \(r>0\) such that \(\overline{\mathbb{B}}(\bar{x},r)\subset \mathbb{O}\). It follows that

$$\begin{aligned} \bigl\Vert f(\bar{x}+tx)-f(\bar{x}+ty) \bigr\Vert \leq \lambda \bigl\Vert \bar{x}+tx-( \bar{x}+ty) \bigr\Vert =t\lambda \Vert x-y \Vert , \end{aligned}$$

for all \(x,y\in \overline{\mathbb{B}}(0,r)\) and all \(0\leq t\leq 1\). Now for each \(\epsilon >0\) take \(0<\delta <\epsilon \lambda ^{-1}\). It follows that

$$\begin{aligned} \Vert x-y \Vert < \delta \quad\Longrightarrow\quad \bigl\Vert f(\bar{x}+tx)-f( \bar{x}+ty) \bigr\Vert < t\epsilon, \end{aligned}$$

for all \(x,y\in \overline{\mathbb{B}}(0,r)\) and all \(0< t\leq 1\). This completes the proof. □
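The scaling inequality at the heart of this proof can be probed numerically; the sketch below (an illustration under our own choices) takes the 1-Lipschitz map \(f=\sin \) with \(\bar{x}=0.5\) and \(\lambda =1\) and checks the bound \(\Vert f(\bar{x}+tx)-f(\bar{x}+ty)\Vert \leq t\lambda \Vert x-y\Vert \) on random samples:

```python
import math, random

# Sanity check of the scaling inequality in the proof of Lemma 2.1 for the
# (assumed) 1-Lipschitz map f = sin around xbar = 0.5:
# |f(xbar + t*x) - f(xbar + t*y)| <= t * lambda * |x - y|.
random.seed(0)
xbar, lam = 0.5, 1.0
for _ in range(1000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    t = random.uniform(1e-6, 1.0)
    assert abs(math.sin(xbar + t * x) - math.sin(xbar + t * y)) <= t * lam * abs(x - y) + 1e-12
print("homogeneous-continuity bound holds on all samples")
```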

Proposition 2.1

Let \(\mathbb{X}\) and \(\mathbb{Y}\) be normed spaces, \(f:\mathbb{X}\rightarrow \mathbb{Y}\) be a mapping, \(\mathbb{E}\) be a subset of \(\mathbb{X}\) equipped with the topology induced by the norm and \(\bar{x}\in \mathbb{X}\). If the bifunction \(f_{\mathbb{E}}:\mathbb{E} \times (0,1]\rightarrow \mathbb{Y}\), defined by

$$\begin{aligned} f_{\mathbb{E}}(x,t):=\frac{f(\bar{x}+tx)-f(\bar{x})}{t}, \end{aligned}$$

is uniformly continuous (\(\mathbb{E} \times (0,1]\) being equipped with the product topology), then f is homogeneously continuous at \(\bar{x}\) on \(\mathbb{E}\).

Proof

Let \(\epsilon >0\). By hypothesis, there exist \(\delta, \beta >0\) such that for all \(x,y\in \mathbb{E}\) with \(\Vert x-y\Vert <\delta \) and all \(s,h\in (0,1]\) with \(\vert s-h\vert <\beta \) we have \(\Vert f_{\mathbb{E}}(x,s)- f_{\mathbb{E}}(y,h)\Vert < \epsilon \). It follows that

$$\begin{aligned} \biggl\Vert \frac{f(\bar{x}+tx)-f(\bar{x})}{t}- \frac{f(\bar{x}+ty)-f(\bar{x})}{t} \biggr\Vert < \epsilon, \end{aligned}$$

for all \(x,y\in \mathbb{E}\) with \(\Vert x-y\Vert <\delta \) and all \(0< t\leq 1\). Thus

$$\begin{aligned} \bigl\Vert f(\bar{x}+tx)-f(\bar{x}+ty) \bigr\Vert < t\epsilon, \end{aligned}$$

for all \(x,y\in \mathbb{E}\) with \(\Vert x-y\Vert <\delta \) and all \(0< t\leq 1\). This completes the proof. □

2.2 Generalized derivatives

We recall the definitions of the Hadamard and Gateaux derivatives. The Hadamard directional derivative \(f_{H}^{\prime }(\bar{x})(\nu )\) of f at \(\bar{x}\) in direction ν is defined as

$$\begin{aligned} f_{H}^{\prime }(\bar{x}) (\nu ):=\lim_{t\downarrow 0, \mu \rightarrow \nu } \frac{f(\bar{x} +t\mu )-f(\bar{x})}{t} = \lim_{n\to +\infty } \frac{f(\bar{x} +t_{n}\nu _{n})-f(\bar{x})}{t_{n}}, \end{aligned}$$

where \((\nu _{n}) \) and \((t_{n})\) are any sequences such that \(\nu _{n}\to \nu \) and \(t_{n}\to 0^{+}\).

The Gateaux directional derivative \(f_{G}^{\prime }(\bar{x})(\nu )\) of f at \(\bar{x}\) in direction ν is defined by

$$\begin{aligned} f_{G}^{\prime }(\bar{x}) (\nu ):= \lim_{t\downarrow 0} \frac{f(\bar{x} +t\nu )-f(\bar{x})}{t}. \end{aligned}$$

The following facts are well known:

  • Hadamard differentiability is a stronger notion than Gateaux differentiability, see, e.g., [6]; when f is Hadamard differentiable at \(\bar{x}\), it is Gateaux (directionally) differentiable at \(\bar{x}\) and, moreover, \(f_{G}^{\prime }(\bar{x})\) is continuous;

  • For locally Lipschitz mappings in normed spaces, Hadamard and Gateaux directional derivatives coincide.

The following corollary uses Hadamard differentiability and provides another sufficient condition for a mapping f to be homogeneously continuous.

Corollary 2.1

Let \(\mathbb{X}\) and \(\mathbb{Y}\) be normed spaces, \(f:\mathbb{X}\rightarrow \mathbb{Y}\) be a continuous mapping, \(\mathbb{E}\) be a compact subset of \(\mathbb{X}\) (equipped with the topology induced by the norm) and \(\bar{x}\in \mathbb{X}\). If the Hadamard directional derivative of f at \(\bar{x}\) in every direction \(\nu \in \mathbb{E}\) exists, then f is homogeneously continuous at \(\bar{x}\) on \(\mathbb{E}\).

Proof

Define the bifunction \(\bar{f}_{\mathbb{E}}:\mathbb{E} \times [0,1]\rightarrow \mathbb{Y}\) as

$$\begin{aligned} \bar{f}_{\mathbb{E}}(\nu,t):= \textstyle\begin{cases} \frac{f(\bar{x}+t\nu )-f(\bar{x})}{t} & \text{if } 0< t \leq 1, \\ f_{H}^{\prime }(\bar{x})(\nu ) & \text{if } t=0. \end{cases}\displaystyle \end{aligned}$$

Since f is continuous and the Hadamard directional derivative of f at \(\bar{x}\) in every direction \(\nu \in \mathbb{E}\) exists, the bifunction \(\bar{f}_{\mathbb{E}}\) is continuous. Since \(\mathbb{E} \times [0,1]\) is compact, \(\bar{f}_{\mathbb{E}}\) is uniformly continuous. It follows that the bifunction \(f_{\mathbb{E}}:\mathbb{E} \times (0,1]\rightarrow \mathbb{Y}\), defined by

$$\begin{aligned} f_{\mathbb{E}}(x,t):=\frac{f(\bar{x}+tx)-f(\bar{x})}{t}, \end{aligned}$$

is uniformly continuous. Now apply Proposition 2.1. □

The following proposition illustrates our main motivation for introducing the homogeneously continuous mappings.

Proposition 2.2

Let \(\mathbb{X}\) and \(\mathbb{Y}\) be normed vector spaces, \(f:\mathbb{X}\rightarrow \mathbb{Y}\) be a mapping, \(\mathbb{E}\) be a subset of \(\mathbb{X}\) and \(\bar{x}\in \mathbb{X}\). If f is homogeneously continuous at \(\bar{x}\) on \(\mathbb{E}\), then for every \(\epsilon >0\) there exist \(\delta >0\) and \(\beta >0\) such that

$$\begin{aligned} \biggl\Vert \frac{f(\bar{x}+tx)-f(\bar{x})}{t}- \frac{f(\bar{x}+ty)-f(\bar{x})}{t} \biggr\Vert < \epsilon, \end{aligned}$$

for all \(x,y\in \mathbb{E}\) with \(\Vert x-y\Vert <\delta \) and all \(0< t\leq \beta \).

Proof

The proof is obvious; we therefore omit it. □

For a mapping \(f:\mathbb{X}\rightarrow \mathbb{Y}\), we consider the following notions of lower directional derivatives which are crucial to our approach:

$$\begin{aligned} &f_{l}^{\prime }(\bar{x}) ( \nu ):=\liminf_{t\downarrow 0} \frac{ \Vert f(\bar{x}+t\nu )-f(\bar{x}) \Vert }{t}, \\ &f_{0}^{\prime }(\bar{x}) ( \nu ):=\liminf_{t\downarrow 0, \mu \rightarrow \nu } \frac{ \Vert f(\bar{x}+t\mu )-f(\bar{x}) \Vert }{t}. \end{aligned}$$

Note that we have

$$\begin{aligned} 0\leq f_{0}^{\prime }(\bar{x}) (\nu )\leq f_{l}^{\prime }(\bar{x}) (\nu ), \end{aligned}$$
(2.2)

for every \(\nu \in \mathbb{X}\). We shall observe that if \(\inf_{\nu \in \mathbb{S}}f_{l}^{\prime }(\bar{x})(\nu )>0\) and f is homogeneously continuous at \(\bar{x}\) on \(\mathbb{S}\), then f satisfies property (1.2) above.
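The two lower directional derivatives can be approximated numerically. The rough probe below (a heuristic sketch, not a rigorous liminf: it minimizes difference quotients over a finite grid of t and, for \(f_{0}^{\prime }\), over a finite set of nearby directions) uses the sample map \(f(x)=x+x^{2}\sin (1/x)\), for which both derivatives at 0 in direction 1 equal 1:

```python
import math

# A rough numerical probe (an assumption-laden sketch, not a rigorous liminf):
# approximate f_l'(0)(v) = liminf_{t->0+} |f(t*v) - f(0)| / t by taking the
# minimum of the difference quotient over a grid of small t, and f_0'(0)(v)
# by additionally perturbing the direction v.
def f(x):                      # sample map f: R -> R with f(0) = 0
    return x + x * x * math.sin(1.0 / x) if x != 0 else 0.0

def f_l(v, ts):
    return min(abs(f(t * v) - f(0.0)) / t for t in ts)

def f_0(v, ts, radius=1e-3, n=21):
    mus = [v + radius * (2 * k / (n - 1) - 1) for k in range(n)]  # directions near v
    return min(f_l(mu, ts) for mu in mus)

ts = [10.0 ** (-k) for k in range(2, 8)]
print(f_0(1.0, ts), f_l(1.0, ts))   # 0 <= f_0' <= f_l', both close to 1 here
```

Since the grid for \(f_{0}^{\prime }\) contains the direction v itself, the chain \(0\leq f_{0}^{\prime }\leq f_{l}^{\prime }\) of (2.2) holds for these approximations by construction.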

2.3 Main results

Throughout the remaining part of the paper, unless specified otherwise, we assume that \(\mathbb{X}\) is a finite-dimensional space and \(\mathbb{Y}\) is an arbitrary normed space. We are now ready to state the main theorem of the paper. For a positive scalar \(\alpha \in \mathbb{R}\), let

$$\begin{aligned} \mathbb{S}_{\alpha }:=\bigl\lbrace x\in \mathbb{X}: \Vert x \Vert = \alpha \bigr\rbrace =\alpha \mathbb{S}. \end{aligned}$$

Theorem 2.1

Let \(f:\mathbb{X}\rightarrow \mathbb{Y}\) be homogeneously continuous at \(\bar{x}\in \mathbb{X}\) on \(\mathbb{S}_{\alpha }\) for some positive scalar α. If there exists some \(\kappa >0\) such that \(\inf_{\nu \in \mathbb{S}_{\alpha }}f_{l}^{\prime }(\bar{x})(\nu )>\kappa \), then there exists \(\delta >0\) such that

$$\begin{aligned} \Vert x-\bar{x} \Vert \leq \frac{\alpha }{\kappa } \bigl\Vert f(x)-f(\bar{x}) \bigr\Vert , \end{aligned}$$

for all \(x\in \mathbb{B}(\bar{x},\delta )\). In other words, is a strongly regular point of f.

Proof

Let \(\kappa <\gamma <\inf_{\nu \in \mathbb{S}_{\alpha }}f_{l}^{\prime }( \bar{x})(\nu )\) and \(\epsilon:= \gamma -\kappa \). Hence, for all \(\nu \in \mathbb{S}_{\alpha }\) there exists \(0< r_{\nu }\leq 1\) such that

$$\begin{aligned} \inf_{ 0< h\leq r_{\nu }}\frac{ \Vert f(\bar{x}+h\nu )-f(\bar{x}) \Vert }{h}> \gamma. \end{aligned}$$
(2.3)

Since f is homogeneously continuous at \(\bar{x}\) on \(\mathbb{S}_{\alpha }\), there exist \(\theta >0\) and \(\beta >0\) such that

$$\begin{aligned} \Vert \nu -\mu \Vert < \theta \quad\Longrightarrow\quad \biggl\Vert \frac{f(\bar{x}+t\nu )-f(\bar{x})}{t}- \frac{f(\bar{x}+t\mu )-f(\bar{x})}{t} \biggr\Vert < \epsilon, \end{aligned}$$
(2.4)

for all \(\nu, \mu \in \mathbb{S}_{\alpha }\) and all \(0< t\leq \beta \), by Proposition 2.2. Let \(\hat{r}_{\nu }:=\min \{\theta,\beta, r_{\nu }\}\) for all \(\nu \in \mathbb{S}_{\alpha }\). Clearly, \(\mathbb{S}_{\alpha }\subset \bigcup_{\nu \in \mathbb{S}_{\alpha }} \mathbb{B}(\nu, \hat{r}_{\nu }) \). The compactness of \(\mathbb{S}_{\alpha }\) implies that there exist \(\nu _{1}, \nu _{2}, \ldots, \nu _{m}\in \mathbb{S}_{\alpha }\) such that \(\mathbb{S}_{\alpha }\subset \bigcup_{k=1}^{m} \mathbb{B}(\nu _{k}, \hat{r}_{\nu _{k}}) \). Now let \(x\in \mathbb{B}(\bar{x},\alpha \hat{\delta })\setminus \lbrace \bar{x} \rbrace \) and \(\nu:=\frac{\alpha }{\|x-\bar{x}\|}(x-\bar{x})\), where \(\hat{\delta }:=\min \{\hat{r}_{\nu _{k}}: 1\leq k\leq m\}\). Then, \(\nu \in \mathbb{S}_{\alpha }\) and therefore \(\nu \in \mathbb{B}(\nu _{s}, \hat{r}_{\nu _{s}})\) for some \(1\leq s\leq m\). It follows that \(\|\nu -\nu _{s}\|<\theta \) and \(\alpha ^{-1}\|x-\bar{x}\|<\beta \). By (2.4), we deduce that

$$\begin{aligned} \biggl\Vert \frac{f (\bar{x}+\alpha ^{-1} \Vert x-\bar{x} \Vert \nu _{s} )-f(\bar{x})}{\alpha ^{-1} \Vert x-\bar{x} \Vert }- \frac{f (\bar{x}+\alpha ^{-1} \Vert x-\bar{x} \Vert \nu )-f(\bar{x})}{\alpha ^{-1} \Vert x-\bar{x} \Vert } \biggr\Vert < \epsilon. \end{aligned}$$

Hence

$$\begin{aligned} \frac{ \Vert f(x)-f(\bar{x}) \Vert }{\alpha ^{-1} \Vert x-\bar{x} \Vert }&= \frac{ \Vert f (\bar{x}+\alpha ^{-1} \Vert x-\bar{x} \Vert \nu )-f(\bar{x}) \Vert }{\alpha ^{-1} \Vert x-\bar{x} \Vert }>\frac{ \Vert f (\bar{x}+\alpha ^{-1} \Vert x-\bar{x} \Vert \nu _{s} )-f(\bar{x}) \Vert }{\alpha ^{-1} \Vert x-\bar{x} \Vert }- \epsilon \\ &>\gamma -\epsilon =\kappa \quad\text{by } \text{(2.3)}, \end{aligned}$$

since \(\alpha ^{-1}\|x-\bar{x}\|< r_{\nu _{s}}\). It follows that

$$\begin{aligned} \Vert x-\bar{x} \Vert \leq \frac{\alpha }{\kappa } \bigl\Vert f(x)-f(\bar{x}) \bigr\Vert , \end{aligned}$$

for all \(x\in \mathbb{B}(\bar{x},\alpha \hat{\delta })\). Letting \(\delta:=\alpha \hat{\delta }\) completes the proof. □

Corollary 2.2

Let \(f:\mathbb{X}\rightarrow \mathbb{Y}\) be homogeneously continuous at \(\bar{x}\in \mathbb{X}\) on \(\mathbb{S}\). If there exists some \(\kappa >0\) such that \(\inf_{\nu \in \mathbb{S}}f_{0}^{\prime }(\bar{x})(\nu )>\kappa \), then there exists \(\delta >0\) such that

$$\begin{aligned} \Vert x-\bar{x} \Vert \leq \frac{1}{\kappa } \bigl\Vert f(x)-f(\bar{x}) \bigr\Vert , \end{aligned}$$
(2.5)

for all \(x\in \mathbb{B}(\bar{x},\delta )\).

Proof

Apply Theorem 2.1 and (2.2). □

Corollary 2.3

Suppose that \(f:\mathbb{X}\rightarrow \mathbb{Y}\) is locally Lipschitz around \(\bar{x}\in \mathbb{X}\). If there exists some \(\kappa >0\) such that \(\inf_{\nu \in \mathbb{S}}f_{l}^{\prime }(\bar{x})(\nu )>\kappa \), then there exists \(\delta >0\) such that (2.5) holds for all \(x\in \mathbb{B}(\bar{x},\delta )\).

Proof

By Lemma 2.1, f is homogeneously continuous at \(\bar{x}\) on some closed ball \(\overline{\mathbb{B}}(0,r)\). It follows that f is homogeneously continuous at \(\bar{x}\) on \(\mathbb{S}_{r}\) (since \(\mathbb{S}_{r} \subset \overline{\mathbb{B}}(0,r)\)). The condition \(\inf_{\nu \in \mathbb{S}}f_{l}^{\prime }(\bar{x})(\nu )>\kappa \) implies that \(\inf_{\nu \in \mathbb{S}_{r}}f_{l}^{\prime }(\bar{x})(\nu )>r\kappa \). Now apply Theorem 2.1. □

Corollary 2.4

Let \(f:\mathbb{X}\rightarrow \mathbb{Y}\) be a continuous mapping and \(\bar{x}\in \mathbb{X}\). Assume that the Hadamard directional derivative of f at \(\bar{x}\) in every direction \(\nu \in \mathbb{S}\) exists. If there exists some \(\kappa >0\) such that \(\inf_{\nu \in \mathbb{S}}\Vert f_{H}^{\prime }(\bar{x})(\nu )\Vert > \kappa \), then there exists \(\delta >0\) such that (2.5) holds for all \(x\in \mathbb{B}(\bar{x},\delta )\).

Proof

By hypothesis, \(f_{H}^{\prime }(\bar{x})(\nu )\) exists for every \(\nu \in \mathbb{S}\). By continuity of f and \(\Vert \cdot \Vert \), it follows that

$$\begin{aligned} \bigl\Vert f_{H}^{\prime }(\bar{x}) (\nu ) \bigr\Vert = \bigl\Vert f_{G}^{\prime }(\bar{x}) ( \nu ) \bigr\Vert =\lim _{t\downarrow 0} \frac{ \Vert f(\bar{x}+t\nu )-f(\bar{x}) \Vert }{t}=f_{l}^{\prime }(\bar{x}) (\nu ), \end{aligned}$$

for every \(\nu \in \mathbb{S}\). It follows that \(\inf_{\nu \in \mathbb{S}}f_{l}^{\prime }(\bar{x})(\nu )>\kappa \). Since \(\mathbb{X}\) is finite dimensional, \(\mathbb{S}\) is compact. Hence, f is homogeneously continuous at \(\bar{x}\in \mathbb{X}\) on \(\mathbb{S}\), by Corollary 2.1. Now apply Theorem 2.1. □

The following example was considered in [7] (Example 2.1). We prove once again, this time by means of Theorem 2.1, that the origin is a regular point of the mapping f involved.

Example 2.1

Consider the mapping \(f:\mathbb{R}\rightarrow \mathbb{R}\) defined as

$$\begin{aligned} f(x):= \textstyle\begin{cases} \vert x \vert (\frac{2}{\pi }-x\sin (\frac{1}{x}) ),& x\neq 0; \\ 0, & x=0. \end{cases}\displaystyle \end{aligned}$$

We have \(\mathbb{S}=\lbrace \pm 1\rbrace \) and therefore f is homogeneously continuous at 0 on \(\mathbb{S}\). One may easily verify that

$$\begin{aligned} f_{l}^{\prime }(0) (\pm 1)=\liminf_{t\downarrow 0} \frac{ \vert \pm t ( \frac{2}{\pi }-(\pm t)\sin ( \frac{1}{\pm t}) ) \vert }{t}= \frac{2}{\pi }. \end{aligned}$$

It follows that \(\inf_{s\in \mathbb{S}}f_{l}^{\prime }(0)(s)=\frac{2}{\pi }>0\). Hence, if \(0<\kappa <\frac{2}{\pi }\), then there exists \(\delta >0\) such that

$$\begin{aligned} \vert x-0 \vert = \vert x \vert \leq \frac{1}{\kappa } \bigl\vert f(x)-f(0) \bigr\vert =\frac{1}{\kappa } \bigl\vert f(x) \bigr\vert , \end{aligned}$$

for all \(\vert x\vert <\delta \), by Theorem 2.1. Hence, 0 is a strongly regular point of f. Since f is continuous, the set \(f^{-1}(0)\) is closed, and therefore the distance function \(\mathop{\operatorname{dist}}(\cdot,f^{-1}(0))\) is Lipschitz around 0 (see [8, p. 11]). Hence, 0 is a regular point of f.
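The claims of Example 2.1 are easy to confirm numerically; the sketch below (our own illustration) checks that the difference quotients approach \(2/\pi \approx 0.6366\) and that the strong regularity inequality holds with \(\kappa =0.5<2/\pi \) near the origin:

```python
import math

# Numerical check of Example 2.1: f(x) = |x| * (2/pi - x*sin(1/x)).
def f(x):
    return abs(x) * (2.0 / math.pi - x * math.sin(1.0 / x)) if x != 0 else 0.0

# The difference quotients |f(t*v)| / t approach 2/pi for v = +1 and v = -1.
for v in (1.0, -1.0):
    print(v, [abs(f(t * v)) / t for t in (1e-3, 1e-5, 1e-7)])

# Strong regularity: |x| <= (1/kappa)*|f(x)| for 0 < kappa < 2/pi and x near 0.
kappa = 0.5                      # 0.5 < 2/pi ~ 0.6366
for x in [k * 1e-4 for k in range(-100, 101) if k != 0]:
    assert abs(x) <= abs(f(x)) / kappa
```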

3 Hoffman’s estimate for the distance to the set of solutions to a system of linear inequalities

Theorem 3.1

(Hoffman, 1952, [9, 10])

Let \(x_{i}^{*}\), \(i= 1, 2, \dots, k\), be a finite family of linear forms on \(\mathbb{X}\) (i.e., elements of \(\mathbb{X}^{*}\)). Set

$$\begin{aligned} \mathbb{C}_{\leq }:=\bigl\{ x\in \mathbb{X} \textit{ such that } \bigl\langle x_{i}^{*},x \bigr\rangle \leq 0, i= 1, 2,\dots, k\bigr\} . \end{aligned}$$
(3.1)

Then, there exists \(\kappa >0\) such that

$$\begin{aligned} \mathop{\operatorname{dist}}(x,\mathbb{C}_{\leq })\leq \kappa \bigl[ \Phi (x)\bigr]_{+}, \end{aligned}$$
(3.2)

where \(\Phi (x):=\max \{\langle x_{i}^{*}, x\rangle, i= 1, 2,\dots, k\} \) and \([\Phi (x)]_{+}:= \max (\Phi (x),0)\).
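For a concrete instance of Hoffman’s estimate (our own hypothetical example, with the Euclidean norm on \(\mathbb{R}^{2}\), which the theorem does not fix), take \(x_{1}^{*}=e_{1}\) and \(x_{2}^{*}=e_{2}\), so that \(\mathbb{C}_{\leq }\) is the nonpositive orthant and the distance to it can be computed exactly:

```python
import math, random

# A concrete Hoffman bound (hypothetical instance, Euclidean norm assumed):
# with x_i* = e_i in R^2, C_<= is the nonpositive orthant, whose metric
# projection is x |-> min(x, 0), so dist(x, C_<=) = || max(x, 0) ||_2.
# Here kappa = sqrt(2) works, since ||x_+||_2 <= sqrt(2) * max_i [x_i]_+.
random.seed(1)
kappa = math.sqrt(2.0)
for _ in range(1000):
    x = (random.uniform(-2, 2), random.uniform(-2, 2))
    dist = math.hypot(max(x[0], 0.0), max(x[1], 0.0))   # dist(x, C_<=)
    phi_plus = max(max(x[0], x[1]), 0.0)                # [Phi(x)]_+
    assert dist <= kappa * phi_plus + 1e-12
print("Hoffman estimate verified with kappa = sqrt(2)")
```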

Hoffman’s result is considered the starting point of the theory of error bounds, a theory that has been extended over the years to different contexts. We refer to [3, 11–13] and the references therein for a discussion of the fundamental role played by Hoffman bounds and, more generally, by error bounds in mathematical programming. As described, for example, in [14], they are used, for instance, in convergence properties of algorithms, in sensitivity analysis, and in designing solution methods for nonconvex quadratic problems. When \(\mathbb{C}:=\lbrace x\in \mathbb{X}: A(x)=0, \langle x_{i}^{*},x \rangle \leq 0, i=1,2,\dots, k\rbrace \), where \(x_{i}^{*}\in \mathbb{X}^{*}\), \(i=1,2,\dots, k\), are some given functionals and \(A:\mathbb{X}\rightarrow \mathbb{Y}\) is a linear (continuous) mapping, we have the following result.

Theorem 3.2

(Ioffe, 1979, [15])

There exists some \(\kappa >0\) such that

$$\begin{aligned} \mathop{\operatorname{dist}}(x,\mathbb{C})\leq \kappa \Biggl( \bigl\Vert A(x) \bigr\Vert + \sum_{i=1}^{k}\bigl[ \bigl\langle x_{i}^{*},x\bigr\rangle \bigr]_{+} \Biggr), \end{aligned}$$
(3.3)

for all \(x\in \mathbb{X}\).

Now let \(\mathbb{G}:=\mathop{\operatorname{Ker}}A\cap ( \bigcap_{i=1}^{k} \mathop{\operatorname{Ker}}x_{i}^{*} )\). Then, Theorem 3.2 yields the following result.

Corollary 3.1

There exists some \(\kappa ^{\prime }>0\) such that

$$\begin{aligned} \mathop{\operatorname{dist}}(x,\mathbb{G})\leq \kappa ^{\prime } \Biggl( \bigl\Vert A(x) \bigr\Vert + \sum_{i=1}^{k} \bigl\vert \bigl\langle x_{i}^{*},x\bigr\rangle \bigr\vert \Biggr), \end{aligned}$$
(3.4)

for all \(x\in \mathbb{X}\).

In this section, we apply Theorem 2.1 and establish similar estimates. We prove that there exists \(\bar{\kappa }>0\) such that

$$\begin{aligned} \mathop{\operatorname{dist}}(x,\mathbb{G})\leq \bar{\kappa } \Biggl( \bigl\Vert L(x) \bigr\Vert + \sum_{i=1}^{k}\bigl[ \bigl\langle x_{i}^{*},x\bigr\rangle \bigr]_{+} \Biggr), \end{aligned}$$

for all \(x\in \mathbb{X}\), where \(L:\mathbb{X}\rightarrow \mathbb{X}\) is a linear mapping with \(\mathop{\operatorname{Ker}}L=\mathbb{G}\). Our results also allow us to evaluate the constant κ̄. The details are as follows.

Proposition 3.1

Let \(A:\mathbb{X}\rightarrow \mathbb{Y}\) be a linear mapping and \(x_{i}^{*}\in \mathbb{X}^{*}\), \(i=1,2,\dots, k\) be given. Suppose that \(L:\mathbb{X}\rightarrow \mathbb{X}\) is a linear mapping such that \(\mathop{\operatorname{Ker}}L=\mathbb{G}\). Then

$$\begin{aligned} \mathop{\operatorname{dist}}(x,\mathbb{G})\leq \frac{1}{\gamma } \Biggl( \bigl\Vert L(x) \bigr\Vert + \sum_{i=1}^{k} \bigl[\bigl\langle x_{i}^{*},x\bigr\rangle \bigr]_{+} \Biggr), \end{aligned}$$
(3.5)

where γ is a positive real number given by

$$\begin{aligned} \gamma:=\inf \Biggl\lbrace \bigl\Vert L(\nu ) \bigr\Vert + \sum_{i=1}^{k}\bigl[ \bigl\langle x_{i}^{*},\nu \bigr\rangle \bigr]_{+}: \nu \in \mathbb{X}, \mathop{\operatorname{dist}}(\nu, \mathbb{G})=1 \Biggr\rbrace . \end{aligned}$$
(3.6)

Proof

Let us consider the finite-dimensional quotient space \(\mathcal{M}:= \mathbb{X}/\mathbb{G}\), and denote by \([x]\) the equivalence class containing x in \(\mathcal{M}\), that is, \([x]:= x+\mathbb{G}\). We set \(\Vert [x]\Vert:= \inf \{\|x+y\|: y\in \mathbb{G}\}\). Denote by \(\mathbb{S}_{\mathcal{M}}\) the unit sphere of \(\mathcal{M}\) (i.e., the elements of \(\mathcal{M}\) of norm one). Obviously,

$$\begin{aligned} \mathbb{S}_{\mathcal{M}}=\bigl\lbrace [x]\in \mathcal{M}: \inf \bigl\{ \Vert x+y \Vert : y\in \mathbb{G}\bigr\} =1\bigr\rbrace =\bigl\lbrace [x]: x \in \mathbb{X}, \mathop{\operatorname{dist}}(x, \mathbb{G})=1 \bigr\rbrace . \end{aligned}$$
(3.7)

Consider the continuous linear mapping \(\overline{L}:\mathcal{M}\rightarrow \mathbb{X}\) defined as \(\overline{L}([x]):=L(x)\) for all \([x]\in \mathcal{M}\) (well defined since \(\mathop{\operatorname{Ker}}L=\mathbb{G}\)). Also, for each \(1\leq i\leq k\), define \(\langle [x_{i}]^{*},[ x]\rangle:=\langle x_{i}^{*},x\rangle \) for all \([x]\in \mathcal{M}\). Obviously, each \([x_{i}]^{*}\) belongs to \(\mathbb{G}^{\perp }\) (the annihilator of \(\mathbb{G}\) in \(\mathbb{X}^{*}\)), and hence belongs to the dual of \(\mathcal{M}\) (which is isometrically isomorphic to \(\mathbb{G}^{\perp }\) [16]). Set

$$\begin{aligned} \overline{\mathbb{C}}:=\bigl\lbrace [x]\in \mathcal{M}: \overline{L}\bigl([x] \bigr)=0, \bigl\langle [x_{i}]^{*},[x]\bigr\rangle \leq 0, i=1,2,\dots,k\bigr\rbrace . \end{aligned}$$

We have \(\mathop{\operatorname{Ker}}\overline{L}=\mathop{\operatorname{Ker}}L=\mathbb{G}\) and therefore \(\overline{\mathbb{C}}=\lbrace [0]\rbrace \). Now define the mapping \(f:\mathcal{M}\rightarrow \mathbb{R}\) as

$$\begin{aligned} f\bigl([x]\bigr):= \bigl\Vert \overline{L}\bigl([x]\bigr) \bigr\Vert + \sum _{i=1}^{k}\bigl[\bigl\langle [x_{i}]^{*},[x] \bigr\rangle \bigr]_{+}. \end{aligned}$$

We show that the conditions of Theorem 2.1 for f at \([\bar{x}]=[0]\) are all satisfied. For all \([\nu ]\in \mathbb{S}_{\mathcal{M}}\), one has

$$\begin{aligned} \lim_{t\downarrow 0, \mu \rightarrow \nu }\frac{f([t\mu ])-f([0])}{t}= \bigl\Vert \overline{L}\bigl([ \nu ]\bigr) \bigr\Vert + \sum_{i=1}^{k} \bigl[\bigl\langle [x_{i}]^{*},[ \nu ]\bigr\rangle \bigr]_{+}. \end{aligned}$$

Hence, f is homogeneously continuous at \([0]\) on \(\mathbb{S}_{\mathcal{M}}\), by Corollary 2.1. We also have

$$\begin{aligned} f_{l}^{\prime }\bigl([0]\bigr) \bigl([\nu ]\bigr)= \bigl\Vert \overline{L}\bigl([\nu ]\bigr) \bigr\Vert + \sum_{i=1}^{k} \bigl[ \bigl\langle [x_{i}]^{*},[\nu ]\bigr\rangle \bigr]_{+}= \bigl\Vert L(\nu ) \bigr\Vert + \sum _{i=1}^{k}\bigl[ \bigl\langle x_{i}^{*}, \nu \bigr\rangle \bigr]_{+}. \end{aligned}$$

The continuity of f implies that the mapping \(f\vert _{\mathbb{S}_{\mathcal{M}}}\) (the restriction of f to \(\mathbb{S}_{\mathcal{M}}\)) attains its minimum at some \([\nu _{0}]\in \mathbb{S}_{\mathcal{M}}\). Then, \([\nu _{0}]\notin \overline{\mathbb{C}}\) (note that \(\overline{\mathbb{C}}=\lbrace [0]\rbrace \)) and therefore

$$\begin{aligned} \bigl\Vert \overline{L}\bigl([\nu _{0}]\bigr) \bigr\Vert + \sum _{i=1}^{k}\bigl[\bigl\langle [x_{i}]^{*},[ \nu _{0}]\bigr\rangle \bigr]_{+}>0. \end{aligned}$$

It follows that \(\inf_{[\nu ]\in \mathbb{S}_{\mathcal{M}}}f_{l}^{\prime }([0])([\nu ])>0\). Using (3.7), we obtain

$$\begin{aligned} \inf_{[\nu ]\in \mathbb{S}_{\mathcal{M}}}f_{l}^{\prime }\bigl([0]\bigr) \bigl([\nu ]\bigr)= \inf \Biggl\lbrace \bigl\Vert L(\nu ) \bigr\Vert + \sum _{i=1}^{k}\bigl[\bigl\langle x_{i}^{*}, \nu \bigr\rangle \bigr]_{+}: \nu \in \mathbb{X}, \mathop{\operatorname{dist}}(\nu,\mathbb{G})=1 \Biggr\rbrace =\gamma. \end{aligned}$$

Thus \(\gamma >0\). Now let \(0<\kappa <\gamma \). Theorem 2.1 implies that there exists some \(\delta >0\) such that

$$\begin{aligned} \bigl\Vert [x]-[0] \bigr\Vert = \bigl\Vert [x] \bigr\Vert \leq \frac{1}{\kappa } \Biggl( \bigl\Vert \overline{L}\bigl([x]\bigr) \bigr\Vert + \sum_{i=1}^{k}\bigl[\bigl\langle [x_{i}]^{*},[x] \bigr\rangle \bigr]_{+} \Biggr), \end{aligned}$$

for all \([x]\in \mathbb{B}_{\mathcal{M}}([0],\delta )\). Since f is positively homogeneous,

$$\begin{aligned} \bigl\Vert [x] \bigr\Vert \leq \frac{1}{\kappa } \Biggl( \bigl\Vert \overline{L}\bigl([x]\bigr) \bigr\Vert + \sum _{i=1}^{k}\bigl[\bigl\langle [x_{i}]^{*},[x] \bigr\rangle \bigr]_{+} \Biggr), \end{aligned}$$

for all \([x]\in \mathcal{M}\). It follows that

$$\begin{aligned} \mathop{\operatorname{dist}}(x,\mathbb{G})\leq \frac{1}{\kappa } \Biggl( \bigl\Vert L(x) \bigr\Vert + \sum_{i=1}^{k}\bigl[ \bigl\langle x_{i}^{*},x\bigr\rangle \bigr]_{+} \Biggr). \end{aligned}$$
(3.8)

for all \(x\in \mathbb{X}\). Letting \(\kappa \rightarrow \gamma \) in (3.8), we obtain the desired result. □

Remark 3.1

The existence of the linear mapping \(L:\mathbb{X}\rightarrow \mathbb{X}\) discussed in Proposition 3.1 is straightforward. Indeed, \(\mathbb{G}\) is a closed subspace of \(\mathbb{X}\) and \(\mathbb{X}\) is separable, thus there exists a (continuous) linear mapping \(L:\mathbb{X}\rightarrow \mathbb{X}\) with \(\mathop{\operatorname{Ker}}L=\mathbb{G}\) (see [17]). Of course, one can easily define L directly (without using [17]). To see this, suppose that \(\mathop{\operatorname{dim}}\mathbb{X}=n\) and let \(\lbrace e_{1}, \dots, e_{j}\rbrace \) be a basis for the subspace \(\mathbb{G}\). By linear algebra, we can extend \(\lbrace e_{1}, \dots, e_{j}\rbrace \) to a basis for \(\mathbb{X}\). Let us denote this basis by \(\lbrace e_{1}, \dots,e_{j},e_{j+1},\dots, e_{n}\rbrace \). Now, for every \(x:=x_{1}e_{1}+\cdots +x_{n}e_{n}\in \mathbb{X}\), define the mapping \(L:\mathbb{X}\rightarrow \mathbb{X}\) as

$$\begin{aligned} L(x):=x_{j+1}e_{j+1}+\cdots +x_{n}e_{n}. \end{aligned}$$

One can easily check that L is well-defined, linear, and \(\mathop{\operatorname{Ker}}L=\mathbb{G}\).
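In Euclidean coordinates, a convenient variant of this construction (our sketch; it uses orthogonal projection instead of the coordinate formula above, so the Euclidean structure is an added assumption) obtains L as the projector onto the orthogonal complement of \(\mathbb{G}\):

```python
import numpy as np

# A concrete variant of the construction in Remark 3.1 (Euclidean structure
# assumed, not the paper's coordinate formula): if Q is an orthonormal basis
# of G, then L = I - Q @ Q.T is linear with Ker L = G, since L x = 0 exactly
# when x equals its orthogonal projection onto G.
rng = np.random.default_rng(0)
n, j = 5, 2
B = rng.standard_normal((n, j))          # generators of G (assumed independent)
Q, _ = np.linalg.qr(B)                   # orthonormal basis of G
L = np.eye(n) - Q @ Q.T                  # projector onto the complement of G

assert np.allclose(L @ B, 0.0)                   # G lies inside Ker L
assert np.linalg.matrix_rank(L) == n - j         # so dim Ker L = j, i.e. Ker L = G
```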

Corollary 3.2

Let \(A:\mathbb{X}\rightarrow \mathbb{X}\) be a linear mapping. Then

$$\begin{aligned} \mathop{\operatorname{dist}}(x,\mathop{\operatorname{Ker}}A)\leq \frac{ \Vert A(x) \Vert }{\inf \lbrace \Vert A(\nu ) \Vert : \nu \in \mathbb{X}, \mathop{\operatorname{dist}}(\nu, \mathop{\operatorname{Ker}}A)=1 \rbrace }, \end{aligned}$$

for all \(x\in \mathbb{X}\).

Proof

Let \(\mathbb{X} =\mathbb{Y}\), and \(x_{i}^{*}\equiv 0\) for all \(1\leq i\leq k\). Then, \(\mathbb{G}=\mathop{\operatorname{Ker}}A\). Letting \(L:=A\) in Proposition 3.1 yields the result. □
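When \(\mathbb{X}=\mathbb{R}^{n}\) carries the Euclidean norm (an assumption of this sketch, not of the corollary), the denominator \(\inf \lbrace \Vert A(\nu )\Vert : \mathop{\operatorname{dist}}(\nu,\mathop{\operatorname{Ker}}A)=1\rbrace \) is the smallest nonzero singular value of A, and the estimate can be verified directly:

```python
import numpy as np

# Numerical check of Corollary 3.2 in Euclidean R^4 (the Euclidean norm is
# our assumption; the corollary holds for any norm).  For a matrix A,
# inf{ ||A v|| : dist(v, Ker A) = 1 } is the smallest nonzero singular value,
# and dist(x, Ker A) is the norm of the component of x in the row space.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A[3] = A[0] + A[1]                     # force a nontrivial kernel (rank 3)

U, s, Vt = np.linalg.svd(A)
r = np.linalg.matrix_rank(A)
sigma_min_nonzero = s[r - 1]           # singular values are sorted decreasingly
Vr = Vt[:r]                            # orthonormal basis of the row space

for _ in range(100):
    x = rng.standard_normal(4)
    dist_to_ker = np.linalg.norm(Vr @ x)          # dist(x, Ker A)
    assert dist_to_ker <= np.linalg.norm(A @ x) / sigma_min_nonzero + 1e-9
print("Corollary 3.2 estimate verified")
```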

Corollary 3.3

Let \(A:\mathbb{X}\rightarrow \mathbb{Y}\) and \(L:\mathbb{X}\rightarrow \mathbb{X}\) be linear mappings with \(\mathop{\operatorname{Ker}}L=\mathbb{G}\) and \(x_{i}^{*}\in \mathbb{X}^{*}\), \(i=1,2,\dots, k\), be some given functionals. Then

$$\begin{aligned} \mathop{\operatorname{dist}}(x,\mathbb{G})\leq \frac{ \Vert L(x) \Vert }{\inf \lbrace \Vert L(\nu ) \Vert + \sum_{i=1}^{k}[\langle x_{i}^{*},\nu \rangle ]_{+}: \nu \in \mathbb{X}, \mathop{\operatorname{dist}}(\nu,\mathbb{G})=1 \rbrace }, \end{aligned}$$
(3.9)

for every \(x\in \mathbb{C}_{\leq }\) (see (3.1) above).

Proof

Proposition 3.1 implies that

$$\begin{aligned} \mathop{\operatorname{dist}}(x,\mathbb{G})\leq \frac{1}{\gamma } \Biggl( \bigl\Vert L(x) \bigr\Vert + \sum_{i=1}^{k}\bigl[\bigl\langle x_{i}^{*},x\bigr\rangle \bigr]_{+} \Biggr), \end{aligned}$$

where

$$\begin{aligned} \gamma:=\inf \Biggl\lbrace \bigl\Vert L(\nu ) \bigr\Vert + \sum _{i=1}^{k}\bigl[ \bigl\langle x_{i}^{*}, \nu \bigr\rangle \bigr]_{+}: \nu \in \mathbb{X}, \mathop{\operatorname{dist}}(\nu, \mathbb{G})=1 \Biggr\rbrace . \end{aligned}$$

Now let \(x\in \mathbb{C}_{\leq }\), so that \(\langle x_{i}^{*},x\rangle \leq 0\), and hence \([\langle x_{i}^{*},x\rangle ]_{+} =0\), for every \(1\leq i\leq k\). The above inequality then yields

$$\begin{aligned} \mathop{\operatorname{dist}}(x,\mathbb{G})\leq \frac{ \Vert L(x) \Vert }{\inf \lbrace \Vert L(\nu ) \Vert + \sum_{i=1}^{k}[\langle x_{i}^{*},\nu \rangle ]_{+}: \nu \in \mathbb{X}, \mathop{\operatorname{dist}}(\nu,\mathbb{G})=1 \rbrace }, \end{aligned}$$

for every \(x\in \mathbb{C}_{\leq }\). This completes the proof. □
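A Monte-Carlo sketch of Corollary 3.3, with hypothetical data in \((\mathbb{R}^{3}, \Vert \cdot \Vert _{2})\): L is a matrix whose kernel is \(\mathbb{G}\), \(x_{1}^{*}\) is a single inequality functional, γ is estimated by sampling points at distance 1 from \(\mathbb{G}\), and the bound (3.9) is checked on every sampled point of \(\mathbb{C}_{\leq }\).

```python
import numpy as np

# Monte-Carlo illustration of Corollary 3.3 with hypothetical data:
# L kills the G-component, x1 plays the role of the functional x1*,
# and gamma is estimated over sampled nu with dist(nu, G) = 1.
rng = np.random.default_rng(2)
L = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])          # Ker L = G = span{(1,0,1)}
x1 = np.array([1.0, 1.0, 0.0])            # hypothetical functional x1*

g = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)   # unit vector spanning G
def dist_G(v):
    """Euclidean distance from v to G."""
    return np.linalg.norm(v - (g @ v) * g)

pts = [v for v in rng.standard_normal((5000, 3)) if dist_G(v) > 1e-6]
nus = [v / dist_G(v) for v in pts]              # dist(nu, G) = 1
gamma = min(np.linalg.norm(L @ nu) + max(x1 @ nu, 0.0) for nu in nus)

violations = 0
for v in pts:
    if x1 @ v <= 0.0:                           # v lies in C_<=
        if dist_G(v) > np.linalg.norm(L @ v) / gamma + 1e-9:
            violations += 1
assert violations == 0
```

Since each tested point reappears, rescaled, among the sample used for γ, the check above holds exactly, mirroring the homogeneity argument in the proof.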

Finally, let us compare the two estimates (3.5) in Proposition 3.1 and (3.4) in Corollary 3.1 described above. First, note that an application of Corollary 3.1 (or a direct application of Theorem 3.2) with L (described in Proposition 3.1) in place of A and no inequalities immediately produces the following estimate:

$$\begin{aligned} \mathop{\operatorname{dist}}(x,\mathbb{G})\leq \kappa _{0} \bigl\Vert L(x) \bigr\Vert , \end{aligned}$$
(3.10)

for all \(x\in \mathbb{X}\), where \(\kappa _{0}\) is a constant. On the other hand, making the same replacements in Proposition 3.1 (i.e., applying Proposition 3.1 with L in place of A and dropping the inequalities) yields

$$\begin{aligned} \mathop{\operatorname{dist}}(x,\mathbb{G})\leq \frac{1}{\gamma _{0}} \bigl\Vert L(x) \bigr\Vert , \end{aligned}$$
(3.11)

where

$$\begin{aligned} \gamma _{0}:=\inf \bigl\lbrace \bigl\Vert L(\nu ) \bigr\Vert : \nu \in \mathbb{X}, \mathop{\operatorname{dist}}(\nu,\mathbb{G})=1 \bigr\rbrace >0. \end{aligned}$$

The question is: which of the estimates (3.10) and (3.11) is better? To answer it, we need to know the relationship between the coefficients \(\kappa _{0}\) and \(\gamma _{0}\) stated above. As long as the value of the constant \(\kappa _{0}\) in (3.10) is not known, we cannot say which of the estimates (3.10) and (3.11) produces a better result. We can only say that the estimate (3.11) is technically better, since it also allows us to estimate the unknown constant \(\kappa _{0}\) in (3.10); indeed, \(\kappa _{0}\leq \frac{1}{\gamma _{0}}\).

Another question which may arise is: with the simple estimate (3.11) in hand, what is the need for the estimate (3.5) in Proposition 3.1 (involving the inequalities)? To answer this question, let us take a closer look at the estimate (3.5). Indeed, by Proposition 3.1, we have

$$\begin{aligned} \mathop{\operatorname{dist}}(x,\mathbb{G})\leq \frac{1}{\gamma } \Biggl( \bigl\Vert L(x) \bigr\Vert + \sum_{i=1}^{k}\bigl[ \bigl\langle x_{i}^{*},x\bigr\rangle \bigr]_{+} \Biggr), \end{aligned}$$
(3.12)

where

$$\begin{aligned} \gamma:=\inf \Biggl\lbrace \bigl\Vert L(\nu ) \bigr\Vert + \sum _{i=1}^{k}\bigl[ \bigl\langle x_{i}^{*},\nu \bigr\rangle \bigr]_{+}: \nu \in \mathbb{X}, \mathop{\operatorname{dist}}(\nu, \mathbb{G})=1 \Biggr\rbrace >0. \end{aligned}$$

We observe that, on the one hand, \(\Vert L(x)\Vert \leq \Vert L(x)\Vert + \sum_{i=1}^{k}[\langle x_{i}^{*},x \rangle ]_{+}\) and, on the other hand, \(\frac{1}{\gamma }\leq \frac{1}{\gamma _{0}}\). As a result, we cannot generally compare the right-hand sides of the estimates (3.11) and (3.12) to determine which is better. Corollary 3.3 says that when \(x\in \mathbb{C}_{\leq }\), it would be better to use (3.12).
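The relation \(\gamma \geq \gamma _{0}\) underlying this comparison can be sketched numerically. The data below are hypothetical (a small matrix L with kernel \(\mathbb{G}\) and one functional \(x_{1}^{*}\)); both coefficients are estimated over the same sample of points at distance 1 from \(\mathbb{G}\).

```python
import numpy as np

# Sketch comparing the coefficients in (3.11) and (3.12) with
# hypothetical data.  Since the terms [<x_i*, nu>]_+ are nonnegative,
# gamma >= gamma_0 holds pointwise over any sample, so 1/gamma <=
# 1/gamma_0, while for x in C_<= the numerators of (3.11) and (3.12)
# coincide.
rng = np.random.default_rng(3)
L = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])          # Ker L = G = span{(1,0,1)}
x1 = np.array([1.0, 1.0, 0.0])            # hypothetical functional x1*

g = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)   # unit vector spanning G
def dist_G(v):
    """Euclidean distance from v to G."""
    return np.linalg.norm(v - (g @ v) * g)

nus = [v / dist_G(v) for v in rng.standard_normal((20000, 3))
       if dist_G(v) > 1e-6]               # sample with dist(nu, G) = 1
gamma0 = min(np.linalg.norm(L @ nu) for nu in nus)
gamma  = min(np.linalg.norm(L @ nu) + max(x1 @ nu, 0.0) for nu in nus)

# On C_<= the bound (3.12) is therefore at least as tight as (3.11).
assert gamma >= gamma0
```

On this particular data the two minima nearly coincide (the minimizing direction already satisfies \(\langle x_{1}^{*},\nu \rangle \leq 0\)), which illustrates that the gain from (3.12) depends on the geometry of the functionals.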

Availability of data and materials

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

References

  1. Lyusternik, L.A.: On conditional extrema of functionals. Sb. Math. 41, 390–401 (1934)

  2. Dontchev, A.L.: The Graves theorem revisited. J. Convex Anal. 3(1), 45–53 (1996)

  3. Thibault, L.: Unilateral Variational Analysis in Banach Spaces. Book in progress (2021). Private communication

  4. Graves, L.M.: Some mapping theorems. Duke Math. J. 17, 111–114 (1950)

  5. Ioffe, A.D., Tihomirov, V.M.: Theory of Extremal Problems. Studies in Mathematics and Its Applications, vol. 6. North-Holland, Amsterdam (1979). Translated from the Russian by Karol Makowski

  6. Shapiro, A.: On concepts of directional differentiability. J. Optim. Theory Appl. 66(3), 477–487 (1990). https://doi.org/10.1007/BF00940933

  7. Luu, D.V.: Optimality condition for local efficient solutions of vector equilibrium problems via convexificators and applications. J. Optim. Theory Appl. 171(2), 643–665 (2016). https://doi.org/10.1007/s10957-015-0815-8

  8. Clarke, F.H.: Optimization and Nonsmooth Analysis. Wiley, New York (1983)

  9. Hoffman, A.J.: On approximate solutions of systems of linear inequalities. J. Res. Natl. Bur. Stand. 49, 263–265 (1952)

  10. Hoffman, A.J.: Selected Papers of Alan Hoffman. World Scientific, River Edge (2003). With commentary, edited by Charles A. Micchelli

  11. Dontchev, A.L., Rockafellar, R.T.: Implicit Functions and Solution Mappings. A View from Variational Analysis, 2nd edn. Springer Series in Operations Research and Financial Engineering. Springer, New York (2014). https://doi.org/10.1007/978-1-4939-1037-3

  12. Kruger, A.Y., López, M.A., Théra, M.A.: Perturbation of error bounds. Math. Program. 168(1–2), 533–554 (2018). https://doi.org/10.1007/s10107-017-1129-4

  13. Penot, J.P.: Calculus Without Derivatives. Graduate Texts in Mathematics, vol. 266. Springer, New York (2013). https://doi.org/10.1007/978-1-4614-4538-8

  14. Peña, J., Vera, J., Zuluaga, L.: New characterizations of Hoffman constants for systems of linear constraints. Math. Program. 187(1–2), 79–109 (2021). https://doi.org/10.1007/s10107-020-01473-6

  15. Ioffe, A.D.: Regular points of Lipschitz functions. Trans. Am. Math. Soc. 251, 61–69 (1979). https://doi.org/10.2307/1998683

  16. Conway, J.B.: A Course in Functional Analysis, 2nd edn. Graduate Texts in Mathematics, vol. 96. Springer, New York (1990)

  17. Laustsen, N.J., White, J.T.: Subspaces that can and cannot be the kernel of a bounded operator on a Banach space. In: Banach Algebras and Applications, pp. 189–196. de Gruyter, Berlin (2020). https://doi.org/10.1515/9783110602418-011


Acknowledgements

The authors would like to thank the anonymous referees whose meticulous reading helped us improve the presentation.

Funding

This research benefited from the support of the FMJH Program PGMO and from the support of EDF.

Author information


Contributions

All authors contributed equally in writing this article. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Michel Théra.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Dedicated to the memory of Jonathan Borwein, in recognition of his deep contributions to mathematics and of his lasting friendship. He was one of the pioneers of metric regularity.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Abbasi, M., Théra, M. Strongly regular points of mappings. Fixed Point Theory Algorithms Sci Eng 2021, 14 (2021). https://doi.org/10.1186/s13663-021-00699-z
