Open Access

Scalar gap functions and error bounds for generalized mixed vector equilibrium problems with applications

Fixed Point Theory and Applications 2015, 2015:169

https://doi.org/10.1186/s13663-015-0422-2

Received: 21 April 2015

Accepted: 8 September 2015

Published: 18 September 2015

Abstract

It is well known that equilibrium problems are very important mathematical models and are closely related to fixed point problems, variational inequalities, and Nash equilibrium problems. Gap functions and error bounds, which play a vital role in algorithm design, are two much-studied topics for vector equilibrium problems. This paper is devoted to scalar-valued gap functions and error bounds for the generalized mixed vector equilibrium problem (GMVE). First, a scalar gap function for (GMVE) is proposed without any scalarization method, and then error bounds for (GMVE) are established in terms of this gap function. As applications, error bounds for generalized vector variational inequalities and vector variational inequalities are derived, respectively. The main results are new and improve the corresponding results of Charitha and Dutta (Pac. J. Optim. 6:497-510, 2010) and Sun and Chai (Optim. Lett. 8:1663-1673, 2014).

Keywords

error bound; scalar gap function; generalized mixed vector equilibrium

1 Introduction

Equilibrium problems, which were first studied by Blum and Oettli [1], provide a unified framework for fixed point problems, variational inequality problems, complementarity problems, and optimization problems. It is well known that the vector equilibrium problem is a vital extension of the equilibrium problem, containing vector variational inequalities, vector complementarity problems, and vector optimization problems as special cases. In the past decades, various kinds of vector equilibrium problems and their applications have been introduced and studied; see [2–9] and the references therein. Recently, Chang et al. [10] and Kumam et al. [5] studied the generalized mixed equilibrium problem of finding \(x\in K\) such that
$$F(x,y)+ \bigl\langle A(x), y-x \bigr\rangle +g(y)-g(x)\geq0, \quad\forall y\in K. $$
It is very general in the sense that it includes fixed point problems, optimization problems, variational inequality problems, Nash equilibrium problems, and equilibrium problems as special cases.

Error bounds, which play a critical role in algorithm design, can be used to measure how far an approximate solution is from the solution set and to analyze the convergence rates of various methods. Recently, various kinds of error bounds have been presented for variational inequalities in [11–18]. Error bound results have been established for the weak vector variational inequality (WVVI) in [12–15, 19]. Xu and Li [15] obtained error bounds for a weak vector variational inequality with cone constraints by using a method of image space analysis. By using a scalarization approach of Konnov [20], Li and Mastroeni [13] established error bounds for two kinds of (WVVI) with set-valued mappings. Using a regularized gap function and a D-gap function together with a projection operator method, Charitha and Dutta [12] obtained error bounds for (WVVI). Sun and Chai [14] studied error bounds for generalized vector variational inequalities by virtue of regularized gap functions. Very recently, a global error bound for a weak vector variational inequality was established by a nonlinear scalarization method in Li [19].

However, to the best of our knowledge, error bounds for the generalized mixed vector equilibrium problem (GMVE) have not yet been investigated. In this paper, motivated by the ideas of Sun and Chai [14] and Yamashita et al. [18], we introduce a scalar gap function for (GMVE) and then establish an error bound for (GMVE). As applications of this error bound, we also derive error bounds for (GVVI) and (VVI), respectively.

This paper is organized as follows: In Section 2, we first recall some basic definitions. In Section 3, we introduce scalar gap functions for (GMVE), (GVVI), and (VVI). By using these gap functions, we obtain some error bound results for (GMVE), (GVVI), and (VVI), respectively.

2 Mathematical preliminaries

Throughout this paper, let \(\mathbb{R}^{n}\) be the n-dimensional Euclidean space and denote \(\mathbb{R}^{n}_{+}=\{x=(x_{1},x_{2},\ldots,x_{n}):x_{j}\geq0, j=1,2,\ldots,n\}\). The norms and inner products of all finite dimensional spaces are denoted by \(\|\cdot \|\) and \(\langle\cdot,\cdot\rangle\), respectively. Let \(K\subseteq\mathbb{R}^{n}\) be a nonempty closed convex set, let \(A_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) (\(i=1,2,\ldots,m\)) be vector-valued mappings, and let \(F_{i}:\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}\) and \(g_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) (\(i=1,2,\ldots,m\)) be real-valued functions. For abbreviation, we put
$$A:=(A_{1},A_{2},\ldots,A_{m}),\qquad F:=(F_{1},F_{2},\ldots,F_{m}),\qquad g:=(g_{1},g_{2}, \ldots,g_{m}) $$
and for any \(x,v\in\mathbb{R}^{n}\)
$$\bigl\langle A(x),v \bigr\rangle := \bigl( \bigl\langle A_{1}(x),v \bigr\rangle , \bigl\langle A_{2}(x),v \bigr\rangle ,\ldots, \bigl\langle A_{m}(x),v \bigr\rangle \bigr). $$
In this paper, we consider the generalized mixed vector equilibrium problem (GMVE) of finding \(x\in K\) such that
$$ F(x,y)+ \bigl\langle A(x), y-x \bigr\rangle +g(y)-g(x)\notin{-} \operatorname{int} \mathbb {R}^{m}_{+},\quad \forall y\in K. $$
(1)
Denote by \(S_{GMVE}\) the solution set of (GMVE).
If \(m=1\), our problem is to find \(x \in K\) such that
$$ F(x,y)+ \bigl\langle A(x), y-x \bigr\rangle +g(y)-g(x)\geq0,\quad \forall y \in K. $$
(2)
Then this problem reduces to a generalized mixed equilibrium problem [5].
If \(F=0\), our problem is to find \(x \in K\) such that
$$ \bigl\langle A(x), y-x \bigr\rangle +g(y)-g(x)\notin{-} \operatorname{int} \mathbb{R}^{m}_{+}, \quad\forall y\in K. $$
(3)
Then this problem reduces to a generalized vector variational inequality problem (GVVI) [14].
In the case of \(F=0\) and \(g=0\), (GMVE) is the vector variational inequality problem (VVI) introduced and studied by Giannessi [21], of finding \(x\in K\) such that
$$ \bigl\langle A(x), y-x \bigr\rangle \notin{-}\operatorname{int} \mathbb{R}^{m}_{+},\quad \forall y\in K. $$
(4)
In the case of \(A\equiv0\) and \(g=0\), (GMVE) is a vector equilibrium problem, finding \(x \in K\) such that
$$ F(x,y)\notin{-}\operatorname{int} \mathbb{R}^{m}_{+}, \quad \forall y\in K. $$
(5)
In the case of \(A\equiv0\) and \(F=0\), (GMVE) is a vector optimization problem, finding \(x \in K\) such that
$$ g(y)-g(x)\notin{-}\operatorname{int} \mathbb{R}^{m}_{+}, \quad\forall y\in K. $$
(6)

For \(i=1,2,\ldots,m\), we denote the generalized mixed vector equilibrium problems (GMVE) associated with \(F_{i}\), \(A_{i}\) and \(g_{i}\) as \((GMVE)^{i}\), the generalized vector variational inequality problems (GVVI) associated with \(A_{i}\) and \(g_{i} \) as \((GVVI)^{i}\), and the vector variational inequality problems (VVI) associated with \(A_{i}\) as \((VVI)^{i}\), respectively. The solution sets of \((GMVE)^{i}\), \((GVVI)^{i}\), and \((VVI)^{i}\) will be denoted by \(S_{GMVE}^{i}\), \(S_{GVVI}^{i}\), and \(S_{VVI}^{i}\), respectively.

In this paper, we investigate gap functions and error bounds for (GMVE), (GVVI), and (VVI). We first recall some notation and definitions, which will be used in the sequel.

Definition 2.1

A real-valued function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is convex (resp. concave) over \(\mathbb{R}^{n}\), if
$$\begin{aligned} &f \bigl(\lambda x +(1-\lambda)y \bigr)\leq\lambda f(x) + (1-\lambda)f(y) \\ &\quad \bigl(\mbox{resp. } f \bigl(\lambda x +(1-\lambda)y \bigr)\geq\lambda f(x) + (1- \lambda)f(y) \bigr), \end{aligned}$$
for every \(x, y \in\mathbb{R}^{n}\) and \(\lambda\in[0,1]\).

Definition 2.2

A vector-valued function \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is strongly monotone over \(\mathbb{R}^{n}\) with modulus \(\kappa> 0\), if for any \(x,y \in\mathbb{R}^{n}\),
$$\bigl\langle h(y)-h(x), y-x \bigr\rangle \geq\kappa\|y-x\|^{2}. $$
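As a concrete illustration of this definition (our own numerical sketch, not part of the paper), the map \(h(x)=x^{3}+2x\) on \(\mathbb{R}\) is strongly monotone with modulus \(\kappa=2\), since \((h(y)-h(x))(y-x)=(y^{3}-x^{3})(y-x)+2(y-x)^{2}\geq2(y-x)^{2}\). The sketch below checks this inequality at random points:

```python
import numpy as np

# Hypothetical illustration of Definition 2.2: h(x) = x^3 + 2x is strongly
# monotone on R with modulus kappa = 2, because
# (h(y) - h(x))(y - x) = (y^3 - x^3)(y - x) + 2(y - x)^2 >= 2(y - x)^2.
rng = np.random.default_rng(0)
h = lambda x: x**3 + 2*x
kappa = 2.0

for x, y in rng.uniform(-5.0, 5.0, size=(10000, 2)):
    assert (h(y) - h(x)) * (y - x) >= kappa * (y - x)**2 - 1e-9
```

The same map reappears (as \(A_{2}\)) in Example 3.1 below.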

Definition 2.3

A real-valued function \(\vartheta: \mathbb{R}^{n} \rightarrow\mathbb{R}\) is said to be a scalar-valued gap function of (GMVE) (resp. (GVVI) and (VVI)), if it satisfies the following conditions:
  (i) \(\vartheta(x)\geq0\), for any \(x\in K \);

  (ii) \(\vartheta(x_{0})=0 \) if and only if \(x_{0} \in K\) is a solution of (GMVE) (resp. (GVVI) and (VVI)).

3 Main results

Notice that Sun and Chai [14] introduced a new scalar-valued gap function for (GVVI) without any scalarization approach; the gap function discussed in [14] is simpler from the computational point of view. Following the approach of Sun and Chai [14] and Yamashita et al. [18], we construct the functions \(\vartheta_{\alpha},\psi_{\alpha},\phi_{\alpha}:K\rightarrow\mathbb{R}\) for \(\alpha> 0\),
$$\begin{aligned}& \vartheta_{\alpha}(x):=\sup_{y\in K} \Bigl\{ \min _{1\leq i\leq m} \bigl\{ -F_{i}(x,y)+ \bigl\langle A_{i}(x), x-y \bigr\rangle +g_{i}(x)-g_{i}(y) \bigr\} - \alpha\varphi (x,y) \Bigr\} , \end{aligned}$$
(7)
$$\begin{aligned}& \psi_{\alpha}(x):=\sup_{y\in K} \Bigl\{ \min _{1\leq i\leq m} \bigl\{ \bigl\langle A_{i}(x), x-y \bigr\rangle +g_{i}(x)-g_{i}(y) \bigr\} -\alpha\varphi(x,y) \Bigr\} , \end{aligned}$$
(8)
and
$$ \phi_{\alpha}(x):=\sup_{y\in K} \Bigl\{ \min _{1\leq i\leq m} \bigl\langle A_{i}(x), x-y \bigr\rangle -\alpha \varphi(x,y) \Bigr\} , $$
(9)
respectively, where \(F_{i},\varphi:\mathbb{R}^{n}\times\mathbb{R}^{n}\to \mathbb{R}\), \(i=1,2,\ldots,m\), are real-valued functions. In the following, let φ be a continuously differentiable function satisfying the following property with associated constants \(\gamma, \beta>0\).
  (P) For all \(x,y \in\mathbb{R}^{n}\),
    $$ \beta\|x-y\|^{2}\leq\varphi(x,y)\leq(\gamma- \beta) \|x-y\|^{2},\quad \gamma\geq 2\beta>0. $$
    (10)
For example, let \(\kappa>0\) and let \(\varphi:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}\) be defined by
$$\varphi(x,y)=\kappa \|x-y\|^{2},\quad x,y\in\mathbb{R}^{n}. $$
Then φ satisfies condition (P), with \(\gamma=2\kappa \) and \(\beta=\kappa\).
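As a quick sanity check (our own numerical sketch, not from the paper), property (P) for this φ can be verified at random points; with \(\gamma=2\kappa\) and \(\beta=\kappa\), both inequalities in (10) hold with equality:

```python
import numpy as np

# Numerical sketch: phi(x, y) = kappa * ||x - y||^2 satisfies property (P)
# with gamma = 2 * kappa and beta = kappa (both bounds are tight here).
rng = np.random.default_rng(0)
kappa = 1.5
gamma, beta = 2 * kappa, kappa

for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    d2 = np.linalg.norm(x - y) ** 2
    phi = kappa * d2
    assert beta * d2 - 1e-12 <= phi <= (gamma - beta) * d2 + 1e-12
```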
For any \(i=1,2,\ldots,m\), we suppose that \(F_{i}\) satisfies the following conditions:
  (A1) \(F_{i}\) is convex in the second variable on \(\mathbb{R}^{n}\times\mathbb{R}^{n}\).

  (A2) For any \(x,y\in\mathbb{R}^{n}\), \(F_{i}(x,y)=0\) if and only if \(x=y\).

  (A3) For any \(x,y,z\in K\), \(F_{i}(x,y)+F_{i}(y,z)\leq F_{i}(x,z)\).

Theorem 3.1

If \(F_{i}:\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}\) is convex in the second variable and \(g_{i}\) is convex over \(\mathbb {R}^{n}\) for any \(i=1,2,\ldots,m\), then the function \(\vartheta_{\alpha}\), with \(\alpha> 0\), defined by (7) is a gap function for (GMVE).

Proof

(i) It is clear that for any \(x\in K\),
$$\vartheta_{\alpha}(x):=\sup_{y\in K} \Bigl\{ \min _{1\leq i\leq m} \bigl\{ -F_{i}(x,y)+ \bigl\langle A_{i}(x), x-y \bigr\rangle +g_{i}(x)-g_{i}(y) \bigr\} - \alpha\varphi (x,y) \Bigr\} \geq0 $$
follows simply by setting \(x=y\) in the right hand side of the expression for \(\vartheta_{\alpha}(x)\).
(ii) If there exists \(x_{0}\in K\) such that \(\vartheta_{\alpha}(x_{0})=0\), then
$$\min_{1\leq i\leq m} \bigl\{ -F_{i}(x_{0},y)+ \bigl\langle A_{i}(x_{0}), x_{0}-y \bigr\rangle +g_{i}(x_{0})-g_{i}(y) \bigr\} \leq\alpha \varphi(x_{0},y),\quad \forall y\in K. $$
For arbitrary \(x\in K\) and \(\kappa\in(0,1)\), let \(y=x+\kappa(x_{0}-x)\). Since K is convex, we get \(y\in K\) and
$$\begin{aligned} &\min_{1\leq i\leq m}\bigl\{ -F_{i} \bigl(x_{0},x+ \kappa(x_{0}-x) \bigr)+ \bigl\langle A_{i}(x_{0}), x_{0}- \bigl(x+\kappa(x_{0}-x) \bigr) \bigr\rangle +g_{i}(x_{0})-g_{i} \bigl(x+\kappa(x_{0}-x) \bigr)\bigr\} \\ &\quad\leq\alpha\varphi \bigl(x_{0},x+\kappa(x_{0}-x) \bigr). \end{aligned}$$
Since \(F_{i}\) is convex in the second variable and \(g_{i}\) is convex over \(\mathbb{R}^{n}\) for any \(i=1,2,\ldots,m\), we have
$$\begin{aligned} &\min_{1\leq i\leq m} \bigl\{ -\kappa F_{i}(x_{0},x_{0})-(1- \kappa ) F_{i}(x_{0},x)+ \bigl\langle A_{i}(x_{0}), (1-\kappa) (x_{0}-x) \bigr\rangle \\ &\qquad{}+g_{i}(x_{0})- \kappa g_{i}(x_{0})-(1-\kappa)g_{i}(x) \bigr\} \\ &\quad\leq\min_{1\leq i\leq m} \bigl\{ -F_{i} \bigl(x_{0},x+ \kappa(x_{0}-x) \bigr)+ \bigl\langle A_{i}(x_{0}), (1-\kappa) (x_{0}-x) \bigr\rangle +g_{i}(x_{0})-g_{i} \bigl(x+\kappa(x_{0}-x) \bigr) \bigr\} \\ &\quad\leq\alpha\varphi \bigl(x_{0},x+\kappa(x_{0}-x) \bigr). \end{aligned}$$
(11)
For the function \(\varphi(x,y)\), by using (10), we have
$$ \varphi \bigl(x_{0},x+\kappa(x_{0}-x) \bigr) \leq(1- \kappa)^{2}(\gamma-\beta)\|x_{0}-x\|^{2}. $$
(12)
By the property (A2) of the function \(F_{i}\), we obtain \(F_{i}(x_{0},x_{0})=0\).
Hence, from (11) and (12), we get
$$\begin{aligned} &(1-\kappa)\min_{1\leq i\leq m} \bigl\{ -F_{i}(x_{0},x)+ \bigl\langle A_{i}(x_{0}),x_{0}-x \bigr\rangle +g_{i}(x_{0})-g_{i}(x) \bigr\} \\ &\quad\leq\alpha(\gamma-\beta) (1-\kappa)^{2}\|x-x_{0} \|^{2}. \end{aligned}$$
So,
$$\min_{1\leq i\leq m} \bigl\{ -F_{i}(x_{0},x)+ \bigl\langle A_{i}(x_{0}),x_{0}-x \bigr\rangle +g_{i}(x_{0})-g_{i}(x) \bigr\} \leq\alpha(\gamma- \beta) (1-\kappa)\|x-x_{0}\|^{2}. $$
Taking the limit as \(\kappa\rightarrow1\), we obtain
$$\min_{1\leq i\leq m} \bigl\{ -F_{i}(x_{0},x)+ \bigl\langle A_{i}(x_{0}),x_{0}-x \bigr\rangle +g_{i}(x_{0})-g_{i}(x) \bigr\} \leq0. $$
Then, for any \(x\in K\), there exists \(1\leq i_{0} \leq m\) such that
$$-F_{i_{0}}(x_{0},x)+ \bigl\langle A_{i_{0}}(x_{0}),x_{0}-x \bigr\rangle +g_{i_{0}}(x_{0})-g_{i_{0}}(x)\leq0. $$
This means that
$$F(x_{0},x)+ \bigl\langle A(x_{0}), x-x_{0} \bigr\rangle +g(x)-g(x_{0})\notin-\operatorname{int} \mathbb{R}^{m}_{+}, \quad\forall x\in K. $$
Thus, \(x_{0}\in S_{GMVE}\).
Conversely, if \(x_{0}\in S_{GMVE}\), then for any \(y\in K\) there exists \(1\leq i_{0}\leq m\) (depending on y) such that
$$F_{i_{0}}(x_{0},y)+ \bigl\langle A_{i_{0}}(x_{0}), y-x_{0} \bigr\rangle +g_{i_{0}}(y)-g_{i_{0}}(x_{0}) \geq0. $$
This means that
$$\min_{1\leq i\leq m} \bigl\{ -F_{i}(x_{0},y)+ \bigl\langle A_{i}(x_{0}), x_{0}-y \bigr\rangle +g_{i}(x_{0})-g_{i}(y) \bigr\} \leq0\quad \mbox{for any }y\in K. $$
So,
$$\vartheta_{\alpha}(x_{0})\leq0. $$
Since \(\vartheta_{\alpha}(x_{0})\geq0\) by part (i), it follows that
$$\vartheta_{\alpha}(x_{0})= 0. $$
This completes the proof. □

By a similar method, we conclude the following results for (GVVI) and (VVI), respectively.

Corollary 3.1

The function \(\psi_{\alpha}\), with \(\alpha> 0\), defined by (8) is a gap function for (GVVI).

Corollary 3.2

The function \(\phi_{\alpha}\), with \(\alpha> 0\), defined by (9) is a gap function for (VVI).

Now, by using the gap function \({\vartheta_{\alpha}}(x)\), we obtain an error bound result for (GMVE).

Theorem 3.2

Assume that each \(A_{i}\) is strongly monotone over K with modulus \(\kappa_{i}> 0\), that \(F_{i}\) is convex in the second variable over \(\mathbb {R}^{n}\times\mathbb{R}^{n}\), and that \(g_{i}\) is convex over \(\mathbb{R}^{n}\) for any \(i=1,2,\ldots,m\). Further assume that \(\bigcap^{m}_{i=1}S_{GMVE}^{i}\neq \emptyset\). Moreover, let \(\kappa=\min_{1\leq i\leq m}\kappa_{i}\) and let \(\alpha> 0\) be chosen such that \(\kappa>\alpha(\gamma-\beta)\), where \(\gamma\geq2\beta>0\) are the constants associated with the function φ. Then for any \(x\in K\),
$$d(x,S_{GMVE})\leq\frac{1}{\sqrt{\kappa-\alpha(\gamma-\beta)}} \sqrt {\vartheta_{\alpha}(x)}, $$
where \(d(x,S_{GMVE})\) denotes the distance from the point x to the solution set \(S_{GMVE}\).

Proof

By (7), we get
$$\vartheta_{\alpha}(x)\geq\min_{1\leq i\leq m} \bigl\{ -F_{i}(x,y)+ \bigl\langle A_{i}(x), x-y \bigr\rangle +g_{i}(x)-g_{i}(y) \bigr\} -\alpha\varphi(x,y)\quad \mbox{for any }y \in K. $$
Since \(\bigcap^{m}_{i=1}S_{GMVE}^{i}\neq\emptyset\), all the problems \((GMVE)^{i}\) share a common solution, say \(x_{0}\in K\). Obviously, \(x_{0}\in S_{GMVE}\) and
$$\vartheta_{\alpha}(x)\geq\min_{1\leq i\leq m} \bigl\{ -F_{i}(x,x_{0})+ \bigl\langle A_{i}(x), x-x_{0} \bigr\rangle +g_{i}(x)-g_{i}(x_{0}) \bigr\} -\alpha\varphi(x,x_{0}). $$
Without loss of generality, we assume that
$$\begin{aligned} &{-}F_{1}(x,x_{0})+ \bigl\langle A_{1}(x), x-x_{0} \bigr\rangle +g_{1}(x)-g_{1}(x_{0}) \\ &\quad=\min_{1\leq i\leq m} \bigl\{ -F_{i}(x,x_{0})+ \bigl\langle A_{i}(x), x-x_{0} \bigr\rangle +g_{i}(x)-g_{i}(x_{0}) \bigr\} . \end{aligned}$$
Since \(A_{1}\) is strongly monotone with modulus \(\kappa_{1}\geq\kappa\), we obtain
$$\vartheta_{\alpha}(x)\geq-F_{1}(x,x_{0})+ \bigl\langle A_{1}(x_{0}), x-x_{0} \bigr\rangle +g_{1}(x)-g_{1}(x_{0})+\kappa\|x-x_{0} \|^{2}-\alpha\varphi(x,x_{0}). $$
It follows from the property (P) of the function φ that
$$\begin{aligned} \vartheta_{\alpha}(x)\geq{}&{-}F_{1}(x,x_{0})+ \bigl\langle A_{1}(x_{0}), x-x_{0} \bigr\rangle +g_{1}(x)-g_{1}(x_{0}) \\ &{}+\kappa\|x-x_{0} \|^{2}-\alpha(\gamma-\beta)\|x-x_{0}\|^{2}. \end{aligned}$$
(13)
By \(x_{0}\in S_{GMVE}^{1}\), we get
$$ F_{1}(x_{0},x)+ \bigl\langle A_{1}(x_{0}), x-x_{0} \bigr\rangle +g_{1}(x)-g_{1}(x_{0}) \geq0. $$
(14)
For the function \(F_{1}\), by using (A2) and (A3), we get from (13) and (14)
$$\vartheta_{\alpha}(x)=\vartheta_{\alpha}(x)+F_{1}(x,x)\geq \vartheta _{\alpha}(x)+F_{1}(x,x_{0})+F_{1}(x_{0},x) \geq \bigl[\kappa-\alpha(\gamma-\beta) \bigr]\| x-x_{0} \|^{2}. $$
Namely,
$$\vartheta_{\alpha}(x)\geq \bigl[\kappa-\alpha(\gamma-\beta) \bigr] \|x-x_{0}\|^{2}. $$
Then
$$\|x-x_{0}\|\leq\frac{1}{\sqrt{\kappa-\alpha(\gamma-\beta)}}\sqrt {\vartheta_{\alpha}(x)}, $$
which means that
$$d(x,S_{GMVE})\leq\frac{1}{\sqrt{\kappa-\alpha(\gamma-\beta)}}\sqrt {\vartheta_{\alpha}(x)}. $$
This completes the proof. □

The following example shows that the conditions of Theorem 3.2 can be satisfied.

Example 3.1

Let \(n=1\), \(m=2\), and \(K=[-1,1]\subseteq\mathbb{R}\). Define \(A_{1}, A_{2}, g_{1}, g_{2}: \mathbb{R}\rightarrow\mathbb{R}\) and \(F_{1}, F_{2}: \mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}\) by
$$\begin{aligned}& A_{1}(x)= x,\qquad A_{2}(x)=x^{3} + 2x,\qquad g_{1}(x)=2x^{2},\qquad g_{2}(x)=3x^{4}, \\& F_{1}(x,y)=-x^{2}+y^{2},\qquad F_{2}(x,y)=-3x^{4}+3y^{4}. \end{aligned}$$
Then
$$F(x,y)= \bigl(-x^{2}+y^{2},-3x^{4}+3y^{4} \bigr),\qquad A(x)= \bigl(x,x^{3}+2x \bigr),\qquad g(x)= \bigl(2x^{2}, 3x^{4} \bigr). $$
Obviously, \(F_{1}(x,y)\) and \(F_{2}(x,y)\) are convex in the second variable. \(A_{1}\) and \(A_{2}\) are strongly monotone over K with moduli \(\kappa_{1}=1\) and \(\kappa_{2}=2\), respectively. Moreover, \(g_{1}\) and \(g_{2}\) are convex over \(\mathbb{R}\). On the other hand, by direct calculation, we have
$$\bigcap^{2}_{i=1}S_{GMVE}^{i}= \{0\}. $$
Thus, the conditions of Theorem 3.2 are satisfied.
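The error bound of Theorem 3.2 can be illustrated numerically on this example (a brute-force sketch of ours, not part of the paper). With \(\varphi(x,y)=(x-y)^{2}\) we have \(\gamma=2\), \(\beta=1\), \(\kappa=\min(\kappa_{1},\kappa_{2})=1\), and we take \(\alpha=0.5\) so that \(\kappa-\alpha(\gamma-\beta)=0.5>0\). The gap function (7) is approximated by maximizing over a grid on \(K=[-1,1]\):

```python
import numpy as np

# Grid-based check of Theorem 3.2 on the data of Example 3.1 (hypothetical
# numerical sketch). Here phi(x, y) = (x - y)^2, gamma = 2, beta = 1,
# kappa = 1, alpha = 0.5, so coeff = kappa - alpha * (gamma - beta) = 0.5.
A = [lambda x: x, lambda x: x**3 + 2*x]
F = [lambda x, y: -x**2 + y**2, lambda x, y: -3*x**4 + 3*y**4]
g = [lambda x: 2*x**2, lambda x: 3*x**4]

alpha, coeff = 0.5, 0.5
ys = np.linspace(-1.0, 1.0, 2001)   # grid approximation of K = [-1, 1]

def theta(x):
    """Grid approximation of the gap function (7) at x."""
    inner = np.minimum.reduce([
        -Fi(x, ys) + Ai(x) * (x - ys) + gi(x) - gi(ys)
        for Fi, Ai, gi in zip(F, A, g)
    ])
    return float(np.max(inner - alpha * (x - ys) ** 2))

# S_GMVE = {0}, so d(x, S_GMVE) = |x|; verify the bound of Theorem 3.2.
for x in np.linspace(-1.0, 1.0, 201):
    assert abs(x) <= np.sqrt(theta(x) / coeff) + 1e-8
```

The grid maximum only underestimates the supremum in (7), so the assertions passing is a conservative confirmation of the bound on this example.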

Similarly, by using gap functions \(\psi_{\alpha}\) and \(\phi_{\alpha}\), we can also obtain error bound results for (GVVI) and (VVI), respectively.

Corollary 3.3

Assume that each \(A_{i}\) is strongly monotone over K with modulus \(\kappa_{i}> 0\) and that \(g_{i}\) is convex over \(\mathbb{R}^{n}\) for any \(i=1,2,\ldots,m\). Further assume that \(\bigcap^{m}_{i=1}S_{ GVVI}^{i}\neq\emptyset\). Moreover, let \(\kappa=\min_{1\leq i\leq m}\kappa_{i}\) and let \(\alpha> 0\) be chosen such that \(\kappa>\alpha(\gamma-\beta)\), where \(\gamma\geq2\beta>0\) are the constants associated with the function φ. Then, for any \(x\in K\),
$$d(x,S_{GVVI})\leq\frac{1}{\sqrt{\kappa-\alpha(\gamma-\beta)}}\sqrt{\psi _{\alpha}(x)}, $$
where \(d(x,S_{GVVI})\) denotes the distance from the point x to the solution set \(S_{GVVI}\).

Corollary 3.4

Assume that each \(A_{i}\) is strongly monotone over K with modulus \(\kappa_{i}> 0\). Further assume that \(\bigcap^{m}_{i=1}S_{VVI}^{i}\neq\emptyset\). Moreover, let \(\kappa=\min_{1\leq i\leq m}\kappa_{i}\) and let \(\alpha> 0\) be chosen such that \(\kappa>\alpha(\gamma-\beta)\), where \(\gamma\geq2\beta>0\) are the constants associated with the function φ. Then, for any \(x\in K\),
$$d(x,S_{VVI})\leq\frac{1}{\sqrt{\kappa-\alpha(\gamma-\beta)}}\sqrt{\phi _{\alpha}(x)}, $$
where \(d(x,S_{VVI})\) denotes the distance from the point x to the solution set \(S_{VVI}\).

Remark 3.1

(i) In [14], there are some mistakes in the proof of Theorem 3.2, which lead to the requirement of Lipschitz properties of \(g_{i}\), \(i=1,2,\ldots,m\). Hence, we give a modified error bound for (GVVI) in Corollary 3.3 without the Lipschitz assumption.

(ii) In [12], Charitha and Dutta established error bounds for (VVI) by using the projection operator method under strong monotonicity assumptions, whereas our method seems simpler from the computational point of view, since it involves no scalarization parameters.

(iii) Under the conditions of Theorem 3.2, the strong assumption \(\bigcap^{m}_{i=1}S_{GMVE}^{i}\neq\emptyset\) implies that \(S_{GMVE}\) is a singleton rather than a general set. In fact, by \(\bigcap^{m}_{i=1}S_{GMVE}^{i}\neq\emptyset\), there exists \(x\in K\) such that \(x\in\bigcap^{m}_{i=1}S_{GMVE}^{i}\), namely, for every \(i=1,2,\ldots, m\),
$$ F_{i}(x,y)+ \bigl\langle A_{i}(x), y-x \bigr\rangle +g_{i}(y)-g_{i}(x)\geq0,\quad \forall y\in K. $$
(15)
It is clear that \(x\in S_{GMVE}\). Suppose that \(S_{GMVE}\) is not a singleton; then there exists \(x'\in S_{GMVE}\) with \(x'\neq x\), that is, for each \(y\in K\) there exists \(j\in\{1,2,\ldots, m\}\) (depending on y) such that
$$ F_{j} \bigl(x',y \bigr)+ \bigl\langle A_{j} \bigl(x' \bigr), y-x' \bigr\rangle +g_{j}(y)-g_{j} \bigl(x' \bigr)\geq0. $$
(16)
Since (15) holds for every index, in particular for the index j obtained from (16) with \(y=x\), taking \(y=x'\) in (15) gives
$$ F_{j} \bigl(x,x' \bigr)+ \bigl\langle A_{j}(x), x'-x \bigr\rangle +g_{j} \bigl(x' \bigr)-g_{j}(x)\geq0. $$
(17)
From (16) with \(y=x\), we have
$$ F_{j} \bigl(x',x \bigr)+ \bigl\langle A_{j} \bigl(x' \bigr), x-x' \bigr\rangle +g_{j}(x)-g_{j} \bigl(x' \bigr)\geq0. $$
(18)
According to (17) and (18), we get
$$ F_{j} \bigl(x,x' \bigr)+F_{j} \bigl(x',x \bigr)+ \bigl\langle A_{j}(x), x'-x \bigr\rangle + \bigl\langle A_{j} \bigl(x' \bigr), x-x' \bigr\rangle \geq0. $$
(19)
However, by the properties (A2) and (A3) of the function \(F_{j}\),
$$ F_{j} \bigl(x,x' \bigr)+F_{j} \bigl(x',x \bigr)\leq0. $$
(20)
As \(A_{j}\) is strongly monotone, we have
$$ \bigl\langle A_{j}(x)-A_{j} \bigl(x' \bigr), x-x' \bigr\rangle \geq\kappa \bigl\| x-x'\bigr\| ^{2}>0. $$
(21)
By combining (20) and (21), we have
$$ F_{j} \bigl(x,x' \bigr)+F_{j} \bigl(x',x \bigr)+ \bigl\langle A_{j}(x)-A_{j} \bigl(x' \bigr), x'-x \bigr\rangle < 0. $$
(22)
This, however, contradicts (19).

Now we ask: how can one establish error bounds for \(S_{GMVE}\) in terms of the gap function \(\vartheta_{\alpha}\) under mild assumptions, such that \(S_{GMVE}\) need not be a singleton in general? This problem may be interesting and valuable in vector optimization.

Declarations

Acknowledgements

The authors would like to thank the associated editor and the two anonymous referees for their valuable comments and suggestions, which have helped to improve the paper. This work was partially supported by the National Natural Science Foundation of China (Grant number: 11401487), the Fundamental Research Funds for the Central Universities (Grant numbers: SWU113037, XDJK2014C073), and the Scientific Research Fund of Sichuan Provincial Science and Technology Department (Grant number: 2015JY0237).

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
The School of Economic Mathematics, Southwestern University of Finance and Economics
(2)
School of Mathematics and Statistics, Southwest University
(3)
Education Committee of Banan District

References

  1. Blum, E, Oettli, W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123-145 (1994)
  2. Agarwal, RP, Cho, YJ, Petrot, N: Systems of general nonlinear set-valued mixed variational inequalities problems in Hilbert spaces. Fixed Point Theory Appl. 2011, Article ID 31 (2011). doi:10.1186/1687-1812-2011-31
  3. Saewan, S, Kumam, P: The shrinking projection method for solving generalized equilibrium problems and common fixed points for asymptotically quasi-ϕ-nonexpansive mappings. Fixed Point Theory Appl. 2011, Article ID 9 (2011). doi:10.1186/1687-1812-2011-9
  4. Markshoe, P, Wangkeeree, R, Kamraksa, U: The shrinking projection method for generalized mixed equilibrium problems and fixed point problems in Banach spaces. J. Nonlinear Anal. Optim., Theory Appl. 1, 111-129 (2010)
  5. Kumam, W, Jaiboon, C, Kumam, P: A shrinking projection method for generalized mixed equilibrium problems, variational inclusion problems and a finite family quasi-nonexpansive mappings. J. Inequal. Appl. 2010, Article ID 458274 (2010). doi:10.1155/2010/458274
  6. Zhang, S: Generalized mixed equilibrium problem in Banach spaces. Appl. Math. Mech. 30, 1105-1112 (2009)
  7. Chen, J, Cho, YJ, Wan, Z: Shrinking projection algorithms for equilibrium problems with a bifunction defined on the dual space of a Banach space. Fixed Point Theory Appl. 2011, Article ID 91 (2011). doi:10.1186/1687-1812-2011-91
  8. Cholamjiak, W, Cholamjiak, P, Suantai, S: Convergence of iterative schemes for solving fixed point problems for multi-valued nonself mappings and equilibrium problems. J. Nonlinear Sci. Appl. 8, 1245-1256 (2015)
  9. Khan, SA, Chen, JW: Gap functions and error bounds for generalized mixed vector equilibrium problems. J. Optim. Theory Appl. 166, 767-776 (2015)
  10. Chang, SS, Joseph Lee, HW, Chan, CK: A new hybrid method for solving generalized equilibrium problem variational inequality and common fixed point in Banach spaces with applications. Nonlinear Anal. 73, 2260-2270 (2010)
  11. Facchinei, F, Pang, JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, Berlin (2003)
  12. Charitha, C, Dutta, J: Regularized gap functions and error bounds for vector variational inequalities. Pac. J. Optim. 6, 497-510 (2010)
  13. Li, J, Mastroeni, G: Vector variational inequalities involving set-valued mappings via scalarization with applications to error bounds for gap functions. J. Optim. Theory Appl. 145, 355-372 (2010)
  14. Sun, XK, Chai, Y: Gap functions and error bounds for generalized vector variational inequalities. Optim. Lett. 8, 1663-1673 (2014)
  15. Xu, YD, Li, SJ: Gap functions and error bounds for weak vector variational inequalities. Optimization 63, 1339-1352 (2014)
  16. Fukushima, M: Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems. Math. Program. 53, 99-110 (1992)
  17. Ng, KF, Tan, LL: Error bounds of regularized gap functions for nonsmooth variational inequality problems. Math. Program. 110, 405-429 (2007)
  18. Yamashita, N, Taji, K, Fukushima, M: Unconstrained optimization reformulations of variational inequality problems. J. Optim. Theory Appl. 92, 439-456 (1997)
  19. Li, MH: Error bounds of regularized gap functions for weak vector variational inequality problems. J. Inequal. Appl. 2014, Article ID 331 (2014). doi:10.1186/1029-242X-2014-331
  20. Konnov, IV: A scalarization approach for vector variational inequalities with applications. J. Glob. Optim. 32, 517-527 (2005)
  21. Giannessi, F: Theorems of the alternative, quadratic programs and complementarity problems. In: Cottle, RW, Giannessi, F, Lions, JL (eds.) Variational Inequalities and Complementarity Problems, pp. 151-186. Wiley, New York (1980)

Copyright

© Zhang et al. 2015