Scalar gap functions and error bounds for generalized mixed vector equilibrium problems with applications
Fixed Point Theory and Applications volume 2015, Article number: 169 (2015)
Abstract
It is well known that equilibrium problems are very important mathematical models and are closely related to fixed point problems, variational inequalities, and Nash equilibrium problems. Gap functions and error bounds, which play a vital role in algorithm design, are two much-addressed topics of vector equilibrium problems. This paper is devoted to studying scalar-valued gap functions and error bounds for the generalized mixed vector equilibrium problem (GMVE). First, a scalar gap function for (GMVE) is proposed without any scalarization methods, and then error bounds for (GMVE) are established in terms of the gap function. As applications, error bounds for generalized vector variational inequalities and vector variational inequalities are derived, respectively. The main results obtained are new and improve corresponding results of Charitha and Dutta (Pac. J. Optim. 6:497-510, 2010) and Sun and Chai (Optim. Lett. 8:1663-1673, 2014).
Introduction
Equilibrium problems, which were first studied by Blum and Oettli [1], provide a unified framework for fixed point problems, variational inequalities, complementarity problems, and optimization problems. It is well known that the vector equilibrium problem is a vital extension of equilibrium problems, which contains vector variational inequalities, vector complementarity problems, and vector optimization problems as special cases. In the past decades, various kinds of vector equilibrium problems and their applications have been introduced and studied; see [2–9] and the references therein. Recently, Chang et al. [10] and Kumam et al. [5] studied a generalized mixed equilibrium problem.
It is very general in the sense that it includes fixed point problems, optimization problems, variational inequality problems, Nash equilibrium problems, and equilibrium problems as special cases.
Error bounds, which play a critical role in algorithm design, can be used to measure how far an approximate solution is from the solution set and to analyze the convergence rates of various methods. Recently, various kinds of error bounds have been presented for variational inequalities in [11–18]. Error bound results have been established for the weak vector variational inequality (WVVI) in [12–15, 19]. Xu and Li [15] obtained error bounds for a weak vector variational inequality with cone constraints by a method of image space analysis. By using a scalarization approach of Konnov [20], Li and Mastroeni [13] established error bounds for two kinds of (WVVI) with set-valued mappings. By means of a regularized gap function and a D-gap function, Charitha and Dutta [12] used a projection operator method to obtain error bounds for (WVVI). Sun and Chai [14] studied some error bounds for generalized vector variational inequalities by virtue of regularized gap functions. Very recently, a global error bound for a weak vector variational inequality was established by the nonlinear scalarization method in Li [19].
However, to the best of our knowledge, error bounds for the generalized mixed vector equilibrium problem (GMVE) have not yet been investigated. In this paper, motivated by ideas in Sun and Chai [14] and Yamashita et al. [18], we introduce a scalar gap function for (GMVE) and then present an error bound for (GMVE). As applications of this error bound, we also obtain error bounds for (GVVI) and (VVI), respectively.
This paper is organized as follows: In Section 2, we first recall some basic definitions. In Section 3, we introduce scalar gap functions for (GMVE), (GVVI), and (VVI). By using these gap functions, we obtain some error bound results for (GMVE), (GVVI), and (VVI), respectively.
Mathematical preliminaries
Throughout this paper, let \(\mathbb{R}^{n}\) be the n-dimensional Euclidean space and \(\mathbb{R}^{n}_{+}=\{x=(x_{1},x_{2},\ldots,x_{n}):x_{j}\geq0, j=1,2,\ldots,n\}\). The norms and inner products of all finite dimensional spaces are denoted by \(\|\cdot\|\) and \(\langle\cdot,\cdot\rangle\), respectively. Let \(K\subseteq\mathbb{R}^{n}\) be a nonempty closed convex set, let \(A_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) (\(i=1,2,\ldots,m\)) be vector-valued mappings, and let \(F_{i}:\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}\) and \(g_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) (\(i=1,2,\ldots,m\)) be real-valued functions. For abbreviation, we put \(A=(A_{1},\ldots,A_{m})\), \(F=(F_{1},\ldots,F_{m})\), \(g=(g_{1},\ldots,g_{m})\),
and for any \(x,v\in\mathbb{R}^{n}\)
$$\bigl\langle A(x),v\bigr\rangle=\bigl(\bigl\langle A_{1}(x),v\bigr\rangle,\ldots,\bigl\langle A_{m}(x),v\bigr\rangle\bigr).$$
In this paper, we consider the generalized mixed vector equilibrium problem (GMVE) of finding \(x\in K\) such that
$$\bigl(\bigl\langle A_{1}(x),y-x\bigr\rangle+F_{1}(x,y)+g_{1}(y)-g_{1}(x),\ldots,\bigl\langle A_{m}(x),y-x\bigr\rangle+F_{m}(x,y)+g_{m}(y)-g_{m}(x)\bigr)\notin-\operatorname{int}\mathbb{R}^{m}_{+},\quad\forall y\in K.$$
Denote by \(S_{GMVE}\) the solution set of (GMVE).
If \(m=1\), our problem is to find \(x \in K\) such that
$$\bigl\langle A(x),y-x\bigr\rangle+F(x,y)+g(y)-g(x)\geq0,\quad\forall y\in K.$$
Then this problem reduces to a generalized mixed equilibrium problem [5].
If \(F=0\), our problem is to find \(x \in K\) such that
$$\bigl(\bigl\langle A_{1}(x),y-x\bigr\rangle+g_{1}(y)-g_{1}(x),\ldots,\bigl\langle A_{m}(x),y-x\bigr\rangle+g_{m}(y)-g_{m}(x)\bigr)\notin-\operatorname{int}\mathbb{R}^{m}_{+},\quad\forall y\in K.$$
Then this problem reduces to a generalized vector variational inequality problem (GVVI) [14].
In the case of \(F=0\) and \(g=0\), (GMVE) is a vector variational inequality problem (VVI), introduced and studied by Giannessi [21]: find \(x\in K\) such that
$$\bigl(\bigl\langle A_{1}(x),y-x\bigr\rangle,\ldots,\bigl\langle A_{m}(x),y-x\bigr\rangle\bigr)\notin-\operatorname{int}\mathbb{R}^{m}_{+},\quad\forall y\in K.$$
In the case of \(A\equiv0\) and \(g=0\), (GMVE) is a vector equilibrium problem: find \(x \in K\) such that
$$\bigl(F_{1}(x,y),\ldots,F_{m}(x,y)\bigr)\notin-\operatorname{int}\mathbb{R}^{m}_{+},\quad\forall y\in K.$$
In the case of \(A\equiv0\) and \(F=0\), (GMVE) is a vector optimization problem: find \(x \in K\) such that
$$\bigl(g_{1}(y)-g_{1}(x),\ldots,g_{m}(y)-g_{m}(x)\bigr)\notin-\operatorname{int}\mathbb{R}^{m}_{+},\quad\forall y\in K.$$
For \(i=1,2,\ldots,m\), we denote the generalized mixed vector equilibrium problems (GMVE) associated with \(F_{i}\), \(A_{i}\) and \(g_{i}\) as \((GMVE)^{i}\), the generalized vector variational inequality problems (GVVI) associated with \(A_{i}\) and \(g_{i} \) as \((GVVI)^{i}\), and the vector variational inequality problems (VVI) associated with \(A_{i}\) as \((VVI)^{i}\), respectively. The solution sets of \((GMVE)^{i}\), \((GVVI)^{i}\), and \((VVI)^{i}\) will be denoted by \(S_{GMVE}^{i}\), \(S_{GVVI}^{i}\), and \(S_{VVI}^{i}\), respectively.
In the paper, we intend to investigate gap functions and error bounds of (GMVE), (GVVI), and (VVI). We shall recall some notations and definitions, which will be used in the sequel.
Definition 2.1
A real-valued function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is convex (resp. concave) over \(\mathbb{R}^{n}\) if
$$f\bigl(\lambda x+(1-\lambda)y\bigr)\leq\lambda f(x)+(1-\lambda)f(y)\quad\bigl(\text{resp. } f\bigl(\lambda x+(1-\lambda)y\bigr)\geq\lambda f(x)+(1-\lambda)f(y)\bigr)$$
for every \(x, y \in\mathbb{R}^{n}\) and \(\lambda\in[0,1]\).
Definition 2.2
A vector-valued function \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is strongly monotone over \(\mathbb{R}^{n}\) with modulus \(\kappa> 0\) if, for any \(x,y \in\mathbb{R}^{n}\),
$$\bigl\langle h(x)-h(y),x-y\bigr\rangle\geq\kappa\|x-y\|^{2}.$$
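As a concrete instance (our illustration, not taken from the paper), an affine map \(h(x)=Mx\) with a symmetric positive definite matrix M is strongly monotone with modulus equal to the smallest eigenvalue of M, since \(\langle h(x)-h(y),x-y\rangle=(x-y)^{\top}M(x-y)\geq\lambda_{\min}(M)\|x-y\|^{2}\). A quick numerical sketch of this check:

```python
import numpy as np

M = np.array([[2.0, 0.0],
              [0.0, 3.0]])           # symmetric positive definite
h = lambda x: M @ x
kappa = np.linalg.eigvalsh(M).min()  # modulus: smallest eigenvalue, here 2.0

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    # <h(x) - h(y), x - y> >= kappa * ||x - y||^2
    assert (h(x) - h(y)) @ (x - y) >= kappa * np.linalg.norm(x - y) ** 2 - 1e-9
```

Such a sampled check is of course no proof; here the inequality holds exactly by the eigenvalue bound.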
Definition 2.3
A real-valued function \(\vartheta: \mathbb{R}^{n} \rightarrow\mathbb{R}\) is said to be a scalar-valued gap function of (GMVE) (resp. (GVVI) and (VVI)) if it satisfies the following conditions:

(i)
\(\vartheta(x)\geq0\) for any \(x\in K\);

(ii)
\(\vartheta(x_{0})=0\) if and only if \(x_{0} \in K\) is a solution of (GMVE) (resp. (GVVI) and (VVI)).
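Conditions (i) and (ii) can be illustrated on a simple scalar variational inequality. The following sketch is purely illustrative and hypothetical — it is not the gap function \(\vartheta_{\alpha}\) constructed below, but the classical regularized gap function of Fukushima [16] for the problem of finding \(x\in K\) with \(\langle A(x),y-x\rangle\geq0\) for all \(y\in K\), with the toy data \(K=[-1,1]\), \(A(x)=x\), whose unique solution is \(x=0\):

```python
import numpy as np

def regularized_gap(x, A, alpha=1.0, lo=-1.0, hi=1.0):
    # Fukushima's regularized gap function on K = [lo, hi]:
    #   theta(x) = max_{y in K} { A(x)*(x - y) - (alpha/2)*(x - y)**2 },
    # whose inner maximum is attained at y* = proj_K(x - A(x)/alpha).
    y_star = np.clip(x - A(x) / alpha, lo, hi)
    return A(x) * (x - y_star) - 0.5 * alpha * (x - y_star) ** 2

A = lambda x: x  # toy operator; the VI on K = [-1, 1] is solved only by x = 0

# (i) theta is nonnegative on K; (ii) theta vanishes exactly at the solution.
assert all(regularized_gap(x, A) >= 0.0 for x in np.linspace(-1.0, 1.0, 21))
assert regularized_gap(0.0, A) == 0.0
assert regularized_gap(0.5, A) > 0.0
```

Here \(y^{*}\) has a closed form because K is an interval; for a general closed convex K one would need a projection subroutine.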
Main results
Notice that Sun and Chai [14] introduced a new scalar-valued gap function of (GVVI) without any scalarization approach; the gap function discussed in [14] is simpler from a computational point of view. Following the approach of Sun and Chai [14] and Yamashita et al. [18], we construct the functions \(\vartheta_{\alpha},\psi_{\alpha},\phi_{\alpha}:K\rightarrow\mathbb{R}\) for \(\alpha> 0\),
and
respectively, where \(F_{i},\varphi:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}\), \(i=1,2,\ldots,m\), are real-valued functions. In what follows, let φ be a continuously differentiable function which has the following property with the associated constants \(\gamma, \beta>0\).

(P)
For all \(x,y \in\mathbb{R}^{n}\),
$$ \beta\|x-y\|^{2}\leq\varphi(x,y)\leq(\gamma-\beta)\|x-y\|^{2},\quad \gamma\geq 2\beta>0. $$ (10)
For example, let \(\kappa>0\) and let \(\varphi:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}\) be defined by
$$\varphi(x,y)=\kappa\|x-y\|^{2}.$$
Then φ satisfies condition (P) with \(\gamma=2\kappa\) and \(\beta=\kappa\).
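This choice can be checked numerically. The sketch below assumes \(\varphi(x,y)=\kappa\|x-y\|^{2}\); with \(\gamma=2\kappa\) and \(\beta=\kappa\) we have \(\gamma-\beta=\kappa\), so both bounds in (10) hold with equality:

```python
import numpy as np

kappa = 3.0
gamma, beta = 2.0 * kappa, kappa                  # constants claimed for this phi
phi = lambda x, y: kappa * np.linalg.norm(x - y) ** 2

# property (P): beta*||x-y||^2 <= phi(x,y) <= (gamma-beta)*||x-y||^2;
# here gamma - beta = kappa, so both inequalities are in fact equalities.
rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    d2 = np.linalg.norm(x - y) ** 2
    assert beta * d2 - 1e-9 <= phi(x, y) <= (gamma - beta) * d2 + 1e-9
assert gamma >= 2.0 * beta > 0.0
```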
For any \(i=1,2,\ldots,m\), we suppose that \(F_{i}\) satisfies the following conditions:

(A1)
\(F_{i}\) is convex in the second variable on \(\mathbb{R}^{n}\times\mathbb{R}^{n}\).

(A2)
For any \(x,y\in\mathbb{R}^{n}\), \(F_{i}(x,y)=0\) if and only if \(x=y\).

(A3)
For any \(x,y,z\in K\), \(F_{i}(x,y)+F_{i}(y,z)\leq F_{i}(x,z)\).
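Conditions (A1)-(A3) are not vacuous. For example (our illustration, not from the paper), in the one-dimensional case the bifunction \(F(x,y)=f(y)-f(x)\) with \(f(t)=e^{t}\) satisfies all three: f is convex, so F is convex in the second variable (A1); f is strictly increasing, so \(F(x,y)=0\) iff \(x=y\) (A2); and F telescopes, so (A3) holds with equality. A numerical spot check:

```python
import math
import random

f = math.exp                  # convex and strictly increasing
F = lambda x, y: f(y) - f(x)  # hypothetical bifunction F(x, y) = f(y) - f(x)

random.seed(0)
pts = [random.uniform(-2.0, 2.0) for _ in range(20)]
for x in pts:
    for y in pts:
        for z in pts:
            # (A3): F(x, y) + F(y, z) <= F(x, z); here it telescopes exactly
            assert F(x, y) + F(y, z) <= F(x, z) + 1e-9
for x in pts:
    assert F(x, x) == 0.0     # (A2), the "if" direction, at sampled points
```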
Theorem 3.1
If \(F_{i}:\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}\) is convex in the second variable and \(g_{i}\) is convex over \(\mathbb{R}^{n}\) for any \(i=1,2,\ldots,m\), then the function \(\vartheta_{\alpha}\), with \(\alpha> 0\), defined by (7) is a gap function for (GMVE).
Proof
(i) It is clear that \(\vartheta_{\alpha}(x)\geq0\) for any \(x\in K\); this follows simply by setting \(y=x\) in the right-hand side of the expression for \(\vartheta_{\alpha}(x)\).
(ii) If there exists \(x_{0}\in K\) such that \(\vartheta_{\alpha}(x_{0})=0\), then
For arbitrary \(x\in K\) and \(\kappa\in(0,1)\), let \(y=x+\kappa(x_{0}-x)\). Since K is convex, we get \(y\in K\) and
Since \(F_{i}\) is convex in the second variable over \(\mathbb{R}^{n}\times\mathbb{R}^{n}\) and \(g_{i}\) is convex over \(\mathbb{R}^{n}\) for any \(i=1,2,\ldots,m\), we have
For the function \(\varphi(x,y)\), by using (10), we have
By the property (A2) of the function \(F_{i}\), we obtain \(F_{i}(x_{0},x_{0})=0\).
Hence, from (11) and (12), we get
So,
Taking the limit as \(\kappa\rightarrow1\), we obtain
Then, for any \(x\in K\), there exists \(1\leq i_{0} \leq m\) such that
This means that
Thus, \(x_{0}\in S_{GMVE}\).
Conversely, if \(x_{0}\in S_{GMVE}\), then there exists \(1\leq i_{0}\leq m\) such that
This means that
So,
Since \(\vartheta_{\alpha}(x_{0})\geq0\) for any \(x\in K\),
This completes the proof. □
By a similar method, we obtain the following results for (GVVI) and (VVI), respectively.
Corollary 3.1
The function \(\psi_{\alpha}\), with \(\alpha> 0\), defined by (8) is a gap function for (GVVI).
Corollary 3.2
The function \(\phi_{\alpha}\), with \(\alpha> 0\), defined by (9) is a gap function for (VVI).
Now, by using the gap function \({\vartheta_{\alpha}}(x)\), we obtain an error bound result for (GMVE).
Theorem 3.2
Assume that each \(A_{i}\) is strongly monotone over K with modulus \(\kappa_{i}> 0\), that \(F_{i}\) is convex in the second variable over \(\mathbb{R}^{n}\times\mathbb{R}^{n}\), and that \(g_{i}\) is convex over \(\mathbb{R}^{n}\) for any \(i=1,2,\ldots,m\). Further assume that \(\bigcap^{m}_{i=1}S_{GMVE}^{i}\neq \emptyset\). Moreover, let \(\kappa=\min_{1\leq i\leq m}\kappa_{i}\) and let \(\alpha> 0\) be chosen such that \(\kappa>\alpha(\gamma-\beta)\), where \(\gamma\geq2\beta>0\) are the constants associated with the function φ. Then for any \(x\in K\),
where \(d(x,S_{GMVE})\) denotes the distance from the point x to the solution set \(S_{GMVE}\).
Proof
By (7), we get
Since \(\bigcap^{m}_{i=1}S_{GMVE}^{i}\neq\emptyset\), all the problems \((GMVE)^{i}\) share a common solution, which we denote by \(x_{0}\in K\). Obviously, \(x_{0}\in S_{GMVE}\) and
Without loss of generality, we assume that
Since \(A_{1}\) is strongly monotone, we obtain
It follows from the property (P) of the function φ that
By \(x_{0}\in S_{GMVE}^{1}\), we get
For the function \(F_{1}\), by using (A3), we get from (13) and (14)
Namely,
Then
which means that
This completes the proof. □
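To make the shape of such a bound concrete, consider again a hypothetical scalar example of ours, not from the paper: for the VI with \(A(x)=x\) on \(K=[-1,1]\), the solution set is \(S=\{0\}\), and the regularized gap function with \(\alpha=1\) reduces to \(\theta(x)=x^{2}/2\) on K, so \(d(x,S)=|x|\leq\sqrt{2}\sqrt{\theta(x)}\) — an error bound of exactly the type established above, \(d(x,S)\leq C\sqrt{\vartheta_{\alpha}(x)}\):

```python
import math

# Toy scalar VI: A(x) = x on K = [-1, 1]; solution set S = {0}.
# With alpha = 1 the inner maximizer y* = proj_K(x - A(x)) = 0 for every
# x in K, so the regularized gap function reduces to theta(x) = x**2 / 2.
theta = lambda x: x * x / 2.0
dist_to_S = abs
C = math.sqrt(2.0)   # error-bound constant for this toy problem

xs = [i / 20.0 for i in range(-20, 21)]
assert all(dist_to_S(x) <= C * math.sqrt(theta(x)) + 1e-12 for x in xs)
```

In this toy case the bound is tight: \(d(x,S)=\sqrt{2}\sqrt{\theta(x)}\) holds with equality for every \(x\in K\).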
The following example shows that, in general, the conditions of Theorem 3.2 can be achieved.
Example 3.1
Let \(n=1\), \(m=2\), and \(K=[-1,1]\subseteq\mathbb{R}\). Define \(A_{1}, A_{2}, g_{1}, g_{2}: \mathbb{R}\rightarrow\mathbb{R}\) and \(F_{1}, F_{2}: \mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}\) by
Then
Obviously, \(F_{1}(x,y)\) and \(F_{2}(x,y)\) are convex in the second variable, \(A_{1}\) and \(A_{2}\) are strongly monotone over K with moduli \(\kappa_{1}=1\) and \(\kappa_{2}=2\), respectively, and \(g_{1}\) and \(g_{2}\) are convex over \(\mathbb{R}\). On the other hand, by direct calculation, we have
Thus, the conditions of Theorem 3.2 are satisfied.
Similarly, by using gap functions \(\psi_{\alpha}\) and \(\phi_{\alpha}\), we can also obtain error bound results for (GVVI) and (VVI), respectively.
Corollary 3.3
Assume that each \(A_{i}\) is strongly monotone over K with modulus \(\kappa_{i}> 0\) and that \(g_{i}\) is convex over \(\mathbb{R}^{n}\) for any \(i=1,2,\ldots,m\). Further assume that \(\bigcap^{m}_{i=1}S_{GVVI}^{i}\neq\emptyset\). Moreover, let \(\kappa=\min_{1\leq i\leq m}\kappa_{i}\) and let \(\alpha> 0\) be chosen such that \(\kappa>\alpha(\gamma-\beta)\), where \(\gamma\geq2\beta>0\) are the constants associated with the function φ. Then, for any \(x\in K\),
where \(d(x,S_{GVVI})\) denotes the distance from the point x to the solution set \(S_{GVVI}\).
Corollary 3.4
Assume that each \(A_{i}\) is strongly monotone over K with modulus \(\kappa_{i}> 0\). Further assume that \(\bigcap^{m}_{i=1}S_{VVI}^{i}\neq\emptyset\). Moreover, let \(\kappa=\min_{1\leq i\leq m}\kappa_{i}\) and let \(\alpha> 0\) be chosen such that \(\kappa>\alpha(\gamma-\beta)\), where \(\gamma\geq2\beta>0\) are the constants associated with the function φ. Then, for any \(x\in K\),
where \(d(x,S_{VVI})\) denotes the distance from the point x to the solution set \(S_{VVI}\).
Remark 3.1
(i) In [14], there are some mistakes in the proof of Theorem 3.2, which lead to the requirement that \(g_{i}\), \(i=1,2,\ldots,m\), be Lipschitz. Hence, we give the corrected error bound for (GVVI) in Corollary 3.3 without any Lipschitz assumption.
(ii) In [12], Charitha and Dutta established error bounds for (VVI) by the projection operator method under strong monotonicity assumptions, whereas our method appears simpler from a computational point of view, since it involves no scalarization parameters.
(iii) Under the conditions of Theorem 3.2, the strong assumption that \(\bigcap^{m}_{i=1}S_{GMVE}^{i}\neq\emptyset\) implies that \(S_{GMVE}\) is a singleton rather than a general set. In fact, since \(\bigcap^{m}_{i=1}S_{GMVE}^{i}\neq\emptyset\), there exists \(x\in K\) such that \(x\in\bigcap^{m}_{i=1}S_{GMVE}^{i}\); namely, for every \(i=1,2,\ldots, m\),
It is clear that \(x\in S_{GMVE}\). If \(S_{GMVE}\) is not a singleton set, there exists \(x'\in S_{GMVE}\) with \(x'\neq x\). Therefore, there exists \(j\in\{1,2,\ldots, m\}\) such that
Thus, from (15), we have
From (16), we have
According to (17) and (18), we get
However, by the properties (A2) and (A3) of the function \(F_{j}\),
As \(A_{j}\) is strongly monotone, we have
By combining (20) and (21), we have
This, however, contradicts (18).
Now we ask: how can one establish error bounds for \(S_{GMVE}\) in terms of the gap function \(\vartheta_{\alpha}\), under mild assumptions, such that \(S_{GMVE}\) need not be a singleton set in general? This problem may be interesting and valuable in vector optimization.
References
 1.
Blum, E, Oettli, W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123-145 (1994)
 2.
Agarwal, RP, Cho, YJ, Petrot, N: Systems of general nonlinear set-valued mixed variational inequalities problems in Hilbert spaces. Fixed Point Theory Appl. 2011, Article ID 31 (2011). doi:10.1186/1687-1812-2011-31
 3.
Saewan, S, Kumam, P: The shrinking projection method for solving generalized equilibrium problems and common fixed points for asymptotically quasi-ϕ-nonexpansive mappings. Fixed Point Theory Appl. 2011, Article ID 9 (2011). doi:10.1186/1687-1812-2011-9
 4.
Markshoe, P, Wangkeeree, R, Kamraksa, U: The shrinking projection method for generalized mixed equilibrium problems and fixed point problems in Banach spaces. J. Nonlinear Anal. Optim., Theory Appl. 1, 111-129 (2010)
 5.
Kumam, W, Jaiboon, C, Kumam, P: A shrinking projection method for generalized mixed equilibrium problems, variational inclusion problems and a finite family of quasi-nonexpansive mappings. J. Inequal. Appl. 2010, Article ID 458274 (2010). doi:10.1155/2010/458274
 6.
Zhang, S: Generalized mixed equilibrium problem in Banach spaces. Appl. Math. Mech. 30, 1105-1112 (2009)
 7.
Chen, J, Cho, YJ, Wan, Z: Shrinking projection algorithms for equilibrium problems with a bifunction defined on the dual space of a Banach space. Fixed Point Theory Appl. 2011, Article ID 91 (2011). doi:10.1186/1687-1812-2011-91
 8.
Cholamjiak, W, Cholamjiak, P, Suantai, S: Convergence of iterative schemes for solving fixed point problems for multivalued nonself mappings and equilibrium problems. J. Nonlinear Sci. Appl. 8, 1245-1256 (2015)
 9.
Khan, SA, Chen, JW: Gap functions and error bounds for generalized mixed vector equilibrium problems. J. Optim. Theory Appl. 166, 767-776 (2015)
 10.
Chang, SS, Joseph Lee, HW, Chan, CK: A new hybrid method for solving generalized equilibrium problem, variational inequality and common fixed point in Banach spaces with applications. Nonlinear Anal. 73, 2260-2270 (2010)
 11.
Facchinei, F, Pang, JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, Berlin (2003)
 12.
Charitha, C, Dutta, J: Regularized gap functions and error bounds for vector variational inequalities. Pac. J. Optim. 6, 497-510 (2010)
 13.
Li, J, Mastroeni, G: Vector variational inequalities involving set-valued mappings via scalarization with applications to error bounds for gap functions. J. Optim. Theory Appl. 145, 355-372 (2010)
 14.
Sun, XK, Chai, Y: Gap functions and error bounds for generalized vector variational inequalities. Optim. Lett. 8, 1663-1673 (2014)
 15.
Xu, YD, Li, SJ: Gap functions and error bounds for weak vector variational inequalities. Optimization 63, 1339-1352 (2014)
 16.
Fukushima, M: Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems. Math. Program. 53, 99-110 (1992)
 17.
Ng, KF, Tan, LL: Error bounds of regularized gap functions for nonsmooth variational inequality problems. Math. Program. 110, 405-429 (2007)
 18.
Yamashita, N, Taji, K, Fukushima, M: Unconstrained optimization reformulations of variational inequality problems. J. Optim. Theory Appl. 92, 439-456 (1997)
 19.
Li, MH: Error bounds of regularized gap functions for weak vector variational inequality problems. J. Inequal. Appl. 2014, Article ID 331 (2014). doi:10.1186/1029-242X-2014-331
 20.
Konnov, IV: A scalarization approach for vector variational inequalities with applications. J. Glob. Optim. 32, 517-527 (2005)
 21.
Giannessi, F: Theorems of the alternative, quadratic programs and complementarity problems. In: Cottle, RW, Giannessi, F, Lions, JL (eds.) Variational Inequalities and Complementarity Problems, pp. 151-186. Wiley, New York (1980)
Acknowledgements
The authors would like to thank the associate editor and the two anonymous referees for their valuable comments and suggestions, which have helped to improve the paper. This work was partially supported by the National Natural Science Foundation of China (Grant number: 11401487), the Fundamental Research Funds for the Central Universities (Grant numbers: SWU113037, XDJK2014C073), and the Scientific Research Fund of Sichuan Provincial Science and Technology Department (Grant number: 2015JY0237).
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Zhang, W., Chen, J., Xu, S. et al. Scalar gap functions and error bounds for generalized mixed vector equilibrium problems with applications. Fixed Point Theory Appl. 2015, 169 (2015). doi:10.1186/s13663-015-0422-2
Keywords
 error bound
 scalar gap function
 generalized mixed vector equilibrium