 Research
 Open Access
The relationship between multi-objective robustness concepts and set-valued optimization
Fixed Point Theory and Applications volume 2014, Article number: 83 (2014)
Abstract
In this paper, we discuss the connection between concepts of robustness for multi-objective optimization problems and set order relations. We extend some of the existing concepts to general spaces and cones using set relations. Furthermore, we derive new concepts of robustness for multi-objective optimization problems. We point out that robust multi-objective optimization can be interpreted as an application of set-valued optimization, and we develop new algorithms for solving uncertain multi-objective optimization problems. These algorithms can also be used to solve a special class of set-valued optimization problems.
1 Introduction
Dealing with uncertainty in multi-objective optimization problems is very important in many applications. On the one hand, most real-world optimization problems are contaminated with uncertain data, especially traffic optimization problems, scheduling problems, portfolio optimization, network flow and network design problems. On the other hand, many real-world optimization problems require the minimization of multiple conflicting objectives (see [1]), e.g. the maximization of the expected return versus the minimization of risk in portfolio optimization, the minimization of production time versus the minimization of the cost of manufacturing equipment, or the maximization of tumor control versus the minimization of normal tissue complication in radiotherapy treatment design.
For an optimization problem contaminated with uncertain data it is typical that at the time it is solved these data are not completely known. It is very important to estimate the effects of this uncertainty and so it is necessary to evaluate how sensitive an optimal solution is to perturbations of the input data. One way to deal with this question is sensitivity analysis (for an overview see [2]). Sensitivity analysis is an a posteriori approach and provides ranges for input data within which a solution remains feasible or optimal. It does not, however, provide a course of action for changing a solution should the perturbation be outside this range. In contrast, stochastic programming (see e.g. Birge and Louveaux [3] for an introduction) and robust optimization (see e.g. [4, 5] for an overview) take the uncertainty into account during the optimization process. While stochastic programming assumes some knowledge about the probability distribution of the uncertain data and the objective usually is to find a solution that is feasible with a certain probability and that optimizes the expected value of some objective function, robust optimization hedges against the worst case. Hence robust optimization does not require any probabilistic information. Depending on the concrete application one can decide whether robust or stochastic optimization is the more appropriate way of dealing with uncertainty.
Robust optimization is usually applied to problems where a solution is required which hedges against all possible scenarios. For example, the emergency department with landing place for rescue helicopters in a ski resort should be chosen in such a way that the flight time to all ski slopes in the resort that are to be protected is minimized in the worst case, even though flight times are uncertain due to unknown weather conditions. Similarly, if an aircraft schedule of an airline is to be determined, one would want to be able to provide service to as many passengers as possible in a cost-effective manner, even though the exact number of passengers is not known at the time the schedule is fixed.
Generally, in the concept of robustness it is not assumed that all data are known, but one allows different scenarios for the input parameters and looks for a solution that works well in every uncertain scenario.
Unfortunately, at the time the uncertain optimization problem has to be solved, it is not known which scenario is going to be realized. Therefore, a definition of a ‘good’ solution, i.e., one that is robust against perturbations in the uncertain parameter, is necessary.
Robust optimization is a growing field of research; we refer to Ben-Tal, El Ghaoui, and Nemirovski [5] and Kouvelis and Yu [4] for an overview of results and applications of the most prominent concepts. Several other concepts of robustness were introduced more recently, e.g. the concept of light robustness by Fischetti and Monaci [6] or the concept of recovery-robustness in Liebchen et al. [7]; for a unified approach, see [8]. A scenario-based approach is suggested in Goerigk and Schöbel [9]. In all these approaches, the uncertain optimization problem is replaced by a deterministic version, called the robust counterpart of the uncertain problem.
One of the most common approaches is the concept of minmax robustness, introduced by Soyster [10] and studied e.g. by Ben-Tal and Nemirovski [11]. Here, a solution is said to be robust if it minimizes the worst case of the objective function over all scenarios. We do not go into detail here, as in this paper we mostly consider concepts of robustness for multi-objective optimization problems.
Now, if we consider the objective function in the problem definition to be not a single-objective, but a multi-objective function, the concepts of robustness no longer apply naturally. The problem obviously is that there is no total order on ${\mathbb{R}}^{k}$, while the robustness concepts for uncertain single-objective optimization problems rely on the total order of ℝ. Therefore, new definitions of what is seen as a robust solution to an uncertain multi-objective optimization problem are necessary.
The first approach to handling uncertainty for multi-objective optimization problems was presented by Deb and Gupta [12], who extended the concept Branke [13] introduced for single-objective functions. Here each objective function is replaced by its mean function, and an efficient solution of the resulting multi-objective optimization problem is called a robust solution. The authors also presented a second definition where the uncertainty is modeled into the constraints, which restrict the variation of the original objective functions to their means. Barrico and Antunes [14] extended the concept of Deb and Gupta and introduced the degree of robustness as a measure of how much a predefined neighborhood of the considered solution can be extended without containing solutions whose function values are too bad. An overview of the existing concepts of robustness for multi-objective optimization problems can be found in [15] and [16].
A first approach to extending the concept of minmax robustness to multi-objective optimization was presented by Kuroiwa and Lee [17]. Here, the worst case in each component is calculated separately, and an efficient solution to the problem of minimizing the vector of worst cases is then called a robust solution to the original problem. This definition has been extended by Ehrgott et al. [18], where the authors replace the objective function by a set-valued objective function. Furthermore, the authors present solution algorithms for calculating minmax robust efficient solutions, one of which is closely connected to the concept of robustness presented by Kuroiwa and Lee [17]. Moreover, in [17] the authors present solution concepts for obtaining robust points of uncertain multi-objective optimization problems and study optimality conditions for the special case of convex objective functions in [19].
Set-valued optimization, on the other hand, deals with the problem of minimizing a function whose image at a point is a set. Minimizing a set is not entirely intuitive since, just as on ${\mathbb{R}}^{k}$, there is no total order on a power set. Therefore, a definition of what can be seen as an optimal solution when minimizing a set-valued objective function is necessary. In order to compare sets, several preorders have been introduced (see e.g. [20–24]). With these preorders it is then possible to formulate set-valued optimization problems related to robustness for multi-objective optimization problems. In particular, we show that the concept of minmax robust efficiency (see [18]) is closely connected to a certain set order relation introduced by Kuroiwa [21, 22], namely the upper-type set relation. We derive our results in general spaces using arguments from nonlinear and convex analysis (see Takahashi [25, 26]); for methods from numerical analysis in general spaces, see e.g. Aoyama, Kohsaka, and Takahashi [27] and Takahashi [28].
Replacing the set order relation implicitly used in the definition of minmax robust efficiency, Ide and Köbis [29] presented various other concepts of robustness for multi-objective optimization, derived by replacing the upper-type set relation with other set orderings from the literature.
This paper is structured as follows: after fixing the notation and recalling the definitions of set order relations in Section 2, in Section 3 we introduce several concepts of robustness for multi-objective optimization problems based on set order relations. We show some characterizations of robust solutions in the sense of set-valued optimization that are important for deriving solution procedures using the ideas given in [18]. Many of the results presented in [18] can be extended to our general setting. Using this, we extend the algorithms presented in [18] to these concepts of robustness and then use the algorithms to solve a certain class of set-valued optimization problems. We conclude the paper with some final remarks and an outlook on future research.
2 Preliminaries
Throughout the paper, let Y be a linear topological space partially ordered by a proper closed convex and pointed (i.e., $C\cap (-C)=\{0\}$) cone C. The ordering relation on Y is described by ${y}^{1}{\le}_{C}{y}^{2}$ if and only if ${y}^{2}-{y}^{1}\in C$ for all ${y}^{1},{y}^{2}\in Y$. The dual cone of C is denoted by ${C}^{\ast}:=\{{y}^{\ast}\in {Y}^{\ast}\mid \mathrm{\forall}y\in C:{y}^{\ast}(y)\ge 0\}$ and the quasi-interior of ${C}^{\ast}$ is defined by ${C}^{\mathrm{\#}}:=\{{y}^{\ast}\in {C}^{\ast}\mid \mathrm{\forall}y\in C\setminus \{0\}:{y}^{\ast}(y)>0\}$. Furthermore, let X be a linear space, $F:X\rightrightarrows Y$ (by the ‘⇉’ notation we denote that F is a set-valued objective function whose function values are sets in Y), and $\mathcal{X}$ a subset of X. As usual, we denote the graph of the set-valued map F by $graphF:=\{(x,y)\in X\times Y\mid y\in F(x)\}$. Furthermore, we define $F(\mathcal{X}):={\bigcup}_{x\in \mathcal{X}}F(x)$.
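For the finite-dimensional special case $Y={\mathbb{R}}^{k}$ with the natural ordering cone $C={\mathbb{R}}_{+}^{k}$ (an illustrative assumption; the paper works in general linear topological spaces), the relation ${\le}_{C}$ and the membership tests for ${C}^{\ast}$ and ${C}^{\mathrm{\#}}$ reduce to componentwise sign checks; a minimal sketch:

```python
# Illustrative special case: Y = R^k, C = R^k_+; vectors are tuples of numbers.
def leq_C(y1, y2):
    """y1 <=_C y2  iff  y2 - y1 in C, i.e. componentwise y1 <= y2."""
    return all(a <= b for a, b in zip(y1, y2))

def in_dual_cone(y_star):
    """y* in C*: y*(y) >= 0 for all y in C; for C = R^k_+ this means y* >= 0."""
    return all(w >= 0 for w in y_star)

def in_quasi_interior(y_star):
    """y* in C#: y*(y) > 0 for all y in C \\ {0}; for C = R^k_+ this means y* > 0."""
    return all(w > 0 for w in y_star)
```

For instance, $(1,0)$ lies in ${C}^{\ast}$ but not in ${C}^{\mathrm{\#}}$, since it vanishes on the nonzero cone element $(0,1)$.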
In set optimization, the following set relations play an important role; see Young [24], Nishnianidze [23], Kuroiwa [21, 22, 30], Jahn and Ha [31] and Eichfelder and Jahn [20]. We will use these set relations to introduce several concepts of robustness.
Definition 1 (Set less order relation [20, 23, 24])
Let $C\subset Y$ be a proper closed convex and pointed cone. Furthermore, let $A,B\subset Y$ be arbitrarily chosen sets. Then the set less order relation ${\u2aaf}_{C}^{s}$ is defined by $A{\u2aaf}_{C}^{s}B$ if and only if $A\subseteq B-C$ and $B\subseteq A+C$.
Remark 1 Of course, we have
and
Definition 2 (Upper-type set relation [21, 22])
Let $A,B\subset Y$ be arbitrarily chosen sets and $C\subset Y$ a proper closed convex and pointed cone. Then the u-type set relation ${\u2aaf}_{C}^{u}$ is defined by $A{\u2aaf}_{C}^{u}B$ if and only if $A\subseteq B-C$.
Another important set order relation is the lower-type set relation:
Definition 3 (Lower-type set relation [21, 22])
Let $A,B\subset Y$ be arbitrarily chosen sets and $C\subset Y$ a proper closed convex and pointed cone. Then the l-type set relation ${\u2aaf}_{C}^{l}$ is defined by $A{\u2aaf}_{C}^{l}B$ if and only if $B\subseteq A+C$.
Remark 2 Note that the conditions

(i)
$A\subset B-intC$,

(ii)
$A+N\subset B-C$ for some neighborhood N of the zero vector ${0}_{Y}$ in Y
are not equivalent when A is not compact. Clearly (ii) implies (i) if $intC\ne \mathrm{\varnothing}$. From a theoretical viewpoint, (ii) may, in some cases, be more appropriate for describing solutions.
Taking into account this property, we suppose in Section 3 that the set-valued map ${f}_{\mathcal{U}}$ in the formulation of the concepts of robustness for multi-objective optimization problems is compact-valued. This is important in the case where we are dealing with $intC$ in the definition of robustness.
Remark 3 There is the following relationship between the l-type set relation ${\u2aaf}_{C}^{l}$ and the u-type set relation ${\u2aaf}_{C}^{u}$: $A{\u2aaf}_{C}^{u}B$ if and only if $(-B){\u2aaf}_{C}^{l}(-A)$.
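For finite sets $A,B\subset {\mathbb{R}}^{k}$ and $C={\mathbb{R}}_{+}^{k}$ (an illustrative assumption), the relations of Definitions 2 and 3 reduce to componentwise dominance checks, and the relationship of Remark 3 can be verified numerically; a minimal sketch, assuming the subset characterizations $A\subseteq B-C$ (u-type) and $B\subseteq A+C$ (l-type):

```python
# Sketch for finite sets in R^k with C = R^k_+ (the paper is more general).
def dominated(a, b):
    """a <=_C b componentwise, i.e. a lies in {b} - C."""
    return all(x <= y for x, y in zip(a, b))

def u_less(A, B):
    """A u-less B:  A ⊆ B - C, i.e. every a in A is dominated by some b in B."""
    return all(any(dominated(a, b) for b in B) for a in A)

def l_less(A, B):
    """A l-less B:  B ⊆ A + C, i.e. every b in B dominates some a in A."""
    return all(any(dominated(a, b) for a in A) for b in B)

def neg(S):
    """Pointwise negation, used to check the duality  A u-less B  iff  (-B) l-less (-A)."""
    return [tuple(-x for x in p) for p in S]
```

For example, `u_less([(0, 0), (1, 0)], [(2, 1)])` holds, since both points are componentwise below $(2,1)$.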
To conclude the notation, we introduce a set-valued optimization problem: consider $F:X\rightrightarrows Y$ and a subset $\mathcal{X}$ of X. Furthermore, let ⪯ be a preorder on the power set of Y given by Definition 1, 2, or 3, respectively. Then a set-valued optimization problem ($\mathcal{S}\mathcal{P}\u2aaf$) is given by
($\mathcal{S}\mathcal{P}\u2aaf$) ⪯minimize $F(x)$, subject to $x\in \mathcal{X}$,
where minimal solutions of ($\mathcal{S}\mathcal{P}\u2aaf$) are defined in the following way:
Definition 4 (Minimal solutions of ($\mathcal{S}\mathcal{P}\u2aaf$) w.r.t. the preorder ⪯)
Given a set-valued optimization problem ($\mathcal{S}\mathcal{P}\u2aaf$), an element $\overline{x}\in \mathcal{X}$ is called a minimal solution to ($\mathcal{S}\mathcal{P}\u2aaf$) if $F(x)$ ⪯ $F(\overline{x})$ for some $x\in \mathcal{X}$ implies $F(\overline{x})$ ⪯ $F(x)$.
Remark 4 If we use the set relation ${\u2aaf}_{C}^{l}$ introduced in Definition 3 in the formulation of the solution concept, i.e., we study the set-valued optimization problem ($\mathcal{SP}{\u2aaf}_{C}^{l}$), we observe that this solution concept is based on comparisons among sets of minimal points of the values of F. Considering instead the u-type set relation ${\u2aaf}_{C}^{u}$ (Definition 2), i.e., the problem ($\mathcal{SP}{\u2aaf}_{C}^{u}$), we recognize that this solution concept is based on comparisons among sets of maximal points of the values of F. When $\overline{x}\in \mathcal{X}$ is a minimal solution of problem ($\mathcal{SP}{\u2aaf}_{C}^{l}$), there does not exist $x\in \mathcal{X}$ such that $F(x)$ is strictly smaller than $F(\overline{x})$ with respect to the set order ${\u2aaf}_{C}^{l}$.
Furthermore, the following definition of a minimizer of a set-valued optimization problem is very often used in the theory of set optimization and is given below. However, the solution concept introduced in Definition 4 is more natural and useful, as Example 1 shows.
In the next definition we use the set of minimal elements of a nonempty subset A of Y with respect to C, $Min(A,C):=\{\overline{y}\in A\mid (\{\overline{y}\}-C)\cap A=\{\overline{y}\}\}$:
Definition 5 (Minimizer of a set-valued optimization problem)
Let $\overline{x}\in \mathcal{X}$ and $(\overline{x},\overline{y})\in graphF$. The pair $(\overline{x},\overline{y})\in graphF$ is called a minimizer of $F:X\rightrightarrows Y$ over $\mathcal{X}$ with respect to C if $\overline{y}\in Min(F(\mathcal{X}),C)$.
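For a finite set $A\subset {\mathbb{R}}^{k}$ ordered by $C={\mathbb{R}}_{+}^{k}$ (again an illustrative finite-dimensional assumption), $Min(A,C)$ consists exactly of the points of A not strictly dominated by another point of A, so the minimizers of Definition 5 can be enumerated directly:

```python
# Min(A, C) for finite A ⊂ R^k and C = R^k_+: the non-dominated points of A.
def minimal_elements(A):
    def strictly_dominates(a, b):
        """a lies in {b} - C and a != b: a is componentwise <= b and differs from b."""
        return a != b and all(x <= y for x, y in zip(a, b))
    return [y for y in A if not any(strictly_dominates(a, y) for a in A)]
```

For example, for $A=\{(1,0),(0,1),(1,1)\}$ the minimal elements are $(1,0)$ and $(0,1)$, since $(1,1)$ is dominated by both.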
For our approach to robustness of uncertain multiobjective optimization problems, minimal solutions in the sense of Definition 4 are useful and therefore, when considering robustness concepts, we will deal with this solution concept in the following.
In order to give some insight into set-valued optimization, we present two examples (see Kuroiwa [32]) of set-valued optimization problems.
Example 1 Consider the set-valued optimization problem
($\mathcal{SP}{\u2aaf}_{C}^{l}$) ${\u2aaf}_{C}^{l}$minimize ${F}_{1}(x)$, subject to $x\in \mathcal{X}$,
with $X=\mathbb{R}$, $Y={\mathbb{R}}^{2}$, $C={\mathbb{R}}_{+}^{2}$, $\mathcal{X}=[0,1]$ and ${F}_{1}:\mathcal{X}\rightrightarrows Y$ is given by
where $[(a,b),(c,d)]$ is the line segment between $(a,b)$ and $(c,d)$. Only the element $\overline{x}=0$ is a minimal solution of ($\mathcal{SP}{\u2aaf}_{C}^{l}$). However, all elements $(\overline{x},\overline{y})\in graph{F}_{1}$ with $\overline{x}\in [0,1]$, $\overline{y}=(1-\overline{x},\overline{x})$ for $\overline{x}\in (0,1]$ and $\overline{y}=(1,0)$ for $\overline{x}=0$ are minimizers of the set-valued optimization problem in the sense of Definition 5. This example shows that the solution concept with respect to the set relation ${\u2aaf}_{C}^{l}$ (see Definitions 3 and 4) is more natural and useful than the concept of minimizers introduced in Definition 5.
Example 2 In this example we are looking for minimal solutions of a set-valued optimization problem with respect to the set relation ${\u2aaf}_{C}^{u}$ introduced in Definition 2.
($\mathcal{SP}{\u2aaf}_{C}^{u}$) ${\u2aaf}_{C}^{u}$minimize ${F}_{2}(x)$, subject to $x\in \mathcal{X}$,
with $X=\mathbb{R}$, $Y={\mathbb{R}}^{2}$, $C={\mathbb{R}}_{+}^{2}$, $\mathcal{X}=[0,1]$ and ${F}_{2}:\mathcal{X}\rightrightarrows Y$ is given by
where $[[(a,b),(c,d)]]:=\{({y}_{1},{y}_{2})\mid a\le {y}_{1}\le c,b\le {y}_{2}\le d\}$. Then the only minimal solution of ($\mathcal{SP}{\u2aaf}_{C}^{u}$) in the sense of Definition 4 is $\overline{x}=0$.
A visualization of both examples discussed above is given in Figure 1.
In Section 3, we will apply the preorders introduced in Definitions 1, 2, 3 in order to define several concepts of robustness for uncertain multi-objective optimization problems.
3 Concepts of robustness for multi-objective optimization problems based on set relations and corresponding algorithms
Speaking of an uncertain optimization problem, we consider the uncertain data to be given as a parameter (also called a scenario) $\xi \in \mathcal{U}$, where $\mathcal{U}\subseteq {\mathbb{R}}^{m}$ is the so-called uncertainty set. For each realization $\xi \in \mathcal{U}$ of this parameter we obtain a single optimization problem
($\mathcal{P}(\xi )$) $\begin{array}{l}f(x,\xi )\to min\\ \text{s.t.}x\in \mathcal{X},\end{array}$
where $f:X\times \mathcal{U}\to Y$ is the objective function and $\mathcal{X}\subseteq X$ is the set of feasible solutions (note that we assume the feasible set to be unchanged for every realization of the uncertain parameter). We use the notation
${f}_{\mathcal{U}}(x):=\{f(x,\xi )\mid \xi \in \mathcal{U}\}$ for the image of the uncertainty set and x under f (note that ${f}_{\mathcal{U}}(x)$ in general is a set and not a singleton).
Taking into account the discussion in Remark 2, we assume that the set-valued map ${f}_{\mathcal{U}}$ is compact-valued.
Now, when searching for an optimal solution, one has to overcome the problem that nothing is known about the different scenarios: neither which one is most likely to occur, nor any kind of probability distribution. Therefore, an uncertain (multi-objective) optimization problem is defined as the family of optimization problems
($\mathcal{P}(\mathcal{U})$) $(\mathcal{P}(\xi ),\xi \in \mathcal{U})$.
It is not clear which solution to this problem ($\mathcal{P}(\mathcal{U})$) should be regarded as desirable. Throughout the paper we discuss several concepts of robustness and derive new approaches to robustness for multi-objective optimization problems.
In this section we extend the robustness concepts presented in [18] to general spaces using the preorders introduced in Definitions 1, 2, 3. In particular, we are interested in extending the theorems which provide the foundation for the algorithms for calculating the respective robust solutions. We briefly recall the various concepts, which relate to different set orderings, extend the theorems, and then formulate the algorithms. With this, we present some ideas for solving special set-valued optimization problems (see Section 4).
3.1 ${\u2aaf}_{C}^{u}$-Robustness
We extend the definitions and results presented by Ehrgott et al. [18] on minmax robust efficiency.
Here, a feasible solution $x\in \mathcal{X}$ to ($\mathcal{P}(\mathcal{U})$) is called minmax robust efficient if there is no other feasible solution $\overline{x}\in \mathcal{X}\setminus \{x\}$ such that ${f}_{\mathcal{U}}(\overline{x})\subseteq {f}_{\mathcal{U}}(x)-({\mathbb{R}}_{\geqq}^{k}\setminus \{0\})$,
where ${\mathbb{R}}_{\geqq}^{k}:=\{\lambda \in {\mathbb{R}}^{k}:{\lambda}_{i}\ge 0\phantom{\rule{0.25em}{0ex}}\mathrm{\forall}i=1,\dots ,k\}$.
With the definitions of the upper-type set relation (see Definition 2) and minmax robust efficiency in mind, we can see the close connection between minmax robust efficiency and the upper-type set relation, since a solution $x\in \mathcal{X}$ to ($\mathcal{P}(\mathcal{U})$) is minmax robust efficient if there is no other feasible solution $\overline{x}\in \mathcal{X}\setminus \{x\}$ such that ${f}_{\mathcal{U}}(\overline{x}){\u2aaf}_{C\setminus \{0\}}^{u}{f}_{\mathcal{U}}(x)$,
where $Y={\mathbb{R}}^{k}$ and $C={\mathbb{R}}_{\geqq}^{k}$.
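For finite feasible and uncertainty sets (illustrative assumptions; the algorithms below avoid such brute-force enumeration), this connection yields a direct, if naive, test for minmax robust efficiency: ${f}_{\mathcal{U}}(\overline{x})\subseteq {f}_{\mathcal{U}}(x)-({\mathbb{R}}_{\geqq}^{k}\setminus \{0\})$ holds exactly when every outcome of $\overline{x}$ is componentwise dominated, and not equaled, by some outcome of x. A sketch:

```python
# Brute-force check of minmax robust efficiency for finite data; f_U maps each
# feasible point x to the finite list of its outcome vectors f_U(x).
def u_less_strict(A, B):
    """A ⊆ B - (C \\ {0}) for C = R^k_+: each a is <= some b componentwise, a != b."""
    return all(any(a != b and all(x <= y for x, y in zip(a, b)) for b in B) for a in A)

def minmax_robust_efficient(x, X, f_U):
    """x is robust iff no other feasible xbar strictly u-dominates it."""
    return not any(u_less_strict(f_U[xbar], f_U[x]) for xbar in X if xbar != x)
```

With outcomes `{'a': [(1, 1), (2, 0)], 'b': [(2, 2), (3, 1)]}`, decision `'a'` is robust while `'b'` is not, since every outcome of `'b'` is dominated by an outcome of `'a'`.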
Since all the concepts considered in this paper are closely related to a set order relation ⪯, in order to keep the names of the concepts readable we call the respective solutions ⪯-robust.
In the following definition we use a preorder ${\u2aaf}_{Q}^{u}$ like in Definition 2 with $Q=C$, $Q=C\setminus \{0\}$ and $Q=intC$, respectively, instead of ${\u2aaf}_{C}^{u}$:
$A{\u2aaf}_{Q}^{u}B$ if and only if $A\subseteq B-Q$, where $A,B\subset Y$ are arbitrarily chosen sets. If we are dealing with $Q=intC$ we suppose $intC\ne \mathrm{\varnothing}$.
Using this notation, the concept of minmax robust efficiency can be redefined as a concept of robustness in the sense of set optimization in the following way.
Definition 6 Given an uncertain multi-objective optimization problem ($\mathcal{P}(\mathcal{U})$), a solution ${x}^{0}\in \mathcal{X}$ is called ${\u2aaf}_{Q}^{u}$-robust for ($\mathcal{P}(\mathcal{U})$) with $Q=C$, $Q=C\setminus \{0\}$ and $Q=intC$, respectively, if there is no solution $\overline{x}\in \mathcal{X}\setminus \{{x}^{0}\}$ such that ${f}_{\mathcal{U}}(\overline{x}){\u2aaf}_{Q}^{u}{f}_{\mathcal{U}}({x}^{0})$.
Note that the definition of ${\u2aaf}_{Q}^{u}$-robustness is valid for general spaces and general cones C, while the definition of minmax robust efficiency in [18] is for $Y={\mathbb{R}}^{k}$ and $C={\mathbb{R}}_{\geqq}^{k}$ only.
The motivation behind this concept is the following: when comparing sets with the u-type set relation, the upper bounds of these sets, i.e., the ‘worst cases’, are considered. Minimizing these worst cases is closely connected to the concept of minmax robust efficiency, where one wants to minimize the objective function in the worst case. This risk-averse approach reflects a decision-maker's strategy to hedge against a worst case and is rather pessimistic.
Remark 5 The first scenario-based approach to uncertain multi-objective optimization, or minmax robustness adapted to multi-objective optimization, was introduced by Kuroiwa and Lee [17] and studied in [19]. In [17, 19] robust solutions of multi-objective optimization problems are introduced in the following way. The authors propose to consider the robust counterpart to ($\mathcal{P}(\mathcal{U})$)
where the objective vector for $x\in \mathcal{X}$ is given by the componentwise worst cases $({sup}_{{\xi}_{1}\in {\mathcal{U}}_{1}}{f}_{1}(x,{\xi}_{1}),\dots ,{sup}_{{\xi}_{k}\in {\mathcal{U}}_{k}}{f}_{k}(x,{\xi}_{k}))$, with functionals ${f}_{i}:{\mathbb{R}}^{n}\times {\mathcal{U}}_{i}\to \mathbb{R}$ for $i=1,\dots ,k$ and the convex and compact uncertainty sets $\mathcal{U}:=({\mathcal{U}}_{1},\dots ,{\mathcal{U}}_{k})$ (${\mathcal{U}}_{i}\subseteq {\mathbb{R}}^{m}$, $i=1,\dots ,k$). In [17], efficient solutions to this robust counterpart are called robust.
Note that in [18] the authors pointed out that this concept differs from the concept of minmax robust efficiency.
With the definition of ${\u2aaf}_{C}^{u}$-robustness, we can generalize an algorithm for computing minmax robust efficient solutions which extends the well-known weighted sum scalarization technique for calculating efficient solutions of multi-objective optimization problems (compare e.g. Ehrgott [33]).
The general idea is to form a scalar optimization problem by multiplying each objective function by a positive weight and summing up the weighted objectives. In our more general setting, the resulting (single-objective) problem is
($\mathcal{P}{(\mathcal{U})}_{{y}^{\ast}}$) $\underset{x\in \mathcal{X}}{min}\underset{\xi \in \mathcal{U}}{sup}{y}^{\ast}\circ f(x,\xi )$,
where $f:X\times \mathcal{U}\to Y$ and ${y}^{\ast}\in {C}^{\ast}\setminus \{0\}$, i.e., ${y}^{\ast}:Y\to \mathbb{R}$.
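For finite $\mathcal{X}$ and $\mathcal{U}$, with ${y}^{\ast}$ represented by a nonnegative weight vector (a finite-dimensional sketch; the function names are ours), ($\mathcal{P}{(\mathcal{U})}_{{y}^{\ast}}$) can be solved by plain enumeration:

```python
# Enumerative solution of the scalarized problem: minimize over x the
# worst-case (over scenarios xi) weighted sum <y_star, f(x, xi)>.
def solve_scalarized(X, U, f, y_star):
    def worst_case(x):
        """sup over xi in U of y* ∘ f(x, xi); the max is attained since U is finite."""
        return max(sum(w * v for w, v in zip(y_star, f(x, xi))) for xi in U)
    best = min(X, key=worst_case)
    return best, worst_case(best)
```

For instance, with outcomes `f(x, xi) = (x + xi, x)` on `X = [0, 1]`, `U = [0, 1]` and weights `(1, 1)`, the worst case of x is `2*x + 1`, so x = 0 is optimal with value 1.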
By solving this problem one can obtain ${\u2aaf}_{C}^{u}$-robust solutions, as shown in Theorem 4.3 of [18]. Before extending this theorem, we need a lemma which will help during the proofs.
Lemma 1 Consider the uncertain multi-objective optimization problem ($\mathcal{P}(\mathcal{U})$). Then, for all ${x}^{\prime},\overline{x}\in \mathcal{X}$ and for $Q=intC$ ($Q=C\setminus \{0\}$, $Q=C$, respectively), we have
Proof ‘⟹’: Suppose the contrary. Then
‘⟸’: Suppose the contrary. Then
□
With this, we can extend Theorem 4.3 from [18] in the following way.
Theorem 1 Consider an uncertain multi-objective optimization problem ($\mathcal{P}(\mathcal{U})$). The following statements hold:

(a)
If ${x}^{0}\in \mathcal{X}$ is a unique optimal solution of ($\mathcal{P}{(\mathcal{U})}_{{y}^{\ast}}$) for some ${y}^{\ast}\in {C}^{\ast}\setminus \{0\}$, then ${x}^{0}$ is a ${\u2aaf}_{C}^{u}$-robust solution for ($\mathcal{P}(\mathcal{U})$).

(b)
If ${x}^{0}\in \mathcal{X}$ is an optimal solution of ($\mathcal{P}{(\mathcal{U})}_{{y}^{\ast}}$) for some ${y}^{\ast}\in {C}^{\mathrm{\#}}$ and ${max}_{\xi \in \mathcal{U}}{y}^{\ast}\circ f(x,\xi )$ exists for all $x\in \mathcal{X}$, then ${x}^{0}$ is a ${\u2aaf}_{C\setminus \{0\}}^{u}$-robust solution for ($\mathcal{P}(\mathcal{U})$).

(c)
If ${x}^{0}\in \mathcal{X}$ is an optimal solution of ($\mathcal{P}{(\mathcal{U})}_{{y}^{\ast}}$) for some ${y}^{\ast}\in {C}^{\ast}\setminus \{0\}$ and ${max}_{\xi \in \mathcal{U}}{y}^{\ast}\circ f(x,\xi )$ exists for all $x\in \mathcal{X}$, then ${x}^{0}$ is a ${\u2aaf}_{intC}^{u}$-robust solution for ($\mathcal{P}(\mathcal{U})$).
Proof Suppose that ${x}^{0}$ is not ${\u2aaf}_{Q}^{u}$-robust for $Q=C$, $Q=(C\setminus \{0\})$, $Q=intC$, respectively. Then there exists an element $\overline{x}\in \mathcal{X}\setminus \{{x}^{0}\}$ such that ${f}_{\mathcal{U}}(\overline{x}){\u2aaf}_{Q}^{u}{f}_{\mathcal{U}}({x}^{0})$, i.e., ${f}_{\mathcal{U}}(\overline{x})\subseteq {f}_{\mathcal{U}}({x}^{0})-Q$
for $Q=C$ ($Q=(C\setminus \{0\})$, $Q=intC$, respectively).
This implies
taking into account Lemma 1.
Choose now ${y}^{\ast}\in {C}^{\ast}\setminus \{0\}$ for $Q=C$ (${y}^{\ast}\in {C}^{\mathrm{\#}}$ for $Q=C\setminus \{0\}$, ${y}^{\ast}\in {C}^{\ast}\setminus \{0\}$ for $Q=intC$, respectively) arbitrary but fixed.
The last inequalities hold because for (b) and (c) ${max}_{{\xi}^{\prime}\in \mathcal{U}}{y}^{\ast}\circ f(\overline{x},{\xi}^{\prime})$ exists. But this means that ${x}^{0}$ is not the unique optimal (an optimal, an optimal, respectively) solution of ($\mathcal{P}{(\mathcal{U})}_{{y}^{\ast}}$) for ${y}^{\ast}\in {C}^{\ast}\setminus \{0\}$ (${y}^{\ast}\in {C}^{\mathrm{\#}}$, ${y}^{\ast}\in {C}^{\ast}\setminus \{0\}$, respectively). □
Remark 6 In Theorem 1(b) we consider ${y}^{\ast}\in {C}^{\mathrm{\#}}$. Under our assumptions concerning the cone C, and if we additionally assume $Y={\mathbb{R}}^{q}$, we have ${C}^{\mathrm{\#}}\ne \mathrm{\varnothing}$ (compare [34, Theorem 2.2.12] and [34, Example 2.2.16]). Moreover, if Y is a Hausdorff locally convex space, $C\subset Y$ is a proper convex cone, and C has a base B with $0\notin clB$, then ${C}^{\mathrm{\#}}\ne \mathrm{\varnothing}$ (compare [34, Theorem 2.2.12]).
With this theorem we can now formulate a first algorithm for finding ${\u2aaf}_{Q}^{u}$-robust solutions for $Q=C$, $Q=C\setminus \{0\}$, $Q=intC$, respectively.
Algorithm 1 Deriving $({\u2aaf}_{C}^{u},{\u2aaf}_{C\setminus \{0\}}^{u},{\u2aaf}_{intC}^{u})$-robust solutions to ($\mathcal{P}(\mathcal{U})$) based on weighted sum scalarization:
Input: Uncertain multi-objective problem ($\mathcal{P}(\mathcal{U})$), solution sets ${Opt}_{C}={Opt}_{C\setminus \{0\}}={Opt}_{intC}=\mathrm{\varnothing}$.
Step 1: Choose a set $\overline{C}\subset {C}^{\ast}\setminus \{0\}$.
Step 2: If $\overline{C}=\mathrm{\varnothing}$: STOP. Output: Set of ${\u2aaf}_{C}^{u}$-robust solutions ${Opt}_{C}$, set of ${\u2aaf}_{C\setminus \{0\}}^{u}$-robust solutions ${Opt}_{C\setminus \{0\}}$, set of ${\u2aaf}_{intC}^{u}$-robust solutions ${Opt}_{intC}$.
Step 3: Choose ${y}^{\ast}\in \overline{C}$. Set $\overline{C}:=\overline{C}\setminus \{{y}^{\ast}\}$.
Step 4: Find an optimal solution ${x}^{0}$ of ($\mathcal{P}{(\mathcal{U})}_{{y}^{\ast}}$).

(a)
If ${x}^{0}$ is a unique optimal solution of ($\mathcal{P}{(\mathcal{U})}_{{y}^{\ast}}$), then ${x}^{0}$ is ${\u2aaf}_{C}^{u}$-robust for ($\mathcal{P}(\mathcal{U})$), thus
$${Opt}_{C}:={Opt}_{C}\phantom{\rule{0.2em}{0ex}}\cup \phantom{\rule{0.2em}{0ex}}\left\{{x}^{0}\right\}.$$ 
(b)
If ${max}_{\xi \in \mathcal{U}}{y}^{\ast}\circ f(x,\xi )$ exists for all $x\in \mathcal{X}$ and ${y}^{\ast}\in {C}^{\mathrm{\#}}$, then ${x}^{0}$ is ${\u2aaf}_{C\setminus \{0\}}^{u}$-robust for ($\mathcal{P}(\mathcal{U})$), thus
$${Opt}_{C\setminus \{0\}}:={Opt}_{C\setminus \{0\}}\phantom{\rule{0.2em}{0ex}}\cup \phantom{\rule{0.2em}{0ex}}\left\{{x}^{0}\right\}.$$ 
(c)
If ${max}_{\xi \in \mathcal{U}}{y}^{\ast}\circ f(x,\xi )$ exists for all $x\in \mathcal{X}$, then ${x}^{0}$ is ${\u2aaf}_{intC}^{u}$-robust for ($\mathcal{P}(\mathcal{U})$), thus
$${Opt}_{intC}:={Opt}_{intC}\phantom{\rule{0.2em}{0ex}}\cup \phantom{\rule{0.2em}{0ex}}\left\{{x}^{0}\right\}.$$
Step 5: Go to Step 2.
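Over finite data and a finite weight set $\overline{C}$, and with $C={\mathbb{R}}_{+}^{k}$ so that ${y}^{\ast}\in {C}^{\mathrm{\#}}$ corresponds to strictly positive weights (illustrative assumptions; uniqueness in case (a) is only tested up to a numerical tolerance), Algorithm 1 can be sketched as follows:

```python
# Sketch of Algorithm 1 for finite X, U and a finite weight set C_bar ⊂ C* \ {0}.
def algorithm1(X, U, f, C_bar, tol=1e-12):
    Opt_C, Opt_C0, Opt_int = set(), set(), set()
    for y_star in C_bar:                                   # Steps 2, 3 and 5
        def worst(x, y_star=y_star):                       # objective of (P(U)_{y*})
            return max(sum(w * v for w, v in zip(y_star, f(x, xi))) for xi in U)
        values = {x: worst(x) for x in X}                  # Step 4: solve by enumeration
        best = min(values.values())
        argmin = [x for x in X if values[x] <= best + tol]
        x0 = argmin[0]
        if len(argmin) == 1:                               # (a) unique optimum
            Opt_C.add(x0)
        if all(w > 0 for w in y_star):                     # (b) y* in C# (positive weights)
            Opt_C0.add(x0)
        Opt_int.add(x0)                                    # (c) the max always exists here
    return Opt_C, Opt_C0, Opt_int
```

Each chosen weight thus contributes one scalarized problem, and the three output sets collect the candidates certified by cases (a), (b), and (c) of Theorem 1.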
Remark 7 In Step 4 of Algorithm 1 the scalar optimization problem ($\mathcal{P}{(\mathcal{U})}_{{y}^{\ast}}$) has to be solved, so the effectiveness of Algorithm 1 depends on the properties of the algorithm used for solving ($\mathcal{P}{(\mathcal{U})}_{{y}^{\ast}}$). An interesting question is how to choose the set $\overline{C}$ in Step 1 of the algorithm. The decision maker could be involved in choosing a finite set $\overline{C}$ in Step 1. If this set $\overline{C}$ is finite, the algorithm stops after finitely many steps.
Furthermore, we present an interactive algorithm for finding $({\u2aaf}_{C}^{u},{\u2aaf}_{C\setminus \{0\}}^{u},{\u2aaf}_{intC}^{u})$-robust solutions to the uncertain multi-objective optimization problem ($\mathcal{P}(\mathcal{U})$). This algorithm uses the input of the decision maker, who either accepts the calculated solution or not.
Algorithm 2 Deriving a single accepted $({\u2aaf}_{C}^{u},{\u2aaf}_{C\setminus \{0\}}^{u},{\u2aaf}_{intC}^{u})$-robust solution to ($\mathcal{P}(\mathcal{U})$) based on weighted sum scalarization:
Input: Uncertain vector-valued problem ($P(\mathcal{U})$).
Step 1: Choose a nonempty set $\overline{C}\subset {C}^{\ast}\setminus \{0\}$.
Step 2: Choose ${\overline{y}}^{\ast}\in \overline{C}$.
Step 3: Find an optimal solution ${x}^{0}$ of ($P{(\mathcal{U})}_{{\overline{y}}^{\ast}}$).

(a)
If ${x}^{0}$ is a unique optimal solution of ($P{(\mathcal{U})}_{{\overline{y}}^{\ast}}$), then ${x}^{0}$ is ${\u2aaf}_{C}^{u}$-robust for ($P(\mathcal{U})$).

(b)
If ${max}_{\xi \in \mathcal{U}}{\overline{y}}^{\ast}\circ f(x,\xi )$ exists for all $x\in S$ and ${\overline{y}}^{\ast}\in {C}^{\mathrm{\#}}$, then ${x}^{0}$ is ${\u2aaf}_{C\setminus \{0\}}^{u}$-robust for ($P(\mathcal{U})$).

(c)
If ${max}_{\xi \in \mathcal{U}}{\overline{y}}^{\ast}\circ f(x,\xi )$ exists for all $x\in S$, then ${x}^{0}$ is ${\u2aaf}_{intC}^{u}$-robust for ($P(\mathcal{U})$).
If ${x}^{0}$ is accepted by the decision maker, then STOP. Output: ${x}^{0}$. Otherwise, go to Step 4.
Step 4: Put $k=0$, ${t}_{0}=0$. Choose ${\stackrel{\u02c6}{y}}^{\ast}\in \overline{C}$, ${\stackrel{\u02c6}{y}}^{\ast}\ne {\overline{y}}^{\ast}$. Go to Step 5.
Step 5: Choose ${t}_{k+1}$ with ${t}_{k}<{t}_{k+1}\le 1$ and compute an optimal solution ${x}^{k+1}$ of
($P{(\mathcal{U})}_{{\overline{y}}^{\ast}+{t}_{k+1}({\stackrel{\u02c6}{y}}^{\ast}-{\overline{y}}^{\ast})}$) $\underset{x\in S}{min}\underset{\xi \in \mathcal{U}}{sup}({\overline{y}}^{\ast}+{t}_{k+1}({\stackrel{\u02c6}{y}}^{\ast}-{\overline{y}}^{\ast}))\circ f(x,\xi )$
and use ${x}^{k}$ as a starting point. If an optimal solution of ($P{(\mathcal{U})}_{{\overline{y}}^{\ast}+{t}_{k+1}({\stackrel{\u02c6}{y}}^{\ast}-{\overline{y}}^{\ast})}$) cannot be found for $t>{t}_{k}$, then go to Step 2. Otherwise, go to Step 6.
Step 6: The point ${x}^{k+1}$ is to be evaluated by the decision maker. If it is accepted, then STOP. Output: ${x}^{k+1}$. Otherwise, go to Step 7.
Step 7: If ${t}_{k+1}=1$, then go to Step 2. Otherwise, set $k=k+1$ and go to Step 5.
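The interactive procedure can be sketched with the decision maker modeled as a callback `accept`; the finite setting, the fixed step grid for ${t}_{k}$, and all names are assumptions made for illustration:

```python
# Sketch of Algorithm 2: propose solutions along the weight segment
# y*(t) = y_bar + t * (y_hat - y_bar) until the decision maker accepts one.
def algorithm2(X, U, f, y_bar, y_hat, accept, steps=(0.25, 0.5, 0.75, 1.0)):
    def solve(y_star):
        def worst(x):
            return max(sum(w * v for w, v in zip(y_star, f(x, xi))) for xi in U)
        return min(X, key=worst)                   # enumerative solution of (P(U)_{y*})
    x_curr = solve(y_bar)                          # Step 3: first proposal
    if accept(x_curr):
        return x_curr
    for t in steps:                                # Steps 5-7: move the weight toward y_hat
        x_curr = solve(tuple(b + t * (h - b) for b, h in zip(y_bar, y_hat)))
        if accept(x_curr):                         # Step 6: evaluation by the decision maker
            return x_curr
    return None                                    # every proposal along the segment was rejected
```

A genuinely interactive implementation would replace `accept` by a dialogue with the decision maker and, as in Step 2 of the algorithm, restart with a new weight pair when the segment is exhausted.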
Remark 8 In the interactive procedure of Algorithm 2 we use a surrogate one-parametric optimization problem, so a systematic generation of solutions is possible.
3.2 ${\u2aaf}_{C}^{l}$-Robustness
In this section we use the l-type set relation ${\u2aaf}_{Q}^{l}$ as in Definition 3 with $Q=C$, $Q=C\setminus \{0\}$ and $Q=intC$, respectively, instead of ${\u2aaf}_{C}^{l}$:
where $A,B\subset Y$ are arbitrarily chosen sets. If we are dealing with $Q=intC$ we suppose $intC\ne \mathrm{\varnothing}$. Using this notation we derive the new concept of ${\u2aaf}_{Q}^{l}$robustness, defined analogously to ${\u2aaf}_{Q}^{u}$robustness (Definition 6).
Definition 7 Given an uncertain multiobjective optimization problem $(\mathcal{P}(\mathcal{U}))$, a solution $x^0\in\mathcal{X}$ is called $\preceq_Q^l$-robust if there is no $\overline{x}\in\mathcal{X}\setminus\{x^0\}$ such that
$\preceq_Q^l$-robustness (with $Q=C$, $Q=C\setminus\{0\}$ and $Q=\operatorname{int}C$, respectively) can be interpreted as an optimistic approach. The following example illustrates this concept for the case $Q=C$.
Remark 9 In Figure 2, $x$ is $\preceq_C^l$-robust, while it is not $\preceq_C^u$-robust.
$\preceq_Q^l$-robustness is an alternative tool with which the decision maker can obtain solutions of another type to an uncertain multiobjective optimization problem. This rather optimistic approach focuses on the lower bound of a set $f_{\mathcal{U}}(\overline{x})$ for the comparison with another set $f_{\mathcal{U}}(x^0)$. In particular, in the case $Q=C$, a point $x^0\in\mathcal{X}$ is $\preceq_C^l$-robust if there is no other point $\overline{x}\in\mathcal{X}$ such that $f_{\mathcal{U}}(x^0)$ is a subset of $f_{\mathcal{U}}(\overline{x})+C$. Contrary to the $\preceq_Q^u$-robustness approach, $\preceq_Q^l$-robustness (with $Q=C$, $Q=C\setminus\{0\}$ and $Q=\operatorname{int}C$, respectively) is not a worst-case concept; the decision maker is considered to be not risk averse but risk affine. This optimistic concept thus hedges against perturbations in the best-case scenarios.
For calculating $\preceq_Q^l$-robust solutions, the weighted sum scalarization is again helpful. In order to compute $\preceq_Q^l$-robust solutions to $(\mathcal{P}(\mathcal{U}))$ later on, we define a new weighted sum problem in a general setting:
Let $y^{\ast}\in C^{\ast}\setminus\{0\}$ ($y^{\ast}\in C^{\#}$, respectively). Consider the weighted sum scalarization problem
$(\mathcal{P}(\mathcal{U})_{y^{\ast}}^{\mathrm{opt}})\qquad \min_{x\in\mathcal{X}}\ \inf_{\xi\in\mathcal{U}}\ y^{\ast}\circ f(x,\xi).$
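For a finite feasible set and a finite uncertainty set, the infimum is a minimum and $(\mathcal{P}(\mathcal{U})_{y^{\ast}}^{\mathrm{opt}})$ can be solved by plain enumeration. The following sketch is purely illustrative; all names and data are hypothetical.

```python
import numpy as np

def solve_p_opt(X, U, f, y_star):
    """Optimistic weighted sum scalarization:
    minimize over x in X the best-case value inf_{xi in U} y* . f(x, xi)."""
    best_x, best_val = None, float("inf")
    for x in X:
        val = min(float(np.dot(y_star, f(x, xi))) for xi in U)  # inf over scenarios
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Hypothetical data: f maps a decision x and a scenario xi to two objective values.
f = lambda x, xi: np.array([x + xi, 2.0 * x - xi])
X, U = [0, 1], [0.0, 0.5]
x0, v = solve_p_opt(X, U, f, np.array([1.0, 1.0]))
# x0 = 0 attains the smallest best-case scalarized value.
```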
Theorem 2 Consider an uncertain multiobjective optimization problem $(\mathcal{P}(\mathcal{U}))$. The following statements hold.

(a)
If $x^0$ is a unique optimal solution of $(\mathcal{P}(\mathcal{U})_{y^{\ast}}^{\mathrm{opt}})$ for some $y^{\ast}\in C^{\ast}\setminus\{0\}$, then $x^0$ is a $\preceq_C^l$-robust solution for $(\mathcal{P}(\mathcal{U}))$.

(b)
If $x^0$ is an optimal solution of $(\mathcal{P}(\mathcal{U})_{y^{\ast}}^{\mathrm{opt}})$ for some $y^{\ast}\in C^{\#}$ and $\min_{\xi\in\mathcal{U}} y^{\ast}\circ f(x,\xi)$ exists for all $x\in\mathcal{X}$, then $x^0$ is a $\preceq_{C\setminus\{0\}}^l$-robust solution for $(\mathcal{P}(\mathcal{U}))$.

(c)
If $x^0$ is an optimal solution of $(\mathcal{P}(\mathcal{U})_{y^{\ast}}^{\mathrm{opt}})$ for some $y^{\ast}\in C^{\ast}\setminus\{0\}$ and $\min_{\xi\in\mathcal{U}} y^{\ast}\circ f(x,\xi)$ exists for all $x\in\mathcal{X}$, then $x^0$ is a $\preceq_{\operatorname{int}C}^l$-robust solution for $(\mathcal{P}(\mathcal{U}))$.
Proof Suppose $x^0$ is not $\preceq_Q^l$-robust for $Q=C$ ($Q=C\setminus\{0\}$, $Q=\operatorname{int}C$, respectively). Consequently, there exists an $\overline{x}\in\mathcal{X}\setminus\{x^0\}$ such that $f_{\mathcal{U}}(\overline{x})+Q\supseteq f_{\mathcal{U}}(x^0)$ for $Q=C$ ($Q=C\setminus\{0\}$, $Q=\operatorname{int}C$, respectively). This is equivalent to
Now choose $y^{\ast}\in C^{\ast}\setminus\{0\}$ for $Q=C$ ($y^{\ast}\in C^{\#}$ for $Q=C\setminus\{0\}$, $y^{\ast}\in C^{\ast}\setminus\{0\}$ for $Q=\operatorname{int}C$, respectively), arbitrary but fixed. Hence we obtain from (6)
in contradiction to the assumptions. □
Based on these results, we are able to present the following algorithm that computes $(\preceq_C^l/\preceq_{C\setminus\{0\}}^l/\preceq_{\operatorname{int}C}^l)$-robust solutions to $(\mathcal{P}(\mathcal{U}))$.
Algorithm 3 Deriving $(\preceq_C^l/\preceq_{C\setminus\{0\}}^l/\preceq_{\operatorname{int}C}^l)$-robust solutions for $(\mathcal{P}(\mathcal{U}))$ based on weighted sum scalarization:
Input & Steps 1–5: Analogous to Algorithm 1, only replacing $(\mathcal{P}(\mathcal{U})_{y^{\ast}})$ by $(\mathcal{P}(\mathcal{U})_{y^{\ast}}^{\mathrm{opt}})$ and replacing $\max_{\xi\in\mathcal{U}} y^{\ast}\circ f(x^0,\xi)$ by $\min_{\xi\in\mathcal{U}} y^{\ast}\circ f(x^0,\xi)$.
The next algorithm computes $(\preceq_C^l/\preceq_{C\setminus\{0\}}^l/\preceq_{\operatorname{int}C}^l)$-robust solutions via weighted sum scalarization by altering the weights:
Algorithm 4 Calculating a single desired $(\preceq_C^l/\preceq_{C\setminus\{0\}}^l/\preceq_{\operatorname{int}C}^l)$-robust solution for $(\mathcal{P}(\mathcal{U}))$ based on weighted sum scalarization:
Input & Steps 1–7: Analogous to Algorithm 2, only replacing $(\mathcal{P}(\mathcal{U})_{\overline{y}^{\ast}})$ by $(\mathcal{P}(\mathcal{U})_{\overline{y}^{\ast}}^{\mathrm{opt}})$, $\max_{\xi\in\mathcal{U}} y^{\ast}\circ f(x^0,\xi)$ by $\min_{\xi\in\mathcal{U}} y^{\ast}\circ f(x^0,\xi)$, and $(\mathcal{P}(\mathcal{U})_{\overline{y}^{\ast}+t_{k+1}(\hat{y}^{\ast}-\overline{y}^{\ast})})$ by $(\mathcal{P}(\mathcal{U})_{\overline{y}^{\ast}+t_{k+1}(\hat{y}^{\ast}-\overline{y}^{\ast})}^{\mathrm{opt}})$.
3.3 $\preceq_C^s$-Robustness
Now we use the set less order relation $\preceq_Q^s$ with $Q=C$, $Q=C\setminus\{0\}$ and $Q=\operatorname{int}C$, respectively (compare Definition 1), for arbitrarily chosen sets $A,B\subset Y$:
If we are dealing with $Q=\operatorname{int}C$ we suppose $\operatorname{int}C\ne\emptyset$. We can now introduce the concept of $\preceq_Q^s$-robustness (with $Q=C$, $Q=C\setminus\{0\}$ and $Q=\operatorname{int}C$, respectively):
Definition 8 A solution $x^0$ of $(\mathcal{P}(\mathcal{U}))$ is called $(\preceq_C^s/\preceq_{C\setminus\{0\}}^s/\preceq_{\operatorname{int}C}^s)$-robust if there is no $\overline{x}\in\mathcal{X}\setminus\{x^0\}$ such that
for $Q=C$ ($Q=C\setminus\{0\}$, $Q=\operatorname{int}C$, respectively).
Remark 10 Figure 3 shows an element $x\in\mathcal{X}$ that is $\preceq_C^s$-robust, while it is not $\preceq_{\operatorname{int}C}^u$-robust.
Remark 11 Note that a $\preceq_C^l$-robust solution is by definition also $\preceq_C^s$-robust. The same assertion holds for a $\preceq_C^u$-robust solution.
The concept of $\preceq_C^s$-robustness can be interpreted in the following way: in a situation where it is not clear whether one should follow a risk affine or a risk averse strategy (e.g., the decision maker is not at hand or wants to get a feeling for the variety of the solutions), this concept might be helpful, as it calculates solutions which reflect both strategies. It can therefore serve as a preselection before deciding on a definite strategy.
Computing $\preceq_C^s$-robust solutions is possible with the help of the following optimization problem:
$(\mathcal{P}(\mathcal{U})_{y^{\ast}}^{\mathrm{biobj}})\qquad h(x):=\begin{pmatrix} \inf_{\xi\in\mathcal{U}} y^{\ast}\circ f(x,\xi)\\ \sup_{\xi\in\mathcal{U}} y^{\ast}\circ f(x,\xi)\end{pmatrix}\ \to\ \operatorname{v-min}_{x\in\mathcal{X}}$
with $y^{\ast}\in C^{\ast}\setminus\{0\}$ ($y^{\ast}\in C^{\#}$, respectively). For $(\mathcal{P}(\mathcal{U})_{y^{\ast}}^{\mathrm{biobj}})$ we use the solution concept of weak Pareto efficiency: an element $x^0\in\mathcal{X}$ is called weakly Pareto efficient for $(\mathcal{P}(\mathcal{U})_{y^{\ast}}^{\mathrm{biobj}})$ if
Furthermore, a point $x^0\in\mathcal{X}$ is called strictly Pareto efficient for $(\mathcal{P}(\mathcal{U})_{y^{\ast}}^{\mathrm{biobj}})$ if
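For finite sets, these two efficiency notions can be written out directly. The sketch below uses the standard componentwise comparisons (strict improvement in both components for the weak notion; improvement-or-equality in both components by some other point for the strict notion); all data and names are illustrative.

```python
import numpy as np

def h(x, U, f, w):
    """Bi-objective value h(x) = (inf, sup) of the scalarized objective over scenarios."""
    vals = [float(np.dot(w, f(x, xi))) for xi in U]
    return (min(vals), max(vals))

def weakly_efficient(X, U, f, w):
    """x is weakly Pareto efficient: no point improves BOTH components strictly."""
    hs = {x: h(x, U, f, w) for x in X}
    return [x for x in X
            if not any(hs[y][0] < hs[x][0] and hs[y][1] < hs[x][1] for y in X)]

def strictly_efficient(X, U, f, w):
    """x is strictly Pareto efficient: no OTHER point is at least as good in both."""
    hs = {x: h(x, U, f, w) for x in X}
    return [x for x in X
            if not any(y != x and hs[y][0] <= hs[x][0] and hs[y][1] <= hs[x][1]
                       for y in X)]

# Hypothetical data: x = 0 takes the values {0, 2} over the scenarios,
# while x = 1 and x = 2 both take the constant value 1.
f = lambda x, xi: np.array([2.0 * xi if x == 0 else 1.0, 0.0])
X, U, w = [0, 1, 2], [0.0, 1.0], np.array([1.0, 0.0])
weak, strict = weakly_efficient(X, U, f, w), strictly_efficient(X, U, f, w)
```

In this instance all three points are weakly Pareto efficient, but the tie between $x=1$ and $x=2$ destroys strict efficiency for both of them, which is exactly the distinction the two definitions draw.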
We prove the following theorem.
Theorem 3 Consider an uncertain multiobjective optimization problem $(\mathcal{P}(\mathcal{U}))$. The following statements hold.

(a)
If $x^0$ is strictly Pareto efficient for problem $(\mathcal{P}(\mathcal{U})_{y^{\ast}}^{\mathrm{biobj}})$ for some $y^{\ast}\in C^{\ast}\setminus\{0\}$, then $x^0$ is $\preceq_C^s$-robust.

(b)
If $x^0$ is weakly Pareto efficient for problem $(\mathcal{P}(\mathcal{U})_{y^{\ast}}^{\mathrm{biobj}})$ for some $y^{\ast}\in C^{\ast}\setminus\{0\}$, and $\min_{\xi\in\mathcal{U}} y^{\ast}\circ f(x,\xi)$ and $\max_{\xi\in\mathcal{U}} y^{\ast}\circ f(x,\xi)$ exist for all $x\in\mathcal{X}$ for this chosen weight $y^{\ast}$, then $x^0$ is $\preceq_{\operatorname{int}C}^s$-robust.

(c)
If $x^0$ is weakly Pareto efficient for problem $(\mathcal{P}(\mathcal{U})_{y^{\ast}}^{\mathrm{biobj}})$ for some $y^{\ast}\in C^{\#}$, and $\min_{\xi\in\mathcal{U}} y^{\ast}\circ f(x,\xi)$ and $\max_{\xi\in\mathcal{U}} y^{\ast}\circ f(x,\xi)$ exist for all $x\in\mathcal{X}$ for this chosen weight $y^{\ast}$, then $x^0$ is $\preceq_{C\setminus\{0\}}^s$-robust.
Proof Let $x^0$ be strictly Pareto efficient (weakly Pareto efficient, weakly Pareto efficient, respectively) for problem $(\mathcal{P}(\mathcal{U})_{y^{\ast}}^{\mathrm{biobj}})$ with some $y^{\ast}\in C^{\ast}\setminus\{0\}$ ($y^{\ast}\in C^{\ast}\setminus\{0\}$, $y^{\ast}\in C^{\#}$, respectively), i.e., there is no $\overline{x}\in\mathcal{X}\setminus\{x^0\}$ such that
Now suppose $x^0$ is not $(\preceq_C^s/\preceq_{\operatorname{int}C}^s/\preceq_{C\setminus\{0\}}^s)$-robust. Then there exists an $\overline{x}\in\mathcal{X}\setminus\{x^0\}$ such that
for $Q=C$ ($Q=\operatorname{int}C$, $Q=C\setminus\{0\}$). This implies
for $Q=C$ ($Q=\operatorname{int}C$, $Q=C\setminus\{0\}$). Now choose $y^{\ast}\in C^{\ast}\setminus\{0\}$ ($y^{\ast}\in C^{\ast}\setminus\{0\}$, $y^{\ast}\in C^{\#}$) as in problem $(\mathcal{P}(\mathcal{U})_{y^{\ast}}^{\mathrm{biobj}})$. We obtain from (7)
The last two strict inequalities hold because the minimum and maximum exist. But this contradicts the assumption. □
Based on these observations, we can formulate the following algorithm for computing $\preceq_C^s$-robust solutions to $(\mathcal{P}(\mathcal{U}))$.
Algorithm 5 Computing $(\preceq_C^s/\preceq_{C\setminus\{0\}}^s/\preceq_{\operatorname{int}C}^s)$-robust solutions using a family of problems $(\mathcal{P}(\mathcal{U})_{y^{\ast}}^{\mathrm{biobj}})$:
Input & Steps 1–3: Analogous to Algorithm 1.
Step 4: Find a set of weakly Pareto efficient solutions $SOL_{\mathrm{we}}(y^{\ast})$ of $(\mathcal{P}(\mathcal{U})_{y^{\ast}}^{\mathrm{biobj}})$.
Step 5: If $SOL_{\mathrm{we}}(y^{\ast})=\emptyset$, then go to Step 2.
Step 6: Choose $\overline{x}\in SOL_{\mathrm{we}}(y^{\ast})$. Set $SOL_{\mathrm{we}}(y^{\ast}):=SOL_{\mathrm{we}}(y^{\ast})\setminus\{\overline{x}\}$.

(a)
If $\overline{x}$ is a strictly Pareto efficient solution of $(\mathcal{P}(\mathcal{U})_{y^{\ast}}^{\mathrm{biobj}})$, then $\overline{x}$ is $\preceq_C^s$-robust for $(\mathcal{P}(\mathcal{U}))$, thus
$$Opt_C := Opt_C \cup \{\overline{x}\}.$$ 
(b)
If $\overline{x}$ is weakly Pareto efficient for problem $(\mathcal{P}(\mathcal{U})_{y^{\ast}}^{\mathrm{biobj}})$, $y^{\ast}\in C^{\#}$, and $\min_{\xi\in\mathcal{U}} y^{\ast}\circ f(x,\xi)$ and $\max_{\xi\in\mathcal{U}} y^{\ast}\circ f(x,\xi)$ exist for all $x\in\mathcal{X}$ for this chosen weight, then $\overline{x}$ is $\preceq_{C\setminus\{0\}}^s$-robust for $(\mathcal{P}(\mathcal{U}))$, thus
$$Opt_{C\setminus\{0\}} := Opt_{C\setminus\{0\}} \cup \{\overline{x}\}.$$ 
(c)
If $\overline{x}$ is a weakly Pareto efficient solution of $(\mathcal{P}(\mathcal{U})_{y^{\ast}}^{\mathrm{biobj}})$ and $\min_{\xi\in\mathcal{U}} y^{\ast}\circ f(x,\xi)$ and $\max_{\xi\in\mathcal{U}} y^{\ast}\circ f(x,\xi)$ exist for all $x\in\mathcal{X}$, then $\overline{x}$ is $\preceq_{\operatorname{int}C}^s$-robust for $(\mathcal{P}(\mathcal{U}))$, thus
$$Opt_{\operatorname{int}C} := Opt_{\operatorname{int}C} \cup \{\overline{x}\}.$$
Step 7: Go to Step 5.
In the following we present an algorithm that computes $\preceq_C^s$-robust solutions while varying the weights in the vector of objectives of problem $(\mathcal{P}(\mathcal{U})_{y^{\ast}}^{\mathrm{biobj}})$.
Algorithm 6 Computing $(\preceq_C^s/\preceq_{C\setminus\{0\}}^s/\preceq_{\operatorname{int}C}^s)$-robust solutions using a family of problems $(\mathcal{P}(\mathcal{U})_{y^{\ast}}^{\mathrm{biobj}})$:
Input & Steps 1–3 & Steps 5–7: Analogous to Algorithm 2, only replacing $(\mathcal{P}(\mathcal{U})_{\overline{y}^{\ast}})$ by $(\mathcal{P}(\mathcal{U})_{\overline{y}^{\ast}}^{\mathrm{biobj}})$ and $(\mathcal{P}(\mathcal{U})_{\overline{y}^{\ast}+t_{k+1}(\hat{y}^{\ast}-\overline{y}^{\ast})})$ by $(\mathcal{P}(\mathcal{U})_{\overline{y}^{\ast}+t_{k+1}(\hat{y}^{\ast}-\overline{y}^{\ast})}^{\mathrm{biobj}})$.
Step 4: Analogous to Step 4 of Algorithm 5.
3.4 Alternative set less ordered robustness
Another way of combining the u- and l-type set relations is the alternative set less order relation:
Definition 9 (Alternative set less order relation (compare Ide and Köbis [29]))
Let $C\subset Y$ be a proper closed convex and pointed cone. Furthermore, let $A,B\subset Y$ be arbitrarily chosen sets. Then the alternative set less order relation is defined by
Based on this definition we can now define the concept of $\preceq_C^a$-robustness for general cones:
Definition 10 A solution $x^0$ of $(\mathcal{P}(\mathcal{U}))$ is called $(\preceq_C^a/\preceq_{C\setminus\{0\}}^a/\preceq_{\operatorname{int}C}^a)$-robust if there is no $\overline{x}\in\mathcal{X}\setminus\{x^0\}$ such that
for $Q=C$ ($Q=C\setminus\{0\}$, $Q=\operatorname{int}C$, respectively).
The following example illustrates $\preceq_C^a$-robust solutions.
Remark 12 In Figure 4, both $x$ and $\overline{x}$ are $\preceq_C^a$-robust.
The next lemma follows directly from the definitions.
Lemma 2 A solution of $(\mathcal{P}(\mathcal{U}))$ is $\preceq_C^a$-robust if and only if it is both $\preceq_C^l$-robust and $\preceq_C^u$-robust.
As this lemma shows, the concept of $\preceq_C^a$-robustness is rather restrictive: only solutions which are both $\preceq_C^u$-robust and $\preceq_C^l$-robust, and thus reflect a risk averse as well as a risk affine strategy, are $\preceq_C^a$-robust. This concept therefore fits a decision maker who does not want to make any mistake in terms of the best or the worst cases. Such an approach is very restrictive, and only very few solutions can be expected to fulfill these conditions.
Due to Lemma 2, we can deduce from Algorithms 1 and 3 the following algorithm for calculating $\preceq_C^a$-robust solutions to $(\mathcal{P}(\mathcal{U}))$.
Algorithm 7 Deriving $(\preceq_C^a/\preceq_{C\setminus\{0\}}^a/\preceq_{\operatorname{int}C}^a)$-robust solutions to $(\mathcal{P}(\mathcal{U}))$:
Input: Uncertain multiobjective problem $(\mathcal{P}(\mathcal{U}))$, solution sets $Opt_C^a=Opt_{C\setminus\{0\}}^a=Opt_{\operatorname{int}C}^a=\emptyset$.
Step 1: Compute a set of $(\preceq_C^l/\preceq_{\operatorname{int}C}^l/\preceq_{C\setminus\{0\}}^l)$-robust solutions $(Opt_C^l, Opt_{\operatorname{int}C}^l, Opt_{C\setminus\{0\}}^l)$ using Algorithm 3 or 4.
Step 2: Compute a set of $(\preceq_C^u/\preceq_{\operatorname{int}C}^u/\preceq_{C\setminus\{0\}}^u)$-robust solutions $(Opt_C^u, Opt_{\operatorname{int}C}^u, Opt_{C\setminus\{0\}}^u)$ using Algorithm 1 or 2.
Output: Set of $(\preceq_C^a/\preceq_{\operatorname{int}C}^a/\preceq_{C\setminus\{0\}}^a)$-robust solutions
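By Lemma 2, the output step of Algorithm 7 amounts to intersecting the two solution sets computed in Steps 1 and 2. A minimal sketch, with purely hypothetical solution sets:

```python
def a_robust(opt_l, opt_u):
    """Algorithm 7, output step: a solution is a-robust iff it is
    both l-robust and u-robust (Lemma 2)."""
    return opt_l & opt_u

# Hypothetical outputs of Algorithm 3/4 (l-robust) and Algorithm 1/2 (u-robust):
opt_l = {"x1", "x2", "x3"}
opt_u = {"x2", "x4"}
opt_a = a_robust(opt_l, opt_u)   # {"x2"}
```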
3.5 Further relationships between the concepts
From Remark 11 we can see that every $\preceq_C^u$-robust solution and every $\preceq_C^l$-robust solution is also a $\preceq_C^s$-robust solution. The converse does not hold: the example in Figure 5 shows that a solution can be $\preceq_C^s$-robust but neither $\preceq_C^u$-robust nor $\preceq_C^l$-robust.
We summarize the relationship between the various robustness concepts in Figure 6.
4 Conclusions
In the following we explain that the algorithms presented in Section 3 can be used for solving special classes of set-valued optimization problems.
Having a close look at all the concepts of robustness from Section 3, we can see that in fact all of these are set-valued optimization problems.
Consider a set-valued optimization problem of the form
$(\mathcal{SP}_{\preceq})\qquad \preceq\text{-minimize } F(x), \text{ subject to } x\in\mathcal{X},$
with some given preorder $\preceq$ and a set-valued objective map $F:\mathcal{X}\rightrightarrows Y$. Then we can observe the following.
If the preorder $\preceq$ is given by $\preceq_C^l$, $\preceq_C^u$, or $\preceq_C^s$ with some proper closed convex pointed cone $C\subset Y$, and $F(x)$ can be parametrized by parameters $\xi\in\mathcal{U}$ from some set $\mathcal{U}$ in such a way that
where $f_{\mathcal{U}}(x)=\{f(x,\xi)\mid \xi\in\mathcal{U}\}$ and $f:\mathcal{X}\times\mathcal{U}\to Y$, then the set-valued optimization problem $(\mathcal{SP}_{\preceq})$ is equivalent to finding $\preceq$-robust solutions to the uncertain multiobjective problem $(\mathcal{P}(\mathcal{U}))$ and can therefore be solved by using one of the respective algorithms presented in Section 3.
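To make this correspondence concrete, the following sketch checks the set relations and the induced robustness notion by brute force for finite image sets in $\mathbb{R}^m$ with the natural ordering cone $C=\mathbb{R}^m_+$, assuming the common conventions $A\preceq_C^u B \Leftrightarrow A\subseteq B-C$ and $A\preceq_C^l B \Leftrightarrow B\subseteq A+C$; all data and names are hypothetical.

```python
import numpy as np

# Set relations for finite sets A, B of points in R^m with C = R^m_+ :
def leq_u(A, B):
    """A <=_u B  iff  A is a subset of B - C: every a lies componentwise below some b."""
    return all(any(np.all(a <= b) for b in B) for a in A)

def leq_l(A, B):
    """A <=_l B  iff  B is a subset of A + C: every b lies componentwise above some a."""
    return all(any(np.all(a <= b) for a in A) for b in B)

def leq_s(A, B):
    """Set less relation: both the u- and the l-comparison hold."""
    return leq_u(A, B) and leq_l(A, B)

def robust(points, F, rel):
    """x0 is rel-robust iff there is no other x with F(x) rel F(x0)."""
    return [x0 for x0 in points
            if not any(x != x0 and rel(F[x], F[x0]) for x in points)]

# Hypothetical image sets f_U(x) for two decisions:
F = {"a": [np.array([0.0, 0.0]), np.array([1.0, 1.0])],
     "b": [np.array([2.0, 2.0]), np.array([3.0, 3.0])]}
X = ["a", "b"]
# F["a"] is below F["b"] under all three relations, so only "a" is robust.
```

Swapping `rel` between `leq_u`, `leq_l` and `leq_s` reproduces, for this toy problem, the three robustness notions discussed in Section 3.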
We revealed strong connections between set-valued optimization and uncertain multiobjective optimization. Furthermore, we derived our results in a more general setting than in [18] and [29]. In particular, we provided solution algorithms for a certain class of set-valued optimization problems. Extending this class to a more general one seems possible and is a natural next step for research in this area.
Moreover, this paper makes very clear that finding robust solutions to uncertain multiobjective optimization problems can be interpreted as an application of set-valued optimization. Thus, robust solutions to uncertain multiobjective optimization problems can be obtained by using the solution techniques from set-valued optimization. Formulating concrete algorithms of this kind is another topic for future research.
References
 1.
Stewart T, Bandte O, Braun H, Chakraborti N, Ehrgott M, Göbelt M, Jin Y, Nakayama H, Poles S, Di Stefano D: Real-world applications of multiobjective optimization. Lecture Notes in Computer Science 5252. In Multiobjective Optimization: Interactive and Evolutionary Approaches. Edited by: Branke J, Deb K, Miettinen K, Slowinski R. Springer, Berlin; 2008:285–327.
 2.
Scott EM, Saltelli A, Sörensen T: Practical experience in applying sensitivity and uncertainty analysis. Wiley Ser. Probab. Stat. In Sensitivity Analysis. Wiley, Chichester; 2000:267–274.
 3.
Birge JR, Louveaux FV: Introduction to Stochastic Programming. Springer, New York; 1997.
 4.
Kouvelis P, Yu G: Robust Discrete Optimization and Its Applications. Kluwer Academic, Dordrecht; 1997.
 5.
BenTal A, El Ghaoui L, Nemirovski A: Robust Optimization. Princeton University Press, Princeton; 2009.
 6.
Fischetti M, Monaci M: Light robustness. Lecture Notes in Computer Science 5868. In Robust and Online Large-Scale Optimization. Edited by: Ahuja RK, Moehring R, Zaroliagis C. Springer, Berlin; 2009:61–84.
 7.
Liebchen C, Lübbecke M, Möhring RH, Stiller S: The concept of recoverable robustness, linear programming recovery, and railway applications. Lecture Notes in Computer Science 5868. In Robust and Online Large-Scale Optimization. Edited by: Ahuja RK, Möhring RH, Zaroliagis CD. Springer, Berlin; 2009.
 8.
Klamroth K, Köbis E, Schöbel A, Tammer C: A unified approach for different concepts of robustness and stochastic programming via nonlinear scalarizing functionals. Optimization 2013, 62: 649–671. 10.1080/02331934.2013.769104
 9.
Goerigk M, Schöbel A: A scenario-based approach for robust linear optimization. Lecture Notes in Computer Science. In Proceedings of the 1st International ICST Conference on Practice and Theory of Algorithms in (Computer) Systems (TAPAS). Springer, Berlin; 2011:139–150.
 10.
Soyster AL: Convex programming with set-inclusive constraints and applications to inexact linear programming. Oper. Res. 1973, 21: 1154–1157. 10.1287/opre.21.5.1154
 11.
BenTal A, Nemirovski A: Robust convex optimization. Math. Oper. Res. 1998, 23(4):769–805. 10.1287/moor.23.4.769
 12.
Deb K, Gupta H: Introducing robustness in multiobjective optimization. Evol. Comput. 2006, 14(4):463–494. 10.1162/evco.2006.14.4.463
 13.
Branke J: Creating robust solutions by means of evolutionary algorithms. Lecture Notes in Computer Science 1498. In Parallel Problem Solving from Nature – PPSN V. Edited by: Eiben AE, Bäck T, Schoenauer M, Schwefel HP. Springer, Berlin; 1998:119–128.
 14.
Barrico C, Antunes CH: Robustness analysis in multiobjective optimization using a degree of robustness concept. IEEE Congress on Evolutionary Computation 2006, 1887–1892.
 15.
Steponavičė, I, Miettinen, K: Survey on multiobjective robustness for simulation-based optimization. Talk at the 21st International Symposium on Mathematical Programming, August 19–24, 2012, Berlin, Germany (2012)
 16.
Witting, K: Numerical algorithms for the treatment of parametric multiobjective optimization problems and applications. PhD thesis, Universität Paderborn, Paderborn (2012)
 17.
Kuroiwa D, Lee GM: On robust multiobjective optimization. Vietnam J. Math. 2012, 40(2–3):305–317.
 18.
Ehrgott M, Ide J, Schöbel A: Minmax robustness for multiobjective optimization problems. Eur. J. Oper. Res. 2014. 10.1016/j.ejor.2014.03.013
 19.
Kuroiwa, D, Lee, GM: On robust convex multiobjective optimization. J. Nonlinear Convex Anal. (2013, accepted)
 20.
Eichfelder G, Jahn J: Vector optimization problems and their solution concepts. Vector Optim. In Recent Developments in Vector Optimization. Springer, Berlin; 2012:1–27.
 21.
Kuroiwa D: Some duality theorems of set-valued optimization with natural criteria. In Proceedings of the International Conference on Nonlinear Analysis and Convex Analysis. World Scientific, Singapore; 1999:221–228.
 22.
Kuroiwa D: The natural criteria in set-valued optimization. Sūrikaisekikenkyūsho Kōkyūroku 1998, 1031: 85–90. Research on nonlinear analysis and convex analysis (Kyoto, 1997)
 23.
Nishnianidze ZG: Fixed points of monotone multivalued operators. Soobshch. Akad. Nauk Gruzin. SSR 1984, 114(3):489–491.
 24.
Young RC: The algebra of manyvalued quantities. Math. Ann. 1931, 104(1):260–290. 10.1007/BF01457934
 25.
Takahashi W: Nonlinear Functional Analysis. Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama; 2000.
 26.
Takahashi W: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama; 2009.
 27.
Aoyama K, Kohsaka F, Takahashi W: Proximal point methods for monotone operators in Banach spaces. Taiwan. J. Math. 2011, 15(1):259–281.
 28.
Takahashi W: Nonlinear mappings in equilibrium problems and an open problem in fixed point theory. In Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama; 2010:177–197.
 29.
Ide, J, Köbis, E: Concepts of robustness for multiobjective optimization problems based on set order relations (2013)
 30.
Kuroiwa D: On set-valued optimization. Nonlinear Anal. 2001, 47(2):1395–1400. 10.1016/S0362-546X(01)00274-7
 31.
Jahn J, Ha TXD: New order relations in set optimization. J. Optim. Theory Appl. 2011, 148(2):209–236. 10.1007/s10957-010-9752-8
 32.
Kuroiwa D: Existence theorems of set optimization with set-valued maps. J. Inf. Optim. Sci. 2003, 24(1):73–84.
 33.
Ehrgott M: Multicriteria Optimization. 2nd edition. Springer, Berlin; 2005.
 34.
Göpfert A, Riahi H, Tammer C, Zălinescu C: Variational Methods in Partially Ordered Spaces. Springer, New York; 2003.
Acknowledgements
Supported by DFG RTG 1703 ‘Resource Efficiency in Interorganizational Networks’.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Cite this article
Ide, J., Köbis, E., Kuroiwa, D. et al. The relationship between multiobjective robustness concepts and set-valued optimization. Fixed Point Theory Appl 2014, 83 (2014). https://doi.org/10.1186/1687-1812-2014-83
Keywords
 robust optimization
 multiobjective optimization
 scalarization
 vectorization
 setvalued optimization