Open Access

The relationship between multi-objective robustness concepts and set-valued optimization

  • Jonas Ide1,
  • Elisabeth Köbis2,
  • Daishi Kuroiwa3,
  • Anita Schöbel1 and
  • Christiane Tammer4
Fixed Point Theory and Applications 2014, 2014:83

https://doi.org/10.1186/1687-1812-2014-83

Received: 30 September 2013

Accepted: 14 March 2014

Published: 31 March 2014

Abstract

In this paper, we discuss the connection between concepts of robustness for multi-objective optimization problems and set order relations. We extend some of the existing concepts to general spaces and cones using set relations. Furthermore, we derive new concepts of robustness for multi-objective optimization problems. We point out that robust multi-objective optimization can be interpreted as an application of set-valued optimization. Furthermore, we develop new algorithms for solving uncertain multi-objective optimization problems. These algorithms can be used in order to solve a special class of set-valued optimization problems.

Keywords

robust optimization; multi-objective optimization; scalarization; vectorization; set-valued optimization

1 Introduction

Dealing with uncertainty in multi-objective optimization problems is very important in many applications. On the one hand, most real world optimization problems are contaminated with uncertain data, especially traffic optimization problems, scheduling problems, portfolio optimization, network flow and network design problems. On the other hand, many real world optimization problems require the minimization of multiple conflicting objectives (see [1]), e.g. the maximization of the expected return versus the minimization of risk in portfolio optimization, the minimization of production time versus the minimization of the cost of manufacturing equipment, or the maximization of tumor control versus the minimization of normal tissue complication in radiotherapy treatment design.

For an optimization problem contaminated with uncertain data it is typical that at the time it is solved these data are not completely known. It is very important to estimate the effects of this uncertainty and so it is necessary to evaluate how sensitive an optimal solution is to perturbations of the input data. One way to deal with this question is sensitivity analysis (for an overview see [2]). Sensitivity analysis is an a posteriori approach and provides ranges for input data within which a solution remains feasible or optimal. It does not, however, provide a course of action for changing a solution should the perturbation be outside this range. In contrast, stochastic programming (see e.g. Birge and Louveaux [3] for an introduction) and robust optimization (see e.g. [4, 5] for an overview) take the uncertainty into account during the optimization process. While stochastic programming assumes some knowledge about the probability distribution of the uncertain data and the objective usually is to find a solution that is feasible with a certain probability and that optimizes the expected value of some objective function, robust optimization hedges against the worst case. Hence robust optimization does not require any probabilistic information. Depending on the concrete application one can decide whether robust or stochastic optimization is the more appropriate way of dealing with uncertainty.

Robust optimization is usually applied to problems where a solution is required which hedges against all possible scenarios. For example, the emergency department with landing place for rescue helicopters in a ski resort should be chosen in such a way that the flight time to all ski slopes in the resort that are to be protected is minimized in the worst case, even though flight times are uncertain due to unknown weather conditions. Similarly, if an aircraft schedule of an airline is to be determined, one would want to be able to provide service to as many passengers as possible in a cost-effective manner, even though the exact number of passengers is not known at the time the schedule is fixed.

Generally, in the concept of robustness it is not assumed that all data are known, but one allows different scenarios for the input parameters and looks for a solution that works well in every uncertain scenario.

Unfortunately, at the time the uncertain optimization problem has to be solved, it is not known which scenario is going to be realized. Therefore, a definition of a ‘good’ (or robust against the perturbations in the uncertain parameter) solution is necessary.

Robust optimization is a growing field of research; we refer to Ben-Tal, El Ghaoui, Nemirovski [5], Kouvelis and Yu [4] for an overview of results and applications for the most prominent concepts. Several other concepts of robustness were introduced more recently, e.g. the concept of light robustness by Fischetti and Monaci [6] or the concept of recovery-robustness in Liebchen et al. [7]; for a unified approach, see [8]. A scenario-based approach is suggested in Goerigk and Schöbel [9]. In all these approaches, the uncertain optimization problem is replaced by a deterministic version, called the robust counterpart of the uncertain problem.

One of the most common approaches is the concept of minmax robustness, introduced by Soyster [10] and studied e.g. by Ben-Tal and Nemirovski [11]. Here, a solution is said to be robust, if it minimizes the worst case of the objective function over all scenarios. We do not go into detail here as for this paper we mostly consider concepts of robustness for multi-objective optimization problems.

Now, if we consider the objective function in the problem definition to be not a single-objective but a multi-objective function, the concepts of robustness no longer apply naturally. The problem obviously is that there is no total order on ℝ^k, while the robustness concepts for uncertain single-objective optimization problems rely on the total order of ℝ. Therefore, new definitions of what is seen as a robust solution to an uncertain multi-objective optimization problem are necessary.

The first approach to handling uncertainty for multi-objective optimization problems was presented by Deb and Gupta [12], who extended the concept Branke [13] introduced for single-objective functions. Here each objective function is replaced by its mean function, and an efficient solution of the resulting multi-objective optimization problem is called a robust solution. The authors also presented a second definition where the uncertainty is modeled into the constraints, which restrict the variation of the original objective functions to their means. Barrico and Antunes [14] extended the concept of Deb and Gupta and introduced the degree of robustness as a measure of how much a predefined neighborhood of the considered solution can be extended without containing solutions whose function values are too bad. An overview of the existing concepts of robustness for multi-objective optimization problems can be found in [15] and [16].

A first approach to extending the concept of minmax robustness to multi-objective optimization was presented by Kuroiwa and Lee [17]. Here, the worst case in each component is calculated separately, and an efficient solution to the problem of minimizing the vector of worst cases is then called a robust solution to the original problem. This definition has been extended by Ehrgott et al. [18], where the authors replace the objective function by a set-valued objective function. Furthermore, the authors present solution algorithms for calculating minmax-robust efficient solutions, one of which is closely connected to the concept of robustness presented by Kuroiwa and Lee [17]. Furthermore, in [17] the authors present solution concepts for obtaining robust points of uncertain multi-objective optimization problems and study optimality conditions for the special case of convex objective functions in [19].

Set-valued optimization, on the other hand, deals with the problem of minimizing a function where the image of a point is in fact a set. Minimizing a set is not entirely intuitive since, just as on ℝ^k, there is no total order on a power set. Therefore, a definition of what can be seen as an optimal solution when minimizing a set-valued objective function is necessary. In order to compare sets, several preorders have been introduced (see e.g. [20–24]). With these preorders it is then possible to formulate set-valued optimization problems related to robustness for multi-objective optimization problems. In particular, we show that the concept of minmax-robust efficiency (see [18]) is closely connected to a certain set order relation introduced by Kuroiwa [21, 22], namely the upper-type set relation. We derive our results in general spaces using arguments from nonlinear and convex analysis (see Takahashi [25, 26]); for methods from numerical analysis in general spaces, see e.g. Aoyama, Kohsaka, Takahashi [27], Takahashi [28].

Ide and Köbis [29] presented various other concepts of robustness for multi-objective optimization, derived by replacing the upper-type set relation implicitly used in the definition of minmax-robust efficiency with other set orderings from the literature.

This paper is structured as follows: After fixing the notation and recalling the definitions of set order relations in Section 2, in Section 3 we introduce several concepts of robustness for multi-objective optimization problems based on set order relations. We show some characterizations of robust solutions in the sense of set-valued optimization that are important for deriving solution procedures using the ideas given in [18]. Many of the results presented in [18] can be extended to our general setting. Using this information, we extend the algorithms presented in [18] to these concepts of robustness, and then we use these algorithms in order to solve a certain class of set-valued optimization problems. We conclude the paper with some final remarks and an outlook on future research.

2 Preliminaries

Throughout the paper, let Y be a linear topological space partially ordered by a proper closed convex and pointed (i.e., C ∩ (−C) = {0}) cone C. The ordering relation on Y is described by y₁ ≤_C y₂ if and only if y₂ − y₁ ∈ C for all y₁, y₂ ∈ Y. The dual cone of C is denoted by C* := {y* ∈ Y* | ∀y ∈ C: y*(y) ≥ 0}, and the quasi-interior of the dual cone is defined by C# := {y* ∈ C* | ∀y ∈ C∖{0}: y*(y) > 0}. Furthermore, let X be a linear space, F: X ⇉ Y (with the ‘⇉’-notation we denote that F is a set-valued objective function whose function values are sets in Y), and 𝒳 a subset of X. As usual, we denote the graph of the set-valued map F by graph F := {(x, y) ∈ X × Y | y ∈ F(x)}. Furthermore, we define F(𝒳) := ⋃_{x ∈ 𝒳} F(x).

In set optimization, the following set relations play an important role; see Young [24], Nishnianidze [23], Kuroiwa [21, 22, 30], Jahn and Ha [31] and Eichfelder and Jahn [20]. We will use these set relations to introduce several concepts of robustness.

Definition 1 (Set less order relation [20, 23, 24])

Let C ⊆ Y be a proper closed convex and pointed cone. Furthermore, let A, B ⊆ Y be arbitrarily chosen sets. Then the set less order relation ⪯^s_C is defined by
A ⪯^s_C B :⇔ A ⊆ B − C and A + C ⊇ B.
Remark 1 Of course, we have
A ⊆ B − C ⇔ ∀a ∈ A ∃b ∈ B: a ≤_C b
and
A + C ⊇ B ⇔ ∀b ∈ B ∃a ∈ A: a ≤_C b.

Definition 2 (Upper-type set relation [21, 22])

Let A, B ⊆ Y be arbitrarily chosen sets and C ⊆ Y a proper closed convex and pointed cone. Then the u-type set relation ⪯^u_C is defined by
A ⪯^u_C B :⇔ A ⊆ B − C ⇔ ∀a ∈ A ∃b ∈ B: a ≤_C b.

Another important set order relation is the lower-type set relation:

Definition 3 (Lower-type set relation [21, 22])

Let A, B ⊆ Y be arbitrarily chosen sets and C ⊆ Y a proper closed convex and pointed cone. Then the l-type set relation ⪯^l_C is defined by
A ⪯^l_C B :⇔ A + C ⊇ B ⇔ ∀b ∈ B ∃a ∈ A: a ≤_C b.
Remark 2 Note that the conditions
  1. (i)

     A ⊆ B − int C,

  2. (ii)

     A + N ⊆ B − C for some neighborhood N of the zero vector 0_Y in Y

are not equivalent when A is not compact. Clearly, (ii) implies (i) if int C ≠ ∅. From a theoretical viewpoint, (ii) may, in some cases, be more appropriate for describing solutions.

Taking into account this property, we suppose in Section 3 that the set-valued map f_U in the formulation of the concepts of robustness for multi-objective optimization problems is compact-valued. This is important in the case where we are dealing with int C in the definition of robustness.

Remark 3 There is the following relationship between the l-type set relation ⪯^l_C and the u-type set relation ⪯^u_C:
A ⪯^l_C B :⇔ A + C ⊇ B ⇔ −B ⊆ −A − C :⇔ −B ⪯^u_C −A.
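For finite sets in ℝ^k with C = ℝ^k₊, the three set relations of Definitions 1–3 and the duality of Remark 3 can be checked directly. The following is an illustrative sketch only (the paper works with general cones in linear topological spaces; the sample sets are hypothetical):

```python
# Finite-set sketch of the set relations from Definitions 1-3,
# for Y = R^k and the natural ordering cone C = R^k_+.

def leq_C(a, b):
    """Componentwise order a <=_C b for C = R^k_+."""
    return all(ai <= bi for ai, bi in zip(a, b))

def u_type(A, B):
    """A ⪯^u_C B  iff  A ⊆ B - C, i.e. every a in A has some b in B with a <=_C b."""
    return all(any(leq_C(a, b) for b in B) for a in A)

def l_type(A, B):
    """A ⪯^l_C B  iff  A + C ⊇ B, i.e. every b in B has some a in A with a <=_C b."""
    return all(any(leq_C(a, b) for a in A) for b in B)

def set_less(A, B):
    """Set less order relation (Definition 1): both u-type and l-type hold."""
    return u_type(A, B) and l_type(A, B)

def negate(A):
    return [tuple(-x for x in a) for a in A]

A = [(0, 1), (1, 0)]
B = [(1, 2), (2, 1)]
print(u_type(A, B), l_type(A, B), set_less(A, B))

# Remark 3 duality: A ⪯^l_C B  iff  -B ⪯^u_C -A
print(l_type(A, B) == u_type(negate(B), negate(A)))
```

Note that for the discrete data above both one-sided relations hold, so the set less relation holds as well; the duality check mirrors Remark 3 term by term.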

To conclude the notation, we introduce a set-valued optimization problem: Consider F: X ⇉ Y and a subset 𝒳 of X. Furthermore, let ⪯ be a preorder on the power set of Y given by Definition 1, 2, 3, respectively. Then a set-valued optimization problem (SP) is given by

(SP)  ⪯-minimize F(x), subject to x ∈ 𝒳,

where minimal solutions of (SP) are defined in the following way:

Definition 4 (Minimal solutions of (SP) w.r.t. the preorder ⪯)

Given a set-valued optimization problem (SP), an element x̄ ∈ 𝒳 is called a minimal solution to (SP) if
(F(x) ⪯ F(x̄) for some x ∈ 𝒳) ⟹ F(x̄) ⪯ F(x).

Remark 4 If we use the set relation ⪯^l_C introduced in Definition 3 in the formulation of the solution concept, i.e., we study the set-valued optimization problem (SP^l_C), we observe that this solution concept is based on comparisons among sets of minimal points of the values of F. Furthermore, considering the u-type set relation ⪯^u_C (Definition 2), i.e., considering the problem (SP^u_C), we recognize that this solution concept is based on comparisons among sets of maximal points of the values of F. When x̄ ∈ 𝒳 is a minimal solution of problem (SP^l_C), there does not exist x ∈ 𝒳 such that F(x) is strictly smaller than F(x̄) with respect to the set order ⪯^l_C.

Furthermore, the following definition of a minimizer of a set-valued optimization problem is often used in the theory of set optimization. However, the solution concept introduced in Definition 4 is more natural and useful, as we can see in Example 1.

In the next definition we use the set of minimal elements of a nonempty subset A of Y with respect to C:
Min(A, C) := {ȳ ∈ A | A ∩ (ȳ − (C∖{0})) = ∅}.

Definition 5 (Minimizer of a set-valued optimization problem)

Let x̄ ∈ 𝒳 and (x̄, ȳ) ∈ graph F. The pair (x̄, ȳ) ∈ graph F is called a minimizer of F: X ⇉ Y over 𝒳 with respect to C if ȳ ∈ Min(F(𝒳), C).

For our approach to robustness of uncertain multi-objective optimization problems, minimal solutions in the sense of Definition 4 are useful and therefore, when considering robustness concepts, we will deal with this solution concept in the following.

In order to give some insight into set-valued optimization problems, we present two examples (see Kuroiwa [32]).

Example 1 Consider the set-valued optimization problem

(SP^l_C)   ⪯^l_C-minimize F₁(x), subject to x ∈ 𝒳,

with X = ℝ, Y = ℝ², C = ℝ²₊, 𝒳 = [0, 1] and F₁: X ⇉ Y given by

F₁(x) := [(1, 0), (0, 1)] if x = 0,   F₁(x) := [(1 − x, x), (1, 1)] if x ∈ (0, 1],

where [(a, b), (c, d)] is the line segment between (a, b) and (c, d). Only the element x̄ = 0 is a minimal solution of (SP^l_C). However, all elements (x̄, ȳ) ∈ graph F₁ with x̄ ∈ [0, 1], ȳ = (1 − x̄, x̄) for x̄ ∈ (0, 1] and ȳ = (1, 0) for x̄ = 0 are minimizers of the set-valued optimization problem in the sense of Definition 5. This example shows that the solution concept with respect to the set relation ⪯^l_C (see Definitions 3 and 4) is more natural and useful than the concept of minimizers introduced in Definition 5.

Example 2 In this example we are looking for minimal solutions of a set-valued optimization problem with respect to the set relation ⪯^u_C introduced in Definition 2:

(SP^u_C)   ⪯^u_C-minimize F₂(x), subject to x ∈ 𝒳,

with X = ℝ, Y = ℝ², C = ℝ²₊, 𝒳 = [0, 1] and F₂: X ⇉ Y given by

F₂(x) := [[(1, 1), (2, 2)]] if x = 0,   F₂(x) := [[(0, 0), (3, 3)]] if x ∈ (0, 1],

where [[(a, b), (c, d)]] := {(y₁, y₂) | a ≤ y₁ ≤ c, b ≤ y₂ ≤ d}. Then the only minimal solution of (SP^u_C) in the sense of Definition 4 is x̄ = 0.
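The claimed minimal solutions of both examples can be verified numerically on a discretized feasible set. The following is an illustrative sketch, with the line segments and boxes replaced by finite samples (a simplification that is sufficient for the dominance checks used here):

```python
# Discretized check of Examples 1 and 2: minimal solutions in the
# sense of Definition 4, with C = R^2_+.

def leq(a, b):                      # a <=_C b for C = R^2_+
    return a[0] <= b[0] and a[1] <= b[1]

def u_type(A, B):                   # A ⪯^u_C B  iff  A ⊆ B - C
    return all(any(leq(a, b) for b in B) for a in A)

def l_type(A, B):                   # A ⪯^l_C B  iff  A + C ⊇ B
    return all(any(leq(a, b) for a in A) for b in B)

def segment(p, q, n=4):             # n + 1 sample points of the segment [p, q]
    return [(p[0] + t * (q[0] - p[0]) / n, p[1] + t * (q[1] - p[1]) / n)
            for t in range(n + 1)]

def F1(x):                          # Example 1 (sampled segments)
    return segment((1, 0), (0, 1)) if x == 0 else segment((1 - x, x), (1, 1))

def F2(x):                          # Example 2 (corner samples of the boxes)
    return [(1, 1), (1, 2), (2, 1), (2, 2)] if x == 0 else \
           [(0, 0), (0, 3), (3, 0), (3, 3)]

def minimal_solutions(grid, F, rel):
    """x_bar is minimal (Definition 4) if rel(F(x), F(x_bar)) for some x
    implies rel(F(x_bar), F(x))."""
    return [xb for xb in grid
            if all(rel(F(xb), F(x)) for x in grid if rel(F(x), F(xb)))]

grid = [0, 0.25, 0.5, 0.75, 1.0]
print(minimal_solutions(grid, F1, l_type))   # only x = 0
print(minimal_solutions(grid, F2, u_type))   # only x = 0
```

On this grid the checks reproduce the statements of both examples: x̄ = 0 is the only minimal solution of (SP^l_C) and of (SP^u_C), respectively.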

A visualization of both examples discussed above is given in Figure 1.
Figure 1

Feasible solution sets of F₁, F₂, described in Examples 1 and 2.

In Section 3, we will apply the preorders introduced in Definitions 1, 2, 3 in order to define several concepts of robustness for uncertain multi-objective optimization problems.

3 Concepts of robustness for multi-objective optimization problems based on set relations and corresponding algorithms

Talking about an uncertain optimization problem, we consider the uncertain data to be given as a parameter (also called scenario) ξ ∈ U, where U ⊆ ℝ^m is the so-called uncertainty set. For each realization of this parameter ξ ∈ U we obtain a single optimization problem

(P(ξ))   f(x, ξ) → min s.t. x ∈ 𝒳,

where f: X × U → Y is the objective function and 𝒳 ⊆ X is the set of feasible solutions (note that we assume the feasible set to be unchanged for every realization of the uncertain parameter). We use the notation
f_U(x) := {f(x, ξ) | ξ ∈ U}
(1)

for the image of the uncertainty set U and x under f (note that f_U(x) is in general a set and not a singleton).

Taking into account the discussion in Remark 2, we assume that the set-valued map f_U is compact-valued.

Now, when searching for an optimal solution, one has to overcome the problem that we do not know anything about the different scenarios, e.g., which one is most likely to occur, any kind of probability distribution and so on. Therefore, an uncertain (multi-objective) optimization problem is defined as the family of optimization problems

(P(U))   (P(ξ), ξ ∈ U).

Now it is not clear what solution to this problem ( P ( U ) ) would be seen as desirable. Throughout the paper we discuss several concepts of robustness and derive new approaches to robustness for multi-objective optimization problems.

In this section we extend the robustness concepts presented in [18] to general spaces using the preorders introduced in Definitions 1, 2, 3. In particular, we are interested in extending the theorems which provide the foundation for the algorithms for calculating the respective robust solutions. We briefly recall the various concepts which relate to different set orderings, extend the theorems and then formulate the algorithms. With this, we present some ideas for solving special set-valued optimization problems (see Section 4).

3.1 ⪯^u_C-Robustness

We extend the definitions and results presented by Ehrgott et al. [18] about minmax-robust efficiency.

Here, a feasible solution x ∈ 𝒳 to (P(U)) is called minmax-robust efficient if there is no other feasible solution x̄ ∈ 𝒳∖{x} such that
f_U(x̄) ⊆ f_U(x) − ℝ^k_≧,

where ℝ^k_≧ := {λ ∈ ℝ^k : λ_i ≥ 0, i = 1,…,k}.

With the definitions of the upper-type set relation (see Definition 2) and minmax-robust efficiency in mind, we can see the close connection between minmax-robust efficiency and the upper-type set relation, since a solution x ∈ 𝒳 to (P(U)) is minmax-robust efficient if there is no other feasible solution x̄ ∈ 𝒳∖{x} such that
f_U(x̄) ⪯^u_C f_U(x),

where Y = ℝ^k and C = ℝ^k_≧.

Since all the concepts considered in this paper are closely related to a set order relation ⪯, in order to keep the names of the concepts readable, we call the respective solutions ⪯-robust.

In the following definition we use a preorder ⪯^u_Q as in Definition 2 with Q = C, Q = C∖{0} and Q = int C, respectively, instead of ⪯^u_C:
A ⪯^u_Q B :⇔ A ⊆ B − Q,

where A, B ⊆ Y are arbitrarily chosen sets. If we are dealing with Q = int C, we suppose int C ≠ ∅.

Using this notation, the concept of minmax-robust efficiency can be redefined as a concept of robustness in the sense of set optimization in the following way.

Definition 6 Given an uncertain multi-objective optimization problem (P(U)), a solution x₀ ∈ 𝒳 is called ⪯^u_Q-robust for (P(U)) with Q = C, Q = C∖{0} and Q = int C, respectively, if there is no solution x̄ ∈ 𝒳∖{x₀} such that
f_U(x̄) ⪯^u_Q f_U(x₀).

Note that the definition of ⪯^u_Q-robustness is valid for general spaces and general cones C, while the definition of minmax-robust efficiency in [18] is for Y = ℝ^k and C = ℝ^k_≧ only.

The motivation behind this concept is the following: When comparing sets with the u-type set relation, the upper bounds of these sets, i.e., the ‘worst cases’, are considered. Minimizing these worst cases is closely connected to the concept of minmax-robust efficiency, where one wants to minimize the objective function in the worst case. This risk-averse approach would reflect a decision-maker's strategy to hedge against a worst case and is rather pessimistic.

Remark 5 The first scenario-based concept for uncertain multi-objective optimization, or minmax-robustness adapted to multi-objective optimization, was introduced by Kuroiwa and Lee [17] and studied in [19]. In [17, 19], robust solutions of multi-objective optimization problems are introduced in the following way. The authors propose to consider the robust counterpart to (P(U))
Min(f^RC_U(𝒳), ℝ^k_≧),
(2)
where the objective vector for x ∈ 𝒳 is given by
f^RC_U(x) := (max_{ξ¹ ∈ U₁} f₁(x, ξ¹), …, max_{ξ^k ∈ U_k} f_k(x, ξ^k))ᵀ,
(3)

with functionals f_i: ℝ^n × U_i → ℝ for i = 1,…,k and the convex and compact uncertainty sets U := (U₁,…,U_k) (U_i ⊆ ℝ^m, i = 1,…,k). In [17], solutions to (2) are called robust.

Note that in [18] the authors pointed out that this concept differs from the concept of minmax-robust efficiency.
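On finite data, the componentwise worst-case objective (3) is straightforward to compute: each entry takes the worst case of its own objective over its own uncertainty set. A minimal sketch, with hypothetical objectives and finite uncertainty sets:

```python
# Sketch of the componentwise worst-case objective vector (3),
# using one finite uncertainty set U_i per objective f_i.

def f_rc(x, f_list, U_list):
    """i-th entry of the robust-counterpart objective: the worst case
    of f_i(x, .) over its own uncertainty set U_i."""
    return tuple(max(f(x, xi) for xi in U) for f, U in zip(f_list, U_list))

# two hypothetical objectives with scenario-dependent data
f1 = lambda x, xi: xi * x
f2 = lambda x, xi: (1 - x) + xi
U1 = [1.0, 2.0]
U2 = [0.0, 0.5]

print(f_rc(0.5, [f1, f2], [U1, U2]))  # (1.0, 1.0)
```

Solutions to (2) are then efficient solutions of the deterministic multi-objective problem with objective f^RC_U; note that each component takes its worst case independently, which is exactly where this concept differs from minmax-robust efficiency.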

With the definition of ⪯^u_C-robustness, we can generalize an algorithm for computing minmax-robust efficient solutions which is an extension of the well-known weighted sum scalarization technique for calculating efficient solutions of multi-objective optimization problems (compare e.g. Ehrgott [33]).

The general idea is to form a scalar optimization problem by multiplying each objective function with a positive weight and summing up the weighted objectives. The resulting (single-objective) problem in a more general setting is

(P(U), y*)   min_{x ∈ 𝒳} sup_{ξ ∈ U} y*(f(x, ξ)),

where f: X × U → Y and y* ∈ C*∖{0}, i.e., y*: Y → ℝ.

Now, by solving this problem one can obtain ⪯^u_C-robust solutions, as shown in Theorem 4.3 in [18]. Before extending this theorem, we need a lemma which will be helpful in the proofs.

Lemma 1 Consider the uncertain multi-objective optimization problem (P(U)). Then we have for all x, x̄ ∈ 𝒳 and for Q = int C (Q = C∖{0}, Q = C, respectively)
f_U(x) ⊆ f_U(x̄) − Q ⇔ ∀ξ ∈ U ∃η ∈ U: f(x, ξ) − f(x̄, η) ∈ −Q.
(4)
Proof ‘⟸’:
∀ξ ∈ U ∃η ∈ U: f(x, ξ) − f(x̄, η) ∈ −Q ⟹ ∀ξ ∈ U: f(x, ξ) ∈ f_U(x̄) − Q ⟹ f_U(x) ⊆ f_U(x̄) − Q.
‘⟹’:
f_U(x) ⊆ f_U(x̄) − Q ⟹ ∀ξ ∈ U: f(x, ξ) ∈ f_U(x̄) − Q ⟹ ∀ξ ∈ U ∃η ∈ U: f(x, ξ) − f(x̄, η) ∈ −Q.

 □

With this, we can extend Theorem 4.3 from [18] in the following way.

Theorem 1 Consider an uncertain multi-objective optimization problem (P(U)). The following statements hold:
  1. (a)

     If x₀ ∈ 𝒳 is a unique optimal solution of (P(U), y*) for some y* ∈ C*∖{0}, then x₀ is a ⪯^u_C-robust solution for (P(U)).

  2. (b)

     If x₀ ∈ 𝒳 is an optimal solution of (P(U), y*) for some y* ∈ C# and max_{ξ ∈ U} y*(f(x, ξ)) exists for all x ∈ 𝒳, then x₀ is a ⪯^u_{C∖{0}}-robust solution for (P(U)).

  3. (c)

     If x₀ ∈ 𝒳 is an optimal solution of (P(U), y*) for some y* ∈ C*∖{0} and max_{ξ ∈ U} y*(f(x, ξ)) exists for all x ∈ 𝒳, then x₀ is a ⪯^u_{int C}-robust solution for (P(U)).

Proof Suppose that x₀ is not ⪯^u_Q-robust for Q = C (Q = C∖{0}, Q = int C, respectively). Then there exists an element x̄ ∈ 𝒳∖{x₀} such that
f_U(x̄) ⊆ f_U(x₀) − Q,
(5)

for Q = C (Q = C∖{0}, Q = int C, respectively).

This implies
∀ξ ∈ U ∃η ∈ U: f(x̄, ξ) − f(x₀, η) ∈ −Q,

taking into account Lemma 1.

Now choose y* ∈ C*∖{0} for Q = C (y* ∈ C# for Q = C∖{0}, y* ∈ C*∖{0} for Q = int C, respectively) arbitrary but fixed. Then
∀ξ ∈ U ∃η ∈ U: y*(f(x̄, ξ)) ≤ (<, <, respectively) y*(f(x₀, η))
⟹ ∀ξ ∈ U: y*(f(x̄, ξ)) ≤ (<, <, respectively) sup_{η ∈ U} y*(f(x₀, η))
⟹ sup_{ξ ∈ U} y*(f(x̄, ξ)) ≤ (<, <, respectively) sup_{η ∈ U} y*(f(x₀, η)).

The last inequalities hold because for (b) and (c) max_{ξ ∈ U} y*(f(x̄, ξ)) exists. But this means that x₀ is not the unique optimal (an optimal, an optimal, respectively) solution of (P(U), y*) for y* ∈ C*∖{0} (y* ∈ C#, y* ∈ C*∖{0}, respectively). □

Remark 6 In Theorem 1(b) we consider y* ∈ C#. Under our assumptions concerning the cone C, and if we additionally assume Y = ℝ^q, we have C# ≠ ∅ (compare [34, Theorem 2.2.12], [34, Example 2.2.16]). Moreover, if Y is a Hausdorff locally convex space, C ⊆ Y is a proper convex cone and C has a base B with 0 ∉ cl B, then C# ≠ ∅ (compare [34, Theorem 2.2.12]).

With this theorem we can now formulate a first algorithm for finding ⪯^u_Q-robust solutions for Q = C, Q = C∖{0}, Q = int C, respectively.

Algorithm 1 Deriving (⪯^u_C, ⪯^u_{C∖{0}}, ⪯^u_{int C})-robust solutions to (P(U)) based on weighted sum scalarization:

Input: Uncertain multi-objective problem (P(U)), solution sets Opt_C = Opt_{C∖{0}} = Opt_{int C} = ∅.

Step 1: Choose a set C̄ ⊆ C*∖{0}.

Step 2: If C̄ = ∅: STOP. Output: Set of ⪯^u_C-robust solutions Opt_C, set of ⪯^u_{C∖{0}}-robust solutions Opt_{C∖{0}}, set of ⪯^u_{int C}-robust solutions Opt_{int C}.

Step 3: Choose y* ∈ C̄. Set C̄ := C̄∖{y*}.

Step 4: Find an optimal solution x₀ of (P(U), y*).
  1. (a)
     If x₀ is a unique optimal solution of (P(U), y*), then x₀ is ⪯^u_C-robust for (P(U)); thus
     Opt_C := Opt_C ∪ {x₀}.
  2. (b)
     If max_{ξ ∈ U} y*(f(x, ξ)) exists for all x ∈ 𝒳 and y* ∈ C#, then x₀ is ⪯^u_{C∖{0}}-robust for (P(U)); thus
     Opt_{C∖{0}} := Opt_{C∖{0}} ∪ {x₀}.
  3. (c)
     If max_{ξ ∈ U} y*(f(x, ξ)) exists for all x ∈ 𝒳, then x₀ is ⪯^u_{int C}-robust for (P(U)); thus
     Opt_{int C} := Opt_{int C} ∪ {x₀}.

Step 5: Go to Step 2.
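On fully finite data (finite feasible set, finite uncertainty set, finite weight set) the loop of Algorithm 1 can be sketched directly. The data below are hypothetical, and Y = ℝ², C = ℝ²₊; for finite U the inner maximum is always attained, so every minimizer qualifies under case (c), while unique minimizers qualify under case (a):

```python
# Minimal sketch of Algorithm 1 for Y = R^2, C = R^2_+, on finite data.
# Each weight y* yields the scalarized problem (P(U), y*):
# minimize over x the worst case max_xi y*(f(x, xi)).

def weighted_sum_robust(X, U, f, weights):
    """Collect optimal solutions of (P(U), y*) for each weight y*:
    unique minimizers (Theorem 1(a)) and all minimizers (Theorem 1(c),
    since U is finite, the inner max is attained)."""
    opt_unique, opt_int = [], []
    for y in weights:
        # worst-case scalarized value for each feasible x
        worst = {x: max(sum(yi * fi for yi, fi in zip(y, f(x, xi))) for xi in U)
                 for x in X}
        m = min(worst.values())
        argmins = [x for x, v in worst.items() if v == m]
        opt_int.extend(x for x in argmins if x not in opt_int)
        if len(argmins) == 1 and argmins[0] not in opt_unique:
            opt_unique.append(argmins[0])
    return opt_unique, opt_int

# hypothetical bi-objective data: f(x, xi) in R^2
f = lambda x, xi: (x + xi, 2 - x + xi)
X = [0.0, 0.5, 1.0]
U = [0.0, 0.25]
weights = [(1, 0), (0.5, 0.5), (0, 1)]
print(weighted_sum_robust(X, U, f, weights))
```

For the weight (0.5, 0.5) the scalarized worst case is the same for every x, so no unique minimizer is recorded there; the extreme weights each produce a unique minimizer.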

Remark 7 In Step 4 of Algorithm 1 the scalar optimization problem (P(U), y*) is to be solved, so that the effectiveness of Algorithm 1 depends on the properties of the algorithm used for solving (P(U), y*). An interesting question is how to choose the set C̄ in Step 1 of the algorithm. The decision maker could be involved in choosing a finite set C̄ in Step 1. If this set C̄ is finite, the algorithm stops after finitely many steps.

Furthermore, we present an interactive algorithm for finding (⪯^u_C, ⪯^u_{C∖{0}}, ⪯^u_{int C})-robust solutions to the uncertain multi-objective optimization problem (P(U)). This algorithm uses the input of the decision maker, who either accepts the calculated solution or not.

Algorithm 2 Deriving a single accepted (⪯^u_C, ⪯^u_{C∖{0}}, ⪯^u_{int C})-robust solution to (P(U)) based on weighted sum scalarization:

Input: Uncertain vector-valued problem (P(U)).

Step 1: Choose a nonempty set C̄ ⊆ C*∖{0}.

Step 2: Choose ȳ* ∈ C̄.

Step 3: Find an optimal solution x₀ of (P(U), ȳ*).
  1. (a)

     If x₀ is a unique optimal solution of (P(U), ȳ*), then x₀ is ⪯^u_C-robust for (P(U)).

  2. (b)

     If max_{ξ ∈ U} ȳ*(f(x, ξ)) exists for all x ∈ 𝒳 and ȳ* ∈ C#, then x₀ is ⪯^u_{C∖{0}}-robust for (P(U)).

  3. (c)

     If max_{ξ ∈ U} ȳ*(f(x, ξ)) exists for all x ∈ 𝒳, then x₀ is ⪯^u_{int C}-robust for (P(U)).

If x₀ is accepted by the decision-maker, then Stop. Output: x₀. Otherwise, go to Step 4.

Step 4: Put k = 0, t₀ = 0. Choose ŷ* ∈ C̄, ŷ* ≠ ȳ*. Go to Step 5.

Step 5: Choose t_{k+1} with t_k < t_{k+1} ≤ 1 and compute an optimal solution x_{k+1} of

(P(U), ȳ* + t_{k+1}(ŷ* − ȳ*))   min_{x ∈ 𝒳} sup_{ξ ∈ U} (ȳ* + t_{k+1}(ŷ* − ȳ*))(f(x, ξ))

using x_k as starting point. If an optimal solution of (P(U), ȳ* + t_{k+1}(ŷ* − ȳ*)) cannot be found for t > t_k, then go to Step 2. Otherwise, go to Step 6.

Step 6: The point x_{k+1} is evaluated by the decision-maker. If it is accepted, then Stop. Output: x_{k+1}. Otherwise, go to Step 7.

Step 7: If t_{k+1} = 1, then go to Step 2. Otherwise, set k = k + 1 and go to Step 5.

Remark 8 In the interactive procedure in Algorithm 2 we use a surrogate one-parametric optimization problem, so that a systematic generation of solutions is possible.

3.2 ⪯^l_C-Robustness

In this section we use the l-type set relation ⪯^l_Q as in Definition 3 with Q = C, Q = C∖{0} and Q = int C, respectively, instead of ⪯^l_C:
A ⪯^l_Q B :⇔ A + Q ⊇ B,

where A, B ⊆ Y are arbitrarily chosen sets. If we are dealing with Q = int C, we suppose int C ≠ ∅. Using this notation we derive the new concept of ⪯^l_Q-robustness, defined analogously to ⪯^u_Q-robustness (Definition 6).

Definition 7 Given an uncertain multi-objective optimization problem (P(U)), a solution x₀ ∈ 𝒳 is called ⪯^l_Q-robust if there is no x̄ ∈ 𝒳∖{x₀} such that
f_U(x̄) ⪯^l_Q f_U(x₀).

The ⪯^l_Q-robustness (with Q = C, Q = C∖{0} and Q = int C, respectively) can be interpreted as an optimistic approach. The following example illustrates this concept for the case Q = C.

Remark 9 In Figure 2, x is ⪯^l_C-robust, while it is not ⪯^u_C-robust.
Figure 2

x is ⪯^l_C-robust.

The ⪯^l_Q-robustness is an alternative tool for the decision maker for obtaining solutions of another type to an uncertain multi-objective optimization problem. This rather optimistic approach focuses on the lower bound of a set f_U(x̄) for the comparison with another set f_U(x₀). In particular, in the case Q = C, a point x₀ ∈ 𝒳 is ⪯^l_C-robust if there is no other point x̄ ∈ 𝒳 such that f_U(x₀) is a subset of f_U(x̄) + C. Contrary to the ⪯^u_Q-robustness approach, the ⪯^l_Q-robustness (with Q = C, Q = C∖{0} and Q = int C, respectively) is hence not a worst-case concept; the decision maker is not considered to be risk averse but risk affine. This optimistic concept thus hedges against perturbations in the best-case scenarios.

For calculating ⪯^l_Q-robust solutions, the weighted sum scalarization is again helpful, but in order to compute ⪯^l_Q-robust solutions to (P(U)), we define a new weighted sum problem in a general setting:

Let y* ∈ C*∖{0} (y* ∈ C#, respectively). Consider the weighted sum scalarization problem

(P(U), y*)_opt   min_{x ∈ 𝒳} inf_{ξ ∈ U} y*(f(x, ξ)).

Theorem 2 Consider an uncertain multi-objective optimization problem (P(U)). The following statements hold:
  1. (a)

     If x₀ is a unique optimal solution of (P(U), y*)_opt for some y* ∈ C*∖{0}, then x₀ is a ⪯^l_C-robust solution for (P(U)).

  2. (b)

     If x₀ is an optimal solution of (P(U), y*)_opt for some y* ∈ C# and min_{ξ ∈ U} y*(f(x, ξ)) exists for all x ∈ 𝒳, then x₀ is a ⪯^l_{C∖{0}}-robust solution for (P(U)).

  3. (c)

     If x₀ is an optimal solution of (P(U), y*)_opt for some y* ∈ C*∖{0} and min_{ξ ∈ U} y*(f(x, ξ)) exists for all x ∈ 𝒳, then x₀ is a ⪯^l_{int C}-robust solution for (P(U)).

Proof Suppose x₀ is not ⪯^l_Q-robust for Q = C (Q = C∖{0}, Q = int C, respectively). Consequently, there exists an x̄ ∈ 𝒳∖{x₀} such that f_U(x̄) + Q ⊇ f_U(x₀) for Q = C (Q = C∖{0}, Q = int C, respectively). That is equivalent to
∀ξ ∈ U ∃η ∈ U: f(x₀, ξ) ∈ f(x̄, η) + Q ⇔ ∀ξ ∈ U ∃η ∈ U: f(x̄, η) − f(x₀, ξ) ∈ −Q.
(6)
Now choose y* ∈ C*∖{0} for Q = C (y* ∈ C# for Q = C∖{0}, y* ∈ C*∖{0} for Q = int C, respectively) arbitrary but fixed. Hence, we obtain from (6)
∀ξ ∈ U ∃η ∈ U: y*(f(x̄, η)) ≤ (<, <, respectively) y*(f(x₀, ξ))
⟹ ∀ξ ∈ U: inf_{η ∈ U} y*(f(x̄, η)) ≤ (<, <, respectively) y*(f(x₀, ξ))
⟹ inf_{η ∈ U} y*(f(x̄, η)) ≤ (<, <, respectively) inf_{ξ ∈ U} y*(f(x₀, ξ)),

in contradiction to the assumptions. □

Based on these results, we are able to present the following algorithm that computes $(C^l/C \setminus \{0\}^l/\operatorname{int} C^l)$-robust solutions to $(P(U))$.

Algorithm 3 Deriving $(C^l/C \setminus \{0\}^l/\operatorname{int} C^l)$-robust solutions for $(P(U))$ based on weighted sum scalarization:

Input & Steps 1-5: Analogous to Algorithm 1, only replacing $(P(U)_{y^*})$ by $(P(U)^{\mathrm{opt}}_{y^*})$ and replacing $\max_{\xi \in U} y^* \circ f(x^0,\xi)$ by $\min_{\xi \in U} y^* \circ f(x^0,\xi)$.
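For a finite feasible set and a finite uncertainty set, Theorem 2(a) suggests a direct brute-force implementation of $(P(U)^{\mathrm{opt}}_{y^*})$. The following is a minimal sketch assuming $C = \mathbb{R}^m_+$, so that $y^*$ reduces to a weight vector with positive components; all function names and the toy data are illustrative, not taken from the paper.

```python
# Sketch of the weighted sum scalarization (P(U)^opt_{y*}) for C^l-robust
# solutions, assuming a FINITE feasible set X, a FINITE uncertainty set U,
# and C = R^m_+ (so y* is a vector with positive components).
# All names and data are illustrative assumptions.

def weighted_inf(f, x, U, y):
    """inf over xi in U of y* applied to f(x, xi); a min, since U is finite."""
    return min(sum(yi * fi for yi, fi in zip(y, f(x, xi))) for xi in U)

def c_l_robust_candidates(f, X, U, y):
    """Return the minimizers of (P(U)^opt_{y*}); by Theorem 2(a),
    a UNIQUE minimizer is a C^l-robust solution of (P(U))."""
    values = {x: weighted_inf(f, x, U, y) for x in X}
    best = min(values.values())
    return [x for x, v in values.items() if v == best]

# Toy instance: two objectives, scenario xi perturbs the outcome vector.
f = lambda x, xi: (x[0] + xi * x[0], x[1] - xi)
X = [(1.0, 2.0), (2.0, 1.0), (3.0, 3.0)]
U = [-0.5, 0.0, 0.5]
y = (1.0, 1.0)
print(c_l_robust_candidates(f, X, U, y))  # unique minimizer: [(2.0, 1.0)]
```

Since the minimizer is unique here, Theorem 2(a) certifies it as $C^l$-robust; with several minimizers, the theorem gives no conclusion for $Q = C$.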

The next algorithm computes $(C^l/C \setminus \{0\}^l/\operatorname{int} C^l)$-robust solutions via weighted sum scalarization by altering the weights:

Algorithm 4 Calculating a single desired $(C^l/C \setminus \{0\}^l/\operatorname{int} C^l)$-robust solution for $(P(U))$ based on weighted sum scalarization:

Input & Steps 1-7: Analogous to Algorithm 2, only replacing $(P(U)_{\bar y^*})$ by $(P(U)^{\mathrm{opt}}_{\bar y^*})$, $\max_{\xi \in U} y^* \circ f(x^0,\xi)$ by $\min_{\xi \in U} y^* \circ f(x^0,\xi)$, and $(P(U)_{\bar y^* + t_{k+1}(\hat y^* - \bar y^*)})$ by $(P(U)^{\mathrm{opt}}_{\bar y^* + t_{k+1}(\hat y^* - \bar y^*)})$.

3.3 $C^s$-Robustness

Now, we use the set less order relation $\preceq^s_Q$ with $Q = C$, $Q = C \setminus \{0\}$ and $Q = \operatorname{int} C$, respectively (compare Definition 1), for arbitrarily chosen sets $A, B \subseteq Y$:

$$A \preceq^s_Q B :\iff A \subseteq B - Q \ \text{ and } \ B \subseteq A + Q.$$

If we are dealing with $Q = \operatorname{int} C$ we suppose $\operatorname{int} C \neq \emptyset$. We can now introduce the concept of $Q^s$-robustness (with $Q = C$, $Q = C \setminus \{0\}$ and $Q = \operatorname{int} C$, respectively):

Definition 8 A solution $x^0$ of $(P(U))$ is called $(C^s/C \setminus \{0\}^s/\operatorname{int} C^s)$-robust if there is no $\bar x \in X \setminus \{x^0\}$ such that

$$f_U(\bar x) \preceq^s_Q f_U(x^0)$$

for $Q = C$ ($Q = C \setminus \{0\}$, $Q = \operatorname{int} C$, respectively).
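For finite sets in $\mathbb{R}^m$ with $Q = C = \mathbb{R}^m_+$, the two inclusions of the set less order relation can be checked directly by componentwise comparison. The sketch below is restricted to this illustrative special case, not the general cone setting of the paper; the names and data are hypothetical.

```python
# Checking the set less order relation of Definition 8 on FINITE sets,
# assuming Y = R^m and Q = C = R^m_+ (componentwise order).

def leq(a, b):
    """a <= b componentwise, i.e. b - a lies in C = R^m_+."""
    return all(ai <= bi for ai, bi in zip(a, b))

def set_less(A, B):
    """A set-less B  iff  A is a subset of B - C  and  B is a subset of A + C."""
    upper = all(any(leq(a, b) for b in B) for a in A)  # A subset of B - C
    lower = all(any(leq(a, b) for a in A) for b in B)  # B subset of A + C
    return upper and lower

A = [(1, 2), (2, 1)]
B = [(2, 3), (3, 2)]
print(set_less(A, B))  # True: every point of A lies below some point of B,
                       # and every point of B lies above some point of A
```

Note that `set_less(B, A)` fails here, since no point of $B$ lies below any point of $A$.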

Remark 10 Figure 3 shows an element $x \in X$ that is $C^s$-robust, while it is not $\operatorname{int} C^u$-robust.
Figure 3

$x$ is $C^s$-robust.

Remark 11 Note that a $C^l$-robust solution is also $C^s$-robust by definition. The same assertion holds for a $C^u$-robust solution.

The concept of $C^s$-robustness can be interpreted in the following way: in a situation where it is not clear whether one should follow a risk affine or a risk averse strategy (e.g., the decision maker is not available or first wants to get a feeling for the variety of the solutions), this concept might be helpful, as it produces solutions that reflect both strategies. It can therefore serve as a pre-selection before committing to a definite strategy.

Computing $C^s$-robust solutions is possible with the help of the following biobjective optimization problem

$$(P(U)^{\mathrm{biobj}}_{y^*}) \quad h(x) := \begin{pmatrix} \inf_{\xi \in U} y^* \circ f(x,\xi) \\ \sup_{\xi \in U} y^* \circ f(x,\xi) \end{pmatrix} \to \operatorname{v-min}_{x \in X}$$

with $y^* \in C^* \setminus \{0\}$ ($y^* \in C^\#$, respectively). For $(P(U)^{\mathrm{biobj}}_{y^*})$, we use the solution concept of weak Pareto efficiency: an element $x^0 \in X$ is called weakly Pareto efficient for $(P(U)^{\mathrm{biobj}}_{y^*})$ if

$$h(X) \cap \bigl(h(x^0) - \operatorname{int} \mathbb{R}^2_+\bigr) = \emptyset.$$

Furthermore, a point $x^0 \in X$ is called strictly Pareto efficient for $(P(U)^{\mathrm{biobj}}_{y^*})$ if

$$h(X \setminus \{x^0\}) \cap \bigl(h(x^0) - \mathbb{R}^2_+\bigr) = \emptyset.$$
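On finite data, these two Pareto efficiency tests reduce to componentwise comparisons of the pairs $h(x)$. A minimal sketch, again assuming $C = \mathbb{R}^m_+$ and finite $X$ and $U$; all names and the toy instance are illustrative assumptions.

```python
# Weak and strict Pareto efficiency for the biobjective problem
# (P(U)^biobj_{y*}) on finite data; h(x) = (inf, sup) of the scalarized
# values over the uncertainty set. Illustrative special case only.

def h(f, x, U, y):
    vals = [sum(yi * fi for yi, fi in zip(y, f(x, xi))) for xi in U]
    return (min(vals), max(vals))

def weakly_pareto(f, X, U, y):
    """x0 such that no x satisfies h(x) < h(x0) in BOTH components."""
    H = {x: h(f, x, U, y) for x in X}
    return [x0 for x0 in X
            if not any(H[x][0] < H[x0][0] and H[x][1] < H[x0][1]
                       for x in X)]

def strictly_pareto(f, X, U, y):
    """x0 such that no OTHER x satisfies h(x) <= h(x0) componentwise."""
    H = {x: h(f, x, U, y) for x in X}
    return [x0 for x0 in X
            if not any(x != x0 and H[x][0] <= H[x0][0]
                       and H[x][1] <= H[x0][1] for x in X)]

# Toy instance: one weight vector, three candidate solutions.
f = lambda x, xi: (x[0] + xi, x[1])
X = [(0, 0), (1, -2), (2, 2)]
U = [-1, 1]
y = (1, 1)
print(weakly_pareto(f, X, U, y), strictly_pareto(f, X, U, y))
```

By Theorem 3 below, the strictly Pareto efficient points found this way are $C^s$-robust candidates for $(P(U))$.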

We prove the following theorem.

Theorem 3 Consider an uncertain multi-objective optimization problem $(P(U))$. The following statements hold.

(a) If $x^0$ is strictly Pareto efficient for problem $(P(U)^{\mathrm{biobj}}_{y^*})$ for some $y^* \in C^* \setminus \{0\}$, then $x^0$ is $C^s$-robust.

(b) If $x^0$ is weakly Pareto efficient for problem $(P(U)^{\mathrm{biobj}}_{y^*})$ for some $y^* \in C^* \setminus \{0\}$, and $\min_{\xi \in U} y^* \circ f(x,\xi)$ and $\max_{\xi \in U} y^* \circ f(x,\xi)$ exist for all $x \in X$ and the chosen weight $y^* \in C^* \setminus \{0\}$, then $x^0$ is $\operatorname{int} C^s$-robust.

(c) If $x^0$ is weakly Pareto efficient for problem $(P(U)^{\mathrm{biobj}}_{y^*})$ for some $y^* \in C^\#$, and $\min_{\xi \in U} y^* \circ f(x,\xi)$ and $\max_{\xi \in U} y^* \circ f(x,\xi)$ exist for all $x \in X$ and the chosen weight $y^* \in C^\#$, then $x^0$ is $C \setminus \{0\}^s$-robust.
Proof Let $x^0$ be strictly Pareto efficient (weakly Pareto efficient, weakly Pareto efficient, respectively) for problem $(P(U)^{\mathrm{biobj}}_{y^*})$ with some $y^* \in C^* \setminus \{0\}$ ($y^* \in C^* \setminus \{0\}$, $y^* \in C^\#$, respectively), i.e., there is no $\bar x \in X \setminus \{x^0\}$ such that

$$\inf_{\xi \in U} y^* \circ f(\bar x,\xi) \le (<,\ <,\ \text{respectively})\ \inf_{\xi \in U} y^* \circ f(x^0,\xi) \quad\text{and}\quad \sup_{\xi \in U} y^* \circ f(\bar x,\xi) \le (<,\ <,\ \text{respectively})\ \sup_{\xi \in U} y^* \circ f(x^0,\xi).$$

Now suppose $x^0$ is not $(C^s/\operatorname{int} C^s/C \setminus \{0\}^s)$-robust. Then there exists an $\bar x \in X \setminus \{x^0\}$ such that

$$f_U(x^0) \subseteq f_U(\bar x) + Q \quad\text{and}\quad f_U(\bar x) \subseteq f_U(x^0) - Q$$

for $Q = C$ ($Q = \operatorname{int} C$, $Q = C \setminus \{0\}$, respectively). That implies

$$\exists \bar x \in X \setminus \{x^0\}\ \forall \xi_1,\xi_2 \in U\ \exists \eta_1,\eta_2 \in U:\ f(x^0,\xi_1) \in f(\bar x,\eta_1) + Q \quad\text{and}\quad f(\bar x,\xi_2) \in f(x^0,\eta_2) - Q \tag{7}$$

for $Q = C$ ($Q = \operatorname{int} C$, $Q = C \setminus \{0\}$, respectively). Choose now $y^* \in C^* \setminus \{0\}$ ($y^* \in C^* \setminus \{0\}$, $y^* \in C^\#$, respectively) as in problem $(P(U)^{\mathrm{biobj}}_{y^*})$. We obtain from (7)

$$\begin{aligned}
\exists \bar x \in X \setminus \{x^0\}\ \forall \xi_1,\xi_2 \in U\ \exists \eta_1,\eta_2 \in U:\ & y^* \circ f(\bar x,\eta_1) \le (<,\ <,\ \text{respectively})\ y^* \circ f(x^0,\xi_1)\\
\text{and}\ & y^* \circ f(\bar x,\xi_2) \le (<,\ <,\ \text{respectively})\ y^* \circ f(x^0,\eta_2)\\
\Longrightarrow\quad & \inf_{\xi \in U} y^* \circ f(\bar x,\xi) \le (<,\ <,\ \text{respectively})\ \inf_{\xi \in U} y^* \circ f(x^0,\xi)\\
\text{and}\ & \sup_{\xi \in U} y^* \circ f(\bar x,\xi) \le (<,\ <,\ \text{respectively})\ \sup_{\xi \in U} y^* \circ f(x^0,\xi).
\end{aligned}$$

The last two strict inequalities hold because the minimum and maximum exist. But this is a contradiction to the assumption. □

Based on these observations, we can formulate the following algorithm for computing $C^s$-robust solutions to $(P(U))$.

Algorithm 5 Computing $(C^s/C \setminus \{0\}^s/\operatorname{int} C^s)$-robust solutions using a family of problems $(P(U)^{\mathrm{biobj}}_{y^*})$:

Input & Steps 1-3: Analogous to Algorithm 1.

Step 4: Find a set of weakly Pareto efficient solutions $SOL_{we}(y^*)$ of $(P(U)^{\mathrm{biobj}}_{y^*})$.

Step 5: If $SOL_{we}(y^*) = \emptyset$, then go to Step 2.

Step 6: Choose $\bar x \in SOL_{we}(y^*)$. Set $SOL_{we}(y^*) := SOL_{we}(y^*) \setminus \{\bar x\}$.

(a) If $\bar x$ is a strictly Pareto efficient solution of $(P(U)^{\mathrm{biobj}}_{y^*})$, then $\bar x$ is $C^s$-robust for $(P(U))$; thus set $\mathrm{Opt}_C := \mathrm{Opt}_C \cup \{\bar x\}$.

(b) If $\bar x$ is weakly Pareto efficient for problem $(P(U)^{\mathrm{biobj}}_{y^*})$, $y^* \in C^\#$, and $\min_{\xi \in U} y^* \circ f(x,\xi)$ and $\max_{\xi \in U} y^* \circ f(x,\xi)$ exist for all $x \in X$ and the chosen weight $y^* \in C^\#$, then $\bar x$ is $C \setminus \{0\}^s$-robust for $(P(U))$; thus set $\mathrm{Opt}_{C \setminus \{0\}} := \mathrm{Opt}_{C \setminus \{0\}} \cup \{\bar x\}$.

(c) If $\bar x$ is a weakly Pareto efficient solution of $(P(U)^{\mathrm{biobj}}_{y^*})$ and $\min_{\xi \in U} y^* \circ f(x,\xi)$ and $\max_{\xi \in U} y^* \circ f(x,\xi)$ exist for all $x \in X$, then $\bar x$ is $\operatorname{int} C^s$-robust for $(P(U))$; thus set $\mathrm{Opt}_{\operatorname{int} C} := \mathrm{Opt}_{\operatorname{int} C} \cup \{\bar x\}$.

Step 7: Go to Step 5.

In the following we present an algorithm that computes $C^s$-robust solutions while varying the weights in the vector of objectives of problem $(P(U)^{\mathrm{biobj}}_{y^*})$.

Algorithm 6 Computing $(C^s/C \setminus \{0\}^s/\operatorname{int} C^s)$-robust solutions by altering the weights:

Input & Steps 1-3 & Steps 5-7: Analogous to Algorithm 2, only replacing $(P(U)_{\bar y^*})$ by $(P(U)^{\mathrm{biobj}}_{\bar y^*})$ and $(P(U)_{\bar y^* + t_{k+1}(\hat y^* - \bar y^*)})$ by $(P(U)^{\mathrm{biobj}}_{\bar y^* + t_{k+1}(\hat y^* - \bar y^*)})$.

Step 4: Analogous to Step 4 of Algorithm 5.

3.4 Alternative set less ordered robustness

Another way of combining the u- and l-type set relations is the alternative set less order relation:

Definition 9 (Alternative set less order relation (compare Ide and Köbis [29]))

Let $C \subseteq Y$ be a proper closed convex and pointed cone. Furthermore, let $A, B \subseteq Y$ be arbitrarily chosen sets. Then the alternative set less order relation is defined by

$$A \preceq^a_C B :\iff A \preceq^u_C B \ \text{ or } \ A \preceq^l_C B.$$

Based on this definition we can now define the concept of $C^a$-robustness for general cones:

Definition 10 A solution $x^0$ of $(P(U))$ is called $(C^a/C \setminus \{0\}^a/\operatorname{int} C^a)$-robust if there is no $\bar x \in X \setminus \{x^0\}$ such that

$$f_U(\bar x) \preceq^a_Q f_U(x^0)$$

for $Q = C$ ($Q = C \setminus \{0\}$, $Q = \operatorname{int} C$, respectively).

The following example illustrates $C^a$-robust solutions.

Remark 12 In Figure 4, both $x$ and $\bar x$ are $C^a$-robust.
Figure 4

Both $x$ and $\bar x$ are $C^a$-robust.

The next lemma follows directly from the definitions.

Lemma 2 A solution of $(P(U))$ is $C^a$-robust if and only if it is $C^l$-robust and $C^u$-robust.

As this lemma shows, the concept of $C^a$-robustness is rather restrictive: only solutions which are both $C^u$-robust and $C^l$-robust, and thus reflect both a risk averse and a risk affine strategy, are $C^a$-robust. Therefore, this concept suits a decision maker who does not want to make any mistake in terms of either the best or the worst cases. Such an approach is very demanding, and only very few solutions can be expected to fulfill these conditions.

Due to Lemma 2, from Algorithms 1 and 3 we can deduce the following algorithm for calculating $C^a$-robust solutions to $(P(U))$.

Algorithm 7 Deriving $(C^a/C \setminus \{0\}^a/\operatorname{int} C^a)$-robust solutions to $(P(U))$:

Input: Uncertain multi-objective problem $(P(U))$, solution sets $\mathrm{Opt}_{C^a} = \mathrm{Opt}_{C \setminus \{0\}^a} = \mathrm{Opt}_{\operatorname{int} C^a} = \emptyset$.

Step 1: Compute a set of $(C^l/\operatorname{int} C^l/C \setminus \{0\}^l)$-robust solutions ($\mathrm{Opt}_{C^l}$, $\mathrm{Opt}_{\operatorname{int} C^l}$, $\mathrm{Opt}_{C \setminus \{0\}^l}$) using Algorithm 3 or 4.

Step 2: Compute a set of $(C^u/\operatorname{int} C^u/C \setminus \{0\}^u)$-robust solutions ($\mathrm{Opt}_{C^u}$, $\mathrm{Opt}_{\operatorname{int} C^u}$, $\mathrm{Opt}_{C \setminus \{0\}^u}$) using Algorithm 1 or 2.

Output: Set of $(C^a/\operatorname{int} C^a/C \setminus \{0\}^a)$-robust solutions

$$\mathrm{Opt}_{C^a} = \mathrm{Opt}_{C^u} \cap \mathrm{Opt}_{C^l},\qquad \mathrm{Opt}_{\operatorname{int} C^a} = \mathrm{Opt}_{\operatorname{int} C^u} \cap \mathrm{Opt}_{\operatorname{int} C^l},\qquad \mathrm{Opt}_{C \setminus \{0\}^a} = \mathrm{Opt}_{C \setminus \{0\}^u} \cap \mathrm{Opt}_{C \setminus \{0\}^l}.$$
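By Lemma 2, Algorithm 7 amounts to a plain set intersection once the u- and l-robust solution sets have been computed; for finite solution sets this is a one-liner. The sets below are hypothetical placeholders for the outputs of Steps 1 and 2.

```python
# Algorithm 7 on finite solution sets: C^a-robust solutions are exactly
# the common elements of the C^u-robust and C^l-robust solution sets.
# The solution labels are illustrative placeholders, not from the paper.

opt_u = {"x1", "x2", "x3"}   # hypothetical C^u-robust solutions (Step 2)
opt_l = {"x2", "x4"}         # hypothetical C^l-robust solutions (Step 1)

opt_a = opt_u & opt_l        # C^a-robust solutions (Lemma 2)
print(opt_a)                 # {'x2'}
```

The same intersection applies verbatim to the $\operatorname{int} C$ and $C \setminus \{0\}$ variants.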

3.5 Further relationships between the concepts

From Remark 11 we can see that every $C^u$-robust solution and every $C^l$-robust solution is also a $C^s$-robust solution. The converse does not hold: the example in Figure 5 shows a solution that is $C^s$-robust but neither $C^u$-robust nor $C^l$-robust.
Figure 5

$x$ is $C^s$-robust, but neither $C^u$-robust nor $C^l$-robust.

We summarize the relationship between the various robustness concepts in Figure 6.
Figure 6

Scheme of solutions to an uncertain multi-objective optimization problem.

4 Conclusions

In the following we explain that the algorithms presented in Section 3 can be used for solving special classes of set-valued optimization problems.

Having a close look at all the concepts of robustness from Section 3, we see that each of them is in fact a set-valued optimization problem.

Consider a set-valued optimization problem of the form

$$(SP) \quad \preceq\text{-minimize } F(x) \quad \text{subject to } x \in X,$$

with some given preorder $\preceq$ and a set-valued objective map $F: X \rightrightarrows Y$. If the preorder $\preceq$ is given by $\preceq^l_C$, $\preceq^u_C$, or $\preceq^s_C$ with some proper closed convex pointed cone $C \subseteq Y$, and $F(x)$ can be parametrized by parameters $\xi \in U$ with some uncertainty set $U$ in the way that

$$F(x) := f_U(x) \quad \text{for all } x \in X,$$

where $f_U(x) = \{f(x,\xi) \mid \xi \in U\}$ and $f: X \times U \to Y$, then the set-valued optimization problem $(SP)$ is equivalent to finding $\preceq$-robust solutions to the uncertain multi-objective problem $(P(U))$ and can therefore be solved by using one of the respective algorithms presented in Section 3.
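This reduction can be illustrated on finite data: assuming $Y = \mathbb{R}^m$, $C = \mathbb{R}^m_+$ and finite images $F(x)$, finding the solutions of $(SP)$ with respect to $\preceq^u_C$ is a brute-force search over the corresponding robustness condition. The names and data below are illustrative assumptions, not the paper's notation.

```python
# Sketch: solving a finite set-valued problem (SP) with the u-type order
# by checking the robustness condition directly (C = R^m_+, finite images).

def leq(a, b):
    """a <= b componentwise, i.e. b - a lies in C = R^m_+."""
    return all(ai <= bi for ai, bi in zip(a, b))

def u_less(A, B):
    """A u-less B  iff  A is a subset of B - C."""
    return all(any(leq(a, b) for b in B) for a in A)

def u_robust(F, X):
    """x0 such that no other x satisfies F(x) u-less F(x0)."""
    return [x0 for x0 in X
            if not any(x != x0 and u_less(F[x], F[x0]) for x in X)]

# Toy set-valued map F, given explicitly as finite image sets.
F = {"a": [(1, 1)], "b": [(2, 2)], "c": [(0, 3), (3, 0)]}
print(u_robust(F, list(F)))  # ['a', 'c']: "b" is dominated by "a"
```

Here $F(\text{a}) \preceq^u_C F(\text{b})$, so "b" is discarded, while "a" and "c" are incomparable and both remain.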

We revealed strong connections between set-valued optimization and uncertain multi-objective optimization. Furthermore, we derived our results in a more general setting than in [18] and [29]. In particular, we provided solution algorithms for a certain class of set-valued optimization problems. It seems possible to extend this class of problems to a more general one; this is left for future research.

Moreover, this paper makes very clear that finding robust solutions to uncertain multi-objective optimization problems can be interpreted as an application of set-valued optimization. Thus, robust solutions to uncertain multi-objective optimization problems can be obtained by using the solution techniques from set-valued optimization. Formulating concrete algorithms of this kind is another topic for future research.

Declarations

Acknowledgements

Supported by DFG RTG 1703 ‘Resource Efficiency in Interorganizational Networks’.

Authors’ Affiliations

(1)
University Göttingen
(2)
University Erlangen-Nürnberg
(3)
Shimane University
(4)
Martin-Luther-University Halle-Wittenberg

References

  1. Stewart T, Bandte O, Braun H, Chakraborti N, Ehrgott M, Göbelt M, Jin Y, Nakayama H, Poles S, Di Stefano D: Real-world applications of multiobjective optimization. Lecture Notes in Computer Science 5252. In Multiobjective Optimization: Interactive and Evolutionary Approaches. Edited by: Branke J, Deb K, Miettinen K, Slowinski R. Springer, Berlin; 2008:285–327.View ArticleGoogle Scholar
  2. Scott EM, Saltelli A, Sörensen T: Practical experience in applying sensitivity and uncertainty analysis. Wiley Ser. Probab. Stat. In Sensitivity Analysis. Wiley, Chichester; 2000:267–274.Google Scholar
  3. Birge JR, Louveaux FV: Introduction to Stochastic Programming. Springer, New York; 1997.MATHGoogle Scholar
  4. Kouvelis P, Yu G: Robust Discrete Optimization and Its Applications. Kluwer Academic, Dordrecht; 1997.View ArticleMATHGoogle Scholar
  5. Ben-Tal A, El Ghaoui L, Nemirovski A: Robust Optimization. Princeton University Press, Princeton; 2009.MATHView ArticleGoogle Scholar
  6. Fischetti M, Monaci M: Light robustness. Lecture Notes in Computer Science 5868. In Robust and Online Large-Scale Optimization. Edited by: Ahuja RK, Moehring R, Zaroliagis C. Springer, Berlin; 2009:61–84.View ArticleGoogle Scholar
  7. Liebchen C, Lübbecke M, Möhring RH, Stiller S: The concept of recoverable robustness, linear programming recovery, and railway applications. Lecture Note on Computer Science 5868. In Robust and Online Large-Scale Optimization. Edited by: Ahuja RK, Möhring RH, Zaroliagis CD. Springer, Berlin; 2009.Google Scholar
  8. Klamroth K, Köbis E, Schöbel A, Tammer C: A unified approach for different concepts of robustness and stochastic programming via nonlinear scalarizing functionals. Optimization 2013, 62: 649–671. 10.1080/02331934.2013.769104View ArticleMathSciNetGoogle Scholar
  9. Goerigk M, Schöbel A: A scenario-based approach for robust linear optimization. Lecture Notes in Computer Science. In Proceedings of the 1st International ICST Conference on Practice and Theory of Algorithms in (Computer) Systems (TAPAS). Springer, Berlin; 2011:139–150.View ArticleGoogle Scholar
  10. Soyster AL: Convex programming with set-inclusive constraints and applications to inexact linear programming. Oper. Res. 1973, 21: 1154–1157. 10.1287/opre.21.5.1154View ArticleMathSciNetGoogle Scholar
  11. Ben-Tal A, Nemirovski A: Robust convex optimization. Math. Oper. Res. 1998, 23(4):769–805. 10.1287/moor.23.4.769View ArticleMathSciNetGoogle Scholar
  12. Deb K, Gupta H: Introducing robustness in multi-objective optimization. Evol. Comput. 2006, 14(4):463–494. 10.1162/evco.2006.14.4.463View ArticleGoogle Scholar
  13. Branke J: Creating robust solutions by means of evolutionary algorithms. Lecture Notes in Computer Science 1498. In Parallel Problem Solving from Nature - PPSNV. Edited by: Eiben EA, Bäck T, Schenauer M, Schwefel H-P. Springer, Berlin; 1998:119–128.View ArticleGoogle Scholar
  14. Barrico C, Antunes CH: Robustness analysis in multi-objective optimization using a degree of robustness concept. IEEE Congress on Evolutionary Computation 2006, 1887–1892.Google Scholar
  15. Steponavičė, I, Miettinen, K: Survey on multiobjective robustness for simulation-based optimization. Talk at the 21st International Symposium on Mathematical Programming, August 19–24 2012, Berlin, Germany (2012)Google Scholar
  16. Witting, K: Numerical algorithms for the treatment of parametric multiobjective optimization problems and applications. PhD thesis, Universität Paderborn, Paderborn (2012)Google Scholar
  17. Kuroiwa D, Lee GM: On robust multiobjective optimization. Vietnam J. Math. 2012, 40(2–3):305–317.MathSciNetGoogle Scholar
  18. Ehrgott M, Ide J, Schöbel A: Minmax robustness for multi-objective optimization problems. Eur. J. Oper. Res. 2014. 10.1016/j.ejor.2014.03.013Google Scholar
  19. Kuroiwa, D, Lee, GM: On robust convex multiobjective optimization. J. Nonlinear Convex Anal. (2013, accepted)Google Scholar
  20. Eichfelder G, Jahn J: Vector optimization problems and their solution concepts. Vector Optim. In Recent Developments in Vector Optimization. Springer, Berlin; 2012:1–27.View ArticleGoogle Scholar
  21. Kuroiwa D: Some duality theorems of set-valued optimization with natural criteria. In Proceedings of the International Conference on Nonlinear Analysis and Convex Analysis. World Scientific, Singapore; 1999:221–228.Google Scholar
  22. Kuroiwa D: The natural criteria in set-valued optimization. Sūrikaisekikenkyūsho Kōkyūroku 1998, 1031: 85–90. Research on nonlinear analysis and convex analysis (Kyoto, 1997)MathSciNetGoogle Scholar
  23. Nishnianidze ZG: Fixed points of monotone multivalued operators. Soobshch. Akad. Nauk Gruzin. SSR 1984, 114(3):489–491.MathSciNetGoogle Scholar
  24. Young RC: The algebra of many-valued quantities. Math. Ann. 1931, 104(1):260–290. 10.1007/BF01457934View ArticleMathSciNetGoogle Scholar
  25. Takahashi W: Nonlinear Functional Analysis. Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama; 2000.MATHGoogle Scholar
  26. Takahashi W: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama; 2009.MATHGoogle Scholar
  27. Aoyama K, Kohsaka F, Takahashi W: Proximal point methods for monotone operators in Banach spaces. Taiwan. J. Math. 2011, 15(1):259–281.MathSciNetGoogle Scholar
  28. Takahashi W: Nonlinear mappings in equilibrium problems and an open problem in fixed point theory. In Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama; 2010:177–197.Google Scholar
  29. Ide, J, Köbis, E: Concepts of robustness for multi-objective optimization problems based on set order relations (2013)Google Scholar
  30. Kuroiwa D: On set-valued optimization. Nonlinear Anal. 2001, 47(2):1395–1400. 10.1016/S0362-546X(01)00274-7View ArticleMathSciNetGoogle Scholar
  31. Jahn J, Ha TXD: New order relations in set optimization. J. Optim. Theory Appl. 2011, 148(2):209–236. 10.1007/s10957-010-9752-8View ArticleMathSciNetGoogle Scholar
  32. Kuroiwa D: Existence theorems of set optimization with set-valued maps. J. Inf. Optim. Sci. 2003, 24(1):73–84.MathSciNetGoogle Scholar
  33. Ehrgott M: Multicriteria Optimization. 2nd edition. Springer, Berlin; 2005.MATHGoogle Scholar
  34. Göpfert A, Riahi H, Tammer C, Zălinescu C: Variational Methods in Partially Ordered Spaces. Springer, New York; 2003.MATHGoogle Scholar

Copyright

© Ide et al.; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.