History: Q-MIP and others
(Jan Wolf, Jan. 2017)
QLP / QIP: In [194] a QLP formulation was used to model
problems from the domain of real-time scheduling [57], with the
goal of reducing the inflexibility of static scheduling in hard
real-time systems. One of the fundamental features of real-time
systems is the uncertain nature of the execution times of tasks
and the presence of complex timing constraints among them. In
traditional models, variable execution times are represented by
fixed worst-case values, and relationships are limited to those
that can be expressed by precedence graphs [206]. Using QLPs
as a modeling tool makes it possible to schedule tasks with
varying execution times, represented by universally quantified
variables, and to express complex timing constraints among tasks
as linear constraints.
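To illustrate the modeling idea (a minimal sketch of our own, not a formulation taken from [194]; the tasks, bounds and deadline below are hypothetical), consider two tasks where the execution time p_1 of the first task is uncertain and the second task must meet a deadline d_2:

\[
\exists\, s_1 \;\; \forall\, p_1 \in [p_1^{\min}, p_1^{\max}] \;\; \exists\, s_2 \, : \quad
s_2 \;\ge\; s_1 + p_1, \qquad s_2 + p_2 \;\le\; d_2 .
\]

In this parametric variant the start time s_2 may depend on the observed value of p_1; placing the quantifier of s_2 in front of p_1 would yield the static variant, in which the whole schedule must be fixed before the execution times are known.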
In this context, a first polynomial time algorithm to solve
QLPs based on quantifier elimination techniques was proposed
in [108] for a restricted class of constraints [194], which
are a subset of totally unimodular constraint systems [197].
For arbitrary linear constraints the worst-case time complexity
of the proposed algorithm is double exponential [197, 125].
Subramani proposed a framework to formalize scheduling problems
in real-time systems using QLPs in [207]. There he focused on
the variability in execution times, on complex relationships
between tasks expressed by arbitrary linear constraints, and on
different scheduling schemes – static (∃∀-QLPs), co-static
(∀∃-QLPs) or parametric (alternating quantifier changes) –
depending on the information available when dispatching. A
detailed introduction to QLPs and methodologies to solve them is
available in [209, 147, 88]. QIPs were studied in [210], where a
number of special cases were analyzed from the perspective of
computational complexity. In that paper the authors discussed
conditions under which QIPs can be relaxed to QLPs without
altering the solution space.
Research in this context was also supported by the Deutsche
Forschungsgemeinschaft (DFG). The DFG-funded project
“Erweiterung mathematischer Optimierungsmethoden zur Lösung
PSPACE-vollständiger Probleme mit Hilfe quantifizierter linearer
Programme” (extension of mathematical optimization methods for
solving PSPACE-complete problems with the help of quantified
linear programs) at TU Darmstadt dealt with the development of
algorithms to solve QIPs based on extended LP and QLP techniques.
The DFG-funded Collaborative Research Centre (SFB) 805
“Beherrschung von Unsicherheit in lasttragenden Systemen des
Maschinenbaus” (control of uncertainty in load-carrying systems
of mechanical engineering) at TU Darmstadt deals with the control
of uncertainty in mechanical systems. In this context Q-MIPs have
been used to model and optimize process chains under uncertainty.
QIPs can also be applied in the context of game modeling. In [89]
the PSPACE-complete two-person game Gomoku was modeled with the
help of a QIP formulation; in [147] the model was adapted to the
PSPACE-complete two-person game Connect-6, resulting in a 0/1-QIP
that was solved with an extended variant of the algorithm
proposed in [88].
Recently, extensions of traditional QLPs were presented in [91],
where the standard quantification was extended to implications
of quantified linear systems, called quantified linear
implications (QLI). QLPs and QLIs that arise when universally
quantified variables are unbounded were studied in [185].
Whereas the above citations focus on analyzing various special
cases of the general problem, with a view to tractable
subclasses, our focus lies on general QLPs with two or more
stages, without any restrictions on the quantification sequence
or the constraint system. Furthermore, we are concerned with QLP
optimization problems, whereas the works cited above consider
quantified decision problems.
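To fix notation for the discussion below (generic form, notation ours), such a general QLP can be written with an arbitrary quantifier prefix over blocks of box-bounded variables, where the existential decision maker minimizes against the worst case chosen over the universal blocks:

\[
\min_{x^{1}} \;\max_{x^{2}} \;\min_{x^{3}} \cdots \; c^{\top}x
\qquad \text{s.t.} \qquad
\exists\, x^{1}\!\in\![l^{1},u^{1}] \;\; \forall\, x^{2}\!\in\![l^{2},u^{2}] \;\; \exists\, x^{3}\!\in\![l^{3},u^{3}] \cdots \; : \;\; A x \le b .
\]

Each block may react to the realizations of all preceding blocks, which is what distinguishes the multi-stage setting from a single min-max problem.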
Optimization Under Uncertainty has experienced rapid
development in both theory and practice since its beginnings
in the 1950s, starting with the seminal works of Beale [15],
Dantzig [72], Bellman [16], as well as Charnes and Cooper [65].
The particular importance of optimization under uncertainty was
also demonstrated in the seminal paper by Ben-Tal and Nemirovski
[23], where they showed in a detailed computational study that
even a small perturbation of the problem data can make the
nominal optimal solution completely meaningless from a practical
viewpoint. We quote from their case study:
“In real-world applications of linear programming, one cannot
ignore the possibility that a small uncertainty in the data can
make the usual optimal solution completely meaningless from a
practical viewpoint.”
Due to limited computational capabilities in the past, decision
models often replaced those uncertainties with averages or best
estimates, but recent computational advances have greatly
expanded the range of optimization techniques under uncertainty.
Today stochastic programming and robust optimization – two
fundamentally different approaches to optimization with data
uncertainty – are the most prominent modeling paradigms and
presently the subject of intense research in this field.
The main difference lies in the type of uncertainty handled by
the two methodologies: stochastic programming assumes the
uncertainty to follow a probability distribution, while robust
optimization, in contrast, is concerned with models that contain
deterministic, set-based uncertainties [36]. In the following we
give a brief introduction to stochastic programming and robust
optimization and compare them with quantified linear programming.
Stochastic Programming The approach for uncertainty
quantification used in stochastic programming (see, e.g.
[128, 174, 47, 189] and the references therein) is the
representation of random parameters in the input data (c, A, b)
by random variables. This approach has a long and active history
dating at least as far back as Dantzig's original paper [72] and
draws upon a variety of traditional operations research
techniques. During the last four decades a vast quantity of
literature on the subject has appeared (see, e.g. [218, 217] for
a stochastic programming bibliography maintained by Maarten van
der Vlerk). A variety of stochastic programming applications can
be found in [224]. The underlying optimization problem can be a
linear program, a (mixed) integer program, but also a nonlinear
program. Usually, stochastic programming models are furthermore
subdivided into recourse models and chance constraint models.
Recourse Models The more important and widely studied case, to
which most of the applications reported in the literature belong
and which has strong similarities to QLPs, is that of a
stochastic linear program with recourse, which first appeared in
Beale [15] and Dantzig [72]. These problems arise when some of
the decisions must be taken before the outcomes of some (or all)
random events are known, and some after they become known.
Recourse variables represent decisions that can be made on a
“wait and see” basis, after the uncertainty is resolved, and
they make it possible to adapt a solution to the specific outcome
observed. Stochastic programming problems with recourse can
essentially be divided into two-stage and multi-stage problems
(TSSLPs and MSSLPs). In the former, a set of initial decisions is
taken first, followed by a random event; recourse decisions,
which are based on this event, are taken afterwards. The
multi-stage problem accordingly consists of multiple stages, with
a random event occurring between consecutive stages.
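In its standard form (textbook notation along the lines of [47, 128], not a model specific to this text), the two-stage stochastic linear program with recourse reads

\[
\min_{x}\; c^{\top}x \;+\; \mathbb{E}_{\xi}\big[\,Q(x,\xi)\,\big]
\quad \text{s.t.} \quad Ax = b, \;\; x \ge 0,
\qquad \text{where} \quad
Q(x,\xi) \;=\; \min_{y \ge 0} \big\{\, q(\xi)^{\top}y \;:\; Wy = h(\xi) - T(\xi)\,x \,\big\},
\]

with x the first-stage (here-and-now) decisions, ξ the random event, and y the recourse decisions taken after ξ has been observed; the case of a fixed recourse matrix W is the fixed-recourse setting referred to further below.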
Chance Constraints Models The essence of recourse models
is that infeasibilities in the second or higher stages are
allowed and can be compensated at a certain penalty. In chance
constraint models, developed by Charnes and Cooper [65], the
focus lies, in contrast, on the reliability of the system. This
reliability is expressed as a minimum requirement on the
probability of satisfying constraints. The resulting program is
called a chance-constrained stochastic program. Under certain
assumptions on the probability distribution (e.g. normal, Cauchy,
or uniform), chance constraints can be converted into a
deterministic equivalent form and then treated like a
traditional recourse problem as mentioned above. More details on
chance-constrained stochastic programming can be found in [174].
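As a brief illustration of such a conversion (standard material in the spirit of [174]; the concrete distribution is our example), consider a single constraint whose right-hand side b is normally distributed with mean μ and standard deviation σ. Then

\[
\Pr\big[\, a^{\top}x \le b \,\big] \;\ge\; 1-\varepsilon
\qquad \Longleftrightarrow \qquad
a^{\top}x \;\le\; \mu \;+\; \sigma\,\Phi^{-1}(\varepsilon),
\]

where Φ^{-1} denotes the standard normal quantile function; the right-hand inequality is an ordinary linear constraint, so the deterministic equivalent can be handled with standard LP machinery.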
Over the last 20 years, stochastic programming with integer or
binary recourse variables has also received tremendous research
attention, in both applications and algorithms. However, compared
to the continuous case of a TSSLP or MSSLP, relatively little is
known about theory and algorithms for linear mixed-integer
stochastic programming in general. Given that duality results do
not hold for general IP and MIP formulations, it is also
difficult to extend current algorithms for the linear case in
order to generate cutting planes, except when only the first
stage contains integer variables or in other special cases. A
survey of developments can be found in [198, 199]; a stochastic
integer programming bibliography covering the years 1996-2007 can
be found at [217].
Compared to stochastic programming models, a QLP can be
interpreted as a worst-case MSSLP with fixed recourse and only
the right-hand side being affected by uncertainties. The main
advantage of quantified linear and integer programming is that
uncertainty can be easily modeled without detailed knowledge of
the underlying probability distribution. Furthermore, while SP
models minimize the expected value of an objective function with
respect to a set of scenarios, QLPs minimize an objective
function with respect to the possible worst case (maximum loss).
There is a recursive node-based definition for QLPs, similar to
the one for MSSLPs, and it has been shown how a deterministic
equivalent program (DEP) of a QLP can be constructed.
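The idea behind the DEP construction can be sketched on a small example (our illustration, not the general construction): for a QLP with a single universally quantified variable y ∈ [0, 1], the linear constraint system attains its worst case at the endpoints of the interval, so it suffices to duplicate the second-stage variables and constraints for the two scenarios y = 0 and y = 1. The quantified system

\[
\exists\, x \;\; \forall\, y \in [0,1] \;\; \exists\, z \, : \;\; A x + B y + C z \le b
\]

becomes the purely existential system

\[
A x + C z^{(0)} \le b, \qquad A x + B + C z^{(1)} \le b,
\]

with one copy z^{(0)}, z^{(1)} of the second-stage variables per scenario. For a multistage QLP this expansion is repeated along a scenario tree over the vertices of the hyperrectangle of universal variables, so the DEP grows exponentially in general, much like the scenario expansion of an MSSLP.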
Robust Optimization In contrast to stochastic programming,
robust optimization is a more recent approach in the context of
optimization under uncertainty, in which the uncertainty model
is not stochastic, but rather deterministic and set-based.
Instead of seeking to immunize the solution against stochastic
uncertainty in some probabilistic sense, here the decision-maker
constructs a solution that is valid for any realization of
uncertainty in a given set [36]. While applications of stochastic
programming have been reported over many years in the literature,
robust optimization appeared recently, with most research in the
past ten years (see, e.g. [24, 38, 42, 18, 36, 215] and the
references therein). The roots can be found in the field of
robust control and in the work of Soyster [204]. However, broad
interest in both theoretical aspects and practical applications
only started with the work of Ben-Tal and Nemirovski [20, 21]
and El Ghaoui et al. [109, 110] in the late 1990s.
The idea of robust optimization is to define a so-called
uncertainty set U for the uncertain parameters in the input
data (c, A, b) and then to require that the constraint system
hold for any possible realization of the data from U. The
optimization problem modeling this requirement is called the
robust counterpart problem (RCP); an optimal solution of the RCP
is called a robust optimal solution. The robust optimization
methodology can be applied to any optimization problem (e.g.
quadratically constrained quadratic programs (QCQPs) or
semidefinite programs (SDPs)) in which the uncertain data is
known to lie in a given uncertainty set. Though it is still a
relatively new approach, it has already proved very useful in
many applications. In [18] a list of problem classes to which
robust optimization is applicable is given.
One might imagine that the addition of robustness to a general
optimization problem comes at the expense of a significantly
increased computational complexity, and indeed, the RCP of an
arbitrary convex optimization problem is in general intractable
[18]. Nevertheless, depending on the structure of the underlying
optimization problem and the uncertainty set U, the corresponding
RCP can be solved or at least be approximated in polynomial time
for many application cases [21]. Ben-Tal and Nemirovski [22]
showed that the RCP of a linear optimization problem remains
computationally tractable for many practical uncertainty sets.
Due to the results in [18], in the case of robust linear
optimization the objective and the right-hand side of the
constraints can be assumed to be deterministic, and w.l.o.g.
constraint-wise uncertainty can be assumed. It hence suffices
to focus on single constraints of a special form. These are
referred to as the subproblems which must be solved in order to
solve the entire RCP. Of course, a major modeling decision
concerns the choice of the uncertainty set U, and the resulting
RCP might no longer be an LP [17]. In particular, the RCP of an
LP with a polyhedral uncertainty set becomes a linear programming
problem, while the RCP of an LP with an ellipsoidal uncertainty
set becomes a second-order cone problem [24]. Other interesting
results have been achieved recently; e.g., under very natural
assumptions, robust LPs can be formulated as semidefinite
programming problems and thus solved by a polynomial-time
approximation scheme [21, 18]. Several attempts to extend these
ideas to discrete robust optimization have been made, e.g., in
[37, 40, 39]. However, unlike the case of continuous robust
optimization, the conditions on the uncertainty sets that still
ensure efficient solvability of the RCP are rather restrictive,
as shown in [39]. Therefore, approximation techniques based on
piecewise linearization are currently the means of choice for
such problems [42].
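To make the constraint-wise view concrete (a standard derivation in the spirit of [22, 24]; the parametrization is ours), consider a single constraint with uncertain coefficients a = ā + Pζ, where the perturbation ζ ranges over a box or over an ellipsoid. The robust counterparts of this one constraint are, respectively,

\[
\max_{\|\zeta\|_{\infty}\le 1} (\bar a + P\zeta)^{\top}x \le b
\;\;\Leftrightarrow\;\;
\bar a^{\top}x + \|P^{\top}x\|_{1} \le b,
\qquad\qquad
\max_{\|\zeta\|_{2}\le 1} (\bar a + P\zeta)^{\top}x \le b
\;\;\Leftrightarrow\;\;
\bar a^{\top}x + \|P^{\top}x\|_{2} \le b,
\]

the first of which can be rewritten with auxiliary variables as linear constraints, while the second is a second-order cone constraint, matching the tractability statements above.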
While stochastic programming problems can result in large
deterministic linear models when many scenarios are considered,
RCP models in general grow only slightly when uncertainty is
added and can therefore often be solved efficiently. Another
crucial difference is that in the stochastic programming approach
constraints may be violated with a certain probability
(chance constraints) or at a given penalty (recourse models),
whereas robust optimization is associated with hard constraints,
which must be satisfied whatever the realization of the data
(c, A, b) in the uncertainty set U is [22]. Thus, in many cases a
single-stage robust optimization model tends to be too
conservative [11], especially in situations where some recourse
decisions can be made after the uncertainty is revealed. To
address this issue, two-stage models – also called adjustable or
adaptable robust optimization – were proposed, e.g., in [19].
However, two-stage problems are computationally very challenging;
even formulations with LP problems in both stages can be NP-hard
[19]. Moreover, in many real-world problems decisions and
uncertainty are repeatedly interwoven over time, and RO models
are not capable of handling this efficiently due to the
difficulty of incorporating multiple stages [215].
From the viewpoint of quantified linear and integer programming,
robust optimization problems can be viewed as QLPs with one
quantifier change (∃∀), whereas in QLPs multiple quantifier
changes occur in general. However, from the viewpoint of robust
optimization, where arbitrary uncertainty sets may appear (though
not necessarily tractable ones), the uncertainty set of a QLP is
a simple hyperrectangle.
Other LP-based solution techniques Apart from stochastic
programming and robust optimization as mentioned above, typical
solution approaches for problems that are affected by uncertainty
are sensitivity analysis [179], LP-based approximation techniques
[155], dynamic programming [16], and the exploration of Markov
chains [237]. Sensitivity analysis is a simple
classical method of addressing uncertainty, which is typically
applied as a post-optimization tool for quantifying the goodness
or robustness of a solution, e.g. the change in cost for small
perturbations in the underlying problem data. It is not
particularly helpful for generating solutions that are robust to
data changes in the sense of an a priori ensured feasibility when
the problem parameters vary within the prescribed uncertainty set
[234]. A further possibility to deal with these uncertainties is
an approximation where a given probability distribution is
aggregated to a single estimated number. The optimum for these
estimated input data can then be computed with the help of
traditional optimization tools. In some fields of application,
such as the fleet assignment problem of airlines, this procedure
has been successfully established [117]. In other
fields, like production planning and control, this technique
could not be successfully applied, although mathematical models
do exist [172]. Dynamic programming was developed by Richard
Bellman in the early 1950s [16] and is a computational method for
dealing with multi-stage decision processes in a dynamic
environment. In fact, decision-making under uncertainty was the
original and intended application of this technique [164].
Dynamic programming usually refers to simplifying a decision by
breaking it down into a sequence of decision steps over time. It
starts by solving all subproblems at the final stage and then
uses these solutions to recursively compute solutions for earlier
stages until the original problem at the root is eventually
solved. For more information on
solution approaches in the context of optimization under
uncertainty we refer the reader to [193, 41] and the references
therein.
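The backward recursion just described can be stated compactly for a finite-horizon problem (generic form, notation ours):

\[
V_{T}(s) \;=\; \min_{a \in \mathcal{A}(s)} c_{T}(s,a),
\qquad
V_{t}(s) \;=\; \min_{a \in \mathcal{A}(s)} \Big\{\, c_{t}(s,a) \;+\; \mathbb{E}\big[\, V_{t+1}(s_{t+1}) \,\big|\, s_{t}=s,\; a_{t}=a \,\big] \Big\},
\quad t = T-1,\dots,1,
\]

where V_t(s) is the optimal expected cost-to-go from state s at stage t and the expectation captures the uncertain transition to the next stage; the value of the original problem is V_1 evaluated at the initial state.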
Quantification of Variables and Constraints The explicit
quantification of variables and constraints makes it possible to
express many problems that cannot be modeled using classical
modeling paradigms. Quantifiers are powerful tools for modeling
uncertainty or adversarial situations. For example, if a decision
is based on a value that is not exactly known in advance, one
might look for robust solutions that are valid for all possible
values that might occur. As efforts to extend modeling languages
with quantifiers have not been made only for linear programs in
the form of QLPs and QIPs, we give an overview in the following
and briefly sketch the corresponding achievements of other fields.
Boolean Satisfiability (SAT) and Constraint Satisfaction (CSP)
Many real-world problems are combinatorial search problems that
can be represented in terms of decision variables and constraints.
Besides integer programming (IP), boolean satisfiability problems
(SAT) and constraint satisfaction problems (CSP) are very
successful frameworks that are used to model and solve these
problems.
The SAT problem is one of the most important and extensively
studied problems in computer science. Given a propositional
boolean formula, the SAT problem asks for an assignment of
variables such that the formula evaluates to true, or a proof that
no such assignment exists [10, 44]. SAT is one of the classic
problems in computer science; it is of theoretical interest
because it is the canonical NP-complete problem [70] (cf.
Section 4.2 for details). Apart from its high theoretical
relevance, SAT has many practical applications such as model
checking, design verification, and AI planning [152, 68].
Integer Programming (IP) generalizes the standard SAT problem,
by allowing the possible values for the variables to be chosen
from an arbitrary finite set, and allowing the constraints to be
arbitrary linear constraints rather than just clauses. Whereas
the solution approaches for IPs often rely on LP-relaxations
combined with branch-and-cut procedures to find optimal integer
solutions, the core of most modern SAT solvers is the
Davis-Putnam-Logemann-Loveland (DPLL) algorithm [78, 77], which
is essentially a depth-first search with pruning techniques like
unit propagation [235], conflict-driven clause learning [153],
and backtracking [150].
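A minimal Python sketch of the DPLL scheme just described, for CNF formulas given as lists of integer literals (didactic only; it implements unit propagation and chronological backtracking but omits clause learning and any serious branching heuristic):

    def unit_propagate(clauses, assignment):
        """Repeatedly assign literals forced by unit clauses; None on conflict."""
        changed = True
        while changed:
            changed = False
            for clause in clauses:
                if any(lit in assignment for lit in clause):
                    continue                      # clause already satisfied
                free = [lit for lit in clause if -lit not in assignment]
                if not free:
                    return None                   # all literals falsified: conflict
                if len(free) == 1:
                    assignment.add(free[0])       # forced (unit) assignment
                    changed = True
        return assignment

    def dpll(clauses, assignment=frozenset()):
        """Return a satisfying set of literals, or None if unsatisfiable."""
        assignment = unit_propagate(clauses, set(assignment))
        if assignment is None:
            return None
        for clause in clauses:                    # pick any unassigned variable
            for lit in clause:
                if lit not in assignment and -lit not in assignment:
                    # depth-first search: try both phases, backtrack on failure
                    return (dpll(clauses, assignment | {lit})
                            or dpll(clauses, assignment | {-lit}))
        return assignment                         # all variables assigned, no conflict

    # tiny example: (x1 or x2) and (not x1 or x2) and (not x2 or x3)
    print(dpll([[1, 2], [-1, 2], [-2, 3]]))       # prints a model, here {1, 2, 3}

Modern CDCL solvers replace the naive branching and the simple backtracking shown here with learned clauses, non-chronological backjumping, and activity-based heuristics, as described in [153, 150].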
The constraint satisfaction problem (CSP) also generalizes the
standard SAT problem, by allowing the possible values for the
variables to be chosen from an arbitrary finite set. Furthermore,
the constraints are allowed to be arbitrary constraints rather
than just clauses or linear constraints; thus, CSP also
generalizes IP. A CSP consists of a set of variables, each with a
finite domain of values, and a set of constraints on subsets of
these variables.
CSPs provide a general framework to express a wide variety of
combinatorial search problems and they play an important role in
many areas of computer science and artificial intelligence [52].
They also find application in the Operations Research community
in the context of Location Planning, Scheduling, Car Sequencing,
Cutting Stock Problems, Vehicle Routing, Timetabling, Rostering,
and many more [54]. SAT, CSP and the IP feasibility problem are
three different paradigms to model and solve combinatorial search
problems, and as each of the problem classes is NP-complete, they
can be reduced to one another in polynomial time [225, 32, 238].
However, each paradigm has its strengths and weaknesses. Due to
their complementary strengths, there is an increasing belief that
hybrid methods may often perform better than pure methods. An
approach that integrates techniques from constraint programming
and (mixed-integer) programming in order to exploit the strengths
of both fields is the constraint integer programming framework
SCIP [2, 4].
However, there are classes of problems containing uncertainty
that cannot be expressed within these frameworks. QCSPs and QSAT,
which are the quantified extensions of CSP and SAT and allow for
universally quantified variables, make it possible to model such
problems that contain uncertainty. As a result, these extensions
have been attracting significant interest in recent years. In the
following, we highlight the key ideas of both approaches and
specify their basic solution strategies.
Bibliography
[10] B. Aspvall, M. F. Plass, and R. E. Tarjan. A linear-time
algorithm for testing the truth of certain quantified boolean
formulas. Inf. Process. Lett., 8(3):121–123, 1979. Erratum:
Information Processing Letters 14(4): 195 (1982).
[11] A. Atamtürk and M. Zhang. Two-stage robust network flow
and design under demand uncertainty. Operations Research,
55(4):662–673, 2007.
[15] E. M. L. Beale. On minimizing a convex function subject
to linear inequalities. J. Roy. Statist. Soc. Ser. B., 17,
1955.
[16] R. Bellman. Dynamic Programming. Princeton University
Press, Princeton, NJ, USA, 1. edition, 1957.
[17] A. Ben-Tal, D. den Hertog, A. De Waegenaere, B. Melenberg,
and G. Rennen. Robust solutions of optimization problems
affected by uncertain probabilities. Manage. Sci.,
59(2):341–357, Feb. 2013.
[18] A. Ben-Tal, L. El Ghaoui, and A. Nemirovski. Robust
optimization. Princeton University Press, 2009.
[19] A. Ben-Tal, A. Goryashko, E. Guslitzer, and A. Nemirovski.
Adjustable robust solutions of uncertain linear programs.
Mathematical Programming, 99(2):351–376, Mar. 2004.
[20] A. Ben-Tal and A. Nemirovski. Robust truss topology design
via semidefinite programming, research report 4/95. In
Optimization Laboratory, Faculty of Industrial Engineering and
Management, Technion The Israel Institute of Technology,
Technion City, Haifa 32000, 1995.
[21] A. Ben-Tal and A. Nemirovski. Robust convex optimization.
Mathematics of Operations Research, 23:769–805, 1998.
[22] A. Ben-Tal and A. Nemirovski. Robust solutions of
uncertain linear programs. Operations Research Letters,
25:1–13, 1999.
[23] A. Ben-Tal and A. Nemirovski. Robust solutions of linear
programming problems contaminated with uncertain data.
Mathematical Programming, 88:411–424, 2000.
[24] A. Ben-Tal and A. Nemirovski. Robust optimization -
methodology and applications. Mathematical Programming,
92(3):453–480, 2002.
[32] H. Bennaceur. A comparison between SAT and CSP techniques.
Constraints, 9(2):123–138, Apr. 2004.
[36] D. Bertsimas, D. B. Brown, and C. Caramanis. Theory and
applications of robust optimization. SIAM Rev., 53(3):464–501,
Aug. 2011.
[37] D. Bertsimas and M. Sim. Robust discrete optimization and
network flows. Mathematical Programming, 98(1-3):49–71, 2003.
[38] D. Bertsimas and M. Sim. The price of robustness.
Operations Research, 52(1):35–53, 2004.
[39] D. Bertsimas and M. Sim. Robust Discrete Optimization
Under Ellipsoidal Uncertainty Sets. 2004.
[40] D. Bertsimas and A. Thiele. A robust optimization approach
to supply chain management. In IPCO, pages 86–100, 2004.
[41] D. Bertsimas and A. Thiele. Robust and Data-Driven
Optimization: Modern Decision-Making Under Uncertainty. In
Tutorials on Operations Research, INFORMS, 2006.
[42] H.-G. Beyer and B. Sendhoff. Robust optimization – a
comprehensive survey. Computer Methods in Applied Mechanics and
Engineering, 196:3190–3218, 2007.
[44] A. Biere, M. Heule, H. van Maaren, and T. Walsh, editors.
Handbook of Satisfiability, volume 185 of Frontiers in
Artificial Intelligence and Applications. IOS Press, 2009.
[47] J. R. Birge and F. Louveaux. Introduction to stochastic
programming. Springer series in operations research. Springer,
New York, 1997.
[52] F. Börner, A. A. Bulatov, H. Chen, P. Jeavons, and A. A.
Krokhin. The complexity of constraint satisfaction games and
qcsp. Inf. Comput., 207(9):923–944, 2009.
[54] S. C. Brailsford, C. N. Potts, and B. M. Smith. Constraint
satisfaction problems: Algorithms and applications. European
Journal of Operational Research, 119(3):557–581, 1999.
[57] P. Brucker. Scheduling Algorithms. Springer, Secaucus,
NJ, USA, 3rd edition, 2001.
[65] A. Charnes and W. W. Cooper. Chance-Constrained
programming. Management Science, 6(1):73–79, 1959.
[68] K. Claessen, N. Eén, M. Sheeran, N. Sörensson, A. Voronov,
and K. Åkesson. SAT-solving in practice, with a tutorial example
from supervisory control. Discrete Event Dynamic Systems,
19(4):495–524, 2009.
[70] S. A. Cook. The complexity of theorem-proving procedures.
In Proceedings of the third annual ACM symposium on Theory of
computing, STOC ’71, pages 151–158, New York, NY, USA, 1971. ACM.
[72] G. B. Dantzig. Linear programming under uncertainty.
Management Science, 1(3-4):197–206, 1955.
[77] M. Davis, G. Logemann, and D. W. Loveland. A machine
program for theorem-proving. Commun. ACM, 5(7):394–397, 1962.
[78] M. Davis and H. Putnam. A computing procedure for
quantification theory. Journal of the ACM, 7(3):201–215, 1960.
[88] T. Ederer, U. Lorenz, A. Martin, and J. Wolf. Quantified
linear programs: a computational study. In Proc. of the Ann.
Europ. Symp. on Algorithms (ESA 2011), pages 203–214. Springer,
2011.
[89] T. Ederer, U. Lorenz, T. Opfer, and J. Wolf. Modeling
games with the help of quantified integer linear programs. In
ACG, pages 270–281, 2011.
[91] P. Eirinakis, S. Ruggieri, K. Subramani, and P. J.
Wojciechowski. Computational complexity of inclusion queries
over polyhedral sets. In ISAIM, 2012.
[108] R. Gerber, W. Pugh, and M. Saksena. Parametric
dispatching of hard real-time tasks. IEEE Transactions on
Computers, 44:471–479, 1995.
[109] L. El Ghaoui and H. Lebret. Robust solutions to
least-squares problems with uncertain data, 1997.
[110] L. El Ghaoui, F. Oustry, and H. Lebret. Robust solutions
to uncertain semidefinite programs. SIAM J. Optimization,
9(1):33–52, 1998.
[117] S. Grothklags. Fleet assignment with connection dependent
ground times. In Proc. of the 11th Ann. Europ. Symp. on
Algorithms (ESA 2003), pages 667–678, 2003.
[125] T. Huynh, L. Joskowicz, C. Lassez, and J.-L. Lassez.
Reasoning about linear constraints using parametric queries.
In FSTTCS, pages 1–20, 1990.
[128] P. Kall and S. Wallace. Stochastic programming.
Wiley-Interscience series in systems and optimization. Wiley,
1994.
[135] D. E. Knuth and R. W. Moore. An analysis of alpha-beta
pruning. Artif. Intell., 6(4):293–326, 1975.
[147] U. Lorenz, A. Martin, and J. Wolf. Polyhedral and
algorithmic properties of quantified linear programs. In Proc.
of the 18th Ann. Europ. Symp. on Algorithms (ESA 2010),
pages 512–523, 2010.
[150] S. Malik and L. Zhang. Boolean satisfiability from
theoretical hardness to practical success. Commun. ACM,
52(8):76–82, Aug. 2009.
[152] J. Marques-Silva. Practical applications of boolean
satisfiability. In Workshop on Discrete Event Systems (WODES).
IEEE Press, 2008.
[155] N. Megow and T. Vredeveld. Approximation results for
preemptive stochastic online scheduling. In Proc. of the 14th
Ann. Europ. Symp. on Algorithms (ESA 2006), 2006.
[164] C. H. Papadimitriou. Games against nature. J. Comput.
Syst. Sci., 31(2):288–301, 1985.
[172] Y. Pochet and L. A. Wolsey. Production Planning by Mixed
Integer Programming (Springer Series in Operations Research
and Financial Engineering). Springer, Secaucus, NJ, USA, 2006.
[174] A. Prekopa. Stochastic Programming. Mathematics and Its
Applications. Springer, 1995.
[179] J. Renegar. Some perturbation theory for linear
programming. Mathematical Programming, 65:73–91, 1992.
[185] S. Ruggieri, P. Eirinakis, K. Subramani, and P. J.
Wojciechowski. On the complexity of quantified linear systems.
Theor. Comput. Sci., 518:128–134, 2014.
[189] A. Ruszczyński and A. Shapiro. Stochastic Programming.
Handbooks in Operations Research and Management Science.
Elsevier, 2003.
[193] N. V. Sahinidis. Optimization under uncertainty:
State-of-the-art and opportunities. Computers and Chemical
Engineering, 28:971–983, 2004.
[194] M. C. Saksena. Parametric Scheduling for Hard Real-time
Systems. PhD thesis, College Park, MD, USA, 1994.
[197] A. Schrijver. Theory of linear and integer programming.
John Wiley and Sons, Inc., New York, NY, USA, 1986.
[198] R. Schultz. Stochastic programming with integer variables.
Math. Progr., 97:285–309, 2003.
[199] S. Sen. Algorithms for Stochastic Mixed-Integer
Programming Models. Handbooks in OR and MS.
[204] A. L. Soyster. Convex Programming with Set-Inclusive
Constraints and Applications to Inexact Linear Programming.
Operations Research, 21(5):1154–1157, 1973.
[206] K. Subramani. Parametric scheduling - algorithms and
complexity. In HiPC, pages 36–46, 2001.
[207] K. Subramani. A specification framework for real-time
scheduling. In SOFSEM, pages 195–207, 2002.
[209] K. Subramani. Analyzing selected quantified integer
programs. In IJCAR, pages 342–356, 2004.
[210] K. Subramani. Partially clairvoyant scheduling for
aggregate constraints. JAMDS, 9(4):225–240, 2005.
[215] A. Thiele, C. Murat, and V. Gabrel. Recent advances in
robust optimization: An overview. European Journal of
Operational Research, 2013.
[217] M. H. van der Vlerk. Stochastic integer programming
bibliography. World Wide Web,
www.eco.rug.nl/mally/biblio/sip.html, 1996-2007.
[Online; accessed 1-July-2014].
[218] M. H. van der Vlerk. Stochastic programming bibliography.
World Wide Web, http://www.eco.rug.nl/mally/spbib.html,
1996-2007. [Online; accessed 1-July-2014].
[224] S. W. Wallace and W. T. Ziemba. Applications of
Stochastic Programming. MPS-SIAM Series on Optimization.
Society for Industrial and Applied Mathematics, 2005.
[225] T. Walsh. SAT v CSP. In R. Dechter, editor, CP, volume
1894 of Lecture Notes in Computer Science, pages 441–456.
Springer, 2000.
[234] H.-X. Yu and L. Jin. A brief introduction to robust
optimization approach. Int. J. Pure Appl. Math.,
74(1):121–124, 2012.
[235] R. Zabih and D. A. McAllester. A rearrangement search
strategy for determining propositional satisfiability. In H. E.
Shrobe, T. M. Mitchell, and R. G. Smith, editors, AAAI, pages
155–160. AAAI Press / The MIT Press, 1988.
[237] L. Zhang, H. Hermanns, F. Eisenbrand, and D. Jansen.
Flow faster: Efficient decision algorithms for probabilistic
simulations. Logical Methods in Computer Science, 4(4), 2008.
[238] N.-F. Zhou, M. Tsuru, and E. Nobuyama. A comparison of CP,
IP, and SAT solvers through a common interface. In Proceedings
of the 2012 IEEE 24th International Conference on Tools with
Artificial Intelligence - Volume 01, ICTAI '12, pages 41–48,
Washington, DC, USA, 2012. IEEE Computer Society.