CP 2012
18th International Conference on
Principles and Practice of Constraint Programming
To be held in Québec City, Canada, 8-12 October 2012

Invited Tutorials

Multi-Armed Bandits and Monte-Carlo Tree Search: advances and open problems related to the Exploration versus Exploitation dilemma.

Michèle Sebag


Multi-Armed Bandits (MAB) are concerned with finding the best option among a set of options in a stochastic environment, through an adaptive exploration schedule. Monte-Carlo Tree Search (MCTS), extending MAB to tree-structured search spaces, is concerned with finding the best sequence of options in a stochastic environment.
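To make the MAB setting concrete, here is a minimal sketch (not from the tutorial) of the classic UCB1 strategy, which balances exploration and exploitation by adding an optimism bonus to each arm's empirical mean; the Bernoulli-arm simulator and its parameters are illustrative assumptions.

```python
import math
import random

def ucb1(pulls, rewards, t, c=math.sqrt(2)):
    """Pick an arm by the UCB1 index: empirical mean + exploration bonus."""
    # First, play any arm that has never been tried.
    for arm, n in enumerate(pulls):
        if n == 0:
            return arm
    return max(
        range(len(pulls)),
        key=lambda a: rewards[a] / pulls[a] + c * math.sqrt(math.log(t) / pulls[a]),
    )

def run_bandit(arm_means, horizon, seed=0):
    """Simulate Bernoulli arms; return how often each arm was pulled."""
    rng = random.Random(seed)
    k = len(arm_means)
    pulls = [0] * k
    rewards = [0.0] * k
    for t in range(1, horizon + 1):
        arm = ucb1(pulls, rewards, t)
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        pulls[arm] += 1
        rewards[arm] += reward
    return pulls
```

Over a long enough horizon, the arm with the highest mean reward attracts most of the pulls while every arm keeps being sampled at a slowly growing rate; MCTS (e.g. UCT) applies this same index recursively at every node of the search tree.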

Since 2006, these settings have been thoroughly investigated to deal with strategic games, such as the game of Go, where domain knowledge is notoriously insufficient to support an efficient non-adaptive search. In such cases, one must learn about the search tree while searching it: MAB and MCTS are all about building a virtuous circle between learning and efficient search.

The tutorial will present MAB and MCTS algorithms, their theoretical guarantees, and applications. If time permits, the issues raised by their application to portfolio-based algorithm and heuristic selection will also be discussed.

Optimization for Disaster Management

Pascal Van Hentenryck

The Katrina report in 2006 identified a strong need for advanced decision support in responding to disasters, complementing the role of situational awareness, which has hitherto been the focus. This tutorial will review why this recommendation is more important than ever, present some of the progress accomplished in the last five years, and describe the computational challenges ahead in optimization, simulation, and the modeling of complex inter-dependent infrastructures.

Constraint Programming with Decision Diagrams

Willem Jan van Hoeve


In this tutorial, I will present an overview of the recent successful applications of multivalued decision diagrams (MDDs) in Constraint Programming.

In particular, I will discuss how constraint propagation algorithms can be extended to operate on MDDs. By propagating MDDs instead of variable domains, more structural information can be communicated between constraints, which can lead to substantial speedups. Furthermore, limited-size MDDs can serve as discrete relaxations for combinatorial optimization problems. The resulting bounds can be much stronger than continuous bounds based on Linear Programming, and can be faster to compute. Lastly, I will discuss how these techniques can be implemented in existing Constraint Programming systems.
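As a rough illustration of the relaxation idea described above (a sketch, not code from the tutorial), the following builds a width-limited MDD layer by layer for an alldifferent constraint, where each node's state is the set of values already used on a path to it; when a layer exceeds the width bound, two nodes are merged by intersecting their states, which may admit infeasible paths but never excludes a feasible one. All names and the merge heuristic are illustrative assumptions.

```python
def relaxed_mdd_alldiff(domains, max_width):
    """Build a width-limited relaxed MDD for alldifferent(x1..xn).

    Node state = frozenset of values used on some path to the node.
    Merging two nodes intersects their states: the merged node
    forbids fewer values, so the MDD is a relaxation.
    """
    layers = [[frozenset()]]            # root layer: no value used yet
    for dom in domains:
        next_states = set()
        for state in layers[-1]:
            for v in dom:
                if v not in state:      # respect alldifferent on this path
                    next_states.add(state | {v})
        next_states = sorted(next_states, key=sorted)
        # Relax: merge node states until the layer fits in max_width.
        while len(next_states) > max_width:
            a = next_states.pop()
            b = next_states.pop()
            merged = a & b              # intersection = relaxation
            if merged not in next_states:
                next_states.append(merged)
        layers.append(list(next_states))
    return layers
```

With a large enough width bound the MDD is exact; shrinking the bound trades precision for size, which is what makes limited-size MDDs usable both for stronger propagation and for computing optimization bounds.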