**Keywords:** Reformulation techniques in Mixed Integer Programming (MIP), Polyhedral approaches (cut generation), Robust Optimization, Approximation Algorithms, Extended formulations, Lagrangian Relaxation (Column Generation) based algorithms, Dantzig-Wolfe and Benders Decomposition, Primal Heuristics, Graph Theory, Constraint Programming.

Quantitative modeling is routinely used in both industry and administration to design and
operate transportation, distribution, or production systems. Optimization concerns every
stage of the decision-making process: long term investment budgeting and activity planning,
tactical management of scarce resources, or the control of day-to-day operations. In many
optimization problems that arise in decision support applications, the most important
decisions (control variables) are discrete in nature: on/off decisions to buy, to
invest, to hire, to send a vehicle, to allocate resources, to decide on precedence in
operation planning, or to install a connection in network design. Such *combinatorial
optimization* problems can be modeled as linear or nonlinear programs with integer decision
variables and extra variables to deal with continuous adjustments. The most widely used
modeling tool consists in defining the feasible decision set using linear inequalities with a
mix of integer and continuous variables, so-called Mixed Integer Programs (MIP), which
already allow a fair description of reality and are also well-suited for global
optimization. The solution of such models is essentially based on enumeration techniques and
is notoriously difficult given the huge size of the solution space.

Commercial solvers have made significant progress, but they are quickly overwhelmed beyond a certain problem size. A key to further progress is the development of better problem formulations that provide strong continuous approximations and hence help to prune the enumerative solution scheme. Effective solution schemes are a complex blend of techniques: cutting planes to better approximate the convex hull of feasible (integer) solutions; extended reformulations (combinatorial relations can often be formulated better with extra variables); constraint programming to actively reduce the solution domain through logical implications, along with variable fixing based on reduced costs; Lagrangian decomposition methods to produce powerful relaxations; Benders decomposition to project the formulation onto the important decision variables and to implement multi-level programming, which models a hierarchy of decision levels or recourse decisions in the case of data adjustment; primal heuristics and meta-heuristics (greedy, local improvement, or randomized partial search procedures) to produce good candidates at all stages of the solution process; and branch-and-bound or dynamic programming enumeration schemes to find a global optimum, with strong strategies for selecting the sequence of variable fixings. The real challenge is to integrate the most efficient methods into one global system so as to prune what is essentially an enumeration-based solution technique. Progress is measured by the scale of input data that can now be handled, by the integration of many decision levels into planning models, and, not least, by the account taken of random (or dynamically adjusted) data by way of modeling expectations (stochastic approaches) or worst-case behavior (robust approaches).

Building on complementary expertise, our team's overall goals are threefold:

To design tight formulations for specific combinatorial optimization problems and generic models, relying on delayed cut and column generation, decomposition, extended formulations and projection tools for linear and nonlinear mixed integer programming models. To develop generic methods based on such strong formulations by handling their large scale dynamically. To generalize algorithmic features that have proven efficient in enhancing performance of exact optimization approaches. To develop approximation schemes with proven optimality gap and low computational complexity. More broadly, to contribute to theoretical and methodological developments of exact and approximate approaches in combinatorial optimization, while extending the scope of applications and their scale.

To demonstrate the strength of cooperation between complementary exact mathematical optimization techniques, dynamic programming, robust and stochastic optimization, constraint programming, combinatorial algorithms and graph theory, by developing “efficient” algorithms for specific mathematical models. To tackle large-scale real-life applications, providing provably good approximate solutions by combining exact, approximate, and heuristic methods.

To provide prototypes of modelers and solvers based on generic software tools that build on our research developments, writing code that serves as the proof-of-concept of the genericity and efficiency of our approaches, while transferring our research findings to internal and external users.

**Keywords:** integer programming, graph theory, decomposition approaches, polyhedral approaches, quadratic programming approaches, constraint programming.

*Combinatorial optimization* is the field of discrete optimization problems. In many
applications, the most important decisions (control variables) are binary (on/off decisions)
or integer (indivisible quantities). Extra variables can represent continuous adjustments or
amounts. This results in models known as *mixed integer programs* (MIP), where the
relationships between variables and input parameters are expressed as linear constraints and
the goal is defined as a linear objective function. MIPs are notoriously difficult to solve:
good quality estimations of the optimal value (bounds) are required to prune
enumeration-based global-optimization algorithms, whose complexity is exponential. The
standard approach to solving an MIP is the so-called *branch-and-bound algorithm*: the solution space is explored by an enumeration tree in which branching splits the problem by fixing or bounding integer variables, while relaxation bounds are used to prune subtrees that cannot contain a better solution.
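As a minimal illustration (our own toy sketch, not the code of any solver), a branch-and-bound for the 0/1 knapsack problem can use the fractional (LP) relaxation of the remaining items as a pruning bound:

```python
def knapsack_branch_and_bound(values, weights, capacity):
    """Toy branch-and-bound for the 0/1 knapsack problem, pruning
    with the fractional-relaxation (LP) bound."""
    n = len(values)
    # Sort items by value density so the fractional bound is tight
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def lp_bound(k, cap):
        # Fractional knapsack value of the remaining items order[k:]
        bound = 0.0
        for i in order[k:]:
            if weights[i] <= cap:
                cap -= weights[i]
                bound += values[i]
            else:
                bound += values[i] * cap / weights[i]
                break
        return bound

    best = 0

    def branch(k, cap, value):
        nonlocal best
        best = max(best, value)            # update the incumbent
        if k == len(order) or value + lp_bound(k, cap) <= best:
            return                         # prune: bound cannot beat incumbent
        i = order[k]
        if weights[i] <= cap:              # branch: take item i...
            branch(k + 1, cap - weights[i], value + values[i])
        branch(k + 1, cap, value)          # ...or leave it out

    branch(0, capacity, 0)
    return best
```

On the classical instance with values (60, 100, 120), weights (10, 20, 30) and capacity 50, this returns the optimum 220 while visiting only a fraction of the 2^3 leaves.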

Progress can be expected from the development of tighter formulations. Central to our field are the characterization of polyhedra defining or approximating the solution set, and combinatorial algorithms to identify “efficiently” a minimum-cost solution or to separate an infeasible point. With properly chosen formulations, exact optimization tools can be competitive with other methods (such as meta-heuristics) in constructing good approximate solutions within limited computational time, and they have the important advantage of providing a performance guarantee through the relaxation bounds. Decomposition techniques implicitly lead to better problem formulations as well, while constraint propagation offers tools from artificial intelligence to further improve formulations through intensive preprocessing. A newer trend, where recent progress has been made, is robust optimization: the aim is to produce optimized solutions that remain of good quality even if the problem data varies stochastically. In all cases, the study of specific models and challenging industrial applications is quite relevant, because developments made in a specific context can become generic tools over time and find their way into commercial software.

Our project brings together researchers with expertise in mathematical programming (polyhedral approaches, decomposition and reformulation techniques in mixed integer programming, robust and stochastic programming, and dynamic programming), graph theory (characterization of graph properties, combinatorial algorithms) and constraint programming, with the aim of producing better quality formulations and developing new methods to exploit these formulations. These new results are then applied to find high quality solutions for practical combinatorial problems such as routing, network design, planning, scheduling, cutting and packing problems, and High Performance and Cloud Computing.

Adding valid inequalities to the polyhedral description of an MIP allows one to improve the
resulting LP bound and hence to better prune the enumeration tree. In a cutting plane
procedure, one attempts to identify valid inequalities that are violated by the LP solution
of the current formulation and adds them to the formulation. The goal is to reduce the
resolution of an integer program to that of a linear program by deriving a linear
description of the convex hull of the feasible solutions. Polyhedral theory tells us that
optimizing over a polyhedron is (polynomially) equivalent to solving the *separation
problem* over that polyhedron; this motivates the design of *separation procedures*
(cutting plane generation). Only a subset of the inequalities needs to be generated
dynamically: running such a *cutting plane algorithm* at each node of the branch-and-bound
tree gives rise to the algorithm called *branch-and-cut*.
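As an illustration, the separation of *cover inequalities* for a knapsack constraint can be sketched with a simple greedy heuristic (our own toy example; real branch-and-cut codes use more sophisticated exact or heuristic separators):

```python
def separate_cover_inequality(x_frac, weights, capacity, eps=1e-9):
    """Try to find a cover C (a set of items whose total weight exceeds
    the capacity) whose cover inequality sum_{i in C} x_i <= |C| - 1
    is violated by the fractional point x_frac.
    Greedy heuristic: add items in increasing order of (1 - x*_i)."""
    order = sorted(range(len(weights)), key=lambda i: 1 - x_frac[i])
    cover, weight = [], 0
    for i in order:
        cover.append(i)
        weight += weights[i]
        if weight > capacity:
            break
    if weight <= capacity:
        return None                       # no cover exists at all
    if sum(x_frac[i] for i in cover) > len(cover) - 1 + eps:
        return cover                      # violated cut found
    return None                           # heuristic found no violation
```

For the knapsack constraint 6x1 + 5x2 + 5x3 <= 10 and the fractional point (1.0, 0.8, 0.8), the routine returns the cover {1, 2}, whose inequality x1 + x2 <= 1 cuts off that point.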

A hierarchical approach to tackling complex combinatorial problems consists in considering
separately different substructures (subproblems). If one is able to optimize relatively
efficiently over the substructures, this can be exploited to reformulate the global
problem as a selection of specific subproblem solutions that together form a global
solution. If the subproblems correspond to a subset of constraints in the MIP formulation,
this leads to Dantzig-Wolfe decomposition; if they correspond to isolating a subset of
decision variables, this leads to Benders decomposition. Both lead to extended
formulations of the problem with either a huge number of variables or of constraints. The
Dantzig-Wolfe approach requires specific algorithms to generate subproblem solutions and
the associated global decision variables dynamically in the course of the optimization.
This procedure is known as *column generation*, while its combination with
branch-and-bound enumeration is called *branch-and-price*. Alternatively, in the Benders
approach, when dealing with exponentially many constraints in the reformulation, the
*cutting plane procedures* defined in the previous section are well-suited tools. When
optimization over a substructure is (relatively) easy, there often exists a tight
reformulation of this substructure, typically in an extended variable space. This gives
rise to a powerful reformulation of the global problem, although it might be impractical
given its size (typically pseudo-polynomial). It may be possible to project (part of) the
extended formulation onto a smaller-dimensional space, if not the original variable space,
to bring polyhedral insight (cuts derived through polyhedral studies can often be
recovered through such projections).
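As an illustration of the column generation mechanics, consider the pricing subproblem of the classical cutting stock decomposition: given the dual values of the restricted master, find a cutting pattern of maximum total dual value; the pattern enters the master only if its reduced cost is negative. A minimal sketch (our own, assuming the master's duals are given):

```python
def price_cutting_stock_column(duals, sizes, roll_length, eps=1e-9):
    """Pricing step of column generation for cutting stock: find a
    cutting pattern maximizing total dual value via an unbounded-
    knapsack DP. The column improves the master iff its reduced cost
    1 - sum(duals * pattern) is negative."""
    best = [0.0] * (roll_length + 1)     # best dual value per capacity
    choice = [None] * (roll_length + 1)  # last item used at this capacity
    for cap in range(1, roll_length + 1):
        for i, s in enumerate(sizes):
            if s <= cap and best[cap - s] + duals[i] > best[cap]:
                best[cap] = best[cap - s] + duals[i]
                choice[cap] = i
    # Recover the pattern (number of pieces of each size)
    pattern, cap = [0] * len(sizes), roll_length
    while choice[cap] is not None:
        i = choice[cap]
        pattern[i] += 1
        cap -= sizes[i]
    reduced_cost = 1.0 - best[roll_length]
    return (pattern, reduced_cost) if reduced_cost < -eps else (None, reduced_cost)
```

With duals (0.6, 0.6) for piece sizes 3 and 4 on a roll of length 7, the pricer returns the pattern of two pieces of size 3, with reduced cost 1 - 1.2 = -0.2, so a new column is added to the master.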

When one deals with combinatorial problems with a large number of integer variables, or tightly constrained problems, mixed integer programming (MIP) alone may not be able to find solutions in a reasonable amount of time. In this case, techniques from artificial intelligence can be used to improve these methods. In particular, we use variable fixing techniques, primal heuristics and constraint programming.

Primal heuristics are useful to find feasible solutions in a small amount of time. We focus on heuristics that are either based on integer programming (rounding, diving, relaxation induced neighborhood search, feasibility pump), or that are used inside our exact methods (heuristics for separation or pricing subproblem, heuristic constraint propagation, ...). Such methods are likely to produce good quality solutions only if the integer programming formulation is of top quality, i.e., if its LP relaxation provides a good approximation of the IP solution.
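A diving-style rounding heuristic can be sketched on a single knapsack constraint (an illustrative toy of ours: the LP fractional values are assumed given, and variables are fixed greedily in decreasing fractional value):

```python
def dive_by_rounding(x_frac, values, weights, capacity):
    """Toy diving heuristic: fix variables to 1 in decreasing order of
    their LP fractional value, keeping the knapsack constraint
    sum(w * x) <= capacity feasible."""
    x = [0] * len(x_frac)
    load = 0
    for i in sorted(range(len(x_frac)), key=lambda i: -x_frac[i]):
        if load + weights[i] <= capacity:
            x[i] = 1
            load += weights[i]
    return x, sum(v * xi for v, xi in zip(values, x))
```

As the text notes, such a heuristic produces good solutions only when the LP solution is itself informative, i.e., when the formulation is tight.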

In the same vein, variable fixing techniques, which are essential in reducing the size of large-scale problems, rely on good quality approximations: either tight formulations or tight relaxation solvers (such as a dynamic program combined with state-space relaxation). Then, if the dual bound derived when a variable is fixed to one exceeds the incumbent solution value, the variable can be fixed to zero and hence removed from the problem. The process can be applied sequentially by refining the degree of relaxation.
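The fixing rule can be stated in a few lines (a sketch for a minimization problem, assuming the LP bound, reduced costs, and incumbent value are supplied by a relaxation solver):

```python
def reduced_cost_fixing(lp_bound, reduced_costs, incumbent, eps=1e-9):
    """For a minimization MIP with 0/1 variables at 0 in the LP optimum:
    if lp_bound + reduced_cost[j] exceeds the incumbent value, setting
    x_j = 1 cannot lead to a better solution, so x_j is fixed to 0.
    Returns the indices of the variables that can be fixed."""
    return [j for j, rc in enumerate(reduced_costs)
            if lp_bound + rc > incumbent + eps]
```

For example, with an LP bound of 100, an incumbent of 108, and reduced costs (3, 12, 7), only the second variable can be fixed to zero.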

Constraint Programming (CP) focuses on iteratively reducing the variable domains (sets of feasible values) by applying logical and problem-specific operators. These operators propagate onto selected variables the restrictions implied by the domains of the other variables through the constraints that link them. Combined with enumeration, this gives rise to exact optimization algorithms. A CP approach is particularly effective for tightly constrained problems, feasibility problems, and min-max problems. Mixed Integer Programming (MIP), on the other hand, is known to be effective for loosely constrained problems and for problems with an objective function defined as a weighted sum of variables. Many problems belong to the intersection of these two classes. For such problems, it is reasonable to use algorithms that exploit the complementary strengths of Constraint Programming and Mixed Integer Programming.
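A minimal example of such domain reduction, for a single constraint x + y = total over finite integer domains (each value is kept only if it has a support in the other domain):

```python
def propagate_sum(dom_x, dom_y, total):
    """Filter the domains of x and y under the constraint x + y == total:
    a value survives only if some value in the other domain completes it."""
    new_x = {v for v in dom_x if (total - v) in dom_y}
    new_y = {v for v in dom_y if (total - v) in new_x}
    return new_x, new_y
```

With x in {1, 2, 3, 4}, y in {2, 3} and x + y = 5, propagation shrinks the domain of x to {2, 3} without any enumeration; real CP solvers chain many such propagators until a fixed point is reached.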

Decision makers usually face several sources of uncertainty, such as variability over time or estimation errors. A simplistic way to handle these uncertainties is to overestimate the unknown parameters; however, this results in over-conservatism and a significant waste of resources. A better approach is to account for the uncertainty directly in the decision-aid model by considering mixed integer programs that involve uncertain parameters. Stochastic optimization accounts for the expected realization of the random data and optimizes an expected value representing the average situation. Robust optimization, on the other hand, entails protecting against the worst-case behavior of the unknown data; there is an analogy to game theory, where one considers an oblivious adversary choosing the realization that harms the solution the most. A full worst-case protection against uncertainty is too conservative and induces a very high over-cost. Instead, the realizations of the random data are assumed to belong to a restricted feasibility set, the so-called uncertainty set. Stochastic and robust optimization rely on very large scale programs in which probabilistic scenarios are enumerated. There is hope of a tractable solution for realistic-size problems, provided one develops very efficient ad hoc algorithms. The techniques for dynamically handling variables and constraints (column-and-row generation and Benders projection tools) that are at the core of our team's methodological work are especially well-suited to this context.
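For illustration, evaluating the worst case of a fixed solution over a budgeted uncertainty set (in the style of Bertsimas and Sim; a small sketch of ours, not the team's code) amounts to letting the adversary raise the few most harmful coefficients:

```python
def worst_case_cost(x, nominal, deviation, gamma):
    """Worst-case cost of a fixed 0/1 solution x over a budgeted
    uncertainty set: the adversary may raise at most `gamma` cost
    coefficients from their nominal value to nominal + deviation,
    and picks the gamma most harmful ones."""
    base = sum(c * xi for c, xi in zip(nominal, x))
    hits = sorted((d * xi for d, xi in zip(deviation, x)), reverse=True)
    return base + sum(hits[:gamma])
```

The budget gamma interpolates between the nominal problem (gamma = 0) and full worst-case protection (gamma = number of variables), which is exactly the over-conservatism trade-off discussed above.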

In some contexts, obtaining an exact solution to an optimization
problem is not feasible: when instances are too large, or when
decisions need to be taken rapidly. Since most combinatorial
optimization problems are NP-hard, another way to obtain good
quality solutions in reasonable time is to focus on
**approximation algorithms**: polynomial-time algorithms that return,
for every input instance, a solution whose value is provably within a
given factor (the approximation ratio) of the optimum.

The objective is to search for polynomial algorithms with
approximation ratios as close to 1 as possible. Such algorithms are
called *worst-case* approximation algorithms, because the
performance guarantee holds over all possible inputs of the
problem. The design of these algorithms has strong links with the
enumeration techniques described above, since it relies on computing
**strong a priori bounds** on the optimal solution value, which can
afterwards be compared with estimations of the value of the solution
produced. In many cases, such bounds can also be turned into a
posteriori performance guarantees.
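A classic example of such a worst-case guarantee is the matching-based 2-approximation for minimum vertex cover, where the a priori bound is the size of the matching built along the way:

```python
def vertex_cover_2approx(edges):
    """Classic 2-approximation for minimum vertex cover: repeatedly
    pick an uncovered edge and add both of its endpoints. The chosen
    edges form a matching M, so OPT >= |M|, hence
    |cover| = 2 * |M| <= 2 * OPT."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover
```

On the path 1-2-3-4 the algorithm returns a cover of size 4 while the optimum is 2, which exactly attains the factor-2 guarantee.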

Many fundamental combinatorial optimization problems can be modeled as the search for a
specific structure in a graph. For example, ensuring connectivity in a network amounts to
building a *tree* that spans all the nodes. Inquiring about its resistance to failure
amounts to searching for a minimum cardinality *cut* that partitions the graph. Selecting
disjoint pairs of objects is represented by a so-called *matching*. Disjunctive choices
can be modeled by edges in a so-called *conflict graph* where one searches for *stable
sets* – sets of nodes no two of which are adjacent. Polyhedral
combinatorics is the study of combinatorial algorithms involving polyhedral considerations.
Not only does it lead to efficient algorithms, but, conversely, efficient algorithms often
imply polyhedral characterizations and related min-max relations. Developing the polyhedral
properties of a fundamental problem will typically provide us with inequalities well suited
to a branch-and-cut algorithm for more general problems. Furthermore, one can use the
fundamental problems as building bricks to decompose the more general problem at hand. For
problems that lend themselves to a graph formulation, graph theory, and in particular graph
decomposition theorems, may help.
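For instance, the minimum-cost connectivity structure mentioned above, a spanning tree, is computed by Kruskal's classical algorithm, whose correctness is precisely one of those min-max/polyhedral results:

```python
def kruskal_mst(n, edges):
    """Minimum spanning tree (connectivity at minimum cost) on nodes
    0..n-1 via Kruskal's algorithm with a union-find structure.
    `edges` is a list of (weight, u, v) triples."""
    parent = list(range(n))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u

    tree, cost = [], 0
    for w, u, v in sorted(edges):          # scan edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # keep the edge iff it joins two components
            parent[ru] = rv
            tree.append((u, v))
            cost += w
    return cost, tree
```

On a 4-node example with edge weights 1 to 5, the algorithm keeps the three cheapest non-cycle-creating edges, of total cost 6.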

We are actively working on problems arising in network topology design, implementing a survivability condition of the form “at least two paths link each pair of terminals”. We have extended polyhedral approaches to problem variants with bounded length requirements and re-routing restrictions. Associated with network design is the question of traffic routing in the network: one needs to check that the network capacity suffices to carry the traffic demand. The assignment of traffic also implies the installation of specific hardware at transient or terminal nodes.

To accommodate the increase of traffic in telecommunication networks, today's optical networks use grooming and wavelength division multiplexing technologies. Packing multiple requests together in the same optical stream requires converting the signal to the electrical domain at each aggregation or disaggregation of traffic, at an origin, a destination, or a bifurcation node. Traffic grooming and routing decisions, along with wavelength assignments, must be optimized to reduce opto-electronic system installation cost. We developed and compared several decomposition approaches to deal with backbone optical networks with relatively few nodes (around 20) but thousands of requests, for which traditional multi-commodity network flow approaches are completely overwhelmed. We also studied the impact of imposing a restriction on the number of optical hops in any request route. We further developed a branch-and-cut approach to a problem that consists in placing sensors on the links of a network at minimum cost.

The Dial-a-Ride Problem is a variant of the pickup and delivery problem with time windows, where user inconvenience must be taken into account. Ride times and customer waiting times are modeled both through constraints and through an associated penalty in the objective function. We developed a column generation approach, dynamically generating feasible vehicle routes. Handling ride time constraints explicitly in the pricing problem solver requires specific developments: our dynamic programming approach to the pricing problem makes use of a heuristic dominance rule and a heuristic enumeration procedure, which in turn implies that our overall branch-and-price procedure is a heuristic. However, in practice, our heuristic solutions are experimentally very close to exact solutions, and our approach is numerically competitive in terms of computation time.

We considered the problem of covering an urban area with sectors under additional constraints. We adapted an aggregation method to our column generation algorithm and focused on the problem of disaggregating the dual solution returned by the aggregated master problem.

We studied several time-dependent formulations for the unit-demand vehicle routing problem. We gave new bounding flow inequalities for a single-commodity flow formulation of the problem, and described their impact by projecting them onto other sets of variables, such as the variables of the Picard and Queyranne formulation or the natural set of design variables. Some inequalities obtained by projection are facet-defining for the polytope associated with the problem. We are now running more numerical experiments in order to validate the practical efficiency of our theoretical results.

We also worked on the p-median problem, applying matching theory to develop an efficient algorithm in Y-free graphs and to provide a simple polyhedral characterization of the problem, and therefore a simple linear formulation, simplifying results from Baiou and Barahona.

We considered the multi-commodity transportation problem. Applications arise, for example, in rail freight service design and "less-than-truckload" trucking, where goods must be delivered between different locations of a transportation network using various kinds of vehicles of large capacity. A particularity here is that, to be profitable, the transportation of goods must be consolidated: goods are not delivered directly from origin to destination, but are transferred from one vehicle to another at intermediate locations. We proposed an original Mixed Integer Programming formulation for this problem which is suitable for solution by a branch-and-price algorithm, along with intelligent primal heuristics based on it.

For the problem of routing freight railcars, we proposed two algorithms based on the column generation approach. These algorithms have been tested on a set of real-life instances from a Russian freight transportation company, and proved faster on these instances than the solution approach currently used by the company.

The RealOpt team has strong experience with exact methods for cutting and packing problems. These problems occur in logistics (loading trucks), industry (wood or steel cutting), and computer science (parallel processor scheduling).
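As a simple illustration of this problem class, the First-Fit Decreasing heuristic for one-dimensional bin packing (a textbook baseline, not one of the team's exact methods) reads:

```python
def first_fit_decreasing(items, capacity):
    """First-Fit Decreasing heuristic for bin packing: sort items by
    non-increasing size, then place each item in the first open bin
    where it fits, opening a new bin when necessary."""
    bins = []
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:                      # no open bin fits this item
            bins.append([size])
    return bins
```

Exact approaches such as branch-and-price use heuristics of this kind to obtain incumbents, while the column generation bound certifies how far they are from the optimum.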

We developed a branch-and-price algorithm for the Bin Packing Problem with Conflicts which improves on the other approaches available in the literature. The algorithm uses our methodological advances, such as the generic branching rule for branch-and-price and the column-based heuristic. One ingredient that contributes to the success of our method is the fast algorithms we developed for solving the subproblem, the Knapsack Problem with Conflicts. Two variants of the subproblem have been considered: with interval and with arbitrary conflict graphs.

We also developed a branch-and-price algorithm for a variant of the bin packing problem where the items are fragile, studying empirically different branching schemes and different algorithms for solving the subproblems.

We studied a variant of the knapsack problem encountered in inventory routing: a multiple-class integer knapsack problem with setups (items are partitioned into classes whose use implies a setup cost and an associated capacity consumption). We showed the extent to which classical results for the knapsack problem can be generalized to this variant with setups, and we developed a specialized branch-and-bound algorithm.

We studied the orthogonal knapsack problem with the help of graph theory. Fekete and
Schepers proposed to model multi-dimensional orthogonal placement problems through an
efficient representation of all geometrically symmetric solutions by a so-called *packing
class* involving one *interval graph* per dimension. Though their framework is very
efficient, we identified several weaknesses in their algorithms, the most obvious being
that they do not take advantage of the different possibilities to represent interval
graphs. We propose to represent these graphs by matrices with the consecutive-ones
property on each row, and we designed a branch-and-bound algorithm for the 2D knapsack
problem that uses our 2D packing feasibility check. We are currently developing exact
optimization tools for glass-cutting problems in a collaboration with Saint-Gobain. These
2D three-stage guillotine-cut problems are very hard to solve given the scale of the
instances we have to deal with. Moreover, one has to issue cutting patterns that avoid the
defects present in the glass sheets used as raw material, and extra sequencing constraints
on the production make the problem even more complex.

We have also organized a European challenge on packing with the company Renault. This challenge was about loading trucks under practical constraints.

Inventory routing problems combine the optimization of product deliveries (or pickups) with inventory control at customer sites. We considered an industrial application where one must construct the planning of single-product pickups over time; each site accumulates stock at a deterministic rate, and the stock is emptied at each visit. We have developed a branch-and-price algorithm where periodic plans are generated for vehicles by solving a multiple-choice knapsack subproblem, and the global planning of customer visits is coordinated by the master program. We previously developed approximate solutions to a related problem combining vehicle routing and planning over a fixed time horizon, solving instances involving up to 6,000 pickups and deliveries over a twenty-day time horizon with specific requirements on the frequency of visits to customers.
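The multiple-choice knapsack subproblem mentioned above admits a simple dynamic program; an illustrative sketch (ours, not the code used in our branch-and-price):

```python
def multiple_choice_knapsack(classes, capacity):
    """Multiple-choice knapsack: pick exactly one (weight, value)
    option from each class so that the total weight is at most
    `capacity` and the total value is maximized. Returns the optimal
    value (or -inf if no selection fits)."""
    NEG = float("-inf")
    dp = [0] * (capacity + 1)          # best value before any class
    for options in classes:
        new = [NEG] * (capacity + 1)
        for w in range(capacity + 1):
            for wo, vo in options:     # must take one option of this class
                if wo <= w and dp[w - wo] != NEG:
                    new[w] = max(new[w], dp[w - wo] + vo)
        dp = new
    return max(dp)
```

The complexity is O(capacity times total number of options), i.e., pseudo-polynomial, which is exactly the kind of tight-but-large substructure that Dantzig-Wolfe decomposition exploits.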

Together with our partner company GAPSO from the associate team SAMBA, we worked on the equipment routing and task scheduling problem arising in port operations. In this problem, a set of tasks must be performed using equipment of different types, with the objective of maximizing the weighted sum of performed tasks.

We participated in a project on airborne radar scheduling, for which we developed fast heuristics and exact algorithms. Substantial research has also been done on machine scheduling problems. A new compact MIP formulation was proposed for a large class of these problems. An exact decomposition algorithm was developed for the NP-hard single-machine problem of maximizing the weighted number of on-time jobs. A dominant class of schedules for malleable parallel jobs was discovered in the NP-hard problem of minimizing the total weighted completion time. We also proved that a special case of the scheduling problem at cross-docking terminals, minimizing the storage cost, is polynomially solvable.

Another application area in which we have successfully developed MIP approaches is tactical production and supply chain planning. We proposed a simple heuristic for challenging multi-echelon problems that makes effective use of a standard MIP solver. We also conducted a detailed investigation of what makes solving the MIP formulations of such problems challenging; it provides a survey of the known methods for strengthening formulations for these applications, and it pinpoints the specific substructure that seems to cause the bottleneck in solving these models. Finally, our results provide demonstrably stronger formulations for some problem classes than any previously proposed. We are now working on planning phytosanitary treatments in vineyards.

We have been developing robust optimization models and methods to deal with a number of applications, like the above, in which uncertainty is involved. We analyzed fundamental MIP models that incorporate uncertainty, exploiting the structure of the stochastic formulations of the problems in order to derive algorithms and strong formulations for these and related problems. These results appear to be the first of their kind for structured stochastic MIP models. In addition, we have engaged in successful research applying such concepts to health-care logistics. We also considered train timetabling problems and their re-optimization after a perturbation in the network. Here the question of formulation is central. The models of the literature are not satisfactory: continuous-time formulations have poor quality due to the presence of discrete decisions (re-sequencing or re-routing), while arc-flow formulations in a time-space graph blow up in size (they can only handle single-line timetabling problems). We have developed a discrete-time formulation that strikes a compromise between these two models. Based on various time and network aggregation strategies, we developed a two-stage approach, solving the continuous-time model after fixing the precedences according to a solution of the discrete-time model.

Currently, we are investigating a real-world planning problem in the domain of energy production, in the context of a collaboration with EDF. The problem consists in scheduling the maintenance periods of nuclear power plants, as well as the production levels of both nuclear and conventional power plants, in order to meet a power demand at minimum total production cost. For this application, we used a Dantzig-Wolfe reformulation which allows us to solve realistic instances of the deterministic version of the problem. In practice, the input data comprises a number of uncertain parameters. We deal with a scenario-based stochastic demand with the help of a Benders decomposition method, and we are working on multistage robust optimization approaches to take into account other uncertain parameters, such as the duration of each maintenance period, in a dynamic optimization framework. The main challenge addressed in this work is the joint management of the different reformulations and solution techniques coming from the deterministic (Dantzig-Wolfe decomposition, due to the large-scale nature of the problem), stochastic (Benders decomposition, due to the number of demand scenarios), and robust (reformulations based on duality and/or column-and-row generation, due to maintenance extension scenarios) components of the problem.

In the context of numerical simulations on high-performance machines, optimizing data
locality and resource usage is very important for faster execution times and lower energy
consumption. This optimization can be seen as a special case of a scheduling problem on
parallel resources, with several challenges. First, instances are typically large: a large
matrix factorization, for example, involves a very large number of elementary tasks.

The team has recruited Aurélien Froger as assistant professor.

Ruslan Sadykov has defended his habilitation (HDR).

A paper was accepted at IPCO, the most prestigious conference in the field.

*A generic Branch-And-Price-And-Cut Code*

Keywords: Column Generation - Branch-and-Price - Branch-and-Cut - Mixed Integer Programming - Mathematical Optimization - Benders Decomposition - Dantzig-Wolfe Decomposition - Extended Formulation

Functional Description: BaPCod is a prototype code that solves Mixed Integer Programs (MIP) by application of reformulation and decomposition techniques. The reformulated problem is solved using branch-and-price-and-cut (column generation) algorithms, Benders approaches, network flow and dynamic programming algorithms. These methods can be combined in several hybrid algorithms to produce exact or approximate solutions (primal solutions with a bound on the deviation from the optimum).

Release Functional Description: An important update to make BaPCod compatible with VRPSolver. Correction of numerous bugs.

Participants: Artur Alves Pessoa, Boris Detienne, Eduardo Uchoa Barboza, Franck Labat, François Clautiaux, Francois Vanderbeck, Halil Sen, Issam Tahiri, Michael Poss, Pierre Pesneau, Romain Leguay and Ruslan Sadykov

Partners: Université de Bordeaux - CNRS - IPB - Universidade Federal Fluminense

Contact: Ruslan Sadykov

URL: https://

*Operation Research Tools Under Julia*

Keywords: Modeling - Processing - Dashboard

Functional Description: This set of tools currently includes: 1) BlockJuMP.jl: an extension of JuMP to model decomposable mathematical programs (using either the Benders or the Dantzig-Wolfe decomposition paradigm); 2) Scanner.jl: a default data parser to ease the reading of input data in the forms often encountered in operations research; 3) BenchmarkUtils.jl: tools to ease the setup of numerical experiments for benchmarking the performance of algorithmic features. The test automation makes it possible to quickly calibrate the parameters of an arbitrary algorithm control function.

Participants: Francois Vanderbeck, Guillaume Marques, Issam Tahiri and Ruslan Sadykov

Contact: Issam Tahiri

Keywords: Scheduling - Task scheduling - StarPU - Heterogeneity - GPGPU - Performance analysis

Functional Description: Analyses post-mortem the behavior of StarPU applications. Provides lower bounds on the makespan. Studies the performance of different schedulers in a simple context. Provides implementations of many scheduling algorithms from the literature.

News Of The Year: Included many new algorithms, in particular online algorithms. Better integration with StarPU by accepting .rec files as input.

Participant: Lionel Eyraud-Dubois

Contact: Lionel Eyraud-Dubois

Publications: Approximation Proofs of a Fast and Efficient List Scheduling Algorithm for Task-Based Runtime Systems on Multicores and GPUs - Fast Approximation Algorithms for Task-Based Runtime Systems

We have developed an approach to solve the temporal knapsack problem (TKP) based on a very large-scale dynamic programming formulation . In this generalization of the classical knapsack problem, selected items enter and leave the knapsack at fixed dates. We solve the TKP with a dynamic program of exponential size, using a method called Successive Sublimation Dynamic Programming (SSDP). This method starts by relaxing a set of constraints from the initial problem, and iteratively reintroduces them when needed. We show that a direct application of SSDP to the temporal knapsack problem does not lead to an effective method, and that several improvements are needed to compete with the best results from the literature.
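
To make the problem itself concrete, here is a minimal brute-force reference solver for a tiny TKP instance (an illustrative enumeration, not the SSDP method described above; the instance data are hypothetical). It exploits the fact that the load only increases at item start dates, so capacity needs to be checked at those dates only.

```python
from itertools import combinations

def temporal_knapsack(items, capacity):
    """Exact solution of a tiny TKP instance by enumeration.
    items: list of (profit, weight, start, end); a selected item uses
    `weight` units of capacity over the half-open interval [start, end)."""
    def feasible(subset):
        # the load only increases at start dates, so checking capacity
        # at each selected item's start date is sufficient
        for _, _, t, _ in (items[i] for i in subset):
            load = sum(items[i][1] for i in subset
                       if items[i][2] <= t < items[i][3])
            if load > capacity:
                return False
        return True

    best = (0, ())
    for r in range(len(items) + 1):
        for subset in combinations(range(len(items)), r):
            if feasible(subset):
                best = max(best, (sum(items[i][0] for i in subset), subset))
    return best

# (profit, weight, start, end) -- hypothetical data
items = [(10, 3, 0, 4), (7, 2, 2, 6), (6, 2, 3, 5), (4, 1, 0, 2)]
print(temporal_knapsack(items, 4))
```

Items whose time intervals do not overlap can share the same capacity, which is exactly what makes the DP state space of exact methods blow up on larger instances.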

We investigated an integrated optimization approach for timetabling and rolling stock rotation planning in the context of passenger railway traffic . Given a set of possible passenger trips, service requirement constraints, and a fleet of multiple heterogeneous self-powered railcars, our method aims at producing a timetable and solving the rolling stock problem in such a way that the use of railcars and the operational costs are minimized. To solve this hard optimization problem, we design a mixed-integer linear programming model based on network flow in a hypergraph. We use this model to effectively handle the constraints related to coupling and decoupling railcars. To reduce the size of the model, we use an aggregation and disaggregation technique combined with reduced-cost filtering. Computational experiments based on several French regional railway traffic case studies show that our method scales successfully to real-life problems.

We have studied a class of two-stage robust binary optimization problems with objective uncertainty where recourse decisions are restricted to be mixed-binary . For these problems, we present a deterministic equivalent formulation through the convexification of the recourse feasible region. We then explore this formulation through the lens of a relaxation, showing that the specific relaxation we propose can be solved using the branch-and-price algorithm. We present conditions under which this relaxation is exact, and describe alternative exact solution methods when this is not the case. Despite the two-stage nature of the problem, we provide NP-completeness results based on our reformulations. Finally, we present various applications in which the methodology we propose can be applied. We compare our exact methodology to the approximate methods recently proposed in the literature under the name K-adaptability. Our computational results show that our methodology is able to produce better solutions in less computational time compared to the K-adaptability approach, as well as to solve larger instances than those previously managed in the literature.

Primal heuristics have become an essential component in mixed integer programming (MIP) solvers. Extending MIP-based heuristics, our study outlines generic procedures to build primal solutions in the context of a branch-and-price approach and reports on their performance. Our heuristic decisions bear on the variables of the Dantzig-Wolfe reformulation, the motivation being to take advantage of a tighter linear programming relaxation than that of the original compact formulation and to benefit from the combinatorial structure embedded in these variables. In , we focus on the so-called diving methods that use re-optimization after each LP rounding. We explore combinations with diversification-intensification paradigms such as limited discrepancy search, sub-MIPing, local branching, and strong branching. The dynamic generation of variables inherent to a column generation approach requires specific adaptations of heuristic paradigms. We manage to use simple strategies to get around these technical issues. Our numerical results on generalized assignment, cutting stock, and vertex coloring problems set new benchmarks, highlighting the performance of diving heuristics as generic procedures in a column generation context and producing better solutions than state-of-the-art specialized heuristics in some cases.
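
The core diving loop (solve the relaxation, round a fractional variable, fix it, re-optimize) can be sketched on a problem whose LP relaxation is solvable greedily, the 0/1 knapsack; this is a simplified stand-in for the Dantzig-Wolfe master problem used in our work, with a hypothetical instance:

```python
def knapsack_lp(values, weights, capacity, fixed):
    """Greedy optimal solution of the 0/1 knapsack LP relaxation,
    with some variables already fixed to 0 or 1 (weights assumed > 0)."""
    x = dict(fixed)
    cap = capacity - sum(weights[i] for i, v in fixed.items() if v == 1)
    if cap < 0:
        return None  # fixings are infeasible
    free = sorted((i for i in range(len(values)) if i not in fixed),
                  key=lambda i: values[i] / weights[i], reverse=True)
    for i in free:  # fill by decreasing value/weight ratio
        frac = min(1.0, cap / weights[i])
        x[i] = frac
        cap -= frac * weights[i]
    return x

def diving_heuristic(values, weights, capacity):
    """Depth-first dive: solve the relaxation, round one fractional
    variable, fix it, re-optimize; stop when the LP solution is integral."""
    fixed = {}
    while True:
        x = knapsack_lp(values, weights, capacity, fixed)
        fractional = [i for i, v in x.items() if 0 < v < 1]
        if not fractional:
            return {i: int(round(v)) for i, v in x.items()}
        i = fractional[0]
        used = sum(weights[j] for j, v in fixed.items() if v == 1)
        # round up if the item still fits with current fixings, else down
        fixed[i] = 1 if used + weights[i] <= capacity else 0

print(diving_heuristic([10, 7, 6, 4], [4, 3, 2, 1], 5))
```

In a branch-and-price context, the relaxation re-solved at each dive step is itself a column-generation LP, which is where the technical issues mentioned above (e.g. regenerating columns compatible with the fixings) come from.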

Major advances were recently obtained in the exact solution of Vehicle Routing Problems (VRPs). Sophisticated Branch-Cut-and-Price (BCP) algorithms for some of the most classical VRP variants now solve many instances with up to a few hundred customers. However, adapting and re-implementing those successful algorithms for other variants can be a very demanding task. In , , , we propose a BCP solver for a generic model that encompasses a wide class of VRPs. It incorporates the key elements found in the best existing VRP algorithms: ng-path relaxation, rank-1 cuts with limited memory, path enumeration, and rounded capacity cuts; all generalized through the new concepts of “packing set” and “elementarity set”. The concepts are also used to derive a branching rule based on accumulated resource consumption and to generalize the Ryan and Foster branching rule. Extensive experiments on several variants show that the generic solver has an excellent overall performance, in many problems being better than the best specialized algorithms. Even some non-VRPs, like bin packing, vector packing and generalized assignment, can be modeled and effectively solved.

The solver is available for download and free academic use at http://

In , , we are interested in the exact solution of the vehicle routing problem with backhauls (VRPB), a classical vehicle routing variant with two types of customers: linehaul (delivery) and backhaul (pickup) ones. We propose two branch-cut-and-price (BCP) algorithms for the VRPB. The first of them follows the traditional approach with one pricing subproblem, whereas the second one exploits the linehaul/backhaul customer partitioning and defines two pricing subproblems. The methods incorporate elements of state-of-the-art BCP algorithms, such as rounded capacity cuts, limited-memory rank-1 cuts, strong branching, route enumeration, arc elimination using reduced costs and dual stabilization. Computational experiments show that the proposed algorithms are capable of obtaining optimal solutions for all existing benchmark instances with up to 200 customers, many of them for the first time. It is observed that the approach involving two pricing subproblems is more efficient computationally than the traditional one. Moreover, new instances are also proposed for which we provide tight bounds. Also, we provide results for benchmark instances of the heterogeneous fixed fleet VRPB and the VRPB with time windows.

In , we consider the standard Capacitated Location-Routing Problem (LRP), which is the combination of two canonical combinatorial optimization problems: the Facility Location Problem (FLP) and the Vehicle Routing Problem (VRP). We have extended the Branch-and-Cut-and-Price algorithm from to solve a Mixed Integer Programming (MIP) formulation with an exponential number of variables. A new family of Route Load Knapsack valid inequalities is proposed to strengthen the formulation. Preliminary results showed that our algorithm could solve to optimality, for the first time, 12 open instances of the most difficult classes of LRP instances.

In the first echelon of the two-echelon stochastic multi-period capacitated location-routing problem (2E-SM-CLRP), one has to decide the number and location of warehouse platforms as well as the intermediate distribution platforms for each period, while fixing the capacity of the links between them. The system must be dimensioned to enable an efficient distribution of goods to customers under a stochastic and time-varying demand. In the second echelon of the 2E-SM-CLRP, the goal is to construct vehicle routes that visit customers from operating distribution platforms. The objective is to minimize the total expected cost. We model this hierarchical decision problem as a two-stage stochastic program with integer recourse. The first stage includes location and capacity decisions to be fixed at each period over the planning horizon, while routing decisions of the second echelon are determined in the recourse problem. In , , we propose a Benders decomposition approach to solve this model. In the proposed approach, the location and capacity decisions are taken by solving the Benders master problem. After these first-stage decisions are fixed, the resulting subproblem is a capacitated vehicle-routing problem with capacitated multiple depots (CVRP-CMD) that is solved by a branch-cut-and-price algorithm. Computational experiments show that instances of realistic size can be solved optimally within reasonable time, and that relevant managerial insights are derived on the behavior of the design decisions under the stochastic multi-period characterization of the planning horizon.

Much of the existing research on electric vehicle routing problems (E-VRPs) assumes that the charging stations (CSs) can simultaneously charge an unlimited number of electric vehicles, but this is not the case. In , we investigate how to model and solve E-VRPs taking into account these capacity restrictions. In particular, we study an E-VRP with non-linear charging functions, multiple charging technologies, en route charging, and variable charging quantities, while explicitly accounting for the capacity of CSs expressed in the number of chargers. We refer to this problem as the E-VRP with non-linear charging functions and capacitated stations (E-VRP-NL-C). This problem advances the E-VRP literature by considering the scheduling of charging operations at each CS. We first introduce two mixed integer linear programming formulations showing how CS capacity constraints can be incorporated into E-VRP models. We then introduce an algorithmic framework for the E-VRP-NL-C that iterates between two main components: a route generator and a solution assembler. The route generator uses an iterated local search algorithm to build a pool of high-quality routes. The solution assembler applies a branch-and-cut algorithm to select a subset of routes from the pool. We report on computational experiments comparing four different assembly strategies on a large and diverse set of instances. Our results show that our algorithm deals with the CS capacity constraints effectively. Furthermore, considering the well-known uncapacitated version of the E-VRP-NL-C, our solution method identifies new best-known solutions for 80 out of 120 instances.
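
The role of the solution assembler can be illustrated on a tiny hypothetical route pool: choose a minimum-cost subset of routes that visits each customer exactly once (a set-partitioning problem, solved here by enumeration rather than by the branch-and-cut algorithm used in the paper):

```python
from itertools import combinations

def assemble(pool, customers):
    """Pick a min-cost subset of pooled routes visiting each customer
    exactly once.  pool: list of (cost, tuple of customers served)."""
    best_cost, best_sel = float("inf"), None
    for r in range(1, len(pool) + 1):
        for sel in combinations(pool, r):
            served = [c for _, route in sel for c in route]
            # exactly-once coverage: no customer missing, none repeated
            if len(served) == len(customers) and set(served) == set(customers):
                total = sum(cost for cost, _ in sel)
                if total < best_cost:
                    best_cost, best_sel = total, sel
    return best_cost, best_sel

# hypothetical route pool: (route cost, customers visited)
pool = [(12, (1, 2)), (9, (3,)), (15, (1, 3)), (8, (2,)), (11, (2, 3))]
print(assemble(pool, (1, 2, 3)))
```

The quality of the assembled solution clearly depends on the diversity of the pool, which is why the route generator above is designed to produce many distinct high-quality routes.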

In the two-dimensional guillotine cutting-stock problem, the objective is to minimize the number of large plates used to cut a list of small rectangles. We consider a variant of this problem, which arises in the glass industry when different bills of order (or batches) are considered consecutively. For practical organisation reasons, leftovers are not reused, except the large one obtained in the last cutting pattern of a batch, which can be reused for the next batch. The problem can be decomposed into an independent problem for each batch. In we focus on the one-batch problem, the objective of which is to minimize the total width of the cutting patterns used. We propose a diving heuristic based on column generation, in which the pricing problem is solved using dynamic programming (DP). This DP generates so-called non-proper columns, i.e. cutting patterns that cannot participate in a feasible integer solution of the problem. We show how to adapt the standard diving heuristic to this “non-proper” case while keeping its effectiveness. We also introduce the partial enumeration technique, which is designed to reduce the number of non-proper patterns in the solution space of the dynamic program. This technique strengthens the lower bounds obtained by column generation and improves the quality of the solutions found by the diving heuristic. Computational results are reported and compared on classical benchmarks from the literature as well as on new instances inspired from glass-industry data. According to these results, variants of the proposed diving heuristic outperform constructive and evolutionary heuristics.
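
The pricing step of such a column-generation scheme can be sketched in its simpler one-dimensional form (the paper deals with two-dimensional guillotine patterns): given the dual values of the master LP, a knapsack-style dynamic program over the plate width returns the cutting pattern with the best total dual value. The data below are hypothetical:

```python
def best_pattern(widths, duals, plate_width):
    """Pricing by dynamic programming: find the 1D cutting pattern with
    maximum total dual value (an unbounded knapsack over the plate width).
    Returns (pattern value, number of copies of each item width)."""
    best = [0.0] * (plate_width + 1)
    choice = [None] * (plate_width + 1)
    for w in range(1, plate_width + 1):
        for i, wi in enumerate(widths):
            if wi <= w and best[w - wi] + duals[i] > best[w]:
                best[w] = best[w - wi] + duals[i]
                choice[w] = i
    pattern, w = [0] * len(widths), plate_width
    while choice[w] is not None:  # walk back to recover the pattern
        pattern[choice[w]] += 1
        w -= widths[choice[w]]
    return best[plate_width], pattern

# item widths, master-LP dual values, plate width -- hypothetical data
print(best_pattern([3, 5, 7], [1.2, 2.1, 2.8], 10))
```

In a standard cutting-stock column-generation loop, the returned pattern would be added to the restricted master whenever its value exceeds the pattern's cost in the master objective, i.e. whenever its reduced cost is negative.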

The bin packing problem with generalized time lags (BPGL) consists of a set of items, each having a positive weight, and a set of precedence constraints with lags between pairs of items, allowing negative and non-negative lags. The items must be packed into the minimum possible number of bins with identical capacity, and the bins must be assigned to time periods satisfying the precedence constraints with lags on the items. In we show a solution strategy using a generic branch-and-price algorithm (implemented in the software platform BaPCod) and applying some problem specific cuts. Our approach outperformed the compact Mixed Integer Programming (MIP) formulation solved by the MIP solver Cplex.
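
A feasibility checker makes the constraints of the BPGL explicit (an illustrative sketch with hypothetical data, not part of the branch-and-price algorithm itself): a solution assigns items to bins and bins to time periods, and must respect both the bin capacities and the generalized lags.

```python
def check_bpgl(bins, periods, lags, capacity, weights):
    """Feasibility check for a BPGL solution.  bins[i] is the bin of
    item i, periods[b] the time period of bin b, and lags is a list of
    (i, j, lag) meaning period(bin of j) >= period(bin of i) + lag,
    where lag may be negative or non-negative."""
    # every bin must respect the common capacity
    load = {}
    for i, b in enumerate(bins):
        load[b] = load.get(b, 0) + weights[i]
    if any(l > capacity for l in load.values()):
        return False
    # generalized precedence constraints with lags
    return all(periods[bins[j]] >= periods[bins[i]] + lag
               for i, j, lag in lags)

# items 0, 1, 2 with weights 3, 2, 2; item 0 alone in bin 0 (period 0),
# items 1 and 2 together in bin 1 (period 1); lag (0, 2, 1) forces item
# 2's bin at least one period after item 0's bin
print(check_bpgl([0, 1, 1], {0: 0, 1: 1}, [(0, 2, 1), (1, 0, -1)], 4, [3, 2, 2]))
```

Negative lags are what makes the constraints "generalized": they bound how much later one item may be packed relative to another, not only how much earlier.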

ISP networks are taking a leap forward thanks to emerging technologies such as Software Defined Networking (SDN) and Network Function Virtualization (NFV). Efficient algorithms, considered too hard to put into practice on legacy networks, now have a second chance to be considered. In this context, we rethink the ISP network dimensioning problem with protection against Shared Risk Link Group (SRLG) failures. In , , , we consider a path-based protection scheme with a global rerouting strategy, in which, for each failure situation, we may have a new routing of all the demands. Our optimization task is to minimize the needed amount of bandwidth. After discussing the hardness of the problem, we develop two scalable mathematical models that we handle using both Column Generation and Benders Decomposition techniques. Through extensive simulations on real-world IP network topologies and on randomly generated instances, we show the effectiveness of our methods. Finally, our implementation in OpenDaylight demonstrates the feasibility of the approach, and its evaluation with Mininet shows that technical implementation choices may have a dramatic impact on the time needed to reestablish the flows after a failure takes place.

We have an on-going contract with SNCF on scheduling of rolling-stock. The PhD thesis of Mohamed Benkirane is part of this contract.

Following the PhD thesis of Rodolphe Griset, our collaboration with EDF has continued through a new contract within Inria Tech. Its goal is to investigate the possibility of developing an operational prototype (called Fenix) for the strategic planning of nuclear plant outages. Two scientific questions are raised. The first one concerns the new mechanisms for managing the power capacity market on the French power grid. The second one is about a new model of the stock variation during a refueling operation, which requires information from several previous production campaigns.

We also have a contract with RTE to develop strategies inspired from stochastic gradient methods to speed-up Benders' decomposition. The PhD thesis of Xavier Blanchot is part of this contract.

We have a contract with Thales Avionique to study a robust scheduling problem.

**SysNum Cluster**
SysNum is a Cluster of Excellence of Bordeaux Idex that aims at bringing Bordeaux academic players in the digital sciences closer to each other around large-scale distributed digital systems. The cluster is organized around 4 methodological axes (Interconnected object systems; Reliability and safety; Modeling and numerical systems;
Massive and heterogeneous data) and 3 application platforms around major societal issues (ecology, mobile systems, interconnected objects and data analysis).

François Clautiaux is leading the methodological WP on Interconnected object systems. Understanding and controlling the complexity of systems of interconnected objects is a major challenge for both industrial and everyday-life applications. We are thinking, in particular, of fields such as robotics, the car industry, energy distribution, and smart buildings, where it is essential to handle autonomous heterogeneous objects and to develop robust control tools to optimize their interconnections. Our research in this direction will be developed within three interconnected tasks.

Orlando Rivera Letelier is pursuing a co-tutelle thesis (with Universidad Adolfo Ibáñez, Peñalolén, Santiago, Chile)

We continue close collaboration with the LOGIS laboratory (Universidade Federal Fluminense, Niteroi, Brazil) after the end of the Inria Associate Team SAMBA.

Eduardo Uchoa visited the team in April 2019 for one week.

Emir Démirovic (University of Melbourne, Australia) visited the team in July for one week.

Isaac Cleland (University of Auckland, New Zealand) visited the team in July for one week.

Guillaume Marques spent 3 months at Universidade Federal Fluminense, Niteroi, Brazil (August-November 2019), financed by a mobility grant from IdEx Bordeaux.

François Clautiaux: Conférence Dataquitaine (around 300 participants) in Bordeaux, February 2019

François Clautiaux: Workshop on Integer Programming and Algorithms in Marne-La-Vallée, November 2019

François Clautiaux: Conference ROADEF 2019, Le Havre

Pierre Pesneau: INOC 2019, in Avignon

François Clautiaux: editor for the Open Journal on Mathematical Optimization (OJMO)

François Clautiaux: European Journal of Operational Research, Discrete Applied Mathematics, Discrete Optimization, International Transactions in Operational Research

Aurélien Froger: European Journal of Operational Research, Transportation Science, Computers & Operations Research, Journal of Heuristics

Ruslan Sadykov: Mathematical Programming, Transportation Science, International Transactions in Operational Research, ACM Transactions on Parallel Computing, Integer Programming and Combinatorial Optimization conference

François Clautiaux: Invited talk at the ESICUP Workshop, in Mexico (April 10th, 2019)

Aurélien Froger: Invited talk at the 2nd International Workshop on Synchronisation in Transport, SynchroTrans 2019, in Nantes (September 10th, 2019)

Ruslan Sadykov: Invited talk at the 9th International Network Optimization conference, invited talk at the POC Autumn School on Advanced BCP Tools

François Clautiaux has been elected president of the French O.R. association, ROADEF.

François Clautiaux has been expert for HCERES

François Clautiaux has been expert for the Flander's Innovation and Entrepreneurship Agency.

Licence : François Clautiaux, Projet d'optimisation, L3, Université de Bordeaux, France

Licence : François Clautiaux, Grands domaines de l'optimisation, L1, Université de Bordeaux, France

Master : François Clautiaux, Introduction à la programmation en variables entières, M1, Université de Bordeaux, France

Master : François Clautiaux, Integer Programming, M2, Université de Bordeaux, France

Master : François Clautiaux, Algorithmes pour l'optimisation en nombres entiers, M1, Université de Bordeaux, France

Master : François Clautiaux, Programmation linéaire, M1, Université de Bordeaux, France

Master: Boris Detienne, Combinatoire et routage, ENSEIRB INPB

Licence : Boris Detienne, Optimisation, L2, Université de Bordeaux

Licence : Boris Detienne, Groupe de travail applicatif, L3, Université de Bordeaux

Master : Boris Detienne, Optimisation continue, M1, Université de Bordeaux

Master : Boris Detienne, Integer Programming, M2, Université de Bordeaux

Master : Boris Detienne, Optimisation dans l'incertain, M2, Université de Bordeaux

Licence : Aurélien Froger, Groupe de travail applicatif, L3, Université de Bordeaux, France

Master : Aurélien Froger, Optimisation dans les graphes, M1, Université de Bordeaux, France

Master : Aurélien Froger, Gestion des opérations et planification de la production, M2, Université de Bordeaux, France

Master : Ruslan Sadykov, Introduction to Constraint Programming, M2, Université de Bordeaux, France

Licence : Pierre Pesneau, Grands domaines de l'optimisation, L1, Université de Bordeaux, France

Licence : Pierre Pesneau, Programmation pour le calcul scientifique, L2, Université de Bordeaux, France

Licence : Pierre Pesneau, Optimisation, L2, Université de Bordeaux, France

DUT : Pierre Pesneau, Recherche Opérationnelle, DUT Informatique 2ème année, Université de Bordeaux, France

Master : Pierre Pesneau, Algorithmique et Programmation 1, M1, Université de Bordeaux, France

Master : Pierre Pesneau, Algorithmique et Programmation 2, M1, Université de Bordeaux, France

Master : Pierre Pesneau, Programmation linéaire, M1, Université de Bordeaux, France

Master : Pierre Pesneau, Integer Programming, M2, Université de Bordeaux, France

HdR : Ruslan Sadykov, Modern Branch-Cut-and-Price, Université de Bordeaux, 4/12/2019.

PhD : Imen Ben Mohamed, Designing Two-Echelon Distribution Networks under Uncertainty, Université de Bordeaux, 27/05/2019, Walid Klibi (dir), Ruslan Sadykov (dir), François Vanderbeck (co-dir).

PhD in progress : Alena Shilova, Scheduling for Deep Learning Frameworks from October 2018, Olivier Beaumont (dir) and Alexis Joly (dir)

PhD in progress: Tobias Castanet, Use of Replication in Distributed Games from September 2018, Olivier Beaumont (dir), Nicolas Hanusse (dir) and Corentin Travers (dir).

PhD in progress : Guillaume Marques, Planification de tournées de véhicules avec transbordement en logistique urbaine : approches basées sur les méthodes exactes de l'optimisation mathématique, from September 2017, Ruslan Sadykov (dir)

PhD in progress : Gaël Guillot, Aggregation and disaggregation methods for hard combinatorial problems, from November 2017, François Clautiaux (dir) and Boris Detienne (dir).

PhD in progress : Orlando Rivera Letelier, Bin Packing Problem with Generalized Time Lags, from May 2018, François Clautiaux (dir) and Ruslan Sadykov (co-dir), a co-tutelle with Universidad Adolfo Ibáñez, Peñalolén, Santiago, Chile.

PhD in progress: Mohamed Benkirane, "Optimisation des moyens dans la recomposition commerciale de dessertes TER" from November 2016, François Clautiaux (dir), Boris Detienne (dir)

PhD in progress: Xavier Blanchot, "Accélération de la Décomposition de Benders à l'aide du Machine Learning : Application à de grands problèmes d'optimisation stochastique two-stage pour les réseaux d'électricité" from September 2019, François Clautiaux (dir), Aurélien Froger (co-dir)

PhD in progress: Johan Levêque, "Conception de réseaux de distributions urbains mutualisées en mode doux", from September 2018, François Clautiaux (dir), Gautier Stauffer (co-dir)

François Clautiaux: external referee for the thesis of Arthur Kramer (Bologna); referee for the thesis of Arnaud Lazare (Université Paris Saclay); jury member for Simon Bélières (Université de Toulouse) and Imen Ben Mohamed (Université de Bordeaux); jury member for the habilitation of Ruslan Sadykov (Université de Bordeaux)

Boris Detienne: jury member for Ikram Bouras (Université de Montpellier) and Imen Ben Mohamed (Université de Bordeaux)

Ruslan Sadykov: member of the selection committee for a Maître de Conférences position (Université de Bordeaux)

Local events: Participation in the "Journée emploi maths et interaction 2019". This day aims to bring together students, researchers, and practitioners in mathematics in the Bordeaux area. https://

Participation in the Circuit Scientifique Bordelais (Fête de la Science)

Participation in the 80th anniversary of CNRS