Commands is a team with a global view of dynamic optimization in its various aspects: trajectory optimization, geometric control, deterministic and stochastic optimal control, stochastic programming, dynamic programming and the Hamilton-Jacobi-Bellman approach.
Our aim is to derive new and powerful algorithms for solving these various problems numerically, with applications in several industrial fields. While the numerical aspects are the core of our approach, it happens that the study of convergence of these algorithms and the verification of their well-posedness and accuracy raise interesting and difficult theoretical questions, such as, for trajectory optimization: qualification conditions and second-order optimality conditions, well-posedness of the shooting algorithm, estimates for discretization errors; for the Hamilton-Jacobi-Bellman approach: accuracy estimates, strong uniqueness principles when state constraints are present; for stochastic programming problems: sensitivity w.r.t. the probability laws, formulation of risk measures.
For many years the team members have been deeply involved in various industrial applications. The Commands team itself has dealt since its foundation in 2007 with two main types of applications:
Space vehicle trajectories, in collaboration with CNES, the French space agency,
Production, management, storage and trading of energy resources (in collaboration with EDF, GDF and TOTAL).
We give more details in the Application domain section.
In the framework of our research with CNES in fast numerical methods for solving Hamilton-Jacobi-Bellman equations, we were able to build an efficient numerical code for optimizing the trajectory of the European launcher Ariane 5, with maximal payload and under a structural constraint on dynamic pressure.
For deterministic state-constrained optimal control problems we were able to provide a better understanding of the well-posedness and numerical properties of the shooting algorithm . This algorithm has been applied to the optimization of an atmospheric reentry problem in .
For deterministic optimal control we will distinguish two approaches: trajectory optimization, in which the object under consideration is a single trajectory, and the Hamilton-Jacobi-Bellman approach, based on the dynamic programming principle, in which a family of optimal control problems is solved.
The roots of deterministic optimal control are the “classical” theory of the calculus of variations, illustrated by the work of Newton, Bernoulli, Euler, and Lagrange (whose famous multipliers were introduced in ), with improvements due to the “Chicago school” (Bliss) during the first part of the 20th century, and by the notions of relaxed problem and generalized solution (Young ).
Trajectory optimization really started with the spectacular achievement of Pontryagin's group during the fifties, stating, for general optimal control problems, nonlocal optimality conditions that generalize those of Weierstrass. This motivated the application to many industrial problems (see the classical books by Bryson and Ho , Leitmann , Lee and Markus , Ioffe and Tihomirov ). Since then, various theoretical achievements have been obtained by extending the results to nonsmooth problems, see Aubin , Clarke , Ekeland . Substantial improvements were also obtained by using tools of differential geometry, which concern a precise understanding of optimal syntheses in low dimension for large classes of nonlinear control systems, see Bonnard, Faubourg and Trélat .
Overviews of numerical methods for trajectory optimization are provided in Pesch , Betts . We follow here the classical presentation that distinguishes between direct and indirect methods.
Dynamic programming was introduced and systematically studied by R. Bellman during the fifties. The HJB equation, whose solution is the value function of the (parameterized) optimal control problem, is a variant of the classical Hamilton-Jacobi equation of mechanics for the case of dynamics parameterized by a control variable. It may be viewed as a differential form of the dynamic programming principle. This nonlinear first-order PDE appears to be well-posed in the framework of viscosity solutions introduced by Crandall and Lions , , . These tools also allow one to perform the numerical analysis of discretization schemes. The theoretical contributions in this direction have not ceased growing, see the books by Barles and Bardi and Capuzzo-Dolcetta .
An interesting byproduct of the HJB approach is an expression of the optimal control in feedback form. Also, it reaches the global optimum, whereas trajectory optimization algorithms are of local nature. A major difficulty when solving the HJB equation is the high cost for a large dimension n of the state (complexity is exponential with respect to n).
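To make the dynamic programming principle and this complexity barrier concrete, here is a minimal value iteration sketch (a toy problem of our own, not a team code): a discounted problem with dynamics x' = u, |u| <= 1, and running cost x^2 + u^2, solved on a one-dimensional grid. In dimension n, the same grid would require N**n points.

```python
import numpy as np

# Toy discounted dynamic programming problem: dynamics x' = u, |u| <= 1,
# running cost x^2 + u^2, on [-1, 1].  All constants are illustrative.
N, dt, gamma = 101, 0.05, 0.95        # grid size, time step, discount factor
xs = np.linspace(-1.0, 1.0, N)
us = np.array([-1.0, 0.0, 1.0])       # crude control discretization
V = np.zeros(N)
for _ in range(500):                  # Bellman fixed-point iteration
    Vnew = np.empty(N)
    for i, x in enumerate(xs):
        best = np.inf
        for u in us:
            xn = np.clip(x + dt * u, -1.0, 1.0)            # next state
            q = dt * (x * x + u * u) + gamma * np.interp(xn, xs, V)
            best = min(best, q)
        Vnew[i] = best
    if np.max(np.abs(Vnew - V)) < 1e-10:
        break
    V = Vnew
# V is symmetric, vanishes at x = 0 and grows with |x|
```

The loop over grid points is what explodes with the dimension: in dimension n the state loop runs over N**n points, which is the curse of dimensionality mentioned above.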
The so-called direct methods consist in optimizing the trajectory, after having discretized time, by a nonlinear programming solver that possibly takes into account the dynamic structure. So the two main problems are the choice of the discretization and of the nonlinear programming algorithm. A third problem is the possibility of refining the discretization after solving on a coarser grid.
Many authors prefer a coarse discretization for the control variables (typically constant or piecewise linear on each time step) and a higher-order discretization for the state equation. The idea is both to have an accurate discretization of the dynamics (since otherwise the numerical solution may be meaningless) and to obtain a small-scale resulting nonlinear programming problem. See e.g. Kraft . A typical situation is when a few dozen time steps are enough and there are no more than five controls, so that the resulting NLP has at most a few hundred unknowns and can be solved using dense matrix software. On the other hand, the error order (assuming the problem to be unconstrained) is governed by the (poor) control discretization. Note that the integration scheme does not need to be specified (provided it allows computing functions and gradients with enough precision), and hence general Ordinary Differential Equation integrators may be used.
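As a concrete sketch of such a coarse direct transcription (the problem, penalty weight and solver choice are ours, purely for illustration): a double integrator x' = v, v' = u is steered from (0, 0) towards (1, 0) on [0, 1], with piecewise-constant controls, an explicit Euler scheme for the state, and a generic dense NLP solver; the terminal condition is handled by a quadratic penalty.

```python
import numpy as np
from scipy.optimize import minimize

# Direct method sketch: 20 piecewise-constant controls, Euler integration
# of the double integrator, quadratic penalty of weight W on the terminal
# condition (x(1), v(1)) = (1, 0).  All numbers are illustrative.
M, T, W = 20, 1.0, 100.0              # number of steps, horizon, penalty
h = T / M

def cost(u):
    x = v = 0.0
    for k in range(M):                # forward simulation (Euler scheme)
        x += h * v
        v += h * u[k]
    return h * np.sum(u * u) + W * ((x - 1.0) ** 2 + v * v)

res = minimize(cost, np.zeros(M), method="BFGS")   # small, dense NLP
```

The optimal value stays close to the minimal energy of the continuous problem with an exact terminal constraint (which is 12 here), the gap coming from the penalty relaxation and the coarse Euler/control discretization, as discussed above.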
On the other hand, a full discretization (i.e., in the context of Runge-Kutta methods, with different values of the control for each inner substep of the scheme) allows one to obtain higher orders that can be effectively computed, see Hager , Bonnans , this being related to the theory of partitioned Runge-Kutta schemes, Hairer et al. . In an interior-point algorithm context, controls can be eliminated and the resulting system of equations is easily solved due to its band structure. Discretization errors due to constraints are discussed in Dontchev et al. . See also Malanowski et al. .
For large horizon problems, integrating from the initial time to the final time may be impossible (finding a feasible point can be very hard!). Analogously to the multiple shooting algorithm of the indirect method, a possibility is to add to the control variables, as optimization parameters, the state variables at a given set of times, subject of course to “sticking” constraints. Note that once more the integration scheme does not need to be specified. Integration of the ODE can be performed in parallel. See Bock .
Recent proposals were made of methods based on a reformulation of the problem in terms of (possibly flat) output variables. By definition, control and state variables are combinations of derivatives of these output variables. When the latter are represented on a basis of smooth functions such as polynomials, their derivatives are given as linear combinations of the coefficients, and so the need for integration is avoided. One must of course take care of the possibly complicated expression of the constraints, which can cause numerical difficulties. The numerical analysis of these methods seems largely open. See on this subject Petit, Milam and Murray .
The collocation approach for solving an ODE consists in a polynomial interpolation of the dynamic variable, the dynamic equation being enforced only at a limited number of points (equal to the degree of the polynomial). Collocation can also be performed on each time step of a one-step method; it can be checked that collocation methods are a particular case of Runge-Kutta methods.
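This equivalence with Runge-Kutta methods can be checked by hand with a single Gauss collocation point; the sketch below (our notation) recovers the implicit midpoint rule on the scalar test equation y' = a*y.

```python
# One-point Gauss collocation (node c = 1/2) on y' = a*y over a step h.
# The collocating polynomial p is linear: p(t0) = y0 and the ODE is
# enforced at the midpoint, p'(t0 + h/2) = a * p(t0 + h/2).  Solving for
# the slope k reproduces the implicit midpoint Runge-Kutta step.
a, h, y0 = -2.0, 0.1, 1.0
# p(t) = y0 + (t - t0) * k  =>  collocation condition: k = a*(y0 + h/2*k)
k = a * y0 / (1.0 - a * h / 2.0)
y1_collocation = y0 + h * k
# implicit midpoint rule written in standard Runge-Kutta form
y1_midpoint = y0 * (1.0 + a * h / 2.0) / (1.0 - a * h / 2.0)
# both expressions agree (up to rounding): the two schemes are identical
```

The same computation with s Gauss points gives the s-stage Gauss collocation methods, of Runge-Kutta order 2s.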
It is known that polynomial interpolation with equidistant points is unstable for more than about 20 points, and that the Tchebycheff points should be preferred, see e.g. Section 5.2.6. Nevertheless, several papers suggested the use of pseudospectral methods, Ross and Fahroo , in which a single (over time) high-order polynomial approximation is used for the control and state. Therefore pseudospectral methods should not be used in the case of a nonsmooth (e.g. discontinuous) control.
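This instability is easy to reproduce numerically; the check below (our own illustration) interpolates Runge's function 1/(1 + 25x^2) by a degree-20 polynomial on both node families.

```python
import numpy as np

# Degree-20 interpolation of Runge's function on equidistant versus
# Tchebycheff points; the interpolants are built by solving the
# Vandermonde system, which is adequate at this moderate size.
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
n = 21
x_eq = np.linspace(-1.0, 1.0, n)
x_ch = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))  # Tchebycheff nodes

def interp_error(nodes):
    coef = np.linalg.solve(np.vander(nodes), f(nodes))   # interpolant
    xf = np.linspace(-1.0, 1.0, 2001)
    return np.max(np.abs(np.polyval(coef, xf) - f(xf)))

err_eq, err_ch = interp_error(x_eq), interp_error(x_ch)
# err_eq is huge (oscillations near the endpoints), err_ch is small
```

The equidistant interpolant oscillates wildly near the endpoints (the Runge phenomenon), while the Tchebycheff interpolant is uniformly accurate, which is exactly the point made above.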
In view of model and data uncertainties, there is a need for robust solutions. Robust optimization has been a subject of increasing importance in recent years; see Ben-Tal and Nemirovski . For dynamic problems, taking the worst case of the perturbation at each time step may be too conservative. Specific remedies have been proposed in specific contexts, see Ben-Tal et al. , Diehl and Björnberg .
A relatively simple method taking into account robustness, applicable to optimal control problems, was proposed in Diehl, Bock and Kostina .
The dominant nonlinear programming algorithms (for optimal control problems as well as for other fields) have been successively the augmented Lagrangian approach (1969, due to Hestenes and Powell , see also Bertsekas ), successive quadratic programming (SQP, late seventies, due to , ), and interior-point algorithms since 1984, Karmarkar . See the general textbooks on nonlinear programming , , .
When ordered by time, the optimality system has a “band structure”. One can easily take advantage of this with interior-point algorithms, whereas it is not so easy for SQP methods; see Berend et al. . There exist some very reliable SQP software packages such as SNOPT, some of them dedicated to optimal control problems, Betts , as well as robust interior-point software, see Morales et al. , Wächter and Biegler , and for applications to optimal control Jockenhövel et al. .
We have developed a general SQP algorithm , for sparse nonlinear programming problems, and the associated software for optimal control problems; it has been applied to atmospheric reentry problems, in collaboration with CNES .
More recently, in collaboration with CNES and ONERA, we have developed a sparse interior-point algorithm with an embedded refinement procedure. The resulting TOPAZE code has been applied to various space trajectory problems , , . The method takes advantage of the analysis of discretization errors, and is well understood for unconstrained problems .
The indirect approach eliminates control variables using Pontryagin's maximum principle, and solves the two-point boundary value problem (with state and costate as differential variables) by a single or multiple shooting method. The questions here are the choice of a discretization scheme for the integration of the boundary value problem, of a (possibly globalized) Newton-type algorithm for solving the resulting finite-dimensional problem in IR^n (n is the number of state variables), and a methodology for finding an initial point.
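A minimal single-shooting sketch, on a toy problem of our own where the maximum principle gives the control in closed form: minimize the integral of u^2/2 for the double integrator x' = v, v' = u, from (1, 0) to (0, 0) on [0, 1]. The Hamiltonian yields u = -p_v, with costate equations p_x' = 0, p_v' = -p_x; the shooting unknown is the initial costate.

```python
import numpy as np

# Single shooting: integrate the state-costate system from a guessed
# initial costate (p_x0, p_v0) and adjust it by Newton's method so that
# the terminal condition x(1) = v(1) = 0 holds.
def terminal_gap(p0, steps=100):
    h = 1.0 / steps
    y = np.array([1.0, 0.0, p0[0], p0[1]])    # (x, v, p_x, p_v)
    def rhs(y):
        x, v, px, pv = y
        return np.array([v, -pv, 0.0, -px])   # optimal control u = -p_v
    for _ in range(steps):                    # classical RK4 integration
        k1 = rhs(y); k2 = rhs(y + h/2*k1)
        k3 = rhs(y + h/2*k2); k4 = rhs(y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return y[:2]                              # should vanish at the solution

p = np.zeros(2)                               # initial costate guess
for _ in range(10):                           # Newton with FD Jacobian
    F = terminal_gap(p)
    if np.linalg.norm(F) < 1e-10:
        break
    J = np.empty((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = 1e-6
        J[:, j] = (terminal_gap(p + e) - F) / 1e-6
    p = p - np.linalg.solve(J, F)
# converges to the exact initial costate (p_x0, p_v0) = (12, 6)
```

This problem is linear-quadratic, so the shooting map is affine and Newton converges immediately; the difficulties discussed below (stiffness, discontinuities, initialization) appear as soon as the problem is nonlinear or constrained.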
The choice of the discretization scheme for the numerical integration of the boundary value problem can have a great impact on the convergence of the method. First, the integration itself can be tricky. If the state equation is stiff (the linearized system has fast modes), then the state-costate system has both fast and unstable modes. Also, discontinuities of the control or of its derivative, due to commutations or changes in the set of active constraints, lead to the use of sophisticated variable-step integrators and/or switching detection mechanisms, see Hairer et al. , . Another point is the computation of gradients for the Newton method, for which basic finite differences can give inaccurate results with variable-step integrators (see Bock ). This difficulty can be treated in several ways, such as the so-called “internal differentiation” or the use of variational equations, see Gergaud and Martinon .
Most optimal control problems include control and state constraints. In that case, the formulation of the TPBVP must take into account entry and exit times of boundary arcs for these constraints, and (for state constraints of order at least two) times of touch points (isolated contact points). In addition, for state-constrained problems, the so-called “alternative formulation” (that allows one to eliminate the “algebraic” variables, i.e. control and state, from the algebraic constraints) has to be used, see Hartl, Sethi and Vickson .
Another interesting point is the presence of singular arcs, appearing for instance when the control enters the system dynamics and cost function in a linear way, which is common in practical applications. As for state constraints, the formulation of the boundary value problem must take these singular arcs into account; over them, the expression of the optimal control typically involves higher derivatives of the Hamiltonian, see Goh and Robbins .
As mentioned before, finding a suitable initial point can be extremely difficult for indirect methods, due to the small convergence radius of the Newton-type method used to solve the boundary value problem. Homotopy methods are an effective way to address this issue, starting from the solution of an easier problem to obtain a solution of the target problem (see Allgower and Georg ). It is sometimes possible to combine the homotopy approach with the Newton method used for the shooting, see Deuflhard .
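The mechanism can be sketched on a scalar equation (an illustrative example of ours, not one of the cited algorithms): to solve the "target" equation g(x) = x^3 - 2x - 5 = 0, deform the trivially solvable equation x - 1 = 0 into it, and warm-start Newton's method at each homotopy step.

```python
# Homotopy/continuation sketch: H(x, lam) = (1 - lam)*(x - 1) + lam*g(x),
# with lam driven from 0 (easy problem) to 1 (target problem).
def g(x):  return x**3 - 2.0*x - 5.0
def dg(x): return 3.0*x**2 - 2.0

x = 1.0                                   # solves the easy problem (lam = 0)
for k in range(1, 11):
    lam = k / 10.0
    for _ in range(20):                   # Newton on H(., lam), warm-started
        H  = (1.0 - lam) * (x - 1.0) + lam * g(x)
        dH = (1.0 - lam) + lam * dg(x)
        step = H / dH
        x -= step
        if abs(step) < 1e-12:
            break
# x is now the root of the target equation (about 2.0946)
```

Each Newton solve starts close to its solution because the previous homotopy step provides a good initial point, which is exactly what makes continuation effective for shooting problems with small convergence radii.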
With a given value of the initial costate are associated (through an integration of the reduced state-costate system) a control and a state, and therefore a cost function. The latter can therefore be minimized by ad hoc minimization algorithms, see Dixon and Bartholomew-Biggs . The advantage of this point of view is the possibility of using various descent methods in order to avoid convergence to a local maximum or saddle-point. The extension of this approach to constrained problems (especially in the case of state constraints) is an open and difficult question.
We have recently clarified under which properties shooting algorithms are well-posed in the presence of state constraints. The (difficult) analysis was carried out in , . A related homotopy algorithm, restricted to the case of a single first-order state constraint, has been proposed in .
We also conducted a study of optimal trajectories with singular arcs for space launcher problems. The results obtained for the generalized three-dimensional Goddard problem (see ) have been successfully adapted for the numerical solution of a realistic launcher model (Ariane 5 class).
Furthermore, we continue to investigate the effects of the numerical integration of the boundary value problem and the accurate computation of Jacobians on the convergence of the shooting method. As initiated in , we focus more specifically on the handling of discontinuities, with ongoing work on the geometric integration aspects (Hamiltonian conservation).
Geometric approaches succeeded in giving a precise description of the structure of optimal trajectories, as well as clarifying related questions. For instance, there have been many works aiming to describe geometrically the set of attainable points, by many authors (Krener, Schättler, Bressan, Sussmann, Bonnard, Kupka, Ekeland, Agrachev, Sigalotti, etc). It has been proved, in particular, by Krener and Schättler that, for generic single-input control-affine systems in IR^3, where the control satisfies the constraint |u| ≤ 1, the boundary of the accessible set in small time consists of the surfaces generated by the trajectories x_+ x_- and x_- x_+, where x_+ (resp. x_-) is an arc corresponding to the control u = 1 (resp. u = -1); moreover, every point inside the accessible set can be reached with a trajectory of the form x_- x_+ x_- or x_+ x_- x_+. It follows that minimal time trajectories of generic single-input control-affine systems in IR^3 are locally of the form x_- x_+ x_- or x_+ x_- x_+, i.e., are bang-bang with at most two switchings.
This kind of result has recently been slightly improved by Agrachev and Sigalotti, although they do not take into account possible state constraints.
In , we have extended that kind of result to the case of state constraints: we described a complete classification, in terms of the geometry (Lie configuration) of the system, of local minimal time syntheses, in dimension two and three. This theoretical study was motivated by the problem of atmospheric reentry posed by the CNES, and in , we showed how to apply this theory to this concrete problem, thus obtaining the precise structure of the optimal trajectory.
This approach consists in calculating the value function associated with the optimal control problem, and then synthesizing the feedback control and the optimal trajectory using Pontryagin's principle. The method has the notable advantage of directly reaching the global optimum, which can be very interesting when the problem is not convex.
From the dynamic programming principle, we derive a characterization of the value function as a solution (in the viscosity sense) of a Hamilton-Jacobi-Bellman equation, which is a nonlinear PDE of dimension equal to the number n of state variables. Since the pioneering works of Crandall and Lions , , , many theoretical contributions have been carried out, allowing an understanding of the properties of the value function as well as of the set of admissible trajectories. However, an important effort remains to be made in the development of effective and adapted numerical tools, mainly because of the numerical complexity (which is exponential with respect to n).
Several numerical schemes have already been studied to treat the case when the solution of the HJB equation (the value function) is continuous. Let us quote for example the semi-Lagrangian methods , studied by the team of M. Falcone (La Sapienza, Rome), the high-order WENO, ENO and discontinuous Galerkin schemes introduced by S. Osher, C.W. Shu, E. Harten , , , , and also the schemes on non-regular grids by R. Abgrall , . All these schemes rely on finite differences and/or interpolation techniques, which introduce numerical diffusion. Hence, the numerical solution is unsatisfactory for long time approximations, even in the continuous case.
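For the continuous case, here is a minimal semi-Lagrangian sketch (a toy problem of our own) on a 1-D discounted minimum-time problem whose exact value function is 1 - exp(-|x|): dynamics x' = u with |u| <= 1, target {0}, discount rate 1.

```python
import numpy as np

# Semi-Lagrangian scheme: v(x) = min_u [ dt + exp(-dt) * v(x + dt*u) ],
# with v = 0 on the target and linear interpolation of v at the foot of
# the characteristic.  Grid chosen so that dx = dt (feet land on nodes).
N, dt = 201, 0.01
xs = np.linspace(-1.0, 1.0, N)
v = np.zeros(N)
i0 = N // 2                                # index of the target x = 0
for _ in range(3000):                      # fixed-point iteration
    foot_l = np.interp(np.clip(xs - dt, -1, 1), xs, v)   # control u = -1
    foot_r = np.interp(np.clip(xs + dt, -1, 1), xs, v)   # control u = +1
    v = dt + np.exp(-dt) * np.minimum(foot_l, foot_r)
    v[i0] = 0.0                            # boundary condition on the target
exact = 1.0 - np.exp(-np.abs(xs))
# max|v - exact| is a few 1e-3 with this step size
```

Because dx = dt, the characteristic feet coincide with grid nodes and no interpolation error is incurred; on general grids the interpolation is precisely the source of the numerical diffusion criticized above.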
In a realistic optimal control problem, there are often constraints on the state (reaching a target, restricting the state of the system to an acceptable domain, ...). When some controllability assumptions are not satisfied, the value function associated with such a problem is discontinuous, and the region of discontinuity is of great importance since it separates the zone of admissible trajectories from the non-admissible zone.
In this case, it is not reasonable to use the usual numerical schemes (based on finite differences) for solving the HJB equation. Indeed, these schemes provide poor approximation quality because of the numerical diffusion.
Discrete approximations of the Hamilton-Jacobi equation for an optimal control problem of a differential-algebraic system were studied in .
Numerical methods for the HJB equation in a bilevel optimization scheme, where the upper-level variables are design parameters, were used in . The algorithm has been applied to the parametric optimization of hybrid car engines.
Within the framework of the thesis of N. Megdich, we have studied new anti-diffusive schemes for HJB equations with discontinuous data , . One of these schemes is based on the UltraBee algorithm proposed, in the case of the advection equation with constant velocity, by Roe and recently revisited by Després and Lagoutière , . The numerical results on several academic problems show the relevance of the anti-diffusive schemes. However, the theoretical study of convergence is a difficult question and has only partially been done.
Optimal stochastic control problems occur when the dynamical system is uncertain. A decision typically has to be taken at each time, while realizations of future events are unknown (but some information is given on their probability distribution). In particular, problems of economic nature deal with large uncertainties (on prices, production and demand). Specific examples are portfolio selection problems in a market with risky and non-risky assets, superreplication with uncertain volatility, and management of power resources (dams, gas). Air traffic control is another example of such problems.
By stochastic programming we mean stochastic optimal control in a discrete-time (or even static) setting; see the overview by Ruszczynski and Shapiro . The static and single-recourse cases are essentially well understood; by contrast, the truly dynamic case (multiple recourse) presents an essential difficulty, Shapiro , Shapiro and Nemirovski . So we will speak only of the latter, assuming decisions to be measurable w.r.t. a certain filtration (in other words, all information from the past can be used).
In the standard case of minimization of an expectation (possibly of a utility function), a dynamic programming principle holds. Essentially, this says that the decision is a function of the present state (we can ignore the past) and that a certain reverse-time induction over the associated values holds. Unfortunately, a straightforward resolution of the dynamic programming principle based on a discretization of the state space is out of reach (again, this is the curse of dimensionality). For convex problems one can build lower convex approximations of the value function: this is the Stochastic Dual Dynamic Programming (SDDP) approach, Pereira and Pinto . Another possibility is a parametric approximation of the value function; however, determining the basis functions is not easy, and identifying (or, we could say in this context, learning) the best parameters is a nonconvex problem; see however Bertsekas and Tsitsiklis , Munos .
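The reverse-time induction can be sketched on a toy optimal stopping problem (all names and numbers are ours, for illustration only): a symmetric random walk on {0, ..., 20}, with stopping reward max(K - x, 0), in the flavour of an American put.

```python
import numpy as np

# Backward (reverse-time) dynamic programming for optimal stopping:
# V_T = payoff;  V_t(x) = max( payoff(x), E[ V_{t+1}(x +/- 1) ] ).
T, K = 10, 10
xs = np.arange(21)
payoff = np.maximum(K - xs, 0).astype(float)
V = payoff.copy()                      # value at the horizon
for t in range(T):
    cont = 0.5 * (np.roll(V, 1) + np.roll(V, -1))   # continuation value
    cont[0], cont[-1] = V[1], V[-2]    # reflecting walls (illustrative)
    V = np.maximum(payoff, cont)       # stop now, or continue
# at x = K the immediate payoff is 0 but V is positive: waiting has value
```

Each induction step only touches the present state, exactly as the dynamic programming principle asserts; the obstacle to using this directly in realistic problems is that the state grid becomes exponentially large in the dimension.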
A popular approach is to sample the uncertainties in the structured way of a tree (branching occurring typically at each time). Computational limits allow only a small number of branchings, far less than the amount needed for an accurate solution, Shapiro and Nemirovski . Such a poor accuracy may nevertheless (in the absence of a more powerful approach) be a good way of obtaining a reasonable policy. Very often the resulting programs are linear, possibly with integer variables (on-off switches of plants, investment decisions), allowing the use of (possibly dedicated) mixed integer linear programming codes. The tree structure (coupling variables) can be exploited by the numerical algorithms, see Dantzig and Wolfe , Kall and Wallace .
By Monte Carlo we mean here sampling a given number of independent trajectories (of uncertainties). In the special case of optimal stopping (e.g., American options), it happens that the state space and the uncertainty space coincide. Then one can compute the transition probabilities of a Markov chain whose law approaches the original one, and the problem reduces to that of a Markov chain, see . Let us mention also the quantization approach, see .
In the general case, a useful possibility is to compute a tree by aggregating the original sample, as done in .
Maximizing the expectation of gains can lead to a solution with a too high probability of important losses (bankruptcy). In view of this, it is wise to make a compromise between expected gains and the risk of high losses. A simple and efficient way to achieve this may be to maximize the expectation of a utility function; this, however, needs an ad hoc tuning. An alternative is the mean-variance compromise, presented in the case of portfolio optimization in Markowitz . A useful generalization of the variance, including asymmetric functions such as semideviations, is the theory of deviation measures, Rockafellar et al. .
Another possibility is to put a constraint on the level of gain to be obtained with a high probability, say at least 99%. The corresponding concept of value-at-risk leads to difficult nonconvex optimization problems, although convex relaxations may be derived, see Shapiro and Nemirovski .
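For illustration (our own sketch, not taken from the cited works): the empirical value-at-risk is simply a quantile of the loss distribution, while the average of the losses beyond it (the conditional value-at-risk, in the sense of Rockafellar and Uryasev) gives a convex, coherent alternative.

```python
import numpy as np

# Empirical VaR and CVaR at level 99% on simulated Gaussian losses.
rng = np.random.default_rng(0)
losses = rng.normal(0.0, 1.0, 100000)
alpha = 0.99
var = np.quantile(losses, alpha)        # loss level exceeded w.p. ~1%
cvar = losses[losses >= var].mean()     # average loss in the 1% tail
# for N(0,1) the exact values are about 2.33 (VaR) and 2.67 (CVaR)
```

A constraint "CVaR <= c" is convex in the decision variables, whereas the corresponding VaR constraint is generally not, which is the nonconvexity difficulty mentioned above.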
Yet the most important contribution of recent years is the axiomatized theory of risk measures, Artzner et al. , satisfying the properties of monotonicity and possibly convexity.
In a dynamic setting, risk measures (over the total gains) are not coherent (they do not obey a dynamic programming principle). The theory of coherent risk measures is an answer, in which risk measures over successive time steps are inductively applied; see Ruszczyński and Shapiro . Their drawback is to have no clear economic interpretation at the moment. Also, the associated numerical methods still have to be developed.
The study of relations between chance constraints (constraints on the probability of some event) and robust optimization is the subject of intense research. The idea is, roughly speaking, to solve a robust optimization problem (some classes of which are tractable in the sense of algorithmic complexity). See the recent work by Ben-Tal and Teboulle .
The case of continuous time can be handled with the Bellman dynamic programming principle, which leads to a characterization of the value function as the solution of a second-order Hamilton-Jacobi-Bellman equation , .
Sometimes this value function is smooth (e.g. in the case of Merton's portfolio problem, Oksendal ) and the associated HJB equation can be solved explicitly. In general, though, the value function is not smooth enough to satisfy the HJB equation in the classical sense. As in the deterministic case, the notion of viscosity solution provides a convenient framework for dealing with this lack of smoothness, see Pham ; it also happens to be well adapted to the study of discretization errors for numerical discretization schemes , .
The numerical discretization of second-order HJB equations has been the subject of several contributions. The book of Kushner and Dupuis gives a complete synthesis of Markov chain schemes (i.e. finite differences, semi-Lagrangian, finite elements, ...). A main difficulty with these equations comes from the fact that the second-order operator (i.e. the diffusion term) is not uniformly elliptic and can degenerate. Moreover, the diffusion term (covariance matrix) may change direction at any space point and at any time (this matrix is associated with the volatility of the dynamics).
In the framework of the thesis of R. Apparigliato (to be completed at the end of 2007), we have studied the robust optimization approach to stochastic programming problems, in the case of hydroelectric production, for one valley. The main difficulty lies in both the dynamic character of the system and the large number of constraints (capacity of each dam). We have also studied simplified electricity production models for respecting the “margin” constraint. In the framework of the thesis of G. Emiel, and in collaboration with CEPEL, we have studied large-scale bundle algorithms for solving (through a dual “price decomposition” method) stochastic problems for the Brazilian case.
For solving stochastic control problems, we studied the so-called Generalized Finite Differences (GFD), which allow one to choose, at any node, the stencil approximating the diffusion matrix up to a certain threshold . Determining the stencil and the associated coefficients boils down to a quadratic program to be solved at each point of the grid, and for each control. This is definitely expensive, with the exception of special structures where the coefficients can be computed at low cost. For two-dimensional systems, we designed a (very) fast algorithm for computing the coefficients of the GFD scheme, based on the Stern-Brocot tree . The GFD scheme was used as a basis for the approximation of an HJB equation coming from a superreplication problem. The problem was motivated by a study conducted in collaboration with Société Générale, see .
Within the framework of the thesis of Stefania Maroso, we also contributed to the study of the error estimate of the approximation of the Isaacs equation associated with a differential game with one player , and also of the approximation of the HJB equation associated with the impulse problem .
The field has been strongly influenced by the work of J.L. Lions, who started the systematic study of optimal control problems for PDEs in , in relation with singular perturbation problems , and ill-posed problems . A possible direction of research in this field consists in extending results from the finite-dimensional case, such as Pontryagin's principle, second-order conditions, the structure of bang-bang controls, singular arcs and so on. On the other hand, PDEs have specific features such as the finite speed of propagation for hyperbolic systems, or the smoothing effect of parabolic systems, so that they may present qualitative properties that are deeply different from those of the finite-dimensional case.
Unilateral systems in mechanics, plasticity theory, multiphase heat (Stefan) equations, etc. are described by inequalities; see Duvaut and Lions , Kinderlehrer and Stampacchia . For an overview in a finite-dimensional setting, see Harker and Pang . Optimizing such systems often needs dedicated schemes with specific regularization tools, see Barbu , Bermúdez and Saguez . Nonconvex variational inequalities can be dealt with as well . Controllability of such systems is discussed in Brogliato et al. .
As for finite-dimensional problems, but with additional difficulties, there is a need for a better understanding of stability and sensitivity issues, in relation with the convergence of numerical algorithms. The second-order analysis for optimal control problems of PDEs is dealt with in e.g. , . Not much is known in the case of state constraints. At the same time, the convergence of numerical algorithms is strongly related to this second-order analysis.
Many models in control problems couple standard finite-dimensional control dynamics with partial differential equations (PDEs). For instance, a well-known but difficult problem is to optimize trajectories for plane take-off, so as to minimize, among other things, noise pollution. Noise propagation is modeled using wave-like equations, i.e., hyperbolic equations in which the signal propagates at a finite speed. By contrast, when controlling furnaces one has to deal with the heat equation, of parabolic type, which has a smoothing effect. Optimal control laws have to reflect such strong differences in the model.
Let us mention some applications where optimal control of PDEs occurs. One can study the atmospheric reentry problem with a model for heat diffusion in the vehicle. Another problem is that of traffic flow, modeled by hyperbolic equations, with control on e.g. speed limitations. Of course, control of beams, thin structures and furnaces is also important.
An overview of sensitivity analysis of optimization problems in a Banach space setting, with some applications to the control of PDEs of elliptic type, is given in the book . See also .
We studied various regularization schemes for solving optimal control problems of variational inequalities: see Bonnans and D. Tiba , Bonnans and E. Casas , Bergounioux and Zidani . The well-posedness of a “nonconvex” variational inequality modelling some mechanical equilibrium is considered in Bonnans, Bessi and Smaoui .
In Coron and Trélat , , we prove that it is possible, for both heat-like and wave-like equations, to move from any steady-state to any other by means of a boundary control, provided that they are in the same connected component of the set of steady-states. Our method is based on an effective feedback procedure which is easily and efficiently implementable. The first work was awarded the SIAM Outstanding Paper Prize (2006).
Dynamic optimization appears in various applied fields: mechanics, physics, biology, economics. Pontryagin's principle itself appeared in the fifties as an applied tool for mechanical engineers. Since that time, progress in the theory and in the applications has gone hand in hand, and so we are committed to developing both of them. In the past few years we took part in the following applied projects:
Aerospace trajectories (CNES, ONERA). We have a long tradition of studying aerospace trajectory optimization problems (ascent phase of launchers, reentry problem). Our main contributions in this field have been carried out in collaboration with CNES and ONERA, through either research contracts or PhD fellowships; see , , , , .
Production, storage and trading of natural gas and power resources (EDF, GDF, Total).
We have worked with EDF on the optimization of the shortterm electricity production , as well as the midterm electricity production. We are starting a collaboration with TOTAL on the trading of LNG (liquefied natural gas).
SHOOT software for indirect shooting
TOPAZE code for trajectory optimization. Developed in the framework of the PhD thesis of J. Laurent-Varin, supported by CNES and ONERA. Implementation of an interior-point algorithm for multi-arc trajectory optimization, with built-in refinement. Applied to several academic, launcher and reentry problems.
SOHJB code for second order HJB equations. Developed since 2004 in C++ for solving stochastic HJB equations in dimension 2. The code is based on Generalized Finite Differences, and includes a decomposition of the covariance matrices into elementary diffusions pointing towards grid points. The implementation is very fast and was mainly tested on academic examples.
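The decomposition of a covariance matrix into elementary diffusions along grid directions can be sketched in the 2x2 case as follows. This is a toy construction (ours, not the actual SOHJB code), valid only when the matrix is diagonally dominant; the function name `decompose_covariance_2d` is illustrative.

```python
import numpy as np

def decompose_covariance_2d(sigma):
    """Write a 2x2 covariance matrix as a nonnegative combination of
    rank-one diffusions along the grid directions (1,0), (0,1), (1,1),
    (1,-1).  Toy construction for the diagonally dominant case only."""
    s11, s12, s22 = sigma[0, 0], sigma[0, 1], sigma[1, 1]
    c = max(s12, 0.0)            # weight on direction (1, 1)
    d = max(-s12, 0.0)           # weight on direction (1, -1)
    a = s11 - c - d              # weight on direction (1, 0)
    b = s22 - c - d              # weight on direction (0, 1)
    if a < 0 or b < 0:
        raise ValueError("matrix not diagonally dominant")
    dirs = [np.array(v, float) for v in [(1, 0), (0, 1), (1, 1), (1, -1)]]
    return list(zip([a, b, c, d], dirs))

# Reconstruction check: the weighted sum of outer products e e^T
# recovers the original covariance matrix.
sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
recon = sum(w * np.outer(e, e) for w, e in decompose_covariance_2d(sigma))
print(np.allclose(recon, sigma))
```

Each term w * e e^T corresponds to a one-dimensional diffusion along the grid direction e, which is the property that makes monotone finite-difference discretizations possible.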
Sparse HJB-UltraBee. Developed in C++ for solving HJB equations in dimension 4. This code is based on the UltraBee scheme and an efficient storage technique with sparse matrices. The code also provides optimal trajectories for target problems. A preliminary version in Scilab was developed by N. Megdich. The current version is developed by O. Bokanowski, E. Cristiani and H. Zidani. A specific software package dedicated to space problems is also developed in C++, in the framework of a contract with CNES.
Our results provide several types of efficiency measures of the penalization technique: error estimates of the control in L^s-norms (s in [1, +∞]), error estimates of the state and the adjoint state in Sobolev spaces W^{1,s} (s in [1, +∞)), and also error estimates for the value function. For the L^1-norm and the logarithmic penalty, the optimal results are given: in this case we establish that the penalized control and value function errors are of order O(ε|log ε|).
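The behavior of an interior logarithmic penalty can be illustrated on a toy one-dimensional quadratic program (a stand-in we chose for illustration; the paper's analysis concerns optimal control problems, where the estimates are more delicate). The bound constraint u in [0, 1] is replaced by a barrier term, and the error of the penalized minimizer shrinks with the penalty parameter eps.

```python
def penalized_argmin(eps, iters=200):
    """Unique minimizer on (0, 1) of the penalized objective
    0.5*(u - 2)**2 - eps*(log u + log(1 - u)),
    found by bisection on its derivative, which is strictly increasing."""
    def g(u):
        return (u - 2.0) - eps / u + eps / (1.0 - u)
    lo, hi = 1e-15, 1.0 - 1e-15
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# The unconstrained minimizer of 0.5*(u-2)^2 is u = 2; with u in [0, 1]
# the constrained solution is u* = 1.  The penalized minimizer approaches
# u* as eps -> 0, with an error roughly linear in eps for this toy case.
for eps in (1e-1, 1e-2, 1e-3):
    print(eps, 1.0 - penalized_argmin(eps))
```

On this scalar example the error is of order eps; the sharper O(ε|log ε|) rates quoted above are specific to the function-space setting of the paper.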
We have obtained second-order necessary and sufficient conditions for a problem with a singular arc and a single control. An article is in preparation.
In the framework of research contracts with CNES, we have studied since 2006 trajectory optimization for the atmospheric climbing phase of space launchers. One major axis was to investigate the existence of optimal trajectories with non-maximal thrust arcs (i.e., singular arcs), both from the theoretical and the numerical point of view. The physical reason behind this phenomenon is that aerodynamic forces may make high speed ineffective (namely the drag term, proportional to the speed squared). Our main approach is an indirect method (Pontryagin's Minimum Principle and shooting method) combined with a continuation approach. We studied the theoretical aspects on the generalized Goddard problem, and conducted the numerical experiments for a typical Ariane 5 mission to the geostationary transfer orbit ( ). We then moved on to the study of a prototype reusable launcher with wings, for which we considered a more complex aerodynamic model (lift force) as well as a mixed state-control constraint limiting the angle of attack. While our work seems to indicate that optimal trajectories involve full thrust, the homotopic approach was able to deal with the constraint quite smoothly.
We have improved the results, and given shorter proofs, for the analysis of state-constrained optimal control problems presented by the authors in , concerning second-order optimality conditions and the well-posedness of the shooting algorithm. The hypothesis for the second-order necessary conditions is weaker, and the main results are obtained without reduction to the normal form used in that reference, and without the analysis of high-order regularity results for the control. In addition, we provide some numerical illustrations. The essential tool is the use of the "alternative optimality system".
We aim at developing antidiffusive numerical schemes for HJB equations with possibly discontinuous initial data.
In , we prove the convergence of a non-monotone scheme for a one-dimensional first-order Hamilton-Jacobi-Bellman equation with initial data v(0, x) = v_0(x). The scheme is related to the HJB-UltraBee scheme suggested in . We show, for general discontinuous initial data, first-order convergence of the scheme, in the L^1-norm, towards the viscosity solution. We also illustrate the non-diffusive behavior of the scheme on several numerical examples.
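The antidiffusive behavior can be illustrated on the simplest possible case, constant-coefficient advection, with the limited-downwind flux of Després and Lagoutière on which UltraBee-type schemes are based. The sketch below is a 1D toy of ours, not the HJB code; the function name `ultrabee_step` and the boundary treatment are illustrative choices.

```python
import numpy as np

def ultrabee_step(u, nu):
    """One step of the limited-downwind (UltraBee-type) scheme for the
    advection equation u_t + a u_x = 0, a > 0, CFL number nu = a*dt/dx.
    Each interface value is taken as close as possible to the downwind
    cell value, subject to TVD bounds: this keeps discontinuities sharp.
    Inflow boundary: left state extended; outflow: zero ghost cell."""
    n = len(u)
    flux = np.empty(n)           # flux[j] ~ interface value at j + 1/2
    f_left = u[0]                # inflow interface value at -1/2
    for j in range(n):
        um = u[j - 1] if j > 0 else u[0]
        lo, hi = min(um, u[j]), max(um, u[j])
        fmin = f_left + (u[j] - hi) / nu   # TVD bounds given the left flux
        fmax = f_left + (u[j] - lo) / nu
        down = u[j + 1] if j + 1 < n else 0.0
        flux[j] = min(max(down, fmin), fmax)  # project the downwind value
        f_left = flux[j]
    fm = np.concatenate(([u[0]], flux[:-1]))  # fluxes at j - 1/2
    return u - nu * (flux - fm)

u = np.array([1.0] * 4 + [0.0] * 12)
for _ in range(8):
    u = ultrabee_step(u, 0.5)
# With CFL 0.5, 8 steps transport the step exactly 4 cells: the
# discontinuity is advected with no smearing at all.
```

A first-order upwind scheme on the same data would spread the step over many cells; here the discontinuity stays confined to at most one cell at every step.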
We develop an efficient dynamic storage technique suitable for handling front evolutions in large dimension. We then propose a fast algorithm, showing its relevance on several challenging tests in dimension d = 2, 3, 4. We also compare our method with the techniques usually used in level-set methods. Our approach leads to a computational cost as well as a memory allocation scaling as O(N_nb) in most situations, where N_nb is the number of grid nodes around the front. Finally, let us point out that the approximation on a rough grid gives qualitatively good results. This study also leads to a very fast numerical code in C++ for solving HJB equations in dimension 4.
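The dynamic storage idea, keeping only the grid nodes near the front rather than the full grid, can be sketched with a hash-based set. This is a schematic 2D illustration of ours (unit-speed outward growth), not the actual C++ code.

```python
def grow_front(front, steps):
    """`front` is the set of active cells (integer tuples) on the boundary
    of a region; each step the front moves outward by one cell (unit
    speed).  Only the cells near the front are ever stored, so the working
    memory scales like the number of nodes around the front (N_nb), not
    like the full grid."""
    inside = set(front)
    for _ in range(steps):
        new_front = set()
        for (i, j) in front:
            for (di, dj) in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb = (i + di, j + dj)
                if nb not in inside:
                    new_front.add(nb)
        inside |= new_front
        front = new_front
    return front, inside

front, inside = grow_front({(0, 0)}, 10)
# The stored front is O(radius) cells while the swept region is
# O(radius^2): this is the memory saving exploited by the sparse code.
print(len(front), len(inside))
```

In dimension d the gap widens further: the band around the front has O(R^{d-1}) nodes versus O(R^d) for a dense grid, which is what makes d = 4 computations feasible.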
We have investigated a minimum time problem for controlled nonautonomous differential systems, with a dynamics depending on the final time. The minimal time function associated to this problem does not satisfy the dynamic programming principle. However, we have proved in , that it is related to a standard front propagation problem via the reachability function.
We consider a target problem for a nonlinear system under state constraints. In , we give a new continuous level-set approach for characterizing the optimal times and the backward-reachability sets. This approach leads to a characterization via a Hamilton-Jacobi equation, without any controllability assumption. We also treat the case of time-dependent state constraints, as well as a target problem for a two-player game with state constraints. Our method gives a good framework for numerical approximations, and some numerical illustrations are included in the paper.
This paper investigates a control problem governed by differential equations with random measures as data and with final state constraints. This problem is motivated by several real applications. For instance, in space navigation, when steering a multi-stage launcher, the separation of the boosters (once they are empty) leads to discontinuities in the mass variable. In resource management, discontinuous trajectories are also used to model the problem of sequential batch reactors.
By using a known reparametrization method (by Dal Maso and Rampazzo, 1991), we obtain that the value function can be characterized by means of an auxiliary control problem involving absolutely continuous trajectories. We study the characterization of the value function of this auxiliary problem and discuss its discrete approximations.
In optimal control, there is a well-known link between the Hamilton-Jacobi-Bellman (HJB) equation and Pontryagin's Minimum Principle (PMP): the costate (or adjoint state) in the PMP corresponds to the gradient of the value function in the HJB approach. We investigate from the numerical point of view the possibility of coupling these two approaches to solve control problems. First, a rough approximation of the value function is computed by the HJB method, and then used to obtain an initial guess for the PMP method. The advantage of our approach over other initialization techniques (such as continuation or direct methods) is to provide an initial guess close to the global minimum. Numerical tests have been conducted on simple problems involving multiple minima, discontinuous controls, singular arcs and state constraints. The application to realistic space launcher problems is currently in progress.
The first results will appear in .
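The coupling idea can be sketched on a toy linear-quadratic problem, where the optimal control is constant so the "HJB stage" reduces to a scan over constant controls (our simplification; the actual work uses a genuine HJB solver). The gradient of the tabulated value function supplies the costate guess for a shooting method.

```python
import numpy as np

# Toy problem: minimize  int_0^T u^2/2 dt + q*x(T)^2/2,  with dx/dt = u.
# PMP: the costate p is constant, u = -p, and the shooting equation is
# S(p0) = p0 - q*x(T; p0) = 0.  HJB link: p0 = dV/dx(0, x0).
T, q, x0 = 1.0, 4.0, 1.0

# Step 1 (coarse "HJB" stage): tabulate V(0, x) on a grid.  Since the
# optimal control is constant here, a scan over constant controls works.
xs = np.linspace(0.0, 2.0, 41)
us = np.linspace(-3.0, 3.0, 601)
V = np.array([min(T*u*u/2 + q*(x + u*T)**2/2 for u in us) for x in xs])

# Step 2: costate initial guess = finite-difference gradient of V at x0.
i = int(np.argmin(np.abs(xs - x0)))
p_guess = (V[i + 1] - V[i - 1]) / (xs[i + 1] - xs[i - 1])

# Step 3 (PMP stage): Newton iterations on the shooting function, with
# the state integrated by explicit Euler using the control u = -p0.
def shoot(p0, n=1000):
    x, dt = x0, T / n
    for _ in range(n):
        x += dt * (-p0)
    return p0 - q * x

p = p_guess
for _ in range(20):
    h = 1e-6
    s = shoot(p)
    p -= s * h / (shoot(p + h) - s)

# Analytic solution of the shooting equation: p0 = q*x0 / (1 + q*T).
print(p_guess, p, q * x0 / (1 + q * T))
```

Even the crude grid gradient lands very close to the true costate, so the Newton iterations start in the basin of the global minimum; this is precisely the benefit claimed for the HJB-PMP coupling.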
We started this year a project on numerical methods for solving the variational inequalities for second-order HJB equations, and the application to swing options in a model with jumps.
We study a multistage stochastic optimization problem where randomness is only present in the objective function. After building a Markov chain by a vector quantization tree method, we rely on the dual dynamic programming (DDP) method to solve the optimization problem. The combination of these two methods makes it possible to deal with problems having high-dimensional state variables. Finally, some numerical tests applied to energy markets have been performed, which show that the method provides a good convergence rate. The report is published in .
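The quantization building block can be sketched in one dimension with Lloyd's algorithm: a sample of the random input is replaced by a small discrete law, which is the elementary step in building a quantization tree. This is a toy stand-in of ours for the vector quantization used in the paper; the function name `lloyd_quantize` is illustrative.

```python
import numpy as np

def lloyd_quantize(samples, k, iters=50):
    """Quantize a 1D sample into k points by Lloyd's algorithm:
    alternate nearest-center assignment and center update (cell means).
    Returns the centers and their empirical probabilities."""
    rng = np.random.default_rng(0)
    centers = np.sort(rng.choice(samples, size=k, replace=False))
    for _ in range(iters):
        idx = np.argmin(np.abs(samples[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            sel = samples[idx == j]
            if len(sel) > 0:
                centers[j] = sel.mean()
    probs = np.bincount(idx, minlength=k) / len(samples)
    return centers, probs

rng = np.random.default_rng(1)
samples = rng.normal(size=20_000)
centers, probs = lloyd_quantize(samples, 5)
# The discrete law has total mass 1 and matches the sample mean exactly,
# since each center is the mean of its assignment cell.
print(probs.sum(), (centers * probs).sum())
```

Chaining such discrete laws over the time stages yields the Markov chain on which the DDP recursion is then run.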
We started a project with Renault aiming at developing mathematical tools to model and simulate scenarios allowing to improve the performance of electric car engines. Recall that the hard point of the electric technology lies in the restricted driving range, related to the low energy density of the battery and the slow speed of recharging (8 hours for a slow charge). The goal of this project consists in:
* Maximizing the absolute autonomy (hybrid technology)
* Minimizing the autonomy variability: optimal management of the energy, taking into account the available navigation information
These problems can be described by stochastic dynamic optimization problems. The objective of our project is twofold:
* Theoretical and numerical study of related stochastic optimization problems
* Validation on the model associated to the problem of hybrid car engine.
We have finalized our result on the optimal structure of gas transmission trunklines. Suppose a gas pipeline is to be designed to transport a specified flowrate from the entry point to the gas demand point. Physical and contractual requirements at supply and delivery nodes are known, as well as the costs to buy and lay a pipeline or build a compressor station. In order to minimize the overall cost of creation of this mainline, the following design variables need to be determined: the number of compressor stations, the lengths of pipeline segments between compressor stations, the diameters of the pipeline segments, and the suction and discharge pressures at each compressor station. To facilitate the calculation of the design of a pipeline, gas engineers proposed, in several handbooks, to base their cost assessments on some optimal properties derived from previous experiences and usual engineering practices: the distance between compressors is constant, all diameters are equal, and all inlet (resp. outlet) pressures are equal. The goals of this paper are (1) to state under which assumptions the optimal properties are valid, and (2) to propose a rigorous proof of the optimal properties (based on nonlinear programming optimality conditions) within a more general framework than before. The paper will appear in .
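The flavor of the "constant distance between compressors" property can be checked numerically on a stylized cost model: if each segment of length l contributes a convex cost g(l), Jensen's inequality implies that the equal split minimizes the total. The quadratic g below is a purely illustrative stand-in of ours, not the cost model of the paper.

```python
import random

# Stylized check of the equal-spacing property: a total length L is split
# into n segments, each with convex per-segment cost g (here g(l) = l^2).
L, n = 100.0, 4
g = lambda l: l * l

def total_cost(lengths):
    return sum(g(l) for l in lengths)

equal = [L / n] * n
random.seed(0)
for _ in range(1000):
    # random feasible split of L into n segments
    cuts = sorted(random.uniform(0, L) for _ in range(n - 1))
    lengths = [b - a for a, b in zip([0.0] + cuts, cuts + [L])]
    # no random split beats the equal split, by convexity of g
    assert total_cost(lengths) >= total_cost(equal) - 1e-9
```

The paper's contribution is precisely to identify under which assumptions the real (nonconvex, pressure-dependent) cost still satisfies such optimality properties, via nonlinear programming optimality conditions.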
INRIA-CNES (OPALE pole framework), 2008-2009. HJB approach for the atmospheric reentry problem. Involved researchers: F. Bonnans, H. Zidani.
ENSTA-DGA, 2007-2009. Study of HJB equations associated to motion planning. Involved researchers: N. Forcadel, H. Zidani.
INRIA-CNES (OPALE pole framework), 2008-2009. Study of singular arcs for the ascent phase of winged launchers. Involved researchers: F. Bonnans, P. Martinon, E. Trélat.
INRIA-TOTAL. PhD fellowship (CIFRE) of Y. Cen, Dec. 2008 - Dec. 2011. Involved researchers: F. Bonnans.
INRIA-CNES, 2009-2010. Trajectory optimization for climbing problems. Involved researchers: O. Bokanowski, P. Martinon, H. Zidani.
Inria-Renault, 2009. Optimization of energy management for electric vehicles. Involved researchers: F. Bonnans, H. Zidani.
Inria - HPC Project, Dec. 2009 - Dec. 2011. Numerical library for parallel computing: HJB equations. Involved researchers: O. Bokanowski, F. Bonnans, N. Forcadel, H. Zidani.
In the setting of the STIC Tunisie project, we started this year a project with M. Mnif (ENIT), involving F. Bonnans, H. Zidani, and N. Touzi from CMAP, on the variational inequalities for second-order HJB equations, and the application to swing options. A PhD student, Imene Ben Latifa, is on this project.
In the setting of the STIC AmSud project on "Energy Optimization", we have a collaboration with P. Lotito (U. Tandil) on deterministic continuous-time models for the optimization of hydrothermal electricity production and related optimal control problems with singular arcs. We have one PhD student, Maria S. Aronna, on this project.
With Felipe Alvarez (CMM, Universidad de Chile, Santiago) we study the logarithmic penalty approach for optimal control problems, in connection with the PhD thesis of Francisco Silva.
Italy: U. Roma (Sapienza). With M. Falcone: numerical methods for the resolution of HJB equations. Collaboration in connection with the master's internship of D. Venturini.
Short-term visits of invited professors:
Andrei Dmitruk (Moscow State University), 2 weeks.
Pablo Lotito (U. Tandil, Argentina), 2 weeks.
Mohamed Mnif (ENIT, Tunis), 3 weeks.
Fabio Camilli (Univ. L'Aquila, Italy), 2 weeks. Joint work in progress on “Random switching systems”.
Lars Grüne (Univ. Bayreuth, Germany), 1 week. Joint work in progress on “Stabilization of constrained nonlinear systems”.
Maria de Rosàrio do Pinho (Univ. Porto), 1 week. Joint work in progress on “Pontryagin's principle for some hybrid systems”.
F. Bonnans is one of the three Corresponding Editors of “ESAIM:COCV” (Control, Optimisation and Calculus of Variations), and Associate Editor of “Applied Mathematics and Optimization”, and “Optimization, Methods and Software”.
F. Bonnans is a member of: (i) the Council-at-Large of the Mathematical Programming Society (2006-2009), (ii) the board of the SMAI-MODE group (2007-2010).
He is one of the founding organizers of SPO (Séminaire Parisien d'Optimisation, IHP, Paris).
E. Trélat is Associate Editor of “ESAIM:COCV” (Control, Optimisation and Calculus of Variations) and of "International Journal of Mathematics and Statistics".
F. Bonnans.
Professeur Chargé de Cours, Ecole Polytechnique. Courses on Operations Research and Numerical Analysis, 50 h.
Mastere de Math. et Applications, “Parcours OJME”, Optimisation, Jeux et Modélisation en Economie, Université Paris VI. Course on Continuous Optimization; Application to Stochastic Programming, (18 h).
Second order analysis of optimal control problems. Fourth Spring School on Variational Analysis, Paseky (Giant Mountains, Czech Republic), April 19-25, 2009.
P. Martinon
Quadratic optimization, ENSTA teaching (15h).
Introduction to Matlab, ENSTA teaching (20h).
E. Trélat:
Supervisor of the Master of Mathematics of the University of Orléans. M2: Optimal control (30h), Control theory (55h); M1: Continuous optimization (60h).
ENSTA, 3rd year: Optimal control (22h).
H. Zidani: Professeur Chargée de Cours at ENSTA.
1st year: Continuous optimization (22h).
3rd year and Master MMMEF Paris 1: course on Numerical methods for finance (25h).
3rd year and Master “Modélisation et Simulation” (Versailles St-Quentin and INSTN): course on Numerical methods for front propagation (22h).
M2 MIME, Univ. Orsay: Optimal control (22h).
F. Bonnans:
14th Belgian-French-German Conference on Optimization, Leuven, September 14-18, 2009.
COLIBRI (COLloque d'Informatique: BRésil / INRIA, Coopérations, Avancées et Défis), July 22-23, 2009, Brazil.
H. Zidani: (i) 4e colloque sur les Tendances des Applications Mathématiques en Tunisie, Algérie et Maroc, May 4-8, 2009.
F. Bonnans: Stochastic optimization methods for energy planning, ISMP 2009, Chicago, August 23-29, 2009.
H. Zidani: Sensitivity analysis for nonlinear control problems, 14th Belgian-French-German Conference on Optimization, Leuven, September 14-18, 2009.
S. Aronna :
(i) Second order analysis in singular optimal control problems. Congrès de la Société française de Mathématiques Appliquées et Industrielles (poster session), May 25-29, 2009, Nice, France.
(ii) Singular arcs in continuous time. IFIP TC 7 Conference on System Modelling and Optimization, July 27-31, 2009, Buenos Aires, Argentina.
F. Bonnans:
(i) Second-order optimality conditions: the case of a state constrained optimal control problem. International Conference on Engineering and Computational Mathematics 2009, May 27-29, 2009, Hong Kong.
(ii) No-gap second order optimality conditions and the shooting algorithm for optimal control problems. International Symposium on Mathematical Programming, Chicago, August 23-28, 2009.
A. Briani:
(i) A minimum time problem with discontinuous data. German-Polish Conference on Optimization, March 14-18, 2009, Moritzburg, Germany.
(ii) A minimum time problem with discontinuous data. 14th Belgian-French-German Conference on Optimization, September 14-18, 2009, Leuven, Belgium.
(iii) Optimal control problems with measure data. Mathématiques de l'Optimisation et Applications, GdR 3273 MOA, October 18-21, 2009, Porquerolles, France.
P. Martinon:
(i) A less problem dependent approach for optimal trajectories with singular arcs: application to space launchers. IFAC Workshop on Control Applications of Optimisation (CAO 2009), May 6-8, 2009, Jyväskylä, Finland.
(ii) Numerical conservation of the Hamiltonian for an optimal control problem with control discontinuities. Conference in honour of E. Hairer's 60th birthday, June 17-20, 2009, Geneva, Switzerland.
O. Serea:
(i) Reflected differential games, Game Theory: Mathematical Aspects and Applications, May 2009, CIRM Luminy, France
(ii) Linear programming approach to optimal control problems. Workshop by the Universities of Bayreuth, ErlangenNürnberg, and Würzburg, Germany, July 2009.
(iii) The problem of optimal control with reflection studied through a linear optimization problem stated on occupational measures. Réunion annuelle du GdR MOA Mathématiques de l'Optimisation et Applications, October 2009, Ile de Porquerolles, France.
F. Silva:
(i) Error estimates for the solution of a control constrained optimal control problem with interior penalties. Control Applications of Optimization, Jyväskylä, Finland, May 6-8, 2009.
(ii) Poster: Asymptotic expansions for the interior penalty solutions of a linear-quadratic problem with control constraints. SMAI 2009, La Colle-sur-Loup, France, May 25-29, 2009.
(iii) Asymptotic expansion of the solution of a control constrained linear quadratic optimal control problem with interior penalty. IFIP Conference on System Modeling and Optimization, Buenos Aires, Argentina, July 27-31, 2009.
(iv) Asymptotic expansions for interior penalty solutions of control constrained problems. 14th Belgian-French-German Conference on Optimization, Leuven, September 14-18, 2009.
(v) Asymptotic expansion of the solution of a penalized semilinear elliptic control problem. Journée de bilan de la chaire Modélisation Mathématique et Simulation Numérique, École Polytechnique, Paris, September 29, 2009.
(vi) Asymptotic expansion of the solution of a penalized semilinear elliptic control problem. GdR 3273 Mathématiques de l'Optimisation et Applications, Porquerolles, October 18-21, 2009.
E. Trélat:
(i) Tomographic reconstruction of blurred and noised binary images. Conference on "Nonsmooth Analysis, Control Theory and Differential Equations", Istituto Nazionale di Alta Matematica (INDAM), Rome, June 8–12, 2009.
(ii) Minimum Time LowThrust Orbital Transfer with Eclipse. 14th BelgianFrenchGerman Conference on Optimization, Leuven, Sept. 14–18, 2009.
J.F. Bonnans: (i) Optimal control of state-constrained problems: the alternative multipliers. ACSIOM-I3M team seminar, January 27, 2009. (ii) Finite-difference methods and beyond for the solution of stochastic control problems. Seminar “Numerical methods for optimal control”, EDF R&D, April 2, 2009. (iii) Error estimates for finite-difference methods in stochastic control. Seminar “Numerical methods for optimal control”, EDF R&D, April 9, 2009. (iv) Dual stochastic dynamic programming, with application to gas trading. LAMSIN seminar, Ecole Nationale d'Ingénieurs de Tunis, December 1, 2009.
P. Martinon: (i) A less problem dependent indirect approach for optimal trajectories with singular arcs: application to space launchers. OPTEC Seminar, K.U. Leuven, April 16, 2009.
H. Zidani: (i) A numerical approximation for a superreplication problem under gamma constraints. Seminario di Modellistica Differenziale Numerica, La Sapienza, February 24, 2009. (ii) State-constrained nonlinear problems without any controllability assumption. Seminar “Differential equations and applications”, Univ. Padova, April 21, 2009. (iii) Characterization of the value function for state-constrained nonlinear problems without any controllability assumption. SPO seminar, IHP, Paris, October 12, 2009. (iv) HJB approach for trajectory planning. DGA seminar, October 8, 2009.