Team Sydoco


Section: Scientific Foundations

For deterministic optimal control problems there are basically three approaches. The so-called direct method consists in discretizing time and optimizing the resulting trajectory with a nonlinear programming solver, possibly one that exploits the dynamic structure; see Betts [32]. The indirect approach eliminates the control variables using Pontryagin's maximum principle and solves the resulting two-point boundary value problem by a multiple shooting method. Finally, the dynamic programming approach solves the associated Hamilton-Jacobi-Bellman (HJB) equation, a partial differential equation whose dimension equals the number n of state variables. This approach finds the global minimum, whereas the other two are local; however, it suffers from the curse of dimensionality (its complexity is exponential in n).
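
As an illustration of the direct method, the following sketch (ours, not the team's code; the problem and all data are invented for the example) discretizes a double integrator by explicit Euler, imposes the dynamics as equality constraints, and hands the transcribed problem to a generic nonlinear programming solver:

    # Direct method sketch: steer a double integrator x'' = u from rest
    # at 0 to rest at 1 while minimizing the integral of u^2.
    import numpy as np
    from scipy.optimize import minimize

    N, T = 20, 1.0          # number of time steps, horizon
    h = T / N               # step size

    def unpack(z):
        # decision vector z = [x_0..x_N, v_0..v_N, u_0..u_{N-1}]
        x = z[:N + 1]
        v = z[N + 1:2 * (N + 1)]
        u = z[2 * (N + 1):]
        return x, v, u

    def cost(z):
        _, _, u = unpack(z)
        return h * np.sum(u ** 2)      # discretized control energy

    def defects(z):
        # Euler discretization of x' = v, v' = u, plus boundary conditions
        x, v, u = unpack(z)
        dyn_x = x[1:] - x[:-1] - h * v[:-1]
        dyn_v = v[1:] - v[:-1] - h * u
        bc = [x[0], v[0], x[-1] - 1.0, v[-1]]
        return np.concatenate([dyn_x, dyn_v, bc])

    z0 = np.zeros(3 * N + 2)
    res = minimize(cost, z0, method="SLSQP",
                   constraints={"type": "eq", "fun": defects})
    print("optimal cost:", res.fun)

Even this toy problem has 62 unknowns and 44 constraints for N = 20; solvers of the kind described by Betts [32] are designed to exploit the sparse, banded structure of such transcriptions.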

Various additional issues arise: the decomposition of large-scale problems, the simplification of models (leading to singular perturbation problems), and the computation of feedback solutions.

For stochastic optimal control problems there are essentially two approaches. The one based on the (stochastic) HJB equation has the same advantages and disadvantages as its deterministic counterpart. The stochastic programming approach is based on a finite approximation of the uncertain events, called a scenario tree (for problems with no decisions this reduces to the Monte Carlo method). Its complexity is polynomial in the number of state variables but exponential in the number of time steps. In addition, various heuristics have been proposed for the case, covered by neither of these approaches, in which both the number of state variables and the number of time steps are large.
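
The following toy sketch (illustrative only; the inventory model and all numbers are invented) makes this trade-off concrete. The expected cost is computed by exhaustive recursion over the tree of order decisions and demand realizations, so the work grows exponentially with the number of stages T, while the stock variable being a single scalar shows that the state dimension plays no role in the count:

    # Scenario-tree sketch: a toy inventory problem with binary demand.
    T = 4                        # number of stages
    DEMANDS = (0, 2)             # equiprobable demand outcomes
    ORDERS = (0, 1, 2)           # admissible order quantities

    def stage_cost(stock):
        # holding cost for positive stock, shortage penalty otherwise
        return 0.5 * stock if stock >= 0 else -3.0 * stock

    def value(t, stock):
        # expected cost-to-go at stage t; the recursion visits one
        # subproblem per node of the order/demand tree, so the number of
        # calls grows exponentially with T (the method's weak point),
        # independently of how the state is represented.
        if t == T:
            return stage_cost(stock)
        best = float("inf")
        for u in ORDERS:
            expected = sum(value(t + 1, stock + u - d)
                           for d in DEMANDS) / len(DEMANDS)
            best = min(best, stage_cost(stock) + expected)
        return best

    print("expected optimal cost from empty stock:", value(0, 0))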

