Section: Scientific Foundations
Keywords: stochastic control, singular and impulse control, risk-sensitive control, free boundary, Hamilton-Jacobi-Bellman, variational and quasi-variational inequalities, BSDE.
Stochastic Control and Backward Stochastic Differential Equations
Participants: V. Bally, J.-Ph. Chancelier, D. Lefèvre, M. Mnif, M. Messaoud, M.C. Kammerer-Quenez, A. Sulem.
Stochastic control is the study of dynamical systems subject to random perturbations which can be controlled so as to optimize some performance criterion. The dynamic programming approach leads to the Hamilton-Jacobi-Bellman (HJB) equation satisfied by the value function; this equation is of integro-differential type when the underlying processes admit jumps (see [11]). The theory of viscosity solutions offers a rigorous framework for the study of dynamic programming equations. An alternative to dynamic programming is the study of optimality conditions (stochastic maximum principle), which leads to backward stochastic differential equations (BSDEs). Typical financial applications arise in portfolio optimization, hedging and pricing in incomplete markets, and calibration. BSDEs also provide the prices of contingent claims in complete and incomplete markets and are an efficient tool for studying recursive utilities, as introduced by Duffie and Epstein [60].
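To fix ideas, here is a minimal sketch of the objects just mentioned, with notation introduced for exposition only (it is not taken from [11] or [60]). Consider a controlled diffusion
\[
  dX_s = b(X_s,u_s)\,ds + \sigma(X_s,u_s)\,dW_s ,
\]
where the control process $(u_s)$ takes values in a set $U$, together with the value function
\[
  v(t,x) = \sup_{u}\;\mathbb{E}\Big[\int_t^T \ell(X_s,u_s)\,ds + g(X_T)\,\Big|\, X_t = x\Big].
\]
Dynamic programming then leads formally to the HJB equation
\[
  \partial_t v(t,x) + \sup_{a\in U}\Big\{ b(x,a)\cdot\nabla_x v(t,x)
  + \tfrac12\,\mathrm{tr}\big(\sigma\sigma^{\top}(x,a)\,\nabla_x^2 v(t,x)\big) + \ell(x,a)\Big\} = 0,
  \qquad v(T,x) = g(x),
\]
to be understood in the viscosity sense in general. In the same spirit, a BSDE with driver $f$ and terminal condition $\xi$ asks for a pair of adapted processes $(Y,Z)$ satisfying
\[
  Y_t = \xi + \int_t^T f(s,Y_s,Z_s)\,ds - \int_t^T Z_s\,dW_s , \qquad 0\le t\le T;
\]
with an appropriate driver, $Y_t$ gives the price at time $t$ of the contingent claim $\xi$, while $Z_t$ is linked to the hedging strategy.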