Inria / Raweb 2004
Team: Mathfi

Section: Scientific Foundations


Keywords: Stochastic Control, singular and impulse control, risk-sensitive control, free boundary, Hamilton-Jacobi-Bellman, variational and quasi-variational inequalities.

Stochastic Control

Participants: J.-Ph. Chancelier (ENPC), D. Lefèvre, M. Mnif, M. Messaoud, B. Øksendal (Oslo University), A. Sulem.

Stochastic control is the study of dynamical systems subject to random perturbations which can be controlled in order to optimize some performance criterion.

We consider systems driven by controlled diffusion dynamics, possibly with jumps. The objective is to optimize a criterion over all admissible strategies on a finite or infinite planning horizon; the criterion can also be of ergodic or risk-sensitive type. The dynamic programming approach leads to a Hamilton-Jacobi-Bellman (HJB) equation for the value function. This equation is integro-differential in the case of underlying jump processes (see [12]). The boundary conditions depend on the behaviour of the underlying process on the boundary of the domain (stopped or reflected).
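As a concrete illustration (a generic textbook formulation in one space dimension, not one of the team's specific models), suppose the state follows the controlled diffusion dX_t = b(X_t, u_t) dt + \sigma(X_t, u_t) dW_t and one maximizes J(t,x;u) = E[\int_t^T f(X_s, u_s) ds + g(X_T)] over admissible controls u. The value function v(t,x) = \sup_u J(t,x;u) then formally satisfies the HJB equation

\[
\partial_t v(t,x) + \sup_{u}\left\{ b(x,u)\,\partial_x v(t,x) + \tfrac{1}{2}\,\sigma^2(x,u)\,\partial_{xx} v(t,x) + f(x,u) \right\} = 0,
\qquad v(T,x) = g(x).
\]

When the dynamics include jumps driven by a compensated Poisson random measure with intensity \nu and jump amplitude \gamma(x,u,z), an additional term \int \big( v(t, x + \gamma(x,u,z)) - v(t,x) - \gamma(x,u,z)\,\partial_x v(t,x) \big)\,\nu(dz) enters the supremum, which is what makes the equation integro-differential.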

Optimal stopping problems, such as American option pricing, lead to variational inequalities of obstacle type. In the case of singular control, the dynamic programming equation is a variational inequality, that is, a system of partial differential inequalities. Singular controls are used, for example, to model proportional transaction costs in portfolio optimisation.

The control process may also be of impulse type: the state of the system then jumps at some intervention times, and an impulse control consists of the sequence of intervention times and impulse sizes. The associated dynamic programming equation is a quasi-variational inequality (QVI). These models are used, for example, in portfolio optimisation with fixed transaction costs. Variational and quasi-variational inequalities are free boundary problems; generic forms of both are sketched below. The theory of viscosity solutions offers a rigorous framework for the study of dynamic programming equations.

An alternative approach to dynamic programming is the study of optimality conditions, which leads to backward stochastic differential equations for the adjoint state. See [21] for a maximum principle for the optimal control of jump diffusion processes.
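For concreteness, here are standard generic forms from the literature (the notation \mathcal{L}, \psi, \mathcal{M}, c below is illustrative and not taken from the report). For an optimal stopping problem with discount rate r, obstacle \psi (e.g. the payoff of an American option) and infinitesimal generator \mathcal{L} of the state process, the value function formally solves the obstacle-type variational inequality

\[
\max\left( \partial_t v + \mathcal{L} v - r v,\; \psi - v \right) = 0,
\qquad v(T,\cdot) = \psi,
\]

while for an impulse control problem with running profit f and intervention operator \mathcal{M} v(x) = \sup_{\xi} \{ v(x+\xi) - c(\xi) \}, where c is the (fixed plus proportional) cost of an impulse of size \xi, the value function formally solves the quasi-variational inequality

\[
\max\left( \mathcal{L} v + f,\; \mathcal{M} v - v \right) = 0.
\]

In both cases the free boundary separates the continuation region, where the differential part holds with equality, from the exercise (respectively intervention) region, where the obstacle (respectively \mathcal{M}v) is attained.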

We also study optimal control problems under partial observation [45], [11].

