Project Team Anubis



Section: Scientific Foundations

Developing mathematical methods of optimal control, inverse problems and dynamical systems; software tools

Optimal control of systems governed by partial differential equations has a long history at INRIA, going back to the pioneering work of J.-L. Lions [28] . The Commands and Corida team-projects are now investigating this area. First, we want to be users of the results of this research. We want to use automatic control tools not only as a way of optimizing the action on a system but also as a modeling aid. For instance, Lyapunov functions have long been used as a theoretical tool in population dynamics. Similarly, the recent trend in automatic control consisting in using families of models giving a finer or coarser representation of reality can also be found in population dynamics: models describing the evolution of interacting populations are quite numerous, ranging from individual-based models to models governed by systems of ordinary or partial differential equations.

The method of virtual controls has been set forth by J.-L. Lions and O. Pironneau. It aims at providing methods for domain decomposition, model coupling, and multiphysics modeling based on optimal control techniques. Here, interactions (between domains or models) are considered as control variables, and the problem is solved by minimizing a criterion. This approach fits well with the framework described here, particularly for inverse problems, and we intend to contribute to it.

Inverse problems: application to parameter identification and data assimilation in biomathematics

A classical way to tackle inverse problems is to set them as optimal control problems. This method has proved to be efficient and is widely used in various fields. Nevertheless, we are persuaded that important methodological progress is still needed to generalize its use. With J.-P. Yvon, we have worked on the numerical stability of these methods, seeking to redefine the mismatch criterion in order to improve the conditioning of the Hessian of the optimization problem ([29]). In the same vein, a simple idea to explore is to use a total least squares approach for this criterion.
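This optimal-control formulation of an inverse problem can be illustrated by a minimal sketch (the model, data and search method here are illustrative assumptions, not taken from the report): one parameter of a logistic growth model is identified by minimizing a least-squares output-mismatch criterion.

```python
# Hypothetical example: identify the growth rate r of a logistic model
# x' = r*x*(1 - x/K) from noisy-free synthetic measurements, by
# minimizing a least-squares mismatch criterion.

def simulate(r, x0=0.1, K=1.0, dt=0.1, steps=50):
    """Explicit Euler integration of the logistic ODE x' = r*x*(1 - x/K)."""
    xs, x = [x0], x0
    for _ in range(steps):
        x = x + dt * r * x * (1.0 - x / K)
        xs.append(x)
    return xs

def mismatch(r, data):
    """Least-squares criterion between model output and measurements."""
    return sum((m - d) ** 2 for m, d in zip(simulate(r), data))

# Synthetic "measurements" generated with the true parameter r = 0.8.
data = simulate(0.8)

# Crude golden-section search on the scalar parameter; a real study would
# use a gradient (adjoint) method, where the Hessian conditioning issues
# discussed above become relevant.
lo, hi = 0.1, 2.0
phi = (5 ** 0.5 - 1) / 2
for _ in range(60):
    a, b = hi - phi * (hi - lo), lo + phi * (hi - lo)
    if mismatch(a, data) < mismatch(b, data):
        hi = b
    else:
        lo = a
r_hat = (lo + hi) / 2
print(round(r_hat, 3))  # recovers a value close to 0.8
```

The point of the sketch is only the structure of the method: a forward model, a mismatch criterion, and an optimization loop over the unknown parameter.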

Another idea we want to investigate consists in defining a measure of match (positive) and one of mismatch (negative) between the output of the model and the measurements, and taking into account only the positive part in the criterion. This point of view, inspired by methods used in genomic sequence comparison (Waterman's algorithm), aims at better robustness of the method by eliminating from the criterion the effect of unmodeled phenomena. It also leads to free boundary problems (the part of the observation taken into account).

For certain problems the ill-posedness can be related, by the factorization method, to the ill-posedness of the backward integration of a parabolic equation. We can then apply the well-known quasi-reversibility method to that case. Setting up programs of vaccination, prophylaxis and detection requires an a priori feasibility study. After a modeling step, this study will go through a step of tuning the model to the data. Yet initial data are poorly known or completely unknown, demographic parameters are often unknown, and while biologists debate the nature of disease transmission mechanisms, their exact form and values are unknown. We intend to use parameter estimation techniques for these biomathematics problems.

Also, even though the models used nowadays are mainly qualitative, we want to investigate forecasting simulations. For that purpose, data assimilation is an important method. It has benefited from many recent developments in meteorology and oceanography, such as reduced-state Kalman filtering and ensemble Kalman filtering. To our knowledge these tools have not been used in the present context. We intend to explore their use and adapt them. Furthermore, the efficiency of the “robust” Kalman filter issued from our research on QR factorization will also be evaluated (cf. section 3.3.2).
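A minimal sketch of the ensemble Kalman filter mentioned above, on a scalar linear system (the model, noise levels and ensemble size are illustrative assumptions, not from the report):

```python
# Illustrative ensemble Kalman filter on the scalar system
# x_{k+1} = a*x_k + model noise, observed as y_k = x_k + obs noise.
import random

random.seed(0)
a, q, r = 0.95, 0.05, 0.2        # dynamics, model noise var, obs noise var
n_ens, n_steps = 100, 200

truth = 1.0
ensemble = [random.gauss(0.0, 1.0) for _ in range(n_ens)]
errors = []
for _ in range(n_steps):
    # Forecast step for the truth and each ensemble member.
    truth = a * truth + random.gauss(0.0, q ** 0.5)
    ensemble = [a * x + random.gauss(0.0, q ** 0.5) for x in ensemble]
    # Noisy observation of the truth.
    y = truth + random.gauss(0.0, r ** 0.5)
    # Ensemble statistics estimate the forecast error variance P.
    mean = sum(ensemble) / n_ens
    P = sum((x - mean) ** 2 for x in ensemble) / (n_ens - 1)
    K = P / (P + r)              # scalar Kalman gain
    # Analysis step with perturbed observations.
    ensemble = [x + K * (y + random.gauss(0.0, r ** 0.5) - x)
                for x in ensemble]
    errors.append(abs(sum(ensemble) / n_ens - truth))

# Typical analysis error over the last steps, a fraction of the obs noise.
print(sum(errors[-50:]) / 50)
```

The interest of the ensemble variant is that the error covariance is estimated from the ensemble itself, so no explicit covariance propagation is needed; this is what makes it attractive for the large state spaces of meteorology and oceanography.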

Dynamic programming and factorization of boundary value problems

We propose a method to solve elliptic boundary value problems inspired by optimal control theory. We apply spatially the technique of invariant embedding, which is used in time to compute the optimal feedback in control. In the symmetric case we consider the state equation as the optimality system of a control problem, one space variable playing the role of time. The problem is embedded in a family of similar problems defined over subdomains of the initial domain. These subdomains are bounded by a family of surfaces sweeping over the initial domain. This technique decouples the optimality system, as in the derivation of the optimal feedback. One can thus factorize a second-order elliptic boundary value problem into two first-order Cauchy problems of parabolic type. These problems are decoupled: one can solve one problem in one space direction (the “descent phase”), then the other problem in the opposite direction (the “climbing phase”). This decoupling technique also works in the nonsymmetric case.

The goal is to provide Cauchy problems equivalent to boundary value problems in as general a manner as possible. We expect this to be an interesting theoretical tool: it has already established a link between certain uniqueness results for the Cauchy problem for the operator under consideration and backward uniqueness for the parabolic problem in factorized form.

At the moment the method has been applied and fully justified for the Poisson equation in the case of a cylinder [10] . Indeed, the invariant embedding can be done naturally in the direction of the cylinder axis, allowing the factorization of the second-order operator into the product of first-order operators with respect to the coordinate along the cylinder axis. It requires the computation of an operator solution of a Riccati equation. This operator relates two kinds of boundary conditions on the mobile boundary for the same solution (for example, the operator relating Neumann and Dirichlet boundary conditions). Furthermore, the same method applied to the finite-difference discretized problem is nothing other than the block Gauss factorization of its matrix. Therefore the method can be seen as the infinite-dimensional generalization of the block Gauss factorization. We seek to generalize the method to open sets of arbitrary shape, and to families of surfaces of arbitrary shape sweeping over the domain.
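The discrete counterpart of this factorization is easy to exhibit in one dimension (an illustrative reduction, not the cylinder case treated in [10]): for the finite-difference discretization of -u'' = f on (0,1) with homogeneous Dirichlet conditions, the Gauss factorization of the tridiagonal matrix is exactly a descent sweep followed by a climbing sweep, and the scalar recursion on the pivots plays the role of the Riccati equation.

```python
# 1D illustration: -u'' = f on (0,1), u(0) = u(1) = 0, discretized by
# centered finite differences.  The Gauss (Thomas) factorization of the
# tridiagonal system exhibits the descent/climb structure; the recursion
# on d[i] is the discrete counterpart of the Riccati equation.
n = 99                    # number of interior points
h = 1.0 / (n + 1)
f = [1.0] * n             # constant source term f = 1

# Descent phase: first-order sweep computing the pivots d[i] and the
# partially eliminated right-hand side g[i].
d, g = [0.0] * n, [0.0] * n
d[0], g[0] = 2.0, h * h * f[0]
for i in range(1, n):
    d[i] = 2.0 - 1.0 / d[i - 1]          # Riccati-like scalar recursion
    g[i] = h * h * f[i] + g[i - 1] / d[i - 1]

# Climbing phase: back-substitution in the opposite direction.
u = [0.0] * n
u[-1] = g[-1] / d[-1]
for i in range(n - 2, -1, -1):
    u[i] = (g[i] + u[i + 1]) / d[i]

# The exact solution of -u'' = 1 with these boundary conditions is
# u = x(1-x)/2, and the scheme is exact for quadratic solutions.
x_mid = (n // 2 + 1) * h
print(abs(u[n // 2] - x_mid * (1 - x_mid) / 2))  # ≈ 0 up to round-off
```

In higher dimensions the scalar pivot d[i] becomes an operator (a Dirichlet-to-Neumann map on the moving surface), and the recursion becomes the operator Riccati equation of the continuous factorization.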

There are many ways of extending the method, for instance to other elliptic equations, equations of different types, QR factorization and nonlinear equations, and of applying it to other problems such as obtaining transparent conditions for unbounded domains, domain decomposition, inverse problems, and singular perturbation analysis.

Besides this theoretical tool, giving equivalent formulations of the continuous problem may give rise to new numerical methods based on these formulations (cf. 3.3.3).

Applications of the factorization method to devise new numerical methods

The factorization method yields an equivalent formulation of the original boundary value problem. One can use it numerically in various ways:

  1. the interpretation of the block Gauss factorization as a possible discretization of the continuous factorization suggests new schemes: we have already studied an explicit discretization of the factorized system in the privileged space direction. Many other variants are possible;

  2. following the analogy with control problems, we can see incomplete factorization preconditioning as corresponding to suboptimal feedbacks in the framework of optimal control. It is a matter of defining sparse approximations of the Dirichlet-Neumann operator and of using these approximations to obtain preconditioning operators;

  3. the factorization puts into play a family of surfaces, depending on a space variable, sweeping over the domain. We then have to describe these surfaces and their displacement, as well as the effect of operators acting on functions defined on these surfaces. In the framework of the finite element method, one can interpret a discretization of the family of surfaces as the “fronts” of the mesh, and the block LU factorization (by fronts) as the integration of first-order equations. The method then needs only the meshing of a family of surfaces instead of a volume mesh. Mesh-size adaptation methods may then give rise to an alteration of the front velocity, and so to an alteration of the mesh.

Generally speaking, in any situation where the Dirichlet-Neumann operator is used (transparent boundary conditions, domain decomposition, waveguide matching, ...), the factorization method, which provides the equation satisfied by this operator, may permit advances. We will also make progress by transposing results obtained in one domain to related domains. In this framework we wish to develop and promote the concept of a “computing zoom”: during a simulation, the user defines a region of interest and the software recomputes the solution only in that region (with the same number of unknowns, i.e. with a better resolution), allowing variation of the data in this region. For that purpose we need to compute boundary conditions on the boundary of the region of interest which exactly sum up the behaviour of the solution outside it. This can be done by integrating a Riccati equation from the boundary of the initial domain to the boundary of the region of interest.
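The computing-zoom idea can be sketched in one dimension (an assumed toy setting, not the report's implementation): for the tridiagonal system of -u'' = f, a first-order elimination sweep from the outer boundary up to the boundary of the region of interest condenses the exterior into a single boundary condition, the discrete analogue of integrating the Riccati equation inward; solving only inside the region of interest then reproduces the full solution exactly.

```python
# "Computing zoom" in 1D: eliminate the exterior unknowns of the
# tridiagonal Poisson system by a descent sweep (discrete Riccati
# integration), then solve only in the region of interest.

def solve_tridiag(d0, g0, rhs):
    """Descent/climb solve of the system whose first equation is
    d0*u[0] - u[1] = g0 and whose later equations are
    -u[i-1] + 2*u[i] - u[i+1] = rhs[i-1] (last row has no superdiagonal)."""
    m = len(rhs) + 1
    d, g = [0.0] * m, [0.0] * m
    d[0], g[0] = d0, g0
    for i in range(1, m):
        d[i] = 2.0 - 1.0 / d[i - 1]
        g[i] = rhs[i - 1] + g[i - 1] / d[i - 1]
    u = [0.0] * m
    u[-1] = g[-1] / d[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (g[i] + u[i + 1]) / d[i]
    return u

n, h = 99, 0.01
rhs = [h * h * (i + 1) for i in range(n)]   # h^2 * f at the interior nodes

# Reference: solve on the whole domain (first row is 2*u0 - u1 = rhs[0]).
u_full = solve_tridiag(2.0, rhs[0], rhs[1:])

# Condense the exterior nodes 0..k-1 into one boundary condition at node k
# by the same first-order sweep.
k = 50
d0, g0 = 2.0, rhs[0]
for i in range(1, k + 1):
    d0, g0 = 2.0 - 1.0 / d0, rhs[i] + g0 / d0

# Solve only on the region of interest, nodes k..n-1.
u_zoom = solve_tridiag(d0, g0, rhs[k + 1:])

err = max(abs(a - b) for a, b in zip(u_zoom, u_full[k:]))
print(err)   # 0 up to round-off: the condensed boundary condition is exact
```

In the actual computing zoom, the point of re-solving in the region of interest would be to use more unknowns there (a finer resolution) or modified data, with the condensed condition still exactly summarizing the exterior.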