Team APICS

Section: Scientific Foundations

Structure and control of non-linear systems

Feedback control and optimal control

Keywords : control, stabilization, optimal control, control Lyapunov functions, feedback.

Participants : Jean-Baptiste Pomet, Ludovic Rifford.

Using the terminology of the beginning of section 3.1, the class of models considered here is that of finite-dimensional nonlinear control systems; we focus on control rather than identification. In many cases, a linear control law based on the linear approximation around a nominal point or trajectory is sufficient. However, there are important instances where it is not, either because the magnitude of the control is limited, because the linear approximation is not controllable, or because the problem itself, like path planning, is not local in nature.

State feedback stabilization consists in designing a control law which is a function of the state and makes a given point (or trajectory) asymptotically stable for the closed-loop system. That function of the state must have some regularity, at least enough for the closed-loop system to make sense; continuous or smooth feedback would be ideal, but one may also be content with discontinuous feedback provided robustness properties are not defeated. Stabilization can be viewed as a weak version of the optimal control problem, which is to find a control minimizing a given criterion (for instance the time to reach a prescribed state). Optimal control generally leads to a rather irregular dependence on the initial state; in contrast, stabilization is a qualitative objective (reaching a given state asymptotically) which is more flexible and allows one to impose much more regularity.
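As a simple illustration of this contrast (a standard textbook example, not taken from the team's work), consider the double integrator

\[
\dot x_1 = x_2, \qquad \dot x_2 = u .
\]

The smooth feedback \(u = -x_1 - 2x_2\) makes the origin globally asymptotically stable, since the closed-loop matrix has the double eigenvalue \(-1\). By contrast, the time-optimal feedback under the constraint \(|u|\le 1\) is bang-bang, \(u = \pm 1\), and is discontinuous across the switching curve \(x_1 = -\tfrac12\, x_2 |x_2|\): optimality is obtained at the price of regularity.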

Lyapunov functions are a well-known tool for studying the stability of (uncontrolled) dynamical systems. For a control system, a control Lyapunov function is a Lyapunov function for the closed-loop system where the feedback is chosen appropriately. It can be characterized by a differential inequality called the “Artstein (in)equation” [36], reminiscent of the Hamilton-Jacobi-Bellman equation but largely under-determined. One can easily deduce a continuous stabilizing feedback control from the knowledge of a control Lyapunov function; also, even when such a control is known beforehand, obtaining a control Lyapunov function can still be very useful to deal with robustness issues. Moreover, if one has to optimize a criterion and the optimal solution is hard to compute, one can look for a control Lyapunov function which comes “close” (in the sense of the criterion) to the solution of the optimization problem but leads to a control which is easier to work with.
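To make this concrete, consider a single-input control-affine system \(\dot x = f(x) + u\, g(x)\) with \(f(0)=0\) (a standard setting which the report does not spell out). A smooth, proper, positive definite function \(V\) is a control Lyapunov function when

\[
\inf_{u \in \mathbb{R}} \big( L_f V(x) + u\, L_g V(x) \big) < 0 \qquad \text{for all } x \neq 0,
\]

equivalently, \(L_g V(x) = 0 \Rightarrow L_f V(x) < 0\) for \(x \neq 0\). From such a \(V\), Sontag's universal formula

\[
u(x) = \begin{cases} -\dfrac{a(x) + \sqrt{a(x)^2 + b(x)^4}}{b(x)}, & b(x) \neq 0, \\[1ex] 0, & b(x) = 0, \end{cases}
\qquad a = L_f V, \quad b = L_g V,
\]

yields an explicit stabilizing feedback, continuous away from the origin; indeed \(\dot V = a + u\,b = -\sqrt{a^2 + b^4} < 0\) along closed-loop trajectories, for \(x \neq 0\).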

These constructions were exploited in collaborative research conducted with Thales Alenia Space (Cannes), where minimizing a certain cost (fuel consumption or transfer time) is very important, while a feedback law is preferred for robustness and ease of implementation (see section 4.4).

Optimal transportation

Keywords : optimal transport, optimal control.

Participants : Ahed Hindawi, Jean-Baptiste Pomet, Ludovic Rifford.

The study of optimal mass transport problems in the Euclidean or Riemannian setting has a long history which goes back to the pioneering works [84], [74], and was more recently revisited and revitalized in [59], [83]. It is the problem of finding the cheapest transformation that moves a given initial measure to a given final one, where the cost between points is a (squared) Euclidean or Riemannian distance.
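In standard notation (the formulas are not displayed in the report), the Monge problem asks, given probability measures \(\mu\) and \(\nu\) and a cost \(c(x,y) = \tfrac12\, d(x,y)^2\), for a map \(T\) pushing \(\mu\) forward to \(\nu\) that achieves

\[
\inf_{T_\# \mu = \nu} \int c\big(x, T(x)\big)\, d\mu(x),
\]

while its Kantorovich relaxation minimizes \(\int c(x,y)\, d\pi(x,y)\) over all couplings \(\pi\) of \(\mu\) and \(\nu\).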

More recently, there has been a lot of interest in the same transportation problems with a cost coming from optimal control, i.e. from minimizing an integral quadratic cost among trajectories that are subject to differential constraints coming from a control system. The case of controllable affine control systems without drift (in which case the cost is the squared sub-Riemannian distance) is studied in [35], [32] and [21].
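Schematically (a condensed rendering, with notation of our choosing), the cost between two points is the value function

\[
c(x,y) \;=\; \inf\Big\{ \int_0^1 |u(t)|^2\, dt \;:\; \dot\gamma = f_0(\gamma) + \sum_{i=1}^m u_i\, f_i(\gamma),\ \gamma(0) = x,\ \gamma(1) = y \Big\},
\]

where \(f_0\) is the drift. When \(f_0 = 0\) (the driftless case), \(c(x,y)\) is exactly the squared sub-Riemannian distance associated with the vector fields \(f_1, \dots, f_m\).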

This is a new topic in the team, starting with the PhD of A. Hindawi, whose goal is to tackle the problem for systems with drift. The optimal transport problem in this setting borrows methods from optimal control and, at the same time, helps in understanding optimal control, because it is a more regular problem.

Transformations and equivalences of non-linear systems and models

Keywords : non-linear control, non-linear feedback, classification, non-linear identification.

Participants : Laurent Baratchart, Jean-Baptiste Pomet.

Here we study certain transformations of models of control systems, or more accurately the equivalence classes of models modulo such transformations. The interested reader can find a richer overview, including the motivations for this study, in the first chapter of the HDR Thesis [12].

A static feedback transformation is a (non-singular) re-parametrization of the control depending on the state, together with a change of coordinates in the state space. A dynamic feedback transformation consists of a dynamic extension (adding new states and assigning them a new dynamics) followed by a static feedback on the augmented system; schematic formulas are given below. Dynamic equivalence is obviously more general than static equivalence. Let us now stress two specific problems that we are tackling.
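In local coordinates (the report keeps these definitions verbal; the symbols below are ours), a static feedback transformation of \(\dot x = f(x,u)\) takes the form

\[
z = \varphi(x), \qquad v = \alpha(x,u),
\]

with \(\varphi\) a local diffeomorphism and \(u \mapsto \alpha(x,u)\) invertible for each \(x\), while a dynamic feedback transformation first augments the system with a compensator

\[
\dot w = a(x,w,v), \qquad u = b(x,w,v),
\]

and then applies a static transformation to the augmented state \((x,w)\).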

Dynamic Equivalence. Very few invariants are known. Any insight into this problem is relevant to the questions above. Some results [24] are reported in section 6.14.

A special equivalence class is the one containing linear controllable systems. It turns out that a system is in this class, i.e. is dynamic linearizable, if and only if there is a formula giving the general solution by applying a nonlinear differential operator to a certain number of arbitrary functions of time; such a formula is often called a (Monge) parametrization, and the order of the differential operator is called the order of the parametrization. The existence of such a parametrization has been emphasized in recent years as very important and useful in control, see [66]; this property (with additional requirements on the parametrization) is also called flatness.
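A classical example (not specific to the team's work) is the kinematic unicycle

\[
\dot x = u_1 \cos\theta, \qquad \dot y = u_1 \sin\theta, \qquad \dot\theta = u_2 .
\]

Taking \((x, y)\) as arbitrary functions of time, the remaining variables are recovered, wherever \(\dot x^2 + \dot y^2 \neq 0\), from

\[
\tan\theta = \frac{\dot y}{\dot x}, \qquad u_1 = \pm\sqrt{\dot x^2 + \dot y^2}, \qquad u_2 = \frac{\dot x\, \ddot y - \dot y\, \ddot x}{\dot x^2 + \dot y^2},
\]

a parametrization of order 2: the system is flat, with flat output \((x,y)\).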

An important question remains open: how can one algorithmically decide whether a given system has this property, i.e., is dynamic linearizable or not? The mathematical difficulty is that no a priori bound is known on the order of the above-mentioned differential operator giving the parametrization. Within the team, results on low-dimensional systems have been obtained [1], see also [37]; the above-mentioned difficulty is not solved for these systems, but results are given under a priori prescribed bounds on this order.

From the differential-algebraic point of view, the module of differentials of a controllable system is free and finitely generated over the ring of differential polynomials in d/dt with coefficients in the ring of functions on the system's trajectories; the above question is that of deciding whether there exists a basis consisting of closed differential forms. Expressed in this way, it looks like an extension of the classical Frobenius integrability theorem to the case where the coefficients are differential operators. Of course, some non-classical conditions have to be added to the classical stability under exterior differentiation, and the problem is open. In [38], a partial answer to this problem was given, but in a framework where infinitely many variables are allowed, and a finiteness criterion is still missing. The goal is to obtain a formal and implementable algorithm deciding whether or not a given system is flat around a regular point.
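In symbols (a condensed rendering of the paragraph above, with notation of our choosing), for \(\dot x = f(x,u)\) with \(m\) inputs the module is generated by \(dx_1, \dots, dx_n, du_1, \dots, du_m\) over the ring of differential polynomials in \(\tfrac{d}{dt}\), subject to the relations obtained by differentiating the dynamics:

\[
\frac{d}{dt}\, dx_i \;=\; \sum_{j=1}^n \frac{\partial f_i}{\partial x_j}\, dx_j \;+\; \sum_{k=1}^m \frac{\partial f_i}{\partial u_k}\, du_k , \qquad 1 \le i \le n .
\]

Controllability makes this module free of rank \(m\), and flatness amounts to the existence of a basis \((\omega_1, \dots, \omega_m)\) with \(d\omega_i = 0\), i.e., \(\omega_i = d\varphi_i\); the functions \((\varphi_1, \dots, \varphi_m)\) then form a flat output.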

Topological Equivalence. Compared to static equivalence, dynamic equivalence is more general and hence might offer more robust “qualitative” invariants; another way to enlarge the equivalence classes is to look at equivalence modulo possibly non-differentiable transformations.

In the case of dynamical systems without control, the Hartman-Grobman theorem (recalled below) states that, in a neighborhood of a non-degenerate equilibrium, every system is locally equivalent to a linear system via a transformation that is merely bi-continuous. A similar result for control systems would say, typically, that outside a “meager” class of models (for instance, those whose linear approximation is not controllable), and locally around nominal values of the state and the control, no qualitative phenomenon can distinguish a non-linear system from a linear one, all non-linear phenomena being thus either of global nature or singularities.
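For reference, the classical statement (for uncontrolled systems) reads: if \(\dot x = f(x)\), \(f(x_0) = 0\), and \(A = Df(x_0)\) is hyperbolic (no eigenvalue on the imaginary axis), then there is a homeomorphism \(h\) from a neighborhood of \(x_0\) onto a neighborhood of \(0\) conjugating the flow \(\phi_t\) of \(f\) to the linear flow,

\[
h\big(\phi_t(x)\big) = e^{tA}\, h(x),
\]

for all \(x\) and \(t\) for which both sides are defined.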

In [41], we proved a “Hartman-Grobman theorem for control systems” under weak regularity conditions, but it is too abstract to be relevant to the above considerations on qualitative phenomena: linearization is performed by functional, non-causal transformations whose structure is not well understood; it does, however, acquire a concrete meaning when the inputs are themselves generated by a finite-dimensional dynamics.

A stronger Hartman-Grobman theorem for control systems, where the transformations would be homeomorphisms in the state-control space, in fact cannot hold; this is proved in [15] and commented upon in section 6.13: almost all topologically linearizable control systems are differentiably linearizable. In general (equivalence between nonlinear systems), topological invariants remain to be investigated.

