
## Section: Research Program

### Control Systems

Our effort is directed toward efficient methods for the control of real (physical) systems, based on a model of the system to be controlled. Here *system* refers to the physical plant or device, whereas *model* refers to a mathematical representation of it.

We mostly investigate nonlinear systems whose nonlinearities admit a strong structure derived from physics. The equations governing their behavior are then well known, and the modeling part consists of choosing which phenomena are to be retained in the model used for control design, the others being treated as perturbations; a more complete model may be used for simulations, for instance. We focus on systems that admit a reliable finite-dimensional model in continuous time; this means that the models are controlled ordinary differential equations, often nonlinear.

Choosing models that are accurate yet simple enough to allow control design is in itself a key issue; however, modeling and identification as theories are not per se within the scope of our project.

The extreme generality and versatility of linear control do not contradict the often-heard claim that “most real-life systems are nonlinear”. Indeed, for many control problems, a linear model is sufficient to capture the features that matter for control. The reason is that most control objectives are local: first-order variations around an operating point or a trajectory are governed by a linear control model, and except in degenerate situations (non-controllability of this linear model), the local behavior of a nonlinear dynamic phenomenon is dictated by the behavior of these first-order variations. Linear control is the hard core of control theory and practice; it has been pushed to a high degree of achievement (see for instance the classics [45], [35]) and has led to great success in industrial applications (PID, Kalman filtering, frequency-domain design, ${H}^{\infty }$ robust control, etc.). It must be taught to future engineers, and it is still a topic of ongoing research.
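As an illustration of this local linear viewpoint (a toy sketch not taken from the text, using a normalized pendulum model with hypothetical constants), one can linearize a controlled pendulum around its upright equilibrium and check the Kalman rank condition for controllability of the tangent model:

```python
import numpy as np

# Controlled pendulum (normalized): x1' = x2, x2' = -sin(x1) + u.
# Around the upright equilibrium (x1, x2) = (pi, 0), first-order variations
# obey the tangent linear model dx' = A dx + B u, with:
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # Jacobian of (x2, -sin(x1)) at x1 = pi: -cos(pi) = 1
B = np.array([[0.0],
              [1.0]])

# Kalman rank condition: (A, B) is controllable iff
# [B, AB, ..., A^{n-1}B] has full rank n.
n = A.shape[0]
C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
print(np.linalg.matrix_rank(C))  # 2: full rank, controllable
```

Since the controllability matrix has full rank, the tangent model is controllable, and a local linear design stabilizes the nonlinear system near this operating point, in line with the non-degenerate case described above.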

Linear control by itself however reaches its limits in some important situations:

1. Non-local control objectives. For instance, steering the system from one region to another, possibly quite remote, one (path planning and optimal control); in this case, a local linear approximation cannot be sufficient.

It is also the case when a domain of validity (e.g. of stability) is prescribed that is larger than the region where the linear approximation is dominant.

2. Local control at degenerate equilibria. Linear control yields local stabilization of an equilibrium point based on the tangent linear approximation if the latter is controllable. When it is not, and this occurs in some physical systems at interesting operating points, linear control is irrelevant and specific nonlinear techniques have to be designed.

This is in a sense an extreme case of the second paragraph in point 1: the region where the linear approximation is dominant vanishes.
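A classical example of such a degenerate equilibrium (a standard textbook instance, not specific to this text) is the nonholonomic integrator, whose linearization at the origin fails the Kalman rank condition even though the nonlinear system is controllable:

```python
import numpy as np

# Brockett's nonholonomic integrator:
#   x1' = u1,  x2' = u2,  x3' = x1*u2 - x2*u1.
# At the origin, the tangent linear model has A = 0 (the drift vanishes
# to first order) and
A = np.zeros((3, 3))
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

n = A.shape[0]
C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
print(np.linalg.matrix_rank(C))  # 2 < 3: the linearization is NOT controllable
```

The missing direction is recovered by the Lie bracket of the two control vector fields, so the nonlinear system is controllable; but no linear feedback design applies at the origin, and specific nonlinear techniques are required.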

3. Small controls. In some situations, the actuators only allow the control to have a very small effect compared to the other phenomena acting on the system. The behavior of the uncontrolled system then plays a major role, and we are again outside the scope of linear control methods.

4. Local control around a trajectory. Sometimes a trajectory has been selected (this relates to point 1), and local regulation around this reference is to be performed. When the trajectory is not a single equilibrium point, linearization in general yields a time-varying linear system. Even if it is controllable, time-varying linear systems are not in the scope of most classical linear control methods, and it is better to incorporate this local regulation in the nonlinear design, all the more so as the linear approximation along optimal trajectories is, by nature, often non-controllable.
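For instance (a minimal sketch with a hypothetical reference trajectory, reusing the same normalized pendulum model), linearizing along a reference means evaluating the Jacobians on that reference, which produces a genuinely time-varying linear model:

```python
import numpy as np

# Linearizing x1' = x2, x2' = -sin(x1) + u along a reference
# (x1_ref(t), x2_ref(t), u_ref(t)) gives the TIME-VARYING linear system
#   dx' = A(t) dx + B(t) du.
def A(t, x1_ref):
    # Jacobian of the dynamics w.r.t. the state, evaluated on the reference.
    return np.array([[0.0, 1.0],
                     [-np.cos(x1_ref(t)), 0.0]])

B = np.array([[0.0],
              [1.0]])  # constant here; in general B also depends on t

# With a hypothetical reference x1_ref(t) = sin(t):
x1_ref = np.sin
print(A(0.0, x1_ref))  # [[0, 1], [-1, 0]], and A(t) varies with t
```

Because A(t) changes along the trajectory, tools built for constant (A, B) pairs do not apply directly, which is the point made above about incorporating this regulation into the nonlinear design.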

Let us discuss in more detail some specific problems that we are studying or plan to study: the classification and structure of control systems in section 3.2, optimal control and its links with feedback in section 3.3, the problem of optimal transport in section 3.4, and finally problems relevant to a specific class of systems where the control is “small” in section 3.5.