The team develops constructive, function-theoretic approaches to inverse problems arising in modeling and design, in particular for electromagnetic systems, as well as in the analysis of certain classes of signals.

Data typically consist of measurements or desired behaviors. The general thread is to approximate them by families of solutions to the equations governing the underlying system. This leads us to consider various interpolation and approximation problems in classes of rational and meromorphic functions, harmonic gradients, or solutions to more general elliptic partial differential equations (PDE), in connection with inverse potential problems. A recurring difficulty is to control the singularities of the approximants.

The mathematical tools pertain to complex and harmonic analysis, approximation theory, potential theory, system theory, differential topology, optimization and computer algebra. Targeted applications include:

identification and synthesis of analog microwave devices (filters, amplifiers),

non-destructive control from field measurements in medical engineering (source recovery in magneto/electro-encephalography) and in paleomagnetism (determining the magnetization of rock samples).

In each case, the endeavor is to develop algorithms resulting in dedicated software.

Within the extensive field of inverse problems, much of the research by Factas deals with reconstructing solutions of classical elliptic PDEs from their boundary behavior. Perhaps the simplest example lies with harmonic identification of a stable linear dynamical system: the transfer-function *e.g.* the Cauchy formula.

Practice is not nearly as simple, for *i.e.* to locate the

Step 1 relates to extremal problems and analytic operator theory, see Section . Step 2 involves optimization, and some Schur analysis to parametrize transfer matrices of given McMillan degree when dealing with systems having several inputs and outputs, see Section . It also makes contact with the topology of rational functions, in particular to count critical points and to derive bounds, see Section . Step 2 raises further issues in approximation theory regarding the rate of convergence and the extent to which singularities of the approximant (*i.e.* its poles) tend to singularities of the approximated function; this is where logarithmic potential theory becomes instrumental, see Section .

Applying a realization procedure to the result of step 2 yields an identification procedure from incomplete frequency data which was first demonstrated in to tune resonant microwave filters. Harmonic identification of nonlinear systems around a stable equilibrium can also be envisaged by combining the previous steps with exact linearization techniques from .

A similar path can be taken to approach design problems in the frequency domain, replacing the measured behavior by some desired behavior. However, describing achievable responses in terms of the design parameters is often cumbersome, and most constructive techniques rely on specific criteria adapted to the physics of the problem. This is especially true of filters, the design of which traditionally appeals to polynomial extremal problems , . To this area, Apics contributed the use of Zolotarev-like problems for multi-band synthesis, although we presently favor interpolation techniques in which parameters arise in a more transparent manner, as well as convex relaxation of hyperbolic approximation problems, see Sections and .

The previous example of harmonic identification naturally suggests a generalization. Indeed, on identifying *i.e.*, the field) on part of a hypersurface (a curve in 2-D) encompassing the support of

Inverse potential problems are severely indeterminate because infinitely many measures within an open set of *balayage* . In the two-step approach previously described, we implicitly removed this indeterminacy by requiring in step 1 that the measure be supported on the boundary (because we seek a function holomorphic throughout the right half-space), and by requiring in step 2 that the measure be discrete in the left half-plane (in fact: a finite sum of point masses

To recap, the gist of our approach is to approximate boundary data by (boundary traces of) fields arising from potentials of measures with specific support. This differs from standard approaches to inverse problems, where descent algorithms are applied to integration schemes of the direct problem; in such methods, it is the equation which gets approximated (in fact: discretized).
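This gist can be sketched in a few lines (a hedged toy, entirely of our own making: a 2-D logarithmic potential, a single source, and a handful of hypothetical candidate support points):

```python
import numpy as np

# Boundary data: logarithmic potential of a single unit point mass at s,
# sampled on the unit circle (2-D toy setting, illustration only).
s = np.array([0.3, 0.2])
theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
boundary = np.column_stack([np.cos(theta), np.sin(theta)])
data = np.log(np.linalg.norm(boundary - s, axis=1))

# Candidate support points for the discrete measure (one is the true source).
candidates = np.array([[0.3, 0.2], [-0.4, 0.1], [0.0, -0.5]])

# Column j holds the potential of a unit mass at candidate j on the boundary.
A = np.log(np.linalg.norm(boundary[:, None, :] - candidates[None, :, :], axis=2))

# Least-squares fit of the weights: the equation itself is never discretized,
# only the data are matched by potentials of measures with prescribed support.
weights, *_ = np.linalg.lstsq(A, data, rcond=None)
print(np.round(weights, 6))  # mass concentrates on the true source
```

Since the true source is among the candidates, the least-squares weights put the whole mass on it; only the data are approximated, never the equation.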

Along these lines, Factas advocates the use of steps 1 and 2 above, along with some singularity analysis, to approach issues of non-destructive control in 2-D and 3-D , , . The team is currently engaged in the generalization to inverse source problems for the Laplace equation in 3-D, to be described further in Section . There, holomorphic functions are replaced by harmonic gradients; applications are to inverse source problems in neurosciences (in particular in EEG/MEG) and inverse magnetization problems in geosciences, see Section .

The approximation-theoretic tools developed by Apics and now by Factas to handle issues mentioned so far are outlined in Section . In Section to come, we describe in more detail which problems are considered and which applications are targeted.

Note that the Inria project-team Apics reached the end of its life cycle by the end of 2017. The proposal for our new team Factas was processed by the CEP (Comité des Équipes-Projets) of the Research Center in 2018, and approved by the head of the Institute in 2019.

By standard properties of conjugate differentials, reconstructing Dirichlet-Neumann boundary conditions for a function harmonic in a plane domain, when these conditions are already known on a subset

Another application by the team deals with non-constant conductivity over a doubly connected domain, the set

This was actually carried out in collaboration with CEA (French nuclear agency) and the University of Nice (JAD Lab.), to data from *Tore Supra* in . The procedure is fast because no numerical integration of the underlying PDE is needed, as an explicit basis of solutions to the conjugate Beltrami equation in terms of Bessel functions was found in this case. Generalizing this approach in a more systematic manner to free boundary problems of Bernoulli type, using descent algorithms based on shape-gradient for such approximation-theoretic criteria, is an interesting prospect for the team.

The piece of work we just mentioned requires defining and studying Hardy spaces of conjugate Beltrami equations, which is an interesting topic. For Sobolev-smooth coefficients of exponent greater than 2, they were investigated in , . The case of the critical exponent 2 is treated in , which apparently provides the first example of well-posed Dirichlet problem in the non-strictly elliptic case: the conductivity may be unbounded or zero on sets of zero capacity and, accordingly, solutions need not be locally bounded. More importantly perhaps, the exponent 2 is also the key to a corresponding theory on very general (still rectifiable) domains in the plane, as coefficients of pseudo-holomorphic functions obtained by conformal transformation onto a disk are merely of

The 3-D version of step 1 in Section is another subject investigated by Factas: to recover a harmonic function (up to an additive constant) in a ball or a half-space from partial knowledge of its gradient. This prototypical inverse problem (*i.e.* inverse to the Cauchy problem for the Laplace equation) often recurs in electromagnetism. At present, Factas is involved with solving instances of this inverse problem arising in two fields, namely medical imaging, *e.g.* electroencephalography (EEG) and magneto-encephalography (MEG), and paleomagnetism (recovery of rock magnetization) , , see Section . In this connection, we collaborate with two groups of partners: Athena Inria project-team and INS (Institut de Neurosciences des Systèmes, http://

The team is further concerned with 3-D generalizations and applications to non-destructive control of step 2 in Section . A typical problem here is to localize inhomogeneities or defects such as cracks, sources or occlusions in a planar or 3-dimensional object, knowing thermal, electrical, or magnetic measurements on the boundary. These defects can be expressed as a lack of harmonicity of the solution to the associated Dirichlet-Neumann problem, thereby posing an inverse potential problem in order to recover them. In 2-D, finding an optimal discretization of the potential in Sobolev norm amounts to solving a best rational approximation problem, and the question arises as to how the location of the singularities of the approximant (*i.e.* its poles) reflects the location of the singularities of the potential (*i.e.* the defects we seek). This is a fairly deep issue in approximation theory, to which Apics contributed convergence results for certain classes of fields expressed as Cauchy integrals over extremal contours for the logarithmic potential , , . Initial schemes to locate cracks or sources *via* rational approximation on planar domains were obtained this way , , . It is remarkable that inverse problems with finitely many sources in 3-D balls, or more general algebraic surfaces, can be approached using these 2-D techniques upon slicing the domain into planar sections , . More precisely, each section cuts out a planar domain, the boundary of which carries data which can be proved to match an algebraic function. The singularities of this algebraic function are not located at the 3-D sources, but are related to them: the section contains a source if and only if some function of the singularities in that section meets a relative extremum. Using bisection it is thus possible to determine an extremal place along all sections parallel to a given plane direction, up to some threshold which has to be chosen small enough that one does not miss a source.
This way, we reduce the original source problem in 3-D to a sequence of inverse problems for poles and branch points in 2-D. This line of work generates a steady research activity within Factas, and again applications are sought in medical imaging and geosciences, see Sections , and .
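The extremum search over parallel sections can be sketched as follows (a hedged toy of our own: the criterion computed from the 2-D singularities is abstracted as a black-box function, assumed unimodal near the source, and the search is a ternary-search variant of the bisection described above):

```python
def extremal_section(criterion, lo, hi, tol=1e-8):
    """Locate the height t in [lo, hi] at which `criterion` (standing in for
    the function of the 2-D singularities computed on the section at height t)
    attains its maximum, assuming unimodality near the source.  Toy stand-in
    for the bisection over parallel plane sections described above."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if criterion(m1) < criterion(m2):
            lo = m1
        else:
            hi = m2
    return 0.5 * (lo + hi)

# Synthetic criterion peaking at the source height t = 0.3 (illustration only).
t_star = extremal_section(lambda t: -(t - 0.3) ** 2, -1.0, 1.0)
print(round(t_star, 6))  # → 0.3
```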

Conjectures may be raised on the behavior of optimal potential discretization in 3-D, but answering them is an ambitious program still in its infancy.

Through contacts with CNES (French space agency), members of the team became involved in identification and tuning of microwave electromagnetic filters used in space telecommunications, see Section . The initial problem was to recover, from band-limited frequency measurements, physical parameters of the device under examination. The latter consists of interconnected dual-mode resonant cavities with negligible loss, hence its scattering matrix is modeled by a

This is where system theory comes into play, through the so-called *realization* process mapping a rational transfer function in the frequency domain to a state-space representation of the underlying system of linear differential equations in the time domain. Specifically, realizing the scattering matrix allows one to construct a virtual electrical network, equivalent to the filter, the parameters of which mediate in between the frequency response and the geometric characteristics of the cavities (*i.e.* the tuning parameters).
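In its simplest single-input single-output form, the realization step can be sketched with the textbook controllable canonical form (a hedged illustration: the function `realize_siso` and the example transfer function are ours, and the constrained realizations needed for actual filter tuning are computed differently, as described below):

```python
import numpy as np

def realize_siso(num, den):
    """Controllable canonical state-space realization (A, B, C, D) of a proper
    transfer function num(s)/den(s); coefficients given from highest to lowest
    degree, den monic.  Textbook sketch, illustration only."""
    n = len(den) - 1
    num = np.concatenate([np.zeros(n + 1 - len(num)), num])  # pad numerator
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)
    A[-1, :] = -np.array(den[1:])[::-1]   # companion-matrix last row
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    D = np.array([[num[0]]])
    C = (np.array(num[1:]) - num[0] * np.array(den[1:]))[::-1].reshape(1, n)
    return A, B, C, D

# H(s) = (s + 2) / (s^2 + 3 s + 2); check the realization at s = 1: H(1) = 0.5.
A, B, C, D = realize_siso([1.0, 2.0], [1.0, 3.0, 2.0])
s = 1.0
H1 = (C @ np.linalg.solve(s * np.eye(2) - A, B) + D)[0, 0]
print(round(H1, 6))  # → 0.5
```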

Hardy spaces provide a framework to transform this ill-posed issue into a series of regularized analytic and meromorphic approximation problems. More precisely, the procedure sketched in Section goes as follows:

infer from the pointwise boundary data in the bandwidth a stable transfer function (*i.e.* one which is holomorphic in the right half-plane), that may be infinite dimensional (numerically: of high degree). This is done by solving a problem analogous to

A stable rational approximation of appropriate degree to the model obtained in the previous step is performed. For this, a descent method on the compact manifold of inner matrices of given size and degree is used, based on an original parametrization of stable transfer functions developed within the team , .

Realizations of this rational approximant are computed. To be useful, they must satisfy certain constraints imposed by the geometry of the device. These constraints typically come from the coupling topology of the equivalent electrical network used to model the filter. This network is composed of resonators, coupled according to some specific graph. This realization step can be recast, under appropriate compatibility conditions , as solving a zero-dimensional multivariate polynomial system. To tackle this problem in practice, we use Gröbner basis techniques and continuation methods which team up in the Dedale-HF software (see Section ).

We recently started a collaboration with the Chinese University of Hong Kong on the topic of frequency-dependent couplings appearing in the equivalent circuits we compute, continuing our work on wide-band design applications.

Factas also investigates issues pertaining to design rather than identification. Given the topology of the filter, a basic problem in this connection is to find the optimal response subject to specifications that bear on rejection, transmission and group delay of the scattering parameters. Generalizing the classical approach based on Chebyshev polynomials for single band filters, we recast the problem of multi-band response synthesis as a generalization of the classical Zolotarev min-max problem for rational functions . Thanks to quasi-convexity, the latter can be solved efficiently using iterative methods relying on linear programming. These were implemented in the software easy-FF (see easy-FF). Currently, the team is engaged in the synthesis of more complex microwave devices like multiplexers and routers, which connect several filters through wave guides. Schur analysis plays an important role here, because scattering matrices of passive systems are of Schur type (*i.e.* contractive in the stability region). The theory originates with the work of I. Schur , who devised a recursive test to check for contractivity of a holomorphic function in the disk. The so-called Schur parameters of a function may be viewed as Taylor coefficients for the hyperbolic metric of the disk, and the fact that Schur functions are contractions for that metric lies at the root of Schur's test. Generalizations thereof turn out to be efficient to parametrize solutions to contractive interpolation problems . Building on this, Factas contributed differential parametrizations (atlases of charts) of lossless matrix functions , , which are fundamental to our rational approximation software RARL2 (see Section ). Schur analysis is also instrumental to approach de-embedding issues, and provides one with considerable insight into the so-called matching problem.
The latter consists in maximizing the power a multiport can pass to a given load, and for reasons of efficiency it is all-pervasive in microwave and electric network design, *e.g.* of antennas, multiplexers, wifi cards and more. It can be viewed as a rational approximation problem in the hyperbolic metric, and the team presently deals with this hot topic using contractive interpolation with constraints on boundary peak points, within the framework of the (defense-funded) ANR Cocoram, see Sections .
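Schur's recursive test mentioned above admits a compact numerical sketch (hedged: a toy acting on finitely many Taylor coefficients of a truncated series, not the team's interpolation machinery):

```python
def schur_parameters(c, tol=1e-12):
    """Schur parameters of a power series f = c[0] + c[1] z + ... (truncated).
    Each step extracts gamma = f(0) and replaces f by
    (f - gamma) / (z (1 - conj(gamma) f)); |gamma| < 1 at every step is
    Schur's contractivity test.  Sketch on truncated series, illustration only."""
    c = [complex(x) for x in c]
    gammas = []
    while c:
        g = c[0]
        gammas.append(g)
        if abs(g) >= 1 - tol:          # boundary or non-contractive: stop
            break
        num = [c[0] - g] + c[1:]       # f - gamma
        den = [1 - g.conjugate() * x for x in c]   # 1 - conj(gamma) f
        q = []
        for k in range(len(num)):      # truncated power-series division
            acc = num[k] - sum(den[j] * q[k - j] for j in range(1, k + 1))
            q.append(acc / den[0])
        c = q[1:]                      # divide by z
    return gammas

# Taylor coefficients of the Blaschke factor (z + 0.5)/(1 + 0.5 z):
gammas = schur_parameters([0.5, 0.75, -0.375, 0.1875])
print([round(abs(g), 6) for g in gammas])  # → [0.5, 1.0]
```

The recursion stops on the second parameter with modulus one, as expected for a finite Blaschke product.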

In recent years, our attention was driven by CNES and UPV (Bilbao) to questions about stability of high-frequency amplifiers. Contrary to previously discussed devices, these are *active* components. The response of an amplifier can be linearized around a set of primary currents and voltages, and then admittances of the corresponding electrical network can be computed at various frequencies, using the so-called harmonic balance method. The initial goal is to check for stability of the linearized model, so as to ascertain existence of a well-defined working state. The network is composed of lumped electrical elements, namely inductors, capacitors, negative *and* positive resistors, transmission lines, and controlled current sources. Our research so far has focused on describing the algebraic structure of admittance functions, so as to set up a function-theoretic framework where the two-step approach outlined in Section can be put to work. The main discovery is that the unstable part of each partial transfer function is rational and can be computed by analytic projection, see Section . We are now starting to investigate the linearized harmonic transfer function around a periodic cycle, to check for stability under not necessarily small inputs. This topic is the subject of the doctoral work of S. Fueyo.

To find an analytic function

Here *a priori* assumptions on the behavior of the model off

To fix terminology, we refer to *bounded extremal problem*. As shown in , , , the solution to this convex infinite-dimensional optimization problem can be obtained when

(

In the case

Various modifications of

The analog of Problem *seek the inner boundary*, knowing it is a level curve of the solution. In this case, the Lagrange parameter indicates how to deform the inner contour in order to improve data fitting. Similar topics are discussed in Section for more general equations than the Laplacian, namely isotropic conductivity equations of the form *i.e.*, varies in the space). Then, the Hardy spaces in Problem

Though originally considered in dimension 2, Problem

When

On the ball, the analog of Problem

When *Hardy-Hodge* decomposition, allowing us to express a *i.e.* those generating no field in the upper half space) .

Just like solving problem

Problem

Companion to problem

Note that

The techniques set forth in this section are used to solve step 2 in Section and they are instrumental to approach inverse boundary value problems for the Poisson equation

We put

A natural generalization of problem

(

Only for

The case where *stable* rational approximant to *not* be unique.

The Miaou project (predecessor of Apics) already designed a dedicated steepest-descent algorithm for the case *local minimum* is guaranteed; the algorithm has evolved over the years and still seems to be the only procedure meeting this property. This gradient algorithm proceeds recursively with respect to *critical points* of lower degree (as is done by the RARL2 software, Section ).

In order to establish global convergence results, Apics has undertaken a deeper study of the number and nature of critical points (local minima, saddle points, ...), in which tools from differential topology and operator theory team up with classical interpolation theory , . Based on this work, uniqueness or asymptotic uniqueness of the approximant was proved for certain classes of functions like transfer functions of relaxation systems (*i.e.* Markov functions) and more generally Cauchy integrals over hyperbolic geodesic arcs . These are the only results of this kind. Research by Apics on this topic remained dormant for a while for reasons of opportunity, but revisiting the work in higher dimension is a worthy and timely endeavor today. Meanwhile, an analog to AAK theory was carried out for
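The Hankel-matrix mechanism behind AAK theory lends itself to a quick numerical illustration (a hedged toy of our own devising): for a function analytic outside the unit disk, Kronecker's theorem says that the Hankel matrix of its Fourier coefficients has rank equal to the degree of the function, and AAK theory identifies its singular values with the best meromorphic approximation errors in the sup norm on the circle.

```python
import numpy as np

# Fourier ("Markov") coefficients of f(z) = 1/(z - 0.5), analytic for |z| > 0.5:
# f(z) = sum_{k>=1} 0.5^(k-1) z^(-k), so h = [1, 0.5, 0.25, ...].
h = 0.5 ** np.arange(20)

# Hankel matrix H[i, j] = h[i + j]; its rank equals the degree of f
# (here one), and its singular values are the AAK approximation errors.
H = np.array([[h[i + j] for j in range(10)] for i in range(10)])
sv = np.linalg.svd(H, compute_uv=False)
print(np.round(sv[:3], 6))  # rank one: a single nonzero singular value
```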

A common feature to the above-mentioned problems is that critical point equations yield non-Hermitian orthogonality relations for the denominator of the approximant. This stresses connections with interpolation, which is a standard way to build approximants, and in many respects best or near-best rational approximation may be regarded as a clever manner to pick interpolation points. This was exploited in , , and is used in an essential manner to assess the behavior of poles of best approximants to functions with branched singularities, which is of particular interest for inverse source problems (*cf.* Sections and ).

In higher dimensions, the analog of Problem

Besides, certain constrained rational approximation problems, of special interest in identification and design of passive systems, arise when putting additional requirements on the approximant, for instance that it should be smaller than 1 in modulus (*i.e.* a Schur function). In particular, Schur interpolation lately received renewed attention from the team, in connection with matching problems. There, interpolation data are subject to a well-known compatibility condition (positive definiteness of the so-called Pick matrix), and the main difficulty is to put interpolation points on the boundary of

Matrix-valued approximation is necessary to handle systems with several inputs and outputs, but it generates additional difficulties as compared to scalar-valued approximation, both theoretically and algorithmically. In the matrix case, the McMillan degree (*i.e.* the degree of a minimal realization in the system-theoretic sense) generalizes the usual notion of degree for rational functions. For instance when poles are simple, the McMillan degree is the sum of the ranks of the residues.
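The residue formula is easy to check directly (a toy example of our own, not team code):

```python
import numpy as np

# H(z) = R1/(z - p1) + R2/(z - p2) with simple poles: the McMillan degree is
# rank(R1) + rank(R2).  Toy 2x2 example, illustration only.
R1 = np.array([[1.0, 2.0], [2.0, 4.0]])   # rank 1
R2 = np.array([[1.0, 0.0], [0.0, 1.0]])   # rank 2
mcmillan = np.linalg.matrix_rank(R1) + np.linalg.matrix_rank(R2)
print(mcmillan)  # → 3
```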

The basic problem that we consider now goes as follows: *let $\mathcal{F}\in {\left({H}^{2}\right)}^{m\times l}$ and $n$ an integer; find a rational matrix of size $m\times l$ without poles in the unit disk and of McMillan degree at most $n$ which is nearest possible to $\mathcal{F}$ in ${\left({H}^{2}\right)}^{m\times l}$.* Here the

The scalar approximation algorithm derived in and mentioned in Section generalizes to the matrix-valued situation . The first difficulty here is to parametrize inner matrices (*i.e.* matrix-valued functions analytic in the unit disk and unitary on the unit circle) of given McMillan degree

Difficulties relative to multiple local minima of course arise in the matrix-valued case as well, and deriving criteria that guarantee uniqueness is even more difficult than in the scalar case. The case of rational functions of degree

Let us stress that RARL2 seems to be the only algorithm handling rational approximation in the matrix case that demonstrably converges to a local minimum while meeting stability constraints on the approximant. It is still a linchpin of many developments by Factas on frequency optimization and design.

We refer here to the behavior of poles of best meromorphic approximants, in the

Generally speaking in approximation theory, assessing the behavior of poles of rational approximants is essential to obtain error rates as the degree grows large, and to tackle constructive issues like uniqueness. However, as explained in Section , the original twist by Apics, now Factas, is to consider this issue also as a means to extract information on singularities of the solution to a Dirichlet-Neumann problem. The general theme is thus: *how do the singularities of the approximant reflect those of the approximated function?* This approach to inverse problems for the 2-D Laplacian turns out to be attractive when singularities are zero- or one-dimensional (see Section ). It can be used as a computationally cheap initial condition for more precise but much heavier numerical optimizations which often do not even converge unless properly initialized. As regards crack detection or source recovery, this approach boils down to analyzing the behavior of best meromorphic approximants of given pole cardinality to a function with branch points, which is the prototype of a polar singular set. For piecewise analytic cracks, or in the case of sources, we were able to prove (, , ) that the poles of the approximants accumulate, as the degree grows large, to some extremal cut of minimum weighted logarithmic capacity connecting the singular points of the crack, or the sources . Moreover, the asymptotic density of the poles turns out to be the Green equilibrium distribution on this cut in

The case of two-dimensional singularities is still an outstanding open problem.

It is remarkable that inverse source problems inside a sphere or an ellipsoid in 3-D can be approached with such 2-D techniques, as applied to planar sections, see Section . The technique is implemented in the software FindSources3D, see Section .

In addition to the above-mentioned research activities, Factas develops and maintains a number of long-term software tools that either implement and illustrate the effectiveness of the algorithms theoretically developed by the team, or serve as tools for further research by team members. We briefly present the most important of them.

Keywords: Electrical circuit - Stability

Functional Description: To minimise prototyping costs, the design of analog circuits is performed using computer-aided design tools which simulate the circuit's response as accurately as possible.

Some commonly used simulation tools do not impose stability, which can result in costly errors when the prototype turns out to be unstable. A thorough stability analysis is therefore a very important step in circuit design. This is where pisa is used.

pisa is a Matlab toolbox that allows designers of analog electronic circuits to determine the stability of their circuits in the simulator. It analyses the impedance presented by a circuit to determine the circuit's stability. When an instability is detected, pisa can estimate the location of the unstable poles to help designers fix their stability issues.

Release Functional Description: First version

Authors: Adam Cooman, David Martinez Martinez, Fabien Seyfert and Martine Olivi

Contact: Fabien Seyfert

Publications: Model-Free Closed-Loop Stability Analysis: A Linear Functional Approach - On Transfer Functions Realizable with Active Electronic Components

Scientific Description

Dedale-HF consists of two parts: a database of coupling topologies as well as a dedicated predictor-corrector code. Roughly speaking, each reference file of the database contains, for a given coupling topology, the complete solution to the coupling matrix synthesis problem (C.M. problem for short) associated to particular filtering characteristics. The latter is then used as a starting point for a predictor-corrector integration method that computes the solution to the C.M. problem corresponding to the user-specified filter characteristics. The reference files are computed off-line using Gröbner basis techniques or numerical techniques based on the exploration of a monodromy group. The use of such continuation techniques, combined with an efficient implementation of the integrator, drastically reduces the computational time.

Dedale-HF has been licensed to, and is currently used by, TAS-Espana.

Functional Description

Dedale-HF is a software tool dedicated to solving the coupling matrix synthesis problem exhaustively, and in reasonable time, for the filtering community. Given a coupling topology, the coupling matrix synthesis problem consists in finding all possible electromagnetic coupling values between resonators that yield a realization of given filter characteristics. Solving the latter is crucial during the design step of a filter in order to derive its physical dimensions, as well as during the tuning process where coupling values need to be extracted from frequency measurements.

Participant: Fabien Seyfert

Contact: Fabien Seyfert

Keywords: Health - Neuroimaging - Visualization - Compilers - Medical Image Processing

FindSources3D is a software program dedicated to the resolution of inverse source problems in electroencephalography (EEG). From pointwise measurements of the electrical potential taken by electrodes on the scalp, FindSources3D estimates pointwise dipolar current sources within the brain in a spherical model.

After a first data transmission (“cortical mapping”) step, it makes use of best rational approximation on 2-D planar cross-sections and of the software RARL2 in order to locate singularities. From those planar singularities, the 3-D sources are estimated in a last step, see .

The present version of FindSources3D (called FindSources3D-bolis) provides a modular, ergonomic, accessible and interactive platform, with a convenient graphical interface for EEG medical imaging. Modularity is now ensured (using the tools dtk, Qt, with compiled Matlab libraries). It offers a detailed visualization of data and tuning parameters, of processing steps, and of the computed results (using VTK).

A new version is being developed that will incorporate a first Singular Value Decomposition (SVD) step in order to be able to handle time dependent data and to find the corresponding principal static components.

Participants: Juliette Leblond, Maureen Clerc (team Athena, Inria Sophia), Jean-Paul Marmorat, Théodore Papadopoulo (team Athena).

Contact: Juliette Leblond

URL: http://

Scientific Description

For the matrix-valued rational approximation step, Presto-HF relies on RARL2. Constrained realizations are computed using the Dedale-HF software. As a toolbox, Presto-HF has a modular structure, which allows one for example to include some building blocks in an already existing software.

The delay compensation algorithm is based on the following assumption: far off the pass-band, one can reasonably expect a good approximation of the rational components of S11 and S22 by the first few terms of their Taylor expansion at infinity, a small-degree polynomial in 1/s. Using this idea, a sequence of quadratic convex optimization problems is solved in order to obtain appropriate compensations. In order to check the previous assumption, one has to measure the filter on a larger band, typically three times the pass band.

This toolbox has been licensed to (and is currently used by) Thales Alenia Space in Toulouse and Madrid, Thales airborne systems and Flextronics (two licenses). Xlim (University of Limoges) is a heavy user of Presto-HF among the academic filtering community and some free license agreements have been granted to the microwave department of the University of Erlangen (Germany) and the Royal Military College (Kingston, Canada).

Functional Description

Presto-HF is a toolbox dedicated to low-pass parameter identification for microwave filters. In order to allow the industrial transfer of our methods, a Matlab-based toolbox has been developed, dedicated to the problem of identification of low-pass microwave filter parameters. It allows one to run the following algorithmic steps, either individually or in a single stroke:

• Determination of delay components caused by the access devices (automatic reference plane adjustment),

• Automatic determination of an analytic completion, bounded in modulus for each channel,

• Rational approximation of fixed McMillan degree,

• Determination of a constrained realization.

Participants: Fabien Seyfert, Jean-Paul Marmorat and Martine Olivi

Contact: Fabien Seyfert

Réalisation interne et Approximation Rationnelle L2

Scientific Description

The method is a steepest-descent algorithm. A parametrization of MIMO systems is used, which ensures that the stability constraint on the approximant is met. The implementation, in Matlab, is based on state-space representations.

RARL2 performs the rational approximation step in the software tools PRESTO-HF and FindSources3D. It is distributed under a particular license, allowing unlimited usage for academic research purposes. It was released to the universities of Delft and Maastricht (the Netherlands), Cork (Ireland), Brussels (Belgium), Macao (China) and BITS-Pilani Hyderabad Campus (India).

Functional Description

RARL2 is a software for rational approximation. It computes a stable rational L2-approximation of specified order to a given L2-stable (L2 on the unit circle, analytic in the complement of the unit disk) matrix-valued function. This can be the transfer function of a multivariable discrete-time stable system. RARL2 takes as input either:

• its internal realization,

• its first N Fourier coefficients,

• discretized (uniformly distributed) values on the circle. In this case, a least-squares criterion is used instead of the L2 norm.

It thus performs model reduction in the first or the second case, and identification from frequency data in the third. For band-limited frequency data, it may be necessary to infer the behavior of the system outside the bandwidth before performing rational approximation.
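Passing from the third kind of input to the second is a discrete Fourier transform; a minimal sketch (toy function of our own, analytic outside the unit disk as expected here):

```python
import numpy as np

# Samples of f(z) = 1/z + 0.5/z^2 (analytic outside the unit disk)
# at N uniformly distributed points on the circle.
N = 64
z = np.exp(2j * np.pi * np.arange(N) / N)
samples = 1.0 / z + 0.5 / z**2

# The DFT recovers the Fourier coefficients: the coefficient of z^{-k}
# sits at index N - k (equivalently index -k) of fft(samples)/N.
coeffs = np.fft.fft(samples) / N
print(np.round(coeffs[-1].real, 6), np.round(coeffs[-2].real, 6))  # → 1.0 0.5
```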

An appropriate Möbius transformation makes it possible to use the software for continuous-time systems as well.
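The Möbius transformation in question can be sketched as follows (the specific normalization below, a Cayley map, is our assumption; RARL2 may use a different one): it carries the left half-plane, the continuous-time stability region, onto the unit disk, the discrete-time one, so a disk-based tool applies to continuous-time systems.

```python
import numpy as np

# Cayley map: left half-plane -> unit disk.  A hypothetical normalization,
# for illustration only.
def cayley(s):
    return (1 + s) / (1 - s)

stable_ct_poles = np.array([-1.0, -0.5 + 2j, -0.5 - 2j])
dt_poles = cayley(stable_ct_poles)
print(np.all(np.abs(dt_poles) < 1))  # True: stability is preserved
```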

Participants: Jean-Paul Marmorat and Martine Olivi

Contact: Martine Olivi

Sollya

Keywords: Numerical algorithm - Supremum norm - Curve plotting - Remez algorithm - Code generator - Proof synthesis

Functional Description

Sollya is an interactive tool where the developers of mathematical floating-point libraries (libm) can experiment before actually developing code. The environment is safe with respect to floating-point errors, i.e. the user precisely knows when rounding errors or approximation errors happen, and rigorous bounds are always provided for these errors.

Among other features, it offers a fast Remez algorithm for computing polynomial approximations of real functions, as well as an algorithm for finding good polynomial approximants with floating-point coefficients to any real function. It also provides algorithms for the certification of numerical codes, such as Taylor models, interval arithmetic and certified supremum norms.
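For flavor, here is a plain Python sketch of the kind of approximation Sollya automates (Sollya additionally certifies the error bounds, which this sketch does not): a near-minimax polynomial approximation of exp on [-1, 1] obtained by interpolation at Chebyshev nodes. Sollya's Remez algorithm refines such an initial guess to the true minimax polynomial.

```python
import numpy as np

# Near-minimax approximation of exp by interpolation at Chebyshev nodes.
# This is an uncertified illustration, not Sollya's algorithm.
deg = 8
k = np.arange(deg + 1)
nodes = np.cos((2 * k + 1) * np.pi / (2 * (deg + 1)))   # Chebyshev nodes
p = np.polynomial.polynomial.polyfit(nodes, np.exp(nodes), deg)

grid = np.linspace(-1, 1, 10001)
err = np.max(np.abs(np.exp(grid) - np.polynomial.polynomial.polyval(grid, p)))
print(err < 1e-7)   # True: the degree-8 error is a few 1e-8
```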

It is available as a free software under the CeCILL-C license.

Participants: Sylvain Chevillard, Christoph Lauter, Mioara Joldes and Nicolas Jourdan

Partners: CNRS - ENS Lyon - UCBL Lyon 1

Contact: Sylvain Chevillard

Application domains are naturally linked to the problems described in Sections and . By and large, they split into a systems-and-circuits part and an inverse-source-and-boundary-problems part, united under a common umbrella of function-theoretic techniques as described in Section .

Generally speaking, inverse potential problems, similar to the one appearing in Section , occur naturally in connection with systems governed by Maxwell's equations in the quasi-static approximation regime. In particular, they arise in magnetic reconstruction issues. A specific application is to geophysics, which led us to form the Inria Associate Team Impinge (Inverse Magnetization Problems IN GEosciences) together with MIT and Vanderbilt University. Though this Associate Team reached the end of its term in 2018, the collaborations it generated are still active. A joint work with Cerege (CNRS, Aix-en-Provence), in the framework of the ANR project MagLune, completes this picture, see Sections , .

To set up the context, recall that the Earth's geomagnetic field is generated by convection of the liquid metallic core (geodynamo) and that rocks become magnetized by the ambient field as they are formed or after subsequent alteration. Their remanent magnetization provides records of past variations of the geodynamo, which are used to study important processes in Earth sciences like the motion of tectonic plates and geomagnetic reversals. Rocks from Mars, the Moon, and asteroids also contain remanent magnetization, which indicates the past presence of core dynamos. Magnetization in meteorites may even record fields produced by the young Sun and the protoplanetary disk, which may have played a key role in solar system formation.

For a long time, paleomagnetic techniques were only capable of analyzing bulk samples and of computing their net magnetic moment. The development of SQUID microscopes has recently extended the spatial resolution to sub-millimeter scales, raising new physical and algorithmic challenges. The associate team Impinge aimed at tackling them, experimenting with the SQUID microscope set up in the Paleomagnetism Laboratory of the Department of Earth, Atmospheric and Planetary Sciences at MIT. Typically, pieces of rock are sanded down to a thin slab, and the magnetization has to be recovered from the field measured on a planar region at small distance from the slab.
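The forward model underlying such measurements can be sketched as a sum of dipolar contributions from the discretized slab. The following toy computation (values and discretization are ours, not the actual experimental setup) checks the field of a single vertical dipole against the closed-form on-axis expression.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (SI)

def dipole_bz(r_obs, r_dip, m):
    """z-component of the field of a point dipole m located at r_dip."""
    d = r_obs - r_dip
    r = np.linalg.norm(d)
    return MU0 / (4 * np.pi) * (3 * d[2] * np.dot(m, d) / r**5 - m[2] / r**3)

# Toy values: 1 mm standoff, vertical dipole of moment 1e-12 A.m^2.
h = 1e-3
m = np.array([0.0, 0.0, 1e-12])
bz = dipole_bz(np.array([0.0, 0.0, h]), np.zeros(3), m)

# On-axis field of a vertical dipole: Bz = mu0 * m / (2 * pi * h^3)
print(np.isclose(bz, MU0 * m[2] / (2 * np.pi * h**3)))  # True
```

Summing such contributions over a grid of magnetized cells gives the (linear) forward operator whose inversion from planar field maps is the problem at hand.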

Mathematically speaking, both inverse source problems for EEG from Section and inverse magnetization problems described presently amount to recover the (3-D valued) quantity

outside the volume

Another timely instance of inverse magnetization problems lies with geomagnetism. Satellites orbiting around the Earth measure the magnetic field at many points, and nowadays it is a challenge to extract global information from those measurements. In collaboration with C. Gerhards (Geomathematics and Geoinformatics Group, Technische Universität Bergakademie Freiberg, Germany), we started to work on the problem of separating the magnetic field due to the magnetization of the globe's crust from the magnetic field due to convection in the liquid metallic core. The techniques involved are variants, in a spherical context, of those developed within the Impinge associate team for paleomagnetism, see Section .

Solving overdetermined Cauchy problems for the Laplace equation on a spherical layer (in 3-D) in order to extrapolate incomplete data (see Section ) is a necessary ingredient of the team's approach to inverse source problems, in particular for applications to EEG, see . Indeed, the latter involves propagating the initial conditions through several layers of different conductivities, from the boundary shell down to the center of the domain where the singularities (*i.e.* the sources) lie. Once propagated to the innermost sphere, it turns out that traces of the boundary data on 2-D cross sections coincide with analytic functions with branched singularities in the slicing plane , . The singularities are related to the actual location of the sources: namely, their moduli reach in turn a maximum when the slicing plane contains one of the sources. Hence we are back to the 2-D framework of Section , and recovering these singularities can be performed *via* best rational approximation. The goal is to produce a fast and sufficiently accurate initial guess on the number and location of the sources in order to run heavier descent algorithms on the direct problem, which are more precise but computationally costly and often fail to converge if not properly initialized. Our belief is that such a localization process can add a valuable geometric piece of information to the standard temporal analysis of EEG signal records.
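The way poles of rational approximants line up on cuts joining branch points can be illustrated on a toy example. Diagonal Padé approximation to a Stieltjes function, used below, is not the team's best rational approximation, but it exhibits the same pole-attraction phenomenon: all poles fall on the branch cut of the approximated function.

```python
import numpy as np

# f(x) = (1+x)^{-1/2} has a branch point at x = -1 and is analytic off the
# cut (-inf, -1].  Poles of its diagonal Pade approximants are known to lie
# on that cut; a toy stand-in for the pole behavior discussed above.
n = 4
c = np.empty(2 * n + 1)          # Taylor coefficients of (1+x)^{-1/2}
c[0] = 1.0
for k in range(1, 2 * n + 1):
    c[k] = c[k - 1] * (-0.5 - (k - 1)) / k

# Denominator q with q(0) = 1: sum_j b_j c_{k-j} = 0 for k = n+1..2n.
M = np.array([[c[k - j] for j in range(1, n + 1)] for k in range(n + 1, 2 * n + 1)])
b = np.linalg.solve(M, -c[n + 1:2 * n + 1])
q = np.concatenate(([1.0], b))   # ascending coefficients of q
poles = np.roots(q[::-1])        # np.roots wants highest degree first
print(np.all(np.abs(poles.imag) < 1e-8) and np.all(poles.real < -1))  # True
```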

Numerical experiments obtained with our software FindSources3D give very good results on simulated data and we are now engaged in the process of handling real experimental data, simultaneously recorded by EEG and MEG devices, in collaboration with our partners at INS, hospital la Timone, Marseille (see Section ).

Furthermore, another approach is being studied for EEG, that consists in regularizing the inverse source problem by a total variation constraint on the source term (a measure), added to the quadratic data approximation criterion. It is similar to the path that is taken for inverse magnetization problems (see Sections and ), and it presently focuses on surface-distributed models.

This is joint work with Stéphane Bila (Xlim, Limoges).

One of the best training grounds for function-theoretic applications by the team is the identification and design of physical systems whose performance is assessed frequency-wise. This is the case of electromagnetic resonant systems which are of common use in telecommunications.

In space telecommunications (satellite transmissions), constraints specific to on-board technology lead to the use of filters with resonant cavities in the microwave range. These filters serve multiplexing purposes (before or after amplification), and consist of a sequence of cylindrical hollow bodies, magnetically coupled by irises (orthogonal double slits). The electromagnetic wave that traverses the cavities satisfies Maxwell's equations, which force the tangential electric field along the body of the cavity to be zero. A deeper study of the Helmholtz equation shows that an essentially discrete set of wave vectors is selected. In the considered frequency range, the electric field in each cavity can be decomposed along two orthogonal modes, perpendicular to the axis of the cavity (other modes are far off in the frequency domain, and their influence can be neglected).

Near the resonance frequency, a good approximation to the Helmholtz equations is given by a second order differential equation. Thus, one obtains an electrical model of the filter as a sequence of electrically-coupled resonant circuits, each circuit being modeled by two resonators, one per mode, the resonance frequency of which represents the frequency of a mode, and whose resistance accounts for electric losses (surface currents) in the cavities.

This way, the filter can be seen as a two-port network, when plugged onto a resistor at one end and fed with some potential at the other end. One is now interested in the power which is transmitted and reflected. This leads one to define a scattering matrix

In fact, resonance is not studied via the electrical model, but via a low-pass equivalent circuit obtained upon linearizing near the central frequency, which is no longer conjugate symmetric (*i.e.* the underlying system may no longer have real coefficients) but whose degree is divided by 2 (8 in the example).
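For readers unfamiliar with scattering matrices, a textbook low-pass ladder prototype shows how S-parameters arise from a lumped circuit and satisfy the lossless energy balance |S11|² + |S21|² = 1. The Butterworth design below is a generic illustration of ours, not the team's low-pass equivalent circuit of a cavity filter.

```python
import numpy as np

# 3rd-order Butterworth low-pass ladder prototype (g-values 1, 2, 1, unit
# source and load resistances), analyzed through ABCD (chain) matrices.
def s_params(w, g=(1.0, 2.0, 1.0)):
    T = np.eye(2, dtype=complex)
    for i, gi in enumerate(g):
        if i % 2 == 0:   # shunt capacitor
            T = T @ np.array([[1, 0], [1j * w * gi, 1]])
        else:            # series inductor
            T = T @ np.array([[1, 1j * w * gi], [0, 1]])
    A, B, C, D = T.ravel()
    s21 = 2 / (A + B + C + D)            # unit terminations
    s11 = (A + B - C - D) / (A + B + C + D)
    return s11, s21

s11, s21 = s_params(1.0)                 # at the cutoff frequency
print(abs(s21) ** 2)                     # ~0.5: the half-power point
print(abs(s11) ** 2 + abs(s21) ** 2)     # ~1.0: lossless network
```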

In short, the strategy for identification is as follows:

measuring the scattering matrix of the filter near the optimal frequency over twice the pass band (which is 80 MHz in the example).

Solving bounded extremal problems for the transmission and the reflection (the modulus of the response being respectively close to 0 and 1 outside the measurement interval, cf. Section ) in order to get a model for the scattering matrix as an analytic matrix-valued function. This provides us with a scattering matrix known to be close to a rational matrix of order roughly 1/4 of the number of data points.

Approximating this scattering matrix by a true rational transfer-function of appropriate degree (8 in this example) via the Endymion or RARL2 software (cf. Section ).

A state space realization of

Finally, one builds a realization of the approximant and looks for a change of variables that eliminates non-physical couplings. This is obtained by using algebraic solvers and continuation algorithms on the group of orthogonal complex matrices (symmetry forces this type of transformation).

The final approximation is of high quality. This can be interpreted as a confirmation of the linearity assumption on the system: the relative

The above considerations are valid for a large class of filters. These developments have also been used for the design of non-symmetric filters, which are useful for the synthesis of repeating devices.

The team further investigates problems relative to the design of optimal responses for microwave devices. The resolution of quasi-convex Zolotarev problems was proposed, in order to derive guaranteed optimal multi-band filter responses subject to modulus constraints . This generalizes the classical single-band design techniques based on Chebyshev polynomials and elliptic functions. The approach relies on the fact that the modulus of the scattering parameter

The filtering function appears to be the ratio of two polynomials

The relative simplicity of the derivation of a filter's response under modulus constraints owes much to the possibility of forgetting about Feldtkeller's equation and of expressing all design constraints in terms of the filtering function. This is no longer the case when considering the synthesis
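The classical single-band baseline that the Zolotarev approach generalizes can be sketched with the Chebyshev polynomial T_n: its equioscillation between -1 and 1 in the passband, and fast growth outside, is what produces the familiar equiripple filter response (this is textbook material, not the team's multi-band construction).

```python
import numpy as np

# Chebyshev filtering function F_n = T_n: bounded by 1 in the passband
# [-1, 1], growing like cosh(n * arccosh(w)) outside, so that
# |S11|^2 = F_n^2 / (1 + F_n^2) is equiripple in band (epsilon = 1 here).
n = 8
T = np.polynomial.chebyshev.Chebyshev.basis(n)
band = np.linspace(-1, 1, 2001)
print(np.max(np.abs(T(band))))   # ~1.0: bounded by 1 in the passband
print(T(1.1) > 10)               # True: steep growth just outside
```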

Through contacts with CNES (Toulouse) and UPV (Bilbao), Apics additionally got involved in the design of amplifiers which, unlike filters, are active devices. A prominent issue here is stability. Twenty years back, it was not possible to simulate unstable responses, and only after building a device could one detect instability. The advent of so-called *harmonic balance* techniques, which compute steady-state responses of linear elements in the frequency domain and look for a periodic state in the time domain of a network connecting these linear elements *via* static non-linearities, made it possible to compute the harmonic response of a (possibly nonlinear and unstable) device . This has had tremendous impact on design, and there is a growing demand for software analyzers. The team is also becoming active in this area.

In this connection, there are two types of stability involved. The first is stability of a fixed point around which the linearized transfer function accounts for small signal amplification. The second is stability of a limit cycle which is reached when the input signal is no longer small and truly nonlinear amplification is attained (*e.g.* because of saturation). Applications by the team so far have been concerned with the first type of stability, and emphasis is put on defining and extracting the “unstable part” of the response, see Section . The stability check for limit cycles has made important theoretical advances, and numerical algorithms are now under investigation.

A contract was signed with the French small and midsize business (SMB) Inoveos for the realization of a robotic prototype for the mass tuning of microwave devices. In addition to Inria, this project includes Xlim (University of Limoges) and the engineering center Cisteme https://

Improvement of the computational efficiency of our circuit methods, in order to be compatible with real-time filter measurement techniques. Typically, a circuit extraction needs to be performed in less than 1 second when dealing with a filter of order 10.

Handling the ambiguity resulting from the use of coupling topologies admitting multiple solutions, which yield several equivalent circuits for a single DUT (device under tuning).

The overall goal here is to determine magnetic properties of rock samples (*e.g.* meteorites or stalactites) from weak field measurements close to the sample, which can nowadays be obtained using SQUIDs (superconducting quantum interference devices). Depending on the geometry of the rock sample, the magnetization distribution can either be considered to lie in a plane (thin sample) or in a parallelepiped of thickness

We pursued our investigation of the recovery of
magnetizations modeled by signed measures on thin samples, and
we singled out an interesting class that we call slender samples. These
are sets of zero measure in

We also continued investigating the recovery of the moment of a magnetization, an important physical quantity which is in principle easier to reconstruct than the full magnetization because it is simply a vector in 3-D space (moreover, magnetizations that produce the zero field also have zero moment). For the case of thin samples, we published an article reporting the construction of linear estimators for the moment from the field, based on the solution of certain bounded extremal problems in the range of the adjoint of the forward operator .
In previous years, we also set up other linear estimators based on asymptotic results. These estimators are not limited to thin samples and can in principle estimate the net moment of 3-D samples, provided that the dimensions of the sample are small with respect to the measurement area. Numerical experiments confirm that linear estimators (of both kinds) make essential use of field values taken at the boundary of the measurement area, and are easily blurred by noise. We experimentally confirmed this sensitivity on a rather simple case: a small spherule was magnetized in a controlled way by our partners at MIT, and its net moment was measured by a classical magnetometer. The spherule was then measured with the SQUID microscope, with several choices of important parameters (height of the sensor with respect to the spherule, sensitivity of the instrument, size of the 2-D rectangle on which measurements are performed, size of the sample step). We applied our (asymptotics-based) linear estimators to these experimental maps, and the estimates turn out to be clearly affected, especially when the data at the edges of the map are involved. The nature of the noise due to the microscope itself (electronic and quantization noise) might play an important role, as it is known to be non-white and can therefore affect our methods, which integrate it. Consequently, we now envisage the possibility of modeling the structure of the noise in order to pre-process the data.

Finally, we considered a simplified 2-D setup for magnetizations and magnetic potentials (of which the magnetic field is the gradient). When both the sample and the measurement set are parallel intervals, we set up best approximation issues related to inverse recovery and relevant BEP problems in Hardy classes of holomorphic functions, see Section and ; this is joint work with E. Pozzi (Department of Mathematics and Statistics, St Louis Univ., St Louis, Missouri, USA). Note that, in the present case, the criterion no longer acts on the boundary of the holomorphy domain (namely, the upper half-plane), but on a strict subset thereof, while the constraint acts on the support of the approximating function. Both involve functions in the Hilbert Hardy space of the upper half-plane.

The team Factas was a partner of the ANR project MagLune on Lunar magnetism, headed by the Geophysics and Planetology Department of Cerege, CNRS, Aix-en-Provence, which ended this year (see Section ). Recent studies lead geoscientists to think that the Moon used to have a magnetic dynamo for a while. However, the exact process that triggered and fed this dynamo is still not understood, much less why it stopped. The overall goal of the project was to devise models to explain how this dynamo phenomenon was possible on the Moon.

The geophysicists from Cerege went a couple of times to NASA to perform measurements on a few hundred samples brought back from the Moon by the Apollo missions. The samples are kept inside bags with a protective atmosphere, and geophysicists are allowed neither to open the bags nor to take samples out of NASA facilities. Moreover, the process must be carried out efficiently, as a fee is due to NASA for the time spent handling these Moon samples. Therefore, measurements were performed with a specific magnetometer designed by our colleagues from Cerege. This device measures the components of the magnetic field produced by the sample at a discrete set of points located on circles belonging to three cylinders (see Figure ). The objective of Factas is to enhance the numerical efficiency of the post-processing of data obtained with this magnetometer.

Under the hypothesis that the field can be well explained by a single magnetic pointwise dipole, and using ideas similar to those underlying the FindSources3D tool (see Sections and ), we try to recover the position and the moment of the dipole from the available measurements. This work, which is still ongoing, constitutes the topic of the PhD thesis of K. Mavreas, whose defense is scheduled for January 31, 2020. In a given cylinder, using the associated cylindrical coordinate system, recovering the position of the dipole boils down to determining its height

This year has been mostly devoted to running numerical experiments on synthetic examples. The first important observation is that the minimization criterion that we use to recover

These observations are somewhat bad news, as the method we propose is based on recovering the position of the dipole by using the values
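Setting these difficulties aside, the basic dipole-fitting idea can be sketched on a toy configuration (geometry and values are ours, not the Cerege magnetometer's): a vertical point dipole sits on the axis of a cylinder at unknown height, the vertical field is sampled on circles of the cylinder at several heights, and the height is recovered by a 1-D least-squares scan.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def bz_on_circle(h, z, R, m=1.0):
    """Bz of a vertical dipole at axial height h, seen on the circle of
    radius R at height z (axisymmetric, so one value per circle)."""
    d2 = R**2 + (z - h) ** 2
    return MU0 * m / (4 * np.pi) * (3 * (z - h) ** 2 / d2**2.5 - 1.0 / d2**1.5)

R, h_true = 1.0, 0.3
z_meas = np.linspace(-1.0, 1.0, 21)
data = bz_on_circle(h_true, z_meas, R)

# Brute-force 1-D least-squares scan over candidate heights.
candidates = np.linspace(-0.9, 0.9, 1801)
errs = [np.sum((bz_on_circle(h, z_meas, R) - data) ** 2) for h in candidates]
h_hat = candidates[int(np.argmin(errs))]
print(abs(h_hat - h_true) < 1e-3)   # True
```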

In 3-D, functional or clinically active regions in the cortex are often modeled by pointwise sources that have to be localized from measurements, taken by electrodes on the scalp, of an electrical potential satisfying a Laplace equation (EEG, electroencephalography). In the works , on the behavior of poles in best rational approximants of fixed degree to functions with branch points, it was shown how to proceed via best rational approximation on a sequence of 2-D disks cut along the inner sphere, for the case where there are finitely many sources (see Section ).

It appears that, in the rational approximation step, *multiple* poles possess a nice behavior with respect to branched singularities. This is due to the physical assumptions underlying the model of dipolar current sources: for EEG data, which correspond to measurements of the electrical potential, one should consider *triple* poles; this will also be the case for MEG (magneto-encephalography) data. However, for (magnetic) field data produced by magnetic dipolar sources, as in Section , one should consider poles of order five. Though numerically observed in , there is no mathematical justification so far of why multiple poles generate such strong accumulation of the poles of the approximants (see Section ). This intriguing property, however, definitely helps source recovery and will be the topic of further study. It is used in order to automatically estimate the “most plausible” number of sources (numerically: up to 3, at the moment).

This year, we started considering a different class of models, not necessarily dipolar, and related estimation algorithms. Such models may be supported on the surface of the cortex or in the volume of the encephalon. We represent sources by vector-valued measures, and in order to favor sparsity in this infinite-dimensional setting we use a TV (i.e. total variation) regularization term as in Section . The approach follows that of and is implemented through two different algorithms, whose convergence properties are currently being studied. Tests on synthetic data from a few dipolar sources provide results of different qualities that need to be better understood. In particular, a weight is being added in the TV term in order to better identify deep sources. This is the topic of the starting PhD research of P. Asensio and M. Nemaire. Ultimately, the results will be compared to those of FS3D and other available software tools.
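A discrete caricature of the TV-regularized criterion may help fix ideas: when the source measure is atomic on a fixed grid, its total variation reduces to the l1 norm of the amplitudes, and the resulting LASSO-type problem can be solved by proximal gradient iterations (ISTA). The forward matrix below is a random stand-in, not an actual EEG lead field, and this is not one of the two algorithms mentioned above.

```python
import numpy as np

# min_x ||A x - b||^2 + lam * ||x||_1 via ISTA (proximal gradient).
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 120)) / np.sqrt(60)   # stand-in forward operator
x_true = np.zeros(120)
x_true[[10, 47, 95]] = [2.0, -1.5, 1.0]            # three point sources
b = A @ x_true

lam = 0.05
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x = np.zeros(120)
for _ in range(2000):
    v = x - (A.T @ (A @ x - b)) / L    # gradient step on the quadratic term
    x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # soft-thresholding

support = np.flatnonzero(np.abs(x) > 0.2)
print(support)   # indices of recovered sources (should match 10, 47, 95)
```

The l1 penalty promotes sparsity exactly as the TV term does for measures; the weighting mentioned above would amount to index-dependent thresholds.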

In the context of David Martinez Martinez's PhD, funded partly by CNES, the synthesis of multiplexer responses was considered using multipoint matching techniques. Indeed, synthesizing the response of a multiplexer composed of a set of channel filters connected via a common manifold junction to a common port can be seen as a matrix version of our multipoint matching result for filters . In short, a simultaneous matching solution is sought, where each channel filter matches the load it is connected to at specified matching frequencies. The difficulty here is that the load seen by each filter depends explicitly on the responses of the other filters through the common junction: the multiplexer synthesis problem is therefore, in general, strictly harder than the filter multipoint matching problem, and cannot be solved by sequentially solving independent “scalar” problems. A notable exception to this statement is obtained when a totally decoupling common junction is considered. This somewhat artificial situation was taken as the starting point of a continuation algorithm, during which the decoupling junction response is moved step by step, along a linear trajectory, towards the target junction, while the simultaneous matching problem is solved all along via a differential predictor-corrector method. While the branch-point-type “accidents” that can occur during this procedure are not classified yet, one major obstruction to the continuation process is the occurrence of manifold peaks. The latter are due to resonances occurring in the manifold junction and yield total reflection, at some frequencies, at the channel ports. When these coincide with the matching frequencies of a particular channel filter, the simultaneous matching problem has no solution, and the continuation algorithm fails irremediably.

We therefore gave a full characterization of these manifold peaks and designed a heuristic approach to avoid their appearance during the continuation process. We showed that they only depend on the out-of-band responses of the channel filters, and can, to first approximation, be considered constant along the continuation process and estimated by a full-wave simulation of each channel filter. This is then used within a triangular adjustment procedure that looks for possible manifold length adjustments (within the channel filters, and between the channel filters and the manifold junction) guaranteeing the absence of manifold peaks within the band of each channel filter. This procedure gives the designer important information about the feasibility of an effective multiplexer response by means of a given manifold T-junction, before any channel filter optimization; its details are given in , and were presented at EuMC 2019. In connection with the previously described continuation procedure, it was used to design a compact triplexer, based on frequency specifications considered “hard to fulfill” and furnished by CNES. The triplexer was then realized using 3-D printing techniques at Xlim (S. Bila and O. Tantot), our long-standing academic partners on these topics (see Figure ). This work is part of the PhD thesis defended by David Martinez Martinez at the end of June.
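The predictor-corrector continuation scheme can be sketched on a scalar toy problem (ours, not the multiplexer equations): track the solution of f(x, t) = 0 as the parameter t moves from the easy, decoupled case (t = 0) to the target (t = 1), with an Euler predictor along the implicit derivative and Newton corrector steps.

```python
import math

# Toy continuation: f(x, t) = x^2 - (1 + t), with known start x(0) = 1
# and target x(1) = sqrt(2).
def f(x, t):   return x**2 - (1.0 + t)
def fx(x, t):  return 2.0 * x            # df/dx
def ft(x, t):  return -1.0               # df/dt

x, t, dt = 1.0, 0.0, 0.05
while t < 1.0 - 1e-12:
    x -= dt * ft(x, t) / fx(x, t)        # predictor: dx/dt = -f_t / f_x
    t += dt
    for _ in range(5):                   # corrector: Newton on x -> f(x, t)
        x -= f(x, t) / fx(x, t)

print(abs(x - math.sqrt(2.0)) < 1e-12)   # True: continuation reached x(1)
```

A manifold peak would correspond, in this caricature, to fx vanishing along the path, which is precisely where the continuation breaks down.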

This problem was proposed by Pauline Kergus, PhD student at Onera (Toulouse). In her PhD, she studied the following data-driven problem: given frequency measurements of a plant, find a controller that makes the closed loop follow a given reference model. The approach she proposed was to directly identify the controller from the frequency measurements induced on it by the closed loop. Of course, the quality of the controller, and in particular its stability, highly depends on the chosen reference model. The question is thus: how to choose a good reference model?

The goal is here to help design amplifiers and oscillators, in particular to detect instability at an early stage of the design. This topic is studied in the doctoral work of S. Fueyo, co-advised with J.-B. Pomet (from the McTao Inria project-team). Application to oscillator design methodologies is studied in collaboration with Smain Amari from the Royal Military College of Canada (Kingston, Canada).

As opposed to filters and antennas, amplifiers and oscillators are active components that intrinsically entail non-linear functioning. The latter is due to the use of transistors governed by electric laws exhibiting saturation effects, which induce input/output characteristics that are no longer proportional to the magnitude of the input signal. Hence, they typically produce non-linear distortions. A central question arising in the design of amplifiers is to assess stability. The latter may be understood around a functioning point when no input but noise is considered, or else around a periodic trajectory when an input signal at a specified frequency is applied. For oscillators, a precise estimation of their oscillating frequency is crucial during the design process. For devices operating at relatively low frequencies, time-domain simulations perform satisfactorily to check stability. For complex microwave amplifiers and oscillators, the situation is, however, drastically different: the time step necessary to integrate the dynamical equations of the transmission lines (which behave like simple electrical wires at low frequency) becomes so small that simulations are intractable in reasonable time. Moreover, most linear components of such circuits are known through their frequency responses, and a preliminary, numerically unstable step is then needed to obtain their impulse responses prior to any time-domain simulation.

For these reasons, the analysis of such systems is carried out in the frequency domain. In the case of stability issues around a functioning point, where only small input signals are considered, the stability of the linearized system obtained by a first-order approximation of each non-linear component can be studied *via* the transfer impedance functions computed at some ports of the circuit. In recent years, we showed that, under realistic dissipativity assumptions at high frequency on the building blocks of the circuit, these transfer functions are meromorphic in the complex frequency variable

Extensions of the procedure to the large-signal case, where linearization is considered around a periodic trajectory, have received attention over the last two years.
When stability is studied around a periodic trajectory,
determined in practice by Harmonic Balance algorithms, linearization yields a linear time varying dynamical system with periodic coefficients
and a periodic trajectory thereof. While in finite dimension the stability of such systems is well understood
via the Floquet theory, this is no longer the case in the present setting which is
infinite dimensional, due to the presence of delays. Dwelling on the theory of retarded systems,
S. Fueyo's PhD work showed last year that, for
general circuits,
the monodromy operator of the linearized system along its periodic trajectory
is a compact perturbation of a high frequency, non dynamical
operator, which is stable under a realistic passivity assumption at high frequency. Therefore, only finitely many unstable points can arise in the spectrum
of the monodromy operator, and this year we established
a connection between these and the singularities of the harmonic transfer function, viewed as a holomorphic function with values in periodic

We also wrote an article reporting on the stability of the high frequency system, and recast this result in terms of exponential stability of certain delay systems .
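The finite-dimensional Floquet analysis that the above generalizes can be sketched numerically: integrate the fundamental matrix of a periodic linear system over one period and inspect the eigenvalues of the monodromy matrix (the Floquet multipliers). The circuit setting is infinite-dimensional because of delays, which is precisely what makes it harder; the toy system below is ours, chosen so that the exact multipliers are known.

```python
import numpy as np

# x' = A(t) x with 2*pi-periodic A; stable iff all Floquet multipliers
# (eigenvalues of the monodromy matrix) lie inside the unit disk.
def A(t):
    return np.array([[-1.0 + 0.5 * np.cos(t), 0.0],
                     [0.0, -2.0]])

T, n = 2 * np.pi, 2000
dt = T / n
Phi = np.eye(2)            # fundamental matrix, Phi' = A(t) Phi, Phi(0) = I
t = 0.0
for _ in range(n):         # classical RK4 step on the matrix ODE
    k1 = A(t) @ Phi
    k2 = A(t + dt / 2) @ (Phi + dt / 2 * k1)
    k3 = A(t + dt / 2) @ (Phi + dt / 2 * k2)
    k4 = A(t + dt) @ (Phi + dt * k3)
    Phi += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

mults = np.linalg.eigvals(Phi)        # exact values: exp(-2*pi), exp(-4*pi)
print(np.all(np.abs(mults) < 1))      # True: stable periodic system
```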

In a joint work with T. Qian and P. Dang from the University of Macao, we proved in previous years that on a compact hypersurface *i.e.* *via* balayage, to describe volumetric silent magnetizations.

We started an academic collaboration with LEAT (Univ. Nice, France; persons involved: Jean-Yves Dauvignac, Nicolas Fortino, Yasmina Zaki) on the topic of inverse scattering using frequency-dependent measurements. As opposed to classical electromagnetic imaging, where several spatially distributed sensors are used to identify the shape of an object by means of scattering data at a single frequency, a discrimination process between different metallic objects is sought here by means of a single sensor (or a reduced number of sensors) operating over a whole frequency band. In short, the spatial multiplicity and complexity of antenna sensors are traded against a simpler architecture performing a frequency sweep.

The subscripts

In order to gain some insight, we started a full study of the particular case where the scatterer is a spherical PEC (perfect electric conductor). In this case, Maxwell's equations can be solved “explicitly” by means of expansions in series of vectorial spherical harmonics. We showed in particular that in this case

where

This is a recent activity of the team, linked to image classification in archaeology in the framework of the project ToMaT (see Regional Initiatives below) and to the post-doctoral stay of V. L. Coli; it is pursued in collaboration with L. Blanc-Féraud (project-team Morpheme, I3S-CNRS/Inria Sophia/iBV) and D. Binder (CEPAM-CNRS, Nice), in particular.

Pottery style is classically used as the main cultural marker within Neolithic studies. Archaeological analyses focus on pottery technology, and particularly on the first stages of the pottery manufacturing processes. These stages are the most demonstrative for identifying technical traditions, as they are considered crucial in apprenticeship processes.
Until now, the identification of pottery manufacturing methods was based on macro-trace analysis, i.e. surface topography, breaks and discontinuities indicating the type of elements (coils, slabs, ...) and the way they were put together to build the pots.
Overcoming the limitations inherent in macroscopic pottery examination requires complete access to the internal structure of the pots.
Micro-computed tomography (

The main challenge of our current analyses is to overcome the lack of existing protocols for quantifying observations. In order to characterize the manufacturing sequences, the mapping of the paste variability (distribution and composition of temper) and of the discontinuities linked to different classes of pores, fabrics and/or organic inclusions appears promising. The totality of the acquired images composes a set of 2-D and 3-D surface and volume data at different resolutions and with specific physical characteristics related to each acquisition modality (multimodal and multi-scale data). Specific shape recognition methods need to be developed by applying robust imaging techniques and 3-D shape recognition algorithms.

As a first step, we devised a method to isolate pores from the 3-D data volumes as binary 3-D images, to which we apply the Hough transform (derived from the Radon transform). This method, whose generalization from 2-D to 3-D is quite recent, allows us to evaluate the presence of parallel lines going through the pores. The quantity of such lines is a good indicator of the “coiling” manufacturing technique, which it allows us to distinguish from the “spiral patchwork” technique, in particular. These advances are described in , , , and are the object of an article in preparation.

The Hough and Radon transforms can also be applied to 2-D slices of the available 3-D images displaying pore locations. In this framework, using the Radon transform to evaluate the density of points in the image that belong (at least approximately) to parallel lines appears quite efficient, as was seen during P. Vatiwutipong's internship.
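As a purely illustrative sketch of the underlying idea (not the team's actual pipeline; all function names and parameters here are hypothetical), the following Python code accumulates Hough votes for a small set of 2-D points: points aligned along parallel lines produce several strong peaks at the same angle, one rho bin per line, which is the signature of parallel alignments.

```python
import numpy as np

def hough_line_votes(points, n_theta=180, n_rho=64):
    """Accumulate Hough votes for 2-D points (illustrative sketch only).

    Each point (x, y) votes for every line (theta, rho) through it,
    parameterized by rho = x*cos(theta) + y*sin(theta).
    """
    points = np.asarray(points, dtype=float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    # Bound on |rho| so that all votes fall inside the accumulator.
    rho_max = np.abs(points).sum(axis=1).max()
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.round((rhos + rho_max) / (2 * rho_max) * n_rho).astype(int)
        acc[np.arange(n_theta), np.clip(bins, 0, n_rho - 1)] += 1
    return thetas, acc

# Points on two parallel horizontal lines: two strong peaks at theta = pi/2
# (accumulator row 90), in two distinct rho bins -- one per line.
pts = [(x, 0.0) for x in range(10)] + [(x, 3.0) for x in range(10)]
thetas, acc = hough_line_votes(pts)
```

In a real application one would vote over pore voxels of the binarized volumes rather than a synthetic point list, but the peak-counting principle is the same.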

Other possibilities of investigation, such as machine learning techniques, will be analyzed as well.

The numerous experiments that we performed on synthetic data in the context of the MagLune project (see Sections and ) revealed an intriguing behavior of the local minima of the optimization problem underlying our method. In the context of that application, we are provided with sampled values on the unit circle

In order to estimate

When

is known to have a unique local minimum on

In order to understand the reasons underlying our observations, we started studying the theoretical properties of the critical points of

We introduce the family *maxima* of *minima* of

We also obtained an explicit algebraic equation characterizing

We showed that best meromorphic approximation on a contour, in the uniform norm, to functions with countably many branched singularities with polar closure inside the contour produces poles whose counting measures accumulate weak-* to the Green equilibrium distribution on the cut of minimal capacity outside of which the function is single-valued. This is joint work with M. Yattselev (Indiana University–Purdue University Indianapolis, USA). An article on this topic is currently being written.
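In symbols, the convergence can be schematically stated as follows (with $f$ the approximated function, $K$ the cut of minimal capacity outside of which $f$ is single-valued, and $p_{1,n},\dots,p_{n,n}$ the poles of the best degree-$n$ meromorphic approximant):

```latex
\frac{1}{n}\sum_{k=1}^{n}\delta_{p_{k,n}}
\;\xrightarrow[\,n\to\infty\,]{\ \text{weak-}*\ }\;
\mu_{K},
\qquad \mu_{K}\ \text{the Green equilibrium distribution on } K .
```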

This contract (reference Inria: 11282) accompanied the PhD of David Martinez Martinez and focused on the development of efficient techniques for the design of matching networks tailored to frequency-varying loads. Applications of the latter to the design of output multiplexers occurring in space applications have also been considered (see new results section). The contract ended in mid-2019.

A contract was signed with the SME Inoveos in order to build a prototypical robot dedicated to the automatic tuning of microwave devices, see Section .

The team co-advises a PhD (G. Bose) with the CMA team of LEAT (http://

The team participates in the project ToMaT, “Multiscale Tomography: imaging and modeling ancient materials, technical traditions and transfers”, funded by the Idex

The ANR project MagLune (Magnétisme de la Lune) was active from July 2014 to August 2019. It involved the Cerege (Centre de Recherche et d'Enseignement de Géosciences de l'Environnement, joint laboratory between Université Aix-Marseille, CNRS and IRD), the IPGP (Institut de Physique du Globe de Paris) and ISTerre (Institut des Sciences de la Terre). Associated with Cerege were Inria (Apics, then Factas team) and Irphe (Institut de Recherche sur les Phénomènes Hors Équilibre, joint laboratory between Université Aix-Marseille, CNRS and École Centrale de Marseille). The goal of this project (led by geologists) was to understand the past magnetic activity of the Moon, especially to answer the question of whether it had a dynamo in the past and which mechanisms were at work to generate it. Factas participated in the project by providing mathematical tools and algorithms to recover the remanent magnetization of rock samples from the Moon on the basis of measurements of the magnetic field they generate. The techniques described in Section were instrumental for this purpose.

ANR-18-CE40-0035, “REProducing Kernels in Analysis and beyond”, starting April 2019 (for 48 months).

Led by Aix-Marseille Univ. (IMM), involving the Factas team, together with Bordeaux (IMB), Paris-Est and Toulouse Universities.

The project consists of several interrelated tasks dealing with topical problems in modern complex analysis and operator theory, and with their important applications to other fields of mathematics including approximation theory, probability, and control theory. The project is centered around the notion of the so-called reproducing kernel of a Hilbert space of holomorphic functions. Reproducing kernels are very powerful objects playing an important role in numerous domains, such as determinantal point processes, signal theory, and Sturm-Liouville and Schrödinger equations.
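For background (standard material, not a claim about the project's results): a Hilbert space $H$ of functions holomorphic on a domain $\Omega$ has a reproducing kernel $K$ precisely when the point evaluations $f \mapsto f(w)$ are bounded; the kernel is characterized by

```latex
K(\cdot, w) \in H
\quad\text{and}\quad
f(w) = \bigl\langle f,\, K(\cdot, w) \bigr\rangle_{H}
\qquad \text{for all } f \in H,\ w \in \Omega .
```

For instance, the Hardy space $H^2$ of the unit disk has the Szegő kernel $K(z,w) = (1 - \overline{w}z)^{-1}$.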

This project supports the PhD of M. Nemaire within Factas, co-advised by IMB partners.

Factas has been part of the European Research Network on System Identification (ERNSI) since 1992.

System identification deals with the derivation, estimation and validation of mathematical models of dynamical phenomena from experimental data.
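As a purely illustrative sketch of what this means in the simplest setting (the model and all names here are hypothetical, not ERNSI material): the coefficients of a first-order ARX model $y_t = a\,y_{t-1} + b\,u_{t-1}$ can be estimated from input/output data by ordinary least squares.

```python
import numpy as np

# Simulate noise-free data from a hypothetical first-order ARX system.
rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5
u = rng.standard_normal(200)          # input signal
y = np.zeros(200)                     # output signal
for t in range(1, 200):
    y[t] = a_true * y[t - 1] + b_true * u[t - 1]

# Identification step: regress y[t] on [y[t-1], u[t-1]] by least squares.
X = np.column_stack([y[:-1], u[:-1]])
a_est, b_est = np.linalg.lstsq(X, y[1:], rcond=None)[0]
```

With noise-free data the estimates recover the true coefficients up to floating-point precision; validation on noisy data and richer model classes is where the field's real difficulties begin.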

Following two Inria Associate teams (2013-2018) and a MIT-France seed funding (2014-2018), the team has a strong and regular collaboration with the Earth and Planetary Sciences department at Massachusetts Institute of Technology (Cambridge, MA, USA) and with the Mathematics department of Vanderbilt University (Nashville, TN, USA) on inverse problems for magnetic microscopy applied to the analysis of ancient rock magnetism.

Smain Amari (Royal Military College of Canada, Kingston, Canada), February 4-9.

Jonathan Partington (Univ. of Leeds, England), February 4-7.

Dmitry Ponomarev (T.U. Vienna, Vienna, Austria), June 24.

Élodie Pozzi (St Louis Univ., St Louis, Missouri, USA), Brett Wick (Washington Univ., St Louis, Missouri, USA), January 9-10.

Yves Rolain (Vrije Universiteit Brussel, VUB, Brussels, Belgium), February 5-7.

Maxim Yattselev (Indiana University–Purdue University Indianapolis, USA), June 29-July 1.

Paul Asensio, École Centrale Lyon, *Study of silent current sources in electroencephalography (EEG) and magnetoencephalography (MEG)*; advisors: L. Baratchart, J. Leblond.

Masimba Nemaire, MathMods Master, *Study of silent current sources in EEG and MEG*; advisors: L. Baratchart, J. Leblond.

Tuong Vy Nguyen Hoang, *Mathematical Circuit Modeling for Antennas*; advisors: F. Seyfert, M. Olivi.

Pat Vatiwutipong, MathMods Master, *Properties of the $d$-Radon transform and applications to imaging issues in archaeology*; advisors: V. L. Coli, J. Leblond.

Figure sums up our main collaborators, users and competitors.

L. Baratchart gave an oral communication at NCMIP 2019 in Cachan, France.

V. L. Coli gave oral communications at the 2nd “Journée Matériaux UCA”, Sophia Antipolis, September, and at the workshop
“Céramiques imprimées de Méditerranée occidentale. Matières premières, productions, usages” of the ANR CIMO, Nice, France, March, http://

D. Martinez Martinez gave oral communications at the “Journées Nationales des Microondes”, Caen, France, and at the “European Microwave Conference (EuMC) 2019”, Paris, France.

F. Seyfert was invited to give a lecture at the Technical University of Cartagena (Spain) and gave an invited talk at the workshop “Rational approximation for Electrical Engineering”, Moscow, Russia, sponsored by Huawei.

L. Baratchart was on the program committee of
“Applied Inverse Problems” (AIP) 2019, Grenoble, France
http://

L. Baratchart is on the editorial board of the journals “Computational Methods and Function Theory” and “Complex Analysis and Operator Theory”.

J. Leblond was a reviewer for the journals *Engineering with Computers*, *Inverse Problems*.

F. Seyfert was a reviewer for IEEE Transactions on Microwave Theory and Techniques.

L. Baratchart gave an invited address at
the conference “One-Dimensional Complex Analysis and Operator Theory” in Saint Petersburg, May 13-17, https://

L. Baratchart and J. Leblond were invited to give talks at AIP 2019, Grenoble, France, July, http://

S. Chevillard was invited to give a talk at an NSF-sponsored workshop on magnetic imaging organized alongside the American Geophysical Union meeting (December 7-8).

J. Leblond was an invited speaker at the final workshop of the ANR FastRelax, Lyon, France, May, http://

L. Baratchart was a member of selection panel 40 (Mathematics) of the Agence Nationale de la Recherche (ANR).

J. Leblond was an external reviewer for a promotion evaluation process at Chapman University (Orange, CA, USA).

F. Seyfert was a reviewer for the National Science Centre of Poland.

J. Leblond is a member of the “Conseil Scientifique” and of the “Commission Administrative Paritaire” of Inria.

M. Olivi is a member of the CLDD (Commission Locale de Développement Durable) and is in charge of its coordination, together with P. Bourgeois.

**Colles**: S. Chevillard gave “Colles” (oral examinations preparing undergraduate students for the competitive entrance examinations to French engineering schools) at the Centre International de Valbonne (CIV) (2 hours per week) until June 2019.

PhD in progress: K. Mavreas, *Inverse source problems in planetary sciences: dipole localization in Moon rocks from sparse magnetic data*, since October 2015, advisors: S. Chevillard, J. Leblond; defense scheduled January 31, 2020.

PhD in progress: G. Bose, *Filter Design to Match Antennas*, since
December 2016, advisors: F. Ferrero, F. Seyfert and M. Olivi.

PhD in progress: S. Fueyo, *Cycles limites et stabilité dans les circuits*, since October 2016, advisors: L. Baratchart and J.-B. Pomet (Inria Sophia, McTao).

PhD in progress: P. Asensio, *Inverse source estimation problems in EEG and MEG*, since November 2019, advisors: L. Baratchart, J. Leblond.

PhD in progress: M. Nemaire, *Inverse potential problems with application to quasi-static electromagnetics*, since October 2019, advisors: L. Baratchart, J. Leblond, S. Kupin (IMB, Univ. Bordeaux).

Post-doc. in progress: V. L. Coli, *Multiscale Tomography: imaging and modeling ancient materials*, since March 2018, advisors: J. Leblond, L. Blanc-Féraud (project-team Morpheme, I3S-CNRS/Inria Sophia/iBV), D. Binder (CEPAM-CNRS, Nice).

L. Baratchart was a reviewer of the “Mémoire d'habilitation” of Moncef Mahjoub, ENIT, Tunis, September 2.

J. Leblond was a member of the PhD committees of I. Santos (Univ. Paul Sabatier, Toulouse, February), S. Amraoui and K. Maksymenko (Univ. Côte d'Azur, December).

M. Olivi was a member of the HdR committees of F. Seyfert (Univ. Côte d'Azur, February 6) and C. Poussot-Vassals (Univ. Toulouse, July 12), and of the PhD committees of D. Martinez Martinez (Univ. Limoges, June 20) and P. Kergus (Univ. Toulouse, October 18).

F. Seyfert was a member of the PhD committee of Johan Sence (Univ. Limoges, November 15) and D. Martinez Martinez (Univ. Limoges, June 20).

M. Olivi was responsible for Scientific Mediation and president of the Committee MASTIC (Commission d'Animation et de Médiation Scientifique) https://

M. Olivi wrote a review of the book “Algorithmes : la bombe à retardement” by C. O'Neil for Interstices https://

“La fête des Maths de l'ESPE Nice-Liégeard” (March 5 and 26): M. Olivi animated two half-day workshop sessions “jouons avec des expériences scientifiques” https://

“Fête de la science: Mouans-Sartoux fête les sciences du quotidien” (October 10-11 for school groups: 8 classes; October 12 for the general public: 1000 people): M. Olivi animated the activity “jouer à transmettre des images” in collaboration with the “espace de l'art concret” https://

“Stage MathC2+” (June 19-22): M. Olivi animated a workshop session on “How to analyze sounds with mathematical functions”.

V. L. Coli gave a talk “Archéologie et mathématiques : algorithmes pour l'identification des gestes des premiers potiers”, and participated in the organization of the exhibition of the ANR project CIMO, Forum des Sciences, 80 years of CNRS, October, CIV, Valbonne.

F. Seyfert gave a pitch on Factas activities during the visit of the company SICAME (March 27), and M. Olivi gave a pitch on Factas activities for the celebration of InriaTech's 10th birthday (April 3).

S. Chevillard gave a talk “Réchauffement climatique : où en est-on ? où va-t-on ?” at the c@fé-in of the Research Center, November.

V. L. Coli gave a talk “Archéologie et mathématiques : algorithmes pour l'identification des gestes des premiers potiers” at the c@fé-in of the Research Center, October. She also participated in the organization of the “1er Colloque doctoral préhistoire, paléoenvironnement, archéosciences”, November, MSHS, Nice, https://

M. Olivi co-organized about 10 “cafés scientifiques” (c@fé-in's and cafés Techno, 30 to 80
participants each) https://

M. Olivi co-supervised the creation of new scientific wooden objects by SNJ AZUR (funds from APOCS region): pixel art and transmission of images https://