
## Section: Research Program

### Vibration analysis

In this section, the main features of our approach to the key monitoring issues, namely identification, detection, and diagnostics, are presented, and a particular instantiation relevant for vibration monitoring is described.

It should be stressed that the foundations for identification, detection, and diagnostics are fairly general, if not generic. What is specific to vibration-based SHM is the handling of high-order linear dynamical systems in connection with finite element models, which calls for subspace-based methods. One particular feature of model-based sensor data processing as exercised in I4S is the combined use of black-box or semi-physical models together with physical ones. Black-box and semi-physical models are, for example, eigenstructure parameterizations of linear MIMO systems, of interest for modal analysis and vibration-based SHM. Such models are intended to be identifiable. However, due to the large model orders that need to be considered, model order selection is a real challenge. Traditional techniques from statistics, such as the various forms of Akaike criteria (AIC, BIC, MDL, ...), do not work at all. This gives rise to new research activities specific to the handling of high-order models.

Our approach to monitoring assumes that a model of the monitored system is available. This is a reasonable assumption, especially within the SHM area. The main feature of our monitoring method is its intrinsic ability to provide early warnings of small deviations of a system with respect to a reference (safe) behavior under usual operating conditions, namely without any artificial excitation or other external action. Such a normal behavior is summarized in a reference parameter vector ${\theta }_{0}$, for example a collection of modes and mode-shapes.

#### Identification

The behavior of the monitored continuous system is assumed to be described by a parametric model $\left\{\mathbf{P}_{\theta},\ \theta \in \Theta\right\}$, where the distribution of the observations $(Z_0, \ldots, Z_N)$ is characterized by the parameter vector $\theta \in \Theta$.

For reasons closely related to vibration monitoring applications, we have been investigating subspace-based methods, for both the identification and the monitoring of the eigenstructure $\left(\lambda ,{\phi }_{\lambda }\right)$ of the state transition matrix $F$ of a linear dynamical state-space system:

$$X_{k+1} = F\, X_k + V_{k+1}, \qquad Y_k = H\, X_k \qquad (4)$$

namely the $\left(\lambda ,{\varphi }_{\lambda }\right)$ defined by:

$$\det\left(F - \lambda\, I\right) = 0, \qquad F\,\phi_\lambda = \lambda\,\phi_\lambda, \qquad \varphi_\lambda \stackrel{\Delta}{=} H\,\phi_\lambda$$

The (canonical) parameter vector in that case is:

$$\theta \stackrel{\Delta}{=} \begin{pmatrix} \Lambda \\ \mathrm{vec}\,\Phi \end{pmatrix} \qquad (6)$$

where $\Lambda$ is the vector whose elements are the eigenvalues $\lambda$, $\Phi$ is the matrix whose columns are the ${\varphi }_{\lambda }$'s, and $\mathrm{vec}$ is the column stacking operator.

Let ${R}_{i} \stackrel{\Delta}{=} \mathbf{E}\left(Y_k\, Y_{k-i}^T\right)$ and:

$$\mathcal{H}_{p+1,q} \stackrel{\Delta}{=} \begin{pmatrix} R_1 & R_2 & \cdots & R_q \\ R_2 & R_3 & \cdots & R_{q+1} \\ \vdots & \vdots & & \vdots \\ R_{p+1} & R_{p+2} & \cdots & R_{p+q} \end{pmatrix} = \mathcal{O}_{p+1}\; \mathcal{C}_q$$

where:

$$\mathcal{O}_{p+1} \stackrel{\Delta}{=} \begin{pmatrix} H \\ HF \\ \vdots \\ HF^p \end{pmatrix}, \qquad \mathcal{C}_q \stackrel{\Delta}{=} \begin{pmatrix} G & FG & \cdots & F^{q-1}G \end{pmatrix}, \qquad G \stackrel{\Delta}{=} \mathbf{E}\left(X_{k+1}\, Y_k^T\right)$$

The observability matrix $\mathcal{O}_{p+1}$ is recovered from a truncated singular value decomposition of $\mathcal{H}_{p+1,q}$, from which $H$, $F$, and hence the eigenstructure $\left(\lambda ,{\varphi }_{\lambda }\right)$ are obtained. Since the actual model order is generally not known, this procedure is run with increasing model orders.
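As an illustration, this covariance-driven subspace identification can be prototyped in a few lines of Python with NumPy. The system below (the matrices `F` and `H`, the noise level, the orders `p`, `q`, `n`) is a made-up toy example, not one of our application cases; it simulates output-only data, fills the block Hankel matrix with empirical covariances, and recovers the eigenstructure via an SVD of that Hankel matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state-space system (hypothetical, for illustration only): state
# dimension 4, two output sensors, two complex eigenvalue pairs in F.
F = np.array([[0.8, -0.4, 0.0,  0.0],
              [0.4,  0.8, 0.0,  0.0],
              [0.0,  0.0, 0.6, -0.5],
              [0.0,  0.0, 0.5,  0.6]])
H = rng.standard_normal((2, 4))

# Simulate the output-only model X_{k+1} = F X_k + V_{k+1}, Y_k = H X_k.
N = 100_000
X = np.zeros(4)
Y = np.empty((N, 2))
for k in range(N):
    X = F @ X + rng.standard_normal(4)
    Y[k] = H @ X

# Empirical output covariances R_i = E(Y_k Y_{k-i}^T), stacked into the
# block Hankel matrix H_{p+1,q} whose (i, j) block is R_{i+j+1}.
p, q = 4, 5
def R_hat(i):
    return (Y[i:].T @ Y[:N - i]) / (N - i)
Hank = np.block([[R_hat(i + j + 1) for j in range(q)] for i in range(p + 1)])

# SVD-based factorization Hank = O C: the left factor, truncated at the
# model order n, is the observability matrix up to a change of state basis.
U, s, _ = np.linalg.svd(Hank)
n = 4                              # model order (swept in practice, see text)
Obs = U[:, :n] * np.sqrt(s[:n])

# Shift invariance of Obs: dropping the last (resp. first) block row gives
# two matrices related by F, so a least-squares solve recovers F up to
# similarity, hence its eigenvalues (the modes).
r = 2                              # number of sensors
F_est = np.linalg.lstsq(Obs[:-r], Obs[r:], rcond=None)[0]
lam = np.linalg.eigvals(F_est)     # estimated eigenvalues lambda
vecs = np.linalg.eig(F_est)[1]
mode_shapes = Obs[:r] @ vecs       # estimated mode shapes H phi_lambda
```

The estimated `lam` approaches the eigenvalues of `F` as `N` grows; sweeping `n` over increasing values and tracking which eigenvalues stabilize is the usual way of coping with the unknown model order.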

#### Detection

Our approach to on-board detection is based on the so-called asymptotic statistical local approach. It is worth noticing that these investigations were initially motivated by a vibration monitoring application. It should also be stressed that, as opposed to many monitoring approaches, our method does not require repeated identification for each newly collected data sample.

For achieving the early detection of small deviations with respect to the normal behavior, our approach generates, on the basis of the reference parameter vector ${\theta }_{0}$ and a new data record, indicators which automatically perform:

• The early detection of a slight mismatch between the model and the data;

• A preliminary diagnosis and localization of the deviation(s);

• The tradeoff between the magnitude of the detected changes and the uncertainty resulting from the estimation error in the reference model and the measurement noise level.

These indicators are computationally cheap, and thus can be embedded. This is of particular interest in some applications, such as flutter monitoring.

Choosing the eigenvectors of matrix $F$ as a basis for the state space of model (4) yields the following representation of the observability matrix:

$$\mathcal{O}_{p+1}(\theta) = \begin{pmatrix} \Phi \\ \Phi\,\Delta \\ \vdots \\ \Phi\,\Delta^p \end{pmatrix} \qquad (10)$$

where $\Delta \stackrel{\Delta }{=}\mathrm{diag}\left(\Lambda \right)$, and $\Lambda$ and $\Phi$ are as in (6). Whether a nominal parameter ${\theta }_{0}$ fits a given output covariance sequence ${\left({R}_{j}\right)}_{j}$ is characterized by the following property: $\mathcal{O}_{p+1}(\theta_0)$ and $\mathcal{H}_{p+1,q}$ have the same left kernel space.

This property can be checked as follows. From the nominal ${\theta }_{0}$, compute $\mathcal{O}_{p+1}(\theta_0)$ using (10), and perform e.g. a singular value decomposition (SVD) of $\mathcal{O}_{p+1}(\theta_0)$ for extracting a matrix $U$ such that:

$$U^T\, U = I_s \qquad \text{and} \qquad U^T\, \mathcal{O}_{p+1}(\theta_0) = 0$$

Matrix $U$ is not unique (two such matrices relate through a post-multiplication with an orthonormal matrix), but can be regarded as a function of ${\theta }_{0}$. Then the characterization writes:

$$U(\theta_0)^T\, \mathcal{H}_{p+1,q} = 0 \qquad (13)$$

##### Residual associated with subspace identification.

Assume now that a reference ${\theta }_{0}$ and a new sample ${Y}_{1},\cdots ,{Y}_{N}$ are available. For checking whether the data agree with ${\theta }_{0}$, the idea is to compute the empirical Hankel matrix ${\hat{\mathcal{H}}}_{p+1,q}$, built as $\mathcal{H}_{p+1,q}$ above but from the empirical covariances:

$$\hat{R}_i \stackrel{\Delta}{=} \frac{1}{N-i} \sum_{k=i+1}^{N} Y_k\, Y_{k-i}^T$$

and to define the residual vector:

$$\zeta_N(\theta_0) \stackrel{\Delta}{=} \sqrt{N}\; \mathrm{vec}\left(U(\theta_0)^T\, \hat{\mathcal{H}}_{p+1,q}\right)$$

Let $\theta$ be the actual parameter value for the system which generated the new data sample, and ${𝐄}_{\theta }$ be the expectation when the actual system parameter is $\theta$. From (13), we know that ${\zeta }_{N}\left({\theta }_{0}\right)$ has zero mean when no change occurs in $\theta$, and nonzero mean if a change occurs. Thus ${\zeta }_{N}\left({\theta }_{0}\right)$ plays the role of a residual.
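A minimal numerical sketch of this residual, again on a made-up two-sensor system (all matrices and orders below are illustrative assumptions). The matrix $U$ is extracted from an SVD of the reference observability matrix, here built directly from the real pair $(H, F)$; this spans the same column space as the eigenvector-basis form, so it yields the same left kernel:

```python
import numpy as np

rng = np.random.default_rng(1)

# Reference (safe) system standing in for theta_0 (hypothetical values).
F0 = np.array([[0.8, -0.4],
               [0.4,  0.8]])
H0 = np.array([[1.0, 0.0],
               [0.5, 1.0]])
p, q = 3, 3

# Observability matrix of the reference model; the residual only uses its
# left kernel, which does not depend on the choice of state basis.
Obs0 = np.vstack([H0 @ np.linalg.matrix_power(F0, i) for i in range(p + 1)])
W, _, _ = np.linalg.svd(Obs0)
U = W[:, 2:]            # U^T U = I and U^T Obs0 = 0 (left null space)

def emp_hankel(Y):
    """Empirical block Hankel matrix filled with R_hat_1 ... R_hat_{p+q}."""
    N = len(Y)
    R_hat = lambda i: (Y[i:].T @ Y[:N - i]) / (N - i)
    return np.block([[R_hat(i + j + 1) for j in range(q)] for i in range(p + 1)])

def simulate(F, N=50_000):
    X, Y = np.zeros(2), np.empty((N, 2))
    for k in range(N):
        X = F @ X + rng.standard_normal(2)
        Y[k] = H0 @ X
    return Y

def residual(Y):
    """zeta_N(theta_0) = sqrt(N) vec(U^T H_hat): zero mean iff no change."""
    return np.sqrt(len(Y)) * (U.T @ emp_hankel(Y)).ravel(order="F")

z_ref = residual(simulate(F0))          # data from the reference system
z_chg = residual(simulate(1.05 * F0))   # slightly shifted eigenvalues
```

`z_ref` fluctuates around zero, whereas `z_chg` acquires a non-zero mean reflecting the parameter deviation; the subsequent $\chi^2$-test quantifies this against the residual's covariance.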

As in most fault detection approaches, the key issue is to design a residual which is ideally close to zero under normal operation, with low sensitivity to noises and other nuisance perturbations but high sensitivity to small deviations, before they develop into events to be avoided (damages, faults, ...). The originality of our approach is to:

• Design the residual basically as a parameter estimating function,

• Evaluate the residual thanks to a kind of central limit theorem, stating that the residual is asymptotically Gaussian and reflects the presence of a deviation in the parameter vector through a change in its own mean vector, which switches from zero in the reference situation to a non-zero value.

The central limit theorem shows [51] that the residual is asymptotically Gaussian:

$$\zeta_N(\theta_0) \;\longrightarrow\; \begin{cases} \mathcal{N}\left(0,\, \Sigma\right) & \text{if}\ \theta = \theta_0 \\ \mathcal{N}\left(\mathcal{J}\,\eta,\, \Sigma\right) & \text{if}\ \theta = \theta_0 + \eta/\sqrt{N} \end{cases}$$

where the asymptotic covariance matrix $\Sigma$ can be estimated; the residual thus manifests a deviation in the parameter vector through a change in its own mean value. Then, deciding between $\eta = 0$ and $\eta \neq 0$ amounts to computing the following ${\chi }^{2}$-test, provided that $\mathcal{J}$ is full rank and $\Sigma$ is invertible:

$$\chi_N^2 \stackrel{\Delta}{=} \zeta_N^T\, \Sigma^{-1}\, \mathcal{J} \left(\mathcal{J}^T\, \Sigma^{-1}\, \mathcal{J}\right)^{-1} \mathcal{J}^T\, \Sigma^{-1}\, \zeta_N$$

where $\mathcal{J}$ denotes the asymptotic sensitivity (Jacobian) of the residual with respect to the parameter vector $\theta$, evaluated at ${\theta }_{0}$.
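The decision mechanics of this $\chi^2$-test can be illustrated on synthetic Gaussian residuals. Everything below is a made-up stand-in (a random sensitivity matrix, a diagonal covariance), not an actual SHM computation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sizes: residual dimension m, parameter dimension d (m > d,
# so the sensitivity J can be full column rank).
m, d = 12, 4
J = rng.standard_normal((m, d))          # stand-in for the Jacobian J
Sigma = np.diag(np.arange(1.0, m + 1))   # stand-in asymptotic covariance
Si = np.linalg.inv(Sigma)

def chi2_test(zeta):
    """chi2 = zeta' Si J (J' Si J)^{-1} J' Si zeta, for deciding eta != 0."""
    b = J.T @ Si @ zeta
    return b @ np.linalg.solve(J.T @ Si @ J, b)

L = np.sqrt(Sigma)                       # Sigma^{1/2} (Sigma is diagonal)
eta = 10.0 * np.ones(d)                  # normalized parameter deviation

# Monte Carlo: residuals N(0, Sigma) under no change, N(J eta, Sigma) after
# a small change (local approach: the deviation only shifts the mean).
stats0 = [chi2_test(L @ rng.standard_normal(m)) for _ in range(1000)]
stats1 = [chi2_test(J @ eta + L @ rng.standard_normal(m)) for _ in range(1000)]
```

Under no change the statistic is $\chi^2$-distributed with $d$ degrees of freedom (mean $d$, here 4); under a change its mean shifts by the noncentrality $\eta^T \mathcal{J}^T \Sigma^{-1} \mathcal{J}\, \eta$, so thresholding at a $\chi^2_d$ quantile gives the detector.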

#### Diagnostics

A further monitoring step, often called fault isolation, consists in determining which (subsets of) components of the parameter vector $\theta$ have been affected by the change. Solutions for that are now described. How this relates to diagnostics is addressed afterwards.

In most SHM applications, a complex physical system, characterized by a generally non-identifiable parameter vector $\Phi$, has to be monitored using a simple (black-box) model characterized by an identifiable parameter vector $\theta$. A typical example is the vibration monitoring problem, for which complex finite element models are often available but not identifiable, whereas the small number of existing sensors calls for identifying only simplified input-output (black-box) representations. In such a situation, two different diagnosis problems may arise, namely diagnosis in terms of the black-box parameter $\theta$ and diagnosis in terms of the parameter vector $\Phi$ of the underlying physical model.

The isolation methods sketched above are possible solutions to the former. Our approach to the latter diagnosis problem is basically a detection approach again, and not a (generally ill-posed) inverse problem estimation approach.

The basic idea is to note that the physical sensitivity matrix writes $\mathcal{J}\,\mathcal{J}_{\Phi\theta}$, where $\mathcal{J}_{\Phi\theta}$ is the Jacobian matrix at ${\Phi }_{0}$ of the map $\Phi \mapsto \theta(\Phi)$, and to use the sensitivity test on the components of the parameter vector $\Phi$. Typically this results in the following type of directional test:

$$\chi_\Phi^2 \stackrel{\Delta}{=} \zeta_N^T\, \Sigma^{-1}\, \mathcal{J}\,\mathcal{J}_{\Phi\theta} \left(\mathcal{J}_{\Phi\theta}^T\, \mathcal{J}^T\, \Sigma^{-1}\, \mathcal{J}\,\mathcal{J}_{\Phi\theta}\right)^{-1} \mathcal{J}_{\Phi\theta}^T\, \mathcal{J}^T\, \Sigma^{-1}\, \zeta_N$$

It should be clear that the selection of a particular parameterization $\Phi$ for the physical model may have a non-negligible influence on such tests, depending on the numerical conditioning of the Jacobian matrices $\mathcal{J}_{\Phi\theta}$.
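A toy sketch of this directional testing, with synthetic sensitivities (the matrices standing in for $\mathcal{J}$ and $\mathcal{J}_{\Phi\theta}$ and the injected change are made up) and $\Sigma$ taken as the identity, in which case the statistic for one component of $\Phi$ reduces to a normalized projection of the residual onto that component's sensitivity direction:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical sizes: residual in R^10, black-box theta in R^4, physical
# parameter Phi in R^3.  J_pt stands in for the Jacobian of Phi -> theta(Phi).
m, d, f = 10, 4, 3
J = rng.standard_normal((m, d))
J_pt = rng.standard_normal((d, f))
Jphi = J @ J_pt                 # physical sensitivity matrix J J_{Phi theta}

def directional_stat(zeta, a):
    """chi2 along component a of Phi (Sigma = I): projection on Jphi[:, a]."""
    v = Jphi[:, a]
    return (v @ zeta) ** 2 / (v @ v)

# Residual whose mean deviation comes from a change in component 1 of Phi.
delta = np.array([0.0, 10.0, 0.0])
zeta = Jphi @ delta + rng.standard_normal(m)

stats = [directional_stat(zeta, a) for a in range(f)]
faulty = int(np.argmax(stats))  # isolation: most likely changed component
```

Each statistic is $\chi^2$ with one degree of freedom when the corresponding component did not change, so the component with the significantly largest statistic is flagged. The conditioning caveat above applies directly: nearly collinear columns of `Jphi` blur this isolation.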