
Section: Research Program

Vibration analysis

This section presents the main features of our approach to the key monitoring issues, namely identification, detection, and diagnostics, and describes a particular instantiation relevant for vibration monitoring.

It should be stressed that the foundations for identification, detection, and diagnostics are fairly general, if not generic. What is specific to vibration-based SHM is the handling of high-order linear dynamical systems, in connection with finite element models, which calls for subspace-based methods. One particular feature of model-based sensor data processing as exercised in I4S is the combined use of black-box or semi-physical models together with physical ones. Black-box and semi-physical models are, for example, eigenstructure parameterizations of linear MIMO systems, of interest for modal analysis and vibration-based SHM. Such models are intended to be identifiable. However, due to the large model orders that must be considered, model order selection is a real challenge. Traditional techniques from statistics, such as the various forms of Akaike criteria (AIC, BIC, MDL, ...), do not work at all. This gives rise to new research activities specific to the handling of high-order models.

Our approach to monitoring assumes that a model of the monitored system is available. This is a reasonable assumption, especially within the SHM areas. The main feature of our monitoring method is its intrinsic ability to provide early warnings of small deviations of a system with respect to a reference (safe) behavior under usual operating conditions, namely without any artificial excitation or other external action. Such a normal behavior is summarized in a reference parameter vector θ0, for example a collection of modes and mode shapes.


The behavior of the monitored continuous system is assumed to be described by a parametric model {𝐏_θ, θ ∈ Θ}, where the distribution of the observations (Z_0, ..., Z_N) is characterized by the parameter vector θ ∈ Θ.

For reasons closely related to the vibration monitoring applications, we have been investigating subspace-based methods, for both the identification and the monitoring of the eigenstructure (λ, ϕ_λ) of the state transition matrix F of a linear dynamical state-space system :

    X_{k+1} = F X_k + V_{k+1},    Y_k = H X_k        (4)

namely the (λ, ϕ_λ) defined by :

    det(F − λI) = 0,    F φ_λ = λ φ_λ,    ϕ_λ = H φ_λ        (5)

The (canonical) parameter vector in that case is :

    θ ≜ ( Λ^T  vec(Φ)^T )^T        (6)

where Λ is the vector whose elements are the eigenvalues λ, Φ is the matrix whose columns are the ϕ_λ's, and vec is the column stacking operator.

"Subspace-based methods" is the generic name for linear system identification algorithms based on either time-domain measurements or output covariance matrices, in which different subspaces of Gaussian random vectors play a key role [51].

Let R_i ≜ 𝐄[Y_k Y_{k−i}^T] and:

    ℋ_{p+1,q} ≜ ( R_{i+j−1} )_{1 ≤ i ≤ p+1, 1 ≤ j ≤ q}

be the output covariance and Hankel matrices, respectively; and let G ≜ 𝐄[X_k Y_{k−1}^T]. Direct computation of the R_i's from the equations (4) leads to the well-known key factorizations :

    R_i = H F^{i−1} G,    ℋ_{p+1,q} = 𝒪_{p+1}(H, F) 𝒞_q(F, G)

where:

    𝒪_{p+1}(H, F) ≜ ( H^T  (HF)^T  ⋯  (HF^p)^T )^T    and    𝒞_q(F, G) ≜ ( G  FG  ⋯  F^{q−1}G )
are the observability and controllability matrices, respectively. The observation matrix H is then found in the first block-row of the observability matrix 𝒪. The state-transition matrix F is obtained from the shift invariance property of 𝒪. The eigenstructure (λ,φλ) then results from (5).

Since the actual model order is generally not known, this procedure is run with increasing model orders.
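As an illustration, the covariance-driven variant of this identification procedure can be sketched in a few lines of NumPy. This is a minimal sketch for one fixed model order; the function name, the Hankel dimensions, and the simulated output-only system in the usage below are our own illustrative choices, not part of any I4S software.

```python
import numpy as np

def subspace_identify(Y, p, q, order):
    """Covariance-driven subspace identification (illustrative sketch).

    Y     : (N, r) array of measured outputs Y_k.
    order : assumed state dimension (in practice swept over increasing values).
    Returns the eigenvalues of F and the observed mode shapes H @ phi_lambda.
    """
    N, r = Y.shape
    # Empirical output covariances R_i ~ E[Y_k Y_{k-i}^T]
    def R(i):
        return Y[i:].T @ Y[:N - i] / (N - i)
    # Block Hankel matrix with block (i, j) equal to R_{i+j-1}
    Hank = np.block([[R(i + j + 1) for j in range(q)] for i in range(p + 1)])
    # The column space of Hank approximates that of the observability matrix
    U, s, _ = np.linalg.svd(Hank, full_matrices=False)
    O = U[:, :order] * np.sqrt(s[:order])
    H_obs = O[:r, :]                       # observation matrix: first block row of O
    # State transition F from the shift invariance property of O
    F = np.linalg.pinv(O[:-r, :]) @ O[r:, :]
    lam, phi = np.linalg.eig(F)            # eigenstructure (lambda, phi_lambda)
    return lam, H_obs @ phi
```

Since the true order is unknown, in practice this function would be called with increasing values of `order`, as noted above.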


Our approach to on-board detection is based on the so-called asymptotic statistical local approach. It is worth noticing that these investigations were initially motivated by a vibration monitoring application. It should also be stressed that, as opposed to many monitoring approaches, our method does not require repeated identification for each newly collected data sample.

For achieving the early detection of small deviations with respect to the normal behavior, our approach generates, on the basis of the reference parameter vector θ0 and a new data record, indicators which automatically perform :

These indicators are computationally cheap, and thus can be embedded. This is of particular interest in some applications, such as flutter monitoring.

Choosing the eigenvectors of matrix F as a basis for the state space of model (4) yields the following representation of the observability matrix:

    𝒪_{p+1}(θ) = ( Φ^T  (ΦΔ)^T  ⋯  (ΦΔ^p)^T )^T        (10)

where Δ ≜ diag(Λ), and Λ and Φ are as in (6). Whether a nominal parameter θ0 fits a given output covariance sequence (R_j)_j is characterized by:

    𝒪_{p+1}(θ0) and ℋ_{p+1,q} have the same left null space.

This property can be checked as follows. From the nominal θ0, compute 𝒪_{p+1}(θ0) using (10), and perform e.g. a singular value decomposition (SVD) of 𝒪_{p+1}(θ0) for extracting a matrix U such that:

    U^T U = I    and    U^T 𝒪_{p+1}(θ0) = 0

Matrix U is not unique (two such matrices relate through a post-multiplication with an orthonormal matrix), but can be regarded as a function of θ0. Then the characterization writes:

    U(θ0)^T ℋ_{p+1,q} = 0        (13)
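The construction of such a matrix U(θ0) from the modal parameters can be sketched in NumPy as follows. This is a simplified sketch under the notations above: real-valued Λ and Φ are used for readability, whereas actual modal parameters are complex, and the function names are ours.

```python
import numpy as np

def observability(Lam, Phi, p):
    """O_{p+1}(theta) = [Phi; Phi Delta; ...; Phi Delta^p], with Delta = diag(Lam)."""
    Delta = np.diag(Lam)
    rows, block = [], Phi
    for _ in range(p + 1):
        rows.append(block)
        block = block @ Delta
    return np.vstack(rows)

def left_null_space(O):
    """Matrix U with orthonormal columns spanning the left null space of O.

    U satisfies U^T U = I and U^T O = 0; it is unique only up to
    right-multiplication by an orthonormal matrix, as noted above.
    Assumes O has full column rank.
    """
    U_full, _, _ = np.linalg.svd(O)        # full SVD: U_full is square
    return U_full[:, O.shape[1]:]          # columns beyond rank(O) span the left kernel
```

Checking whether a covariance sequence fits θ0 then amounts to testing whether U(θ0)^T applied to the Hankel matrix is close to zero.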
Residual associated with subspace identification.

Assume now that a reference θ0 and a new sample Y_1, ..., Y_N are available. For checking whether the data agree with θ0, the idea is to compute the empirical Hankel matrix ℋ̂_{p+1,q}:

    ℋ̂_{p+1,q} ≜ ( R̂_{i+j−1} )_{1 ≤ i ≤ p+1, 1 ≤ j ≤ q},    R̂_i ≜ (1/N) Σ_{k=1}^{N−i} Y_{k+i} Y_k^T

and to define the residual vector:

    ζ_N(θ0) ≜ √N vec( U(θ0)^T ℋ̂_{p+1,q} )
Let θ be the actual parameter value for the system which generated the new data sample, and 𝐄_θ be the expectation when the actual system parameter is θ. From (13), we know that ζ_N(θ0) has zero mean when no change occurs in θ, and nonzero mean if a change occurs. Thus ζ_N(θ0) plays the role of a residual.
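The residual computation can be sketched in NumPy as follows, assuming the empirical covariance and Hankel conventions above; `U` stands for the left null space matrix U(θ0), here assumed precomputed, and the names are illustrative.

```python
import numpy as np

def subspace_residual(Y, U, p, q):
    """zeta_N(theta0) = sqrt(N) vec(U(theta0)^T Hhat_{p+1,q}) from a new sample Y."""
    N = Y.shape[0]
    def R_hat(i):                          # empirical output covariance R_i
        return Y[i:].T @ Y[:N - i] / (N - i)
    Hhat = np.block([[R_hat(i + j + 1) for j in range(q)] for i in range(p + 1)])
    return np.sqrt(N) * (U.T @ Hhat).flatten(order="F")   # vec = column stacking
```

Under the reference behavior the residual fluctuates around zero; under a deviation its mean moves away from zero, which is what the subsequent test detects.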

As in most fault detection approaches, the key issue is to design a residual, which is ideally close to zero under normal operation, and has low sensitivity to noises and other nuisance perturbations, but high sensitivity to small deviations, before they develop into events to be avoided (damages, faults, ...). The originality of our approach is to :

The central limit theorem shows [45] that the residual is asymptotically Gaussian :

    ζ_N(θ0) → 𝒩(0, Σ) if θ = θ0,    ζ_N(θ0) → 𝒩(𝒥 η, Σ) if θ = θ0 + η/√N

where 𝒥 is the Jacobian of the residual mean with respect to θ, and where the asymptotic covariance matrix Σ can be estimated; the deviation η in the parameter vector thus manifests itself as a change in the mean value of the residual. Then, deciding between η = 0 and η ≠ 0 amounts to computing the following χ²-test, provided that 𝒥 is full rank and Σ is invertible :

    χ² = ζ_N^T Σ^{−1} 𝒥 ( 𝒥^T Σ^{−1} 𝒥 )^{−1} 𝒥^T Σ^{−1} ζ_N
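The χ²-test statistic can be computed directly, as sketched below in NumPy; `zeta` stands for the residual ζ_N(θ0), `J` for the sensitivity matrix 𝒥, `Sigma` for the estimated covariance Σ, and the function name is ours.

```python
import numpy as np

def chi2_test(zeta, J, Sigma):
    """chi2 = zeta^T S^-1 J (J^T S^-1 J)^-1 J^T S^-1 zeta, with S = Sigma.

    Requires J of full column rank and Sigma invertible.
    """
    Si = np.linalg.inv(Sigma)
    v = J.T @ Si @ zeta                    # projected residual J^T Sigma^-1 zeta
    fisher = J.T @ Si @ J                  # Fisher-like information matrix
    return float(v @ np.linalg.solve(fisher, v))
```

The statistic is then compared with a threshold taken from the χ² distribution with rank(𝒥) degrees of freedom; a large value signals η ≠ 0.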


A further monitoring step, often called fault isolation, consists in determining which (subsets of) components of the parameter vector θ have been affected by the change. Solutions for that are now described. How this relates to diagnostics is addressed afterwards.

The question of which (subsets of) components of θ have changed can be addressed using either nuisance parameter elimination methods or a multiple hypotheses testing approach [44].

In most SHM applications, a complex physical system, characterized by a generally non-identifiable parameter vector Φ, has to be monitored using a simple (black-box) model characterized by an identifiable parameter vector θ. A typical example is the vibration monitoring problem, for which complex finite element models are often available but not identifiable, whereas the small number of existing sensors calls for identifying only simplified input-output (black-box) representations. In such a situation, two different diagnosis problems may arise, namely diagnosis in terms of the black-box parameter θ and diagnosis in terms of the parameter vector Φ of the underlying physical model.

The isolation methods sketched above are possible solutions to the former. Our approach to the latter diagnosis problem is basically a detection approach again, and not a (generally ill-posed) inverse problem estimation approach.

The basic idea is to note that the physical sensitivity matrix writes 𝒥 𝒥_{Φθ}, where 𝒥_{Φθ} is the Jacobian matrix at Φ0 of the application Φ ↦ θ(Φ), and to use the sensitivity test for the components of the parameter vector Φ. Typically this results in the following type of directional test :

    χ²_Φ = ζ^T Σ^{−1} 𝒥 𝒥_{Φθ} ( 𝒥_{Φθ}^T 𝒥^T Σ^{−1} 𝒥 𝒥_{Φθ} )^{−1} 𝒥_{Φθ}^T 𝒥^T Σ^{−1} ζ
It should be clear that the selection of a particular parameterization Φ for the physical model may have a non-negligible influence on such tests, according to the numerical conditioning of the Jacobian matrices 𝒥_{Φθ}.
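A directional test of this type, restricted to one component of Φ, can be sketched as follows in NumPy; `J_theta` plays the role of 𝒥, `J_phi_theta` of the Jacobian 𝒥_{Φθ}, and all names and shapes are illustrative assumptions.

```python
import numpy as np

def directional_test(zeta, J_theta, J_phi_theta, Sigma, idx):
    """chi2 sensitivity test for component idx of the physical parameter Phi.

    The physical sensitivity is the chain-rule product J_theta @ J_phi_theta;
    the test restricts the chi2 form to the corresponding single direction.
    """
    J = J_theta @ J_phi_theta[:, [idx]]    # sensitivity w.r.t. component idx of Phi
    Si = np.linalg.inv(Sigma)
    v = J.T @ Si @ zeta
    fisher = J.T @ Si @ J
    return float(v @ np.linalg.solve(fisher, v))
```

Computing this statistic for each component of Φ ranks the physical parameters by how much they explain the detected change, which is the diagnostics step; the conditioning caveat above applies to `J_phi_theta`.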