## Section: Scientific Foundations

### Subspace-based identification and detection

See module 6.5.

For reasons closely related to the vibration monitoring applications described in module 4.2, we have been investigating subspace-based methods, for both the identification and the monitoring of the eigenstructure of the state transition matrix $F$ of a linear dynamical state-space system:

$$X_{k+1} = F\,X_k + V_{k+1}, \qquad Y_k = H\,X_k \tag{9}$$

namely the pairs $(\lambda, \varphi_\lambda)$ defined by:

$$\det(F - \lambda I) = 0, \qquad (F - \lambda I)\,\varphi_\lambda = 0, \qquad \phi_\lambda \triangleq H\,\varphi_\lambda \tag{10}$$

The (canonical) parameter vector in that case is:

$$\theta \triangleq \begin{pmatrix} \Lambda \\ \operatorname{vec}\Phi \end{pmatrix} \tag{11}$$

where $\Lambda$ is the vector whose elements are the eigenvalues $\lambda$, $\Phi$ is the matrix whose columns are the $\phi_\lambda$'s, and $\operatorname{vec}$ is the column stacking operator.
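As a minimal sketch of the modal quantities in (10) and (11), assuming a small hypothetical state-space model (the matrices below are illustrative, not taken from any referenced application):

```python
import numpy as np

# Hypothetical 2-state system matrices (illustrative assumption only).
F = np.array([[0.8, 0.3],
              [0.0, 0.5]])
H = np.array([[1.0, 1.0]])

# Eigenstructure of F: eigenvalues lam and eigenvectors phi (as columns).
lam, phi = np.linalg.eig(F)

# Observed mode shapes: eigenvectors projected through H, as in (10).
mode_shapes = H @ phi

# Canonical parameter vector theta of (11): eigenvalues stacked on top of
# the column-stacked (vec) observed mode shapes.
theta = np.concatenate([lam, mode_shapes.flatten(order="F")])
```

Here `flatten(order="F")` implements the column stacking operator $\operatorname{vec}$.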

Subspace-based methods is the generic name for linear system identification algorithms based on either time domain measurements or output covariance matrices, in which different subspaces of Gaussian random vectors play a key role [56]. A contribution of ours, minor but extremely fruitful, has been to write the output-only covariance-driven subspace identification method in a form that involves a parameter estimating function, from which we define a residual adapted to vibration monitoring [1]. This is explained next.

#### Covariance-driven subspace identification.

Let:

$$R_i \triangleq \mathbf{E}\bigl(Y_k\,Y_{k-i}^T\bigr), \qquad \mathcal{H}_{p+1,q} \triangleq \begin{pmatrix} R_1 & R_2 & \cdots & R_q \\ R_2 & R_3 & \cdots & R_{q+1} \\ \vdots & \vdots & & \vdots \\ R_{p+1} & R_{p+2} & \cdots & R_{p+q} \end{pmatrix} \tag{12}$$

be the output covariance and Hankel matrices, respectively; and let $G \triangleq \mathbf{E}\bigl(X_k\,Y_{k-1}^T\bigr)$. Direct computation of the $R_i$'s from the equations (9) leads to the well-known key factorizations:

$$R_i = H\,F^{i-1}\,G, \qquad \mathcal{H}_{p+1,q} = \mathcal{O}_{p+1}(H,F)\;\mathcal{C}_q(F,G) \tag{13}$$

where:

$$\mathcal{O}_{p+1}(H,F) \triangleq \begin{pmatrix} H \\ HF \\ \vdots \\ HF^p \end{pmatrix}, \qquad \mathcal{C}_q(F,G) \triangleq \begin{pmatrix} G & FG & \cdots & F^{q-1}G \end{pmatrix} \tag{14}$$

are the observability and controllability matrices, respectively. The observation matrix $H$ is then found in the first block-row of the observability matrix $\mathcal{O}_{p+1}(H,F)$. The state transition matrix $F$ is obtained from the shift invariance property of $\mathcal{O}_{p+1}(H,F)$. The eigenstructure $(\lambda, \phi_\lambda)$ then results from (10).

Since the actual model order is generally not known, this procedure is run with increasing model orders.
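The identification procedure above can be sketched as follows, under simplifying assumptions: a simulated two-mode, scalar-output system (the matrices are illustrative), empirical covariances in place of exact ones, and a single fixed model order rather than the increasing orders mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an output-only state-space model as in (9); F, H are assumptions.
F = np.array([[0.9, 0.2], [-0.2, 0.7]])
H = np.array([[1.0, 0.0]])
N, n = 20000, 2
X = np.zeros(n)
Y = np.empty(N)
for k in range(N):
    X = F @ X + rng.standard_normal(n)   # state noise V_{k+1}
    Y[k] = (H @ X)[0]

p, q = 3, 3
# Empirical output covariances R_i and block Hankel matrix, as in (12).
R = [np.mean(Y[i:] * Y[:N - i]) for i in range(p + q + 1)]
Hank = np.array([[R[i + j + 1] for j in range(q)] for i in range(p + 1)])

# Key factorization (13): an SVD of the Hankel matrix yields the
# observability matrix up to a similarity transform; truncate at order n.
U, s, Vt = np.linalg.svd(Hank)
Obs = U[:, :n] * np.sqrt(s[:n])

H_est = Obs[:1, :]                       # first block-row -> H
# Shift invariance: Obs_up @ F = Obs_down, solved in the least-squares sense.
F_est, *_ = np.linalg.lstsq(Obs[:-1, :], Obs[1:, :], rcond=None)
eigs = np.linalg.eigvals(F_est)          # estimated eigenstructure, cf. (10)
```

The eigenvalues of `F_est` approximate those of `F` up to the similarity transform introduced by the SVD, which leaves the eigenstructure unchanged.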

#### Model parameter characterization.

Choosing the eigenvectors of matrix $F$ as a basis for the state space of model (9) yields the following representation of the observability matrix:

$$\mathcal{O}_{p+1}(\theta) = \begin{pmatrix} \Phi \\ \Phi\Delta \\ \vdots \\ \Phi\Delta^p \end{pmatrix} \tag{15}$$

where $\Delta \triangleq \operatorname{diag}(\Lambda)$, and $\Lambda$ and $\Phi$ are as in (11). Whether a nominal parameter $\theta_0$ fits a given output covariance sequence $(R_j)_j$ is characterized by [1]:

$$\mathcal{O}_{p+1}(\theta_0) \ \text{and} \ \mathcal{H}_{p+1,q} \ \text{have the same left kernel space.} \tag{16}$$

This property can be checked as follows. From the nominal $\theta_0$, compute $\mathcal{O}_{p+1}(\theta_0)$ using (15), and perform e.g. a singular value decomposition (SVD) of $\mathcal{O}_{p+1}(\theta_0)$ for extracting a matrix $U$ such that:

$$U^T\,U = I_s \qquad \text{and} \qquad U^T\,\mathcal{O}_{p+1}(\theta_0) = 0 \tag{17}$$

Matrix $U$ is not unique (two such matrices relate through a post-multiplication with an orthonormal matrix), but it can be regarded as a function of $\theta_0$. Then the characterization writes:

$$U(\theta_0)^T\,\mathcal{H}_{p+1,q} = 0 \tag{18}$$
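A sketch of the check in (17), assuming a hypothetical nominal parameter (the mode shapes and eigenvalues below are illustrative, not from any referenced system):

```python
import numpy as np

# Hypothetical nominal observability matrix O(theta0) built as in (15):
# Phi stacked with Phi @ Delta^k for k = 0..p, Delta = diag of eigenvalues.
Phi = np.array([[1.0, 1.0]])          # observed mode shapes (1 output, 2 modes)
Delta = np.diag([0.8, 0.5])
p = 3
Obs = np.vstack([Phi @ np.linalg.matrix_power(Delta, k) for k in range(p + 1)])

# Left null space of O(theta0) via SVD: the left singular vectors beyond
# the rank span the orthogonal complement required by (17).
U_full, s, _ = np.linalg.svd(Obs)
rank = int(np.sum(s > 1e-10))
U = U_full[:, rank:]                   # satisfies U^T U = I, U^T O(theta0) = 0
```

Any Hankel matrix built from covariances consistent with this $\theta_0$ is then annihilated by `U.T`, which is exactly the characterization stated above.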

#### Residual associated with subspace identification.

Assume now that a reference $\theta_0$ and a new data sample $Y_1, \dots, Y_N$ are available. For checking whether the data agree with $\theta_0$, the idea is to compute the empirical Hankel matrix $\widehat{\mathcal{H}}_{p+1,q}$:

$$\widehat{\mathcal{H}}_{p+1,q} \triangleq \operatorname{Hank}\bigl(\widehat{R}_i\bigr), \qquad \widehat{R}_i \triangleq \frac{1}{N-i} \sum_{k=i+1}^{N} Y_k\,Y_{k-i}^T \tag{19}$$

and to define the residual vector:

$$\zeta_N(\theta_0) \triangleq \sqrt{N}\;\operatorname{vec}\bigl(U(\theta_0)^T\,\widehat{\mathcal{H}}_{p+1,q}\bigr) \tag{20}$$

Let $\theta$ be the actual parameter value for the system which generated the new data sample, and $\mathbf{E}_\theta$ be the expectation when the actual system parameter is $\theta$. From (18), we know that $\zeta_N(\theta_0)$ has zero mean when no change occurs in $\theta$, and a nonzero mean if a change occurs. Thus $\zeta_N(\theta_0)$ plays the role of a residual.
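The zero-mean/nonzero-mean behavior of the residual can be illustrated with exact covariances standing in for empirical ones; all matrices below are illustrative assumptions:

```python
import numpy as np

# Nominal modal parameters and a hypothetical controllability factor g,
# chosen purely for illustration.
Phi = np.array([[1.0, 1.0]])
Delta = np.diag([0.8, 0.5])
g = np.array([[0.3], [0.7]])
p, q, N = 3, 3, 1000

def hankel(delta):
    # Hankel matrix of the covariances R_i = Phi delta^i g, cf. (13).
    R = [(Phi @ np.linalg.matrix_power(delta, i) @ g)[0, 0]
         for i in range(p + q + 1)]
    return np.array([[R[i + j + 1] for j in range(q)] for i in range(p + 1)])

# Left null space U(theta0) of the nominal observability matrix, as in (17).
Obs = np.vstack([Phi @ np.linalg.matrix_power(Delta, k) for k in range(p + 1)])
U_full, s, _ = np.linalg.svd(Obs)
U = U_full[:, int(np.sum(s > 1e-10)):]

# Residual (20): vanishes for the nominal system, nonzero after an
# eigenvalue change (0.8 -> 0.85 in the first mode).
zeta_nominal = np.sqrt(N) * (U.T @ hankel(Delta)).flatten(order="F")
zeta_changed = np.sqrt(N) * (U.T @ hankel(np.diag([0.85, 0.5]))).flatten(order="F")
```

With empirical covariances, `zeta_nominal` would fluctuate around zero instead of vanishing exactly, which is what the statistical tests built on this residual exploit.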

It is our experience that this residual has highly interesting properties, both for damage detection [1] and localization [3], and for flutter monitoring [8].

#### Other uses of the key factorizations.

Factorization (13) is the key to a characterization of the canonical parameter vector $\theta$ in (11), and to deriving the residual. It is also the key for:

• Proving consistency and robustness results [6];

• Designing an extension of the covariance-driven subspace identification algorithm adapted to the presence and fusion of non-simultaneously recorded multiple sensor setups [7];

• Proving the consistency and robustness of this extension [9];

• Designing various forms of input-output covariance-driven subspace identification algorithms adapted to the presence of both known inputs and unknown excitations [10].
