Section: Scientific Foundations
Subspace-based identification and detection
See module 6.5.
For reasons closely related to the vibration monitoring applications described in module 4.2, we have been investigating subspace-based methods, for both the identification and the monitoring of the eigenstructure of the state transition matrix $F$ of a linear dynamical state-space system:
$$ X_{k+1} = F\,X_k + V_{k+1}, \qquad Y_k = H\,X_k , \eqno(9) $$
namely the pairs $(\lambda, \phi_\lambda)$ defined by:
$$ \det(F - \lambda\,I) = 0, \qquad F\,\phi_\lambda = \lambda\,\phi_\lambda , \qquad \psi_\lambda \triangleq H\,\phi_\lambda . \eqno(10) $$
The (canonical) parameter vector in that case is:
$$ \theta \triangleq \begin{pmatrix} \Lambda \\ \mathrm{vec}\,\Psi \end{pmatrix} , \eqno(11) $$
where $\Lambda$ is the vector whose elements are the eigenvalues $\lambda$, $\Psi$ is the matrix whose columns are the $\psi_\lambda$'s, and $\mathrm{vec}$ is the column stacking operator.
Subspace-based methods is the generic name for linear system identification algorithms based on either time-domain measurements or output covariance matrices, in which different subspaces of Gaussian random vectors play a key role [56]. A contribution of ours, minor but extremely fruitful, has been to write the output-only covariance-driven subspace identification method under a form that involves a parameter estimating function, from which we define a residual adapted to vibration monitoring [1]. This is explained next.
Covariance-driven subspace identification.
Let $R_i \triangleq \mathbf{E}\left( Y_k\,Y_{k-i}^T \right)$ and:
$$ \mathcal{H}_{p+1,q} \triangleq \begin{pmatrix} R_1 & R_2 & \cdots & R_q \\ R_2 & R_3 & \cdots & R_{q+1} \\ \vdots & \vdots & & \vdots \\ R_{p+1} & R_{p+2} & \cdots & R_{p+q} \end{pmatrix} \eqno(12) $$
be the output covariance and Hankel matrices, respectively; and: $G \triangleq \mathbf{E}\left( X_{k+1}\,Y_k^T \right)$. Direct computations of the $R_i$'s from the equations (9) lead to the well-known key factorizations:
$$ R_i = H\,F^{i-1}\,G , \qquad \mathcal{H}_{p+1,q} = \mathcal{O}_{p+1}(H,F)\;\mathcal{C}_q(F,G) , \eqno(13) $$
where:
$$ \mathcal{O}_{p+1}(H,F) \triangleq \begin{pmatrix} H \\ HF \\ \vdots \\ HF^p \end{pmatrix} , \qquad \mathcal{C}_q(F,G) \triangleq \begin{pmatrix} G & FG & \cdots & F^{q-1}G \end{pmatrix} \eqno(14) $$
are the observability and controllability matrices, respectively. The observation matrix $H$ is then found in the first block-row of the observability matrix $\mathcal{O}_{p+1}$. The state-transition matrix $F$ is obtained from the shift invariance property of $\mathcal{O}_{p+1}$. The eigenstructure $(\lambda, \phi_\lambda)$ then results from (10).
Since the actual model order is generally not known, this procedure is run with increasing model orders.
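As an illustration, the identification procedure above can be sketched numerically as follows. This is a minimal sketch under simplifying assumptions (a single sensor setup, a known model order, a plain least-squares solve for the shift-invariance step), not the team's actual implementation; the function name and parameter names are ours.

```python
import numpy as np

def covariance_driven_ssi(Y, p, q, order):
    """Sketch of covariance-driven subspace identification.

    Y     : (r, N) array of measured outputs
    p, q  : numbers of block rows / block columns of the Hankel matrix
    order : assumed model order (in practice one sweeps over increasing orders)
    Returns the estimated eigenvalues of F and the mode shapes psi = H phi.
    """
    r, N = Y.shape
    # Empirical output covariances: Rhat_i ~ E(Y_k Y_{k-i}^T)
    R = [Y[:, i:] @ Y[:, :N - i].T / (N - i) for i in range(p + q + 1)]
    # Block Hankel matrix, block (i, j) holding R_{i+j+1}
    Hank = np.block([[R[i + j + 1] for j in range(q)] for i in range(p + 1)])
    # An SVD of the Hankel matrix yields an observability matrix estimate
    W, s, _ = np.linalg.svd(Hank, full_matrices=False)
    Obs = W[:, :order] * np.sqrt(s[:order])
    # H is read off the first block row of the observability matrix
    H = Obs[:r, :]
    # F from the shift invariance of Obs, via least squares
    F = np.linalg.lstsq(Obs[:-r, :], Obs[r:, :], rcond=None)[0]
    # Eigenstructure: eigenvalues of F and mode shapes psi = H phi
    lam, Phi = np.linalg.eig(F)
    return lam, H @ Phi
```

Running this procedure with increasing model orders, as described above, amounts to repeated calls with growing `order` on the same Hankel matrix.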
Model parameter characterization.
Choosing the eigenvectors of matrix $F$ as a basis for the state space of model (9) yields the following representation of the observability matrix:
$$ \mathcal{O}_{p+1}(\theta) = \begin{pmatrix} \Psi \\ \Psi\,\Delta \\ \vdots \\ \Psi\,\Delta^p \end{pmatrix} , \eqno(15) $$
where $\Delta \triangleq \mathrm{diag}(\Lambda)$, and $\Lambda$ and $\Psi$ are as in (11). Whether a nominal parameter $\theta_0$ fits a given output covariance sequence $(R_j)_j$ is characterized by [1]:
$$ \mathcal{O}_{p+1}(\theta_0) \ \hbox{and} \ \mathcal{H}_{p+1,q} \ \hbox{have the same left kernel space.} \eqno(16) $$
This property can be checked as follows. From the nominal $\theta_0$, compute $\mathcal{O}_{p+1}(\theta_0)$ using (15), and perform e.g. a singular value decomposition (SVD) of $\mathcal{O}_{p+1}(\theta_0)$ for extracting a matrix $U$ such that:
$$ U^T\,U = I \qquad \hbox{and} \qquad U^T\,\mathcal{O}_{p+1}(\theta_0) = 0 . \eqno(17) $$
Matrix $U$ is not unique (two such matrices relate through a post-multiplication with an orthonormal matrix), but $U$ can be regarded as a function of $\theta_0$. Then the characterization (16) writes:
$$ U(\theta_0)^T\,\mathcal{H}_{p+1,q} = 0 . \eqno(18) $$
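A numerical sketch of this check, assuming the observability matrix $\mathcal{O}_{p+1}(\theta_0)$ is available as an array (function names and tolerances are illustrative, not from the cited works): $U$ is taken as the left singular vectors associated with the vanishing singular values, so that $U^T U = I$ and $U^T \mathcal{O}_{p+1}(\theta_0) = 0$, and the characterization is then tested up to a tolerance.

```python
import numpy as np

def left_null_basis(Obs, tol=1e-10):
    """Orthonormal basis U of the left null space of Obs = O_{p+1}(theta_0):
    U^T U = I and U^T Obs = 0, obtained from a full SVD."""
    W, s, _ = np.linalg.svd(Obs, full_matrices=True)
    rank = int(np.sum(s > tol * s[0]))
    return W[:, rank:]

def fits(U, Hank, tol=1e-8):
    """Check the characterization U(theta_0)^T H_{p+1,q} = 0 up to tolerance."""
    return np.linalg.norm(U.T @ Hank) < tol * max(np.linalg.norm(Hank), 1.0)
```

Any two such bases differ by an orthonormal factor on the right, which leaves the value of the test unchanged.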
Residual associated with subspace identification.
Assume now that a reference $\theta_0$ and a new data sample $(Y_k)_{k=1,\dots,N}$ are available. For checking whether the data agree with $\theta_0$, the idea is to compute the empirical Hankel matrix $\widehat{\mathcal{H}}_{p+1,q}$:
$$ \widehat{\mathcal{H}}_{p+1,q} \triangleq \mathrm{Hank}\left( \widehat{R}_i \right) , \qquad \widehat{R}_i \triangleq \frac{1}{N} \sum_{k=i+1}^{N} Y_k\,Y_{k-i}^T , $$
and to define the residual vector:
$$ \zeta_N(\theta_0) \triangleq \sqrt{N}\;\mathrm{vec}\left( U(\theta_0)^T\,\widehat{\mathcal{H}}_{p+1,q} \right) . \eqno(19) $$
Let $\theta$ be the actual parameter value for the system which generated the new data sample, and $\mathbf{E}_\theta$ be the expectation when the actual system parameter is $\theta$. From (18), we know that $\zeta_N(\theta_0)$ has zero mean when no change occurs in $\theta$, and nonzero mean if a change occurs. Thus $\zeta_N(\theta_0)$ plays the role of a residual.
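The residual itself is then short to compute. A sketch, assuming `U` holds an orthonormal basis of the left null space of $\mathcal{O}_{p+1}(\theta_0)$; the function name is ours, and each empirical covariance is normalized here by its own number of summands, a harmless variant.

```python
import numpy as np

def subspace_residual(U, Y, p, q):
    """Sketch of the residual zeta_N = sqrt(N) vec(U^T Hhat_{p+1,q}).

    U : orthonormal basis of the left null space of O_{p+1}(theta_0)
    Y : (r, N) array holding the new data sample
    """
    r, N = Y.shape
    # Empirical covariances and Hankel matrix of the new sample
    Rhat = [Y[:, i:] @ Y[:, :N - i].T / (N - i) for i in range(p + q + 1)]
    Hhat = np.block([[Rhat[i + j + 1] for j in range(q)] for i in range(p + 1)])
    # vec is column stacking, i.e. Fortran-order flattening
    return np.sqrt(N) * (U.T @ Hhat).flatten(order="F")
```

In the cited works, a residual of this kind is then assessed with a chi-square-type test for detecting changes.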
It is our experience that this residual has highly interesting properties, both for damage detection [1] and localization [3], and for flutter monitoring [8].
Other uses of the key factorizations.
Factorization (13) is the key for a characterization of the canonical parameter vector $\theta$ in (11), and for deriving the residual. Factorization (13) is also the key for:

Proving consistency and robustness results [6];

Designing an extension of the covariance-driven subspace identification algorithm adapted to the presence and fusion of non-simultaneously recorded multiple sensor setups [7];

Proving the consistency and robustness of this extension [9];

Designing various forms of input-output covariance-driven subspace identification algorithms adapted to the presence of both known inputs and unknown excitations [10].