Team i4s


Section: Scientific Foundations

Subspace-based identification and detection

See module 6.5.

For reasons closely related to the vibration monitoring applications described in module 4.2, we have been investigating subspace-based methods, both for the identification and for the monitoring of the eigenstructure $(\lambda, \varphi_\lambda)$ of the state transition matrix $F$ of a linear dynamical state-space system:

$$\begin{cases} X_{k+1} = F\,X_k + V_k \\ Y_k = H\,X_k \end{cases} \qquad (9)$$

namely the $(\lambda, \phi_\lambda)$ defined by:

$$\det(F - \lambda I) = 0, \qquad (F - \lambda I)\,\varphi_\lambda = 0, \qquad \phi_\lambda \triangleq H\,\varphi_\lambda \qquad (10)$$

The (canonical) parameter vector in that case is:

$$\theta \triangleq \begin{pmatrix} \Lambda \\ \mathrm{vec}(\Phi) \end{pmatrix} \qquad (11)$$

where $\Lambda$ is the vector whose elements are the eigenvalues $\lambda$, $\Phi$ is the matrix whose columns are the $\phi_\lambda$'s, and vec is the column stacking operator.

Subspace-based methods is the generic name for linear system identification algorithms based on either time domain measurements or output covariance matrices, in which different subspaces of Gaussian random vectors play a key role [56]. A contribution of ours, minor but extremely fruitful, has been to write the output-only covariance-driven subspace identification method in a form that involves a parameter estimating function, from which we define a residual adapted to vibration monitoring [1]. This is explained next.

Covariance-driven subspace identification.

Let $R_i \triangleq \mathbf{E}\left(Y_k\, Y_{k-i}^T\right)$ and:

$$\mathcal{H}_{p+1,q} \triangleq \begin{pmatrix} R_1 & R_2 & \cdots & R_q \\ R_2 & R_3 & \cdots & R_{q+1} \\ \vdots & \vdots & & \vdots \\ R_{p+1} & R_{p+2} & \cdots & R_{p+q} \end{pmatrix} \triangleq \mathrm{Hank}\left(R_i\right) \qquad (12)$$

be the output covariance and Hankel matrices, respectively; and $G \triangleq \mathbf{E}\left(X_k\, Y_k^T\right)$. Direct computation of the $R_i$'s from the equations (9) leads to the well-known key factorizations:

$$R_i = H\, F^{i-1}\, G, \qquad \mathcal{H}_{p+1,q} = \mathcal{O}_{p+1}(H,F)\; \mathcal{C}_q(F,G) \qquad (13)$$

where:

$$\mathcal{O}_{p+1}(H,F) \triangleq \begin{pmatrix} H \\ HF \\ \vdots \\ HF^{p} \end{pmatrix} \qquad \text{and} \qquad \mathcal{C}_q(F,G) \triangleq \begin{pmatrix} G & FG & \cdots & F^{q-1}G \end{pmatrix} \qquad (14)$$

are the observability and controllability matrices, respectively. The observation matrix $H$ is then found in the first block-row of the observability matrix $\mathcal{O}$. The state-transition matrix $F$ is obtained from the shift invariance property of $\mathcal{O}$. The eigenstructure $(\lambda, \varphi_\lambda)$ then results from (10).

Since the actual model order is generally not known, this procedure is run with increasing model orders.

Model parameter characterization.

Choosing the eigenvectors of matrix $F$ as a basis for the state space of model (9) yields the following representation of the observability matrix:

$$\mathcal{O}_{p+1}(\theta) = \begin{pmatrix} \Phi \\ \Phi\,\Delta \\ \vdots \\ \Phi\,\Delta^{p} \end{pmatrix} \qquad (15)$$

where $\Delta \triangleq \mathrm{diag}(\Lambda)$, and $\Lambda$ and $\Phi$ are as in (11). Whether a nominal parameter $\theta_0$ fits a given output covariance sequence $(R_j)_j$ is characterized by [1]:

$$\mathcal{O}_{p+1}(\theta_0) \ \text{ and } \ \mathcal{H}_{p+1,q} \ \text{ have the same left kernel space.} \qquad (16)$$

This property can be checked as follows. From the nominal $\theta_0$, compute $\mathcal{O}_{p+1}(\theta_0)$ using (15), and perform e.g. a singular value decomposition (SVD) of $\mathcal{O}_{p+1}(\theta_0)$ for extracting a matrix $U$ such that:

$$U^T\, U = I_s \qquad \text{and} \qquad U^T\, \mathcal{O}_{p+1}(\theta_0) = 0 \qquad (17)$$

Matrix $U$ is not unique (two such matrices relate through a post-multiplication with an orthonormal matrix), but it can be regarded as a function of $\theta_0$. The characterization then reads:

$$U(\theta_0)^T\; \mathcal{H}_{p+1,q} = 0 \qquad (18)$$
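A minimal numerical sketch of this check, under our own naming conventions (the helper functions and the singular-value tolerance are assumptions, not part of the method's specification):

```python
import numpy as np

def observability_from_theta(Lam, Phi, p):
    """O_{p+1}(theta) of eq. (15): stacked blocks Phi Delta^i, Delta = diag(Lambda)."""
    Delta = np.diag(Lam)
    return np.vstack([Phi @ np.linalg.matrix_power(Delta, i) for i in range(p + 1)])

def left_null_basis(O, tol=1e-10):
    """Matrix U of eq. (17): orthonormal columns spanning the left kernel of O."""
    U_full, s, _ = np.linalg.svd(O)
    rank = int(np.sum(s > tol * s[0]))
    return U_full[:, rank:]
```

With $U$ computed from a nominal $\theta_0$ this way, the characterization (18) amounts to checking that `U.T @ H_pq` vanishes, up to statistical uncertainty when the Hankel matrix is estimated from data.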

Residual associated with subspace identification.

Assume now that a reference $\theta_0$ and a new data sample $Y_1, \ldots, Y_N$ are available. For checking whether the data agree with $\theta_0$, the idea is to compute the empirical Hankel matrix $\widehat{\mathcal{H}}_{p+1,q}$:

$$\widehat{\mathcal{H}}_{p+1,q} \triangleq \mathrm{Hank}\left(\widehat{R}_i\right), \qquad \widehat{R}_i \triangleq \frac{1}{N-i}\, \sum_{k=i+1}^{N} Y_k\, Y_{k-i}^T \qquad (19)$$

and to define the residual vector:

$$\zeta_N(\theta_0) \triangleq \sqrt{N}\; \mathrm{vec}\left( U(\theta_0)^T\; \widehat{\mathcal{H}}_{p+1,q} \right) \qquad (20)$$

Let $\theta$ be the actual parameter value for the system which generated the new data sample, and $\mathbf{E}_\theta$ be the expectation when the actual system parameter is $\theta$. From (18), we know that $\zeta_N(\theta_0)$ has zero mean when no change occurs in $\theta$, and nonzero mean if a change occurs. Thus $\zeta_N(\theta_0)$ plays the role of a residual.
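Continuing the sketch (again our own illustration, with assumed function names), the residual (20) can be computed directly from a data sample and a left-kernel matrix $U(\theta_0)$:

```python
import numpy as np

def subspace_residual(Y, U0, p, q):
    """Residual zeta_N(theta_0) of eq. (20), from a data sample Y of shape (r, N)
    and a left-kernel matrix U0 = U(theta_0) satisfying eq. (17)."""
    r, N = Y.shape
    # Empirical covariances and Hankel matrix, eq. (19)
    Rhat = [Y[:, i:] @ Y[:, :N - i].T / (N - i) for i in range(p + q + 1)]
    Hhat = np.block([[Rhat[i + j + 1] for j in range(q)] for i in range(p + 1)])
    # vec is the column-stacking operator, hence Fortran order
    return np.sqrt(N) * (U0.T @ Hhat).flatten(order='F')
```

Under the reference $\theta_0$ this vector fluctuates around zero with entries of order one, whereas after a change in $\theta$ its mean drifts away at rate $\sqrt{N}$, which is what makes it usable for detection.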

It is our experience that this residual has highly interesting properties, both for damage detection [1] and localization [3], and for flutter monitoring [8].

Other uses of the key factorizations.

Factorization (13) is the key for a characterization of the canonical parameter vector $\theta$ in (11), and for deriving the residual. Factorization (13) is also the key for:

