Section: New Results
Semi and nonparametric methods
Modelling extremal events
Participants : Stéphane Girard, El Hadji Deme.
Joint work with: L. Gardes (Univ. Strasbourg) and E. Deme (Univ. Gaston Berger, Sénégal)
We are working on the estimation of the second-order parameter $\rho$ (see paragraph 3.3.1). We proposed a new family of estimators encompassing the existing ones (see for instance [69], [68]). This work is in collaboration with El Hadji Deme, who obtained a grant (IBNI prize) to work within the Mistis team on extreme-value statistics. The results are published in [18].
In addition to this work, we have written a review of Weibull-tail distributions [29].
Conditional extremal events
Participants : Stéphane Girard, Gildas Mazo, Jonathan El Methni.
Joint work with: L. Gardes (Univ. Strasbourg) and A. Daouia (Univ. Toulouse I and Univ. Catholique de Louvain)
The goal of the PhD thesis of Alexandre Lekina was to contribute to the development of theoretical and algorithmic models for conditional extreme-value analysis, i.e. the situation where some covariate information $X$ is recorded simultaneously with a quantity of interest $Y$. In such a case, the tail heaviness of $Y$ depends on $X$, and thus the tail index as well as the extreme quantiles are also functions of the covariate. We combine nonparametric smoothing techniques [66] with extreme-value methods in order to obtain efficient estimators of the conditional tail index and conditional extreme quantiles. When the covariate is functional and random (random design), we focus on kernel methods [16].
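To fix ideas, a moving-window variant of the Hill estimator gives a minimal sketch of conditional tail-index estimation. This is an illustrative assumption, not the estimators of [16]: the function name, the uniform window, and the fixed choices of `h` and `k` are placeholders, whereas the published estimators use smooth kernel weights and data-driven tuning.

```python
import numpy as np

def conditional_hill(x0, X, Y, h, k):
    """Hill-type estimate of the conditional tail index at covariate value x0.

    Keeps the responses whose covariate lies within a window of half-width h
    around x0, then averages log-excesses over the k largest of them.
    """
    local = np.sort(Y[np.abs(X - x0) <= h])   # responses in the local window
    top = local[-(k + 1):]                    # the k+1 largest local responses
    return np.mean(np.log(top[1:] / top[0]))  # classical Hill estimator form

# Toy data where the tail index of Y truly varies with the covariate X
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 5000)
gamma = 0.3 + 0.2 * X                         # true conditional tail index
Y = rng.uniform(size=5000) ** (-gamma)        # Pareto-type responses
print(conditional_hill(0.5, X, Y, h=0.1, k=50))  # estimates gamma(0.5) = 0.4
```

The window localizes the sample around $x=0.5$, so the estimator targets the local tail index rather than a global one.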
Conditional extremes are studied in climatology, where one is interested in how climate change over the years might affect extreme temperatures or rainfall. In this case, the covariate is univariate (time). Bivariate examples include the study of extreme rainfall as a function of geographical location. The application part of the study is joint work with the LTHE (Laboratoire d'étude des Transferts en Hydrologie et Environnement) located in Grenoble.
Estimation of extreme risk measures
Participants : Stéphane Girard, Jonathan El Methni, El Hadji Deme.
Joint work with: L. Gardes and A. Guillou (Univ. Strasbourg)
One of the most popular risk measures is the Value-at-Risk (VaR), introduced in the 1990's. In statistical terms, the VaR at level $\alpha \in (0,1)$ corresponds to the upper $\alpha$-quantile of the loss distribution. The Value-at-Risk, however, suffers from several weaknesses. First, it provides only pointwise information: VaR($\alpha$) does not take into consideration what the loss will be beyond this quantile. Second, random loss variables with light-tailed and heavy-tailed distributions may have the same Value-at-Risk. Finally, Value-at-Risk is not a coherent risk measure since it is not subadditive in general. A coherent alternative risk measure is the Conditional Tail Expectation (CTE), also known as Tail-Value-at-Risk, Tail Conditional Expectation or, in the case of a continuous loss distribution, Expected Shortfall. The CTE is defined as the expected loss given that the loss lies above the upper $\alpha$-quantile of the loss distribution. This risk measure thus takes into account the whole information contained in the upper tail of the distribution. It is frequently encountered in financial investment and in the insurance industry. In [36], we established the asymptotic properties of the classical CTE estimator in the case of extreme losses, i.e. when $\alpha \to 0$ as the sample size increases. We exhibited the asymptotic bias of this estimator and proposed a bias correction based on extreme-value techniques [36]. Similar developments have been achieved for the Proportional Hazard Premium risk measure [19]. In [22], we study the situation where some covariate information is available; we thus have to deal with conditional extremes (see paragraph 6.4.2). We also proposed a new risk measure, the Conditional Tail Moment, which encompasses various risk measures, such as the CTE, as particular cases.
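For reference, the plain empirical versions of both risk measures can be computed in a few lines. This is a minimal sketch on simulated heavy-tailed losses; the work in [36] studies precisely such estimators when $\alpha \to 0$ and corrects their bias with extreme-value techniques.

```python
import numpy as np

def var_cte(losses, alpha):
    """Empirical Value-at-Risk and Conditional Tail Expectation at level alpha.

    VaR(alpha) is the upper alpha-quantile of the loss distribution;
    CTE(alpha) is the mean loss among losses exceeding VaR(alpha).
    """
    var = np.quantile(losses, 1 - alpha)   # upper alpha-quantile
    cte = losses[losses >= var].mean()     # average loss in the upper tail
    return var, cte

rng = np.random.default_rng(1)
losses = rng.pareto(3.0, 100_000)          # heavy-tailed toy losses
var, cte = var_cte(losses, alpha=0.01)
print(var, cte)                            # the CTE always exceeds the VaR
```

Because the CTE averages over the whole tail beyond VaR($\alpha$), it is always at least as large as the VaR and is sensitive to tail heaviness, which the VaR alone ignores.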
Multivariate extremal events
Participants : Stéphane Girard, Gildas Mazo, Florence Forbes, Van Trung Pham.
Joint work with: C. Amblard (TimB in TIMC laboratory, Univ. Grenoble I) and L. Menneteau (Univ. Montpellier II)
Copulas are a useful tool to model multivariate distributions [72]. We first developed an extension of some particular copulas [1]. This led to a new class of bivariate copulas defined on matrices [56], and some analogies have been shown between matrix and copula properties.
However, while there exist various families of bivariate copulas, much less has been done in higher dimensions. To this end, an interesting class of copulas based on products of transformed copulas has been proposed in the literature. The use of this class for practical high-dimensional problems remains challenging: constraints on the parameters and the product form make inference, and in particular the likelihood computation, difficult. We proposed a new class of high-dimensional copulas based on a product of transformed bivariate copulas [61]. No constraint on the parameters restricts the applicability of the proposed class, which is well suited to high-dimensional applications. Furthermore, the analytic form of the copulas within this class allows one to associate a natural graphical structure, which helps to visualize the dependencies and to compute the likelihood efficiently even in high dimension. The extremal properties of these copulas are also derived, and an R package has been developed.
As an alternative, we also proposed a new class of copulas constructed by introducing a latent factor. Conditional independence with respect to this factor and the use of a nonparametric class of bivariate copulas lead to interesting properties such as explicitness, flexibility and parsimony. In particular, various tail behaviours are exhibited, making it possible to model various extreme situations. A pairwise moment-based inference procedure has also been proposed and the asymptotic normality of the corresponding estimator has been established [53].
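A one-factor Gaussian copula gives the simplest illustration of this construction. Note the assumption in this sketch: the class in [53] uses nonparametric bivariate linking copulas, not the Gaussian ones below, but it relies on the same conditional-independence-given-a-latent-factor structure.

```python
import numpy as np
from scipy.stats import norm

def one_factor_gaussian_copula(n, d, rho, rng):
    """Sample n points from a d-dimensional one-factor Gaussian copula.

    The d components are conditionally independent given the latent factor z;
    marginally each is uniform on (0, 1), and all pairwise dependence is
    channelled through z.
    """
    z = rng.standard_normal((n, 1))                  # latent factor
    eps = rng.standard_normal((n, d))                # idiosyncratic noise
    x = np.sqrt(rho) * z + np.sqrt(1.0 - rho) * eps  # equicorrelated Gaussians
    return norm.cdf(x)                               # probability-integral transform

rng = np.random.default_rng(2)
u = one_factor_gaussian_copula(50_000, 4, rho=0.5, rng=rng)
print(np.corrcoef(u[:, 0], u[:, 1])[0, 1])  # pairwise dependence induced by z
```

Parsimony is visible here: a single parameter `rho` governs all pairwise dependencies, whereas a generic 4-dimensional copula would need one parameter per pair or more.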
Level sets estimation
Participant : Stéphane Girard.
Joint work with: A. Guillou and L. Gardes (Univ. Strasbourg), G. Stupfler (Univ. AixMarseille) and A. Daouia (Univ. Toulouse I and Univ. Catholique de Louvain)
The boundary bounding a set of points is viewed as the largest level set of the points' distribution. This is then an extreme quantile curve estimation problem. We proposed estimators based on projection as well as on kernel regression methods applied to the extreme-value set, for particular sets of points [10]. We also investigated the asymptotic properties of existing estimators when used in extreme situations. For instance, we established in collaboration with G. Stupfler that the so-called geometric quantiles have very counter-intuitive properties in such situations [60] and thus should not be used to detect outliers.
In collaboration with A. Daouia, we investigated the application of such methods in econometrics [17]: a new characterization of the partial boundaries of a free-disposal multivariate support is introduced by making use of large quantiles of a simple transformation of the underlying multivariate distribution. Pointwise empirical and smoothed estimators of the full and partial support curves are built as extreme sample and smoothed quantiles. Extreme-value theory then applies automatically to the empirical frontiers, and we show that some fundamental properties of extreme order statistics carry over to Nadaraya's estimates of upper quantile-based frontiers.
In collaboration with G. Stupfler and A. Guillou, new estimators of the boundary were introduced. The regression is performed on the whole set of points, the selection of the "highest" points being performed automatically by the introduction of high-order moments [26], [27].
Retrieval of Mars surface physical properties from OMEGA hyperspectral images
Participants : Stéphane Girard, Alessandro Chiancone.
Joint work with: S. Douté (Laboratoire de Planétologie de Grenoble), J. Chanussot (Gipsa-lab and Grenoble-INP) and J. Saracco (Univ. Bordeaux).
Visible and near-infrared imaging spectroscopy is one of the key techniques to detect, map and characterize mineral and volatile (e.g. water ice) species existing at the surface of planets. Indeed, the chemical composition, granularity, texture, physical state, etc. of the materials determine the existence and morphology of the absorption bands. The resulting spectra therefore contain very useful information. Current imaging spectrometers provide data organized as three-dimensional hyperspectral images: two spatial dimensions and one spectral dimension. Our goal is to estimate the functional relationship $F$ between some observed spectra and some physical parameters. To this end, a database of synthetic spectra is generated by a physical radiative transfer model and used to estimate $F$. The high dimension of the spectra is reduced by Gaussian regularized sliced inverse regression (GRSIR) to overcome the curse of dimensionality and, consequently, the sensitivity of the inversion to noise (ill-conditioned problems) [57]. We have also defined an adaptive version of the method which is able to deal with blockwise evolving data streams [15].
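The dimension-reduction step can be sketched as a regularized sliced inverse regression. This is a simplified sketch: equal-size slices and a fixed Tikhonov parameter `tau` are illustrative assumptions, whereas GRSIR as used in [57] selects the regularization from the data and operates on radiative-transfer spectra.

```python
import numpy as np
from scipy.linalg import eigh

def regularized_sir(X, Y, n_slices=10, tau=1e-2):
    """First effective-dimension-reduction direction by regularized SIR.

    Slices the response Y, forms the covariance of the slice means of X,
    and solves the Tikhonov-regularized generalized eigenproblem
    Gamma v = lambda (Sigma + tau I) v.
    """
    Xc = X - X.mean(axis=0)
    slices = np.array_split(np.argsort(Y), n_slices)
    means = np.array([Xc[s].mean(axis=0) for s in slices])  # slice means of X
    gamma = means.T @ means / n_slices             # between-slice covariance
    sigma = Xc.T @ Xc / len(X)                     # total covariance
    _, v = eigh(gamma, sigma + tau * np.eye(X.shape[1]))
    return v[:, -1] / np.linalg.norm(v[:, -1])     # leading direction, unit norm

# Toy inverse problem: Y depends on X only through the direction beta
rng = np.random.default_rng(3)
X = rng.standard_normal((5000, 10))
beta = np.eye(10)[0]                               # true direction (first axis)
Y = (X @ beta) ** 3 + 0.1 * rng.standard_normal(5000)
d = regularized_sir(X, Y)
print(abs(d @ beta))                               # close to 1: direction recovered
```

The regularization term `tau * np.eye(...)` is what keeps the inversion stable when `sigma` is nearly singular, which is exactly the ill-conditioning issue raised by high-dimensional spectra.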
High-dimensional change-point detection with sparse alternatives
Participant : Farida Enikeeva.
Joint work with: Zaid Harchaoui (LEAR team, Inria Grenoble)
The change-point problem is a classical problem of statistics that arises in various applications such as signal processing, bioinformatics and financial market analysis. The goal of change-point analysis is to make inference about the time at which the distribution of the observed data changes. We consider the problem of detecting a simultaneous change in mean in a sequence of Gaussian vectors.
The state-of-the-art approach to change-point detection/estimation is based on the assumption of a growing number of observations and a fixed signal dimension. We work in a high-dimensional setting, assuming that the vector dimension tends to infinity and that the length of the sequence grows more slowly than the dimension of the signal. Assuming that the change occurs only in a subset of the vector components of unknown cardinality, we can reduce our problem to that of testing for nonzero components in a sequence of sparse Gaussian vectors. We construct a testing procedure that is adaptive to the number of components affected by the change. It is based on a combination of two chi-squared-type test statistics, and the combined test performs optimally in both the high- and moderate-sparsity regimes. We obtain the detection boundary of the test and show its rate optimality in the minimax sense.
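The combination of a dense and a sparse statistic can be illustrated on a known candidate change time. This is an illustrative sketch only: the CUSUM standardization, the max-type sparse statistic and the Bonferroni calibration below are simplifications of the procedure and constants studied in [59].

```python
import numpy as np
from scipy.stats import chi2, norm

def combined_change_test(Y, t, alpha=0.05):
    """Test for a change in mean at time t in a sequence of Gaussian vectors.

    Builds the standardized CUSUM vector z (each component ~ N(0,1) under the
    null of no change) and combines a dense chi-squared statistic sum(z_j^2)
    with a sparse max-type statistic; rejects if either exceeds its
    level-alpha/2 critical value.
    """
    n, d = Y.shape
    z = np.sqrt(t * (n - t) / n) * (Y[:t].mean(axis=0) - Y[t:].mean(axis=0))
    dense = (z ** 2).sum() > chi2.ppf(1 - alpha / 2, d)       # many components change
    sparse = np.abs(z).max() > norm.ppf(1 - alpha / (4 * d))  # few components change
    return bool(dense or sparse)

rng = np.random.default_rng(4)
Y = rng.standard_normal((60, 1000))
Y[30:, :5] += 2.0                     # change in only 5 of 1000 components
print(combined_change_test(Y, 30))    # the sparse branch detects the change
```

In this example the dense chi-squared statistic barely moves (5 changed components out of 1000 are lost in the $\chi^2_{1000}$ noise), while the max statistic picks up the large standardized deviations; combining the two is what makes the test adaptive to the unknown sparsity.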
The results of the paper [59] were presented at:

- NIPS 2013, Workshop on Modern Nonparametric Methods in Machine Learning (Dec. 2013)

- Conference on Structural Inference in Statistics, Potsdam, Germany (Sept. 2013)
Yield Improvement by the Redundancy Method for Component Calibration
Participant : Farida Enikeeva.
Joint work with: Dominique Morche (CEA-LETI) and Alp Oguz (CEA-LETI)
This work [23] was done in the framework of the Optimyst II project of MINALOGIC, in collaboration with CEA-LETI and LJK-UJF. In this project we explore the benefits of the redundant-channels methodology for the calibration of electronic components.
The demand for high data rates in communication puts stringent requirements on components' dynamic range. However, the extreme size reduction in advanced technologies inadvertently results in increased process variability, which inherently limits performance. The redundancy approach is based on the idea of dividing an elementary component (capacitor, resistor, transistor) into several subsets and then choosing an optimal combination of these subsets so as to produce a component with very precise characteristics. For several years, the redundancy method has been identified as complementary to digital calibration for improving performance. In practice, it is hard for a designer to select the optimal number of redundant components that achieves the desired production yield while minimizing the area occupied by the components. The usual way to solve this problem is to resort to statistical simulations, which are time-consuming and sometimes misleading. We propose a normal approximation of the yield in order to estimate the number of redundant components needed to minimize the area occupied by the components.
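The flavour of the normal-approximation calculation can be conveyed on the simplest redundancy scheme. The setting below is hypothetical and chosen for illustration: one copy among n independent N(0, sigma^2) copies must fall within the tolerance, whereas the actual work optimizes combinations of sub-components and the occupied silicon area jointly.

```python
import math

def min_redundant_copies(sigma, tol, target_yield):
    """Smallest n such that at least one of n redundant copies lies within
    +/- tol of its nominal value with probability >= target_yield.

    Each copy's deviation is modeled as N(0, sigma^2) (normal approximation),
    so a single copy is in-spec with probability p = 2*Phi(tol/sigma) - 1 and
    the yield with n independent copies is 1 - (1 - p)**n.
    """
    phi = 0.5 * (1.0 + math.erf(tol / (sigma * math.sqrt(2.0))))  # Phi(tol/sigma)
    p = 2.0 * phi - 1.0                       # one copy in-spec
    n = 1
    while 1.0 - (1.0 - p) ** n < target_yield:
        n += 1
    return n

# 10% process spread, +/-5% tolerance, 99.9% target yield
print(min_redundant_copies(sigma=0.10, tol=0.05, target_yield=0.999))  # -> 15
```

A closed-form answer like this replaces a Monte-Carlo sweep over candidate designs, which is the practical advantage of the normal approximation over purely simulation-based yield estimation.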