## Section: New Results

### Feature Grouping as a Stochastic Regularizer for High-Dimensional Structured Data

In many applications where collecting data is expensive, for example neuroscience or medical imaging, the sample size is typically small compared to the feature dimension. In this setting it is challenging to train expressive, non-linear models without overfitting. These datasets call for intelligent regularization that exploits known structure, such as correlations between the features arising from the measurement device. However, existing structured regularizers need specially crafted solvers, which are difficult to apply to complex models. We propose a new regularizer specifically designed to leverage structure in the data in a way that can be applied efficiently to complex models. Our approach relies on feature grouping, using a fast clustering algorithm inside a stochastic gradient descent loop: given a family of feature groupings that capture feature covariations, we randomly select one of these groups at each iteration. We show that this approach amounts to enforcing a denoising regularizer on the solution. The method is easy to implement in many model architectures, such as fully connected neural networks, and has a linear computational cost. We apply this regularizer to a real-world fMRI dataset and the Olivetti Faces dataset. Experiments on both datasets demonstrate that the proposed approach produces models that generalize better than those trained with conventional regularizers, and also improves convergence speed.

Figure 6. Illustration of the proposed approach: forward propagation of a neural network with a single hidden layer using feature grouping during training. The parameters of the neural network to be estimated are $\mathbf{W}_0$, $\mathbf{b}_0$, $\mathbf{W}_1$, $\mathbf{b}_1$. A bank of feature grouping matrices is pre-generated, where each matrix is computed from a sub-sample of the training set. At each SGD iteration, a feature grouping matrix is sampled from the bank of pre-generated matrices. The gradient is then computed with respect to $\hat{\mathbf{W}}_0$ to update $\mathbf{W}_0$ in backpropagation.
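The training loop sketched in Figure 6 can be illustrated in NumPy. This is a minimal sketch, not the authors' implementation: the grouping step here uses a naive k-means over feature profiles as a stand-in for the paper's fast clustering algorithm, and all sizes (`n_groups`, the bank size, the learning rate) are illustrative choices. Each grouping matrix $\mathbf{P}$ averages the features within a group, the grouped weights are $\hat{\mathbf{W}}_0 = \mathbf{P}\mathbf{W}_0$, and the gradient is chained back through $\mathbf{P}$ to update $\mathbf{W}_0$.

```python
import numpy as np

rng = np.random.default_rng(0)

def grouping_matrix(X_sub, n_groups, rng):
    """Build one feature-grouping matrix P from a data sub-sample.

    Features are clustered with a naive k-means on their sub-sample
    profiles (a stand-in for the paper's fast clustering step). Each
    row of P averages the features of one group, so P @ x maps a
    d-dimensional input to an n_groups-dimensional grouped input.
    """
    d = X_sub.shape[1]
    centers = X_sub.T[rng.choice(d, n_groups, replace=False)].copy()
    for _ in range(10):
        dists = ((X_sub.T[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for k in range(n_groups):
            if np.any(labels == k):
                centers[k] = X_sub.T[labels == k].mean(0)
    P = np.zeros((n_groups, d))
    for k in range(n_groups):
        members = labels == k
        if members.any():
            P[k, members] = 1.0 / members.sum()
    return P

# Toy data: n samples, d features, binary labels (illustrative only).
n, d, n_groups, n_hidden = 200, 50, 10, 16
X = rng.normal(size=(n, d))
y = (X[:, :5].sum(1) > 0).astype(float)

# Pre-generate a bank of grouping matrices from training sub-samples.
bank = [grouping_matrix(X[rng.choice(n, 50)], n_groups, rng)
        for _ in range(8)]

# Single-hidden-layer network; W0 lives in the original feature space.
W0 = rng.normal(scale=0.1, size=(d, n_hidden))
b0 = np.zeros(n_hidden)
w1 = rng.normal(scale=0.1, size=n_hidden)
b1 = 0.0
lr = 0.1

def eval_loss():
    """Cross-entropy on the full data, under a fixed grouping (bank[0])."""
    h = np.maximum((X @ bank[0].T) @ (bank[0] @ W0) + b0, 0)
    p = 1.0 / (1.0 + np.exp(-(h @ w1 + b1)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

loss_before = eval_loss()

for step in range(300):
    P = bank[rng.integers(len(bank))]   # sample one grouping per iteration
    W0_hat = P @ W0                     # grouped weights, (n_groups, n_hidden)
    idx = rng.choice(n, 32)
    Xb, yb = X[idx], y[idx]
    z = Xb @ P.T                        # grouped inputs, (32, n_groups)
    pre = z @ W0_hat + b0
    h = np.maximum(pre, 0)
    p = 1.0 / (1.0 + np.exp(-(h @ w1 + b1)))
    # Backpropagation: gradient w.r.t. W0_hat, chained through P to W0.
    grad_out = (p - yb) / len(yb)
    grad_h = np.outer(grad_out, w1) * (pre > 0)
    grad_W0 = P.T @ (z.T @ grad_h)      # d(W0_hat)/d(W0) = P
    W0 -= lr * grad_W0
    b0 -= lr * grad_h.sum(0)
    w1 -= lr * h.T @ grad_out
    b1 -= lr * grad_out.sum()

loss_after = eval_loss()
```

Because a different grouping matrix is drawn at every iteration, the network never sees the raw features through a single fixed projection, which is what gives the procedure its stochastic, denoising-regularizer character; the per-step overhead is one sparse matrix product, hence the linear cost noted above.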

More information can be found in [25].