## Section: New Results

### Optimization

#### Nonsmooth analysis of spectral sets

Participant : Jérôme Malick.

Spectral sets are sets of symmetric matrices defined by constraints on the eigenvalues only: S is a spectral set if S = λ^{-1}(C) with C a subset of ℝ^n, where λ maps a symmetric matrix to the vector of its eigenvalues. A spectral set S inherits properties of the underlying set C, such as convexity. We prove in [30] that the spectral sets associated to smooth manifolds in ℝ^n (having some local symmetry) are themselves manifolds in the space of matrices. This result looks simple but generalizes several useful particular cases, and it was extremely difficult to prove: we bring together tools from nonsmooth analysis, differential geometry, group theory and spectral analysis.
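As a purely illustrative example (ours, not from [30]): membership of a symmetric matrix in a spectral set S = λ^{-1}(C) can be tested by computing its eigenvalues and checking whether they lie in C. Taking C to be the nonnegative orthant makes S the cone of positive semidefinite matrices, a classic spectral set.

```python
# Illustration (not from [30]): a spectral set S = lambda^{-1}(C) contains
# a symmetric matrix iff its eigenvalue vector lies in C. Here C is the
# nonnegative orthant, so S is the PSD cone.
import numpy as np

def in_spectral_set(A, C_contains, tol=1e-10):
    """Test membership of a symmetric matrix A in S = lambda^{-1}(C)."""
    eigvals = np.linalg.eigvalsh(A)     # eigenvalues, in ascending order
    return C_contains(eigvals, tol)

def C_nonneg(eigvals, tol):             # C = nonnegative orthant of R^n
    return bool(np.all(eigvals >= -tol))

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # eigenvalues 1 and 3 -> in S
B = np.array([[0.0, 2.0], [2.0, 0.0]])  # eigenvalues -2 and 2 -> not in S
print(in_spectral_set(A, C_nonneg))     # True
print(in_spectral_set(B, C_nonneg))     # False
```

Note that the test depends on A only through its eigenvalues, which is exactly the defining property of a spectral set.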

#### Semidefinite programming and applications

Participant : Jérôme Malick.

Many problems in Control and Combinatorial Optimization can be modeled as semidefinite optimization problems; for numerical resolution, however, this approach is limited by the performance of semidefinite optimization solvers, which often run into numerical trouble when problem sizes get large. We contribute partial solutions in two particular cases:

- Using a standard convex analysis algorithm (the proximal algorithm), we develop in [23] a new algorithm for solving semidefinite optimization problems in the presence of a very large number of constraints (and a small number of variables). Our approach turns out to be very efficient: it outperforms all known methods for some combinatorial problems, such as computing the Lovász number.

- We propose in [19] a new approach by projection to solve semidefinite feasibility problems, such as computing SOS decompositions of Lyapunov polynomials. This natural, geometric idea is as simple as it is efficient: we release a short Matlab software implementing it, and as shown in [19], it is competitive on those semidefinite feasibility problems with more sophisticated, reliable tools (such as SeDuMi).
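To convey the flavor of the projection idea on a toy problem (a hedged sketch, with names of our own choosing, not the software of [19]): a semidefinite feasibility problem asks for a matrix in the intersection of the PSD cone and an affine set, and one can simply alternate the two projections. Here the affine set fixes the diagonal to 1, so a feasible point is a correlation matrix.

```python
# Hedged sketch (ours, not the method of [19] exactly): semidefinite
# feasibility by alternating projections, seeking a matrix that is both
# PSD and has unit diagonal.
import numpy as np

def proj_psd(X):
    """Project a symmetric matrix onto the PSD cone: clip eigenvalues at 0."""
    w, V = np.linalg.eigh(X)
    return (V * np.maximum(w, 0.0)) @ V.T

def proj_unit_diag(X):
    """Project onto the affine set {X symmetric : diag(X) = 1}."""
    Y = X.copy()
    np.fill_diagonal(Y, 1.0)
    return Y

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
X = (M + M.T) / 2                     # symmetric start, typically infeasible
for _ in range(500):
    X = proj_unit_diag(proj_psd(X))

eigs = np.linalg.eigvalsh(X)
print(np.allclose(np.diag(X), 1.0), eigs.min() >= -1e-6)
```

Each iteration costs one symmetric eigendecomposition, which is what makes this geometric approach so simple to implement.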

#### Advances on alternating projections theory

Participant : Jérôme Malick.

Alternating projections are simple and efficient methods to solve feasibility problems (that is, to find a point in the intersection of several sets); they are widely used in the engineering sciences. One striking example is the design of “tight frames” [43]; there are many other applications in image processing, “compressed sensing” in particular.

In several successful applications, linear convergence is observed but
not explained by the theory, which focuses on alternating *convex*
projections, whereas these applications require projections onto
nonconvex sets.

Our paper [37] proves linear convergence of
the method under very mild assumptions, namely that the
intersection is *strong* (i.e. essentially “linearly regular”). Note
that convexity is not necessary to get the local convergence result. The
proofs of these results rely heavily on tools from nonsmooth geometry
[40].
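The linear rate can be seen on the simplest possible example (our toy illustration, unrelated to the nonconvex setting of [37]): two lines in the plane crossing at the origin with angle θ. Each full cycle of alternating projections contracts the distance to the intersection by exactly cos²θ.

```python
# Toy illustration of linear convergence of alternating projections:
# two lines in R^2 meeting at the origin with angle theta; the classical
# contraction factor per full cycle is cos(theta)**2.
import numpy as np

def proj_line(x, d):
    """Project x onto the line through 0 with unit direction d."""
    return np.dot(x, d) * d

theta = 0.3
d1 = np.array([1.0, 0.0])
d2 = np.array([np.cos(theta), np.sin(theta)])

x = np.array([2.0, 1.0])
errs = []
for _ in range(30):
    x = proj_line(proj_line(x, d2), d1)   # one full cycle
    errs.append(np.linalg.norm(x))        # the intersection is {0}

ratios = [errs[k + 1] / errs[k] for k in range(5, 15)]
print(np.allclose(ratios, np.cos(theta) ** 2))
```

The smaller the angle between the sets (the weaker the regularity of the intersection), the closer the factor is to 1, and the slower the convergence; this is exactly the geometry the regularity assumption controls.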

#### Frictional contacts

Participants : Vincent Acary, Florent Cadoux, Claude Lemaréchal, Jérôme Malick.

We have designed a new algorithm to compute the Coulomb friction forces in a nonsmooth mechanical system; see [34] . The algorithm is hierarchical: in an inner stage, the sliding velocities are fixed and the corresponding forces are computed as solutions of a second-order cone program (a simple quadratic programming problem when the dimension is 2); in this formulation, the sliding velocities then have to satisfy a system of nonlinear equations, which is solved by a Newton method in the outer stage.
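The hierarchical structure, stripped of all mechanics, can be sketched on a scalar toy problem (a purely structural illustration of ours, not the algorithm of [34]): for a fixed outer variable the inner optimization problem has a closed-form solution, and the outer nonlinear equation is solved by Newton's method.

```python
# Purely structural toy (not the mechanics of [34]): inner stage solves
# an optimization problem for a fixed outer variable s; outer stage runs
# Newton's method on the residual that s must satisfy.
def inner_force(s):
    """Inner stage: for fixed 'sliding velocity' s, the 'force' solves
    min_f 0.5 * (f - (1 - s))**2  s.t.  f >= 0  (closed form here)."""
    return max(1.0 - s, 0.0)

def residual(s):
    """Outer stage: nonlinear equation the outer variable must satisfy."""
    return s - inner_force(s)

def residual_prime(s):
    """Derivative of the residual (piecewise, from the inner solution)."""
    return 1.0 + (1.0 if 1.0 - s > 0 else 0.0)

s = 0.0
for _ in range(50):
    r = residual(s)
    if abs(r) < 1e-12:
        break
    s -= r / residual_prime(s)

print(s)  # 0.5
```

In the actual algorithm the inner stage is a second-order cone program rather than a one-dimensional projection, but the division of labor between the two stages is the same.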

This approach has been implemented and compared with others, in particular that of [35], which we also improved by inserting a stabilizing device.

#### Proximal algorithm for smooth functions

Participants : Marc Fuentes, Claude Lemaréchal, Jérôme Malick.

This research was motivated by the minimization of a smooth but ill-conditioned
function f, such as those presented in [39]. For this,
the proximal approach consists
in constructing the sequence x_{k+1} = p(x_{k}), where p(x) minimizes the
function y ↦ f(y) + (1/(2t))‖y − x‖². Unless f is simple enough, actual
implementations require a stopping rule for the internal minimization algorithm
computing p(x_{k}). The rationale for most such rules is to guarantee
‖x_{k+1} − p(x_{k})‖ ≤ ε_{k}, where ε_{k} is essentially
pre-assigned. We propose in [17] a rule based on a sufficient
decrease f(x_{k}) − f(x_{k+1}), applicable when f is differentiable. We prove
convergence of the resulting algorithm and illustrate it on some test-problems
from the CUTEr library.
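The outer/inner structure of an inexact proximal-point loop can be sketched as follows (a hedged illustration with a simplified inner stopping test of our own, not the precise decrease rule of [17]): each prox-point p(x_{k}) is computed approximately by gradient descent on the regularized function, here for an ill-conditioned quadratic.

```python
# Hedged sketch of an inexact proximal-point loop (simplified inner
# stopping test, not the rule of [17]): minimize an ill-conditioned
# quadratic, computing each prox-point approximately.
import numpy as np

A = np.diag([1.0, 100.0])               # ill-conditioned Hessian
f = lambda x: 0.5 * x @ A @ x
grad_f = lambda x: A @ x

t = 0.1                                 # prox parameter

def approx_prox(x, inner_tol=1e-8, max_inner=10_000):
    """Approximately minimize y -> f(y) + ||y - x||^2 / (2t) by gradient descent."""
    L = np.linalg.norm(A, 2) + 1.0 / t  # Lipschitz constant of the inner gradient
    y = x.copy()
    for _ in range(max_inner):
        g = grad_f(y) + (y - x) / t     # gradient of the regularized function
        if np.linalg.norm(g) <= inner_tol:  # simplified inner stopping test
            break
        y = y - g / L
    return y

x = np.array([10.0, 10.0])
for k in range(200):
    x = approx_prox(x)                  # x_{k+1} ~ p(x_k)

print(f(x) < 1e-10)
```

The regularization term 1/(2t)‖y − x‖² improves the conditioning of each inner problem, which is precisely why the proximal approach helps on ill-conditioned functions.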