
Section: New Results

Revised definition of the Multiple Gradient Descent Algorithm (MGDA)

Participant : Jean-Antoine Désidéri.

The Multiple Gradient Descent Algorithm (MGDA) was originally defined to identify a descent direction common to a set of gradient vectors. By a completely general principle, this direction is opposite to the element of minimum norm in the convex hull of the gradients, where the norm is induced by a general scalar product on ℝ^n. From a theoretical viewpoint, the notion of Pareto-stationarity was introduced, and it was established that if a point is Pareto-optimal and the objective functions are locally differentiable and convex, then the point is Pareto-stationary.

From a computational viewpoint, the descent direction can be determined by solving a Quadratic-Programming (QP) problem. However, when the gradients are linearly independent, a direct construction via a Gram-Schmidt orthogonalization process was preferred. We have now generalized this orthogonalization process by introducing a hierarchical strategy for ordering the subfamily of gradients used to construct the orthogonal basis. This strategy aims at making the (multi-dimensional) cone associated with the convex hull of the subfamily as large as possible. As a result, in the case of linearly dependent gradients, the orthogonalization process not only provides a basis of the spanned subspace, but the subfamily is also selected so that its convex hull is representative of a large cone, encompassing, in the most favorable cases, all the given gradients.

This change in the definition of the algorithm allowed us to restate the QP formulation in a suitable basis, in a way that is well-suited to cases where the number of gradients exceeds, possibly vastly, the dimension of the vector space. The revision makes the algorithm much more general and robust [43].
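To illustrate the QP formulation, the minimum-norm element of the convex hull can be computed, in the simplest Euclidean setting, as a small quadratic program over the simplex of convex-combination weights. The sketch below is our own illustration, not the authors' implementation: the function name `mgda_direction` and the use of SciPy's SLSQP solver are assumptions made for this example.

```python
# Illustrative sketch: MGDA descent direction as minus the minimum-norm
# element of the convex hull of the gradients (Euclidean scalar product).
import numpy as np
from scipy.optimize import minimize

def mgda_direction(gradients):
    """Return -omega, where omega = argmin || sum_i a_i g_i ||^2
    over weights a on the simplex (a_i >= 0, sum_i a_i = 1)."""
    G = np.asarray(gradients, dtype=float)   # shape (m, n): m gradients in R^n
    m = G.shape[0]
    Q = G @ G.T                              # Gram matrix of the gradients

    obj = lambda a: a @ Q @ a                # squared norm of the convex combination
    jac = lambda a: 2.0 * Q @ a
    cons = [{"type": "eq", "fun": lambda a: a.sum() - 1.0}]
    bnds = [(0.0, 1.0)] * m
    a0 = np.full(m, 1.0 / m)                 # start from the barycenter
    res = minimize(obj, a0, jac=jac, bounds=bnds, constraints=cons,
                   method="SLSQP")
    omega = res.x @ G                        # minimum-norm vector in the hull
    return -omega                            # common descent direction
                                             # (omega ~ 0 at a Pareto-stationary point)

# Example: two conflicting gradients in R^2
d = mgda_direction([[1.0, 0.0], [0.0, 1.0]])
# d is a descent direction for both objectives: g_i . d <= 0 for each gradient
```

Note that this direct QP approach is exactly what becomes costly or ill-conditioned when the gradients are numerous or linearly dependent, which is what motivates the hierarchical Gram-Schmidt selection described above.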