
SHARP
Sharp theoretical and algorithmic principles for frugal ML
Overview
Focus research on architectures, learning principles and data to define the most frugal learning methods while preserving model performance.
Rémi Gribonval, Research Director at Inria
The major challenge of the SHARP project is to design, analyze and deploy intrinsically frugal models (neural or not) able to achieve the versatility and performance of the best models while requiring only a vanishing fraction of the resources currently needed.
Keywords: Statistical learning, algorithmic efficiency, sparsity, deep learning, computer vision, natural language processing
Project website: https://project.inria.fr/sharp/
Missions
Our research
Develop architectures
Explore the mathematical and algorithmic foundations of sparse deep learning (networks with few connections), pursuing several avenues (a minimal sketch of the first is given after this list):
– Spectral techniques based on recent advances in sparse factorization;
– Optimally sparse distributed learning;
– Binarized architectures enjoying PAC-Bayes guarantees;
– Versatile dimension reduction of vectors, gradients, and even of datasets;
– Sound quantization principles driven by information theory and numerical linear algebra.
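To make the sparse-factorization avenue concrete, here is a minimal NumPy sketch; it illustrates the general idea with a classical example, not SHARP's actual algorithms. The dense 2^k × 2^k Hadamard matrix factors exactly into k sparse "butterfly" factors with two nonzeros per row, so storing and applying it costs O(n log n) instead of O(n²).

```python
import numpy as np
from functools import reduce

def butterfly_factors(k):
    """Sparse factors whose product is the dense 2**k x 2**k Hadamard matrix."""
    F2 = np.array([[1, 1], [1, -1]])
    return [np.kron(np.kron(np.eye(2 ** i), F2), np.eye(2 ** (k - 1 - i)))
            for i in range(k)]

k = 4
n = 2 ** k
factors = butterfly_factors(k)

# Each factor has 2 nonzeros per row: 2*n*k stored entries in total,
# versus n*n for the dense matrix (128 vs 256 here).
print(sum(int((f != 0).sum()) for f in factors), n * n)

# The product of the sparse factors reproduces the dense matrix exactly.
H_dense = reduce(np.kron, [np.array([[1, 1], [1, -1]])] * k)
assert np.allclose(reduce(np.matmul, factors), H_dense)
```

The same question for an arbitrary weight matrix (when do sparse factors exist, and how to find them efficiently and stably) is precisely where the mathematical foundations mentioned above come in.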
Develop theoretical principles and foundations of learning
Optimally combine traditional learning with knowledge of symmetries, prior probabilistic models, and representation learning, to reduce both the dimension of models and the amount of data required. A key challenge will be to redefine a structured approach to computer vision that exploits the physical laws of image formation (3D geometry, materials, illumination…).
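As a toy illustration of how knowledge of symmetries shrinks models (a generic example, not SHARP's specific machinery): a linear layer constrained to commute with cyclic shifts is necessarily circulant, i.e. a convolution, so its free parameters drop from n² to n before any data is seen.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# A linear map equivariant to cyclic shifts must be circulant:
# n free parameters instead of n*n.
w = rng.standard_normal(n)                        # a single row of weights
W = np.stack([np.roll(w, i) for i in range(n)])   # circulant weight matrix

x = rng.standard_normal(n)
shift = lambda v: np.roll(v, 1)

# Equivariance: shifting the input simply shifts the output.
assert np.allclose(W @ shift(x), shift(W @ x))
```

Fewer free parameters means fewer examples are needed to estimate them, which is one way prior knowledge of symmetries translates into data frugality.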
Reduce the volume of data required
Develop frugal models that learn, like biological systems, from limited, heterogeneous, incomplete or even corrupted data.
Harness the hidden structure of data by designing algorithms that can identify relationships between the elements of a training set, extracting valuable information from “real-world”, imperfect data.
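A textbook sketch of this idea under simple, assumed conditions (two Gaussian blobs, a single labeled point per class): building a k-nearest-neighbor graph over the training set and propagating the two known labels along its edges recovers the cluster structure from almost entirely unlabeled data.

```python
import numpy as np

rng = np.random.default_rng(1)

# 100 points in two blobs; only indices 0 and 50 are labeled.
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(3.0, 0.5, (50, 2))])

# Relationships between training elements: a symmetric k-NN affinity graph.
D = np.linalg.norm(X[:, None] - X[None], axis=-1)
k = 5
A = np.zeros_like(D)
for i, nbrs in enumerate(np.argsort(D, axis=1)[:, 1:k + 1]):
    A[i, nbrs] = A[nbrs, i] = 1.0

# Label propagation: repeatedly average neighbor scores,
# clamping the two known labels after each step.
F = np.zeros((100, 2))
F[0], F[50] = [1, 0], [0, 1]
P = A / A.sum(axis=1, keepdims=True)
for _ in range(50):
    F = P @ F
    F[0], F[50] = [1, 0], [0, 1]

pred = F.argmax(axis=1)
print((pred[:50] == 0).mean(), (pred[50:] == 1).mean())  # close to 1.0 and 1.0
```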
Consortium
Inria, Université Paris Dauphine-PSL, École des Ponts ParisTech, CNRS, CEA, Sorbonne Université, ENS Lyon, ESPCI Paris
SHARP will design a theoretical and algorithmic framework to leverage prior knowledge together with the modern incarnations of the notion of sparsity, applied to predictors and/or algorithms. This new paradigm of representation learning aims to overcome current technical and computing bottlenecks.
Two showcase demonstrations of the impact of SHARP will be:
- The frugal training of compact transformers with negligible performance loss;
- The development of effective representation learning models on small unlabeled datasets, for a selected downstream application.
With foundational advances towards stronger principles, smaller models and smaller datasets, SHARP will allow tomorrow’s best AI systems to run on yesterday’s devices, offering something of a cure for obsolescence.
A community of around 20 permanent researchers, faculty members and engineers, plus 20 PhD students, 4 post-docs and a few contractual research engineers recruited as the project progresses.
