SHARP

Sharp theoretical and algorithmic principles for frugal ML

Preview

Focus research on architectures, learning principles and data to define the most frugal learning methods while preserving model performance.

Rémi Gribonval, Research Director, Inria

The major challenge of the SHARP project is to design, analyze and deploy intrinsically frugal models (neural or not) able to achieve the versatility and performance of the best models while requiring only a vanishing fraction of the resources currently needed.

Keywords: Statistical learning, algorithmic efficiency, sparsity, deep learning, computer vision, natural language processing

Project website: https://project.inria.fr/sharp/

Missions

Our research


Develop architectures

Explore the mathematical and algorithmic foundations of sparse deep learning (networks with few connections), along several avenues:
 – Spectral techniques based on recent advances in sparse factorization;
 – Optimally sparse distributed learning;
 – Binarized architectures enjoying PAC-Bayes guarantees;
 – Versatile dimension reduction of vectors, gradients, and even of entire datasets (see the sketch after this list);
 – Sound quantization principles driven by information theory and numerical linear algebra.
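
To make the dimension-reduction item concrete, here is a minimal sketch (our own illustration, not SHARP code): compressing a dataset with a Gaussian random projection in the Johnson-Lindenstrauss style, which approximately preserves pairwise distances. All sizes and variable names are arbitrary assumptions.

    # Minimal sketch: dataset compression by Gaussian random projection.
    # Illustration only; sizes and names are arbitrary assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    n, d, k = 1000, 512, 64          # n samples in dimension d, compressed to k
    X = rng.standard_normal((n, d))  # placeholder dataset

    # Johnson-Lindenstrauss-style projection: pairwise distances are roughly
    # preserved with high probability once k is of order log(n) / eps^2.
    P = rng.standard_normal((d, k)) / np.sqrt(k)
    X_sketch = X @ P                 # compressed dataset: k numbers per sample

    # Compare one pairwise distance before and after compression.
    orig = np.linalg.norm(X[0] - X[1])
    comp = np.linalg.norm(X_sketch[0] - X_sketch[1])
    print(f"original distance {orig:.2f}, sketched distance {comp:.2f}")

The same sketching idea applies, with suitable variants, to vectors, gradients, or whole datasets, which is one way the resource footprint of training can shrink.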


Develop theoretical principles and foundations of learning

Optimally combine traditional learning with knowledge of symmetries, prior probabilistic models, and representation learning, in order to reduce the dimension of models and the amount of data required. A key challenge will be to redefine a structured approach to computer vision that exploits the physical laws of image formation (3D geometry, materials, illumination…).
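
As a toy illustration of the role of symmetries (our own example, not the project's method): averaging a predictor over a known symmetry group makes it exactly invariant, so the invariance no longer has to be learned from data.

    # Toy sketch: invariance by group averaging (hypothetical example).
    # The group here is G = {identity, horizontal flip}.
    import numpy as np

    W = np.array([1.0, 2.0, 3.0])

    def model(x):
        # Stand-in for a learned predictor; not flip-invariant by itself.
        return x @ W

    def invariant_model(x):
        # Group averaging: f_inv(x) = (1/|G|) * sum over g in G of f(g(x))
        return 0.5 * (model(x) + model(x[..., ::-1]))

    x = np.arange(6.0).reshape(2, 3)
    assert not np.allclose(model(x), model(x[..., ::-1]))
    assert np.allclose(invariant_model(x), invariant_model(x[..., ::-1]))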


Reduce the volume of data required

Develop frugal models that, like biological systems, learn from limited, heterogeneous, incomplete or even corrupted data.

Harness the hidden structure of data by designing algorithms that can identify relationships between elements in a training set, to extract valuable information from “real-world” and imperfect data.
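
One classical tool for the corrupted-data setting, shown here purely as an illustration (our choice of example, not necessarily the project's): the median-of-means estimator stays accurate when a few samples are grossly corrupted, whereas the plain empirical mean does not.

    # Sketch: robust mean estimation with median-of-means (illustration only).
    import numpy as np

    rng = np.random.default_rng(0)

    x = rng.normal(loc=1.0, scale=1.0, size=1000)  # clean data, true mean = 1
    x[:5] = 1000.0                                 # a few grossly corrupted entries

    def median_of_means(samples, n_blocks=20):
        # Shuffle, split into blocks, average each block, take the median:
        # outliers can spoil a few blocks but not the majority of them.
        blocks = np.array_split(rng.permutation(samples), n_blocks)
        return np.median([b.mean() for b in blocks])

    print(f"plain mean:      {x.mean():.2f}")            # pulled far from 1
    print(f"median of means: {median_of_means(x):.2f}")  # stays close to 1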

Consortium

Inria, Université Paris Dauphine-PSL, École des Ponts ParisTech, CNRS, CEA, Sorbonne Université, ENS Lyon, ESPCI Paris

Other projects

 – HOLIGRAIL: Holistic approaches to greener model architectures for inference and learning
 – ADAPTING: Adaptive architectures for embedded artificial intelligence
 – EMERGENCES: Near-physics emerging models for embedded AI
 – REDEEM: Resilient, decentralized and privacy-preserving machine learning
 – CAUSALI-T-AI: When causality and AI team up to enhance the interpretability and robustness of AI algorithms
 – FOUNDRY: The foundations of robustness and reliability in artificial intelligence
 – SAIF: Safe AI through formal methods
 – PDE-AI: Numerical analysis, optimal control and optimal transport for AI / "New architectures for machine learning"