SHARP

Sharp theoretical and algorithmic principles for frugal ML

Preview

Focus research on architectures, learning principles and data to define the most frugal learning methods while preserving model performance.

Rémi Gribonval, Research Director at Inria

The major challenge of the SHARP project is to design, analyze and deploy intrinsically frugal models (neural or not) able to achieve the versatility and performance of the best models while requiring only a vanishing fraction of the resources currently needed.

Keywords: Statistical learning, algorithmic efficiency, sparsity, deep learning, computer vision, natural language processing

Project website: https://project.inria.fr/sharp/

Missions

Our research


Develop architectures

Explore the mathematical and algorithmic foundations of sparse deep learning (networks with few connections), pursuing several avenues:
 – Spectral techniques based on recent advances in sparse factorization;
 – Optimally sparse distributed learning;
 – Binarized architectures enjoying PAC-Bayes guarantees;
 – Versatile dimension reduction of vectors, gradients, and even entire datasets;
 – Sound quantization principles driven by information theory and numerical linear algebra.
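Two of the avenues above, sparsity and quantization, can be illustrated with a minimal sketch (illustrative only, not SHARP code): magnitude pruning keeps only the largest-magnitude weights of a layer, and uniform quantization maps the remaining values onto a small grid so each weight needs only a few bits. The function names and parameters here are hypothetical.

```python
import numpy as np

def magnitude_prune(w, keep_ratio=0.1):
    """Zero out all but the largest-magnitude entries of w (sparsity)."""
    k = max(1, int(keep_ratio * w.size))
    threshold = np.sort(np.abs(w), axis=None)[-k]  # k-th largest magnitude
    return np.where(np.abs(w) >= threshold, w, 0.0)

def uniform_quantize(w, n_bits=4):
    """Map w onto a uniform grid of 2**n_bits levels over its range."""
    lo, hi = w.min(), w.max()
    levels = 2 ** n_bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    return np.round((w - lo) / scale) * scale + lo

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))          # a dense "weight matrix"

w_sparse = magnitude_prune(w, keep_ratio=0.1)   # ~90% of entries become zero
w_quant = uniform_quantize(w, n_bits=4)         # at most 16 distinct values

print("fraction of nonzeros kept:", np.count_nonzero(w_sparse) / w.size)
print("distinct values after 4-bit quantization:", len(np.unique(w_quant)))
```

In practice the research questions are precisely about when such compressed models can retain (or provably approximate) the accuracy of the dense, full-precision ones.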


Develop theoretical principles and foundations of learning

Optimally combine traditional learning with knowledge of symmetries, prior probabilistic models, and representation learning, to reduce both the dimension of models and the amount of data required. A key challenge will be to redefine a structured approach to computer vision that exploits the physical laws of image formation (3D geometry, materials, illumination…).
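One simple way knowledge of symmetries reduces the data burden is to build invariance directly into the model rather than learning it from examples. A toy sketch (hypothetical, for illustration only): an arbitrary feature function is made exactly invariant to 90-degree rotations by averaging it over the rotation group.

```python
import numpy as np

def feature(x):
    """Some arbitrary feature of a 2D array, NOT rotation-invariant."""
    return float((x * np.arange(x.size).reshape(x.shape)).sum())

def invariant_feature(x):
    """Symmetrize: average the feature over the 4 rotations of the input."""
    return np.mean([feature(np.rot90(x, k)) for k in range(4)])

rng = np.random.default_rng(1)
img = rng.standard_normal((8, 8))

# The symmetrized feature gives the same answer on every rotation of the input,
# so no rotated training examples are needed to teach the model this symmetry.
vals = [invariant_feature(np.rot90(img, k)) for k in range(4)]
print(np.allclose(vals, vals[0]))
```

Group averaging is only one of many ways to encode symmetry (equivariant architectures are another); the point is that the invariance is guaranteed by construction instead of consuming training data.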


Reduce the volume of data required

Develop frugal models that, like biological systems, can learn from limited, heterogeneous, incomplete or even corrupted data.

Harness the hidden structure of data by designing algorithms that can identify relationships between elements in a training set, to extract valuable information from “real-world” and imperfect data.

Consortium

Inria, Université Paris Dauphine-PSL, École des Ponts ParisTech, CNRS, CEA, Sorbonne Université, ENS Lyon, ESPCI Paris



Other projects

NNawaQ, Neural Network Adequate Hardware Architecture for Quantization (HOLIGRAIL project)
Keops, a Python package for (very) high-dimensional tensor calculations (PDE-AI project)
MPTorch, a PyTorch-based framework for simulating and emulating custom-precision DNN training (HOLIGRAIL project)
CaBRNeT, a library for developing and evaluating case-based reasoning models (SAIF project)
SNN Software, open-source tools for SNN design (EMERGENCES project)
SDOT, a C++ and Python library for semi-discrete optimal transport (PDE-AI project)
FloPoCo (Floating-Point Cores), a generator of arithmetic cores and its applications to AI accelerators (HOLIGRAIL project)
Lazylinop (Lazy Linear Operator), a high-level linear operator based on an arbitrary underlying implementation (SHARP project)
CAISAR, a platform for characterizing artificial intelligence safety and robustness
P16, to develop, distribute and maintain a set of sovereign libraries for AI
AIDGE, the DEEPGREEN project's open embedded development platform
Jean Zay, the national computing infrastructure for the AI research community
ADAPTING, adaptive architectures for embedded artificial intelligence
Attractivité chairs call: the PEPR AI Chairs program offers exceptionally talented AI researchers the opportunity to establish and lead a research program and team for four years in France
CAUSALI-T-AI, where causality and AI team up to enhance the interpretability and robustness of AI algorithms
EMERGENCES, near-physics emerging models for embedded AI
FOUNDRY, the foundations of robustness and reliability in artificial intelligence
HOLIGRAIL, holistic approaches to greener model architectures for inference and learning
PDE-AI, numerical analysis, optimal control and optimal transport for AI ("New architectures for machine learning")
REDEEM, resilient, decentralized and privacy-preserving machine learning
SAIF, safe AI through formal methods