FOUNDRY

The foundations of robustness and reliability in artificial intelligence

Preview

Develop the theoretical and methodological foundations of robustness and reliability needed to build and instill trust in AI technologies and systems.

Panayotis Mertikopoulos, Research Director at CNRS

The core vision of FOUNDRY is that robustness in AI – a desideratum that has eluded the field since its inception – cannot be achieved by blindly throwing more data and computing power at ever-larger models with exponentially growing energy requirements. Instead, we intend to rethink and develop, from the ground up, the core theoretical and methodological foundations of robustness and reliability needed to build and instill trust in ML-powered technologies and systems.

Keywords: Robustness, reliability, game theory, trust, fairness, privacy

Missions

Our research


Achieving resilience to data-centric impediments

Develop algorithms and methodologies for overcoming shortfalls in a model’s training set (outliers, incomplete observations, label shifts, poisoning, etc.), and for fortifying trained models against impediments that arise at inference time.
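As a purely illustrative instance of such a data-centric defense (not a FOUNDRY deliverable), the sketch below uses a coordinate-wise trimmed mean to estimate a dataset’s center in the presence of poisoned samples; the function name, trimming fraction, and toy data are all hypothetical choices:

```python
import numpy as np

def trimmed_mean(samples, trim_frac=0.1):
    """Coordinate-wise trimmed mean: sort each coordinate, drop the
    trim_frac most extreme values at each end, and average the rest.
    This bounds the influence that any small set of poisoned points
    can exert on the estimate."""
    samples = np.sort(np.asarray(samples, dtype=float), axis=0)
    n = samples.shape[0]
    k = int(n * trim_frac)
    return samples[k:n - k].mean(axis=0)

# 95 clean points around the origin plus 5 poisoned points at 100.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(95, 2))
poison = np.full((5, 2), 100.0)
data = np.vstack([clean, poison])

naive = data.mean(axis=0)         # dragged far from the origin
robust = trimmed_mean(data, 0.1)  # stays close to the origin
```

The naive mean moves by roughly (number of poisoned points / n) × 100 per coordinate, while the trimmed estimator discards the poisoned points entirely as long as their fraction stays below the trimming level.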


Adapting to unmodeled phenomena and the environment

Develop the theoretical and technical tools required for AI systems that can adapt “on the fly” to non-stationary environments and gracefully interpolate between best- and worst-case guarantees.
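A classical template for this kind of guarantee is the Hedge (multiplicative weights) forecaster from online learning, sketched here as an illustration of best-/worst-case interpolation rather than as FOUNDRY’s own method; the loss data and tuning are hypothetical:

```python
import numpy as np

def hedge_regret(losses, eta=None):
    """Run the Hedge (multiplicative weights) forecaster over K experts
    and return its regret against the best fixed expert in hindsight.
    With eta ~ sqrt(log K / T) the regret is O(sqrt(T log K)) even for
    adversarially chosen losses, while on benign data the weights
    concentrate quickly on the best expert.
    losses: (T, K) array with entries in [0, 1]."""
    T, K = losses.shape
    if eta is None:
        eta = np.sqrt(np.log(K) / T)
    w = np.ones(K)
    total = 0.0
    for loss in losses:
        p = w / w.sum()
        total += p @ loss         # forecaster's expected loss this round
        w *= np.exp(-eta * loss)  # exponentially downweight bad experts
    return total - losses.sum(axis=0).min()

rng = np.random.default_rng(1)
L = rng.random((1000, 5))
L[:, 2] *= 0.3         # expert 2 is consistently the best
reg = hedge_regret(L)  # sublinear in T: far below the horizon of 1000
```

The same worst-case regret bound holds for arbitrary (even adversarial) loss sequences, which is exactly the kind of graceful degradation the mission above targets.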


Attaining robustness in the presence of concurrent aims and goals

Delineate how robustness criteria interact with standard performance metrics (e.g., a model’s predictive accuracy) and characterize the fundamental performance limits of ML models when the data are provided by self-interested agents.
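The strategic tension can be seen in a toy data-provision game (hypothetical payoffs, chosen only for illustration): each agent picks an effort level toward supplying clean data, everyone benefits from the aggregate effort, but effort is privately costly, so selfish play under-provides data quality relative to the social optimum:

```python
import numpy as np

# Hypothetical payoffs: agent i picks effort x_i in [0, 1] toward
# providing clean data, and u_i(x) = sqrt(sum_j x_j) - x_i.

def best_response(x, i, grid=None):
    """Agent i's utility-maximising effort, holding the others fixed."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 1001)
    others = x.sum() - x[i]
    utils = np.sqrt(others + grid) - grid
    return grid[np.argmax(utils)]

def equilibrium_by_best_response(n=5, rounds=50):
    """Iterate best responses until play settles (here, well past
    convergence) to approximate a Nash equilibrium."""
    x = np.full(n, 0.5)
    for _ in range(rounds):
        for i in range(n):
            x[i] = best_response(x, i)
    return x

eq = equilibrium_by_best_response()
selfish_total = eq.sum()  # far below the welfare-maximising total effort
```

At equilibrium a single agent free-rides on almost nothing (total effort 0.25), whereas total welfare n·sqrt(S) − S would be maximised at a much larger aggregate effort: a quantitative gap between selfish and cooperative data provision.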

Consortium

CNRS, Université Paris-Dauphine, INRIA, Institut Mines Télécom, Ecole normale supérieure de Lyon, Université de Lille, ENSAE Paris, Ecole Polytechnique Palaiseau

Other projects

SHARP
Sharp theoretical and algorithmic principles for frugal ML

HOLIGRAIL
Holistic approaches to greener model architectures for inference and learning

ADAPTING
Adaptive architectures for embedded artificial intelligence

EMERGENCES
Near-physics emerging models for embedded AI

REDEEM
Resilient, decentralized and privacy-preserving machine learning

CAUSALI-T-AI
When causality and AI team up to enhance interpretability and robustness of AI algorithms

SAIF
Safe AI through formal methods

PDE-AI
Numerical analysis, optimal control and optimal transport for AI / "New architectures for machine learning"