SAIF

Safe AI through formal methods

Preview

Draw on the vast body of knowledge accumulated over decades of formal-methods research, rethinking these techniques to address the new safety concerns raised by the revival of AI.

Caterina Urban, Research Manager, Inria
Zakaria Chihani, Deputy Director LAB, CEA List

The SAIF project aims to specify the behavior of ML-based systems, to develop methodologies for validating their performance at scale, and to guide their design using formal approaches, in order to guarantee their safety, reliability and explainability.

Keywords: Machine learning, neural networks, reinforcement learning, recurrent networks, graph networks, Transformers, interpretability, robustness, constraint satisfaction, stability, fairness, explainability, reliability

Missions

Our research


Develop the specification of ML-based systems

Develop formal methods to specify the behavior of ML systems, including the exploration of extensional specifications (defining global robustness properties) and intensional specifications (identifying recurring patterns).
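To make the notion of an extensional specification concrete, a classical example is local robustness: every input in an L-infinity ball around x must receive the same classification as x. The sketch below is purely illustrative (the function names and the sampling-based falsifier are assumptions, not project tooling); sampling can only find counterexamples to the property, never prove it.

```python
import numpy as np

def violates_local_robustness(f, x, eps, n_samples=1000, seed=0):
    """Search for a counterexample to local robustness:
    a point x' with ||x' - x||_inf <= eps whose predicted class
    differs from that of x. Returns the counterexample, or None."""
    rng = np.random.default_rng(seed)
    y = np.argmax(f(x))
    for _ in range(n_samples):
        # Sample uniformly inside the L-infinity ball around x.
        xp = x + rng.uniform(-eps, eps, size=x.shape)
        if np.argmax(f(xp)) != y:
            return xp  # counterexample: the specification is violated
    return None  # no violation found (not a proof of robustness)
```

Formal verification, by contrast, aims to establish the property for the whole ball at once rather than for sampled points.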


Develop the validation of ML-based systems

Design methodologies to extend formal verification to large-scale systems, improving verification efficiency and accuracy while addressing more complex architectures (e.g., inferring invariants of recurrent neural networks).
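One standard way such verification is made to scale is abstract interpretation with interval arithmetic (interval bound propagation). The following is a minimal sketch for a feed-forward ReLU network, with all names and the network format being illustrative assumptions: it propagates an input box through the layers and certifies a class when its lower bound dominates every other logit's upper bound.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    # Propagate the box [lo, hi] through an affine layer Wx + b.
    # Positive weights map lower bounds to lower bounds; negative
    # weights swap them, so split W by sign.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    # ReLU is monotone, so it applies componentwise to both bounds.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def certify_robust(layers, x, eps, label):
    """Return True if every input in the L-infinity ball of radius eps
    around x is provably classified as `label` by the network, given as
    a list of (W, b) pairs with ReLU between layers."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:
            lo, hi = interval_relu(lo, hi)
    return all(lo[label] > hi[j] for j in range(len(lo)) if j != label)
```

The bounds are sound but may be loose (certification can fail on robust networks); tighter relaxations trade precision against the efficiency this mission targets.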


Develop the design of ML-based systems

Use formal methods to automatically build ML components from proven specifications, and develop monitoring approaches to maintain their reliability and facilitate their post-validation.

Consortium

CEA, Inria, Université de Bordeaux, Université Paris-Saclay, Institut Polytechnique de Paris

Other projects

SHARP: Sharp theoretical and algorithmic principles for frugal ML
HOLIGRAIL: Holistic approaches to greener model architectures for inference and learning
ADAPTING: Adaptive architectures for embedded artificial intelligence
EMERGENCES: Near-physics emerging models for embedded AI
REDEEM: Resilient, decentralized and privacy-preserving machine learning
CAUSALI-T-AI: When causality and AI team up to enhance interpretability and robustness of AI algorithms
FOUNDRY: The foundations of robustness and reliability in artificial intelligence
PDE-AI: Numerical analysis, optimal control and optimal transport for AI ("New architectures for machine learning")