SAIF

Safe AI through formal methods

Preview

Use the vast knowledge accumulated over decades of research in formal methods, rethinking these methods to address the new safety concerns raised by the revival of AI.

Caterina Urban, Research Manager, Inria
Zakaria Chihani, Deputy Director LAB, CEA List

The SAIF project aims to specify the behavior of ML-based systems, to develop methodologies to validate them at large scale, and to guide their design using formal approaches, in order to guarantee their safety, reliability and explainability.

Keywords: Machine learning, neural networks, reinforcement learning, recurrent networks, graph networks, Transformers, interpretability, robustness, constraint satisfaction, stability, fairness, explainability, reliability

Project website: SaifProject.inria.fr

Missions

Our research


Develop the specification of ML-based systems

Develop formal methods to specify the behavior of ML systems, including the exploration of extensional specifications (defining global robustness properties) and intensional specifications (identifying recurring patterns).
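
As an illustration of an extensional specification, consider local robustness: the predicted class must remain stable on an entire ℓ∞ ball around a reference input. The Python sketch below is a hypothetical example (the function name and the sampling-based check are not from the project); it only tests the property on random samples, whereas a formal verifier would prove it for all perturbations in the ball.

```python
import numpy as np

def satisfies_local_robustness(f, x, eps, n_samples=1000, rng=None):
    """Empirically test the extensional specification
        forall x' with ||x' - x||_inf <= eps:  argmax f(x') == argmax f(x)
    by random sampling. `f` is assumed to map a 1-D numpy array to a
    vector of class scores; a formal verifier proves the property for
    *all* x' instead of sampling."""
    rng = np.random.default_rng() if rng is None else rng
    reference_class = int(np.argmax(f(x)))
    for _ in range(n_samples):
        # Draw a perturbation inside the L-infinity ball of radius eps.
        delta = rng.uniform(-eps, eps, size=x.shape)
        if int(np.argmax(f(x + delta))) != reference_class:
            return False  # counterexample found: the specification is violated
    return True  # no counterexample found (evidence, not a proof)
```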


Develop the validation of ML-based systems

Design methodologies to extend formal verification to large-scale systems, improving verification efficiency and accuracy while addressing more complex architectures (e.g., inferring invariants of recurrent neural networks).
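
One standard way to carry out such verification is abstract interpretation with the interval domain (interval bound propagation). The sketch below is a minimal, hypothetical example assuming a feed-forward ReLU network given as a list of (weight, bias) pairs; it illustrates the kind of reasoning involved, not the specific methodology developed in the project.

```python
import numpy as np

def interval_bound_propagation(layers, lb, ub):
    """Propagate the input box [lb, ub] through a feed-forward ReLU network,
    given as a list of (W, b) pairs, and return sound bounds on the outputs.
    More precise abstract domains (zonotopes, polyhedra) tighten these
    bounds at a higher computational cost."""
    for i, (W, b) in enumerate(layers):
        # Splitting W into positive and negative parts pairs each bound
        # with the correct interval endpoint.
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lb = W_pos @ lb + W_neg @ ub + b
        new_ub = W_pos @ ub + W_neg @ lb + b
        if i < len(layers) - 1:  # ReLU on hidden layers only
            new_lb, new_ub = np.maximum(new_lb, 0.0), np.maximum(new_ub, 0.0)
        lb, ub = new_lb, new_ub
    return lb, ub

def certifies_robustness(layers, x, eps, target_class):
    """Local robustness is proved if the lower bound of the target class
    dominates the upper bound of every other class on the whole
    L-infinity ball of radius eps around x."""
    lb, ub = interval_bound_propagation(layers, x - eps, x + eps)
    return all(lb[target_class] > ub[j]
               for j in range(len(ub)) if j != target_class)
```

When the interval bounds are too coarse to conclude, verifiers typically switch to more precise abstract domains or to exact encodings (SMT, MILP), trading scalability for precision.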


Develop the design of ML-based systems

Use formal methods to automatically build ML components from proven specifications, developing monitoring approaches to maintain their reliability and facilitate their post-validation. 
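
Runtime monitoring admits a simple illustration: wrap the ML component so that inputs outside the validated operating domain, or outputs that violate the proven specification, are intercepted and replaced by a safe fallback. The class and parameter names in the sketch below are hypothetical and do not come from the project.

```python
class RuntimeMonitor:
    """Hypothetical sketch: a wrapper that enforces a proven contract around
    an ML component at runtime and falls back to a safe behavior otherwise."""

    def __init__(self, model, input_domain, output_spec, fallback):
        self.model = model                # callable: input -> output
        self.input_domain = input_domain  # predicate: is the input covered by the proof?
        self.output_spec = output_spec    # predicate: does the output satisfy the spec?
        self.fallback = fallback          # safe default behavior
        self.violations = 0               # number of intercepted violations

    def __call__(self, x):
        if not self.input_domain(x):
            self.violations += 1
            return self.fallback(x)  # the proof says nothing about this input
        y = self.model(x)
        if not self.output_spec(x, y):
            self.violations += 1
            return self.fallback(x)  # the model broke its contract at runtime
        return y
```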

Consortium

CEA, CNRS, Inria, Université de Bordeaux, Université Paris-Saclay, Institut Polytechnique de Paris, CentraleSupélec, École Normale Supérieure Paris-Saclay, Institut National Polytechnique de Bordeaux

Other projects

NNawaQ, Neural Network Adequate Hardware Architecture for Quantization (HOLIGRAIL project)
Keops, a Python package for (very) high-dimensional tensor calculations (PDE-AI project)
MPTorch, a PyTorch-based framework for simulating and emulating custom-precision DNN training (HOLIGRAIL project)
CaBRNeT, a library for developing and evaluating case-based reasoning models (SAIF project)
SNN Software, open-source tools for SNN design (EMERGENCES project)
SDOT, a C++ and Python library for semi-discrete optimal transport (PDE-AI project)
FloPoCo (Floating-Point Cores), a generator of arithmetic cores and its applications to AI accelerators (HOLIGRAIL project)
Lazylinop (Lazy Linear Operator), a high-level linear operator based on an arbitrary underlying implementation (SHARP project)
CAISAR, a platform for characterizing artificial intelligence safety and robustness
P16, developing, distributing and maintaining a set of sovereign libraries for AI
AIDGE, the DEEPGREEN project's open embedded development platform
Jean Zay, the national infrastructure for the AI research community
ADAPTING, adaptive architectures for embedded artificial intelligence
Call for Attractivité chairs: the PEPR AI Chairs program offers exceptionally talented AI researchers the opportunity to establish and lead a research program and team in France for a duration of four years
CAUSALI-T-AI, where causality and AI team up to enhance the interpretability and robustness of AI algorithms
EMERGENCES, near-physics emerging models for embedded AI
FOUNDRY, the foundations of robustness and reliability in artificial intelligence
HOLIGRAIL, holistic approaches to greener model architectures for inference and learning
PDE-AI, numerical analysis, optimal control and optimal transport for AI / "New architectures for machine learning"
REDEEM, resilient, decentralized and privacy-preserving machine learning
SHARP, sharp theoretical and algorithmic principles for frugal ML