REDEEM

Resilient, decentralized and privacy-preserving machine learning

Cédric Gouy-Pailler, Research Engineer, CEA
Sonia Ben Mokhtar, Research Director, CNRS

This project explores new distributed learning approaches that are resilient, robust to noise and adversarial attacks, and respectful of privacy. These approaches should make it possible to go beyond current federated learning. From a theoretical point of view, REDEEM aims to provide solid foundations for the proposed approaches, particularly when malicious participants take part in the learning phase, with the overriding objective of preserving data confidentiality as far as possible. Beyond new ways of distributing learning, REDEEM also targets efficient implementations, offering the community open-source code and tools.
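As a point of reference, the federated learning baseline that the project seeks to go beyond can be sketched as follows. This is an illustrative toy (a least-squares task with synthetic data), not project code; all function names are ours.

```python
import numpy as np

def local_update(model, data, lr=0.1, steps=5):
    """One client's local SGD on a least-squares objective (a toy
    stand-in for the client's private training loop)."""
    X, y = data
    w = model.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(model, clients):
    """One round of Federated Averaging: every client trains locally,
    then a central server averages the resulting models."""
    updates = [local_update(model, data) for data in clients]
    return np.mean(updates, axis=0)

# Synthetic clients sharing the same underlying regression target
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(30):
    w = fed_avg(w, clients)
# w is now close to w_true
```

The central averaging server is exactly the single point of failure (and of attack) that decentralized approaches aim to remove.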

Keywords: Decentralized machine learning; Robustness; Privacy; Byzantine resilience; Distributed optimization; Consensus algorithms in machine learning; Foundation models / extremely large models

Project website: https://redeem-pepria.github.io

Missions

Our research


Specifications and guideline for decentralized system design with identification of associated threats

Formalize the foundational framework of the project by identifying the primary functions the learned system must fulfil (detection, classification, recommendation), mathematically defining the existing constraints (communication, computation resources), and making explicit a set of targeted properties relating to the robustness, privacy, resilience and personalization abilities of the systems.


Algorithmic aspects of decentralized learning in an adversary-free environment

Investigate decentralized learning by focusing on algorithmic aspects while assuming that participants are honest. These investigations will take into account requirements such as a dynamic and heterogeneous environment, extremely large models, and personalization.
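With honest participants, the core algorithmic primitive of decentralized learning is gossip averaging: each node repeatedly mixes its state with its neighbours' (no central server), and all nodes converge to the network-wide average. A minimal sketch on a ring topology, with an illustrative mixing matrix of our choosing:

```python
import numpy as np

# Ring topology over 5 nodes: each node keeps half its own value and
# takes a quarter from each of its two neighbours (doubly stochastic W)
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.array([1.0, 3.0, 5.0, 7.0, 9.0])  # each node's local value
for _ in range(100):
    x = W @ x  # one gossip round: every node averages with neighbours

# All nodes converge to the global mean, 5.0, using only local exchanges
```

Decentralized SGD interleaves such mixing rounds with local gradient steps; the convergence speed is governed by the spectral gap of W, which is where the dynamic, heterogeneous topologies mentioned above come into play.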


Decentralized learning under attack

Investigate novel privacy and Byzantine attacks, as well as mitigation algorithms, in a decentralized setting.
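One standard mitigation in this literature is to replace plain averaging with a robust aggregation rule. A minimal sketch of the coordinate-wise trimmed mean (one classical rule among several, e.g. Krum or the coordinate-wise median; the toy values are ours):

```python
import numpy as np

def trimmed_mean(updates, f):
    """Coordinate-wise trimmed mean: in each coordinate, discard the f
    largest and f smallest values before averaging. Tolerates up to f
    Byzantine contributions among len(updates), provided len(updates) > 2f."""
    s = np.sort(np.stack(updates), axis=0)
    return s[f:len(updates) - f].mean(axis=0)

# Eight honest updates near [1, 2], plus two adversarial outliers
honest = [np.array([1.0, 2.0]) + 0.01 * i for i in range(8)]
byzantine = [np.array([1e6, -1e6]), np.array([-1e6, 1e6])]
agg = trimmed_mean(honest + byzantine, f=2)
# A plain mean would be dragged away by the outliers;
# the trimmed mean stays close to [1, 2]
```

In a fully decentralized setting the difficulty is that no single node sees all updates, so such rules must be adapted to per-neighbourhood aggregation, which is part of what this axis investigates.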


Advanced trade-offs management

Consider advanced learning algorithms with new optimization strategies over large, decentralized models, operating in dynamic networks and in hostile environments subject to attacks.
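A concrete example of such a trade-off is the privacy/utility dial: clipping each shared update and adding calibrated Gaussian noise (the mechanism behind DP-SGD-style training) protects individual data but degrades convergence as the noise grows. A minimal sketch, with parameter names of our choosing and no privacy accounting:

```python
import numpy as np

def privatize(update, clip=1.0, sigma=0.5, rng=None):
    """Clip an update to L2 norm at most `clip`, then add Gaussian noise
    with scale sigma * clip (Gaussian-mechanism recipe). Larger sigma
    means stronger privacy but a noisier, slower-to-converge model."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / max(norm, 1e-12))
    return clipped + rng.normal(scale=sigma * clip, size=update.shape)

# A large raw update is first bounded, then perturbed before being shared
noisy = privatize(np.array([3.0, 4.0]), clip=1.0, sigma=0.5)
```

Managing this dial jointly with robustness (noise also blunts robust aggregation) and communication constraints is the kind of multi-objective trade-off this axis addresses.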

Consortium

CEA, INRIA, CNRS, LAMSADE, École Polytechnique

Other projects

NNawaQ, Neural Network Adequate Hardware Architecture for Quantization (HOLIGRAIL project)
Package Python Keops for (very) high-dimensional tensor calculations (PDE-AI project)
MPTorch, a PyTorch-based framework for simulating and emulating custom precision DNN training (HOLIGRAIL project)
CaBRNeT, a library for developing and evaluating Case-Based Reasoning Models (SAIF project)
SNN Software, Open Source Tools for SNN Design (EMERGENCES project)
SDOT, a C++ and Python library for Semi-Discrete Optimal Transport (PDE-AI project)
FloPoCo (Floating-Point Cores), a generator of arithmetic cores and its applications to AI accelerators (HOLIGRAIL project)
Lazylinop (Lazy Linear Operator), a high-level linear operator based on an arbitrary underlying implementation (SHARP project)
CAISAR, a platform for characterizing artificial intelligence safety and robustness
P16, a programme to develop, distribute and maintain a set of sovereign AI libraries
AIDGE, the DEEPGREEN project's open embedded development platform
Jean Zay, the national infrastructure for the AI research community
ADAPTING, adaptive architectures for embedded artificial intelligence
Attractivité chairs call: the PEPR AI Chairs programme offers exceptionally talented AI researchers the opportunity to establish and lead a research programme and team in France for four years
CAUSALI-T-AI: when causality and AI team up to enhance the interpretability and robustness of AI algorithms
EMERGENCES, near-physics emerging models for embedded AI
FOUNDRY, the foundations of robustness and reliability in artificial intelligence
HOLIGRAIL, holistic approaches to greener model architectures for inference and learning
PDE-AI, numerical analysis, optimal control and optimal transport for AI / "New architectures for machine learning"
SAIF, safe AI through formal methods
SHARP, sharp theoretical and algorithmic principles for frugal ML