PRODIGE-AI

PRObability, ranDom matrIx theory, Geometry and gEneralization for generative-AI

Preview

Developing rigorous mathematical foundations for the generalization problem in generative AI

Amaury Habrard, Professor, Université Jean Monnet Saint-Etienne

The PRODIGE-AI project aims to develop new theoretical models for more reliable, efficient, and transparent generative AI, organized around three main areas. The first area develops new theoretical frameworks to better understand the generalization capabilities of generative AI models. The second area designs strategies for making generative AI models both effective and explainable. The third area builds geometric frameworks for generative AI, with a particular focus on graphs.

The project draws on advanced mathematical tools, including probability theory, random matrices, and geometry, and focuses on diffusion models, flow matching, transformers, and state-space models. The project also aims to study entanglement in learned representations, as well as collapse and hallucination phenomena, in order to better understand the biases and limitations of generative models.

Keywords: Generative AI, generalization, statistical learning, probability theory, random matrix theory, information theory, graph theory, geometry.

Missions

Our research


Better understanding the issue of generalization in generative AI

Derive generalization guarantees, particularly in the form of generalization bounds, using mathematical frameworks based on concentration theory, random matrices, or random tensors.

Identify key elements of “complexity” that characterize the generalization process and model creativity.

Derive self-certified algorithms, i.e., algorithms that directly minimize a generalization bound, by leveraging PAC-Bayes theory together with the study of compression schemes.
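As background for the kind of guarantee targeted here, a classical McAllester-style PAC-Bayes bound (stated as an illustration, not as a project result) reads:

```latex
% With probability at least 1 - \delta over an i.i.d. sample S of size n,
% simultaneously for all posterior distributions \rho over hypotheses:
\mathbb{E}_{h \sim \rho}\big[L(h)\big]
  \;\le\;
  \mathbb{E}_{h \sim \rho}\big[\widehat{L}_S(h)\big]
  + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}
```

where \(L\) is the true risk, \(\widehat{L}_S\) the empirical risk, and \(\pi\) a prior fixed before seeing the data. A self-certified algorithm minimizes the right-hand side directly, so the training objective itself certifies the generalization of the returned posterior.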

Revisit diffusion and flow matching models from the perspective of the variational formulation of filtering theory, continuous-time filtering, and the properties of velocity fields.
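To make the flow matching viewpoint concrete, here is a minimal 1-D sketch (all data, model, and hyperparameters are illustrative assumptions, not the project's setup): samples are interpolated linearly between a noise source and the data, a velocity model is regressed on the interpolation, and new samples are drawn by integrating the learned ODE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data distribution (hypothetical stand-in for real data).
x1 = rng.normal(loc=3.0, scale=0.5, size=(4096, 1))   # data samples
x0 = rng.normal(size=(4096, 1))                        # base (noise) samples
t = rng.uniform(size=(4096, 1))                        # random times in [0, 1]

# Linear interpolation path x_t = (1 - t) x0 + t x1; its velocity is x1 - x0.
xt = (1.0 - t) * x0 + t * x1
v_target = x1 - x0

# A deliberately tiny velocity model v(x, t) = a*x + b*t + c, fit by least squares
# (real flow matching uses a neural network trained on the same regression loss).
A = np.hstack([xt, t, np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, v_target, rcond=None)

# Sampling: integrate dx/dt = v(x, t) from t=0 (noise) to t=1 with Euler steps.
x = rng.normal(size=(2048, 1))
steps = 100
for k in range(steps):
    tk = np.full_like(x, k / steps)
    x = x + (1.0 / steps) * (np.hstack([x, tk, np.ones_like(x)]) @ coef)

print(round(float(x.mean()), 1))  # the pushed-forward samples land near the data mean 3.0
```

Even this misspecified linear velocity field transports the noise distribution close to the data mean, which is why the properties of velocity fields (smoothness, Lipschitz constants) are natural handles for generalization analysis.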


Improving the explainability and effectiveness of generative AI models

Use information theory and sensitivity analysis to disentangle generative factors and explore their causal relationships.

Exploit quantum information theory and random tensors to characterize collapse phenomena (mode collapse) and develop parameter-efficient fine-tuning techniques.

Leverage random matrix theory and free probability theory to better understand and explain how generative AI models work.

Use the C*-algebra framework to define new expressive and data-efficient models.
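A standard entry point for the random matrix perspective mentioned above is the Marchenko-Pastur law: the eigenvalues of the sample covariance of an i.i.d.-initialized weight matrix concentrate in a predictable bulk, and deviations from that bulk after training carry signal. A minimal numerical check (dimensions chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# An untrained "weight matrix" with i.i.d. N(0, 1/n) entries, as in a random init.
n, p = 2000, 1000            # aspect ratio q = p/n = 0.5
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, p))

# As n, p grow at fixed ratio q, the eigenvalues of W^T W concentrate in the
# Marchenko-Pastur bulk [(1 - sqrt(q))^2, (1 + sqrt(q))^2].
eigs = np.linalg.eigvalsh(W.T @ W)
q = p / n
lo, hi = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2

inside = float(np.mean((eigs > lo - 0.05) & (eigs < hi + 0.05)))
print(inside)  # close to 1.0: almost all eigenvalues fall inside the MP bulk
```

Spectra of trained generative models can then be compared against this baseline: outlier eigenvalues escaping the bulk are candidates for the learned structure that free probability tools aim to explain.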


Developing geometric approaches for generative AI

Leverage invariant theory to construct graph embeddings and improve equivariant neural networks for generative AI.

Study generation capabilities based on metric properties of latent spaces, in particular by designing new distances that better capture geometric properties relevant to the learning process.

Design new approaches to distribution estimation on graph spaces to develop new generative graph models based on diffusion, flow matching, or optimal transport frameworks.
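As a small illustration of metric comparisons between distributions of the kind invoked for latent spaces and optimal transport (the Gaussians below are arbitrary placeholders): in one dimension the Wasserstein-2 distance between empirical measures reduces to an L2 distance between sorted samples, i.e., the quantile coupling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical 1-D "latent" samples from shifted Gaussians.
a = np.sort(rng.normal(0.0, 1.0, size=5000))
b = np.sort(rng.normal(2.0, 1.0, size=5000))

# In 1-D, the optimal transport plan matches sorted samples (quantile coupling),
# so the empirical Wasserstein-2 distance is just an L2 distance after sorting.
w2 = float(np.sqrt(np.mean((a - b) ** 2)))
print(round(w2, 1))  # for N(0,1) vs N(2,1), the exact W2 distance is 2.0
```

In higher dimensions, and on graph spaces, no such closed form exists, which is precisely where new distances and estimation schemes are needed.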

Consortium

Université Jean Monnet Saint-Etienne, CNRS, Inria, Université Côte d’Azur, Aix-Marseille Université, Université de Toulouse, Ecole Centrale de Marseille, INSA Rouen Normandie, Université de Rouen Normandie

