EMERGENCES

Near-physics emerging models for embedded AI

Preview

A breakthrough in emerging physics-based models, exploring different computational models that exploit the properties of different physical devices.

Marina Reyboz, Research Director at CEA

Gilles Sassatelli, Research Director at CNRS

The EMERGENCES project aims to advance the state of the art in emerging physics-based models by collaboratively exploring various computational models that use the properties of different physical devices. The project focuses on bio-inspired event-driven models, physics-inspired models and innovative physics-based machine learning solutions. EMERGENCES also intends to extend collaborative research activities beyond the consortium’s perimeter, in conjunction with other PEPR projects and with laboratories outside the consortium.

Key words: Emerging AI models, embedded AI, Near-physics AI, energy efficiency, Edge AI

Missions

Our research


Spiking neural networks and event-based models

Define efficient hardware implementations, fusion with neuromorphic sensors, and multimodality:

– demonstrate the growing maturity of SNNs with training and design flows for energy-proportional spiking hardware

– exploit the sparsity intrinsic to event-driven sensors, which natively output sparse event activity, for energy efficiency (see the sketch after this list)

– investigate the use of multimodality to improve accuracy, in connection with the neuroscience community
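
As a purely illustrative sketch of the energy-proportionality argument (not the project's hardware or design flow), the short Python snippet below models a leaky integrate-and-fire (LIF) layer driven by sparse input events. The layer sizes and neuron parameters are invented; the point is only that synaptic work scales with the number of events rather than with the layer dimensions.

import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 128, 32                               # layer sizes (illustrative)
weights = rng.normal(0.0, 0.5, size=(n_in, n_out))  # dense synaptic weights
v = np.zeros(n_out)                                 # membrane potentials
leak, threshold = 0.9, 1.0                          # invented neuron parameters

def lif_step(active_inputs):
    # Advance the layer by one time step given the indices of input spikes.
    # Synaptic work is O(len(active_inputs) * n_out): it scales with the
    # number of events, not with the full input dimension.
    global v
    if len(active_inputs) > 0:
        v = v + weights[active_inputs].sum(axis=0)
    v = v * leak                                    # passive leak
    fired = np.flatnonzero(v >= threshold)
    v[fired] = 0.0                                  # reset neurons that spiked
    return fired

# Sparse input stream: on average about 2% of inputs spike per time step.
for t in range(10):
    events = np.flatnonzero(rng.random(n_in) < 0.02)
    out = lif_step(events)
    print(f"t={t}: {len(events)} input events -> {len(out)} output spikes")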


Disruptive physics-inspired models

Develop more efficient and accurate training methods for probabilistic/Bayesian neural networks by exploring algorithms inspired by the brain and by physics.

  • Explore implementation opportunities for Energy-Based Models at the technological level, exploiting analog and emerging memory technologies (see the sketch after this list)
  • Investigate brain- and physics-inspired algorithms for such models
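
The sketch below is a minimal, self-contained illustration of the Energy-Based Model idea in its simplest Hopfield form, not one of the project's models: a quadratic energy E(x) = -1/2 x^T W x is lowered by asynchronous local updates, so computation amounts to relaxing into an energy minimum. The stored patterns, Hebbian weights and update schedule are all invented for illustration; in an analog implementation the weight matrix would be held in device conductances.

import numpy as np

rng = np.random.default_rng(1)

# Two bipolar patterns stored with the Hebbian rule (illustrative data only).
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1]], dtype=float)
W = patterns.T @ patterns
np.fill_diagonal(W, 0.0)

def energy(x):
    # Quadratic Hopfield energy E(x) = -1/2 x^T W x
    return -0.5 * x @ W @ x

# Start from a corrupted copy of the first pattern and let it relax.
x = patterns[0].copy()
x[[1, 4]] *= -1                                   # flip two bits
print("initial energy:", energy(x))

for sweep in range(3):
    for i in rng.permutation(len(x)):             # asynchronous updates
        h = W[i] @ x                              # local field on neuron i
        if h != 0:
            x[i] = 1.0 if h > 0 else -1.0         # each flip lowers (or keeps) E
    print(f"after sweep {sweep}: energy = {energy(x):.1f}")

print("recovered the stored pattern:", np.array_equal(x, patterns[0]))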

Near-physics design for machine learning

Improve the energy efficiency of deep learning models for inference and learning.

Two aspects will be taken into account: 

  • hardware/software co-optimization of emerging algorithms, such as attentional layers or incremental learning
  • hardware architecture investigations to leverage the benefits of emerging technologies (see the sketch after this list)
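
To give a concrete flavour of why hardware/software co-optimization matters, the sketch below models, in plain NumPy, an idealized analog in-memory matrix-vector multiply: ideal weights are quantized to a small number of conductance levels and perturbed by programming noise before the multiply is carried out "in the array". The device parameters (16 levels, 2% noise) are invented for illustration and do not describe any technology targeted by the project.

import numpy as np

rng = np.random.default_rng(2)

def to_device_weights(w, levels=16, noise_std=0.02):
    # Map ideal weights to a few discrete conductance levels plus programming
    # noise; the level count and noise amplitude are made up for illustration.
    w_max = np.abs(w).max()
    step = 2 * w_max / (levels - 1)
    w_q = np.round(w / step) * step
    return w_q + rng.normal(0.0, noise_std * w_max, size=w.shape)

def analog_mvm(x, g):
    # Idealized crossbar read-out: currents sum along columns (Ohm/Kirchhoff),
    # which is exactly a matrix product of activations with conductances.
    return x @ g

w = rng.normal(0.0, 1.0, size=(64, 10))   # weights of a small dense layer
x = rng.normal(0.0, 1.0, size=(8, 64))    # a batch of activations

y_ref = x @ w                              # ideal digital result
y_dev = analog_mvm(x, to_device_weights(w))

rel_err = np.linalg.norm(y_dev - y_ref) / np.linalg.norm(y_ref)
print(f"relative error introduced by the device model: {rel_err:.2%}")

Co-optimization then amounts to choosing algorithms and training procedures whose accuracy degrades gracefully under such device-level effects.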

Consortium

CEA, CNRS, Université Côte d’Azur, Université Aix-Marseille, Université de Bordeaux, Université de Lille, Université de Paris-Saclay, Université Grenoble Alpes, INSA Rennes

Other projects

  • SHARP: Sharp theoretical and algorithmic principles for frugal ML
  • HOLIGRAIL: Holistic approaches to greener model architectures for inference and learning
  • ADAPTING: Adaptive architectures for embedded artificial intelligence
  • REDEEM: Resilient, decentralized and privacy-preserving machine learning
  • CAUSALI-T-AI: When causality and AI team up to enhance interpretability and robustness of AI algorithms
  • FOUNDRY: The foundations of robustness and reliability in artificial intelligence
  • SAIF: Safe AI through formal methods
  • PDE-AI: Numerical analysis, optimal control and optimal transport for AI / "New architectures for machine learning"