Chairs | Core elements of AI


Knowledge representation and reasoning 

  • Combining machine learning with symbolic methods 
  • Web-based knowledge representation and processing 
  • Bridging unstructured, structured and semantic data 
  • Reasoning on complex heterogeneous dynamic networks 

Interpretable, explainable and trustable AI 

  • Traceable knowledge representation 
  • Ontology-based pruning and specialization 
  • Certified AI algorithms and data security 
  • Standardization and future legislation of AI 

Statistical, machine and deep learning 

  • Unsupervised/self-supervised learning 
  • Learning with heterogeneous data 
  • Optimal transport and mean-field games 
  • Topological and geometrical data analysis 

Constraint-aware AI 

  • Small data, active learning, approximate methods
  • Distributed and federated AI/edge AI 
  • Online/real-time learning and decision 
  • Reasoning under and against uncertainty

3IA International Chair 


Marco Gori (University of Siena)

Learning and reasoning with constraints

Learning and inference are traditionally regarded as two opposite, yet complementary and puzzling, components of intelligence. In the last few years, Prof. Gori has been carrying out research on constraint-based models of environment-agent interactions, with the main purpose of unifying learning, inference, and reasoning within the same mathematical framework. The unification is based on the abstract notion of a constraint, which provides a representation both of knowledge granules gained from interaction with the environment and of supervised examples. The theory offers a natural bridge between the formalization of knowledge, expressed in logical formalisms, and the inductive acquisition of concepts from data.
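The unification idea can be sketched in a toy setting. The snippet below (my own minimal illustration, not Prof. Gori's actual framework; the data, the rule, and all hyperparameters are invented) trains a 1-D logistic classifier on a few supervised examples while a "knowledge granule" — the rule "inputs above 1 are positive" — is enforced on unlabeled points through a soft hinge penalty, so labels and logic enter the same loss:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: a few supervised examples...
X_lab = np.array([-2.0, -1.0, 1.5, 2.5])
y_lab = np.array([0.0, 0.0, 1.0, 1.0])
# ...plus unlabeled points covered only by a knowledge granule:
# "inputs above 1.0 belong to the positive class".
X_unl = np.array([1.2, 1.8, 3.0])

w, b, lr, lam = 0.0, 0.0, 0.1, 1.0
for _ in range(2000):
    # supervised cross-entropy gradient
    p = sigmoid(w * X_lab + b)
    gw = np.mean((p - y_lab) * X_lab)
    gb = np.mean(p - y_lab)
    # constraint penalty: hinge max(0, 0.5 - q) on the rule q(x) >= 0.5
    q = sigmoid(w * X_unl + b)
    viol = np.maximum(0.0, 0.5 - q)
    gq = np.where(viol > 0, -q * (1 - q), 0.0)  # d penalty / d logit
    gw += lam * np.mean(gq * X_unl)
    gb += lam * np.mean(gq)
    w -= lr * gw
    b -= lr * gb
```

Both supervision and the logical rule act as constraints on the same parameters; only their penalty functions differ.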

3IA Chair holders


Jean-Daniel Boissonnat (Inria) 

Topological data analysis  

We are studying the mathematical, statistical, algorithmic and applied aspects of topological data analysis, a fast-growing field with a well-founded theory that is attracting increasing interest in both fundamental research and industry. Our ambition is to uncover, understand and exploit the topological and geometric structures underlying complex data. 
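A small self-contained taste of what "topological structure in data" means: 0-dimensional persistent homology of a point cloud tracks how connected components merge as a scale parameter grows, and long-lived components signal clusters. This is my own minimal union-find sketch on toy points, not the chair's software:

```python
import numpy as np

def h0_persistence(points):
    """0-dimensional persistence of a Vietoris-Rips filtration:
    every point is born at scale 0; a component dies at the length
    of the edge that merges it into another component."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # all pairwise edges, sorted by length (the filtration order)
    edges = sorted(
        (np.linalg.norm(points[i] - points[j]), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)   # one component dies at scale d
    return deaths              # n-1 finite deaths; one class never dies

# two tight pairs far apart: one death is much later than the others
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [5.1, 0.0]])
deaths = h0_persistence(pts)
```

The one large death value (about 4.9 here) is the persistent feature: it reveals that the cloud has two well-separated clusters.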

Charles Bouveyron (Université Côte d'Azur)

Generative models for unsupervised and deep learning with complex data 

We focus on learning problems that are made difficult by real-world constraints, such as unsupervised deep learning, choosing a deep architecture for a given situation, learning from heterogeneous data or in ultra-high-dimensional scenarios. We seek to develop deep generative models, encoding sparsity priors, to address those issues. 

François Delarue (Université Côte d'Azur) 

Mean field multi-agent systems in AI

We study AI systems with a large number of rational agents with mean field interactions. Theoretical questions remain open, especially when the related Nash or Pareto equilibria are not unique, which makes the corresponding numerical and learning methods key issues. Applications include neural networks, power grids, crowd management, cybersecurity, etc. 

Maurizio Filippone (EURECOM)

Probabilistic machine learning 

Probabilistic machine learning offers a principled framework for quantification of uncertainty across various sciences. The Chair will tackle three major modeling and computational issues: (i) the need to develop practical and scalable tools for accurate quantification of uncertainty, (ii) the lack of interpretability, and (iii) the unsustainable trend in energy consumption. 
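As a concrete instance of principled uncertainty quantification, conjugate Bayesian linear regression gives a closed-form posterior over weights and hence calibrated predictive error bars that grow away from the data. The sketch below is a standard textbook construction on synthetic data (my own toy example, with invented prior and noise settings), not the chair's specific models:

```python
import numpy as np

# synthetic 1-D regression data
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 1))
y = 0.8 * X[:, 0] + rng.normal(0, 0.3, size=20)

Phi = np.hstack([np.ones((len(X), 1)), X])   # bias + linear feature
alpha, beta = 1.0, 1.0 / 0.3**2              # prior precision, noise precision

# posterior over weights is Gaussian N(m, S) in closed form
S = np.linalg.inv(alpha * np.eye(2) + beta * Phi.T @ Phi)
m = beta * S @ Phi.T @ y

def predict(x):
    phi = np.array([1.0, x])
    mean = phi @ m
    var = 1.0 / beta + phi @ S @ phi         # noise + weight uncertainty
    return mean, np.sqrt(var)

mean_in, std_in = predict(0.0)    # inside the data range: tight error bar
mean_out, std_out = predict(10.0) # far from the data: wide error bar
```

The `phi @ S @ phi` term is what makes the model honest: extrapolations automatically come with larger predictive standard deviations.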

Rémi Flamary (Université Côte d'Azur)

Optimal transport for machine learning 

The main objective of this project is to change the way we learn from empirical data using optimal transport. We will first investigate optimal transport for transfer learning with biomedical and astronomical applications. Second, we will adapt the Gromov-Wasserstein distance for structured data and transfer between deep learning models with different architectures. 
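Entropy-regularised optimal transport, computed with Sinkhorn's algorithm, is the basic workhorse behind such methods. The following is a minimal NumPy sketch on two toy histograms (my own example; the regularisation strength and grid are arbitrary), not the project's code:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.5, iters=1000):
    """Entropy-regularised optimal transport between histograms a and b
    with cost matrix C, via Sinkhorn's matrix-scaling iterations."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan

# move mass between two 1-D histograms on the grid {0, 1, 2, 3}
x = np.arange(4, dtype=float)
C = (x[:, None] - x[None, :]) ** 2       # squared-distance cost
a = np.array([0.5, 0.5, 0.0, 0.0])
b = np.array([0.0, 0.0, 0.5, 0.5])
P = sinkhorn(a, b, C)
cost = np.sum(P * C)                     # regularised transport cost
```

The plan `P` shifts each unit of mass two grid cells to the right, so the transport cost lands near the unregularised optimum of 4 (slightly above it, due to the entropic smoothing).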

Fabien Gandon (Inria)

Combining artificial and augmented intelligence techniques on and through the web  

Formalizing knowledge-based models and designing algorithms to manage interactions between different forms of artificial intelligence (e.g. rule-based, connectionist, and evolutionary) and natural intelligence (e.g. individual users and crowds) on the web. 

Marco Lorenzi (Inria) 

Interpretability and security of statistical learning in healthcare

Statistical learning in healthcare must ensure interpretability and compliance with secured data access. To tackle this problem, I will focus on 1) interpretable biomedical data modeling via probabilistic inference of dynamical systems, and 2) variational inference in federated learning for the modeling of multicentric brain imaging and genetics data.

Xavier Pennec (Inria)

Geometric statistics and geometric subspace learning

We study the impact of topology (singularities) and geometry (non-linearity) of the data and model spaces on statistical learning, with applications to computational anatomy and the life sciences. The tenet is that geometry is critical when learning with limited resources and real-world constraints such as small data and limited computational resources.
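A basic object of geometric statistics is the Fréchet (Karcher) mean: the point on a curved space minimising the sum of squared geodesic distances to the data, computed by iterating the log and exp maps. Below is my own minimal sketch on the unit sphere with toy points, not the chair's implementation:

```python
import numpy as np

def log_map(p, q):
    """Tangent vector at p pointing to q along the sphere geodesic."""
    c = np.clip(p @ q, -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros_like(p)
    v = q - c * p
    return theta * v / np.linalg.norm(v)

def exp_map(p, v):
    """Walk from p along tangent vector v, staying on the unit sphere."""
    t = np.linalg.norm(v)
    if t < 1e-12:
        return p
    return np.cos(t) * p + np.sin(t) * v / t

def frechet_mean(points, iters=50):
    """Gradient descent for the Fréchet mean: repeatedly average the
    data in the tangent space at the current estimate, then map back."""
    p = points[0]
    for _ in range(iters):
        v = np.mean([log_map(p, q) for q in points], axis=0)
        p = exp_map(p, v)
    return p

# three points clustered around the north pole of S^2
pts = np.array([[0.1, 0.0, 1.0], [-0.1, 0.0, 1.0], [0.0, 0.1, 1.0]])
pts = pts / np.linalg.norm(pts, axis=1, keepdims=True)
mean = frechet_mean(pts)
```

On a flat space this reduces to the ordinary average; the non-linearity of the sphere is exactly what the log/exp maps account for.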

Jean-Charles Régin (Université Côte d'Azur)

Decision intelligence

We are designing explainable decision-making processes that satisfy real-world constraints in a multi-objective environment, including incomplete, fuzzy or stochastic data. 

Carlos Simpson (CNRS) 

AI and mathematics

My research addresses the interactions between research areas in algebra, category theory and geometry, and machine learning. This includes applications of AI to the classification of interesting algebraic and geometric structures, and the interactions between AI and formal verification of proofs in both directions. 

Andrea G. B. Tettamanzi (Université Côte d'Azur) 

Towards an evolutionary epistemology of ontology learning

I am developing symbolic learning methods based on evolutionary computation to overcome the knowledge acquisition bottleneck in knowledge base construction and enrichment. This project, straddling machine learning and knowledge representation and reasoning, combines symbolic aspects of AI with easily parallelizable computational methods. 

Serena Villata (CNRS)

Artificial argumentation for humans

The goal of my research is to design and create intelligent machines with the ability to communicate with, collaborate with, and augment people more effectively. To achieve this challenging goal, intelligent machines need to understand human language, emotions, intentions, behaviors, interact at multiple scales, and be able to explain their decisions. 


Elena Cabrio (Université Côte d'Azur)

AI and natural language

The goal of my research is to design debating technologies for advanced decision support systems, to support the exchange of information and opinions in different domains (such as healthcare and politics), leveraging interdisciplinarity and advances in machine learning for Natural Language Processing.

Motonobu Kanagawa (EURECOM)

Machine Learning for Computer Simulation

Computer simulation has been widely used for planning high-impact decision-making (e.g., policies on climate change and Covid-19), but its reliability depends on how accurately simulations can imitate reality. This project develops machine learning methods to improve a simulator's reliability and the resulting decision-making. 

Pierre-Alexandre Mattei (Inria)

Deep learning for dirty data: a statistical perspective 

The successes of machine learning remain limited to clean and curated data sets. By contrast, real-world data are generally much messier. We work on designing new machine learning models that can deal with “dirty” data sets that may contain missing values, anomalies, or may not be properly normalised. Collaborators include doctors and astronomers.
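One simple, statistically grounded way to handle missing values is to fit a joint model on the observed data and impute each gap by its conditional expectation. The sketch below (a toy bivariate-Gaussian example of my own, with invented data and missingness rate, far simpler than the chair's deep generative models) imputes a missing feature from its correlated neighbour:

```python
import numpy as np

# toy "dirty" data: two correlated features, 30% of the second missing
rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(0, 1, n)
x2 = 2.0 * x1 + rng.normal(0, 0.5, n)
X = np.column_stack([x1, x2])
X[rng.random(n) < 0.3, 1] = np.nan

obs = ~np.isnan(X[:, 1])
# fit a bivariate Gaussian on the complete rows only...
mu = X[obs].mean(axis=0)
cov = np.cov(X[obs].T)
# ...and impute each missing x2 by its conditional mean given x1:
# E[x2 | x1] = mu2 + (cov12 / var1) * (x1 - mu1)
slope = cov[0, 1] / cov[0, 0]
X_imp = X.copy()
X_imp[~obs, 1] = mu[1] + slope * (X[~obs, 0] - mu[0])
```

This works because the data here are missing at random; when missingness depends on the unobserved value itself, such plug-in imputation becomes biased, which is one reason richer generative models are needed.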

Giovanni Neglia (Inria)

Pervasive Sustainable Learning Systems (PERUSALS) 

PERUSALS seeks to identify design principles of Internet-scale distributed learning systems, with a focus on the tradeoff between performance (in particular training and inference times), economic and environmental costs, and privacy.
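The canonical building block of such distributed learning systems is federated averaging: clients train locally on their own data and a server averages the resulting models, trading extra local computation for fewer (costly) communication rounds. Here is a minimal NumPy sketch on a toy least-squares problem (my own example with invented client sizes and learning rates, not the project's system):

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, steps=20):
    """A few local gradient steps on one client's data (least squares)."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# synthetic clients sharing one underlying linear model
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for size in (50, 100, 150):                  # heterogeneous client sizes
    X = rng.normal(size=(size, 2))
    y = X @ true_w + rng.normal(0, 0.1, size)
    clients.append((X, y))

# federated averaging: local training, then a size-weighted model average
w = np.zeros(2)
for _ in range(30):                          # communication rounds
    local_models = [local_sgd(w.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients])
    w = np.average(local_models, axis=0, weights=sizes)
```

Raw data never leaves a client; only model parameters are exchanged, and the number of rounds (30 here) is the communication budget the performance/cost tradeoff is about.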



Vincent Vandewalle (Université Côte d'Azur)

Finding structures in heterogeneous data

We study data that are heterogeneous both in kind and in distribution. Our ambition is to discover structures in the data that help the user understand it and make decisions. We focus on designing generative models able to reveal several clustering viewpoints, and we will adapt them to the deep-learning setting. We collaborate with doctors and retailers.


3IA affiliate chairs


Freddy Limpens (Inria) | Mnemotix

Big Knowledge graphs evolution and life cycle

When it comes to deploying information systems based on big Knowledge Graphs (KGs) in the real world, whether for distributed organisations or for small or big companies, some fundamental problems remain unsolved. Our research focuses on several of them: managing the history and evolution of big KGs, offering rapid access to a KG's data, automating data validation, and providing a generic templating language to exploit query results more easily.

2021 - June 2023

Greger Ottosson (Inria) | IBM

Trustworthy AI and Explainable Decisions for Business Automation

As we apply Machine Learning to automate decisions in financial services, healthcare and government, there is increasing user need and regulatory demand for transparency and explainability. Our AI research is focused on explainability for decisions that combine ML-based predictions and rule-based business policies.