Core elements of AI

Developing “core AI” models and algorithms for real-world problems.

Knowledge representation and reasoning 

Combining machine learning with symbolic methods 
Web-based knowledge representation and processing 
Bridging unstructured, structured and semantic data 
Reasoning on complex heterogeneous dynamic networks 

Interpretable, explainable and trustworthy AI 

Traceable knowledge representation 
Ontology-based pruning and specialization 
Certified AI algorithms and data security 
Standardization and future legislation of AI 

Statistical, machine and deep learning 

Unsupervised/self-supervised learning 
Learning with heterogeneous data 
Optimal transport and mean-field games 
Topological and geometrical data analysis 

Constraint-aware AI 

Small data, active learning, approximate methods 
Distributed and federated AI/edge AI 
Online/real-time learning and decision 
Reasoning under and against uncertainty

3IA Chair holders

3IA International Chair

Marco Gori (University of Siena)

Full professor of computer science at the University of Siena and head of SAILab (Siena Artificial Intelligence Lab).

3IA Chairs awarded in 2019

Jean-Daniel Boissonnat (Inria) - Topological data analysis  

We are studying the mathematical, statistical, algorithmic and applied aspects of topological data analysis, a fast-growing field with a well-founded theory that is attracting increasing interest both in fundamental research and in industry. Our ambition is to uncover, understand and exploit the topological and geometric structures underlying complex data. 
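
As an illustration of the algorithmic side of topological data analysis, the sketch below computes a persistence diagram from a point cloud with the GUDHI library; the library choice, toy data and parameter values are assumptions made for illustration, not part of the chair description.

# Minimal sketch (assumption): persistent homology of a point cloud with GUDHI.
import numpy as np
import gudhi

# Sample points on a noisy circle; the underlying loop should appear as a long-lived 1-cycle.
theta = np.random.uniform(0, 2 * np.pi, 200)
points = np.column_stack([np.cos(theta), np.sin(theta)]) + 0.05 * np.random.randn(200, 2)

# Build a Vietoris-Rips complex and compute persistence up to dimension 1.
rips = gudhi.RipsComplex(points=points, max_edge_length=1.0)
simplex_tree = rips.create_simplex_tree(max_dimension=2)
diagram = simplex_tree.persistence()

# Each entry is (dimension, (birth, death)); long intervals indicate robust topological features.
print(diagram[:5])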

Charles Bouveyron (Université Côte d’Azur) - Generative models for unsupervised and deep learning with complex data  

We focus on learning problems that are made difficult by real-world constraints, such as unsupervised deep learning, choosing a deep architecture for a given situation, learning from heterogeneous data or in ultra-high-dimensional scenarios. We seek to develop deep generative models, encoding sparsity priors, to address those issues. 
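
One simple way to make the idea of a sparsity prior concrete is the sketch below: a variational autoencoder whose training loss adds an L1 penalty on the latent codes. The architecture, dimensions and penalty are illustrative assumptions, not the chair's actual models.

# Minimal sketch (assumption): a VAE with an L1 sparsity penalty on latent codes.
import torch
import torch.nn as nn

class SparseVAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=128, z_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar, z

def loss_fn(x, x_hat, mu, logvar, z, sparsity_weight=1e-3):
    recon = nn.functional.binary_cross_entropy(x_hat, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL to standard normal prior
    sparsity = z.abs().sum()                                      # L1 term encouraging sparse codes
    return recon + kl + sparsity_weight * sparsity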

François Delarue (Université Côte d’Azur) - Mean field multi-agent systems in AI

We study AI systems with a large number of rational agents with mean field interactions. Theoretical questions remain open, specifically when related Nash or Pareto equilibria are not unique, and thus corresponding numerical and learning methods are key issues. Applications include neural networks, power grids, crowd management, cybersecurity, etc. 
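
For reference, the canonical mean field game model couples a Hamilton-Jacobi-Bellman equation for the value function of a representative agent with a Fokker-Planck equation for the population distribution; this is the standard textbook formulation, not a statement of the chair's specific models:

\begin{aligned}
-\partial_t u - \nu\,\Delta u + H(x, \nabla u) &= f(x, m(t)), & u(T, x) &= g(x, m(T)),\\
\partial_t m - \nu\,\Delta m - \operatorname{div}\bigl(m\,\partial_p H(x, \nabla u)\bigr) &= 0, & m(0) &= m_0,
\end{aligned}

where u is the value of a representative agent, m the distribution of the population, and H the Hamiltonian; a Nash equilibrium corresponds to a solution of the coupled system, and non-uniqueness of such solutions is precisely what complicates the numerical and learning methods mentioned above.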

Maurizio Filippone (EURECOM) - Probabilistic machine learning 

Probabilistic machine learning offers a principled framework for quantification of uncertainty across various sciences. The Chair will tackle three major modeling and computational issues: (i) the need to develop practical and scalable tools for accurate quantification of uncertainty, (ii) the lack of interpretability, and (iii) the unsustainable trend in energy consumption. 
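
As a concrete instance of uncertainty quantification in probabilistic machine learning, the short sketch below fits a Gaussian process regressor and reports a predictive standard deviation alongside each prediction; scikit-learn and the toy data are assumptions made for illustration.

# Minimal sketch (assumption): Gaussian process regression with predictive uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy 1-D regression data.
X = np.linspace(0, 10, 30).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * np.random.randn(30)

# An RBF kernel plus a noise term; hyperparameters are tuned by marginal likelihood.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1))
gp.fit(X, y)

# The posterior returns a mean and a standard deviation, i.e. an error bar per prediction.
X_test = np.linspace(0, 12, 5).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)
print(np.column_stack([mean, std]))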

Rémi Flamary (Université Côte d’Azur) - Optimal transport for machine learning 

The main objective of this project is to change the way we learn from empirical data using optimal transport. We will first investigate optimal transport for transfer learning with biomedical and astronomical applications. Second, we will adapt the Gromov-Wasserstein distance for structured data and transfer between deep learning models with different architectures. 
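
The sketch below illustrates the two ingredients mentioned above with the POT (Python Optimal Transport) library: an exact transport plan between two empirical distributions, and a Gromov-Wasserstein coupling between two samples compared only through their internal distance matrices. The library choice and toy data are assumptions made for illustration.

# Minimal sketch (assumption): optimal transport and Gromov-Wasserstein with POT.
import numpy as np
import ot

# Two small empirical distributions in 2-D with uniform weights.
xs = np.random.randn(20, 2)
xt = np.random.randn(30, 2) + np.array([4.0, 0.0])
a = np.full(20, 1.0 / 20)
b = np.full(30, 1.0 / 30)

# Exact (Wasserstein) transport plan from the pairwise squared Euclidean cost matrix.
M = ot.dist(xs, xt)       # cost matrix, squared Euclidean by default
G = ot.emd(a, b, M)       # optimal coupling between the two point clouds

# Gromov-Wasserstein compares the samples through their intra-domain distance matrices,
# which is what makes it usable across spaces or architectures of different dimensions.
C1 = ot.dist(xs, xs)
C2 = ot.dist(xt, xt)
Ggw = ot.gromov.gromov_wasserstein(C1, C2, a, b, loss_fun='square_loss')
print(G.shape, Ggw.shape)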

Fabien Gandon (Inria) - Combining artificial and augmented intelligence techniques on and through the web  

We formalize knowledge-based models and design algorithms to manage interactions between different forms of artificial intelligence (e.g. rule-based, connectionist and evolutionary) and natural intelligence (e.g. individual users and crowds) on the web. 
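
As a small illustration of web-based knowledge representation and processing, the sketch below builds an RDF graph and queries it with SPARQL; rdflib and the toy triples are assumptions, not the chair's actual systems.

# Minimal sketch (assumption): an RDF graph queried with SPARQL using rdflib.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()

# A few toy triples describing researchers and topics.
g.add((EX.alice, RDF.type, EX.Researcher))
g.add((EX.alice, EX.worksOn, EX.KnowledgeRepresentation))
g.add((EX.bob, RDF.type, EX.Researcher))
g.add((EX.bob, EX.worksOn, EX.MachineLearning))

# SPARQL query: who works on which topic?
q = """
SELECT ?person ?topic WHERE {
    ?person a <http://example.org/Researcher> ;
            <http://example.org/worksOn> ?topic .
}
"""
for person, topic in g.query(q):
    print(person, topic)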

Marco Lorenzi (Inria) - Interpretability and security of statistical learning in healthcare 

Statistical learning in healthcare must ensure interpretability and compliance with secured data access. To tackle this problem, I will focus on 1) interpretable biomedical data modeling via probabilistic inference of dynamical systems, and 2) variational inference in federated learning for the modeling of multicentric brain imaging and genetics data. 
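
To make the federated setting concrete, here is a minimal sketch of federated averaging for a linear model, where each center updates a local copy of the parameters and only the aggregated parameters leave the site. This is the standard FedAvg recipe used for illustration, not the chair's variational method.

# Minimal sketch (assumption): federated averaging (FedAvg) for linear regression with numpy.
import numpy as np

def local_update(w, X, y, lr=0.01, epochs=5):
    # A few gradient steps on one center's private data; only w is shared afterwards.
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three centers with private data drawn around the same ground-truth weights.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
centers = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ w_true + 0.1 * rng.normal(size=100)
    centers.append((X, y))

# Federated rounds: broadcast global weights, update locally, average the results.
w_global = np.zeros(2)
for _ in range(20):
    local_weights = [local_update(w_global, X, y) for X, y in centers]
    w_global = np.mean(local_weights, axis=0)
print(w_global)  # approaches w_true without raw data ever leaving a center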

Xavier Pennec (Inria) - Geometric statistics and geometric subspace learning 

We study the impact of topology (singularities) and geometry (non-linearity) of the data and model spaces on statistical learning, with applications to computational anatomy and the life sciences. The tenet is that geometry is critical when learning under real-world constraints such as small data and limited computational resources. 
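
A basic object in geometric statistics is the Fréchet mean, the point minimizing the sum of squared geodesic distances to the data. The sketch below computes it on the unit sphere by Riemannian gradient descent; this is a textbook construction, with the data and iteration counts assumed for illustration.

# Minimal sketch (assumption): Fréchet mean on the unit sphere via Riemannian gradient descent.
import numpy as np

def log_map(p, q):
    # Tangent vector at p pointing towards q along the sphere geodesic.
    d = np.arccos(np.clip(p @ q, -1.0, 1.0))   # geodesic distance
    v = q - (p @ q) * p                         # projection onto the tangent space at p
    n = np.linalg.norm(v)
    return np.zeros_like(p) if n < 1e-12 else d * v / n

def exp_map(p, v):
    # Move from p along the tangent vector v while staying on the sphere.
    n = np.linalg.norm(v)
    return p if n < 1e-12 else np.cos(n) * p + np.sin(n) * v / n

def frechet_mean(points, steps=50):
    m = points[0]
    for _ in range(steps):
        # The mean of log maps is (minus) the Riemannian gradient of the sum of squared distances.
        m = exp_map(m, np.mean([log_map(m, q) for q in points], axis=0))
    return m

# Points clustered around the north pole of S^2.
rng = np.random.default_rng(1)
pts = rng.normal(loc=[0, 0, 1], scale=0.1, size=(20, 3))
pts = pts / np.linalg.norm(pts, axis=1, keepdims=True)
print(frechet_mean(pts))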

Jean-Charles Régin (Université Côte d’Azur) - Decision intelligence 

We are designing explainable decision-making processes that satisfy real-world constraints in a multi-objective environment, including incomplete, fuzzy or stochastic data. 
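
As a toy example of explainable, constraint-based decision making, the sketch below states a small decision problem declaratively with Google OR-Tools CP-SAT, so the constraints that justify the solution remain explicit; the solver choice and the scenario are assumptions made for illustration.

# Minimal sketch (assumption): a small constraint-based decision problem with OR-Tools CP-SAT.
from ortools.sat.python import cp_model

model = cp_model.CpModel()

# Decision variables: how many units of two activities to run.
x = model.NewIntVar(0, 10, 'x')
y = model.NewIntVar(0, 10, 'y')

# Explicit constraints: a budget limit and a minimum service level.
model.Add(3 * x + 2 * y <= 24)   # budget
model.Add(x + y >= 5)            # service level

# Objective: maximize a utility built from the two activities.
model.Maximize(5 * x + 4 * y)

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print('x =', solver.Value(x), 'y =', solver.Value(y))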

Carlos Simpson (CNRS) - AI and mathematics  

My research addresses the interactions between research areas in algebra, category theory and geometry, and machine learning. This includes applications of AI to the classification of interesting algebraic and geometric structures, and the interactions between AI and formal verification of proofs in both directions. 

Andrea G.B. Tettamanzi (Université Côte d’Azur) - Towards an evolutionary epistemology of ontology learning 

I am developing symbolic learning methods based on evolutionary computation to overcome the knowledge acquisition bottleneck in knowledge base construction and enrichment. This project, straddling machine learning and knowledge representation and reasoning, combines symbolic aspects of AI with easily parallelizable computational methods. 
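
The evolutionary side can be illustrated with a bare-bones genetic loop: candidate hypotheses are scored by a fitness function, and the best ones are recombined and mutated. The bit-string encoding and fitness below are placeholders for illustration, not the actual axiom representation used in the project.

# Minimal sketch (assumption): a bare-bones evolutionary loop over bit-string hypotheses.
import random

def fitness(candidate):
    # Placeholder score; in ontology learning this would measure how well a candidate
    # axiom is supported by (and does not contradict) the facts in the knowledge base.
    return sum(candidate)

def evolve(pop_size=30, length=20, generations=50):
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]            # one-point crossover
            i = random.randrange(length)
            child[i] = 1 - child[i]              # point mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

print(fitness(evolve()))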

Serena Villata (CNRS) - Artificial argumentation for humans 

The goal of my research is to design and create intelligent machines with the ability to communicate with, collaborate with, and augment people more effectively. To achieve this challenging goal, intelligent machines need to understand human language, emotions, intentions and behaviors, to interact at multiple scales, and to be able to explain their decisions.