Chairs | Core elements of AI


Knowledge representation and reasoning 

  • Combining machine learning with symbolic methods 
  • Web-based knowledge representation and processing 
  • Bridging unstructured, structured and semantic data 
  • Reasoning on complex, heterogeneous, dynamic networks 

Interpretable, explainable and trustworthy AI 

  • Traceable knowledge representation 
  • Ontology-based pruning and specialization 
  • Certified AI algorithms and data security 
  • Standardization and future legislation of AI 

Statistical, machine and deep learning 

  • Unsupervised/self-supervised learning 
  • Learning with heterogeneous data 
  • Optimal transport and mean-field games 
  • Topological and geometrical data analysis 

Constraint-aware AI 

  • Small data, active learning, approximate methods
  • Distributed and federated AI/edge AI 
  • Online/real-time learning and decision-making 
  • Reasoning under and against uncertainty

3IA International Chair 

2019

Marco Gori (University of Siena)

Learning and reasoning with constraints

Learning and inference are traditionally regarded as two opposing, yet complementary and puzzling, components of intelligence. In the last few years, Prof. Gori has been carrying out research on constraint-based models of agent-environment interactions, with the main purpose of unifying learning, inference, and reasoning within the same mathematical framework. The unification is based on the abstract notion of a constraint, which provides a representation both of knowledge granules gained from the interaction with the environment and of supervised examples. The theory offers a natural bridge between the formalization of knowledge, expressed by logic formalisms, and the inductive acquisition of concepts from data.
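As a purely illustrative sketch of this idea (the toy data, the predicates A and B, the rule A(x) => B(x), and the penalty form below are assumptions made for the example, not Prof. Gori's actual formulation), the following Python snippet trains two predictors from a handful of labelled examples while penalising violations of the logical rule on unlabelled points, so that supervision and logical knowledge enter one differentiable objective:

    # Illustrative only: unify supervised examples and a logic rule as soft
    # constraints in a single differentiable objective (assumed toy setup).
    import torch

    torch.manual_seed(0)

    # Two unary predicates A and B, each modelled by a small classifier on 2-D inputs.
    pred_A = torch.nn.Sequential(torch.nn.Linear(2, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1))
    pred_B = torch.nn.Sequential(torch.nn.Linear(2, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1))

    # A few supervised examples (pointwise constraints) for A and B.
    x_sup = torch.randn(8, 2)
    y_A = (x_sup[:, 0] > 0).float().unsqueeze(1)
    y_B = y_A.clone()  # labels chosen to be consistent with the rule A(x) => B(x)

    # Unlabelled points on which only the logic rule is enforced.
    x_unsup = torch.randn(200, 2)

    bce = torch.nn.BCEWithLogitsLoss()
    opt = torch.optim.Adam(list(pred_A.parameters()) + list(pred_B.parameters()), lr=1e-2)

    for step in range(500):
        opt.zero_grad()
        # Supervised examples enter the objective through the usual pointwise loss.
        sup_loss = bce(pred_A(x_sup), y_A) + bce(pred_B(x_sup), y_B)
        # The rule A(x) => B(x) becomes a penalty: p_A should not exceed p_B.
        p_A = torch.sigmoid(pred_A(x_unsup))
        p_B = torch.sigmoid(pred_B(x_unsup))
        rule_loss = torch.relu(p_A - p_B).mean()
        loss = sup_loss + 1.0 * rule_loss  # the constraint weight is a free choice
        loss.backward()
        opt.step()

In this toy setting, both sources of knowledge play the same role in the objective: labelled points act as pointwise constraints, while the logic formula contributes a penalty proportional to its degree of violation.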

3IA Chair holders

CHAIRS 2019

Charles Bouveyron (Université Côte d'Azur)

Generative models for unsupervised and deep learning with complex data 

We focus on learning problems that are made difficult by real-world constraints, such as unsupervised deep learning, choosing a deep architecture for a given situation, and learning from heterogeneous data or in ultra-high-dimensional scenarios. We seek to develop deep generative models that encode sparsity priors to address these issues. 

Fabien Gandon (Inria)

Combining artificial and augmented intelligence techniques on and through the web  

Formalizing knowledge-based models and designing algorithms to manage interactions between different forms of artificial intelligence (e.g. rule-based, connectionist, and evolutionary) and natural intelligences (e.g. individual users and crowds) on the web. 

Marco Lorenzi (Inria) 

Interpretability and security of statistical learning in healthcare

Statistical learning in healthcare must ensure interpretability and compliance with secured data access. To tackle this problem, I will focus on 1) interpretable biomedical data modeling via probabilistic inference of dynamical systems, and 2) variational inference in federated learning for the modeling of multicentric brain imaging and genetics data.

Xavier Pennec (Inria)

Geometric statistics and geometric subspace learning

We study the impact of the topology (singularities) and geometry (non-linearity) of the data and model spaces on statistical learning, with applications to computational anatomy and the life sciences. The tenet is that geometry is critical when learning under real-world constraints such as small data and limited computational resources.

Jean-Charles Régin (Université Côte d'Azur)

Decision intelligence

We are designing explainable decision-making processes that satisfy real-world constraints in a multi-objective environment and handle incomplete, fuzzy or stochastic data. 

Serena Villata (CNRS)

Artificial argumentation for humans

The goal of my research is to design and create intelligent machines with the ability to communicate with, collaborate with, and augment people more effectively. To achieve this challenging goal, intelligent machines need to understand human language, emotions, intentions, and behaviors, interact at multiple scales, and be able to explain their decisions. 

CHAIRS 2021

Elena Cabrio (Université Côte d'Azur)

AI and natural language

The goal of my research is to design debating technologies for advanced decision support systems, to support the exchange of information and opinions in different domains (such as healthcare and politics), leveraging interdisciplinarity and advances in machine learning for Natural Language Processing.

Motonobu Kanagawa (EURECOM)

Machine Learning for Computer Simulation

Computer simulation is widely used to support high-impact decision-making (e.g., policies on climate change and Covid-19), but its reliability depends on how accurately simulations can imitate reality. This project develops machine learning methods to improve a simulator’s reliability and the resulting decision-making. 

Pierre-Alexandre Mattei (Inria)

Deep learning for dirty data: a statistical perspective 

The successes of machine learning remain limited to clean and curated data sets. By contrast, real-world data are generally much messier. We work on designing new machine learning models that can deal with “dirty” data sets that may contain missing values, anomalies, or may not be properly normalised. Collaborators include doctors and astronomers.
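As a hedged illustration of one basic ingredient for handling missing values (not the chair's actual models; the toy data, the observation mask, and the plain autoencoder below are assumptions made for the example), the following Python snippet fits an autoencoder to incomplete data by computing the reconstruction loss only on observed entries:

    # Illustrative only: fit a plain autoencoder to data with missing entries by
    # restricting the reconstruction loss to observed values (assumed toy setup).
    import torch

    torch.manual_seed(0)

    x = torch.randn(256, 10)                   # toy "complete" data
    mask = (torch.rand_like(x) > 0.3).float()  # 1 = observed, 0 = missing
    x_obs = x * mask                           # missing entries zero-filled

    model = torch.nn.Sequential(
        torch.nn.Linear(10, 4), torch.nn.ReLU(), torch.nn.Linear(4, 10)
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)

    for step in range(300):
        opt.zero_grad()
        recon = model(x_obs)
        # Squared error is averaged over observed entries only.
        loss = ((recon - x_obs) ** 2 * mask).sum() / mask.sum()
        loss.backward()
        opt.step()

    # The fitted model can then be used to fill in the missing entries.
    imputed = x_obs + (1 - mask) * model(x_obs).detach()

Restricting the loss to the observed mask is what allows training without first imputing or discarding incomplete rows; the fitted model can then be used to fill in the missing values.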

Giovanni Neglia (Inria)

Pervasive Sustainable Learning Systems (PERUSALS) 

PERUSALS (Pervasive Sustainable Learning Systems) seeks to identify design principles of Internet-scale distributed learning systems, with a focus on the tradeoff between performance (in particular training and inference times), economic and environmental costs, and privacy.


CHAIRS 2022

Vincent Vandewalle (Université Côte d'Azur)

Finding structures in heterogeneous data

We study data that are heterogeneous both in kind and in distribution. Our ambition is to discover structures in the data that help the user understand it and make decisions. We focus on designing generative models able to reveal several clustering viewpoints, and we will adapt them to the deep-learning setting. We collaborate with doctors and retailers.

3IA Affiliate Chair 

Freddy Limpens (Inria) | Mnemotix

Big knowledge graph evolution and life cycle

When it comes to deploying information systems based on big knowledge graphs (KGs) in the real world, whether for distributed organisations or for small and large companies, some fundamental problems remain unsolved. Our research focuses on solving some of them, such as managing the history and evolution of big KGs, offering rapid access to KG data, automating data validation, and providing a generic templating language for exploiting query results more easily.