3IA PhD/Postdoc Seminar #21

Published on September 21, 2022. Updated on January 4, 2023.

On January 13, 2023

from 10:30am to 12:00pm


10:30 - 11:00
David Loiseaux (Inria, Data Shape team)
Chair of Jean-Daniel Boissonnat

Towards multiparameter persistent homology descriptors for machine learning

The development of data science and data acquisition technologies in industry over the last decade has led to the emergence of enormous datasets. In the context of Machine Learning, this induces several challenges, both on the theoretical side and on the practical side. On one hand, the so-called curse of dimensionality prevents the construction of usual statistics directly from such datasets; on the other hand, the size of these datasets constrains the complexity of the algorithms we can use. Topological Data Analysis (TDA) aims at proposing solutions to these issues for geometric datasets, by computing concise and interpretable geometric features that can then be used with various machine learning techniques, such as classification, statistical regularization, clustering, and visualization. Surprisingly, many general machine learning problems and datasets, ranging from time series to medical images, can be framed as geometric questions. This wide range of applications has highlighted the usefulness of TDA tools, which have attracted a lot of attention in recent years. In this talk, I will introduce the main descriptor of TDA, Persistent Homology, as well as its generalization, Multiparameter Persistent Homology.
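As a concrete illustration of the descriptor the talk introduces (not taken from the abstract itself), here is a minimal sketch of 0-dimensional persistent homology for a Vietoris–Rips filtration on a point cloud, computed with a union-find over edges sorted by length. The function name `zeroth_persistence` and the example points are illustrative choices, not anything specified by the speaker; production TDA work would use a dedicated library rather than this sketch.

```python
import math
from itertools import combinations

def zeroth_persistence(points):
    """0-dimensional persistent homology of a Vietoris-Rips filtration.

    Each point spawns a connected component at filtration value 0; when two
    components merge at edge length d, one component's bar dies at d.
    Returns a list of (birth, death) bars, with math.inf for the component
    that survives forever.
    """
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Edges of the complete graph on the points, sorted by Euclidean length.
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )

    bars = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:              # two components merge: one bar dies at d
            parent[ri] = rj
            bars.append((0.0, d))
    bars.append((0.0, math.inf))  # one component never dies
    return bars

# Two well-separated clusters: short bars are noise, the one long finite bar
# (plus the infinite bar) reveals the two-cluster structure.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11)]
print(zeroth_persistence(pts))
```

The interpretation is the usual one in TDA: long bars correspond to robust topological features (here, connected components), short bars to sampling noise. Multiparameter persistent homology, the generalization discussed in the talk, filters by several parameters at once (e.g. scale and density) instead of the single edge-length parameter used above.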

11:00 - 11:30 
Alessandro Betti (Inria, Maasai team)
Chair of Marco Gori

Deep Learning to See

The remarkable progress in computer vision on object recognition achieved by deep convolutional neural networks in the last few years is strongly connected with the availability of huge labeled datasets paired with strong and suitable computational resources. Clearly, the corresponding supervised communication protocol between machines and their visual environments is far from natural. Current deep learning approaches based on supervised images mostly neglect the crucial role of temporal coherence: when computer scientists began to cultivate the idea of interpreting natural video, they simplified the problem by removing time, the connecting wire between frames. As soon as we decide to frame visual learning processes in their own natural video environment, we realize that perceptual visual skills cannot emerge from massive supervision on different object categories. Foveated animals move their eyes, which means that even still images are perceived as patterns that change over time. Since information is interwoven with motion, we propose to explore the consequences of stressing the assumption that the focus on motion is in fact nearly "all that we need". When trusting this viewpoint, one realizes that time plays a crucial role, and that the underlying computational model must refer to single pixels at a certain time.

11:30 - 12:00

Open discussion on the two contributions
