Doctoral & Postdoctoral Seminar

Date & Location

  • June 2, 2023
  • Inria Sophia Antipolis

Program

10:30 - 11:00
Nissim Maruani (PhD student, Inria, Titane)
Chair: P. Alliez

VoroMesh: Learning Watertight Surface Meshes with Voronoi Diagrams

In sharp contrast to images, finding a concise, learnable discrete representation of 3D surfaces remains a scientific challenge. In particular, while polygon meshes are arguably the most common surface representation used in geometry processing tools, their irregular and combinatorial structure often makes them unsuitable for learning-based applications. In this work, we present a novel and differentiable Voronoi-based representation of watertight 3D shape surfaces. From a set of 3D points (called generators) and their associated occupancy (inside/outside) with respect to an input shape, we define our boundary representation through the Voronoi diagram of the generators as the subset of Voronoi faces whose two equidistant generators have opposite occupancy: the resulting polygon mesh then forms a watertight approximation of the input shape's boundary. To learn the positions of the generators, we propose a novel loss function that minimizes the distance from ground-truth surface samples to the closest face of the Voronoi diagram without requiring an explicit construction of the entire diagram. We show how the proposed VoroLoss applies either to the direct optimization of generators or to the training of a neural network for inference-based prediction of generators. We demonstrate the geometric efficiency of our representation compared to axiomatic meshing algorithms and recent learning-based mesh representations on the Thingi32 dataset. We also match or outperform recent methods in mesh prediction from SDF inputs on the ABC dataset while guaranteeing closed surfaces without self-intersections.
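As a rough illustration of the geometric idea rather than the talk's exact VoroLoss, the Python/NumPy sketch below pairs each ground-truth surface sample with its nearest inside generator and nearest outside generator and measures the distance to their bisector plane, a quantity that is available without constructing the Voronoi diagram. The pairing rule, function names, and toy data are assumptions made purely for illustration.

```python
import numpy as np

def bisector_distance(x, a, b):
    """Distance from point x to the bisector plane of generators a and b.

    The Voronoi face separating a and b (when it exists) lies on this
    plane, so the distance can be evaluated without building the diagram.
    """
    n = b - a
    m = 0.5 * (a + b)
    return abs(np.dot(x - m, n)) / np.linalg.norm(n)

def voroloss_sketch(samples, generators, occupancy):
    """Toy surrogate of the VoroLoss idea (illustration only).

    Each surface sample is paired with its nearest inside generator and
    its nearest outside generator, and the squared distance to their
    bisector plane is penalized. The actual VoroLoss selects the closest
    Voronoi face more carefully.
    """
    inside = generators[occupancy == 1]
    outside = generators[occupancy == 0]
    loss = 0.0
    for x in samples:
        a = inside[np.argmin(np.linalg.norm(inside - x, axis=1))]
        b = outside[np.argmin(np.linalg.norm(outside - x, axis=1))]
        loss += bisector_distance(x, a, b) ** 2
    return loss / len(samples)

# Tiny example: generators around the unit sphere, samples on its surface.
rng = np.random.default_rng(0)
gens = np.concatenate([rng.uniform(-0.5, 0.5, (4, 3)),   # inside the sphere
                       rng.uniform(1.0, 1.4, (4, 3))])   # outside the sphere
occ = (np.linalg.norm(gens, axis=1) < 1.0).astype(int)
samples = rng.normal(size=(64, 3))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)
print(voroloss_sketch(samples, gens, occ))
```

In practice such a loss would be written in an automatic-differentiation framework so that gradients flow back to the generator positions, whether they are optimized directly or predicted by a network.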

11:00 - 11:30 
Julien Aubert (PhD student, CNRS, LJAD)
Chair: P. Reynaud-Bouret

On the convergence of the MLE as an estimator of the learning rate in the Exp3 algorithm

Imagine that you observe a rat in a maze, progressively learning to find food. How would you guess the learning process it actually uses? This question is of paramount importance in cognitive science, where the problem is not to find the fastest or best learning algorithm for a specific task but to discover the most realistic learning model (always formulated as an algorithm) [1]. Proving that a certain model is better suited to describe reality than others is so crucial in cognitive science that the methodology for fitting any kind of learning algorithm to real learning data is well established [2]. Any scientist wishing to develop their own new learning model can follow the same numerical experiments to test whether their model is realistic or not. The first step of this methodology is performing MLE (Maximum Likelihood Estimation) on the data to estimate the parameters of a model. Recall that we are observing an individual learning a specific task. Therefore, not only do the training data (i.e. the observations) strongly depend on each other, they are also often non-stationary (otherwise the individual could not have learned). Extensive simulations are often required: depending on the set of chosen parameters, not only can a model learn or fail to learn, but there is also often a set of parameters for which the estimator behaves poorly. Unfortunately, there is a lack of theoretical guarantees on whether it is possible to estimate the parameters of these models consistently. Our goal is to prove rigorously what can be said about the properties of the MLE when fitting a learning algorithm to real data. Instead of studying a particular cognitive model, and in order to work within an established general theoretical framework, we focus on the adversarial multi-armed bandit problem. The algorithm we specifically study (Exp3: Exponential weights for Exploration and Exploitation) is probably the simplest algorithm for adversarial bandits [3]. Even though it is not used in the cognition literature, it shares many features with famous cognitive algorithms [4] and has given rise to many variants. In the presentation, we will show in a particular case that trying to estimate constant learning rates leads to poor estimation whatever the estimation procedure: the estimation error decreases more slowly than logarithmically with the number of observations. In the setting where the learning rate decreases polynomially with the number of observations, we show a polynomial decrease of the prediction error and, in a particular case, of the estimation error of a truncated MLE.

References
[1] Botvinick (2008). Hierarchical models of behavior and prefrontal function.
[2] Wilson and Collins (2019). Ten simple rules for the computational modeling of behavioral data.
[3] Lattimore and Szepesvári (2020). Bandit Algorithms.
[4] Gluck and Bower (1988). From conditioning to category learning: An adaptive network model.
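To make the object of study concrete, here is a minimal sketch of an Exp3-style update in Python, namely a common variant that plays a softmax of importance-weighted cumulative rewards with learning rate eta; it omits the extra uniform-exploration mixing used in some formulations, and all names and data are illustrative. Fitting the model in the sense of the talk means maximizing the likelihood of the observed choice sequence over eta.

```python
import numpy as np

def exp3(rewards, eta, rng=None):
    """Minimal Exp3: exponential weights for exploration and exploitation.

    rewards : (T, K) array of adversarially chosen rewards in [0, 1]
    eta     : learning rate, the parameter one would try to recover
              by maximum likelihood from the observed choices
    Returns the sequence of arms played.
    """
    rng = rng or np.random.default_rng()
    T, K = rewards.shape
    cum_est = np.zeros(K)               # importance-weighted reward sums
    choices = np.empty(T, dtype=int)
    for t in range(T):
        logits = eta * cum_est
        p = np.exp(logits - logits.max())
        p /= p.sum()                    # softmax of estimated cumulative rewards
        a = rng.choice(K, p=p)
        choices[t] = a
        cum_est[a] += rewards[t, a] / p[a]   # unbiased importance weighting
    return choices

# Toy run: two arms, the second slightly better on average.
rng = np.random.default_rng(1)
R = rng.binomial(1, [0.4, 0.6], size=(500, 2)).astype(float)
print(exp3(R, eta=0.05, rng=rng)[:20])
```

Because the choice probabilities depend on past rewards through the cumulative estimates, the observed choices are strongly dependent and non-stationary, which is precisely what complicates the MLE analysis described in the abstract.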

11:30 - 12:00

Open discussion on the two contributions

Save the Date!
Next Seminar
  • July 7, 2023 at Inria
Past Seminars

Doctoral & Postdoctoral Seminar #25

Speakers:

  • Stefano Spaziani (CNRS) | Estimating the functional connectivity in the brain via a multiscale spike-LFP autoregressive model
  • Virginia d'Auria (invited researcher) | Quantum information science
See details

Doctoral & Postdoctoral Seminar #24

Speakers:

  • Zhijie Fang (Inria) | Landmark detection via convolutional neural network
  • Josué Tchouanti (CNRS, LJAD) | Detection of neural synchronization and implication for neuroscience experimental design
See details

Doctoral & Postdoctoral Seminar #23

Speakers:

  • Thi Khuyen Le (Inria) | Comparison of handcrafted radiomics and 3D-CNN models to diagnose striatal dopamine deficiency in Parkinsonian syndromes based on 18F-FDOPA PET images
  • Irene Balelli (external researcher - Inria) | The search for causality in the analysis and modeling of biomedical data
See details

Doctoral & Postdoctoral Seminar #22

Speakers:

  • Valerya Strizhkova (Inria) | Multi-View Video Masked Autoencoder for Emotion Recognition
  • Lucie Cadorel (invited outside 3IA - I3S & Inria) | Geospatial Knowledge in Real Estate Listings: Extracting and Localizing Uncertain Spatial Information from Text
See details

Doctoral & Postdoctoral Seminar #21

Speakers:

  • David Loiseaux (Inria) | Towards multiparameter persistent homology descriptors for machine learning
  • Alessandro Betti (Inria) | Deep Learning to See
See details

Doctoral & Postdoctoral Seminar #20

Speakers:

  • Yacine Khacef (Université Côte d'Azur) | High-Resolution Traffic Monitoring with Distributed Acoustic Sensing and AI
  • Daniel Inzunza (Inria) | A PINN approach for traffic state estimation and model calibration based on loop detector flow data
See details

Doctoral & Postdoctoral Seminar #19

Speakers:

  • Lucia Innocenti (Inria) | Analysis of Multi-centric AI-based frameworks in prostate segmentation
  • Riccardo Taiello (Inria) | Privacy Preserving Image Registration

See details

Doctoral & Postdoctoral Seminar #18

Speakers:

  • Rémi Felin (UCA, I3S) | Optimizing the Computation of a Possibilistic Heuristic to Test OWL 2 SubClassOf Axioms Against RDF Data
  • Huiyu Li (Inria, Epione) | Data Stealing Attack on Medical Images: Is it Safe to Export Networks from Data Lakes?
See details

Doctoral & Postdoctoral Seminar #17

Speakers:

  • Benjamin Ocampo (UCA, I3S) | "We Need Two Poke Flutes to Wake You Up": an In-depth Analysis of Implicit and Subtle Hate Speech Messages
  • Tong Zhao (Inria) | Progressive Discrete Domains for Implicit Surface Reconstruction

See details

Doctoral & Postdoctoral Seminar #16

Speakers:

  • Prof. Ludovic Dibiaggio | Introduction to the OTESIA institute
  • Daniel Inzunza (Inria) | PINNs approach for traffic model calibration
  • Angelo Rodio (Inria) | Resource-aware Federated Learning

See details

Doctoral & Postdoctoral Seminar #15

Speakers:

  • Andrea Castagnetti (UCA, LEAT) | Neural information coding for efficient spike-based image denoising
  • Christos Bountzouklis (UCA, LJAD) | Environmental factors affecting wildfire-burned areas in southeastern France

See details

Doctoral & Postdoctoral Seminar #14

Speakers:

  • Antonia Ettorre (UCA, I3S) | A systematic approach to identify the information captured by Knowledge Graph Embeddings
  • Victoriya Kashtanova (Inria) | Deep Learning Approach for Cardiac Electrophysiology Modeling

See details

Doctoral & Postdoctoral Seminar #13

Speakers:

  • Aude Sportisse (Inria) | Informative labels in Semi-Supervised Learning
  • Alexandra Würth (Inria) | Data driven traffic management by Macroscopic models

See details

Doctoral & Postdoctoral Seminar #12

Speakers:

  • Hind Dadoun (Inria) | AI-based Real Time Diagnosis of Abdominal Ultrasound Images
  • Victor Jung (UCA) | Checking Constraint Satisfaction

See details

Doctoral & Postdoctoral Seminar #11

Speakers:

  • Bogdan Kozyrskiy (EURECOM) | Binarization for Optical Processing Units via REINFORCE
  • Paul Tourniaire (Inria) | Attention-based Multiple Instance Learning for Histopathology

See details

Doctoral & Postdoctoral Seminar #10

Speakers:

  • Etrit Haxholli (Inria) | On the Estimation of Shape Parameters of Tails of Marginal Distributions
  • Cedric Vincent-Cuaz (UCA) | Semi-relaxed Gromov-Wasserstein divergence with applications on graphs

See details

Doctoral & Postdoctoral Seminar #9

Speakers:

  • Santiago Marro (CNRS) | Natural Language Argumentation Quality Assessment
  • Artem Muliukov (UCA) | Cortex-inspired multimodal AI approach based on self-organizing maps

See details

Doctoral & Postdoctoral Seminar #8

Speakers:

  • Kristof Huszar (Inria) | Towards Efficient Algorithms in Computational Topology
  • Mauro Zucchelli (Inria) | Diffusion MRI based Brain Tissue Microstructure Characterization Using Autoencoder Neural-Networks

See details

Doctoral & Postdoctoral Seminar #7

Speakers:

  • Vasiliki Stergiopoulou (CNRS) | COL0RME: Super-Resolution Microscopy Based on the Localization of Sparse Blinking Fluorophores
  • Ali Ballout (UCA) | Predicting the Possibilistic Score of Atomic Candidate OWL Axioms

See details

Doctoral & Postdoctoral Seminar #6

Speakers:

  • Hugo Schmutz (Inria) | Towards safe deep semi-supervised learning
  • Amirhossein Tavakoli (MINES ParisTech) | Hybrid combinatorial optimization and machine learning algorithms for energy-efficient water network

See details

Doctoral & Postdoctoral Seminar #5

Speakers:

  • Ayse Unsal (Eurecom) | A Statistical Threshold for Adversarial Classification in Laplace Mechanisms
  • Ashwin James (CNRS) | Inference of choice granularity via learning model selection in a cognitive task 

See details

Doctoral & Postdoctoral Seminar #4

Speakers:

  • Mohsen Tabejamaat (Inria) | Conditional image generation using structural priors
  • William Hammersley (UCA) | Randomizing gradient descents on the space of probability measures
  • Athanasios Vasileiadis (UCA) | Exploration noise for Mean Field Games

See details

Doctoral & Postdoctoral Seminar #3

Speakers:

  • Amar Bouali (3IA Côte d'Azur) | Introduction to the Partnership and Innovation initiatives of 3IA Côte d'Azur
  • Antoine Collin (CNRS) | Automatic cell type annotation for cell atlas construction
  • Ziming Liu (Inria) | High-resolution Detection Network for Small Objects

See details

Doctoral & Postdoctoral Seminar #2

Speakers:

  • Aurélie Delort (3IA Côte d'Azur) | Presentation of the Education & Training program
  • Boris Shminke (CNRS) | Using Denoising Autoencoder for Cayley table completion task
  • Dingge Liang (Inria) | A Deep Latent Recommender System based on User Ratings and Reviews
  • Martijn Van Den Ende (UCA) | Fibre-optics, earthquakes, and a zebra: intelligent signal denoising with Deep Learning

See details

Doctoral & Postdoctoral Seminar #1