3IA PhD/Postdoc Seminar #26

Published on May 24, 2023. Updated on May 25, 2023.

On June 2, 2023

from 10:30am to 12:00pm
Inria Sophia Antipolis


10:30 - 11:00
Nissim Maruani (PhD student, Inria, Titane)
Chair: P. Alliez

VoroMesh: Learning Watertight Surface Meshes with Voronoi Diagrams

In sharp contrast to images, finding a concise, learnable discrete representation of 3D surfaces remains a scientific challenge. In particular, while polygon meshes are arguably the most common surface representation used in geometry processing tools, their irregular and combinatorial structure often makes them unsuitable for learning-based applications. In this work, we present a novel and differentiable Voronoi-based representation of watertight 3D shape surfaces. From a set of 3D points (called generators) and their associated occupancy (inside/outside) with respect to an input shape, we define our boundary representation through the Voronoi diagram of the generators as the subset of Voronoi faces whose two equidistant generators are of opposite occupancy: the resulting polygon mesh then forms a watertight approximation of the input shape's boundary. To learn the position of the generators, we propose a novel loss function that minimizes the distance from ground-truth surface samples to the closest face of the Voronoi diagram, without needing an explicit construction of the entire Voronoi diagram. We demonstrate how the proposed VoroLoss applies either to direct optimization of the generators or to the training of a neural network for inference-based prediction of generators. We demonstrate the geometric efficiency of our representation compared to axiomatic meshing algorithms and recent learning-based mesh representations on the Thingi32 dataset. We also match or outperform recent methods in mesh prediction from SDF inputs on the ABC dataset while guaranteeing closed surfaces without self-intersections.
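The boundary extraction described above can be illustrated in 2D: given generators with inside/outside occupancy, the boundary is the set of Voronoi ridges whose two equidistant generators have opposite occupancy. The sketch below is a minimal toy analogue (not the authors' code), assuming random generators and occupancy defined with respect to a disk; it uses `scipy.spatial.Voronoi`.

```python
import numpy as np
from scipy.spatial import Voronoi

# Toy 2D analogue of the VoroMesh boundary extraction (illustrative only):
# random generators, with occupancy defined w.r.t. a disk of radius 0.5.
rng = np.random.default_rng(0)
generators = rng.uniform(-1, 1, size=(200, 2))
occupancy = (np.linalg.norm(generators, axis=1) < 0.5).astype(int)  # 1 = inside

vor = Voronoi(generators)
# ridge_points[i] holds the two generators equidistant from ridge i;
# keep the finite ridges whose generators have opposite occupancy.
boundary_ridges = [
    rv for rp, rv in zip(vor.ridge_points, vor.ridge_vertices)
    if occupancy[rp[0]] != occupancy[rp[1]] and -1 not in rv
]
print(len(boundary_ridges))  # number of boundary edges approximating the circle
```

By construction, the selected ridges form a closed polyline separating inside from outside generators, which is the 2D counterpart of the watertight property claimed for the 3D representation.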

11:00 - 11:30
Julien Aubert (PhD student, CNRS, LJAD)
Chair: P. Reynaud-Bouret

On the convergence of the MLE as an estimator of the learning rate in the Exp3 algorithm

Imagine that you observe a rat in a maze, progressively learning to find food. How would you guess the learning process it actually uses? This question is of paramount importance in cognitive science, where the problem is not to find the fastest or best learning algorithm for a specific task but to discover the most realistic learning model (always formulated as an algorithm) [1]. Proving that one model is better suited to reality than others is so crucial in cognitive science that the methodology for fitting any kind of learning algorithm to real learning data has been well established and emphasized [2]. Any scientist wishing to develop a new learning model can follow the same numerical experiments to test whether the model is realistic. The first step of the methodology is to perform MLE (Maximum Likelihood Estimation) on the data to estimate the model's parameters. Recall that we are observing an individual learning a specific task: not only do the training data (i.e., the observations) strongly depend on each other, but they are also often non-stationary (otherwise the individual could not have learned). Extensive simulations are often required: depending on the chosen parameters, not only can a model learn or fail to learn, but there is also often a set of parameters for which the estimator behaves poorly. Unfortunately, there is a lack of theoretical guarantees on whether the parameters of these models can be estimated consistently. Our goal is to prove rigorously what can be said about the properties of the MLE when fitting a learning algorithm to real data. Instead of studying a particular cognitive model, and in order to work within an established general theoretical framework, we focus on the adversarial multi-armed bandit problem.
The algorithm we specifically study (Exp3: Exponential weights for Exploration and Exploitation) is probably the simplest algorithm for adversarial bandits [3]. Even though it is not used in the cognition literature, it shares many features with famous cognitive algorithms [4] and has given rise to many variants. In the presentation, we will show, in a particular case, that trying to estimate a constant learning rate leads to poor estimation whatever the estimation procedure: the estimation error decreases more slowly than logarithmically with the number of observations. In the setting where the learning rate decreases polynomially with the number of observations, we show a polynomial decrease of the prediction error and, in a particular case, of the estimation error of a truncated MLE.

[1] Botvinick (2008). Hierarchical models of behavior and prefrontal function.
[2] Wilson and Collins (2019). Ten simple rules for the computational modeling of behavioral data.
[3] Lattimore and Szepesvári (2020). Bandit Algorithms.
[4] Gluck and Bower (1988). From conditioning to category learning: An adaptive network model.
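The estimation problem described in the abstract can be made concrete with a toy simulation (our own illustration, not the speaker's code): run Exp3 with a known learning rate, record the chosen arms and rewards, then fit the rate by maximizing the log-likelihood of the observed actions over a grid. A stochastic bandit stands in for the adversary here, and the grid-search MLE is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
K, T, true_eta = 3, 2000, 0.05
means = np.array([0.2, 0.5, 0.8])  # toy stochastic rewards (stand-in for the adversary)

def run_exp3(eta):
    """Simulate Exp3 with learning rate eta; return chosen arms and rewards."""
    w = np.zeros(K)  # log-weights
    arms, rewards = np.empty(T, dtype=int), np.empty(T)
    for t in range(T):
        p = np.exp(w - w.max()); p /= p.sum()   # sampling distribution
        a = rng.choice(K, p=p)
        r = float(rng.random() < means[a])
        w[a] += eta * r / p[a]                  # importance-weighted gain update
        arms[t], rewards[t] = a, r
    return arms, rewards

def log_likelihood(eta, arms, rewards):
    """Log-probability of the observed action sequence under Exp3 with rate eta."""
    w, ll = np.zeros(K), 0.0
    for a, r in zip(arms, rewards):
        p = np.exp(w - w.max()); p /= p.sum()
        ll += np.log(p[a])
        w[a] += eta * r / p[a]
    return ll

arms, rewards = run_exp3(true_eta)
grid = np.linspace(0.01, 0.2, 40)
eta_hat = grid[np.argmax([log_likelihood(e, arms, rewards) for e in grid])]
print(eta_hat)
```

Note that, as the abstract warns, the MLE of a constant learning rate can behave poorly, so `eta_hat` need not be close to `true_eta` even for large `T`; the talk makes this precise.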

11:30 - 12:00

Open discussion on the two contributions
