3IA PhD/Postdoc Seminar #10

Published on January 3, 2022. Updated on January 3, 2022.

On January 7, 2022

from 10:30 am to 12:00 pm



10:30 - 11:00
Etrit Haxholli (Inria)

On the Estimation of Shape Parameters of Tails of Marginal Distributions

Abstract: Anomalies are data patterns that, in the absence of epistemic uncertainty, correspond to observations whose characteristics differ from those of normal instances. The principal idea in most applications is that the behavior of a model trained on normal data changes significantly when switching to an abnormal period. Extreme value theory (EVT) is useful for modeling the tails of distributions and thus helps in choosing an anomaly threshold, which indicates when such a transition from a normal period occurs. Under some regularity conditions, we give theoretical guarantees that the tail of the marginal distribution coincides with the thickest tail among the distributions defined on the range of the marginalized-out variables. This result can be used to make tail estimations of loss functions more robust to the choice of the training set, and it enables us to estimate the shape of the tails with fewer samples while at the same time reducing computational complexity.
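As a concrete illustration of EVT-based tail-shape estimation (a standard textbook sketch, not the speaker's method), the Hill estimator recovers the tail index of a heavy-tailed sample from its largest order statistics:

```python
import numpy as np

def hill_estimator(samples, k):
    """Hill estimate of the tail index gamma from the k largest order statistics."""
    x = np.sort(samples)[::-1]  # sort in descending order
    return np.mean(np.log(x[:k]) - np.log(x[k]))

rng = np.random.default_rng(0)
# Classical Pareto with shape alpha = 2, i.e. true tail index gamma = 1/alpha = 0.5
data = rng.pareto(2.0, size=100_000) + 1.0
gamma_hat = hill_estimator(data, k=2000)  # close to the true value 0.5
```

Once the tail index is estimated, an anomaly threshold can be placed at a high quantile implied by the fitted tail, rather than at an ad hoc cutoff.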

11:00 - 11:30
Cedric Vincent-Cuaz (UCA)

Semi-relaxed Gromov-Wasserstein divergence with applications on graphs

Abstract: Comparing structured objects such as graphs is a fundamental operation in many learning tasks. To this end, the Gromov-Wasserstein (GW) distance, based on Optimal Transport (OT), has proven successful in handling the specific nature of such objects. More specifically, GW operates on graphs, seen as probability measures over specific spaces, through the nodes' connectivity relations. At the core of OT is the idea of conservation of mass, which imposes a coupling between all the nodes of the two graphs being compared. We argue that this property can be detrimental to tasks such as graph dictionary or partition learning, and we relax it by proposing a new semi-relaxed Gromov-Wasserstein divergence. Aside from immediate computational benefits, we discuss its properties and show that it can lead to an efficient graph dictionary learning algorithm. We empirically demonstrate its relevance for complex tasks on graphs such as partitioning, clustering and completion.
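To make the coupling-based objective concrete, the following minimal sketch (illustrative only, not the authors' implementation) evaluates the squared-loss GW objective for explicit couplings between two copies of the same small graph; the semi-relaxed variant proposed in the talk relaxes the constraint on one of the coupling's marginals:

```python
import numpy as np

def gw_objective(C1, C2, T):
    """Squared-loss Gromov-Wasserstein objective
    sum_{i,j,k,l} (C1[i,k] - C2[j,l])**2 * T[i,j] * T[k,l]
    for pairwise cost matrices C1, C2 and a coupling T."""
    diff = C1[:, None, :, None] - C2[None, :, None, :]
    return np.einsum('ijkl,ij,kl->', diff ** 2, T, T)

# Shortest-path cost matrix of a 3-node path graph, compared with itself
C = np.array([[0., 1., 2.],
              [1., 0., 1.],
              [2., 1., 0.]])
T_id = np.eye(3) / 3          # coupling matching each node to its copy
T_unif = np.ones((3, 3)) / 9  # uninformative coupling spreading all mass

cost_id = gw_objective(C, C, T_id)      # 0: the graphs are isomorphic
cost_unif = gw_objective(C, C, T_unif)  # > 0: structure is not preserved
```

The identity coupling achieves zero cost because it preserves all pairwise relations, while the uniform coupling does not; minimizing this objective over valid couplings is what the GW distance computes.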

11:30 - 12:00

Open discussion on the two contributions