3IA PhD/Postdoc Seminar #27

Published on September 5, 2023. Updated on February 20, 2025.
Dates

On September 15, 2023

from 10:30am to 12:00pm
Location
Inria Sophia Antipolis

Program


10:30
Tomasz Stanczyk (PhD student, Inria)
Chair of F. Bremond, Axis 2

Flash presentation: Long-term multi-object tracking

10:30 - 11:00
Faisal Jayousi (PhD student, CNRS)
Chair of L. Blanc-Féraud, Axis 3

Geometric and statistical analysis of the extracellular matrix

The extracellular matrix (ECM) functions as the architectural scaffold of organs and tissues, providing a dynamic milieu of physical and biochemical signals to cells. Throughout tumour progression, the ECM undergoes pronounced topological and biophysical transformations. Our study investigates the geometric and topological properties of the tumour microenvironment, with the goal of gaining insight into disease progression and of identifying and characterising biomarkers that can effectively predict treatment efficacy. While cellular components have been studied extensively, the characterisation of non-cellular elements, particularly the ECM, remains underexplored. Fibronectin (FN), one of the fibrous ECM proteins, exhibits three primary forms: reticular fibre-like structures, aligned fibres, and aggregates. These patterns are presumed to have a significant impact on cancer cell survival and proliferation. The first part of the study addresses the challenge of segmenting FN images into these three classes; to this end, several methods will be presented, including established deep learning methods, texture analysis, and graph-based methods. The second part delves into the biological implications of the resulting segmentation maps, shedding light on their relevance to tumour progression.
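As a toy illustration of what a texture-analysis step might compute (a minimal sketch, not the methods of the talk), the snippet below derives a local-variance texture feature and thresholds it into three crude classes. The thresholds `low` and `high` and the mapping of labels to FN patterns are hypothetical placeholders; a real pipeline would learn class boundaries from annotated data.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(image, size=9):
    """Simple texture feature: per-window variance, E[x^2] - (E[x])^2."""
    img = image.astype(float)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img ** 2, size)
    return np.clip(mean_sq - mean ** 2, 0.0, None)

def crude_texture_segmentation(image, low, high, size=9):
    """Threshold the variance map into three crude classes (0, 1, 2).
    The thresholds and any correspondence of classes to FN patterns
    (reticular, aligned, aggregates) are illustrative assumptions only."""
    var = local_variance(image, size)
    return np.digitize(var, [low, high])
```

In practice a single variance feature cannot separate the three FN patterns; richer descriptors (orientation coherence, learned deep features, graph statistics) would replace it, but the pipeline shape (feature map, then per-pixel classification) is the same.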

11:00 - 11:30
Oualid Zari (PhD student, EURECOM)
Chair of M. Önen, Axis 4

Privacy Attacks in Machine Learning and Defenses

With the expansion of machine learning, concerns have emerged about models unintentionally revealing their training data. In this presentation, drawing from our recent research, we will discuss the "Membership Inference Attack" (MIA), a pervasive technique that seeks to infer whether specific data points were part of a model's training data, highlighting its implications for Principal Component Analysis (PCA). We will also address the "Link Inference Attack", which aims to identify private edges in graph-structured data within Graph Neural Networks (GNNs). In response to these challenges, we introduce Differential Privacy (DP), a technique designed to protect individual data by introducing controlled randomness. Our findings emphasize the significant privacy vulnerabilities inherent in machine learning and the evolving strategies to mitigate them.
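To make the "controlled randomness" of DP concrete, here is a minimal sketch of the standard Laplace mechanism (a textbook construction, not the specific defences evaluated in the talk). The example dataset and parameter values are illustrative assumptions.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value + Laplace(0, sensitivity/epsilon) noise.
    Smaller epsilon means stronger privacy and a noisier answer."""
    rng = rng if rng is not None else np.random.default_rng()
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

# Example: privately release a counting query over a toy dataset.
ages = np.array([23, 35, 41, 29, 52])
true_count = float((ages > 30).sum())  # sensitivity 1: one record changes a count by at most 1
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=1.0)
```

Because adding or removing any single individual changes a counting query by at most 1, noise scaled to 1/epsilon suffices to mask each person's presence, which is exactly the property a membership inference attack tries to exploit.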

11:30 - 12:00

Open discussion on the two contributions

