3IA PhD/Postdoc Seminar #18

Published on September 21, 2022. Updated on February 20, 2025.
Dates

October 7, 2022

from 10:30 am to 12:00 pm
Location
Nice - Campus Valrose, Laboratoire J-A Dieudonné


Program

10:30 - 11:00
Rémi Felin (UCA, I3S)
Chair of Andrea Tettamanzi

Optimizing the Computation of a Possibilistic Heuristic to Test OWL 2 SubClassOf Axioms Against RDF Data

The growth of the Semantic Web requires various tools to manage data, make them available to all, and use them in a wide range of applications. In particular, tools dedicated to ontology management are a keystone of Semantic Web applications. We consider a possibilistic framework and an evolutionary approach for enriching ontologies with OWL 2 axioms. Assessing OWL 2 axioms against an RDF knowledge graph incurs a high computational cost, especially in terms of computation time, which may limit the applicability of the framework. Our contribution consists of (i) a multi-threading system that parallelizes axiom assessment, (ii) a heuristic that avoids redundant computation, and (iii) an optimization relying on an extension of the SPARQL 1.1 Federated Query standard. The results of a comparative evaluation show that our proposal significantly outperforms the original algorithm, substantially reducing CPU computation time.
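The parallel assessment of contribution (i) can be sketched in a few lines of Python. This is a minimal, hypothetical illustration only: the possibility score below is a simplified stand-in (the talk's exact possibilistic formula is not given here), and the precomputed support/counterexample counts replace the SPARQL queries the real system would issue against the RDF graph.

```python
from concurrent.futures import ThreadPoolExecutor
import math

def possibility(support, counterexamples):
    # Simplified possibility degree: 1.0 when no counterexamples,
    # decreasing toward 0 as counterexamples approach the support.
    # (Illustrative only; the actual heuristic may differ.)
    if support == 0:
        return 0.0
    ratio = (support - counterexamples) / support
    return 1.0 - math.sqrt(1.0 - ratio * ratio)

def assess_axiom(axiom):
    # In the real framework this step queries the RDF knowledge graph;
    # here the counts are bundled with the (hypothetical) axiom.
    return axiom["label"], possibility(axiom["support"], axiom["counter"])

axioms = [
    {"label": "Dog SubClassOf Animal", "support": 1000, "counter": 0},
    {"label": "Cat SubClassOf Plant", "support": 800, "counter": 790},
]

# Contribution (i): assess many candidate axioms concurrently,
# since each assessment is an independent unit of work.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(assess_axiom, axioms))

print(results["Dog SubClassOf Animal"])  # 1.0 (no counterexamples)
```

Because each axiom is assessed independently, the thread pool scales the evaluation across CPU cores (or, in practice, across concurrent SPARQL endpoint requests) without any shared state.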

11:00 - 11:30 
Huiyu Li (Inria, Epione)
Chair of Hervé Delingette

Data Stealing Attack on Medical Images: Is it Safe to Export Networks from Data Lakes?

In privacy-preserving machine learning, it is common that the owner of the learned model has no physical access to the data. Instead, the model owner is granted only secured remote access to a data lake, without any ability to retrieve the data from it. Yet, the model owner may want to export the trained model periodically from the remote repository, and the question arises whether this poses a risk of data leakage. In this paper, we introduce the concept of a data stealing attack during the export of neural networks. It consists of hiding information in the exported network that allows images initially stored in the data lake to be reconstructed outside of it. More precisely, we show that it is possible to train a network that performs lossy image compression while at the same time solving utility tasks such as image segmentation. The attack then proceeds by exporting the compression decoder network together with some image codes, which lead to image reconstruction outside the data lake. We explore the feasibility of such attacks on databases of CT and MR images, showing that it is possible to obtain perceptually meaningful reconstructions of the target dataset, and that the stolen dataset can in turn be used to solve a broad range of tasks. Comprehensive experiments and analyses show that data stealing attacks should be considered a threat to sensitive imaging data sources.
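The core mechanism of the attack can be illustrated with a deliberately simplified, non-neural sketch: a lossy codec whose decoder and codes travel with an otherwise legitimate export. Everything here is hypothetical (uniform quantization stands in for the trained compression network, and the "secret images" are synthetic); it only shows the shape of the leak, not the paper's actual method.

```python
# Stand-in for the trained compression network: uniform quantization.
def encode(image, levels=16, lo=0.0, hi=1.0):
    """Lossy compression: map each pixel in [lo, hi) to one of `levels` bins."""
    step = (hi - lo) / levels
    return [min(levels - 1, int((p - lo) / step)) for p in image]

def decode(code, levels=16, lo=0.0, hi=1.0):
    """Decoder: reconstruct each pixel as the centre of its bin."""
    step = (hi - lo) / levels
    return [lo + (c + 0.5) * step for c in code]

# Inside the data lake: images the model owner cannot copy directly.
secret_images = [[(i * 7 + j) % 10 / 10 for j in range(16)] for i in range(4)]

# The "export": only the decoder and the compact codes leave the lake,
# hidden alongside the legitimately trained model.
exported = {"decoder": decode, "codes": [encode(img) for img in secret_images]}

# Outside the data lake: the attacker reconstructs the images.
reconstructed = [exported["decoder"](c) for c in exported["codes"]]

# The reconstruction is lossy but close: error is at most half a bin width.
err = max(abs(a - b)
          for img, rec in zip(secret_images, reconstructed)
          for a, b in zip(img, rec))
print(err <= 0.5 / 16)  # True
```

The point of the sketch is that the exported artifact looks innocuous (a decoder plus some numeric tensors), yet it suffices to rebuild a lossy copy of the protected images outside the data lake, which is exactly why the abstract argues such exports should be screened.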

11:30 - 12:00

Open discussion on the two contributions