3IA PhD/Postdoc Seminar #36

Published on May 13, 2024 Updated on May 16, 2024

On June 7, 2024

from 10:30 am to 12:00 pm

Campus Valrose



Hiba Laghrissi (PhD, Université Côte d'Azur)

Flash presentation

10:30 - 11:00
Pierpaolo Goffredo (PhD, CNRS - I3S)

Argument-based Detection and Classification of Fallacies in Political Debates

Abstract: Fallacies are arguments that employ faulty reasoning. They have played a prominent role in argumentation since antiquity owing to their contribution to critical thinking education. Their role is even more crucial today, as contemporary argumentation technologies face challenging tasks such as detecting misleading and manipulative information in news articles and political discourse, and generating counter-narratives. Given their persuasive and seemingly valid nature, fallacious arguments are often employed in political debates, with detrimental societal consequences: they can lead to inaccurate public opinions and invalid policy inferences.

Automatically detecting and classifying fallacious arguments is therefore a crucial challenge for limiting the spread of misleading claims and promoting healthier political discourse. This work presents a novel annotated resource of 31 U.S. presidential campaign debates, extended with the recent Trump-Biden debate, containing roughly 2,000 instances labeled with six main fallacy categories (ad hominem, appeal to authority, appeal to emotion, false cause, slogan, slippery slope) and annotated at the token level for argumentative components and relations.

To tackle this novel task, transformer-based neural architectures are defined that combine text representations with argument components/relations and engineered features. The results outperform state-of-the-art methods and baselines, demonstrating the advantage of complementing text representations with non-textual argument features and highlighting the important role of argument components and relations in fallacy classification, which is crucial for advancing argumentation technologies and promoting informed political discourse.

11:00 - 11:30
Célian Ringwald (PhD, Inria - I3S)

Impact of Syntaxes on Data Extraction with Language Models

Abstract: The fine-tuning of generative pre-trained language models (PLMs) on a new task can be impacted by the choices made for representing the inputs and outputs. This work focuses on the linearization process used to structure and represent, as output, facts extracted from text. On a restricted relation extraction (RE) task, we challenged T5 and BART by fine-tuning them on 12 linearizations, including RDF standard syntaxes and variations. Our benchmark covers the validity of the produced triples, the performance of the model, the training behaviour, and the resources needed. We show that these PLMs can learn some syntaxes more easily than others, and we identify an efficient "Turtle Light" syntax supporting quick and robust learning of the RE task.
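The linearization step the abstract refers to turns an extracted fact into the target string the model must generate. A minimal sketch contrasting standard Turtle with a lighter Turtle-like variant (the function names and the exact "light" form are illustrative assumptions, not the paper's specification):

```python
# One extracted fact, as a (subject, predicate, object) triple of IRIs.
triple = ("http://example.org/Nice",
          "http://example.org/country",
          "http://example.org/France")

def to_turtle(subj, pred, obj):
    """Standard Turtle N-Triples-style serialization of one triple."""
    return f"<{subj}> <{pred}> <{obj}> ."

def to_turtle_light(subj, pred, obj):
    """Hypothetical lighter variant: drop angle brackets to shorten
    the token sequence the seq2seq model has to produce."""
    return f"{subj} {pred} {obj} ."

print(to_turtle(*triple))
print(to_turtle_light(*triple))
```

Shorter, more regular target syntaxes mean fewer tokens to generate and fewer ways to emit an invalid serialization, which is one plausible reason a light syntax could train faster and more robustly.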

11:30 - 12:00

Open discussion about the two contributions

More information

Event reserved for 3IA Côte d'Azur PhD students and post-docs. ID check at the entrance of the site with visual bag inspection.