Seminar "Multimodal Affective Computing in Videos for Personalized Digital Health" by Eric Granger

Published on May 11, 2026
Date

May 13, 2026, 10:00 am – 12:00 pm
Location
Centre Inria d'Université Côte d'Azur
Summary:
AI technologies for affective computing are emerging for a wide range of digital health applications, most notably in diagnosis, monitoring, and behavioral skills training. However, current methods cannot accurately recognize a person's affective state in these applications because subtle expressions vary across individuals and capture conditions. This talk describes cost-effective deep learning (DL) models for expression recognition based on facial, vocal, textual, and physiological modalities. Using data captured in videos, these models accurately recognize subtle and subject-specific expressions linked to an individual's affective state, such as ambivalence, pain, depression, stress, empathy, and fatigue. They are developed for multimodal and spatiotemporal fusion, multimodal learning using privileged training information unavailable at test time, and weakly supervised learning from data with limited or ambiguous annotations. The talk also describes methods for domain adaptation from unlabeled videos captured at test time to rapidly personalize DL models to individuals and capture conditions.
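The abstract touches on two recurring themes: multimodal fusion of per-frame features, and personalization from unlabeled video at test time. As a rough illustration of the first theme, here is a minimal PyTorch sketch of cross-attention fusion of facial and vocal features followed by temporal aggregation; all module names, dimensions, and the attention-based design are illustrative assumptions, not the speaker's published architecture.

```python
# Illustrative sketch only: fuse facial and vocal features with
# cross-modal attention, then aggregate over time for affect prediction.
# Dimensions and design choices are assumptions, not the talk's method.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, face_dim=512, audio_dim=128, hidden=256, n_classes=5):
        super().__init__()
        self.face_proj = nn.Linear(face_dim, hidden)
        self.audio_proj = nn.Linear(audio_dim, hidden)
        # Facial tokens attend to vocal tokens (one possible fusion direction).
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.temporal = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, face_feats, audio_feats):
        # face_feats: (batch, T, face_dim); audio_feats: (batch, T, audio_dim)
        q = self.face_proj(face_feats)
        kv = self.audio_proj(audio_feats)
        fused, _ = self.attn(q, kv, kv)      # cross-modal attention
        seq, _ = self.temporal(fused + q)    # temporal aggregation with a GRU
        return self.head(seq[:, -1])         # classify from the final state

model = CrossModalFusion()
face = torch.randn(2, 16, 512)   # e.g., embeddings of 16 face crops
audio = torch.randn(2, 16, 128)  # e.g., per-frame audio embeddings
logits = model(face, audio)      # shape: (2, 5)
```

For the second theme, one widely used generic baseline is test-time adaptation by entropy minimization (in the spirit of TENT, Wang et al., ICLR 2021), where normalization-layer parameters are updated on unlabeled frames from the new subject. This is a stand-in for the personalization methods discussed in the talk, not their implementation, and it assumes the backbone contains normalization layers.

```python
# Illustrative sketch: TENT-style test-time adaptation on unlabeled video.
import torch
import torch.nn.functional as F

def adapt_on_unlabeled_video(model, frames, steps=1, lr=1e-4):
    # Update only normalization-layer parameters (assumes the model has some).
    norm_types = (torch.nn.BatchNorm1d, torch.nn.BatchNorm2d, torch.nn.LayerNorm)
    params = [p for m in model.modules() if isinstance(m, norm_types)
              for p in m.parameters()]
    opt = torch.optim.Adam(params, lr=lr)
    model.train()
    for _ in range(steps):
        probs = F.softmax(model(frames), dim=-1)
        # Minimizing prediction entropy sharpens the model's outputs on the
        # new subject / capture condition without requiring any labels.
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1).mean()
        opt.zero_grad()
        entropy.backward()
        opt.step()
    return model
```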

Bio:
Eric Granger received a Ph.D. in Electrical Engineering from École Polytechnique de Montréal in 2002. He was a Defense Scientist with DRDC Ottawa from 1999 to 2001 and worked in R&D at Mitel Networks from 2001 to 2004. In 2004, he joined the Department of Systems Engineering at ETS Montreal, Canada, where he is currently a Full Professor and the Director of LIVIA, a research laboratory focused on computer vision and artificial intelligence. He holds the ETS Industrial Research Co-Chair in Embedded Neural Networks for Intelligent Connected Buildings (Distech Controls Inc.) and held the FRQS Co-Chair in AI and Health (2021-25). His research interests include pattern recognition, machine learning, information fusion, and computer vision, with applications in affective computing, biometrics, medical imaging, and video analysis. He has (co-)authored 350+ peer-reviewed papers and (co-)supervised 180+ highly qualified personnel (HQP) in these research areas. He is an associate editor for Elsevier Pattern Recognition and IEEE Transactions on Affective Computing.

Eric Granger is invited by François Brémond, 3IA Chairholder.