Deep Learning School @UCA 2021


UCA Event 2021, from July 12-16

The Deep Learning School @UCA is back! This edition will be held in English, online and safe, and open to more attendees than before. Participants will be able to attend online, at their own local time. The event is certified by the Interdisciplinary Institute of Artificial Intelligence (3IA Côte d'Azur).
Whether you are a researcher, an engineer, an expert in "Deep Learning" or eager to learn more about these crucial methods at the core of modern AI, this program is for you. It includes:

  • 5 lectures given by high-profile speakers, internationally renowned in the field; these lectures will take place in the morning or in the afternoon.
  • 5 "Expert Labs" on the topics of the lectures, supervised by subject-matter experts; each lab takes place on the same day as the corresponding lecture, in the afternoon or in the morning accordingly.


July 12th

Lecture | from 9am to 12:15pm | Speech Recognition and Machine Translation: From Bayes Decision Theory to Machine Learning and Deep Neural Networks by Prof. Hermann Ney

The last 40 years have seen a dramatic progress in machine learning and statistical methods for speech and language processing like speech recognition, handwriting recognition and machine translation. Many of the key statistical concepts had originally been developed for speech recognition. Examples of such key concepts are the Bayes decision rule for minimum error rate and sequence-to-sequence processing using approaches like the alignment mechanism based on hidden Markov models and the attention mechanism based on neural networks.
Recently, the accuracy of speech recognition, handwriting recognition, and machine translation has been improved significantly by the use of artificial neural networks with specific architectures, such as deep feedforward multi-layer perceptrons, recurrent neural networks, and attention and transformer architectures. We will discuss these approaches in detail and show how they form part of the probabilistic approach.

Lab | from 2pm to 5:15pm | Speech Recognition and Machine Translation

This lab will be dedicated to audio data analysis and speech recognition. We will experiment with how deep learning works on audio signals; more specifically, we will learn how to build and train efficient deep learning models that recognize speech by combining CNNs, RNNs, and attention mechanisms.
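The combination described above can be sketched as follows. This is a minimal illustrative model, not the lab's actual code: the layer sizes, the use of PyTorch, and the simplified per-utterance output (a real recognizer would predict a character sequence, e.g. with CTC) are all assumptions.

```python
import torch
import torch.nn as nn

class SpeechRecognizer(nn.Module):
    """Toy CNN + RNN + attention model over log-mel spectrograms."""
    def __init__(self, n_mels=80, hidden=128, n_chars=29):
        super().__init__()
        # CNN front-end: captures local spectral patterns
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Bidirectional RNN: temporal context across frames
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        # Attention pooling: weight frames by relevance
        self.attn = nn.Linear(2 * hidden, 1)
        self.out = nn.Linear(2 * hidden, n_chars)

    def forward(self, spectrogram):             # (batch, n_mels, time)
        x = self.conv(spectrogram)              # (batch, hidden, time)
        x, _ = self.rnn(x.transpose(1, 2))      # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(x), dim=1)  # attention weights over time
        ctx = (w * x).sum(dim=1)                # (batch, 2*hidden)
        return self.out(ctx)                    # per-utterance logits (simplified)

model = SpeechRecognizer()
logits = model(torch.randn(4, 80, 100))         # 4 utterances, 100 frames each
print(logits.shape)                             # torch.Size([4, 29])
```

The attention weights `w` play the role discussed in the morning lecture: a learned, soft counterpart to the HMM alignment between audio frames and output symbols.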

July 13th

Lab | from 9am to 12:15pm | Deep Reinforcement Learning

While applications of RL have typically been limited to discrete, low-dimensional settings, recent advances in Deep RL (DQN for Atari 2600, AlphaGo, and more recently AlphaGo Zero) have demonstrated human-level or super-human performance in complex, high-dimensional spaces.
This lab will be dedicated to Deep Reinforcement Learning (DRL). It is meant to provide a first hands-on experience with DRL, on both synthetic and more realistic problems.
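The core of the DQN family mentioned above is a single temporal-difference update: train a Q-network toward the bootstrapped target r + γ·max Q(s', a'). A minimal sketch on synthetic transitions (network sizes and hyperparameters are illustrative assumptions, not the lab's code):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_states, n_actions, gamma = 4, 2, 0.99

# Q-network: maps a state vector to one value per action
qnet = nn.Sequential(nn.Linear(n_states, 32), nn.ReLU(), nn.Linear(32, n_actions))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)

def dqn_update(s, a, r, s_next):
    """One TD step on a batch of (state, action, reward, next state) transitions."""
    with torch.no_grad():                       # target is not differentiated through
        target = r + gamma * qnet(s_next).max(dim=1).values
    q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)  # Q(s, a) for taken actions
    loss = nn.functional.mse_loss(q, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Train on a batch of synthetic transitions
s = torch.randn(64, n_states)
a = torch.randint(0, n_actions, (64,))
r = torch.randn(64)
losses = [dqn_update(s, a, r, torch.randn(64, n_states)) for _ in range(10)]
```

A full DQN adds an experience replay buffer and a slowly-updated target network; both stabilize training and are natural extensions of this loop.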

Lecture | from 2pm to 5:15pm | Reinforcement Learning and Neural Networks by Prof. Andrew G. Barto

The union of reinforcement learning (RL) and deep neural networks has recently produced remarkable contributions to AI. A better appreciation of these contributions can be gained by understanding that computational studies of RL and neural networks have tightly intertwined histories. Both originated as  hypotheses about how brains function and learn, and their development has been coupled from the very beginning. The computational power of deep RL united with recent results about the brain’s reward system point to how a next round of advances may arise. 

July 14th

Lecture | from 9am to 12:15pm | Learning to Adapt: a Deeper Look at Domain Adaptation for Visual Recognition by Prof. Elisa Ricci

Deep networks have significantly improved the state of the art for several tasks in computer vision. Unfortunately, these impressive performance gains have come at the price of massive amounts of labeled data. As the cost of collecting and annotating data is often prohibitive, given a target task where few or no training samples are available, it would be desirable to build effective learners that can leverage information from the labeled data of a different but related source domain. However, a major obstacle in adapting models to the target task is the shift in data distributions across domains. This problem, typically referred to as domain shift, has motivated research into Domain Adaptation (DA). In this talk I will provide an overview of the problem of DA for visual recognition and describe our recent efforts on building models which can learn adaptively in real-world scenarios.

Lab | from 2pm to 5:15pm | Visual Recognition and Domain Adaptation

In this lab we will review some well-known deep architectures for image classification and object detection, and use these models to study domain adaptation in real-life scenarios.
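One well-known mechanism for learning across the domain shift discussed in the lecture is the gradient-reversal trick behind DANN-style adversarial adaptation. The sketch below illustrates that one technique only; whether the lab uses this particular method is an assumption.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign (scaled by lam)
    on the backward pass. Placed between a feature extractor and a domain
    classifier, it pushes the features to become domain-invariant."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None  # no gradient for lam itself

# Demo: the gradient flowing back through the layer is negated
features = torch.randn(8, 16, requires_grad=True)
reversed_feats = GradReverse.apply(features, 1.0)
reversed_feats.sum().backward()
# features.grad is -1 everywhere instead of the +1 a plain sum would give
```

In a full DANN setup, the label classifier sees the features directly while the domain classifier sees them through this layer, so the extractor simultaneously helps the former and fools the latter.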

July 15th

Lecture | from 9am to 12:15pm | Deep Generative Models: Foundations, Applications and Open Problems by Danilo J. Rezende

Generative models are at the forefront of machine learning research because of the promise they offer for allowing data-efficient learning, and for model-based reinforcement learning. This talk will cover a few standard methods for approximate inference and density estimation and recent advances which allow for efficient large-scale training of a wide variety of generative models. Finally, I'll demonstrate several important applications of these models to density estimation in fundamental sciences, missing data imputation, data compression and planning.

Lab | from 2pm to 5:15pm | Deep Generative Models

In this lab, we will implement several likelihood-based deep generative models. We will focus on variational autoencoders (VAEs) and variations thereof, and will also discuss normalizing flows and autoregressive models. Applications will include dimensionality reduction and missing data imputation.
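The central pieces of a VAE, the reparameterization trick and the negative ELBO loss, fit in a few lines. A minimal sketch (layer sizes and the single-linear-layer encoder/decoder are illustrative simplifications, not the lab's code):

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=8):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)  # outputs (mu, log_var)
        self.dec = nn.Linear(z_dim, x_dim)

    def forward(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=1)
        # Reparameterization trick: sample z = mu + sigma * eps, eps ~ N(0, I),
        # so gradients flow through mu and log_var
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
        recon = self.dec(z)
        # KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian
        kl = 0.5 * (mu.pow(2) + log_var.exp() - 1 - log_var).sum(dim=1).mean()
        rec = nn.functional.mse_loss(recon, x)
        return rec + kl  # negative ELBO (up to constants)

vae = VAE()
loss = vae(torch.rand(16, 784))  # batch of 16 flattened 28x28 images
```

The dimensionality-reduction application mentioned above corresponds to inspecting `mu`; missing-data imputation corresponds to decoding `z` inferred from the observed entries.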

July 16th

Lecture | from 9am to 12:15pm | Graph Neural Networks and Neural-Symbolic Computation by Prof. Marco Gori

This lecture will focus on the theory and applications of Graph Neural Networks (GNNs) and on related topics in Neural-Symbolic Computation. The course lays the foundations of neural computation over patterns represented as graphs, in fields ranging from computer vision to bioinformatics. GNNs will then be presented for different applications on graph-based domains, where inferential processes are expected to also involve the neighbors of vertices (e.g. social networks). Finally, the diffusion mechanisms taking place in GNNs will be integrated with more general Neural-Symbolic models, where the decision mechanisms need to be coherent with external representations of environmental knowledge.

Lab | from 2pm to 5:15pm | Graph Neural Networks and Neural-Symbolic Computation

The lab will start with a brief introduction to the available GNN frameworks, then show how to represent a graph and how to define a GNN model in those frameworks. We will then explore learning tasks such as Node Classification, Graph Classification, and Link Prediction. We will finish with one or two projects chosen among Graph Visualization, Subgraph Matching, Clique and Community Detection, and Learning PageRank.
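The building block every GNN framework wraps is a message-passing layer: each node aggregates its neighbors' features and applies a shared transformation. A GCN-style sketch from first principles (the toy graph and feature sizes are illustrative; the lab will use a full framework instead):

```python
import torch

def gcn_layer(adj, h, weight):
    """One message-passing step: each node averages its neighbors' features
    (including its own, via self-loops), then applies a shared linear map
    followed by a nonlinearity."""
    adj_hat = adj + torch.eye(adj.size(0))       # add self-loops
    deg = adj_hat.sum(dim=1, keepdim=True)       # node degrees for normalization
    return torch.relu(((adj_hat @ h) / deg) @ weight)

# Toy graph: 4 nodes in a ring, 5 input features, 3 output features
adj = torch.tensor([[0., 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]])
h = torch.randn(4, 5)
w = torch.randn(5, 3)
out = gcn_layer(adj, h, w)
print(out.shape)  # torch.Size([4, 3])
```

Stacking k such layers lets information diffuse k hops across the graph; node classification reads out `out` per node, while graph classification pools it over all nodes.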