How to collaborate with 3IA Côte d’Azur? Three industrial partnerships

  • Partnerships
Published on May 5, 2021 · Updated on May 17, 2021

Discover how 3IA puts the expertise of its network of researchers to work with companies

 
  • David Gesbert (Eurecom), 3IA chair holder – Mitsubishi Electric R&D Centre Europe – "Federated learning for drones in wireless networks"

Eurecom, within the framework of David Gesbert's 3IA Chair, has signed two industrial research contracts with Mitsubishi Electric R&D Centre Europe.

The partnership consists, among other things, in funding a CIFRE thesis on the topic "Federated learning for drones in wireless networks".
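To make the federated setting concrete, here is a minimal sketch of federated averaging (FedAvg), the canonical algorithm in this area. The linear model, client data and hyperparameters below are hypothetical illustrations, not taken from the thesis itself; in the drone scenario each client would correspond to a drone training on its own local data.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient steps on a toy linear model (illustrative)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_averaging(global_w, clients, rounds=10):
    """Each round: clients train locally, the server averages their weights.
    Raw data never leaves the clients; only model parameters are exchanged."""
    for _ in range(rounds):
        updates = [local_update(global_w, X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        global_w = np.average(updates, axis=0, weights=sizes / sizes.sum())
    return global_w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # e.g. three drones, each holding private local data
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = federated_averaging(np.zeros(2), clients)
```

The weighted average by client dataset size is the standard FedAvg choice; in a wireless setting the interesting research questions concern which clients communicate, how often, and over what channel.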


 

  • Maurizio Filippone (Eurecom), 3IA chair holder - LightOn – “Accelerate machine learning through the use of optical hardware” 

Discover how to exploit light for fast and low-power computations! 

Maurizio Filippone (3IA Chair - Eurecom) and his team are working with the French company LightOn on novel approaches to accelerate machine learning through the use of optical hardware. 

This French company has recently developed a novel optics-based device, which they named the Optical Processing Unit (OPU). OPUs offer a truly promising solution by performing randomized computations with low power consumption.

Information and Communication Technologies (ICT) are constantly producing advancements that translate into a variety of societal improvements. The widespread use and growth of ICT, however, is posing a huge threat to the sustainability of this development, given that the energy consumption of current computing devices is growing at an uncontrolled pace. Within ICT, machine learning is currently one of the fastest growing fields.

Apart from isolated application-specific attempts, machine learning is generally implemented using transistor-based technology, and little effort has been devoted to addressing the issues pertaining to its sustainability.

In this project, we aim to radically change this and to propose a novel angle of attack on the sustainability of computations in machine learning. Our group is currently collaborating with the French company LightOn, which has developed a novel Optical Processing Unit (OPU). OPUs perform a specific matrix operation in hardware by exploiting the scattering properties of light. The operation performed by the OPU is a matrix-vector product followed by a nonlinear transformation. This happens at the speed of light and with a power consumption much lower than that of current computing devices; it is also possible to operate with large Gaussian random matrices, orders of magnitude larger than what current computing devices can handle.
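A software stand-in makes the operation concrete. The sketch below simulates the OPU pipeline in NumPy as a random matrix-vector product followed by a fixed nonlinearity; the specific intensity-measurement nonlinearity |Rx|² with a complex Gaussian matrix, and the dimensions, are assumptions for illustration. A real OPU performs this transform optically rather than in software.

```python
import numpy as np

def simulated_opu(x, R):
    """Software stand-in for the optical transform: a random projection
    followed by a fixed nonlinearity. Here we assume the nonlinearity is
    an intensity measurement, |Rx|**2, with a complex Gaussian matrix R."""
    return np.abs(R @ x) ** 2

rng = np.random.default_rng(42)
d_in, d_out = 128, 10_000   # output dimension far larger than the input (assumed sizes)
R = (rng.normal(size=(d_out, d_in)) + 1j * rng.normal(size=(d_out, d_in))) / np.sqrt(2)

x = rng.normal(size=d_in)
y = simulated_opu(x, R)     # 10,000 nonlinear random features from a 128-dim input
```

In software, storing and multiplying by `R` dominates cost and memory; optically, the scattering medium implements the projection implicitly, which is what allows much larger random matrices at much lower power.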

OPUs are perfectly suited to accelerate Gaussian processes with random feature expansions, given the possibility to create a large number of random features at the speed of light. 
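The random-feature principle can be illustrated on an ordinary CPU with classical random Fourier features, which approximate a kernel by an inner product of randomized features; this is the same idea an OPU accelerates optically. The RBF kernel, feature count and lengthscale below are illustrative assumptions, not specific to the project.

```python
import numpy as np

def random_fourier_features(X, W, b):
    """Map inputs to D random features so that z(x) @ z(x') approximates
    the RBF kernel k(x, x') = exp(-||x - x'||^2 / (2 * sigma^2))."""
    D = W.shape[0]
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

rng = np.random.default_rng(0)
d, D, sigma = 3, 20_000, 1.0
W = rng.normal(scale=1.0 / sigma, size=(D, d))   # frequencies from the kernel's spectral density
b = rng.uniform(0, 2 * np.pi, size=D)            # random phases

x1, x2 = rng.normal(size=d), rng.normal(size=d)
z1 = random_fourier_features(x1[None, :], W, b)[0]
z2 = random_fourier_features(x2[None, :], W, b)[0]

exact = np.exp(-np.sum((x1 - x2) ** 2) / (2 * sigma ** 2))
approx = z1 @ z2   # converges to `exact` as D grows
```

The approximation error shrinks like 1/√D, which is why generating very many random features cheaply (as an OPU does) directly benefits Gaussian process inference.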

This project aims to move beyond Gaussian process-based models by studying ways in which OPUs can be used for Bayesian Deep Learning. 

The motivation to study such models is that they offer the flexibility of modern neural networks, and the possibility to quantify uncertainty due to the Bayesian treatment. Such models promise to deliver advances in various applications where quantification of uncertainty is of primary interest, and this project will be an opportunity to show this on selected applications in life and environmental sciences.

 

  • Benoit Miramond (UCA), 3IA Chair holder – Renault – “Deep Spiking networks for Embedded and Efficient event-based intelligence” 

Deep Spiking networks for Embedded and Efficient event-based intelligence, in collaboration with Renault 

1st March 2021: kick-off of the ANR project DeepSee, led by B. Miramond at LEAT (3IA Chair – Université Côte d'Azur) in collaboration with Renault, I3S, Cerco and Prophesee.

The partners of the project will work over the next three years on Deep Spiking networks for Embedded and Efficient event-based intelligence.

Autonomous and intelligent embedded solutions are mainly designed as cognitive systems built on a three-step process (perception, decision and action), periodically invoked in a closed-loop manner in order to detect changes in the environment and appropriately choose the actions to be performed according to the mission to be achieved. In an autonomous agent such as a robot, a drone or a vehicle, these three stages are quite naturally instantiated in the form of i) the fusion of information from different sensors, ii) the scene analysis, typically performed by artificial neural networks, and iii) the selection of an action to be operated on actuators such as engines, mechanical arms or any means of interacting with the environment.

In that context, the growing maturity of the complementary technologies of Event-Based Sensors (EBS) and Spiking Neural Networks (SNN) is proven by recent results. The nature of these sensors questions the very way in which autonomous systems interact with their environment. Indeed, an Event-Based Sensor reverses the perception paradigm currently adopted by Frame-Based Sensors (FBS): instead of systematic, periodical sampling (whether an event has happened or not), it reflects the true causal relationship, where the event triggers the sampling of the information. We propose to study this disruptive change of the perception stage and how event-based processing can cooperate with the current frame-based approach to make the system more reactive and robust.
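The contrast between frame-based and event-based sampling can be sketched in a few lines: instead of reading every pixel at every frame, events are emitted only where a pixel's intensity changes by more than a threshold. This is a deliberately simplified software model (real event sensors such as Prophesee's operate asynchronously on log-intensity per pixel); the threshold and the toy scene are hypothetical.

```python
import numpy as np

def frames_to_events(frames, threshold=0.1):
    """Simplified event generation: emit (t, x, y, polarity) only where a
    pixel's intensity deviates from its last-sampled value by more than
    `threshold`. Static regions of the scene produce no data at all."""
    events = []
    ref = frames[0].astype(float)               # per-pixel reference levels
    for t, frame in enumerate(frames[1:], start=1):
        diff = frame.astype(float) - ref
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        for y, x in zip(ys, xs):
            events.append((t, x, y, 1 if diff[y, x] > 0 else -1))
            ref[y, x] = frame[y, x]             # update reference only where events fire
    return events

# A mostly static 8x8 scene with one bright pixel moving along a row:
# only two events per frame (pixel turning off, pixel turning on),
# versus 64 pixel reads per frame for a frame-based sensor.
frames = np.zeros((5, 8, 8))
for t in range(5):
    frames[t, 2, t] = 1.0
events = frames_to_events(frames)
```

The sparsity of the output is the point: the data rate follows scene activity rather than a fixed clock, which is what makes the event stream a natural input for spiking networks.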

Hence, SNN models have been studied for several years as an interesting alternative to Formal Neural Networks (FNN), both for their reduced computational complexity in deep network topologies and for their natural ability to support unsupervised, bio-inspired learning rules. The most recent results show that these methods are becoming more and more mature and are almost catching up with the performance of formal networks, even though most of the learning is done without data labels. But should we compare the two approaches when the very nature of their input data is different? In the context of image processing, one (FNN) deals with whole frames and categorizes objects; the other (SNN) is particularly suitable for event-based sensors and is, therefore, better suited to capture spatio-temporal regularities in a constant flow of events. The approach we propose to follow in the DeepSee project is to associate spiking networks with formal networks rather than putting them in competition.
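The elementary building block of an SNN, contrasted above with formal neurons, is the spiking neuron. A minimal leaky integrate-and-fire (LIF) model, a common textbook choice and not necessarily the neuron model used in DeepSee, can be sketched as follows; all parameter values are illustrative.

```python
def lif_neuron(input_current, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward
    rest, integrates the input current, and emits a spike (then resets)
    whenever it crosses the threshold. Returns a binary spike train."""
    v = 0.0
    spikes = []
    for i in input_current:
        v += dt * (-v / tau + i)    # leaky integration of the input
        if v >= v_thresh:
            spikes.append(1)        # spike and reset
            v = v_reset
        else:
            spikes.append(0)
    return spikes

# Constant drive produces regular spiking; no input produces no spikes,
# which is the event-driven efficiency argument in a nutshell.
spikes = lif_neuron([0.2] * 50)
```

Unlike a formal neuron, which outputs a real value at every forward pass, this unit communicates only through sparse binary events in time, matching the output format of an event-based sensor.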