Interview: 3IA Côte d’Azur welcomes new International Chairholder Adrian Raftery

Published on March 23, 2026. Updated on March 23, 2026.
Prof. Adrian E. Raftery new International Chairholder at 3IA Côte d'Azur

The 3IA Côte d'Azur is proud to welcome Professor Adrian Raftery, from the University of Washington in Seattle, as its newest International Chairholder.

His research project, titled "Bayesian Model Selection and Uncertainty Quantification for Inference in Artificial Intelligence Models", falls within the Institute's first research axis: Core Elements of AI. To mark the launch of this chair, we sat down with him to explore the potential synergies between his work and that of Professor Charles Bouveyron, 3IA Chairholder, who invited him to join the 3IA.

You and Professor Bouveyron have a long-standing scientific relationship, notably through your co-authored work on model-based clustering. What prompted you to take this a step further by joining his 3IA Côte d'Azur Chair as an International Chairholder?

Our collaboration goes back to 2013, when we found common ground around model-based clustering, a field in which Charles (Prof. Bouveyron – Ed.) has been a leading figure and in which I was one of the early researchers. From 2013 to 2019, we worked intensively on a book on the subject, co-authored with Gilles Celeux and Brendan Murphy. Published in 2019, it has been warmly received, and I'm very proud of what we achieved together.

Around the same time, the 3IA Institute was being established, and I was involved from the very beginning. There was already talk of an international chair for me at that stage, but for practical reasons it didn't come together then. Since then, I've stayed in close contact with the 3IA, including recently serving on its review committee, which gave me a broader picture of the institute's work, and I've been genuinely impressed by everything it has accomplished.

I should mention that I'm primarily a statistician; my approach to AI is through the lens of statistical modeling. But this year, the timing finally worked out, and given that I've been involved since before the institute even existed, it felt like I simply had to come.


Your work on Bayesian model averaging and uncertainty quantification complements Professor Bouveyron's research on deep latent variable models. How do you see these two traditions enriching each other within the framework of this collaboration?

These are both important aspects of statistical modeling, and I think they can complement one another very well. A particularly exciting direction has recently come into view, rooted in work I've been doing for many years on uncertainty quantification in demography, specifically population projections and estimation.

Demography has traditionally relied on deterministic mathematical models, with very little focus on uncertainty assessment or systematic parameter estimation. For over a century, population forecasting has largely been driven by subjective expert opinion. For a long time, I've been working with the United Nations to develop more rigorous, statistically grounded methods that properly account for uncertainty. These methods have now been adopted by the UN for their official population forecasts for all countries, including France, even though France's own national institute (INSEE – Ed.) has not yet followed suit.

Within this framework, some questions arise where AI methods could make a real difference, short-term fertility forecasting being a prime example. We've actually already started exploring this. It's a genuinely exciting direction, and it has emerged directly out of this chair. It brings together a nice group of people, including Pierre-Alexandre Mattei (3IA Deputy Scientific Director and Chairholder – Ed.), and a younger researcher, Rémy Sun (MAASAI team researcher – Ed.), who is also interested in these questions. We're just getting started, but it feels very promising.


The 3IA Côte d'Azur treats ethical AI not as a separate concern, but as a transversal dimension running through all its research axes, including core elements of AI. How do you see Bayesian methods, with their principled treatment of uncertainty and model transparency, contributing to this vision of responsible AI by design?

I think the key contribution of Bayesian methods lies precisely in their ability to provide a principled framework for assessing uncertainty accurately, and that has clear ethical implications. Take demography as an example: if a model predicts that France's population will reach 68 million in ten years, there is inevitably a great deal of uncertainty around that figure. Being honest about that uncertainty, acknowledging that the forecast will be wrong to some degree, and trying to quantify how wrong it could be, that is, I think, an example of a modest and responsible approach to forecasting, and more broadly, to AI.

AI is a rapidly developing field, and like all new fields, it tends to attract a great deal of hype. People focus on what it can do well, but every model is wrong in some way. The real question is: how wrong can it be? Bayesian analysis offers a rigorous way to quantify that, and I hope it can become a routine part of how AI results are reported and interpreted, helping both researchers and the public engage with these tools more accurately and more responsibly.
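To make the idea of quantifying "how wrong" a forecast can be concrete, here is a minimal Bayesian sketch: a conjugate normal model for an annual population growth rate, propagated through a ten-year posterior predictive simulation. All numbers (the growth-rate "observations", the prior, the starting population) are invented for illustration and are not from the interview or from any real data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical annual growth rates (%), standing in for historical data.
growth_obs = np.array([0.4, 0.3, 0.35, 0.25, 0.3, 0.2, 0.28, 0.22])

# Conjugate update for a normal mean with known variance:
# prior mu ~ N(m0, s0^2), likelihood growth ~ N(mu, sigma^2).
m0, s0 = 0.3, 0.5                      # vague prior on the mean growth rate
sigma = growth_obs.std(ddof=1)         # treat observed spread as known noise
n = len(growth_obs)

post_var = 1.0 / (1.0 / s0**2 + n / sigma**2)
post_mean = post_var * (m0 / s0**2 + growth_obs.sum() / sigma**2)

# Posterior predictive simulation: parameter uncertainty plus
# year-to-year noise, compounded over a 10-year horizon.
pop_now = 66.0                         # current population, millions (illustrative)
n_sims, horizon = 10_000, 10
mu_draws = rng.normal(post_mean, np.sqrt(post_var), size=n_sims)
paths = pop_now * np.prod(
    1 + rng.normal(mu_draws[:, None], sigma, size=(n_sims, horizon)) / 100,
    axis=1,
)

lo, hi = np.percentile(paths, [2.5, 97.5])
print(f"median forecast: {np.median(paths):.1f}M, 95% interval: [{lo:.1f}, {hi:.1f}]M")
```

The point is not the specific numbers but the output format: a forecast reported as an interval, making the model's own uncertainty part of the result rather than an afterthought.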


What are the most exciting open questions you and Professor Bouveyron are hoping to explore together within this chair, and are there application domains, such as health or environmental science, where you see the most promising opportunities?

To answer this in the context of demography: unlike most scientific disciplines, demography has something akin to the laws of conservation in physics. The number of people next year will equal exactly the number alive today, plus births, minus deaths, plus immigrants, minus emigrants. That is an exact, deterministic relationship, and most statistical and AI modeling frameworks are simply not designed to incorporate that kind of constraint. So a key open question is: how do you combine statistical and AI approaches, which are powerful for quantifying uncertainty and accounting for measurement error, with these exact deterministic relationships? This hasn't really been done in a satisfactory way, and I think it represents a genuinely interesting methodological challenge.
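The balancing identity described above can be combined with stochastic component forecasts in a simple Monte Carlo sketch: each flow (births, deaths, migration) is uncertain, yet the accounting identity holds exactly on every simulated path. All component distributions below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

pop_now = 68_000.0                     # current population, thousands (illustrative)
n_sims = 5_000

# Uncertain component forecasts for one year ahead (thousands).
births = rng.normal(700, 30, n_sims)
deaths = rng.normal(640, 25, n_sims)
immigrants = rng.normal(330, 50, n_sims)
emigrants = rng.normal(270, 40, n_sims)

# The demographic balancing equation, applied exactly to every draw:
# pop_next = pop_now + births - deaths + immigrants - emigrants.
pop_next = pop_now + births - deaths + immigrants - emigrants

# The identity itself contributes zero uncertainty; all spread comes
# from the flows. The residual is identically zero by construction.
residual = pop_next - (pop_now + births - deaths + immigrants - emigrants)
print(f"max |residual|: {np.abs(residual).max():.1e}")
lo, hi = np.percentile(pop_next, [2.5, 97.5])
print(f"95% interval for next year's population: [{lo:.0f}, {hi:.0f}] thousand")
```

This toy version is easy because the constraint is linear and applied one step at a time; the open question raised in the interview is how to impose such exact relationships inside richer statistical or AI models, where they are not built in by design.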

The second area is climate science. One of the major inputs to assessments of present and future climate change is population: combined with economics and technology, it drives projections of carbon emissions and their downstream effects. If we could improve demographic forecasting using AI, and potentially do the same for economic and technological projections, that could lead to meaningfully better assessments of climate change. These are the two directions I'm most excited to explore within this collaboration.


Learn more about Adrian E. Raftery's International Chair
Learn more about Charles Bouveyron's Chair
Learn more about Pierre-Alexandre Mattei's Chair
Learn more about Rémy Sun's research