
Interview: Building Reliable AI

Vincent Fortuin on the Future of Bayesian Deep Learning

In a collaboration with 24 researchers from around the globe, Dr. Vincent Fortuin presents a new approach in the realm of artificial intelligence (AI). Their latest work challenges the status quo with a vision for AI that is more reliable, trustworthy, and safe, and will be presented at the International Conference on Machine Learning in Vienna, where around 10,000 international AI experts are expected to attend.

In this interview, Dr. Vincent Fortuin explains their new approach to reliable AI:

“AI systems that can judge their own reliability become more than just prediction machines; they become partners in decision-making.”
Dr. Vincent Fortuin

Could you explain, in simple terms, what Bayesian deep learning is and how it differs from traditional deep learning approaches?

VF: Bayesian deep learning (BDL) is an approach that combines the power of deep learning with the wisdom of Bayesian statistics. Deep learning has proven to be incredibly effective at making predictions, but it often lacks the ability to quantify its uncertainty. This is where Bayesian statistics comes in. By treating the parameters of a deep learning model as random variables, BDL allows us to estimate not just a single best prediction but a whole distribution of possible outcomes. This distribution captures the uncertainty inherent in the data and the model, providing a more nuanced understanding of the predictions. This can be crucial for making informed decisions, especially in safety-critical applications.
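To make the idea concrete, here is a minimal sketch in Python (an illustration for this article, not code from the position paper): a Bayesian linear model places a prior over its weights and returns a predictive distribution, a mean together with an uncertainty estimate, instead of a single number.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + noise
X = rng.uniform(-1, 1, size=(50, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=50)

Phi = np.hstack([X, np.ones((len(X), 1))])  # features: [x, bias]
alpha, beta = 1.0, 100.0                    # prior precision and (assumed known) noise precision

# Gaussian posterior over the weights instead of a single point estimate
cov = np.linalg.inv(alpha * np.eye(2) + beta * Phi.T @ Phi)
mean = beta * cov @ Phi.T @ y

# Predictive distribution at a new input: a mean *and* an uncertainty
phi_new = np.array([0.5, 1.0])
pred_mean = phi_new @ mean
pred_var = 1.0 / beta + phi_new @ cov @ phi_new  # noise + parameter uncertainty
print(f"prediction: {pred_mean:.2f} +/- {np.sqrt(pred_var):.2f}")
```

In a Bayesian deep learning model the posterior over millions of weights can no longer be written down in closed form and has to be approximated, but the principle is the same as in this toy example.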

What motivated you and your team to focus on Bayesian deep learning in the context of large-scale AI systems?

VF: As deep learning models have grown in size and complexity, so have the challenges associated with their deployment. Large-scale AI systems, such as foundation models, have achieved remarkable performance in various tasks. However, they often lack the ability to quantify their uncertainty or adapt to new data efficiently. This can lead to overconfident predictions and a lack of robustness when faced with data that differs from their training distribution.

We believe that BDL offers a promising solution to these challenges.

Can you elaborate on how AI systems can benefit from being able to judge their own reliability?

VF: AI systems that can judge their own reliability become more than just prediction machines; they become partners in decision-making. By expressing their uncertainty, these systems provide users with crucial information to make informed choices. For example, in medical diagnosis, an AI system might suggest a particular treatment with high confidence based on past data. However, when presented with a unique patient, it might recognize that its prediction is less certain due to limited similar cases in its training data, and can thus abstain from making a prediction and defer the decision to an experienced clinician.

This ability to judge reliability is especially valuable in high-stakes situations, such as autonomous driving or medical diagnosis. It enables the system to defer to human expertise or request additional data when its confidence is low, ensuring that critical decisions are made with the necessary level of caution.
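The deferral logic itself can be sketched in a few lines (a simplified illustration, assuming the model already provides samples from its predictive distribution, for example from an ensemble or Monte Carlo dropout): if the samples disagree too much, the system abstains rather than predicts.

```python
import numpy as np

def predict_or_defer(prob_samples: np.ndarray, threshold: float = 0.2):
    """prob_samples: array of shape (num_samples, num_classes) from a Bayesian model."""
    mean_probs = prob_samples.mean(axis=0)
    # Disagreement between samples is a simple proxy for model uncertainty.
    uncertainty = prob_samples.std(axis=0).max()
    if uncertainty > threshold:
        return None  # defer to a human expert or request additional data
    return int(mean_probs.argmax())

# Example: a confident case vs. an uncertain case
confident = np.array([[0.90, 0.10], [0.92, 0.08], [0.88, 0.12]])
uncertain = np.array([[0.90, 0.10], [0.30, 0.70], [0.60, 0.40]])
print(predict_or_defer(confident))  # -> 0 (predict)
print(predict_or_defer(uncertain))  # -> None (abstain)
```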

“I envision a future where AI is not just smart but also wise.”
Dr. Vincent Fortuin


What are some practical applications of Bayesian deep learning in everyday technology, such as chatbots and virtual assistants?

VF: Bayesian deep learning has the potential to revolutionize everyday technology by making it more reliable and user-friendly. For example, consider a chatbot or virtual assistant equipped with BDL. If you have ever used any of the current chatbots yourself, you might have noticed that they often give very confident answers, even when they are wrong. A Bayesian version of a chatbot, on the other hand, would be able to provide an answer along with an estimate of its confidence. If the question is outside its expertise or the available data is insufficient, it can communicate this uncertainty to the user, inviting further clarification or suggesting alternative sources of information.
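One very rough way to mimic this behaviour (a sketch only; `toy_generate` is a hypothetical stand-in for a real language model, and agreement between sampled answers is just one crude proxy for the kind of uncertainty a Bayesian chatbot would report) is to sample several answers and pass on how strongly they agree.

```python
from collections import Counter
import random

def answer_with_confidence(generate, question: str, num_samples: int = 5):
    """Sample several answers and use their agreement as a crude confidence score."""
    samples = [generate(question) for _ in range(num_samples)]
    answer, count = Counter(samples).most_common(1)[0]
    confidence = count / num_samples
    if confidence < 0.5:  # too much disagreement: communicate the uncertainty
        return "I am not sure about this one -- could you clarify, or check another source?"
    return f"{answer} (confidence ~{confidence:.0%})"

# Dummy stand-in for a real language model, only to make the sketch runnable:
def toy_generate(question: str) -> str:
    return random.choice(["Vienna", "Vienna", "Vienna", "Graz"])

print(answer_with_confidence(toy_generate, "Where is ICML 2024 taking place?"))
```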

This level of transparency enhances the user experience and builds trust. It also helps manage user expectations, reducing the risk of overreliance on the system's predictions. Additionally, BDL can enable these systems to adapt and improve over time by efficiently incorporating new data and user feedback.


In what ways could this approach revolutionize fields like medicine and scientific research?

VF: In the field of medicine, BDL has the potential to improve diagnosis, treatment planning, and drug discovery. For example, when analyzing medical images, BDL can provide uncertainty estimates for its predictions, helping radiologists identify potential errors or areas that require further examination. Additionally, BDL can facilitate the development of personalized treatment plans by incorporating patient-specific data and providing uncertainty estimates for different treatment options.

In scientific research, BDL can aid in discovering new materials, designing molecules, and understanding complex physical systems. By efficiently exploring the vast space of possibilities, BDL can guide scientists toward promising directions and identify areas where more exploration is needed. This not only accelerates the research process but also enhances the reliability of scientific discoveries.

What is your vision for the future of AI with the integration of Bayesian principles? How do you see AI evolving over the next decade?

VF: I envision a future where AI is not just smart but also wise. By integrating Bayesian principles, AI systems will become more robust, reliable, and trustworthy. They will be able to express their uncertainty, adapt to new data efficiently, and provide transparent explanations for their decisions. This will enable AI to assist humans in a broader range of tasks, from everyday conversations to scientific discovery.

Over the next decade, I believe we will see a shift towards more Bayesian-inspired AI systems. The integration of Bayesian principles will become a key differentiator, with companies and researchers striving to develop AI that is not just accurate but also reliable and safe. This will lead to a new generation of AI applications that are more aligned with the complexities and uncertainties of the real world.

“It is our belief that Bayesian deep learning has the potential to revolutionize AI, and we want to invite others to join us on this journey.”

Dr. Vincent Fortuin


Can you tell us about the recent collaboration that led to the position paper, "Bayesian Deep Learning is Needed in the Age of Large-Scale AI"? How did this partnership come about?

VF: The collaboration behind this position paper brings together a diverse group of researchers from prestigious institutions around the globe, including the Universities of Oxford and Cambridge, Harvard, MIT, and Google DeepMind. We were all united by a shared passion for Bayesian deep learning and its potential to revolutionize AI. We recognized the need to highlight the importance of BDL, especially in the context of large-scale AI systems. By combining our diverse expertise and insights, we aim to raise awareness, spark discussion, and inspire further research in this direction.

You are presenting your work at the International Conference on Machine Learning in Vienna. What are you hoping to achieve through this presentation?

VF: The International Conference on Machine Learning is a prestigious venue that brings together leading researchers and practitioners in the field and is one of the largest AI conferences in the world. By presenting our work there, we aim to reach a broad audience and spark discussions about the role of BDL in the age of large-scale AI. We hope to inspire researchers, practitioners, and policymakers to consider the benefits of BDL and explore its potential in their respective domains.

Additionally, we want to showcase the exciting research avenues that lie ahead. By highlighting some of the challenges and potential solutions, we aim to encourage further exploration and collaboration in this exciting field. It is our belief that BDL has the potential to revolutionize AI, and we want to invite others to join us on this journey.

The International Conference on Machine Learning (ICML)

ICML is the leading international academic conference in machine learning and one of the fastest-growing artificial intelligence conferences in the world.

How can other researchers and institutions contribute to or collaborate on your vision for Bayesian deep learning?

VF: That's a great question! With our paper, we try to encourage researchers and practitioners in different fields to explore the potential of BDL in their respective domains. BDL is a versatile approach that can be applied to a wide range of problems, from computer vision and natural language processing to scientific discovery and healthcare. By experimenting with BDL and sharing their findings, researchers can contribute to a growing body of knowledge and help refine the techniques.

Additionally, there is a need for further research on the fundamental questions of Bayesian deep learning. In our paper, we identify several open problems and promising research directions, in the hope that researchers and students take inspiration from them. BDL is a complex field that requires expertise from various disciplines, including machine learning, statistics, and application-specific domains. By fostering collaborations and building communities, we can accelerate the development and adoption of BDL, bringing its benefits to a wider range of applications.

Finally, there are also some specialized scientific meetings and venues for Bayesian machine learning, such as the annual international Symposium on Advances in Approximate Bayesian Inference (AABI), for which I am currently serving as the General Chair. Interested researchers and students are always welcome to join these events to learn more about the current research questions in the field.

Passionate about AI in Health

Artificial intelligence (AI) is not limited to creating visuals or answering simple questions. It has exceptional potential to improve human health: advanced data analysis and machine learning methods developed by our experts are revolutionizing medical research, enabling early disease detection, personalized medicine, and improved patient outcomes. Helmholtz Munich is a leading organization in pioneering new ways to enhance people’s health using AI.

How does incorporating Bayesian statistics into AI models enhance their reliability and trustworthiness?

VF: Incorporating Bayesian statistics into AI models brings several benefits that enhance their reliability and trustworthiness.

Firstly, BDL provides a principled way to quantify uncertainty, allowing the model to express its confidence in its predictions. This is especially important in safety-critical applications, where incorrect or overconfident predictions can have serious consequences.

Secondly, BDL enables AI models to adapt to new data more efficiently. By treating parameters as random variables, BDL can update its beliefs in a more flexible manner, avoiding the need to retrain the entire model from scratch when new data becomes available. This adaptability is crucial for keeping up with the ever-evolving nature of real-world data.
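For simple Bayesian models this updating is explicit: the current posterior simply becomes the prior for the next batch of data. The sketch below (a toy linear-Gaussian example for this article, not the paper's method) folds in three batches of data without ever retraining from scratch.

```python
import numpy as np

rng = np.random.default_rng(1)

def bayes_update(prior_mean, prior_cov, Phi, y, beta=100.0):
    """One Gaussian belief update for a linear model (beta = known noise precision)."""
    prior_prec = np.linalg.inv(prior_cov)
    post_cov = np.linalg.inv(prior_prec + beta * Phi.T @ Phi)
    post_mean = post_cov @ (prior_prec @ prior_mean + beta * Phi.T @ y)
    return post_mean, post_cov

def make_batch(n):
    x = rng.uniform(-1, 1, size=n)
    Phi = np.stack([x, np.ones(n)], axis=1)        # features: [x, bias]
    y = 2.0 * x - 0.5 + 0.1 * rng.normal(size=n)   # true weights: slope 2.0, bias -0.5
    return Phi, y

# Start from a broad prior; each new batch turns the old posterior into the new prior.
mean, cov = np.zeros(2), np.eye(2)
for _ in range(3):
    mean, cov = bayes_update(mean, cov, *make_batch(20))
print("posterior mean after three batches:", mean.round(2))  # close to [2.0, -0.5]
```

For deep networks the update can only be approximated, but the same prior-becomes-posterior logic is what allows Bayesian models to incorporate new data without starting over.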


Lastly, BDL facilitates a better understanding of the model's decision-making process. By examining the posterior distribution over parameters, we can gain insights into the importance of different features and the relationships between them. This transparency helps build trust and facilitates the detection of potential biases or errors in the model.
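As a small, self-contained illustration (all numbers below are made up for the example), one can compare the posterior mean and spread of each weight to judge which input features the model actually relies on: a weight whose posterior is tightly concentrated away from zero matters; one whose posterior straddles zero does not.

```python
import numpy as np

feature_names = ["age", "blood_pressure", "noise_feature"]
post_mean = np.array([0.80, 1.50, 0.05])  # hypothetical posterior means of the weights
post_std = np.array([0.10, 0.20, 0.40])   # hypothetical posterior standard deviations

for name, m, s in zip(feature_names, post_mean, post_std):
    relevant = abs(m) > 2 * s  # crude check: is zero well outside the posterior's bulk?
    print(f"{name:15s} weight = {m:+.2f} +/- {s:.2f}   clearly relevant: {relevant}")
```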

Find Out More About Dr. Vincent Fortuin

Dr. Vincent Fortuin is an Early Career Investigator at the Computational Health Center at Helmholtz Munich, a tenure-track research group leader at Helmholtz AI in Munich, where he leads the group for Efficient Learning and Probabilistic Inference for Science (ELPIS), and a faculty member at the Technical University of Munich. He is also a Branco Weiss Fellow, a Fellow of the Konrad Zuse School of Excellence in Reliable AI, and affiliated with the Munich Center for Machine Learning. His research focuses on reliable and data-efficient AI approaches leveraging Bayesian deep learning, deep generative modeling, meta-learning, and PAC-Bayesian theory. Before joining Helmholtz Munich, he completed his PhD in Machine Learning at ETH Zürich and was a Research Fellow at the University of Cambridge. He is a unit faculty member of ELLIS, a regular reviewer and area chair for all major machine learning conferences, an action editor for TMLR, and a co-organizer of the Symposium on Advances in Approximate Bayesian Inference (AABI) and the ICBINB initiative.

Latest update: July 2024.