Bayesian Modelling for Prediction and Uncertainty Quantification in Scientific Machine Learning
Predictive deep learning models are a foundational technology driving much of the current progress in the natural sciences, powering applications such as molecular and protein design, weather forecasting, and aerodynamic shape optimization. Despite their remarkable success, these models face a significant challenge: they struggle to assess confidence in their predictions, especially when making predictions outside their training data. This limitation raises safety concerns in many applications, where understanding the uncertainty behind a prediction is as important as the prediction itself. Reliable uncertainty quantification is therefore essential for enhancing model safety and performance. In this contribution, we show how deep Bayesian modelling can be leveraged to address the challenge of uncertainty quantification in predictive deep learning models. Specifically, we present BARNN, a variational Bayesian approach for autoregressive and recurrent networks that quantifies uncertainty in sequential data, and BLIP, a variational Bayesian approach for uncertainty quantification in graph neural networks. Extensive experimental results across fluid dynamics, time series forecasting, and molecular design demonstrate that Bayesian methods not only achieve competitive predictive accuracy but also deliver well-calibrated uncertainty estimates. These findings highlight the potential of such models to play a key role in advancing AI-driven scientific discovery by providing models that are not only accurate but also aware of their limitations.
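To make the general idea concrete, the following is a minimal sketch of mean-field variational inference for a single Bayesian linear layer in PyTorch. It is a generic illustration of variational Bayesian deep learning, not the BARNN or BLIP architectures, whose details are not given in this abstract; all names and hyperparameters below are illustrative assumptions. Each weight receives a Gaussian posterior q(w) = N(mu, sigma^2), training maximizes the evidence lower bound (expected log-likelihood minus a KL penalty toward the prior), and repeated stochastic forward passes at test time yield a predictive mean together with an uncertainty estimate.

    # Illustrative sketch only: a mean-field variational Bayesian linear layer.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BayesianLinear(nn.Module):
        def __init__(self, in_features, out_features, prior_std=1.0):
            super().__init__()
            self.prior_std = prior_std
            # Variational parameters: mean and (pre-softplus) scale of the weight posterior.
            self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
            self.w_rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
            self.b_mu = nn.Parameter(torch.zeros(out_features))
            self.b_rho = nn.Parameter(torch.full((out_features,), -5.0))

        def forward(self, x):
            # Reparameterization trick: sample weights as mu + sigma * eps.
            w_sigma = F.softplus(self.w_rho)
            b_sigma = F.softplus(self.b_rho)
            w = self.w_mu + w_sigma * torch.randn_like(w_sigma)
            b = self.b_mu + b_sigma * torch.randn_like(b_sigma)
            return F.linear(x, w, b)

        def kl_divergence(self):
            # KL between the Gaussian posterior and a zero-mean Gaussian prior,
            # added (suitably scaled) to the negative log-likelihood during training.
            def kl(mu, sigma):
                prior_var = self.prior_std ** 2
                return 0.5 * torch.sum(
                    (sigma ** 2 + mu ** 2) / prior_var
                    - 1.0
                    - 2.0 * torch.log(sigma / self.prior_std)
                )
            return kl(self.w_mu, F.softplus(self.w_rho)) + kl(self.b_mu, F.softplus(self.b_rho))

    if __name__ == "__main__":
        # At test time, multiple stochastic forward passes give both a prediction
        # and an estimate of how uncertain the model is about it.
        model = BayesianLinear(8, 1)
        x = torch.randn(32, 8)
        samples = torch.stack([model(x) for _ in range(50)])  # 50 posterior samples
        pred_mean, pred_std = samples.mean(0), samples.std(0)  # prediction + uncertainty

The same principle, a learned posterior over network weights sampled at prediction time, underlies the autoregressive and graph-based settings discussed above, where calibrated predictive variance is what distinguishes a trustworthy extrapolation from an overconfident one.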