Simulation-Based Inference: Likelihood-Free MCMC via Normalizing Flows and Variational Autoencoders
We present a new methodology for solving inverse problems in a variational framework that provides uncertainty quantification alongside the point predictions. To achieve this, we develop an MCMC framework that estimates the posterior distribution of the system parameters that generated the given observation, employing a Differential Evolution Metropolis sampling method. In complex problems, where the likelihood function is typically intractable or unavailable, we introduce a Normalizing Flow structure, specifically the Real-valued Non-Volume Preserving transformations (RealNVP) introduced by Dinh et al. The scaling and translation functions of the affine coupling layers are modeled by neural networks conditioned on the label, allowing the model to capture complex posterior distributions.

The inference process begins with the observation being processed by a Variational Autoencoder (VAE). This step reduces the dimensionality of the input handled by the RealNVP and extracts the most relevant features for parameter estimation. To further enhance the informativeness of the latent space, we add dense layers that sample from the latent distribution and predict the correct label. By including this supervised loss term alongside the standard VAE losses, the latent space becomes more structured and informative for the downstream inference task.

The proposed methodology is validated on two case studies: a steady-state groundwater flow problem governed by Darcy's law and a railway bridge structure. In the first case, the method shows strong performance in reconstructing the spatial distribution of the conductivity field. In the second case, it effectively estimates both the location and severity of structural damage.
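To make the sampling scheme concrete, the sketch below shows one Differential Evolution Metropolis update of a population of chains. It is a minimal illustration, not the authors' implementation: the `log_post` callable, the population layout, and the default scaling factor are assumptions (in the present setting the unnormalized log-posterior would be supplied by the conditional normalizing flow).

```python
# Hypothetical DE-Metropolis step: each chain proposes a move along the
# difference of two randomly chosen partner chains, then accepts or rejects
# with a standard Metropolis test.
import numpy as np

def de_metropolis_step(chains, log_post, gamma=None, noise=1e-6, rng=None):
    """One DE-Metropolis update of every chain in the (n_chains, dim) population."""
    rng = rng or np.random.default_rng()
    n_chains, dim = chains.shape
    gamma = gamma if gamma is not None else 2.38 / np.sqrt(2 * dim)  # common DE-MC scaling
    new_chains = chains.copy()
    for i in range(n_chains):
        # pick two distinct partner chains different from the current one
        r1, r2 = rng.choice([j for j in range(n_chains) if j != i], 2, replace=False)
        proposal = (chains[i]
                    + gamma * (chains[r1] - chains[r2])
                    + noise * rng.standard_normal(dim))
        # Metropolis accept/reject on the log-posterior difference
        if np.log(rng.uniform()) < log_post(proposal) - log_post(chains[i]):
            new_chains[i] = proposal
    return new_chains
```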
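The conditional coupling layers can be sketched as follows. This is an illustrative RealNVP-style affine coupling layer in PyTorch, assuming the label (or VAE latent code) enters as a conditioning vector; layer sizes, names, and the Tanh stabilization of the scale are placeholder choices, not the reported architecture.

```python
# Minimal conditional affine coupling layer (RealNVP style): half of the input
# passes through unchanged, the other half is scaled and translated by networks
# that see the frozen half together with the conditioning vector.
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    def __init__(self, dim, cond_dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        self.scale_net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim - self.half), nn.Tanh())
        self.trans_net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim - self.half))

    def forward(self, x, cond):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        h = torch.cat([x1, cond], dim=1)
        s, t = self.scale_net(h), self.trans_net(h)
        y2 = x2 * torch.exp(s) + t        # affine transform of the active half
        log_det = s.sum(dim=1)            # log |det J| of the coupling
        return torch.cat([x1, y2], dim=1), log_det

    def inverse(self, y, cond):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        h = torch.cat([y1, cond], dim=1)
        s, t = self.scale_net(h), self.trans_net(h)
        x2 = (y2 - t) * torch.exp(-s)     # exact inverse of the affine map
        return torch.cat([y1, x2], dim=1)
```

Stacking several such layers with alternating partitions, each conditioned on the same label, yields an invertible map with a tractable Jacobian determinant, which is what permits likelihood-free evaluation of the flow density.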
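The supervised VAE objective described above can be summarized with the sketch below: the standard reconstruction and KL terms plus an auxiliary head that predicts the label from latent samples. Dimensions, loss weights, and the use of mean-squared error for both reconstruction and supervision are assumptions made for illustration.

```python
# Schematic VAE with an auxiliary supervised head on the sampled latent code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupervisedVAE(nn.Module):
    def __init__(self, obs_dim, latent_dim, label_dim, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.mu_head = nn.Linear(hidden, latent_dim)
        self.logvar_head = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, obs_dim))
        # dense layers that take samples from the latent distribution and predict the label
        self.label_head = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                        nn.Linear(hidden, label_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.decoder(z), self.label_head(z), mu, logvar

def supervised_vae_loss(x, label, recon, label_pred, mu, logvar, beta=1.0, gamma=1.0):
    rec = F.mse_loss(recon, x)                                      # reconstruction term
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())   # KL to N(0, I)
    sup = F.mse_loss(label_pred, label)                             # supervised label term
    return rec + beta * kl + gamma * sup
```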