## Robust Optimal Experimental Design for Non-Linear Bayesian Inference

Seminar Talk, Argonne National Lab, Lemont, Illinois

Conference Talk, Amsterdam RAI, Amsterdam, Netherlands

**Abstract**: We consider sensitivity analysis of Bayesian linear inverse problems with respect to modeling uncertainties. To this end, we study the sensitivity of the information gain, as measured by the Kullback-Leibler divergence from the posterior to the prior. This choice provides a principled approach that leverages key structures within the Bayesian inverse problem, and the information gain admits a closed-form expression in the case of linear Gaussian inverse problems. Its derivatives, however, are extremely challenging to compute. To address this challenge, we present accurate and efficient methods that combine eigenvalue sensitivities with hyper-differential sensitivity analysis, taking advantage of adjoint-based gradient and Hessian computation. The result is a computational approach whose cost, in number of PDE solves, does not grow under mesh refinement. We demonstrate these methods on an application-driven model problem: a simplified earthquake model in which fault slip is inferred from surface measurements.
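The closed-form expression for the information gain in the linear Gaussian case can be illustrated on a small finite-dimensional toy problem. The sketch below (an illustration under assumed toy dimensions and operators, not the talk's actual computation) checks that the D-optimal criterion, half the sum of `log(1 + lambda_i)` over the eigenvalues of the prior-preconditioned data-misfit Hessian, agrees with the log-determinant ratio of the prior and posterior covariances:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3                                  # parameter and data dimensions (toy sizes)
G = rng.standard_normal((m, n))              # stand-in linear parameter-to-observable map
Gamma_pr = np.diag(rng.uniform(0.5, 2.0, n)) # prior covariance (diagonal for simplicity)
Gamma_noise = 0.1 * np.eye(m)                # noise covariance

# Data-misfit Hessian H = G^T Gamma_noise^{-1} G and its
# prior-preconditioned counterpart H_tilde = Gamma_pr^{1/2} H Gamma_pr^{1/2}
H = G.T @ np.linalg.inv(Gamma_noise) @ G
L = np.sqrt(Gamma_pr)                        # square root of the diagonal prior covariance
H_tilde = L @ H @ L

# D-optimal criterion: 0.5 * log det(I + H_tilde) = 0.5 * sum_i log(1 + lambda_i)
d_opt = 0.5 * np.sum(np.log1p(np.linalg.eigvalsh(H_tilde)))

# Same quantity via the covariances: Gamma_post = (H + Gamma_pr^{-1})^{-1}
Gamma_post = np.linalg.inv(H + np.linalg.inv(Gamma_pr))
logdet_ratio = 0.5 * (np.linalg.slogdet(Gamma_pr)[1]
                      - np.linalg.slogdet(Gamma_post)[1])
```

The two values coincide because `Gamma_pr @ inv(Gamma_post) = I + Gamma_pr @ H`, which is similar to `I + H_tilde`; in the PDE setting these eigenvalues would be obtained matrix-free rather than by forming `H` explicitly.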

Seminar Talk, Applied Math Graduate Student Seminar at NCSU, Raleigh, North Carolina

**Abstract**: We consider sensitivity analysis of Bayesian inverse problems with respect to modeling uncertainties. To this end, we study the sensitivity of the information gain, as measured by the Kullback-Leibler divergence from the posterior to the prior. This choice provides a principled approach that leverages key structures within the Bayesian inverse problem, and the information gain reduces to the Bayesian D-optimal design criterion in the case of linear Gaussian inverse problems. However, the derivatives of the information gain are not simple to compute, and finite difference approximations are not always feasible, let alone scalable. As a first step toward this goal, in this talk we present a method for computing eigenvalue sensitivities of implicitly defined linear operators arising in PDE-constrained optimization problems. Specifically, we consider eigenvalue sensitivities of the so-called data-misfit Hessian and its preconditioned counterpart. We start with simple examples and work our way up to the expressions appearing in the information gain. Our approach relies on adjoint-based methods for gradient and Hessian computation; the resulting sensitivity expressions are exact and can be computed in a scalable manner.
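The basic mechanism behind eigenvalue sensitivities can be sketched on a small dense example (a toy stand-in, not the talk's operator): for a symmetric matrix `A(theta)` with a simple eigenvalue `lambda` and unit eigenvector `v`, first-order perturbation theory gives `d lambda / d theta = v^T (dA/dtheta) v`, which the snippet below checks against a central finite difference:

```python
import numpy as np

# Parameterized symmetric operator A(theta) = A0 + theta * B, a toy
# stand-in for a data-misfit Hessian depending on a model parameter theta.
rng = np.random.default_rng(1)
n = 6
A0 = rng.standard_normal((n, n)); A0 = A0 + A0.T
B = rng.standard_normal((n, n)); B = B + B.T

def A(theta):
    return A0 + theta * B

theta = 0.3
lam, V = np.linalg.eigh(A(theta))
v = V[:, -1]                         # unit eigenvector of the largest eigenvalue

# First-order sensitivity of a simple eigenvalue: v^T (dA/dtheta) v,
# where dA/dtheta = B for this parameterization.
dlam_exact = v @ B @ v

# Central finite-difference check on the largest eigenvalue
h = 1e-6
dlam_fd = (np.linalg.eigvalsh(A(theta + h))[-1]
           - np.linalg.eigvalsh(A(theta - h))[-1]) / (2 * h)
```

In the PDE-constrained setting `dA/dtheta` is never formed; its action on the eigenvector is instead assembled from adjoint-based gradient and Hessian-vector products, which is what makes the exact expressions scalable.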

Seminar Talk, Applied Math Graduate Student Seminar at NCSU, Raleigh, North Carolina

**Abstract**: Because the conditions along a fault line cannot be observed directly, inferring the parameters that describe them has been of practical interest for the past couple of decades. Under a linear elasticity forward model, we consider Bayesian inference in the infinite-dimensional setting, using surface displacement measurements to obtain a posterior distribution characterizing the initial fault displacement. We employ adjoint-based gradient computation to solve the underlying partial differential equation constrained optimization problem, and we leverage both dimensionality reduction in the parameter space and the low-rank structure of the posterior covariance, which arises from the sparsity of the measurement locations, to carry out these computations in a scalable manner.
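The low-rank structure mentioned above can be made concrete on a toy linear Gaussian analogue (an illustration with assumed dimensions, not the talk's elasticity model): with far fewer measurements than parameters, the data-misfit Hessian has rank at most the number of measurements, so the posterior covariance is the prior covariance minus a low-rank update built from the Hessian's few nonzero eigenpairs:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 4                       # many fault parameters, few surface measurements
G = rng.standard_normal((m, n))    # stand-in linearized observation operator
Gamma_pr = np.eye(n)               # prior covariance (identity for simplicity)
Gamma_noise = 0.05 * np.eye(m)     # noise covariance

# Data-misfit Hessian has rank <= m because only m measurements inform it
H = G.T @ np.linalg.inv(Gamma_noise) @ G
lam, W = np.linalg.eigh(H)
lam_r, W_r = lam[-m:], W[:, -m:]   # the only (generically) nonzero eigenpairs

# Low-rank posterior covariance update (for Gamma_pr = I):
#   Gamma_post = Gamma_pr - W_r diag(lam_i / (1 + lam_i)) W_r^T
Gamma_post_lr = Gamma_pr - W_r @ np.diag(lam_r / (1 + lam_r)) @ W_r.T

# Dense reference: Gamma_post = (H + Gamma_pr^{-1})^{-1}
Gamma_post = np.linalg.inv(H + np.linalg.inv(Gamma_pr))
err = np.max(np.abs(Gamma_post_lr - Gamma_post))
```

Only `m` eigenpairs are needed regardless of how finely the fault is discretized, which is the property that keeps the computation scalable under mesh refinement.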