Computing Eigenvalue Sensitivities for Sensitivity Analysis of the Information Gain in Bayesian Linear Inverse Problems

Abstract: We consider sensitivity analysis of Bayesian inverse problems with respect to modeling uncertainties. To this end, we study the sensitivity of the information gain, as measured by the Kullback-Leibler divergence from the posterior to the prior. This choice provides a principled approach that leverages key structures within the Bayesian inverse problem. Moreover, in the case of linear Gaussian inverse problems, the information gain reduces to the Bayesian D-optimal design criterion. However, the derivatives of the information gain are not simple to compute, and finite-difference approximations are not always feasible, let alone scalable. To solve half the puzzle, in this talk we present a method for computing eigenvalue sensitivities of implicitly defined linear operators appearing in PDE-constrained optimization problems. Specifically, we consider eigenvalue sensitivities of the so-called data-misfit Hessian and its preconditioned counterpart. We start with simple examples and work our way up to the expressions appearing in the information gain. Our approach relies on adjoint-based methods for gradient and Hessian computation. The resulting expressions for the sensitivities are exact and can be computed in a scalable manner.
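
For orientation, a minimal sketch of the linear Gaussian case mentioned above; the symbols \(\tilde H\), \(\lambda_i\), \(v_i\), and the auxiliary parameter \(\theta\) are notational assumptions for illustration, not taken from the abstract. With \(\tilde H(\theta)\) the prior-preconditioned data-misfit Hessian, the expected information gain equals the Bayesian D-optimality criterion and depends on \(\theta\) only through the eigenvalues of \(\tilde H\), so its derivative reduces to eigenvalue sensitivities; for a simple eigenvalue with unit eigenvector these satisfy a Hellmann-Feynman-type identity:

\[
  \Phi(\theta) \;=\; \tfrac{1}{2}\,\log\det\bigl(I + \tilde H(\theta)\bigr)
  \;=\; \tfrac{1}{2}\sum_i \log\bigl(1 + \lambda_i(\theta)\bigr),
\]
\[
  \frac{\partial \Phi}{\partial \theta}
  \;=\; \tfrac{1}{2}\sum_i \frac{1}{1 + \lambda_i(\theta)}\,
        \frac{\partial \lambda_i}{\partial \theta},
  \qquad
  \frac{\partial \lambda_i}{\partial \theta}
  \;=\; v_i^{\top}\,\frac{\partial \tilde H(\theta)}{\partial \theta}\, v_i,
  \quad v_i^{\top} v_i = 1.
\]

In an adjoint-based setting such as the one described in the abstract, only the action of \(\partial \tilde H/\partial \theta\) on the eigenvectors is presumably required, which is consistent with the exact and scalable evaluation of the sensitivities mentioned above.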