Open Access
A Bayesian Conjugate Gradient Method (with Discussion)
Jon Cockayne, Chris J. Oates, Ilse C.F. Ipsen, Mark Girolami
Bayesian Anal. 14(3): 937-1012 (September 2019). DOI: 10.1214/19-BA1145

Abstract

A fundamental task in numerical computation is the solution of large linear systems. The conjugate gradient method is an iterative method which offers rapid convergence to the solution, particularly when an effective preconditioner is employed. However, for more challenging systems a substantial error can be present even after many iterations have been performed. The estimates obtained in this case are of little value unless further information can be provided about, for example, the magnitude of the error. In this paper we propose a novel statistical model for this error, set in a Bayesian framework. Our approach is a strict generalisation of the conjugate gradient method, which is recovered as the posterior mean for a particular choice of prior. The estimates obtained are analysed with Krylov subspace methods and a contraction result for the posterior is presented. The method is then analysed in a simulation study as well as being applied to a challenging problem in medical imaging.
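The abstract's central claim, that the conjugate gradient method is recovered as the posterior mean under a particular prior, can be made concrete with a short sketch. The following NumPy illustration is an assumption-laden reading of the abstract, not the authors' reference implementation: the function name `bayescg`, the sequential rank-one conditioning, and the single-step conjugation recurrence (mirroring classical CG's short recurrence) are all choices made here for illustration. The prior is x ~ N(x0, Σ0), each iteration conditions on one projection sᵀb of the right-hand side, and search directions are kept conjugate in the inner product induced by A Σ0 Aᵀ.

```python
import numpy as np

def bayescg(A, b, x0, Sigma0, max_iter=None, tol=1e-10):
    """Minimal sketch of a Bayesian conjugate gradient iteration.

    Prior belief: x ~ N(x0, Sigma0). Each step conditions on one
    projection s^T b of the right-hand side, with search directions
    kept conjugate in the inner product induced by A Sigma0 A^T.
    Returns the posterior mean and covariance on termination.
    """
    x, Sigma = x0.astype(float), Sigma0.astype(float)
    r = b - A @ x                        # initial residual
    s = r.copy()                         # first search direction
    max_iter = max_iter or len(b)
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        w = Sigma0 @ (A.T @ s)           # Sigma0 A^T s
        E = s @ (A @ w)                  # s^T (A Sigma0 A^T) s
        x = x + (s @ r) / E * w          # rank-one posterior mean update
        Sigma = Sigma - np.outer(w, w) / E   # rank-one covariance downdate
        r = b - A @ x
        # Conjugate the next direction against the current one only,
        # mirroring the short recurrence of classical CG (an assumption
        # of this sketch; see the paper for the exact construction).
        beta = ((A @ w) @ r) / E
        s = r - beta * s
    return x, Sigma
```

The "strict generalisation" claim is visible in the updates: if A is symmetric positive definite and one takes Σ0 = A⁻¹, then w = Σ0 Aᵀs = s and the mean and direction updates reduce exactly to classical CG, while other priors yield different posterior means together with a covariance quantifying the remaining error. A small smoke test under these assumptions:

```python
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)              # SPD test matrix
b = rng.standard_normal(5)
mean, cov = bayescg(A, b, np.zeros(5), np.eye(5))
print(np.linalg.norm(A @ mean - b))      # residual of the posterior mean
```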

Note

BA Webinar: https://youtu.be/RDTOaPtxAXU.

Citation


Jon Cockayne, Chris J. Oates, Ilse C.F. Ipsen, Mark Girolami. "A Bayesian Conjugate Gradient Method (with Discussion)." Bayesian Anal. 14(3): 937-1012, September 2019. https://doi.org/10.1214/19-BA1145

Information

Published: September 2019
First available in Project Euclid: 18 May 2019

zbMATH: 07118905
MathSciNet: MR4012393
Digital Object Identifier: 10.1214/19-BA1145

Subjects:
Primary: 62C10, 62F15, 65F10

Keywords: Krylov subspaces, linear systems, probabilistic numerics
