Title: Unbiased estimation of the gradient of the log-likelihood in inverse problems
Authors: Jasra, Ajay
Law, Kody J. H.
Lu, Deng
Keywords: Inverse problems
Parameter estimation
Stochastic gradient
Unbiased estimation
Issue Date: 3-Mar-2021
Publisher: Springer
Citation: Jasra, Ajay, Law, Kody J. H., Lu, Deng (2021-03-03). Unbiased estimation of the gradient of the log-likelihood in inverse problems. Statistics and Computing 31 (3) : 21. ScholarBank@NUS Repository.
Rights: Attribution 4.0 International
Abstract: We consider the problem of estimating a parameter θ ∈ ℝ^d associated with a Bayesian inverse problem. Typically one must resort to a numerical approximation of the gradient of the log-likelihood and also adopt a discretization of the problem in space and/or time. We develop a new methodology to unbiasedly estimate the gradient of the log-likelihood with respect to the unknown parameter, i.e. the expectation of the estimate has no discretization bias. Such a property is not only useful for estimation in terms of the original stochastic model of interest, but can be used in stochastic gradient algorithms which benefit from unbiased estimates. Under appropriate assumptions, we prove that our estimator is not only unbiased but of finite variance. In addition, when implemented on a single processor, we show that the cost to achieve a given level of error is comparable to multilevel Monte Carlo methods, both practically and theoretically. However, the new algorithm is highly amenable to parallel computation. © 2021, The Author(s).
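The key idea in the abstract, removing discretization bias by randomizing over discretization levels, can be illustrated with a generic single-term randomized estimator in the style of Rhee and Glynn. This is a hedged sketch, not the paper's actual algorithm: the toy discretized quantity and the geometric level distribution below are illustrative assumptions, standing in for the discretized log-likelihood gradient of the paper.

```python
import random

def discretized_quantity(level):
    # Toy stand-in for a level-l discretization of some quantity of
    # interest; its bias vanishes as level -> infinity (the limit is 1.0).
    return 1.0 - 2.0 ** (-level)

def unbiased_single_term(r=0.75):
    """One sample of a single-term randomized (Rhee-Glynn style) estimator.

    Draw a random level L with P(L = l) = (1 - r) * r**l, then return the
    level-L increment divided by its probability.  The expectation
    telescopes to the exact (undiscretized) value, so the estimator has
    no discretization bias; the variance is finite provided the increments
    decay fast enough relative to the level probabilities (here r > 1/4).
    """
    # Sample L from a geometric distribution on {0, 1, 2, ...}.
    level, u = 0, random.random()
    cum = 1.0 - r
    while u > cum:
        level += 1
        cum += (1.0 - r) * r ** level
    # Increment between consecutive discretization levels.
    increment = discretized_quantity(level)
    if level > 0:
        increment -= discretized_quantity(level - 1)
    prob = (1.0 - r) * r ** level
    return increment / prob

# Averaging independent copies converges to the bias-free limit (1.0 here),
# even though every individual discretization level is biased.
random.seed(0)
mean = sum(unbiased_single_term() for _ in range(100_000)) / 100_000
```

Because each sample is independent and individually unbiased, the average can be computed embarrassingly in parallel, which is the parallelism advantage over multilevel Monte Carlo noted in the abstract.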
Source Title: Statistics and Computing
ISSN: 0960-3174
DOI: 10.1007/s11222-021-09994-6
Appears in Collections:Students Publications

Files in This Item:
10_1007_s11222-021-09994-6.pdf (1.27 MB, Adobe PDF)