Please use this identifier to cite or link to this item: https://doi.org/10.1007/s11009-013-9357-4
DC Field	Value
dc.title	Gradient Free Parameter Estimation for Hidden Markov Models with Intractable Likelihoods
dc.contributor.author	Ehrlich, E.
dc.contributor.author	Jasra, A.
dc.contributor.author	Kantas, N.
dc.date.accessioned	2016-06-02T10:30:15Z
dc.date.available	2016-06-02T10:30:15Z
dc.date.issued	2013
dc.identifier.citation	Ehrlich, E., Jasra, A., Kantas, N. (2013). Gradient Free Parameter Estimation for Hidden Markov Models with Intractable Likelihoods. Methodology and Computing in Applied Probability: 1-35. ScholarBank@NUS Repository. https://doi.org/10.1007/s11009-013-9357-4
dc.identifier.issn	1387-5841
dc.identifier.uri	http://scholarbank.nus.edu.sg/handle/10635/125053
dc.description.abstract	In this article we focus on maximum likelihood estimation (MLE) for the static model parameters of hidden Markov models (HMMs). We consider the case where one cannot, or does not want to, compute the conditional likelihood density of the observation given the hidden state, because of increased computational complexity or analytical intractability. Instead, we assume that one may obtain samples from this conditional likelihood and hence use approximate Bayesian computation (ABC) approximations of the original HMM. Although these ABC approximations induce a bias, it can be controlled to arbitrary precision via a positive parameter ε, with the bias decreasing as ε decreases. We first establish that, when using an ABC approximation of the HMM for a fixed batch of data, the bias of the resulting log-marginal likelihood and its gradient is no worse than O(nε), where n is the total number of data-points. Therefore, when using gradient methods to perform MLE for the ABC approximation of the HMM, one may expect parameter estimates of reasonable accuracy. To compute an estimate of the unknown and fixed model parameters, we propose a gradient approach based on simultaneous perturbation stochastic approximation (SPSA) and Sequential Monte Carlo (SMC) for the ABC approximation of the HMM. The performance of this method is illustrated using two numerical examples. © 2013 Springer Science+Business Media New York.
dc.description.uri	http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1007/s11009-013-9357-4
dc.source	Scopus
dc.subject	Approximate Bayesian computation
dc.subject	Hidden Markov models
dc.subject	Parameter estimation
dc.subject	Sequential Monte Carlo
dc.type	Article
dc.contributor.department	STATISTICS & APPLIED PROBABILITY
dc.description.doi	10.1007/s11009-013-9357-4
dc.description.sourcetitle	Methodology and Computing in Applied Probability
dc.description.page	1-35
dc.identifier.isiut	000354094300003
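
The abstract records the method only at a high level: an ABC approximation of the HMM is filtered with SMC to obtain a noisy, biased estimate of the log-marginal likelihood, and SPSA uses pairs of such estimates at randomly perturbed parameter values to drive a gradient-free ascent. The sketch below illustrates that pipeline on a toy scalar HMM; it is a minimal illustration under assumed settings (the model, the uniform ABC kernel, the gain sequences and all names are illustrative choices, not the authors' code or the paper's exact algorithm).

# A minimal sketch, assuming a toy scalar HMM; all model choices, the kernel,
# the gain sequences and the names here are illustrative, not the paper's code.
#   x_t = theta * x_{t-1} + v_t,  v_t ~ N(0, 1)   (hidden state)
#   y_t = x_t + w_t,              w_t ~ N(0, 1)   (observation)
# The observation density is treated as if it were intractable: the ABC filter
# only samples pseudo-observations u ~ g(.|x) and keeps particles whose u falls
# within eps of the recorded y_t (a uniform ABC kernel).

import numpy as np

rng = np.random.default_rng(0)


def simulate(theta, n):
    """Simulate n observations from the toy HMM."""
    x, ys = 0.0, []
    for _ in range(n):
        x = theta * x + rng.normal()
        ys.append(x + rng.normal())
    return np.array(ys)


def abc_log_likelihood(theta, ys, eps=0.5, n_particles=500):
    """ABC-SMC (bootstrap particle filter) estimate of the log-marginal
    likelihood of the ABC approximation with tolerance eps."""
    x = np.zeros(n_particles)
    log_like = 0.0
    for y in ys:
        x = theta * x + rng.normal(size=n_particles)   # propagate particles
        u = x + rng.normal(size=n_particles)           # sample pseudo-observations
        w = (np.abs(u - y) < eps).astype(float)        # uniform ABC kernel weights
        if w.sum() == 0.0:                             # degenerate step: all rejected
            return -np.inf
        log_like += np.log(w.mean())
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = x[idx]                                     # multinomial resampling
    return log_like


def spsa_mle(ys, theta0=0.2, n_iters=200, a=0.05, c=0.1):
    """Gradient-free MLE: SPSA builds a finite-difference gradient estimate
    from two noisy ABC-SMC likelihood evaluations per iteration."""
    theta = theta0
    for k in range(1, n_iters + 1):
        ak, ck = a / k**0.602, c / k**0.101            # standard SPSA gain decay
        delta = rng.choice([-1.0, 1.0])                # Rademacher perturbation
        g = (abc_log_likelihood(theta + ck * delta, ys)
             - abc_log_likelihood(theta - ck * delta, ys)) / (2.0 * ck * delta)
        if not np.isfinite(g):                         # skip degenerate ABC runs
            continue
        theta = float(np.clip(theta + ak * g, -0.99, 0.99))  # ascent step, kept in a stable range
    return theta


if __name__ == "__main__":
    ys = simulate(theta=0.8, n=100)
    print("SPSA / ABC-SMC estimate of theta:", spsa_mle(ys))

In this sketch, reducing eps lowers the ABC bias (consistent with the O(nε) bound quoted in the abstract) but increases the variance of the particle weights, so in practice the tolerance, the particle count and the SPSA gains would be tuned jointly.
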
Appears in Collections: Staff Publications
