Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/171708
DC Field: Value
dc.title: Efficient Statistics for Sparse Graphical Models from Truncated Samples
dc.contributor.author: ARNAB BHATTACHARYYA
dc.contributor.author: Desai, Rathin
dc.contributor.author: Nagarajan, Sai Ganesh
dc.contributor.author: Panageas, Ioannis
dc.date.accessioned: 2020-07-27T04:06:48Z
dc.date.available: 2020-07-27T04:06:48Z
dc.date.issued: 2020
dc.identifier.citation: ARNAB BHATTACHARYYA, Desai, Rathin, Nagarajan, Sai Ganesh, Panageas, Ioannis (2020). Efficient Statistics for Sparse Graphical Models from Truncated Samples. CoRR abs/2006.09735. ScholarBank@NUS Repository.
dc.identifier.uri: https://scholarbank.nus.edu.sg/handle/10635/171708
dc.description.abstract: In this paper, we study high-dimensional estimation from truncated samples. We focus on two fundamental and classical problems: (i) inference of sparse Gaussian graphical models and (ii) support recovery of sparse linear models. (i) For Gaussian graphical models, suppose $d$-dimensional samples ${\bf x}$ are generated from a Gaussian $\mathcal{N}(\mu,\Sigma)$ and observed only if they belong to a subset $S \subseteq \mathbb{R}^d$. We show that $\mu$ and $\Sigma$ can be estimated with error $\epsilon$ in the Frobenius norm, using $\tilde{O}\left(\frac{\textrm{nz}(\Sigma^{-1})}{\epsilon^2}\right)$ samples from a truncated $\mathcal{N}(\mu,\Sigma)$ and having access to a membership oracle for $S$. The set $S$ is assumed to have non-trivial measure under the unknown distribution but is otherwise arbitrary. (ii) For sparse linear regression, suppose samples $({\bf x},y)$ are generated where $y = {\bf x}^\top \Omega^* + \mathcal{N}(0,1)$ and $({\bf x}, y)$ is seen only if $y$ belongs to a truncation set $S \subseteq \mathbb{R}$. We consider the case that $\Omega^*$ is sparse with a support set of size $k$. Our main result is to establish precise conditions on the problem dimension $d$, the support size $k$, the number of observations $n$, and properties of the samples and the truncation that are sufficient to recover the support of $\Omega^*$. Specifically, we show that under some mild assumptions, only $O(k^2 \log d)$ samples are needed to estimate $\Omega^*$ in the $\ell_\infty$-norm up to a bounded error. For both problems, our estimator minimizes the sum of the finite population negative log-likelihood function and an $\ell_1$-regularization term.
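As a concrete illustration of the estimator described in the abstract (penalized maximum likelihood computed with only a membership oracle for the truncation set $S$), the following is a minimal sketch for problem (i), not the authors' exact algorithm: projected stochastic gradient descent on an $\ell_1$-penalized truncated-Gaussian negative log-likelihood. The oracle oracle_S, the step size eta, the penalty weight lam, the eigenvalue floor, and the choice to penalize only the off-diagonal precision entries are all illustrative assumptions; the gradient of the log-partition function over $S$ is estimated by rejection sampling from the current Gaussian estimate.

import numpy as np

def soft_threshold(A, t):
    # Entrywise soft-thresholding: the proximal map of t * ||.||_1.
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def project_precision(T, eig_min=1e-3):
    # Symmetrize and floor the eigenvalues so T remains a valid precision matrix.
    T = (T + T.T) / 2.0
    w, V = np.linalg.eigh(T)
    return (V * np.maximum(w, eig_min)) @ V.T

def sample_truncated(mu, Sigma, oracle_S, rng, max_tries=10000):
    # Rejection sampling: draw z ~ N(mu, Sigma) until the membership oracle accepts.
    # Assumes S has non-trivial mass under the current estimate, as in the abstract.
    for _ in range(max_tries):
        z = rng.multivariate_normal(mu, Sigma)
        if oracle_S(z):
            return z
    raise RuntimeError("truncation set has negligible mass under the current estimate")

def sparse_truncated_gaussian_mle(X, oracle_S, lam=0.1, eta=1e-2, n_steps=5000, seed=0):
    # SGD on the l1-penalized truncated negative log-likelihood in the natural
    # parameters (v, T) = (Sigma^{-1} mu, Sigma^{-1}).  X holds the observed
    # (already truncated) samples, one per row.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    T = np.eye(d)          # precision-matrix estimate
    v = np.zeros(d)        # estimate of Sigma^{-1} mu
    for _ in range(n_steps):
        x = X[rng.integers(n)]
        Sigma_t = np.linalg.inv(T)
        mu_t = Sigma_t @ v
        z = sample_truncated(mu_t, Sigma_t, oracle_S, rng)
        # Stochastic estimates of the truncated-NLL gradients:
        #   d/dv = E_S[z] - x,   d/dT = (x x^T - E_S[z z^T]) / 2
        v -= eta * (z - x)
        T -= eta * 0.5 * (np.outer(x, x) - np.outer(z, z))
        # Proximal step for the l1 penalty on the off-diagonal precision entries,
        # then project back to a symmetric positive-definite matrix.
        off = soft_threshold(T - np.diag(np.diag(T)), eta * lam)
        T = project_precision(np.diag(np.diag(T)) + off)
    Sigma_hat = np.linalg.inv(T)
    return Sigma_hat @ v, Sigma_hat    # (mu_hat, Sigma_hat)

A caller would pass the truncated samples as an (n, d) array together with a Boolean oracle, e.g. oracle_S = lambda z: z.sum() > 0, and could read an estimated graph off the non-negligible off-diagonal entries of np.linalg.inv(Sigma_hat). The same penalized-likelihood template, with the Gaussian replaced by the regression model of problem (ii) and the penalty placed on $\Omega$, would correspond to the sparse-regression result; the paper's actual guarantees rest on assumptions and parameter choices not reproduced here.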
dc.source: Elements
dc.subject: stat.ML
dc.subject: cs.DS
dc.subject: cs.LG
dc.subject: math.ST
dc.subject: stat.CO
dc.subject: stat.TH
dc.type: Article
dc.date.updated: 2020-07-23T09:20:43Z
dc.contributor.department: DEPARTMENT OF COMPUTER SCIENCE
dc.description.sourcetitle: CoRR
dc.description.volume: abs/2006.09735
dc.published.state: Published
Appears in Collections: Staff Publications, Elements

Files in This Item:
File: 2006.09735v1.pdf
Size: 918.03 kB
Format: Adobe PDF
Access Settings: OPEN
Version: Pre-print
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.