Please use this identifier to cite or link to this item: https://doi.org/10.1145/3406325.3451066
Title: Near-optimal learning of tree-structured distributions by Chow-Liu
Authors: Bhattacharyya, Arnab 
Gayen, Sutanu 
Price, Eric
Vinodchandran, NV 
Keywords: cs.DS
cs.IT
cs.LG
math.IT
Issue Date: 15-Jun-2021
Publisher: ACM
Citation: Bhattacharyya, Arnab, Gayen, Sutanu, Price, Eric, Vinodchandran, NV (2021-06-15). Near-optimal learning of tree-structured distributions by Chow-Liu. Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing. ScholarBank@NUS Repository. https://doi.org/10.1145/3406325.3451066
Abstract: We provide finite-sample guarantees for the classical Chow-Liu algorithm (IEEE Trans. Inform. Theory, 1968) to learn a tree-structured graphical model of a distribution. For a distribution $P$ on $\Sigma^n$ and a tree $T$ on $n$ nodes, we say $T$ is an $\varepsilon$-approximate tree for $P$ if there is a $T$-structured distribution $Q$ such that $D(P\;||\;Q)$ is at most $\varepsilon$ more than the smallest KL divergence achievable by any tree-structured distribution for $P$. We show that if $P$ itself is tree-structured, then the Chow-Liu algorithm with the plug-in estimator for mutual information with $\widetilde{O}(|\Sigma|^3 n\varepsilon^{-1})$ i.i.d. samples outputs an $\varepsilon$-approximate tree for $P$ with constant probability. In contrast, for a general $P$ (which may not be tree-structured), $\Omega(n^2\varepsilon^{-2})$ samples are necessary to find an $\varepsilon$-approximate tree. Our upper bound is based on a new conditional independence tester that addresses an open problem posed by Canonne, Diakonikolas, Kane, and Stewart (STOC, 2018): we prove that for three random variables $X,Y,Z$ each over $\Sigma$, testing whether $I(X; Y \mid Z)$ is $0$ or $\geq \varepsilon$ is possible with $\widetilde{O}(|\Sigma|^3/\varepsilon)$ samples. Finally, we show that for a specific tree $T$, with $\widetilde{O}(|\Sigma|^2 n\varepsilon^{-1})$ samples from a distribution $P$ over $\Sigma^n$, one can efficiently learn the closest $T$-structured distribution in KL divergence by applying the add-1 estimator at each node.
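To make the procedure described in the abstract concrete, the following is a minimal illustrative sketch (not the authors' code) of the Chow-Liu pipeline: pairwise mutual information is estimated with the plug-in estimator, a maximum-weight spanning tree is extracted, and a tree-structured distribution is then fit by estimating the root marginal and per-edge conditionals with add-1 (Laplace) smoothing, as in the paper's final result. Function names such as chow_liu_tree and fit_tree_distribution are hypothetical, and the sketch assumes samples arrive as an m-by-n integer array over an alphabet {0, ..., k-1}.

```python
# Illustrative sketch of the Chow-Liu pipeline discussed in the abstract.
# Not the authors' implementation; function names are hypothetical.
import numpy as np
import networkx as nx

def plug_in_mutual_information(x, y, k):
    """Plug-in (empirical) estimate of I(X; Y) in nats for samples x, y over {0, ..., k-1}."""
    m = len(x)
    joint = np.zeros((k, k))
    np.add.at(joint, (x, y), 1.0)   # empirical joint distribution
    joint /= m
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    mi = 0.0
    for a in range(k):
        for b in range(k):
            if joint[a, b] > 0:
                mi += joint[a, b] * np.log(joint[a, b] / (px[a] * py[b]))
    return mi

def chow_liu_tree(samples, k):
    """Return a maximum-weight spanning tree over the n variables,
    with edge weights given by plug-in pairwise mutual information."""
    m, n = samples.shape
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            w = plug_in_mutual_information(samples[:, i], samples[:, j], k)
            G.add_edge(i, j, weight=w)
    return nx.maximum_spanning_tree(G, weight="weight")

def fit_tree_distribution(samples, tree, k, root=0):
    """Fit a T-structured distribution: root marginal and per-edge conditionals,
    each estimated with add-1 (Laplace) smoothing."""
    # Root marginal with add-1 smoothing.
    root_counts = np.bincount(samples[:, root], minlength=k) + 1.0
    root_marginal = root_counts / root_counts.sum()
    # Orient the tree away from the root and estimate P(child | parent) per edge.
    conditionals = {}
    for parent, child in nx.bfs_edges(tree, source=root):
        counts = np.ones((k, k))  # add-1 prior on each (parent, child) cell
        np.add.at(counts, (samples[:, parent], samples[:, child]), 1.0)
        conditionals[(parent, child)] = counts / counts.sum(axis=1, keepdims=True)
    return root_marginal, conditionals

# Example usage on synthetic binary data (k = 2, n = 4 variables).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    samples = rng.integers(0, 2, size=(1000, 4))
    tree = chow_liu_tree(samples, k=2)
    root_marginal, conditionals = fit_tree_distribution(samples, tree, k=2)
    print("Learned tree edges:", sorted(tree.edges()))
```

The sketch uses the plain plug-in estimator for every pairwise mutual information; the paper's sample-complexity guarantees concern exactly this estimator when $P$ is tree-structured, while the conditional independence tester for $I(X; Y \mid Z)$ mentioned in the abstract is a separate ingredient not reproduced here.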
Source Title: Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing
URI: https://scholarbank.nus.edu.sg/handle/10635/192525
DOI: 10.1145/3406325.3451066
Appears in Collections: Staff Publications, Elements

Files in This Item:
File: 2011.04144v1.pdf
Size: 397.06 kB
Format: Adobe PDF
Access Settings: OPEN
Version: Post-print
