Please use this identifier to cite or link to this item:
https://doi.org/10.1109/ICDMW.2010.64
DC Field | Value
---|---
dc.title | Robust low-rank subspace segmentation with semidefinite guarantees
dc.contributor.author | Ni, Y.
dc.contributor.author | Sun, J.
dc.contributor.author | Yuan, X.
dc.contributor.author | Yan, S.
dc.contributor.author | Cheong, L.-F.
dc.date.accessioned | 2013-07-23T09:30:44Z
dc.date.available | 2013-07-23T09:30:44Z
dc.date.issued | 2010
dc.identifier.citation | Ni, Y., Sun, J., Yuan, X., Yan, S., Cheong, L.-F. (2010). Robust low-rank subspace segmentation with semidefinite guarantees. Proceedings - IEEE International Conference on Data Mining, ICDM : 1179-1188. ScholarBank@NUS Repository. https://doi.org/10.1109/ICDMW.2010.64
dc.identifier.isbn | 9780769542577
dc.identifier.issn | 1550-4786
dc.identifier.uri | http://scholarbank.nus.edu.sg/handle/10635/43314
dc.description.abstract | Recently, a line of research has proposed to employ Spectral Clustering (SC) to segment (group) high-dimensional structured data, such as data (approximately) lying on subspaces or low-dimensional manifolds. By learning the affinity matrix in the form of sparse reconstruction, techniques proposed in this vein often considerably boost performance in subspace settings where traditional SC can fail. Despite this success, fundamental problems remain unsolved: the spectral properties of the learned affinity matrix cannot be gauged in advance, and an awkward symmetrization step is often needed to post-process the affinity matrix for SC input. We therefore advocate enforcing the symmetric positive semidefinite constraint explicitly during learning (Low-Rank Representation with Positive Semidefinite constraint, or LRR-PSD), and show that it can in fact be solved efficiently by an elegant scheme, rather than by general-purpose SDP solvers, which usually scale poorly. We provide rigorous mathematical derivations showing that, in its canonical form, LRR-PSD is equivalent to the recently proposed Low-Rank Representation (LRR) scheme [1], thereby offering theoretical and practical insights into both LRR-PSD and LRR and inviting future research. The computational cost of our proposal is at most comparable to that of LRR, if not less. We validate our theoretical analysis and optimization scheme by experiments on both synthetic and real data sets. © 2010 IEEE.
dc.description.uri | http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1109/ICDMW.2010.64
dc.source | Scopus
dc.subject | Affinity matrix learning
dc.subject | Eigenvalue thresholding
dc.subject | Rank minimization
dc.subject | Robust estimation
dc.subject | Spectral clustering
dc.type | Conference Paper
dc.contributor.department | COMPUTATIONAL SCIENCE
dc.contributor.department | INTERACTIVE & DIGITAL MEDIA INSTITUTE
dc.contributor.department | ELECTRICAL & COMPUTER ENGINEERING
dc.description.doi | 10.1109/ICDMW.2010.64
dc.description.sourcetitle | Proceedings - IEEE International Conference on Data Mining, ICDM
dc.description.page | 1179-1188
dc.identifier.isiut | NOT_IN_WOS
Appears in Collections: | Staff Publications |
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.