Title: An efficient sparse metric learning in high-dimensional space via l1-penalized log-determinant regularization
Authors: Qi, G.-J.
Tang, J. 
Zha, Z.-J. 
Chua, T.-S. 
Zhang, H.-J.
Issue Date: 2009
Citation: Qi, G.-J., Tang, J., Zha, Z.-J., Chua, T.-S., Zhang, H.-J. (2009). An efficient sparse metric learning in high-dimensional space via l1-penalized log-determinant regularization. ACM International Conference Proceeding Series 382. ScholarBank@NUS Repository.
Abstract: This paper proposes an efficient sparse metric learning algorithm in high-dimensional space via l1-penalized log-determinant regularization. Compared with most existing distance metric learning algorithms, the proposed algorithm exploits the sparse nature underlying the intrinsic high-dimensional feature space. This sparsity prior on the learned distance metric serves to regularize the complexity of the distance model, especially in the "small number of examples, high dimension d" setting. Theoretically, by analogy to the covariance estimation problem, we show that the proposed distance learning algorithm is consistent at rate O(√(m² log d / n)) with the target distance matrix, which has at most m nonzeros per row. Moreover, from the implementation perspective, this l1-penalized log-determinant formulation can be optimized efficiently in a block coordinate descent fashion, which is much faster than the standard semi-definite programming widely adopted in many other advanced distance learning algorithms. We compare this algorithm with other state-of-the-art methods on various datasets and obtain competitive results. Copyright 2009 by the author(s)/owner(s).
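The abstract does not reproduce the optimization details. As an illustrative sketch only, an l1-penalized log-determinant objective of the form min_M tr(SM) − log det M + λ‖M‖₁ (analogous to sparse inverse covariance estimation) can be minimized with a simple proximal-gradient loop. The function name, the input matrix S, the step size, and the eigenvalue projection below are assumptions made for this sketch; they are not the authors' block coordinate descent algorithm.

```python
import numpy as np

def sparse_metric_sketch(S, lam=0.1, step=0.05, iters=200, eps=1e-6):
    """Illustrative proximal-gradient sketch (NOT the paper's method):
    minimize tr(S @ M) - log det M + lam * ||M||_1 over positive-definite M.
    S is a d x d symmetric matrix summarizing pairwise constraints
    (here, simply a sample covariance for demonstration)."""
    d = S.shape[0]
    M = np.eye(d)  # start from the Euclidean metric
    for _ in range(iters):
        # gradient of the smooth part tr(SM) - log det M
        grad = S - np.linalg.inv(M)
        M = M - step * grad
        # l1 proximal step: soft-threshold off-diagonal entries
        T = np.sign(M) * np.maximum(np.abs(M) - step * lam, 0.0)
        np.fill_diagonal(T, np.diag(M))
        M = T
        # heuristic projection back onto the positive-definite cone
        w, V = np.linalg.eigh(0.5 * (M + M.T))
        M = V @ np.diag(np.clip(w, eps, None)) @ V.T
    return M
```

For example, running the sketch on a sample covariance built from random data returns a symmetric positive-definite matrix whose off-diagonal entries shrink as λ grows, which is the sparsity behavior the paper's regularizer is designed to induce.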
Source Title: ACM International Conference Proceeding Series
ISBN: 9781605585161
Appears in Collections: Staff Publications
