Please use this identifier to cite or link to this item:
Title: A unified supervised codebook learning framework for classification
Authors: Lang, C.; Feng, S.; Cheng, B.; Ni, B.; Yan, S.
Issue Date: 1-Feb-2012
Citation: Lang, C., Feng, S., Cheng, B., Ni, B., Yan, S. (2012-02-01). A unified supervised codebook learning framework for classification. Neurocomputing 77 (1) : 281-288. ScholarBank@NUS Repository. https://doi.org/10.1016/j.neucom.2011.09.010
Abstract: In this paper, we investigate a discriminative visual dictionary learning method for boosting classification performance. Tied to the K-means clustering philosophy, popular algorithms for visual dictionary learning cannot guarantee good separation of the normalized visual word frequency vectors for samples from distinct classes or with large label distances. The rationale of this work is to harness sample label information for learning a visual dictionary in a supervised manner. This target is formulated as an objective function in which each sample element, e.g., a SIFT descriptor, is expected to be close to its assigned visual word, while at the same time the normalized aggregative visual word frequency vectors are expected to possess the property that kindred samples are close to each other and inhomogeneous samples are far apart. By relaxing the hard binary constraints to soft nonnegative ones, a multiplicative nonnegative update procedure is proposed to optimize the objective function, along with a theoretical convergence proof. Extensive experiments on classification tasks (i.e., natural scene and sports event classification) demonstrate the superiority of the proposed framework over conventional clustering-based visual dictionary learning. © 2011 Elsevier B.V.
Source Title: Neurocomputing
URI: http://scholarbank.nus.edu.sg/handle/10635/54839
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2011.09.010
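The unsupervised baseline the abstract critiques — a K-means visual dictionary with normalized visual-word frequency vectors — can be sketched as follows. This is a minimal NumPy illustration of that baseline, not the paper's supervised method; the function names and parameters are invented for this sketch:

```python
import numpy as np

def kmeans_codebook(descriptors, k, iters=20, seed=0):
    """Learn a visual dictionary (codebook) by plain K-means.
    Each row of `descriptors` is one local feature (e.g., a SIFT vector)."""
    rng = np.random.default_rng(seed)
    # initialize the k visual words with randomly chosen descriptors
    words = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign every descriptor to its nearest visual word (squared L2)
        d2 = ((descriptors[:, None, :] - words[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(1)
        # move each word to the mean of its assigned descriptors
        for j in range(k):
            members = descriptors[assign == j]
            if len(members):
                words[j] = members.mean(0)
    return words

def bow_histogram(descriptors, words):
    """Normalized visual-word frequency vector for one image's descriptors."""
    d2 = ((descriptors[:, None, :] - words[None, :, :]) ** 2).sum(-1)
    counts = np.bincount(d2.argmin(1), minlength=len(words)).astype(float)
    return counts / counts.sum()
```

Because this clustering step never sees class labels, two images from different classes can end up with nearly identical frequency vectors — the failure mode the paper's supervised objective is designed to penalize.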
|Appears in Collections:||Staff Publications|
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.