Please use this identifier to cite or link to this item:
https://doi.org/10.1145/3209978.3210035
DC Field | Value
---|---
dc.title | Fast Scalable Supervised Hashing
dc.contributor.author | Xin Luo
dc.contributor.author | Liqiang Nie
dc.contributor.author | Xiangnan He
dc.contributor.author | Ye Wu
dc.contributor.author | Zhen-Duo Chen
dc.contributor.author | Xin-Shun Xu
dc.date.accessioned | 2020-04-28T02:31:25Z
dc.date.available | 2020-04-28T02:31:25Z
dc.date.issued | 2018-07-12
dc.identifier.citation | Xin Luo, Liqiang Nie, Xiangnan He, Ye Wu, Zhen-Duo Chen, Xin-Shun Xu (2018-07-12). Fast Scalable Supervised Hashing. ACM SIGIR Conference on Information Retrieval 2018 : 735-744. ScholarBank@NUS Repository. https://doi.org/10.1145/3209978.3210035
dc.identifier.isbn | 9781450356572
dc.identifier.uri | https://scholarbank.nus.edu.sg/handle/10635/167301
dc.description.abstract | Despite significant progress in supervised hashing, existing methods share three common limitations. First, most pioneering methods learn hash codes discretely, bit by bit, making the learning procedure time-consuming. Second, to reduce the large complexity of the n × n pairwise similarity matrix, most methods apply sampling strategies during training, which inevitably results in information loss and suboptimal performance; some recent methods try to replace the large matrix with a smaller one, but its size is still large. Third, among the methods that leverage the pairwise similarity matrix, most encode only the semantic label information when learning the hash codes, failing to fully capture the characteristics of the data. In this paper, we present a novel supervised hashing method, called Fast Scalable Supervised Hashing (FSSH), which circumvents the use of the large similarity matrix by introducing a pre-computed intermediate term whose size is independent of the size of the training data. Moreover, FSSH learns the hash codes using not only the semantic information but also the features of the data. Extensive experiments on three widely used datasets demonstrate its superiority over several state-of-the-art methods in both accuracy and scalability. Our experiment code is available at: https://lcbwlx.wixsite.com/fssh. © 2018 ACM. (An illustrative sketch of the similarity-matrix trick appears after this record.)
dc.publisher | Association for Computing Machinery, Inc
dc.subject | Discrete optimization
dc.subject | Large-scale retrieval
dc.subject | Learning to hash
dc.subject | Supervised hashing
dc.type | Conference Paper
dc.contributor.department | DEPARTMENT OF COMPUTER SCIENCE
dc.description.doi | 10.1145/3209978.3210035
dc.description.sourcetitle | ACM SIGIR Conference on Information Retrieval 2018
dc.description.page | 735-744
dc.published.state | Published
dc.grant.id | R-252-300-002-490
dc.grant.fundingagency | Infocomm Media Development Authority
dc.grant.fundingagency | National Research Foundation
Appears in Collections: Elements Staff Publications
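The abstract's key idea, avoiding the n × n pairwise similarity matrix via a small pre-computed intermediate term, can be illustrated with simple matrix algebra. The NumPy sketch below shows the general pattern rather than the authors' exact FSSH formulation: assuming a label-derived similarity of the form S = 2 * Lb @ Lb.T - 1 (a common choice, and an assumption here, as are all variable names), any product S @ B factors through the small term Lb.T @ B, whose c × r size is independent of the number of training samples n.

```python
import numpy as np

# Illustrative sketch (not the authors' exact FSSH algorithm): with a
# label-derived similarity S = 2 * Lb @ Lb.T - 1 (Lb is the row-normalized
# n x c label matrix; this particular definition is an assumption), a
# product like S @ B never requires materializing the n x n matrix S.
# The intermediate term Lb.T @ B is only c x r, independent of n.

rng = np.random.default_rng(0)
n, c, r = 10_000, 20, 32                      # samples, classes, code bits

L = rng.integers(0, 2, size=(n, c)).astype(float)        # multi-label matrix
Lb = L / np.maximum(np.linalg.norm(L, axis=1, keepdims=True), 1e-12)
B = np.sign(rng.standard_normal((n, r)))                 # current hash codes

def similarity_times_codes(Lb, B):
    """Return S @ B for S = 2*Lb@Lb.T - 1 in O(n*c*r) time and O(c*r)
    extra memory, without forming the n x n similarity matrix."""
    inner = Lb.T @ B           # c x r intermediate term, independent of n
    ones_term = B.sum(axis=0)  # the all-ones matrix contributes column sums
    return 2.0 * (Lb @ inner) - ones_term

SB = similarity_times_codes(Lb, B)

# Sanity check against the naive computation on a small slice of S.
S_rows = 2.0 * Lb[:100] @ Lb.T - 1.0
assert np.allclose(S_rows @ B, SB[:100])
print(SB.shape)  # (10000, 32)
```

Computing S @ B naively costs O(n² r) time and O(n²) memory; the factored form costs O(n c r), which is the kind of saving that lets training scale linearly with n, as the abstract claims for FSSH.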
Files in This Item:
File | Description | Size | Format | Access Settings | Version
---|---|---|---|---|---
Fast Scalable Supervised Hashing.pdf | | 759.61 kB | Adobe PDF | OPEN | Published