Please use this identifier to cite or link to this item: https://doi.org/10.1145/3209978.3210035
DC Field	Value
dc.title	Fast Scalable Supervised Hashing
dc.contributor.author	Xin Luo
dc.contributor.author	Liqiang Nie
dc.contributor.author	Xiangnan He
dc.contributor.author	Ye Wu
dc.contributor.author	Zhen-Duo Chen
dc.contributor.author	Xin-Shun Xu
dc.date.accessioned	2020-04-28T02:31:25Z
dc.date.available	2020-04-28T02:31:25Z
dc.date.issued	2018-07-12
dc.identifier.citation	Xin Luo, Liqiang Nie, Xiangnan He, Ye Wu, Zhen-Duo Chen, Xin-Shun Xu (2018-07-12). Fast Scalable Supervised Hashing. ACM SIGIR Conference on Information Retrieval 2018 : 735-744. ScholarBank@NUS Repository. https://doi.org/10.1145/3209978.3210035
dc.identifier.isbn	9781450356572
dc.identifier.uri	https://scholarbank.nus.edu.sg/handle/10635/167301
dc.description.abstract	Despite significant progress in supervised hashing, existing methods share three common limitations. First, most pioneering methods learn hash codes discretely, bit by bit, making the learning procedure rather time-consuming. Second, to reduce the large complexity of the n-by-n pairwise similarity matrix, most methods apply sampling strategies during training, which inevitably results in information loss and suboptimal performance; some recent methods try to replace the large matrix with a smaller one, but its size is still large. Third, among the methods that leverage the pairwise similarity matrix, most encode only the semantic label information when learning the hash codes, failing to fully capture the characteristics of the data. In this paper, we present a novel supervised hashing method, called Fast Scalable Supervised Hashing (FSSH), which circumvents the use of the large similarity matrix by introducing a pre-computed intermediate term whose size is independent of the size of the training data. Moreover, FSSH learns the hash codes from not only the semantic information but also the features of the data. Extensive experiments on three widely used datasets demonstrate its superiority over several state-of-the-art methods in both accuracy and scalability. Our experimental code is available at: https://lcbwlx.wixsite.com/fssh. © 2018 ACM.
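The abstract's key claim is that FSSH avoids materializing the n-by-n pairwise similarity matrix by pre-computing an intermediate term whose size does not grow with the training set. The sketch below illustrates one common way such a decomposition can work when pairwise similarity is induced by class labels; it is a hedged illustration of the general technique, not the exact intermediate term defined in the paper, and all variable names are assumptions.

```python
import numpy as np

# Sketch of the scalability idea (assumption: similarity is induced by
# labels). With an n x c one-hot label matrix L, a common pairwise
# similarity definition is S = 2*L @ L.T - 1 (entry +1 for same-label
# pairs, -1 otherwise). A product such as X.T @ S @ B can then be
# computed through small intermediates (d x c, c x r) whose sizes are
# independent of n, so S never needs to be materialized.

rng = np.random.default_rng(0)
n, d, c, r = 1000, 32, 10, 16          # samples, features, classes, bits

X = rng.standard_normal((n, d))        # feature matrix
L = np.eye(c)[rng.integers(0, c, n)]   # one-hot labels, n x c
B = rng.choice([-1.0, 1.0], (n, r))    # binary codes, n x r

# Naive: materialize S explicitly -- O(n^2) memory and time.
S = 2 * L @ L.T - np.ones((n, n))
direct = X.T @ S @ B                   # d x r result

# Scalable: expand S inside the product.
# X.T @ S @ B = 2*(X.T @ L) @ (L.T @ B) - (X.T @ 1) @ (1.T @ B)
fast = 2 * (X.T @ L) @ (L.T @ B) - np.outer(X.sum(axis=0), B.sum(axis=0))

assert np.allclose(direct, fast)
```

The decomposed form costs O(n(d + c)r) time and O((d + c)r) extra memory instead of O(n²), which is what makes training scalable in n.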
dc.publisher	Association for Computing Machinery, Inc
dc.subject	Discrete optimization
dc.subject	Large-scale retrieval
dc.subject	Learning to hash
dc.subject	Supervised hashing
dc.type	Conference Paper
dc.contributor.department	DEPARTMENT OF COMPUTER SCIENCE
dc.description.doi	10.1145/3209978.3210035
dc.description.sourcetitle	ACM SIGIR Conference on Information Retrieval 2018
dc.description.page	735-744
dc.published.state	Published
dc.grant.id	R-252-300-002-490
dc.grant.fundingagency	Infocomm Media Development Authority
dc.grant.fundingagency	National Research Foundation
Appears in Collections:	Elements; Staff Publications
Files in This Item:
File	Size	Format	Access Settings	Version
Fast Scalable Supervised Hashing.pdf	759.61 kB	Adobe PDF	OPEN	Published
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.