Please use this identifier to cite or link to this item: https://doi.org/10.1145/3397271.3401151
DC Field                      Value
dc.title                      Tree-Augmented Cross-Modal Encoding for Complex-Query Video Retrieval
dc.contributor.author         Xun Yang
dc.contributor.author         Jianfeng Dong
dc.contributor.author         Yixin Cao
dc.contributor.author         Xun Wang
dc.contributor.author         Meng Wang
dc.contributor.author         Tat-Seng Chua
dc.date.accessioned           2020-10-21T04:03:54Z
dc.date.available             2020-10-21T04:03:54Z
dc.date.issued                2020-10-12
dc.identifier.citation        Xun Yang, Jianfeng Dong, Yixin Cao, Xun Wang, Meng Wang, Tat-Seng Chua (2020-10-12). Tree-Augmented Cross-Modal Encoding for Complex-Query Video Retrieval. SIGIR 2020 - Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval: 1339-1348. ScholarBank@NUS Repository. https://doi.org/10.1145/3397271.3401151
dc.identifier.isbn            9781450000000
dc.identifier.uri             https://scholarbank.nus.edu.sg/handle/10635/178642
dc.description.abstract       The rapid growth of user-generated videos on the Internet has intensified the need for text-based video retrieval systems. Traditional methods mainly follow the concept-based paradigm, which suits simple queries but is usually ineffective for complex queries that carry far richer semantics. Recently, the embedding-based paradigm has emerged as a popular alternative: it maps queries and videos into a shared embedding space in which semantically similar texts and videos lie close to each other. Despite its simplicity, this paradigm forgoes the syntactic structure of text queries, making it suboptimal for modeling complex queries. To facilitate video retrieval with complex queries, we propose a Tree-augmented Cross-modal Encoding method that jointly learns the linguistic structure of queries and the temporal representation of videos. Specifically, given a complex user query, we first recursively compose a latent semantic tree that structurally describes the text query. We then design a tree-augmented query encoder to derive a structure-aware query representation, and a temporal attentive video encoder to model the temporal characteristics of videos. Finally, both the query and the videos are mapped into a joint embedding space for matching and ranking. This approach yields a better understanding and modeling of complex queries, and thereby better video retrieval performance. Extensive experiments on large-scale video retrieval benchmark datasets demonstrate the effectiveness of our approach. © 2020 ACM. [An illustrative code sketch of this encode-and-match pipeline appears after the field list below.]
dc.publisher                  Association for Computing Machinery, Inc
dc.type                       Conference Paper
dc.contributor.department     DEPARTMENT OF COMPUTER SCIENCE
dc.description.doi            10.1145/3397271.3401151
dc.description.sourcetitle    SIGIR 2020 - Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval
dc.description.page           1339-1348
dc.grant.id                   R-252-300-002-490
dc.grant.fundingagency        Infocomm Media Development Authority
dc.grant.fundingagency        National Research Foundation
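
The abstract above reduces to a three-step pipeline: encode the query with a structure-aware (tree) encoder, encode the video with temporal attention, and rank by similarity in a joint embedding space. Below is a minimal, hedged PyTorch sketch of that pipeline. It is not the authors' released implementation: the class names, all dimensions, and the greedy pairwise composition used to build the tree are assumptions of this sketch, standing in for the paper's learned latent semantic tree.

```python
# Minimal sketch, assuming PyTorch. Illustrative only; not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalAttentiveVideoEncoder(nn.Module):
    """Pools per-frame features into one clip embedding via learned attention."""
    def __init__(self, frame_dim: int, embed_dim: int):
        super().__init__()
        self.attn = nn.Linear(frame_dim, 1)          # scores each frame
        self.proj = nn.Linear(frame_dim, embed_dim)  # maps to the joint space

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, n_frames, frame_dim)
        weights = F.softmax(self.attn(frames), dim=1)   # temporal attention
        pooled = (weights * frames).sum(dim=1)          # attention-weighted pool
        return F.normalize(self.proj(pooled), dim=-1)   # unit-norm embedding

class TreeQueryEncoder(nn.Module):
    """Greedily composes adjacent word vectors bottom-up into a single root
    vector -- a crude stand-in for the paper's latent semantic tree."""
    def __init__(self, word_dim: int, embed_dim: int):
        super().__init__()
        self.compose = nn.Linear(2 * word_dim, word_dim)
        self.score = nn.Linear(word_dim, 1)
        self.proj = nn.Linear(word_dim, embed_dim)

    def forward(self, words: torch.Tensor) -> torch.Tensor:
        # words: (n_words, word_dim), word vectors of one complex query
        nodes = list(words)
        while len(nodes) > 1:
            # compose every adjacent pair, then keep the best-scoring merge
            pairs = [torch.tanh(self.compose(torch.cat([a, b])))
                     for a, b in zip(nodes[:-1], nodes[1:])]
            best = max(range(len(pairs)),
                       key=lambda i: self.score(pairs[i]).item())
            nodes[best:best + 2] = [pairs[best]]
        return F.normalize(self.proj(nodes[0]), dim=-1)  # root of the tree

# Rank candidate clips for one query by similarity in the joint space
# (dummy features; real use would plug in CNN frame features and word vectors).
video_enc = TemporalAttentiveVideoEncoder(frame_dim=512, embed_dim=256)
query_enc = TreeQueryEncoder(word_dim=300, embed_dim=256)
videos = torch.randn(10, 20, 512)               # 10 clips x 20 frames x 512-d
query = torch.randn(7, 300)                     # 7 word vectors
scores = video_enc(videos) @ query_enc(query)   # (10,) cosine similarities
print(scores.argsort(descending=True))          # ranked clip indices
```

Because both encoders L2-normalize their outputs, the dot product at the end equals cosine similarity, which is what "matching and ranking in a joint embedding space" amounts to at inference time; in the paper the tree structure and both encoders are trained jointly, typically with a ranking loss over matched and mismatched query-video pairs.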
Appears in Collections: Elements; Staff Publications

Files in This Item:
File: Tree-Augmented Cross-Modal Encoding for Complex-Query Video Retrieval.pdf
Size: 2.83 MB
Format: Adobe PDF
Access Settings: OPEN
Version: None
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.