Please use this identifier to cite or link to this item:
https://doi.org/10.1145/3397271.3401151
DC Field | Value | Language
---|---|---
dc.title | Tree-Augmented Cross-Modal Encoding for Complex-Query Video Retrieval | |
dc.contributor.author | Xun Yang | |
dc.contributor.author | Jianfeng Dong | |
dc.contributor.author | Yixin Cao | |
dc.contributor.author | Xun Wang | |
dc.contributor.author | Meng Wang | |
dc.contributor.author | Tat-Seng Chua | |
dc.date.accessioned | 2020-10-21T04:03:54Z | |
dc.date.available | 2020-10-21T04:03:54Z | |
dc.date.issued | 2020-10-12 | |
dc.identifier.citation | Xun Yang, Jianfeng Dong, Yixin Cao, Xun Wang, Meng Wang, Tat-Seng Chua (2020-10-12). Tree-Augmented Cross-Modal Encoding for Complex-Query Video Retrieval. SIGIR 2020 - Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval : 1339 - 1348. ScholarBank@NUS Repository. https://doi.org/10.1145/3397271.3401151 | |
dc.identifier.isbn | 9781450000000 | |
dc.identifier.uri | https://scholarbank.nus.edu.sg/handle/10635/178642 | |
dc.description.abstract | The rapid growth of user-generated videos on the Internet has intensified the need for text-based video retrieval systems. Traditional methods mainly favor the concept-based paradigm for retrieval with simple queries, which is usually ineffective for complex queries that carry far richer semantics. Recently, the embedding-based paradigm has emerged as a popular approach. It aims to map queries and videos into a shared embedding space where semantically similar texts and videos lie close to each other. Despite its simplicity, it forgoes the exploitation of the syntactic structure of text queries, making it suboptimal for modeling complex queries. To facilitate video retrieval with complex queries, we propose a Tree-augmented Cross-modal Encoding method that jointly learns the linguistic structure of queries and the temporal representation of videos. Specifically, given a complex user query, we first recursively compose a latent semantic tree to structurally describe the text query. We then design a tree-augmented query encoder to derive a structure-aware query representation and a temporal attentive video encoder to model the temporal characteristics of videos. Finally, both the query and the videos are mapped into a joint embedding space for matching and ranking. This approach yields a better understanding and modeling of complex queries, thereby achieving better video retrieval performance. Extensive experiments on large-scale video retrieval benchmark datasets demonstrate the effectiveness of our approach. © 2020 ACM. | |
dc.publisher | Association for Computing Machinery, Inc | |
dc.type | Conference Paper | |
dc.contributor.department | DEPARTMENT OF COMPUTER SCIENCE | |
dc.description.doi | 10.1145/3397271.3401151 | |
dc.description.sourcetitle | SIGIR 2020 - Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval | |
dc.description.page | 1339 - 1348 | |
dc.grant.id | R-252-300-002-490 | |
dc.grant.fundingagency | Infocomm Media Development Authority | |
dc.grant.fundingagency | National Research Foundation | |
Appears in Collections: Elements Staff Publications
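
For readers skimming the abstract above, the following is a minimal, hypothetical sketch of the joint-embedding matching scheme it describes: queries and videos are projected into a shared space and ranked by cosine similarity. The class name `JointEmbeddingMatcher`, the feature dimensions, and the linear-projection and mean-pooling stand-ins are assumptions for illustration only; they are not the paper's tree-augmented query encoder or temporal attentive video encoder.

```python
# Hypothetical sketch of joint-embedding video retrieval (not the authors' code).
# Simple linear projections and frame mean-pooling stand in for the paper's
# tree-augmented query encoder and temporal attentive video encoder; only the
# shared-space matching and ranking step is illustrated.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbeddingMatcher(nn.Module):
    def __init__(self, query_dim=300, video_dim=2048, embed_dim=512):
        super().__init__()
        # Placeholder projections into the shared embedding space.
        self.query_proj = nn.Linear(query_dim, embed_dim)
        self.video_proj = nn.Linear(video_dim, embed_dim)

    def forward(self, query_feats, video_feats):
        # query_feats: (num_queries, query_dim), e.g. pooled word embeddings.
        # video_feats: (num_videos, num_frames, video_dim), per-frame features.
        q = F.normalize(self.query_proj(query_feats), dim=-1)
        # Mean-pool frames as a crude stand-in for temporal attention.
        v = F.normalize(self.video_proj(video_feats.mean(dim=1)), dim=-1)
        # Cosine-similarity matrix of shape (num_queries, num_videos).
        return q @ v.t()

# Toy usage: rank 5 candidate videos for 2 queries on random features.
matcher = JointEmbeddingMatcher()
queries = torch.randn(2, 300)
videos = torch.randn(5, 16, 2048)
scores = matcher(queries, videos)
ranking = scores.argsort(dim=1, descending=True)
print(ranking)  # per-query video indices, best match first
```

In the paper's actual pipeline, the query side would instead be encoded over the recursively composed latent semantic tree, and the video side would use temporal attention over frames before projection; the ranking step over the joint space is the same in spirit.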
Files in This Item:
File | Description | Size | Format | Access Settings | Version
---|---|---|---|---|---
Tree-Augmented Cross-Modal Encoding for Complex-Query Video Retrieval.pdf | | 2.83 MB | Adobe PDF | OPEN | None
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.