Please use this identifier to cite or link to this item: https://doi.org/10.1145/3209978.3210003
Title: Attentive Moment Retrieval in Videos
Authors: Meng Liu
Xiang Wang 
Liqiang Nie 
Xiangnan He 
Baoquan Chen
Tat-Seng Chua 
Keywords: Cross-modal retrieval
Moment localization
Temporal memory attention
Tensor fusion
Issue Date: 12-Jul-2018
Publisher: Association for Computing Machinery, Inc
Citation: Meng Liu, Xiang Wang, Liqiang Nie, Xiangnan He, Baoquan Chen, Tat-Seng Chua (2018-07-12). Attentive Moment Retrieval in Videos. ACM SIGIR Conference on Information Retrieval 2018 : 15-24. ScholarBank@NUS Repository. https://doi.org/10.1145/3209978.3210003
Abstract: In the past few years, language-based video retrieval has attracted considerable attention. However, its natural extension, localizing a specific moment within a video given a description query, is seldom explored. Although the two tasks look similar, the latter is more challenging for two main reasons: 1) The former only needs to judge whether the query occurs in a video and return the entire video, whereas the latter must determine which moment within a video matches the query and accurately return the start and end points of that moment. Because different moments in a video have varying durations and diverse spatial-temporal characteristics, uncovering the underlying moments is highly challenging. 2) As for the key component of relevance estimation, the former usually embeds the video and the query into a common space to compute a relevance score. The latter, however, concerns moment localization, where not only the features of a specific moment matter but also the context information of the moment contributes substantially. For example, the query may contain temporal constraint words, such as "first", which require temporal context to be comprehended properly. To address these issues, we develop an Attentive Cross-Modal Retrieval Network. In particular, we design a memory attention mechanism to emphasize the visual features mentioned in the query and simultaneously incorporate their context, yielding an augmented moment representation. Meanwhile, a cross-modal fusion sub-network learns both the intra-modality and inter-modality dynamics, which enhances the learning of the moment-query representation. We evaluate our method on two datasets: DiDeMo and TACoS. Extensive experiments show the effectiveness of our model compared to state-of-the-art methods. © 2018 ACM.
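
The abstract outlines two components: a temporal memory attention that re-weights a moment's context by its relevance to the query, and a cross-modal fusion that captures intra- and inter-modality dynamics. Below is a minimal PyTorch sketch of these ideas under stated assumptions; all layer names, dimensions, and the outer-product fusion variant are illustrative, not the authors' released implementation.

# Minimal sketch (not the authors' code) of the two components named in the
# abstract: (1) a temporal memory attention that re-weights a moment's context
# clips by their relevance to the query, and (2) a tensor-fusion layer that
# keeps unimodal (intra-modality) terms alongside bimodal (inter-modality)
# interactions via an outer product. Sizes and names are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalMemoryAttention(nn.Module):
    """Attend over context clip features conditioned on the query embedding."""

    def __init__(self, visual_dim: int, query_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)
        self.query_proj = nn.Linear(query_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, context: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        # context: (batch, num_clips, visual_dim); query: (batch, query_dim)
        keys = torch.tanh(self.visual_proj(context) + self.query_proj(query).unsqueeze(1))
        weights = F.softmax(self.score(keys), dim=1)      # (batch, num_clips, 1)
        attended = (weights * context).sum(dim=1)         # (batch, visual_dim)
        return attended


def tensor_fusion(moment: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
    """Outer-product fusion: append a constant 1 to each modality so the result
    retains the unimodal terms in addition to the bimodal interactions."""
    ones = moment.new_ones(moment.size(0), 1)
    m = torch.cat([moment, ones], dim=1)                  # (batch, dm + 1)
    q = torch.cat([query, ones], dim=1)                   # (batch, dq + 1)
    fused = torch.bmm(m.unsqueeze(2), q.unsqueeze(1))     # (batch, dm+1, dq+1)
    return fused.flatten(start_dim=1)                     # joint moment-query vector


# Example usage with hypothetical feature sizes:
#   attn = TemporalMemoryAttention(visual_dim=4096, query_dim=1024)
#   moment_ctx = attn(context_clips, query_embedding)    # attended context
#   joint = tensor_fusion(moment_ctx, query_embedding)   # fed to a relevance regressor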
Source Title: ACM SIGIR Conference on Information Retrieval 2018
URI: https://scholarbank.nus.edu.sg/handle/10635/167297
ISBN: 9781450356572
DOI: 10.1145/3209978.3210003
Appears in Collections: Elements
Staff Publications

Files in This Item:
Attentive Moment Retrieval in Videos.pdf (1.65 MB, Adobe PDF, Open Access, Published version)
