Please use this identifier to cite or link to this item: https://doi.org/10.1145/3209978.3210003
DC Field / Value
dc.title: Attentive Moment Retrieval in Videos
dc.contributor.author: Meng Liu
dc.contributor.author: Xiang Wang
dc.contributor.author: Liqiang Nie
dc.contributor.author: Xiangnan He
dc.contributor.author: Baoquan Chen
dc.contributor.author: Tat-Seng Chua
dc.date.accessioned: 2020-04-28T02:30:53Z
dc.date.available: 2020-04-28T02:30:53Z
dc.date.issued: 2018-07-12
dc.identifier.citation: Meng Liu, Xiang Wang, Liqiang Nie, Xiangnan He, Baoquan Chen, Tat-Seng Chua (2018-07-12). Attentive Moment Retrieval in Videos. ACM SIGIR Conference on Information Retrieval 2018: 15-24. ScholarBank@NUS Repository. https://doi.org/10.1145/3209978.3210003
dc.identifier.isbn: 9781450356572
dc.identifier.uri: https://scholarbank.nus.edu.sg/handle/10635/167297
dc.description.abstract: In the past few years, language-based video retrieval has attracted a lot of attention. However, as a natural extension, localizing a specific moment within a video given a description query has seldom been explored. Although these two tasks look similar, the latter is more challenging for two main reasons: 1) The former task only needs to judge whether the query occurs in a video and return the entire video, whereas the latter must judge which moment within a video matches the query and accurately return the start and end points of that moment. Because different moments in a video have varying durations and diverse spatial-temporal characteristics, uncovering the underlying moments is highly challenging. 2) Regarding the key component of relevance estimation, the former usually embeds a video and the query into a common space to compute the relevance score. The latter task, however, concerns moment localization, where not only the features of a specific moment matter but the context information of the moment also contributes substantially. For example, the query may contain temporal constraint words, such as "first", which require temporal context to be properly comprehended. To address these issues, we develop an Attentive Cross-Modal Retrieval Network. In particular, we design a memory attention mechanism to emphasize the visual features mentioned in the query and simultaneously incorporate their context. In this way, we obtain an augmented moment representation. Meanwhile, a cross-modal fusion sub-network learns both the intra-modality and inter-modality dynamics, which enhances the learning of the moment-query representation. We evaluate our method on two datasets: DiDeMo and TACoS. Extensive experiments show the effectiveness of our model compared to state-of-the-art methods. © 2018 ACM.
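The mechanisms named in the abstract and subject fields (query-guided temporal memory attention over a moment's context, followed by cross-modal fusion of moment and query features) can be illustrated with a minimal, hypothetical PyTorch sketch. All module and variable names below are assumptions for illustration only; this is not the authors' released implementation.

    # Hypothetical sketch: attention over temporal context + bilinear (tensor-style)
    # fusion for moment-query relevance scoring. Illustrative only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AttentiveMomentScorer(nn.Module):
        def __init__(self, vis_dim, txt_dim, hid_dim):
            super().__init__()
            self.vis_proj = nn.Linear(vis_dim, hid_dim)            # project clip features
            self.txt_proj = nn.Linear(txt_dim, hid_dim)            # project query feature
            self.fusion = nn.Bilinear(hid_dim, hid_dim, hid_dim)   # cross-modal tensor-style fusion
            self.score = nn.Linear(hid_dim, 1)                     # scalar relevance head

        def forward(self, context_feats, query_feat):
            # context_feats: (B, N, vis_dim) -- candidate moment plus surrounding clips
            # query_feat:    (B, txt_dim)    -- sentence query embedding
            v = torch.tanh(self.vis_proj(context_feats))             # (B, N, H)
            q = torch.tanh(self.txt_proj(query_feat)).unsqueeze(1)   # (B, 1, H)
            # query-guided attention over the temporal context
            attn = F.softmax((v * q).sum(dim=-1), dim=-1)            # (B, N)
            moment = (attn.unsqueeze(-1) * v).sum(dim=1)             # (B, H) augmented moment
            # fuse moment and query representations, then score relevance
            fused = torch.tanh(self.fusion(moment, q.squeeze(1)))    # (B, H)
            return self.score(fused).squeeze(-1)                     # (B,)

In such a setup, candidate moments of varying lengths would each be scored against the query, and the highest-scoring moment's start and end points returned.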
dc.publisher: Association for Computing Machinery, Inc
dc.subject: Cross-modal retrieval
dc.subject: Moment localization
dc.subject: Temporal memory attention
dc.subject: Tensor fusion
dc.type: Conference Paper
dc.contributor.department: DEPARTMENT OF COMPUTER SCIENCE
dc.description.doi: 10.1145/3209978.3210003
dc.description.sourcetitle: ACM SIGIR Conference on Information Retrieval 2018
dc.description.page: 15-24
dc.published.state: Published
dc.grant.id: R-252-300-002-490
dc.grant.fundingagency: Infocomm Media Development Authority
dc.grant.fundingagency: National Research Foundation
Appears in Collections: Elements; Staff Publications

Files in This Item:
File: Attentive Moment Retrieval in Videos.pdf; Size: 1.65 MB; Format: Adobe PDF; Access Settings: OPEN; Version: Published
