Please use this identifier to cite or link to this item: https://doi.org/10.1145/2534409
Title: Memory recall based video search: Finding videos you have seen before based on your memory
Authors: Yuan, J.
Zhao, Y.-L.
Luan, H. 
Wang, M. 
Chua, T.-S. 
Issue Date: Feb-2014
Citation: Yuan, J., Zhao, Y.-L., Luan, H., Wang, M., Chua, T.-S. (2014-02). Memory recall based video search: Finding videos you have seen before based on your memory. ACM Transactions on Multimedia Computing, Communications and Applications 10 (2) : -. ScholarBank@NUS Repository. https://doi.org/10.1145/2534409
Abstract: We often remember images and videos that we have seen or recorded before but cannot quite recall the exact venues or details of their contents. We typically retain vague memories of the contents, which can often be expressed as a textual description and/or rough visual descriptions of the scenes. Using these vague memories, we then want to search for the corresponding videos of interest. We call this "Memory Recall based Video Search" (MRVS). To tackle this problem, we propose a video search system that permits a user to input his/her vague and incomplete query as a combination of a text query, a sequence of visual queries, and/or concept queries. Here, a visual query is often in the form of a visual sketch depicting the outline of a scene within the desired video, while each corresponding concept query specifies a list of visual concepts that appear in that scene. As the query specified by users is generally approximate or incomplete, we need to develop techniques to handle this inexact and incomplete specification, also leveraging user feedback to refine it. We utilize several innovative approaches to enhance the automatic search. First, we employ a visual query suggestion model to automatically suggest potential visual features to users as better queries. Second, we utilize a color similarity matrix to help compensate for inexact color specification in visual queries. Third, we leverage the ordering of visual queries and/or concept queries to rerank the results using a greedy algorithm. Moreover, as the query is inexact and there are likely to be only one or a few possible answers, we incorporate an interactive feedback loop that permits users to label related samples which are visually similar or semantically close to the relevant sample. Based on the labeled samples, we then propose optimization algorithms to update the visual queries and concept weights to refine the search results.
We conduct experiments on two large-scale video datasets: TRECVID 2010 and YouTube. The experimental results demonstrate that our proposed system is effective for MRVS tasks. © 2014 ACM.
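The color similarity matrix mentioned in the abstract can be illustrated as a cross-bin histogram comparison: instead of matching color bins exactly, perceptually close colors are allowed to contribute partial credit. The following is a minimal sketch of that idea only; the Gaussian kernel over RGB bin centers, the `sigma` parameter, and the quadratic-form score are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def build_color_similarity(centers, sigma=30.0):
    """Hypothetical color similarity matrix over RGB bin centers:
    S[i, j] = exp(-||c_i - c_j||^2 / (2 * sigma^2)).
    Nearby colors get S close to 1; distant colors get S close to 0."""
    c = np.asarray(centers, dtype=float)
    d2 = ((c[:, None, :] - c[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def quadratic_form_similarity(h, g, S):
    """Cross-bin similarity h^T S g between two L1-normalized color
    histograms, so mass in perceptually close bins still matches."""
    h = np.asarray(h, dtype=float)
    g = np.asarray(g, dtype=float)
    h = h / h.sum()
    g = g / g.sum()
    return float(h @ S @ g)

# Example: a sketch colored near-black still matches a video frame whose
# dominant color falls in an adjacent dark bin, but not a white frame.
centers = [[0, 0, 0], [10, 0, 0], [255, 255, 255]]
S = build_color_similarity(centers)
print(quadratic_form_similarity([1, 0, 0], [0, 1, 0], S))  # high (close bins)
print(quadratic_form_similarity([1, 0, 0], [0, 0, 1], S))  # near zero
```

Under this kind of scheme, an inexact color specification in a visual sketch degrades the match score gracefully rather than failing outright.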
Source Title: ACM Transactions on Multimedia Computing, Communications and Applications
URI: http://scholarbank.nus.edu.sg/handle/10635/77885
ISSN: 1551-6857
DOI: 10.1145/2534409
Appears in Collections:Staff Publications

Files in This Item:
There are no files associated with this item.

SCOPUS™ Citations: 5 (checked on Jul 16, 2018)
Web of Science™ Citations: 2 (checked on Jun 20, 2018)
Page view(s): 79 (checked on Jul 20, 2018)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.