Please use this identifier to cite or link to this item: https://doi.org/10.1145/2072298.2072312
dc.title: Automatic tag generation and ranking for sensor-rich outdoor videos
dc.contributor.author: Shen, Z.
dc.contributor.author: Ay, S.A.
dc.contributor.author: Kim, S.H.
dc.contributor.author: Zimmermann, R.
dc.date.accessioned: 2013-07-04T08:02:05Z
dc.date.available: 2013-07-04T08:02:05Z
dc.date.issued: 2011
dc.identifier.citation: Shen, Z., Ay, S.A., Kim, S.H., Zimmermann, R. (2011). Automatic tag generation and ranking for sensor-rich outdoor videos. MM'11 - Proceedings of the 2011 ACM Multimedia Conference and Co-Located Workshops: 93-102. ScholarBank@NUS Repository. https://doi.org/10.1145/2072298.2072312
dc.identifier.isbn: 9781450306164
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/40341
dc.description.abstract: Video tag annotations have become a useful and powerful feature to facilitate video search in many social media and web applications. The majority of tags assigned to videos are supplied by users - a task which is time consuming and may result in annotations that are subjective and lack precision. A number of studies have utilized content-based extraction techniques to automate tag generation. However, these methods are compute-intensive and challenging to apply across domains. Here, we describe a complementary approach for generating tags based on the geographic properties of videos. With today's sensor-equipped smartphones, the location and orientation of a camera can be continuously acquired in conjunction with the captured video stream. Our novel technique utilizes these sensor meta-data to automatically tag outdoor videos in a two-step process. First, we model the viewable scenes of the video as geometric shapes by means of its accompanying sensor data and determine the geographic objects that are visible in the video by querying geo-information databases through the viewable scene descriptions. Subsequently, we extract textual information about the visible objects to serve as tags. Second, we define six criteria to score the tag relevance and rank the obtained tags based on these scores. Then we associate the tags with the video and the accurately delimited segments of the video. To evaluate the proposed technique we implemented a prototype tag generator and conducted a user study. The results demonstrate significant benefits of our method in terms of automation and tag utility. © 2011 ACM.
dc.description.uri: http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1145/2072298.2072312
dc.source: Scopus
dc.subject: Geospatial
dc.subject: Location sensors
dc.subject: Mobile video
dc.subject: Video tags
dc.type: Conference Paper
dc.contributor.department: COMPUTER SCIENCE
dc.description.doi: 10.1145/2072298.2072312
dc.description.sourcetitle: MM'11 - Proceedings of the 2011 ACM Multimedia Conference and Co-Located Workshops
dc.description.page: 93-102
dc.identifier.isiut: NOT_IN_WOS
Appears in Collections:Staff Publications

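The abstract above describes modeling each video frame's viewable scene as a geometric shape derived from the camera's location and orientation, then determining which geographic objects fall inside it. A minimal sketch of such a visibility test, assuming a planar circular-sector model of the viewable scene; the function name and parameters are illustrative and not taken from the paper:

```python
import math

def is_visible(cam_x, cam_y, heading_deg, fov_deg, max_dist, obj_x, obj_y):
    """Return True if a point object lies inside the camera's
    viewable-scene sector (position, heading, view angle, range)."""
    dx, dy = obj_x - cam_x, obj_y - cam_y
    dist = math.hypot(dx, dy)
    if dist > max_dist:
        return False  # object is beyond the visible distance
    # Bearing from camera to object, measured clockwise from north.
    bearing = math.degrees(math.atan2(dx, dy)) % 360
    # Smallest angular difference between that bearing and the heading.
    diff = abs((bearing - heading_deg + 180) % 360 - 180)
    return diff <= fov_deg / 2

# Example: camera at the origin facing north (0 deg) with a 60-degree
# field of view and a 100 m visible range.
print(is_visible(0, 0, 0, 60, 100, 10, 50))   # object ahead -> True
print(is_visible(0, 0, 0, 60, 100, -80, 5))   # far off to the side -> False
```

In the paper's pipeline, objects passing such a test would then have their textual descriptions (e.g. place names from a geo-information database) extracted as candidate tags for the corresponding video segment.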
Files in This Item:
There are no files associated with this item.

Scopus™ citations: 36 (checked on Aug 14, 2022)
Page view(s): 147 (checked on Aug 18, 2022)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.