Please use this identifier to cite or link to this item: https://doi.org/10.1145/2557642.2579372
DC Field: Value
dc.title: User preference-aware music video generation based on modeling scene moods
dc.contributor.author: Shah, R.R.
dc.contributor.author: Yu, Y.
dc.contributor.author: Zimmermann, R.
dc.date.accessioned: 2014-07-04T03:16:01Z
dc.date.available: 2014-07-04T03:16:01Z
dc.date.issued: 2014
dc.identifier.citation: Shah, R.R., Yu, Y., Zimmermann, R. (2014). User preference-aware music video generation based on modeling scene moods. Proceedings of the 5th ACM Multimedia Systems Conference, MMSys 2014: 156-159. ScholarBank@NUS Repository. https://doi.org/10.1145/2557642.2579372
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/78416
dc.description.abstract: Due to technical advances in mobile devices (e.g., smartphones, tablets) and wireless communications, people can now easily capture user-generated videos (UGVs) anywhere, anytime and instantly share their real-life experiences via social web sites. Enjoying videos has become a very popular form of entertainment. One challenge is that many mobile videos do not have very appealing audio captured with the video. In this demonstration, to overcome this issue we propose a music video generation/creation system (Android app and backend system) that aims to make UGVs more attractive by generating scene-adaptive and user-preference-aware music tracks. In our system, we take geographic categories, visual content and user listening history into account. In particular, the sequences of geographic categories and visual features are integrated into an SVMhmm model to predict video scene moods. The music genre, as a user preference, is also exploited to personalize the recommended songs. We believe this is the first work that predicts scene moods from a real-world video dataset collected by users' daily outdoor recordings to facilitate user-preference-aware music video generation. Our experiments confirm that our system can effectively combine objective scene moods and individual music tastes to recommend appealing soundtracks for videos. Our Android app only sends recorded sensor data and a few keyframes of a UGV to a cloud service (backend system) to retrieve recommended music tracks; it is therefore bandwidth efficient since the transmission of video data is not required for analysis. Copyright is held by the owner/author(s).
dc.description.uri: http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1145/2557642.2579372
dc.source: Scopus
dc.subject: Geographic category
dc.subject: Scene mood prediction
dc.subject: User preference
dc.subject: Video soundtrack generation
dc.type: Conference Paper
dc.contributor.department: COMPUTER SCIENCE
dc.description.doi: 10.1145/2557642.2579372
dc.description.sourcetitle: Proceedings of the 5th ACM Multimedia Systems Conference, MMSys 2014
dc.description.page: 156-159
dc.identifier.isiut: NOT_IN_WOS
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.