Please use this identifier to cite or link to this item: https://doi.org/10.1145/1873951.1873992
DC Field: Value
dc.title: Learning to photograph
dc.contributor.author: Cheng, B.
dc.contributor.author: Ni, B.
dc.contributor.author: Yan, S.
dc.contributor.author: Tian, Q.
dc.date.accessioned: 2014-06-19T03:16:11Z
dc.date.available: 2014-06-19T03:16:11Z
dc.date.issued: 2010
dc.identifier.citation: Cheng, B., Ni, B., Yan, S., Tian, Q. (2010). Learning to photograph. MM'10 - Proceedings of the ACM Multimedia 2010 International Conference: 291-300. ScholarBank@NUS Repository. https://doi.org/10.1145/1873951.1873992
dc.identifier.isbn: 9781605589336
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/70784
dc.description.abstract: In this paper, we propose an intelligent photography system, which automatically and professionally generates/recommends user-favorite photo(s) from a wide view or a continuous view sequence. This task is quite challenging given that the evaluation of photo quality is under-determined and usually subjective. Motivated by the recent prevalence of online media, we present a solution by mining the underlying knowledge and experience of photographers from massively crawled professional photos (about 100,000 images, highly ranked by users) from popular photo-sharing websites, e.g., Flickr.com. Generally, far contexts are critical in characterizing the composition rules for professional photos, and thus we present a method called omni-range context modeling to learn the patch/object spatial correlation distribution for concurrent patch/object pairs of arbitrary distance. The learned photo omni-range context priors then serve as rules to guide the composition of professional photos. When a wide view is fed into the system, these priors are utilized together with other cues (e.g., placements of faces at different poses, patch number, etc.) to form a posterior probability formulation for professional sub-view finding. Moreover, this system can function as an intelligent professional-view guide based on real-time view quality assessment and the embedded compass (for recording capture direction). Beyond the salient areas targeted by most existing view recommendation algorithms, the proposed system targets professional photo composition. Qualitative experiments as well as comprehensive user studies demonstrate the validity and efficiency of the proposed omni-range context learning method as well as the automatic view finding framework. © 2010 ACM.
dc.description.uri: http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1145/1873951.1873992
dc.source: Scopus
dc.subject: automatic view finding
dc.subject: digital photography
dc.subject: omni-range context
dc.type: Conference Paper
dc.contributor.department: ELECTRICAL & COMPUTER ENGINEERING
dc.description.doi: 10.1145/1873951.1873992
dc.description.sourcetitle: MM'10 - Proceedings of the ACM Multimedia 2010 International Conference
dc.description.page: 291-300
dc.identifier.isiut: NOT_IN_WOS
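
To make the abstract's omni-range context idea more concrete, the following is a minimal, illustrative Python sketch, not taken from the paper: a hypothetical pairwise spatial prior stands in for the priors learned from professional photos, and candidate sub-views of a wide image are scored by accumulating that prior over co-occurring patch pairs. The function names, the toy Gaussian prior, and the exhaustive sliding-window search are assumptions made purely for illustration; the paper's actual posterior also incorporates further cues such as face placement.

import itertools
import numpy as np

# Hypothetical patch representation: (class_label, x_center, y_center).
# Placeholder for the learned omni-range context prior; in the paper this is
# learned from ~100k highly ranked photos, here a toy Gaussian over normalized
# pairwise offsets is used instead.
def pair_prior(label_a, label_b, dx_norm, dy_norm):
    """Toy P(offset | label pair): favors an assumed 'thirds'-like offset."""
    preferred = np.array([0.38, 0.38])              # assumed preferred normalized offset
    offset = np.array([abs(dx_norm), abs(dy_norm)])
    return np.exp(-np.sum((offset - preferred) ** 2) / (2 * 0.15 ** 2))

def score_subview(patches, x0, y0, w, h):
    """Log-score of one candidate crop: sum of pairwise context priors over
    all co-occurring patches inside the crop (other cues omitted)."""
    inside = [(c, x, y) for (c, x, y) in patches
              if x0 <= x < x0 + w and y0 <= y < y0 + h]
    if len(inside) < 2:
        return -np.inf
    score = 0.0
    for (ca, xa, ya), (cb, xb, yb) in itertools.combinations(inside, 2):
        score += np.log(pair_prior(ca, cb, (xb - xa) / w, (yb - ya) / h) + 1e-9)
    return score / len(inside)                       # normalize by patch count

def best_subview(patches, wide_w, wide_h, crop_w, crop_h, stride=20):
    """Slide a fixed-size crop over the wide view and keep the highest-scoring
    window (a stand-in for the paper's posterior maximization)."""
    best, best_score = None, -np.inf
    for y0 in range(0, wide_h - crop_h + 1, stride):
        for x0 in range(0, wide_w - crop_w + 1, stride):
            s = score_subview(patches, x0, y0, crop_w, crop_h)
            if s > best_score:
                best, best_score = (x0, y0, crop_w, crop_h), s
    return best, best_score

# Toy usage: three labeled patches in an 800x600 wide view.
patches = [("sky", 300, 80), ("person", 420, 330), ("tree", 560, 260)]
print(best_subview(patches, 800, 600, 400, 300))

In this sketch the crop size is fixed and the search is exhaustive; a practical system would search over multiple crop sizes and prune candidates before scoring.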
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.

